Science.gov

Sample records for acoustic vowel space

  1. Reference Data for the American English Acoustic Vowel Space

    ERIC Educational Resources Information Center

    Flipsen, Peter, Jr.; Lee, Sungbok

    2012-01-01

Reference data for the acoustic vowel space area (VSA) in children and adolescents do not currently appear to be available in a form suitable for normative comparisons. In the current study, individual speaker formant data for the four corner vowels of American English (/i, u, æ, ɑ/) were used to compute individual speaker VSAs. The sample…

  2. Vowel Acoustic Space Development in Children: A Synthesis of Acoustic and Anatomic Data

    ERIC Educational Resources Information Center

    Vorperian, Houri K.; Kent, Ray D.

    2007-01-01

    Purpose: This article integrates published acoustic data on the development of vowel production. Age specific data on formant frequencies are considered in the light of information on the development of the vocal tract (VT) to create an anatomic-acoustic description of the maturation of the vowel acoustic space for English. Method: Literature…

  3. The influence of phonetic context and formant measurement location on acoustic vowel space

    NASA Astrophysics Data System (ADS)

    Turner, Greg S.; Hutchings, David T.; Sylvester, Betsy; Weismer, Gary

    2003-04-01

One way of depicting vowel production is by describing vowels within an F1/F2 acoustic vowel space. This acoustic measure illustrates the dispersion of F1 and F2 values at a specific moment in time (e.g., the temporal midpoint of a vowel) for the vowels of a given language. This measure has recently been used to portray vowel production in individuals with communication disorders such as dysarthria and is moderately related to the severity of the speech disorder. Studies aimed at identifying influential factors affecting measurement stability of vowel space have yet to be completed. The focus of the present study is to evaluate the influence of phonetic context and spectral measurement location on vowel space in a group of neurologically normal American English speakers. For this study, vowel space was defined in terms of the dispersion of the four corner vowels produced within a CVC syllable frame, where C includes six stop consonants in all possible combinations with each vowel. Spectral measures were made at the midpoint and formant extremes of the vowels. A discussion will focus on individual and group variation in vowel space as a function of phonetic context and temporal measurement location.

  4. A modeling investigation of vowel-to-vowel movement planning in acoustic and muscle spaces

    NASA Astrophysics Data System (ADS)

    Zandipour, Majid

The primary objective of this research was to explore the coordinate space in which speech movements are planned. A two-dimensional biomechanical model of the vocal tract (tongue, lips, jaw, and pharynx) was constructed based on anatomical and physiological data from a subject. The model transforms neural command signals into the actions of muscles. The tongue was modeled by a 221-node finite element mesh. Each of the eight tongue muscles defined within the mesh was controlled by a virtual muscle model. The other vocal-tract components were modeled as simple 2nd-order systems. The model's geometry was adapted to a speaker, using MRI scans of the speaker's vocal tract. The vocal tract model, combined with an adaptive controller that consisted of a forward model (mapping 12-dimensional motor commands to a 64-dimensional acoustic spectrum) and an inverse model (mapping acoustic trajectories to motor command trajectories), was used to simulate and explore the implications of two planning hypotheses: planning in motor space vs. acoustic space. The acoustic, kinematic, and muscle activation (EMG) patterns of vowel-to-vowel sequences generated by the model were compared to data from the speaker, whose acoustic, kinematic, and EMG signals were also recorded. The simulation results showed that: (a) modulations of the motor commands effectively accounted for the effects of speaking rate on EMG, kinematic, and acoustic outputs; (b) the movement and acoustic trajectories were influenced by vocal tract biomechanics; and (c) both planning schemes produced similar articulatory movement, EMG, muscle length, force, and acoustic trajectories, which were also comparable to the subject's data under normal speaking conditions. In addition, the effects of a bite-block on measured EMG, kinematics, and formants were simulated by the model. Acoustic planning produced successful simulations but motor planning did not. The simulation results suggest that with somatosensory feedback but no auditory…

  5. Can acoustic vowel space predict the habitual speech rate of the speaker?

    PubMed

    Tsao, Y-C; Iqbal, K

    2005-01-01

This study aims to determine whether the acoustic vowel space reflects the habitual speaking rate of the speaker. The vowel space is defined as the area of the quadrilateral formed by the four corner vowels (i.e., /i/, /æ/, /u/, /ɑ/) in the F1-F2 plane. The study compares the acoustic vowel space in the speech of habitually slow and fast talkers and further analyzes it by gender. In addition to the measurement of vowel duration and midpoint frequencies of F1 and F2, the F1/F2 vowel space areas were measured and compared across speakers. The results indicate substantial overlap in vowel space area functions between slow and fast talkers, though the slow speakers were found to have larger vowel spaces. Furthermore, large intertalker variability in vowel space area functions was noted within each group. Both F1 and F2 formant frequencies were found to be gender sensitive, consistent with existing data. No predictive relation between vowel duration and formant frequencies was observed among speakers.
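The quadrilateral area described in this abstract is conventionally computed with the shoelace formula over the four (F1, F2) corner-vowel points. A minimal sketch follows; the formant values are illustrative stand-ins, not data from the study:

```python
def vowel_space_area(points):
    """Area of the polygon whose vertices are (F1, F2) pairs in Hz,
    listed in order around the perimeter (shoelace formula)."""
    n = len(points)
    total = 0.0
    for i in range(n):
        f1_a, f2_a = points[i]
        f1_b, f2_b = points[(i + 1) % n]  # next vertex, wrapping around
        total += f1_a * f2_b - f1_b * f2_a
    return abs(total) / 2.0

# Illustrative midpoint formants for /i/, /ae/, /ɑ/, /u/, ordered
# around the quadrilateral (made-up values for the example).
corners = [(300, 2300), (700, 1800), (750, 1100), (350, 900)]
print(vowel_space_area(corners))  # 412500.0 (Hz^2)
```

The vertices must be supplied in perimeter order (clockwise or counterclockwise); an arbitrary ordering would compute the area of a self-intersecting polygon instead.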

  6. Investigating the relationship between average speaker fundamental frequency and acoustic vowel space size.

    PubMed

    Weirich, Melanie; Simpson, Adrian

    2013-10-01

The purpose of this study is to investigate the potential relationship between speaking fundamental frequency and acoustic vowel space size, thus testing a possible perceptual source of sex-specific differences in acoustic vowel space size based on the greater inter-harmonic spacing and poorer definition of the spectral envelope of higher-pitched voices. Average fundamental frequencies and acoustic vowel spaces of 56 female German speakers are analyzed. Several parameters are used to quantify the size and shape of the vowel space defined by /iː ε aː [symbol: see text] uː/, such as the area of the polygon spanned by the five vowels, the absolute difference in F1 or F2 between /iː/ and /uː/ or /aː/, and the Euclidean distance between /iː/ and /aː/. In addition, the potential impact of nasality on vowel space size is examined. Results reveal no significant correlation between fundamental frequency and vowel space size, suggesting that other factors must be responsible for the larger female acoustic vowel space.

  7. An acoustic analysis of the vowel space in young and old cochlear-implant speakers.

    PubMed

    Neumeyer, Veronika; Harrington, Jonathan; Draxler, Christoph

    2010-09-01

The main purpose of this study was to compare acoustically the vowel spaces of two groups of cochlear implantees (CI) with those of two age-matched normal-hearing groups. Five young participants (15-25 years) and five older participants (55-70 years) with CIs, along with two control groups of the same ages with normal hearing, were recorded. The speech material consisted of five German vowels V = /a, e, i, o, u/ in bilabial and alveolar contexts. The results showed no differences between the groups in Euclidean distances for the first formant frequency. In contrast, Euclidean distances for F2 of the CI group were shorter than those of the control group, causing their overall vowel space to be compressed. The main differences between the groups are interpreted in terms of the extent to which the formants are associated with visual cues to the vowels. In addition, vowel durations were partly longer for the CI speakers.

  8. The effect of intertalker speech rate variation on acoustic vowel space.

    PubMed

    Tsao, Ying-Chiao; Weismer, Gary; Iqbal, Kamran

    2006-02-01

    The present study aimed to examine the size of the acoustic vowel space in talkers who had previously been identified as having slow and fast habitual speaking rates [Tsao, Y.-C. and Weismer, G. (1997) J. Speech Lang. Hear. Res. 40, 858-866]. Within talkers, it is fairly well known that faster speaking rates result in a compression of the vowel space relative to that measured for slower rates, so the current study was completed to determine if the same differences in the size of the vowel space occur across talkers who differ significantly in their habitual speaking rates. Results indicated that there was no difference in the average size of the vowel space for slow vs fast talkers, and no relationship across talkers between vowel duration and formant frequencies. One difference between the slow and fast talkers was in intertalker variability of the vowel spaces, which was clearly greater for the slow talkers, for both speaker sexes. Results are discussed relative to theories of speech production and vowel normalization in speech perception.

  9. Acoustic Analysis of Vowels Following Glossectomy

    ERIC Educational Resources Information Center

    Whitehill, Tara L.; Ciocca, Valter; Chan, Judy C-T.; Samman, Nabil

    2006-01-01

    This study examined the acoustic characteristics of vowels produced by speakers with partial glossectomy. Acoustic variables investigated included first formant (F1) frequency, second formant (F2) frequency, F1 range, F2 range and vowel space area. Data from the speakers with partial glossectomy were compared with age- and gender-matched controls.…

  10. Vowel Space Characteristics and Vowel Identification Accuracy

    ERIC Educational Resources Information Center

    Neel, Amy T.

    2008-01-01

    Purpose: To examine the relation between vowel production characteristics and intelligibility. Method: Acoustic characteristics of 10 vowels produced by 45 men and 48 women from the J. M. Hillenbrand, L. A. Getty, M. J. Clark, and K. Wheeler (1995) study were examined and compared with identification accuracy. Global (mean f0, F1, and F2;…

  11. Articulation Rate and Vowel Space Characteristics of Young Males with Fragile X Syndrome: Preliminary Acoustic Findings

    ERIC Educational Resources Information Center

    Zajac, David J.; Roberts, Joanne E.; Hennon, Elizabeth A.; Harris, Adrianne A.; Barnes, Elizabeth F.; Misenheimer, Jan

    2006-01-01

    Purpose: Increased speaking rate is a commonly reported perceptual characteristic among males with fragile X syndrome (FXS). The objective of this preliminary study was to determine articulation rate--one component of perceived speaking rate--and vowel space characteristics of young males with FXS. Method: Young males with FXS (n = 38), …

  12. Acoustic and Durational Properties of Indian English Vowels

    ERIC Educational Resources Information Center

    Maxwell, Olga; Fletcher, Janet

    2009-01-01

    This paper presents findings of an acoustic phonetic analysis of vowels produced by speakers of English as a second language from northern India. The monophthongal vowel productions of a group of male speakers of Hindi and male speakers of Punjabi were recorded, and acoustic phonetic analyses of vowel formant frequencies and vowel duration were…

  13. Contextual variation in the acoustic and perceptual similarity of North German and American English vowels

    NASA Astrophysics Data System (ADS)

    Strange, Winifred; Bohn, Ocke-Schwen; Nishi, Kanae; Trent, Sonja A.

    2005-09-01

Strange et al. [J. Acoust. Soc. Am. 115, 1791-1807 (2004)] reported that North German (NG) front-rounded vowels in hVp syllables were acoustically intermediate between front and back American English (AE) vowels. However, AE listeners perceptually assimilated them as poor exemplars of back AE vowels. In this study, speaker- and context-independent cross-language discriminant analyses of NG and AE vowels produced in CVC syllables (C = labial, alveolar, velar stops) in sentences showed that NG front-rounded vowels fell within AE back-vowel distributions, due to the "fronting" of AE back vowels in alveolar/velar contexts. NG [ɪ, e, ɛ, ɔ] were located relatively "higher" in the acoustic vowel space than their AE counterparts and varied in cross-language similarity across consonantal contexts. In a perceptual assimilation task, naive listeners classified NG vowels in terms of native AE categories and rated their goodness on a 7-point scale (very foreign to very English sounding). Both front- and back-rounded NG vowels were perceptually assimilated overwhelmingly to back AE categories and judged equally good exemplars. Perceptual assimilation patterns did not vary with context and were not always predictable from acoustic similarity. These findings suggest that listeners adopt a context-independent strategy when judging the cross-language similarity of vowels produced and presented in continuous speech contexts.

  14. Temporal and acoustic characteristics of Greek vowels produced by adults with cerebral palsy

    NASA Astrophysics Data System (ADS)

Botinis, Antonis; Orfanidou, Ioanna; Fourakis, Marios

    2005-09-01

The present investigation examined the temporal and spectral characteristics of Greek vowels as produced by speakers with intact (NO) versus cerebral palsy affected (CP) neuromuscular systems. Six NO and six CP native speakers of Greek produced the Greek vowels [i, e, a, o, u] in the first syllable of CVCV nonsense words in a short carrier phrase. Stress could be on either the first or second syllable. There were three female and three male speakers in each group. In terms of temporal characteristics, the results showed that vowels produced by CP speakers were longer than vowels produced by NO speakers; stressed vowels were longer than unstressed vowels; and vowels produced by female speakers were longer than vowels produced by male speakers. In terms of spectral characteristics, the results showed that the vowel space of the CP speakers was smaller than that of the NO speakers. This is similar to the results recently reported by Liu et al. [J. Acoust. Soc. Am. 117, 3879-3889 (2005)] for CP speakers of Mandarin. There was also a reduction of the acoustic vowel space defined by unstressed vowels, but this reduction was much more pronounced in the vowel productions of CP speakers than NO speakers.

  15. Acoustic properties of vowel production in prelingually deafened Mandarin-speaking children with cochlear implants

    PubMed Central

    Yang, Jing; Brown, Emily; Fox, Robert A.; Xu, Li

    2015-01-01

The present study examined the acoustic features of vowel production in Mandarin-speaking children with cochlear implants (CIs). The subjects included 14 native Mandarin-speaking, prelingually deafened children with CIs (2.9–8.3 years old) and 60 age-matched, normal-hearing (NH) children (3.1–9.0 years old). Each subject produced a list of monosyllables containing seven Mandarin vowels: [i, a, u, y, ɤ, ʅ, ɿ]. Midpoint F1 and F2 of each vowel token were extracted and normalized to eliminate the effects of different vocal tract sizes. Results showed that the CI children produced significantly longer vowels and less compact vowel categories than the NH children did. The CI children's acoustic vowel space was reduced due to a retracted production of the vowel [i]. The vowel space area showed a strong negative correlation with age at implantation (r = −0.80). The analysis of acoustic distance showed that the CI children produced the corner vowels [a, u] similarly to the NH children but other vowels (e.g., [ʅ, ɿ]) differently, which suggests that CI children generally follow a developmental path of vowel acquisition similar to that of NH children. These findings highlight the importance of early implantation and have implications for clinical aural habilitation of young children with CIs. PMID:26627755
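The abstract states that midpoint F1/F2 values were normalized to remove vocal-tract size effects but does not name the procedure. Lobanov z-score normalization, sketched below, is one common speaker-intrinsic choice; it is an assumption here, not necessarily the method the authors used:

```python
from statistics import mean, stdev

def lobanov_normalize(formant_values):
    """Z-score one speaker's formant values against that speaker's own
    mean and standard deviation, removing vocal-tract scale differences
    (Lobanov-style normalization; assumed, not the study's stated method)."""
    m = mean(formant_values)
    s = stdev(formant_values)
    return [(f - m) / s for f in formant_values]

# Illustrative F1 midpoints (Hz) for one speaker's vowel tokens
f1_tokens = [300.0, 450.0, 700.0, 800.0, 350.0]
print(lobanov_normalize(f1_tokens))  # dimensionless z-scores centered on 0
```

After normalization, each speaker's values are dimensionless and comparable across children with different vocal tract sizes.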

  16. The acoustic effects of vowel equalization training in singers.

    PubMed

    Dromey, Christopher; Heaton, Emily; Hopkin, J Arden

    2011-11-01

    Vowel equalization is a technique that can be used by singers to achieve a more balanced vocal resonance, or chiaroscuro, by balancing corresponding front and back vowels, which share approximate tongue heights, and also high and low vowels by means of a more neutral or centralized lingual posture. The goal of this single group study was to quantify acoustic changes in vowels after a brief training session in vowel equalization. Fifteen young adults with amateur singing experience sang a passage and sustained isolated vowels both before and after a 15-minute training session in vowel equalization. The first two formants of the target vowels /e, i, ɑ, o, u/ were measured from microphone recordings. An analysis of variance was used to test for changes in formant values after the training session. These formant values mostly changed in a manner reflective of a more central tongue posture. For the sustained vowels, all formant changes suggested a more neutral tongue position after the training session. The vowels in the singing passage mostly changed in the expected direction, with exceptions possibly attributable to coarticulation. The changes in the vowel formants indicated that even a brief training session can result in significant changes in vowel acoustics. Further work to explore the perceptual consequences of vowel equalization is warranted.

  17. Degraded Vowel Acoustics and the Perceptual Consequences in Dysarthria

    NASA Astrophysics Data System (ADS)

    Lansford, Kaitlin L.

Distorted vowel production is a hallmark characteristic of dysarthric speech, irrespective of the underlying neurological condition or dysarthria diagnosis. A variety of acoustic metrics have been used to study the nature of vowel production deficits in dysarthria; however, not all demonstrate sensitivity to the exhibited deficits. Less attention has been paid to quantifying the vowel production deficits associated with the specific dysarthrias. Attempts to characterize the relationship between naturally degraded vowel production in dysarthria and overall intelligibility have met with mixed results, leading some to question the nature of this relationship. It has been suggested that aberrant vowel acoustics may be an index of overall severity of the impairment and not an "integral component" of the intelligibility deficit. A limitation of previous work detailing the perceptual consequences of disordered vowel acoustics is that overall intelligibility, not vowel identification accuracy, has been the perceptual measure of interest. A series of three experiments was conducted to address the problems outlined herein. The goals of the first experiment were to identify subsets of vowel metrics that reliably distinguish speakers with dysarthria from non-disordered speakers and differentiate the dysarthria subtypes. Vowel metrics that capture vowel centralization and reduced spectral distinctiveness among vowels differentiated dysarthric from non-disordered speakers. Vowel metrics generally failed to differentiate speakers according to their dysarthria diagnosis. The second and third experiments were conducted to evaluate the relationship between degraded vowel acoustics and the resulting percept. In the second experiment, correlation and regression analyses revealed that vowel metrics capturing vowel centralization and distinctiveness, along with movement of the second formant frequency, were most predictive of vowel identification accuracy and overall intelligibility. The third…

  18. Effect of body position on vocal tract acoustics: Acoustic pharyngometry and vowel formants.

    PubMed

    Vorperian, Houri K; Kurtzweil, Sara L; Fourakis, Marios; Kent, Ray D; Tillman, Katelyn K; Austin, Diane

    2015-08-01

    The anatomic basis and articulatory features of speech production are often studied with imaging studies that are typically acquired in the supine body position. It is important to determine if changes in body orientation to the gravitational field alter vocal tract dimensions and speech acoustics. The purpose of this study was to assess the effect of body position (upright versus supine) on (1) oral and pharyngeal measurements derived from acoustic pharyngometry and (2) acoustic measurements of fundamental frequency (F0) and the first four formant frequencies (F1-F4) for the quadrilateral point vowels. Data were obtained for 27 male and female participants, aged 17 to 35 yrs. Acoustic pharyngometry showed a statistically significant effect of body position on volumetric measurements, with smaller values in the supine than upright position, but no changes in length measurements. Acoustic analyses of vowels showed significantly larger values in the supine than upright position for the variables of F0, F3, and the Euclidean distance from the centroid to each corner vowel in the F1-F2-F3 space. Changes in body position affected measurements of vocal tract volume but not length. Body position also affected the aforementioned acoustic variables, but the main vowel formants were preserved.
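The centroid-based measure in this abstract (Euclidean distance from the centroid to each corner vowel in F1-F2-F3 space) can be sketched as follows; the (F1, F2, F3) values are illustrative stand-ins, not measurements from the study:

```python
import math

def centroid(points):
    """Mean position of a set of (F1, F2, F3) points in Hz."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def centroid_distances(points):
    """Euclidean distance from the centroid to each corner vowel
    in F1-F2-F3 space."""
    c = centroid(points)
    return [math.dist(c, p) for p in points]

# Illustrative corner vowels as (F1, F2, F3) in Hz (made-up values)
corner_vowels = [
    (300, 2300, 3000),  # /i/-like
    (700, 1800, 2600),  # /ae/-like
    (750, 1100, 2500),  # /ɑ/-like
    (350,  900, 2400),  # /u/-like
]
print(centroid_distances(corner_vowels))
```

Comparing these distances across upright and supine recordings of the same speaker would quantify the positional effect the study reports.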

  19. Effect of body position on vocal tract acoustics: Acoustic pharyngometry and vowel formants

    PubMed Central

    Vorperian, Houri K.; Kurtzweil, Sara L.; Fourakis, Marios; Kent, Ray D.; Tillman, Katelyn K.; Austin, Diane

    2015-01-01

    The anatomic basis and articulatory features of speech production are often studied with imaging studies that are typically acquired in the supine body position. It is important to determine if changes in body orientation to the gravitational field alter vocal tract dimensions and speech acoustics. The purpose of this study was to assess the effect of body position (upright versus supine) on (1) oral and pharyngeal measurements derived from acoustic pharyngometry and (2) acoustic measurements of fundamental frequency (F0) and the first four formant frequencies (F1–F4) for the quadrilateral point vowels. Data were obtained for 27 male and female participants, aged 17 to 35 yrs. Acoustic pharyngometry showed a statistically significant effect of body position on volumetric measurements, with smaller values in the supine than upright position, but no changes in length measurements. Acoustic analyses of vowels showed significantly larger values in the supine than upright position for the variables of F0, F3, and the Euclidean distance from the centroid to each corner vowel in the F1-F2-F3 space. Changes in body position affected measurements of vocal tract volume but not length. Body position also affected the aforementioned acoustic variables, but the main vowel formants were preserved. PMID:26328699

  20. The effect of reduced vowel working space on speech intelligibility in Mandarin-speaking young adults with cerebral palsy

    NASA Astrophysics Data System (ADS)

    Liu, Huei-Mei; Tsao, Feng-Ming; Kuhl, Patricia K.

    2005-06-01

The purpose of this study was to examine the effect of reduced vowel working space on dysarthric talkers' speech intelligibility using both acoustic and perceptual approaches. In experiment 1, the acoustic-perceptual relationship between vowel working space area and speech intelligibility was examined in Mandarin-speaking young adults with cerebral palsy. Subjects read aloud 18 bisyllabic words containing the vowels /i/, /a/, and /u/ using their normal speaking rate. Each talker's words were identified by three normal listeners. The percentages of correct vowel and word identification were calculated as vowel intelligibility and word intelligibility, respectively. Results revealed that talkers with cerebral palsy exhibited smaller vowel working space areas compared to ten age-matched controls. The vowel working space area was significantly correlated with vowel intelligibility (r=0.632, p<0.005) and with word intelligibility (r=0.684, p<0.005). Experiment 2 examined whether tokens of expanded vowel working spaces were perceived as better vowel exemplars and represented with greater perceptual spaces than tokens of reduced vowel working spaces. The results of the perceptual experiment support this prediction. The distorted vowels of talkers with cerebral palsy compose a smaller acoustic space that results in shrunken intervowel perceptual distances for listeners.

  1. Acoustic and articulatory analysis of French vowels produced by congenitally blind adults and sighted adults.

    PubMed

    Ménard, Lucie; Toupin, Corinne; Baum, Shari R; Drouin, Serge; Aubin, Jérôme; Tiede, Mark

    2013-10-01

    In a previous paper [Ménard et al., J. Acoust. Soc. Am. 126, 1406-1414 (2009)], it was demonstrated that, despite enhanced auditory discrimination abilities for synthesized vowels, blind adult French speakers produced vowels that were closer together in the acoustic space than those produced by sighted adult French speakers, suggesting finer control of speech production in the sighted speakers. The goal of the present study is to further investigate the articulatory effects of visual deprivation on vowels produced by 11 blind and 11 sighted adult French speakers. Synchronous ultrasound, acoustic, and video recordings of the participants articulating the ten French oral vowels were made. Results show that sighted speakers produce vowels that are spaced significantly farther apart in the acoustic vowel space than blind speakers. Furthermore, blind speakers use smaller differences in lip protrusion but larger differences in tongue position and shape than their sighted peers to produce rounding and place of articulation contrasts. Trade-offs between lip and tongue positions were examined. Results are discussed in the light of the perception-for-action control theory.

  2. Vowel Acoustics in Dysarthria: Mapping to Perception

    ERIC Educational Resources Information Center

    Lansford, Kaitlin L.; Liss, Julie M.

    2014-01-01

    Purpose: The aim of the present report was to explore whether vowel metrics, demonstrated to distinguish dysarthric and healthy speech in a companion article (Lansford & Liss, 2014), are able to predict human perceptual performance. Method: Vowel metrics derived from vowels embedded in phrases produced by 45 speakers with dysarthria were…

  3. Some acoustic features of nasal and nasalized vowels: a target for vowel nasalization.

    PubMed

    Feng, G; Castelli, E

    1996-06-01

In order to characterize the acoustic properties of nasal and nasalized vowels, these sounds are considered as a dynamic trend from an oral configuration toward an [n]-like configuration. The latter can be viewed as a target for vowel nasalization. This target corresponds to the pharyngonasal tract and can be modeled, with some simplifications, by a single tract without any parallel paths. The first two resonance frequencies (at about 300 and 1000 Hz) thus characterize this target well. A series of measurements was carried out to describe the acoustic characteristics of the target. Measured transfer functions confirm the resonator nature of the low-frequency peak. Introducing such a target allows nasal vowels to be conceived as a trend that begins and ends with a simple configuration, thereby bounding the complex nasal phenomena. A complete study of pole-zero evolutions for the nasalization of the 11 French vowels is presented. It supports a common strategy for the nasalization of all vowels, so that a true nasal vowel can be placed within this nasalization frame. The measured transfer functions for several French nasal vowels are also given.

  4. Variability in English vowels is comparable in articulation and acoustics.

    PubMed

    Noiray, Aude; Iskarous, Khalil; Whalen, D H

    2014-05-01

The nature of the links between speech production and perception has been the subject of longstanding debate. The present study investigated the articulatory parameter of tongue height and the acoustic F1-F0 difference for the phonological distinction of vowel height in American English front vowels. Multiple repetitions of /i, ɪ, e, ε, æ/ in [(h)Vd] sequences were recorded from seven adult speakers. Articulatory (ultrasound) and acoustic data were collected simultaneously to provide a direct comparison of variability in vowel production in both domains. Results showed idiosyncratic patterns of articulation for contrasting the three front vowel pairs /i-ɪ/, /e-ε/, and /ε-æ/ across subjects, with the degree of variability in vowel articulation comparable to that observed in the acoustics for all seven participants. However, contrary to what was expected, some speakers showed reversals of tongue height for /ɪ/-/e/ that were also reflected in the acoustics, with F1 higher for /ɪ/ than for /e/. The data suggest that the phonological distinction of height is conveyed via speaker-specific articulatory-acoustic patterns that do not strictly match feature descriptions. However, the acoustic signal is faithful to the articulatory configuration that generated it, carrying the crucial information for perceptual contrast.

  5. Acoustic and Perceptual Characteristics of Vowels Produced during Simultaneous Communication

    ERIC Educational Resources Information Center

    Schiavetti, Nicholas; Metz, Dale Evan; Whitehead, Robert L.; Brown, Shannon; Borges, Janie; Rivera, Sara; Schultz, Christine

    2004-01-01

    This study investigated the acoustical and perceptual characteristics of vowels in speech produced during simultaneous communication (SC). Twelve normal hearing, experienced sign language users were recorded under SC and speech alone (SA) conditions speaking a set of sentences containing monosyllabic words designed for measurement of vowel…

  6. Talker Differences in Clear and Conversational Speech: Acoustic Characteristics of Vowels

    ERIC Educational Resources Information Center

    Ferguson, Sarah Hargus; Kewley-Port, Diane

    2007-01-01

    Purpose: To determine the specific acoustic changes that underlie improved vowel intelligibility in clear speech. Method: Seven acoustic metrics were measured for conversational and clear vowels produced by 12 talkers--6 who previously were found (S. H. Ferguson, 2004) to produce a large clear speech vowel intelligibility effect for listeners with…

  7. Diphthongs in the repopulated vowel space

    NASA Astrophysics Data System (ADS)

    Bogacka, Anna

    2005-04-01

The study examined 8 British English diphthongs produced by Polish learners of English, testing the diphthongs' quality, duration, nasalization, and the occurrence of glottal stops before the diphthongs. The diphthongs were tested in twelve conditions: word-initial, word-final, before a voiced obstruent, before a voiceless obstruent, before a nasal consonant, and before a nasal consonant followed by a fricative, each in both stressed and unstressed position. The diphthongs were tested in real words, embedded in sentences, controlled for stress position, rhythmic units, and length. The sentences were read by 8 female and 8 male Polish learners of English and by control subjects. The aim of the phonetic analysis, done with Praat and employing the methodologies used by Flege (1995) for SLA and by Peeters (1991) and Jacewicz, Fujimura, and Fox (2003) for diphthongs, is to examine the shape of the restructured vowel space (Liljencrants and Lindblom 1972; Stevens 1989). The approach taken here is termed Vowel Space Repopulation to emphasize that the vowel space of Polish speakers of English is restructured by new categories in complex ways that are not adequately captured by traditional notions such as "transfer," "interference," or "interlanguage."

  8. An Evaluation of Articulatory Working Space Area in Vowel Production of Adults with Down Syndrome

    ERIC Educational Resources Information Center

    Bunton, Kate; Leddy, Mark

    2011-01-01

    Many adolescents and adults with Down syndrome have reduced speech intelligibility. Reasons for this reduction may relate to differences in anatomy and physiology, both of which are important for creating an intelligible speech signal. The purpose of this study was to document acoustic vowel space and articulatory working space for two adult…

  9. The Acoustic Properties of Vowels: A Tool for Improving Articulation and Comprehension of English

    ERIC Educational Resources Information Center

    McCombs, Candalene J.

    2006-01-01

    Correct pronunciation is often a later step in the process of teaching English as a second language. However, a focus on the correct articulation of vowels can significantly improve listening and comprehension skills as well as articulatory skills. Vowels and consonants differ in their acoustic properties. Unlike consonants, vowel sounds are…

  10. Acoustic and Perceptual Description of Vowels in a Speaker with Congenital Aglossia

    ERIC Educational Resources Information Center

    McMicken, Betty; Von Berg, Shelley; Iskarous, Khalil

    2012-01-01

    The goals of this study were to (a) compare the vowel space produced by a person with congenital aglossia (PWCA) with a typical vowel space; (b) investigate listeners' intelligibility for single vowels produced by the PWCA, with and without visual information; and (c) determine whether there is a correlation between scores of speech…

  11. An acoustic study of the tongue root contrast in Degema vowels.

    PubMed

    Fulop, S A; Kari, E; Ladefoged, P

    1998-01-01

    Degema is an Edoid language of Nigeria whose ten vowels are organized phonologically into two sets of five. The two sets are thought to be differentiated by the degree of tongue root advancing. This paper examines the acoustic nature of these vowels as represented in field recordings of six speakers. The most consistent acoustic correlate of the tongue root contrast was found to be the first formant frequency, which consistently distinguishes four of the five vowel pairs, the exception being the two low vowels. Three of the five pairs could also be distinguished by F2, though the direction of the difference was not consistent. Additionally, a comparison of corresponding advanced and retracted vowels using a normalized measure of relative formant intensity demonstrated that this correlate could also distinguish them in general, but only operated reliably in two of the five vowel pairs. The pair of low vowels could not be distinguished from each other by any of these measures. Finally, a perceptual study was conducted which demonstrates that Degema speakers do not classify their vowels very well using formant frequencies as the sole acoustic variable; only the two pairs of mid vowels were reliably singled out by native listeners from an array of synthesized vowels.

  12. Perceptual parsing of acoustic consequences of velum lowering from information for vowels.

    PubMed

    Fowler, C A; Brown, J M

    2000-01-01

    Three experiments were designed to investigate how listeners to coarticulated speech use the acoustic speech signal during a vowel to extract information about a forthcoming oral or nasal consonant. A first experiment showed that listeners use evidence of nasalization in a vowel as information for a forthcoming nasal consonant. A second and third experiment attempted to distinguish two accounts of their ability to do so. According to one account, listeners hear nasalization in the vowel as such and use it to predict that a forthcoming nasal consonant is nasal. According to a second, they perceive speech gestures and hear nasalization in the acoustic domain of a vowel as the onset of a nasal consonant. Therefore, they parse nasal information from a vowel and hear the vowel as oral. In Experiment 2, evidence in favor of the parsing hypothesis was found. Experiment 3 showed, however, that parsing is incomplete.

  13. Acoustic and perceptual similarity of North German and American English vowels

    NASA Astrophysics Data System (ADS)

    Strange, Winifred; Bohn, Ocke-Schwen; Trent, Sonja A.; Nishi, Kanae

    2004-04-01

    Current theories of cross-language speech perception claim that patterns of perceptual assimilation of non-native segments to native categories predict relative difficulties in learning to perceive (and produce) non-native phones. Cross-language spectral similarity of North German (NG) and American English (AE) vowels produced in isolated hVC(a) (di)syllables (study 1) and in hVC syllables embedded in a short sentence (study 2) was determined by discriminant analyses, to examine the extent to which acoustic similarity was predictive of perceptual similarity patterns. The perceptual assimilation of NG vowels to native AE vowel categories by AE listeners with no German language experience was then assessed directly. Both studies showed that acoustic similarity of AE and NG vowels did not always predict perceptual similarity, especially for ``new'' NG front rounded vowels and for ``similar'' NG front and back mid and mid-low vowels. Both acoustic and perceptual similarity of NG and AE vowels varied as a function of the prosodic context, although vowel duration differences did not affect perceptual assimilation patterns. When duration and spectral similarity were in conflict, AE listeners assimilated vowels on the basis of spectral similarity in both prosodic contexts.
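A crude way to illustrate the perceptual-assimilation idea above is to classify a non-native vowel token to the nearest native category in (F1, F2) space. The study used discriminant analyses; the nearest-centroid classifier below is a deliberately simpler stand-in, and all formant values are illustrative placeholders, not data from the study.

```python
import math

# Hypothetical native-category centroids in Hz (illustrative values only).
native_centroids = {
    "i": (270.0, 2290.0),
    "u": (300.0, 870.0),
    "ae": (660.0, 1720.0),
    "a": (730.0, 1090.0),
}

def assimilate(token):
    """Return the native category whose centroid is closest to the token."""
    f1, f2 = token
    best, best_d = None, float("inf")
    for label, (c1, c2) in native_centroids.items():
        d = math.hypot(f1 - c1, f2 - c2)  # Euclidean distance in F1/F2
        if d < best_d:
            best, best_d = label, d
    return best

# A hypothetical non-native token: F1 near /i/, F2 between /i/ and /ae/.
label = assimilate((280.0, 1900.0))
```

A fuller model would normalize formants across talkers and weight duration separately, since the study found spectral similarity outweighed duration in assimilation.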

  14. Acquisition of vowel articulation in childhood investigated by acoustic-to-articulatory inversion.

    PubMed

    Oohashi, Hiroki; Watanabe, Hama; Taga, Gentaro

    2017-02-01

    While the acoustical features of speech sounds in children have been extensively studied, limited information is available as to their articulation during speech production. Instead of directly measuring articulatory movements, this study used an acoustic-to-articulatory inversion model with scalable vocal tract size to estimate developmental changes in articulatory state during vowel production. Using a pseudo-inverse Jacobian matrix of a model mapping seven articulatory parameters to acoustic ones, the formant frequencies of each vowel produced by three Japanese children over time at ages between 6 and 60 months were transformed into articulatory parameters. We conducted a discriminant analysis to reveal differences in articulatory states for production of each vowel. The analysis suggested that the development of vowel production proceeds through gradual functionalization of articulatory parameters. At 6-9 months, the coordination of tongue body position and lip aperture forms three vowels: front, back, and central. At 10-17 months, recruitment of the jaw and tongue apex enables differentiation of these three vowels into five. At 18 months and older, recruitment of tongue shape produces more distinct vowels specific to Japanese. These results suggest that the jaw and tongue apex contributed to speech production by young children regardless of vowel type. Moreover, initial articulatory states for each vowel could be distinguished by the manner of coordination between lip and tongue, and these initial states are differentiated and refined into articulations adjusted to the native language over the course of development.
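The inversion idea above can be sketched with a toy two-parameter example: iterate Newton-style updates through the (pseudo-)inverse Jacobian of a forward articulatory-to-acoustic map until the predicted formants match the targets. The forward map below is an invented linear illustration, not the seven-parameter model used in the study (where, with more articulatory parameters than formants, a pseudo-inverse replaces the plain inverse).

```python
def forward(params):
    """Toy forward map: (jaw, tongue) -> (F1, F2) in Hz. Coefficients are invented."""
    jaw, tongue = params
    f1 = 300.0 + 400.0 * jaw - 100.0 * tongue
    f2 = 2200.0 - 900.0 * tongue + 150.0 * jaw
    return (f1, f2)

def jacobian(params, eps=1e-5):
    """Numerical 2x2 Jacobian of the forward map by finite differences."""
    J = [[0.0, 0.0], [0.0, 0.0]]
    base = forward(params)
    for j in range(2):
        p = list(params)
        p[j] += eps
        pert = forward(p)
        for i in range(2):
            J[i][j] = (pert[i] - base[i]) / eps
    return J

def invert(target, guess=(0.5, 0.5), iters=20):
    """Newton iteration: correct articulatory params via the inverse Jacobian."""
    params = list(guess)
    for _ in range(iters):
        f = forward(params)
        r = [target[0] - f[0], target[1] - f[1]]  # acoustic residual
        J = jacobian(params)
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        # Analytic inverse of the 2x2 Jacobian applied to the residual.
        params[0] += ( J[1][1] * r[0] - J[0][1] * r[1]) / det
        params[1] += (-J[1][0] * r[0] + J[0][0] * r[1]) / det
    return params

# Recover articulatory parameters that yield F1 = 500 Hz, F2 = 1500 Hz.
est = invert((500.0, 1500.0))
f1, f2 = forward(est)
```

Because the toy map is linear, one Newton step already lands on the target; the nonlinear vocal tract model in the study requires the iterative scheme.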

  15. A Comprehensive Three-Dimensional Cortical Map of Vowel Space

    ERIC Educational Resources Information Center

    Scharinger, Mathias; Idsardi, William J.; Poe, Samantha

    2011-01-01

    Mammalian cortex is known to contain various kinds of spatial encoding schemes for sensory information including retinotopic, somatosensory, and tonotopic maps. Tonotopic maps are especially interesting for human speech sound processing because they encode linguistically salient acoustic properties. In this study, we mapped the entire vowel space…

  16. Acoustic Typology of Vowel Inventories and Dispersion Theory: Insights from a Large Cross-Linguistic Corpus

    ERIC Educational Resources Information Center

    Becker-Kristal, Roy

    2010-01-01

    This dissertation examines the relationship between the structural, phonemic properties of vowel inventories and their acoustic phonetic realization, with particular focus on the adequacy of Dispersion Theory, which maintains that inventories are structured so as to maximize perceptual contrast between their component vowels. In order to assess…

  17. Quantitative and Descriptive Comparison of Four Acoustic Analysis Systems: Vowel Measurements

    ERIC Educational Resources Information Center

    Burris, Carlyn; Vorperian, Houri K.; Fourakis, Marios; Kent, Ray D.; Bolt, Daniel M.

    2014-01-01

    Purpose: This study examines accuracy and comparability of 4 trademarked acoustic analysis software packages (AASPs): Praat, WaveSurfer, TF32, and CSL by using synthesized and natural vowels. Features of AASPs are also described. Method: Synthesized and natural vowels were analyzed using each of the AASP's default settings to secure 9…

  18. Acoustic Properties Predict Perception of Unfamiliar Dutch Vowels by Adult Australian English and Peruvian Spanish Listeners

    PubMed Central

    Alispahic, Samra; Mulak, Karen E.; Escudero, Paola

    2017-01-01

    Research suggests that the size of the second language (L2) vowel inventory relative to the native (L1) inventory may affect the discrimination and acquisition of L2 vowels. Models of non-native and L2 vowel perception stipulate that naïve listeners' non-native and L2 perceptual patterns may be predicted by the relationship in vowel inventory size between the L1 and the L2. Specifically, having a smaller L1 vowel inventory than the L2 impedes L2 vowel perception, while having a larger one often facilitates it. However, the Second Language Linguistic Perception (L2LP) model specifies that it is the L1–L2 acoustic relationships that predict non-native and L2 vowel perception, regardless of L1 vowel inventory. To test the effects of vowel inventory size vs. acoustic properties on non-native vowel perception, we compared XAB discrimination and categorization of five Dutch vowel contrasts between monolinguals whose L1 contains more (Australian English) or fewer (Peruvian Spanish) vowels than Dutch. No effect of language background was found, suggesting that L1 inventory size alone did not account for performance. Instead, participants in both language groups were more accurate in discriminating contrasts that were predicted to be perceptually easy based on L1–L2 acoustic relationships, and were less accurate for contrasts likewise predicted to be difficult. Further, cross-language discriminant analyses predicted listeners' categorization patterns which in turn predicted listeners' discrimination difficulty. Our results show that listeners with larger vowel inventories appear to activate multiple native categories as reflected in lower accuracy scores for some Dutch vowels, while listeners with a smaller vowel inventory seem to have higher accuracy scores for those same vowels. In line with the L2LP model, these findings demonstrate that L1–L2 acoustic relationships better predict non-native and L2 perceptual performance and that inventory size alone is not a good

  19. Acoustic-articulatory mapping in vowels by locally weighted regression.

    PubMed

    McGowan, Richard S; Berger, Michael A

    2009-10-01

    A method for mapping between simultaneously measured articulatory and acoustic data is proposed. The method uses principal components analysis on the articulatory and acoustic variables, and mapping between the domains by locally weighted linear regression, or loess [Cleveland, W. S. (1979). J. Am. Stat. Assoc. 74, 829-836]. The latter method permits local variation in the slopes of the linear regression, assuming that the function being approximated is smooth. The methodology is applied to vowels of four speakers in the Wisconsin X-ray Microbeam Speech Production Database, with formant analysis. Results are examined in terms of (1) examples of forward (articulation-to-acoustics) mappings and inverse mappings, (2) distributions of local slopes and constants, (3) examples of correlations among slopes and constants, (4) root-mean-square error, and (5) sensitivity of formant frequencies to articulatory change. It is shown that the results are qualitatively correct and that loess performs better than global regression. The forward mappings show different root-mean-square error properties than the inverse mappings indicating that this method is better suited for the forward mappings than the inverse mappings, at least for the data chosen for the current study. Some preliminary results on sensitivity of the first two formant frequencies to the two most important articulatory principal components are presented.
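The core of loess can be sketched in one dimension: each query point gets its own weighted least-squares line, with weights from a tricube kernel over distance to the query. This is a minimal illustration of the locally weighted regression named above, not the paper's implementation (which works on principal components of multivariate articulatory and acoustic data); the training pairs below are synthetic.

```python
def loess_predict(x_train, y_train, x_query, bandwidth):
    """Predict y at x_query via a locally weighted linear fit (1-D loess sketch)."""
    # Tricube weights: points farther than `bandwidth` get zero weight.
    w = []
    for x in x_train:
        d = abs(x - x_query) / bandwidth
        w.append((1.0 - d ** 3) ** 3 if d < 1.0 else 0.0)
    # Closed-form weighted least squares for slope and intercept.
    sw = sum(w)
    sx = sum(wi * xi for wi, xi in zip(w, x_train))
    sy = sum(wi * yi for wi, yi in zip(w, y_train))
    sxx = sum(wi * xi * xi for wi, xi in zip(w, x_train))
    sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x_train, y_train))
    slope = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
    intercept = (sy - slope * sx) / sw
    return intercept + slope * x_query

# Synthetic articulatory-to-acoustic pairs lying on y = 2x + 1.
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [2.0 * x + 1.0 for x in xs]
pred = loess_predict(xs, ys, 1.2, bandwidth=2.0)
```

Allowing slope and intercept to vary per query point is what lets loess track local variation in the articulatory-acoustic map, which global regression cannot.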

  20. Vowel space development in a child acquiring English and Spanish from birth

    NASA Astrophysics Data System (ADS)

    Andruski, Jean; Kim, Sahyang; Nathan, Geoffrey; Casielles, Eugenia; Work, Richard

    2005-04-01

    To date, research on bilingual first language acquisition has tended to focus on the development of higher levels of language, with relatively few analyses of the acoustic characteristics of bilingual infants' and children's speech. Since monolingual infants begin to show perceptual divisions of vowel space that resemble adult native speakers' divisions by about 6 months of age [Kuhl et al., Science 255, 606-608 (1992)], bilingual children's vowel production may provide evidence of their awareness of language differences relatively early during language development. This paper will examine the development of vowel categories in a child whose mother is a native speaker of Castilian Spanish, and whose father is a native speaker of American English. Each parent speaks to the child only in her/his native language. For this study, recordings made at the ages of 2;5 and 2;10 were analyzed and F1-F2 measurements were made of vowels from the stressed syllables of content words. The development of vowel space is compared across ages within each language, and across languages at each age. In addition, the child's productions are compared with the mother's and father's vocalic productions, which provide the predominant input in Spanish and English respectively.

  1. Vowel Acoustics in Dysarthria: Speech Disorder Diagnosis and Classification

    ERIC Educational Resources Information Center

    Lansford, Kaitlin L.; Liss, Julie M.

    2014-01-01

    Purpose: The purpose of this study was to determine the extent to which vowel metrics are capable of distinguishing healthy from dysarthric speech and among different forms of dysarthria. Method: A variety of vowel metrics were derived from spectral and temporal measurements of vowel tokens embedded in phrases produced by 45 speakers with…

  2. Does Vowel Inventory Density Affect Vowel-to-Vowel Coarticulation?

    ERIC Educational Resources Information Center

    Mok, Peggy P. K.

    2013-01-01

    This study tests the output constraints hypothesis that languages with a crowded phonemic vowel space would allow less vowel-to-vowel coarticulation than languages with a sparser vowel space to avoid perceptual confusion. Mandarin has fewer vowel phonemes than Cantonese, but their allophonic vowel spaces are similarly crowded. The hypothesis…

  3. Effect of Domain Initial Strengthening on Vowel Height and Backness Contrasts in French: Acoustic and Ultrasound Data

    ERIC Educational Resources Information Center

    Georgeton, Laurianne; Antolík, Tanja Kocjancic; Fougeron, Cécile

    2016-01-01

    Purpose: Phonetic variation due to domain initial strengthening was investigated with respect to the acoustic and articulatory distinctiveness of vowels within a subset of the French oral vowel system /i, e, ɛ, a, o, u/, organized along 4 degrees of height for the front vowels and 2 degrees of backness at the close and midclose height levels.…

  4. A comparative study of human and parrot phonation: acoustic and articulatory correlates of vowels.

    PubMed

    Patterson, D K; Pepperberg, I M

    1994-08-01

    General acoustic and articulatory parallels between human and avian production of human vowels have been identified. A complete set of vowels from an African Grey parrot (Psittacus erithacus) and a limited set from a Yellow-naped Amazon parrot (Amazona ochrocephala auropalliata) have been analyzed. Comparison of human and avian acoustic parameters demonstrated both differences (e.g., absolute values of first formant frequencies) and similarities (e.g., separation of vowels into back and front categories with respect to tongue placement) in acoustic properties of avian and human speech. Similarities and differences were also found in articulatory mechanisms: Parrots, for example, use their tongues in some but not all the ways used by humans to produce vowels. Because humans perceive and correctly label vowels produced by psittacids despite differences in avian and human articulatory and acoustic parameters, the findings (a) are consistent with research that demonstrates the flexibility of vowel perception by humans and (b) suggest that the perceptual discontinuities that are exploited by speech may be basic to vertebrates rather than to mammals.

  5. A study of high front vowels with articulatory data and acoustic simulations

    PubMed Central

    Jackson, Michel T.-T.; McGowan, Richard S.

    2012-01-01

    The purpose of this study is to test a methodology for describing the articulation of vowels. High front vowels are a test case because some theories suggest that high front vowels have little cross-linguistic variation. Acoustic studies appear to show counterexamples to these predictions, but purely acoustic studies are difficult to interpret because of the many-to-one relation between articulation and acoustics. In this study, vocal tract dimensions, including constriction degree and position, are measured from cinéradiographic and x-ray data on high front vowels from three different languages (North American English, French, and Mandarin Chinese). Statistical comparisons find several significant articulatory differences between North American English /i/ and Mandarin Chinese and French /i/. In particular, differences in constriction degree were found, but not constriction position. Articulatory synthesis is used to model the acoustic consequences of some of the significant articulatory differences, finding that the articulatory differences may have the acoustic consequences of making the latter languages’ /i/ perceptually sharper by shifting the frequencies of F2 and F3 upwards. In addition, the vowel /y/ has specific articulations that differ from those for /i/, including a wider tongue constriction, and substantially different acoustic sensitivity functions for F2 and F3. PMID:22501077

  6. Relationship between semivowels and vowels: cross-linguistic investigations of acoustic difference and coarticulation.

    PubMed

    Maddieson, I; Emmorey, K

    1985-01-01

    Formant frequencies of the semivowels /j/ and /w/ in Amharic, Yoruba and Zuni were measured in three vowel environments. Cross-language differences were found between what are described as the same semivowels, i.e. different languages have different acoustic targets for /j/ and /w/. These cross-language differences in semivowels correlate with cross-language differences in the respective cognate vowels /i/ and /u/. Nonetheless, the semivowels differ in systematic ways from the vowels in directions that make them more 'consonantal'. These languages also differ in their patterns of coarticulation between semivowels and adjacent vowels. This shows, inter alia, that palatal segments differ from language to language in their degree of resistance to coarticulation. Because of these language-specific coarticulatory patterns, cross-language differences in acoustic targets can only be established after careful consideration of the effect of context.

  7. Acoustic Analysis on the Palatalized Vowels of Modern Mongolian

    ERIC Educational Resources Information Center

    Bulgantamir, Sangidkhorloo

    2015-01-01

    In Modern Mongolian, the palatalized vowels [a?, ??, ??] before palatalized consonants are considered phoneme allophones by most scholars. Nevertheless, these palatalized vowels have distinctive features that can be demonstrated by minimal pairs; this question remains open and has not been studied in depth. The purpose of this…

  8. Vowel Acoustics in Adults with Apraxia of Speech

    ERIC Educational Resources Information Center

    Jacks, Adam; Mathes, Katey A.; Marquardt, Thomas P.

    2010-01-01

    Purpose: To investigate the hypothesis that vowel production is more variable in adults with acquired apraxia of speech (AOS) relative to healthy individuals with unimpaired speech. Vowel formant frequency measures were selected as the specific target of focus. Method: Seven adults with AOS and aphasia produced 15 repetitions of 6 American English…

  9. Functional Connectivity Associated with Acoustic Stability During Vowel Production: Implications for Vocal-Motor Control

    PubMed Central

    2015-01-01

    Vowels provide the acoustic foundation of communication through speech and song, but little is known about how the brain orchestrates their production. Positron emission tomography was used to study regional cerebral blood flow (rCBF) during sustained production of the vowel /a/. Acoustic and blood flow data from 13 normal, right-handed, native speakers of American English were analyzed to identify CBF patterns that predicted the stability of the first and second formants of this vowel. Formants are bands of resonance frequencies that provide vowel identity and contribute to voice quality. The results indicated that formant stability was directly associated with blood flow increases and decreases in both left- and right-sided brain regions. Secondary brain regions (those associated with the regions predicting formant stability) were more likely to have an indirect negative relationship with first formant variability, but an indirect positive relationship with second formant variability. These results are not definitive maps of vowel production, but they do suggest that the level of motor control necessary to produce stable vowels is reflected in the complexity of an underlying neural system. These results also extend a systems approach to functional image analysis, previously applied to normal and ataxic speech rate, which is based solely on identifying patterns of brain activity associated with specific performance measures. Understanding the complex relationships between multiple brain regions and the acoustic characteristics of vocal stability may provide insight into the pathophysiology of the dysarthrias, vocal disorders, and other speech changes in neurological and psychiatric disorders. PMID:25295385

  10. Control of Spoken Vowel Acoustics and the Influence of Phonetic Context in Human Speech Sensorimotor Cortex

    PubMed Central

    Bouchard, Kristofer E.

    2014-01-01

    Speech production requires the precise control of vocal tract movements to generate individual speech sounds (phonemes) which, in turn, are rapidly organized into complex sequences. Multiple productions of the same phoneme can exhibit substantial variability, some of which is inherent to control of the vocal tract and its biomechanics, and some of which reflects the contextual effects of surrounding phonemes (“coarticulation”). The role of the CNS in these aspects of speech motor control is not well understood. To address these issues, we recorded multielectrode cortical activity directly from human ventral sensory-motor cortex (vSMC) during the production of consonant-vowel syllables. We analyzed the relationship between the acoustic parameters of vowels (pitch and formants) and cortical activity on a single-trial level. We found that vSMC activity robustly predicted acoustic parameters across vowel categories (up to 80% of variance), as well as different renditions of the same vowel (up to 25% of variance). Furthermore, we observed significant contextual effects on vSMC representations of produced phonemes that suggest active control of coarticulation: vSMC representations for vowels were biased toward the representations of the preceding consonant, and conversely, representations for consonants were biased toward upcoming vowels. These results reveal that vSMC representations of phonemes are not invariant and provide insight into the cortical mechanisms of coarticulation. PMID:25232105

  11. From EMG to formant patterns of vowels: the implication of vowel spaces.

    PubMed

    Maeda, S; Honda, K

    1994-01-01

    With a few exceptions, EMG data are interpreted with reference to the intended output, such as the phonetic description of utterances spoken by speakers. For a more rigorous interpretation, the data should also be analysed in terms of the displacement of the articulators and the acoustic patterns. In this paper, we describe our attempts to calculate the formant patterns from EMG activity patterns via an articulatory model. The value of the model parameters, such as the tongue body position or tongue body shape, is derived from the EMG activities of the specific pairs of antagonistic tongue muscles. The model-calculated F1-F2 patterns for 11 American English vowels correspond rather well with those measured from the acoustic signals. What strikes us is the simplicity of the mappings from the muscle activities to vocal-tract configurations and to the formant patterns. We speculate that the brain optimally exploits the morphology of the vocal tract and the kinematic functions of the tongue muscles so that the mappings from the muscle activities (production) to the acoustic patterns (perception) are simple and robust.

  12. Acoustic analysis of vowel sounds before and after orthognathic surgery.

    PubMed

    Ahn, Jaemyung; Kim, Gunjong; Kim, Young Ho; Hong, Jongrak

    2015-01-01

    The purpose of this study was to compare the articular structures and vowel sounds of patients with mandibular prognathism before and after bilateral sagittal split ramus osteotomy (BSSRO). Eight patients who underwent BSSRO to correct mandibular prognathism were selected for inclusion in this study. All patients were asked to read short words (vowels), and these sounds were recorded. Every utterance was repeated twice in four different sessions before the operation and at 6 weeks, 3 months, and 6 months after the operation. The data were analysed using Praat (ver. 5.1.31), and the formant frequencies (F1, F2) of the eight vowels were extracted. PlotFormant (ver. 1.0) was used to draw formant diagrams. The F1 and F2 of front-low vowels were reduced after BSSRO, and the articulating positions of the patients shifted in a posterior-superior direction after the procedure. Additionally, the area of vowel articulation was dramatically reduced after BSSRO but increased slowly over time.

  13. Exploration of Acoustic Features for Automatic Vowel Discrimination in Spontaneous Speech

    ERIC Educational Resources Information Center

    Tyson, Na'im R.

    2012-01-01

    In an attempt to understand what acoustic/auditory feature sets motivated transcribers towards certain labeling decisions, I built machine learning models that were capable of discriminating between canonical and non-canonical vowels excised from the Buckeye Corpus. Specifically, I wanted to model when the dictionary form and the transcribed-form…

  14. The Impact of Contrastive Stress on Vowel Acoustics and Intelligibility in Dysarthria

    ERIC Educational Resources Information Center

    Connaghan, Kathryn P.; Patel, Rupal

    2017-01-01

    Purpose: To compare vowel acoustics and intelligibility in words produced with and without contrastive stress by speakers with spastic (mixed-spastic) dysarthria secondary to cerebral palsy (DYS[subscript CP]) and healthy controls (HCs). Method: Fifteen participants (9 men, 6 women; age M = 42 years) with DYS[subscript CP] and 15 HCs (9 men, 6…

  15. The development of vowel spaces in English- and Korean-learning infants' speech

    NASA Astrophysics Data System (ADS)

    Lee, Soyoung

    2005-04-01

    A previous study (Yang, 1996) revealed that the vowel spaces of adult speech differ between English and Korean. This study longitudinally investigated whether vowel spaces of English- and Korean-learning infants' speech demonstrated patterns similar to their ambient languages. Speech samples of English- and Korean-learning infants were collected at 12 and 24 months and transcribed by either native English or Korean speakers, respectively. First and second formants of each vowel were measured using LPC, spectral peak values, and spectrographic formant midpoints. The vowel spaces of the two groups displayed similar patterns at 12 months, although the frequency of occurrence of each vowel differed (e.g., [i] occurs more frequently in English than in Korean). However, the vowel spaces showed different patterns at 24 months. F2 values for the front vowels [i, e] were higher in English-learning infants' speech than in Korean. [a] in Korean was located at a central position of the vowel space, while it was located at a back position in English. These patterns were similar to the adult vowel spaces of Korean and English. This study suggests that infants form vowel spaces similar to those of their ambient language by around 24 months.

  16. Subthalamic Stimulation Reduces Vowel Space at the Initiation of Sustained Production: Implications for Articulatory Motor Control in Parkinson’s Disease

    PubMed Central

    Sidtis, John J.; Alken, Amy G.; Tagliati, Michele; Alterman, Ron; Van Lancker Sidtis, Diana

    2016-01-01

    Background: Stimulation of the subthalamic nuclei (STN) is an effective treatment for Parkinson’s disease, but complaints of speech difficulties after surgery have been difficult to quantify. Speech measures do not convincingly account for such reports. Objective: This study examined STN stimulation effects on vowel production, in order to probe whether DBS affects articulatory posturing. The objective was to compare positioning during the initiation phase with the steady prolongation phase by measuring vowel spaces for three “corner” vowels at these two time frames. Methods: Vowel space was measured over the initial 0.25 sec of sustained productions of high front (/i/), high back (/u/) and low vowels (/a/), and again during a 2 sec segment at the midpoint. Eight right-handed male subjects with bilateral STN stimulation and seven age-matched male controls were studied based on their participation in a larger study that included functional imaging. Mean values: age = 57±4.6 yrs; PD duration = 12.3±2.7 yrs; duration of DBS = 25.6±21.2 mos; and UPDRS III speech score = 1.6±0.7. STN subjects were studied off medication at their therapeutic DBS settings and again with their stimulators off, in counterbalanced order. Results: Vowel space was larger in the initiation phase compared to the midpoint for both the control and the STN subjects off stimulation. With stimulation on, however, the initial vowel space was significantly reduced to the area measured at the midpoint. For the three vowels, the acoustics were differentially affected, in accordance with expected effects of front versus back position in the vocal tract. Conclusions: STN stimulation appears to constrain initial articulatory gestures for vowel production, raising the possibility that articulatory positions normally used in speech are similarly constrained. PMID:27003219

  17. A magnetic resonance imaging study on the articulatory and acoustic speech parameters of Malay vowels.

    PubMed

    Zourmand, Alireza; Mirhassani, Seyed Mostafa; Ting, Hua-Nong; Bux, Shaik Ismail; Ng, Kwan Hoong; Bilgen, Mehmet; Jalaludin, Mohd Amin

    2014-07-25

    The phonetic properties of six Malay vowels are investigated using magnetic resonance imaging (MRI) to visualize the vocal tract in order to obtain dynamic articulatory parameters during speech production. To resolve image blurring due to tongue movement during the scanning process, a method based on active contour extraction is used to track tongue contours. The proposed method efficiently tracks tongue contours despite the partial blurring of the MRI images. Consequently, the articulatory parameters are effectively measured as tongue movement is observed, and the specific shape and position of the tongue are determined for all six uttered Malay vowels. Speech rehabilitation procedures demand a visually perceivable prototype of speech articulation. To investigate the validity of the measured articulatory parameters against the acoustic theory of speech production, an acoustic analysis of the vowels uttered by the subjects was performed. When the acoustic and articulatory parameters of the uttered speech were examined, a correlation between formant frequencies and articulatory parameters was observed. The experiments showed a positive correlation between the constriction location of the tongue body and the first formant frequency, as well as a negative correlation between the constriction location of the tongue tip and the second formant frequency. The results demonstrate that the proposed method is an effective tool for the dynamic study of speech production.

  18. Characterizing the distribution of the quadrilateral vowel space area

    PubMed Central

    Berisha, Visar; Sandoval, Steven; Utianski, Rene; Liss, Julie; Spanias, Andreas

    2014-01-01

    The vowel space area (VSA) has been studied as a quantitative index of intelligibility to the extent it captures articulatory working space and reductions therein. The majority of such studies have been empirical wherein measures of VSA are correlated with perceptual measures of intelligibility. However, the literature contains minimal mathematical analysis of the properties of this metric. This paper further develops the theoretical underpinnings of this metric by presenting a detailed analysis of the statistical properties of the VSA and characterizing its distribution through the moment generating function. The theoretical analysis is confirmed by a series of experiments where empirically estimated and theoretically predicted statistics of this function are compared. The results show that on the Hillenbrand and TIMIT data, the theoretically predicted values of the higher-order statistics of the VSA match very well with the empirical estimates of the same. PMID:24437782
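
    The quadrilateral VSA is conventionally obtained with the shoelace formula applied to the four corner vowels in perimeter order. A minimal sketch (illustrative formant values, not the Hillenbrand or TIMIT data):

```python
def shoelace_area(points):
    """Polygon area via the shoelace formula.

    points: (F1, F2) pairs ordered around the polygon's perimeter;
    an unordered set must first be sorted into perimeter order.
    """
    total = 0.0
    n = len(points)
    for k in range(n):
        x1, y1 = points[k]
        x2, y2 = points[(k + 1) % n]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

# Corner vowels in perimeter order /i/ -> /u/ -> /a/ -> /ae/ (illustrative Hz values).
quad = [(270, 2290), (300, 870), (730, 1090), (660, 1720)]
vsa = shoelace_area(quad)
```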

  19. Emotions in freely varying and mono-pitched vowels, acoustic and EGG analyses.

    PubMed

    Waaramaa, Teija; Palo, Pertti; Kankare, Elina

    2015-12-01

    Vocal emotions are expressed either by speech or singing. The difference is that in singing the pitch is predetermined while in speech it may vary freely. It was of interest to study whether there were voice quality differences between freely varying and mono-pitched vowels expressed by professional actors. Given their profession, actors have to be able to express emotions both by speech and singing. Electroglottogram and acoustic analyses of emotional utterances embedded in expressions of freely varying vowels [a:], [i:], [u:] (96 samples) and mono-pitched protracted vowels (96 samples) were studied. Contact quotient (CQEGG) was calculated using 35%, 55%, and 80% threshold levels. Three different threshold levels were used in order to evaluate their effects on emotions. Genders were studied separately. The results suggested significant gender differences for CQEGG 80% threshold level. SPL, CQEGG, and F4 were used to convey emotions, but to a lesser degree, when F0 was predetermined. Moreover, females showed fewer significant variations than males. Both genders used more hypofunctional phonation type in mono-pitched utterances than in the expressions with freely varying pitch. The present material warrants further study of the interplay between CQEGG threshold levels and formant frequencies, and listening tests to investigate the perceptual value of the mono-pitched vowels in the communication of emotions.
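
    The contact quotient at a given threshold level is the fraction of each glottal cycle during which the EGG signal lies above a criterion placed that proportion of the way from the cycle minimum to the cycle maximum. A simplified single-cycle sketch (synthetic waveform, not real EGG data):

```python
import numpy as np

def contact_quotient(egg_cycle, level=0.35):
    """Contact quotient for one EGG cycle.

    level: threshold criterion (e.g. 0.35, 0.55, 0.80) expressed as a
    proportion of the cycle's peak-to-peak amplitude above its minimum.
    Returns the fraction of samples above that threshold.
    """
    egg_cycle = np.asarray(egg_cycle, dtype=float)
    lo, hi = egg_cycle.min(), egg_cycle.max()
    threshold = lo + level * (hi - lo)
    return float(np.mean(egg_cycle > threshold))

# Synthetic one-period "cycle" for illustration only.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
cycle = np.sin(2.0 * np.pi * t)
cq35 = contact_quotient(cycle, 0.35)
cq80 = contact_quotient(cycle, 0.80)  # higher threshold -> smaller CQ
```

    This makes concrete why the choice of threshold level matters: the same cycle yields systematically lower CQEGG values at the 80% criterion than at 35%.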

  20. Second language vowel training using vowel subsets: Order of training and choice of contrasts

    NASA Astrophysics Data System (ADS)

    Nishi, Kanae; Kewley-Port, Diane

    2005-09-01

    Our previous vowel training study for Japanese learners of American English [J. Acoust. Soc. Am. 117, 2401 (2005)] compared training for two vowel subsets: nine vowels covering the entire vowel space (9V condition) and the three more difficult vowels (3V condition). Trainees in the 9V condition improved on all vowels, but their identification of the three more difficult vowels was lower than that of 3V trainees. Trainees in the 3V condition improved identification of the trained three vowels but not the other vowels. To further explore more effective training protocols, the present study compared two groups of native Korean trainees using two different training orders for the two vowel subsets: 3V then 9V (3V-9V) and 9V then 3V (9V-3V). The groups were compared in terms of their performance on all nine vowels for pre-, mid-, and post-test scores. Average test scores across the two groups were not different from each other. A closer examination indicated that group 3V-9V did not improve on one of the three more difficult vowels, whereas group 9V-3V improved on all three vowels, indicating the importance of training subset order. [Work supported by NIH DC-006313 and DC-02229.]

  1. A note on the acoustic-phonetic characteristics of non-native English vowels produced in noise

    NASA Astrophysics Data System (ADS)

    Li, Chi-Nin; Munro, Murray J.

    2003-10-01

    The Lombard reflex occurs when people unconsciously raise their vocal levels in the presence of loud background noise. Previous work has established that utterances produced in noisy environments exhibit increases in vowel duration and fundamental frequency (F0), and a shift in formant center frequencies for F1 and F2. Most studies of the Lombard reflex have been conducted with native speakers; research with second-language speakers is much less common. The present study examined the effects of the Lombard reflex on foreign-accented English vowel productions. Seven female Cantonese speakers and a comparison group of English speakers were recorded producing three vowels (/i u a/) in /bVt/ context in quiet and in 70 dB of masking noise. Vowel durations, F0, and the first two formants for each of the three vowels were measured. Analyses revealed that vowel durations and F0 were greater in the vowels produced in noise than those produced in quiet in most cases. First formants, but not F2, were consistently higher in Lombard speech than in normal speech. The findings suggest that non-native English speakers exhibit acoustic-phonetic patterns similar to those of native speakers when producing English vowels in noisy conditions.

  2. English Vowel Spaces Produced by Japanese Speakers: The Smaller Point Vowels' and the Greater Schwas'

    ERIC Educational Resources Information Center

    Tomita, Kaoru; Yamada, Jun; Takatsuka, Shigenobu

    2010-01-01

    This study investigated how Japanese-speaking learners of English pronounce the three point vowels /i/, /u/, and /a/ appearing in the first and second monosyllabic words of English noun phrases, and the schwa /ə/ appearing in English disyllabic words. First and second formant (F1 and F2) values were measured for four Japanese…

  3. The relationship between perception and acoustics for a high-low vowel contrast produced by speakers with dysarthria.

    PubMed

    Bunton, K; Weismer, G

    2001-12-01

    This study was designed to explore the relationship between perception of a high-low vowel contrast and its acoustic correlates in tokens produced by persons with motor speech disorders. An intelligibility test designed by Kent, Weismer, Kent, and Rosenbek (1989a) groups target and error words in minimal-pair contrasts. This format allows for construction of phonetic error profiles based on listener responses, thus allowing for a direct comparison of the acoustic characteristics of vowels perceived as the intended target with those heard as something other than the target. The high-low vowel contrast was found to be a consistent error across clinical groups and therefore was selected for acoustic analysis. The contrast was expected to have well-defined acoustic measures or correlates, derived from the literature, that directly relate to a listener's responses for that token. These measures include the difference between the second and first formant frequency (F2-F1), the difference between F1 and the fundamental frequency (F0), and vowel duration. Results showed that the acoustic characteristics of tongue-height errors were not clearly differentiated from the acoustic characteristics of targets. Rather, the acoustic characteristics of errors often looked like noisy (nonprototypical) versions of the targets. Results are discussed in terms of the test from which the errors were derived and within the framework of speech perception theory.

  4. Characteristics of the Lax Vowel Space in Dysarthria

    ERIC Educational Resources Information Center

    Tjaden, Kris; Rivera, Deanna; Wilding, Gregory; Turner, Greg S.

    2005-01-01

    It has been hypothesized that lax vowels may be relatively unaffected by dysarthria, owing to the reduced vocal tract shapes required for these phonetic events (G. S. Turner, K. Tjaden, & G. Weismer, 1995). It also has been suggested that lax vowels may be especially susceptible to speech mode effects (M. A. Picheny, N. I. Durlach, & L. D. Braida,…

  5. Phonetic Modification of Vowel Space in Storybook Speech to Infants up to 2 Years of Age

    PubMed Central

    Burnham, Evamarie B.; Wieland, Elizabeth A.; Kondaurova, Maria V.; McAuley, J. Devin; Bergeson, Tonya R.

    2015-01-01

    Purpose A large body of literature has indicated vowel space area expansion in infant-directed (ID) speech compared with adult-directed (AD) speech, which may promote language acquisition. The current study tested whether this expansion occurs in storybook speech read to infants at various points during their first 2 years of life. Method In 2 studies, mothers read a storybook containing target vowels in ID and AD speech conditions. Study 1 was longitudinal, with 11 mothers recorded when their infants were 3, 6, and 9 months old. Study 2 was cross-sectional, with 48 mothers recorded when their infants were 3, 9, 13, or 20 months old (n = 12 per group). The 1st and 2nd formants of vowels /i/, /ɑ/, and /u/ were measured, and vowel space area and dispersion were calculated. Results Across both studies, 1st and/or 2nd formant frequencies shifted systematically for /i/ and /u/ vowels in ID compared with AD speech. No difference in vowel space area or dispersion was found. Conclusions The results suggest that a variety of communication and situational factors may affect phonetic modifications in ID speech, but that vowel space characteristics in speech to infants stay consistent across the first 2 years of life. PMID:25659121

  6. Acoustic Context Alters Vowel Categorization in Perception of Noise-Vocoded Speech.

    PubMed

    Stilp, Christian E

    2017-03-09

    Normal-hearing listeners' speech perception is widely influenced by spectral contrast effects (SCEs), where perception of a given sound is biased away from stable spectral properties of preceding sounds. Despite this influence, it is not clear how these contrast effects affect speech perception for cochlear implant (CI) users whose spectral resolution is notoriously poor. This knowledge is important for understanding how CIs might better encode key spectral properties of the listening environment. Here, SCEs were measured in normal-hearing listeners using noise-vocoded speech to simulate poor spectral resolution. Listeners heard a noise-vocoded sentence where low-F1 (100-400 Hz) or high-F1 (550-850 Hz) frequency regions were amplified to encourage "eh" (/ɛ/) or "ih" (/ɪ/) responses to the following target vowel, respectively. This was done by filtering with +20 dB (experiment 1a) or +5 dB gain (experiment 1b) or filtering using 100% of the difference between spectral envelopes of /ɛ/ and /ɪ/ endpoint vowels (experiment 2a) or only 25% of this difference (experiment 2b). SCEs influenced identification of noise-vocoded vowels in each experiment at every level of spectral resolution. In every case but one, SCE magnitudes exceeded those reported for full-spectrum speech, particularly when spectral peaks in the preceding sentence were large (+20 dB gain, 100% of the spectral envelope difference). Even when spectral resolution was insufficient for accurate vowel recognition, SCEs were still evident. Results are suggestive of SCEs influencing CI users' speech perception as well, encouraging further investigation of CI users' sensitivity to acoustic context.
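
    The amplification of a fixed F1 region can be approximated with a simple FFT-domain gain; this is a crude stand-in for the study's filtering, with illustrative parameters:

```python
import numpy as np

def amplify_band(signal, sr, f_lo, f_hi, gain_db):
    """Boost the [f_lo, f_hi] Hz band of `signal` by `gain_db` using a
    brick-wall FFT-domain gain (illustrative, not the study's actual
    filter design)."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    spec[band] *= 10.0 ** (gain_db / 20.0)
    return np.fft.irfft(spec, n=len(signal))

# Example: +20 dB in the low-F1 (100-400 Hz) region, as in experiment 1a.
rng = np.random.default_rng(0)
x = rng.standard_normal(16000)                       # 1 s of noise at 16 kHz
y = amplify_band(x, sr=16000, f_lo=100.0, f_hi=400.0, gain_db=20.0)
```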

  7. How Native Do They Sound? An Acoustic Analysis of the Spanish Vowels of Elementary Spanish Immersion Students

    ERIC Educational Resources Information Center

    Menke, Mandy R.

    2015-01-01

    Language immersion students' lexical, syntactic, and pragmatic competencies are well documented, yet their phonological skill has remained relatively unexplored. This study investigates the Spanish vowel productions of a cross-sectional sample of 35 one-way Spanish immersion students. Learner productions were analyzed acoustically and compared to…

  8. Vowel Acoustics in Parkinson's Disease and Multiple Sclerosis: Comparison of Clear, Loud, and Slow Speaking Conditions

    ERIC Educational Resources Information Center

    Tjaden, Kris; Lam, Jennifer; Wilding, Greg

    2013-01-01

    Purpose: The impact of clear speech, increased vocal intensity, and rate reduction on acoustic characteristics of vowels was compared in speakers with Parkinson's disease (PD), speakers with multiple sclerosis (MS), and healthy controls. Method: Speakers read sentences in habitual, clear, loud, and slow conditions. Variations in clarity,…

  9. Optimizing Vowel Formant Measurements in Four Acoustic Analysis Systems for Diverse Speaker Groups

    PubMed Central

    Derdemezis, Ekaterini; Kent, Ray D.; Fourakis, Marios; Reinicke, Emily L.; Bolt, Daniel M.

    2016-01-01

    Purpose This study systematically assessed the effects of select linear predictive coding (LPC) analysis parameter manipulations on vowel formant measurements for diverse speaker groups using 4 trademarked Speech Acoustic Analysis Software Packages (SAASPs): CSL, Praat, TF32, and WaveSurfer. Method Productions of 4 words containing the corner vowels were recorded from 4 speaker groups with typical development (male and female adults and male and female children) and 4 speaker groups with Down syndrome (male and female adults and male and female children). Formant frequencies were determined from manual measurements using a consensus analysis procedure to establish formant reference values, and from the 4 SAASPs (using both the default analysis parameters and with adjustments or manipulations to select parameters). Smaller differences between values obtained from the SAASPs and the consensus analysis implied more optimal analysis parameter settings. Results Manipulations of default analysis parameters in CSL, Praat, and TF32 yielded more accurate formant measurements, though the benefit was not uniform across speaker groups and formants. In WaveSurfer, manipulations did not improve formant measurements. Conclusions The effects of analysis parameter manipulations on accuracy of formant-frequency measurements varied by SAASP, speaker group, and formant. The information from this study helps to guide clinical and research applications of SAASPs. PMID:26501214

  10. Information conveyed by vowels about other vowels

    NASA Astrophysics Data System (ADS)

    Javkin, Hector R.; Drom, Elaine; Christie, Carol; Cangiano, Gaston R.

    2004-05-01

    Rapid adaptation to different speakers has become an important issue in automatic speech recognition (ASR) but has been known in human listeners for a long time. Ladefoged and Broadbent [J. Acoust. Soc. Am. 29, 98-104 (1957)] demonstrated that human perception of synthesized vowels occurring in monosyllabic words (bit, bet, bat, but) can be changed by changing the formants of an introductory phrase (Please say what this word is). In those stimuli, the introductory phrase ranged over the same portion of the vowel space (front vowels, or relatively high F2), thus facilitating listeners' adaptation. To further test the limits of human adaptation, we replicated the experiment keeping the same words, substituting introductory phrases consisting of back (low F2) vowels, and maintaining a similar level of low-quality synthesized speech. The effects are difficult to replicate with natural or high-quality synthetic speech. However, we will suggest that low-quality speech is analogous to the low-dimensionality representation of speech of many ASR front ends, which discard, for example, information as to the higher formants. Therefore, these findings are relevant to the use of adaptation in improving ASR. [Work supported by a Faculty Small Grant from San Jose State University.]

  11. Recognition of Emotions in Mexican Spanish Speech: An Approach Based on Acoustic Modelling of Emotion-Specific Vowels

    PubMed Central

    Caballero-Morales, Santiago-Omar

    2013-01-01

    An approach for the recognition of emotions in speech is presented. The target language is Mexican Spanish, and for this purpose a speech database was created. The approach consists in the phoneme acoustic modelling of emotion-specific vowels. For this, a standard phoneme-based Automatic Speech Recognition (ASR) system was built with Hidden Markov Models (HMMs), where different phoneme HMMs were built for the consonants and emotion-specific vowels associated with four emotional states (anger, happiness, neutral, sadness). Then, estimation of the emotional state from a spoken sentence is performed by counting the number of emotion-specific vowels found in the ASR's output for the sentence. With this approach, accuracy of 87–100% was achieved for the recognition of emotional state of Mexican Spanish speech. PMID:23935410

  13. Can Nonnative Speakers Reduce English Vowels in a Native-Like Fashion? Evidence from L1-Spanish L2-English Bilinguals.

    PubMed

    Rallo Fabra, Lucrecia

    2015-01-01

    This paper investigates the production of English unstressed vowels by two groups of early (ESp) and late Spanish (LSp) bilinguals and a control group of native English (NE) monolinguals. Three acoustic measurements were obtained: duration and intensity ratios of unstressed to stressed vowels, normalized vowel formants, and Euclidean distances. Both groups of bilinguals showed significantly fewer differences in duration between stressed and unstressed vowels than the NE monolinguals. Intensity differences depended on whether the stress pattern of the target English words matched the stress pattern of their Spanish cognates. As for vowel quality, the early bilinguals reduced the unstressed vowels, which clustered around the midcenter area of the vowel space, in the same fashion as the NE monolinguals, suggesting that vowel reduction might be operating at the phonological level. However, the late bilinguals showed a context-dependent, phonetic-level pattern with vowels that were more peripheral in the vowel space.
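
    The three acoustic measurements reduce to simple arithmetic over per-vowel measurements; a sketch with hypothetical field names and values (not the paper's data):

```python
import math

def reduction_measures(stressed, unstressed):
    """Unstressed-to-stressed duration and intensity ratios plus the
    Euclidean distance between the two vowels in normalized (F1, F2)
    space. The dict keys here are illustrative, not the paper's."""
    dur_ratio = unstressed["dur"] / stressed["dur"]
    int_ratio = unstressed["intensity"] / stressed["intensity"]
    distance = math.hypot(unstressed["f1"] - stressed["f1"],
                          unstressed["f2"] - stressed["f2"])
    return dur_ratio, int_ratio, distance

# Hypothetical measurements for one stressed/unstressed vowel pair.
stressed = {"dur": 0.120, "intensity": 72.0, "f1": 0.8, "f2": 1.4}
unstressed = {"dur": 0.060, "intensity": 66.0, "f1": 0.5, "f2": 1.5}
dur_r, int_r, dist = reduction_measures(stressed, unstressed)
```

    A native-like reduction pattern would show a low duration ratio together with unstressed vowels clustering near the midcenter of the space.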

  14. Phonetic Modification of Vowel Space in Storybook Speech to Infants up to 2 Years of Age

    ERIC Educational Resources Information Center

    Burnham, Evamarie B.; Wieland, Elizabeth A.; Kondaurova, Maria V.; McAuley, J. Devin; Bergeson, Tonya R.; Dilley, Laura C.

    2015-01-01

    Purpose: A large body of literature has indicated vowel space area expansion in infant-directed (ID) speech compared with adult-directed (AD) speech, which may promote language acquisition. The current study tested whether this expansion occurs in storybook speech read to infants at various points during their first 2 years of life. Method: In 2…

  15. Pitch (F0) and formant profiles of human vowels and vowel-like baboon grunts: The role of vocalizer body size and voice-acoustic allometry

    NASA Astrophysics Data System (ADS)

    Rendall, Drew; Kollias, Sophie; Ney, Christina; Lloyd, Peter

    2005-02-01

    Key voice features, fundamental frequency (F0) and formant frequencies, can vary extensively between individuals. Much of the variation can be traced to differences in the size of the larynx and vocal-tract cavities, but whether these differences in turn simply reflect differences in speaker body size (i.e., neutral vocal allometry) remains unclear. Quantitative analyses were therefore undertaken to test the relationship between speaker body size and voice F0 and formant frequencies for human vowels. To test the taxonomic generality of the relationships, the same analyses were conducted on the vowel-like grunts of baboons, whose phylogenetic proximity to humans and similar vocal production biology and voice acoustic patterns recommend them for such comparative research. For adults of both species, males were larger than females and had lower mean voice F0 and formant frequencies. However, beyond this, F0 variation did not track body-size variation between the sexes in either species, nor within sexes in humans. In humans, formant variation correlated significantly with speaker height but only in males and not in females. Implications for general vocal allometry are discussed as are implications for speech origins theories, and challenges to them, related to laryngeal position and vocal tract length.

  16. A Neural Substrate for Rapid Timbre Recognition? Neural and Behavioral Discrimination of Very Brief Acoustic Vowels.

    PubMed

    Occelli, F; Suied, C; Pressnitzer, D; Edeline, J-M; Gourévitch, B

    2016-06-01

    The timbre of a sound plays an important role in our ability to discriminate between behaviorally relevant auditory categories, such as different vowels in speech. Here, we investigated, in the primary auditory cortex (A1) of anesthetized guinea pigs, the neural representation of vowels with impoverished timbre cues. Five different vowels were presented with durations ranging from 2 to 128 ms. A psychophysical experiment involving human listeners showed that identification performance was near ceiling for the longer durations and degraded close to chance level for the shortest durations. This was likely due to spectral splatter, which reduced the contrast between the spectral profiles of the vowels at short durations. Effects of vowel duration on cortical responses were well predicted by the linear frequency responses of A1 neurons. Using mutual information, we found that auditory cortical neurons in the guinea pig could be used to reliably identify several vowels for all durations. Information carried by each cortical site was low on average, but the population code was accurate even for durations where human behavioral performance was poor. These results suggest that a place population code is available at the level of A1 to encode spectral profile cues for even very short sounds.
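
    The population-code analysis rests on mutual information between discrete vowel labels and (binned) neural responses. A minimal plug-in estimator over joint samples, not the authors' exact method:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Plug-in mutual information (bits) between two discrete variables,
    given joint samples such as (vowel_label, response_bin) pairs."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)   # marginal counts of labels
    py = Counter(y for _, y in pairs)   # marginal counts of responses
    mi = 0.0
    for (x, y), c in joint.items():
        # p(x,y) * log2( p(x,y) / (p(x) p(y)) ), with counts converted inline
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi

# Perfectly informative responses: each vowel maps to its own response bin.
samples = [("i", 0), ("a", 1), ("u", 2)] * 2
mi = mutual_information(samples)   # log2(3) bits for 3 equiprobable vowels
```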

  17. Effects of Long-Term Tracheostomy on Spectral Characteristics of Vowel Production.

    ERIC Educational Resources Information Center

    Kamen, Ruth Saletsky; Watson, Ben C.

    1991-01-01

    Eight preschool children who underwent tracheotomy during the prelingual period were compared to matched controls on a variety of speech measures. Children with tracheotomies showed reduced acoustic vowel space, suggesting they were limited in their ability to produce extreme vocal tract configurations for vowels postdecannulation. Oral motor…

  18. The Acoustic Correlates of Breathy Voice: A Study of Source-Vowel Interaction.

    NASA Astrophysics Data System (ADS)

    Lin, Yeong-Fen Emily

    This thesis is the result of an investigation of the source-vowel interaction from the point of view of perception. Major objectives include the identification of the acoustic correlates of breathy voice and the disclosure of the interdependent relationship between the perception of vowel identity and breathiness. Two experiments were conducted to achieve these objectives. In the first experiment, voice samples from one control group and seven patient groups were compared. The control group consisted of five female and five male adults. The ten normals were recruited to perform a sustained vowel phonation task with constant pitch and loudness. The voice samples of seventy patients were retrieved from a hospital data base, with vowels extracted from sentences repeated by patients at their habitual pitch and loudness. The seven patient groups were divided, based on a unique combination of patients' measures on mean flow rate and glottal resistance. Eighteen acoustic variables were treated with a three-way (Gender x Group x Vowel) ANOVA. Parameters showing a significant female-male difference as well as group differences, especially those between the presumed breathy group and the other groups, were identified as relevant to the distinction of breathy voice. As a result, F1-F3 amplitude difference and slope were found to be most effective in distinguishing breathy voice. Other acoustic correlates of breathy voice included F1 bandwidth, RMS-H1 amplitude difference, and F1-F2 amplitude difference and slope. In the second experiment, a formant synthesizer was used to generate vowel stimuli with varying spectral tilt and F1 bandwidth. Thirteen native American English speakers made dissimilarity judgements on paired stimuli in terms of vowel identity and breathiness. Listeners' perceptual vowel spaces were found to be affected by changes in the acoustic correlates of breathy voice. The threshold of detecting a change of vocal quality in the breathiness domain was also

  19. The influence of sexual orientation on vowel production (L)

    NASA Astrophysics Data System (ADS)

    Pierrehumbert, Janet B.; Bent, Tessa; Munson, Benjamin; Bradlow, Ann R.; Bailey, J. Michael

    2004-10-01

    Vowel production in gay, lesbian, bisexual (GLB), and heterosexual speakers was examined. Differences in the acoustic characteristics of vowels were found as a function of sexual orientation. Lesbian and bisexual women produced less fronted /u/ and /ɑ/ than heterosexual women. Gay men produced a more expanded vowel space than heterosexual men. However, the vowels of GLB speakers were not generally shifted toward vowel patterns typical of the opposite sex. These results are inconsistent with the conjecture that innate biological factors have a broadly feminizing influence on the speech of gay men and a broadly masculinizing influence on the speech of lesbian/bisexual women. They are consistent with the idea that innate biological factors influence GLB speech patterns indirectly by causing selective adoption of certain speech patterns characteristic of the opposite sex.

  20. Effects of Age on Concurrent Vowel Perception in Acoustic and Simulated Electroacoustic Hearing

    ERIC Educational Resources Information Center

    Arehart, Kathryn H.; Souza, Pamela E.; Muralimanohar, Ramesh Kumar; Miller, Christi Wise

    2011-01-01

    Purpose: In this study, the authors investigated the effects of age on the use of fundamental frequency differences (ΔF0) in the perception of competing synthesized vowels in simulations of electroacoustic and cochlear-implant hearing. Method: Twelve younger listeners with normal hearing and 13 older listeners with (near) normal…

  1. The Effects of Inventory on Vowel Perception in French and Spanish: An MEG Study

    ERIC Educational Resources Information Center

    Hacquard, Valentine; Walter, Mary Ann; Marantz, Alec

    2007-01-01

    Production studies have shown that speakers of languages with larger phoneme inventories expand their acoustic space relative to languages with smaller inventories [Bradlow, A. (1995). A comparative acoustic study of English and Spanish vowels. "Journal of the Acoustical Society of America," 97(3), 1916-1924; Jongman, A., Fourakis, M., & Sereno,…

  2. Acoustic correlates of caller identity and affect intensity in the vowel-like grunt vocalizations of baboons

    NASA Astrophysics Data System (ADS)

    Rendall, Drew

    2003-06-01

    Comparative, production-based research on animal vocalizations can allow assessments of continuity in vocal communication processes across species, including humans, and may aid in the development of general frameworks relating specific constitutional attributes of callers to acoustic-structural details of their vocal output. Analyses were undertaken on vowel-like baboon grunts to examine variation attributable to caller identity and the intensity of the affective state underlying call production. Six hundred six grunts from eight adult females were analyzed. Grunts derived from 128 bouts of calling in two behavioral contexts: concerted group movements and social interactions involving mothers and their young infants. Each context was subdivided into a high- and low-arousal condition. Thirteen acoustic features variously predicted to reflect variation in either caller identity or arousal intensity were measured for each grunt bout, including tempo-, source- and filter-related features. Grunt bouts were highly individually distinctive, differing in a variety of acoustic dimensions but with some indication that filter-related features contributed disproportionately to individual distinctiveness. In contrast, variation according to arousal condition was associated primarily with tempo- and source-related features, many matching those identified as vehicles of affect expression in other nonhuman primate species and in human speech and other nonverbal vocal signals.

  3. Talker sex mediates the influence of neighborhood density on vowel articulation

    NASA Astrophysics Data System (ADS)

    Munson, Benjamin

    2005-09-01

    Words with high phonological neighborhood densities (ND) are more difficult to perceive than words with low NDs [P. Luce and D. Pisoni (1998)]. Previous research has shown that the F1/F2 acoustic vowel space is larger for vowels in words with high NDs relative to words with low NDs [B. Munson and N. P. Solomon (2004); R. Wright (2004)]. This may represent talkers' tacit attempts to partially counter the perception difficulties associated with high-ND words. If so, then we would expect to see a larger effect of ND on vowel articulation in women, who have been observed to produce more intelligible speech than men, and to accommodate more to conversational partners than men [V. Hazan and D. Markham (2004); L. Namy et al. (2002)]. This hypothesis was tested by examining the influence of ND on vowel-space articulation by 22 women and 22 men. As expected, women produced overall more dispersed vowel spaces than men, and vowel spaces associated with high-ND words were more dispersed than those of low-ND words. Contrary to expectations, the influence of ND on vowel-space expansion was strongest in men. This appeared to be due to a tendency for women not to produce contracted vowel spaces for low-ND words.
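
    Vowel-space dispersion of the kind compared here is commonly computed as the mean Euclidean distance of vowel tokens from their centroid. A sketch with hypothetical formant values:

```python
import math

def vowel_space_dispersion(tokens):
    """Mean Euclidean distance of (F1, F2) tokens from their centroid."""
    n = len(tokens)
    cx = sum(f1 for f1, _ in tokens) / n
    cy = sum(f2 for _, f2 in tokens) / n
    return sum(math.hypot(f1 - cx, f2 - cy) for f1, f2 in tokens) / n

# Hypothetical corner-vowel tokens (Hz): the more peripheral set
# (standing in for high-ND words) yields the larger dispersion.
high_nd = [(260, 2300), (740, 1100), (310, 850), (680, 1750)]
low_nd = [(330, 2100), (660, 1200), (380, 1000), (610, 1650)]
```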

  4. Articulatory Changes in Muscle Tension Dysphonia: Evidence of Vowel Space Expansion Following Manual Circumlaryngeal Therapy

    ERIC Educational Resources Information Center

    Roy, Nelson; Nissen, Shawn L.; Dromey, Christopher; Sapir, Shimon

    2009-01-01

    In a preliminary study, we documented significant changes in formant transitions associated with successful manual circumlaryngeal treatment (MCT) of muscle tension dysphonia (MTD), suggesting improvement in speech articulation. The present study explores further the effects of MTD on vowel articulation by means of additional vowel acoustic…

  5. Vowel Development in an Emergent Mandarin-English Bilingual Child: A Longitudinal Study

    ERIC Educational Resources Information Center

    Yang, Jing; Fox, Robert A.; Jacewicz, Ewa

    2015-01-01

    This longitudinal case study documents the emergence of bilingualism in a young monolingual Mandarin boy on the basis of an acoustic analysis of his vowel productions recorded via a picture-naming task over 20 months following his enrollment in an all-English (L2) preschool at the age of 3;7. The study examined (1) his initial L2 vowel space, (2)…

  6. Acoustic rainbow trapping by coiling up space

    NASA Astrophysics Data System (ADS)

    Ni, Xu; Wu, Ying; Chen, Ze-Guo; Zheng, Li-Yang; Xu, Ye-Long; Nayar, Priyanka; Liu, Xiao-Ping; Lu, Ming-Hui; Chen, Yan-Feng

    2014-11-01

We numerically realize the acoustic rainbow trapping effect by tapering an air waveguide with space-coiling metamaterials. Due to the high refractive index of the space-coiling metamaterials, our device is more compact than previously reported trapped-rainbow devices. A numerical model utilizing effective parameters was also calculated, and its results agree well with the direct numerical simulation of the space-coiling structure. Moreover, such a device can drop different frequency components of a broadband incident temporal acoustic signal into different channels, functioning as an acoustic wavelength-division de-multiplexer. These results may have potential applications in acoustic device design, such as acoustic filters and artificial cochleas.

  7. Thresholds for vowel formant discrimination using a sentence classification task

    NASA Astrophysics Data System (ADS)

    Kewley-Port, Diane; Oglesbee, Eric; Lee, Jae Hee

    2005-09-01

Accurate classification of vowels in sentences is challenging because American English has many acoustically similar vowels. Using a 2AFC paradigm, our previous research estimated thresholds for vowel formant discrimination in sentences that were half the measured formant distance between close vowels. A new paradigm has been developed to estimate the ability to detect formant differences in a sentence classification task. A seven-token continuum of changes in either F1 or F2 was generated from natural productions of ``bid'' and ``bed'' using the synthesizer STRAIGHT. These tokens were spliced into a nine-word sentence at different positions that also contained two other test words, one each from the pairs ``cot/cut'' and ``hack/hawk.'' Listeners were asked to identify the three words they heard in the sentence. Listeners also identified whether ``bid'' or ``bed'' was heard when only the isolated tokens were presented. Thresholds to detect a change from ``bid'' were obtained from psychometric functions fit to the data. Thresholds were similar for the sentence and word-only tasks. Overall, thresholds in both classification tasks were worse than those from the 2AFC tasks. Results will be discussed in terms of the relation between these discrimination thresholds, vowel identification, and vowel spaces. [Work supported by NIHDCD-02229.]
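The threshold described here comes from a psychometric function fit over the seven-token continuum. As a minimal illustration of the idea (not the authors' actual fitting procedure), the 50% crossing of the proportion of ``bed'' responses can be located by linear interpolation; the response proportions below are invented:

```python
# Locating a detection threshold on a psychometric function: the continuum
# step where the proportion of "bed" responses crosses 50%, found here by
# linear interpolation. This is a simplified stand-in for a full
# psychometric-function fit; the proportions are invented for illustration.
def threshold_50(steps, proportions):
    points = list(zip(steps, proportions))
    for (x0, p0), (x1, p1) in zip(points, points[1:]):
        if p0 <= 0.5 <= p1:                       # crossing in this interval
            return x0 + (0.5 - p0) * (x1 - x0) / (p1 - p0)
    raise ValueError("no 50% crossing in the sampled range")

steps = [1, 2, 3, 4, 5, 6, 7]                       # 7-token F1/F2 continuum
p_bed = [0.02, 0.05, 0.20, 0.55, 0.85, 0.95, 0.99]  # proportion "bed" responses
print(round(threshold_50(steps, p_bed), 2))         # crossing between tokens 3 and 4
```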

  8. The Acoustic Characteristics of Diphthongs in Indian English

    ERIC Educational Resources Information Center

    Maxwell, Olga; Fletcher, Janet

    2010-01-01

    This paper presents the results of an acoustic analysis of English diphthongs produced by three L1 speakers of Hindi and four L1 speakers of Punjabi. Formant trajectories of rising and falling diphthongs (i.e., vowels where there is a clear rising or falling trajectory through the F1/F2 vowel space) were analysed in a corpus of citation-form…

  9. Comparing Deaf and Hearing Dutch Infants: Changes in the Vowel Space in the First 2 Years

    ERIC Educational Resources Information Center

    van der Stelt, Jeannette M.; Wempe, Ton G.; Pols, Louis C. W.

    2008-01-01

    The influence of the mother tongue on vowel productions in infancy is different for deaf and hearing babies. Audio material of five hearing and five deaf infants acquiring Dutch was collected monthly from month 5-18, and at 24 months. Fifty unlabelled utterances were digitized for each recording. This study focused on developmental paths in vowel…

  10. A vowel is a vowel: Generalizing newly-learned phonotactic constraints to new contexts

    PubMed Central

    Chambers, Kyle E.; Onishi, Kristine H.; Fisher, Cynthia

    2010-01-01

    Adults can learn novel phonotactic constraints from brief listening experience. We investigated the representations underlying phonotactic learning by testing generalization to syllables containing new vowels. Adults heard consonant-vowel-consonant (CVC) study syllables in which particular consonants were artificially restricted to onset or coda position (e.g., /f/ is an onset, /s/ is a coda). Subjects were quicker to repeat novel constraint-following (legal) than constraint-violating (illegal) test syllables whether they contained a vowel used in the study syllables (training vowel) or a new (transfer) vowel. This effect emerged regardless of the acoustic similarity between training and transfer vowels. Listeners thus learned and generalized phonotactic constraints that can be characterized as simple first-order constraints on consonant position. Rapid generalization independent of vowel context provides evidence that vowels and consonants are represented independently by processes underlying phonotactic learning. PMID:20438279

  11. International Space Station Acoustics - A Status Report

    NASA Technical Reports Server (NTRS)

    Allen, Christopher S.; Denham, Samuel A.

    2011-01-01

It is important to control acoustic noise aboard the International Space Station (ISS) to provide a satisfactory environment for voice communications, crew productivity, and restful sleep, and to minimize the risk for temporary and permanent hearing loss. Acoustic monitoring is an important part of the noise control process on ISS, providing critical data for trend analysis, noise exposure analysis, validation of acoustic analysis and predictions, and to provide strong evidence for ensuring crew health and safety, thus allowing Flight Certification. To this purpose, sound level meter (SLM) measurements and acoustic noise dosimetry are routinely performed. And since the primary noise sources on ISS include the environmental control and life support system (fans and airflow) and active thermal control system (pumps and water flow), acoustic monitoring will indicate changes in hardware noise emissions that may indicate system degradation or performance issues. This paper provides the current acoustic levels in the ISS modules and sleep stations, and is an update to the status presented in 2003. Many new modules and sleep stations have been added to the ISS since that time. In addition, noise mitigation efforts have reduced noise levels in some areas. As a result, the acoustic levels on the ISS have improved.

  12. International Space Station Acoustics - A Status Report

    NASA Technical Reports Server (NTRS)

    Allen, Christopher S.

    2015-01-01

    It is important to control acoustic noise aboard the International Space Station (ISS) to provide a satisfactory environment for voice communications, crew productivity, alarm audibility, and restful sleep, and to minimize the risk for temporary and permanent hearing loss. Acoustic monitoring is an important part of the noise control process on ISS, providing critical data for trend analysis, noise exposure analysis, validation of acoustic analyses and predictions, and to provide strong evidence for ensuring crew health and safety, thus allowing Flight Certification. To this purpose, sound level meter (SLM) measurements and acoustic noise dosimetry are routinely performed. And since the primary noise sources on ISS include the environmental control and life support system (fans and airflow) and active thermal control system (pumps and water flow), acoustic monitoring will reveal changes in hardware noise emissions that may indicate system degradation or performance issues. This paper provides the current acoustic levels in the ISS modules and sleep stations and is an update to the status presented in 2011. Since this last status report, many payloads (science experiment hardware) have been added and a significant number of quiet ventilation fans have replaced noisier fans in the Russian Segment. Also, noise mitigation efforts are planned to reduce the noise levels of the T2 treadmill and levels in Node 3, in general. As a result, the acoustic levels on the ISS continue to improve.
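A standard reduction of sound level meter surveys like those described above is the energy-averaged equivalent level (Leq) over equal-duration samples. A sketch with hypothetical dBA values, not actual ISS measurements or NASA's reduction procedure:

```python
import math

# Energy-averaged equivalent sound level (Leq) over equal-duration SPL
# samples, a standard reduction of sound level meter survey data.
# The dBA values below are hypothetical, not actual ISS measurements.
def leq(spl_db):
    """Equivalent continuous level of equal-duration samples, in dB."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in spl_db) / len(spl_db))

module_survey = [58.0, 60.5, 57.2, 62.1, 59.4]   # dBA, hypothetical
print(round(leq(module_survey), 1))
```

Because the average is taken on energy rather than on decibels, a single loud sample dominates: leq([50, 70]) is about 67 dB, not 60 dB, which is why trend analysis flags isolated noisy hardware.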

  13. Shifting Perceptual Weights in L2 Vowel Identification after Training

    PubMed Central

    Hu, Wei; Mi, Lin; Yang, Zhen; Tao, Sha; Li, Mingshuang; Wang, Wenjing; Dong, Qi; Liu, Chang

    2016-01-01

    Difficulties with second-language vowel perception may be related to the significant challenges in using acoustic-phonetic cues. This study investigated the effects of perception training with duration-equalized vowels on native Chinese listeners’ English vowel perception and their use of acoustic-phonetic cues. Seventeen native Chinese listeners were perceptually trained with duration-equalized English vowels, and another 17 native Chinese listeners watched English videos as a control group. Both groups were tested with English vowel identification and vowel formant discrimination before training, immediately after training, and three months later. The results showed that the training effect was greater for the vowel training group than for the control group, while both groups improved their English vowel identification and vowel formant discrimination after training. Moreover, duration-equalized vowel perception training significantly reduced listeners’ reliance on duration cues and improved their use of spectral cues in identifying English vowels, but video-watching did not help. The results suggest that duration-equalized English vowel perception training may improve non-native listeners’ English vowel perception by changing their perceptual weights of acoustic-phonetic cues. PMID:27649413

  14. Speechant: A Vowel Notation System to Teach English Pronunciation

    ERIC Educational Resources Information Center

    dos Reis, Jorge; Hazan, Valerie

    2012-01-01

This paper introduces a new vowel notation system aimed at aiding the teaching of English pronunciation. This notation system, designed as an enhancement to orthographic text, uses concepts borrowed from the representation of musical notes and is also linked to the acoustic characteristics of vowel sounds. Vowel timbre is…

  15. Some Consequences of Velarization on Catalan Vowels.

    ERIC Educational Resources Information Center

    Widdison, Kirk

The acoustic effects of the syllable-final /l/ significantly alter the vocalic timbre of the preceding vowel in Catalan. Vowel quality is modified in anticipation of the articulatory gestures required by the /l/, resulting in a lowered second formant. Syllable-final /l/ in Catalan is heavily velarized as a result of tongue tip-tongue back coupling…

  16. English vowel learning by speakers of Mandarin

    NASA Astrophysics Data System (ADS)

    Thomson, Ron I.

    2005-04-01

    One of the most influential models of second language (L2) speech perception and production [Flege, Speech Perception and Linguistic Experience (York, Baltimore, 1995) pp. 233-277] argues that during initial stages of L2 acquisition, perceptual categories sharing the same or nearly the same acoustic space as first language (L1) categories will be processed as members of that L1 category. Previous research has generally been limited to testing these claims on binary L2 contrasts, rather than larger portions of the perceptual space. This study examines the development of 10 English vowel categories by 20 Mandarin L1 learners of English. Imitation of English vowel stimuli by these learners, at 6 data collection points over the course of one year, were recorded. Using a statistical pattern recognition model, these productions were then assessed against native speaker norms. The degree to which the learners' perception/production shifted toward the target English vowels and the degree to which they matched L1 categories in ways predicted by theoretical models are discussed. The results of this experiment suggest that previous claims about perceptual assimilation of L2 categories to L1 categories may be too strong.

  17. Space Time Processing, Environmental-Acoustic Effects

    DTIC Science & Technology

    1987-08-15

5) In the cases of a harmonic field which is steady or for a random field which is spatially homogeneous and temporally stationary, one can infer... relationships define the acoustic space-time field for the class of harmonic and random functions which are spatially homogeneous and temporally stationary... When the field is homogeneous and stationary, then (in large average limits) spatial and temporal average values approach the statistically...

  18. Learning to pronounce Vowel Sounds in a Foreign Language Using Acoustic Measurements of the Vocal Tract as Feedback in Real Time.

    ERIC Educational Resources Information Center

    Dowd, Annette; Smith, John; Wolfe, Joe

    1998-01-01

    Measured the first two vowel-tract resonances of a sample of native-French speakers for the non-nasalized vowels of that language. Values measured for native speakers for a particular vowel were used as target parameters for subjects who used a visual display of an impedance spectrum of their own vocal tracts as real time feedback to realize the…

  19. Sex differences in the acoustic structure of vowel-like grunt vocalizations in baboons and their perceptual discrimination by baboon listeners

    NASA Astrophysics Data System (ADS)

    Rendall, Drew; Owren, Michael J.; Weerts, Elise; Hienz, Robert D.

    2004-01-01

    This study quantifies sex differences in the acoustic structure of vowel-like grunt vocalizations in baboons (Papio spp.) and tests the basic perceptual discriminability of these differences to baboon listeners. Acoustic analyses were performed on 1028 grunts recorded from 27 adult baboons (11 males and 16 females) in southern Africa, focusing specifically on the fundamental frequency (F0) and formant frequencies. The mean F0 and the mean frequencies of the first three formants were all significantly lower in males than they were in females, more dramatically so for F0. Experiments using standard psychophysical procedures subsequently tested the discriminability of adult male and adult female grunts. After learning to discriminate the grunt of one male from that of one female, five baboon subjects subsequently generalized this discrimination both to new call tokens from the same individuals and to grunts from novel males and females. These results are discussed in the context of both the possible vocal anatomical basis for sex differences in call structure and the potential perceptual mechanisms involved in their processing by listeners, particularly as these relate to analogous issues in human speech production and perception.

  20. Interspeaker Variability in Hard Palate Morphology and Vowel Production

    ERIC Educational Resources Information Center

    Lammert, Adam; Proctor, Michael; Narayanan, Shrikanth

    2013-01-01

    Purpose: Differences in vocal tract morphology have the potential to explain interspeaker variability in speech production. The potential acoustic impact of hard palate shape was examined in simulation, in addition to the interplay among morphology, articulation, and acoustics in real vowel production data. Method: High-front vowel production from…

  1. Extreme acoustic metamaterial by coiling up space.

    PubMed

    Liang, Zixian; Li, Jensen

    2012-03-16

    We show that by coiling up space using curled perforations, a two-dimensional acoustic metamaterial can be constructed to give a frequency dispersive spectrum of extreme constitutive parameters, including double negativity, a density near zero, and a large refractive index. Such an approach has band foldings at the effective medium regime without using local resonating subwavelength structures, while the principle can be easily generalized to three dimensions. Negative refraction with a double negative prism and tunneling with a density-near-zero metamaterial are numerically demonstrated.

  2. Articulation of extreme formant patterns for emphasized vowels.

    PubMed

    Erickson, Donna

    2002-01-01

    This study examined formant, jaw and tongue dorsum measurements from X-ray microbeam recordings of American English speakers producing emphasized vs. unemphasized words containing high-front, mid-front and low vowels. For emphasized vowels, the jaw position, regardless of vowel height, was lower, while the tongue dorsum had a more extreme articulation in the direction of the phonological specification of the vowel. For emphasized low vowels, the tongue dorsum position was lower with the acoustic consequence of F1 and F2 bunched closer together. For emphasized high and mid-front vowels, the tongue was more forward with the acoustic consequence of F1 and F2 spread more apart. These findings are interpreted within acoustic models of speech production. They also provide empirical data which have application to the C/D model hypothesis that both increased lowering of jaw and enhanced tongue gesture are consequences of a magnitude increase in the syllable pulse due to emphasis.

  3. English vowel identification and vowel formant discrimination by native Mandarin Chinese- and native English-speaking listeners: The effect of vowel duration dependence.

    PubMed

    Mi, Lin; Tao, Sha; Wang, Wenjing; Dong, Qi; Guan, Jingjing; Liu, Chang

    2016-03-01

The purpose of this study was to examine the relationship between English vowel identification and English vowel formant discrimination for native Mandarin Chinese- and native English-speaking listeners. The identification of 12 English vowels was measured with the duration cue preserved or removed. The thresholds of vowel formant discrimination on the F2 of two English vowels, /ʌ/ and /i/, were also estimated using an adaptive-tracking procedure. Native Mandarin Chinese-speaking listeners showed significantly higher thresholds of vowel formant discrimination and lower identification scores than native English-speaking listeners. The duration effect on English vowel identification was similar between native Mandarin Chinese- and native English-speaking listeners. Moreover, regardless of listeners' language background, vowel identification was significantly correlated with vowel formant discrimination for the listeners who were less dependent on duration cues, whereas the correlation between vowel identification and vowel formant discrimination was not significant for the listeners who were highly dependent on duration cues. This study revealed individual variability in using multiple acoustic cues to identify English vowels for both native and non-native listeners.

  4. Articulatory Changes in Vowel Production following STN DBS and Levodopa Intake in Parkinson's Disease

    PubMed Central

    Martel Sauvageau, Vincent; Roy, Johanna-Pascale; Cantin, Léo; Prud'Homme, Michel; Langlois, Mélanie; Macoir, Joël

    2015-01-01

    Purpose. To investigate the impact of deep brain stimulation of the subthalamic nucleus (STN DBS) and levodopa intake on vowel articulation in dysarthric speakers with Parkinson's disease (PD). Methods. Vowel articulation was assessed in seven Quebec French speakers diagnosed with idiopathic PD who underwent STN DBS. Assessments were conducted on- and off-medication, first prior to surgery and then 1 year later. All recordings were made on-stimulation. Vowel articulation was measured using acoustic vowel space and formant centralization ratio. Results. Compared to the period before surgery, vowel articulation was reduced after surgery when patients were off-medication, while it was better on-medication. The impact of levodopa intake on vowel articulation changed with STN DBS: before surgery, levodopa impaired articulation, while it no longer had a negative effect after surgery. Conclusions. These results indicate that while STN DBS could lead to a direct deterioration in articulation, it may indirectly improve it by reducing the levodopa dose required to manage motor symptoms. These findings suggest that, with respect to speech production, STN DBS and levodopa intake cannot be investigated separately because the two are intrinsically linked. Along with motor symptoms, speech production should be considered when optimizing therapeutic management of patients with PD. PMID:26558134
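The two articulation metrics named here have simple closed forms: the triangular vowel space area over the corner vowels /i, a, u/ (shoelace formula) and the formant centralization ratio, FCR = (F2u + F2a + F1i + F1u) / (F2i + F1a), which rises toward and above 1 as vowels centralize. A sketch with illustrative formant values, not the study's data:

```python
# Triangular vowel space area (shoelace formula over /i, a, u/) and the
# formant centralization ratio FCR = (F2u + F2a + F1i + F1u) / (F2i + F1a).
# The (F1, F2) values below are illustrative, not measurements from the study.
def vsa_triangle(i, a, u):
    (f1i, f2i), (f1a, f2a), (f1u, f2u) = i, a, u
    return abs(f1i * (f2a - f2u) + f1a * (f2u - f2i) + f1u * (f2i - f2a)) / 2

def fcr(i, a, u):
    (f1i, f2i), (f1a, f2a), (f1u, f2u) = i, a, u
    return (f2u + f2a + f1i + f1u) / (f2i + f1a)

i, a, u = (300, 2300), (750, 1300), (320, 800)   # (F1, F2) pairs in Hz
print(vsa_triangle(i, a, u), round(fcr(i, a, u), 3))
```

A smaller VSA and a larger FCR both signal centralization; FCR was proposed precisely because it is less sensitive to inter-speaker variability than raw area.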

  5. Vowel development in an emergent Mandarin-English bilingual child: a longitudinal study.

    PubMed

    Yang, Jing; Fox, Robert A; Jacewicz, Ewa

    2015-09-01

    This longitudinal case study documents the emergence of bilingualism in a young monolingual Mandarin boy on the basis of an acoustic analysis of his vowel productions recorded via a picture-naming task over 20 months following his enrollment in an all-English (L2) preschool at the age of 3;7. The study examined (1) his initial L2 vowel space, (2) the process of L1-L2 separation, and (3) his L1 vowel system in relation to L2. The child initially utilized his L1 base in building the L2 vowel system. The L1-L2 separation started from a drastic restructuring of his working vowel space to create maximal contrast between the two languages. Meanwhile, L1 developmental processes and influence of L2 on L1 were also in effect. The developmental profile of this child uncovered strategies sequential bilingual children may use to restructure their phonetic space and construct a new system of contrasts in L2.

  6. Comparing identification of standardized and regionally-valid vowels

    PubMed Central

    Wright, Richard; Souza, Pamela

    2012-01-01

    Purpose In perception studies, it is common to use vowel stimuli from standardized recordings or synthetic stimuli created using values from well-known published research. Although the use of standardized stimuli is convenient, unconsidered dialect and regional accent differences may introduce confounding effects. The goal of this study was to examine the effect of regional accent variation on vowel identification. Method We analyzed formant values of 8 monophthong vowels produced by 12 talkers from the region where the research took place and compared them to standardized vowels. Fifteen listeners with normal hearing identified synthesized vowels presented in varying levels of noise and at varying spectral distances from the local-dialect values. Results Acoustically, local vowels differed from standardized vowels, and distance varied across vowels. Perceptually, there was a robust effect of accent similarity such that identification was reduced for vowels at greater distances from local values. Conclusions Researchers and clinicians should take care in choosing stimuli for perception experiments. It is recommended that regionally validated vowels be used rather than relying on standardized vowels in vowel perception tasks. PMID:22199181

  7. Space vehicle acoustics prediction improvement for payloads. [space shuttle

    NASA Technical Reports Server (NTRS)

    Dandridge, R. E.

    1979-01-01

The modal analysis method was extensively modified for the prediction of space vehicle noise reduction in the shuttle payload enclosure, and this program was adapted to the IBM 360 computer. The predicted noise reduction levels for two test cases were compared with experimental results to determine the validity of the analytical model for predicting space vehicle payload noise environments in the 10 Hz one-third octave band regime. The prediction approach for the two test cases generally gave reasonable magnitudes and trends when compared with the measured noise reduction spectra. The discrepancies in the predictions could be corrected primarily by improved modeling of the vehicle structural walls and of the enclosed acoustic space to obtain a more accurate assessment of normal modes. Techniques for improving and expanding the noise prediction for a payload environment are also suggested.

  8. Neural Processing of Acoustic Duration and Phonological German Vowel Length: Time Courses of Evoked Fields in Response to Speech and Nonspeech Signals

    ERIC Educational Resources Information Center

    Tomaschek, Fabian; Truckenbrodt, Hubert; Hertrich, Ingo

    2013-01-01

    Recent experiments showed that the perception of vowel length by German listeners exhibits the characteristics of categorical perception. The present study sought to find the neural activity reflecting categorical vowel length and the short-long boundary by examining the processing of non-contrastive durations and categorical length using MEG.…

  9. Direct Mapping of Acoustics to Phonology: On the Lexical Encoding of Front Rounded Vowels in L1 English-L2 French Acquisition

    ERIC Educational Resources Information Center

    Darcy, Isabelle; Dekydtspotter, Laurent; Sprouse, Rex A.; Glover, Justin; Kaden, Christiane; McGuire, Michael; Scott, John H. G.

    2012-01-01

It is well known that adult US-English-speaking learners of French experience difficulties acquiring the high /y/-/u/ and mid /œ/-/ɔ/ front vs. back rounded vowel contrasts in French. This study examines the acquisition of these French vowel contrasts at two levels: phonetic categorization and lexical representations. An ABX categorization task…

  10. Speech after Radial Forearm Free Flap Reconstruction of the Tongue: A Longitudinal Acoustic Study of Vowel and Diphthong Sounds

    ERIC Educational Resources Information Center

    Laaksonen, Juha-Pertti; Rieger, Jana; Happonen, Risto-Pekka; Harris, Jeffrey; Seikaly, Hadi

    2010-01-01

    The purpose of this study was to use acoustic analyses to describe speech outcomes over the course of 1 year after radial forearm free flap (RFFF) reconstruction of the tongue. Eighteen Canadian English-speaking females and males with reconstruction for oral cancer had speech samples recorded (pre-operative, and 1 month, 6 months, and 1 year…

  11. Acoustic emission technology for space applications

    SciTech Connect

    Friesel, M.A.; Lemon, D.K.; Skorpik, J.R.; Hutton, P.H.

    1989-05-01

Clearly the structural and functional integrity of space station components is a primary requirement. The combinations of advanced materials, new designs, and an unusual environment increase the need for inservice monitoring to help assure component integrity. Continuous monitoring of the components using acoustic emission (AE) methods can provide early indication of structural or functional distress, thus allowing time to plan remedial action. The term "AE" refers to energy impulses propagated from a growing crack in a solid material or from a leak in a pressurized pipe or tube. In addition to detecting a crack or leak, AE methods can provide information on the location of the defect and an estimate of crack growth rate and leak rate. 8 figs.

  12. Pacific northwest vowels: A Seattle neighborhood dialect study

    NASA Astrophysics Data System (ADS)

    Ingle, Jennifer K.; Wright, Richard; Wassink, Alicia

    2005-04-01

    According to current literature a large region encompassing nearly the entire west half of the U.S. belongs to one dialect region referred to as Western, which furthermore, according to Labov et al., ``... has developed a characteristic but not unique phonology.'' [http://www.ling.upenn.edu/phono-atlas/NationalMap/NationalMap.html] This paper will describe the vowel space of a set of Pacific Northwest American English speakers native to the Ballard neighborhood of Seattle, Wash. based on the acoustical analysis of high-quality Marantz CDR 300 recordings. Characteristics, such as low back merger and [u] fronting will be compared to findings by other studies. It is hoped that these recordings will contribute to a growing number of corpora of North American English dialects. All participants were born in Seattle and began their residence in Ballard between ages 0-8. They were recorded in two styles of speech: individually reading repetitions of a word list containing one token each of 10 vowels within carrier phrases, and in casual conversation for 40 min with a partner matched in age, gender, and social mobility. The goal was to create a compatible data set for comparison with current acoustic studies. F1 and F2 and vowel duration from LPC spectral analysis will be presented.
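The LPC spectral analysis mentioned above can be sketched end-to-end: autocorrelation, the Levinson-Durbin recursion for the predictor coefficients, then formant frequencies from the angles of the complex roots of the prediction polynomial. The order and sample rate below are illustrative choices (not the study's settings), and the demo uses a synthetic two-resonance signal rather than speech:

```python
import numpy as np

# Sketch of LPC formant estimation: autocorrelation -> Levinson-Durbin
# recursion -> roots of the prediction polynomial -> formant frequencies.
# Order and sample rate are illustrative choices, not the study's settings.
def lpc(x, order):
    """Return prediction polynomial [1, a1, ..., a_order] (autocorrelation method)."""
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err  # reflection coefficient
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a[i] = k
        err *= 1.0 - k * k
    return a

def formants(a, fs):
    """Formant frequencies (Hz) from the pole angles of the LPC polynomial."""
    poles = [z for z in np.roots(a) if z.imag > 0]
    freqs = sorted(fs * np.angle(z) / (2 * np.pi) for z in poles)
    return [f for f in freqs if f > 90]   # discard near-DC roots

# Demo on a synthetic signal with resonances at 500 and 1500 Hz (no real
# speech; in practice a short windowed frame of the vowel would be analyzed).
fs = 10000
n = np.arange(1024)
x = np.sin(2 * np.pi * 500 * n / fs) + 0.7 * np.sin(2 * np.pi * 1500 * n / fs)
print(formants(lpc(x, 4), fs))   # approximately [500, 1500]
```

For real vowel tokens, higher orders (roughly fs in kHz plus 2-4 extra coefficients) and pre-emphasis are the usual choices; bandwidth thresholds on the pole radii then filter out spurious roots.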

  13. Learning English vowels with different first-language vowel systems II: Auditory training for native Spanish and German speakers.

    PubMed

    Iverson, Paul; Evans, Bronwen G

    2009-08-01

    This study investigated whether individuals with small and large native-language (L1) vowel inventories learn second-language (L2) vowel systems differently, in order to better understand how L1 categories interfere with new vowel learning. Listener groups whose L1 was Spanish (5 vowels) or German (18 vowels) were given five sessions of high-variability auditory training for English vowels, after having been matched to assess their pre-test English vowel identification accuracy. Listeners were tested before and after training in terms of their identification accuracy for English vowels, the assimilation of these vowels into their L1 vowel categories, and their best exemplars for English (i.e., perceptual vowel space map). The results demonstrated that Germans improved more than Spanish speakers, despite the Germans' more crowded L1 vowel space. A subsequent experiment demonstrated that Spanish listeners were able to improve as much as the German group after an additional ten sessions of training, and that both groups were able to retain this learning. The findings suggest that a larger vowel category inventory may facilitate new learning, and support a hypothesis that auditory training improves identification by making the application of existing categories to L2 phonemes more automatic and efficient.

  14. Multichannel Compression: Effects of Reduced Spectral Contrast on Vowel Identification

    ERIC Educational Resources Information Center

    Bor, Stephanie; Souza, Pamela; Wright, Richard

    2008-01-01

    Purpose: To clarify if large numbers of wide dynamic range compression channels provide advantages for vowel identification and to measure its acoustic effects. Methods: Eight vowels produced by 12 talkers in the /hVd/ context were compressed using 1, 2, 4, 8, and 16 channels. Formant contrast indices (mean formant peak minus mean formant trough;…

  15. Perceptual Adaptation of Voice Gender Discrimination with Spectrally Shifted Vowels

    ERIC Educational Resources Information Center

    Li, Tianhao; Fu, Qian-Jie

    2011-01-01

    Purpose: To determine whether perceptual adaptation improves voice gender discrimination of spectrally shifted vowels and, if so, which acoustic cues contribute to the improvement. Method: Voice gender discrimination was measured for 10 normal-hearing subjects, during 5 days of adaptation to spectrally shifted vowels, produced by processing the…

  16. Managing the distinctiveness of phonemic nasal vowels: articulatory evidence from Hindi.

    PubMed

    Shosted, Ryan; Carignan, Christopher; Rong, Panying

    2012-01-01

    There is increasing evidence that fine articulatory adjustments are made by speakers to reinforce and sometimes counteract the acoustic consequences of nasality. However, it is difficult to attribute the acoustic changes in nasal vowel spectra to either oral cavity configuration or to velopharyngeal opening (VPO). This paper takes the position that it is possible to disambiguate the effects of VPO and oropharyngeal configuration on the acoustic output of the vocal tract by studying the position and movement of the tongue and lips during the production of oral and nasal vowels. This paper uses simultaneously collected articulatory, acoustic, and nasal airflow data during the production of all oral and phonemically nasal vowels in Hindi (four speakers) to understand the consequences of the movements of oral articulators on the spectra of nasal vowels. For Hindi nasal vowels, the tongue body is generally lowered for back vowels, fronted for low vowels, and raised for front vowels (with respect to their oral congeners). These movements are generally supported by accompanying changes in the vowel spectra. In Hindi, the lowering of back nasal vowels may have originally served to enhance the acoustic salience of nasality, but has since engendered a nasal vowel chain shift.

  17. The emergence of vowels in an infant.

    PubMed

    Buhr, R D

    1980-03-01

    Recordings of vocal production of an infant (age 16-64 weeks) were subjected to perceptual and acoustic analysis. Sounds resembling the vowel sounds of English were identified, and formant frequency measurements were made from spectrograms. Significant longitudinal trends for individual vowel sounds were not apparent during this period, although formant relationships for some vowels after 38 weeks were consistent with the notion of restructuring of the infant's vocal tract. However, analysis of F1/F2 plots over time revealed the emergence of a well-developed vowel triangle, resembling that of older children and adults. The acute axis of this triangle seems to develop before the grave axis. Implications for anatomical, neuromuscular, and linguistic development are discussed.
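    The F1/F2 vowel triangle described here is the same construct quantified as vowel space area elsewhere in these records. A minimal sketch of that computation via the shoelace formula, using rough adult corner-vowel values rather than measurements from this infant study:

```python
def polygon_area(points):
    """Shoelace formula for the area of a polygon given (F1, F2) vertices."""
    total = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

# Corner vowels /i/, /a/, /u/ with rough adult formant values in Hz
# (illustrative placeholders, not data from this study).
triangle = [(270, 2290), (730, 1090), (300, 870)]
print(f"vowel triangle area: {polygon_area(triangle):.0f} Hz^2")  # -> vowel triangle area: 308600 Hz^2
```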

  18. Articulatory characteristics of Hungarian ‘transparent’ vowels

    PubMed Central

    Benus, Stefan; Gafos, Adamantios I.

    2007-01-01

    Using a combination of magnetometry and ultrasound, we examined the articulatory characteristics of the so-called ‘transparent’ vowels [iː], [i], and [eː] in Hungarian vowel harmony. Phonologically, transparent vowels are front, but they can be followed by either front or back suffixes. However, a finer look reveals an underlying phonetic coherence in two respects. First, transparent vowels in back harmony contexts show a less advanced (more retracted) tongue body posture than phonemically identical vowels in front harmony contexts: e.g. [i] in buli-val is less advanced than [i] in bili-vel. Second, transparent vowels in monosyllabic stems selecting back suffixes are also less advanced than phonemically identical vowels in stems selecting front suffixes: e.g. [iː] in ír, taking back suffixes, compared to [iː] of hír, taking front suffixes, is less advanced when these stems are produced in bare form (no suffixes). We thus argue that the phonetic degree of tongue body horizontal position correlates with the phonological alternation in suffixes. A hypothesis that emerges from this work is that a plausible phonetic basis for transparency can be found in quantal characteristics of the relation between articulation and acoustics of transparent vowels. More broadly, the proposal is that the phonology of transparent vowels is better understood when their phonological patterning is studied together with their articulatory and acoustic characteristics. PMID:18389086

  19. Vowel Devoicing in Shanghai.

    ERIC Educational Resources Information Center

    Zee, Eric

    A phonetic study of vowel devoicing in the Shanghai dialect of Chinese explored the phonetic conditions under which the high, closed vowels and the apical vowel in Shanghai are most likely to become devoiced. The phonetic conditions may be segmental or suprasegmental. Segmentally, the study sought to determine whether a certain type of pre-vocalic…

  20. Spanish Vowel Sandhi.

    ERIC Educational Resources Information Center

    Hutchinson, Sandra Pinkerton

    The effects of syllable timing and syllable sequence type on vowel sandhi in Spanish are investigated in this paper. It is argued that structuralist and generative treatments of vowel sandhi, which are characterized by generalizations about vowel "shortening" and dropping and glide formation, are inadequate because they focus exclusively…

  1. Adult Second Language Learning of Spanish Vowels

    ERIC Educational Resources Information Center

    Cobb, Katherine; Simonet, Miquel

    2015-01-01

    The present study reports on the findings of a cross-sectional acoustic study of the production of Spanish vowels by three different groups of speakers: 1) native Spanish speakers; 2) native English intermediate learners of Spanish; and 3) native English advanced learners of Spanish. In particular, we examined the production of the five Spanish…

  2. Two Notes on Kinande Vowel Harmony

    ERIC Educational Resources Information Center

    Kenstowicz, Michael J.

    2009-01-01

    This paper documents the acoustic reflexes of ATR harmony in Kinande followed by an analysis of the dominance reversal found in class 5 nominals. The principal findings are that the ATR harmony is reliably reflected in a lowering of the first formant. Depending on the vowel, ATR harmony also affects the second formant. The directional asymmetry…

  3. Acoustic levitation for high temperature containerless processing in space

    NASA Technical Reports Server (NTRS)

    Rey, C. A.; Sisler, R.; Merkley, D. R.; Danley, T. J.

    1990-01-01

    New facilities for high-temperature containerless processing in space are described, including the acoustic levitation furnace (ALF), the high-temperature acoustic levitator (HAL), and the high-pressure acoustic levitator (HPAL). In the current ALF development, the maximum temperature capabilities of the levitation furnaces are 1750 C, and in the HAL development with a cold wall furnace they will exceed 2000-2500 C. The HPAL demonstrated feasibility of precursor space flight experiments on the ground in a 1 g pressurized-gas environment. Testing of lower density materials up to 1300 C has also been accomplished. It is suggested that advances in acoustic levitation techniques will result in the production of new materials such as ceramics, alloys, and optical and electronic materials.

  4. Acoustic emissions applications on the NASA Space Station

    SciTech Connect

    Friesel, M.A.; Dawson, J.F.; Kurtz, R.J.; Barga, R.S.; Hutton, P.H.; Lemon, D.K.

    1991-08-01

    Acoustic emission is being investigated as a way to continuously monitor the space station Freedom for damage caused by space debris impact and seal failure. Experiments run to date focused on detecting and locating simulated and real impacts and leakage. These were performed both in the laboratory on a section of material similar to a space station shell panel and also on the full-scale common module prototype at Boeing's Huntsville facility. A neural network approach supplemented standard acoustic emission detection and analysis techniques. 4 refs., 5 figs., 1 tab.

  5. Discrete Motor Coordinates for Vowel Production

    PubMed Central

    Assaneo, María Florencia; Trevisan, Marcos A.; Mindlin, Gabriel B.

    2013-01-01

    Current models of human vocal production that capture peripheral dynamics in speech require large dimensional measurements of the neural activity, which are mapped into equally complex motor gestures. In this work we present a motor description for vowels as points in a discrete low-dimensional space. We monitor the dynamics of 3 points at the oral cavity using Hall-effect transducers and magnets, describing the resulting signals during normal utterances in terms of active/inactive patterns that allow a robust vowel classification in an abstract binary space. We use simple matrix algebra to link this representation to the anatomy of the vocal tract and to recent reports of highly tuned neuronal activations for vowel production, suggesting a plausible global strategy for vowel codification and motor production. PMID:24244681

  6. Discrete motor coordinates for vowel production.

    PubMed

    Assaneo, María Florencia; Trevisan, Marcos A; Mindlin, Gabriel B

    2013-01-01

    Current models of human vocal production that capture peripheral dynamics in speech require large dimensional measurements of the neural activity, which are mapped into equally complex motor gestures. In this work we present a motor description for vowels as points in a discrete low-dimensional space. We monitor the dynamics of 3 points at the oral cavity using Hall-effect transducers and magnets, describing the resulting signals during normal utterances in terms of active/inactive patterns that allow a robust vowel classification in an abstract binary space. We use simple matrix algebra to link this representation to the anatomy of the vocal tract and to recent reports of highly tuned neuronal activations for vowel production, suggesting a plausible global strategy for vowel codification and motor production.
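    The idea of vowel classification in an abstract binary space can be sketched as a lookup from active/inactive sensor patterns. The codebook below is entirely hypothetical; the paper's actual pattern-to-vowel assignments are not reproduced here:

```python
# Hypothetical 3-bit active/inactive codebook for vowel labels; the real
# assignments in the paper are not reproduced here.
CODEBOOK = {
    (0, 0, 1): "a",
    (0, 1, 0): "e",
    (1, 0, 0): "o",
    (1, 1, 0): "i",
    (1, 0, 1): "u",
}

def classify(pattern):
    """Map a 3-bit active/inactive sensor pattern to a vowel label."""
    return CODEBOOK.get(tuple(pattern), "unknown")

print(classify([0, 1, 0]))  # -> e
```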

  7. Effects of Intensive Voice Treatment (the Lee Silverman Voice Treatment [LSVT]) on Vowel Articulation in Dysarthric Individuals with Idiopathic Parkinson Disease: Acoustic and Perceptual Findings

    ERIC Educational Resources Information Center

    Sapir, Shimon; Spielman, Jennifer L.; Ramig, Lorraine O.; Story, Brad H.; Fox, Cynthia

    2007-01-01

    Purpose: To evaluate the effects of intensive voice treatment targeting vocal loudness (the Lee Silverman Voice Treatment [LSVT]) on vowel articulation in dysarthric individuals with idiopathic Parkinson's disease (PD). Method: A group of individuals with PD receiving LSVT (n = 14) was compared to a group of individuals with PD not receiving LSVT…

  8. Space manufacturing of surface acoustic wave devices, appendix D

    NASA Technical Reports Server (NTRS)

    Sardella, G.

    1973-01-01

    Space manufacturing of transducers in a vibration free environment is discussed. Fabrication of the masks, and possible manufacturing of the surface acoustic wave components aboard a space laboratory would avoid the inherent ground vibrations and the frequency limitation imposed by a seismic isolator pad. The manufacturing vibration requirements are identified. The concepts of space manufacturing are analyzed. A development program for manufacturing transducers is recommended.

  9. The Shift in Infant Preferences for Vowel Duration and Pitch Contour between 6 and 10 Months of Age

    ERIC Educational Resources Information Center

    Kitamura, Christine; Notley, Anna

    2009-01-01

    This study investigates the influence of the acoustic properties of vowels on 6- and 10-month-old infants' speech preferences. The shape of the contour (bell or monotonic) and the duration (normal or stretched) of vowels were manipulated in words containing the vowels /i/ and /u/, and presented to infants using a two-choice preference procedure.…

  10. Dimension-based statistical learning of vowels.

    PubMed

    Liu, Ran; Holt, Lori L

    2015-12-01

    Speech perception depends on long-term representations that reflect regularities of the native language. However, listeners rapidly adapt when speech acoustics deviate from these regularities due to talker idiosyncrasies such as foreign accents and dialects. To better understand these dual aspects of speech perception, we probe native English listeners' baseline perceptual weighting of 2 acoustic dimensions (spectral quality and vowel duration) toward vowel categorization and examine how they subsequently adapt to an "artificial accent" that deviates from English norms in the correlation between the 2 dimensions. At baseline, listeners rely relatively more on spectral quality than vowel duration to signal vowel category, but duration nonetheless contributes. Upon encountering an "artificial accent" in which the spectral-duration correlation is perturbed relative to English language norms, listeners rapidly down-weight reliance on duration. Listeners exhibit this type of short-term statistical learning even in the context of nonwords, confirming that lexical information is not necessary to this form of adaptive plasticity in speech perception. Moreover, learning generalizes to both novel lexical contexts and acoustically distinct altered voices. These findings are discussed in the context of a mechanistic proposal for how supervised learning may contribute to this type of adaptive plasticity in speech perception.
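    One simple way to estimate the relative perceptual weighting of two cues is a logistic-regression fit of category responses against the cue values. This sketch uses hypothetical standardized tokens and plain gradient descent, not the study's stimuli or analysis:

```python
import math

def train_logistic(data, epochs=500, lr=0.3):
    """Tiny logistic-regression fit of category ~ (spectral, duration) cues."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            z = w[0] * x1 + w[1] * x2 + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(category = 1)
            err = y - p
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Hypothetical standardized tokens: (spectral quality, duration) -> category.
# Category membership is driven mainly by the spectral cue, weakly by duration.
data = [((-1.0, -0.5), 0), ((-0.8, 0.2), 0), ((-1.2, -0.1), 0),
        ((1.1, 0.4), 1), ((0.9, -0.2), 1), ((1.0, 0.6), 1)]
w, b = train_logistic(data)
print("spectral cue outweighs duration:", abs(w[0]) > abs(w[1]))
```

The fitted weight magnitudes serve as a rough proxy for cue reliance; down-weighting of duration after exposure to an "artificial accent" would show up as a shrinking second weight.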

  11. DIMENSION-BASED STATISTICAL LEARNING OF VOWELS

    PubMed Central

    Liu, Ran; Holt, Lori L.

    2015-01-01

    Speech perception depends on long-term representations that reflect regularities of the native language. However, listeners rapidly adapt when speech acoustics deviate from these regularities due to talker idiosyncrasies such as foreign accents and dialects. To better understand these dual aspects of speech perception, we probe native English listeners’ baseline perceptual weighting of two acoustic dimensions (spectral quality and vowel duration) towards vowel categorization and examine how they subsequently adapt to an “artificial accent” that deviates from English norms in the correlation between the two dimensions. At baseline, listeners rely relatively more on spectral quality than vowel duration to signal vowel category, but duration nonetheless contributes. Upon encountering an “artificial accent” in which the spectral-duration correlation is perturbed relative to English language norms, listeners rapidly down-weight reliance on duration. Listeners exhibit this type of short-term statistical learning even in the context of nonwords, confirming that lexical information is not necessary to this form of adaptive plasticity in speech perception. Moreover, learning generalizes to both novel lexical contexts and acoustically-distinct altered voices. These findings are discussed in the context of a mechanistic proposal for how supervised learning may contribute to this type of adaptive plasticity in speech perception. PMID:26280268

  12. Vowel Intelligibility in Children with and without Dysarthria: An Exploratory Study

    ERIC Educational Resources Information Center

    Levy, Erika S.; Leone, Dorothy; Moya-Gale, Gemma; Hsu, Sih-Chiao; Chen, Wenli; Ramig, Lorraine O.

    2016-01-01

    Children with dysarthria due to cerebral palsy (CP) present with decreased vowel space area and reduced word intelligibility. Although a robust relationship exists between vowel space and word intelligibility, little is known about the intelligibility of vowels in this population. This exploratory study investigated the intelligibility of American…

  13. International Space Station Crew Quarters Ventilation and Acoustic Design Implementation

    NASA Technical Reports Server (NTRS)

    Broyan, James L., Jr.; Cady, Scott M; Welsh, David A.

    2010-01-01

    The International Space Station (ISS) United States Operational Segment has four permanent rack-sized ISS Crew Quarters (CQs) providing private crew member spaces. The CQs use Node 2 cabin air for ventilation and thermal cooling, as opposed to conditioned, ducted air from the ISS Common Cabin Air Assembly (CCAA) or the ISS fluid cooling loop. Consequently, a CQ can only increase its airflow rate to reduce the temperature delta between the cabin and the CQ interior. However, increasing airflow increases acoustic noise, so efficient airflow distribution is an important design parameter. The CQ uses a two-fan push-pull configuration to ensure fresh air at the crew member's head position and to reduce acoustic exposure. The CQ ventilation ducts are conduits to the louder Node 2 cabin aisle way, which required significant acoustic mitigation controls. The CQ interior needs to be below noise criteria curve 40 (NC-40). The design implementation of the CQ ventilation system and acoustic mitigation are closely inter-related and require balancing crew comfort against use of interior habitable volume, accommodation of fan failures, and possible crew uses that impact ventilation and acoustic performance. Each CQ required 13% of its total volume and approximately 6% of its total mass to reduce acoustic noise. This paper illustrates the types of model analysis, assumptions, vehicle interactions, and trade-offs required for CQ ventilation and acoustics. Additionally, on-orbit ventilation system performance and initial crew feedback are presented. This approach is applicable to any private enclosed space that the crew will occupy.

  14. English vowels produced by Cantonese-English bilingual speakers.

    PubMed

    Chen, Yang; Ng, Manwa L; Li, Tie-Shan

    2012-12-01

    The present study attempted to test the postulate that sounds of a foreign language that are familiar to second language (L2) learners can be produced with less accuracy than sounds that are new. The first two formant frequencies (F1 and F2) were obtained from the 11 English monophthong vowels produced by 40 Cantonese-English (CE) bilingual and 40 native American English (AE) monolingual speakers. Based on F1 and F2, compact-diffuse (C-D) and grave-acute (G-A) values and the Euclidean Distance (ED) associated with the English vowels were evaluated and correlated with the perceived amount of accent present in the vowels. Results indicated that both male and female CE speakers exhibited different vowel spaces compared to their AE counterparts. While C-D and G-A indicated that acquisition of familiar and new vowels was not particularly different, ED values suggested better performance in CE speakers' productions of familiar vowels over new vowels. In conclusion, analyses based on spectral measurements obtained from the English vowel sounds produced by CE speakers did not provide favourable evidence to support the Speech Learning Model (SLM) proposed by Flege (1995). Nevertheless, for both familiar and new sounds, English back vowels were found to be produced with greater inaccuracy than English front vowels.
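    The Euclidean Distance (ED) measure reduces, for a vowel pair, to the distance between two points in the F1/F2 plane. A minimal sketch with hypothetical formant values (not the study's data):

```python
import math

def euclidean_distance(v1, v2):
    """Distance between two vowels treated as (F1, F2) points in Hz."""
    return math.hypot(v1[0] - v2[0], v1[1] - v2[1])

# Hypothetical /i/ tokens from a bilingual and a monolingual speaker (Hz).
bilingual_i = (320, 2200)
monolingual_i = (270, 2290)
print(round(euclidean_distance(bilingual_i, monolingual_i), 1))  # -> 103.0
```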

  15. Does knowing speaker sex facilitate vowel recognition at short durations?

    PubMed

    Smith, David R R

    2014-05-01

    A man, a woman, or a child saying the same vowel does so with a very different voice. The auditory system solves the complex problem of extracting what the man, woman, or child has said despite substantial differences in the acoustic properties of their voices. Much of the acoustic variation between the voices of men and women is due to differences in the underlying anatomical mechanisms for producing speech. If the auditory system knew the sex of the speaker, it could potentially correct for speaker-sex-related acoustic variation, thus facilitating vowel recognition. This study measured the minimum stimulus duration necessary to accurately discriminate whether a brief vowel segment was spoken by a man or a woman, and the minimum stimulus duration necessary to accurately recognise which vowel was spoken. Results showed that reliable vowel recognition precedes reliable speaker-sex discrimination, calling into question the use of speaker-sex information in compensating for speaker-sex-related acoustic variation in the voice. Furthermore, the pattern of performance across experiments in which the fundamental frequency and formant frequency information of speakers' voices were systematically varied differed markedly depending on whether the task was speaker-sex discrimination or vowel recognition. This argues for there being little relationship between perception of speaker sex (indexical information) and perception of what has been said (linguistic information) at short durations.

  16. Producing American English Vowels during Vocal Tract Growth: A Perceptual Categorization Study of Synthesized Vowels

    ERIC Educational Resources Information Center

    Menard, Lucie; Davis, Barbara L.; Boe, Louis-Jean; Roy, Johanna-Pascale

    2009-01-01

    Purpose: To consider interactions of vocal tract change with growth and perceived output patterns across development, the influence of nonuniform vocal tract growth on the ability to reach acoustic-perceptual targets for English vowels was studied. Method: Thirty-seven American English speakers participated in a perceptual categorization…

  17. Regional dialect variation in the vowel systems of typically developing children

    PubMed Central

    Jacewicz, Ewa; Fox, Robert Allen; Salmons, Joseph

    2015-01-01

    Purpose: To investigate regional dialect variation in the vowel systems of normally developing 8- to 12-year-old children. Method: Thirteen vowels in isolated h_d words were produced by 94 children and 93 adults, males and females. All participants spoke American English and were born and raised in one of three distinct dialect regions in the United States: western North Carolina (Southern dialect), central Ohio (Midland), and southeastern Wisconsin (Northern Midwestern dialect). Acoustic analysis included formant frequencies (F1 and F2) measured at five equidistant time points in each vowel and formant movement (trajectory length). Results: Children's productions showed many dialect-specific features comparable to those in adult speakers, both in terms of vowel dispersion patterns and formant movement. Differences were also found, including systemic vowel changes, significant monophthongization of selected vowels, and greater formant movement in diphthongs. Conclusions: The acoustic results provide evidence for regional distinctiveness in children's vowel systems. Children acquire not only the systemic relations among vowels but also their dialect-specific patterns of formant dynamics. By directing attention to the regional variation in the production of American English vowels, this work may prove helpful in better understanding and interpreting the development of vowel categories and vowel systems in children. PMID:20966384
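    The trajectory-length measure of formant movement follows directly from its description: five equidistant (F1, F2) samples per vowel, with the distances between successive samples summed. A sketch with invented sample values:

```python
import math

def trajectory_length(samples):
    """Formant movement: summed Euclidean distances between successive
    (F1, F2) measurement points, in Hz."""
    return sum(
        math.hypot(b[0] - a[0], b[1] - a[1])
        for a, b in zip(samples, samples[1:])
    )

# Hypothetical diphthong-like trajectory at five equidistant time points (Hz).
samples = [(750, 1200), (650, 1500), (550, 1800), (450, 2000), (400, 2150)]
print(round(trajectory_length(samples), 1))  # -> 1014.2
```

A monophthong measured the same way would yield a trajectory length near zero, which is what makes the measure useful for detecting monophthongization.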

  18. Space Launch System Begins Acoustic Testing

    NASA Video Gallery

    Engineers at NASA's Marshall Space Flight Center in Huntsville, Ala., have assembled a collection of thrusters to stand in for the various propulsion elements in a scale model version of NASA’s S...

  19. The Vietnamese Vowel System

    ERIC Educational Resources Information Center

    Emerich, Giang Huong

    2012-01-01

    In this dissertation, I provide a new analysis of the Vietnamese vowel system as a system with fourteen monophthongs and nineteen diphthongs, based on phonetic and phonological data. I propose that the Vietnamese contour vowels /ie/, /[turned m]?/, and /uo/ should be grouped with the eleven monophthongs /i e epsilon a [turned a] ? ? [turned m]…

  20. Intrinsic-cum-extrinsic normalization of formant data of vowels.

    PubMed

    Ananthapadmanabha, T V; Ramakrishnan, A G

    2016-11-01

    Using a known speaker-intrinsic normalization procedure, formant data are scaled by the reciprocal of the geometric mean of the first three formant frequencies. This reduces the influence of the talker but results in a distorted vowel space. The proposed speaker-extrinsic procedure re-scales the normalized values by the mean formant values of vowels. When tested on the formant data of vowels published by Peterson and Barney, the combined approach leads to well separated clusters by reducing the spread due to talkers. The proposed procedure performs better than two top-ranked normalization procedures based on the accuracy of vowel classification as the objective measure.
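    The two-stage normalization can be sketched as follows. The intrinsic step is as described (scaling by the reciprocal of the geometric mean of F1-F3); the extrinsic step is one plausible reading of "re-scales the normalized values by the mean formant values of vowels", and the formant values are illustrative rather than taken from Peterson and Barney:

```python
def intrinsic_normalize(f1, f2, f3):
    """Speaker-intrinsic step: scale each formant by the reciprocal of the
    geometric mean of the first three formant frequencies."""
    gm = (f1 * f2 * f3) ** (1.0 / 3.0)
    return (f1 / gm, f2 / gm, f3 / gm)

def extrinsic_rescale(tokens):
    """Speaker-extrinsic step: re-scale by per-formant means over all tokens
    (an assumption about 'the mean formant values of vowels')."""
    n = len(tokens)
    means = [sum(t[i] for t in tokens) / n for i in range(3)]
    return [tuple(t[i] * means[i] for i in range(3)) for t in tokens]

# Illustrative (F1, F2, F3) values in Hz, not data from the study.
raw = [(270, 2290, 3010), (730, 1090, 2440)]
tokens = [intrinsic_normalize(*f) for f in raw]
print([tuple(round(v, 3) for v in t) for t in extrinsic_rescale(tokens)])
```

A useful property of the intrinsic step is that the product of the three normalized formants is always 1, which is the sense in which the talker's overall vocal-tract scale is factored out.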

  1. Synthesis fidelity and time-varying spectral change in vowels

    NASA Astrophysics Data System (ADS)

    Assmann, Peter F.; Katz, William F.

    2005-02-01

    Recent studies have shown that synthesized versions of American English vowels are less accurately identified when the natural time-varying spectral changes are eliminated by holding the formant frequencies constant over the duration of the vowel. A limitation of these experiments has been that vowels produced by formant synthesis are generally less accurately identified than the natural vowels after which they are modeled. To overcome this limitation, a high-quality speech analysis-synthesis system (STRAIGHT) was used to synthesize versions of 12 American English vowels spoken by adults and children. Vowels synthesized with STRAIGHT were identified as accurately as the natural versions, in contrast with previous results from our laboratory showing identification rates 9%-12% lower for the same vowels synthesized using the cascade formant model. Consistent with earlier studies, identification accuracy was not reduced when the fundamental frequency was held constant across the vowel. However, elimination of time-varying changes in the spectral envelope using STRAIGHT led to a greater reduction in accuracy (23%) than was previously found with cascade formant synthesis (11%). A statistical pattern recognition model, applied to acoustic measurements of the natural and synthesized vowels, predicted both the higher identification accuracy for vowels synthesized using STRAIGHT compared to formant synthesis, and the greater effects of holding the formant frequencies constant over time with STRAIGHT synthesis. Taken together, the experiment and modeling results suggest that formant estimation errors and incorrect rendering of spectral and temporal cues by cascade formant synthesis contribute to lower identification accuracy and underestimation of the role of time-varying spectral change in vowels.

  2. Vowel Formant Values in Hearing and Hearing-Impaired Children: A Discriminant Analysis

    ERIC Educational Resources Information Center

    Ozbic, Martina; Kogovsek, Damjana

    2010-01-01

    Hearing-impaired speakers show changes in vowel production and formant pitch and variability, as well as more cases of overlapping between vowels and more restricted formant space, than hearing speakers; consequently their speech is less intelligible. The purposes of this paper were to determine the differences in vowel formant values between 32…

  3. Exceptionality in vowel harmony

    NASA Astrophysics Data System (ADS)

    Szeredi, Daniel

    Vowel harmony has been of great interest in phonological research. It has been widely accepted that vowel harmony is a phonetically natural phenomenon: it is a common pattern because it provides advantages to the speaker in articulation and to the listener in perception. Exceptional patterns have proved a challenge to phonetically grounded analyses because, by their nature, they introduce phonetically disadvantageous sequences to the surface form, consisting of harmonically different vowels. Such forms are found, for example, in the Finnish stem tuoli 'chair' or in the Hungarian suffixed form hi:d-hoz 'to the bridge', both word forms containing a mix of front and back vowels. Evidence has recently emerged that there might be a phonetic-level explanation for some exceptional patterns: some vowels participating in irregular stems (like the vowel [i] in the Hungarian stem hi:d 'bridge' above) may differ in some small phonetic detail from vowels in regular stems. The main question has not been raised, though: does this phonetic detail matter for speakers? Would they use these minor differences when they have to categorize a new word as regular or irregular? A different recent trend explains morphophonological exceptionality by looking at the phonotactic regularities characteristic of classes of stems, based on their morphological behavior. Studies have shown that speakers are aware of these regularities and use them as cues when they have to decide which class a novel stem belongs to. These sublexical phonotactic regularities have already been shown to be present in some exceptional patterns of vowel harmony, but many questions remain open: how is learning the static generalization linked to learning the allomorph-selection facet of vowel harmony? How much does the effect of consonants on vowel harmony matter, compared to the effect of vowel-to-vowel correspondences? This dissertation aims to test these two ideas.

  4. Response of space shuttle insulation panels to acoustic noise pressure

    NASA Technical Reports Server (NTRS)

    Vaicaitis, R.

    1976-01-01

    The response of reusable space shuttle insulation panels to random acoustic pressure fields are studied. The basic analytical approach in formulating the governing equations of motion uses a Rayleigh-Ritz technique. The input pressure field is modeled as a stationary Gaussian random process for which the cross-spectral density function is known empirically from experimental measurements. The response calculations are performed in both frequency and time domain.

  5. How to stretch and shrink vowel systems: results from a vowel normalization procedure.

    PubMed

    Geng, Christian; Mooshammer, Christine

    2009-05-01

    One of the goals of phonetic investigations is to find strategies for vowel production independent of speaker-specific vocal-tract anatomies and individual biomechanical properties. In this study techniques for speaker normalization that are derived from Procrustes methods were applied to acoustic and articulatory data. More precisely, data consist of the first two formants and EMMA fleshpoint markers of stressed and unstressed vowels of German from seven speakers in the consonantal context /t/. Main results indicate that (a) for the articulatory data, the normalization can be related to anatomical properties (palate shapes), (b) the recovery of phonemic identity is of comparable quality for acoustic and articulatory data, (c) the procedure outperforms the Lobanov transform in the acoustic domain in terms of phoneme recovery, and (d) this advantage comes at the cost of partly also changing ellipse orientations, which is in accordance with the formulation of the algorithms.
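    The Lobanov transform that serves as the baseline comparison here is a per-speaker z-score of each formant. A minimal sketch with hypothetical F1 values (the choice of population standard deviation is one common convention):

```python
import statistics

def lobanov(formant_values):
    """Lobanov transform: per-speaker z-scores of one formant's values."""
    mu = statistics.mean(formant_values)
    sd = statistics.pstdev(formant_values)   # population SD, one common convention
    return [(f - mu) / sd for f in formant_values]

# Hypothetical F1 values (Hz) from a single speaker's vowel tokens.
f1_tokens = [300, 450, 700, 500]
print([round(z, 2) for z in lobanov(f1_tokens)])  # -> [-1.31, -0.26, 1.49, 0.09]
```

After the transform each speaker's formant distribution has zero mean and unit variance, which removes speaker-specific scale and offset before vowels are compared across talkers.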

  6. Variability in Vowel Production within and between Days

    PubMed Central

    Heald, Shannon L. M.; Nusbaum, Howard C.

    2015-01-01

    Although the acoustic variability of speech is often described as a problem for phonetic recognition, there is little research examining acoustic-phonetic variability over time. We measured naturally occurring acoustic variability in speech production at nine specific time points (three per day over three days) to examine daily change in production as well as change across days for citation-form vowels. Productions of seven different vowels (/EE/, /IH/, /AH/, /UH/, /AE/, /OO/, /EH/) were recorded at 9AM, 3PM and 9PM over the course of each testing day on three different days, every other day, over a span of five days. Results indicate significant systematic change in F1 and F0 values over the course of a day for each of the seven vowels recorded, whereas F2 and F3 remained stable. Despite this systematic change within a day, however, talkers did not show significant changes in F0, F1, F2, and F3 between days, demonstrating that speakers are capable of producing vowels with great reliability over days without any extrinsic feedback besides their own auditory monitoring. The data show that in spite of substantial day-to-day variability in the specific listening and speaking experiences of these participants and thus exposure to different acoustic tokens of speech, there is a high degree of internal precision and consistency for the production of citation form vowels. PMID:26331478

  7. Variability in production of the vowels /i/ and /a/.

    PubMed

    Perkell, J S; Nelson, W L

    1985-05-01

    A hypothesis on the nature of articulatory targets for the vowels /i/ and /a/ is proposed, based on acoustic considerations and vowel articulations. The conjecture is that positioning of points on the tongue surface in a repetition experiment should be most accurate in the direction perpendicular to the vocal-tract midline, at the acoustically critical point of maximal constriction for each vowel. The hypothesis was tested by examining x-ray microbeam data for three speakers, conducting a partial acoustical analysis, and performing a modeling study. Distributions were plotted of the midsagittal locations of three tongue points at the time of maximal excursion toward the vowel target, for many examples of the vowels embedded in a variety of phonetic contexts. More variation was found along a direction parallel to the vocal tract midline than perpendicular to the midline, supporting the hypothesis. Statistics on formant values for one subject were calculated, and pairwise regressions of displacement and formant data were run. An articulatory synthesizer [Rubin et al., J. Acoust. Soc. Am. 70, 321-328 (1981)] was manipulated through displacements similar to the subject's articulatory variation. Although articulatory synthesis showed systematic relationships between articulatory displacements and formant frequencies, there were no significant correlations between the subject's measured articulatory displacements and his formant data. These additional results raise questions about the methodology and point to the need for further work for an adequate test of the hypothesis.

  8. Acoustical analysis and multiple source auralizations of charismatic worship spaces

    NASA Astrophysics Data System (ADS)

    Lee, Richard W.

    2004-05-01

    Because of the spontaneity and high level of call and response, many charismatic churches have verbal and musical communication problems that stem from highly reverberant sound fields, poor speech intelligibility, and muddy music. This research looks at the subjective dimensions of room acoustics perception that affect a charismatic worship space, summarized using the acronym RISCS (reverberation, intimacy, strength, coloration, and spaciousness). The method of research is to obtain acoustical measurements for three worship spaces in order to analyze the objective parameters associated with the RISCS subjective dimensions. For the same spaces, binaural room impulse response (BRIR) measurements are made for different receiver positions in order to create an auralization for each position. The subjective descriptors of RISCS are analyzed through listening tests of the three auralized spaces. The results from the measurements and listening tests are analyzed to determine whether listeners' perceptions correlate with the objective parameter results, whether the subjective parameters are appropriate for the use of the space, and which parameters take precedence. A comparison of the multi-source auralization to a conventional single-source auralization was made using a mixed-down version of the synchronized multi-track anechoic signals.

  9. Speaker age and vowel perception.

    PubMed

    Drager, Katie

    2011-03-01

    Recent research provides evidence that individuals shift in their perception of variants depending on social characteristics attributed to the speaker. This paper reports on a speech perception experiment designed to test the degree to which the age attributed to a speaker influences the perception of vowels undergoing a chain shift. As a result of the shift, speakers from different generations produce different variants from one another. Results from the experiment indicate that a speaker's perceived age can influence vowel categorization in the expected direction. However, only older participants are influenced by perceived speaker age. This suggests that social characteristics attributed to a speaker affect speech perception differently depending on the salience of the relationship between the variant and the characteristic. The results also provide evidence of an unexpected interaction between the sex of the participant and the sex of the stimulus. The interaction is interpreted as an effect of the participants' previous exposure to male and female speakers. The results are analyzed under an exemplar model of speech production and perception in which social information is indexed to acoustic information and the weight of the connection varies depending on the perceived salience of sociophonetic trends.

  10. Learning phonemic vowel length from naturalistic recordings of Japanese infant-directed speech.

    PubMed

    Bion, Ricardo A H; Miyazawa, Kouki; Kikuchi, Hideaki; Mazuka, Reiko

    2013-01-01

    In Japanese, vowel duration can distinguish the meaning of words. In order for infants to learn this phonemic contrast using simple distributional analyses, there should be reliable differences in the duration of short and long vowels, and the frequency distribution of vowels must make these differences salient enough in the input. In this study, we evaluate these requirements of phonemic learning by analyzing the duration of vowels from over 11 hours of Japanese infant-directed speech. We found that long vowels are substantially longer than short vowels in the input directed to infants, for each of the five oral vowels. However, we also found that learning phonemic length from the overall distribution of vowel duration is not going to be easy for a simple distributional learner, because of the large base-rate effect (i.e., 94% of vowels are short), and because of the many factors that influence vowel duration (e.g., intonational phrase boundaries, word boundaries, and vowel height). Therefore, a successful learner would need to take into account additional factors such as prosodic and lexical cues in order to discover that duration can contrast the meaning of words in Japanese. These findings highlight the importance of taking into account the naturalistic distributions of lexicons and acoustic cues when modeling early phonemic learning.
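
    The base-rate problem the authors describe is easy to reproduce: when 94% of tokens are short, the pooled duration histogram's global peak sits at the short-vowel mode and the long-vowel mode becomes a faint shoulder. A sketch with invented duration parameters (only the 94%/6% base rate comes from the abstract):

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented duration parameters (ms); only the 94%/6% base rate is
# taken from the study.
short = rng.normal(70.0, 20.0, size=9400)
long_ = rng.normal(150.0, 35.0, size=600)
durations = np.concatenate([short, long_])

# A naive distributional learner inspects the pooled histogram: the
# global peak sits at the short-vowel mode, and the long-vowel mode is
# a faint shoulder rather than a clear second peak.
hist, edges = np.histogram(durations, bins=40)
peak_bin = int(np.argmax(hist))
peak_duration = (edges[peak_bin] + edges[peak_bin + 1]) / 2
long_share = float((durations > 110).mean())  # roughly the base rate
```

    This is why the abstract argues that a successful learner must recruit prosodic and lexical cues rather than rely on the raw duration distribution alone.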

  11. The vowel systems of Quichua-Spanish bilinguals. Age of acquisition effects on the mutual influence of the first and second languages.

    PubMed

    Guion, Susan G

    2003-01-01

    This study investigates vowel productions of 20 Quichua-Spanish bilinguals, differing in age of Spanish acquisition, and 5 monolingual Spanish speakers. While the vowel systems of simultaneous, early, and some mid bilinguals all showed significant plasticity, there were important differences in the kind, as well as the extent, of this adaptability. Simultaneous bilinguals differed from early bilinguals in that they were able to partition the vowel space in a more fine-grained way to accommodate the vowels of their two languages. Early and some mid bilinguals acquired Spanish vowels, whereas late bilinguals did not. It was also found that acquiring Spanish vowels could affect the production of native Quichua vowels. The Quichua vowels were produced higher by bilinguals who had acquired Spanish vowels than those who had not. It is proposed that this vowel reorganization serves to enhance the perceptual distinctiveness between the vowels of the combined first- and second-language system.

  12. Acoustic Emission Detection of Impact Damage on Space Shuttle Structures

    NASA Technical Reports Server (NTRS)

    Prosser, William H.; Gorman, Michael R.; Madaras, Eric I.

    2004-01-01

    The loss of the Space Shuttle Columbia as a result of impact damage from foam debris during ascent has led NASA to investigate the feasibility of on-board impact detection technologies. Acoustic emission (AE) sensing has been utilized to monitor a wide variety of impact conditions on Space Shuttle components, ranging from insulating foam, ablator materials, and ice at ascent velocities to simulated hypervelocity micrometeoroid and orbital debris impacts. Impact testing has been performed both on reinforced carbon composite leading-edge materials and on Shuttle tile materials mounted on representative aluminum wing structures. Results of these impact tests will be presented with a focus on the acoustic emission sensor responses to these impact conditions. These tests have demonstrated the potential of employing an on-board Shuttle impact detection system. We will describe the present plans for implementation of an initial, very low frequency acoustic impact sensing system using pre-existing flight-qualified hardware. The details of an accompanying flight measurement system to assess the Shuttle's acoustic background noise environment as a function of frequency will be described. The background noise assessment is being performed to optimize the frequency range of sensing for a planned future upgrade to the initial impact sensing system.

  13. Toward a Systematic Evaluation of Vowel Target Events across Speech Tasks

    ERIC Educational Resources Information Center

    Kuo, Christina

    2011-01-01

    The core objective of this study was to examine whether acoustic variability of vowel production in American English, across speaking tasks, is systematic. Ten male speakers who spoke a relatively homogeneous Wisconsin dialect produced eight monophthong vowels (in hVd and CVC contexts) in four speaking tasks, including clear-speech, citation form,…

  14. Effect of voice quality on perceived height of English vowels.

    PubMed

    Lotto, A J; Holt, L L; Kluender, K R

    1997-01-01

    Across a variety of languages, phonation type and vocal-tract shape systematically covary in vowel production. Breathy phonation tends to accompany vowels produced with a raised tongue body and/or advanced tongue root. A potential explanation for this regularity, based on a hypothesized interaction between the acoustic effects of vocal-tract shape and phonation type, is evaluated. It is suggested that increased spectral tilt and first-harmonic amplitude resulting from breathy phonation interact with the lower-frequency first formant resulting from a raised tongue body to produce a perceptually 'higher' vowel. To test this hypothesis, breathy and modal versions of vowel series modelled after male and female productions of the English vowel pairs /i/-/ɪ/, /u/-/ʊ/, and /ʌ/-/ɑ/ were synthesized. Results indicate that for most cases, breathy voice quality led to more tokens being identified as the higher vowel (i.e., /i/, /u/, /ʌ/). In addition, the effect of voice quality is greater for vowels modelled after female productions. These results are consistent with a hypothesized perceptual explanation for the covariation of phonation type and tongue-root advancement in West African languages. The findings may also be relevant to gender differences in phonation type.

  15. Acoustics in the Worship Space: Goals and Strategies

    NASA Astrophysics Data System (ADS)

    Crist, Ernest Vincent, III

    The act of corporate worship, though encompassing nearly all the senses, is primarily aural. Since the transmission process is aural, the experiences must be heard with sufficient volume and quality in order to be effective. The character and magnitude of the sound-producing elements are significant in the process, but it is the nature of the enclosing space that is even more crucial. Every building will, by its size, shape, and materials, act upon all sounds produced, affecting not only the amount, but also the quality of sound reaching the listeners' ears. The purpose of this project was to determine appropriate acoustical goals for worship spaces, and to propose design strategies to achieve these goals. Acoustic goals were determined by examination of the results of previously conducted subjective preference studies, computer plotting of virtual sources, and experimentation in actual spaces. Determinations also take into account the sensory inhibition factors in the processing of sound by the human auditory system. Design strategies incorporate aspects of placement of performing forces, the geometry of the enclosing space, materials and furnishings, and noise control.

  16. Towards a continuous population model for natural language vowel shift.

    PubMed

    Shipman, Patrick D; Faria, Sérgio H; Strickland, Christopher

    2013-09-07

    The Great English Vowel Shift of the 16th-19th centuries and the current Northern Cities Vowel Shift are two examples of collective language processes characterized by regular phonetic changes, that is, gradual changes in vowel pronunciation over time. Here we develop a structured population approach to modeling such regular changes in the vowel systems of natural languages, taking into account learning patterns and effects such as social trends. We treat vowel pronunciation as a continuous variable in vowel space and allow for a continuous dependence of vowel pronunciation on time and age of the speaker. The theory of mixtures with continuous diversity provides a framework for the model, which extends the McKendrick-von Foerster equation to populations with age and phonetic structures. We develop the general balance equations for such populations and propose explicit expressions for the factors that impact the evolution of the vowel pronunciation distribution. For illustration, we present two examples of numerical simulations. In the first, we study a stationary solution corresponding to a state of phonetic equilibrium, in which speakers of all ages share a similar phonetic profile. We characterize the variance of the phonetic distribution in terms of a parameter measuring a ratio of phonetic attraction to dispersion. In the second, we show how vowel shift occurs upon starting with an initial condition consisting of a majority pronunciation that is affected by an immigrant minority with a different vowel pronunciation distribution. The approach developed here for vowel systems may also be applied to other learning situations and other time-dependent processes of cognition in self-interacting populations, like opinions or perceptions.
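
    The second simulation described in this abstract can be caricatured in a drastically simplified, discrete-generation form: one vowel, one normalized acoustic axis, invented parameters, and no age structure (so this is not the paper's continuous McKendrick-von Foerster model, only the attraction/dispersion intuition behind it):

```python
import numpy as np

# A drastically simplified, discrete-generation analogue of the model:
# one vowel, one normalized acoustic axis, invented parameters.
rng = np.random.default_rng(0)

majority = rng.normal(0.0, 0.1, size=900)   # resident pronunciation
minority = rng.normal(1.0, 0.1, size=100)   # immigrant pronunciation
pop = np.concatenate([majority, minority])
start_std = pop.std()

for generation in range(50):
    # Each step, 5% of speakers are replaced by learners who target the
    # current population mean (phonetic attraction) plus dispersion noise.
    n_new = int(0.05 * pop.size)
    idx = rng.choice(pop.size, size=n_new, replace=False)
    pop[idx] = rng.normal(pop.mean(), 0.1, size=n_new)
```

    The initially bimodal distribution collapses toward a single mode near the mixture mean: the majority pronunciation drifts toward the immigrant one, the qualitative behavior the abstract describes.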

  17. Formant Centralization Ratio: A Proposal for a New Acoustic Measure of Dysarthric Speech

    ERIC Educational Resources Information Center

    Sapir, Shimon; Ramig, Lorraine O.; Spielman, Jennifer L.; Fox, Cynthia

    2010-01-01

    Purpose: The vowel space area (VSA) has been used as an acoustic metric of dysarthric speech, but with varying degrees of success. In this study, the authors aimed to test an alternative metric to the VSA--the "formant centralization ratio" (FCR), which is hypothesized to more effectively differentiate dysarthric from healthy speech and register…
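
    The FCR itself is not given in this truncated abstract; in the published paper (Sapir et al., 2010) it is defined from corner-vowel formants as FCR = (F2u + F2a + F1i + F1u) / (F2i + F1a), with centralization pushing the ratio above 1. A sketch with illustrative (invented) formant values:

```python
def formant_centralization_ratio(f1, f2):
    # FCR = (F2u + F2a + F1i + F1u) / (F2i + F1a); formant values in Hz.
    # Vowel centralization raises the numerator terms and lowers the
    # denominator terms, so centralized speech yields a larger ratio.
    return (f2['u'] + f2['a'] + f1['i'] + f1['u']) / (f2['i'] + f1['a'])

# Illustrative corner-vowel formants (Hz); these are invented values,
# not data from the study.
healthy = formant_centralization_ratio(
    f1={'i': 300, 'u': 330, 'a': 750},
    f2={'i': 2300, 'u': 900, 'a': 1200})
centralized = formant_centralization_ratio(
    f1={'i': 400, 'u': 420, 'a': 650},
    f2={'i': 2000, 'u': 1100, 'a': 1300})
```

    Because the ratio divides centralized-direction formants by peripheral-direction ones, it also factors out much of the inter-speaker scale variation that inflates VSA variance.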

  18. Orderly cortical representation of vowel categories presented by multiple exemplars.

    PubMed

    Shestakova, Anna; Brattico, Elvira; Soloviev, Alexei; Klucharev, Vasily; Huotilainen, Minna

    2004-11-01

    This study aimed at determining how the human brain automatically processes phoneme categories irrespective of the large acoustic inter-speaker variability. Subjects were presented with 450 different speech stimuli, equally distributed across the [a], [i], and [u] vowel categories, and each uttered by a different male speaker. A 306-channel magnetoencephalogram (MEG) was used to record N1m, the magnetic counterpart of the N1 component of the auditory event-related potential (ERP). The N1m amplitude and source locations differed between vowel categories. We also found that the spectrum dissimilarities were reproduced in the cortical representations of the large set of the phonemes used in this study: vowels with similar spectral envelopes had closer cortical representations than those whose spectral differences were the largest. Our data further extend the notion of differential cortical representations in response to vowel categories, previously demonstrated by using only one or a few tokens representing each category.

  19. Vowel intelligibility in classical singing.

    PubMed

    Gregg, Jean Westerman; Scherer, Ronald C

    2006-06-01

    Vowel intelligibility during singing is an important aspect of communication during performance. The intelligibility of isolated vowels sung by Western classically trained singers has been found to be relatively low, in fact decreasing as pitch rises, and it is lower for women than for men. The lack of contextual cues significantly degrades vowel intelligibility. It was postulated in this study that the reduced intelligibility of isolated sung vowels may stem partly from the vowels used by the singers in their daily vocalises. More specifically, if classically trained singers sang only a few American English vowels during their vocalises, their intelligibility for American English vowels would be less than for those classically trained singers who usually vocalize on most American English vowels. In this study, there were 21 subjects (15 women, 6 men), all Western classically trained performers as well as teachers of classical singing. They sang 11 words containing 11 different American English vowels, singing on two pitches a musical fifth apart. Subjects were divided into two groups: those who normally vocalize on 4, 5, or 6 vowels, and those who sing all 11 vowels during their daily vocalises. The sung words were cropped to isolate the vowels, and listening tapes were created. Two listening groups, four singing teachers and five speech-language pathologists, were asked to identify the vowels intended by the singers. Results suggest that singing fewer vowels during daily vocalises does not decrease intelligibility compared with singing all 11 American English vowels. Also, in general, vowel intelligibility was lower at the higher pitch, and vowels sung by the women were less intelligible than those sung by the men. Identification accuracy was about the same for the singing teacher listeners and the speech-language pathologist listeners except at the lower pitch, where the singing teachers were more accurate.

  20. The phonological function of vowels is maintained at fundamental frequencies up to 880 Hz.

    PubMed

    Friedrichs, Daniel; Maurer, Dieter; Dellwo, Volker

    2015-07-01

    In a between-subject perception task, listeners either identified full words or vowels isolated from these words at F0s between 220 and 880 Hz. They received two written words as response options (minimal pair with the stimulus vowel in contrastive position). Listeners' sensitivity (A') was extremely high in both conditions at all F0s, showing that the phonological function of vowels can also be maintained at high F0s. This indicates that vowel sounds may carry strong acoustic cues departing from common formant frequencies at high F0s and that listeners do not rely on consonantal context phenomena for their identification performance.

  1. Reading skills and the discrimination of English vowel contrasts by bilingual Spanish/English-speaking children: Is there a correlation?

    NASA Astrophysics Data System (ADS)

    Levey, Sandra

    2005-04-01

    This study examined the discrimination of English vowel contrasts in real and novel word-pairs by 21 children: 11 bilingual Spanish/English- and 10 monolingual English-speaking children, 8-12 years of age (M = 10;6, Mdn = 10;4). The goal was to determine if children with poor reading skills had difficulty with discrimination, an essential factor in reading abilities. A categorial discrimination task was used in an ABX discrimination paradigm: A (the first word in the sequence) and B (the second word in the sequence) were different stimuli, and X (the third word in the sequence) was identical to either A or to B. Stimuli were produced by one of three different speakers. Seventy-two monosyllabic words were presented: 36 real English and 36 novel words. Vowels were those absent from the inventory of Spanish vowels. Discrimination accuracy for the English-speaking children with good reading skills was significantly greater than for the bilingual children with good or poor reading skills. Early age of acquisition and greater percentage of time devoted to communication in English played the greatest role in bilingual children's discrimination and reading skills. The adjacency of vowels in the F1-F2 acoustic space presented the greatest difficulty.

  2. Acoustics

    NASA Technical Reports Server (NTRS)

    Goodman, Jerry R.; Grosveld, Ferdinand

    2007-01-01

    The acoustics environment in space operations is important to maintain at manageable levels so that the crewperson can remain safe, functional, effective, and reasonably comfortable. High acoustic levels can produce temporary or permanent hearing loss, or cause other physiological symptoms such as auditory pain, headaches, discomfort, strain in the vocal cords, or fatigue. Noise is defined as undesirable sound. Excessive noise may result in psychological effects such as irritability, inability to concentrate, decrease in productivity, annoyance, errors in judgment, and distraction. A noisy environment can also result in the inability to sleep, or to sleep well. Elevated noise levels can affect the ability to communicate, understand what is being said, and hear what is going on in the environment; they can degrade crew performance and operations and create habitability concerns. Superfluous noise emissions can also make it impossible to hear alarms or other important auditory cues, such as the sound of malfunctioning equipment. Recent space flight experience, evaluations of the requirements in crew habitable areas, and lessons learned (Goodman 2003; Allen and Goodman 2003; Pilkinton 2003; Grosveld et al. 2003) show the importance of maintaining an acceptable acoustics environment. This is best accomplished by having a high-quality set of limits/requirements early in the program, by "designing in" acoustics during the development of hardware and systems, and by monitoring, testing, and verifying the levels to ensure that they are acceptable.

  3. Underwater acoustic source localization using closely spaced hydrophone pairs

    NASA Astrophysics Data System (ADS)

    Sim, Min Seop; Choi, Bok-Kyoung; Kim, Byoung-Nam; Lee, Kyun Kyung

    2016-07-01

    Underwater sound source position is typically determined using a line array. However, performance degradation occurs owing to the multipath environment, which generates incoherent signals. In this paper, a hydrophone array is proposed for underwater source position estimation that is robust to a multipath environment. The array is composed of three pairs of sensors placed on the same line. The source position is estimated by performing generalized cross-correlation (GCC). The proposed system is not affected by multipath time delays because of the small separation between the sensors in each pair. The validity of the array is confirmed by simulation using acoustic signals synthesized by eigenrays.
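
    The delay-based estimation at the heart of this design can be illustrated for a single sensor pair. The sketch below (simulated broadband signal, invented parameters; the paper's system combines three such pairs) estimates the inter-sensor time delay by cross-correlation, the core operation of GCC, and converts it to a bearing angle:

```python
import numpy as np

# Single-pair sketch of delay-based bearing estimation; the paper's
# system uses three such pairs. All parameter values are invented.
fs = 100_000   # sample rate, Hz
c = 1500.0     # sound speed in water, m/s
d = 0.5        # spacing of the hydrophone pair, m

rng = np.random.default_rng(1)
src = rng.standard_normal(4096)        # broadband source signal

true_lag = 20                          # samples of inter-sensor delay
x1 = src
x2 = np.roll(src, true_lag)            # second hydrophone hears it later

# Plain cross-correlation (GCC with a flat weighting): the peak lag
# gives the time-difference-of-arrival between the two sensors.
corr = np.correlate(x2, x1, mode='full')
lag = int(np.argmax(corr)) - (len(x1) - 1)

tau = lag / fs                                        # delay in seconds
bearing = np.degrees(np.arcsin(np.clip(c * tau / d, -1.0, 1.0)))
```

    With a 0.5 m pair spacing the maximum possible delay is d/c ≈ 0.33 ms, which is why closely spaced pairs are largely insensitive to the much longer multipath delays.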

  4. A modified statistical pattern recognition approach to measuring the crosslinguistic similarity of Mandarin and English vowels.

    PubMed

    Thomson, Ron I; Nearey, Terrance M; Derwing, Tracey M

    2009-09-01

    This study describes a statistical approach to measuring crosslinguistic vowel similarity and assesses its efficacy in predicting L2 learner behavior. In the first experiment, using linear discriminant analysis, relevant acoustic variables from vowel productions of L1 Mandarin and L1 English speakers were used to train a statistical pattern recognition model that simultaneously comprised both Mandarin and English vowel categories. The resulting model was then used to determine what categories novel Mandarin and English vowel productions most resembled. The extent to which novel cases were classified as members of a competing language category provided a means for assessing the crosslinguistic similarity of Mandarin and English vowels. In a second experiment, L2 English learners imitated English vowels produced by a native speaker of English. The statistically defined similarity between Mandarin and English vowels quite accurately predicted L2 learner behavior; the English vowel elicitation stimuli deemed most similar to Mandarin vowels were more likely to elicit L2 productions that were recognized as a Mandarin category; English stimuli that were less similar to Mandarin vowels were more likely to elicit L2 productions that were recognized as new or emerging categories.
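
    The study's approach can be caricatured with a hand-rolled linear discriminant classifier (class means plus pooled covariance) trained on a combined Mandarin-plus-English label set; all category means below are invented for illustration, and the real study used more vowels and more acoustic variables:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented F1/F2 (Hz) means for a combined Mandarin + English label set.
means = {
    'man_i': (290.0, 2250.0), 'man_u': (320.0, 700.0),
    'man_a': (850.0, 1300.0),
    'eng_i': (300.0, 2300.0), 'eng_ae': (660.0, 1700.0),
}

def sample(mean, n=40):
    return rng.normal(mean, (30.0, 80.0), size=(n, 2))

X = np.vstack([sample(m) for m in means.values()])
y = np.repeat(list(means), 40)

# Hand-rolled LDA: class means plus a pooled covariance, with
# classification by the largest linear discriminant score.
classes = list(means)
mus = np.array([X[y == c].mean(axis=0) for c in classes])
pooled = sum(np.cov(X[y == c].T) for c in classes) / len(classes)
prec = np.linalg.inv(pooled)

def classify(x):
    scores = [m @ prec @ x - 0.5 * (m @ prec @ m) for m in mus]
    return classes[int(np.argmax(scores))]

# A novel English /i/ token may be captured by the competing Mandarin
# /i/ category; the cross-language classification rate over many such
# tokens is the similarity measure.
label = classify(np.array([295.0, 2280.0]))
```

    Measuring how often novel tokens of one language are absorbed by the other language's categories gives a graded, data-driven similarity score rather than a binary same/different judgment.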

  5. Characterization of space dust using acoustic impact detection.

    PubMed

    Corsaro, Robert D; Giovane, Frank; Liou, Jer-Chyi; Burchell, Mark J; Cole, Michael J; Williams, Earl G; Lagakos, Nicholas; Sadilek, Albert; Anderson, Christopher R

    2016-08-01

    This paper describes studies leading to the development of an acoustic instrument for measuring properties of micrometeoroids and other dust particles in space. The instrument uses a pair of easily penetrated membranes separated by a known distance. Sensors located on these films detect the transient acoustic signals produced by particle impacts. The arrival times of these signals at the sensor locations are used in a simple multilateration calculation to measure the impact coordinates on each film. Particle direction and speed are found using these impact coordinates and the known membrane separations. This ability to determine particle speed, direction, and time of impact provides the information needed to assign the particle's orbit and identify its likely origin. In many cases additional particle properties can be estimated from the signal amplitudes, including approximate diameter and (for small particles) some indication of composition/morphology. Two versions of this instrument were evaluated in this study. Fiber optic displacement sensors are found advantageous when very thin membranes can be maintained in tension (solar sails, lunar surface). Piezoelectric strain sensors are preferred for thicker films without tension (long duration free flyers). The latter was selected for an upcoming installation on the International Space Station.

  6. Learning foreign vowels.

    PubMed

    Kingston, John

    2003-01-01

    Two hypotheses have recently been put forward to account for listeners' ability to distinguish and learn contrasts between speech sounds in foreign languages. First, Best's Perceptual Assimilation Model and Flege's Speech Learning Model both predict that the ease with which a listener can tell one non-native phoneme from another varies directly with the extent to which these sounds assimilate to different native phonemes (Best, 1994; also Best, McRoberts, & Goodell, 2001; Flege, 1991). Second, Logan, Lively, & Pisoni (1991) have argued that training listeners to identify non-native phonemes teaches them sets of exemplars rather than more abstract distinctive feature values. I report here the results of three sets of experiments designed to test these hypotheses, in which American English listeners were trained to categorize German nonlow vowels. The first set of experiments shows that some instances of the same contrast between German vowels are more easily discriminated than others, a result incompatible with the predictions of either Best's or Flege's models, but compatible with the alternative category recognition interpretation. The second set of experiments reveals effects of contextual and speaker variation on listeners' ability to learn [tense] but not [high] contrasts between foreign vowels, and is thus at least partly compatible with an exemplar model of foreign category learning (Pisoni, Lively, & Logan, 1994; also Nosofsky, 1986). The third set of experiments compares the predictions of Nosofsky's (1986) selective attention exemplar model of category learning with those of a feature learning model in tests of listeners' learning the natural classes to which the German vowels belong. The results are mixed: listeners learned the features that define the natural classes of [+/- high] and [+/- back] vowels, but could have learned either the feature that defines the natural classes of [+/- tense] vowels or sets of [+/- tense] exemplars. Natural classes…

  7. Audiovisual Vowel Monitoring and the Word Superiority Effect in Children

    ERIC Educational Resources Information Center

    Fort, Mathilde; Spinelli, Elsa; Savariaux, Christophe; Kandel, Sonia

    2012-01-01

    The goal of this study was to explore whether viewing the speaker's articulatory gestures contributes to lexical access in children (ages 5-10) and in adults. We conducted a vowel monitoring task with words and pseudo-words in audio-only (AO) and audiovisual (AV) contexts with white noise masking the acoustic signal. The results indicated that…

  8. Effects of Talker Variability on Vowel Recognition in Cochlear Implants

    ERIC Educational Resources Information Center

    Chang, Yi-ping; Fu, Qian-Jie

    2006-01-01

    Purpose: To investigate the effects of talker variability on vowel recognition by cochlear implant (CI) users and by normal-hearing (NH) participants listening to 4-channel acoustic CI simulations. Method: CI users were tested with their clinically assigned speech processors. For NH participants, 3 CI processors were simulated, using different…

  9. The Effect of Stress and Speech Rate on Vowel Coarticulation in Catalan Vowel-Consonant-Vowel Sequences

    ERIC Educational Resources Information Center

    Recasens, Daniel

    2015-01-01

    Purpose: The goal of this study was to ascertain the effect of changes in stress and speech rate on vowel coarticulation in vowel-consonant-vowel sequences. Method: Data on second formant coarticulatory effects as a function of changing /i/ versus /a/ were collected for five Catalan speakers' productions of vowel-consonant-vowel sequences with the…

  10. Reconstruction of Japanese Vowels.

    ERIC Educational Resources Information Center

    Aoki, Haruo

    1972-01-01

    This paper discusses the relationship between linguistic reconstructions and their historical validity using the case of Old Japanese (8th century A.D.) vowels as an example. Reconstructions throughout the paper include only those cases in which the modern reflexes and phonological correspondences between two or more genetically related languages…

  11. Palestinian Arabic Vowels.

    ERIC Educational Resources Information Center

    Cormick, James

    The vowel system of the educated rural sedentary dialect of the West Bank (Palestine) is analyzed from a generative phonological point of view and in relation to three phonological processes: monophthongization, centralization, and pharyngealization. The results of the analysis are compared with Mark Cowell's broader 1964 analysis of Syrian…

  12. A New Acoustic Test Facility at Alcatel Space Test Centre

    NASA Astrophysics Data System (ADS)

    Meurat, A.; Jezequel, L.

    2004-08-01

    Because its existing acoustic test facility had become obsolete, Alcatel Space has invested in a large acoustic chamber at its test centre in Cannes, in the south of France. This paper presents the main specification elaborated to design the facility, and the solution chosen: it will be located on a dedicated area of the existing test centre and will be based on technical solutions already used in similar facilities around the world. The main structure consists of a chamber linked to an external envelope (a concrete building) through suspensions that decouple vibration and protect against seismic risk. The noise generation system is based on Wyle modulators located on the chamber roof. Gaseous nitrogen is produced by a dedicated gas generator, developed by Air-Liquide, that can deliver high flow rates with accurate pressure and temperature control. The control and acquisition system is based on the existing solution implemented on the test centre's vibration facilities. With construction starting in May 2004, the final acceptance tests are planned for April 2005, and the first satellites are to be tested in May 2005.

  13. Durational and spectral differences in American English vowels: dialect variation within and across regions.

    PubMed

    Fridland, Valerie; Kendall, Tyler; Farrington, Charlie

    2014-07-01

    Spectral differences among varieties of American English have been widely studied, typically recognizing three major regionally diagnostic vowel shift patterns [Labov, Ash, and Boberg (2006). The Atlas of North American English: Phonetics, Phonology and Sound Change (De Gruyter, Berlin)]. Durational variability across dialects, on the other hand, has received relatively little attention. This paper investigates to what extent regional differences in vowel duration are linked with spectral changes taking place in the Northern, Western, and Southern regions of the U.S. Using F1/F2 and duration measures, the durational correlates of the low back vowel merger, characteristic of Western dialects, and the acoustic reversals of the front tense/lax vowels, characteristic of Southern dialects, are investigated. Results point to a positive correlation between spectral overlap and vowel duration for Northern and Western speakers, suggesting that both F1/F2 measures and durational measures are used for disambiguation of vowel quality. The findings also indicate that, regardless of region, a durational distinction maintains the contrast between the low back vowel classes, particularly in cases of spectral merger. Surprisingly, Southerners show a negative correlation for the vowel shifts most defining of contemporary Southern speech, suggesting that neither spectral position nor durational measures are the most relevant cues for vowel quality in the South.

  14. Are vowel errors influenced by consonantal context in the speech of persons with aphasia?

    NASA Astrophysics Data System (ADS)

    Gelfer, Carole E.; Bell-Berti, Fredericka; Boyle, Mary

    2004-05-01

    The literature suggests that vowels and consonants may be affected differently in the speech of persons with conduction aphasia (CA) or nonfluent aphasia with apraxia of speech (AOS). Persons with CA have shown similar error rates across vowels and consonants, while those with AOS have shown more errors for consonants than vowels. These data have been interpreted to suggest that consonants have greater gestural complexity than vowels. However, recent research [M. Boyle et al., Proc. International Cong. Phon. Sci., 3265-3268 (2003)] does not support this interpretation: persons with AOS and CA both had a high proportion of vowel errors, and vowel errors almost always occurred in the context of consonantal errors. To examine the notion that vowels are inherently less complex than consonants and are differentially affected in different types of aphasia, vowel production in different consonantal contexts for speakers with AOS or CA was examined. The target utterances, produced in carrier phrases, were bVC and bV syllables, allowing us to examine whether vowel production is influenced by consonantal context. Listener judgments were obtained for each token, and error productions were grouped according to the intended utterance and error type. Acoustical measurements were made from spectrographic displays.

  15. The acquisition of nuclei: a longitudinal analysis of phonological vowel length in three German-speaking children.

    PubMed

    Kehoe, Margaret M; Lleó, Conxita

    2003-08-01

    Studies of vowel length acquisition indicate an initial stage in which phonological vowel length is random followed by a stage in which either long vowels (without codas) or short vowels and codas are produced. To determine whether this sequence of acquisition applies to a group of German-speaking children (three children aged 1;3-2;6), monosyllabic and disyllabic words were transcribed and acoustically analysed. The results did not support a stage in which vowel length was totally random. At the first time period (onset of word production to 1;7), one child's monosyllabic productions were governed by a bipositional constraint such that either long vowels, or short vowels and codas were produced. At the second (1;10 to 2;0) and third time periods (2;3 to 2;6), all three children produced target long vowels significantly longer than target short vowels. Transcription results indicated that children experienced more difficulty producing target long than short vowels. In the discussion, the findings are interpreted in terms of the representation of vowel length in children's grammars.

  16. LEARNING NONADJACENT DEPENDENCIES IN PHONOLOGY: TRANSPARENT VOWELS IN VOWEL HARMONY.

    PubMed

    Finley, Sara

    2015-03-01

    Nonadjacent dependencies are an important part of the structure of language. While the majority of syntactic and phonological processes occur at a local domain, there are several processes that appear to apply at a distance, posing a challenge for theories of linguistic structure. This article addresses one of the most common nonadjacent phenomena in phonology: transparent vowels in vowel harmony. Vowel harmony occurs when adjacent vowels are required to share the same phonological feature value (e.g. V+F C V+F). However, transparent vowels create a second-order nonadjacent pattern because agreement between two vowels can 'skip' the transparent neutral vowel in addition to consonants (e.g. V+F C V(T)-F C V+F). Adults are shown to display initial learning biases against second-order nonadjacency in experiments that use an artificial grammar learning paradigm. Experiments 1-3 show that adult learners fail to learn the second-order long-distance dependency created by the transparent vowel (as compared to a control condition). In experiments 4-5, training in terms of overall exposure as well as the frequency of relevant transparent items was increased. With adequate exposure, learners reliably generalize to novel words containing transparent vowels. The experiments suggest that learners are sensitive to the structure of phonological representations, even when learning occurs at a relatively rapid pace.

  17. Clear Speech Variants: An Acoustic Study in Parkinson's Disease

    PubMed Central

    Tjaden, Kris

    2016-01-01

    Purpose The authors investigated how different variants of clear speech affect segmental and suprasegmental acoustic measures of speech in speakers with Parkinson's disease and a healthy control group. Method A total of 14 participants with Parkinson's disease and 14 control participants served as speakers. Each speaker produced 18 different sentences selected from the Sentence Intelligibility Test (Yorkston & Beukelman, 1996). All speakers produced stimuli in 4 speaking conditions (habitual, clear, overenunciate, and hearing impaired). Segmental acoustic measures included vowel space area and first moment (M1) coefficient difference measures for consonant pairs. Second formant slope of diphthongs and measures of vowel and fricative durations were also obtained. Suprasegmental measures included fundamental frequency, sound pressure level, and articulation rate. Results For the majority of adjustments, all variants of clear speech instruction differed from the habitual condition. The overenunciate condition elicited the greatest magnitude of change for segmental measures (vowel space area, vowel durations) and the slowest articulation rates. The hearing impaired condition elicited the greatest fricative durations and suprasegmental adjustments (fundamental frequency, sound pressure level). Conclusions Findings have implications for a model of speech production for healthy speakers as well as for speakers with dysarthria. Findings also suggest that particular clear speech instructions may target distinct speech subsystems. PMID:27355431
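The vowel space area used as a segmental measure in the study above is conventionally computed as the area of the polygon formed by the corner vowels in the F1/F2 plane. A minimal sketch using the shoelace formula; the formant values below are illustrative textbook values, not data from the study:

```python
def vowel_space_area(formants):
    """Area of the polygon whose vertices are (F1, F2) pairs, via the
    shoelace formula. Vertices must be listed in order around the
    polygon (e.g. /i/, /ae/, /a/, /u/)."""
    n = len(formants)
    area = 0.0
    for i in range(n):
        f1_a, f2_a = formants[i]
        f1_b, f2_b = formants[(i + 1) % n]
        area += f1_a * f2_b - f1_b * f2_a
    return abs(area) / 2.0

# Illustrative adult corner-vowel formants in Hz (hypothetical values)
corners = [(270, 2290), (660, 1720), (730, 1090), (300, 870)]  # /i/, /ae/, /a/, /u/
print(vowel_space_area(corners))  # area in Hz^2
```

A larger area generally indicates more dispersed corner vowels, which is why the overenunciate condition elicited the greatest change on this measure.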

  18. Cross-linguistic studies of children’s and adults’ vowel spaces

    PubMed Central

    Chung, Hyunju; Kong, Eun Jong; Edwards, Jan; Weismer, Gary; Fourakis, Marios; Hwang, Youngdeok

    2012-01-01

    This study examines cross-linguistic variation in the location of shared vowels in the vowel space across five languages (Cantonese, American English, Greek, Japanese, and Korean) and three age groups (2-year-olds, 5-year-olds, and adults). The vowels /a/, /i/, and /u/ were elicited in familiar words using a word repetition task. The productions of target words were recorded and transcribed by native speakers of each language. For correctly produced vowels, first and second formant frequencies were measured. In order to remove the effect of vocal tract size on these measurements, a normalization approach that calculates distance and angular displacement from the speaker centroid was adopted. Language-specific differences in the location of shared vowels in formant space, as well as in the shape of the vowel spaces, were observed for both adults and children. PMID:22280606
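The centroid-based normalization described above can be sketched as follows: each vowel's (F1, F2) point is re-expressed as a distance and an angle relative to the speaker's own centroid, so that overall vocal-tract-size differences cancel out. This is a minimal illustration of the idea, not the authors' exact procedure, and the formant values are hypothetical:

```python
import math

def normalize_vowel_space(formants):
    """Re-express each vowel's (F1, F2) as (distance, angle) from the
    speaker's centroid. `formants` maps vowel labels to (F1, F2) in Hz."""
    f1c = sum(f1 for f1, _ in formants.values()) / len(formants)
    f2c = sum(f2 for _, f2 in formants.values()) / len(formants)
    out = {}
    for vowel, (f1, f2) in formants.items():
        d = math.hypot(f1 - f1c, f2 - f2c)          # distance from centroid
        theta = math.atan2(f1 - f1c, f2 - f2c)      # angular displacement, radians
        out[vowel] = (d, theta)
    return out

# Hypothetical single-speaker corner vowels in Hz
speaker = {"a": (730, 1090), "i": (270, 2290), "u": (300, 870)}
print(normalize_vowel_space(speaker))
```

Because distance and angle are computed per speaker, a child's small vowel space and an adult's large one become directly comparable.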

  19. The Interplay between Input and Initial Biases: Asymmetries in Vowel Perception during the First Year of Life

    ERIC Educational Resources Information Center

    Pons, Ferran; Albareda-Castellot, Barbara; Sebastian-Galles, Nuria

    2012-01-01

    Vowels with extreme articulatory-acoustic properties act as natural referents. Infant perceptual asymmetries point to an underlying bias favoring these referent vowels. However, as language experience is gathered, distributional frequency of speech sounds could modify this initial bias. The perception of the /i/-/e/ contrast was explored in 144…

  20. Call Me Alix, Not Elix: Vowels Are More Important than Consonants in Own-Name Recognition at 5 Months

    ERIC Educational Resources Information Center

    Bouchon, Camillia; Floccia, Caroline; Fux, Thibaut; Adda-Decker, Martine; Nazzi, Thierry

    2015-01-01

    Consonants and vowels differ acoustically and articulatorily, but also functionally: Consonants are more relevant for lexical processing, and vowels for prosodic/syntactic processing. These functional biases could be powerful bootstrapping mechanisms for learning language, but their developmental origin remains unclear. The relative importance of…

  1. Acoustic emissions verification testing of International Space Station experiment racks at the NASA Glenn Research Center Acoustical Testing Laboratory

    NASA Astrophysics Data System (ADS)

    Akers, James C.; Passe, Paul J.; Cooper, Beth A.

    2005-09-01

    The Acoustical Testing Laboratory (ATL) at the NASA John H. Glenn Research Center (GRC) in Cleveland, OH, provides acoustic emission testing and noise control engineering services for a variety of specialized customers, particularly developers of equipment and science experiments manifested for NASA's manned space missions. The ATL's primary customer has been the Fluids and Combustion Facility (FCF), a multirack microgravity research facility being developed at GRC for the USA Laboratory Module of the International Space Station (ISS). Since opening in September 2000, ATL has conducted acoustic emission testing of components, subassemblies, and partially populated FCF engineering model racks. The culmination of this effort has been the acoustic emission verification tests on the FCF Combustion Integrated Rack (CIR) and Fluids Integrated Rack (FIR), employing a procedure that incorporates ISO 11201 (``Acoustics-Noise emitted by machinery and equipment-Measurement of emission sound pressure levels at a work station and at other specified positions-Engineering method in an essentially free field over a reflecting plane''). This paper will provide an overview of the test methodology, software, and hardware developed to perform the acoustic emission verification tests on the CIR and FIR flight racks and lessons learned from these tests.

  2. Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing

    NASA Technical Reports Server (NTRS)

    Nance, Donald K.; Liever, Peter A.

    2015-01-01

    The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and subsequent propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test (SMAT), conducted at Marshall Space Flight Center (MSFC). The test data quantifies the effectiveness of the SLS IOP suppression system and improves the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.

  3. Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing

    NASA Technical Reports Server (NTRS)

    Nance, Donald; Liever, Peter; Nielsen, Tanner

    2015-01-01

    The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and subsequent propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test, conducted at Marshall Space Flight Center. The test data quantifies the effectiveness of the SLS IOP suppression system and improves the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.

  4. Final Vowel-Consonant-e.

    ERIC Educational Resources Information Center

    Burmeister, Lou E.

    The utility value of the final vowel-consonant-e phonic generalization was examined using 2,715 common English words. When the vowel was defined as a single-vowel, the consonant as a single-consonant, and the final e as a single-e the generalization was found to be highly useful, contrary to other recent findings. Using the total sample of 2,715…

  5. Measurement and Characterization of Space Shuttle Solid Rocket Motor Plume Acoustics

    NASA Technical Reports Server (NTRS)

    Kenny, Robert Jeremy

    2009-01-01

    NASA's current models to predict lift-off acoustics for launch vehicles are currently being updated using several numerical and empirical inputs. One empirical input comes from free-field acoustic data measured at three Space Shuttle Reusable Solid Rocket Motor (RSRM) static firings. The measurements were collected by a joint collaboration between NASA - Marshall Space Flight Center, Wyle Labs, and ATK Launch Systems. For the first time NASA measured large-thrust solid rocket motor plume acoustics for evaluation of both noise sources and acoustic radiation properties. Over sixty acoustic free-field measurements were taken over the three static firings to support evaluation of acoustic radiation near the rocket plume, far-field acoustic radiation patterns, plume acoustic power efficiencies, and apparent noise source locations within the plume. At approximately 67 m off nozzle centerline and 70 m downstream of the nozzle exit plane, the measured overall sound pressure level of the RSRM was 155 dB. Peak overall levels in the far field were over 140 dB at 300 m and 50-deg off of the RSRM thrust centerline. The successful collaboration has yielded valuable data that are being implemented into NASA's lift-off acoustic models, which will then be used to update predictions for Ares I and Ares V liftoff acoustic environments.
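Overall sound pressure levels like the 155 dB figure quoted above relate RMS acoustic pressure to the standard 20 µPa reference. A minimal sketch of the standard conversion (the RMS pressure below is a back-of-envelope value chosen to land near 155 dB, not measured data):

```python
import math

def oaspl_db(p_rms_pa, p_ref=20e-6):
    """Overall sound pressure level in dB for an RMS pressure in pascals,
    referenced to 20 micropascals (the standard reference in air)."""
    return 20.0 * math.log10(p_rms_pa / p_ref)

# An RMS pressure of roughly 1.1 kPa corresponds to about 155 dB:
print(oaspl_db(1.12e3))
```

The logarithmic scale is why a 15 dB drop between the near-plume and far-field measurements represents a large reduction in acoustic pressure.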

  6. The prosodic licensing of coda consonants in early speech: interactions with vowel length.

    PubMed

    Miles, Kelly; Yuen, Ivan; Cox, Felicity; Demuth, Katherine

    2015-05-28

    English has a word-minimality requirement that all open-class lexical items must contain at least two moras of structure, forming a bimoraic foot (Hayes, 1995). Thus, a word with either a long vowel, or a short vowel and a coda consonant, satisfies this requirement. This raises the question of when and how young children might learn this language-specific constraint, and if they would use coda consonants earlier and more reliably after short vowels compared to long vowels. To evaluate this possibility we conducted an elicited imitation experiment with 15 two-year-old Australian English-speaking children, using both perceptual and acoustic analysis. As predicted, the children produced codas more often when preceded by short vowels. The findings suggest that English-speaking two-year-olds are sensitive to language-specific lexical constraints, and are more likely to use coda consonants when prosodically required.

  7. Acoustic interference and recognition space within a complex assemblage of dendrobatid frogs.

    PubMed

    Amézquita, Adolfo; Flechas, Sandra Victoria; Lima, Albertina Pimentel; Gasser, Herbert; Hödl, Walter

    2011-10-11

    In species-rich assemblages of acoustically communicating animals, heterospecific sounds may constrain not only the evolution of signal traits but also the much less-studied signal-processing mechanisms that define the recognition space of a signal. To test the hypothesis that the recognition space is optimally designed, i.e., that it is narrower toward the species that represent the higher potential for acoustic interference, we studied an acoustic assemblage of 10 diurnally active frog species. We characterized their calls, estimated pairwise correlations in calling activity, and, to model the recognition spaces of five species, conducted playback experiments with 577 synthetic signals on 531 males. Acoustic co-occurrence was not related to multivariate distance in call parameters, suggesting a minor role for spectral or temporal segregation among species uttering similar calls. In most cases, the recognition space overlapped but was greater than the signal space, indicating that signal-processing traits do not act as strictly matched filters against sounds other than homospecific calls. Indeed, the range of the recognition space was strongly predicted by the acoustic distance to neighboring species in the signal space. Thus, our data provide compelling evidence of a role of heterospecific calls in evolutionarily shaping the frogs' recognition space within a complex acoustic assemblage without obvious concomitant effects on the signal.

  8. Changing space and sound: Parametric design and variable acoustics

    NASA Astrophysics Data System (ADS)

    Norton, Christopher William

    This thesis examines the potential for parametric design software to create performance based design using acoustic metrics as the design criteria. A former soundstage at the University of Southern California used by the Thornton School of Music is used as a case study for a multiuse space for orchestral, percussion, master class and recital use. The criteria used for each programmatic use include reverberation time, bass ratio, and the early energy ratios of the clarity index and objective support. Using a panelized ceiling as a design element to vary the parameters of volume, panel orientation and type of absorptive material, the relationships between these parameters and the design criteria are explored. These relationships and subsequently derived equations are applied to Grasshopper parametric modeling software for Rhino 3D (a NURBS modeling software). Using the target reverberation time and bass ratio for each programmatic use as input for the parametric model, the genomic optimization function of Grasshopper - Galapagos - is run to identify the optimum ceiling geometry and material distribution.

  9. Two cross-linguistic factors underlying tongue shapes for vowels

    SciTech Connect

    Nix, D.A.; Papcun, G.; Hogden, J.; Zlokarnik, I.

    1996-06-01

    Desirable characteristics of a vocal-tract parametrization include accuracy, low dimensionality, and generalizability across speakers and languages. A low-dimensional, speaker-independent linear parametrization of vowel tongue shapes can be obtained using the PARAFAC three-mode factor analysis procedure. Harshman et al. applied PARAFAC to midsagittal x-ray vowel data from five English speakers, reporting that two speaker-independent factors are required to accurately represent the tongue shape measured along anatomically normalized vocal-tract diameter grid lines. Subsequently, the cross-linguistic generality of this parametrization was brought into question by the application of PARAFAC to Icelandic vowel data, where three nonorthogonal factors were reported. This solution is shown to be degenerate; a reanalysis of Jackson's Icelandic data produces two factors that match Harshman et al.'s factors for English vowels, contradicting Jackson's distinction between English and Icelandic language-specific 'articulatory primes.' To obtain vowel factors not constrained by artificial measurement grid lines, x-ray tongue shape traces of six English speakers were marked with 13 equally spaced points. PARAFAC analysis of this unconstrained (x,y) coordinate data results in two factors that are clearly interpretable in terms of the traditional vowel quality dimensions front/back, high/low. 14 refs., 8 figs., 2 tabs.
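PARAFAC decomposes a three-mode array (speaker x vowel x coordinate) into speaker-independent factors; the two-mode special case of finding a low-dimensional linear parametrization can be sketched with an ordinary SVD. The synthetic data below stands in for flattened 13-point (x, y) tongue traces and is purely illustrative; this is the simpler two-mode analogue, not the PARAFAC procedure itself:

```python
import numpy as np

# Synthetic stand-in for tongue-shape data: rows = vowel tokens,
# columns = 13 (x, y) fleshpoints flattened to 26 coordinates.
rng = np.random.default_rng(0)
true_factors = rng.normal(size=(2, 26))    # two underlying tongue-shape factors
weights = rng.normal(size=(40, 2))         # per-token factor weights
shapes = weights @ true_factors + 0.01 * rng.normal(size=(40, 26))

# A two-factor linear parametrization via SVD of the mean-centered data.
centered = shapes - shapes.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
var_explained = (s[:2] ** 2).sum() / (s ** 2).sum()
print(f"variance explained by two factors: {var_explained:.3f}")
```

With data generated from two factors, the first two singular vectors recover nearly all the variance, mirroring the finding that two factors suffice for both English and Icelandic tongue shapes.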

  10. Training Japanese listeners to identify American English vowels

    NASA Astrophysics Data System (ADS)

    Nishi, Kanae; Kewley-Port, Diane

    2005-04-01

    Perception training of phonemes by second language (L2) learners has been studied primarily using consonant contrasts, where the number of contrasting sounds rarely exceeds five. In order to investigate the effects of stimulus sets, this training study used two conditions: 9 American English vowels covering the entire vowel space (9V), and 3 difficult vowels for problem-focused training (3V). Native speakers of Japanese were trained for nine days. To assess changes in performance due to training, a battery of perception and production tests was given pre- and post-training, as well as 3 months following training. The 9V trainees improved vowel perception on all vowels after training, on average by 23%. Their performance at the 3-month test was slightly worse than the posttest, but still better than the pretest. Transfer of the training effect to stimuli spoken by new speakers was observed. The strong response bias observed in the pretest disappeared after the training. The preliminary results of the 3V trainees showed substantial improvement only on the trained vowels. The implications of this research for improved training of L2 learners to understand speech will be discussed. [Work supported by NIH-NIDCD DC-006313 & DC-02229.]

  11. Perception of steady-state vowels and vowelless syllables by adults and children

    NASA Astrophysics Data System (ADS)

    Nittrouer, Susan

    2005-04-01

    Vowels can be produced as long, isolated, and steady-state, but that is not how they are found in natural speech. Instead natural speech consists of almost continuously changing (i.e., dynamic) acoustic forms from which mature listeners recover underlying phonetic form. Some theories suggest that children need steady-state information to recognize vowels (and so learn vowel systems), even though that information is sparse in natural speech. The current study examined whether young children can recover vowel targets from dynamic forms, or whether they need steady-state information. Vowel recognition was measured for adults and children (3, 5, and 7 years) for natural productions of /dæd/, /dUd/, /æ/, and /U/, edited to make six stimulus sets: three dynamic (whole syllables; syllables with middle 50-percent replaced by cough; syllables with all but the first and last three pitch periods replaced by cough), and three steady-state (natural, isolated vowels; reiterated pitch periods from those vowels; reiterated pitch periods from the syllables). Adults scored nearly perfectly on all but the first/last three pitch period stimuli. Children performed nearly perfectly only when the entire syllable was heard, and performed similarly (near 80%) for all other stimuli. Consequently, children need dynamic forms to perceive vowels; steady-state forms are not preferred.

  12. Cross-language comparisons of contextual variation in the production and perception of vowels

    NASA Astrophysics Data System (ADS)

    Strange, Winifred

    2005-04-01

    In the last two decades, a considerable amount of research has investigated second-language (L2) learners' problems with perception and production of non-native vowels. Most studies have been conducted using stimuli in which the vowels are produced and presented in simple, citation-form (lists) monosyllabic or disyllabic utterances. In my laboratory, we have investigated the spectral (static/dynamic formant patterns) and temporal (syllable duration) variation in vowel productions as a function of speech style (list/sentence utterances), speaking rate (normal/rapid), sentence focus (narrow focus/post-focus), and phonetic context (voicing/place of surrounding consonants). Data will be presented for a set of languages that include large and small vowel inventories, stress-, syllable-, and mora-timed prosody, and that vary in the phonological/phonetic function of vowel length, diphthongization, and palatalization. Results show language-specific patterns of contextual variation that affect the cross-language acoustic similarity of vowels. Research on cross-language patterns of perceived phonetic similarity by naive listeners suggests that listeners' knowledge of native language (L1) patterns of contextual variation influences their L1/L2 similarity judgments and subsequently, their discrimination of L2 contrasts. Implications of these findings for assessing L2 learners' perception of vowels and for developing laboratory training procedures to improve L2 vowel perception will be discussed. [Work supported by NIDCD.]

  13. Rate and onset cues can improve cochlear implant synthetic vowel recognition in noise.

    PubMed

    Mc Laughlin, Myles; Reilly, Richard B; Zeng, Fan-Gang

    2013-03-01

    Understanding speech-in-noise is difficult for most cochlear implant (CI) users. Speech-in-noise segregation cues are well understood for acoustic hearing but not for electric hearing. This study investigated the effects of stimulation rate and onset delay on synthetic vowel-in-noise recognition in CI subjects. In experiment I, synthetic vowels were presented at 50, 145, or 795 pulse/s and noise at the same three rates, yielding nine combinations. Recognition improved significantly if the noise had a lower rate than the vowel, suggesting that listeners can use temporal gaps in the noise to detect a synthetic vowel. This hypothesis is supported by accurate prediction of synthetic vowel recognition using a temporal integration window model. Using lower rates, a similar trend was observed in normal-hearing subjects. Experiment II found that for CI subjects, a vowel onset delay improved performance if the noise had a lower or higher rate than the synthetic vowel. These results show that differing rates or onset times can improve synthetic vowel-in-noise recognition, indicating a need to develop speech processing strategies that encode or emphasize these cues.

  14. Nasalization of vowels in nasal environments in babbling: evidence for frame dominance.

    PubMed

    Matyear, C L; MacNeilage, P F; Davis, B L

    1998-01-01

    An emerging concept for the characterization of the form of babbling and early speech is 'frame dominance': most of the variance arises from a frame provided by open-close mandibular oscillation. In contrast, the tongue - the most versatile articulator in adults - plays only a minor role in intersegmental and even intersyllabic changes. The contribution of another articulator - the soft palate - to time-domain changes in babbling was evaluated in an acoustic analysis of 433 consonant-vowel-consonant sequences produced by 3 infants. Strong nasal effects on vowels in symmetrical consonantal environment were observed in the form of a lower frequency first formant region in low vowels and a lower frequency second formant region in front vowels. These results, the first of which also occurs in adults, were complemented by perceptual tendencies for transcribers to transcribe more mid vowels relative to low vowels and more central vowels relative to front vowels in nasal environments. Thus the soft palate is like the tongue in making only minor contributions to time-domain changes in babbling, and this is considered to be additional evidence for the frame dominance conception.

  15. Sparseness of vowel category structure: Evidence from English dialect comparison

    PubMed Central

    Scharinger, Mathias; Idsardi, William J.

    2014-01-01

    Current models of speech perception tend to emphasize either fine-grained acoustic properties or coarse-grained abstract characteristics of speech sounds. We argue for a particular kind of 'sparse' vowel representations and provide new evidence that these representations account for the successful access of the corresponding categories. In an auditory semantic priming experiment, American English listeners made lexical decisions on targets (e.g. load) preceded by semantically related primes (e.g. pack). Changes of the prime vowel that crossed a vowel-category boundary (e.g. peck) were not treated as a tolerable variation, as assessed by a lack of priming, although the phonetic categories of the two different vowels considerably overlap in American English. Compared to the outcome of the same experiment with New Zealand English listeners, where such prime variations were tolerated, our experiment supports the view that phonological representations are important in guiding the mapping process from the acoustic signal to an abstract mental representation. Our findings are discussed with regard to current models of speech perception and recent findings from brain imaging research. PMID:24653528

  16. Acoustic wave network and multivariate analysis for biosensing in space

    NASA Astrophysics Data System (ADS)

    Jayarajah, Christine N.; Thompson, Michael

    2005-03-01

    Bioanalytical techniques play an important role in monitoring the effects of environmental stress factors on fundamental life processes. In terms of space flight and extraterrestrial research, radiation, altered gravity, and microgravity are known to induce changes in gene expression. We report the use of an on-line transverse shear mode (TSM) acoustic wave biosensor to detect the initiation of gene transcription and DNA - drug binding. Since this biosensor offers real-time, label-free monitoring of biological processes, it is possible to detect sequential binding steps as demonstrated in this paper. Furthermore, this sensor responds to several factors in the liquid phase such as viscosity, elasticity, surface tension, charge distribution, and mass loading, which can in turn be influenced by specific gravity. The sensing device is a piezoelectric quartz crystal onto which the probe molecule (DNA in this case) is immobilized. The change in resonance frequency of the crystal in response to the binding of the target molecule(s), RNA polymerase and actinomycin-D, is fit to an equivalent circuit model from which multidimensional data is extracted. By performing multivariate analysis on this data we are able to observe interactions between several of these data series representing parameters such as motional resistance and capacitance. As well, we are able to observe the dominating parameters (for instance, frequency vs. motional resistance, which in turn can correspond to mass loading vs. energy dissipation) during the course of the experiment, as they vary between the different steps. Such advantages offered by the TSM sensor along with multivariate analysis are indispensable for biotechnological work under the influence of microgravity as several variables come into play.

  17. Investigating Interaural Frequency-Place Mismatches via Bimodal Vowel Integration

    PubMed Central

    Santurette, Sébastien; Chalupper, Josef; Dau, Torsten

    2014-01-01

    For patients having residual hearing in one ear and a cochlear implant (CI) in the opposite ear, interaural place-pitch mismatches might be partly responsible for the large variability in individual benefit. Behavioral pitch-matching between the two ears has been suggested as a way to individualize the fitting of the frequency-to-electrode map but is rather tedious and unreliable. Here, an alternative method using two-formant vowels was developed and tested. The interaural spectral shift was inferred by comparing vowel spaces, measured by presenting the first formant (F1) to the nonimplanted ear and the second (F2) on either side. The method was first evaluated with eight normal-hearing listeners and vocoder simulations, before being tested with 11 CI users. Average vowel distributions across subjects showed a similar pattern when presenting F2 on either side, suggesting acclimatization to the frequency map. However, individual vowel spaces with F2 presented to the implant did not allow a reliable estimation of the interaural mismatch. These results suggest that, although interaural frequency-place mismatches can in principle be derived from such vowel spaces, the method remains limited by difficulties in the bimodal fusion of the two formants. PMID:25421087

  18. Acoustic vibration analysis for utilization of woody plant in space environment

    NASA Astrophysics Data System (ADS)

    Chida, Yukari; Yamashita, Masamichi; Hashimoto, Hirofumi; Sato, Seigo; Tomita-Yokotani, Kaori; Baba, Keiichi; Suzuki, Toshisada; Motohashi, Kyohei; Sakurai, Naoki; Nakagawa-izumi, Akiko

    2012-07-01

    We propose raising woody plants for space agriculture on Mars. Space agriculture includes the utilization of wood in its ecosystem. The actual shape a tree takes when grown under the low- or microgravity conditions of a space environment is unknown. Angiosperm trees form tension wood to maintain their shape. Tension wood formation is deeply related to gravity, but the details of the mechanism of its formation have not yet been clarified. As a first step toward clarifying this mechanism, an experiment aboard the International Space Station (ISS) is the best approach, and it is necessary to establish an easy method for the crews who will conduct the experiments on the ISS. Here, we investigate the suitability of acoustic vibration analysis for such an ISS experiment. Two types of Japanese cherry tree, weeping and upright types of Prunus sp., were analyzed by the acoustic vibration method, and the coefficient of variation (CV) of sound speed was calculated. The amounts of lignin and decomposed lignin were estimated by the Klason and Py-GC/MS methods, respectively. The relationships between the results of the acoustic vibration analysis and the inner components of the tested woody materials were investigated, and correlations between them were confirmed. Our results indicate that acoustic vibration analysis would be useful as a nondestructive method for determining internal composition in an outer space environment.

  19. Perception of English vowels by bilingual Chinese-English and corresponding monolingual listeners.

    PubMed

    Yang, Jing; Fox, Robert A

    2014-06-01

    This study compares the underlying perceptual structure of vowel perception in monolingual Chinese, monolingual English and bilingual Chinese-English listeners. Of particular interest is how listeners' spatial organization of vowels is affected either by their L1 or by their experience with L2. Thirteen English vowels, /i, ɪ, e, ɛ, æ, u, ʊ, o, ʌ, ɑ, ɔɪ, ɑɪ, ɑʊ/, embedded in /hVd/ syllables produced by an Ohio male speaker were presented in pairs to three groups of listeners. Each listener rated 312 vowel pairs on a nine-point dissimilarity scale. The responses from each group were analyzed using a multidimensional scaling program (ALSCAL). Results demonstrated that all three groups of listeners used high/low and front/back distinctions as the two most important dimensions in perceiving English vowels. However, the vowels were distributed in clusters in the perceptual space of Chinese monolinguals, while they were appropriately separated and located in that of bilinguals and English monolinguals. Besides the two common perceptual dimensions, each group of listeners utilized a different third dimension to perceive these English vowels. English monolinguals used high-front offset. Bilinguals used a dimension mainly correlated with the monophthong/diphthong distinction. Chinese monolinguals separated two high vowels, /i/ and /u/, from the rest of the vowels in the third dimension. The difference between English monolinguals and Chinese monolinguals evidenced the effect of listeners' native language on vowel perception. The difference between Chinese monolinguals and bilingual listeners, as well as the approximation of bilingual listeners' perceptual space to that of English monolinguals, demonstrated the effect of L2 experience on listeners' perception of L2 vowels.

  20. A Design Process for the Acoustical System of an Enclosed Space Colony

    NASA Technical Reports Server (NTRS)

    Hawke, Joanne

    1981-01-01

    Sounds of Silence. Using a general systems approach, factors and components of the acoustical design process for an isolated, confined space community in a torus space enclosure are considered. These components include the following: organizational structure and its effect on alternatives; problem definition and limits; criteria and priorities; methods of data gathering; modelling and measurement of the whole system and its components; decision methods; and design scenario of the acoustics of the complex, socio-technical space community system with emphasis on the human factors.

  1. Preliminary characterization of a one-axis acoustic system. [acoustic levitation for space processing]

    NASA Technical Reports Server (NTRS)

    Oran, W. A.; Reiss, D. A.; Berge, L. H.; Parker, H. W.

    1979-01-01

    The acoustic fields and levitation forces produced along the axis of a single-axis resonance system were measured. The system consisted of a St. Clair generator and a planar reflector. The levitation force was measured for bodies of various sizes and geometries (i.e., spheres, cylinders, and discs). The force was found to be roughly proportional to the volume of the body until the characteristic body radius reaches approximately 2/k (k = wave number). The acoustic pressures along the axis were modeled using Huygens principle and a method of imaging to approximate multiple reflections. The modeled pressures were found to be in reasonable agreement with those measured with a calibrated microphone.
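    The reported breakdown of the volume-proportional force law at a characteristic radius of about 2/k lends itself to a quick numerical check. A minimal sketch, assuming a nominal 20 kHz drive frequency and room-temperature air (c ≈ 343 m/s); both values are illustrative, not taken from the paper:

    ```python
    import math

    def critical_radius(freq_hz, c=343.0):
        """Radius (m) at which the volume-proportional levitation-force law
        breaks down, per the rule of thumb r_crit ~ 2/k, with k = 2*pi*f/c."""
        k = 2.0 * math.pi * freq_hz / c  # acoustic wave number (rad/m)
        return 2.0 / k

    # Illustrative: a nominal 20 kHz single-axis levitator in air
    print(f"r_crit = {critical_radius(20e3) * 1e3:.2f} mm")  # about 5.46 mm
    ```

    Bodies much smaller than this radius sit well within the force-proportional-to-volume regime described above.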

  2. Technical Aspects of Acoustical Engineering for the ISS [International Space Station

    NASA Technical Reports Server (NTRS)

    Allen, Christopher S.

    2009-01-01

    It is important to control acoustic levels on manned space flight vehicles and habitats to protect crew hearing, allow for voice communications, and ensure a healthy and habitable environment in which to work and live. For the International Space Station (ISS) this is critical because of the long-duration crew stays of approximately six months. NASA and the JSC Acoustics Office set acoustic requirements that must be met for hardware to be certified for flight. Modules must meet the NC-50 requirement, and other component hardware is assigned smaller allocations to meet. In order to meet these requirements, many aspects of noise generation and control must be considered. This presentation has been developed to give insight into the various technical activities performed at JSC to ensure that a suitable acoustic environment is provided for the ISS crew. Examples discussed include fan noise, acoustic flight material development, on-orbit acoustic monitoring, and a specific hardware development and acoustical design case, the ISS Crew Quarters.

  3. Detection of Impact Damage on Space Shuttle Structures Using Acoustic Emission

    NASA Astrophysics Data System (ADS)

    Madaras, Eric I.; Prosser, William H.; Gorman, Michael R.

    2005-04-01

    Studies of the acoustic signals originating from impact damage on Space Shuttle components were undertaken. Sprayed-on foam insulation and small aluminum spheres were used as impactors. Shuttle reinforced carbon-carbon panels, panels with Shuttle thermal protection tiles, and Shuttle main landing gear doors with tiles were targets. Ballistic-speed and hypervelocity impacts over a wide range of impactor sizes, energies, and angles were tested. Additional tests were conducted to correlate the acoustic response of the test articles to actual Shuttle structures.

  4. International Space Station USOS Crew Quarters Ventilation and Acoustic Design Implementation

    NASA Technical Reports Server (NTRS)

    Broyan, James Lee, Jr.

    2009-01-01

    The International Space Station (ISS) United States Operational Segment (USOS) has four permanent rack-sized ISS Crew Quarters (CQ) providing private crewmember spaces. The CQ uses Node 2 cabin air for ventilation/thermal cooling, as opposed to conditioned ducted air from the ISS Temperature Humidity Control System or ISS fluid cooling loop connections. Consequently, the CQ can only increase the air flow rate to reduce the temperature delta between the cabin and the CQ interior. However, increasing airflow increases acoustic noise, so efficient airflow distribution is an important design parameter. The CQ utilizes a two-fan push-pull configuration to ensure fresh air at the crewmember's head position and reduce acoustic exposure. The CQ interior needs to be below Noise Curve 40 (NC-40). The CQ ventilation ducts are open to the significantly louder Node 2 cabin aisle way, which required significant acoustic mitigation. The design implementation of the CQ ventilation system and acoustic mitigation are closely interrelated and require consideration of crew comfort balanced against use of interior habitable volume, accommodation of fan failures, and possible crew uses that impact ventilation and acoustic performance. This paper illustrates the types of model analysis, assumptions, vehicle interactions, and trade-offs required for CQ ventilation and acoustics. Additionally, on-orbit ventilation system performance and initial crew feedback are presented. This approach is applicable to any private enclosed space that the crew will occupy.
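    The NC-40 requirement mentioned above is, in practice, a set of octave-band sound pressure level limits that a measured spectrum must fall under. A minimal sketch of such a check; the limit values below are approximate NC-40 figures as commonly tabulated, and the measured spectrum is hypothetical (neither is taken from this paper):

    ```python
    # Approximate NC-40 octave-band SPL limits (dB re 20 uPa), as commonly
    # tabulated; illustrative values, not the ISS requirement text.
    NC40_LIMITS = {63: 64, 125: 56, 250: 50, 500: 45,
                   1000: 41, 2000: 39, 4000: 38, 8000: 37}

    def meets_nc40(levels_db):
        """True if every measured octave-band level is at or below the limit."""
        return all(level <= NC40_LIMITS[band] for band, level in levels_db.items())

    # Hypothetical spectrum measured inside a crew quarters
    measured = {63: 60, 125: 54, 250: 48, 500: 44,
                1000: 40, 2000: 36, 4000: 33, 8000: 30}
    print(meets_nc40(measured))  # True
    ```

    The same pattern applies to the NC-50 module-level requirement, with a higher limit curve.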

  5. Magnetic brain response mirrors extraction of phonological features from spoken vowels.

    PubMed

    Obleser, Jonas; Lahiri, Aditi; Eulitz, Carsten

    2004-01-01

    This study further elucidates determinants of vowel perception in the human auditory cortex. The vowel inventory of a given language can be classified on the basis of phonological features which are closely linked to acoustic properties. A cortical representation of speech sounds based on these phonological features might explain the surprisingly inverse correlation between immense variance in the acoustic signal and high accuracy of speech recognition. We investigated timing and mapping of the N100m elicited by 42 tokens of seven natural German vowels varying along the phonological features tongue height (corresponding to the frequency of the first formant) and place of articulation (corresponding to the frequencies of the second and third formants). Auditory evoked fields were recorded using a 148-channel whole-head magnetometer while subjects performed target vowel detection tasks. Source location differences appeared to be driven by place of articulation: vowels with mutually exclusive place of articulation features, namely coronal and dorsal, elicited separate centers of activation along the posterior-anterior axis. Additionally, the time course of activation as reflected in the N100m peak latency distinguished between vowel categories, especially when the spatial distinctiveness of cortical activation was low. In sum, results suggest that both N100m latency and source location, as well as their interaction, reflect properties of speech stimuli that correspond to abstract phonological features.

  6. Gender difference in speech intelligibility using speech intelligibility tests and acoustic analyses

    PubMed Central

    2010-01-01

    PURPOSE The purpose of this study was to compare men with women in terms of speech intelligibility, to investigate the validity of objective acoustic parameters related to speech intelligibility, and to establish baseline data for future studies in various fields of prosthodontics. MATERIALS AND METHODS Twenty men and women served as subjects in the present study. After recording of sample sounds, speech intelligibility tests by three speech pathologists and acoustic analyses were performed. Comparisons of the speech intelligibility test scores and acoustic parameters such as fundamental frequency, fundamental frequency range, formant frequency, formant ranges, vowel working space area, and vowel dispersion were made between men and women. In addition, the correlations between the speech intelligibility values and acoustic variables were analyzed. RESULTS Women showed significantly higher speech intelligibility scores than men, and there were significant differences between men and women in most of the acoustic parameters used in the present study. However, the correlations between the speech intelligibility scores and acoustic parameters were low. CONCLUSION The speech intelligibility test and acoustic parameters used in the present study were effective in differentiating male from female voices, and their values might be used in future studies of patients involved with maxillofacial prosthodontics. However, further studies are needed on the correlation between speech intelligibility tests and objective acoustic parameters. PMID:21165272
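    The vowel working space area listed among the acoustic parameters is conventionally computed as the area of the polygon spanned by the corner vowels in the F1/F2 plane, e.g. via the shoelace formula. A minimal sketch; the formant values are illustrative adult-male figures, not data from this study:

    ```python
    def vowel_space_area(corners):
        """Shoelace area (Hz^2) of the (F1, F2) polygon; points must be
        given in order around the polygon."""
        n = len(corners)
        s = 0.0
        for i in range(n):
            x1, y1 = corners[i]
            x2, y2 = corners[(i + 1) % n]
            s += x1 * y2 - x2 * y1
        return abs(s) / 2.0

    # Illustrative (F1, F2) corner vowels in Hz: /i/, /ae/, /a/, /u/
    corners = [(270, 2290), (660, 1720), (730, 1090), (300, 870)]
    print(f"VSA = {vowel_space_area(corners):.0f} Hz^2")  # VSA = 411500 Hz^2
    ```

    A triangular vowel space (e.g. /i, a, u/ only) is handled by the same function with three points.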

  7. Vowel perception by noise masked normal-hearing young adults

    NASA Astrophysics Data System (ADS)

    Richie, Carolyn; Kewley-Port, Diane; Coughlin, Maureen

    2005-08-01

    This study examined vowel perception by young normal-hearing (YNH) adults, in various listening conditions designed to simulate mild-to-moderate sloping sensorineural hearing loss. YNH listeners were individually age- and gender-matched to young hearing-impaired (YHI) listeners tested in a previous study [Richie et al., J. Acoust. Soc. Am. 114, 2923-2933 (2003)]. YNH listeners were tested in three conditions designed to create equal audibility with the YHI listeners: a low signal level with and without a simulated hearing loss, and a high signal level with a simulated hearing loss. Listeners discriminated changes in synthetic vowel tokens /ɪ, e, ɛ, ʌ, æ/ when F1 or F2 varied in frequency. Comparison of the YNH and YHI results failed to reveal significant differences between groups in vowel discrimination under conditions of similar audibility, achieved by using noise masking to elevate the hearing thresholds of the YNH listeners and frequency-specific gain for the YHI listeners. Further, analysis of learning curves suggests that, while the YHI listeners completed an average of 46% more test blocks than the YNH listeners, the YHI achieved a level of discrimination similar to that of the YNH within the same number of blocks. Apparently, when age and gender are closely matched between young hearing-impaired and normal-hearing adults, performance on vowel tasks may be explained by audibility alone.

  8. Changes in Wisconsin English over 110 Years: A Real-Time Acoustic Account

    ERIC Educational Resources Information Center

    Delahanty, Jennifer

    2011-01-01

    The growing set of studies on American regional dialects have to date focused heavily on vowels while few examine consonant features and none provide acoustic analysis of both vowel and consonant features. This dissertation uses real-time data on both vowels and consonants to show how Wisconsin English has changed over time. Together, the…

  9. Fan Acoustic Issues in the NASA Space Flight Experience

    NASA Technical Reports Server (NTRS)

    Allen, Christopher S.; Goodman, Jerry

    2008-01-01

    Emphasis needs to be placed on choosing quiet fans compatible with systems design, and on specifications that control source levels: a) sound power; b) choose a quiet fan, or plan to quiet it, early in the program; c) plan early verification that fan source allocations are met. Airborne noise: a) the system design should function together with the fans used (flow passages, restrictions, bends, expansions and contractions, and acoustics), with behavior vs. fan speed understood (nominal, worst case, and unplanned variances); b) fan inlets treated, as required; c) fan outlets treated, as required; d) ducted system inlets and outlets designed for acoustic compliance and compatibility, and designed so that late required modifications can be made without significant impacts. Structure-borne noise: a) structure-borne noise dealt with as part of the fan package or installation; b) duct attachments and lines isolated. Case-radiated noise: treatment added as much as possible to the fan package (see example).

  10. Vowel production in Korean, Korean-accented English, and American English

    NASA Astrophysics Data System (ADS)

    Lee, Jimin; Weismer, Gary

    2005-09-01

    The current study compares vowel formant frequencies and durations produced by ten native speakers of Korean, those same speakers producing American English vowels, and ten native speakers of American English. The Korean speakers were chosen carefully to have a minimum of 2 and a maximum of 5 years of residence in the United States; all speakers were between the ages of 22 and 27. In addition, the native speakers of Korean were chosen, by means of a small-scale dialect-severity experiment, from a larger pool of speakers to achieve some homogeneity in their mastery of English phonetics. The full vowel systems of both languages were explored, and a rate condition was included (conversational versus fast) to test the hypothesis that the English vowel space is modified by rate differently for native speakers of Korean producing English versus native speakers of English. Results will be discussed in terms of language- and rate-induced adjustments of the vowel systems under study.

  11. Vowel category dependence of the relationship between palate height, tongue height, and oral area.

    PubMed

    Hasegawa-Johnson, Mark; Pizza, Shamala; Alwan, Abeer; Cha, Jul Setsu; Haker, Katherine

    2003-06-01

    This article evaluates intertalker variance of oral area, logarithm of the oral area, tongue height, and formant frequencies as a function of vowel category. The data consist of coronal magnetic resonance imaging (MRI) sequences and acoustic recordings of 5 talkers, each producing 11 different vowels. Tongue height (left, right, and midsagittal), palate height, and oral area were measured in 3 coronal sections anterior to the oropharyngeal bend and were subjected to multivariate analysis of variance, variance ratio analysis, and regression analysis. The primary finding of this article is that oral area (between palate and tongue) showed less intertalker variance during production of vowels with an oral place of articulation (palatal and velar vowels) than during production of vowels with a uvular or pharyngeal place of articulation. Although oral area variance is place dependent, percentage variance (log area variance) is not place dependent. Midsagittal tongue height in the molar region was positively correlated with palate height during production of palatal vowels, but not during production of nonpalatal vowels. Taken together, these results suggest that small oral areas are characterized by relatively talker-independent vowel targets and that meeting these talker-independent targets is important enough that each talker adjusts his or her own tongue height to compensate for talker-dependent differences in constriction anatomy. Computer simulation results are presented to demonstrate that these results may be explained by an acoustic control strategy: When talkers with very different anatomical characteristics try to match talker-independent formant targets, the resulting area variances are minimized near the primary vocal tract constriction.

  12. Speech intelligibility, speaking rate, and vowel formant characteristics in Mandarin-speaking children with cochlear implant.

    PubMed

    Chuang, Hsiu-Feng; Yang, Cheng-Chieh; Chi, Lin-Yang; Weismer, Gary; Wang, Yu-Tsai

    2012-04-01

    The effects of the use of a cochlear implant (CI) on speech intelligibility, speaking rate, and vowel formant characteristics, and the relationships among these measures in children, are clinically important. The purposes of this study were to report comparisons of speaking rate and vowel space area, and their relationship with speech intelligibility, between 24 Mandarin-speaking children with CI and 24 age-, sex-, and education-level-matched normal hearing (NH) controls. Participants were audio recorded as they read a designed Mandarin intelligibility test, repeated prolongation of each of the three point vowels /i/, /a/, and /u/ five times, and repeated each of three sentences carrying one point vowel five times. Compared to the NH group, the CI group exhibited: (1) mild-to-moderate speech intelligibility impairment; (2) significantly reduced speaking rate, mainly due to significantly longer inter-word pauses and a larger pause proportion; and (3) significantly less vowel reduction in the horizontal dimension in sustained vowel phonation. The limited development of speech intelligibility in children after cochlear implantation was related to atypical patterns and a smaller degree of vowel reduction, and to a slower speaking rate resulting from less efficient articulatory movement transitions.

  13. Mixing fuel particles for space combustion research using acoustics

    NASA Technical Reports Server (NTRS)

    Burns, Robert J.; Johnson, Jerome A.; Klimek, Robert B.

    1988-01-01

    Part of the microgravity science to be conducted aboard the Shuttle (STS) involves combustion using solids, particles, and liquid droplets. The central experimental facts needed for characterization of premixed quiescent particle cloud flames cannot be adequately established by normal gravity studies alone. The experimental results to date of acoustically mixing a prototypical particulate, lycopodium, in a 5 cm diameter by 75 cm long flame tube aboard a Learjet aircraft flying a 20 sec low gravity trajectory are described. Photographic and light detector instrumentation combine to measure and characterize particle cloud uniformity.

  14. Micropropulsion by an acoustic bubble for navigating microfluidic spaces.

    PubMed

    Feng, Jian; Yuan, Junqi; Cho, Sung Kwon

    2015-03-21

    This paper describes an underwater micropropulsion principle where a gaseous bubble trapped in a suspended microchannel and oscillated by external acoustic excitation generates a propelling force. The propelling swimmer is designed and microfabricated from parylene on the microscale (the equivalent diameter of the cylindrical bubble is around 60 μm) using microphotolithography. The propulsion mechanism is studied and verified by computational fluid dynamics (CFD) simulations as well as experiments. The acoustically excited and thus periodically oscillating bubble generates alternating flows of intake and discharge through an opening of the microchannel. As the Reynolds number of oscillating flow increases, the difference between the intake and discharge flows becomes significant enough to generate a net flow (microstreaming flow) and a propulsion force against the channel. As the size of the device is reduced, however, the Reynolds number is also reduced. To maintain the Reynolds number in a certain range and thus generate a strong propulsion force in the fabricated device, the oscillation amplitude of the bubble is maximized (resonated) and the oscillation frequency is set high (over 10 kHz). Propelling motions by a single bubble as well as an array of bubbles are achieved on the microscale. In addition, the microswimmer demonstrates payload carrying. This propulsion mechanism may be applied to microswimmers that navigate microfluidic environments and possibly narrow passages in human bodies to perform biosensing, drug delivery, imaging, and microsurgery.
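    The dependence of the net streaming force on the oscillating-flow Reynolds number described above can be illustrated with a back-of-the-envelope estimate. A minimal sketch using one common definition, Re = ωa²/ν; the 30 μm radius and 10 kHz drive are illustrative values in the range the abstract mentions, not figures from the paper:

    ```python
    import math

    def oscillatory_reynolds(freq_hz, radius_m, nu=1.0e-6):
        """Oscillatory Reynolds number Re = omega * a^2 / nu for a bubble of
        radius a driven at frequency f in water (nu ~ 1e-6 m^2/s)."""
        return 2.0 * math.pi * freq_hz * radius_m ** 2 / nu

    # Illustrative: ~30 um bubble radius, 10 kHz excitation, in water
    print(f"Re ~ {oscillatory_reynolds(10e3, 30e-6):.1f}")  # about 56.5
    ```

    The quadratic dependence on radius shows why shrinking the device drives Re down, and why the authors compensate with resonance and higher drive frequency.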

  15. Acoustic response modeling of energetics systems in confined spaces

    NASA Astrophysics Data System (ADS)

    González, David R.; Hixon, Ray; Liou, William W.; Sanford, Matthew

    2007-04-01

    In recent times, warfighting has been taking place not in far-removed areas but within urban environments. As a consequence, the modern warfighter must adapt. Currently, an effort is underway to develop shoulder-mounted rocket launcher rounds with reduced acoustic signatures suitable for use in such environments. Of prime importance is ensuring that the acoustic levels, generated by propellant burning, reflections from enclosures, etc., are tolerable without requiring excessive hearing protection. Presented below is a proof-of-concept approach aimed at developing a computational tool to aid in the design process. Unsteady, perfectly expanded jet simulations at two different Mach numbers, and one at an elevated temperature ratio, were conducted using an existing computational aeroacoustics code. From the solutions, sound pressure levels and frequency spectra were then obtained. The results were compared to sound pressure levels collected from a live-fire test of the weapon. Lastly, an outline of work to be continued and completed in the near future is presented.
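    The sound pressure levels compared above follow from pressure time histories via SPL = 20·log10(p_rms/p_ref), with p_ref = 20 μPa in air. A minimal sketch (the 2 Pa rms input is an illustrative value, not one from the study):

    ```python
    import math

    P_REF = 20e-6  # standard reference rms pressure in air, 20 micropascals

    def spl_db(p_rms):
        """Sound pressure level in dB re 20 uPa."""
        return 20.0 * math.log10(p_rms / P_REF)

    print(f"{spl_db(2.0):.1f} dB")  # a 2 Pa rms signal -> 100.0 dB
    ```

    Each factor-of-ten increase in rms pressure adds 20 dB, which is why enclosure reflections that raise pressure only modestly can still push levels past hearing-protection thresholds.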

  16. Acoustics of the human middle-ear air space.

    PubMed

    Stepp, Cara E; Voss, Susan E

    2005-08-01

    The impedance of the middle-ear air space was measured on three human cadaver ears with complete mastoid air-cell systems. Below 500 Hz, the impedance is approximately compliance-like, and at higher frequencies (500-6000 Hz) the impedance magnitude has several (five to nine) extrema. Mechanisms for these extrema are identified and described through circuit models of the middle-ear air space. The measurements demonstrate that the middle-ear air space impedance can affect the middle-ear impedance at the tympanic membrane by as much as 10 dB at frequencies greater than 1000 Hz. Thus, variations in the middle-ear air space impedance that result from variations in anatomy of the middle-ear air space can contribute to inter-ear variations in both impedance measurements and otoacoustic emissions, when measured at the tympanic membrane.
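    The compliance-like low-frequency behavior reported here corresponds to an acoustic impedance of magnitude |Z| = 1/(ωC), with compliance C = V/(ρc²) for an air cavity. A minimal sketch; the 6 cm³ cavity volume is an illustrative middle-ear-plus-mastoid figure, not a measurement from this paper:

    ```python
    import math

    def cavity_compliance(volume_m3, rho=1.2, c=343.0):
        """Acoustic compliance C = V / (rho * c^2) of an air cavity (m^3/Pa)."""
        return volume_m3 / (rho * c ** 2)

    def compliance_impedance_mag(freq_hz, compliance):
        """|Z| = 1 / (omega * C) for a purely compliance-like air space."""
        return 1.0 / (2.0 * math.pi * freq_hz * compliance)

    # Illustrative: ~6 cm^3 middle-ear-plus-mastoid air volume at 250 Hz
    C = cavity_compliance(6e-6)
    print(f"|Z| at 250 Hz = {compliance_impedance_mag(250.0, C):.2e} Pa*s/m^3")
    ```

    Since |Z| scales as 1/V, anatomical variation in mastoid air-cell volume directly shifts this low-frequency impedance, consistent with the inter-ear variations the authors describe.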

  17. Modeling of oropharyngeal articulatory adaptation to compensate for the acoustic effects of nasalization.

    PubMed

    Rong, Panying; Kuehn, David P; Shosted, Ryan K

    2016-09-01

    Hypernasality is one of the most detrimental speech disturbances that lead to declines of speech intelligibility. Velopharyngeal inadequacy, which is associated with anatomic defects such as cleft palate or neuromuscular disorders that affect velopharyngeal function, is the primary cause of hypernasality. A simulation study by Rong and Kuehn [J. Speech Lang. Hear. Res. 55(5), 1438-1448 (2012)] demonstrated that properly adjusted oropharyngeal articulation can reduce nasality for vowels synthesized with an articulatory model [Mermelstein, J. Acoust. Soc. Am. 53(4), 1070-1082 (1973)]. In this study, a speaker-adaptive articulatory model was developed to simulate speaker-customized oropharyngeal articulatory adaptation to compensate for the acoustic effects of nasalization on /a/, /i/, and /u/. The results demonstrated that (1) the oropharyngeal articulatory adaptation effectively counteracted the effects of nasalization on the second lowest formant frequency (F2) and partially compensated for the effects of nasalization on vowel space (e.g., shifting and constriction of vowel space) and (2) the articulatory adaptation strategies generated by the speaker-adaptive model might be more efficacious for counteracting the acoustic effects of nasalization compared to the adaptation strategies generated by the standard articulatory model in Rong and Kuehn. The findings of this study indicated the potential of using oropharyngeal articulatory adaptation as a means to correct maladaptive articulatory behaviors and to reduce nasality.

  18. Capturing the acoustic response of historical spaces for interactive music performance and recording

    NASA Astrophysics Data System (ADS)

    Woszczyk, Wieslaw; Martens, William

    2004-10-01

    Performers engaged in musical recording while located in relatively dry recording studios generally find their performance facilitated when they are provided with synthetic reverberation. This well-established practice is extended in the project described here to include highly realistic virtual acoustic recreation of the original rooms in which Haydn taught his students to play the pianoforte. The project has two primary components, the first of which is to capture for posterity the acoustic response of historical rooms that may no longer be available or functional for performance. The project's second component is to reproduce as accurately as possible the virtual acoustic interactions between a performer and the re-created acoustic space as the performer moves, during performance, relative to the instrument and the boundaries of the surrounding enclosure. In the first of two presentations on this ongoing project, the method for measurement of broadband impulse responses for these historical rooms is described. The test signal is radiated by a group of omnidirectional loudspeakers approximating the layout and the complex directional radiation pattern of the pianoforte, and the room response is sampled by a spaced microphone array. The companion presentation will describe the method employed for virtual acoustic reproduction for the performer.
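    Once a room's impulse response has been captured in this way, the acoustic signature of the space can later be imposed on a dry recording by convolution. A minimal direct-form sketch on toy sequences (a practical implementation would use FFT-based convolution on sampled audio; the signals here are illustrative):

    ```python
    def convolve(dry, ir):
        """Direct-form convolution of a dry signal with a measured room
        impulse response; output length is len(dry) + len(ir) - 1."""
        out = [0.0] * (len(dry) + len(ir) - 1)
        for i, x in enumerate(dry):
            for j, h in enumerate(ir):
                out[i + j] += x * h
        return out

    # Toy example: a click plus a quieter click, through a 3-tap "room"
    dry = [1.0, 0.0, 0.5]
    ir = [1.0, 0.0, 0.3]  # direct sound plus one hypothetical reflection
    print(convolve(dry, ir))  # [1.0, 0.0, 0.8, 0.0, 0.15]
    ```

    Each input sample launches a scaled, delayed copy of the impulse response, which is exactly what a listener in the measured room would hear.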

  19. Auditory, Visual, and Auditory-Visual Perception of Vowels by Hearing-Impaired Children.

    ERIC Educational Resources Information Center

    Hack, Zarita Caplan; Erber, Norman P.

    1982-01-01

    Vowels were presented through auditory, visual, and auditory-visual modalities to 18 hearing impaired children (12 to 15 years old) having good, intermediate, and poor auditory word recognition skills. All the groups had difficulty with acoustic information and visual information alone. The first two groups had only moderate difficulty identifying…

  20. The effects of tongue loading and auditory feedback on vowel production.

    PubMed

    Leung, Man-Tak; Ciocca, Valter

    2011-01-01

    This study investigated the role of sensory feedback during the production of front vowels. A temporary aftereffect induced by tongue loading was employed to modify the somatosensory-based perception of tongue height. Following the removal of tongue loading, tongue height during vowel production was estimated by measuring the frequency of the first formant (F1) from the acoustic signal. In experiment 1, the production of front vowels following tongue loading was investigated either in the presence or absence of auditory feedback. With auditory feedback available, the tongue height of front vowels was not modified by the aftereffect of tongue loading. By contrast, speakers did not compensate for the aftereffect of tongue loading when they produced vowels in the absence of auditory feedback. In experiment 2, the characteristics of the masking noise were manipulated such that it masked energy either in the F1 region or in the region of the second and higher formants. The results showed that the adjustment of tongue height during the production of front vowels depended on information about F1 in the auditory feedback. These findings support the idea that speech goals include both auditory and somatosensory targets and that speakers are able to make use of information from both sensory modalities to maximize the accuracy of speech production.

  1. Speaker Age and Vowel Perception

    ERIC Educational Resources Information Center

    Drager, Katie

    2011-01-01

    Recent research provides evidence that individuals shift in their perception of variants depending on social characteristics attributed to the speaker. This paper reports on a speech perception experiment designed to test the degree to which the age attributed to a speaker influences the perception of vowels undergoing a chain shift. As a result…

  2. Acoustic levitation technique for containerless processing at high temperatures in space

    NASA Technical Reports Server (NTRS)

    Rey, Charles A.; Merkley, Dennis R.; Hammarlund, Gregory R.; Danley, Thomas J.

    1988-01-01

    High temperature processing of a small specimen without a container has been demonstrated in a set of experiments using an acoustic levitation furnace in the microgravity of space. This processing technique includes the positioning, heating, melting, cooling, and solidification of a material supported without physical contact with a container or other surface. The specimen is supported in a potential energy well, created by an acoustic field, which is sufficiently strong to position the specimen in the microgravity environment of space. This containerless processing apparatus has been successfully tested on the Space Shuttle during the STS-61A mission. In that experiment, three samples were successfully levitated and processed at temperatures from 600 to 1500 °C. Experiment data and results are presented.

  3. NONLINEAR BEHAVIOR OF BARYON ACOUSTIC OSCILLATIONS IN REDSHIFT SPACE FROM THE ZEL'DOVICH APPROXIMATION

    SciTech Connect

    McCullagh, Nuala; Szalay, Alexander S.

    2015-01-10

    Baryon acoustic oscillations (BAO) are a powerful probe of the expansion history of the universe, which can tell us about the nature of dark energy. In order to accurately characterize the dark energy equation of state using BAO, we must understand the effects of both nonlinearities and redshift space distortions on the location and shape of the acoustic peak. In a previous paper, we introduced a novel approach to second order perturbation theory in configuration space using the Zel'dovich approximation, and presented a simple result for the first nonlinear term of the correlation function. In this paper, we extend this approach to redshift space. We show how to perform the computation and present the analytic result for the first nonlinear term in the correlation function. Finally, we validate our result through comparison with numerical simulations.

  4. An assessment of the microgravity and acoustic environments in Space Station Freedom using VAPEPS

    NASA Astrophysics Data System (ADS)

    Bergen, Thomas F.; Scharton, Terry D.; Badilla, Gloria A.

    The Vibroacoustic Payload Environment Prediction System (VAPEPS) was used to predict the stationary on-orbit environments in one of the Space Station Freedom modules. The model of the module included the outer structure, equipment and payload racks, avionics, and cabin air and duct systems. Acoustic and vibratory outputs of various source classes were derived and input to the model. Initial results of analyses, performed in one-third octave frequency bands from 10 to 10,000 Hz, show that both the microgravity and acoustic environments will be exceeded in some one-third octave bands with the current SSF design. Further analyses indicate that interior acoustic level requirements will be exceeded even if the microgravity requirements are met.

  5. Measurement and Characterization of Space Shuttle Solid Rocket Motor Plume Acoustics

    NASA Technical Reports Server (NTRS)

    Kenny, Jeremy; Hobbs, Chris; Plotkin, Ken; Pilkey, Debbie

    2009-01-01

    Lift-off acoustic environments generated by the future Ares I launch vehicle are assessed by the NASA Marshall Space Flight Center (MSFC) acoustics team using several prediction tools. This acoustic environment is directly caused by the Ares I First Stage booster, powered by the five-segment Reusable Solid Rocket Motor (RSRMV). The RSRMV is a larger-thrust derivative design from the currently used Space Shuttle solid rocket motor, the Reusable Solid Rocket Motor (RSRM). Lift-off acoustics is an integral part of the composite launch vibration environment affecting the Ares launch vehicle and must be assessed to help generate hardware qualification levels and ensure structural integrity of the vehicle during launch and lift-off. Available prediction tools that use free field noise source spectrums as a starting point for generation of lift-off acoustic environments are described in the monograph NASA SP-8072, "Acoustic Loads Generated by the Propulsion System." This monograph uses a reference database of free field noise source spectrums consisting of subscale rocket motor firings oriented in horizontal static configurations. The phrase "subscale" is appropriate, since the thrust levels of rockets in the reference database are orders of magnitude lower than the current design thrust for the Ares launch family. Thus, extrapolation is needed to extend the various reference curves to match Ares-scale acoustic levels. This extrapolation process adds further uncertainty to the acoustic environment predictions. As the Ares launch vehicle design schedule progresses, it is important to take every opportunity to lower prediction uncertainty and subsequently increase prediction accuracy. Never before in NASA's history has plume acoustics been measured for large-scale solid rocket motors. Approximately twice a year, the RSRM prime vendor, ATK Launch Systems, static fires an assembled RSRM motor in a horizontal configuration at their test facility

  6. Vowel reduction in word-final position by early and late Spanish-English bilinguals.

    PubMed

    Byers, Emily; Yavas, Mehmet

    2017-01-01

    Vowel reduction is a prominent feature of American English, as well as other stress-timed languages. As a phonological process, vowel reduction neutralizes multiple vowel quality contrasts in unstressed syllables. For bilinguals whose native language is not characterized by large spectral and durational differences between tonic and atonic vowels, systematically reducing unstressed vowels to the central vowel space can be problematic. Failure to maintain this pattern of stressed-unstressed syllables in American English is one key element that contributes to a "foreign accent" in second language speakers. Reduced vowels, or "schwas," have also been identified as particularly vulnerable to the co-articulatory effects of adjacent consonants. The current study examined the effects of adjacent sounds on the spectral and temporal qualities of schwa in word-final position. Three groups of English-speaking adults were tested: Miami-based monolingual English speakers, early Spanish-English bilinguals, and late Spanish-English bilinguals. Subjects performed a reading task to examine their schwa productions in fluent speech when schwas were preceded by consonants from various points of articulation. Results indicated that monolingual English and late Spanish-English bilingual groups produced targeted vowel qualities for schwa, whereas early Spanish-English bilinguals lacked homogeneity in their vowel productions. This extends prior claims that schwa is targetless for F2 position for native speakers to highly-proficient bilingual speakers. Though spectral qualities lacked homogeneity for early Spanish-English bilinguals, early bilinguals produced schwas with near native-like vowel duration. In contrast, late bilinguals produced schwas with significantly longer durations than English monolinguals or early Spanish-English bilinguals. Our results suggest that the temporal properties of a language are better integrated into second language phonologies than spectral qualities

  7. From prosodic structure to acoustic saliency: A fMRI investigation of speech rate, clarity, and emphasis

    NASA Astrophysics Data System (ADS)

    Golfinopoulos, Elisa

    Acoustic variability in fluent speech can arise at many stages in speech production planning and execution. For example, at the phonological encoding stage, the grouping of phonemes into syllables determines which segments are coarticulated and, by consequence, segment-level acoustic variation. Likewise phonetic encoding, which determines the spatiotemporal extent of articulatory gestures, will affect the acoustic detail of segments. Functional magnetic resonance imaging (fMRI) was used to measure brain activity of fluent adult speakers in four speaking conditions: fast, normal, clear, and emphatic (or stressed) speech. These speech manner changes typically result in acoustic variations that do not change the lexical or semantic identity of productions but do affect the acoustic saliency of phonemes, syllables and/or words. Acoustic responses recorded inside the scanner were assessed quantitatively using eight acoustic measures and sentence duration was used as a covariate of non-interest in the neuroimaging analysis. Compared to normal speech, emphatic speech was characterized acoustically by a greater difference between stressed and unstressed vowels in intensity, duration, and fundamental frequency, and neurally by increased activity in right middle premotor cortex and supplementary motor area, and bilateral primary sensorimotor cortex. These findings are consistent with right-lateralized motor planning of prosodic variation in emphatic speech. Clear speech involved an increase in average vowel and sentence durations and average vowel spacing, along with increased activity in left middle premotor cortex and bilateral primary sensorimotor cortex. These findings are consistent with an increased reliance on feedforward control, resulting in hyper-articulation, under clear as compared to normal speech. Fast speech was characterized acoustically by reduced sentence duration and average vowel spacing, and neurally by increased activity in left anterior frontal

  8. Effects of bite blocks and hearing status on vowel production

    NASA Astrophysics Data System (ADS)

    Lane, Harlan; Denny, Margaret; Guenther, Frank H.; Matthies, Melanie L.; Menard, Lucie; Perkell, Joseph S.; Stockmann, Ellen; Tiede, Mark; Vick, Jennell; Zandipour, Majid

    2005-09-01

    This study explores the effects of hearing status and bite blocks on vowel production. Normal-hearing controls and postlingually deaf adults read elicitation lists of /hVd/ syllables with and without bite blocks and auditory feedback. Deaf participants' auditory feedback was provided by a cochlear prosthesis and interrupted by switching off their implant microphones. Recording sessions were held before prosthesis was provided and one month and one year after. Long-term absence of auditory feedback was associated with heightened dispersion of vowel tokens, which was inflated further by inserting bite blocks. The restoration of some hearing with prosthesis reduced dispersion. Deaf speakers' vowel spaces were reduced in size compared to controls. Insertion of bite blocks reduced them further because of the speakers' incomplete compensation. A year of prosthesis use increased vowel contrast with feedback during elicitation. These findings support the inference that models of speech production must assign a role to auditory feedback in error-based correction of feedforward commands for subsequent articulatory gestures.
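
    The vowel space size referred to above is commonly quantified as the vowel space area (VSA): the area of the polygon spanned by the corner vowels in the F1/F2 plane. A minimal sketch of that computation using the shoelace formula; the corner-vowel formant values are illustrative placeholders (classic adult-male averages), not data from this study:

```python
def vowel_space_area(formants):
    # Shoelace formula for the area of the polygon spanned by the
    # corner vowels in the F1/F2 plane (units: Hz^2).
    # `formants`: (F1, F2) pairs ordered around the perimeter.
    area = 0.0
    n = len(formants)
    for i in range(n):
        f1a, f2a = formants[i]
        f1b, f2b = formants[(i + 1) % n]
        area += f1a * f2b - f1b * f2a
    return abs(area) / 2.0

# Illustrative corner-vowel means (Hz), in perimeter order
# /i/, /ae/, /a/, /u/ -- placeholder values, not data from this study.
corners = [(270, 2290), (660, 1720), (730, 1090), (300, 870)]
vsa = vowel_space_area(corners)
print(vsa)  # 411500.0
```

    A shrinking of the quadrilateral, as reported for the deaf speakers here, shows up directly as a smaller VSA value.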

  9. Vowel-specific mismatch responses in the anterior superior temporal gyrus: an fMRI study.

    PubMed

    Leff, Alexander P; Iverson, Paul; Schofield, Thomas M; Kilner, James M; Crinion, Jennifer T; Friston, Karl J; Price, Cathy J

    2009-04-01

    There have been many functional imaging studies that have investigated the neural correlates of speech perception by contrasting neural responses to speech and "speech-like" but unintelligible control stimuli. A potential drawback of this approach is that intelligibility is necessarily conflated with a change in the acoustic parameters of the stimuli. The approach we have adopted is to take advantage of the mismatch response elicited by an oddball paradigm to probe neural responses in temporal lobe structures to a parametrically varied set of deviants in order to identify brain regions involved in vowel processing. Thirteen normal subjects were scanned using a functional magnetic resonance imaging (fMRI) paradigm while they listened to continuous trains of auditory stimuli. Three classes of stimuli were used: 'vowel deviants' and two classes of control stimuli: one acoustically similar ('single formants') and the other distant (tones). The acoustic differences between the standard and deviants in both the vowel and single-formant classes were designed to match each other closely. The results revealed an effect of vowel deviance in the left anterior superior temporal gyrus (aSTG). This was most significant when comparing all vowel deviants to standards, irrespective of their psychoacoustic or physical deviance. We also identified a correlation between perceptual discrimination and deviant-related activity in the dominant superior temporal sulcus (STS), although this effect was not stimulus specific. The responses to vowel deviants were in brain regions implicated in the processing of intelligible or meaningful speech, part of the so-called auditory "what" processing stream. Neural components of this pathway would be expected to respond to sudden, perhaps unexpected changes in speech signal that result in a change to narrative meaning.

  10. Acoustic Modeling and Analysis for the Space Shuttle Main Propulsion System Liner Crack Investigation

    NASA Technical Reports Server (NTRS)

    Casiano, Matthew J.; Zoladz, Tom F.

    2004-01-01

    Cracks were found on bellows flow liners in the liquid hydrogen feedlines of several space shuttle orbiters in 2002. An effort to characterize the fluid environment upstream of the space shuttle main engine low-pressure fuel pump was undertaken to help identify the cause of the cracks and also provide quantitative environments and loads of the region. Part of this effort was to determine the duct acoustics several inches upstream of the low-pressure fuel pump in the region of a bellows joint. A finite element model of the complicated geometry was made using three-dimensional fluid elements. The model was used to describe acoustics in the complex geometry and played an important role in the investigation. Acoustic mode shapes and natural frequencies of the liquid hydrogen in the duct and in the cavity behind the flow liner were determined. Forced response results were generated also by applying an edgetone-like forcing to the liner slots. Studies were conducted for state conditions and also conditions assuming two-phase entrapment in the backing cavity. Highly instrumented single-engine hot fire data confirms the presence of some of the predicted acoustic modes.

  11. The acoustics of short circular holes opening to confined and unconfined spaces

    NASA Astrophysics Data System (ADS)

    Yang, Dong; Morgans, Aimee S.

    2017-04-01

    The sound generated or absorbed by short circular holes with a mean flow passing through them is relevant in many practical applications. Analytical models for their acoustic response often ignore the fact that such holes open to a confined or finite space either side, or account for this effect simply by adding an end mass inertial correction. The vortex-sound interaction within a short hole has been recently shown to strongly affect the acoustic response at low frequencies (D. Yang, A.S. Morgans, J. Sound Vib. 384 (2016) 294-311 [19]). The present study considers a semi-analytical model based on Green's function method to investigate how the expansion ratios either side of a short hole affect the vortex-sound interaction within it. After accounting for expansions to confined spaces using a cylinder Green's function method, the model is substantially simplified by applying a half-space Green's function for expansions to large spaces. The effect of both the up- and downstream expansion ratios on the acoustics of the hole is investigated. These hole models are then incorporated into a Helmholtz resonator model, allowing a systematic investigation into the effect of neck-to-cavity expansion ratio and neck length. Both of these are found to affect the resonator damping.

  12. Acoustical Testing Laboratory Developed to Support the Low-Noise Design of Microgravity Space Flight Hardware

    NASA Technical Reports Server (NTRS)

    Cooper, Beth A.

    2001-01-01

    The NASA John H. Glenn Research Center at Lewis Field has designed and constructed an Acoustical Testing Laboratory to support the low-noise design of microgravity space flight hardware. This new laboratory will provide acoustic emissions testing and noise control services for a variety of customers, particularly for microgravity space flight hardware that must meet International Space Station limits on noise emissions. These limits have been imposed by the space station to support hearing conservation, speech communication, and safety goals as well as to prevent noise-induced vibrations that could impact microgravity research data. The Acoustical Testing Laboratory consists of a 23 by 27 by 20 ft (height) convertible hemi/anechoic chamber and separate sound-attenuating test support enclosure. Absorptive 34-in. fiberglass wedges in the test chamber provide an anechoic environment down to 100 Hz. A spring-isolated floor system affords vibration isolation above 3 Hz. These criteria, along with very low design background levels, will enable the acquisition of accurate and repeatable acoustical measurements on test articles, up to a full space station rack in size, that produce very little noise. Removable floor wedges will allow the test chamber to operate in either a hemi/anechoic or anechoic configuration, depending on the size of the test article and the specific test being conducted. The test support enclosure functions as a control room during normal operations but, alternatively, may be used as a noise-control enclosure for test articles that require the operation of noise-generating test support equipment.

  13. Acoustic correlates of timbre space dimensions: A confirmatory study using synthetic tones

    NASA Astrophysics Data System (ADS)

    Caclin, Anne; McAdams, Stephen; Smith, Bennett K.; Winsberg, Suzanne

    2005-07-01

    Timbre spaces represent the organization of perceptual distances, as measured with dissimilarity ratings, among tones equated for pitch, loudness, and perceived duration. A number of potential acoustic correlates of timbre-space dimensions have been proposed in the psychoacoustic literature, including attack time, spectral centroid, spectral flux, and spectrum fine structure. The experiments reported here were designed as direct tests of the perceptual relevance of these acoustic parameters for timbre dissimilarity judgments. Listeners presented with carefully controlled synthetic tones use attack time, spectral centroid, and spectrum fine structure in dissimilarity rating experiments. These parameters thus appear to be major determinants of timbre. However, spectral flux appears to be a less salient timbre parameter, its salience depending on the number of other dimensions varying concurrently in the stimulus set. Dissimilarity ratings were analyzed with two different multidimensional scaling models (CLASCAL and CONSCAL), the latter providing psychophysical functions constrained by the physical parameters. Their complementarity is discussed.
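
    Of the correlates listed above, the spectral centroid has a particularly direct definition: the amplitude-weighted mean frequency of the magnitude spectrum. A small sketch on a synthetic tone; the stimulus and sampling rate are illustrative, not the study's controlled tones:

```python
import numpy as np

def spectral_centroid(signal, sr):
    # Amplitude-weighted mean frequency of the magnitude spectrum.
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return np.sum(freqs * spec) / np.sum(spec)

# A 1 s synthetic tone: 440 Hz fundamental plus a weaker octave partial.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
c = spectral_centroid(tone, sr)  # about 587 Hz for this two-partial spectrum
```

    Shifting energy toward higher partials raises the centroid, which is why it tracks perceived brightness in timbre studies.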

  14. Haptic Holography: Acoustic Space and the Evolution of the Whole Message

    NASA Astrophysics Data System (ADS)

    Logan, N.

    2013-02-01

    The paper argues that the Haptic Holography Work Station is an example of a medium that fit's with McLuhan's notion of Acoustic Space, that is it is a medium which stimulates more than one sense of perception at a time. As a result, the Haptic Holography Work Station transmits information about the subject much more rapidly than other media that precedes it, be it text, photography or television.

  15. Fundamental frequency effects on thresholds for vowel formant discrimination.

    PubMed

    Kewley-Port, D; Li, X; Zheng, Y; Neel, A T

    1996-10-01

    The present experiments examined the effect of fundamental frequency (F0) on thresholds for the discrimination of formant frequency for male vowels. Thresholds for formant-frequency discrimination were obtained for six vowels with two fundamental frequencies: normal F0 (126 Hz) and low F0 (101 Hz). Four well-trained subjects performed an adaptive tracking task under low stimulus uncertainty. Comparisons between the normal-F0 and the low-F0 conditions showed that formants were resolved more accurately for low F0. These thresholds for male vowels were compared to thresholds for female vowels previously reported by Kewley-Port and Watson [J. Acoust. Soc. Am. 95, 485-496 (1994)]. Analyses of the F0 sets demonstrated that formant thresholds were significantly degraded for increases both in formant frequency and in F0. A piece-wise linear function was fit to each of the three sets of delta F thresholds as a function of formant frequency. The shape of the three parallel functions was similar such that delta F was constant in the F1 region and increased with formant frequency in the F2 region. The capability for humans to discriminate formant frequency may therefore be described as uniform in the F1 region (< 800 Hz) when represented as delta F and also uniform in the F2 region when represented as a ratio of delta F/F. A model of formant discrimination is proposed in which the effects of formant frequency are represented by the shape of an underlying piece-wise linear function. Increases in F0 significantly degrade overall discrimination independently from formant frequency.
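
    The piece-wise linear threshold function described above (delta F constant in the F1 region, rising with formant frequency in the F2 region) can be sketched as a simple model. The knee at 800 Hz follows the abstract, but the numerical parameters below are hypothetical, not the fitted values from the paper:

```python
import numpy as np

def threshold_model(f, df1, slope, knee=800.0):
    # Piecewise-linear delta-F threshold: constant in the F1 region
    # (below `knee` Hz), rising linearly with formant frequency above it.
    f = np.asarray(f, dtype=float)
    return np.where(f < knee, df1, df1 + slope * (f - knee))

# Hypothetical parameters chosen only to illustrate the reported shape.
formants = np.array([300.0, 500.0, 700.0, 1000.0, 1500.0, 2000.0])
thresholds = threshold_model(formants, df1=14.0, slope=0.01)
```

    Dividing the rising branch by formant frequency would express the F2-region thresholds as the approximately uniform delta F/F ratio the abstract describes.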

  16. A wideband fast multipole boundary element method for half-space/plane-symmetric acoustic wave problems

    NASA Astrophysics Data System (ADS)

    Zheng, Chang-Jun; Chen, Hai-Bo; Chen, Lei-Lei

    2013-04-01

    This paper presents a novel wideband fast multipole boundary element approach to 3D half-space/plane-symmetric acoustic wave problems. The half-space fundamental solution is employed in the boundary integral equations so that the tree structure required in the fast multipole algorithm is constructed for the boundary elements in the real domain only. Moreover, a set of symmetric relations between the multipole expansion coefficients of the real and image domains are derived, and the half-space fundamental solution is modified for the purpose of applying such relations to avoid calculating, translating and saving the multipole/local expansion coefficients of the image domain. The wideband adaptive multilevel fast multipole algorithm associated with the iterative solver GMRES is employed so that the present method is accurate and efficient for both low- and high-frequency acoustic wave problems. As for exterior acoustic problems, the Burton-Miller method is adopted to tackle the fictitious eigenfrequency problem involved in the conventional boundary integral equation method. Details on the implementation of the present method are described, and numerical examples are given to demonstrate its accuracy and efficiency.
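
    For a rigid plane, the half-space fundamental solution used above is the free-field Helmholtz Green's function plus an image-source term, which is what lets the method mesh only the real domain. A minimal numerical sketch, assuming the plane z = 0 and a rigid (Neumann) boundary; the geometry and wavenumber are illustrative:

```python
import numpy as np

def halfspace_green(x, y, k):
    # Free-field 3D Helmholtz Green's function plus the image source
    # mirrored in the rigid plane z = 0 (half-space fundamental solution).
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    y_image = y * np.array([1.0, 1.0, -1.0])   # reflect source across z = 0
    r = np.linalg.norm(x - y)
    r_image = np.linalg.norm(x - y_image)
    return (np.exp(1j * k * r) / (4 * np.pi * r)
            + np.exp(1j * k * r_image) / (4 * np.pi * r_image))

x_field = np.array([0.5, 0.0, 1.0])   # receiver above the plane
y_source = np.array([0.0, 0.0, 0.5])  # source above the plane
g = halfspace_green(x_field, y_source, k=2.0)
```

    Reciprocity (swapping source and receiver leaves the value unchanged) gives a quick sanity check on the image construction.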

  17. The role of vowel quality in stress clash

    NASA Astrophysics Data System (ADS)

    Levey, Sandra Kay

    The effect of stress clash between adjacent primary stressed syllables was examined through perceptual and acoustical analysis. Bisyllabic target words with primary stress on the second syllable were produced in citation form and in sentences by ten adult participants. The selected target words were analyzed for (a) the position of primary stress and (b) the identity of the vowel in the first syllable when produced in citation form. The goal was to determine if primary stress was placed on the final syllable and that the first syllable contained a vowel that could receive primary stress. Words judged not to meet these criteria were eliminated from the corpus. The target words were placed in stress clash contexts (followed by a primary stressed syllable) and in non-clash contexts (followed by a non-primary stressed syllable). The goal was to determine if stress clash resolution would occur, and if so, which of three explanations could account for resolution: (a) stress shift, with primary stress shifted to the first syllable in the target words, or stress deletion, with acoustic features reduced in the second syllable in the target words; (b) pitch accent, taking the form of fundamental frequency, assigned to the first syllable in target words produced in early-sentence position; or (c) increased final syllable duration in the target word. Perceptual judgment showed that stress clash was resolved inconsistently in stress clash contexts, and that stress shift also occurred in non-clash contexts. Acoustic analysis showed that fundamental frequency was higher in the first syllable of target words when stress shift occurred, and that both syllables of the target words were produced with higher fundamental frequency in early-sentence position. A test of the correlation between perceptual judgments and acoustic results showed that fundamental frequency was potentially the primary acoustic feature that signaled the presence of stress shift.

  18. Acoustic Correlates of Emphatic Stress in Central Catalan

    ERIC Educational Resources Information Center

    Nadeu, Marianna; Hualde, Jose Ignacio

    2012-01-01

    A common feature of public speech in Catalan is the placement of prominence on lexically unstressed syllables ("emphatic stress"). This paper presents an acoustic study of radio speech data. Instances of emphatic stress were perceptually identified. Within-word comparison between vowels with emphatic stress and vowels with primary lexical stress…

  19. Peripheral auditory tuning for vowels.

    PubMed

    Namasivayam, Aravind Kumar; Le, Duc James; Hard, Jennifer; Lewis, Samantha Evelyn; Neufeld, Chris; van Lieshout, Pascal

    2013-12-01

    In this study, 35 young, healthy adults were tested on whether speech-like stimuli evoke a unique response in the auditory efferent system. To this end, descending cortical influences on medial olivocochlear (MOC) activity were indirectly evaluated by studying the effects of contralateral suppression on distortion product otoacoustic emissions (DPOAEs) under four conditions: (a) in the absence of any contralateral noise (Baseline), (b) presence of contralateral broadband noise (Noise Baseline), (c) vowel discrimination-in-noise task (VDN) and (d) tone discrimination-in-noise (TDN) task. A statistically significant release from suppression was evident across all tested DPOAE frequencies (1, 1.5 and 2 kHz) only for the VDN task (p < 0.05), which yielded greater release from suppression than the TDN task. These findings indicate that during active listening in the presence of noise, the MOC activity may be differentially modulated depending on the type of stimulus (vowel vs. tone). Specifically, in the presence of background noise, vowels may show a greater release from suppression in the cochlea than frequency, intensity and duration matched tones.

  20. English phonology and an acoustic language universal

    PubMed Central

    Nakajima, Yoshitaka; Ueda, Kazuo; Fujimaru, Shota; Motomura, Hirotoshi; Ohsaka, Yuki

    2017-01-01

    Acoustic analyses of eight different languages/dialects had revealed a language universal: Three spectral factors consistently appeared in analyses of power fluctuations of spoken sentences divided by critical-band filters into narrow frequency bands. Examining linguistic implications of these factors seems important to understand how speech sounds carry linguistic information. Here we show the three general categories of the English phonemes, i.e., vowels, sonorant consonants, and obstruents, to be discriminable in the Cartesian space constructed by these factors: A factor related to frequency components above 3,300 Hz was associated only with obstruents (e.g., /k/ or /z/), and another factor related to frequency components around 1,100 Hz only with vowels (e.g., /a/ or /i/) and sonorant consonants (e.g., /w/, /r/, or /m/). The latter factor highly correlated with the hypothetical concept of sonority or aperture in phonology. These factors turned out to connect the linguistic and acoustic aspects of speech sounds systematically.
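
    The analysis pipeline implied above, computing power fluctuations of band-filtered speech before extracting factors, can be sketched with crude FFT-mask filters standing in for critical-band filtering. The band edges at 1,100 and 3,300 Hz echo the factor frequencies mentioned, but the signal here is placeholder noise and the band layout is a simplifying assumption:

```python
import numpy as np

def band_power_fluctuations(signal, sr, band_edges, frame=0.025, hop=0.010):
    # Split the signal into frequency bands (crude FFT-mask filters as a
    # stand-in for critical-band filtering) and return the short-time
    # power of each band: shape (n_frames, n_bands).
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    flen, hlen = int(frame * sr), int(hop * sr)
    powers = []
    for lo, hi in band_edges:
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spec * mask, n=len(signal))
        frames = [np.mean(band[i:i + flen] ** 2)
                  for i in range(0, len(band) - flen, hlen)]
        powers.append(frames)
    return np.array(powers).T

rng = np.random.default_rng(1)
sig = rng.standard_normal(16000)          # 1 s of noise as a placeholder signal
edges = [(100, 1100), (1100, 3300), (3300, 7000)]  # coarse illustrative bands (Hz)
flux = band_power_fluctuations(sig, 16000, edges)
```

    Factor analysis (or PCA) of the resulting fluctuation series is what would then yield spectral factors of the kind discussed.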

  2. Acoustic wave and eikonal equations in a transformed metric space for various types of anisotropy.

    PubMed

    Noack, Marcus M; Clark, Stuart

    2017-03-01

    Acoustic waves propagating in anisotropic media are important for various applications. Even though these wave phenomena do not generally occur in nature, they can be used to approximate wave motion in various physical settings. We propose a method to derive wave equations for anisotropic wave propagation by adjusting the dispersion relation according to a selected type of anisotropy and transforming it into another metric space. The proposed method allows for the derivation of acoustic wave and eikonal equations for various types of anisotropy, and generalizes anisotropy by interpreting it as a change of the metric instead of a change of velocity with direction. The presented method reduces the scope of acoustic anisotropy to a selection of a velocity or slowness surface and a tensor that describes the transformation into a new metric space. Experiments are shown for spatially dependent ellipsoidal anisotropy in homogeneous and inhomogeneous media and sandstone, which shows vertical transverse isotropy. The results demonstrate the stability and simplicity of the solution process for certain types of anisotropy and the equivalency of the solutions.

  3. Gust Acoustics Computation with a Space-Time CE/SE Parallel 3D Solver

    NASA Technical Reports Server (NTRS)

    Wang, X. Y.; Himansu, A.; Chang, S. C.; Jorgenson, P. C. E.; Reddy, D. R. (Technical Monitor)

    2002-01-01

    The benchmark Problem 2 in Category 3 of the Third Computational Aero-Acoustics (CAA) Workshop is solved using the space-time conservation element and solution element (CE/SE) method. This problem concerns the unsteady response of an isolated finite-span swept flat-plate airfoil bounded by two parallel walls to an incident gust. The acoustic field generated by the interaction of the gust with the flat-plate airfoil is computed by solving the 3D (three-dimensional) Euler equations in the time domain using a parallel version of a 3D CE/SE solver. The effect of the gust orientation on the far-field directivity is studied. Numerical solutions are presented and compared with analytical solutions, showing a reasonable agreement.

  4. Space Shuttle payload bay acoustics prediction study. Volume 3A: Addendum to computer users' manual

    NASA Technical Reports Server (NTRS)

    Wilby, J. F.; Wilby, E. G.

    1983-01-01

    Since the publication of the Computer User's Manual for Payload Acoustics Environment for Shuttle (PACES), the analytical model was validated by means of measured data from the first three shuttle lift-offs. During the validation process, new information became available and five changes were made to the input data and the computer program. Three changes affect the user. They are: a revision to the recommended exterior sound pressure levels, a revision to the recommended payload bay acoustic absorption coefficients, and a revision to the vertical station datum for the payload bay. The two other changes do not involve the user. The changes are associated with the output of confidence limits for the predicted space-average sound pressure levels in the payload bay, and a modification to the analytical representation of the payload bay door. The changes are discussed briefly in this Addendum to the Computer User's Manual.

  5. Continuous and Discrete Space Particle Filters for Predictions in Acoustic Positioning

    NASA Astrophysics Data System (ADS)

    Bauer, Will; Kim, Surrey; Kouritzin, Michael A.

    2002-12-01

    Predicting the future state of a random dynamic signal based on corrupted, distorted, and partial observations is vital for proper real-time control of a system that includes time delay. Motivated by problems from Acoustic Positioning Research Inc., we consider the continual automated illumination of an object moving within a bounded domain, which requires object location prediction due to inherent mechanical and physical time lags associated with robotic lighting. Quality computational predictions demand high fidelity models for the coupled moving object signal and observation equipment pair. In our current problem, the signal represents the vector position, orientation, and velocity of a stage performer. Acoustic observations are formed by timing ultrasonic waves traveling from four perimeter speakers to a microphone attached to the performer. The goal is to schedule lighting movements that are coordinated with the performer by anticipating his/her future position based upon these observations using filtering theory. Particle system based methods have experienced rapid development and have become an essential technique of contemporary filtering strategies. Hitherto, researchers have largely focused on continuous state particle filters, ranging from traditional weighted particle filters to adaptive refining particle filters, readily able to perform path-space estimation and prediction. Herein, we compare the performance of a state-of-the-art refining particle filter to that of a novel discrete-space particle filter on the acoustic positioning problem. By discrete space particle filter we mean a Markov chain that counts particles in discretized cells of the signal state space in order to form an approximated unnormalized distribution of the signal state. For both filters mentioned above, we will examine issues like the mean time to localize a signal, the fidelity of filter estimates at various signal to noise ratios, computational costs, and the effect of signal
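The discrete-space particle filter described above counts particles in discretized cells of the state space to approximate the signal's distribution. A closely related grid (histogram) filter conveys the idea: probabilities over cells are diffused by a transition kernel (predict) and reweighted by an observation likelihood (update). This sketch is illustrative only, with all parameters invented, and is not the authors' Markov-chain particle-count construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretize the 1-D state space [0, 10) into 100 cells.
cells = np.linspace(0.05, 9.95, 100)      # cell centres
p = np.full(100, 1.0 / 100)               # uniform prior over cells

def predict(p, sigma_q=0.3):
    """Diffuse cell probabilities with a Gaussian transition kernel."""
    K = np.exp(-0.5 * ((cells[:, None] - cells[None, :]) / sigma_q) ** 2)
    K /= K.sum(axis=0, keepdims=True)     # each column: transitions out of one cell
    return K @ p

def update(p, y, sigma_r=0.5):
    """Reweight cells by the Gaussian likelihood of observation y."""
    w = np.exp(-0.5 * ((y - cells) / sigma_r) ** 2)
    q = p * w
    return q / q.sum()

# Simulate a slowly drifting target and noisy position observations.
x = 2.0
for _ in range(50):
    x = float(np.clip(x + 0.1 + rng.normal(0, 0.1), 0, 10))
    y = x + rng.normal(0, 0.5)
    p = update(predict(p), y)

estimate = float(cells @ p)               # posterior-mean position
print(estimate, x)
```

For prediction (the lighting-control problem in the record), one would simply apply `predict` a few extra times without an update, propagating the distribution forward over the known time lag.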

  6. Vowel Aperture and Syllable Segmentation in French

    ERIC Educational Resources Information Center

    Goslin, Jeremy; Frauenfelder, Ulrich H.

    2008-01-01

    The theories of Pulgram (1970) suggest that if the vowel of a French syllable is open then it will induce syllable segmentation responses that result in the syllable being closed, and vice versa. After the empirical verification that our target French-speaking population was capable of distinguishing between mid-vowel aperture, we examined the…

  7. Preference patterns in infant vowel perception

    NASA Astrophysics Data System (ADS)

    Molnar, Monika T.; Polka, Linda

    2004-05-01

    Infants show directional asymmetries in vowel discrimination tasks that reveal an underlying perceptual bias favoring more peripheral vowels. Polka and Bohn (2003) propose that this bias is language independent and plays an important role in the development of vowel perception. In the present study we measured infant listening preferences for vowels to assess whether a perceptual bias favoring peripheral vowels can be measured more directly. Monolingual (French and English) and bilingual infants completed a listening preference task using multiple natural tokens of German /dut/ and /dyt/ produced by a male talker. In previous work, discrimination of this vowel pair by German-learning and by English-learning infants revealed a robust directional asymmetry in which /u/ acts as a perceptual anchor; specifically, infants had difficulty detecting a change from /u/ to /y/, whereas a change from /y/ to /u/ was readily detected. Preliminary results from preference tests with these stimuli show that most infants between 3 and 5 months of age also listen longer to /u/ than to /y/. Preference data obtained from older infants and with other vowel pairs will also be reported to further test the claim that peripheral vowels have a privileged perceptual status in infant perception.

  8. Contrastive and contextual vowel nasalization in Ottawa

    NASA Astrophysics Data System (ADS)

    Klopfenstein, Marie

    2005-09-01

    Ottawa is a Central Algonquian language that possesses the recent innovation of contrastive vowel nasalization. Most phonetic studies done to date on contrastive vowel nasalization have investigated Indo-European languages; therefore, a study of Ottawa could prove to be a valuable addition to the literature. To this end, a percentage of nasalization (nasal airflow / (oral + nasal airflow)) was measured during target vowels produced by native Ottawa speakers using a Nasometer 6200-3. Nasalized vowels in the target word set were either contrastively or contextually nasalized: candidates for contextual nasalization were either regressive or perseverative in word-initial and word-final syllables. Subjects were asked to read words containing target vowels in a carrier sentence. Mean, minimum, and maximum nasalance were obtained for each target vowel across its full duration. Target vowels were compared across context (regressive or perseverative and word-initial or word-final). In addition, contexts were compared to determine whether a significant difference existed between contrastive and contextual nasalization. Results for Ottawa will be compared with results for vowels in similar contexts in other languages including Hindi, Breton, Bengali, and French.
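The nasalance measure used above, percent nasalization = nasal airflow / (oral + nasal airflow), together with its mean, minimum, and maximum across a vowel's duration, is straightforward to compute; the frame values below are hypothetical:

```python
import numpy as np

def nasalance(nasal, oral):
    """Per-frame percent nasalance: nasal / (nasal + oral) * 100,
    summarized as mean/min/max over the vowel's duration."""
    nasal = np.asarray(nasal, dtype=float)
    oral = np.asarray(oral, dtype=float)
    pct = 100.0 * nasal / (nasal + oral)
    return {"mean": pct.mean(), "min": pct.min(), "max": pct.max()}

# Hypothetical airflow frames sampled across one target vowel.
nasal_frames = [0.2, 0.5, 0.9, 0.8, 0.3]
oral_frames  = [1.8, 1.5, 1.1, 1.2, 1.7]
stats = nasalance(nasal_frames, oral_frames)
print(stats)   # mean 27.0, min 10.0, max 45.0
```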

  9. Vowel Quantity and Syllabification in English.

    ERIC Educational Resources Information Center

    Hammond, Michael

    1997-01-01

    Argues that there is phonological gemination in English based on distribution of vowel qualities in medial and final syllables. The analysis, cast in terms of optimality theory, has implications in several domains: (1) ambisyllabicity is not the right way to capture aspiration and flapping; (2) languages in which stress depends on vowel quality…

  10. Vibro-Acoustic Analysis of NASA's Space Shuttle Launch Pad 39A Flame Trench Wall

    NASA Technical Reports Server (NTRS)

    Margasahayam, Ravi N.

    2009-01-01

    A vital element to NASA's manned space flight launch operations is the Kennedy Space Center Launch Complex 39's launch pads A and B. Originally designed and constructed in the 1960s for the Saturn V rockets used for the Apollo missions, these pads were modified above grade to support Space Shuttle missions. But below grade, each of the pad's original walls (including a 42-foot-deep, 58-foot-wide, and 450-foot-long tunnel designed to deflect flames and exhaust gases, the flame trench) remained unchanged. On May 31, 2008, during the launch of STS-124, over 3,500 of the 22,000 interlocking refractory bricks that lined the east wall of the flame trench, protecting the pad structure, were liberated from pad 39A. The STS-124 launch anomaly spawned an agency-wide initiative to determine the failure root cause, to assess the impact of debris on vehicle and ground support equipment safety, and to prescribe corrective action. The investigation encompassed radar imaging, infrared video review, debris transport mechanism analysis using computational fluid dynamics, destructive testing, and non-destructive evaluation, including vibroacoustic analysis, in order to validate the corrective action. The primary focus of this paper is on the analytic approach, including static, modal, and vibro-acoustic analysis, required to certify the corrective action, and ensure integrity and operational reliability for future launches. Due to the absence of instrumentation (including pressure transducers, acoustic pressure sensors, and accelerometers) in the flame trench, defining an accurate acoustic signature of the launch environment during shuttle main engine/solid rocket booster ignition and vehicle ascent posed a significant challenge. Details of the analysis, including the derivation of launch environments, the finite element approach taken, and analysis/test/launch data correlation are discussed. Data obtained from the recent launch of STS-126 from Pad 39A was instrumental in validating the

  11. Identification and Multiplicity of Double Vowels in Cochlear Implant Users

    ERIC Educational Resources Information Center

    Kwon, Bomjun J.; Perry, Trevor T.

    2014-01-01

    Purpose: The present study examined cochlear implant (CI) users' perception of vowels presented concurrently (i.e., "double vowels") to further our understanding of auditory grouping in electric hearing. Method: Identification of double vowels and single vowels was measured with 10 CI subjects. Fundamental frequencies (F0s) of…

  12. A Study of the Pronunciation of Words Containing Adjacent Vowels.

    ERIC Educational Resources Information Center

    Greif, Ivo P.

    To determine the usefulness of the commonly taught phonics rule, "only pronounce the first vowel in words that contain adjacent vowels" (the VV rule, with the first "v" pronounced with the long vowel sound), two new studies applied it to words with adjacent vowels in several lists and dictionaries. The first study analyzed words containing…

  13. [Analysis of dysarthria in amyotrophic lateral sclerosis--MRI of the tongue and formant analysis of vowels].

    PubMed

    Watanabe, S; Arasaki, K; Nagata, H; Shouji, S

    1994-03-01

    To evaluate dysarthria in patients with ALS, we used MRI (gradient rephasing echo method) and compared it with computed acoustic analysis. Five male ALS patients of progressive bulbar palsy type and five normal males were asked to phonate the five Japanese vowels /a/, /i/, /u/, /e/, /o/. MRI of the sagittal tongue and vocal tract was obtained by the gradient rephasing echo method (0.2 Tesla, TR: 30 ms, TE: 10 ms, FA: 25 degrees; Hitachi). We could clearly visualize the change of tongue shape and the narrow site of the vocal tract for each vowel phonation. In normal subjects, the tongue shape and the narrow site of the vocal tract were distinguishable between vowels, but unclear in ALS. Acoustic analysis showed that the first formant frequency of /i/ and /u/ in ALS was higher than normal and the second formant frequency of /i/ and /e/ in ALS was significantly lower than normal. The discrepancy from the normal first, second, and third formant frequencies was greatest for /i/ and /e/, suggesting that /i/ and /e/ were the most disturbed vowels in ALS. The first and second formant frequencies of a vowel depend on the tongue shape and the width of the oral cavity. Therefore the results of the acoustic analysis in ALS indicated poor tongue movement in /i/, /u/, and /e/ and were compatible with the findings of the sagittal tongue MRI. The sagittal view of the tongue in gradient rephasing echo MRI and acoustic analysis are useful in evaluating dysarthria in ALS.
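Formant analysis of the kind described above is commonly performed with linear predictive coding (LPC): fit an all-pole model to the signal and read formant frequencies off the pole angles. The sketch below is a generic illustration on a synthetic two-resonance signal, not the paper's actual analysis procedure; all parameter values are invented:

```python
import numpy as np

def lpc(signal, order):
    """Least-squares (covariance-method) LPC: find a such that
    x[n] ~ sum_k a[k] * x[n-1-k]."""
    x = np.asarray(signal, dtype=float)
    n = len(x)
    A = np.column_stack([x[order - 1 - k : n - 1 - k] for k in range(order)])
    a, *_ = np.linalg.lstsq(A, x[order:], rcond=None)
    return a

def formants(a, fs):
    """Formant frequencies (Hz) from the upper-half-plane roots of A(z)."""
    roots = np.roots(np.concatenate(([1.0], -a)))
    roots = roots[np.imag(roots) > 0]
    return np.sort(np.angle(roots) * fs / (2 * np.pi))

# Synthetic signal with two known resonances at 700 and 1200 Hz.
fs = 10000.0
poles = []
for f0 in (700.0, 1200.0):
    w = 2 * np.pi * f0 / fs
    poles += [0.97 * np.exp(1j * w), 0.97 * np.exp(-1j * w)]
a_true = -np.poly(poles).real[1:]

x = np.zeros(400)
x[0] = 1.0                         # impulse excitation of the all-pole filter
for n in range(1, len(x)):
    for k in range(len(a_true)):
        if n - 1 - k >= 0:
            x[n] += a_true[k] * x[n - 1 - k]

est = formants(lpc(x, 4), fs)
print(est)                         # close to [700., 1200.]
```

Real speech requires pre-emphasis, windowing, and a higher model order, but the pole-angle-to-frequency mapping is the same.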

  14. Observations of acoustic-gravity waves in the thermosphere following Space Shuttle ascents

    NASA Astrophysics Data System (ADS)

    Jacobson, Abram R.; Carlos, Robert C.

    1994-03-01

    Using an ionospheric Doppler sounder at Havelock, North Carolina, we observed upper atmospheric waves generated by three ascents of the Space Shuttle during 1990-1991. The exhaust plume's initial explosion and subsequent buoyant rise apparently launch acoustic and buoyancy waves, respectively. The buoyancy waves observed close to the flight path (less than 150 km) have shorter periods (200 s) than the Brunt-Väisälä period. This may be due to wind-generated Doppler shifts, or alternatively to the waves being ducted on the thermocline.

  15. Reflection and transmission of acoustical waves from a layer with space-dependent velocity.

    NASA Technical Reports Server (NTRS)

    Steinmetz, G. G.; Singh, J. J.

    1972-01-01

    The refraction of acoustical waves by a moving medium layer is theoretically treated and the reflection and transmission coefficients are determined. The moving-medium-layer velocity is uniform but with a space dependence in one direction. A partitioning of the moving medium layer into constant-velocity sublayers is introduced and numerical results for a three-sublayer approximation of Poiseuille flow are presented. The degenerate case of a single constant-velocity layer is also treated theoretically and numerically. The numerical results show the reflection and transmission coefficients as functions of the peak moving-medium-layer normalized velocity for several angles of incidence.
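The sublayer partitioning described above can be illustrated, for the simpler case of normal incidence on stationary (non-moving) constant-property sublayers, with chain (transfer) matrices relating pressure and particle velocity across each sublayer; the paper's moving-medium and oblique-incidence effects are omitted in this hypothetical sketch:

```python
import numpy as np

def layer_matrix(omega, c, rho, d):
    """Chain matrix relating (p, v) across one uniform sublayer of
    sound speed c, density rho, thickness d, at angular frequency omega."""
    k = omega / c
    Z = rho * c
    return np.array([[np.cos(k * d), 1j * Z * np.sin(k * d)],
                     [1j * np.sin(k * d) / Z, np.cos(k * d)]])

def reflection_transmission(omega, layers, Z0, Zt):
    """R and T of a stack of (c, rho, d) sublayers between half-spaces
    of impedance Z0 (incident side) and Zt (transmitted side)."""
    M = np.eye(2, dtype=complex)
    for c, rho, d in layers:
        M = M @ layer_matrix(omega, c, rho, d)
    A, B = M[0]
    C, D = M[1]
    t = 2.0 / (A + B / Zt + C * Z0 + D * Z0 / Zt)
    r = (A + B / Zt) * t - 1.0
    return r, t

omega = 2 * np.pi * 1000.0
# Three sublayers approximating a smooth velocity profile (hypothetical values).
layers = [(340.0, 1.2, 0.1), (360.0, 1.2, 0.1), (340.0, 1.2, 0.1)]
Z0 = Zt = 340.0 * 1.2
r, t = reflection_transmission(omega, layers, Z0, Zt)
print(abs(r), abs(t))
```

For a lossless stack with matched outer impedances, energy conservation |r|² + |t|² = 1 holds, which is a useful sanity check on the sublayer approximation.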

  16. The Effect of Acoustic Disturbances on the Operation of the Space Shuttle Main Engine Fuel Flowmeter

    NASA Technical Reports Server (NTRS)

    Marcu, Bogdan; Szabo, Roland; Dorney, Dan; Zoladz, Tom

    2007-01-01

    The Space Shuttle Main Engine (SSME) uses a turbine fuel flowmeter (FFM) in its Low Pressure Fuel Duct (LPFD) to measure liquid hydrogen flowrates during engine operation. The flowmeter is required to provide accurate and robust measurements of flow rates ranging from 10000 to 18000 GPM in an environment contaminated by duct vibration and duct internal acoustic disturbances. Errors exceeding 0.5% can have a significant impact on engine operation and mission completion. The accuracy of each sensor is monitored during hot-fire engine tests on the ground. Flow meters which do not meet requirements are not flown. Among other parameters, the device is screened for a specific behavior in which a small shift in the flow rate reading is registered during a period in which the actual fuel flow as measured by a facility meter does not change. Such behavior has been observed over the years for specific builds of the FFM and must be avoided or limited in magnitude in flight. Various analyses of the recorded data have been made prior to this report in an effort to understand the cause of the phenomenon; however, no conclusive cause for the shift in the instrument behavior has been found. The present report proposes an explanation of the phenomenon based on interactions between acoustic pressure disturbances in the duct and the wakes produced by the FFM flow straightener. Physical insight into the effects of acoustic plane wave disturbances was obtained using a simple analytical model. Based on that model, a series of three-dimensional unsteady viscous flow computational fluid dynamics (CFD) simulations were performed using the MSFC PHANTOM turbomachinery code. The code was customized to allow the FFM rotor speed to change at every time step according to the instantaneous fluid forces on the rotor, that, in turn, are affected by acoustic plane pressure waves propagating through the device. The results of the simulations show the variation in the rotation rate of the flowmeter

  17. Palatalization and intrinsic prosodic vowel features in Russian.

    PubMed

    Ordin, Mikhail

    2011-12-01

    The presented study is aimed at investigating the interaction of palatalization and intrinsic prosodic features of the vowel in CVC (consonant+vowel+consonant) syllables in Russian. The universal nature of intrinsic prosodic vowel features was confirmed with the data from the Russian language. It was found that palatalization of the consonants affects intrinsic fundamental frequency (IF0), intensity (I), and duration of the vowels in CVC syllables by modifying the vowel articulatory parameters such as vowel height and fronting. The obtained results are discussed in the light of opposing theories: those suggesting automatic control and those suggesting active control over intrinsic vowel features.

  18. The Neural Representation of Consonant-Vowel Transitions in Adults Who Wear Hearing Aids

    PubMed Central

    Tremblay, Kelly L.; Kalstein, Laura; Billings, Curtis J.; Souza, Pamela E.

    2006-01-01

    Hearing aids help compensate for disorders of the ear by amplifying sound; however, their effectiveness also depends on the central auditory system's ability to represent and integrate spectral and temporal information delivered by the hearing aid. The authors report that the neural detection of time-varying acoustic cues contained in speech can be recorded in adult hearing aid users using the acoustic change complex (ACC). Seven adults (50–76 years) with mild to severe sensorineural hearing loss participated in the study. When presented with 2 identifiable consonant-vowel (CV) syllables (“shee” and “see”), the neural detection of CV transitions (as indicated by the presence of a P1-N1-P2 response) was different for each speech sound. More specifically, the latency of the evoked neural response coincided in time with the onset of the vowel, similar to the latency patterns the authors previously reported in normal-hearing listeners. PMID:16959736

  19. Acoustic omni meta-atom for decoupled access to all octants of a wave parameter space.

    PubMed

    Koo, Sukmo; Cho, Choonlae; Jeong, Jun-Ho; Park, Namkyoo

    2016-09-30

    The common behaviour of a wave is determined by wave parameters of its medium, which are generally associated with the characteristic oscillations of its corresponding elementary particles. In the context of metamaterials, the decoupled excitation of these fundamental oscillations would provide an ideal platform for top-down and reconfigurable access to the entire constitutive parameter space; however, this has remained an open problem since it was pointed out by Pendry. Here by focusing on acoustic metamaterials, we achieve the decoupling of density ρ, modulus B(-1) and bianisotropy ξ, by separating the paths of particle momentum to conform to the characteristic oscillations of each macroscopic wave parameter. Independent access to all octants of wave parameter space (ρ, B(-1), ξ)=(+/-,+/-,+/-) is thus realized using a single platform that we call an omni meta-atom; as a building block that achieves top-down access to the target properties of metamaterials.

  1. Gauge-invariant coupled gravitational, acoustical, and electromagnetic modes on most general spherical space-times

    NASA Astrophysics Data System (ADS)

    Gerlach, Ulrich H.; Sengupta, Uday K.

    1980-09-01

    The coupled Einstein-Maxwell system linearized away from an arbitrarily given spherically symmetric background space-time is reduced from its four-dimensional to a two-dimensional form expressed solely in terms of gauge-invariant geometrical perturbation objects. These objects, which besides the gravitational and electromagnetic, also include mass-energy degrees of freedom, are defined on the two-manifold spanned by the radial and time coordinates. For charged or uncharged arbitrary matter background the odd-parity perturbation equations for example, reduce to three second-order linear scalar equations driven by matter and charge inhomogeneities. These three equations describe the intercoupled gravitational, electromagnetic, and acoustic perturbational degrees of freedom. For a charged black hole in an asymptotically de Sitter space-time the gravitational and electromagnetic equations decouple into two inhomogeneous scalar wave equations.

  2. Neural representation of three-dimensional acoustic space in the human temporal lobe

    PubMed Central

    Zhang, Xiaolu; Zhang, Qingtian; Hu, Xiaolin; Zhang, Bo

    2015-01-01

    Sound localization is an important function of the human brain, but the underlying cortical mechanisms remain unclear. In this study, we recorded auditory stimuli in three-dimensional space and then replayed the stimuli through earphones during functional magnetic resonance imaging (fMRI). By employing a machine learning algorithm, we successfully decoded sound location from the blood oxygenation level-dependent signals in the temporal lobe. Analysis of the data revealed that different cortical patterns were evoked by sounds from different locations. Specifically, discrimination of sound location along the abscissa axis evoked robust responses in the left posterior superior temporal gyrus (STG) and right mid-STG, discrimination along the elevation (EL) axis evoked robust responses in the left posterior middle temporal lobe (MTL) and right STG, and discrimination along the ordinate axis evoked robust responses in the left mid-MTL and right mid-STG. These results support a distributed representation of acoustic space in human cortex. PMID:25932011

  3. Calibration of International Space Station (ISS) Node 1 Vibro-Acoustic Model

    NASA Technical Reports Server (NTRS)

    Zhang, Weiguo; Raveendra, Ravi

    2014-01-01

    Reported here is the use of the Energy Finite Element Method (E-FEM) to predict the vibro-acoustic sound fields within the International Space Station (ISS) Node 1, and a comparison of the results with actual measurements of leak sounds made by a one-atmosphere-to-vacuum leak through a small hole in the pressure wall of the Node 1 STA module during its period of storage at Stennis Space Center (SSC). While E-FEM represents a reverberant sound field calculation, of importance to this application is the requirement to also handle the direct-field effect of the sound generation. It was also important to be able to compute the sound fields in the ultrasonic frequency range. This report demonstrates the capability of this technology as applied to this type of application.

  4. Reverberation impairs brainstem temporal representations of voiced vowel sounds: challenging “periodicity-tagged” segregation of competing speech in rooms

    PubMed Central

    Sayles, Mark; Stasiak, Arkadiusz; Winter, Ian M.

    2015-01-01

    The auditory system typically processes information from concurrently active sound sources (e.g., two voices speaking at once), in the presence of multiple delayed, attenuated and distorted sound-wave reflections (reverberation). Brainstem circuits help segregate these complex acoustic mixtures into “auditory objects.” Psychophysical studies demonstrate a strong interaction between reverberation and fundamental-frequency (F0) modulation, leading to impaired segregation of competing vowels when segregation is on the basis of F0 differences. Neurophysiological studies of complex-sound segregation have concentrated on sounds with steady F0s, in anechoic environments. However, F0 modulation and reverberation are quasi-ubiquitous. We examine the ability of 129 single units in the ventral cochlear nucleus (VCN) of the anesthetized guinea pig to segregate the concurrent synthetic vowel sounds /a/ and /i/, based on temporal discharge patterns under closed-field conditions. We address the effects of added real-room reverberation, F0 modulation, and the interaction of these two factors, on brainstem neural segregation of voiced speech sounds. A firing-rate representation of single-vowels' spectral envelopes is robust to the combination of F0 modulation and reverberation: local firing-rate maxima and minima across the tonotopic array code vowel-formant structure. However, single-vowel F0-related periodicity information in shuffled inter-spike interval distributions is significantly degraded in the combined presence of reverberation and F0 modulation. Hence, segregation of double-vowels' spectral energy into two streams (corresponding to the two vowels), on the basis of temporal discharge patterns, is impaired by reverberation; specifically when F0 is modulated. All unit types (primary-like, chopper, onset) are similarly affected. These results offer neurophysiological insights to perceptual organization of complex acoustic scenes under realistically challenging listening

  5. Task-dependent decoding of speaker and vowel identity from auditory cortical response patterns.

    PubMed

    Bonte, Milene; Hausfeld, Lars; Scharke, Wolfgang; Valente, Giancarlo; Formisano, Elia

    2014-03-26

    Selective attention to relevant sound properties is essential for everyday listening situations. It enables the formation of different perceptual representations of the same acoustic input and is at the basis of flexible and goal-dependent behavior. Here, we investigated the role of the human auditory cortex in forming behavior-dependent representations of sounds. We used single-trial fMRI and analyzed cortical responses collected while subjects listened to the same speech sounds (vowels /a/, /i/, and /u/) spoken by different speakers (boy, girl, male) and performed a delayed-match-to-sample task on either speech sound or speaker identity. Univariate analyses showed a task-specific activation increase in the right superior temporal gyrus/sulcus (STG/STS) during speaker categorization and in the right posterior temporal cortex during vowel categorization. Beyond regional differences in activation levels, multivariate classification of single trial responses demonstrated that the success with which single speakers and vowels can be decoded from auditory cortical activation patterns depends on task demands and subject's behavioral performance. Speaker/vowel classification relied on distinct but overlapping regions across the (right) mid-anterior STG/STS (speakers) and bilateral mid-posterior STG/STS (vowels), as well as the superior temporal plane including Heschl's gyrus/sulcus. The task dependency of speaker/vowel classification demonstrates that the informative fMRI response patterns reflect the top-down enhancement of behaviorally relevant sound representations. Furthermore, our findings suggest that successful selection, processing, and retention of task-relevant sound properties relies on the joint encoding of information across early and higher-order regions of the auditory cortex.

  6. Early sound symbolism for vowel sounds.

    PubMed

    Spector, Ferrinne; Maurer, Daphne

    2013-01-01

    Children and adults consistently match some words (e.g., kiki) to jagged shapes and other words (e.g., bouba) to rounded shapes, providing evidence for non-arbitrary sound-shape mapping. In this study, we investigated the influence of vowels on sound-shape matching in toddlers, using four contrasting pairs of nonsense words differing in vowel sound (/i/ as in feet vs. /o/ as in boat) and four rounded-jagged shape pairs. Crucially, we used reduplicated syllables (e.g., kiki vs. koko) rather than confounding vowel sound with consonant context and syllable variability (e.g., kiki vs. bouba). Toddlers consistently matched words with /o/ to rounded shapes and words with /i/ to jagged shapes (p < 0.01). The results suggest that there may be naturally biased correspondences between vowel sound and shape.

  7. A comparative analysis of Media Lengua and Quichua vowel production.

    PubMed

    Stewart, Jesse

    2014-01-01

    This study presents a comparative analysis of F1 and F2 vowel frequencies from Pijal Media Lengua (PML) and Imbabura Quichua. Mixed-effects models are used to test Spanish-derived high and low vowels against their Quichua-derived counterparts for statistical significance. Spanish-derived and Quichua-derived high vowels are also tested against Spanish-derived mid vowels. This analysis suggests that PML may be manipulating as many as eight vowels where Spanish-derived high and low vowels coexist as near-mergers with their Quichua-derived counterparts, while high and mid vowels coexist with partial overlap. Quichua, traditionally viewed as a three-vowel system, shows similar results and may be manipulating as many as six vowels.

  8. Visualizing vowel-production mechanism using simple educational tools

    NASA Astrophysics Data System (ADS)

    Arai, Takayuki

    2005-09-01

    To develop intuitive and effective methods for teaching acoustics to students of different ages and from varied backgrounds, Arai [J. Phonetic Soc. Jpn. 5, 31-38, (2001)] replicated Chiba and Kajiyama's physical models of the human vocal tract as educational tools and verified that the physical models and sound sources, such as an artificial larynx, yield a simple but powerful demonstration of vowel production in the classroom. We have also started exhibiting our models at the Science Museum ``Ru-Ku-Ru'' in Shizuoka City, Japan. We further extended our model to a lung model as well as several head-shaped models with visible vocal tract to demonstrate the total vowel-production mechanism from phonation to articulation. The lung model imitates the human respiratory system with a diaphragm. In the head-shaped model, the midsagittal cross section is visible from the outside. To adjust the degree of nasopharyngeal coupling, the velum may be rotated. Another head-shaped model with the manipulable tongue position was also developed. Two test results were compared before and after using these physical models, and the educational effectiveness of the models was confirmed. The homepage of the vocal-tract models is available at http://www.splab.ee.sophia.ac.jp/Vocal-Tract-Model/index-e.htm. [Work supported by KAKENHI (17500603).]

  9. An investigation of acoustic noise requirements for the Space Station centrifuge facility

    NASA Technical Reports Server (NTRS)

    Castellano, Timothy

    1994-01-01

    Acoustic noise emissions from the Space Station Freedom (SSF) centrifuge facility hardware represent a potential technical and programmatic risk to the project. The SSF program requires that no payload exceed a Noise Criterion 40 (NC-40) noise contour in any octave band between 63 Hz and 8 kHz as measured 2 feet from the equipment item. Past experience with life science experiment hardware indicates that this requirement will be difficult to meet. The crew has found noise levels on Spacelab flights to be unacceptably high. Many past Ames Spacelab life science payloads have required waivers because of excessive noise. The objectives of this study were (1) to develop an understanding of acoustic measurement theory, instruments, and technique, and (2) to characterize the noise emission of analogous Facility components and previously flown flight hardware. Test results from existing hardware were reviewed and analyzed. Measurements of the spectral and intensity characteristics of fans and other rotating machinery were performed. The literature was reviewed and contacts were made with NASA and industry organizations concerned with or performing research on noise control.

  10. Frequency-space prediction filtering for acoustic clutter and random noise attenuation in ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Shin, Junseob; Huang, Lianjie

    2016-04-01

    Frequency-space prediction filtering (FXPF), also known as FX deconvolution, is a technique originally developed for random noise attenuation in seismic imaging. FXPF attempts to reduce random noise in seismic data by modeling only real signals, which appear as linear or quasilinear events in the aperture domain. In medical ultrasound imaging, channel radio frequency (RF) signals from the main lobe appear as horizontal events after receive delays are applied, while acoustic clutter signals from off-axis scatterers and electronic noise do not. Therefore, FXPF is suitable for preserving only the main-lobe signals and attenuating the unwanted contributions from clutter and random noise in medical ultrasound imaging. We adapt FXPF to ultrasound imaging and evaluate its performance using simulated data sets from a point target and an anechoic cyst. Our simulation results show that using only 5 iterations of FXPF achieves contrast-to-noise ratio (CNR) improvements of 67% in a simulated noise-free anechoic cyst and 228% in a simulated anechoic cyst contaminated with random noise at 15 dB signal-to-noise ratio (SNR). Our findings suggest that ultrasound imaging with FXPF attenuates contributions from both acoustic clutter and random noise; FXPF therefore has great potential to improve ultrasound image contrast for better visualization of important anatomical structures and detection of disease conditions.
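    FXPF as described above models laterally coherent events with a prediction filter applied per temporal-frequency bin. A minimal sketch of the idea (one-tap forward/backward prediction across channels; the paper's actual filter length, windowing, and iteration scheme are not reproduced here):

```python
import numpy as np

def fxpf(data, n_iter=5):
    """Minimal FXPF sketch. data: (n_channels, n_samples) real array.
    For each temporal-frequency bin, fit one-tap complex forward/backward
    prediction filters across channels (least squares) and replace each
    channel by the average of its two predictions. Laterally coherent
    ("flat") events are preserved; incoherent noise is attenuated.
    Real FXPF uses longer filters and sliding spatial windows."""
    D = np.fft.rfft(data, axis=1)  # f-x domain: rows = channels
    for _ in range(n_iter):
        for k in range(D.shape[1]):
            col = D[:, k]
            x, y = col[:-1], col[1:]
            nx = np.vdot(x, x)
            a_fwd = np.vdot(x, y) / nx if abs(nx) > 1e-12 else 0.0
            ny = np.vdot(y, y)
            a_bwd = np.vdot(y, x) / ny if abs(ny) > 1e-12 else 0.0
            pred = np.empty_like(col)
            pred[1:-1] = 0.5 * (a_fwd * col[:-2] + a_bwd * col[2:])
            pred[0] = a_bwd * col[1]    # edges get one-sided predictions
            pred[-1] = a_fwd * col[-2]
            D[:, k] = pred
    return np.fft.irfft(D, n=data.shape[1], axis=1)
```

    On a flat event (identical waveform on all channels) the fitted coefficients are near 1 and the signal passes through, while at noise-only frequency bins the fitted coefficients are small and the noise is suppressed.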

  11. The right ear advantage revisited: speech lateralisation in dichotic listening using consonant-vowel and vowel-consonant syllables.

    PubMed

    Sætrevik, Bjørn

    2012-01-01

    The dichotic listening task is typically administered by presenting a consonant-vowel (CV) syllable to each ear and asking the participant to report the syllable heard most clearly. The results tend to show more reports of the right-ear syllable than of the left-ear syllable, an effect called the right ear advantage (REA). The REA is assumed to be due to the crossing over of auditory fibres and the processing of language stimuli being lateralised to left temporal areas. However, the tendency for most dichotic listening experiments to use only CV syllable stimuli limits the extent to which the conclusions can be generalised to other speech sounds. The current study re-examines the REA in dichotic listening by using both CV and vowel-consonant (VC) syllables and combinations thereof. Results showed a replication of the REA response pattern for both CV and VC syllables, thus indicating that the general assumption of left-side localisation of processing can be applied for both types of stimuli. Further, on trials where a CV is presented in one ear and a VC is presented in the other ear, the CV is selected more often than the VC, indicating that CV syllables have an acoustic or processing advantage.

  12. Lip Movements for an Unfamiliar Vowel: Mandarin Front Rounded Vowel Produced by Japanese Speakers

    ERIC Educational Resources Information Center

    Saito, Haruka

    2016-01-01

    Purpose: The study was aimed at investigating what kind of lip positions are selected by Japanese adult participants for an unfamiliar Mandarin rounded vowel /y/ and if their lip positions are related to and/or differentiated from those for their native vowels. Method: Videotaping and post hoc tracking measurements for lip positions, namely…

  13. The discrimination of baboon grunt calls and human vowel sounds by baboons

    NASA Astrophysics Data System (ADS)

    Hienz, Robert D.; Jones, April M.; Weerts, Elise M.

    2004-09-01

    The ability of baboons to discriminate changes in the formant structures of a synthetic baboon grunt call and an acoustically similar human vowel (/eh/) was examined to determine how comparable baboons are to humans in discriminating small changes in vowel sounds, and whether or not any species-specific advantage in discriminability might exist when baboons discriminate their own vocalizations. Baboons were trained to press and hold down a lever to produce a pulsed train of a standard sound (e.g., /eh/ or a baboon grunt call), and to release the lever only when a variant of the sound occurred. Synthetic variants of each sound had the same first and third through fifth formants (F1 and F3-5), but varied in the location of the second formant (F2). Thresholds for F2 frequency changes were 55 and 67 Hz for the grunt and vowel stimuli, respectively, and were not statistically different from one another. Baboons discriminated changes in vowel formant structures comparable to those discriminated by humans. No distinct advantages in discrimination performances were observed when the baboons discriminated these synthetic grunt vocalizations.

  14. Discrimination and identification of vowels by young, hearing-impaired adults

    NASA Astrophysics Data System (ADS)

    Richie, Carolyn; Kewley-Port, Diane; Coughlin, Maureen

    2003-11-01

    This study examined the effects of mild-to-moderate sensorineural hearing loss on the vowel perception abilities of young, hearing-impaired (YHI) adults. Stimuli were presented at a low conversational level with a flat frequency response (approximately 60 dB SPL), and in two gain conditions: (a) high-level gain with a flat frequency response (95 dB SPL), and (b) frequency-specific gain shaped according to each listener's hearing loss (designed to simulate the frequency response provided by a linear hearing aid to an input signal of 60 dB SPL). Listeners discriminated changes in the vowels /ɪ, i, ɛ, ʌ, æ/ when F1 or F2 varied, and later categorized the vowels. YHI listeners performed better in the two gain conditions than in the conversational-level condition. Performance in the two gain conditions was similar, suggesting that upward spread of masking was not seen at these signal levels for these tasks. Results were compared with those from a group of elderly, hearing-impaired (EHI) listeners, reported in Coughlin, Kewley-Port, and Humes [J. Acoust. Soc. Am. 104, 3597-3607 (1998)]. Comparisons revealed no significant differences between the EHI and YHI groups, suggesting that hearing impairment, not age, is the primary contributor to decreased vowel perception in these listeners.

  15. Processing interactions between phonology and melody: vowels sing but consonants speak.

    PubMed

    Kolinsky, Régine; Lidji, Pascale; Peretz, Isabelle; Besson, Mireille; Morais, José

    2009-07-01

    The aim of this study was to determine if two dimensions of song, the phonological part of lyrics and the melodic part of tunes, are processed in an independent or integrated way. In a series of five experiments, musically untrained participants classified bi-syllabic nonwords sung on two-tone melodic intervals. Their response had to be based on pitch contour, on nonword identity, or on the combination of pitch and nonword. When participants had to ignore irrelevant variations of the non-attended dimension, patterns of interference and facilitation allowed us to specify the processing interactions between dimensions. Results showed that consonants are processed more independently from melodic information than vowels are (Experiments 1-4). This difference between consonants and vowels was neither related to the sonority of the phoneme (Experiment 3), nor to the acoustical correlates between vowel quality and pitch height (Experiment 5). The implication of these results for our understanding of the functional relationships between musical and linguistic systems is discussed in light of the different evolutionary origins and linguistic functions of consonants and vowels.

  16. English vowel identification in quiet and noise: effects of listeners' native language background

    PubMed Central

    Jin, Su-Hyun; Liu, Chang

    2014-01-01

    Purpose: To investigate the effects of listeners' native language (L1) and the type of noise on English vowel identification in noise. Method: Identification of 12 English vowels was measured in quiet and in long-term speech-shaped noise and multi-talker babble (MTB) noise for English-native (EN), Chinese-native (CN), and Korean-native (KN) listeners at various signal-to-noise ratios (SNRs). Results: Compared to non-native listeners, EN listeners performed significantly better in quiet and in noise. Vowel identification in long-term speech-shaped noise and in MTB noise was similar for CN and KN listeners. This differs from our previous study, in which KN listeners performed better than CN listeners in English sentence recognition in MTB noise. Discussion: Results from the current study suggest that, depending on the speech materials, the effect of non-native listeners' L1 on speech perception in noise may differ. That is, in the perception of speech materials with few linguistic cues, such as isolated vowels, the characteristics of the non-native listener's native language may not play a significant role. On the other hand, in the perception of running speech, in which listeners need to use more linguistic cues (e.g., acoustic-phonetic, semantic, and prosodic cues), the non-native listener's language background might result in a different masking effect. PMID:25400538

  17. Zebra finches and Dutch adults exhibit the same cue weighting bias in vowel perception.

    PubMed

    Ohms, Verena R; Escudero, Paola; Lammers, Karin; ten Cate, Carel

    2012-03-01

    Vocal tract resonances, called formants, are the most important parameters in human speech production and perception. They encode linguistic meaning and have been shown to be perceived by a wide range of species. Songbirds are also sensitive to different formant patterns in human speech. They can categorize words differing only in their vowels based on the formant patterns independent of speaker identity in a way comparable to humans. These results indicate that speech perception mechanisms are more similar between songbirds and humans than realized before. One of the major questions regarding formant perception concerns the weighting of different formants in the speech signal ("acoustic cue weighting") and whether this process is unique to humans. Using an operant Go/NoGo design, we trained zebra finches to discriminate syllables, whose vowels differed in their first three formants. When subsequently tested with novel vowels, similar in either their first formant or their second and third formants to the familiar vowels, similarity in the higher formants was weighted much more strongly than similarity in the lower formant. Thus, zebra finches indeed exhibit a cue weighting bias. Interestingly, we also found that Dutch speakers when tested with the same paradigm exhibit the same cue weighting bias. This, together with earlier findings, supports the hypothesis that human speech evolution might have exploited general properties of the vertebrate auditory system.

  18. Inverse acoustic scattering problem in half-space with anisotropic random impedance

    NASA Astrophysics Data System (ADS)

    Helin, Tapio; Lassas, Matti; Päivärinta, Lassi

    2017-02-01

    We study an inverse acoustic scattering problem in half-space with a probabilistic impedance boundary value condition. The Robin coefficient (surface impedance) λ is assumed to be a Gaussian random function with a pseudodifferential operator describing its covariance. We measure the amplitude of the backscattered field averaged over the frequency band and assume that the data are generated by a single realization of λ. Our main result shows that under certain conditions the principal symbol of the covariance operator of λ is uniquely determined. Most importantly, no approximations are needed and we can solve the full non-linear inverse problem. We concentrate on anisotropic models for the principal symbol, which leads to the analysis of a novel anisotropic spherical Radon transform and its invertibility.
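    In notation suggested by the abstract (assumed here, not quoted from the paper), the forward problem can be sketched as the Helmholtz equation in the upper half-space with a random Robin (impedance) condition on its boundary:

```latex
% Sketch of the setup (notation assumed): Helmholtz equation with a
% random impedance boundary condition on the half-space boundary.
\[
  (\Delta + k^2)\, u = 0 \quad \text{in } \mathbb{R}^3_+,
  \qquad
  \partial_\nu u + \lambda\, u = 0 \quad \text{on } \partial\mathbb{R}^3_+,
\]
% where lambda is a single realization of a Gaussian random field whose
% covariance is a pseudodifferential operator. The inverse problem is to
% recover the principal symbol of that covariance operator from
% frequency-averaged backscattered amplitudes.
```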

  19. Perturbation and Nonlinear Dynamic Analysis of Acoustic Phonatory Signal in Parkinsonian Patients Receiving Deep Brain Stimulation

    ERIC Educational Resources Information Center

    Lee, Victoria S.; Zhou, Xiao Ping; Rahn, Douglas A., III; Wang, Emily Q.; Jiang, Jack J.

    2008-01-01

    Nineteen PD patients who received deep brain stimulation (DBS), 10 non-surgical (control) PD patients, and 11 non-pathologic age- and gender-matched subjects performed sustained vowel phonations. The following acoustic measures were obtained on the sustained vowel phonations: correlation dimension (D[subscript 2]), percent jitter, percent shimmer,…
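    Of the acoustic measures listed, percent jitter and percent shimmer have simple, widely used "local" definitions, sketched below; the study may use variant definitions, so treat this as illustrative:

```python
def percent_jitter(periods):
    """Local jitter (%): mean absolute difference between consecutive
    pitch periods, divided by the mean period. Standard definition;
    specific studies may use variants (e.g., RAP, PPQ)."""
    diffs = [abs(b - a) for a, b in zip(periods, periods[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def percent_shimmer(amps):
    """Local shimmer (%): the same computation applied to cycle peak amplitudes."""
    diffs = [abs(b - a) for a, b in zip(amps, amps[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(amps) / len(amps))

# Example: four pitch periods (seconds) from a sustained vowel phonation.
print(round(percent_jitter([0.0100, 0.0102, 0.0099, 0.0101]), 2))  # → 2.32
```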

  20. Baryon acoustic oscillations in 2D: Modeling redshift-space power spectrum from perturbation theory

    SciTech Connect

    Taruya, Atsushi; Nishimichi, Takahiro; Saito, Shun

    2010-09-15

    We present an improved prescription for the matter power spectrum in redshift space, taking proper account of both nonlinear gravitational clustering and redshift distortion, which are of particular importance for accurately modeling baryon acoustic oscillations (BAOs). In contrast to the phenomenological models of redshift distortion frequently used in the literature, the new model includes the corrections arising from the nonlinear coupling between the density and velocity fields associated with the two competing effects of redshift distortion, i.e., the Kaiser and Finger-of-God effects. Based on the improved treatment of perturbation theory for gravitational clustering, we compare our model predictions with the monopole and quadrupole power spectra of N-body simulations, and an excellent agreement is achieved over the scales of BAOs. Potential impacts on constraining dark energy and modified gravity from the redshift-space power spectrum are also investigated based on the Fisher-matrix formalism, particularly focusing on measurements of the Hubble parameter, angular diameter distance, and growth rate for structure formation. We find that the existing phenomenological models of redshift distortion produce a systematic error on measurements of the angular diameter distance and Hubble parameter of 1%-2%, and of the growth-rate parameter of ~5%, which would become non-negligible for future galaxy surveys. Correctly modeling redshift distortion is thus essential, and the new prescription for the redshift-space power spectrum including the nonlinear corrections can be used as an accurate theoretical template for anisotropic BAOs.
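    For orientation, the class of phenomenological models the authors improve upon can be sketched as follows (notation assumed, not quoted from the paper):

```latex
% Phenomenological redshift-space model: linear Kaiser terms multiplied
% by a Finger-of-God damping factor (notation assumed).
\[
  P^{\rm s}(k,\mu) = D_{\rm FoG}(k\mu\sigma_v)\,
     \bigl[\, P_{\delta\delta}(k) + 2 f \mu^2 P_{\delta\theta}(k)
        + f^2 \mu^4 P_{\theta\theta}(k) \,\bigr],
\]
% where mu is the cosine of the angle to the line of sight, f the linear
% growth rate, sigma_v a velocity-dispersion parameter, and the P's are
% density and velocity-divergence auto/cross spectra. The improved
% prescription adds correction terms (often denoted A(k,mu) and B(k,mu))
% arising from the nonlinear density-velocity coupling.
```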

  1. Intelligibility of American English vowels and consonants spoken by international students in the United States.

    PubMed

    Jin, Su-Hyun; Liu, Chang

    2014-04-01

    Purpose: The purpose of this study was to examine the intelligibility of English consonants and vowels produced by Chinese-native (CN) and Korean-native (KN) students enrolled in American universities. Method: 16 English-native (EN), 32 CN, and 32 KN speakers participated in this study. The intelligibility of 16 American English consonants and 16 vowels spoken by native and nonnative speakers of English was evaluated by EN listeners. All nonnative speakers also completed a survey of their language backgrounds. Results: Although the intelligibility of consonants and diphthongs for nonnative speakers was comparable to that of native speakers, the intelligibility of monophthongs was significantly lower for CN and KN speakers than for EN speakers. Sociolinguistic factors such as the age of arrival in the United States and daily use of English, as well as a linguistic factor, the difference in vowel space between the native (L1) and nonnative (L2) language, partially contributed to vowel intelligibility for the CN and KN groups. There was no significant correlation between the length of U.S. residency and phoneme intelligibility. Conclusion: Results indicated that the major difficulty in phonemic production in English for Chinese and Korean speakers is with vowels rather than consonants. This might be useful for developing training methods to improve English intelligibility for foreign students in the United States.

  2. Space Shuttle Orbiter Main Engine Ignition Acoustic Pressure Loads Issue: Recent Actions to Install Wireless Instrumentation on STS-129

    NASA Technical Reports Server (NTRS)

    Wells, Nathan; Studor, George

    2009-01-01

    This slide presentation reviews the development and construction of the wireless acoustic instruments surrounding the space shuttle's main engines in preparation for STS-129. The presentation also includes information on end-of-life processing and the mounting procedure for the devices.

  3. Application of acoustic surface wave filter-beam lead component technology to deep space multimission hardware design

    NASA Technical Reports Server (NTRS)

    Kermode, A. W.; Boreham, J. F.

    1974-01-01

    This paper discusses the utilization of acoustic surface wave filters, beam lead components, and thin-film metallized ceramic substrate technology as applied to the design of a deep-space, long-life, multimission transponder. The specific design presented is for a second mixer local oscillator module operating at frequencies as high as 249 MHz.

  4. Study for Identification of Beneficial Uses of Space (BUS). Volume 2: Technical report. Book 4: Development and business analysis of space processed surface acoustic wave devices

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Preliminary development plans, analysis of required R and D and production resources, the costs of such resources, and, finally, the potential profitability of a commercial space processing opportunity for the production of very high frequency surface acoustic wave devices are presented.

  5. An Amplitude-Based Estimation Method for International Space Station (ISS) Leak Detection and Localization Using Acoustic Sensor Networks

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Madaras, Eric I.

    2009-01-01

    The development of a robust and efficient leak detection and localization system within a space station environment presents a unique challenge. A plausible approach includes the implementation of an acoustic sensor network system that can successfully detect the presence of a leak and determine the location of the leak source. Traditional acoustic detection and localization schemes rely on the phase and amplitude information collected by the sensor array, and the acoustic source signals are assumed to be airborne and far-field; similar applications exist in sonar. In solids, specialized event-location methods used in geology and in acoustic emission testing involve sensor arrays and depend on a discernible phase front in the received signal. These methods are ineffective if applied to a sensor detection system within the space station environment: there are significant baffling and structural impediments to the sound path, and the source could be in the near-field of a sensor in this particular setting.
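    To illustrate the amplitude-based idea in a toy form (a hypothetical estimator, not the paper's method): under an assumed 1/r decay law, inter-sensor amplitude ratios depend only on the source position, so a grid search can locate the source without any phase information:

```python
import itertools, math

def locate_by_amplitude(sensors, amps, grid):
    """Toy amplitude-only localization (illustrative assumption: received
    amplitude ~ A0 / r from an unknown source). Pick the grid point whose
    predicted inter-sensor amplitude ratios best match the measured ones;
    ratios cancel the unknown source strength A0.
    sensors: list of (x, y); amps: measured amplitudes; grid: candidate (x, y)."""
    def cost(pt):
        # Clamp distances to avoid division by zero at sensor positions.
        rs = [max(math.dist(pt, s), 1e-6) for s in sensors]
        c = 0.0
        for i, j in itertools.combinations(range(len(sensors)), 2):
            meas = amps[i] / amps[j]
            pred = rs[j] / rs[i]  # 1/r model => amplitude ratio = inverse distance ratio
            c += (math.log(meas) - math.log(pred)) ** 2
        return c
    return min(grid, key=cost)

sensors = [(0, 0), (10, 0), (0, 10), (10, 10)]
src = (3.0, 7.0)
amps = [1.0 / math.dist(src, s) for s in sensors]       # synthetic measurements
grid = [(x, y) for x in range(11) for y in range(11)]
print(locate_by_amplitude(sensors, amps, grid))  # → (3, 7)
```

    In the ISS setting, the decay law would have to be replaced by an empirically calibrated structure-borne attenuation model, which is the hard part the report addresses.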

  6. Optimizing cochlear implant frequency boundary tables for vowel perception: A computer simulation

    NASA Astrophysics Data System (ADS)

    Fourakis, Marios S.; Hawks, John W.; Schwager, Amy

    2004-05-01

    For cochlear implants, the assignment of frequency bands to electrodes is a variable parameter that determines what region of the acoustic spectrum will be represented by each electrode's output. Technology will soon allow considerable flexibility in programming this parameter. In a first attempt to optimize these assignments for vowel perception, a computer program was written to evaluate different assignment schemes for categorization accuracy based strictly on the frequency values of the first two or three formants. Databases [J. Hillenbrand et al., J. Acoust. Soc. Am. 97, 3099-3111 (1995)] of formant measurements from American English vowels as uttered by men, women, and children were used. For this simulation, it was assumed that each formant frequency was associated with only the frequency band its center frequency fell within. Each pattern of frequency bands was assigned a vowel category identification based on the plurality of tokens whose intended identification category fell within that pattern. A range of frequency scaling schemes for 19- and 20-electrode arrays was evaluated, with the best of these fine-tuned for minimum error. The results indicate that manufacturers' default assignments categorize reasonably well, but a bark-scaled scheme yielded the best unmodified classifications.
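    A bark-scaled assignment of the kind found advantageous here can be sketched by spacing band edges uniformly on the bark scale. The sketch below uses Traunmüller's approximation of the bark scale; the corner frequencies and band count are illustrative assumptions, not the study's values:

```python
import math

def hz_to_bark(f):
    """Traunmüller's (1990) approximation of the bark scale."""
    return 26.81 * f / (1960.0 + f) - 0.53

def bark_to_hz(z):
    """Inverse of hz_to_bark."""
    return 1960.0 * (z + 0.53) / (26.28 - z)

def bark_band_edges(f_lo, f_hi, n_bands):
    """Edges (Hz) of n_bands frequency bands equally spaced in bark,
    e.g. one band per electrode of an implant array."""
    z_lo, z_hi = hz_to_bark(f_lo), hz_to_bark(f_hi)
    step = (z_hi - z_lo) / n_bands
    return [bark_to_hz(z_lo + i * step) for i in range(n_bands + 1)]

# Illustrative 20-band assignment over an assumed 200-7000 Hz analysis range.
edges = bark_band_edges(200.0, 7000.0, 20)
```

    Each formant is then classified simply by the band its center frequency falls within, as in the simulation described above.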

  7. Calibration of International Space Station (ISS) Node 1 Vibro-Acoustic Model-Report 2

    NASA Technical Reports Server (NTRS)

    Zhang, Weiguo; Raveendra, Ravi

    2014-01-01

    Reported here is the capability of the Energy Finite Element Method (E-FEM) to predict the vibro-acoustic sound fields within the International Space Station (ISS) Node 1, with the results compared against simulated leak sounds. A series of electronically generated structural ultrasonic noise sources was created in the pressure wall to emulate leak signals at different locations of the Node 1 STA module during its period of storage at Stennis Space Center (SSC). The exact sound source profiles created within the pressure wall were unknown, but were estimated from the closest sensor measurement. E-FEM represents a reverberant sound field calculation, and of importance to this application is the requirement to correctly handle the direct-field effect of the sound generation. It was also important to be able to compute the sound energy fields in the ultrasonic frequency range. This report demonstrates the capability of this technology as applied to leak detection.

  8. Online Damage Detection on Metal and Composite Space Structures by Active and Passive Acoustic Methods

    NASA Astrophysics Data System (ADS)

    Scheerer, M.; Cardone, T.; Rapisarda, A.; Ottaviano, S.; Ftancesconi, D.

    2012-07-01

    In the frame of ESA funded programme Future Launcher Preparatory Programme Period 1 “Preparatory Activities on M&S”, Aerospace & Advanced Composites and Thales Alenia Space-Italia, have conceived and tested a structural health monitoring approach based on integrated Acoustic Emission - Active Ultrasound Damage Identification. The monitoring methods implemented in the study are both passive and active methods and the purpose is to cover large areas with a sufficient damage size detection capability. Two representative space sub-structures have been built and tested: a composite overwrapped pressure vessel (COPV) and a curved, stiffened Al-Li panel. In each structure, typical critical damages have been introduced: delaminations caused by impacts in the COPV and a crack in the stiffener of the Al-Li panel which was grown during a fatigue test campaign. The location and severity of both types of damages have been successfully assessed online using two commercially available systems: one 6 channel AE system from Vallen and one 64 channel AU system from Acellent.

  9. The Effect of Training on the Discrimination of English Vowels.

    ERIC Educational Resources Information Center

    Cenoz, Jasone; Lecumberri, Luisa Garcie

    1999-01-01

    Analyzes the effect of training on perception of English vowels by native speakers of Basque and Spanish. University students who took a training course in English phonetics completed questionnaires and vowel perception tests. Findings confirm that training exerts a positive effect on the perception of English vowels and that this effect is also…

  10. Audiovisual Perception of Congruent and Incongruent Dutch Front Vowels

    ERIC Educational Resources Information Center

    Valkenier, Bea; Duyne, Jurriaan Y.; Andringa, Tjeerd C.; Baskent, Deniz

    2012-01-01

    Purpose: Auditory perception of vowels in background noise is enhanced when combined with visually perceived speech features. The objective of this study was to investigate whether the influence of visual cues on vowel perception extends to incongruent vowels, in a manner similar to the McGurk effect observed with consonants. Method:…

  11. Palatalization and Intrinsic Prosodic Vowel Features in Russian

    ERIC Educational Resources Information Center

    Ordin, Mikhail

    2011-01-01

    The presented study is aimed at investigating the interaction of palatalization and intrinsic prosodic features of the vowel in CVC (consonant+vowel+consonant) syllables in Russian. The universal nature of intrinsic prosodic vowel features was confirmed with the data from the Russian language. It was found that palatalization of the consonants…

  12. Reading in Thai: The Case of Misaligned Vowels

    ERIC Educational Resources Information Center

    Winskel, Heather

    2009-01-01

    Thai has its own distinctive alphabetic script with syllabic characteristics as it has implicit vowels for some consonants. Consonants are written in a linear order, but vowels can be written non-linearly above, below or to either side of the consonant. Of particular interest to the current study are that vowels can precede the consonant in…

  13. The Role of Consonant/Vowel Organization in Perceptual Discrimination

    ERIC Educational Resources Information Center

    Chetail, Fabienne; Drabs, Virginie; Content, Alain

    2014-01-01

    According to a recent hypothesis, the CV pattern (i.e., the arrangement of consonant and vowel letters) constrains the mental representation of letter strings, with each vowel or vowel cluster being the core of a unit. Six experiments with the same/different task were conducted to test whether this structure is extracted prelexically. In the…

  14. Bandwidth of spectral resolution for two-formant synthetic vowels and two-tone complex signals

    NASA Astrophysics Data System (ADS)

    Xu, Qiang; Jacewicz, Ewa; Feth, Lawrence L.; Krishnamurthy, Ashok K.

    2004-04-01

    Spectral integration refers to the summation of activity beyond the bandwidth of the peripheral auditory filter. Several experimental efforts have sought to determine the bandwidth of this ``supracritical'' band phenomenon. This paper reports on two experiments which tested the limit on spectral integration in the same listeners. Experiment 1 verified the critical separation of 3.5 bark in two-formant synthetic vowels as advocated by the center-of-gravity (COG) hypothesis. According to the COG effect, two formants are integrated into a single perceived peak if their separation does not exceed approximately 3.5 bark. With several modifications to the methods of a classic COG matching task, the present listeners responded to changes in pitch in two-formant synthetic vowels rather than estimating their phonetic quality. When the amplitude ratio of the formants was changed, the frequency of the perceived peak moved closer to that of the stronger formant. This COG effect disappeared at larger formant separations. In a second experiment, auditory spectral resolution bandwidths were measured for the same listeners using common-envelope, two-tone complex signals. Results showed that the limits of spectral averaging in two-formant vowels and the two-tone spectral resolution bandwidth were related for two of the three listeners. The third failed to perform the discrimination task. For the two subjects who completed both tasks, the results suggest that the critical region in the vowel task and the complex-tone discriminability estimates are linked to a common mechanism, i.e., to an auditory spectral resolving power. A signal-processing model is proposed to predict the COG effect in two-formant synthetic vowels. The model introduces two modifications to Hermansky's [J. Acoust. Soc. Am. 87, 1738-1752 (1990)] perceptual linear predictive (PLP) model. The model predictions are generally compatible with the present experimental results and with the predictions of several earlier models accounting for
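    The COG prediction that the perceived peak moves toward the stronger formant can be illustrated with a simple amplitude-weighted mean (illustrative only; this is not the matching procedure used in the experiment, and a perceptual model would weight on an auditory frequency scale):

```python
def spectral_cog(freqs_hz, amps):
    """Amplitude-weighted spectral center of gravity: a simple stand-in
    for the single perceived peak that the COG hypothesis predicts when
    two formants lie within ~3.5 bark of each other."""
    return sum(f * a for f, a in zip(freqs_hz, amps)) / sum(amps)

# Strengthening F2 relative to F1 pulls the perceived peak toward F2:
print(spectral_cog([500.0, 900.0], [1.0, 1.0]))  # → 700.0
print(spectral_cog([500.0, 900.0], [1.0, 3.0]))  # → 800.0
```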

  15. Learning Vowel Categories from Maternal Speech in Gurindji Kriol

    ERIC Educational Resources Information Center

    Jones, Caroline; Meakins, Felicity; Muawiyath, Shujau

    2012-01-01

    Distributional learning is a proposal for how infants might learn early speech sound categories from acoustic input before they know many words. When categories in the input differ greatly in relative frequency and overlap in acoustic space, research in bilingual development suggests that this affects the course of development. In the present…

  16. Embedded Vowels: Remedying the Problems Arising out of Embedded Vowels in the English Writings of Arab Learners

    ERIC Educational Resources Information Center

    Khan, Mohamed Fazlulla

    2013-01-01

    L1 habits often tend to interfere with the process of learning a second language. The vowel habits of Arab learners of English are one such interference. Arabic orthography is such that certain vowels indicated by diacritics are often omitted, since an experienced reader of Arabic knows, by habit, the exact vowel sound in each phonetic…

  17. Vowel Quality and Consonant Voicing: The Production of English Vowels and Final Stops by Korean Speakers of English.

    ERIC Educational Resources Information Center

    Ryoo, Mi-Lim

    2001-01-01

    This study examined the quality of three English vowels and their Korean counterpart vowels by measuring F1/F2 frequencies and investigating how the different vowel qualities influenced consonant voicing. Participants were six native speakers (NS) of English and six NS of Korean who were graduate students at a large U.S. university. F1/F2…

  18. An EMA/EPG Study of Vowel-to-Vowel Articulation across Velars in Southern British English

    ERIC Educational Resources Information Center

    Fletcher, Janet

    2004-01-01

    Recent studies have attested that the extent of transconsonantal vowel-to-vowel coarticulation is at least partly dependent on degree of prosodic accentuation, in languages like English. A further important factor is the mutual compatibility of consonant and vowel gestures associated with the segments in question. In this study two speakers of…

  19. Language experience and consonantal context effects on perceptual assimilation of French vowels by American-English learners of French.

    PubMed

    Levy, Erika S

    2009-02-01

    Recent research has called for an examination of perceptual assimilation patterns in second-language speech learning. This study examined the effects of language learning and consonantal context on perceptual assimilation of Parisian French (PF) front rounded vowels /y/ and /oe/ by American English (AE) learners of French. AE listeners differing in their French language experience (no experience, formal instruction, formal-plus-immersion experience) performed an assimilation task involving PF /y, oe, u, o, i, epsilon, a/ in bilabial /rabVp/ and alveolar /radVt/ contexts, presented in phrases. PF front rounded vowels were assimilated overwhelmingly to back AE vowels. For PF /oe/, assimilation patterns differed as a function of language experience and consonantal context. However, PF /y/ revealed no experience effect in alveolar context. In bilabial context, listeners with extensive experience assimilated PF /y/ to (j)u less often than listeners with no or only formal experience, a pattern predicting the poorest /u-y/ discrimination for the most experienced group. An "internal consistency" analysis indicated that responses were most consistent with extensive language experience and in bilabial context. Acoustical analysis revealed that acoustical similarities among PF vowels alone cannot explain context-specific assimilation patterns. Instead it is suggested that native-language allophonic variation influences context-specific perceptual patterns in second-language learning.

  20. Experimental investigation of acoustic self-oscillation influence on decay process for underexpanded supersonic jet in submerged space

    NASA Astrophysics Data System (ADS)

    Aleksandrov, V. Yu.; Arefyev, K. Yu.; Ilchenko, M. A.

    2016-07-01

    Intensification of mixing between a gaseous working body ejected through a jet nozzle and the ambient medium is an important scientific and technical problem. Effective mixing can increase the total efficiency of power and propulsion apparatuses. A promising, though poorly studied, approach is to generate acoustic self-oscillations inside the jet nozzle: this excitation can enhance the decay of a supersonic jet and improve its mixing parameters. The paper presents the peculiarities of acoustic self-excitation in a jet nozzle, along with results of an experimental study performed on a model injector with a set of plates placed in the flow channel to excite acoustic self-oscillations. The study reveals the regularities of under-expanded supersonic jet decay in a submerged space for different flow modes. The experimental data support the efficiency of using a jet nozzle with acoustic self-oscillations in gas fuel supply systems. The results can be used to design new power apparatuses for the aviation and space industries and for process plants.

  1. Mathematical Modeling of Space-time Variations in Acoustic Transmission and Scattering from Schools of Swim Bladder Fish (FY14 Annual Report)

    DTIC Science & Technology

    2014-09-30

    Mathematical modeling of space-time variations in acoustic transmission and scattering from schools of swim bladder fish (FY14 Annual Report...domain theory of acoustic scattering from, and propagation through, schools of swim bladder fish at and near the swim bladder resonance frequency...coupled differential equations. It incorporates a verified swim bladder scattering kernel for the individual fish, includes multiple scattering

  2. Gust Acoustic Response of a Single Airfoil Using the Space-Time CE/SE Method

    NASA Technical Reports Server (NTRS)

    Scott, James (Technical Monitor); Wang, X. Y.; Chang, S. C.; Himansu, A.; Jorgenson, P. C. E.

    2003-01-01

    A 2D parallel Euler code based on the space-time conservation element and solution element (CE/SE) method is validated by solving benchmark problem 1 in Category 3 of the Third CAA Workshop. This problem concerns the acoustic field generated by the interaction of a convected harmonic vortical gust with a single airfoil. Three gust frequencies, two gust configurations, and three airfoil geometries are considered. Numerical results at both near and far fields are presented and compared with the analytical solutions, solutions from the frequency-domain solver GUST3D, and solutions from a time-domain high-order Discontinuous Spectral Element Method (DSEM). It is shown that the CE/SE solutions agree well with the GUST3D solution at the lowest frequency, while there are discrepancies between the CE/SE and GUST3D solutions at higher frequencies. However, the CE/SE solution is in good agreement with the DSEM solution at these higher frequencies. This demonstrates that the CE/SE method can produce accurate results for CAA problems involving complex geometries by using unstructured meshes.

  3. Where Do Illusory Vowels Come from?

    ERIC Educational Resources Information Center

    Dupoux, Emmanuel; Parlato, Erika; Frota, Sonia; Hirose, Yuki; Peperkamp, Sharon

    2011-01-01

    Listeners of various languages tend to perceive an illusory vowel inside consonant clusters that are illegal in their native language. Here, we test whether this phenomenon arises after phoneme categorization or rather interacts with it. We assess the perception of illegal consonant clusters in native speakers of Japanese, Brazilian Portuguese,…

  4. Teaching About Vowels in Second Grade.

    ERIC Educational Resources Information Center

    Hillerich, Robert L.

    The usefulness of teaching vowel generalizations was studied using three treatment groups, with two second-grade classes in each treatment. The study was considered a pilot investigation to provide direction rather than a definitive research study. In Treatment 1, the McKee Reading for Meaning program was followed, including the teaching of all…

  5. Existence domains of slow and fast ion-acoustic solitons in two-ion space plasmas

    SciTech Connect

    Maharaj, S. K.; Bharuthram, R.; Singh, S. V.; Lakhina, G. S.

    2015-03-15

    A study of large amplitude ion-acoustic solitons is conducted for a model composed of cool and hot ions and cool and hot electrons. Using the Sagdeev pseudo-potential formalism, the scope of earlier studies is extended to consider why upper Mach number limitations arise for slow and fast ion-acoustic solitons. Treating all plasma constituents as adiabatic fluids, slow ion-acoustic solitons are limited in the order of increasing cool ion concentrations by the number densities of the cool, and then the hot ions becoming complex valued, followed by positive and then negative potential double layer regions. Only positive potentials are found for fast ion-acoustic solitons which are limited only by the hot ion number density having to remain real valued. The effect of neglecting as opposed to including inertial effects of the hot electrons is found to induce only minor quantitative changes in the existence regions of slow and fast ion-acoustic solitons.

  6. Vibration, acoustic, and shock design and test criteria for components on the Solid Rocket Boosters (SRB), Lightweight External Tank (LWT), and Space Shuttle Main Engines (SSME)

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The vibration, acoustics, and shock design and test criteria for components and subassemblies on the space shuttle solid rocket booster (SRB), lightweight tank (LWT), and main engines (SSME) are presented. Specifications for transportation, handling, and acceptance testing are also provided.

  7. Arabic Phonology: An Acoustical and Physiological Investigation.

    ERIC Educational Resources Information Center

    Al-Ani, Salman H.

    This book presents an acoustical and physiological investigation of contemporary standard Arabic as spoken in Iraq. Spectrograms and X-ray sound films are used to perform the analysis for the study. With this equipment, the author considers the vowels, consonants, pharyngealized consonants, pharyngeals and glottals, duration, gemination, and…

  8. New design of the pulsed electro-acoustic upper electrode for space charge measurements during electronic irradiation.

    PubMed

    Riffaud, J; Griseri, V; Berquez, L

    2016-07-01

    The behaviour of space charges injected into irradiated dielectrics has been studied for many years for space-industry applications. In our case, the pulsed electro-acoustic method is chosen in order to determine the spatial distribution of injected electrons. The feasibility of a ring-shaped electrode which will allow measurements during irradiation is presented. In this paper, a computer simulation is made in order to determine the parameters needed to design the electrode and to find its position above the sample. Experimental results obtained on polyethylene naphthalate samples during electron irradiation and during relaxation under vacuum are presented and discussed.

  9. New design of the pulsed electro-acoustic upper electrode for space charge measurements during electronic irradiation

    NASA Astrophysics Data System (ADS)

    Riffaud, J.; Griseri, V.; Berquez, L.

    2016-07-01

    The behaviour of space charges injected into irradiated dielectrics has been studied for many years for space-industry applications. In our case, the pulsed electro-acoustic method is chosen in order to determine the spatial distribution of injected electrons. The feasibility of a ring-shaped electrode which will allow measurements during irradiation is presented. In this paper, a computer simulation is made in order to determine the parameters needed to design the electrode and to find its position above the sample. Experimental results obtained on polyethylene naphthalate samples during electron irradiation and during relaxation under vacuum are presented and discussed.

  10. Developmental Dyslexics Show Deficits in the Processing of Temporal Auditory Information in German Vowel Length Discrimination

    ERIC Educational Resources Information Center

    Groth, Katarina; Lachmann, Thomas; Riecker, Axel; Muthmann, Irene; Steinbrink, Claudia

    2011-01-01

    The present study investigated auditory temporal processing in developmental dyslexia by using a vowel length discrimination task. Both temporal and phonological processing were studied in a single experiment. Seven German vowel pairs differing in vowel height were used. The vowels of each pair differed only with respect to vowel length (e.g., /a/…

  11. Volume I. Percussion Sextet. (original Composition). Volume II. The Simulation of Acoustical Space by Means of Physical Modeling.

    NASA Astrophysics Data System (ADS)

    Manzara, Leonard Charles

    1990-01-01

    The dissertation is in two parts. 1. Percussion Sextet. The Percussion Sextet is a one-movement musical composition approximately fifteen minutes in length. It is for six instrumentalists, each on a number of percussion instruments. The overriding formal problem was to construct a coherent and compelling structure which fuses a diversity of musical materials and textures into a dramatic whole. Particularly important is the synthesis of opposing tendencies contained in stochastic and deterministic processes: global textures versus motivic detail, and randomness versus total control. Several compositional techniques are employed in the composition, aided in part by the use of artificial intelligence techniques programmed on a computer. Finally, the percussion ensemble is the ideal medium to realize the above processes since it encompasses a wide range of both pitched and unpitched timbres, and since a great variety of textures and densities can be created with a certain economy of means. 2. The simulation of acoustical space by means of physical modeling. This is a written report describing the research and development of a computer program which simulates the characteristics of acoustical space in two dimensions. With the program the user can simulate most conventional acoustical spaces, as well as those physically impossible to realize in the real world. The program simulates acoustical space by means of geometric modeling. This involves defining wall equations, phantom source points, and wall diffusion, and then processing input files containing digital signals through the program, producing output files ready for digital-to-analog conversion. The user of the program can define wall locations and wall reflectivity and roughness characteristics, all of which can be changed over time. Sound source locations are also definable within the acoustical space and these locations can be changed independently at

  12. Vowel Categorization during Word Recognition in Bilingual Toddlers

    PubMed Central

    Ramon-Casas, Marta; Swingley, Daniel; Sebastián-Gallés, Núria; Bosch, Laura

    2009-01-01

    Toddlers’ and preschoolers’ knowledge of the phonological forms of words was tested in Spanish-learning, Catalan-learning, and bilingual children. These populations are of particular interest because of differences in the Spanish and Catalan vowel systems: Catalan has two vowels in a phonetic region where Spanish has only one. The proximity of the Spanish vowel to the Catalan ones might pose special learning problems. Children were shown picture pairs; the target picture’s name was spoken correctly, or a vowel in the target word was altered. Altered vowels either contrasted with the usual vowel in Spanish and Catalan, or only in Catalan. Children’s looking to the target picture was used as a measure of word recognition. Monolinguals’ word recognition was hindered by within-language, but not non-native, vowel changes. Surprisingly, bilingual toddlers did not show sensitivity to changes in vowels contrastive only in Catalan. Among preschoolers, Catalan-dominant bilinguals but not Spanish-dominant bilinguals revealed mispronunciation sensitivity for the Catalan-only contrast. These studies reveal monolingual children’s robust knowledge of native-language vowel categories in words, and show that bilingual children whose two languages contain phonetically overlapping vowel categories may not treat those categories as separate in language comprehension. PMID:19338984

  13. Dynamic spectral structure specifies vowels for children and adults

    PubMed Central

    Nittrouer, Susan

    2008-01-01

    When it comes to making decisions regarding vowel quality, adults seem to weight dynamic syllable structure more strongly than static structure, although disagreement exists over the nature of the most relevant kind of dynamic structure: spectral change intrinsic to the vowel or structure arising from movements between consonant and vowel constrictions. Results have been even less clear regarding the signal components children use in making vowel judgments. In this experiment, listeners of four different ages (adults, and 3-, 5-, and 7-year-old children) were asked to label stimuli that sounded either like steady-state vowels or like CVC syllables which sometimes had middle sections masked by coughs. Four vowel contrasts were used, crossed for type (front/back or closed/open) and consonant context (strongly or only slightly constraining of vowel tongue position). All listeners recognized vowel quality with high levels of accuracy in all conditions, but children were disproportionately hampered by strong coarticulatory effects when only steady-state formants were available. Results clarified past studies, showing that dynamic structure is critical to vowel perception for all aged listeners, but particularly for young children, and that it is the dynamic structure arising from vocal-tract movement between consonant and vowel constrictions that is most important. PMID:17902868

  14. Perception of vowels by learners of Spanish and English

    NASA Astrophysics Data System (ADS)

    Garcia-Bayonas, Mariche

    2005-04-01

    This study investigates the perception of the English vowels /i ɪ/, /u ʊ/, and /e eɪ/ and Spanish /i u e/ by native speakers (NS) and learners (L), and compares the two sets of vowels cross-linguistically. Research on the acquisition of vowels indicates that learners can improve their perception with exposure to the second language [Bohn and Flege (1990)]. Johnson, Flemming, and Wright (1993) investigated the hyperspace effect and how listeners tended to choose extreme vowel qualities in a method of adjustment (MOA) task. The theoretical framework of this study is Flege's (1995) Speech Learning Model. The research question is: Are vowels selected differently by NS and L using synthesized data? Spanish learners (n=54) and English learners (n=17) completed MOA tasks in which they were exposed to 330 synthetically produced vowels, to analyze spectral differences in the acquisition of both sound systems and how the learners' vowel system may vary from that of the NS. In the MOA tasks they were asked to select which synthesized vowel sounds most resembled the ones whose spelling was presented to them. The results include an overview of the vowel formant analysis performed and of which vowels are the most challenging to learners.

  15. Clustering of the Least Squares Lattice PARCOR (Partial Correlation) Coefficients: A Pattern-Recognition Approach to Steady State Synthetic Vowel Identification.

    DTIC Science & Technology

    1983-08-01

    the "formant space" defined by F1 and F2 corresponds to the spatial location of the tongue hump in the two-dimensional representation of the oral...computationally efficient method of vowel identification than identification by formant frequencies, which involves the computation of poles and zeros and the...back-calculation of formant frequencies and formant bandwidths. It is well documented in the literature that steady state vowel sounds may be identified

  16. Characterization of Pump-Induced Acoustics in Space Launch System Main Propulsion System Liquid Hydrogen Feedline Using Airflow Test Data

    NASA Technical Reports Server (NTRS)

    Eberhart, C. J.; Snellgrove, L. M.; Zoladz, T. F.

    2015-01-01

    High intensity acoustic edgetones located upstream of the RS-25 Low Pressure Fuel Turbo Pump (LPFTP) were previously observed during Space Transportation System (STS)-era airflow testing of a model Main Propulsion System (MPS) liquid hydrogen (LH2) feedline mated to a modified LPFTP. MPS hardware has been adapted to mitigate the problematic edgetones as part of the Space Launch System (SLS) program. A follow-on airflow test campaign has subjected the adapted hardware to tests mimicking STS-era airflow conditions, and this manuscript describes the acoustic environment identification and characterization born from the latest test results. The fluid dynamics responsible for driving discrete excitations were well reproduced using legacy hardware. The modified design was found to be insensitive to high intensity edgetone-like discretes over the bandwidth of interest to SLS MPS unsteady environments. Rather, the natural acoustics of the test article were observed to respond in a narrowband-random/mixed-discrete manner to broadband noise thought to be generated by the flow field. The intensity of these responses was several orders of magnitude lower than that of the edgetone-driven responses.

  17. On the number of channels needed to classify vowels: Implications for cochlear implants

    NASA Astrophysics Data System (ADS)

    Fourakis, Marios; Hawks, John W.; Davis, Erin

    2005-09-01

    In cochlear implants the incoming signal is analyzed by a bank of filters. Each filter is associated with an electrode to constitute a channel. The present research seeks to determine the number of channels needed for optimal vowel classification. Formant measurements of vowels produced by men and women [Hillenbrand et al., J. Acoust. Soc. Am. 97, 3099-3111 (1995)] were converted to channel assignments. The number of channels varied from 4 to 20 over two frequency ranges (180-4000 and 180-6000 Hz) in equal bark steps. Channel assignments were submitted to linear discriminant analysis (LDA). Classification accuracy increased with the number of channels, ranging from 30% with 4 channels to 98% with 20 channels, both for the female voice. To determine asymptotic performance, LDA classification scores were plotted against the number of channels and fitted with quadratic equations. The number of channels at which no further improvement occurred was determined, averaging 19 across all conditions with little variation. This number of channels seems to resolve the frequency range spanned by the first three formants finely enough to maximize vowel classification. This resolution may not be achieved using six or eight channels as previously proposed. [Work supported by NIH.]
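The formant-to-channel conversion described above can be sketched as follows. This is a minimal illustration, assuming the Traunmüller approximation of the bark scale (the abstract does not say which critical-band formula was used) and hypothetical /i/-like formant values:

```python
from bisect import bisect_right

def hz_to_bark(f):
    # Traunmüller (1990) bark approximation -- an assumption; the study
    # does not specify which bark formula was used
    return 26.81 * f / (1960.0 + f) - 0.53

def bark_to_hz(z):
    # algebraic inverse of the Traunmüller formula
    return 1960.0 * (z + 0.53) / (26.28 - z)

def channel_edges(n_channels, f_lo=180.0, f_hi=4000.0):
    # n_channels filter bands spaced in equal bark steps over [f_lo, f_hi]
    z_lo, z_hi = hz_to_bark(f_lo), hz_to_bark(f_hi)
    step = (z_hi - z_lo) / n_channels
    return [bark_to_hz(z_lo + i * step) for i in range(n_channels + 1)]

def assign_channel(formant_hz, edges):
    # index of the band containing the formant (clamped to a valid channel)
    return min(max(bisect_right(edges, formant_hz) - 1, 0), len(edges) - 2)

edges = channel_edges(20)
# hypothetical /i/-like F1-F3 values (Hz), for illustration only
channels = [assign_channel(f, edges) for f in (270.0, 2290.0, 3010.0)]
```

The resulting channel indices (rather than raw formant frequencies in Hz) would then serve as the feature vectors submitted to the discriminant analysis.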

  18. Prosodic effects on glide-vowel sequences in three Romance languages

    NASA Astrophysics Data System (ADS)

    Chitoran, Ioana

    2004-05-01

    Glide-vowel sequences occur in many Romance languages. In some they can vary in production, ranging from diphthongal pronunciation [ja,je] to hiatus [ia,ie]. According to native speakers' impressionistic perceptions, Spanish and Romanian both exhibit this variation, but to different degrees. Spanish favors glide-vowel sequences, while Romanian favors hiatus, occasionally resulting in different pronunciations of the same items: Spanish (b[j]ela, ind[j]ana), Romanian (b[i]ela, ind[i]ana). The third language, French, has glide-vowel sequences consistently (b[j]elle). This study tests the effect of position in the word on the acoustic duration of the sequences. Shorter duration indicates diphthong production [jV], while longer duration indicates hiatus [iV]. Eleven speakers (4 Spanish, 4 Romanian, 3 French) were recorded. Spanish and Romanian showed a word position effect. Word-initial sequences were significantly longer than word-medial ones (p<0.001), consistent with native speakers' more frequent description of hiatus word-initially than medially. The effect was not found in French (p>0.05). In the Spanish and Romanian sentences, V in the sequence bears pitch accent, but not in French. It is therefore possible that duration is sensitive not to the presence/absence of the word boundary, but to its position relative to pitch accent. The results suggest that the word position effect is crucially enhanced by pitch accent on V.

  19. Prosodic effects on glide-vowel sequences in three Romance languages

    NASA Astrophysics Data System (ADS)

    Chitoran, Ioana

    2001-05-01

    Glide-vowel sequences occur in many Romance languages. In some they can vary in production, ranging from diphthongal pronunciation [ja,je] to hiatus [ia,ie]. According to native speakers' impressionistic perceptions, Spanish and Romanian both exhibit this variation, but to different degrees. Spanish favors glide-vowel sequences, while Romanian favors hiatus, occasionally resulting in different pronunciations of the same items: Spanish (b[j]ela, ind[j]ana), Romanian (b[i]ela, ind[i]ana). The third language, French, has glide-vowel sequences consistently (b[j]elle). This study tests the effect of position in the word on the acoustic duration of the sequences. Shorter duration indicates diphthong production [jV], while longer duration indicates hiatus [iV]. Eleven speakers (4 Spanish, 4 Romanian, 3 French) were recorded. Spanish and Romanian showed a word position effect. Word-initial sequences were significantly longer than word-medial ones (p<0.001), consistent with native speakers' more frequent description of hiatus word-initially than medially. The effect was not found in French (p>0.05). In the Spanish and Romanian sentences, V in the sequence bears pitch accent, but not in French. It is therefore possible that duration is sensitive not to the presence/absence of the word boundary, but to its position relative to pitch accent. The results suggest that the word position effect is crucially enhanced by pitch accent on V.

  20. Simulating and understanding the effects of velar coupling area on nasalized vowel spectra

    NASA Astrophysics Data System (ADS)

    Pruthi, Tarun; Espy-Wilson, Carol Y.

    2005-09-01

    MRI-based area functions for the nasal cavity of one speaker were combined with the area functions for the vowels /iy/ and /aa/ to study nasalized vowels. The oral cavity was compensated for the falling velum by decreasing the oral cavity area by an amount equal to the increase in the nasal cavity area. Susceptance plots were used along with the simulated transfer functions to understand the effects of velar coupling on nasalized vowel spectra. Susceptance plots of -(Bp+Bo) and Bn suggested significant deviation from the rules suggested by O. Fujimura and J. Lindqvist [J. Acoust. Soc. Am. 49(2), 541-558 (1971)]. In particular, the plots showed that: (1) the frequency of zero crossings of the susceptance plots changes with a change in the coupling area, and (2) formant frequencies need not shift monotonically upward with an increase in coupling area. Further, as a consequence of (1), and the fact that an increase in the coupling area results in a shift of Bn to the right and -(Bp+Bo) to the left, it is postulated that zero crossings of the two plots can cross each other. [MRI data from Brad Story. Work supported by NSF Grant No. BCS0236707.]
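The susceptance analysis in this abstract rests on the coupled-resonator pole condition of Fujimura and Lindqvist: resonances (formants) of the nasalized vowel occur where the total susceptance at the velar coupling point vanishes,

```latex
B_p(f) + B_o(f) + B_n(f) = 0
\quad\Longleftrightarrow\quad
B_n(f) = -\bigl(B_p(f) + B_o(f)\bigr),
```

where, following the abstract's notation, $B_n$ is the nasal-branch susceptance and $B_p$ and $B_o$ are the susceptances of the pharyngeal and oral branches. Formants thus sit at the crossings of the $B_n$ and $-(B_p+B_o)$ curves, which is why coupling-area-dependent shifts in those curves translate directly into the non-monotonic formant movements reported above.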

  1. Nonlinear dust-acoustic structures in space plasmas with superthermal electrons, positrons, and ions

    NASA Astrophysics Data System (ADS)

    Saberian, E.; Esfandyari-Kalejahi, A.; Afsari-Ghazi, M.

    2017-01-01

    Some features of nonlinear dust-acoustic (DA) structures are investigated in a space plasma consisting of superthermal electrons, positrons, and positive ions in the presence of negatively charged dust grains with finite temperature by employing a pseudo-potential technique in a hydrodynamic model. For this purpose, it is assumed that the electrons, positrons, and ions obey a kappa-like (κ) distribution in the background of adiabatic dust population. In the linear analysis, it is found that the dispersion relation yields two positive DA branches, i.e., the slow and fast DA waves. The upper branch (fast DA waves) corresponds to the case in which both (negatively charged) dust particles and (positively charged) ion species oscillate in phase with electrons and positrons. On the other hand, the lower branch (slow DA waves) corresponds to the case in which only dust particles oscillate in phase with electrons and positrons, while ion species are in antiphase with them. On the other hand, the fully nonlinear analysis shows that the existence domain of solitons and their characteristics depend strongly on the dust charge, ion charge, dust temperature, and the spectral index κ. It is found that the minimum/maximum Mach number increases as the spectral index κ increases. Also, it is found that only solitons with negative polarity can propagate and that their amplitudes increase as the parameter κ increases. Furthermore, the domain of Mach number shifts to the lower values when the value of the dust charge Zd increases. Moreover, it is found that the Mach number increases with an increase in the dust temperature. Our analysis confirms that, in space plasmas with highly charged dusts, the presence of superthermal particles (electrons, positrons, and ions) may facilitate the formation of DA solitary waves. Particularly, in two cases of hydrogen ions H+ (Zi = 1) and doubly ionized helium atoms He2+ (Zi = 2), the mentioned results are the same. Additionally, the

  2. An analytical solution versus half space BEM formulation for acoustic radiation and scattering from a rigid sphere

    NASA Astrophysics Data System (ADS)

    Soenarko, B.; Setiadikarunia, D.

    2016-11-01

    A half space problem in acoustics is described by introducing an infinite plane boundary that reflects the waves impinging on it. A numerical solution using the Boundary Element Method (BEM) is known which is formulated using a modified Green's function in the Helmholtz integral formulation, eliminating the discretization over the infinite plane; the discretization is thus confined to the body or obstacle in question. This feature constitutes the main advantage of the BEM formulation for half space problems. However, no general analytical solution is available to verify the BEM results for half space problems. This paper proposes an analytical solution against which the BEM can be compared, hence verifying the BEM calculation. The analytical approach is developed for a half space problem involving radiation and scattering of acoustic waves from a rigid sphere. The images of the sphere and of the field point are defined with respect to the infinite plane. An ad hoc solution is then assumed involving a constant, the distance from the center of the sphere to the field point, and the distance from the center of the image of the sphere to the field point. The constant is determined by imposing the boundary conditions. Test cases were run with several configurations involving the locations of the field points and the sphere. Comparison of the analytical solution with BEM calculations shows good agreement between the two results.
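For a rigid reflecting plane, the modified Green's function mentioned above is conventionally built by the image-source construction: the free-field Green's function plus the same function evaluated from the mirror image of the source. A minimal numerical sketch under that assumption (not the paper's BEM implementation; the wavenumber and point coordinates are illustrative):

```python
import cmath
import math

def greens_free(x, x0, k):
    # free-field Green's function G(r) = exp(ikr) / (4*pi*r)
    # for a point source at x0 observed at x
    r = math.dist(x, x0)
    return cmath.exp(1j * k * r) / (4.0 * math.pi * r)

def greens_halfspace_rigid(x, x0, k):
    # modified Green's function for a rigid plane at z = 0: the reflection
    # is accounted for by adding the mirror-image source at (x0, y0, -z0),
    # so the infinite plane itself never needs to be discretized
    x0_img = (x0[0], x0[1], -x0[2])
    return greens_free(x, x0, k) + greens_free(x, x0_img, k)

# at a field point on the plane z = 0, the source and its image are
# equidistant, so their contributions are equal; this symmetry is what
# enforces the rigid (zero normal velocity) boundary condition
k = 2.0 * math.pi  # wavenumber, arbitrary illustrative value
p = greens_halfspace_rigid((1.0, 0.0, 0.0), (0.0, 0.0, 0.5), k)
```

In a half-space BEM code, this `greens_halfspace_rigid` kernel simply replaces the free-field kernel in the Helmholtz integral, which is the advantage the abstract highlights.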

  3. Role of vocal tract morphology in speech development: perceptual targets and sensorimotor maps for synthesized French vowels from birth to adulthood.

    PubMed

    Ménard, Lucie; Schwartz, Jean-Luc; Boë, Louis-Jean

    2004-10-01

    The development of speech from infancy to adulthood results from the interaction of neurocognitive factors, by which phonological representations and motor control abilities are gradually acquired, and physical factors, involving the complex changes in the morphology of the articulatory system. In this article, an articulatory-to-acoustic model, integrating nonuniform vocal tract growth, is used to describe the effect of morphology in the acoustic and perceptual domains. While simulating mature control abilities of the articulators (freezing neurocognitive factors), the size and shape of the vocal apparatus are varied, to represent typical values of speakers from birth to adulthood. The results show that anatomy does not prevent even the youngest speaker from producing vowels perceived as the 10 French oral vowels /i y u e ø o ɛ œ ɔ a/. However, the specific configuration of the vocal tract for the newborn seems to favor the production of those vowels perceived as low and front. An examination of the acoustic effects of articulatory variation for different growth stages led to the proposed variable sensorimotor maps for newbornlike, childlike, and adultlike vocal tracts. These maps could be used by transcribers of infant speech, to complete existing systems and to provide some hints about underlying articulatory gestures recruited during growth to reach perceptual vowel targets in French.

  4. The Role of Secondary-Stressed and Unstressed-Unreduced Syllables in Word Recognition: Acoustic and Perceptual Studies with Russian Learners of English

    ERIC Educational Resources Information Center

    Banzina, Elina; Dilley, Laura C.; Hewitt, Lynne E.

    2016-01-01

    The importance of secondary-stressed (SS) and unstressed-unreduced (UU) syllable accuracy for spoken word recognition in English is as yet unclear. An acoustic study first investigated Russian learners' of English production of SS and UU syllables. Significant vowel quality and duration reductions in Russian-spoken SS and UU vowels were found,…

  5. The Perception of Scale in Vowels

    DTIC Science & Technology

    2007-11-02

    identification experiments. The horizontal axis is log glottal pulse rate in Hz. The vertical axis is log Spectral Envelope Ratio (SER). The...glottal pulse rate in Hz and Spectral Envelope Ratio (SER). The SER values are shown above the glottal pulse rate axis. Smooth curves through the data...is the anatomical basis for the differences in the vowels of men, women and children? To understand this we need to know how the complex tonal sounds

  6. Vowelling and semantic priming effects in Arabic.

    PubMed

    Mountaj, Nadia; El Yagoubi, Radouane; Himmi, Majid; Lakhdar Ghazal, Faouzi; Besson, Mireille; Boudelaa, Sami

    2015-01-01

    In the present experiment we used a semantic judgment task with Arabic words to determine whether semantic priming effects are found in the Arabic language. Moreover, we took advantage of the specificity of the Arabic orthographic system, which is characterized by a shallow (i.e., vowelled words) and a deep orthography (i.e., unvowelled words), to examine the relationship between orthographic and semantic processing. Results showed faster Reaction Times (RTs) for semantically related than unrelated words with no difference between vowelled and unvowelled words. By contrast, Event Related Potentials (ERPs) revealed larger N1 and N2 components to vowelled words than unvowelled words suggesting that visual-orthographic complexity taxes the early word processing stages. Moreover, semantically unrelated Arabic words elicited larger N400 components than related words thereby demonstrating N400 effects in Arabic. Finally, the Arabic N400 effect was not influenced by orthographic depth. The implications of these results for understanding the processing of orthographic, semantic, and morphological structures in Modern Standard Arabic are discussed.

  7. Discrimination of synthesized English vowels by American and Korean listeners

    NASA Astrophysics Data System (ADS)

    Yang, Byunggon

    2004-05-01

    This study explored the discrimination of synthesized English vowel pairs by 27 American and Korean male and female listeners. The average formant values of nine monophthongs produced by ten American English male speakers were employed to synthesize the vowels. Then, subjects were instructed explicitly to respond to AX discrimination tasks in which the standard vowel was followed by another one with an increment or decrement of the original formant values. The highest and lowest formant values of the same vowel quality were collected and compared to examine patterns of vowel discrimination. Results showed that the American and Korean groups discriminated the vowel pairs almost identically, and their center formant frequency values of the high and low boundary fell almost exactly on those of the standards. In addition, the acceptable range of the same vowel quality was similar across the language and gender groups. The acceptable thresholds of each vowel formed an oval to maintain perceptual contrast from adjacent vowels. Pedagogical implications of those findings are discussed.

  8. Intrinsic fundamental frequency of vowels is moderated by regional dialect

    PubMed Central

    Jacewicz, Ewa; Fox, Robert Allen

    2015-01-01

    There has been a long-standing debate whether the intrinsic fundamental frequency (IF0) of vowels is an automatic consequence of articulation or whether it is independently controlled by speakers to perceptually enhance vowel contrasts along the height dimension. This paper provides evidence from regional variation in American English that IF0 difference between high and low vowels is, in part, controlled and varies across dialects. The sources of this F0 control are socio-cultural and cannot be attributed to differences in the vowel inventory size. The socially motivated enhancement was found only in prosodically prominent contexts. PMID:26520352

  9. Baryon acoustic oscillations in 2D. II. Redshift-space halo clustering in N-body simulations

    NASA Astrophysics Data System (ADS)

    Nishimichi, Takahiro; Taruya, Atsushi

    2011-08-01

    We measure the halo power spectrum in redshift space from cosmological N-body simulations, and test the analytical models of redshift distortions particularly focusing on the scales of baryon acoustic oscillations. Remarkably, the measured halo power spectrum in redshift space exhibits a large-scale enhancement in amplitude relative to the real-space clustering, and the effect becomes significant for the massive or highly biased halo samples. These findings cannot be simply explained by the so-called streaming model frequently used in the literature. By contrast, a physically motivated perturbation theory model developed in the previous paper reproduces the halo power spectrum very well, and the model combining a simple linear scale-dependent bias can accurately characterize the clustering anisotropies of halos in two dimensions, i.e., line-of-sight and its perpendicular directions. The results highlight the significance of nonlinear coupling between density and velocity fields associated with two competing effects of redshift distortions, i.e., Kaiser and Finger-of-God effects, and a proper account of this effect would be important in accurately characterizing the baryon acoustic oscillations in two dimensions.
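The large-scale enhancement described above is, at linear order, the Kaiser effect. As a point of orientation only (not the perturbation-theory model the paper develops), the linear-theory boost of the redshift-space monopole over the real-space halo power can be sketched as follows; the growth rate f and bias b values are illustrative:

```python
# Kaiser (1987) linear redshift-space distortion: a minimal sketch,
# NOT the perturbation-theory model developed in the paper.
# beta = f / b, with f the linear growth rate and b the halo bias.

def kaiser_monopole_boost(f, b):
    """Ratio of the redshift-space monopole power to the real-space
    halo power, (1 + 2/3 beta + 1/5 beta^2), in linear theory."""
    beta = f / b
    return 1.0 + (2.0 / 3.0) * beta + (1.0 / 5.0) * beta**2

# Example: growth rate f = 0.5 with a highly biased sample b = 2
# gives roughly an 18% large-scale enhancement in amplitude.
print(kaiser_monopole_boost(0.5, 2.0))
```

Note that for more massive (more biased) samples beta shrinks, so the fractional boost from this linear term alone decreases; the amplitude enhancement the simulations show for massive halos is precisely what motivates going beyond this simple formula.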

  10. A k-space method for acoustic propagation using coupled first-order equations in three dimensions.

    PubMed

    Tillett, Jason C; Daoud, Mohammad I; Lacefield, James C; Waag, Robert C

    2009-09-01

    A previously described two-dimensional k-space method for large-scale calculation of acoustic wave propagation in tissues is extended to three dimensions. The three-dimensional method contains all of the two-dimensional method features that allow accurate and stable calculation of propagation. These features are spectral calculation of spatial derivatives, temporal correction that produces exact propagation in a homogeneous medium, staggered spatial and temporal grids, and a perfectly matched boundary layer. Spectral evaluation of spatial derivatives is accomplished using a fast Fourier transform in three dimensions. This computational bottleneck requires all-to-all communication; execution time in a parallel implementation is therefore sensitive to node interconnect latency and bandwidth. Accuracy of the three-dimensional method is evaluated through comparisons with exact solutions for media having spherical inhomogeneities. Large-scale calculations in three dimensions were performed by distributing the nearly 50 variables per voxel that are used to implement the method over a cluster of computers. Two computer clusters used to evaluate method accuracy are compared. Comparisons of k-space calculations with exact methods including absorption highlight the need to model accurately the medium dispersion relationships, especially in large-scale media. Accurately modeled media allow the k-space method to calculate acoustic propagation in tissues over hundreds of wavelengths.
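The spectral evaluation of spatial derivatives mentioned above can be illustrated in one dimension; this is a minimal sketch of the underlying FFT idea, not the authors' implementation, which operates on staggered 3D grids with temporal correction:

```python
import numpy as np

# Spectral (FFT-based) spatial derivative in 1D: differentiate in the
# wavenumber domain by multiplying each Fourier mode by i*k.
def spectral_derivative(u, dx):
    """Differentiate a periodic, uniformly sampled signal u via FFT."""
    n = len(u)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)   # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

# Verify on u(x) = sin(x): the derivative should match cos(x) to
# near machine precision (spectral accuracy for smooth periodic data).
x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
err = np.max(np.abs(spectral_derivative(np.sin(x), x[1] - x[0]) - np.cos(x)))
print(err)
```

In three dimensions the same multiplication by i*k is applied along each axis of a 3D FFT, which is why the all-to-all communication of the distributed transform becomes the computational bottleneck the abstract describes.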

  11. Measurement of a broadband negative index with space-coiling acoustic metamaterials.

    PubMed

    Xie, Yangbo; Popa, Bogdan-Ioan; Zigoneanu, Lucian; Cummer, Steven A

    2013-04-26

    We report the experimental demonstration of a broadband negative refractive index obtained in a labyrinthine acoustic metamaterial structure. Two different approaches were employed to prove the metamaterial negative index nature: one-dimensional extractions of effective parameters from reflection and transmission measurements and two-dimensional prism-based measurements that convincingly show the transmission angle corresponding to negative refraction. The transmission angles observed in the latter case also agree very well with the refractive index obtained in the one-dimensional measurements and numerical simulations. We expect this labyrinthine metamaterial to become the unit cell of choice for practical acoustic metamaterial devices that require broadband and significantly negative indices of refraction.

  12. Radiometric and photometric design for an Acoustic Containerless Experiment System. [for space processing

    NASA Technical Reports Server (NTRS)

    Glavich, T. A.

    1981-01-01

    The design of an optical system for a high temperature Acoustic Containerless Experiment System is examined. The optical system provides two-axis video, cine and infrared images of an acoustically positioned sample over a temperature range of 20 to 1200 C. Emphasis is placed on the radiometric and photometric characterization of the elements in the optical system and the oven to assist image data determination. Sample visibility due to wall radiance is investigated along with visibility due to strobe radiance. The optical system is designed for operation in Spacelab, and is used for a variety of materials processing experiments.

  13. Vibro-Acoustic Forecast for Space Shuttle Launches at Vandenberg AFB: The Payload Changeout Room and the Administration Building,

    DTIC Science & Technology

    2014-09-26

    [OCR-garbled record] Vibro-acoustic forecast for Space Shuttle launches at Vandenberg AFB: the Payload Changeout Room and the Administration Building (Weston Observatory, F. A. Crowley et al., 31 Oct 84). Recoverable fragments describe an equivalent acoustic source about 100 meters below the Shuttle at an altitude of 300 meters, a forecast constrained to the first portion of the Shuttle trajectory, and backscatter off the Payload Changeout Room's south wall as the Shuttle moves south.

  14. Acoustic puncture assist device™ versus conventional loss of resistance technique for thoracic paravertebral space identification: Clinical and ultrasound evaluation

    PubMed Central

    Ali, Monaz Abdulrahman; Abdellatif, Ashraf Abualhasan

    2017-01-01

    Background: The acoustic puncture assist device (APAD™) combines pressure measurement with a related acoustic signal and has been successfully used to facilitate epidural punctures. The principle of loss of resistance (LOR) is similar when performing paravertebral block (PVB). We investigated the usefulness of APAD™ by comparing it with the conventional LOR technique for identifying the paravertebral space (PVS). Subjects and Methods: A total of 100 women who were scheduled for elective breast surgery under general anesthesia with PVB were randomized into two equal groups. The first group (APAD group) was scheduled for PVB using APAD™. The second group (C group) was scheduled for PVB using the conventional LOR technique. We recorded the success rate assessed by clinical and ultrasound findings, the time required to identify the PVS, the depth of the PVS, and the number of attempts. The attending anesthesiologist was also questioned about the usefulness of the acoustic signal for detection of the PVS. Results: The incidence of successful PVB was 49 in the APAD group compared to 42 in the C group (P < 0.05). The time required to perform PVB was significantly shorter in the APAD group than in the C group (3.5 ± 1.35 vs. 4.1 ± 1.42 min). Two patients in the APAD group needed two or more attempts, compared to four patients in the C group. The attending anesthesiologist found the acoustic signal valuable in all patients in the APAD group. Conclusion: Compared to the conventional LOR technique, using APAD™ showed a lower failure rate and a shorter time to identify the PVS. PMID:28217050

  15. Arbitrary amplitude slow electron-acoustic solitons in three-electron temperature space plasmas

    SciTech Connect

    Mbuli, L. N.; Maharaj, S. K.; Bharuthram, R.; Singh, S. V.; Lakhina, G. S.

    2015-06-15

    We examine the characteristics of large amplitude slow electron-acoustic solitons supported in a four-component unmagnetised plasma composed of cool, warm, hot electrons, and cool ions. The inertia and pressure for all the species in this plasma system are retained by assuming that they are adiabatic fluids. Our findings reveal that both positive and negative potential slow electron-acoustic solitons are supported in the four-component plasma system. The polarity switch of the slow electron-acoustic solitons is determined by the number densities of the cool and warm electrons. Negative potential solitons, which are limited by the cool and warm electron number densities becoming unreal and the occurrence of negative potential double layers, are found for low values of the cool electron density, while the positive potential solitons occurring for large values of the cool electron density are only limited by positive potential double layers. Both the lower and upper Mach numbers for the slow electron-acoustic solitons are computed and discussed.

  16. Correlations of decision weights and cognitive function for the masked discrimination of vowels by young and old adults.

    PubMed

    Gilbertson, Lynn; Lutfi, Robert A

    2014-11-01

    Older adults are often reported in the literature to have greater difficulty than younger adults understanding speech in noise [Helfer and Wilber (1988). J. Acoust. Soc. Am, 859-893]. The poorer performance of older adults has been attributed to a general deterioration of cognitive processing, deterioration of cochlear anatomy, and/or greater difficulty segregating speech from noise. The current work used perturbation analysis [Berg (1990). J. Acoust. Soc. Am., 149-158] to provide a more specific assessment of the effect of cognitive factors on speech perception in noise. Sixteen older (age 56-79 years) and seventeen younger (age 19-30 years) adults discriminated a target vowel masked by randomly selected masker vowels immediately preceding and following the target. Relative decision weights on target and maskers resulting from the analysis revealed large individual differences across participants despite similar performance scores in many cases. On the most difficult vowel discriminations, the older adult decision weights were significantly correlated with inhibitory control (Color Word Interference test) and pure-tone threshold averages (PTA). Young adult decision weights were not correlated with any measures of peripheral (PTA) or central function (inhibition or working memory).
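Perturbation analysis in the sense of Berg (1990) estimates decision weights by regressing trial-by-trial responses on the random perturbations applied to each stimulus component. A hypothetical simulation of that logic (the weights, noise level, and trial counts below are invented for illustration, not the study's data):

```python
import numpy as np

# Hypothetical sketch of perturbation analysis: each trial jitters the
# preceding masker, the target, and the following masker; regressing
# the listener's decision variable on those jitters recovers the
# relative weight given to each component.
rng = np.random.default_rng(0)
n_trials = 5000
# columns: perturbation of [preceding masker, target, following masker]
X = rng.normal(0.0, 1.0, size=(n_trials, 3))
true_w = np.array([0.2, 1.0, 0.4])          # assumed listener weights
decisions = X @ true_w + rng.normal(0.0, 0.5, n_trials)  # internal noise

# Least-squares estimate of the weights, normalized to the target weight
w_hat, *_ = np.linalg.lstsq(X, decisions, rcond=None)
rel_w = w_hat / w_hat[1]
print(rel_w)  # approximately [0.2, 1.0, 0.4]
```

An ideal listener would show a relative weight of 1 on the target and 0 on both maskers; nonzero masker weights of the kind simulated here are what the analysis uses to separate segregation failures from purely peripheral deficits.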

  17. Correlations of decision weights and cognitive function for the masked discrimination of vowels by young and old adults

    PubMed Central

    Lutfi, Robert A.

    2014-01-01

    Older adults are often reported in the literature to have greater difficulty than younger adults understanding speech in noise [Helfer and Wilber (1988). J. Acoust. Soc. Am, 859–893]. The poorer performance of older adults has been attributed to a general deterioration of cognitive processing, deterioration of cochlear anatomy, and/or greater difficulty segregating speech from noise. The current work used perturbation analysis [Berg (1990). J. Acoust. Soc. Am., 149–158] to provide a more specific assessment of the effect of cognitive factors on speech perception in noise. Sixteen older (age 56–79 years) and seventeen younger (age 19–30 years) adults discriminated a target vowel masked by randomly selected masker vowels immediately preceding and following the target. Relative decision weights on target and maskers resulting from the analysis revealed large individual differences across participants despite similar performance scores in many cases. On the most difficult vowel discriminations, the older adult decision weights were significantly correlated with inhibitory control (Color Word Interference test) and pure-tone threshold averages (PTA). Young adult decision weights were not correlated with any measures of peripheral (PTA) or central function (inhibition or working memory). PMID:25256580

  18. Automatic pronunciation error detection in non-native speech: the case of vowel errors in Dutch.

    PubMed

    van Doremalen, Joost; Cucchiarini, Catia; Strik, Helmer

    2013-08-01

    This research is aimed at analyzing and improving automatic pronunciation error detection in a second language. Dutch vowels spoken by adult non-native learners of Dutch are used as a test case. A first study on Dutch pronunciation by L2 learners with different L1s revealed that vowel pronunciation errors are relatively frequent and often concern subtle acoustic differences between the realization and the target sound. In a second study automatic pronunciation error detection experiments were conducted to compare existing measures to a metric that takes account of the error patterns observed to capture relevant acoustic differences. The results of the two studies do indeed show that error patterns bear information that can be usefully employed in weighted automatic measures of pronunciation quality. In addition, it appears that combining such a weighted metric with existing measures improves the equal error rate by 6.1 percentage points from 0.297, for the Goodness of Pronunciation (GOP) algorithm, to 0.236.
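The equal error rate (EER) reported above is the operating point at which the miss rate equals the false-alarm rate. A minimal sketch of how an EER is computed from detector scores (the score distributions below are synthetic, not GOP outputs):

```python
import numpy as np

# Equal error rate: sweep a decision threshold over the pooled scores
# and find where the miss rate and false-alarm rate cross.
def equal_error_rate(error_scores, correct_scores):
    """EER for a detector whose higher scores mean 'more likely an error'."""
    thresholds = np.sort(np.concatenate([error_scores, correct_scores]))
    best_gap, eer = 1.0, 1.0
    for t in thresholds:
        miss = np.mean(error_scores < t)            # true errors not flagged
        false_alarm = np.mean(correct_scores >= t)  # correct vowels flagged
        if abs(miss - false_alarm) < best_gap:
            best_gap = abs(miss - false_alarm)
            eer = (miss + false_alarm) / 2.0
    return eer

rng = np.random.default_rng(1)
errs = rng.normal(1.0, 1.0, 1000)   # synthetic scores for mispronunciations
oks = rng.normal(-1.0, 1.0, 1000)   # synthetic scores for correct vowels
print(equal_error_rate(errs, oks))  # roughly 0.16 for 2-sigma separation
```

Lower is better, so the study's improvement from 0.297 to 0.236 (6.1 percentage points) means the weighted metric flags mispronunciations with noticeably fewer combined misses and false alarms than GOP alone.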

  19. Acoustic-phonetic correlates of talker intelligibility for adults and children

    NASA Astrophysics Data System (ADS)

    Hazan, Valerie; Markham, Duncan

    2004-11-01

    This study investigated acoustic-phonetic correlates of intelligibility for adult and child talkers, and whether the relative intelligibility of different talkers was dependent on listener characteristics. In experiment 1, word intelligibility was measured for 45 talkers (18 women, 15 men, 6 boys, 6 girls) from a homogeneous accent group. The material consisted of 124 words familiar to 7-year-olds that adequately covered all frequent consonant confusions; stimuli were presented to 135 adult and child listeners in low-level background noise. Seven-to-eight-year-old listeners made significantly more errors than 12-year-olds or adults, but the relative intelligibility of individual talkers was highly consistent across groups. In experiment 2, listener ratings on a number of voice dimensions were obtained for the adult talkers identified in experiment 1 as having the highest and lowest intelligibility. Intelligibility was significantly correlated with subjective dimensions reflecting articulation, voice dynamics, and general quality. Finally, in experiment 3, measures of fundamental frequency, long-term average spectrum, word duration, consonant-vowel intensity ratio, and vowel space size were obtained for all talkers. Overall, word intelligibility was significantly correlated with the total energy in the 1- to 3-kHz region and word duration; these measures predicted 61% of the variability in intelligibility. The fact that the relative intelligibility of individual talkers was remarkably consistent across listener age groups suggests that the acoustic-phonetic characteristics of a talker's utterance are the primary factor in determining talker intelligibility. Although some acoustic-phonetic correlates of intelligibility were identified, variability in the profiles of the ``best'' talkers suggests that high intelligibility can be achieved through a combination of different acoustic-phonetic characteristics.

  20. Validity and Reliability of the Vowel Matching Test.

    ERIC Educational Resources Information Center

    Perney, Jan; Morris, Darrell

    1984-01-01

    The Vowel Matching Test (VMT) measures primary grade children's ability to discriminate medial vowel sounds. Based on the responses of 43 first-grade pupils, the results indicated that the VMT correlated .76 with a word recognition test. It exhibited an internal consistency coefficient of .78. (Author/BW)

  1. Influences of Tone on Vowel Articulation in Mandarin Chinese

    ERIC Educational Resources Information Center

    Shaw, Jason A.; Chen, Wei-rong; Proctor, Michael I.; Derrick, Donald

    2016-01-01

    Purpose: Models of speech production often abstract away from shared physiology in pitch control and lingual articulation, positing independent control of tone and vowel units. We assess the validity of this assumption in Mandarin Chinese by evaluating the stability of lingual articulation for vowels across variation in tone. Method:…

  2. Bite Block Vowel Production in Apraxia of Speech

    ERIC Educational Resources Information Center

    Jacks, Adam

    2008-01-01

    Purpose: This study explored vowel production and adaptation to articulatory constraints in adults with acquired apraxia of speech (AOS) plus aphasia. Method: Five adults with acquired AOS plus aphasia and 5 healthy control participants produced the vowels [iota], [epsilon], and [ash] in four word-length conditions in unconstrained and bite block…

  3. Criteria for the Segmentation of Vowels on Duplex Oscillograms.

    ERIC Educational Resources Information Center

    Naeser, Margaret A.

    This paper develops criteria for the segmentation of vowels on duplex oscillograms. Previous vowel duration studies have primarily used sound spectrograms. The use of duplex oscillograms, rather than sound spectrograms, permits faster production (real time) at less expense (adding machine paper may be used). The speech signal can be more spread…

  4. Studies in the Phonology of Asian Languages; IV, Vietnamese Vowels.

    ERIC Educational Resources Information Center

    Han, Mieko S.

    This study applies the spectrographic techniques to the analysis of the 11 vowels in the Hanoi dialect of Vietnamese. The analysis involves 5,500 spectrograms made of 869 common words containing these vowels, which are identified and described in terms of their Formant 1 and Formant 2 frequencies. Chapter 1 discusses the dialectal features of the…

  5. Variation in vowel duration among southern African American English speakers

    PubMed Central

    Holt, Yolanda Feimster; Jacewicz, Ewa; Fox, Robert Allen

    2015-01-01

    Purpose Atypical duration of speech segments can signal a speech disorder. This study examined variation in vowel duration in African American English (AAE) relative to White American English (WAE) speakers living in the same dialect region in the South in order to characterize the nature of systematic variation between the two groups. The goal was to establish whether segmental durations in minority populations differ from the well-established patterns in mainstream populations. Method Participants were 32 AAE and 32 WAE speakers differing in age who, in their childhood, attended either segregated (older speakers) or integrated (younger speakers) public schools. Speech materials consisted of 14 vowels produced in hVd-frame. Results AAE vowels were significantly longer than WAE vowels. Vowel duration did not differ as a function of age. The temporal tense-lax contrast was minimized for AAE relative to WAE. Female vowels were significantly longer than male vowels for both AAE and WAE. Conclusions African Americans should be expected to produce longer vowels relative to White speakers in a common geographic area. These longer durations are not deviant but represent a typical feature of AAE. This finding has clinical importance in guiding assessments of speech disorders in AAE speakers. PMID:25951511

  6. Mommy, Speak Clearly: Induced Hearing Loss Shapes Vowel Hyperarticulation

    ERIC Educational Resources Information Center

    Lam, Christa; Kitamura, Christine

    2012-01-01

    Talkers hyperarticulate vowels when communicating with listeners that require increased speech intelligibility. Vowel hyperarticulation is said to be motivated by knowledge of the listener's linguistic needs because it typically occurs in speech to infants, foreigners and hearing-impaired listeners, but not to non-verbal pets. However, the degree…

  7. Vowel Categorization during Word Recognition in Bilingual Toddlers

    ERIC Educational Resources Information Center

    Ramon-Casas, Marta; Swingley, Daniel; Sebastian-Galles, Nuria; Bosch, Laura

    2009-01-01

    Toddlers' and preschoolers' knowledge of the phonological forms of words was tested in Spanish-learning, Catalan-learning, and bilingual children. These populations are of particular interest because of differences in the Spanish and Catalan vowel systems: Catalan has two vowels in a phonetic region where Spanish has only one. The proximity of the…

  8. Phonological Specificity of Vowel Contrasts at 18-Months

    ERIC Educational Resources Information Center

    Mani, Nivedita; Coleman, John; Plunkett, Kim

    2008-01-01

    Previous research has shown that English infants are sensitive to mispronunciations of vowels in familiar words by as early as 15-months of age. These results suggest that not only are infants sensitive to large mispronunciations of the vowels in words, but also sensitive to smaller mispronunciations, involving changes to only one dimension of the…

  9. Discrimination of Phonemic Vowel Length by Japanese Infants

    ERIC Educational Resources Information Center

    Sato, Yutaka; Sogabe, Yuko; Mazuka, Reiko

    2010-01-01

    Japanese has a vowel duration contrast as one component of its language-specific phonemic repertory to distinguish word meanings. It is not clear, however, how a sensitivity to vowel duration can develop in a linguistic context. In the present study, using the visual habituation-dishabituation method, the authors evaluated infants' abilities to…

  10. Vowel formant frequency characteristics of preadolescent males and females.

    PubMed

    Bennett, S

    1981-01-01

    This report describes the vowel formant frequency characteristics (F1-F4 of five vowels produced in a fixed phonetic context) of 42 seven and eight year old boys and girls and the relationship of vocal tract resonances to several indices of body size. Results showed that the vowel resonances of male children were consistently lower than those of females, and that the extent of the sexual differences varied as a function of formant number and vowel category. Averaged across all measured formants of all five vowels, the overall sexual distinction was approximately 10%. The range of differences extended from about 3% for F1 of /i/ to 16% for F1 of /ae/. Measures of body size were always significantly related to these children's formant frequencies (range in multiple r's -0.506 to -0.866). The origin of the sexual differences in vocal tract resonance characteristics is discussed with reference to differences in vocal tract size and articulatory behaviors.

  11. Different ERP profiles for learning rules over consonants and vowels.

    PubMed

    Monte-Ordoño, Júlia; Toro, Juan M

    2017-03-01

    The Consonant-Vowel hypothesis suggests that consonants and vowels tend to be used differently during language processing. In this study we explored whether these functional differences trigger different neural responses in a rule learning task. We recorded ERPs while nonsense words were presented in an Oddball paradigm. An ABB rule was implemented either over the consonants (Consonant condition) or over the vowels (Vowel condition) composing standard words. Deviant stimuli were composed by novel phonemes. Deviants could either implement the same ABB rule as standards (Phoneme deviants) or implement a different ABA rule (Rule deviants). We observed shared early components (P1 and MMN) for both types of deviants across both conditions. We also observed differences across conditions around 400ms. In the Consonant condition, Phoneme deviants triggered a posterior negativity. In the Vowel condition, Rule deviants triggered an anterior negativity. Such responses demonstrate different neural responses after the violation of abstract rules over distinct phonetic categories.

  12. Sound in ecclesiastical spaces in Cordoba. Architectural projects incorporating acoustic methodology (El sonido del espacio eclesial en Cordoba. El proyecto arquitectonico como procedimiento acustico)

    NASA Astrophysics Data System (ADS)

    Suarez, Rafael

    2003-11-01

    This thesis is concerned with the acoustic analysis of ecclesiastical spaces and the subsequent implementation of acoustic design methodology in architectural renovations. One begins with an adequate architectural design of specific elements (shape, materials, and textures), with the intention of eliminating the acoustic deficiencies that are common in such spaces, namely those that impair good speech intelligibility and good musical audibility. The investigation is limited to churches in the province of Cordoba that were built after the reconquest of Spain (1236) and up until the 18th century. Selected churches are those that have undergone architectural renovations to adapt them to new uses or to make them more suitable for liturgical use. The thesis summarizes the acoustic analyses and the acoustical solutions that have been implemented. The results are presented in a manner that should be useful for the adoption of a model for the functional renovation of ecclesiastical spaces, allowing those involved in architectural projects to specify the nature of the sound, even though somewhat intangible, within the ecclesiastical space. Thesis advisors: Jaime Navarro and Juan J. Sendra. Copies of this thesis, written in Spanish, may be obtained by contacting the advisor, Jaime Navarro, E.T.S. de Arquitectura de Sevilla, Dpto. de Construcciones Arquitectonicas I, Av. Reina Mercedes, 2, 41012 Sevilla, Spain. E-mail address: jnavarro@us.es

  13. Engaging spaces: Intimate electro-acoustic display in alternative performance venues

    NASA Astrophysics Data System (ADS)

    Bahn, Curtis; Moore, Stephan

    2001-05-01

    In past presentations to the ASA, we have described the design and construction of four generations of unique spherical speakers (multichannel, outward-radiating geodesic speaker arrays) and Sensor-Speaker-Arrays, (SenSAs: combinations of various sensor devices with outward-radiating multichannel speaker arrays). This presentation will detail the ways in which arrays of these speakers have been employed in alternative performance venues-providing presence and intimacy in the performance of electro-acoustic chamber music and sound installation, while engaging natural and unique acoustical qualities of various locations. We will present documentation of the use of multichannel sonic diffusion arrays in small clubs, ``black-box'' theaters, planetariums, and art galleries.

  14. The acquisition of Taiwan Mandarin vowels by native American English speakers

    NASA Astrophysics Data System (ADS)

    Lin, Cyun-Jhan

    2005-04-01

    Previous work on the production of English and French phones by native American English speakers indicated that equivalence classification prevents L2 learners from approximating L2 phonetic norms of similar phones, and that learning French does not affect English speakers' production of the L1 similar phone /u/ (Flege, 1987). In this study there were five subjects: 2 advanced native American English learners of Taiwan Mandarin, 2 basic native American English learners of Taiwan Mandarin, and 1 monolingual Taiwan Mandarin speaker. The corpus consisted of 12 English words ``heed, who'd, hod; leak, Luke, lock; beat, suit, bot; peat, suit, pot,'' and 12 Mandarin words [i, u, a; li, lu, la; pi, pu, pa; phi, phu, pha]. The learners' productions of the English and Mandarin words and the monolingual speaker's productions of the Mandarin words were recorded directly onto a PC. Vowel formants were taken from spectrograms generated by Praat. Preliminary results showed that the vowel space between Taiwan Mandarin [i] and [u] was larger for the advanced learners than for the basic learners, and closer to the Taiwan Mandarin norms. In addition, the vowel space between English [i] and [u] produced by the basic learners was dramatically smaller than that of American English norms.
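The ``vowel space between [i] and [u]'' compared above can be quantified as the Euclidean distance between the two vowels in the F1 x F2 plane. A minimal sketch with illustrative formant values (not measurements from the study):

```python
import math

# Hypothetical sketch: the separation between two vowels, each given as
# an (F1, F2) pair in Hz, measured as Euclidean distance in formant
# space. The values below are illustrative, not data from the study.
def formant_distance(v1, v2):
    """Distance between two vowels given as (F1, F2) pairs in Hz."""
    return math.hypot(v1[0] - v2[0], v1[1] - v2[1])

i_vowel = (280.0, 2250.0)   # illustrative [i]: low F1, high F2
u_vowel = (310.0, 870.0)    # illustrative [u]: low F1, low F2
print(formant_distance(i_vowel, u_vowel))  # ~1380 Hz, dominated by F2
```

For [i] versus [u] the distance is carried almost entirely by F2, so a shrunken [i]-[u] space of the kind reported for the basic learners mainly reflects insufficient F2 separation.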

  15. Virtual acoustic reproduction of historical spaces for interactive music performance and recording

    NASA Astrophysics Data System (ADS)

    Martens, William; Woszczyk, Wieslaw

    2004-10-01

    For the most authentic and successful musical result, a performer engaged in recording pianoforte pieces of Haydn needs to hear the instrument as it would have sounded in historically typical room reverberation, such as that of the original rooms in which Haydn taught his students to play pianoforte. After capturing the acoustic response of such historical rooms, as described in the companion presentation, there remains the problem of how best to reproduce the virtual acoustical response of the room as a performer moves relative to the instrument and the room's boundaries. This can be done with a multichannel loudspeaker array enveloping the performer, interactively presenting simulated indirect sound to generate a sense of presence in the previously captured room. The resulting interaction between the live musical instrument performance and the sound of the virtual room can be captured binaurally for the performer's subsequent evaluation, readjusted to provide the most desirable acoustic feedback to the performer, and finally remixed for distribution via conventional 5.1-channel audio media.

  16. The Acoustic Voice Quality Index: Toward Improved Treatment Outcomes Assessment in Voice Disorders

    ERIC Educational Resources Information Center

    Maryn, Youri; De Bodt, Marc; Roy, Nelson

    2010-01-01

    Voice practitioners require an objective index of dysphonia severity as a means to reliably track treatment outcomes. To ensure ecological validity however, such a measure should survey both sustained vowels and continuous speech. In an earlier study, a multivariate acoustic model referred to as the Acoustic Voice Quality Index (AVQI), consisting…

  17. The Spelling of Vowels Is Influenced by Australian and British English Dialect Differences

    ERIC Educational Resources Information Center

    Kemp, Nenagh

    2009-01-01

    Two experiments examined the influence of dialect on the spelling of vowel sounds. British and Australian children (6 to 8 years) and university students wrote words whose unstressed vowel sound is spelled i or e and pronounced /I/ or /schwa/. Participants often (mis)spelled these vowel sounds as they pronounced them. When vowels were pronounced…

  18. A Comparative Study of English and Spanish Vowel Systems: Theoretical and Practical Implications for Teaching Pronunciation.

    ERIC Educational Resources Information Center

    Odisho, Edward Y.

    A study examines two major types of vowel systems in languages, centripetal and centrifugal. English is associated with the centripetal system, in which vowel quality and quantity (rhythm) are heavily influenced by stress. In this system, vowels have a strong tendency to move toward the center of the vowel area. Spanish is associated with the…

  19. Perception of Vowel Length by Japanese- and English-Learning Infants

    ERIC Educational Resources Information Center

    Mugitani, Ryoko; Pons, Ferran; Fais, Laurel; Dietrich, Christiane; Werker, Janet F.; Amano, Shigeaki

    2009-01-01

    This study investigated vowel length discrimination in infants from 2 language backgrounds, Japanese and English, in which vowel length is either phonemic or nonphonemic. Experiment 1 revealed that English 18-month-olds discriminate short and long vowels although vowel length is not phonemically contrastive in English. Experiments 2 and 3 revealed…

  20. We're Not in Kansas Anymore: The TOTO Strategy for Decoding Vowel Pairs

    ERIC Educational Resources Information Center

    Meese, Ruth Lyn

    2016-01-01

    Vowel teams such as vowel digraphs present a challenge to struggling readers. Some researchers assert that phonics generalizations such as the "two vowels go walking and the first one does the talking" rule do not hold often enough to be reliable for children. Others suggest that some vowel teams are highly regular and that children can…

  1. Speaking Clearly for the Blind: Acoustic and Articulatory Correlates of Speaking Conditions in Sighted and Congenitally Blind Speakers

    PubMed Central

    Ménard, Lucie; Trudeau-Fisette, Pamela; Côté, Dominique; Turgeon, Christine

    2016-01-01

    Compared to conversational speech, clear speech is produced with longer vowel duration, greater intensity, increased contrasts between vowel categories, and decreased dispersion within vowel categories. Those acoustic correlates are produced by larger movements of the orofacial articulators, including visible (lips) and invisible (tongue) articulators. Thus, clear speech provides the listener with audible and visual cues that are used to increase the overall intelligibility of speech produced by the speaker. It is unclear how those cues are produced by visually impaired speakers who never had access to vision. In this paper, we investigate the acoustic and articulatory correlates of vowels in clear versus conversational speech, and in sighted and congenitally blind speakers. Participants were recorded using electroarticulography while producing multiple repetitions of the ten Quebec French oral vowels in carrier sentences in both speaking conditions. Articulatory variables (lip, jaw, and tongue positions) as well as acoustic variables (contrasts between vowels, within-vowel dispersion, pitch, duration, and intensity) were measured. Lip movements were larger when going from conversational to clear speech in sighted speakers only. On the other hand, tongue movements were affected to a larger extent in blind speakers compared to their sighted peers. These findings confirm that vision plays an important role in the maintenance of speech intelligibility. PMID:27643997
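
    The two vowel-space measures used in studies like this one, contrast between vowel categories and dispersion within them, can be computed from F1/F2 token data roughly as follows (the sample values are invented for illustration):

```python
import math

def centroid(tokens):
    """Mean (F1, F2) of a vowel category's tokens."""
    return tuple(sum(c) / len(tokens) for c in zip(*tokens))

def between_vowel_contrast(vowels):
    """Mean Euclidean distance between all pairs of category centroids."""
    cents = [centroid(t) for t in vowels.values()]
    dists = [math.dist(a, b) for i, a in enumerate(cents) for b in cents[i + 1:]]
    return sum(dists) / len(dists)

def within_vowel_dispersion(tokens):
    """Mean distance of each token from its own category centroid."""
    c = centroid(tokens)
    return sum(math.dist(t, c) for t in tokens) / len(tokens)

# Hypothetical (F1, F2) tokens in Hz for two vowel categories
data = {"i": [(300, 2290), (310, 2250)], "a": [(750, 1090), (730, 1110)]}
```

    Clear speech would then show a larger between-category contrast and a smaller within-category dispersion than conversational speech.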

  2. Tutorial on architectural acoustics

    NASA Astrophysics Data System (ADS)

    Shaw, Neil; Talaske, Rick; Bistafa, Sylvio

    2002-11-01

    This tutorial is intended to provide an overview of current knowledge and practice in architectural acoustics. Topics covered will include basic concepts and history, acoustics of small rooms (small rooms for speech such as classrooms and meeting rooms, music studios, small critical listening spaces such as home theatres) and the acoustics of large rooms (larger assembly halls, auditoria, and performance halls).

  3. Statistical Space-Time-Frequency Characterization of MIMO Shallow Water Acoustic Channels

    DTIC Science & Technology

    2010-06-01

    top and bottom. The surface and bottom boundaries reflect an acoustic signal, which results in multiple eigenrays travelling between the Tx and Rx, as... Rx receives 2S downward arriving eigenrays, each one having a different number of s surface and b bottom reflections, where 1 ≤ s ≤ S and s−1 ≤ b ≤ s... Similarly, there are 2B upward arriving eigenrays with b bottom and s surface reflections, where 1 ≤ b ≤ B and b−1 ≤ s ≤ b. Note that exact positions

  4. The acoustic and visual factors influencing the construction of tranquil space in urban and rural environments tranquil spaces-quiet places?

    PubMed

    Pheasant, Robert; Horoshenkov, Kirill; Watts, Greg; Barrett, Brendan

    2008-03-01

    Prior to this work no structured mechanism existed in the UK to evaluate the tranquillity of open spaces with respect to the characteristics of both acoustic and visual stimuli. This is largely due to the fact that within the context of "tranquil" environments, little is known about the interaction of the audio-visual modalities and how they combine to lead to the perception of tranquillity. This paper presents the findings of a study in which visual and acoustic data, captured from 11 English rural and urban landscapes, were used by 44 volunteers to make subjective assessments of both the perceived tranquillity of each location and the loudness of five generic soundscape components. The results were then analyzed alongside objective measurements taken in the laboratory. It was found that the maximum sound pressure level (L(Amax)) and the percentage of natural features present at a location were the key factors influencing tranquillity. Engineering formulas for tranquillity as a function of the noise level and the proportion of natural features are proposed.
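
    The proposed formulas are linear in the two key predictors. A sketch of that functional form with placeholder coefficients (the fitted regression constants from the study are deliberately not reproduced here):

```python
def tranquillity_rating(l_amax_db, pct_natural, a=10.0, b=0.04, c=0.15):
    """Linear tranquillity predictor of the form TR = a + b*N - c*Lmax.

    a, b, c are illustrative placeholders, NOT the study's fitted values;
    pct_natural is the percentage (0-100) of natural features in the scene,
    l_amax_db the maximum A-weighted sound pressure level.
    """
    return a + b * pct_natural - c * l_amax_db

# A quiet, mostly natural scene scores higher than a loud, built-up one.
rural = tranquillity_rating(l_amax_db=50, pct_natural=90)
urban = tranquillity_rating(l_amax_db=75, pct_natural=10)
```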

  5. Effects of coda voicing and aspiration on Hindi vowels

    NASA Astrophysics Data System (ADS)

    Lampp, Claire; Reklis, Heidi

    2001-05-01

    This study reexamines the well-attested coda voicing effect on vowel duration [Chen, Phonetica 22, 125-159 (1970)], in conjunction with the relationship between vowel duration and aspiration of codas. The first step was to replicate the results of Maddieson and Gandour [UCLA Working Papers Phonetics 31, 46-52 (1976)] with a larger, language-specific data set. Four nonsense syllables ending in [open-o] followed by [k, kh, g, gh] were read aloud in ten different carrier sentences by four native speakers of Hindi. Results confirm that longer vowels precede voiced word-final consonants and aspirated word-final consonants. Thus, among the syllables, vowel duration would be longest when preceding the voiced aspirate [gh]. Coda voicing, and thus, vowel duration, have been shown to correlate negatively to vowel F1 in English and Arabic [Wolf, J. Phonetics 6, 299-309 (1978); de Jong and Zawaydeh ibid, 30, 53-75 (2002)]. It is not known whether vowel F1 depends directly on coda voicing, or is determined indirectly via duration. Since voicing and aspiration both increase duration, F1 measurements of this data set (which will be presented) may answer that question.

  6. Phoneme recognition and confusions with multichannel cochlear implants: vowels.

    PubMed

    Välimaa, Taina T; Määttä, Taisto K; Löppönen, Heikki J; Sorri, Martti J

    2002-10-01

    The aim of this study was to investigate how postlingually severely or profoundly hearing-impaired adults relearn to recognize vowels after receiving multichannel cochlear implants. Vowel recognition of 19 Finnish-speaking subjects was studied for a minimum of 6 months and a maximum of 24 months using an open-set nonsense-syllable test in a prospective repeated-measure design. The responses were coded for phoneme errors, and 95% confidence intervals for recognition and confusions were calculated. The average vowel recognition was 68% (95% confidence interval = 66-70%) 6 months after switch-on and 80% (95% confidence interval = 78-82%) 24 months after switch-on. The vowels [ae], [u], [i], [o], and [a] were the easiest to recognize, and the vowels [y], [e], and [ø] were the most difficult. In conclusion, adaptation to electrical hearing using a multichannel cochlear implant was achieved well; but for at least 2 years, given two vowels with either F1 or F2 at roughly the same frequencies, confusions were drawn more towards the closest vowel with the next highest F1 or F2.

  7. Effects of coda voicing and aspiration on Hindi vowels

    NASA Astrophysics Data System (ADS)

    Lampp, Claire; Reklis, Heidi

    2004-05-01

    This study reexamines the well-attested coda voicing effect on vowel duration [Chen, Phonetica 22, 125-159 (1970)], in conjunction with the relationship between vowel duration and aspiration of codas. The first step was to replicate the results of Maddieson and Gandour [UCLA Working Papers Phonetics 31, 46-52 (1976)] with a larger, language-specific data set. Four nonsense syllables ending in [open-o] followed by [k, kh, g, gh] were read aloud in ten different carrier sentences by four native speakers of Hindi. Results confirm that longer vowels precede voiced word-final consonants and aspirated word-final consonants. Thus, among the syllables, vowel duration would be longest when preceding the voiced aspirate [gh]. Coda voicing, and thus, vowel duration, have been shown to correlate negatively to vowel F1 in English and Arabic [Wolf, J. Phonetics 6, 299-309 (1978); de Jong and Zawaydeh ibid, 30, 53-75 (2002)]. It is not known whether vowel F1 depends directly on coda voicing, or is determined indirectly via duration. Since voicing and aspiration both increase duration, F1 measurements of this data set (which will be presented) may answer that question.

  8. Dissociation of tone and vowel processing in Mandarin idioms.

    PubMed

    Hu, Jiehui; Gao, Shan; Ma, Weiyi; Yao, Dezhong

    2012-09-01

    Using event-related potentials, this study measured the access of suprasegmental (tone) and segmental (vowel) information in spoken word recognition with Mandarin idioms. Participants performed a delayed-response acceptability task, in which they judged the correctness of the last word of each idiom, which might deviate from the correct word in either tone or vowel. Results showed that, compared with the correct idioms, a larger early negativity appeared only for vowel violation. Additionally, a larger N400 effect was observed for vowel mismatch than tone mismatch. A control experiment revealed that these differences were not due to low-level physical differences across conditions; instead, they represented the greater constraining power of vowels than tones in the lexical selection and semantic integration of the spoken words. Furthermore, tone violation elicited a more robust late positive component than vowel violation, suggesting different reanalyses of the two types of information. In summary, the current results support a functional dissociation of tone and vowel processing in spoken word recognition.

  9. Coordinated Control of Acoustical Field of View and Flight in Three-Dimensional Space for Consecutive Capture by Echolocating Bats during Natural Foraging.

    PubMed

    Sumiya, Miwa; Fujioka, Emyo; Motoi, Kazuya; Kondo, Masaru; Hiryu, Shizuko

    2017-01-01

    Echolocating bats prey upon small moving insects in the dark using sophisticated sonar techniques. The direction and directivity pattern of the ultrasound broadcast of these bats are important factors that affect their acoustical field of view, allowing us to investigate how the bats control their acoustic attention (pulse direction) for advanced flight maneuvers. The purpose of this study was to understand the behavioral strategies of acoustical sensing of wild Japanese house bats Pipistrellus abramus in three-dimensional (3D) space during consecutive capture flights. The results showed that when the bats successively captured multiple airborne insects in short time intervals (less than 1.5 s), they maintained not only the immediate prey but also the subsequent one simultaneously within the beam widths of the emitted pulses in both horizontal and vertical planes before capturing the immediate one. This suggests that echolocating bats maintain multiple prey within their acoustical field of view by a single sensing using a wide directional beam while approaching the immediate prey, instead of frequently shifting acoustic attention between multiple prey. We also numerically simulated the bats' flight trajectories when approaching two prey successively to investigate the relationship between the acoustical field of view and the prey direction for effective consecutive captures. This simulation demonstrated that acoustically viewing both the immediate and the subsequent prey simultaneously increases the success rate of capturing both prey, which is considered to be one of the basic axes of efficient route planning for consecutive capture flight. The bat's wide sonar beam can incidentally cover multiple prey while the bat forages in an area where the prey density is high. Our findings suggest that the bats then keep future targets within their acoustical field of view for effective foraging. In addition, in both the experimental results and the numerical simulations

  10. Coordinated Control of Acoustical Field of View and Flight in Three-Dimensional Space for Consecutive Capture by Echolocating Bats during Natural Foraging

    PubMed Central

    Sumiya, Miwa; Fujioka, Emyo; Motoi, Kazuya; Kondo, Masaru; Hiryu, Shizuko

    2017-01-01

    Echolocating bats prey upon small moving insects in the dark using sophisticated sonar techniques. The direction and directivity pattern of the ultrasound broadcast of these bats are important factors that affect their acoustical field of view, allowing us to investigate how the bats control their acoustic attention (pulse direction) for advanced flight maneuvers. The purpose of this study was to understand the behavioral strategies of acoustical sensing of wild Japanese house bats Pipistrellus abramus in three-dimensional (3D) space during consecutive capture flights. The results showed that when the bats successively captured multiple airborne insects in short time intervals (less than 1.5 s), they maintained not only the immediate prey but also the subsequent one simultaneously within the beam widths of the emitted pulses in both horizontal and vertical planes before capturing the immediate one. This suggests that echolocating bats maintain multiple prey within their acoustical field of view by a single sensing using a wide directional beam while approaching the immediate prey, instead of frequently shifting acoustic attention between multiple prey. We also numerically simulated the bats’ flight trajectories when approaching two prey successively to investigate the relationship between the acoustical field of view and the prey direction for effective consecutive captures. This simulation demonstrated that acoustically viewing both the immediate and the subsequent prey simultaneously increases the success rate of capturing both prey, which is considered to be one of the basic axes of efficient route planning for consecutive capture flight. The bat’s wide sonar beam can incidentally cover multiple prey while the bat forages in an area where the prey density is high. Our findings suggest that the bats then keep future targets within their acoustical field of view for effective foraging. In addition, in both the experimental results and the numerical simulations

  11. Nonlinear propagation of positron-acoustic waves in a four component space plasma

    NASA Astrophysics Data System (ADS)

    Shah, M. G.; Hossen, M. R.; Mamun, A. A.

    2015-10-01

    The nonlinear propagation of positron-acoustic waves (PAWs) in an unmagnetized, collisionless, four component, dense plasma system (containing non-relativistic inertial cold positrons, relativistic degenerate electron and hot positron fluids as well as positively charged immobile ions) has been investigated theoretically. The Korteweg-de Vries (K-dV), modified K-dV (mK-dV) and further mK-dV (fmK-dV) equations have been derived by using reductive perturbation technique. Their solitary wave solutions have been numerically analysed in order to understand the localized electrostatic disturbances. It is observed that the relativistic effect plays a pivotal role on the propagation of positron-acoustic solitary waves (PASW). It is also observed that the effects of degenerate pressure and the number density of inertial cold positrons, hot positrons, electrons and positively charged static ions significantly modify the fundamental features of PASW. The basic features and the underlying physics of PASW, which are relevant to some astrophysical compact objects (such as white dwarfs, neutron stars etc.), are concisely discussed.
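
    For reference, the K-dV equation obtained by reductive perturbation in problems of this kind admits the standard single-soliton solution. In generic form (A and B stand for the nonlinear and dispersion coefficients; the paper's specific plasma-parameter expressions for them are not reproduced here):

```latex
% Generic K-dV equation and its solitary-wave solution
\frac{\partial \phi}{\partial \tau}
  + A\,\phi\,\frac{\partial \phi}{\partial \xi}
  + B\,\frac{\partial^{3} \phi}{\partial \xi^{3}} = 0,
\qquad
\phi(\xi,\tau) = \phi_m \,\operatorname{sech}^{2}\!\left(\frac{\xi - u_0 \tau}{\Delta}\right),
\qquad
\phi_m = \frac{3u_0}{A},
\quad
\Delta = \sqrt{\frac{4B}{u_0}} .
```

    The sign of A determines whether the soliton is compressive or rarefactive, which is why the modified and further-modified K-dV equations are needed when A passes through zero.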

  12. Instrumental Dimensioning of Normal and Pathological Phonation Using Acoustic Measurements

    ERIC Educational Resources Information Center

    Putzer, Manfred; Barry, William J.

    2008-01-01

    The present study deals with the dimensions of normal and pathological phonation. Separation of normal voices from pathological voices is tested under different aspects. Using a new parametrization of voice-quality properties in the acoustic signal, the vowel productions of 534 speakers (267 M, 267 F) without any reported voice pathology and the…

  13. Sex-Related Acoustic Changes in Voiceless English Fricatives

    ERIC Educational Resources Information Center

    Fox, Robert Allen; Nissen, Shawn L.

    2005-01-01

    This investigation is a comprehensive acoustic study of 4 voiceless fricatives (/f [theta] s [esh]/) in English produced by adults and pre- and postpubescent children aged 6-14 years. Vowel duration, amplitude, and several different spectral measures (including spectral tilt and spectral moments) were examined. Of specific interest was the pattern…

  14. Acoustic telemetry and network analysis reveal the space use of multiple reef predators and enhance marine protected area design.

    PubMed

    Lea, James S E; Humphries, Nicolas E; von Brandis, Rainer G; Clarke, Christopher R; Sims, David W

    2016-07-13

    Marine protected areas (MPAs) are commonly employed to protect ecosystems from threats like overfishing. Ideally, MPA design should incorporate movement data from multiple target species to ensure sufficient habitat is protected. We used long-term acoustic telemetry and network analysis to determine the fine-scale space use of five shark and one turtle species at a remote atoll in the Seychelles, Indian Ocean, and evaluate the efficacy of a proposed MPA. Results revealed strong, species-specific habitat use in both sharks and turtles, with corresponding variation in MPA use. Defining the MPA's boundary from the edge of the reef flat at low tide instead of the beach at high tide (the current best in Seychelles) significantly increased the MPA's coverage of predator movements by an average of 34%. Informed by these results, the larger MPA was adopted by the Seychelles government, demonstrating how telemetry data can improve shark spatial conservation by affecting policy directly.

  15. Acoustic emission frequency discrimination

    NASA Technical Reports Server (NTRS)

    Sugg, Frank E. (Inventor); Graham, Lloyd J. (Inventor)

    1988-01-01

    In acoustic emission nondestructive testing, broadband frequency noise is distinguished from narrow banded acoustic emission signals, since the latter are valid events indicative of structural flaws in the material being examined. This is accomplished by separating out those signals which contain frequency components both within and beyond (either above or below) the range of valid acoustic emission events. Application to acoustic emission monitoring during nondestructive bond verification and proof loading of undensified tiles on the Space Shuttle Orbiter is considered.

  16. Generalization of von Neumann analysis for a model of two discrete half-spaces: The acoustic case

    USGS Publications Warehouse

    Haney, M.M.

    2007-01-01

    Evaluating the performance of finite-difference algorithms typically uses a technique known as von Neumann analysis. For a given algorithm, application of the technique yields both a dispersion relation valid for the discrete time-space grid and a mathematical condition for stability. In practice, a major shortcoming of conventional von Neumann analysis is that it can be applied only to an idealized numerical model - that of an infinite, homogeneous whole space. Experience has shown that numerical instabilities often arise in finite-difference simulations of wave propagation at interfaces with strong material contrasts. These interface instabilities occur even though the conventional von Neumann stability criterion may be satisfied at each point of the numerical model. To address this issue, I generalize von Neumann analysis for a model of two half-spaces. I perform the analysis for the case of acoustic wave propagation using a standard staggered-grid finite-difference numerical scheme. By deriving expressions for the discrete reflection and transmission coefficients, I study under what conditions the discrete reflection and transmission coefficients become unbounded. I find that instabilities encountered in numerical modeling near interfaces with strong material contrasts are linked to these cases and develop a modified stability criterion that takes into account the resulting instabilities. I test and verify the stability criterion by executing a finite-difference algorithm under conditions predicted to be stable and unstable. © 2007 Society of Exploration Geophysicists.
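
    In the idealized whole-space setting, conventional von Neumann analysis of the standard 1-D staggered-grid acoustic scheme gives the discrete dispersion relation sin(ωΔt/2) = C sin(kΔx/2) and the stability condition C = cΔt/Δx ≤ 1. A sketch of that 1-D baseline (the paper's contribution, the two-half-space generalization, is not reproduced here):

```python
import math

def courant_number(c, dt, dx):
    """Courant number C = c*dt/dx for wave speed c and grid steps dt, dx."""
    return c * dt / dx

def is_stable(c, dt, dx):
    """Conventional von Neumann criterion for the 1-D staggered-grid
    acoustic scheme: stable iff C <= 1."""
    return courant_number(c, dt, dx) <= 1.0

def numerical_phase_velocity(c, dt, dx, k):
    """Solve sin(w*dt/2) = C*sin(k*dx/2) for w and return w/k, the
    phase velocity on the discrete grid (shows numerical dispersion)."""
    C = courant_number(c, dt, dx)
    w = (2.0 / dt) * math.asin(C * math.sin(k * dx / 2.0))
    return w / k
```

    For C < 1 the numerical phase velocity falls below the true speed c as k approaches the Nyquist wavenumber, which is the familiar grid-dispersion error.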

  17. Phonological Space in the Speech of the Hearing Impaired.

    ERIC Educational Resources Information Center

    Shukla, R. S.

    1989-01-01

    First and second formant frequencies of the vowels /a/, /i/, and /u/ were measured to determine the phonological space in the speech of 30 Kannada-speaking hearing-impaired individuals in India. Compared to controls, subjects' phonological space was found to be reduced, primarily due to the lowering of the second formant of the vowel /i/.…

  18. Influence of electron-electron collisions on the propagation of ion-acoustic space-charge waves in a warm plasma waveguide

    NASA Astrophysics Data System (ADS)

    Lee, Myoung-Jae; Jung, Young-Dae

    2017-04-01

    The influence of electron–electron collisions on the propagation of the ion-acoustic space-charge wave is investigated in a cylindrical waveguide filled with warm collisional plasma by employing the normal mode analysis and the method of separation of variables. It is shown that the frequency of the ion-acoustic space-charge wave with higher-harmonic modes is always smaller than that with lower-harmonic modes, especially in intermediate wave number domains. It is also shown that the collisional damping rate of the ion-acoustic space-charge wave due to the electron–electron collision effect with higher-harmonic modes is smaller than that with lower-harmonic modes. In addition, it is found that the maximum position of the collisional damping rate shifts to large wave numbers with an increase of the harmonic mode. The variation of the wave frequency and the collisional damping rate of the ion-acoustic space-charge wave is also discussed.

  19. Vowel formant discrimination II: Effects of stimulus uncertainty, consonantal context, and training.

    PubMed

    Kewley-Port, D

    2001-10-01

    This study is one in a series that has examined factors contributing to vowel perception in everyday listening. Four experimental variables have been manipulated to examine systematic differences between optimal laboratory testing conditions and those characterizing everyday listening. These include length of phonetic context, level of stimulus uncertainty, linguistic meaning, and amount of subject training. The present study investigated the effects of stimulus uncertainty from minimal to high uncertainty in two phonetic contexts, /V/ or /bVd/, when listeners had either little or extensive training. Thresholds for discriminating a small change in a formant for synthetic female vowels /I,E,ae,a,inverted v,o/ were obtained using adaptive tracking procedures. Experiment 1 optimized extensive training for five listeners by beginning under minimal uncertainty (only one formant tested per block) and then increasing uncertainty from 8-to-16-to-22 formants per block. Effects of higher uncertainty were less than expected; performance only decreased by about 30%. Thresholds for CVCs were 25% poorer than for isolated vowels. A previous study using similar stimuli [Kewley-Port and Zheng, J. Acoust. Soc. Am. 106, 2945-2958 (1999)] determined that the ability to discriminate formants was degraded by longer phonetic context. A comparison of those results with the present ones indicates that longer phonetic context degrades formant frequency discrimination more than higher levels of stimulus uncertainty. In experiment 2, performance in the 22-formant condition was tracked over 1 h for 37 typical listeners without formal laboratory training. Performance for typical listeners was initially about 230% worse than for trained listeners. Individual listeners' performance ranged widely, with some listeners occasionally achieving performance similar to that of the trained listeners in just one hour.
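
    Adaptive tracking of a discrimination threshold is typically a staircase rule such as 2-down/1-up, which converges on the 70.7%-correct point of the psychometric function. A self-contained sketch with a deterministic simulated listener (the rule, step sizes, and threshold value are illustrative, not the study's exact procedure):

```python
def detects(delta_hz, true_threshold_hz=40.0):
    """Hypothetical deterministic listener: hears the formant shift
    iff it is at least true_threshold_hz."""
    return delta_hz >= true_threshold_hz

def staircase_2down1up(start=100.0, step=10.0, n_reversals=8):
    """2-down/1-up adaptive track; threshold estimate is the mean of
    the last 6 reversal values."""
    delta, streak, direction, reversals = start, 0, None, []
    while len(reversals) < n_reversals:
        if detects(delta):
            streak += 1
            if streak == 2:              # two correct in a row -> harder
                streak = 0
                if direction == "up":
                    reversals.append(delta)
                direction = "down"
                delta = max(step, delta - step)
        else:                            # one wrong -> easier
            streak = 0
            if direction == "down":
                reversals.append(delta)
            direction = "up"
            delta += step
    return sum(reversals[-6:]) / 6

estimate = staircase_2down1up()
```

    With the deterministic listener above, the track oscillates between 30 and 40 Hz, so the reversal average lands near the simulated threshold.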

  20. The acoustics of uvulars in Tlingit

    NASA Astrophysics Data System (ADS)

    Denzer-King, Ryan E.

    This paper looks at the acoustics of uvulars in Tlingit, an Athabaskan language spoken in Alaska and Canada. Data from five native speakers were used for acoustic analysis for tokens from five phoneme groups (alveolars, plain velars, labialized velars, plain uvulars, and labialized uvulars). The tokens were analyzed by computing spectral moments of plosive bursts and fricatives, and F2 and F3 values for post-consonantal vowels, which were used to calculate locus equations, a descriptive measure of the relationship between F2 at vowel onset and midpoint. Several trends were observed, including a greater difference between F2 and F3 after uvulars than after velars, as well as a higher center of gravity (COG) and lower skew and kurtosis for uvulars than for velars. The comparison of plain versus labialized consonants supports the finding of Suh (2008) that labialization lowers mean burst energy, or COG, and additionally found labialization to raise skew and kurtosis.
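
    A locus equation is an ordinary least-squares line relating F2 at vowel onset to F2 at vowel midpoint, fitted per consonant place; its slope is read as an index of coarticulation. A sketch on invented data (generated from a known line so the fit recovers it exactly):

```python
def fit_locus_equation(f2_mid, f2_onset):
    """OLS fit of F2_onset = slope * F2_mid + intercept."""
    n = len(f2_mid)
    mx = sum(f2_mid) / n
    my = sum(f2_onset) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(f2_mid, f2_onset))
    sxx = sum((x - mx) ** 2 for x in f2_mid)
    slope = sxy / sxx
    return slope, my - slope * mx

# Invented vowel-midpoint F2 values (Hz); onsets built with a known
# slope and intercept, purely for illustration.
mids = [900.0, 1200.0, 1700.0, 2100.0]
onsets = [0.6 * m + 500.0 for m in mids]
slope, intercept = fit_locus_equation(mids, onsets)
```

    Real tokens scatter around the line, of course; the fitted slope and intercept are then compared across consonant classes (e.g. velars versus uvulars).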

  1. Nonlinear Dust Acoustic Waves in Dissipative Space Dusty Plasmas with Superthermal Electrons and Nonextensive Ions

    NASA Astrophysics Data System (ADS)

    El-Hanbaly, A. M.; El-Shewy, E. K.; Sallah, M.; Darweesh, H. F.

    2016-05-01

    The nonlinear characteristics of the dust acoustic (DA) waves are studied in a homogeneous, collisionless, unmagnetized, and dissipative dusty plasma composed of negatively charged dusty grains, superthermal electrons, and nonextensive ions. The Sagdeev pseudopotential technique has been employed to study the large amplitude DA waves; the pseudopotential provides evidence for the existence of compressive and rarefactive solitons. The global features of the phase portrait are investigated to understand the possible types of solutions of the Sagdeev form. On the other hand, the reductive perturbation technique has been used to study small amplitude DA waves and yields the Korteweg-de Vries-Burgers (KdV-Burgers) equation that exhibits both soliton and shock waves. The behavior of the obtained results of both large and small amplitude is investigated graphically in terms of the plasma parameters like dust kinematic viscosity, superthermal and nonextensive parameters.

  2. Mathematical Modeling of Space-Time Variations in Acoustic Transmission and Scattering from Schools of Swim Bladder Fish

    DTIC Science & Technology

    2015-09-30

    1996 (Ref. 1), based upon the harmonic solution of sets of coupled differential equations, each describing scattering from one fish. The Love swim... side of the empty core, thus reducing the acoustic interactions between them. REFERENCES (1) C. Feuillade, R. W. Nero and R. H. Love, "A low-frequency acoustic scattering model for small schools of fish," J. Acoust. Soc. Am., 99, 196-208 (1996). (2) R. H. Love, "Resonant acoustic scattering by

  3. Measurement of Space Charges in Dielectric Materials by Pulse Electro-acoustic Method after Irradiation by High-energy Electron Beam

    NASA Astrophysics Data System (ADS)

    Qin, Xiaogang; Li, Kai; Ma, Yali; Zheng, Xiaoquan; Liu, Xiaodong

    2009-01-01

    Dielectric materials are widely used in space environment. When they are irradiated, charges will accumulate in the bulk and on the surface of the material, leading to pulse discharge events that can cause permanent changes in their physical and chemical structure. In this paper, a special method called PEA (pulse electro-acoustic) was used to measure and analyze the space charging of several dielectric materials after they have been irradiated by different high-energy electron beams.

  4. The role of selective attention in the acquisition of English tense and lax vowels by native Spanish listeners: comparison of three training methods

    PubMed Central

    Kondaurova, Maria V.; Francis, Alexander L.

    2010-01-01

    This study investigates the role of two processes, cue enhancement (learning to attend to acoustic cues which characterize a speech contrast for native listeners) and cue inhibition (learning to ignore cues that do not), in the acquisition of the American English tense and lax ([i] vs. [I]) vowels by native Spanish listeners. This contrast is acoustically distinguished by both vowel spectrum and duration. However, while native English listeners rely primarily on spectrum, inexperienced Spanish listeners tend to rely exclusively on duration. Twenty-nine native Spanish listeners, initially reliant on vowel duration, received either enhancement training, inhibition training, or training with a natural cue distribution. Results demonstrated that reliance on spectrum properties increased over baseline for all three groups. However, inhibitory training was more effective relative to enhancement training and both inhibitory and enhancement training were more effective relative to natural distribution training in decreasing listeners’ attention to duration. These results suggest that phonetic learning may involve two distinct cognitive processes, cue enhancement and cue inhibition, that function to shift selective attention between separable acoustic dimensions. Moreover, cue-specific training (whether enhancing or inhibitory) appears to be more effective for the acquisition of second language speech contrasts. PMID:21499531

  5. The role of selective attention in the acquisition of English tense and lax vowels by native Spanish listeners: comparison of three training methods.

    PubMed

    Kondaurova, Maria V; Francis, Alexander L

    2010-10-01

    This study investigates the role of two processes, cue enhancement (learning to attend to acoustic cues which characterize a speech contrast for native listeners) and cue inhibition (learning to ignore cues that do not), in the acquisition of the American English tense and lax ([i] vs. [I]) vowels by native Spanish listeners. This contrast is acoustically distinguished by both vowel spectrum and duration. However, while native English listeners rely primarily on spectrum, inexperienced Spanish listeners tend to rely exclusively on duration. Twenty-nine native Spanish listeners, initially reliant on vowel duration, received either enhancement training, inhibition training, or training with a natural cue distribution. Results demonstrated that reliance on spectrum properties increased over baseline for all three groups. However, inhibitory training was more effective relative to enhancement training and both inhibitory and enhancement training were more effective relative to natural distribution training in decreasing listeners' attention to duration. These results suggest that phonetic learning may involve two distinct cognitive processes, cue enhancement and cue inhibition, that function to shift selective attention between separable acoustic dimensions. Moreover, cue-specific training (whether enhancing or inhibitory) appears to be more effective for the acquisition of second language speech contrasts.

  6. Mathematical modeling of vowel perception by users of analog multichannel cochlear implants: temporal and channel-amplitude cues.

    PubMed

    Svirsky, M A

    2000-03-01

    A "multidimensional phoneme identification" (MPI) model is proposed to account for vowel perception by cochlear implant users. A multidimensional extension of the Durlach-Braida model of intensity perception, this model incorporates an internal noise model and a decision model to account separately for errors due to poor sensitivity and response bias. The MPI model provides a complete quantitative description of how listeners encode and combine acoustic cues, and how they use this information to determine which sound they heard. Thus, it allows for testing specific hypotheses about phoneme identification in a very stringent fashion. As an example of the model's application, vowel identification matrices obtained with synthetic speech stimuli (including "conflicting cue" conditions [Dorman et al., J. Acoust. Soc. Am. 92, 3428-3432 (1992)]) were examined. The listeners were users of the "compressed-analog" stimulation strategy, which filters the speech spectrum into four partly overlapping frequency bands and delivers each signal to one of four electrodes in the cochlea. It was found that a simple model incorporating one temporal cue (i.e., an acoustic cue based only on the time waveforms delivered to the most basal channel) and spectral cues (based on the distribution of amplitudes among channels) can be quite successful in explaining listener responses. The new approach represented by the MPI model may be used to obtain useful insights about speech perception by cochlear implant users in particular, and by all kinds of listeners in general.

  7. An evaluation of Space Shuttle STS-2 payload bay acoustic data and comparison with predictions

    NASA Technical Reports Server (NTRS)

    Wilby, J. F.; Piersol, A. G.; Wilby, E. G.

    1982-01-01

    Space-average sound pressure levels computed from measurements at 18 locations in the payload bay of the Space Shuttle orbiter vehicle during the STS-2 launch were compared with predicted levels obtained using the PACES computer program. The comparisons were performed over the frequency range 12.5 Hz to 1000 Hz, since the test data at higher frequencies are contaminated by instrumentation background noise. In general, the PACES computer program tends to overpredict the space-average sound levels in the payload bay, although the magnitude of the discrepancy is usually small. Furthermore, the discrepancy depends to some extent on the manner in which the payload is modeled analytically and on the method used to determine the "measured" space-average sound pressure levels. Thus the difference between predicted and measured sound levels, averaged over the 20 one-third octave bands from 12.5 Hz to 1000 Hz, varies from 1 dB to 3.5 dB.
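
    The "space average" levels here are energy averages of the mean-square pressures at the microphone positions, not arithmetic averages of the dB values. A minimal sketch of that computation (the microphone levels are made-up numbers, not STS-2 data):

```python
import numpy as np

def space_average_spl(levels_db):
    """Energy-average sound pressure levels (dB) measured at several microphone positions:
    convert each level to relative mean-square pressure, average, convert back to dB."""
    return 10.0 * np.log10(np.mean(10.0 ** (np.asarray(levels_db) / 10.0)))

# Two hypothetical payload-bay microphones; the louder one dominates the average.
avg = space_average_spl([90.0, 100.0])  # ~97.4 dB, not the arithmetic mean of 95 dB
```

The same energy-averaging applies when collapsing one-third octave band levels into an overall level.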

  8. The effects of indexical and phonetic variation on vowel perception in typically developing 9- to 12-year-old children

    PubMed Central

    Jacewicz, Ewa; Fox, Robert Allen

    2015-01-01

    Purpose: To investigate how linguistic knowledge interacts with indexical knowledge in older children's perception under demanding listening conditions created by extensive talker variability. Method: Twenty-five 9- to 12-year-old children, 12 from North Carolina (NC) and 13 from Wisconsin (WI), identified 12 vowels in isolated hVd-words produced by 120 talkers representing the two dialects (NC and WI), both genders, and three age groups (generations) of residents from the same geographic locations as the listeners. Results: Identification rates were higher for responses to talkers from the same dialect as the listeners and for female speech. Listeners were sensitive to systematic positional variations in vowels and their dynamic structure (formant movement) associated with generational differences in vowel pronunciation resulting from sound change in a speech community. The overall identification rate was 71.7%, which is 8.5% lower than for the adults responding to the same stimuli in Jacewicz and Fox (2012). Conclusions: Typically developing older children are successful in dealing with both phonetic and indexical variation related to talker dialect, gender, and generation. They are less consistent than the adults, most likely due to their less efficient encoding of acoustic-phonetic information in the speech of multiple talkers and their relative inexperience with indexical variation. PMID:24686520

  9. Greek perception and production of an English vowel contrast: A preliminary study

    NASA Astrophysics Data System (ADS)

    Podlipský, Václav J.

    2005-04-01

    This study focused on language-independent principles functioning in the acquisition of second language (L2) contrasts. Specifically, it tested Bohn's Desensitization Hypothesis [in Speech Perception and Linguistic Experience: Issues in Cross-Language Research, edited by W. Strange (York Press, Baltimore, 1995)], which predicted that Greek speakers of English as an L2 would base their perceptual identification of English /i/ and /I/ on durational differences. Synthetic vowels differing orthogonally in duration and spectrum between the /i/ and /I/ endpoints served as stimuli for a forced-choice identification test. To assess L2 proficiency and to evaluate the possibility of cross-language category assimilation, productions of English /i/, /I/, and /ɛ/ and of Greek /i/ and /e/ were elicited and analyzed acoustically. The L2 utterances were also rated for degree of foreign accent. Two native speakers of Modern Greek with low experience in English and two with intermediate experience participated. Six native English (NE) listeners and six NE speakers tested in an earlier study constituted the control groups. Heterogeneous perceptual behavior was observed for the L2 subjects. It is concluded that until acquisition in completely naturalistic settings is tested, possible interference of formally induced meta-linguistic differentiation between a "short" and a "long" vowel cannot be eliminated.

  10. Speaker normalization using cortical strip maps: a neural model for steady-state vowel categorization.

    PubMed

    Ames, Heather; Grossberg, Stephen

    2008-12-01

    Auditory signals of speech are speaker dependent, but representations of language meaning are speaker independent. The transformation from speaker-dependent to speaker-independent language representations enables speech to be learned and understood from different speakers. A neural model is presented that performs speaker normalization to generate a pitch-independent representation of speech sounds, while also preserving information about speaker identity. This speaker-invariant representation is categorized into unitized speech items, which input to sequential working memories whose distributed patterns can be categorized, or chunked, into syllable and word representations. The proposed model fits into an emerging model of auditory streaming and speech categorization. The auditory streaming and speaker normalization parts of the model both use multiple strip representations and asymmetric competitive circuits, thereby suggesting that these two circuits arose from similar neural designs. The normalized speech items are rapidly categorized and stably remembered by adaptive resonance theory circuits. Simulations use synthesized steady-state vowels from the Peterson and Barney [Peterson, G. E., and Barney, H.L., J. Acoust. Soc. Am. 24, 175-184 (1952).] vowel database and achieve accuracy rates similar to those achieved by human listeners. These results are compared to behavioral data and other speaker normalization models.

  11. Activations of human auditory cortex to phonemic and nonphonemic vowels during discrimination and memory tasks.

    PubMed

    Harinen, Kirsi; Rinne, Teemu

    2013-08-15

    We used fMRI to investigate activations within human auditory cortex (AC) to vowels during vowel discrimination, vowel (categorical n-back) memory, and visual tasks. Based on our previous studies, we hypothesized that the vowel discrimination task would be associated with increased activations in the anterior superior temporal gyrus (STG), while the vowel memory task would enhance activations in the posterior STG and inferior parietal lobule (IPL). In particular, we tested the hypothesis that activations in the IPL during vowel memory tasks are associated with categorical processing. Namely, activations due to categorical processing should be higher during tasks performed on nonphonemic (hard to categorize) than on phonemic (easy to categorize) vowels. As expected, we found distinct activation patterns during vowel discrimination and vowel memory tasks. Further, these task-dependent activations were different during tasks performed on phonemic or nonphonemic vowels. However, activations in the IPL associated with the vowel memory task were not stronger during nonphonemic than phonemic vowel blocks. Together these results demonstrate that activations in human AC to vowels depend on both the requirements of the behavioral task and the phonemic status of the vowels.

  12. Acoustic neuroma

    MedlinePlus

    Vestibular schwannoma; Tumor - acoustic; Cerebellopontine angle tumor; Angle tumor; Hearing loss - acoustic; Tinnitus - acoustic ... Acoustic neuromas have been linked with the genetic disorder neurofibromatosis type 2 (NF2). Acoustic neuromas are uncommon.

  13. A k-space Green's function solution for acoustic initial value problems in homogeneous media with power law absorption.

    PubMed

    Treeby, Bradley E; Cox, B T

    2011-06-01

    An efficient Green's function solution for acoustic initial value problems in homogeneous media with power law absorption is derived. The solution is based on the homogeneous wave equation for lossless media with two additional terms. These terms are dependent on the fractional Laplacian and separately account for power law absorption and dispersion. Given initial conditions for the pressure and its temporal derivative, the solution allows the pressure field for any time t>0 to be calculated in a single step using the Fourier transform and an exact k-space time propagator. For regularly spaced Cartesian grids, the former can be computed efficiently using the fast Fourier transform. Because no time stepping is required, the solution facilitates the efficient computation of the pressure field in one, two, or three dimensions without stability constraints. Several computational aspects of the solution are discussed, including the effect of using a truncated Fourier series to represent discrete initial conditions, the use of smoothing, and the properties of the encapsulated absorption and dispersion.
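
    For the lossless special case, the single-step k-space solution described above reduces to multiplying the initial pressure spectrum by an exact cosine propagator. A one-dimensional sketch (ignoring the paper's power-law absorption and dispersion terms, which would add fractional-Laplacian factors to the propagator):

```python
import numpy as np

def kspace_propagate(p0, dx, c, t):
    """One-step exact solution of the lossless wave equation for an initial
    pressure distribution p0 with zero initial time derivative:
    P(k, t) = P0(k) * cos(c*|k|*t), evaluated via FFT on a uniform periodic grid."""
    k = 2.0 * np.pi * np.fft.fftfreq(p0.size, d=dx)
    return np.real(np.fft.ifft(np.fft.fft(p0) * np.cos(c * np.abs(k) * t)))

# A single Fourier mode evolves as a standing wave: cos(kx) -> cos(kx) * cos(c*k*t).
N = 256
x = np.arange(N) / N
p0 = np.cos(2.0 * np.pi * 4 * x)
pt = kspace_propagate(p0, dx=1.0 / N, c=1.0, t=0.3)
```

Because the propagator is exact, t can be arbitrarily large in a single call, which is the "no stability constraint" property the abstract highlights.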

  14. Linear and nonlinear analysis of dust acoustic waves in dissipative space dusty plasmas with trapped ions

    NASA Astrophysics Data System (ADS)

    El-Hanbaly, A. M.; El-Shewy, E. K.; Sallah, M.; Darweesh, H. F.

    2015-05-01

    The propagation of linear and nonlinear dust acoustic waves in a homogeneous, unmagnetized, collisionless and dissipative dusty plasma consisting of extremely massive, micron-sized, negative dust grains has been investigated. The Boltzmann distribution is suggested for electrons, whereas a vortex-like distribution is adopted for ions. In the linear analysis, the dispersion relation is obtained, and the dependence of the damping rate of the waves on the carrier wave number, the dust kinematic viscosity coefficient, and the ratio of the ion to the electron temperature is discussed. In the nonlinear analysis, the modified Korteweg-de Vries-Burgers (mKdV-Burgers) equation is derived via the reductive perturbation method. Bifurcation analysis is discussed for the non-dissipative system in the absence of the Burgers term. In the case of the dissipative system, the tangent hyperbolic method is used to solve the mKdV-Burgers equation and yields the shock wave solution. The obtained results may be helpful for a better understanding of wave propagation in astrophysical plasmas as well as in inertial confinement fusion laboratory plasmas.

  15. Production of French vowels by American-English learners of French: language experience, consonantal context, and the perception-production relationship.

    PubMed

    Levy, Erika S; Law, Franzo F

    2010-09-01

    Second-language (L2) speech perception studies have demonstrated effects of language background and consonantal context on the categorization and discrimination of vowels. The present study examined the effects of language experience and consonantal context on the production of Parisian French (PF) vowels by American English (AE) learners of French. Native AE speakers repeated PF vowels /i-y-u-oe-a/ in bilabial /bVp/ and alveolar /dVt/ contexts embedded in the phrase /raCVCa/. Three AE groups participated: speakers without French experience (NoExp), speakers with formal French experience (ModExp), and speakers with formal-plus-immersion experience (HiExp). Production accuracy was assessed by native PF listeners' judgments and by acoustic analysis. PF listeners identified L2 learners' productions more accurately when the learners had extensive language experience, although /y-u-oe/ were frequently misidentified even for HiExp speakers. A consonantal context effect was evident: /u/ produced by ModExp speakers was misidentified more often in alveolar context than in bilabial context, and /y/ more often in bilabial than in alveolar context, suggesting cross-language transfer of coarticulatory rules. Overall, the groups distinguished front rounded /y/ from /u/ in production, but often in a non-native manner, e.g., producing /y/ as /(j)u/. Examination of perceptual data from the same individuals revealed a modest, yet complex, perception-production link for L2 vowels.

  16. Vowel normalization for accent: An investigation of perceptual plasticity in young adults

    NASA Astrophysics Data System (ADS)

    Evans, Bronwen G.; Iverson, Paul

    2004-05-01

    Previous work has emphasized the role of early experience in the ability to accurately perceive and produce foreign or foreign-accented speech. This study examines how listeners at a much later stage in language development-early adulthood-adapt to a non-native accent within the same language. A longitudinal study investigated whether listeners who had had no previous experience of living in multidialectal environments adapted their speech perception and production when attending university. Participants were tested before beginning university and then again 3 months later. An acoustic analysis of production was carried out and perceptual tests were used to investigate changes in word intelligibility and vowel categorization. Preliminary results suggest that listeners are able to adjust their phonetic representations and that these patterns of adjustment are linked to the changes in production that speakers typically make due to sociolinguistic factors when living in multidialectal environments.

  17. Quantifying loss of acoustic communication space for right whales in and around a U.S. National Marine Sanctuary.

    PubMed

    Hatch, Leila T; Clark, Christopher W; Van Parijs, Sofie M; Frankel, Adam S; Ponirakis, Dimitri W

    2012-12-01

    The effects of chronic exposure to increasing levels of human-induced underwater noise on marine animal populations reliant on sound for communication are poorly understood. We sought to further develop methods of quantifying the effects of communication masking associated with human-induced sound on contact-calling North Atlantic right whales (Eubalaena glacialis) in an ecologically relevant area (~10,000 km²) and time period (peak feeding time). We used an array of temporary, bottom-mounted, autonomous acoustic recorders in the Stellwagen Bank National Marine Sanctuary to monitor ambient noise levels, measure levels of sound associated with vessels, and detect and locate calling whales. We related wind speed, as recorded by regional oceanographic buoys, to ambient noise levels. We used vessel-tracking data from the Automatic Identification System to quantify acoustic signatures of large commercial vessels. On the basis of these integrated sound fields, median signal excess (the difference between the signal-to-noise ratio and the assumed recognition differential) for contact-calling right whales was negative (-1 dB) under current ambient noise levels and was further reduced (-2 dB) by the addition of noise from ships. Compared with potential communication space available under historically lower noise conditions, calling right whales may have lost, on average, 63-67% of their communication space. One or more of the 89 calling whales in the study area was exposed to noise levels ≥120 dB re 1 μPa by ships for 20% of the month, and a maximum of 11 whales were exposed to noise at or above this level during a single 10-min period. These results highlight the limitations of exposure-threshold (i.e., dose-response) metrics for assessing chronic anthropogenic noise effects on communication opportunities. Our methods can be used to integrate chronic and wide-ranging noise effects in emerging ocean-planning forums that seek to improve management of cumulative effects.
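
    The masking metric in this study, signal excess, is simple dB arithmetic once the received call level and noise level are known; combining ship noise with ambient noise is an incoherent energy sum. A sketch with invented levels (the recognition differential defaults to zero, matching the abstract's SNR-based definition; the numbers are not the study's measurements):

```python
import numpy as np

def db_sum(*levels_db):
    """Incoherent (energy) sum of sound levels given in dB."""
    return 10.0 * np.log10(sum(10.0 ** (l / 10.0) for l in levels_db))

def signal_excess(call_db, noise_db, recognition_differential_db=0.0):
    """Signal excess: signal-to-noise ratio minus the assumed recognition differential."""
    return (call_db - noise_db) - recognition_differential_db

# Invented levels: a call received at 95 dB against 96 dB ambient noise,
# then with a passing ship contributing 98 dB at the same receiver.
se_ambient = signal_excess(95.0, 96.0)                  # exactly -1 dB
se_with_ship = signal_excess(95.0, db_sum(96.0, 98.0))  # lower still
```

A negative signal excess means the call falls below the assumed detection threshold, which is how lost communication space is tallied over the study area.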

  18. Acoustic source for generating an acoustic beam

    DOEpatents

    Vu, Cung Khac; Sinha, Dipen N.; Pantea, Cristian

    2016-05-31

    An acoustic source for generating an acoustic beam includes a housing; a plurality of spaced-apart piezoelectric layers disposed within the housing; and a non-linear medium filling the space between the layers. Each of the piezoelectric layers is configured to generate an acoustic wave. The non-linear medium and the piezoelectric layers have matching impedances so as to enhance the transmission of the acoustic wave generated by each layer through the remaining layers.

  19. Structural-acoustic optimum design of shell structures in open/closed space based on a free-form optimization method

    NASA Astrophysics Data System (ADS)

    Shimoda, Masatoshi; Shimoide, Kensuke; Shi, Jin-Xing

    2016-03-01

    Noise reduction by structural geometry optimization has attracted much attention among designers. In the present work, we propose a free-form optimization method for the structural-acoustic design optimization of shell structures to reduce the noise of a targeted frequency or frequency range in an open or closed space. The objective of the design optimization is to minimize the average structural vibration-induced sound pressure at the evaluation points in the acoustic field under a volume constraint. For the shape design optimization, we carry out structural-acoustic coupling analysis and adjoint analysis to calculate the shape gradient functions. Then, we use the shape gradient functions in velocity analysis to update the shape of shell structures. We repeat this process until convergence is confirmed to obtain the optimum shape of the shell structures in a structural-acoustic coupling system. The numerical results for the considered examples showed that the proposed design optimization process can significantly reduce the noise in both open and closed spaces.

  20. Effects of frequency shifts and visual gender information on vowel category judgments

    NASA Astrophysics Data System (ADS)

    Glidden, Catherine; Assmann, Peter F.

    2003-10-01

    Visual morphing techniques were used together with a high-quality vocoder to study the audiovisual contribution of talker gender to the identification of frequency-shifted vowels. A nine-step continuum ranging from "bit" to "bet" was constructed from natural recorded syllables spoken by an adult female talker. Upward and downward frequency shifts in spectral envelope (scale factors of 0.85 and 1.0) were applied in combination with shifts in fundamental frequency, F0 (scale factors of 0.5 and 1.0). Downward frequency shifts generally resulted in malelike voices, whereas upward shifts were perceived as femalelike. Two separate nine-step visual continua from "bit" to "bet" were also constructed, one from a male face and the other from a female face, each producing the end-point words. Each step along the two visual continua was paired with the corresponding step on the acoustic continuum, creating natural audiovisual utterances. Category boundary shifts were found for both acoustic cues (F0 and formant frequency shifts) and visual cues (visual gender). The visual gender effect was larger when acoustic and visual information were matched appropriately. These results suggest that visual information provided by the speech signal plays an important supplemental role in talker normalization.

  1. Categorical speech perception during active discrimination of consonants and vowels.

    PubMed

    Altmann, Christian F; Uesaki, Maiko; Ono, Kentaro; Matsuhashi, Masao; Mima, Tatsuya; Fukuyama, Hidenao

    2014-11-01

    Categorical perception of phonemes describes the phenomenon that, when phonemes are classified, they are often perceived to fall into distinct categories even though physically they follow a continuum along a feature dimension. While consonants such as plosives have been proposed to be perceived categorically, the representation of vowels has been described as more continuous. We aimed at testing this difference in representation at the behavioral and neurophysiological levels using human magnetoencephalography (MEG). To this end, we designed stimuli based on natural speech by morphing along a phonological continuum entailing changes of the voiced stop-consonant or the steady-state vowel of a consonant-vowel (CV) syllable. Then, while recording MEG, we presented participants with consecutive pairs of either same or different CV syllables. The differences were such that either both CV syllables were from within the same category or belonged to different categories. During the MEG experiment, the participants actively discriminated the stimulus pairs. Behaviorally, we found that discrimination was easier for the between- compared to the within-category contrast for both consonants and vowels. However, this categorical effect was significantly stronger for consonants compared to vowels, in line with a more continuous representation of vowels. At the neural level, we observed significant repetition suppression of MEG evoked fields, i.e., lower amplitudes for physically same compared to different stimulus pairs, at around 430 to 500 ms after the onset of the second stimulus. Source reconstruction revealed generating sources of this repetition suppression effect within the left superior temporal sulcus and gyrus, posterior to Heschl's gyrus. A region-of-interest analysis within this region showed a clear categorical effect for consonants, but not for vowels, providing further evidence for the important role of left superior temporal areas in categorical representation.

  2. Dynamic surface acoustic response to a thermal expansion source on an anisotropic half space.

    PubMed

    Zhao, Peng; Zhao, Ji-Cheng; Weaver, Richard

    2013-05-01

    The surface displacement response to a distributed thermal expansion source is solved using the reciprocity principle. By convolving the strain Green's function with the thermal stress field created by an ultrafast laser illumination, the complete surface displacement on an anisotropic half space induced by laser absorption is calculated in the time domain. This solution applies to the near field surface displacement due to pulse laser absorption. The solution is validated by performing ultrafast laser pump-probe measurements and showing very good agreement between the measured time-dependent probe beam deflection and the computed surface displacement.

  3. Vowel normalization and the perception of speaker changes: an exploration of the contextual tuning hypothesis.

    PubMed

    Barreda, Santiago

    2012-11-01

    Many experiments have reported a perceptual advantage for vowels presented in blocked- versus mixed-voice conditions. Nusbaum and colleagues [Nusbaum and Morin (1992). in Speech Perception, Speech Production, and Linguistic Structure, edited by Y. Tohkura, Y. Sagisaka, and E. Vatikiotis-Bateson (OHM, Tokyo), pp. 113-134; Magnuson and Nusbaum (2007). J. Exp. Psychol. Hum. Percept. Perform. 33(2), 391-409] present results which suggest that the size of this advantage may be related to the facility with which listeners can detect speaker changes, so that combinations of less similar voices can result in better performance than combinations of more similar voices. To test this, a series of synthetic voices (differing in their source characteristics and/or formant-spaces) was used in a speeded-monitoring task. Vowels were presented in blocks made up of tokens from one or two synthetic voices. Results indicate that formant-space differences, in the absence of source differences between voices in a block, were unlikely to result in the perception of multiple voices, leading to lower accuracy and relatively faster reaction times. Source differences between voices in a block resulted in the perception of multiple voices, increased reaction times, and a decreased negative effect of formant-space differences between voices on identification accuracy. These results are consistent with a process in which the detection of speaker changes guides the appropriate or inappropriate use of extrinsic information in normalization.

  4. Kinematic dust viscosity effect on linear and nonlinear dust-acoustic waves in space dusty plasmas with nonthermal ions

    NASA Astrophysics Data System (ADS)

    El-Hanbaly, A. M.; Sallah, M.; El-Shewy, E. K.; Darweesh, H. F.

    2015-10-01

    Linear and nonlinear dust-acoustic (DA) waves are studied in a collisionless, unmagnetized and dissipative dusty plasma consisting of negatively charged dust grains, Boltzmann-distributed electrons, and nonthermal ions. The normal mode analysis is used to obtain a linear dispersion relation illustrating the dependence of the wave damping rate on the carrier wave number, the dust viscosity coefficient, the ratio of the ion temperature to the electron temperature, and the nonthermal parameter. The plasma system is analyzed nonlinearly via the reductive perturbation method, which gives the KdV-Burgers equation. Some interesting physical solutions are obtained to study the nonlinear waves. These solutions are related to solitons, a combination of a shock and a soliton, and monotonic and oscillatory shock waves. Their behaviors are illustrated and shown graphically. The characteristics of the DA solitary and shock waves are significantly modified by the presence of nonthermal (fast) ions, the ratio of the ion temperature to the electron temperature, and the dust kinematic viscosity. The topology of the phase portrait and the potential diagram of the KdV-Burgers equation is illustrated, whose advantage is the ability to predict different classes of traveling wave solutions according to different phase orbits. The energy of the soliton wave and the electric field are calculated. The results in this paper can be generalized to analyze the nature of plasma waves in both space and laboratory plasma systems.
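
    The KdV-Burgers-type equations named above belong to a standard family whose traveling-wave shocks the tangent hyperbolic (tanh) method constructs. In schematic form, with generic coefficients A, B, C standing in for the values a reductive perturbation derivation would fix (placeholders here, not the paper's derived coefficients):

```latex
u_t + A\,u\,u_x + B\,u_{xxx} - C\,u_{xx} = 0, \qquad
u(x,t) = U(\xi), \quad \xi = k(x - vt), \qquad
U(\xi) = \sum_{i=0}^{M} a_i \tanh^{i}(\xi).
```

Balancing the dispersive term $u_{xxx}$ against the nonlinearity fixes $M$ ($M = 2$ for the quadratic KdV nonlinearity, $M = 1$ for the cubic mKdV case); substituting the ansatz and collecting powers of $\tanh$ yields algebraic equations for $a_i$, $k$, and $v$, whose roots give the monotonic shock profiles discussed above.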

  5. Kinematic dust viscosity effect on linear and nonlinear dust-acoustic waves in space dusty plasmas with nonthermal ions

    SciTech Connect

    El-Hanbaly, A. M.; Sallah, M.; El-Shewy, E. K.; Darweesh, H. F.

    2015-10-15

    Linear and nonlinear dust-acoustic (DA) waves are studied in a collisionless, unmagnetized and dissipative dusty plasma consisting of negatively charged dust grains, Boltzmann-distributed electrons, and nonthermal ions. The normal mode analysis is used to obtain a linear dispersion relation illustrating the dependence of the wave damping rate on the carrier wave number, the dust viscosity coefficient, the ratio of the ion temperature to the electron temperature, and the nonthermal parameter. The plasma system is analyzed nonlinearly via the reductive perturbation method, which gives the KdV-Burgers equation. Some interesting physical solutions are obtained to study the nonlinear waves. These solutions are related to solitons, a combination of a shock and a soliton, and monotonic and oscillatory shock waves. Their behaviors are illustrated and shown graphically. The characteristics of the DA solitary and shock waves are significantly modified by the presence of nonthermal (fast) ions, the ratio of the ion temperature to the electron temperature, and the dust kinematic viscosity. The topology of the phase portrait and the potential diagram of the KdV-Burgers equation is illustrated, whose advantage is the ability to predict different classes of traveling wave solutions according to different phase orbits. The energy of the soliton wave and the electric field are calculated. The results in this paper can be generalized to analyze the nature of plasma waves in both space and laboratory plasma systems.

  6. Allocentric or Craniocentric Representation of Acoustic Space: An Electrotomography Study Using Mismatch Negativity

    PubMed Central

    Altmann, Christian F.; Getzmann, Stephan; Lewald, Jörg

    2012-01-01

    The world around us appears stable in spite of our constantly moving head, eyes, and body. How this is achieved by our brain is hardly understood and even less so in the auditory domain. Using electroencephalography and the so-called mismatch negativity, we investigated whether auditory space is encoded in an allocentric (referenced to the environment) or craniocentric representation (referenced to the head). Fourteen subjects were presented with noise bursts from loudspeakers in an anechoic environment. Occasionally, subjects were cued to rotate their heads and a deviant sound burst occurred, that deviated from the preceding standard stimulus either in terms of an allocentric or craniocentric frame of reference. We observed a significant mismatch negativity, i.e., a more negative response to deviants with reference to standard stimuli from about 136 to 188 ms after stimulus onset in the craniocentric deviant condition only. Distributed source modeling with sLORETA revealed an involvement of lateral superior temporal gyrus and inferior parietal lobule in the underlying neural processes. These findings suggested a craniocentric, rather than allocentric, representation of auditory space at the level of the mismatch negativity. PMID:22848643

  7. Impact of chevron spacing and asymmetric distribution on supersonic jet acoustics and flow

    NASA Astrophysics Data System (ADS)

    Heeb, N.; Gutmark, E.; Kailasanath, K.

    2016-05-01

    An experimental investigation into the effect of chevron spacing and distribution on supersonic jets was performed. Cross-stream and streamwise particle imaging velocimetry measurements were used to relate flow field modification to sound field changes measured by far-field microphones in the overexpanded, ideally expanded, and underexpanded regimes. Drastic modification of the jet cross-section was achieved by the investigated configurations, with both elliptic and triangular shapes attained downstream. Consequently, screech was nearly eliminated with reductions in the range of 10-25 dB depending on the operating condition. Analysis of the streamwise velocity indicated that both the mean shock spacing and strength were reduced resulting in an increase in the broadband shock associated noise spectral peak frequency and a reduction in the amplitude, respectively. Maximum broadband shock associated noise amplitude reductions were in the 5-7 dB range. Chevron proximity was found to be the primary driver of peak vorticity production, though persistence followed the opposite trend. The integrated streamwise vorticity modulus was found to be correlated with peak large scale turbulent mixing noise reduction, though optimal overall sound pressure level reductions did not necessarily follow due to the shock/fine scale mixing noise sources. Optimal large scale mixing noise reductions were in the 5-6 dB range.

  8. Estimating pore-space gas hydrate saturations from well log acoustic data

    USGS Publications Warehouse

    Lee, Myung W.; Waite, William F.

    2008-01-01

    Relating pore-space gas hydrate saturation to sonic velocity data is important for remotely estimating gas hydrate concentration in sediment. In the present study, sonic velocities of gas hydrate–bearing sands are modeled using a three-phase Biot-type theory in which sand, gas hydrate, and pore fluid form three homogeneous, interwoven frameworks. This theory is developed using well log compressional and shear wave velocity data from the Mallik 5L-38 permafrost gas hydrate research well in Canada and applied to well log data from hydrate-bearing sands in the Alaskan permafrost, Gulf of Mexico, and northern Cascadia margin. Velocity-based gas hydrate saturation estimates are in good agreement with nuclear magnetic resonance and resistivity log estimates over the complete range of observed gas hydrate saturations.

  9. Near noise field characteristics of Nike rocket motors for application to space vehicle payload acoustic qualification

    NASA Technical Reports Server (NTRS)

    Hilton, D. A.; Bruton, D.

    1977-01-01

    Results of a series of noise measurements that were made under controlled conditions during the static firing of two Nike solid propellant rocket motors are presented. The usefulness of these motors as sources for general spacecraft noise testing was assessed, and the noise expected in the cargo bay of the orbiter was reproduced. Brief descriptions of the Nike motor, the general procedures utilized for the noise tests, and representative noise data including overall sound pressure levels, one third octave band spectra, and octave band spectra were reviewed. Data are presented on two motors of different ages in order to show the similarity between noise measurements made on motors having different loading dates. The measured noise from these tests is then compared to that estimated for the space shuttle orbiter cargo bay.

  10. Vowels, Syllables, and Letter Names: Differences between Young Children's Spelling in English and Portuguese

    ERIC Educational Resources Information Center

    Pollo, Tatiana Cury; Kessler, Brett; Treiman, Rebecca

    2005-01-01

    Young Portuguese-speaking children have been reported to produce more vowel- and syllable-oriented spellings than have English speakers. To investigate the extent and source of such differences, we analyzed children's vocabulary and found that Portuguese words have more vowel letter names and a higher vowel-consonant ratio than do English words.…

  11. Vowel Harmony Is a Basic Phonetic Rule of the Turkic Languages

    ERIC Educational Resources Information Center

    Shoibekova, Gaziza B.; Odanova, Sagira A.; Sultanova, Bibigul M.; Yermekova, Tynyshtyk N.

    2016-01-01

    The present study comprehensively analyzes vowel harmony as an important phonetic rule in Turkic languages. Recent changes in the vowel harmony potential of Turkic sounds caused by linguistic and extra-linguistic factors were described. Vowels in the Kazakh, Turkish, and Uzbek languages were compared. The way this or that phoneme sounded in the…

  12. Structural Generalizations over Consonants and Vowels in 11-Month-Old Infants

    ERIC Educational Resources Information Center

    Pons, Ferran; Toro, Juan M.

    2010-01-01

    Recent research has suggested consonants and vowels serve different roles during language processing. While statistical computations are preferentially made over consonants but not over vowels, simple structural generalizations are easily made over vowels but not over consonants. Nevertheless, the origins of this asymmetry are unknown. Here we…

  13. Children's Perception of Conversational and Clear American-English Vowels in Noise

    NASA Astrophysics Data System (ADS)

    Leone, Dorothy

    A handful of studies have examined children's perception of clear speech in the presence of background noise. Although accurate vowel perception is important for listeners' comprehension, no study has focused on whether vowels uttered in clear speech aid intelligibility for child listeners. In the present study, American-English (AE) speaking children repeated the AE vowels /ɛ, æ, ɑ, ʌ/ in the nonsense word /gəbVpə/ in phrases produced in conversational and clear speech by two female AE-speaking adults. The recordings of the adults' speech were presented at a signal-to-noise ratio (SNR) of -6 dB to 15 AE-speaking children (ages 5.0-8.5) to examine whether AE school-age children identify vowels in noise more accurately when utterances are produced in clear speech than in conversational speech. Effects of the particular vowel uttered and of the talker were also examined. Clear-speech vowels were repeated significantly more accurately (87%) than conversational-speech vowels (59%), suggesting that clear speech aids children's vowel identification. Results varied as a function of the talker and the particular vowel uttered. Child listeners repeated one talker's vowels more accurately than the other's, and front vowels more accurately than central and back vowels. The findings support the use of clear speech for enhancing adult-to-child communication in AE, particularly in noisy environments.

  14. Textual Input Enhancement for Vowel Blindness: A Study with Arabic ESL Learners

    ERIC Educational Resources Information Center

    Alsadoon, Reem; Heift, Trude

    2015-01-01

    This study explores the impact of textual input enhancement on the noticing and intake of English vowels by Arabic L2 learners of English. Arabic L1 speakers are known to experience "vowel blindness," commonly defined as a difficulty in the textual decoding and encoding of English vowels due to an insufficient decoding of the word form.…

  15. Biomechanically Preferred Consonant-Vowel Combinations Fail to Appear in Adult Spoken Corpora

    ERIC Educational Resources Information Center

    Whalen, D. H.; Giulivi, Sara; Nam, Hosung; Levitt, Andrea G.; Halle, Pierre; Goldstein, Louis M.

    2012-01-01

    Certain consonant/vowel (CV) combinations are more frequent than would be expected from the individual C and V frequencies alone, both in babbling and, to a lesser extent, in adult language, based on dictionary counts: Labial consonants co-occur with central vowels more often than chance would dictate; coronals co-occur with front vowels, and…

  16. Improving L2 Listeners' Perception of English Vowels: A Computer-Mediated Approach

    ERIC Educational Resources Information Center

    Thomson, Ron I.

    2012-01-01

    A high variability phonetic training technique was employed to train 26 Mandarin speakers to better perceive ten English vowels. In eight short training sessions, learners identified 200 English vowel tokens, produced in a post bilabial stop context by 20 native speakers. Learners' ability to identify English vowels significantly improved in the…

  17. Asymmetries in the Processing of Vowel Height

    ERIC Educational Resources Information Center

    Scharinger, Mathias; Monahan, Philip J.; Idsardi, William J.

    2012-01-01

    Purpose: Speech perception can be described as the transformation of continuous acoustic information into discrete memory representations. Therefore, research on neural representations of speech sounds is particularly important for a better understanding of this transformation. Speech perception models make specific assumptions regarding the…

  18. The phonetic manifestation of French /s#∫/ and /∫#s/ sequences in different vowel contexts: on the occurrence and the domain of sibilant assimilation.

    PubMed

    Niebuhr, Oliver; Meunier, Christine

    2011-01-01

    While assimilation was initially regarded as a categorical replacement of phonemes or phonological features, subsequent detailed phonetic analyses showed that assimilation actually generates a wide spectrum of intermediate forms in terms of speech timing and spectrum. However, the focus of these analyses predominantly remained on the assimilated speech sound. In the present study we go one step further in two ways. First, we look at acoustic phonetic detail that differs in the French vowels /i, a, u/ preceding single /s/ and /∫/ sibilants as well as /s#∫/ and /∫#s/ sibilant sequences. Second, our vowel measurements include not only F1 and F2 frequencies but also traditional prosodic parameters such as duration, intensity, and voice quality. The vowels and sibilants were recorded as the central part of CVC#CVC pseudo-names in a contextualized read-speech paradigm. In the single-sibilant conditions we found that the vowels preceding /∫/ were longer, breathier, less intense, and had more cardinal F2 values than before /s/. For the /s#∫/ and /∫#s/ conditions we found regressive and progressive /s/-to-[∫] assimilation that was complete in terms of spectral centre-of-gravity measurements, although French is said to have only voice assimilation. Moreover, the vowels preceding the /s#∫/ sequences still bear an imprint of /s/ despite the assimilation towards [∫∫]. We discuss the implications of these findings for the time window and the completeness of assimilation, as well as for the basic units in speech communication.

  19. Nearfield Acoustical Holography

    NASA Astrophysics Data System (ADS)

    Hayek, Sabih I.

    Nearfield acoustical holography (NAH) is a method by which a set of acoustic pressure measurements at points located on a specific surface (called a hologram) can be used to image sources on vibrating surfaces, or the acoustic field, in three-dimensional space. NAH data are processed to take advantage of the evanescent wavefield to image sources that are separated by less than one-eighth of a wavelength.
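
    To see why the hologram surface must sit so close to the source, consider that surface detail finer than the acoustic wavelength radiates evanescent components whose amplitude decays exponentially with distance from the surface. A minimal sketch of that decay, under textbook plane-wave assumptions (the function name and example values are illustrative, not from the entry above):

```python
import math

def evanescent_decay(f_hz, feature_size_m, distance_m, c=343.0):
    """Amplitude decay factor for one spatial-frequency component
    of a vibrating surface, measured at a given standoff distance.

    A surface detail of size L carries spatial frequency kx ~ 2*pi/L.
    When kx exceeds the acoustic wavenumber k = 2*pi*f/c, the component
    does not propagate; it decays as exp(-sqrt(kx^2 - k^2) * z).
    """
    k = 2 * math.pi * f_hz / c          # acoustic wavenumber
    kx = 2 * math.pi / feature_size_m   # spatial frequency of the detail
    if kx <= k:
        return 1.0                      # propagating component: no decay
    kz = math.sqrt(kx**2 - k**2)        # evanescent decay constant
    return math.exp(-kz * distance_m)
```

    At 1 kHz (wavelength roughly 0.34 m), a 5 cm surface feature is already strongly attenuated a few centimeters from the surface, which is why the hologram must be placed within a small fraction of a wavelength of the source.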

  20. Liquid Rocket Booster (LRB) for the Space Transportation System (STS) systems study. Appendix B: Liquid rocket booster acoustic and thermal environments

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The ascent thermal environment and propulsion acoustic sources for the Liquid Rocket Boosters (LRBs) designed by Martin Marietta Corporation for use with the Space Shuttle Orbiter and External Tank are described. Two designs were proposed: one using a pump-fed propulsion system and the other using a pressure-fed propulsion system. Both designs use LOX/RP-1 propellants, but differences in performance of the two propulsion systems produce significant differences in the proposed stage geometries, exhaust plumes, and resulting environments. The general characteristics of the two designs which are significant for environmental predictions are described. The methods of analysis and predictions for environments in acoustics, aerodynamic heating, and base heating (from exhaust plume effects) are also described. The acoustic section will compare the proposed exhaust plumes with the current SRB from the standpoint of acoustics and ignition overpressure. The sections on thermal environments will provide details of the LRB heating rates and indications of possible changes in the Orbiter and ET environments as a result of the change from SRBs to LRBs.

  1. AST Launch Vehicle Acoustics

    NASA Technical Reports Server (NTRS)

    Houston, Janice; Counter, D.; Giacomoni, D.

    2015-01-01

    The liftoff phase induces acoustic loading over a broad frequency range for a launch vehicle. These external acoustic environments are then used in the prediction of internal vibration responses of the vehicle and components which result in the qualification levels. Thus, predicting these liftoff acoustic (LOA) environments is critical to the design requirements of any launch vehicle. If there is a significant amount of uncertainty in the predictions or if acoustic mitigation options must be implemented, a subscale acoustic test is a feasible pre-launch test option to verify the LOA environments. The NASA Space Launch System (SLS) program initiated the Scale Model Acoustic Test (SMAT) to verify the predicted SLS LOA environments and to determine the acoustic reduction with an above deck water sound suppression system. The SMAT was conducted at Marshall Space Flight Center and the test article included a 5% scale SLS vehicle model, tower and Mobile Launcher. Acoustic and pressure data were measured by approximately 250 instruments. The SMAT liftoff acoustic results are presented, findings are discussed and a comparison is shown to the Ares I Scale Model Acoustic Test (ASMAT) results.

  2. Trading relations between tongue-body raising and lip rounding in production of the vowel /u/: a pilot "motor equivalence" study.

    PubMed

    Perkell, J S; Matthies, M L; Svirsky, M A; Jordan, M I

    1993-05-01

    Articulatory and acoustic data were used to explore the following hypothesis for the vowel /u/: The objective of articulatory movements is an acoustic goal; varying and reciprocal contributions of different articulators may help to constrain acoustic variation in achieving the goal. Previous articulatory studies of similar hypotheses, expressed entirely in articulatory terms, have been confounded by interdependencies of the variables being studied (e.g., lip and mandible displacements). One case in which this problem may be minimized is that of lip rounding and tongue-body raising (formation of a velo-palatal constriction) for the vowel /u/. Lip rounding and tongue-body raising should have similar acoustic effects for /u/, mainly to lower F2. In multiple repetitions, reciprocal contributions of lip rounding and tongue-body raising could help limit F2 variability for /u/; thus this experiment looked for complementary covariation (negative correlations) in measures of these two parameters. An electro-magnetic midsagittal articulometer (EMMA) was used to track movements of midsagittal points on the tongue body, upper and lower lips, and mandible for large numbers of repetitions of utterances containing /u/. (Interpretation of the data was aided by results from area-function-to-formant modeling.) Three of four subjects showed weak negative correlations, tentatively supporting the hypothesis; a fourth showed the opposite pattern: positive correlations of lip rounding and tongue raising. The results are discussed with respect to ideas about motor equivalence, the nature of speech motor programming, and potential improvements to the paradigm.
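
    The trading relation this study tests reduces to a sign check on a correlation: across repetitions of /u/, complementary covariation means the lip-rounding and tongue-raising measures should correlate negatively. A minimal sketch with invented per-repetition values (not the study's data):

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Illustrative (invented) per-repetition measures for /u/: when lip
# rounding is weaker, tongue raising compensates, keeping F2 low.
lip_protrusion = [1.2, 0.9, 1.1, 0.8, 1.0, 0.7]
tongue_height = [0.8, 1.1, 0.9, 1.2, 1.0, 1.3]

r = pearson_r(lip_protrusion, tongue_height)
# a negative r is the complementary covariation the hypothesis predicts
```

    With real articulometer data, the two sequences would be per-repetition displacement measures for the lip and tongue-body points; a reliably negative r is the pattern the abstract reports (weakly) for three of the four subjects.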

  3. Vowel distortion in traumatic dysarthria: lip rounding versus tongue advancement.

    PubMed

    Ziegler, W; von Cramon, D

    1983-01-01

    Formant analysis of tense, high German vowels was performed to obtain information about the role of insufficient lip rounding in the distorted vowel production of 8 traumatic dysarthrics. A comparison was made between two allophones of /y/ in different consonantal contexts. Noticeable undershoot in lip rounding or protrusion proved to occur in contexts of conflicting labial gestures. Where the articulatory realization of a CVC sequence required gross tongue movements, lingual undershoot was the prevailing deficit. No evidence of dyscoordinative defects was obtained from the results.

  4. Intelligibility of American English Vowels of Native and Non-Native Speakers in Quiet and Speech-Shaped Noise

    ERIC Educational Resources Information Center

    Liu, Chang; Jin, Su-Hyun

    2013-01-01

    This study examined intelligibility of twelve American English vowels produced by English, Chinese, and Korean native speakers in quiet and speech-shaped noise in which vowels were presented at six sensation levels from 0 dB to 10 dB. The slopes of vowel intelligibility functions and the processing time for listeners to identify vowels were…

  5. Cystic acoustic schwannomas.

    PubMed

    Lunardi, P; Missori, P; Mastronardi, L; Fortuna, A

    1991-01-01

    Three cases with large space-occupying cysts in the cerebellopontine angle are reported. CT and MRI findings were not typical for acoustic schwannomas but at operation, besides the large cysts, small acoustic schwannomas could be detected and removed. The clinical and neuroradiological features of this unusual variety and the CT and MRI differential diagnosis of cerebellopontine angle lesions are discussed.

  6. Native dialect matters: perceptual assimilation of Dutch vowels by Czech listeners.

    PubMed

    Chládková, Kateřina; Podlipský, Václav Jonáš

    2011-10-01

    Naive listeners' perceptual assimilations of non-native vowels to first-language (L1) categories can predict difficulties in the acquisition of second-language vowel systems. This study demonstrates that listeners having two slightly different dialects as their L1s can differ in the perception of foreign vowels. Specifically, the study shows that Bohemian Czech and Moravian Czech listeners assimilate Dutch high front vowels differently to L1 categories. Consequently, the listeners are predicted to follow different paths in acquiring these Dutch vowels. These findings underscore the importance of carefully considering the specific dialect background of participants in foreign- and second-language speech perception studies.

  7. Capabilities, Design, Construction and Commissioning of New Vibration, Acoustic, and Electromagnetic Capabilities Added to the World's Largest Thermal Vacuum Chamber at NASA's Space Power Facility

    NASA Technical Reports Server (NTRS)

    Motil, Susan M.; Ludwiczak, Damian R.; Carek, Gerald A.; Sorge, Richard N.; Free, James M.; Cikanek, Harry A., III

    2011-01-01

    NASA's human space exploration plans developed under the Exploration System Architecture Studies in 2005 included a Crew Exploration Vehicle launched on an Ares I launch vehicle. The mass of the Crew Exploration Vehicle and the trajectory of the Ares I, coupled with the need to be able to abort across a large percentage of the trajectory, generated unprecedented testing requirements. A future lunar lander added to the projected test requirements. In 2006, the basic test plan for Orion was developed. It included several types of environment tests typical of spacecraft development programs: thermal-vacuum, electromagnetic interference, mechanical vibration, and acoustic tests. Because of the size of the vehicle and the unprecedented acoustics, NASA conducted an extensive assessment of options for testing and, as a result, chose to augment the Space Power Facility at NASA Plum Brook Station of the John H. Glenn Research Center to provide the needed test capabilities. The augmentation included designing and building the world's highest-mass-capable vibration table and highest-power large acoustic chamber, and adapting the existing world's largest thermal vacuum chamber as a reverberant electromagnetic interference test chamber. These augmentations were accomplished from 2007 through early 2011. Acceptance testing began in spring 2011 and will be completed in the fall of 2011. This paper provides an overview of the capabilities, design, construction, and acceptance of this extraordinary facility.

  8. Intrinsic F0 in tense and lax vowels with special reference to German.

    PubMed

    Fischer-Jørgensen, E

    1990-01-01

    The main purpose of this paper is to show that the observation which is the starting point for almost all attempts at explaining intrinsic fundamental frequency (intrinsic F0) in vowels, i.e. that it is correlated with vowel height (interpreted as tongue height), does not hold if short lax vowels are included, since they have a considerably lower tongue height but practically the same F0 as their corresponding tense counterparts. Section 1 contains a discussion of some explanations of intrinsic F0 and vowel height and a short exposition of its connection with other vowel features. Section 2 gives a survey of the properties of tense and lax vowels based on data from the phonetic literature. Section 3 reports on an investigation of German tense and lax front unrounded vowels, including duration, tongue height, jaw opening, vertical lip opening, formant frequencies, and F0. Section 4 contains a discussion of various possible explanations of the results.

  9. Mismatch negativity at Fz in response to within-category changes of the vowel /i/.

    PubMed

    Marklund, Ellen; Schwarz, Iris-Corinna; Lacerda, Francisco

    2014-07-09

    The amplitude of the mismatch negativity response for acoustic within-category deviations in speech stimuli was investigated by presenting participants with different exemplars of the vowel /i/ in an odd-ball paradigm. The deviants differed from the standard either in terms of fundamental frequency, the first formant, or the second formant. Changes in fundamental frequency are generally more salient than changes in the first formant, which in turn are more salient than changes in the second formant. The mismatch negativity response was expected to reflect this, with greater amplitude for more salient deviations. The fundamental frequency deviants did indeed result in greater amplitude than both first formant deviants and second formant deviants, but no difference was found between the first formant deviants and the second formant deviants. It is concluded that a greater difference between the standard and within-category deviants across different acoustic dimensions results in greater mismatch negativity amplitude, suggesting that linguistically irrelevant changes in speech sounds may be processed similarly to nonspeech sound changes.

  10. Competing Triggers: Transparency and Opacity in Vowel Harmony

    ERIC Educational Resources Information Center

    Kimper, Wendell A.

    2011-01-01

    This dissertation takes up the issue of transparency and opacity in vowel harmony--that is, when a segment is unable to undergo a harmony process, will it be skipped over by harmony (transparent) or will it prevent harmony from propagating further (opaque)? I argue that the choice between transparency and opacity is best understood as a…

  11. Vowel Production in the Speech of Western Armenian Heritage Speakers

    ERIC Educational Resources Information Center

    Godson, Linda

    2004-01-01

    This study investigates whether the age at which English becomes dominant for Western Armenian bilinguals in the United States affects their vowel production in Western Armenian. Participating in the study were ten Western-Armenian bilinguals who learned English before age 8, ten bilinguals who did not learn English until adulthood, and one…

  12. Finding Words in a Language that Allows Words without Vowels

    ERIC Educational Resources Information Center

    El Aissati, Abder; McQueen, James M.; Cutler, Anne

    2012-01-01

    Across many languages from unrelated families, spoken-word recognition is subject to a constraint whereby potential word candidates must contain a vowel. This constraint minimizes competition from embedded words (e.g., in English, disfavoring "win" in "twin" because "t" cannot be a word). However, the constraint would be counter-productive in…

  13. Does Size Matter? Subsegmental Cues to Vowel Mispronunciation Detection

    ERIC Educational Resources Information Center

    Mani, Nivedita; Plunkett, Kim

    2011-01-01

    Children look longer at a familiar object when presented with either correct pronunciations or small mispronunciations of consonants in the object's label, but not following larger mispronunciations. The current article examines whether children display a similar graded sensitivity to different degrees of mispronunciations of the vowels in…

  14. Vowel Category Formation in Korean-English Bilingual Children

    ERIC Educational Resources Information Center

    Lee, Sue Ann S.; Iverson, Gregory K.

    2012-01-01

    Purpose: A previous investigation (Lee & Iverson, 2012) found that English and Korean stop categories were fully distinguished by Korean-English bilingual children at 10 years of age but not at 5 years of age. The present study examined vowels produced by Korean-English bilingual children of these same ages to determine whether and when bilinguals…

  15. Auditory Spectral Integration in the Perception of Static Vowels

    ERIC Educational Resources Information Center

    Fox, Robert Allen; Jacewicz, Ewa; Chang, Chiung-Yun

    2011-01-01

    Purpose: To evaluate potential contributions of broadband spectral integration in the perception of static vowels. Specifically, can the auditory system infer formant frequency information from changes in the intensity weighting across harmonics when the formant itself is missing? Does this type of integration produce the same results in the lower…

  16. Lingual Electromyography Related to Tongue Movements in Swedish Vowel Production.

    ERIC Educational Resources Information Center

    Hirose, Hajime; And Others

    1979-01-01

    In order to investigate the articulatory dynamics of the tongue in the production of Swedish vowels, electromyographic (EMG) and X-ray microbeam studies were performed on a native Swedish subject. The EMG signals were used to obtain an average indication of the muscle activity of the tongue as a function of time. (NCR)

  17. Short Vowels. Fun with Phonics! Book 4. Grades K-1.

    ERIC Educational Resources Information Center

    Eaton, Deborah

    This book is a hands-on activity resource for kindergarten and first grade that makes phonics instruction easy and fun for teachers and children in the classroom. The book offers methods for practice, reinforcement, and assessment of phonetic skills using a poem as a foundation for teaching short vowels. The poem is duplicated so children can work…

  18. Comparing Identification of Standardized and Regionally Valid Vowels

    ERIC Educational Resources Information Center

    Wright, Richard; Souza, Pamela

    2012-01-01

    Purpose: In perception studies, it is common to use vowel stimuli from standardized recordings or synthetic stimuli created using values from well-known published research. Although the use of standardized stimuli is convenient, unconsidered dialect and regional accent differences may introduce confounding effects. The goal of this study was to…

  19. Cardiac Orienting and Vowel Discrimination in Newborns: Crucial Stimulus Parameters.

    ERIC Educational Resources Information Center

    Clarkson, Marsha G.; Berg, W. Keith

    1983-01-01

    Results from one experiment indicated that the temporal pattern and spectral complexity of moderately intense auditory stimuli influenced cardiac responses in 24 alert newborns. A second study extended the temporal-pattern effect to other vowel stimuli in a no-delay discrimination paradigm. (Author/RH)

  20. Vowel Epenthesis and Segment Identity in Korean Learners of English

    ERIC Educational Resources Information Center

    de Jong, Kenneth; Park, Hanyong

    2012-01-01

    Recent literature has sought to understand the presence of epenthetic vowels after the productions of postvocalic word-final consonants by second language (L2) learners whose first languages (L1s) restrict the presence of obstruents in coda position. Previous models include those in which epenthesis is seen as a strategy to mitigate the effects of…

  1. Vowel Harmony: A Variable Rule in Brazilian Portuguese.

    ERIC Educational Resources Information Center

    Bisol, Leda

    1989-01-01

    Examines vowel harmony in the "Gaucho dialect" of the Brazilian state of Rio Grande do Sul. Informants from four areas of the state were studied: the capital city (Porto Alegre), the border region with Uruguay, and two areas of the interior populated by descendants of nineteenth-century immigrants from Europe, mainly Germans and…

  2. Effect of Audio vs. Video on Aural Discrimination of Vowels

    ERIC Educational Resources Information Center

    McCrocklin, Shannon

    2012-01-01

    Despite the growing use of media in the classroom, the effects of using audio versus video in pronunciation teaching have been largely ignored. To analyze the impact of audio or video training on aural discrimination of vowels, 61 participants (all students at a large American university) took a pre-test followed by two training…

  3. Identification of Simple and Compound Vowels by First Graders.

    ERIC Educational Resources Information Center

    Wright, Ouida Marina

    The purpose of the study was to determine whether by structuring and sequencing the same monosyllabic CVC, CVVC, and CVCe English words in two different patterns (EI and EII), administered with the same controlled procedures, boys and girls in grade one would be facilitated in detecting, identifying, and discriminating among single vowels and…

  4. Hemispheric Differences in the Effects of Context on Vowel Perception

    ERIC Educational Resources Information Center

    Sjerps, Matthias J.; Mitterer, Holger; McQueen, James M.

    2012-01-01

    Listeners perceive speech sounds relative to context. Contextual influences might differ over hemispheres if different types of auditory processing are lateralized. Hemispheric differences in contextual influences on vowel perception were investigated by presenting speech targets and both speech and non-speech contexts to listeners' right or left…

  5. Nasal Vowels as Two Segments: Evidence from Borrowings.

    ERIC Educational Resources Information Center

    Paradis, Carole; Prunet, Jean-Francois

    2000-01-01

    Demonstrates that the substitution of a foreign segment in borrowings, based on a database that includes 14,350 segmental malformations from French and English loanwords in eight distinct languages, involves its replacement by a single native segment. This tendency is without exception, other than in cases where nasal vowels are concerned.…

  6. Vowels in Early Words: An Event-Related Potential Study

    ERIC Educational Resources Information Center

    Mani, Nivedita; Mills, Debra L.; Plunkett, Kim

    2012-01-01

    Previous behavioural research suggests that infants possess phonologically detailed representations of the vowels and consonants in familiar words. These tasks examine infants' sensitivity to mispronunciations of a target label in the presence of a target and distracter image. Sensitivity to the mispronunciation may, therefore, be contaminated by…

  7. The Phonetic Nature of Vowels in Modern Standard Arabic

    ERIC Educational Resources Information Center

    Salameh, Mohammad Yahya Bani; Abu-Melhim, Abdel-Rahman

    2014-01-01

    The aim of this paper is to explore the phonetic nature of vowels in Modern Standard Arabic (MSA). Although Arabic is a Semitic language, the speech sound system of Arabic is very comprehensive. Data used for this study were elicited from the standard speech of nine informants who are native speakers of Arabic. The researchers used themselves as…

  8. Stimulus Variability and Perceptual Learning of Nonnative Vowel Categories

    ERIC Educational Resources Information Center

    Brosseau-Lapre, Francoise; Rvachew, Susan; Clayards, Meghan; Dickson, Daniel

    2013-01-01

    English speakers' learning of a French vowel contrast (/ə/-/ø/) was examined under six different stimulus conditions in which contrastive and noncontrastive stimulus dimensions were varied orthogonally to each other. The distribution of contrastive cues was varied across training conditions to create single prototype, variable far…

  9. Experimental Approach to the Study of Vowel Perception in German

    ERIC Educational Resources Information Center

    Wangler, Hans-Heinrich; Weiss, Rudolf

    1975-01-01

    An experimental phonetic investigation is described whose goal it was to develop a test which could be used to establish norms in the perception of vowels by native speakers of German. Particular emphasis is placed upon the design of the experiment. The test procedure and the results are discussed. Available from Albert J. Phiebig, Inc., P.O. Box…

  10. Comparison of Nasal Acceleration and Nasalance across Vowels

    ERIC Educational Resources Information Center

    Thorp, Elias B.; Virnik, Boris T.; Stepp, Cara E.

    2013-01-01

    Purpose: The purpose of this study was to determine the performance of normalized nasal acceleration (NNA) relative to nasalance as estimates of nasalized versus nonnasalized vowel and sentence productions. Method: Participants were 18 healthy speakers of American English. NNA was measured using a custom sensor, and nasalance was measured using…

  11. Acoustic Cues to Perception of Word Stress by English, Mandarin, and Russian Speakers

    ERIC Educational Resources Information Center

    Chrabaszcz, Anna; Winn, Matthew; Lin, Candise Y.; Idsardi, William J.

    2014-01-01

    Purpose: This study investigated how listeners' native language affects their weighting of acoustic cues (such as vowel quality, pitch, duration, and intensity) in the perception of contrastive word stress. Method: Native speakers (N = 45) of typologically diverse languages (English, Russian, and Mandarin) performed a stress identification…

  12. The Acoustic and Perceptual Correlates of Emphasis in Urban Jordanian Arabic

    ERIC Educational Resources Information Center

    Al-Masri, Mohammad

    2009-01-01

    Acoustic and perceptual correlates of emphasis, a secondary articulation in the posterior vocal tract, in Urban Jordanian Arabic were studied. CVC monosyllables and CV.CVC bisyllables with emphatic and plain target consonants in word-initial, word-medial and word-final positions were examined. Spectral measurements on the target vowels at vowel…

  13. Articulatory-to-Acoustic Relations in Talkers with Dysarthria: A First Analysis

    ERIC Educational Resources Information Center

    Mefferd, Antje

    2015-01-01

    Purpose: The primary purpose of this study was to determine the strength of interspeaker and intraspeaker articulatory-to-acoustic relations of vowel contrast produced by talkers with dysarthria and controls. Methods: Six talkers with amyotrophic lateral sclerosis (ALS), six talkers with Parkinson's disease (PD), and 12 controls repeated a…

  14. Toward the Development of an Objective Index of Dysphonia Severity: A Four-Factor Acoustic Model

    ERIC Educational Resources Information Center

    Awan, Shaheen N.; Roy, Nelson

    2006-01-01

    During assessment and management of individuals with voice disorders, clinicians routinely attempt to describe or quantify the severity of a patient's dysphonia. This investigation used acoustic measures derived from sustained vowel samples to predict dysphonia severity (as determined by auditory-perceptual ratings), for a diverse set of voice…

  15. Acoustic Predictors of Intelligibility for Segmentally Interrupted Speech: Temporal Envelope, Voicing, and Duration

    ERIC Educational Resources Information Center

    Fogerty, Daniel

    2013-01-01

    Purpose: Temporal interruption limits the perception of speech to isolated temporal glimpses. An analysis was conducted to determine the acoustic parameter that best predicts speech recognition from temporal fragments that preserve different types of speech information--namely, consonants and vowels. Method: Young listeners with normal hearing…

  16. Acoustic Duration Changes Associated with Two Types of Treatment for Children Who Stutter.

    ERIC Educational Resources Information Center

    Riley, Glyndon D.; Ingham, Janis Costello

    2000-01-01

    This study examined acoustic durations in 12 children (ages 3 to 9) who stuttered and received treatment based either on speech motor training (SMT) or extended length of utterance (ELU). Although the ELU treatment reduced stuttering more than the SMT, the SMT was more effective in increasing vowel duration and decreasing stop gap duration.…

  17. Acoustic mode coupling induced by shallow water nonlinear internal waves: sensitivity to environmental conditions and space-time scales of internal waves.

    PubMed

    Colosi, John A

    2008-09-01

    While many results have been intuited from numerical simulation studies, the precise connections between shallow-water acoustic variability and the space-time scales of nonlinear internal waves (NLIWs) as well as the background environmental conditions have not been clearly established analytically. Two-dimensional coupled mode propagation through NLIWs is examined using a perturbation series solution in which each order n is associated with nth-order multiple scattering. Importantly, the perturbation solution gives resonance conditions that pick out specific NLIW scales that cause coupling, and seabed attenuation is demonstrated to broaden these resonances, fundamentally changing the coupling behavior at low frequency. Sound-speed inhomogeneities caused by internal solitary waves (ISWs) are primarily considered, and the dependence of mode coupling on ISW amplitude, range width, depth structure, location relative to the source, and packet characteristics is delineated as a function of acoustic frequency. In addition, it is seen that significant energy transfer to modes with initially low or zero energy involves at least a second-order scattering process. Under moderate scattering conditions, comparisons of first-order, single-scattering theoretical predictions to direct numerical simulation demonstrate the accuracy of the approach for acoustic frequencies up to 400 Hz and for single as well as multiple ISW wave packets.

  18. Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex

    PubMed Central

    Micheyl, Christophe; Steinschneider, Mitchell

    2016-01-01

    Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are composed of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate “auditory objects” with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which, in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas. PMID:27294198
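    The rate-place account above rests on each vowel's harmonics lying at integer multiples of its own F0. A minimal sketch (with illustrative F0 values, not the stimulus parameters used in the study) shows how the low harmonics of two concurrent vowels fall at mostly distinct frequencies:

```python
def harmonics(f0, fmax=4000.0):
    """Harmonic frequencies of a periodic vowel: integer multiples of F0."""
    return [f0 * k for k in range(1, int(fmax // f0) + 1)]

# Two concurrent vowels with different fundamentals (illustrative values)
va = harmonics(100.0)   # e.g. /a/ at F0 = 100 Hz
vi = harmonics(125.0)   # e.g. /i/ at F0 = 125 Hz

# Harmonics coincide only at common multiples of the two F0s (500 Hz,
# 1000 Hz, ...); everywhere else they occupy distinct frequencies, which
# is what lets a rate-place code resolve the two vowels separately.
shared = sorted(set(va) & set(vi))
```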

  19. Hemispheric balance in processing attended and non-attended vowels and complex tones.

    PubMed

    Vihla, Minna; Salmelin, Riitta

    2003-04-01

    We compared cortical processing of attended and non-attended vowels and complex tones, using a whole-head neuromagnetometer, to test for possible hemispheric differences. Stimuli included vowels [a] and [i], spoken by two female Finnish speakers, and two complex tones, each with two pure tone components corresponding to the first and second formant frequencies (F1-F2) of the vowels spoken by speaker 1. Sequences including both vowels and complex tones were presented to eight Finnish males during passive and active (phoneme/speaker/complex tone identification) listening. Sequences including only vowels were presented to five of the subjects during passive listening and during a phoneme identification task. The vowel [i] spoken by speaker 1 and the corresponding complex tone were frequent, non-target stimuli. Responses evoked by these frequent stimuli were analyzed. Cortical activation at approximately 100 ms was stronger for the complex tone than the vowel in the right hemisphere (RH). Responses were similar during active and passive listening. Hemispheric balance remained the same when the vowel was presented in sequences including only vowels. The reduction of RH activation for vowels as compared with complex tones indicates a relative increase of left hemisphere involvement, possibly reflecting a shift towards more language-specific processing.

  20. Acoustic Neuroma

    MedlinePlus

    Acoustic Neuroma Practice Guideline ... to microsurgery. One doctor's story of having an acoustic neuroma: In August 1991, Dr. Thomas F. Morgan ...

  1. Vowel production, speech-motor control, and phonological encoding in people who are lesbian, bisexual, or gay, and people who are not

    NASA Astrophysics Data System (ADS)

    Munson, Benjamin; Deboe, Nancy

    2003-10-01

    A recent study (Pierrehumbert, Bent, Munson, and Bailey, submitted) found differences in vowel production between people who are lesbian, bisexual, or gay (LBG) and people who are not. The specific differences (more fronted /u/ and /a/ in the non-LB women; an overall more-contracted vowel space in the non-gay men) were not amenable to an interpretation based on simple group differences in vocal-tract geometry. Rather, they suggested that differences were either due to group differences in some other skill, such as motor control or phonological encoding, or learned. This paper expands on this research by examining vowel production, speech-motor control (measured by diadochokinetic rates), and phonological encoding (measured by error rates in a tongue-twister task) in people who are LBG and people who are not. Analyses focus on whether the findings of Pierrehumbert et al. (submitted) are replicable, and whether group differences in vowel production are related to group differences in speech-motor control or phonological encoding. To date, 20 LB women, 20 non-LB women, 7 gay men, and 7 non-gay men have participated. Preliminary analyses suggest that there are no group differences in speech motor control or phonological encoding, suggesting that the earlier findings of Pierrehumbert et al. reflected learned behaviors.

  2. Sensitivity of envelope following responses to vowel polarity.

    PubMed

    Easwar, Vijayalakshmi; Beamish, Laura; Aiken, Steven; Choi, Jong Min; Scollie, Susan; Purcell, David

    2015-02-01

    Envelope following responses (EFRs) elicited by stimuli of opposite polarities are often averaged due to their insensitivity to polarity when elicited by amplitude modulated tones. A recent report illustrates that individuals exhibit varying degrees of polarity-sensitive differences in EFR amplitude when elicited by vowel stimuli (Aiken and Purcell, 2013). The aims of the current study were to evaluate the incidence and degree of polarity-sensitive differences in EFRs recorded in a large group of individuals, and to examine potential factors influencing the polarity-sensitive nature of EFRs. In Experiment I of the present study, we evaluated the incidence and degree of polarity-sensitive differences in EFR amplitude in a group of 39 participants. EFRs were elicited by opposite polarities of the vowel /ε/ in a natural /hVd/ context presented at 80 dB SPL. Nearly 30% of the participants with detectable responses (n = 24) showed a difference of greater than ∼39 nV in EFR response amplitude between the two polarities, that was unexplained by variations in noise estimates. In Experiment II, we evaluated the effect of vowel, frequency of harmonics and presence of the first harmonic (h1) on the polarity sensitivity of EFRs in 20 participants with normal hearing. For vowels /u/, /a/ and /i/, EFRs were elicited by two simultaneously presented carriers representing the first formant (resolved harmonics), and the second and higher formants (unresolved harmonics). Individual but simultaneous EFRs were elicited by the formant carriers by separating the fundamental frequency in the two carriers by 8 Hz. Vowels were presented as part of a naturally produced, but modified sequence /susaʃi/, at an overall level of 65 dB SPL. To evaluate the effect of h1 on polarity sensitivity of EFRs, EFRs were elicited by the same vowels without h1 in an identical sequence. A repeated measures analysis of variance indicated a significant effect of polarity on EFR amplitudes for the…

  3. Speaker compensation for local perturbation of fricative acoustic feedback.

    PubMed

    Casserly, Elizabeth D

    2011-04-01

    Feedback perturbation studies of speech acoustics have revealed a great deal about how speakers monitor and control their productions of segmental (e.g., formant frequencies) and non-segmental (e.g., pitch) linguistic elements. The majority of previous work, however, overlooks the role of acoustic feedback in consonant production and makes use of acoustic manipulations that affect either entire utterances or the entire acoustic signal, rather than more temporally and phonetically restricted alterations. This study, therefore, seeks to expand the feedback perturbation literature by examining perturbation of consonant acoustics that is applied in a time-restricted and phonetically specific manner. The spectral center of the alveopalatal fricative [ʃ] produced in vowel-fricative-vowel nonwords was incrementally raised until it reached the potential for [s]-like frequencies, but the characteristics of high-frequency energy outside the target fricative remained unaltered. An "offline," more widely accessible signal processing method was developed to perform this manipulation. The local feedback perturbation resulted in changes to speakers' fricative production that were more variable, idiosyncratic, and restricted than the compensation seen in more global acoustic manipulations reported in the literature. Implications and interpretations of the results, as well as future directions for research based on the findings, are discussed.

  4. Persistent responsiveness of long-latency auditory cortical activities in response to repeated stimuli of musical timbre and vowel sounds.

    PubMed

    Kuriki, Shinya; Ohta, Keisuke; Koyama, Sachiko

    2007-11-01

    Long-latency auditory-evoked magnetic field and potential show strong attenuation of N1m/N1 responses when an identical stimulus is presented repeatedly due to adaptation of auditory cortical neurons. This adaptation is weak in subsequently occurring P2m/P2 responses, being weaker for piano chords than single piano notes. The adaptation of P2m is more suppressed in musicians having long-term musical training than in nonmusicians, whereas the amplitude of P2 is enhanced preferentially in musicians as the spectral complexity of musical tones increases. To address the key issues of whether such high responsiveness of P2m/P2 responses to complex sounds is intrinsic and common to nonmusical sounds, we conducted a magnetoencephalographic study on participants who had no experience of musical training, using consecutive trains of piano and vowel sounds. The dipole moment of the P2m sources located in the auditory cortex indicated significantly suppressed adaptation in the right hemisphere both to piano and vowel sounds. Thus, the persistent responsiveness of the P2m activity may be inherent, not induced by intensive training, and common to spectrally complex sounds. The right hemisphere dominance of the responsiveness to musical and speech sounds suggests analysis of acoustic features of object sounds to be a significant function of P2m activity.

  5. Evaluating acoustic speaker normalization algorithms: evidence from longitudinal child data.

    PubMed

    Kohn, Mary Elizabeth; Farrington, Charlie

    2012-03-01

    Speaker vowel formant normalization, a technique that controls for variation introduced by physical differences between speakers, is necessary in variationist studies to compare speakers of different ages, genders, and physiological makeup in order to understand non-physiological variation patterns within populations. Many algorithms have been established to reduce variation introduced into vocalic data from physiological sources. The lack of real-time studies tracking the effectiveness of these normalization algorithms from childhood through adolescence inhibits exploration of child participation in vowel shifts. This analysis compares normalization techniques applied to data collected from ten African American children across five time points. Linear regressions compare the reduction in variation attributable to age and gender for each speaker for the vowels BEET, BAT, BOT, BUT, and BOAR. A normalization technique is successful if it maintains variation attributable to a reference sociolinguistic variable, while reducing variation attributable to age. Results indicate that normalization techniques which rely on both a measure of central tendency and range of the vowel space perform best at reducing variation attributable to age, although some variation attributable to age persists after normalization for some sections of the vowel space.
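    A normalization technique of the kind described, relying on a speaker's own central tendency and dispersion, can be sketched as Lobanov-style z-scoring. The formant values below are hypothetical, not drawn from the study's data:

```python
import numpy as np

def lobanov_normalize(formants):
    """Z-score (Lobanov) normalization: center each formant at the
    speaker's mean and scale by the speaker's standard deviation,
    reducing variation from vocal-tract size while preserving the
    relative positions of vowels in the space."""
    f = np.asarray(formants, dtype=float)   # shape (n_tokens, n_formants)
    return (f - f.mean(axis=0)) / f.std(axis=0)

# Hypothetical F1/F2 values (Hz) for one speaker's corner vowels
tokens = [[300, 2800], [320, 900], [850, 1800], [700, 1100]]
normalized = lobanov_normalize(tokens)
```

After normalization each formant dimension has zero mean and unit variance for that speaker, so speakers of different ages and vocal-tract sizes can be compared on a common scale.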

  6. From Reduction to Apocope: Final Poststressed Vowel Devoicing in Brazilian Portuguese.

    PubMed

    Meneses, Francisco; Albano, Eleonora

    2015-01-01

    This is a study of final poststressed vowel devoicing following /s/ in Brazilian Portuguese. We contradict the literature describing it as deletion by arguing, first, that the vowel is not deleted, but overlapped and devoiced by the /s/, and, second, that gradient reduction with devoicing may lead to apocope diachronically. The following results support our view: (1) partially devoiced vowels are centralized; (2) centralization is inversely proportional to duration; (3) total devoicing is accompanied by lowering of the /s/ centroid; (4) the /s/ noise seems to be lengthened when the vowel is totally devoiced; (5) aerodynamic tests reveal that lengthened /s/ has a final vowel-like portion, too short to be voiced; (6) lengthened /s/ favors vowel recovery in perceptual tests. This seems to be a likely path from reduction to devoicing to listener-based apocope.

  7. Automated Classification of Vowel Category and Speaker Type in the High-Frequency Spectrum.

    PubMed

    Donai, Jeremy J; Motiian, Saeid; Doretto, Gianfranco

    2016-04-20

    The high-frequency region of vowel signals (above the third formant or F3) has received little research attention. Recent evidence, however, has documented the perceptual utility of high-frequency information in the speech signal above the traditional frequency bandwidth known to contain important cues for speech and speaker recognition. The purpose of this study was to determine if high-pass filtered vowels could be separated by vowel category and speaker type in a supervised learning framework. Mel frequency cepstral coefficients (MFCCs) were extracted from productions of six vowel categories produced by two male, two female, and two child speakers. Results revealed that the filtered vowels were well separated by vowel category and speaker type using MFCCs from the high-frequency spectrum. This demonstrates the presence of useful information for automated classification from the high-frequency region and is the first study to report findings of this nature in a supervised learning framework.
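    The high-pass filtering step can be sketched as below; the cutoff, the synthetic signal, and the FFT-based filter are illustrative assumptions (the study's actual stimuli and feature extraction would differ, and MFCCs would typically come from a library such as librosa):

```python
import numpy as np

def highpass_fft(signal, fs, cutoff=3500.0):
    """Crude FFT-based high-pass filter: zero all spectral energy below
    the cutoff (roughly the region above F3), keeping only the
    high-frequency band examined in the study."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    spec[freqs < cutoff] = 0.0
    return np.fft.irfft(spec, n=len(signal))

# Synthetic stand-in for a vowel: harmonics of a 120 Hz fundamental
fs = 16000
t = np.arange(0, 0.5, 1.0 / fs)
vowel = sum(np.sin(2 * np.pi * 120 * k * t) / k for k in range(1, 60))

filtered = highpass_fft(vowel, fs)   # only energy above ~3.5 kHz remains
```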

  8. Automated Classification of Vowel Category and Speaker Type in the High-Frequency Spectrum

    PubMed Central

    Donai, Jeremy J.; Motiian, Saeid; Doretto, Gianfranco

    2016-01-01

    The high-frequency region of vowel signals (above the third formant or F3) has received little research attention. Recent evidence, however, has documented the perceptual utility of high-frequency information in the speech signal above the traditional frequency bandwidth known to contain important cues for speech and speaker recognition. The purpose of this study was to determine if high-pass filtered vowels could be separated by vowel category and speaker type in a supervised learning framework. Mel frequency cepstral coefficients (MFCCs) were extracted from productions of six vowel categories produced by two male, two female, and two child speakers. Results revealed that the filtered vowels were well separated by vowel category and speaker type using MFCCs from the high-frequency spectrum. This demonstrates the presence of useful information for automated classification from the high-frequency region and is the first study to report findings of this nature in a supervised learning framework. PMID:27588160

  9. Advanced Distributed Measurements and Data Processing at the Vibro-Acoustic Test Facility, GRC Space Power Facility, Sandusky, Ohio - an Architecture and an Example

    NASA Technical Reports Server (NTRS)

    Hill, Gerald M.; Evans, Richard K.

    2009-01-01

    A large-scale, distributed, high-speed data acquisition system (HSDAS) is currently being installed at the Space Power Facility (SPF) at NASA Glenn Research Center's Plum Brook Station in Sandusky, OH. This installation is being done as part of a facility construction project to add Vibro-acoustic Test Capabilities (VTC) to the current thermal-vacuum testing capability of SPF in support of the Orion Project's requirement for Space Environments Testing (SET). The HSDAS architecture is a modular design that utilizes fully remotely managed components, enabling the system to support multiple test locations with a wide range of measurement types and a very large system channel count. The architecture of the system is presented along with details on system scalability and measurement verification. In addition, the ability of the system to automate many of its processes, such as measurement verification and measurement system analysis, is also discussed.

  10. Acoustic iridescence.

    PubMed

    Cox, Trevor J

    2011-03-01

    An investigation has been undertaken into acoustic iridescence, exploring how a device can be constructed which alters sound waves in a way similar to structures in nature that act on light to produce optical iridescence. The main construction had many thin perforated sheets spaced half a wavelength apart for a specified design frequency. The sheets create the necessary impedance discontinuities to create backscattered waves, which then interfere to create strongly reflected sound at certain frequencies. Predictions and measurements show a set of harmonics, evenly spaced in frequency, for which sound is reflected strongly, and the frequency of these harmonics increases as the angle of observation gets larger, mimicking the iridescence seen in natural optical systems. Similar to optical systems, the reflections become weaker for oblique angles of reflection. A second construction was briefly examined which exploited a metamaterial made from elements and inclusions which were much smaller than the wavelength. Boundary element method predictions confirmed the potential for creating acoustic iridescence from layers of such a material.
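    The evenly spaced reflection harmonics follow from constructive interference between the backscattered waves: strong reflection occurs when the round-trip path between sheets, 2·d·cos(θ), equals an integer number of wavelengths. A small idealized sketch (frequencies only; it ignores the weakening of oblique reflections described in the abstract):

```python
import math

def reflection_peaks(d, angle_deg, n_max=5, c=343.0):
    """Frequencies (Hz) of constructive interference for sheets spaced
    d metres apart: 2*d*cos(theta) = n*lambda, so
    f_n = n * c / (2 * d * cos(theta))."""
    cos_t = math.cos(math.radians(angle_deg))
    return [n * c / (2 * d * cos_t) for n in range(1, n_max + 1)]

# Sheets half a wavelength apart for a 1 kHz design frequency: d = c / (2 * f)
d = 343.0 / (2 * 1000.0)
normal = reflection_peaks(d, 0.0)    # evenly spaced peaks at 1, 2, 3 ... kHz
oblique = reflection_peaks(d, 30.0)  # the same peaks shifted up in frequency
```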

  11. Scale Model Thruster Acoustic Measurement Results

    NASA Technical Reports Server (NTRS)

    Kenny, R. Jeremy; Vargas, Magda B.

    2013-01-01

    Subscale rocket acoustic data is used to predict acoustic environments for full-scale rockets. Over the last several years acoustic data has been collected during horizontal tests of solid rocket motors. The Space Launch System (SLS) Scale Model Acoustic Test (SMAT) was designed to evaluate the acoustics of the SLS vehicle, including the liquid engines and solid rocket boosters. SMAT comprises liquid thrusters scalable to the Space Shuttle Main Engines (SSME) and Rocket Assisted Take Off (RATO) motors scalable to the 5-segment Reusable Solid Rocket Motor (RSTMV). Horizontal testing of the liquid thrusters provided an opportunity to collect acoustic data from liquid thrusters to characterize the acoustic environments. Acoustic data was collected during the horizontal firings of a single thruster and a 4-thruster (Quad) configuration. The presentation discusses the results of the single- and 4-thruster acoustic measurements and compares the measured acoustic levels of the liquid thrusters to the Solid Rocket Test Motor V - Nozzle 2 (SRTMV-N2).

  12. On the resolution of phonological constraints in spoken production: Acoustic and response time evidence.

    PubMed

    Bürki, Audrey; Frauenfelder, Ulrich H; Alario, F-Xavier

    2015-10-01

    This study examines the production of words the pronunciation of which depends on the phonological context. Participants produced adjective-noun phrases starting with the French determiner un. The pronunciation of this determiner requires a liaison consonant before vowels. Naming latencies and determiner acoustic durations were shorter when the adjective and the noun both started with vowels or both with consonants, than when they had different onsets. These results suggest that the liaison process is not governed by the application of a local contextual phonological rule; they rather favor the hypothesis that pronunciation variants with and without the liaison consonant are stored in memory.

  13. Acoustic and auditory phonetics: the adaptive design of speech sound systems.

    PubMed

    Diehl, Randy L

    2008-03-12

    Speech perception is remarkably robust. This paper examines how acoustic and auditory properties of vowels and consonants help to ensure intelligibility. First, the source-filter theory of speech production is briefly described, and the relationship between vocal-tract properties and formant patterns is demonstrated for some commonly occurring vowels. Next, two accounts of the structure of preferred sound inventories, quantal theory and dispersion theory, are described and some of their limitations are noted. Finally, it is suggested that certain aspects of quantal and dispersion theories can be unified in a principled way so as to achieve reasonable predictive accuracy.
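    The relationship between vocal-tract properties and formant patterns mentioned above is classically illustrated with a uniform tube closed at the glottis and open at the lips, whose resonances fall at odd multiples of c/(4L). This is a textbook idealization, not the paper's own computation:

```python
def tube_formants(length_m, n_formants=3, c=343.0):
    """Resonances of a uniform tube closed at one end (glottis) and open
    at the other (lips), a rough model of the neutral vowel:
    F_n = (2n - 1) * c / (4 * L)."""
    return [(2 * n - 1) * c / (4 * length_m) for n in range(1, n_formants + 1)]

# A ~17.5 cm adult vocal tract gives formants near 500, 1500, 2500 Hz
formants = tube_formants(0.175)
```

Shortening the tube, as in a child's vocal tract, scales all three resonances upward by the same factor, which is one reason formant normalization is needed when comparing speakers.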

  14. Adaptive Multi-Rate Compression Effects on Vowel Analysis

    PubMed Central

    Ireland, David; Knuepffer, Christina; McBride, Simon J.

    2015-01-01

    Signal processing on digitally sampled vowel sounds for the detection of pathological voices has been firmly established. This work examines compression artifacts on vowel speech samples that have been compressed using the adaptive multi-rate codec at various bit-rates. Whereas previous work has used the sensitivity of machine learning algorithms to test for accuracy, this work examines the changes in the extracted speech features themselves and thus reports new findings on the usefulness of a particular feature. We believe this work will have potential impact for future research on remote monitoring, as the identification and exclusion of an ill-defined speech feature that has hitherto been used will ultimately increase the robustness of the system. PMID:26347863

  15. Constraints of Tones, Vowels and Consonants on Lexical Selection in Mandarin Chinese.

    PubMed

    Wiener, Seth; Turnbull, Rory

    2016-03-01

    Previous studies have shown that when speakers of European languages are asked to turn nonwords into words by altering either a vowel or consonant, they tend to treat vowels as more mutable than consonants. These results inspired the universal vowel mutability hypothesis: listeners learn to cope with vowel variability because vowel information constrains lexical selection less tightly and allows for more potential candidates than does consonant information. The present study extends the word reconstruction paradigm to Mandarin Chinese--a Sino-Tibetan language, which makes use of lexically contrastive tone. Native speakers listened to word-like nonwords (e.g., su3) and were asked to change them into words by manipulating a single consonant (e.g., tu3), vowel (e.g., si3), or tone (e.g., su4). Additionally, items were presented in a fourth condition in which participants could change any part. The participants' reaction times and responses were recorded. Results revealed that participants responded faster and more accurately in both the free response and the tonal change conditions. Unlike previous reconstruction studies on European languages, where vowels were changed faster and more often than consonants, these results demonstrate that, in Mandarin, changes to vowels and consonants were both overshadowed by changes to tone, which was the preferred modification to the stimulus nonwords, while changes to vowels were the slowest and least accurate. Our findings show that the universal vowel mutability hypothesis is not consistent with a tonal language, that Mandarin tonal information is lower-priority than consonants and vowels, and that vowel information most tightly constrains Mandarin lexical access.

  16. Determination of velum opening for French nasal vowels by magnetic resonance imaging.

    PubMed

    Demolin, Didier; Delvaux, Véronique; Metens, Thierry; Soquet, Alain

    2003-12-01

    MRI techniques have been used to describe velum opening of French vowels. Data based on 18 joined axial slices of 4 mm thickness were recorded with four subjects. Differences in velum opening are calculated from areas measured in the tract between the lowered velum and the back pharynx wall. Results show that for all subjects, the back vowel [symbol: see text] has the smallest opening, while some variations are observed for the other vowels.

  17. Quantitative enhancement of fatigue crack monitoring by imaging surface acoustic wave reflection in a space-cycle-load domain

    SciTech Connect

    Connolly, G. D.; Rokhlin, S. I.

    2011-06-23

    The surface wave acoustic method is applied to the in-situ monitoring of fatigue crack initiation and evolution on tension specimens. A small low-frequency periodic loading is also applied, resulting in a nonlinear modulation of reflected pulses. The acoustic wave reflections are collected for: each experimental cycle; a range of applied tension and modulation load levels; and a range of spatial propagation positions, and are presented in image form to aid pattern identification. Salient features of the image are then extracted and processed to evaluate the initiation time of the crack and its subsequent size evolution until sample failure. Additionally, a method for enhancing the signal-to-noise ratio in Ti-6242 alloy samples is demonstrated.

  18. Nonlinear evolution of ion acoustic solitary waves in space plasmas: Fluid and particle-in-cell simulations

    NASA Astrophysics Data System (ADS)

    Kakad, Bharati; Kakad, Amar; Omura, Yoshiharu

    2014-07-01

    Spacecraft observations revealed the presence of electrostatic solitary waves (ESWs) in various regions of the Earth's magnetosphere. Over the years, many researchers have attempted to model these observations in terms of electron/ion acoustic solitary waves by using nonlinear fluid theory/simulations. The ESW structures predicted by fluid models can be inadequate due to their inability to handle kinetic effects. To provide a clear view on the application of the fluid and kinetic treatments in modeling the ESWs, we perform both fluid and particle-in-cell (PIC) simulations of ion acoustic solitary waves (IASWs) and estimate the quantitative differences in their characteristics like speed, amplitude, and width. We find that the number of trapped electrons in the wave potential is higher for the IASWs, which are generated by large-amplitude initial density perturbation (IDP). The present fluid and PIC simulation results are in close agreement for small amplitude IDPs, whereas for large IDPs they show discrepancy in the amplitude, width, and speed of the IASW, which is attributed to the neglect of kinetic effects in the former approach. The speed of IASW in the fluid simulations increases with the increase of IASW amplitude, while the reverse tendency is seen in the PIC simulation. The present study suggests that the fluid treatment is appropriate when the magnitude of phase velocity of the IASW is less than the ion acoustic (IA) speed obtained from their linear dispersion relation, whereas when it exceeds the IA speed, it is necessary to include the kinetic effects in the model.
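    The ion acoustic (IA) speed from the linear dispersion relation, which the abstract uses to separate the fluid-adequate and kinetic regimes, is c_s = sqrt(k_B*T_e/m_i) for cold ions. A sketch with illustrative plasma parameters, not those used in the study:

```python
import math

def ion_acoustic_speed(te_ev, mi_amu=1.0):
    """Linear ion acoustic speed c_s = sqrt(k_B * T_e / m_i) for cold
    ions, with the electron temperature given in eV."""
    ev = 1.602176634e-19      # J per eV
    amu = 1.66053906660e-27   # kg per atomic mass unit
    return math.sqrt(te_ev * ev / (mi_amu * amu))

# Hydrogen plasma with 10 eV electrons (illustrative values)
cs = ion_acoustic_speed(10.0)   # roughly 3.1e4 m/s
```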

  19. Perception of speaker size and sex of vowel sounds

    NASA Astrophysics Data System (ADS)

    Smith, David R. R.; Patterson, Roy D.

    2005-04-01

    Glottal-pulse rate (GPR) and vocal-tract length (VTL) are both related to speaker size and sex-however, it is unclear how they interact to determine our perception of speaker size and sex. Experiments were designed to measure the relative contribution of GPR and VTL to judgements of speaker size and sex. Vowels were scaled to represent people with different GPRs and VTLs, including many well beyond the normal population values. In a single-interval, two-response rating paradigm, listeners judged the size (using a 7-point scale) and sex/age of the speaker (man, woman, boy, or girl) of these scaled vowels. Results from the size-rating experiments show that VTL has a much greater influence upon judgements of speaker size than GPR. Results from the sex-categorization experiments show that judgements of speaker sex are influenced about equally by GPR and VTL for vowels with normal GPR and VTL values. For abnormal combinations of GPR and VTL, where low GPRs are combined with short VTLs, VTL has more influence than GPR in sex judgements. [Work supported by the UK MRC (G9901257) and the German Volkswagen Foundation (VWF 1/79 783).]

  20. Perceiving unstressed vowels in foreign-accented English.

    PubMed

    Braun, Bettina; Lemhöfer, Kristin; Mani, Nivedita

    2011-01-01

    This paper investigated how foreign-accented stress cues affect on-line speech comprehension in British speakers of English. While unstressed English vowels are usually reduced to /ə/, Dutch speakers of English only slightly centralize them. Speakers of both languages differentiate stress by suprasegmentals (duration and intensity). In a cross-modal priming experiment, English listeners heard sentences ending in monosyllabic prime fragments--produced by either an English or a Dutch speaker of English--and performed lexical decisions on visual targets. Primes were either stress-matching ("ab" excised from absurd), stress-mismatching ("ab" from absence), or unrelated ("pro" from profound) with respect to the target (e.g., ABSURD). Results showed a priming effect for stress-matching primes only when produced by the English speaker, suggesting that vowel quality is a more important cue to word stress than suprasegmental information. Furthermore, for visual targets with word-initial secondary stress that do not require vowel reduction (e.g., CAMPAIGN), resembling the Dutch way of realizing stress, there was a priming effect for both speakers. Hence, our data suggest that Dutch-accented English is not harder to understand in general, but it is in instances where the language-specific implementation of lexical stress differs across languages.

  1. Vowel Perception in Listeners With Normal Hearing and in Listeners With Hearing Loss: A Preliminary Study

    PubMed Central

    Charles, Lauren; Street, Nicole Drakopoulos

    2015-01-01

    Objectives To determine the influence of hearing loss on perception of vowel slices. Methods Fourteen listeners aged 20-27 participated; ten (6 males) had hearing within normal limits and four (3 males) had moderate-severe sensorineural hearing loss (SNHL). Stimuli were six naturally-produced words consisting of the vowels /i a u æ ɛ ʌ/ in a /b V b/ context. Each word was presented as a whole and in eight slices: the initial transition, one half and one fourth of the initial transition, the full central vowel, one half of the central vowel, the ending transition, and one half and one fourth of the ending transition. Each of the 54 stimuli was presented 10 times at 70 dB SPL (sound pressure level); listeners were asked to identify the word. Stimuli were shaped using signal processing software for the listeners with SNHL to mimic the gain provided by an appropriately fitted hearing aid. Results Listeners with SNHL had a steeper rate of decreasing vowel identification with decreasing slice duration as compared to listeners with normal hearing, and the listeners with SNHL showed different patterns of vowel identification across vowels when compared to listeners with normal hearing. Conclusion Abnormal temporal integration is likely affecting vowel identification for listeners with SNHL, which in turn affects vowel internal representation at different levels of the auditory system. PMID:25729492

  2. Speaker-Sex Discrimination for Voiced and Whispered Vowels at Short Durations

    PubMed Central

    2016-01-01

    Whispered vowels, produced with no vocal fold vibration, lack the periodic temporal fine structure which in voiced vowels underlies the perceptual attribute of pitch (a salient auditory cue to speaker sex). Voiced vowels possess no temporal fine structure at very short durations (below two glottal cycles). The prediction was that speaker-sex discrimination performance for whispered and voiced vowels would be similar at very short durations but that, as stimulus duration increases, voiced-vowel performance would improve relative to whispered-vowel performance as pitch information becomes available. This pattern of results was shown for women's but not for men's voices. A whispered vowel needs to have a duration three times longer than a voiced vowel before listeners can reliably tell whether it is spoken by a man or woman (∼30 ms vs. ∼10 ms). Listeners were half as sensitive to information about speaker sex when it was carried by whispered compared with voiced vowels. PMID:27757218

  3. Acoustic borehole logging

    SciTech Connect

    Medlin, W.L.; Manzi, S.J.

    1990-10-09

    This patent describes an acoustic borehole logging method. It comprises traversing a borehole with a borehole logging tool containing a transmitter of acoustic energy, having a free-field frequency spectrum with at least one characteristic resonant frequency of vibration, and a spaced-apart receiver; repeatedly exciting the transmitter with a swept-frequency tone burst of a duration sufficiently greater than the travel time of acoustic energy between the transmitter and the receiver to allow borehole cavity resonances to be established within the borehole cavity formed between the borehole logging tool and the borehole wall; detecting acoustic energy amplitude modulated by the borehole cavity resonances with the spaced-apart receiver; and recording an amplitude versus frequency output of the receiver in correlation with depth as a log of the borehole frequency spectrum representative of the subsurface formation comprising the borehole wall.

  4. Intelligibility of normal speech I: Global and fine-grained acoustic-phonetic talker characteristics1,2

    PubMed Central

    Bradlow, Ann R.; Torretta, Gina M.; Pisoni, David B.

    2011-01-01

    This study used a multi-talker database containing intelligibility scores for 2000 sentences (20 talkers, 100 sentences), to identify talker-related correlates of speech intelligibility. We first investigated “global” talker characteristics (e.g., gender, F0 and speaking rate). Findings showed female talkers to be more intelligible as a group than male talkers. Additionally, we found a tendency for F0 range to correlate positively with higher speech intelligibility scores. However, F0 mean and speaking rate did not correlate with intelligibility. We then examined several fine-grained acoustic-phonetic talker characteristics as correlates of overall intelligibility. We found that talkers with larger vowel spaces were generally more intelligible than talkers with reduced spaces. In investigating two cases of consistent listener errors (segment deletion and syllable affiliation), we found that these perceptual errors could be traced directly to detailed timing characteristics in the speech signal. Results suggest that a substantial portion of variability in normal speech intelligibility is traceable to specific acoustic-phonetic characteristics of the talker. Knowledge about these factors may be valuable for improving speech synthesis and recognition strategies, and for special populations (e.g., the hearing-impaired and second-language learners) who are particularly sensitive to intelligibility differences among talkers. PMID:21461127

  5. Acoustic-Phonetic Differences between Infant- and Adult-Directed Speech: The Role of Stress and Utterance Position

    ERIC Educational Resources Information Center

    Wang, Yuanyuan; Seidl, Amanda; Cristia, Alejandrina

    2015-01-01

    Previous studies have shown that infant-directed speech (IDS) differs from adult-directed speech (ADS) on a variety of dimensions. The aim of the current study was to investigate whether acoustic differences between IDS and ADS in English are modulated by prosodic structure. We compared vowels across the two registers (IDS, ADS) in both stressed…

  6. Acoustical heat pumping engine

    DOEpatents

    Wheatley, John C.; Swift, Gregory W.; Migliori, Albert

    1983-08-16

    The disclosure is directed to an acoustical heat pumping engine without moving seals. A tubular housing holds a compressible fluid capable of supporting an acoustical standing wave. An acoustical driver is disposed at one end of the housing and the other end is capped. A second thermodynamic medium is disposed in the housing near to but spaced from the capped end. Heat is pumped along the second thermodynamic medium toward the capped end as a consequence both of the pressure oscillation due to the driver and imperfect thermal contact between the fluid and the second thermodynamic medium.

  7. Acoustical heat pumping engine

    DOEpatents

    Wheatley, J.C.; Swift, G.W.; Migliori, A.

    1983-08-16

    The disclosure is directed to an acoustical heat pumping engine without moving seals. A tubular housing holds a compressible fluid capable of supporting an acoustical standing wave. An acoustical driver is disposed at one end of the housing and the other end is capped. A second thermodynamic medium is disposed in the housing near to but spaced from the capped end. Heat is pumped along the second thermodynamic medium toward the capped end as a consequence both of the pressure oscillation due to the driver and imperfect thermal contact between the fluid and the second thermodynamic medium. 2 figs.

  8. Study of acoustic correlates associate with emotional speech

    NASA Astrophysics Data System (ADS)

    Yildirim, Serdar; Lee, Sungbok; Lee, Chul Min; Bulut, Murtaza; Busso, Carlos; Kazemzadeh, Ebrahim; Narayanan, Shrikanth

    2004-10-01

    This study investigates the acoustic characteristics of four different emotions expressed in speech. The aim is to obtain detailed acoustic knowledge of how a speech signal is modulated by changes from a neutral to a particular emotional state. Such knowledge is necessary for automatic emotion recognition and classification and for emotional speech synthesis. Speech data obtained from two semi-professional actresses are analyzed and compared. Each subject produced 211 sentences with four different emotions: neutral, sad, angry, and happy. We analyze changes in temporal and acoustic parameters, such as the magnitude and variability of segmental duration, fundamental frequency, and the first three formant frequencies, as a function of emotion. Acoustic differences among the emotions are also explored with mutual information computation, multidimensional scaling, and acoustic likelihood comparison with normal speech. Results indicate that speech associated with anger and happiness is characterized by longer duration, shorter interword silence, and higher pitch and rms energy with wider ranges. Sadness is distinguished from the other emotions by lower rms energy and longer interword silence. Interestingly, the differences in formant pattern between [happiness/anger] and [neutral/sadness] are better reflected in back vowels such as /a/ (as in "father") than in front vowels. Detailed results on intra- and interspeaker variability will be reported.
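    The simplest of the parameters listed above, short-time rms energy, can be sketched in a few lines of plain Python. The frame length and hop are arbitrary illustrative choices, not the study's analysis settings.

    ```python
    import math

    def frame_signal(samples, frame_len, hop):
        """Split a sample sequence into (possibly overlapping) analysis frames."""
        return [samples[i:i + frame_len]
                for i in range(0, len(samples) - frame_len + 1, hop)]

    def rms_energy(frames):
        """Root-mean-square energy of each frame."""
        return [math.sqrt(sum(s * s for s in f) / len(f)) for f in frames]

    # A 100 Hz tone "sampled" at 8 kHz; the rms of a full-cycle frame
    # of a sinusoid with amplitude A is A / sqrt(2).
    tone = [0.5 * math.sin(2 * math.pi * 100 * n / 8000) for n in range(800)]
    energies = rms_energy(frame_signal(tone, 80, 80))
    ```

    Tracking such per-frame energies (and their range) over an utterance is one way the kind of emotion-dependent rms differences reported above can be measured.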

  9. Bayesian State-Space Modelling of Conventional Acoustic Tracking Provides Accurate Descriptors of Home Range Behavior in a Small-Bodied Coastal Fish Species

    PubMed Central

    Alós, Josep; Palmer, Miquel; Balle, Salvador; Arlinghaus, Robert

    2016-01-01

    State-space models (SSM) are increasingly applied in studies involving biotelemetry-generated positional data because they are able to estimate movement parameters from positions that are unobserved or have been observed with non-negligible observational error. Popular telemetry systems in marine coastal fish consist of arrays of omnidirectional acoustic receivers, which generate a multivariate time-series of detection events across the tracking period. Here we report a novel Bayesian fitting of an SSM application that couples mechanistic movement properties within a home range (a specific case of random walk weighted by an Ornstein-Uhlenbeck process) with a model of the observational error typical of data obtained from acoustic receiver arrays. We explored the performance and accuracy of the approach through simulation modelling and extensive sensitivity analyses of the effects of various configurations of movement properties and time-steps among positions. Model results show an accurate and unbiased estimation of the movement parameters, and in most cases the simulated movement parameters were properly retrieved. Only in extreme situations (when fast swimming speeds are combined with pooling the number of detections over long time-steps) did the model produce some bias that needs to be accounted for in field applications. Our method was subsequently applied to real acoustic tracking data collected from a small marine coastal fish species, the pearly razorfish, Xyrichtys novacula. The Bayesian SSM we present here constitutes an alternative for those accustomed to the Bayesian way of reasoning. It can be easily adapted and generalized to any species, thereby allowing studies in freely roaming animals on the ecological and evolutionary consequences of home ranges and territory establishment, both in fishes and in other taxa. PMID:27119718
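    The movement process described above, a random walk pulled toward a home-range centre by an Ornstein-Uhlenbeck attraction, can be sketched with an exact discrete-time update. This is a generic forward simulation under assumed parameters, not the authors' fitted Bayesian model or its observation component.

    ```python
    import math
    import random

    def ou_track(n_steps, dt, tau, sigma, centre=(0.0, 0.0), seed=42):
        """Simulate a 2-D Ornstein-Uhlenbeck track: each coordinate decays
        toward the home-range centre with time constant tau, with
        stationary standard deviation sigma (exact discrete-time update)."""
        rng = random.Random(seed)
        k = math.exp(-dt / tau)              # per-step autocorrelation
        sd = sigma * math.sqrt(1.0 - k * k)  # innovation s.d. keeping var = sigma^2
        x, y = centre
        track = [(x, y)]
        for _ in range(n_steps):
            x = centre[0] + k * (x - centre[0]) + rng.gauss(0.0, sd)
            y = centre[1] + k * (y - centre[1]) + rng.gauss(0.0, sd)
            track.append((x, y))
        return track

    # 1000 positions at 60 s intervals, 10-minute time constant, 25 m spread
    track = ou_track(1000, dt=60.0, tau=600.0, sigma=25.0)
    ```

    In an SSM these simulated positions would be latent states; the receiver-array detections would be modelled on top of them with an observation-error distribution, which is the coupling the abstract describes.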

  10. Directional Acoustic Density Sensor

    DTIC Science & Technology

    2010-09-13

    fluctuations of fluid density at a point. DESCRIPTION OF THE PRIOR ART: Conventional vector sensors measure particle velocity, v = (vx, vy, vz), with a dipole-type or first-order sensor that is realized either by measuring particle velocity at a point (which is the vector-sensor sensing approach for underwater sensors) or by measuring the gradient of the acoustic pressure at two closely spaced points (separated by less than the wavelength of an acoustic wave) as it…

  11. Pre-attentive sensitivity to vowel duration reveals native phonology and predicts learning of second-language sounds.

    PubMed

    Chládková, Kateřina; Escudero, Paola; Lipski, Silvia C

    2013-09-01

    In some languages (e.g. Czech), changes in vowel duration affect word meaning, while in others (e.g. Spanish) they do not. Yet for other languages (e.g. Dutch), the linguistic role of vowel duration remains unclear. To reveal whether Dutch represents vowel length in its phonology, we compared auditory pre-attentive duration processing in native and non-native vowels across Dutch, Czech, and Spanish. Dutch duration sensitivity patterned with Czech but was larger than Spanish in the native vowel, while it was smaller than Czech and Spanish in the non-native vowel. An interpretation of these findings suggests that in Dutch, duration is used phonemically but it might be relevant for the identity of certain native vowels only. Furthermore, the finding that Spanish listeners are more sensitive to duration in non-native than in native vowels indicates that a lack of duration differences in one's native language could be beneficial for second-language learning.

  12. Speech acoustics: How much science?

    PubMed

    Tiwari, Manjul

    2012-01-01

    Human vocalizations are sounds made exclusively by a human vocal tract. Among other vocalizations, for example, laughs or screams, speech is the most important. Speech is the primary medium of that supremely human symbolic communication system called language. One of the functions of a voice, perhaps the main one, is to realize language, by conveying some of the speaker's thoughts in linguistic form. Speech is language made audible. Moreover, when phoneticians compare and describe voices, they usually do so with respect to linguistic units, especially speech sounds, like vowels or consonants. It is therefore necessary to understand the structure as well as the nature of speech sounds and how they are described. In order to understand and evaluate speech, it is important to have at least a basic understanding of the science of speech acoustics: how the acoustics of speech are produced, how they are described, and how differences, both between speakers and within speakers, arise in the acoustic output. One of the aims of this article is to facilitate this understanding.

  13. Acoustic markers of syllabic stress in Spanish excellent oesophageal speakers.

    PubMed

    Cuenca, María Heliodora; Barrio, Marina M; Anaya, Pablo; Establier, Carmelo

    2012-01-01

    The purpose of this investigation is to explore the use by Spanish excellent oesophageal speakers of acoustic cues to mark syllabic stress. The speech material consisted of five pairs of disyllabic words which differed only in stress position. A total of 44 oesophageal and 9 laryngeal speakers were recorded, and a computerised perceptual test designed ad hoc was run in order to assess the accurate realisation of stress. The items produced by the eight excellent oesophageal speakers with the highest accuracy levels in the perception experiment were analysed acoustically with Praat, to be compared with the laryngeal control group. Measures of duration, fundamental frequency, spectral balance and overall intensity were taken for each target vowel and syllable. Results revealed that Spanish excellent oesophageal speakers were able to retain appropriate acoustic relations between stressed and unstressed syllables. Although spectral balance emerged as a strong cue for syllabic stress in both voicing modes, a different hierarchy of acoustic cues was found in each voicing mode.

  14. Acoustic cryocooler

    DOEpatents

    Swift, Gregory W.; Martin, Richard A.; Radenbaugh, Ray

    1990-01-01

    An acoustic cryocooler with no moving parts is formed from a thermoacoustic driver (TAD) driving a pulse tube refrigerator (PTR) through a standing wave tube. Thermoacoustic elements in the TAD are spaced apart a distance effective to accommodate the increased thermal penetration length arising from the relatively low TAD operating frequency in the range of 15-60 Hz. At these low operating frequencies, a long tube is required to support the standing wave. The tube may be coiled to reduce the overall length of the cryocooler. One or two PTRs are located on the standing wave tube adjacent to antinodes in the standing wave, to be driven by the standing wave pressure oscillations. It is predicted that a heat input of 1000 W at 1000 K will maintain a cooling load of 5 W at 80 K.

  15. Acoustic evaluation of short-term effects of repetitive transcranial magnetic stimulation on motor aspects of speech in Parkinson's disease.

    PubMed

    Eliasova, I; Mekyska, J; Kostalova, M; Marecek, R; Smekal, Z; Rektorova, I

    2013-04-01

    Hypokinetic dysarthria in Parkinson's disease (PD) can be characterized by monotony of pitch and loudness, reduced stress, variable rate, imprecise consonants, and a breathy and harsh voice. Using acoustic analysis, we studied the effects of high-frequency repetitive transcranial magnetic stimulation (rTMS) applied over the primary orofacial sensorimotor area (SM1) and the left dorsolateral prefrontal cortex (DLPFC) on motor aspects of voiced speech in PD. Twelve non-depressed and non-demented men with PD (mean age 64.58 ± 8.04 years, mean PD duration 10.75 ± 7.48 years) and 21 healthy age-matched men (a control group, mean age 64 ± 8.55 years) participated in the speech study. The PD patients underwent two sessions of 10 Hz rTMS over the dominant hemisphere with 2,250 stimuli/day in a random order: (1) over the SM1; (2) over the left DLPFC in the "on" motor state. Speech examination comprised the perceptual rating of global speech performance and an acoustic analysis based upon a standardized speech task. The Mann-Whitney U test was used to compare acoustic speech variables between controls and PD patients. The Wilcoxon test was used to compare data prior to and after each stimulation in the PD group. rTMS applied over the left SM1 was associated with a significant increase in harmonic-to-noise ratio and net speech rate in the sentence tasks. With respect to the vowel task results, increased median values and range of Teager-Kaiser energy operator, increased vowel space area, and significant jitter decrease were observed after the left SM1 stimulation. rTMS over the left DLPFC did not induce any significant effects. The positive results of acoustic analysis were not reflected in a subjective rating of speech performance quality as assessed by a speech therapist. Our pilot results indicate that one session of rTMS applied over the SM1 may lead to measurable improvement in voice quality and intensity and an increase in speech rate and tongue movements

  16. Point Vowel Duration in Children with Hearing Aids and Cochlear Implants at 4 and 5 Years of Age

    ERIC Educational Resources Information Center

    Vandam, Mark; Ide-Helvie, Dana; Moeller, Mary Pat

    2011-01-01

    This work investigates the developmental aspects of the duration of point vowels in children with normal hearing compared with those with hearing aids and cochlear implants at 4 and 5 years of age. Younger children produced longer vowels than older children, and children with hearing loss (HL) produced longer and more variable vowels than their…

  17. Another Study of the Pronunciation of Words Ending in a Vowel-Consonant-Final E Pattern and Similar Patterns.

    ERIC Educational Resources Information Center

    Greif, Ivo P.

    In response to criticism of a previous study, this paper reports a revision of a proposed phonics rule "when there are two vowels, one of which is a final e, the first vowel is long and the final e is silent" (cradle), which is called the VCE (Vowel Consonant E) rule. Following an introductory section, the paper examines previous research, citing…

  18. Evaluating Computational Models in Cognitive Neuropsychology: The Case from the Consonant/Vowel Distinction

    ERIC Educational Resources Information Center

    Knobel, Mark; Caramazza, Alfonso

    2007-01-01

    Caramazza et al. [Caramazza, A., Chialant, D., Capasso, R., & Miceli, G. (2000). Separable processing of consonants and vowels. "Nature," 403(6768), 428-430.] report two patients who exhibit a double dissociation between consonants and vowels in speech production. The patterning of this double dissociation cannot be explained by appealing to…

  19. Children's Perception of Conversational and Clear American-English Vowels in Noise

    ERIC Educational Resources Information Center

    Leone, Dorothy; Levy, Erika S.

    2015-01-01

    Purpose: Much of a child's day is spent listening to speech in the presence of background noise. Although accurate vowel perception is important for listeners' accurate speech perception and comprehension, little is known about children's vowel perception in noise. "Clear speech" is a speech style frequently used by talkers in the…

  20. Shallow and deep orthographies in Hebrew: the role of vowelization in reading development for unvowelized scripts.

    PubMed

    Schiff, Rachel

    2012-12-01

    The present study explored the speed, accuracy, and reading comprehension of vowelized versus unvowelized scripts among 126 native Hebrew speaking children in second, fourth, and sixth grades. Findings indicated that second graders read and comprehended vowelized scripts significantly more accurately and more quickly than unvowelized scripts, whereas among fourth and sixth graders reading of unvowelized scripts developed to a greater degree than the reading of vowelized scripts. An analysis of the mediation effect for children's mastery of vowelized reading speed and accuracy on their mastery of unvowelized reading speed and comprehension revealed that in second grade, reading accuracy of vowelized words mediated the reading speed and comprehension of unvowelized scripts. In the fourth grade, accuracy in reading both vowelized and unvowelized words mediated the reading speed and comprehension of unvowelized scripts. By sixth grade, accuracy in reading vowelized words offered no mediating effect, either on reading speed or comprehension of unvowelized scripts. The current outcomes thus suggest that young Hebrew readers undergo a scaffolding process, where vowelization serves as the foundation for building initial reading abilities and is essential for successful and meaningful decoding of unvowelized scripts.