Science.gov

Sample records for acoustic vowel space

  1. Reference Data for the American English Acoustic Vowel Space

    ERIC Educational Resources Information Center

    Flipsen, Peter, Jr.; Lee, Sungbok

    2012-01-01

    Reference data for the acoustic vowel space area (VSA) in children and adolescents do not currently appear to be available in a form suitable for normative comparisons. In the current study, individual speaker formant data for the four corner vowels of American English (/i, u, æ, ɑ/) were used to compute individual speaker VSAs. The sample…
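
    The VSA described in this record is conventionally computed as the area of the quadrilateral spanned by the corner vowels in the F1-F2 plane. Below is a minimal illustrative sketch of that computation (not code from the study), using placeholder (F1, F2) values in Hz ordered around the quadrilateral's perimeter.

    ```python
    # Minimal sketch: quadrilateral vowel space area (VSA) from corner-vowel formants.
    # The (F1, F2) values are illustrative placeholders, not data from the study above.

    def polygon_area(points):
        """Shoelace formula for the area of a simple polygon given ordered vertices."""
        area = 0.0
        n = len(points)
        for i in range(n):
            x1, y1 = points[i]
            x2, y2 = points[(i + 1) % n]
            area += x1 * y2 - x2 * y1
        return abs(area) / 2.0

    # Hypothetical corner vowels, (F1, F2) in Hz, ordered /i/ -> /ae/ -> /ɑ/ -> /u/.
    corner_vowels = [(270, 2290), (660, 1720), (730, 1090), (300, 870)]
    print(f"VSA: {polygon_area(corner_vowels):.0f} Hz^2")
    ```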

  2. Uncovering the acoustic vowel space of a previously undescribed language: The vowels of Nambo.

    PubMed

    Kashima, Eri; Williams, Daniel; Mark Ellison, T; Schokkin, Dineke; Escudero, Paola

    2016-06-01

    This study presents the first acoustic description of the vowel space of a Papuan language, Nambo, spoken in southern Papua New Guinea, based on duration and first and second formant measurements from 19 adult male and female speakers across three age groups (young, middle-aged, senior). Phonemically, Nambo has six full vowels /i, e, æ, ɑ, o, u/ and a reduced vowel tentatively labeled /ə/. Unlike the full vowels, the quality of /ə/ showed great variation: seniors' and young females' realizations tended to be more open and retracted than those of young males, while middle-aged speakers' productions fell between these two variants. PMID:27369181

  3. Vowel Acoustic Space Development in Children: A Synthesis of Acoustic and Anatomic Data

    ERIC Educational Resources Information Center

    Vorperian, Houri K.; Kent, Ray D.

    2007-01-01

    Purpose: This article integrates published acoustic data on the development of vowel production. Age-specific data on formant frequencies are considered in the light of information on the development of the vocal tract (VT) to create an anatomic-acoustic description of the maturation of the vowel acoustic space for English. Method: Literature…

  4. Vowel Acoustic Space Development in Children: A Synthesis of Acoustic and Anatomic Data

    PubMed Central

    Vorperian, Houri K.; Kent, Ray D.

    2008-01-01

    Purpose This article integrates published acoustic data on the development of vowel production. Age-specific data on formant frequencies are considered in the light of information on the development of the vocal tract (VT) to create an anatomic-acoustic description of the maturation of the vowel acoustic space for English. Method Literature searches identified 14 studies reporting data on vowel formant frequencies. Data on corner vowels are summarized graphically to show age- and sex-related changes in the area and shape of the traditional vowel quadrilateral. Conclusions Vowel development is expressed as: (a) establishment of a language-appropriate acoustic representation (e.g., F1-F2 quadrilateral or F1-F2-F3 space), (b) gradual reduction in formant frequencies and F1-F2 area with age, (c) reduction in formant-frequency variability, (d) emergence of male-female differences in formant frequency by age 4 years with more apparent differences by 8 years, (e) jumps in formant frequency at ages corresponding to growth spurts of the VT, and (f) a decline of f0 after age 1, with the decline being more rapid during early childhood and adolescence. Questions remain about optimal procedures for VT normalization and the exact relationship between VT growth and formant frequencies. Comments are included on nasalization and vocal fundamental frequency as they relate to the development of vowel production. PMID:18055771

  5. The influence of phonetic context and formant measurement location on acoustic vowel space

    NASA Astrophysics Data System (ADS)

    Turner, Greg S.; Hutchings, David T.; Sylvester, Betsy; Weismer, Gary

    2003-04-01

    One way of depicting vowel production is by describing vowels within an F1/F2 acoustic vowel space. This acoustic measure illustrates the dispersion of F1 and F2 values at a specific moment in time (e.g., the temporal midpoint of a vowel) for the vowels of a given language. This measure has recently been used to portray vowel production in individuals with communication disorders such as dysarthria and is moderately related to the severity of the speech disorder. Studies aimed at identifying influential factors affecting measurement stability of vowel space have yet to be completed. The focus of the present study is to evaluate the influence of phonetic context and spectral measurement location on vowel space in a group of neurologically normal American English speakers. For this study, vowel space was defined in terms of the dispersion of the four corner vowels produced within a CVC syllable frame, where C includes six stop consonants in all possible combinations with each vowel. Spectral measures were made at the midpoint and formant extremes of the vowels. A discussion will focus on individual and group variation in vowel space as a function of phonetic context and temporal measurement location.

  6. A modeling investigation of vowel-to-vowel movement planning in acoustic and muscle spaces

    NASA Astrophysics Data System (ADS)

    Zandipour, Majid

    The primary objective of this research was to explore the coordinate space in which speech movements are planned. A two dimensional biomechanical model of the vocal tract (tongue, lips, jaw, and pharynx) was constructed based on anatomical and physiological data from a subject. The model transforms neural command signals into the actions of muscles. The tongue was modeled by a 221-node finite element mesh. Each of the eight tongue muscles defined within the mesh was controlled by a virtual muscle model. The other vocal-tract components were modeled as simple 2nd-order systems. The model's geometry was adapted to a speaker, using MRI scans of the speaker's vocal tract. The vocal tract model, combined with an adaptive controller that consisted of a forward model (mapping 12-dimensional motor commands to a 64-dimensional acoustic spectrum) and an inverse model (mapping acoustic trajectories to motor command trajectories), was used to simulate and explore the implications of two planning hypotheses: planning in motor space vs. acoustic space. The acoustic, kinematic, and muscle activation (EMG) patterns of vowel-to-vowel sequences generated by the model were compared to data from the speaker whose acoustic, kinematic and EMG were also recorded. The simulation results showed that: (a) modulations of the motor commands effectively accounted for the effects of speaking rate on EMG, kinematic, and acoustic outputs; (b) the movement and acoustic trajectories were influenced by vocal tract biomechanics; and (c) both planning schemes produced similar articulatory movement, EMG, muscle length, force, and acoustic trajectories, which were also comparable to the subject's data under normal speaking conditions. In addition, the effects of a bite-block on measured EMG, kinematics and formants were simulated by the model. Acoustic planning produced successful simulations but motor planning did not. The simulation results suggest that with somatosensory feedback but no auditory
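
    As a toy illustration of the acoustic-space planning idea in this record (not the dissertation's biomechanical model), the sketch below estimates a linear forward map from 12-dimensional motor commands to a 64-dimensional acoustic output using random "babbling" data, then uses its pseudo-inverse as a crude inverse model to turn a planned vowel-to-vowel acoustic trajectory into a motor command trajectory. All data, and everything except the dimensions 12 and 64 named in the abstract, are arbitrary placeholders.

    ```python
    # Toy sketch of acoustic-space planning (not the dissertation's biomechanical model):
    # estimate a linear forward map from motor commands to acoustics from random
    # "babbling" data, then use its pseudo-inverse as a crude inverse model.
    import numpy as np

    rng = np.random.default_rng(0)
    n_motor, n_acoustic = 12, 64                     # dimensions named in the abstract
    W_true = rng.normal(size=(n_motor, n_acoustic))  # unknown "plant": motor -> acoustic

    # Babbling: random motor commands and the (noisy) acoustic outputs they produce.
    M = rng.normal(size=(1000, n_motor))
    A = M @ W_true + rng.normal(scale=0.01, size=(1000, n_acoustic))

    # Forward model by least squares; inverse model as its pseudo-inverse.
    W_fwd, *_ = np.linalg.lstsq(M, A, rcond=None)    # shape (12, 64): motor -> acoustic
    W_inv = np.linalg.pinv(W_fwd)                    # shape (64, 12): acoustic -> motor

    # Plan a straight-line acoustic trajectory between two vowel-like targets,
    # then recover the motor commands that would realize it.
    acoustic_plan = np.linspace(A[0], A[1], num=20)
    motor_plan = acoustic_plan @ W_inv               # (20, 12): one command per time step
    print(motor_plan.shape)
    ```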

  7. Can acoustic vowel space predict the habitual speech rate of the speaker?

    PubMed

    Tsao, Y-C; Iqbal, K

    2005-01-01

    This study aims to find whether the acoustic vowel space reflects the habitual speaking rate of the speaker. The vowel space is defined as the area of the quadrilateral formed by the four corner vowels (i.e., /i/, /æ/, /u/, /ɑ/) in the F1-F2 plane. The study compares the acoustic vowel space in the speech of habitually slow and fast talkers and further analyzes them by gender. In addition to the measurement of vowel duration and midpoint frequencies of F1 and F2, the F1/F2 vowel space areas were measured and compared across speakers. The results indicate substantial overlap in vowel space area functions between slow and fast talkers, though the slow speakers were found to have larger vowel spaces. Furthermore, large interspeaker variability in vowel space area functions was noted within each group. Both F1 and F2 formant frequencies were found to be gender sensitive, consistent with existing data. No predictive relation between vowel duration and formant frequencies was observed among speakers. PMID:17282413

  8. An Acoustic Analysis of the Vowel Space in Young and Old Cochlear-Implant Speakers

    ERIC Educational Resources Information Center

    Neumeyer, Veronika; Harrington, Jonathan; Draxler, Christoph

    2010-01-01

    The main purpose of this study was to compare acoustically the vowel spaces of two groups of cochlear implantees (CI) with two age-matched normal hearing groups. Five young speakers (15-25 years) and five older speakers (55-70 years) with CIs and two normal-hearing control groups of the same ages were recorded. The speech material…

  9. Acoustic Analysis of Vowels Following Glossectomy

    ERIC Educational Resources Information Center

    Whitehill, Tara L.; Ciocca, Valter; Chan, Judy C-T.; Samman, Nabil

    2006-01-01

    This study examined the acoustic characteristics of vowels produced by speakers with partial glossectomy. Acoustic variables investigated included first formant (F1) frequency, second formant (F2) frequency, F1 range, F2 range and vowel space area. Data from the speakers with partial glossectomy were compared with age- and gender-matched controls.…

  10. Vowel Space Characteristics and Vowel Identification Accuracy

    ERIC Educational Resources Information Center

    Neel, Amy T.

    2008-01-01

    Purpose: To examine the relation between vowel production characteristics and intelligibility. Method: Acoustic characteristics of 10 vowels produced by 45 men and 48 women from the J. M. Hillenbrand, L. A. Getty, M. J. Clark, and K. Wheeler (1995) study were examined and compared with identification accuracy. Global (mean f0, F1, and F2;…

  11. Acoustic Characteristics of Vowels and Plosives/Affricates of Mandarin-Speaking Hearing-Impaired Children

    ERIC Educational Resources Information Center

    Tseng, Shu-Chuan; Kuei, Ko; Tsou, Pei-Chen

    2011-01-01

    This article presents the results of an acoustic analysis of vowels and plosives/affricates produced by 45 Mandarin-speaking children with hearing impairment. Vowel production is represented and categorized into three groups by vowel space size calculated with normalized F1 and F2 values of corner vowels. The correlation between speech…

  12. Acoustic and Durational Properties of Indian English Vowels

    ERIC Educational Resources Information Center

    Maxwell, Olga; Fletcher, Janet

    2009-01-01

    This paper presents findings of an acoustic phonetic analysis of vowels produced by speakers of English as a second language from northern India. The monophthongal vowel productions of a group of male speakers of Hindi and male speakers of Punjabi were recorded, and acoustic phonetic analyses of vowel formant frequencies and vowel duration were…

  13. Temporal and acoustic characteristics of Greek vowels produced by adults with cerebral palsy

    NASA Astrophysics Data System (ADS)

    Botinis, Antonis; Orfanidou, Ioanna; Fourakis, Marios

    2005-09-01

    The present investigation examined the temporal and spectral characteristics of Greek vowels as produced by speakers with intact (NO) versus cerebral palsy affected (CP) neuromuscular systems. Six NO and six CP native speakers of Greek produced the Greek vowels [i, e, a, o, u] in the first syllable of CVCV nonsense words in a short carrier phrase. Stress could be on either the first or second syllable. There were three female and three male speakers in each group. In terms of temporal characteristics, the results showed that: vowels produced by CP speakers were longer than vowels produced by NO speakers; stressed vowels were longer than unstressed vowels; vowels produced by female speakers were longer than vowels produced by male speakers. In terms of spectral characteristics the results showed that the vowel space of the CP speakers was smaller than that of the NO speakers. This is similar to the results recently reported by Liu et al. [J. Acoust. Soc. Am. 117, 3879-3889 (2005)] for CP speakers of Mandarin. There was also a reduction of the acoustic vowel space defined by unstressed vowels, but this reduction was much more pronounced in the vowel productions of CP speakers than NO speakers.

  14. Automatic assessment of vowel space area

    PubMed Central

    Sandoval, Steven; Berisha, Visar; Utianski, Rene L.; Liss, Julie M.; Spanias, Andreas

    2013-01-01

    Vowel space area (VSA) is an attractive metric for the study of speech production deficits and reductions in intelligibility, in addition to the traditional study of vowel distinctiveness. Traditional VSA estimates are not currently sufficiently sensitive to map to production deficits. The present report describes an automated algorithm that uses healthy, connected speech rather than single syllables and estimates the entire vowel working space rather than just the corner vowels. Analyses reveal a strong correlation between the traditional VSA and automated estimates. When the two methods diverge, the automated method seems to provide a more accurate area since it accounts for all vowels. PMID:24181994

  15. Automatic assessment of vowel space area.

    PubMed

    Sandoval, Steven; Berisha, Visar; Utianski, Rene L; Liss, Julie M; Spanias, Andreas

    2013-11-01

    Vowel space area (VSA) is an attractive metric for the study of speech production deficits and reductions in intelligibility, in addition to the traditional study of vowel distinctiveness. Traditional VSA estimates are not currently sufficiently sensitive to map to production deficits. The present report describes an automated algorithm that uses healthy, connected speech rather than single syllables and estimates the entire vowel working space rather than just the corner vowels. Analyses reveal a strong correlation between the traditional VSA and automated estimates. When the two methods diverge, the automated method seems to provide a more accurate area since it accounts for all vowels. PMID:24181994
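
    A minimal sketch of the general idea behind such automated estimates (not the authors' algorithm): pool many (F1, F2) measurements from connected speech and take the area of their convex hull as the working vowel space, rather than the quadrilateral spanned by corner vowels alone. The formant samples below are synthetic placeholders.

    ```python
    # Sketch: vowel working space area as the convex hull of pooled (F1, F2) samples.
    import numpy as np
    from scipy.spatial import ConvexHull

    rng = np.random.default_rng(0)
    # Synthetic placeholder (F1, F2) measurements in Hz from a connected-speech passage.
    formants = np.column_stack([
        rng.normal(500, 120, 300),   # F1
        rng.normal(1500, 350, 300),  # F2
    ])
    hull = ConvexHull(formants)
    print(f"Convex-hull vowel space area: {hull.volume:.0f} Hz^2")  # .volume is area in 2-D
    ```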

  16. Multidimensional scaling of the perceptual space of basic vowels

    NASA Astrophysics Data System (ADS)

    Jassem, Wiktor; Nowak, Ignacy

    To date, experimental studies of the articulation, acoustic characteristics, and perception of basic vowels have been inadequate, and their results have been the subject of controversy. Butcher was especially critical of basic vowels: he used a multidimensional scaling method to evaluate perceptual experiments and the few other experimental studies, and concluded that the system of basic vowels should be eliminated as scientifically unsound. The experiments conducted at the Acoustic Phonetics Workshop, summarized in this article and processed by means of a multidimensional scaling method, yielded positive results. In light of these results, Butcher's criticism is unjustified: basic vowels arrange themselves in a meaningful configuration in a two-dimensional perceptual space.

  17. Degraded Vowel Acoustics and the Perceptual Consequences in Dysarthria

    NASA Astrophysics Data System (ADS)

    Lansford, Kaitlin L.

    Distorted vowel production is a hallmark characteristic of dysarthric speech, irrespective of the underlying neurological condition or dysarthria diagnosis. A variety of acoustic metrics have been used to study the nature of vowel production deficits in dysarthria; however, not all demonstrate sensitivity to the exhibited deficits. Less attention has been paid to quantifying the vowel production deficits associated with the specific dysarthrias. Attempts to characterize the relationship between naturally degraded vowel production in dysarthria with overall intelligibility have met with mixed results, leading some to question the nature of this relationship. It has been suggested that aberrant vowel acoustics may be an index of overall severity of the impairment and not an "integral component" of the intelligibility deficit. A limitation of previous work detailing perceptual consequences of disordered vowel acoustics is that overall intelligibility, not vowel identification accuracy, has been the perceptual measure of interest. A series of three experiments were conducted to address the problems outlined herein. The goals of the first experiment were to identify subsets of vowel metrics that reliably distinguish speakers with dysarthria from non-disordered speakers and differentiate the dysarthria subtypes. Vowel metrics that capture vowel centralization and reduced spectral distinctiveness among vowels differentiated dysarthric from non-disordered speakers. Vowel metrics generally failed to differentiate speakers according to their dysarthria diagnosis. The second and third experiments were conducted to evaluate the relationship between degraded vowel acoustics and the resulting percept. In the second experiment, correlation and regression analyses revealed vowel metrics that capture vowel centralization and distinctiveness and movement of the second formant frequency were most predictive of vowel identification accuracy and overall intelligibility. The third

  18. Effect of body position on vocal tract acoustics: Acoustic pharyngometry and vowel formants

    PubMed Central

    Vorperian, Houri K.; Kurtzweil, Sara L.; Fourakis, Marios; Kent, Ray D.; Tillman, Katelyn K.; Austin, Diane

    2015-01-01

    The anatomic basis and articulatory features of speech production are often studied with imaging studies that are typically acquired in the supine body position. It is important to determine if changes in body orientation to the gravitational field alter vocal tract dimensions and speech acoustics. The purpose of this study was to assess the effect of body position (upright versus supine) on (1) oral and pharyngeal measurements derived from acoustic pharyngometry and (2) acoustic measurements of fundamental frequency (F0) and the first four formant frequencies (F1–F4) for the quadrilateral point vowels. Data were obtained for 27 male and female participants, aged 17 to 35 yrs. Acoustic pharyngometry showed a statistically significant effect of body position on volumetric measurements, with smaller values in the supine than upright position, but no changes in length measurements. Acoustic analyses of vowels showed significantly larger values in the supine than upright position for the variables of F0, F3, and the Euclidean distance from the centroid to each corner vowel in the F1-F2-F3 space. Changes in body position affected measurements of vocal tract volume but not length. Body position also affected the aforementioned acoustic variables, but the main vowel formants were preserved. PMID:26328699
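
    One of the acoustic variables above is the Euclidean distance from the centroid of the vowel space to each corner vowel in F1-F2-F3 space. A minimal sketch of that measure with placeholder formant values (not the study's measurements):

    ```python
    # Sketch: centroid-to-corner-vowel Euclidean distances in F1-F2-F3 space (Hz).
    import numpy as np

    corner_vowels = {                       # placeholder (F1, F2, F3) values in Hz
        "i": np.array([270.0, 2290.0, 3010.0]),
        "ae": np.array([660.0, 1720.0, 2410.0]),
        "a": np.array([730.0, 1090.0, 2440.0]),
        "u": np.array([300.0, 870.0, 2240.0]),
    }
    centroid = np.mean(list(corner_vowels.values()), axis=0)
    for vowel, formants in corner_vowels.items():
        distance = np.linalg.norm(formants - centroid)
        print(f"/{vowel}/: {distance:.0f} Hz from centroid")
    ```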

  19. The Hebrew Vowel System: Raw and Normalized Acoustic Data.

    ERIC Educational Resources Information Center

    Most, Tova; Amir, Ofer; Tobin, Yishai

    2000-01-01

    Identified the acoustic features of the vowels produced by Hebrew speakers differing in age and sex. Ninety speakers were recorded. Vowels were presented in a nonword context that was placed in a meaningful Hebrew sentence. Results are discussed. (Author/VWL)

  20. The effect of reduced vowel working space on speech intelligibility in Mandarin-speaking young adults with cerebral palsy

    NASA Astrophysics Data System (ADS)

    Liu, Huei-Mei; Tsao, Feng-Ming; Kuhl, Patricia K.

    2005-06-01

    The purpose of this study was to examine the effect of reduced vowel working space on dysarthric talkers' speech intelligibility using both acoustic and perceptual approaches. In experiment 1, the acoustic-perceptual relationship between vowel working space area and speech intelligibility was examined in Mandarin-speaking young adults with cerebral palsy. Subjects read aloud 18 bisyllabic words containing the vowels /i/, /a/, and /u/ using their normal speaking rate. Each talker's words were identified by three normal listeners. The percentages of correct vowel and word identification were calculated as vowel intelligibility and word intelligibility, respectively. Results revealed that talkers with cerebral palsy exhibited smaller vowel working space areas compared to ten age-matched controls. The vowel working space area was significantly correlated with vowel intelligibility (r=0.632, p<0.005) and with word intelligibility (r=0.684, p<0.005). Experiment 2 examined whether tokens of expanded vowel working spaces were perceived as better vowel exemplars and represented with greater perceptual spaces than tokens of reduced vowel working spaces. The results of the perceptual experiment support this prediction. The distorted vowels of talkers with cerebral palsy compose a smaller acoustic space that results in shrunken intervowel perceptual distances for listeners.

  1. Vowel Acoustics in Dysarthria: Mapping to Perception

    ERIC Educational Resources Information Center

    Lansford, Kaitlin L.; Liss, Julie M.

    2014-01-01

    Purpose: The aim of the present report was to explore whether vowel metrics, demonstrated to distinguish dysarthric and healthy speech in a companion article (Lansford & Liss, 2014), are able to predict human perceptual performance. Method: Vowel metrics derived from vowels embedded in phrases produced by 45 speakers with dysarthria were…

  2. Acoustic characteristics of vowels by normal Malaysian Malay young adults.

    PubMed

    Ting, Hua Nong; Chia, See Yan; Abdul Hamid, Badrulzaman; Mukari, Siti Zamratol-Mai Sarah

    2011-11-01

    The acoustic characteristics of sustained vowels have been widely investigated across various languages and ethnic groups. These acoustic measures, including fundamental frequency (F(0)), jitter (Jitt), relative average perturbation (RAP), five-point period perturbation quotient (PPQ5), shimmer (Shim), and 11-point amplitude perturbation quotient (APQ11), are not well established for Malaysian Malay young adults. This article studies the acoustic measures of Malaysian Malay adults using acoustical analysis. The study analyzed six sustained Malay vowels produced by 60 normal native Malaysian Malay adults with a mean age of 21.19 years. The F(0) values of Malaysian Malay males and females were reported as 134.85±18.54 and 238.27±24.06 Hz, respectively. Malaysian Malay females had significantly higher F(0) than males for all the vowels. However, no significant differences were observed between the genders for the perturbation measures in all the vowels, except RAP in /e/. No significant F(0) differences between the vowels were observed. Significant differences between the vowels were reported for all perturbation measures in Malaysian Malay males. As for Malaysian Malay females, significant differences between the vowels were reported for Shim and APQ11. Multiethnic comparisons indicate that F(0) varies between Malaysian Malay and other ethnic groups. However, the perturbation measures cannot be directly compared, because the measures vary significantly across different speech analysis software packages. PMID:21429707

  3. Acoustical analysis of Spanish vowels produced by laryngectomized subjects.

    PubMed

    Cervera, T; Miralles, J L; González-Alvarez, J

    2001-10-01

    The purpose of this study was to describe the acoustic characteristics of Spanish vowels in subjects who had undergone a total laryngectomy and to compare the results with those obtained in a control group of subjects who spoke normally. Our results are discussed in relation to those obtained in previous studies with English-speaking laryngectomized patients. The comparison between English and Spanish, which differ widely in the size of their vowel inventories, will help us to determine specific or universal vowel production characteristics in these patients. Our second objective was to relate the acoustic properties of these vowels to the perceptual data obtained in our previous work (J. L. Miralles & T. Cervera, 1995). In that study, results indicated that vowels produced by alaryngeal speakers were well perceived in word context. Vowels were produced in CVCV word context by two groups of patients who had undergone laryngectomy: tracheoesophageal speakers (TES) and esophageal speakers. In addition, a control group of normal talkers was included. Audio recordings of 24 Spanish words produced by each speaker were analyzed using CSL (Kay Elemetrics). Results showed that F1, F2, and vowel duration of alaryngeal speakers differ significantly from normal values. In general, laryngectomized patients produce vowels with higher formant frequencies and longer durations than the laryngeal group. Thus, the data indicate modifications either in the frequency or temporal domain, following the same tendency found in previous studies with English-speaking laryngectomized speakers. PMID:11708538

  4. Variability in English vowels is comparable in articulation and acoustics

    PubMed Central

    Noiray, Aude; Iskarous, Khalil; Whalen, D. H.

    2014-01-01

    The nature of the links between speech production and perception has been the subject of longstanding debate. The present study investigated the articulatory parameter of tongue height and the acoustic F1-F0 difference for the phonological distinction of vowel height in American English front vowels. Multiple repetitions of /i, ɪ, e, ɛ, æ/ in [(h)Vd] sequences were recorded in seven adult speakers. Articulatory (ultrasound) and acoustic data were collected simultaneously to provide a direct comparison of variability in vowel production in both domains. Results showed idiosyncratic patterns of articulation for contrasting the three front vowel pairs /i-ɪ/, /e-ɛ/ and /ɛ-æ/ across subjects, with the degree of variability in vowel articulation comparable to that observed in the acoustics for all seven participants. However, contrary to what was expected, some speakers showed reversals of tongue height for /ɪ/-/e/ that were also reflected in the acoustics, with F1 higher for /ɪ/ than for /e/. The data suggest that the phonological distinction of height is conveyed via speaker-specific articulatory-acoustic patterns that do not strictly match feature descriptions. However, the acoustic signal is faithful to the articulatory configuration that generated it, carrying the crucial information for perceptual contrast. PMID:25101144

  5. Unmasking the acoustic effects of vowel-to-vowel coarticulation: A statistical modeling approach

    PubMed Central

    Cole, Jennifer; Linebaugh, Gary; Munson, Cheyenne; McMurray, Bob

    2010-01-01

    Coarticulation is a source of acoustic variability for vowels, but how large is this effect relative to other sources of variance? We investigate acoustic effects of anticipatory V-to-V coarticulation relative to variation due to the following C and individual speaker. We examine F1 and F2 from V1 in 48 V1-C#V2 contexts produced by 10 speakers of American English. ANOVA reveals significant effects of both V2 and C on F1 and F2 measures of V1. The influence of V2 and C on acoustic variability relative to that of speaker and target vowel identity is evaluated using hierarchical linear regression. Speaker and target vowel account for roughly 80% of the total variance in F1 and F2, but when this variance is partialed out C and V2 account for another 18% (F1) and 63% (F2) of the remaining target vowel variability. Multinomial logistic regression (MLR) models are constructed to test the power of target vowel F1 and F2 for predicting C and V2 of the upcoming context. Prediction accuracy is 58% for C-Place, 76% for C-Voicing and 54% for V2, but only when variance due to other sources is factored out. MLR is discussed as a model of the parsing mechanism in speech perception. PMID:21173864
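
    A minimal sketch of the analysis logic described above (not the authors' code): regress out speaker and target-vowel identity from F1/F2 of V1, then use the residuals in a multinomial logistic regression to predict the upcoming V2. All data are synthetic placeholders, and scikit-learn stands in for whatever statistics package the authors used.

    ```python
    # Sketch: partial out speaker and target-vowel variance from F1/F2, then predict
    # the upcoming V2 from the residuals with multinomial logistic regression.
    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression

    rng = np.random.default_rng(1)
    n = 480
    speaker = rng.integers(0, 10, n)   # 10 speakers
    v1 = rng.integers(0, 4, n)         # target vowel (V1) identity
    v2 = rng.integers(0, 3, n)         # upcoming vowel (V2), the coarticulation source

    # Synthetic F1/F2 of V1: mostly speaker and vowel variance, plus a small V2 effect.
    F1 = 400 + 80 * v1 + 10 * speaker + 12 * v2 + rng.normal(0, 25, n)
    F2 = 1100 + 300 * v1 + 30 * speaker + 40 * v2 + rng.normal(0, 70, n)

    def one_hot(labels):
        out = np.zeros((labels.size, labels.max() + 1))
        out[np.arange(labels.size), labels] = 1.0
        return out

    # Step 1: regress F1 and F2 on speaker and V1 identity; keep the residuals.
    nuisance = np.hstack([one_hot(speaker), one_hot(v1)])
    residuals = np.column_stack([
        f - LinearRegression().fit(nuisance, f).predict(nuisance) for f in (F1, F2)
    ])

    # Step 2: predict V2 of the upcoming context from the residual formant variation.
    clf = LogisticRegression(max_iter=1000).fit(residuals, v2)
    print(f"V2 prediction accuracy: {clf.score(residuals, v2):.2f}")
    ```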

  6. Spanish is better than English for discriminating Portuguese vowels: acoustic similarity versus vowel inventory size

    PubMed Central

    Elvin, Jaydene; Escudero, Paola; Vasiliev, Polina

    2014-01-01

    Second language (L2) learners often struggle to distinguish sound contrasts that are not present in their native language (L1). Models of non-native and L2 sound perception claim that perceptual similarity between L1 and L2 sound contrasts correctly predicts discrimination by naïve listeners and L2 learners. The present study tested the explanatory power of vowel inventory size versus acoustic properties as predictors of discrimination accuracy when naïve Australian English (AusE) and Iberian Spanish (IS) listeners are presented with six Brazilian Portuguese (BP) vowel contrasts. Our results show that IS listeners outperformed AusE listeners, confirming that cross-linguistic acoustic properties, rather than cross-linguistic vowel inventory sizes, successfully predict non-native discrimination difficulty. Furthermore, acoustic distance between BP vowels and closest L1 vowels successfully predicted differential levels of difficulty among the six BP contrasts, with BP /e-i/ and /o-u/ being the most difficult for both listener groups. We discuss the importance of our findings for the adequacy of models of L2 speech perception. PMID:25400599

  7. The effect of vowel inventory and acoustic properties in Salento Italian learners of Southern British English vowels.

    PubMed

    Escudero, Paola; Sisinni, Bianca; Grimaldi, Mirko

    2014-03-01

    Salento Italian (SI) listeners' categorization and discrimination of standard Southern British English (SSBE) vowels were examined in order to establish their initial state in the acquisition of the SSBE vowel system. The results of the vowel categorization task revealed that SI listeners showed single-category assimilation for many SSBE vowels and multiple-category assimilation for others. Additionally, SI vowel discrimination accuracy varied across contrasts, in line with the categorization results. This differential level of difficulty is discussed on the basis of current L2 perception models. The SI categorization results were then compared to the previously reported data on Peruvian Spanish (PS) listeners. Both SI and PS have a five-vowel inventory and therefore both listener groups were expected to have similar problems when distinguishing SSBE vowel contrasts, but were predicted to have different mappings of SSBE vowels to native categories due to the differences in the acoustic properties of vowels across the two languages. As predicted by the hypothesis that acoustic differences in production lead to a different nonnative perception, the comparison showed that there was large variability in how SSBE vowels are initially mapped to the specific five-vowel inventory. Predictions for differential L2 development across languages are also provided. PMID:24606292

  8. Acoustic and Perceptual Characteristics of Vowels Produced during Simultaneous Communication

    ERIC Educational Resources Information Center

    Schiavetti, Nicholas; Metz, Dale Evan; Whitehead, Robert L.; Brown, Shannon; Borges, Janie; Rivera, Sara; Schultz, Christine

    2004-01-01

    This study investigated the acoustical and perceptual characteristics of vowels in speech produced during simultaneous communication (SC). Twelve normal hearing, experienced sign language users were recorded under SC and speech alone (SA) conditions speaking a set of sentences containing monosyllabic words designed for measurement of vowel…

  9. The articulatory and acoustical characteristics of the ``apical vowels'' in Beijing Mandarin

    NASA Astrophysics Data System (ADS)

    Lee, Wai-Sum

    2005-09-01

    The study investigates the articulatory and acoustical characteristics of the two so-called "apical vowels" in Beijing Mandarin, which have been referred to as the "apical anterior vowel" and the "apical posterior vowel" by linguists in China. The "apical posterior vowel" has also been described as a retroflex. The results of an EMA (electromagnetic articulograph) analysis show that both vowels are apical, with the tip of the tongue approaching the alveolar region for the "anterior vowel" and the postalveolar region for the "posterior vowel." The "posterior vowel" is pharyngealized, as the body of the tongue, in particular the posterodorsal portion, is pulled backward toward the pharynx. Acoustical data obtained using the CSL4400 speech analysis software show that the two "apical vowels" have similar F1 values. The F2 value is slightly larger for the "posterior vowel" than for the "anterior vowel." Thus, the correlation between a larger F2 and an advanced tongue position is not applicable to these "apical vowels." The main difference between the two "apical vowels" is in F3, where the value is much smaller for the "posterior vowel" than for the "anterior vowel." It is assumed that the smaller F3 value for the "posterior vowel" is due to pharyngealization.

  10. An Evaluation of Articulatory Working Space Area in Vowel Production of Adults with Down Syndrome

    ERIC Educational Resources Information Center

    Bunton, Kate; Leddy, Mark

    2011-01-01

    Many adolescents and adults with Down syndrome have reduced speech intelligibility. Reasons for this reduction may relate to differences in anatomy and physiology, both of which are important for creating an intelligible speech signal. The purpose of this study was to document acoustic vowel space and articulatory working space for two adult…

  11. An assessment of acoustic contrast between long and short vowels using convex hulls.

    PubMed

    Haynes, Erin F; Taylor, Michael

    2014-08-01

    An alternative to the spectral overlap assessment metric (SOAM), first introduced by Wassink [(2006). J. Acoust. Soc. Am. 119(4), 2334-2350], is presented. The SOAM quantifies the intra- and inter-language differences between long-short vowel pairs through a comparison of spectral (F1, F2) and temporal properties modeled with best fit ellipses (F1 × F2 space) and ellipsoids (F1 × F2 × duration). However, the SOAM ellipses and ellipsoids rely on a Gaussian distribution of vowel data and a dense dataset, neither of which can be assumed in endangered languages or languages with limited available data. The method presented in this paper, called the Vowel Overlap Assessment with Convex Hulls (VOACH) method, improves upon the earlier metric through the use of best-fit convex shapes. The VOACH method reduces the incorporation of "empty" data into calculations of vowel space. Both methods are applied to Numu (Oregon Northern Paiute), an endangered language of the western United States. Calculations from the VOACH method suggest that Numu is a primary quantity language, a result that is well aligned with impressionistic analyses of spectral and durational data from the language and with observations by field researchers. PMID:25096122

  12. Acoustic comparisons of Japanese and English vowels produced by native speakers of Japanese

    NASA Astrophysics Data System (ADS)

    Nishi, Kanae; Akahane-Yamada, Reiko; Kubo, Rieko; Strange, Winifred

    2003-10-01

    This study explored acoustic similarities/differences between Japanese (J) and American English (AE) vowels produced by native J speakers and compared production patterns to their perceptual assimilation of AE vowels [Strange et al., J. Phonetics 26, 311-344 (1998)]. Eight male native J speakers who had served as listeners in Strange et al. produced 18 Japanese (J) vowels (5 long-short pairs, 2 double vowels, and 3 long-short palatalized pairs) and 11 American English (AE) vowels in /hVbɑ/ disyllables embedded in a carrier sentence. Acoustical parameters included formant frequencies at syllable midpoint (F1/F2/F3), formant change from 25% to 75% points in syllable (formant change), and vocalic duration. Results of linear discriminant analyses showed rather poor acoustic differentiation of J vowel categories when F1/F2/F3 served as input variables (60% correct classification), which greatly improved when duration and formant change were added. In contrast, correct classification of J speakers' AE vowels using F1/F2/F3 was very poor (66%) and did not improve much when duration and dynamic information were added. J speakers used duration to differentiate long/short AE vowel contrasts except for mid-to-low back vowels; these vowels were perceptually assimilated to a single Japanese vowel, and are very difficult for Japanese listeners to identify.

  13. [Acoustic study of sustained vowels made by patients with recurrent nerve paralysis after thyroidectomy].

    PubMed

    Fauth, C; Vaxelaire, B; Rodier, J F; Volkmar, P P; Sock, R

    2012-01-01

    The objective of this work is to evaluate the consequences of thyroid surgery on the voice of patients suffering from recurrent nerve paralysis. The consequences of the surgery are evaluated using a corpus of sustained vowels in order to identify the various disruptions that this procedure may produce. This research also looks for possible compensatory and/or readjustment strategies that can be used by a patient alone or with the help of speech therapy. Acoustic measurements considered are fundamental frequency (F0), Harmonics-to-Noise Ratio (HNR), and vowel space area. This is a longitudinal study: all patients were recorded once a month for three months after surgery. Results reveal a modification of all parameters in the early recording stages. However, time and speech therapy contribute to obtaining expected values of the measured parameters, and thus to an improvement in vocal quality. PMID:23074822

  14. Colloquial Arabic vowels in Israel: a comparative acoustic study of two dialects.

    PubMed

    Amir, Noam; Amir, Ofer; Rosenhouse, Judith

    2014-10-01

    This study explores the acoustic properties of the vowel systems of two dialects of colloquial Arabic spoken in Israel. One dialect is spoken in the Galilee region in the north of Israel, and the other is spoken in the Triangle (Muthallath) region, in central Israel. These vowel systems have five short and five long vowels /i, i:, e, e:, a, a:, o, o:, u, u:/. Twenty men and twenty women from each region were included, uttering 30 vowels each. All speakers were adult Muslim native speakers of these two dialects. The studied vowels were uttered in non-pharyngeal and non-laryngeal environments in the context of CVC words, embedded in a carrier sentence. The acoustic parameters studied were the two first formants, F0, and duration. Results revealed that long vowels were approximately twice as long as short vowels and differed also in their formant values. The two dialects diverged mainly in the short vowels rather than in the long ones. An overlap was found between the two short vowel pairs /i/-/e/ and /u/-/o/. This study demonstrates the existence of dialectal differences in the colloquial Arabic vowel systems, underlining the need for further research into the numerous additional dialects found in the region. PMID:25324089

  15. Acoustic characteristics of the vowel systems of six regional varieties of American English

    PubMed Central

    Clopper, Cynthia G.; Pisoni, David B.; de Jong, Kenneth

    2012-01-01

    Previous research by speech scientists on the acoustic characteristics of American English vowel systems has typically focused on a single regional variety, despite decades of sociolinguistic research demonstrating the extent of regional phonological variation in the United States. In the present study, acoustic measures of duration and first and second formant frequencies were obtained from five repetitions of 11 different vowels produced by 48 talkers representing both genders and six regional varieties of American English. Results revealed consistent variation due to region of origin, particularly with respect to the production of low vowels and high back vowels. The Northern talkers produced shifted low vowels consistent with the Northern Cities Chain Shift, the Southern talkers produced fronted back vowels consistent with the Southern Vowel Shift, and the New England, Midland, and Western talkers produced the low back vowel merger. These findings indicate that the vowel systems of American English are better characterized in terms of the region of origin of the talkers than in terms of a single set of idealized acoustic-phonetic baselines of “General” American English and provide benchmark data for six regional varieties. PMID:16240825

  16. Acoustic characteristics of the vowel systems of six regional varieties of American English

    NASA Astrophysics Data System (ADS)

    Clopper, Cynthia G.; Pisoni, David B.; de Jong, Kenneth

    2005-09-01

    Previous research by speech scientists on the acoustic characteristics of American English vowel systems has typically focused on a single regional variety, despite decades of sociolinguistic research demonstrating the extent of regional phonological variation in the United States. In the present study, acoustic measures of duration and first and second formant frequencies were obtained from five repetitions of 11 different vowels produced by 48 talkers representing both genders and six regional varieties of American English. Results revealed consistent variation due to region of origin, particularly with respect to the production of low vowels and high back vowels. The Northern talkers produced shifted low vowels consistent with the Northern Cities Chain Shift, the Southern talkers produced fronted back vowels consistent with the Southern Vowel Shift, and the New England, Midland, and Western talkers produced the low back vowel merger. These findings indicate that the vowel systems of American English are better characterized in terms of the region of origin of the talkers than in terms of a single set of idealized acoustic-phonetic baselines of "General" American English and provide benchmark data for six regional varieties.

  17. A Comprehensive Three-Dimensional Cortical Map of Vowel Space

    ERIC Educational Resources Information Center

    Scharinger, Mathias; Idsardi, William J.; Poe, Samantha

    2011-01-01

    Mammalian cortex is known to contain various kinds of spatial encoding schemes for sensory information including retinotopic, somatosensory, and tonotopic maps. Tonotopic maps are especially interesting for human speech sound processing because they encode linguistically salient acoustic properties. In this study, we mapped the entire vowel space…

  18. Quantitative and Descriptive Comparison of Four Acoustic Analysis Systems: Vowel Measurements

    ERIC Educational Resources Information Center

    Burris, Carlyn; Vorperian, Houri K.; Fourakis, Marios; Kent, Ray D.; Bolt, Daniel M.

    2014-01-01

    Purpose: This study examines accuracy and comparability of 4 trademarked acoustic analysis software packages (AASPs): Praat, WaveSurfer, TF32, and CSL by using synthesized and natural vowels. Features of AASPs are also described. Method: Synthesized and natural vowels were analyzed using each of the AASP's default settings to secure 9…

  19. Acoustic Typology of Vowel Inventories and Dispersion Theory: Insights from a Large Cross-Linguistic Corpus

    ERIC Educational Resources Information Center

    Becker-Kristal, Roy

    2010-01-01

    This dissertation examines the relationship between the structural, phonemic properties of vowel inventories and their acoustic phonetic realization, with particular focus on the adequacy of Dispersion Theory, which maintains that inventories are structured so as to maximize perceptual contrast between their component vowels. In order to assess…

  20. Vowel space development in a child acquiring English and Spanish from birth

    NASA Astrophysics Data System (ADS)

    Andruski, Jean; Kim, Sahyang; Nathan, Geoffrey; Casielles, Eugenia; Work, Richard

    2005-04-01

    To date, research on bilingual first language acquisition has tended to focus on the development of higher levels of language, with relatively few analyses of the acoustic characteristics of bilingual infants' and children's speech. Since monolingual infants begin to show perceptual divisions of vowel space that resemble adult native speakers' divisions by about 6 months of age [Kuhl et al., Science 255, 606-608 (1992)], bilingual children's vowel production may provide evidence of their awareness of language differences relatively early during language development. This paper will examine the development of vowel categories in a child whose mother is a native speaker of Castilian Spanish, and whose father is a native speaker of American English. Each parent speaks to the child only in her/his native language. For this study, recordings made at the ages of 2;5 and 2;10 were analyzed and F1-F2 measurements were made of vowels from the stressed syllables of content words. The development of vowel space is compared across ages within each language, and across languages at each age. In addition, the child's productions are compared with the mother's and father's vocalic productions, which provide the predominant input in Spanish and English, respectively.

  1. Vowel Acoustics in Dysarthria: Speech Disorder Diagnosis and Classification

    ERIC Educational Resources Information Center

    Lansford, Kaitlin L.; Liss, Julie M.

    2014-01-01

    Purpose: The purpose of this study was to determine the extent to which vowel metrics are capable of distinguishing healthy from dysarthric speech and among different forms of dysarthria. Method: A variety of vowel metrics were derived from spectral and temporal measurements of vowel tokens embedded in phrases produced by 45 speakers with…

  2. Does Vowel Inventory Density Affect Vowel-to-Vowel Coarticulation?

    ERIC Educational Resources Information Center

    Mok, Peggy P. K.

    2013-01-01

    This study tests the output constraints hypothesis that languages with a crowded phonemic vowel space would allow less vowel-to-vowel coarticulation than languages with a sparser vowel space to avoid perceptual confusion. Mandarin has fewer vowel phonemes than Cantonese, but their allophonic vowel spaces are similarly crowded. The hypothesis…

  3. Acoustic properties of vowels in clear and conversational speech by female non-native English speakers

    NASA Astrophysics Data System (ADS)

    Li, Chi-Nin; So, Connie K.

    2005-04-01

    Studies have shown that talkers can improve the intelligibility of their speech when instructed to speak as if talking to a hearing-impaired person. The improvement of speech intelligibility is associated with specific acoustic-phonetic changes: increases in vowel duration and fundamental frequency (F0), a wider pitch range, and a shift in formant frequencies for F1 and F2. Most previous studies of clear speech production have been conducted with native speakers; research with second language speakers is much less common. The present study examined the acoustic properties of non-native English vowels produced in a clear speaking style. Five female Cantonese speakers and a comparison group of English speakers were recorded producing four vowels (/i u ae a/) in /bVt/ context in conversational and clear speech. Vowel durations, F0, pitch range, and the first two formants for each of the four vowels were measured. Analyses revealed that for both groups of speakers, vowel durations, F0, pitch range, and F1 spoken clearly were greater than those produced conversationally. However, F2 was higher in conversational speech than in clear speech. The findings suggest that female non-native English speakers exhibit acoustic-phonetic patterns similar to those of native speakers when asked to produce English vowels clearly.

  4. Vowel Acoustics in Adults with Apraxia of Speech

    ERIC Educational Resources Information Center

    Jacks, Adam; Mathes, Katey A.; Marquardt, Thomas P.

    2010-01-01

    Purpose: To investigate the hypothesis that vowel production is more variable in adults with acquired apraxia of speech (AOS) relative to healthy individuals with unimpaired speech. Vowel formant frequency measures were selected as the specific target of focus. Method: Seven adults with AOS and aphasia produced 15 repetitions of 6 American English…

  5. Functional connectivity associated with acoustic stability during vowel production: implications for vocal-motor control.

    PubMed

    Sidtis, John J

    2015-03-01

    Vowels provide the acoustic foundation of communication through speech and song, but little is known about how the brain orchestrates their production. Positron emission tomography was used to study regional cerebral blood flow (rCBF) during sustained production of the vowel /a/. Acoustic and blood flow data from 13 normal, right-handed, native speakers of American English were analyzed to identify CBF patterns that predicted the stability of the first and second formants of this vowel. Formants are bands of resonance frequencies that provide vowel identity and contribute to voice quality. The results indicated that formant stability was directly associated with blood flow increases and decreases in both left- and right-sided brain regions. Secondary brain regions (those associated with the regions predicting formant stability) were more likely to have an indirect negative relationship with first formant variability, but an indirect positive relationship with second formant variability. These results are not definitive maps of vowel production, but they do suggest that the level of motor control necessary to produce stable vowels is reflected in the complexity of an underlying neural system. These results also extend a systems approach to functional image analysis, previously applied to normal and ataxic speech rate, which is based solely on identifying patterns of brain activity associated with specific performance measures. Understanding the complex relationships between multiple brain regions and the acoustic characteristics of vocal stability may provide insight into the pathophysiology of the dysarthrias, vocal disorders, and other speech changes in neurological and psychiatric disorders. PMID:25295385

  6. Prosodic domain-initial effects on the acoustic structure of vowels

    NASA Astrophysics Data System (ADS)

    Fox, Robert Allen; Jacewicz, Ewa; Salmons, Joseph

    2003-10-01

    In the process of language change, vowels tend to shift in "chains," leading to reorganizations of entire vowel systems over time. A long research tradition has described such patterns, but little is understood about what factors motivate such shifts. Drawing data from changes in progress in American English dialects, the broad hypothesis is tested that changes in vowel systems are related to prosodic organization and stress patterns. Changes in vowels under greater prosodic prominence correlate directly with, and likely underlie, historical patterns of shift. This study examines acoustic characteristics of vowels at initial edges of prosodic domains [Fougeron and Keating, J. Acoust. Soc. Am. 101, 3728-3740 (1997)]. The investigation is restricted to three distinct prosodic levels: utterance (sentence-initial), phonological phrase (strong branch of a foot), and syllable (weak branch of a foot). The predicted changes in vowels /e/ and /ɛ/ in two American English dialects (from Ohio and Wisconsin) are examined along a set of acoustic parameters: duration, formant frequencies (including dynamic changes over time), and fundamental frequency (F0). In addition to traditional methodology which elicits list-like intonation, a design is adapted to examine prosodic patterns in more typical sentence intonations. [Work partially supported by NIDCD R03 DC005560-01.]

  7. Speech in ALS: Longitudinal Changes in Lips and Jaw Movements and Vowel Acoustics

    PubMed Central

    Yunusova, Yana; Green, Jordan R.; Lindstrom, Mary J.; Pattee, Gary L.; Zinman, Lorne

    2015-01-01

    Purpose The goal of this exploratory study was to investigate longitudinally the changes in facial kinematics, vowel formant frequencies, and speech intelligibility in individuals diagnosed with bulbar amyotrophic lateral sclerosis (ALS). This study was motivated by the need to understand articulatory and acoustic changes with disease progression and their subsequent effect on deterioration of speech in ALS. Method Lip and jaw movements and vowel acoustics were obtained for four individuals with bulbar ALS during four consecutive recording sessions with an average interval of three months between recordings. Participants read target words embedded into sentences at a comfortable speaking rate. Maximum vertical and horizontal mouth opening and maximum jaw displacements were obtained during corner vowels. First and second formant frequencies were measured for each vowel. Speech intelligibility and speaking rate score were obtained for each session as well. Results Transient, non-vowel-specific changes in kinematics of the jaw and lips were observed. Kinematic changes often preceded changes in vowel acoustics and speech intelligibility. Conclusions Nonlinear changes in speech kinematics should be considered in evaluation of the disease effects on jaw and lip musculature. Kinematic measures might be most suitable for early detection of changes associated with bulbar ALS.

  8. Exploration of Acoustic Features for Automatic Vowel Discrimination in Spontaneous Speech

    ERIC Educational Resources Information Center

    Tyson, Na'im R.

    2012-01-01

    In an attempt to understand what acoustic/auditory feature sets motivated transcribers towards certain labeling decisions, I built machine learning models that were capable of discriminating between canonical and non-canonical vowels excised from the Buckeye Corpus. Specifically, I wanted to model when the dictionary form and the transcribed-form…

  9. Subthalamic Stimulation Reduces Vowel Space at the Initiation of Sustained Production: Implications for Articulatory Motor Control in Parkinson’s Disease

    PubMed Central

    Sidtis, John J.; Alken, Amy G.; Tagliati, Michele; Alterman, Ron; Van Lancker Sidtis, Diana

    2016-01-01

    Background: Stimulation of the subthalamic nuclei (STN) is an effective treatment for Parkinson’s disease, but complaints of speech difficulties after surgery have been difficult to quantify. Speech measures do not convincingly account for such reports. Objective: This study examined STN stimulation effects on vowel production, in order to probe whether DBS affects articulatory posturing. The objective was to compare positioning during the initiation phase with the steady prolongation phase by measuring vowel spaces for three “corner” vowels at these two time frames. Methods: Vowel space was measured over the initial 0.25 sec of sustained productions of high front (/i/), high back (/u/) and low vowels (/a/), and again during a 2 sec segment at the midpoint. Eight right-handed male subjects with bilateral STN stimulation and seven age-matched male controls were studied based on their participation in a larger study that included functional imaging. Mean values: age = 57±4.6 yrs; PD duration = 12.3±2.7 yrs; duration of DBS = 25.6±21.2 mos; and UPDRS III speech score = 1.6±0.7. STN subjects were studied off medication at their therapeutic DBS settings and again with their stimulators off, in counterbalanced order. Results: Vowel space was larger in the initiation phase compared to the midpoint for both the control and the STN subjects off stimulation. With stimulation on, however, the initial vowel space was significantly reduced to the area measured at the midpoint. For the three vowels, the acoustics were differentially affected, in accordance with expected effects of front versus back position in the vocal tract. Conclusions: STN stimulation appears to constrain initial articulatory gestures for vowel production, raising the possibility that articulatory positions normally used in speech are similarly constrained. PMID:27003219

  10. Identification of Acoustically Similar and Dissimilar Vowels in Profoundly Deaf Adults Who Use Hearing Aids and/or Cochlear Implants: Some Preliminary Findings

    PubMed Central

    Hay-McCutcheon, Marcia J.; Peterson, Nathaniel R.; Rosado, Christian A.; Pisoni, David B.

    2014-01-01

    Purpose In this study, the authors examined the effects of aging and residual hearing on the identification of acoustically similar and dissimilar vowels in adults with postlingual deafness who use hearing aids (HAs) and/or cochlear implants (CIs). Method The authors used two groups of acoustically similar and dissimilar vowels to assess vowel identification. Also, the Consonant–Nucleus–Consonant Word Recognition Test (Peterson & Lehiste, 1962) and sentences from the Hearing in Noise Test (Nilsson, Soli, & Sullivan, 1994) were administered. Forty CI recipients with postlingual deafness (ages 31–81 years) participated in the study. Results Acoustically similar vowels were more difficult to identify than acoustically dissimilar vowels. With increasing age, performance deteriorated when identifying acoustically similar vowels. Vowel identification was also affected by the use of a contralateral HA and the degree of residual hearing prior to implantation. Moderate correlations were found between speech perception and vowel identification performance. Conclusions Identification performance was affected by the acoustic similarity of the vowels. Older adults experienced more difficulty identifying acoustically similar confusable vowels than did younger adults. The findings might lend support to the ease of language understanding model (Ronnberg, Rudner, Foo, & Lunner, 2008), which proposes that the quality and perceptual robustness of acoustic input affects speech perception. PMID:23824440

  11. A magnetic resonance imaging study on the articulatory and acoustic speech parameters of Malay vowels

    PubMed Central

    2014-01-01

    The phonetic properties of six Malay vowels are investigated using magnetic resonance imaging (MRI) to visualize the vocal tract and obtain dynamic articulatory parameters during speech production. To resolve image blurring caused by tongue movement during the scanning process, a method based on active contour extraction is used to track tongue contours. The proposed method tracks tongue contours efficiently despite the partial blurring of the MRI images. Consequently, the articulatory parameters are measured as tongue movement is observed, and the specific shape and position of the tongue are determined for all six uttered Malay vowels. Speech rehabilitation procedures demand a visually perceivable prototype of speech articulation. To assess the validity of the measured articulatory parameters against the acoustic theory of speech production, an acoustic analysis of the vowels uttered by the subjects was performed. When the acoustic and articulatory parameters of the uttered speech were examined, a correlation between formant frequencies and articulatory parameters was observed. The experiments showed a positive correlation between the constriction location of the tongue body and the first formant frequency, as well as a negative correlation between the constriction location of the tongue tip and the second formant frequency. The results demonstrate that the proposed method is an effective tool for the dynamic study of speech production. PMID:25060583

  12. Vowel Space Characteristics of Speech Directed to Children With and Without Hearing Loss

    PubMed Central

    Wieland, Elizabeth A.; Burnham, Evamarie B.; Kondaurova, Maria; Bergeson, Tonya R.

    2015-01-01

    Purpose This study examined vowel characteristics in adult-directed (AD) and infant-directed (ID) speech to children with hearing impairment who received cochlear implants or hearing aids compared with speech to children with normal hearing. Method Mothers' AD and ID speech to children with cochlear implants (Study 1, n = 20) or hearing aids (Study 2, n = 11) was compared with mothers' speech to controls matched on age and hearing experience. The first and second formants of vowels /i/, /ɑ/, and /u/ were measured, and vowel space area and dispersion were calculated. Results In both studies, vowel space was modified in ID compared with AD speech to children with and without hearing loss. Study 1 showed larger vowel space area and dispersion in ID compared with AD speech regardless of infant hearing status. The pattern of effects of ID and AD speech on vowel space characteristics in Study 2 was similar to that in Study 1, but depended partly on children's hearing status. Conclusion Given previously demonstrated associations between expanded vowel space in ID compared with AD speech and enhanced speech perception skills, this research supports a focus on vowel pronunciation in developing intervention strategies for improving speech-language skills in children with hearing impairment. PMID:25658071

  13. Characterizing the distribution of the quadrilateral vowel space area.

    PubMed

    Berisha, Visar; Sandoval, Steven; Utianski, Rene; Liss, Julie; Spanias, Andreas

    2014-01-01

    The vowel space area (VSA) has been studied as a quantitative index of intelligibility to the extent it captures articulatory working space and reductions therein. The majority of such studies have been empirical wherein measures of VSA are correlated with perceptual measures of intelligibility. However, the literature contains minimal mathematical analysis of the properties of this metric. This paper further develops the theoretical underpinnings of this metric by presenting a detailed analysis of the statistical properties of the VSA and characterizing its distribution through the moment generating function. The theoretical analysis is confirmed by a series of experiments where empirically estimated and theoretically predicted statistics of this function are compared. The results show that on the Hillenbrand and TIMIT data, the theoretically predicted values of the higher-order statistics of the VSA match very well with the empirical estimates of the same. PMID:24437782
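
    For reference, the quadrilateral VSA itself is usually computed from the four corner vowels with the shoelace (surveyor's) formula. The sketch below is only an illustration of that calculation, with hypothetical formant values and an assumed vertex ordering; it does not reproduce the statistical analysis described in the abstract:

        # Sketch: quadrilateral vowel space area via the shoelace formula.
        # Vertices are (F1, F2) pairs supplied in order around the quadrilateral,
        # e.g. /i/ -> /ae/ -> /a/ -> /u/. Formant values are hypothetical (Hz).

        def shoelace_area(vertices):
            total = 0.0
            n = len(vertices)
            for k in range(n):
                x1, y1 = vertices[k]
                x2, y2 = vertices[(k + 1) % n]
                total += x1 * y2 - x2 * y1
            return abs(total) / 2.0

        corners = [(300, 2300),   # /i/
                   (700, 1700),   # /ae/
                   (750, 1100),   # /a/
                   (320,  800)]   # /u/
        print(round(shoelace_area(corners)), "Hz^2")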

  14. Characterizing the distribution of the quadrilateral vowel space area

    PubMed Central

    Berisha, Visar; Sandoval, Steven; Utianski, Rene; Liss, Julie; Spanias, Andreas

    2014-01-01

    The vowel space area (VSA) has been studied as a quantitative index of intelligibility to the extent it captures articulatory working space and reductions therein. The majority of such studies have been empirical wherein measures of VSA are correlated with perceptual measures of intelligibility. However, the literature contains minimal mathematical analysis of the properties of this metric. This paper further develops the theoretical underpinnings of this metric by presenting a detailed analysis of the statistical properties of the VSA and characterizing its distribution through the moment generating function. The theoretical analysis is confirmed by a series of experiments where empirically estimated and theoretically predicted statistics of this function are compared. The results show that on the Hillenbrand and TIMIT data, the theoretically predicted values of the higher-order statistics of the VSA match very well with the empirical estimates of the same. PMID:24437782

  15. Second language vowel training using vowel subsets: Order of training and choice of contrasts

    NASA Astrophysics Data System (ADS)

    Nishi, Kanae; Kewley-Port, Diane

    2005-09-01

    Our previous vowel training study for Japanese learners of American English [J. Acoust. Soc. Am. 117, 2401 (2005)] compared training for two vowel subsets: nine vowels covering the entire vowel space (9V condition) and the three more difficult vowels (3V condition). Trainees in the 9V condition improved on all vowels, but their identification of the three more difficult vowels was lower than that of the 3V trainees. Trainees in the 3V condition improved identification of the three trained vowels but not the other vowels. To further explore more effective training protocols, the present study compared two groups of native Korean trainees using two different training orders for the two vowel subsets: 3V then 9V (3V-9V) and 9V then 3V (9V-3V). The groups were compared in terms of their performance on all nine vowels for pre-, mid-, and post-test scores. Average test scores did not differ between the two groups. A closer examination indicated that group 3V-9V did not improve on one of the three more difficult vowels, whereas group 9V-3V improved on all three vowels, indicating the importance of training subset order. [Work supported by NIH DC-006313 and DC-02229.]

  16. Emotions in freely varying and mono-pitched vowels, acoustic and EGG analyses.

    PubMed

    Waaramaa, Teija; Palo, Pertti; Kankare, Elina

    2015-12-01

    Vocal emotions are expressed either by speech or singing. The difference is that in singing the pitch is predetermined while in speech it may vary freely. It was of interest to study whether there were voice quality differences between freely varying and mono-pitched vowels expressed by professional actors. Given their profession, actors have to be able to express emotions both by speech and singing. Electroglottogram and acoustic analyses of emotional utterances embedded in expressions of freely varying vowels [a:], [i:], [u:] (96 samples) and mono-pitched protracted vowels (96 samples) were conducted. Contact quotient (CQEGG) was calculated using 35%, 55%, and 80% threshold levels. Three different threshold levels were used in order to evaluate their effects on emotions. Genders were studied separately. The results suggested significant gender differences at the 80% CQEGG threshold level. SPL, CQEGG, and F4 were used to convey emotions, but to a lesser degree, when F0 was predetermined. Moreover, females showed fewer significant variations than males. Both genders used a more hypofunctional phonation type in mono-pitched utterances than in the expressions with freely varying pitch. The present material warrants further study of the interplay between CQEGG threshold levels and formant frequencies, and listening tests to investigate the perceptual value of the mono-pitched vowels in the communication of emotions. PMID:24998780
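
    As a rough illustration of the criterion-level contact quotient mentioned above, the sketch below computes the fraction of one EGG cycle lying above a threshold placed at 35%, 55%, or 80% of the cycle's amplitude range. The waveform and the exact threshold definition are assumptions for illustration, not the authors' procedure:

        # Sketch: contact quotient (CQ) of a single EGG cycle at a criterion level,
        # taken here (an assumption) as the fraction of the cycle during which the
        # amplitude exceeds min + level * (max - min). The synthetic cycle below
        # merely stands in for one period of a real electroglottographic signal.
        import numpy as np

        def contact_quotient(cycle, level):
            cycle = np.asarray(cycle, dtype=float)
            threshold = cycle.min() + level * (cycle.max() - cycle.min())
            return float(np.mean(cycle > threshold))

        t = np.linspace(0.0, 1.0, 200, endpoint=False)
        egg_cycle = np.sin(2 * np.pi * t) ** 3   # toy, roughly pulse-shaped waveform

        for level in (0.35, 0.55, 0.80):
            print(f"CQ at {int(level * 100)}% threshold:",
                  round(contact_quotient(egg_cycle, level), 3))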

  17. English Vowel Spaces Produced by Japanese Speakers: The Smaller Point Vowels' and the Greater Schwas'

    ERIC Educational Resources Information Center

    Tomita, Kaoru; Yamada, Jun; Takatsuka, Shigenobu

    2010-01-01

    This study investigated how Japanese-speaking learners of English pronounce the three point vowels /i/, /u/, and /a/ appearing in the first and second monosyllabic words of English noun phrases, and the schwa /ə/ appearing in English disyllabic words. First and second formant (F1 and F2) values were measured for four Japanese…

  18. Characteristics of the Lax Vowel Space in Dysarthria

    ERIC Educational Resources Information Center

    Tjaden, Kris; Rivera, Deanna; Wilding, Gregory; Turner, Greg S.

    2005-01-01

    It has been hypothesized that lax vowels may be relatively unaffected by dysarthria, owing to the reduced vocal tract shapes required for these phonetic events (G. S. Turner, K. Tjaden, & G. Weismer, 1995). It also has been suggested that lax vowels may be especially susceptible to speech mode effects (M. A. Picheny, N. I. Durlach, & L. D. Braida,…

  19. Phonetic Modification of Vowel Space in Storybook Speech to Infants up to 2 Years of Age

    PubMed Central

    Burnham, Evamarie B.; Wieland, Elizabeth A.; Kondaurova, Maria V.; McAuley, J. Devin; Bergeson, Tonya R.

    2015-01-01

    Purpose A large body of literature has indicated vowel space area expansion in infant-directed (ID) speech compared with adult-directed (AD) speech, which may promote language acquisition. The current study tested whether this expansion occurs in storybook speech read to infants at various points during their first 2 years of life. Method In 2 studies, mothers read a storybook containing target vowels in ID and AD speech conditions. Study 1 was longitudinal, with 11 mothers recorded when their infants were 3, 6, and 9 months old. Study 2 was cross-sectional, with 48 mothers recorded when their infants were 3, 9, 13, or 20 months old (n = 12 per group). The 1st and 2nd formants of vowels /i/, /ɑ/, and /u/ were measured, and vowel space area and dispersion were calculated. Results Across both studies, 1st and/or 2nd formant frequencies shifted systematically for /i/ and /u/ vowels in ID compared with AD speech. No difference in vowel space area or dispersion was found. Conclusions The results suggest that a variety of communication and situational factors may affect phonetic modifications in ID speech, but that vowel space characteristics in speech to infants stay consistent across the first 2 years of life. PMID:25659121

  20. Classification of stop place in consonant-vowel contexts using feature extrapolation of acoustic-phonetic features in telephone speech.

    PubMed

    Lee, Jung-Won; Choi, Jeung-Yoon; Kang, Hong-Goo

    2012-02-01

    Knowledge-based speech recognition systems extract acoustic cues from the signal to identify speech characteristics. For channel-deteriorated telephone speech, acoustic cues, especially those for stop consonant place, are expected to be degraded or absent. To investigate the use of knowledge-based methods in degraded environments, feature extrapolation of acoustic-phonetic features based on Gaussian mixture models is examined. This process is applied to a stop place detection module that uses burst release and vowel onset cues for consonant-vowel tokens of English. Results show that classification performance is enhanced in telephone channel-degraded speech, with extrapolated acoustic-phonetic features reaching or exceeding performance using estimated Mel-frequency cepstral coefficients (MFCCs). Results also show acoustic-phonetic features may be combined with MFCCs for best performance, suggesting these features provide information complementary to MFCCs. PMID:22352523

  1. Comparing vowel perception and production in Spanish and Portuguese: European versus Latin American dialects.

    PubMed

    Chládková, Kateřina; Escudero, Paola

    2012-02-01

    Recent acoustic descriptions have shown that Spanish and Portuguese vowels are produced differently in Europe and Latin America. The present study investigates whether comparable between-variety differences exist in vowel perception. Spanish, Peruvian, Portuguese, and Brazilian listeners were tested in a vowel identification task with stimuli sampled from the whole vowel space. The mean perceived first (F1) and second formant (F2) of every vowel category were compared across varieties. For both languages, perception exhibited the same between-variety differences as production for F1 but not F2, which suggests correspondence between produced F1 and perceived vowel height but not between F2 and frontness. PMID:22352610

  2. Vowel Acoustics in Parkinson's Disease and Multiple Sclerosis: Comparison of Clear, Loud, and Slow Speaking Conditions

    ERIC Educational Resources Information Center

    Tjaden, Kris; Lam, Jennifer; Wilding, Greg

    2013-01-01

    Purpose: The impact of clear speech, increased vocal intensity, and rate reduction on acoustic characteristics of vowels was compared in speakers with Parkinson's disease (PD), speakers with multiple sclerosis (MS), and healthy controls. Method: Speakers read sentences in habitual, clear, loud, and slow conditions. Variations in clarity,…

  3. How Native Do They Sound? An Acoustic Analysis of the Spanish Vowels of Elementary Spanish Immersion Students

    ERIC Educational Resources Information Center

    Menke, Mandy R.

    2015-01-01

    Language immersion students' lexical, syntactic, and pragmatic competencies are well documented, yet their phonological skill has remained relatively unexplored. This study investigates the Spanish vowel productions of a cross-sectional sample of 35 one-way Spanish immersion students. Learner productions were analyzed acoustically and compared to…

  4. Recognition of Emotions in Mexican Spanish Speech: An Approach Based on Acoustic Modelling of Emotion-Specific Vowels

    PubMed Central

    Caballero-Morales, Santiago-Omar

    2013-01-01

    An approach for the recognition of emotions in speech is presented. The target language is Mexican Spanish, and for this purpose a speech database was created. The approach consists of the phoneme acoustic modelling of emotion-specific vowels. For this, a standard phoneme-based Automatic Speech Recognition (ASR) system was built with Hidden Markov Models (HMMs), where different phoneme HMMs were built for the consonants and for the emotion-specific vowels associated with four emotional states (anger, happiness, neutral, sadness). The emotional state of a spoken sentence is then estimated by counting the number of emotion-specific vowels found in the ASR output for that sentence. With this approach, an accuracy of 87–100% was achieved for recognizing the emotional state of Mexican Spanish speech. PMID:23935410

  5. Phonetic Modification of Vowel Space in Storybook Speech to Infants up to 2 Years of Age

    ERIC Educational Resources Information Center

    Burnham, Evamarie B.; Wieland, Elizabeth A.; Kondaurova, Maria V.; McAuley, J. Devin; Bergeson, Tonya R.; Dilley, Laura C.

    2015-01-01

    Purpose: A large body of literature has indicated vowel space area expansion in infant-directed (ID) speech compared with adult-directed (AD) speech, which may promote language acquisition. The current study tested whether this expansion occurs in storybook speech read to infants at various points during their first 2 years of life. Method: In 2…

  6. Pitch (F0) and formant profiles of human vowels and vowel-like baboon grunts: The role of vocalizer body size and voice-acoustic allometry

    NASA Astrophysics Data System (ADS)

    Rendall, Drew; Kollias, Sophie; Ney, Christina; Lloyd, Peter

    2005-02-01

    Key voice features, fundamental frequency (F0) and formant frequencies, can vary extensively between individuals. Much of the variation can be traced to differences in the size of the larynx and vocal-tract cavities, but whether these differences in turn simply reflect differences in speaker body size (i.e., neutral vocal allometry) remains unclear. Quantitative analyses were therefore undertaken to test the relationship between speaker body size and voice F0 and formant frequencies for human vowels. To test the taxonomic generality of the relationships, the same analyses were conducted on the vowel-like grunts of baboons, whose phylogenetic proximity to humans and similar vocal production biology and voice acoustic patterns recommend them for such comparative research. For adults of both species, males were larger than females and had lower mean voice F0 and formant frequencies. However, beyond this, F0 variation did not track body-size variation between the sexes in either species, nor within sexes in humans. In humans, formant variation correlated significantly with speaker height but only in males and not in females. Implications for general vocal allometry are discussed, as are implications for speech origins theories, and challenges to them, related to laryngeal position and vocal tract length.

  7. An acoustic investigation of the Cantonese vowels in the speech of the adult and child speakers

    NASA Astrophysics Data System (ADS)

    Lee, Wai-Sum

    2005-04-01

    The study analyzes the formant center frequencies for the seven Cantonese vowels [i, y, u, ɛ, æ, ɔ, a] from 30 native speakers of Cantonese, 10 male and 10 female adults and 5 male and 5 female 9-10 year old children. Results show that the formant frequencies for the vowels are largest for the female children, followed by the male children, female adults, and male adults in decreasing order. Despite the differences, the patterns of formant frequencies for any one vowel for the different groups are similar. The difference in F-values for any one vowel between the male and female children is smaller than the difference between the male and female adults. As for individual formant frequencies, the difference in F1 between the males and females of the same age group and between the adults and children of the same gender group is smaller for the high vowels [i, y, u] than the non-high vowels [ɛ, æ, ɔ, a]. The difference in F2 between the males and females of the same age group and between the adults and children of the same gender group is smaller for the high rounded vowels [y, u] than the other vowels. The paper will also present the ratios of speaker group-to-speaker group for individual formant frequencies.

  8. A Neural Substrate for Rapid Timbre Recognition? Neural and Behavioral Discrimination of Very Brief Acoustic Vowels.

    PubMed

    Occelli, F; Suied, C; Pressnitzer, D; Edeline, J-M; Gourévitch, B

    2016-06-01

    The timbre of a sound plays an important role in our ability to discriminate between behaviorally relevant auditory categories, such as different vowels in speech. Here, we investigated, in the primary auditory cortex (A1) of anesthetized guinea pigs, the neural representation of vowels with impoverished timbre cues. Five different vowels were presented with durations ranging from 2 to 128 ms. A psychophysical experiment involving human listeners showed that identification performance was near ceiling for the longer durations and degraded close to chance level for the shortest durations. This was likely due to spectral splatter, which reduced the contrast between the spectral profiles of the vowels at short durations. Effects of vowel duration on cortical responses were well predicted by the linear frequency responses of A1 neurons. Using mutual information, we found that auditory cortical neurons in the guinea pig could be used to reliably identify several vowels for all durations. Information carried by each cortical site was low on average, but the population code was accurate even for durations where human behavioral performance was poor. These results suggest that a place population code is available at the level of A1 to encode spectral profile cues for even very short sounds. PMID:25947234

  9. Effects of Long-Term Tracheostomy on Spectral Characteristics of Vowel Production.

    ERIC Educational Resources Information Center

    Kamen, Ruth Saletsky; Watson, Ben C.

    1991-01-01

    Eight preschool children who underwent tracheotomy during the prelingual period were compared to matched controls on a variety of speech measures. Children with tracheotomies showed reduced acoustic vowel space, suggesting they were limited in their ability to produce extreme vocal tract configurations for vowels postdecannulation. Oral motor…

  10. Auditory and categorical effects on cross-language vowel perception.

    PubMed

    Flege, J E; Munro, M J; Fox, R A

    1994-06-01

    English monolinguals and native Spanish speakers of English rated the dissimilarity of tokens of two Spanish vowel categories, two English vowel categories, or one Spanish and one English vowel category. The dissimilarity ratings of experienced and inexperienced Spanish subjects did not differ significantly. For both the native Spanish and English subjects, perceived dissimilarity increased as the distance between vowels in an F1-F2 acoustic space increased. This supported the existence of a universal, sensory-based component in cross-language vowel perception. The native English and Spanish subjects' ratings were comparable for pairs made up of vowels that were distant in an F1-F2 space, but not for pairs made up of vowels from categories that were adjacent in an F1-F2 space. The inference that the differential classification of a pair of vowels augments perceived dissimilarity was supported by the results of experiment 2, where subjects rated pairs of vowels and participated in an oddity discrimination task. Triads in the oddity task were made up of tokens of vowel categories that were either adjacent (e.g., /a/-/ae/-/a/) or nonadjacent (e.g., /a/-/i/-/i/) in an F1-F2 space. The native English subjects' discrimination was better than the native Spanish subjects' for adjacent but not nonadjacent triads. The better the Spanish subjects performed on adjacent triads--and thus the more likely they were to have differentially classified the two phonetically distinct vowels in the triad--the more dissimilar they had earlier judged realizations of those two categories to be when presented in pairs. Results are discussed in terms of their implications for second language acquisition. PMID:8046152

  11. Impact of the LSVT on vowel articulation and coarticulation in Parkinson's disease.

    PubMed

    Sauvageau, Vincent Martel; Roy, Johanna-Pascale; Langlois, Mélanie; Macoir, Joël

    2015-06-01

    The purpose of this study was to investigate the impact of the Lee Silverman Voice Treatment (LSVT®) on vowel articulation and consonant-vowel (C-V) coarticulation in dysarthric speakers with Parkinson's disease (PD). Nine Quebec French speakers diagnosed with idiopathic PD underwent the LSVT®. Speech characteristics were compared before and after treatment. Vowel articulation was measured using acoustic vowel space and calculated with the first (F1) and second formant (F2) of the vowels /i/, /u/ and /a/. C-V coarticulation was measured using locus equations, an acoustic metric based on the F2 transitions within vowels in relation to the preceding consonant. The relationship between these variables, speech loudness and vowel duration was also analysed. Results showed that vowel contrast increased in F1/F2 acoustic space after administration of the LSVT®. This improvement was associated with the gain in speech loudness and longer vowel duration. C-V coarticulation patterns between consonant contexts showed greater distinctiveness after the treatment. This improvement was associated with the gain in speech loudness only. These results support the conclusions of previous studies investigating the relationship between the LSVT®, speech loudness and articulation in PD. These results expand clinical understanding of the treatment and indicate that loud speech changes C-V coarticulation patterns. Clinical applications and theoretical considerations are discussed. PMID:25688915
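
    Locus equations, as referred to above, are linear fits of F2 at vowel onset against F2 at the vowel midpoint across different vowel contexts for a given consonant; the slope is often read as an index of C-V coarticulation. A minimal sketch of such a fit, using hypothetical formant values rather than the study's data, might be:

        # Sketch: a locus equation regresses F2 at vowel onset on F2 at the vowel
        # midpoint across several vowel contexts for one consonant. The slope is a
        # common index of C-V coarticulation. Data below are hypothetical (Hz).
        import numpy as np

        f2_midpoint = np.array([2200, 1800, 1400, 1100, 900], dtype=float)
        f2_onset    = np.array([1900, 1700, 1450, 1250, 1150], dtype=float)

        slope, intercept = np.polyfit(f2_midpoint, f2_onset, 1)
        print(f"locus equation: F2_onset = {slope:.2f} * F2_mid + {intercept:.0f} Hz")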

  12. The influence of sexual orientation on vowel production (L)

    NASA Astrophysics Data System (ADS)

    Pierrehumbert, Janet B.; Bent, Tessa; Munson, Benjamin; Bradlow, Ann R.; Bailey, J. Michael

    2004-10-01

    Vowel production in gay, lesbian, bisexual (GLB), and heterosexual speakers was examined. Differences in the acoustic characteristics of vowels were found as a function of sexual orientation. Lesbian and bisexual women produced less fronted /u/ and /ɑ/ than heterosexual women. Gay men produced a more expanded vowel space than heterosexual men. However, the vowels of GLB speakers were not generally shifted toward vowel patterns typical of the opposite sex. These results are inconsistent with the conjecture that innate biological factors have a broadly feminizing influence on the speech of gay men and a broadly masculinizing influence on the speech of lesbian/bisexual women. They are consistent with the idea that innate biological factors influence GLB speech patterns indirectly by causing selective adoption of certain speech patterns characteristic of the opposite sex.

  13. Effects of Age on Concurrent Vowel Perception in Acoustic and Simulated Electroacoustic Hearing

    ERIC Educational Resources Information Center

    Arehart, Kathryn H.; Souza, Pamela E.; Muralimanohar, Ramesh Kumar; Miller, Christi Wise

    2011-01-01

    Purpose: In this study, the authors investigated the effects of age on the use of fundamental frequency differences (ΔF0) in the perception of competing synthesized vowels in simulations of electroacoustic and cochlear-implant hearing. Method: Twelve younger listeners with normal hearing and 13 older listeners with (near) normal…

  14. The Effects of Inventory on Vowel Perception in French and Spanish: An MEG Study

    ERIC Educational Resources Information Center

    Hacquard, Valentine; Walter, Mary Ann; Marantz, Alec

    2007-01-01

    Production studies have shown that speakers of languages with larger phoneme inventories expand their acoustic space relative to languages with smaller inventories [Bradlow, A. (1995). A comparative acoustic study of English and Spanish vowels. "Journal of the Acoustical Society of America," 97(3), 1916-1924; Jongman, A., Fourakis, M., & Sereno,…

  15. Vowel identification by younger and older listeners: relative effectiveness of vowel edges and vowel centers.

    PubMed

    Donaldson, Gail S; Talmage, Elizabeth K; Rogers, Catherine L

    2010-09-01

    Young normal-hearing (YNH) and older normal-hearing (ONH) listeners identified vowels in naturally produced /bVb/ syllables and in modified syllables that consisted of variable portions of the vowel edges (silent-center [SC] stimuli) or vowel center (center-only [CO] stimuli). Listeners achieved high levels of performance for all but the shortest stimuli, indicating that they were able to access vowel cues throughout the syllable. ONH listeners performed similarly to YNH listeners for most stimuli, but performed more poorly for the shortest CO stimuli. SC and CO stimuli were equally effective in supporting vowel identification except when acoustic information was limited to 20 ms. PMID:20815425

  16. Durations of American English vowels by native and non-native speakers: acoustic analyses and perceptual effects.

    PubMed

    Liu, Chang; Jin, Su-Hyun; Chen, Chia-Tsen

    2014-06-01

    The goal of this study was to examine durations of American English vowels produced by English-, Chinese-, and Korean-native speakers and the effects of vowel duration on vowel intelligibility. Twelve American English vowels were recorded in the /hVd/ phonetic context by native speakers and non-native speakers. The English vowel duration patterns as a function of vowel produced by non-native speakers were generally similar to those produced by native speakers. These results imply that using duration differences across vowels may be an important strategy for non-native speakers' production before they are able to employ spectral cues to produce and perceive English speech sounds. In the intelligibility experiment, vowels were selected from 10 native and non-native speakers and vowel durations were equalized at 170 ms. Intelligibility of vowels with original and equalized durations was evaluated by American English native listeners. Results suggested that vowel intelligibility of native and non-native speakers degraded slightly by 3-8% when durations were equalized, indicating that vowel duration plays a minor role in vowel intelligibility. PMID:25102608

  17. Effects of head geometry simplifications on acoustic radiation of vowel sounds based on time-domain finite-element simulations.

    PubMed

    Arnela, Marc; Guasch, Oriol; Alías, Francesc

    2013-10-01

    One of the key effects to model in voice production is that of acoustic radiation of sound waves emanating from the mouth. Three-dimensional numerical simulations account for this effect naturally and allow all geometrical head details to be considered by extending the computational domain out of the vocal tract. Despite this advantage, the head geometry is often simplified for convenience, and impedance load models are still used to reduce the computational cost. In this work, the impact of some of these simplifications on radiation effects is examined for vowel production in the frequency range 0-10 kHz by comparison with radiation from a realistic head. As a result, recommendations are given on their validity, depending on whether high-frequency energy (above 5 kHz) needs to be taken into account. PMID:24116430

  18. Acoustic correlates of caller identity and affect intensity in the vowel-like grunt vocalizations of baboons

    NASA Astrophysics Data System (ADS)

    Rendall, Drew

    2003-06-01

    Comparative, production-based research on animal vocalizations can allow assessments of continuity in vocal communication processes across species, including humans, and may aid in the development of general frameworks relating specific constitutional attributes of callers to acoustic-structural details of their vocal output. Analyses were undertaken on vowel-like baboon grunts to examine variation attributable to caller identity and the intensity of the affective state underlying call production. Six hundred six grunts from eight adult females were analyzed. Grunts derived from 128 bouts of calling in two behavioral contexts: concerted group movements and social interactions involving mothers and their young infants. Each context was subdivided into a high- and low-arousal condition. Thirteen acoustic features variously predicted to reflect variation in either caller identity or arousal intensity were measured for each grunt bout, including tempo-, source- and filter-related features. Grunt bouts were highly individually distinctive, differing in a variety of acoustic dimensions but with some indication that filter-related features contributed disproportionately to individual distinctiveness. In contrast, variation according to arousal condition was associated primarily with tempo- and source-related features, many matching those identified as vehicles of affect expression in other nonhuman primate species and in human speech and other nonverbal vocal signals.

  19. A longitudinal study of very young children's vowel production

    PubMed Central

    McGowan, Rebecca W.; McGowan, Richard S.; Denny, Margaret; Nittrouer, Susan

    2014-01-01

    Purpose Ecologically realistic, spontaneous, adult-directed longitudinal speech data from young children were described by acoustic analyses. Method The first two formant frequencies of vowels produced by six children from different American English dialect regions were analyzed from ages 18 to 48 months. The vowels were from largely conversational contexts and were classified according to dictionary pronunciation. Results Within-subject formant frequency variability remained relatively constant for the span of ages studied here. It was often difficult to detect overall decreases in the first two formant frequencies between the ages of 30 and 48 months. A study of the movement of the corner vowels with respect to the vowel centroid showed that the shape of the vowel space remained qualitatively constant from 30 through 48 months. Conclusions The shape of the vowel space is established early in life. Some aspects of regional dialect were observed in some of the subjects at 42 months of age. The present paper adds to the existing data on the development of vowel spaces by describing ecologically realistic speech. PMID:24687464

  20. Acoustical analysis of Canadian French word-final vowels in varying phonetic contexts.

    PubMed

    Law, Franzo; Strange, Winifred

    2015-07-01

    This study analyzed Canadian French (CF) vowels /i y ø e ɛ o u a/ in word-final position. Of particular interest was the stability of /e-ɛ/; although some dialects of French have merged /e-ɛ/ to /e/ in word-final context, this contrast is maintained in CF. The present study investigated the stability of this contrast in various preceding phonetic contexts and in lexical vs morphological contrasts. Results showed that the contrast was maintained by all four speakers, although to varying degrees. PMID:26233064

  1. Acoustical analysis of Canadian French word-final vowels in varying phonetic contexts

    PubMed Central

    Law, Franzo; Strange, Winifred

    2015-01-01

    This study analyzed Canadian French (CF) vowels /i y ø e ɛ o u a/ in word-final position. Of particular interest was the stability of /e-ɛ/; although some dialects of French have merged /e-ɛ/ to /e/ in word-final context, this contrast is maintained in CF. The present study investigated the stability of this contrast in various preceding phonetic contexts and in lexical vs morphological contrasts. Results showed that the contrast was maintained by all four speakers, although to varying degrees. PMID:26233064

  2. A cross-dialectal acoustic comparison of vowels in Northern and Southern British English.

    PubMed

    Williams, Daniel; Escudero, Paola

    2014-11-01

    This study compares the duration and first two formants (F1 and F2) of 11 nominal monophthongs and five nominal diphthongs in Standard Southern British English (SSBE) and a Northern English dialect. F1 and F2 trajectories were fitted with parametric curves using the discrete cosine transform (DCT) and the zeroth DCT coefficient represented formant trajectory means and the first DCT coefficient represented the magnitude and direction of formant trajectory change to characterize vowel inherent spectral change (VISC). Cross-dialectal comparisons involving these measures revealed significant differences for the phonologically back monophthongs /ɒ, ɔː, ʊ, uː/ and also /зː/ and the diphthongs /eɪ, əʊ, aɪ, ɔɪ/. Most cross-dialectal differences are in zeroth DCT coefficients, suggesting formant trajectory means tend to characterize such differences, while first DCT coefficient differences were more numerous for diphthongs. With respect to VISC, the most striking differences are that /uː/ is considerably more diphthongized in the Northern dialect and that the F2 trajectory of /əʊ/ proceeds in opposite directions in the two dialects. Cross-dialectal differences were found to be largely unaffected by the consonantal context in which the vowels were produced. The implications of the results are discussed in relation to VISC, consonantal context effects and speech perception. PMID:25373975
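
    The DCT parameterization used in this study can be sketched as follows. The exact DCT convention is not stated in the abstract, so the sketch assumes a DCT-II with orthonormal scaling, under which the zeroth coefficient is proportional to the trajectory mean and the first coefficient captures the size and direction of overall formant movement; the F2 contour is hypothetical:

        # Sketch: DCT parameterization of a formant trajectory. With the assumed
        # DCT-II / "ortho" convention, coefficient 0 is proportional to the
        # trajectory mean and coefficient 1 reflects the magnitude and direction of
        # overall change (a rising contour gives a negative first coefficient here).
        import numpy as np
        from scipy.fft import dct

        f2_trajectory = np.array([ 950, 1000, 1080, 1180, 1290,
                                  1400, 1500, 1580, 1640, 1680], dtype=float)  # Hz

        coeffs = dct(f2_trajectory, type=2, norm="ortho")
        n = len(f2_trajectory)

        print("trajectory mean (Hz):      ", round(float(f2_trajectory.mean()), 1))
        print("DCT coeff 0 / sqrt(N) (Hz):", round(float(coeffs[0] / np.sqrt(n)), 1))
        print("DCT coeff 1:               ", round(float(coeffs[1]), 1))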

  3. Hearing impairment and vowel production. A comparison between normally hearing, hearing-aided and cochlear implanted Dutch children.

    PubMed

    Verhoeven, Jo; Hide, Oydis; De Maeyer, Sven; Gillis, San; Gillis, Steven

    2016-01-01

    This study investigated the acoustic characteristics of the Belgian Standard Dutch vowels in children with hearing impairment and in children with normal hearing. In a balanced experimental design, the 12 vowels of Belgian Standard Dutch were recorded in three groups of children: a group of children with normal hearing, a group with a conventional hearing aid and a group with a cochlear implant. The formants, the surface area of the vowel space and the acoustic differentiation between the vowels were determined. The analyses revealed that many of the vowels in hearing-impaired children showed a reduction of the formant values. This reduction was particularly significant with respect to F2. The size of the vowel space was significantly smaller in the hearing-impaired children. Finally, a smaller acoustic differentiation between the vowels was observed in children with hearing impairment. The results show that even after 5 years of device use, the acoustic characteristics of the vowels in hearing-assisted children remain significantly different from those of their normal-hearing peers. PMID:26629749

  4. Effects of repeated production on vowel distinctiveness within nonwords

    PubMed Central

    Sasisekaran, Jayanthi; Munson, Benjamin

    2012-01-01

    In the present study, the acoustic distinctiveness of vowels within nonwords was investigated across repeated productions. Participants were 9 males and 15 females divided into two groups. Participants repeated 6 nonwords varying in phonemic composition. For each production of a nonword, the mean Euclidean distance (MED) of its vowels from the center of the F1/F2 space was calculated. Changes in MED with repeated production were analyzed using linear mixed-effects regression. Results revealed an increase in MED, indicating greater vowel dispersion, for the three-syllable nonwords, and a significant decrease (i.e., greater vowel reduction) for the six-syllable nonwords. The findings support a dynamic influence of sublexical processes on phonetic realization in speech production. PMID:22502490
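
    A bare-bones version of the MED measure described above might look like the sketch below; the formant values are hypothetical, and defining the centre of the F1/F2 space as the mean of the plotted vowels is an illustrative assumption:

        # Sketch: mean Euclidean distance (MED) of the vowels in one production of a
        # nonword from the centre of the F1/F2 space, a simple dispersion measure.
        # Formant values are hypothetical (Hz); here the centre is simply the mean
        # of the plotted vowels, which is an illustrative assumption.
        import numpy as np

        vowels = np.array([[320, 2250],   # (F1, F2) of each vowel in the production
                           [700, 1300],
                           [350,  870]], dtype=float)

        centre = vowels.mean(axis=0)
        med = float(np.mean(np.linalg.norm(vowels - centre, axis=1)))
        print("MED:", round(med, 1), "Hz")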

  5. Articulatory Changes in Muscle Tension Dysphonia: Evidence of Vowel Space Expansion Following Manual Circumlaryngeal Therapy

    ERIC Educational Resources Information Center

    Roy, Nelson; Nissen, Shawn L.; Dromey, Christopher; Sapir, Shimon

    2009-01-01

    In a preliminary study, we documented significant changes in formant transitions associated with successful manual circumlaryngeal treatment (MCT) of muscle tension dysphonia (MTD), suggesting improvement in speech articulation. The present study explores further the effects of MTD on vowel articulation by means of additional vowel acoustic…

  6. Production and perception of French vowels by congenitally blind adults and sighted adults.

    PubMed

    Ménard, Lucie; Dupont, Sophie; Baum, Shari R; Aubin, Jérôme

    2009-09-01

    The goal of this study is to investigate the production and perception of French vowels by blind and sighted speakers. 12 blind adults and 12 sighted adults served as subjects. The auditory-perceptual abilities of each subject were evaluated by discrimination tests (AXB). At the production level, ten repetitions of the ten French oral vowels were recorded. Formant values and fundamental frequency values were extracted from the acoustic signal. Measures of contrasts between vowel categories were computed and compared for each feature (height, place of articulation, roundedness) and group (blind, sighted). The results reveal a significant effect of group (blind vs sighted) on production, with sighted speakers producing vowels that are spaced further apart in the vowel space than those of blind speakers. A group effect emerged for a subset of the perceptual contrasts examined, with blind speakers having higher peak discrimination scores than sighted speakers. Results suggest an important role of visual input in determining speech goals. PMID:19739754

  7. Vowel Development in an Emergent Mandarin-English Bilingual Child: A Longitudinal Study

    ERIC Educational Resources Information Center

    Yang, Jing; Fox, Robert A.; Jacewicz, Ewa

    2015-01-01

    This longitudinal case study documents the emergence of bilingualism in a young monolingual Mandarin boy on the basis of an acoustic analysis of his vowel productions recorded via a picture-naming task over 20 months following his enrollment in an all-English (L2) preschool at the age of 3;7. The study examined (1) his initial L2 vowel space, (2)…

  8. Acoustic rainbow trapping by coiling up space.

    PubMed

    Ni, Xu; Wu, Ying; Chen, Ze-Guo; Zheng, Li-Yang; Xu, Ye-Long; Nayar, Priyanka; Liu, Xiao-Ping; Lu, Ming-Hui; Chen, Yan-Feng

    2014-01-01

    We numerically realize the acoustic rainbow trapping effect by tapping an air waveguide with space-coiling metamaterials. Due to the high refractive index of the space-coiling metamaterials, our device is more compact than previously reported trapped-rainbow devices. A numerical model utilizing effective parameters is also calculated, whose results agree well with the direct numerical simulation of the space-coiling structure. Moreover, such a device, with the capability of dropping different frequency components of a broadband incident temporal acoustic signal into different channels, can function as an acoustic wavelength division de-multiplexer. These results may have potential applications in acoustic device design, such as an acoustic filter and an artificial cochlea. PMID:25392033

  9. Acoustic rainbow trapping by coiling up space

    PubMed Central

    Ni, Xu; Wu, Ying; Chen, Ze-Guo; Zheng, Li-Yang; Xu, Ye-Long; Nayar, Priyanka; Liu, Xiao-Ping; Lu, Ming-Hui; Chen, Yan-Feng

    2014-01-01

    We numerically realize the acoustic rainbow trapping effect by tapping an air waveguide with space-coiling metamaterials. Due to the high refractive index of the space-coiling metamaterials, our device is more compact than previously reported trapped-rainbow devices. A numerical model utilizing effective parameters is also calculated, whose results agree well with the direct numerical simulation of the space-coiling structure. Moreover, such a device, with the capability of dropping different frequency components of a broadband incident temporal acoustic signal into different channels, can function as an acoustic wavelength division de-multiplexer. These results may have potential applications in acoustic device design, such as an acoustic filter and an artificial cochlea. PMID:25392033

  10. Vowel acquisition by prelingually deaf children with cochlear implants

    NASA Astrophysics Data System (ADS)

    Bouchard, Marie-Eve; Le Normand, Marie-Thérèse; Ménard, Lucie; Goud, Marilyne; Cohen, Henri

    2001-05-01

    Phonetic transcriptions (study 1) and acoustic analysis (study 2) were used to clarify the nature and rhythm of vowel acquisition following the cochlear implantation of prelingually deaf children. In the first study, seven children were divided according to their degree of hearing loss (DHL): DHL I: 90-100 dB of hearing loss, 1 child; DHL II: 100-110 dB, 3 children; and DHL III: over 110 dB, 3 children. Spontaneous speech productions were recorded and videotaped 6 and 12 months postsurgery, and vowel inventories were obtained by listing all vowels that occurred at least twice in the child's repertoire at the time of recording. Results showed that degree of hearing loss and age at implantation have a significant impact on vowel acquisition. Indeed, DHL I and II children demonstrated a more diversified as well as a more typical pattern of acquisition. In the second study, the values of the first and second formants were extracted. The results suggest evolving use of the acoustic space, reflecting the use of auditory feedback to produce the three phonological features exploited to contrast French vowels (height, place of articulation, and rounding). The possible influence of visual feedback before cochlear implantation is discussed.

  11. The Acoustic Characteristics of Diphthongs in Indian English

    ERIC Educational Resources Information Center

    Maxwell, Olga; Fletcher, Janet

    2010-01-01

    This paper presents the results of an acoustic analysis of English diphthongs produced by three L1 speakers of Hindi and four L1 speakers of Punjabi. Formant trajectories of rising and falling diphthongs (i.e., vowels where there is a clear rising or falling trajectory through the F1/F2 vowel space) were analysed in a corpus of citation-form…

  12. Comparing Deaf and Hearing Dutch Infants: Changes in the Vowel Space in the First 2 Years

    ERIC Educational Resources Information Center

    van der Stelt, Jeannette M.; Wempe, Ton G.; Pols, Louis C. W.

    2008-01-01

    The influence of the mother tongue on vowel productions in infancy is different for deaf and hearing babies. Audio material of five hearing and five deaf infants acquiring Dutch was collected monthly from month 5-18, and at 24 months. Fifty unlabelled utterances were digitized for each recording. This study focused on developmental paths in vowel…

  13. Malaysian English: An Instrumental Analysis of Vowel Contrasts

    ERIC Educational Resources Information Center

    Pillai, Stefanie; Don, Zuraidah Mohd.; Knowles, Gerald; Tang, Jennifer

    2010-01-01

    This paper makes an instrumental analysis of English vowel monophthongs produced by 47 female Malaysian speakers. The focus is on the distribution of Malaysian English vowels in the vowel space, and the extent to which there is phonetic contrast between traditionally paired vowels. The results indicate that, like neighbouring varieties of English,…

  14. Speechant: A Vowel Notation System to Teach English Pronunciation

    ERIC Educational Resources Information Center

    dos Reis, Jorge; Hazan, Valerie

    2012-01-01

    This paper introduces a new vowel notation system aimed at aiding the teaching of English pronunciation. This notation system, designed as an enhancement to orthographic text, was designed to use concepts borrowed from the representation of musical notes and is also linked to the acoustic characteristics of vowel sounds. Vowel timbre is…

  15. English vowel learning by speakers of Mandarin

    NASA Astrophysics Data System (ADS)

    Thomson, Ron I.

    2005-04-01

    One of the most influential models of second language (L2) speech perception and production [Flege, Speech Perception and Linguistic Experience (York, Baltimore, 1995) pp. 233-277] argues that during initial stages of L2 acquisition, perceptual categories sharing the same or nearly the same acoustic space as first language (L1) categories will be processed as members of that L1 category. Previous research has generally been limited to testing these claims on binary L2 contrasts, rather than larger portions of the perceptual space. This study examines the development of 10 English vowel categories by 20 Mandarin L1 learners of English. Imitations of English vowel stimuli by these learners were recorded at 6 data collection points over the course of one year. These productions were then assessed against native-speaker norms using a statistical pattern recognition model. The degree to which the learners' perception/production shifted toward the target English vowels and the degree to which they matched L1 categories in ways predicted by theoretical models are discussed. The results of this experiment suggest that previous claims about perceptual assimilation of L2 categories to L1 categories may be too strong.

  16. International Space Station Acoustics - A Status Report

    NASA Technical Reports Server (NTRS)

    Allen, Christopher S.; Denham, Samuel A.

    2011-01-01

    It is important to control acoustic noise aboard the International Space Station (ISS) to provide a satisfactory environment for voice communications, crew productivity, and restful sleep, and to minimize the risk for temporary and permanent hearing loss. Acoustic monitoring is an important part of the noise control process on ISS, providing critical data for trend analysis, noise exposure analysis, validation of acoustic analysis and predictions, and to provide strong evidence for ensuring crew health and safety, thus allowing Flight Certification. To this purpose, sound level meter (SLM) measurements and acoustic noise dosimetry are routinely performed. And since the primary noise sources on ISS include the environmental control and life support system (fans and airflow) and active thermal control system (pumps and water flow), acoustic monitoring will indicate changes in hardware noise emissions that may indicate system degradation or performance issues. This paper provides the current acoustic levels in the ISS modules and sleep stations, and is an update to the status presented in 2003. Many new modules and sleep stations have been added to the ISS since that time. In addition, noise mitigation efforts have reduced noise levels in some areas. As a result, the acoustic levels on the ISS have improved.

  17. International Space Station Acoustics - A Status Report

    NASA Technical Reports Server (NTRS)

    Allen, Christopher S.

    2015-01-01

    It is important to control acoustic noise aboard the International Space Station (ISS) to provide a satisfactory environment for voice communications, crew productivity, alarm audibility, and restful sleep, and to minimize the risk for temporary and permanent hearing loss. Acoustic monitoring is an important part of the noise control process on ISS, providing critical data for trend analysis, noise exposure analysis, validation of acoustic analyses and predictions, and to provide strong evidence for ensuring crew health and safety, thus allowing Flight Certification. To this purpose, sound level meter (SLM) measurements and acoustic noise dosimetry are routinely performed. And since the primary noise sources on ISS include the environmental control and life support system (fans and airflow) and active thermal control system (pumps and water flow), acoustic monitoring will reveal changes in hardware noise emissions that may indicate system degradation or performance issues. This paper provides the current acoustic levels in the ISS modules and sleep stations and is an update to the status presented in 2011. Since this last status report, many payloads (science experiment hardware) have been added and a significant number of quiet ventilation fans have replaced noisier fans in the Russian Segment. Also, noise mitigation efforts are planned to reduce the noise levels of the T2 treadmill and levels in Node 3, in general. As a result, the acoustic levels on the ISS continue to improve.

  18. Language dependent vowel representation in speech production

    PubMed Central

    Mitsuya, Takashi; Samson, Fabienne; Ménard, Lucie; Munhall, Kevin G.

    2013-01-01

    The representation of speech goals was explored using an auditory feedback paradigm. When talkers produce vowels the formant structure of which is perturbed in real time, they compensate to preserve the intended goal. When vowel formants are shifted up or down in frequency, participants change the formant frequencies in the opposite direction to the feedback perturbation. In this experiment, the specificity of vowel representation was explored by examining the magnitude of vowel compensation when the second formant frequency of a vowel was perturbed for speakers of two different languages (English and French). Even though the target vowel was the same for both language groups, the pattern of compensation differed. French speakers compensated to smaller perturbations and made larger compensations overall. Moreover, French speakers modified the third formant in their vowels to strengthen the compensation even though the third formant was not perturbed. English speakers did not alter their third formant. Changes in the perceptual goodness ratings by the two groups of participants were consistent with the threshold to initiate vowel compensation in production. These results suggest that vowel goals not only specify the quality of the vowel but also the relationship of the vowel to the vowel space of the spoken language. PMID:23654403

  19. Mapping emotions into acoustic space: the role of voice production.

    PubMed

    Patel, Sona; Scherer, Klaus R; Björkner, Eva; Sundberg, Johan

    2011-04-01

    Research on the vocal expression of emotion has long since used a "fishing expedition" approach to find acoustic markers for emotion categories and dimensions. Although partially successful, the underlying mechanisms have not yet been elucidated. To illustrate that this research can profit from considering the underlying voice production mechanism, we specifically analyzed short affect bursts (sustained /a/ vowels produced by 10 professional actors for five emotions) according to physiological variations in phonation (using acoustic parameters derived from the acoustic signal and the inverse filter estimated voice source waveform). Results show significant emotion main effects for 11 of 12 parameters. Subsequent principal components analysis revealed three components that explain acoustic variations due to emotion, including "tension," "perturbation," and "voicing frequency." These results suggest that future work may benefit from theory-guided development of parameters to assess differences in physiological voice production mechanisms in the vocal expression of different emotions. PMID:21354259
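
    The principal components analysis step described above can be sketched generically as follows; the parameter matrix is random placeholder data, not the study's measurements, and the three-component choice simply mirrors the abstract:

        # Sketch: PCA over a table of acoustic / voice-source parameters
        # (rows = affect bursts, columns = parameters). The data are random
        # placeholders; only the analysis pattern is illustrated.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        X = rng.normal(size=(50, 12))      # 50 bursts x 12 acoustic parameters

        pca = PCA(n_components=3)
        scores = pca.fit_transform(StandardScaler().fit_transform(X))
        print("component scores shape:", scores.shape)
        print("variance explained:", np.round(pca.explained_variance_ratio_, 2))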

  20. Learning to pronounce Vowel Sounds in a Foreign Language Using Acoustic Measurements of the Vocal Tract as Feedback in Real Time.

    ERIC Educational Resources Information Center

    Dowd, Annette; Smith, John; Wolfe, Joe

    1998-01-01

    Measured the first two vocal-tract resonances of a sample of native French speakers for the non-nasalized vowels of that language. Values measured for native speakers for a particular vowel were used as target parameters for subjects who used a visual display of an impedance spectrum of their own vocal tracts as real time feedback to realize the…

  1. Sex differences in the acoustic structure of vowel-like grunt vocalizations in baboons and their perceptual discrimination by baboon listeners

    NASA Astrophysics Data System (ADS)

    Rendall, Drew; Owren, Michael J.; Weerts, Elise; Hienz, Robert D.

    2004-01-01

    This study quantifies sex differences in the acoustic structure of vowel-like grunt vocalizations in baboons (Papio spp.) and tests the basic perceptual discriminability of these differences to baboon listeners. Acoustic analyses were performed on 1028 grunts recorded from 27 adult baboons (11 males and 16 females) in southern Africa, focusing specifically on the fundamental frequency (F0) and formant frequencies. The mean F0 and the mean frequencies of the first three formants were all significantly lower in males than they were in females, more dramatically so for F0. Experiments using standard psychophysical procedures subsequently tested the discriminability of adult male and adult female grunts. After learning to discriminate the grunt of one male from that of one female, five baboon subjects subsequently generalized this discrimination both to new call tokens from the same individuals and to grunts from novel males and females. These results are discussed in the context of both the possible vocal anatomical basis for sex differences in call structure and the potential perceptual mechanisms involved in their processing by listeners, particularly as these relate to analogous issues in human speech production and perception.

  2. Interspeaker Variability in Hard Palate Morphology and Vowel Production

    ERIC Educational Resources Information Center

    Lammert, Adam; Proctor, Michael; Narayanan, Shrikanth

    2013-01-01

    Purpose: Differences in vocal tract morphology have the potential to explain interspeaker variability in speech production. The potential acoustic impact of hard palate shape was examined in simulation, in addition to the interplay among morphology, articulation, and acoustics in real vowel production data. Method: High-front vowel production from…

  3. The effects of inventory on vowel perception in French and Spanish: an MEG study.

    PubMed

    Hacquard, Valentine; Walter, Mary Ann; Marantz, Alec

    2007-03-01

    Production studies have shown that speakers of languages with larger phoneme inventories expand their acoustic space relative to languages with smaller inventories [Bradlow, A. (1995). A comparative acoustic study of English and Spanish vowels. Journal of the Acoustical Society of America, 97(3), 1916-1924; Jongman, A., Fourakis, M., & Sereno, J. (1989). The acoustic vowel space of Modern Greek and German. Language and Speech, 32, 221-248]. In this study, we investigated whether this acoustic expansion in production has a perceptual correlate, that is, whether the perceived distance between pairs of sounds separated by equal acoustic distances varies as a function of inventory size or organization. We used magnetoencephalography, specifically the mismatch field response (MMF), and compared two language groups, French and Spanish, whose vowel inventories differ in size and organization. Our results show that the MMF is sensitive to inventory size but not organization, suggesting that speakers of languages with larger inventories perceive the same sounds as less similar than do speakers of languages with smaller inventories. PMID:17097137

  4. Articulatory Changes in Vowel Production following STN DBS and Levodopa Intake in Parkinson's Disease

    PubMed Central

    Martel Sauvageau, Vincent; Roy, Johanna-Pascale; Cantin, Léo; Prud'Homme, Michel; Langlois, Mélanie; Macoir, Joël

    2015-01-01

    Purpose. To investigate the impact of deep brain stimulation of the subthalamic nucleus (STN DBS) and levodopa intake on vowel articulation in dysarthric speakers with Parkinson's disease (PD). Methods. Vowel articulation was assessed in seven Quebec French speakers diagnosed with idiopathic PD who underwent STN DBS. Assessments were conducted on- and off-medication, first prior to surgery and then 1 year later. All recordings were made on-stimulation. Vowel articulation was measured using acoustic vowel space and formant centralization ratio. Results. Compared to the period before surgery, vowel articulation was reduced after surgery when patients were off-medication, while it was better on-medication. The impact of levodopa intake on vowel articulation changed with STN DBS: before surgery, levodopa impaired articulation, while it no longer had a negative effect after surgery. Conclusions. These results indicate that while STN DBS could lead to a direct deterioration in articulation, it may indirectly improve it by reducing the levodopa dose required to manage motor symptoms. These findings suggest that, with respect to speech production, STN DBS and levodopa intake cannot be investigated separately because the two are intrinsically linked. Along with motor symptoms, speech production should be considered when optimizing therapeutic management of patients with PD. PMID:26558134
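
    One common way to compute the acoustic vowel space used as a measure above is as the area of the polygon spanned by corner-vowel formant means in the F1-F2 plane (shoelace formula). The sketch below uses hypothetical formant values and a vowel triangle; studies differ in which corner vowels they include:

        # Vowel space area (VSA) from corner-vowel formant means (shoelace formula).
        def polygon_area(points):
            """Area of a polygon given (F1, F2) vertices in order."""
            n = len(points)
            return 0.5 * abs(sum(points[i][0] * points[(i + 1) % n][1]
                                 - points[(i + 1) % n][0] * points[i][1]
                                 for i in range(n)))

        corners = {"i": (300, 2300), "a": (750, 1300), "u": (350, 900)}  # Hz, illustrative only
        vsa = polygon_area([corners[v] for v in ("i", "a", "u")])
        print(f"triangular VSA = {vsa:.0f} Hz^2")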

  5. Native dialect influences second-language vowel perception: Peruvian versus Iberian Spanish learners of Dutch.

    PubMed

    Escudero, Paola; Williams, Daniel

    2012-05-01

    Peruvian Spanish (PS) and Iberian Spanish (IS) learners were tested on their ability to categorically discriminate and identify Dutch vowels. It was predicted that the acoustic differences between the vowel productions of the two dialects, which compare differently to Dutch vowels, would manifest in differential L2 perception for listeners of these two dialects. The results show that although PS learners had higher general L2 proficiency, IS learners were more accurate at discriminating all five contrasts and at identifying six of the L2 Dutch vowels. These findings confirm that acoustic differences in native vowel production lead to differential L2 vowel perception. PMID:22559460

  6. Direct Mapping of Acoustics to Phonology: On the Lexical Encoding of Front Rounded Vowels in L1 English-L2 French Acquisition

    ERIC Educational Resources Information Center

    Darcy, Isabelle; Dekydtspotter, Laurent; Sprouse, Rex A.; Glover, Justin; Kaden, Christiane; McGuire, Michael; Scott, John H. G.

    2012-01-01

    It is well known that adult US-English-speaking learners of French experience difficulties acquiring high /y/-/u/ and mid /oe/-/[openo]/ front vs. back rounded vowel contrasts in French. This study examines the acquisition of these French vowel contrasts at two levels: phonetic categorization and lexical representations. An ABX categorization task…

  7. Neural Processing of Acoustic Duration and Phonological German Vowel Length: Time Courses of Evoked Fields in Response to Speech and Nonspeech Signals

    ERIC Educational Resources Information Center

    Tomaschek, Fabian; Truckenbrodt, Hubert; Hertrich, Ingo

    2013-01-01

    Recent experiments showed that the perception of vowel length by German listeners exhibits the characteristics of categorical perception. The present study sought to find the neural activity reflecting categorical vowel length and the short-long boundary by examining the processing of non-contrastive durations and categorical length using MEG.…

  8. Speech after Radial Forearm Free Flap Reconstruction of the Tongue: A Longitudinal Acoustic Study of Vowel and Diphthong Sounds

    ERIC Educational Resources Information Center

    Laaksonen, Juha-Pertti; Rieger, Jana; Happonen, Risto-Pekka; Harris, Jeffrey; Seikaly, Hadi

    2010-01-01

    The purpose of this study was to use acoustic analyses to describe speech outcomes over the course of 1 year after radial forearm free flap (RFFF) reconstruction of the tongue. Eighteen Canadian English-speaking females and males with reconstruction for oral cancer had speech samples recorded (pre-operative, and 1 month, 6 months, and 1 year…

  9. Pacific northwest vowels: A Seattle neighborhood dialect study

    NASA Astrophysics Data System (ADS)

    Ingle, Jennifer K.; Wright, Richard; Wassink, Alicia

    2005-04-01

    According to the current literature, a large region encompassing nearly the entire west half of the U.S. belongs to one dialect region referred to as Western, which furthermore, according to Labov et al., ``... has developed a characteristic but not unique phonology.'' [http://www.ling.upenn.edu/phono-atlas/NationalMap/NationalMap.html] This paper will describe the vowel space of a set of Pacific Northwest American English speakers native to the Ballard neighborhood of Seattle, Wash., based on the acoustical analysis of high-quality Marantz CDR 300 recordings. Characteristics such as low back merger and [u] fronting will be compared to findings from other studies. It is hoped that these recordings will contribute to a growing number of corpora of North American English dialects. All participants were born in Seattle and began their residence in Ballard between ages 0 and 8. They were recorded in two styles of speech: individually reading repetitions of a word list containing one token each of 10 vowels within carrier phrases, and in casual conversation for 40 min with a partner matched in age, gender, and social mobility. The goal was to create a compatible data set for comparison with current acoustic studies. F1, F2, and vowel duration from LPC spectral analysis will be presented.
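
    The reported formant values come from LPC spectral analysis; a rough sketch of that step (LPC fit, then pole angles converted to frequencies) is shown below. The filename, sampling rate, and LPC order are illustrative assumptions rather than the study's settings, and librosa is assumed to be available:

        # LPC-based formant estimation for a single vowel segment (illustrative settings).
        import numpy as np
        import librosa

        y, sr = librosa.load("vowel_token.wav", sr=10000)   # hypothetical file; low sr keeps F1-F4 in range
        a = librosa.lpc(y * np.hamming(len(y)), order=int(2 + sr / 1000))  # rule-of-thumb order

        roots = np.roots(a)
        roots = roots[np.imag(roots) > 0]                    # one of each conjugate pole pair
        freqs = np.sort(np.angle(roots) * sr / (2 * np.pi))  # pole angles -> Hz
        print("candidate formants (Hz):", [round(f) for f in freqs if f > 90][:4])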

  10. Relationships between objective acoustic indices and acoustic comfort evaluation in nonacoustic spaces

    NASA Astrophysics Data System (ADS)

    Kang, Jian

    2001-05-01

    Much attention has been paid to acoustic spaces such as concert halls and recording studios, whereas research on nonacoustic buildings/spaces has been rather limited, especially from the viewpoint of acoustic comfort. In this research, a series of case studies has been carried out on this topic, considering various spaces including shopping mall atrium spaces, library reading rooms, football stadia, swimming spaces, churches, dining spaces, as well as urban open public spaces. The studies focus on the relationships between objective acoustic indices such as sound pressure level and reverberation time and perceptions of acoustic comfort. The results show that the acoustic atmosphere is an important consideration in such spaces and the evaluation of acoustic comfort may vary considerably even if the objective acoustic indices are the same. It is suggested that current guidelines and technical regulations are insufficient in terms of acoustic design of these spaces, and the relationships established from the case studies between objective and subjective aspects would be useful for developing further design guidelines. [Work supported partly by the British Academy.]

  11. Space vehicle acoustics prediction improvement for payloads. [space shuttle

    NASA Technical Reports Server (NTRS)

    Dandridge, R. E.

    1979-01-01

    The modal analysis method was extensively modified for the prediction of space vehicle noise reduction in the shuttle payload enclosure, and this program was adapted to the IBM 360 computer. The predicted noise reduction levels for two test cases were compared with experimental results to determine the validity of the analytical model for predicting space vehicle payload noise environments in the 10 Hz one-third octave band regime. The prediction approach for the two test cases generally gave reasonable magnitudes and trends when compared with the measured noise reduction spectra. The discrepancies in the predictions could be corrected primarily by improved modeling of the vehicle structural walls and of the enclosed acoustic space to obtain a more accurate assessment of normal modes. Techniques for improving and expanding the noise prediction for a payload environment are also suggested.

  12. Acoustic emission technology for space applications

    SciTech Connect

    Friesel, M.A.; Lemon, D.K.; Skorpik, J.R.; Hutton, P.H.

    1989-05-01

    Clearly the structural and functional integrity of space station components is a primary requirement. The combinations of advanced materials, new designs, and an unusual environment increase the need for inservice monitoring to help assure component integrity. Continuous monitoring of the components using acoustic emission (AE) methods can provide early indication of structural or functional distress, thus allowing time to plan remedial action. The term ''AE'' refers to energy impulses propagated from a growing crack in a solid material or from a leak in a pressurized pipe or tube. In addition to detecting a crack or leak, AE methods can provide information on the location of the defect and an estimate of crack growth rate and leak rate. 8 figs.

  13. Influences of listeners' native and other dialects on cross-language vowel perception.

    PubMed

    Williams, Daniel; Escudero, Paola

    2014-01-01

    This paper examines to what extent acoustic similarity between native and non-native vowels predicts non-native vowel perception and whether this process is influenced by listeners' native and other non-native dialects. Listeners with Northern and Southern British English dialects completed a perceptual assimilation task in which they categorized tokens of 15 Dutch vowels in terms of English vowel categories. While the cross-language acoustic similarity of Dutch vowels to English vowels largely predicted Southern listeners' perceptual assimilation patterns, this was not the case for Northern listeners, whose assimilation patterns resembled those of Southern listeners for all but three Dutch vowels. The cross-language acoustic similarity of Dutch vowels to Northern English vowels was re-examined by incorporating Southern English tokens, which resulted in considerable improvements in the predicting power of cross-language acoustic similarity. This suggests that Northern listeners' assimilation of Dutch vowels to English vowels was influenced by knowledge of both native Northern and non-native Southern English vowel categories. The implications of these findings for theories of non-native speech perception are discussed. PMID:25339921

  14. Multichannel Compression: Effects of Reduced Spectral Contrast on Vowel Identification

    ERIC Educational Resources Information Center

    Bor, Stephanie; Souza, Pamela; Wright, Richard

    2008-01-01

    Purpose: To clarify if large numbers of wide dynamic range compression channels provide advantages for vowel identification and to measure its acoustic effects. Methods: Eight vowels produced by 12 talkers in the /hVd/ context were compressed using 1, 2, 4, 8, and 16 channels. Formant contrast indices (mean formant peak minus mean formant trough;…

  15. Perceptual Adaptation of Voice Gender Discrimination with Spectrally Shifted Vowels

    ERIC Educational Resources Information Center

    Li, Tianhao; Fu, Qian-Jie

    2011-01-01

    Purpose: To determine whether perceptual adaptation improves voice gender discrimination of spectrally shifted vowels and, if so, which acoustic cues contribute to the improvement. Method: Voice gender discrimination was measured for 10 normal-hearing subjects, during 5 days of adaptation to spectrally shifted vowels, produced by processing the…

  16. Production and perception of whispered vowels

    NASA Astrophysics Data System (ADS)

    Kiefte, Michael

    2005-09-01

    Information normally associated with pitch, such as intonation, can still be conveyed in whispered speech despite the absence of voicing. For example, it is possible to whisper the question ``You are going today?'' without any syntactic information to distinguish this sentence from a simple declarative. It has been shown that pitch change in whispered speech is correlated with the simultaneous raising or lowering of several formants [e.g., M. Kiefte, J. Acoust. Soc. Am. 116, 2546 (2004)]. However, spectral peak frequencies associated with formants have been identified as important correlates of vowel identity. Spectral peak frequencies may therefore serve two roles in the perception of whispered speech: to indicate both vowel identity and intended pitch. Data will be presented to examine the relative importance of several acoustic properties including spectral peak frequencies and spectral shape parameters in both the production and perception of whispered vowels. Speakers were asked to phonate and whisper vowels at three different pitches across a range of roughly a musical fifth. It will be shown that relative spectral change is preserved within vowels across intended pitches in whispered speech. In addition, several models of vowel identification by listeners will be presented. [Work supported by SSHRC.]

  17. Vowel Devoicing in Shanghai.

    ERIC Educational Resources Information Center

    Zee, Eric

    A phonetic study of vowel devoicing in the Shanghai dialect of Chinese explored the phonetic conditions under which the high, closed vowels and the apical vowel in Shanghai are most likely to become devoiced. The phonetic conditions may be segmental or suprasegmental. Segmentally, the study sought to determine whether a certain type of pre-vocalic…

  18. Articulatory characteristics of Hungarian ‘transparent’ vowels

    PubMed Central

    Benus, Stefan; Gafos, Adamantios I.

    2007-01-01

    Using a combination of magnetometry and ultrasound, we examined the articulatory characteristics of the so-called ‘transparent’ vowels [iː], [i], and [eː] in Hungarian vowel harmony. Phonologically, transparent vowels are front, but they can be followed by either front or back suffixes. However, a finer look reveals an underlying phonetic coherence in two respects. First, transparent vowels in back harmony contexts show a less advanced (more retracted) tongue body posture than phonemically identical vowels in front harmony contexts: e.g. [i] in buli-val is less advanced than [i] in bili-vel. Second, transparent vowels in monosyllabic stems selecting back suffixes are also less advanced than phonemically identical vowels in stems selecting front suffixes: e.g. [iː] in ír, taking back suffixes, compared to [iː] of hír, taking front suffixes, is less advanced when these stems are produced in bare form (no suffixes). We thus argue that the phonetic degree of tongue body horizontal position correlates with the phonological alternation in suffixes. A hypothesis that emerges from this work is that a plausible phonetic basis for transparency can be found in quantal characteristics of the relation between articulation and acoustics of transparent vowels. More broadly, the proposal is that the phonology of transparent vowels is better understood when their phonological patterning is studied together with their articulatory and acoustic characteristics. PMID:18389086

  19. Vowel coarticulation: Landmark statistics measure vowel aggression.

    PubMed

    Chen, Wei-rong; Chang, Yueh-chin; Iskarous, Khalil

    2015-08-01

    Regression analysis and mutual information have been used to measure the degree of dependence between a consonant and a vowel; this approach has been used to identify the invariance of consonant place and to quantify the coarticulatory resistance of consonants [e.g., Fowler (1994). Percept. Psychophys. 55, 597-610]. This paper presents the first application of this approach to measuring the coarticulatory properties of vowels, using regression analysis and mutual information on articulatory data of CV syllables produced by seven Taiwan Mandarin speakers. The results show that, for the tongue body, the vowel /i/ shares the most information with the preceding consonant, whereas vowels /a/ and /u/ do not differ significantly from each other in that respect. For the lip articulator, the degree of information sharing for vowels follows the progression /u/ > /i/ > /a/. Based on the CV model theory of gestural coordination (C-V in-phase relation) and the present results, this study proposes that landmark statistics for vowels reflect the degree of vowel aggression and that the V-to-C effect is dominant over the C-to-V effect in C-V coarticulation. PMID:26328735
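
    As a rough illustration of the information-theoretic measure used above, the sketch below estimates the mutual information between an articulatory measure sampled at the consonant and at the following vowel, using scikit-learn's generic estimator on dummy data; it is not the authors' landmark-statistics implementation:

        # Mutual information between consonant-landmark and vowel-landmark measures.
        import numpy as np
        from sklearn.feature_selection import mutual_info_regression

        rng = np.random.default_rng(1)
        tongue_x_at_C = rng.normal(size=200)                       # hypothetical per-token values
        tongue_x_at_V = 0.6 * tongue_x_at_C + rng.normal(scale=0.5, size=200)

        mi = mutual_info_regression(tongue_x_at_C.reshape(-1, 1), tongue_x_at_V)[0]
        r2 = np.corrcoef(tongue_x_at_C, tongue_x_at_V)[0, 1] ** 2  # regression-based counterpart
        print(f"MI ~ {mi:.2f} nats, R^2 ~ {r2:.2f}")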

  20. Diversity of vowel systems

    NASA Astrophysics Data System (ADS)

    Maddieson, Ian

    2005-04-01

    Systems of vowels vary greatly across the world's languages while nonetheless conforming to certain general structural patterns. All languages have at least two qualitative distinctions between vowels based on the major parameters of height, backness and rounding, but probably none has more than 15 or so, and the modal number is 5. Generally these basic vowel qualities respect dispersion principles, but deviations can be considerable. When additional parameters, such as nasalization, length, phonation type and pharyngealization are included, the total number of vowel distinctions may easily exceed 40. These ``additive'' features never occur with a larger number of vowel qualities than those occurring in a ``plain'' series. Languages may differ markedly in the distributional patterns of their vowels as well as in their inventory. Some languages have different (usually reduced) vowel inventories in unstressed or other non-prominent positions; others constrain vowel sequences in (phonological) words through vowel harmony limitations. Co-occurrence patterns between vowels and consonants also vary greatly, as does the degree of coarticulation between vowels and neighboring segments. Learners must master all of these factors to speak an individual language fluently. Constraints that are universal or shared may be expected to facilitate this task.

  1. Adult Second Language Learning of Spanish Vowels

    ERIC Educational Resources Information Center

    Cobb, Katherine; Simonet, Miquel

    2015-01-01

    The present study reports on the findings of a cross-sectional acoustic study of the production of Spanish vowels by three different groups of speakers: 1) native Spanish speakers; 2) native English intermediate learners of Spanish; and 3) native English advanced learners of Spanish. In particular, we examined the production of the five Spanish…

  2. Two Notes on Kinande Vowel Harmony

    ERIC Educational Resources Information Center

    Kenstowicz, Michael J.

    2009-01-01

    This paper documents the acoustic reflexes of ATR harmony in Kinande followed by an analysis of the dominance reversal found in class 5 nominals. The principal findings are that the ATR harmony is reliably reflected in a lowering of the first formant. Depending on the vowel, ATR harmony also affects the second formant. The directional asymmetry…

  3. American and Swedish children's acquisition of vowel duration: Effects of vowel identity and final stop voicing

    NASA Astrophysics Data System (ADS)

    Buder, Eugene H.; Stoel-Gammon, Carol

    2002-04-01

    Vowel durations typically vary according to both intrinsic (segment-specific) and extrinsic (contextual) specifications. It can be argued that such variations are due to both predisposition and cognitive learning. The present report utilizes acoustic phonetic measurements from Swedish and American children aged 24 and 30 months to investigate the hypothesis that default behaviors may precede language-specific learning effects. The predicted pattern is the presence of final consonant voicing effects in both languages as a default, and subsequent learning of intrinsic effects most notably in the Swedish children. The data, from 443 monosyllabic tokens containing high-front vowels and final stop consonants, are analyzed in statistical frameworks at group and individual levels. The results confirm that Swedish children show an early tendency to vary vowel durations according to final consonant voicing, followed only six months later by a stage at which the intrinsic influence of vowel identity grows relatively more robust. Measures of vowel formant structure from selected 30-month-old children also revealed a tendency for children of this age to focus on particular acoustic contrasts. In conclusion, the results indicate that early acquisition of vowel specifications involves an interaction between language-specific features and articulatory predispositions associated with phonetic context.

  4. Effects of Intensive Voice Treatment (the Lee Silverman Voice Treatment [LSVT]) on Vowel Articulation in Dysarthric Individuals with Idiopathic Parkinson Disease: Acoustic and Perceptual Findings

    ERIC Educational Resources Information Center

    Sapir, Shimon; Spielman, Jennifer L.; Ramig, Lorraine O.; Story, Brad H.; Fox, Cynthia

    2007-01-01

    Purpose: To evaluate the effects of intensive voice treatment targeting vocal loudness (the Lee Silverman Voice Treatment [LSVT]) on vowel articulation in dysarthric individuals with idiopathic Parkinson's disease (PD). Method: A group of individuals with PD receiving LSVT (n = 14) was compared to a group of individuals with PD not receiving LSVT…

  5. Pre-attentive and attentive processing of French vowels.

    PubMed

    Deguchi, Chizuru; Chobert, Julie; Brunellière, Angèle; Nguyen, Noël; Colombo, Lucia; Besson, Mireille

    2010-12-17

    This study aimed at investigating the effects of acoustic distance and of speaker variability on the pre-attentive and attentive perception of French vowels by French adult speakers. The electroencephalogram (EEG) was recorded while participants watched a silent movie (Passive condition) and discriminated deviant vowels (Active condition). The auditory sequence included 4 French vowels, /u/ (standard) and /o/, /y/ and /ø/ as deviants, produced by 3 different speakers. As the vowel /o/ is closer to /u/ than the other deviants in acoustic distance, we predicted smaller mismatch negativity (MMN) and smaller N1 component, as well as higher error rate and longer reaction times. Results were in line with these predictions. Moreover, the MMN was elicited by all deviant vowels independently of speaker variability. By contrast, the Vowel by Speaker interaction was significant in the Active listening condition thereby showing that subtle within-category differences are processed at the attentive level. These results suggest that while vowels are categorized pre-attentively according to phonemic representations and independently of speaker variability, participants are sensitive to between-speaker differences when they focus attention on vowel processing. PMID:20920484

  6. Vowel Duration in Three American English Dialects

    PubMed Central

    Jacewicz, Ewa; Fox, Robert A.; Salmons, Joseph

    2010-01-01

    The article reports on an acoustic investigation into the duration of five American English vowels, those found in hid, head, had, hayed, and hide. We compare duration across three major dialect areas: the Inland North, Midlands, and South. The results show systematic differences across all vowels studied, with the longest durations in the South and the shortest in the Inland North, with the Midlands in an intermediate but distinct position. More generally, the sample differs from and complements other work on this question by including detailed evidence from relatively small, cohesive areas, each within a different established dialect region. PMID:20198113

  7. The Shift in Infant Preferences for Vowel Duration and Pitch Contour between 6 and 10 Months of Age

    ERIC Educational Resources Information Center

    Kitamura, Christine; Notley, Anna

    2009-01-01

    This study investigates the influence of the acoustic properties of vowels on 6- and 10-month-old infants' speech preferences. The shape of the contour (bell or monotonic) and the duration (normal or stretched) of vowels were manipulated in words containing the vowels /i/ and /u/, and presented to infants using a two-choice preference procedure.…

  8. Vowel Intelligibility in Children with and without Dysarthria: An Exploratory Study

    ERIC Educational Resources Information Center

    Levy, Erika S.; Leone, Dorothy; Moya-Gale, Gemma; Hsu, Sih-Chiao; Chen, Wenli; Ramig, Lorraine O.

    2016-01-01

    Children with dysarthria due to cerebral palsy (CP) present with decreased vowel space area and reduced word intelligibility. Although a robust relationship exists between vowel space and word intelligibility, little is known about the intelligibility of vowels in this population. This exploratory study investigated the intelligibility of American…

  9. Dimension-based statistical learning of vowels.

    PubMed

    Liu, Ran; Holt, Lori L

    2015-12-01

    Speech perception depends on long-term representations that reflect regularities of the native language. However, listeners rapidly adapt when speech acoustics deviate from these regularities due to talker idiosyncrasies such as foreign accents and dialects. To better understand these dual aspects of speech perception, we probe native English listeners' baseline perceptual weighting of 2 acoustic dimensions (spectral quality and vowel duration) toward vowel categorization and examine how they subsequently adapt to an "artificial accent" that deviates from English norms in the correlation between the 2 dimensions. At baseline, listeners rely relatively more on spectral quality than vowel duration to signal vowel category, but duration nonetheless contributes. Upon encountering an "artificial accent" in which the spectral-duration correlation is perturbed relative to English language norms, listeners rapidly down-weight reliance on duration. Listeners exhibit this type of short-term statistical learning even in the context of nonwords, confirming that lexical information is not necessary to this form of adaptive plasticity in speech perception. Moreover, learning generalizes to both novel lexical contexts and acoustically distinct altered voices. These findings are discussed in the context of a mechanistic proposal for how supervised learning may contribute to this type of adaptive plasticity in speech perception. PMID:26280268

  10. Acoustic levitation for high temperature containerless processing in space

    NASA Technical Reports Server (NTRS)

    Rey, C. A.; Sisler, R.; Merkley, D. R.; Danley, T. J.

    1990-01-01

    New facilities for high-temperature containerless processing in space are described, including the acoustic levitation furnace (ALF), the high-temperature acoustic levitator (HAL), and the high-pressure acoustic levitator (HPAL). In the current ALF development, the maximum temperature capabilities of the levitation furnaces are 1750 C, and in the HAL development with a cold wall furnace they will exceed 2000-2500 C. The HPAL demonstrated feasibility of precursor space flight experiments on the ground in a 1 g pressurized-gas environment. Testing of lower density materials up to 1300 C has also been accomplished. It is suggested that advances in acoustic levitation techniques will result in the production of new materials such as ceramics, alloys, and optical and electronic materials.

  11. Acoustic emissions applications on the NASA Space Station

    SciTech Connect

    Friesel, M.A.; Dawson, J.F.; Kurtz, R.J.; Barga, R.S.; Hutton, P.H.; Lemon, D.K.

    1991-08-01

    Acoustic emission is being investigated as a way to continuously monitor Space Station Freedom for damage caused by space debris impact and seal failure. Experiments run to date focused on detecting and locating simulated and real impacts and leakage. These were performed both in the laboratory on a section of material similar to a space station shell panel and also on the full-scale common module prototype at Boeing's Huntsville facility. A neural network approach supplemented standard acoustic emission detection and analysis techniques. 4 refs., 5 figs., 1 tab.

  12. The Vietnamese Vowel System

    ERIC Educational Resources Information Center

    Emerich, Giang Huong

    2012-01-01

    In this dissertation, I provide a new analysis of the Vietnamese vowel system as a system with fourteen monophthongs and nineteen diphthongs based on phonetic and phonological data. I propose that these Vietnamese contour vowels - /ie/, /[turned m]?/ and /uo/-should be grouped with these eleven monophthongs /i e epsilon a [turned a] ? ? [turned m]…

  13. Producing American English Vowels during Vocal Tract Growth: A Perceptual Categorization Study of Synthesized Vowels

    ERIC Educational Resources Information Center

    Menard, Lucie; Davis, Barbara L.; Boe, Louis-Jean; Roy, Johanna-Pascale

    2009-01-01

    Purpose: To consider interactions of vocal tract change with growth and perceived output patterns across development, the influence of nonuniform vocal tract growth on the ability to reach acoustic-perceptual targets for English vowels was studied. Method: Thirty-seven American English speakers participated in a perceptual categorization…

  14. Space manufacturing of surface acoustic wave devices, appendix D

    NASA Technical Reports Server (NTRS)

    Sardella, G.

    1973-01-01

    Space manufacturing of transducers in a vibration free environment is discussed. Fabrication of the masks, and possible manufacturing of the surface acoustic wave components aboard a space laboratory would avoid the inherent ground vibrations and the frequency limitation imposed by a seismic isolator pad. The manufacturing vibration requirements are identified. The concepts of space manufacturing are analyzed. A development program for manufacturing transducers is recommended.

  15. A comparison of vowel normalization procedures for language variation research.

    PubMed

    Adank, Patti; Smits, Roel; van Hout, Roeland

    2004-11-01

    An evaluation of vowel normalization procedures for the purpose of studying language variation is presented. The procedures were compared on how effectively they (a) preserve phonemic information, (b) preserve information about the talker's regional background (or sociolinguistic information), and (c) minimize anatomical/physiological variation in acoustic representations of vowels. Recordings were made for 80 female talkers and 80 male talkers of Dutch. These talkers were stratified according to their gender and regional background. The normalization procedures were applied to measurements of the fundamental frequency and the first three formant frequencies for a large set of vowel tokens. The normalization procedures were evaluated through statistical pattern analysis. The results show that normalization procedures that use information across multiple vowels ("vowel-extrinsic" information) to normalize a single vowel token performed better than those that include only information contained in the vowel token itself ("vowel-intrinsic" information). Furthermore, the results show that normalization procedures that operate on individual formants performed better than those that use information across multiple formants (e.g., "formant-extrinsic" F2-F1). PMID:15603155
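
    For concreteness, the sketch below applies Lobanov z-score normalization, a vowel-extrinsic, formant-intrinsic procedure of the kind evaluated in this study: each formant is standardized within a talker, pooling over all of that talker's vowel tokens. The table values are placeholders, not the study's data:

        # Lobanov normalization: per-talker z-scores of each formant.
        import pandas as pd

        df = pd.DataFrame({
            "talker": ["t1", "t1", "t1", "t2", "t2", "t2"],
            "vowel":  ["i",  "a",  "u",  "i",  "a",  "u"],
            "F1":     [310,  760,  360,  360,  880,  420],
            "F2":     [2250, 1320, 950,  2700, 1500, 1050],
        })

        for fmt in ("F1", "F2"):
            by_talker = df.groupby("talker")[fmt]
            df[fmt + "_lob"] = (df[fmt] - by_talker.transform("mean")) / by_talker.transform("std")

        print(df)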

  16. A comparison of vowel normalization procedures for language variation research

    NASA Astrophysics Data System (ADS)

    Adank, Patti; Smits, Roel; van Hout, Roeland

    2004-11-01

    An evaluation of vowel normalization procedures for the purpose of studying language variation is presented. The procedures were compared on how effectively they (a) preserve phonemic information, (b) preserve information about the talker's regional background (or sociolinguistic information), and (c) minimize anatomical/physiological variation in acoustic representations of vowels. Recordings were made for 80 female talkers and 80 male talkers of Dutch. These talkers were stratified according to their gender and regional background. The normalization procedures were applied to measurements of the fundamental frequency and the first three formant frequencies for a large set of vowel tokens. The normalization procedures were evaluated through statistical pattern analysis. The results show that normalization procedures that use information across multiple vowels (``vowel-extrinsic'' information) to normalize a single vowel token performed better than those that include only information contained in the vowel token itself (``vowel-intrinsic'' information). Furthermore, the results show that normalization procedures that operate on individual formants performed better than those that use information across multiple formants (e.g., ``formant-extrinsic'' F2-F1).

  17. International Space Station Crew Quarters Ventilation and Acoustic Design Implementation

    NASA Technical Reports Server (NTRS)

    Broyan, James L., Jr.; Cady, Scott M; Welsh, David A.

    2010-01-01

    The International Space Station (ISS) United States Operational Segment has four permanent rack-sized ISS Crew Quarters (CQs) providing a private crew member space. The CQs use Node 2 cabin air for ventilation/thermal cooling, as opposed to conditioned ducted air from the ISS Common Cabin Air Assembly (CCAA) or the ISS fluid cooling loop. Consequently, the CQ can only increase the air flow rate to reduce the temperature delta between the cabin and the CQ interior. However, increasing airflow causes increased acoustic noise, so efficient airflow distribution is an important design parameter. The CQ utilized a two-fan push-pull configuration to ensure fresh air at the crew member's head position and reduce acoustic exposure. The CQ ventilation ducts are conduits to the louder Node 2 cabin aisle way, which required significant acoustic mitigation controls. The CQ interior needs to be below noise criterion curve 40 (NC-40). The design implementation of the CQ ventilation system and acoustic mitigation are closely interrelated and require consideration of crew comfort balanced with use of interior habitable volume, accommodation of fan failures, and possible crew uses that impact ventilation and acoustic performance. Each CQ required 13% of its total volume and approximately 6% of its total mass to reduce acoustic noise. This paper illustrates the types of model analysis, assumptions, vehicle interactions, and trade-offs required for CQ ventilation and acoustics. Additionally, on-orbit ventilation system performance and initial crew feedback are presented. This approach is applicable to any private enclosed space that the crew will occupy.

  18. Vowel Formant Values in Hearing and Hearing-Impaired Children: A Discriminant Analysis

    ERIC Educational Resources Information Center

    Ozbic, Martina; Kogovsek, Damjana

    2010-01-01

    Hearing-impaired speakers show changes in vowel production and formant pitch and variability, as well as more cases of overlapping between vowels and more restricted formant space, than hearing speakers; consequently their speech is less intelligible. The purposes of this paper were to determine the differences in vowel formant values between 32…

  19. Auditory spectral integration in the perception of static vowels

    PubMed Central

    Fox, Robert Allen; Jacewicz, Ewa; Chang, Chiung-Yun

    2015-01-01

    Purpose To evaluate potential contributions of broadband spectral integration in the perception of static vowels. Specifically, can the auditory system infer formant frequency information from changes in the intensity weighting across harmonics when the formant itself is missing? Does this type of integration produce the same results in the lower (F1) and higher (F2) regions? Does the spacing between the spectral components affect a listener’s ability to integrate the acoustic cues? Method Twenty young listeners with normal hearing identified synthesized vowel-like stimuli created for adjustments in the F1 (/ʌ/- /ɑ/, /ɪ/-/ɛ/) and in the F2 region (/ʌ/-/æ/). There were two types of stimuli: (1) two-formant tokens and (2) tokens in which one formant was removed and two pairs of sine waves were inserted below and above the missing formant; the intensities of these harmonics were modified to cause variations in their spectral center-of-gravity (COG). The COG effects were tested over a wide range of frequencies. Results Obtained patterns were consistent with calculated changes to the spectral COG, both in F1 and F2 regions. The spacing of the sine waves did not affect listeners’ responses. Conclusion The auditory system may perform broadband integration as a type of auditory wideband spectral analysis. PMID:21862680
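
    The manipulated cue in these stimuli is the spectral center of gravity (COG) of the inserted harmonics, i.e., the amplitude-weighted mean of their frequencies. A minimal sketch with illustrative component frequencies and amplitudes (not the study's stimulus values):

        # Spectral center of gravity of a set of sine components.
        import numpy as np

        freqs = np.array([700.0, 800.0, 1200.0, 1300.0])   # Hz: pairs flanking a "missing" formant
        amps  = np.array([1.0,   0.7,   0.4,    0.2])      # linear amplitudes

        cog = np.sum(freqs * amps) / np.sum(amps)
        print(f"spectral COG = {cog:.0f} Hz")               # re-weighting the amplitudes moves the COG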

  20. Space Launch System Begins Acoustic Testing

    NASA Video Gallery

    Engineers at NASA's Marshall Space Flight Center in Huntsville, Ala., have assembled a collection of thrusters to stand in for the various propulsion elements in a scale model version of NASA’s S...

  1. Call me Alix, not Elix: vowels are more important than consonants in own-name recognition at 5 months.

    PubMed

    Bouchon, Camillia; Floccia, Caroline; Fux, Thibaut; Adda-Decker, Martine; Nazzi, Thierry

    2015-07-01

    Consonants and vowels differ acoustically and articulatorily, but also functionally: Consonants are more relevant for lexical processing, and vowels for prosodic/syntactic processing. These functional biases could be powerful bootstrapping mechanisms for learning language, but their developmental origin remains unclear. The relative importance of consonants and vowels at the onset of lexical acquisition was assessed in French-learning 5-month-olds by testing sensitivity to minimal phonetic changes in their own name. Infants' reactions to mispronunciations revealed sensitivity to vowel but not consonant changes. Vowels were also more salient (on duration and intensity) but less distinct (on spectrally based measures) than consonants. Lastly, vowel (but not consonant) mispronunciation detection was modulated by acoustic factors, in particular spectrally based distance. These results establish that consonant changes do not affect lexical recognition at 5 months, while vowel changes do; the consonant bias observed later in development does not emerge until after 5 months through additional language exposure. PMID:25294431

  2. The vowel systems of Quichua-Spanish bilinguals. Age of acquisition effects on the mutual influence of the first and second languages.

    PubMed

    Guion, Susan G

    2003-01-01

    This study investigates vowel productions of 20 Quichua-Spanish bilinguals, differing in age of Spanish acquisition, and 5 monolingual Spanish speakers. While the vowel systems of simultaneous, early, and some mid bilinguals all showed significant plasticity, there were important differences in the kind, as well as the extent, of this adaptability. Simultaneous bilinguals differed from early bilinguals in that they were able to partition the vowel space in a more fine-grained way to accommodate the vowels of their two languages. Early and some mid bilinguals acquired Spanish vowels, whereas late bilinguals did not. It was also found that acquiring Spanish vowels could affect the production of native Quichua vowels. The Quichua vowels were produced higher by bilinguals who had acquired Spanish vowels than those who had not. It is proposed that this vowel reorganization serves to enhance the perceptual distinctiveness between the vowels of the combined first- and second-language system. PMID:12853715

  3. Variability in Vowel Production within and between Days

    PubMed Central

    Heald, Shannon L. M.; Nusbaum, Howard C.

    2015-01-01

    Although the acoustic variability of speech is often described as a problem for phonetic recognition, there is little research examining acoustic-phonetic variability over time. We measured naturally occurring acoustic variability in speech production at nine specific time points (three per day over three days) to examine daily change in production as well as change across days for citation-form vowels. Productions of seven different vowels (/EE/, /IH/, /AH/, /UH/, /AE/, /OO/, /EH/) were recorded at 9AM, 3PM and 9PM over the course of each testing day on three different days, every other day, over a span of five days. Results indicate significant systematic change in F1 and F0 values over the course of a day for each of the seven vowels recorded, whereas F2 and F3 remained stable. Despite this systematic change within a day, however, talkers did not show significant changes in F0, F1, F2, and F3 between days, demonstrating that speakers are capable of producing vowels with great reliability over days without any extrinsic feedback besides their own auditory monitoring. The data show that in spite of substantial day-to-day variability in the specific listening and speaking experiences of these participants and thus exposure to different acoustic tokens of speech, there is a high degree of internal precision and consistency for the production of citation form vowels. PMID:26331478

  4. An acoustical performance space in ancient India: The Rani Gumpha

    NASA Astrophysics Data System (ADS)

    Ault, C. Thomas; Manthravadi, Umashankar

    2002-11-01

    The Rani Gumpha, or Queen's Cavern, was built by the artist-king of Kalinga, Kharavela (ca. 200-100 B.C.). It is a rock-cut structure, carved into Udayagiri hill. As in ancient Greek and Roman theaters, the entire performance space of the Rani Gumpha is backed by a decorated facade, and it is remarkably similar to Greek theaters of the Hellenistic period, having both an upper and lower level for playing. There are acoustical chambers behind each level as well as on either side, and a special ''cantor's chamber'' stage left on the lower level. The effect on the voice is astonishing. This is a rock-cut acoustical installation analogous to that described by Vitruvius in Book V, Chaps. 5 and 8, of his de Architectura, where he speaks of vessels placed in Greek and Roman theaters for the same purpose. We have created a computerized model of the Rani Gumpha using CATT-Acoustic. We have taken acoustic measurements of the site using the Aurora software package. Our results indicate that the Rani Gumpha is an acoustical performance site, sharing characteristics of the classical Greek and Roman theaters of approximately the same period.

  5. Retroactive Streaming Fails to Improve Concurrent Vowel Identification.

    PubMed

    Brandewie, Eugene J; Oxenham, Andrew J

    2015-01-01

    The sequential organization of sound over time can interact with the concurrent organization of sounds across frequency. Previous studies using simple acoustic stimuli have suggested that sequential streaming cues can retroactively affect the perceptual organization of sounds that have already occurred. It is unknown whether such effects generalize to the perception of speech sounds. Listeners' ability to identify two simultaneously presented vowels was measured in the following conditions: no context, a preceding context stream (precursors), and a following context stream (postcursors). The context stream was comprised of brief repetitions of one of the two vowels, and the primary measure of performance was listeners' ability to identify the other vowel. Results in the precursor condition showed a significant advantage for the identification of the second vowel compared to the no-context condition, suggesting that sequential grouping mechanisms aided the segregation of the concurrent vowels, in agreement with previous work. However, performance in the postcursor condition was significantly worse compared to the no-context condition, providing no evidence for an effect of stream segregation, and suggesting a possible interference effect. Two additional experiments involving inharmonic (jittered) vowels were performed to provide additional cues to aid retroactive stream segregation; however, neither manipulation enabled listeners to improve their identification of the target vowel. Taken together with earlier studies, the results suggest that retroactive streaming may require large spectral differences between concurrent sources and thus may not provide a robust segregation cue for natural broadband sounds such as speech. PMID:26451598

  6. Toward a Systematic Evaluation of Vowel Target Events across Speech Tasks

    ERIC Educational Resources Information Center

    Kuo, Christina

    2011-01-01

    The core objective of this study was to examine whether acoustic variability of vowel production in American English, across speaking tasks, is systematic. Ten male speakers who spoke a relatively homogeneous Wisconsin dialect produced eight monophthong vowels (in hVd and CVC contexts) in four speaking tasks, including clear-speech, citation form,…

  7. A Longitudinal Study of Very Young Children's Vowel Production

    ERIC Educational Resources Information Center

    McGowan, Rebecca W.; McGowan, Richard S.; Denny, Margaret; Nittrouer, Susan

    2014-01-01

    Purpose: Ecologically realistic, spontaneous, adult-directed, longitudinal speech data of young children were described by acoustic analyses. Method: The first 2 formant frequencies of vowels produced by 6 children from different American English dialect regions were analyzed from ages 18 to 48 months. The vowels were from largely conversational…

  8. Towards a continuous population model for natural language vowel shift.

    PubMed

    Shipman, Patrick D; Faria, Sérgio H; Strickland, Christopher

    2013-09-01

    The Great English Vowel Shift of 16th-19th centuries and the current Northern Cities Vowel Shift are two examples of collective language processes characterized by regular phonetic changes, that is, gradual changes in vowel pronunciation over time. Here we develop a structured population approach to modeling such regular changes in the vowel systems of natural languages, taking into account learning patterns and effects such as social trends. We treat vowel pronunciation as a continuous variable in vowel space and allow for a continuous dependence of vowel pronunciation in time and age of the speaker. The theory of mixtures with continuous diversity provides a framework for the model, which extends the McKendrick-von Foerster equation to populations with age and phonetic structures. We develop the general balance equations for such populations and propose explicit expressions for the factors that impact the evolution of the vowel pronunciation distribution. For illustration, we present two examples of numerical simulations. In the first one we study a stationary solution corresponding to a state of phonetic equilibrium, in which speakers of all ages share a similar phonetic profile. We characterize the variance of the phonetic distribution in terms of a parameter measuring a ratio of phonetic attraction to dispersion. In the second example we show how vowel shift occurs upon starting with an initial condition consisting of a majority pronunciation that is affected by an immigrant minority with a different vowel pronunciation distribution. The approach developed here for vowel systems may be applied also to other learning situations and other time-dependent processes of cognition in self-interacting populations, like opinions or perceptions. PMID:23624180
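
    For reference, the classical McKendrick-von Foerster balance law for an age-structured population density n(a, t) is, in LaTeX notation,

        \[
          \frac{\partial n}{\partial t} + \frac{\partial n}{\partial a} = -\mu(a)\, n(a,t),
          \qquad
          n(0,t) = \int_0^\infty \beta(a)\, n(a,t)\, da ,
        \]

    with mortality \mu and birth rate \beta. A schematic form of the phonetic extension described above (an illustrative assumption, not necessarily the authors' exact balance equations) adds a vowel-space coordinate x, a drift term v for phonetic attraction, and a diffusion term D for dispersion:

        \[
          \frac{\partial n}{\partial t} + \frac{\partial n}{\partial a}
          + \frac{\partial}{\partial x}\bigl(v(a,x,t)\, n\bigr)
          = D\,\frac{\partial^{2} n}{\partial x^{2}} - \mu(a)\, n .
        \]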

  9. Acoustic containerless processing module for material research. [Space Shuttle experiments

    NASA Technical Reports Server (NTRS)

    Lagomarsini, G. C.; Wang, T.

    1979-01-01

    In the Shuttle space processing program, the melt is formed within a container without physically contacting the container walls. The present paper deals with a high-temperature acoustic containerless processing module currently under development for early OSTA Shuttle flights. The manipulation capabilities of this module are expected to meet the requirements of a wide variety of experiments. Some novel techniques in optics and control are discussed.

  10. The phonological function of vowels is maintained at fundamental frequencies up to 880 Hz.

    PubMed

    Friedrichs, Daniel; Maurer, Dieter; Dellwo, Volker

    2015-07-01

    In a between-subject perception task, listeners either identified full words or vowels isolated from these words at F0s between 220 and 880 Hz. They received two written words as response options (minimal pair with the stimulus vowel in contrastive position). Listeners' sensitivity (A') was extremely high in both conditions at all F0s, showing that the phonological function of vowels can also be maintained at high F0s. This indicates that vowel sounds may carry strong acoustic cues departing from common formant frequencies at high F0s and that listeners do not rely on consonantal context phenomena for their identification performance. PMID:26233058

  11. Assessing acoustic communication active space in the Lusitanian toadfish.

    PubMed

    Alves, Daniel; Amorim, M Clara P; Fonseca, Paulo J

    2016-04-15

    The active space of a signal is an important concept in acoustic communication as it has implications for the function and evolution of acoustic signals. However, it remains mostly unknown for fish as it has been measured in only a restricted number of species. We combined physiological and sound propagation approaches to estimate the communication range of the Lusitanian toadfish's (Halobatrachus didactylus) advertisement sound, the boatwhistle (BW). We recorded BWs at different distances from vocalizing fish in a natural nesting site at ca. 2-3 m depth. We measured the representation of these increasingly attenuated BWs in the auditory pathway through the auditory evoked potential (AEP) technique. These measurements point to a communication range of between 6 and 13 m, depending on the spectral characteristics of the BW. A similar communication range (ca. 8 m) was derived from comparing sound attenuation at selected frequencies with auditory sensitivity. This is one of the few studies to combine auditory measurements with sound propagation to estimate the active space of acoustic signals in fish. We emphasize the need in future studies for estimates of active space to take informational masking into account. PMID:26896547
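
    As a back-of-envelope companion to the propagation-based estimate above, the sketch below converts a source level, a receiver threshold, and an assumed geometric spreading loss into an active-space radius. All numbers and the spreading coefficient are placeholders; real shallow-water propagation (and the AEP-based estimate in the study) is considerably more involved:

        # Active-space radius under simple geometric spreading: TL = k * log10(r / r0).
        source_level_db = 130.0   # dB re 1 uPa at r0 = 1 m (hypothetical)
        threshold_db    = 110.0   # receiver detection threshold at the call's main frequency (hypothetical)
        k               = 15.0    # spreading coefficient: 20 = spherical, 10 = cylindrical

        r0 = 1.0
        r_max = r0 * 10 ** ((source_level_db - threshold_db) / k)
        print(f"estimated active-space radius ~ {r_max:.0f} m")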

  12. Articulatory Distinctiveness of Vowels and Consonants: A Data-Driven Approach

    PubMed Central

    Wang, Jun; Green, Jordan R.; Samal, Ashok; Yunusova, Yana

    2015-01-01

    Purpose The goal of this project was to quantify the articulatory distinctiveness of eight major English vowels and eleven English consonants based on tongue and lip movement time series data using a data-driven approach. Method Tongue and lip movements of eight vowels and eleven consonants from ten healthy talkers were collected. First, classification accuracies were obtained using two complementary approaches: Procrustes analysis and support vector machine. Procrustes distance was then used to measure the articulatory distinctiveness among vowels and consonants. Finally, the distance (distinctiveness) matrices of different vowel pairs and consonant pairs were used to derive articulatory vowel and consonant spaces using multi-dimensional scaling. Results Vowel classification accuracies of 91.67% and 89.05% and consonant classification accuracies of 91.37% and 88.94% were obtained using Procrustes analysis and support vector machine, respectively. Articulatory vowel and consonant spaces were derived based on the pairwise Procrustes distances. Conclusion The articulatory vowel space derived in this study resembled the long-standing descriptive articulatory vowel space defined by tongue height and advancement. The articulatory consonant space was consistent with feature-based classification of English consonants. The derived articulatory vowel and consonant spaces may have clinical implications including serving as an objective measure of the severity of articulatory impairment. PMID:23838988
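
    The two analysis steps named in the Method (pairwise Procrustes distances between articulatory trajectories, then multidimensional scaling of the distance matrix) can be sketched as below, assuming SciPy and scikit-learn and dummy trajectories rather than the study's data or exact pipeline:

        # Procrustes disparities between movement trajectories, then 2-D MDS.
        import numpy as np
        from scipy.spatial import procrustes
        from sklearn.manifold import MDS

        rng = np.random.default_rng(2)
        trajectories = {v: rng.normal(size=(50, 2)) for v in ["i", "ae", "a", "u"]}  # time x (x, y), dummy

        vowels = list(trajectories)
        dist = np.zeros((len(vowels), len(vowels)))
        for i in range(len(vowels)):
            for j in range(i + 1, len(vowels)):
                _, _, disparity = procrustes(trajectories[vowels[i]], trajectories[vowels[j]])
                dist[i, j] = dist[j, i] = disparity

        space = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dist)
        print(dict(zip(vowels, space.round(3))))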

  13. Models of vowel perception

    NASA Astrophysics Data System (ADS)

    Molis, Michelle R.

    2002-05-01

    The debate continues regarding the efficacy of formant-based versus whole spectrum models of vowel perception. Categorization data were obtained for a set of synthetic steady-state vowels and were used to evaluate both types of models. The models tested included various combinations of formant frequencies and amplitudes, principal components derived from critical-band spectra, and perceptually scaled LPC cepstral coefficients. The stimuli were 54 five-formant synthesized vowels that varied orthogonally in F2 (1081-2120 Hz) and F3 (1268-2783 Hz) frequency in equal 0.8 Bark steps. Fundamental frequency contour, F1 (455 Hz), F4 (3250 Hz), F5 (3700 Hz), and duration (225 ms) were held constant across all stimuli. Twelve speakers of American English (Central Texas dialect) categorized the stimuli as the vowels /I/, /U/, or /ɛ/. Results indicate that formant frequencies provided the best account of the data only when nonlinear terms were also included in the analysis. The critical-band principal components also performed reasonably well. While the principle of parsimony would suggest that formant frequencies offer the most appropriate description of vowels, the relative success of a richer, more flexible and more neurophysiologically implementable model still must be taken into consideration. [Work supported by NIDCD.]

  14. Perception of sinewave vowels.

    PubMed

    Hillenbrand, James M; Clark, Michael J; Baer, Carter A

    2011-06-01

    There is a significant body of research examining the intelligibility of sinusoidal replicas of natural speech. Discussion has followed about what the sinewave speech phenomenon might imply about the mechanisms underlying phonetic recognition. However, most of this work has been conducted using sentence material, making it unclear what the contributions are of listeners' use of linguistic constraints versus lower level phonetic mechanisms. This study was designed to measure vowel intelligibility using sinusoidal replicas of naturally spoken vowels. The sinusoidal signals were modeled after 300 /hVd/ syllables spoken by men, women, and children. Students enrolled in an introductory phonetics course served as listeners. Recognition rates for the sinusoidal vowels averaged 55%, which is much lower than the ∼95% intelligibility of the original signals. Attempts to improve performance using three different training methods met with modest success, with post-training recognition rates rising by ∼5-11 percentage points. Follow-up work showed that more extensive training produced further improvements, with performance leveling off at ∼73%-74%. Finally, modeling work showed that a fairly simple pattern-matching algorithm trained on naturally spoken vowels classified sinewave vowels with 78.3% accuracy, showing that the sinewave speech phenomenon does not necessarily rule out template matching as a mechanism underlying phonetic recognition. PMID:21682420
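
    The paper's pattern-matching algorithm is not spelled out here, but one simple template-matching scheme of the kind it suggests is a nearest-centroid classifier over formant features, trained on natural vowels and applied to new tokens. The sketch below uses dummy training data and is only an illustration of that idea, not the study's model:

        # Nearest-centroid template matching on (F1, F2) features.
        import numpy as np
        from sklearn.neighbors import NearestCentroid
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(3)
        centers = {"i": (300, 2300), "ae": (650, 1700), "u": (350, 900)}          # Hz, illustrative
        X_train = np.vstack([rng.normal(c, (30, 80), size=(40, 2)) for c in centers.values()])
        y_train = np.repeat(list(centers), 40)

        model = make_pipeline(StandardScaler(), NearestCentroid()).fit(X_train, y_train)
        print(model.predict([[320, 2200], [600, 1750]]))                          # classify new tokens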

  15. Reading skills and the discrimination of English vowel contrasts by bilingual Spanish/English-speaking children: Is there a correlation?

    NASA Astrophysics Data System (ADS)

    Levey, Sandra

    2005-04-01

    This study examined the discrimination of English vowel contrasts in real and novel word-pairs by 21 children: 11 bilingual Spanish/English- and 10 monolingual English-speaking children, 8-12 years of age (M = 10;6, Mdn = 10;4). The goal was to determine whether children with poor reading skills had difficulty with discrimination, an essential factor in reading abilities. A categorical discrimination task was used in an ABX discrimination paradigm: A (the first word in the sequence) and B (the second word in the sequence) were different stimuli, and X (the third word in the sequence) was identical to either A or B. Stimuli were produced by one of three different speakers. Seventy-two monosyllabic words were presented: 36 real English words and 36 novel words. Vowels were those absent from the inventory of Spanish vowels. Discrimination accuracy for the English-speaking children with good reading skills was significantly greater than for the bilingual children with good or poor reading skills. Early age of acquisition and a greater percentage of time devoted to communication in English played the greatest role in the bilingual children's discrimination and reading skills. The adjacency of vowels in the F1-F2 acoustic space presented the greatest difficulty.

  16. Acoustical analysis and multiple source auralizations of charismatic worship spaces

    NASA Astrophysics Data System (ADS)

    Lee, Richard W.

    2001-05-01

    Because of the spontaneity and high level of call and response, many charismatic churches have verbal and musical communication problems that stem from highly reverberant sound fields, poor speech intelligibility, and muddy music. This research looks at the subjective dimensions of room acoustics perception that affect a charismatic worship space, which are summarized using the acronym RISCS (reverberation, intimacy, strength, coloration, and spaciousness). The method of research is to obtain acoustical measurements for three worship spaces in order to analyze the objective parameters associated with the RISCS subjective dimensions. For the same spaces, binaural room impulse response (BRIR) measurements are done for different receiver positions in order to create an auralization for each position. The subjective descriptors of RISCS are analyzed through the use of listening tests of the three auralized spaces. The results from the measurements and listening tests are analyzed to determine if listeners' perceptions correlate with the objective parameter results, the appropriateness of the subjective parameters for the use of the space, and which parameters seem to take precedence. A comparison of the multi-source auralization to a conventional single-source auralization was done with the mixed-down version of the synchronized multi-track anechoic signals.
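
    Reverberation, the R in RISCS, is usually quantified as a reverberation time estimated from a measured room impulse response; the sketch below uses Schroeder backward integration and a line fit over part of the decay, a generic method rather than the specific procedure used in this research.

    ```python
    # Sketch: reverberation time from a room impulse response (Schroeder method).
    import numpy as np

    def t60_from_impulse_response(h, fs, fit_range_db=(-5.0, -25.0)):
        """Reverberation time (s) from an impulse response sampled at fs."""
        h = np.asarray(h, dtype=float)
        edc = np.cumsum(h[::-1] ** 2)[::-1]                 # energy decay curve
        edc_db = 10.0 * np.log10(edc / edc.max())
        t = np.arange(len(h)) / fs
        hi, lo = fit_range_db                               # fit the -5 to -25 dB portion (T20-style)
        mask = (edc_db <= hi) & (edc_db >= lo)
        slope, _ = np.polyfit(t[mask], edc_db[mask], 1)     # dB per second (negative)
        return -60.0 / slope
    ```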

  17. Formant Centralization Ratio: A Proposal for a New Acoustic Measure of Dysarthric Speech

    ERIC Educational Resources Information Center

    Sapir, Shimon; Ramig, Lorraine O.; Spielman, Jennifer L.; Fox, Cynthia

    2010-01-01

    Purpose: The vowel space area (VSA) has been used as an acoustic metric of dysarthric speech, but with varying degrees of success. In this study, the authors aimed to test an alternative metric to the VSA--the "formant centralization ratio" (FCR), which is hypothesized to more effectively differentiate dysarthric from healthy speech and register…
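
    For reference, both metrics can be computed from corner-vowel formant values. The sketch below uses the shoelace formula for the VSA and the FCR formula usually credited to this proposal, (F2u + F2ɑ + F1i + F1u)/(F2i + F1ɑ); because the abstract above is truncated, treat the exact FCR definition as an assumption to be checked against the article, and the formant values as placeholders.

    ```python
    # Sketch: vowel space area (shoelace) and formant centralization ratio.
    import numpy as np

    def vowel_space_area(corners):
        """corners: {'i': (F1, F2), 'ae': ..., 'a': ..., 'u': ...} in Hz."""
        pts = np.array([corners[v] for v in ("i", "ae", "a", "u")])   # quadrilateral order
        x, y = pts[:, 0], pts[:, 1]
        return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

    def formant_centralization_ratio(corners):
        f1 = {v: corners[v][0] for v in corners}
        f2 = {v: corners[v][1] for v in corners}
        return (f2["u"] + f2["a"] + f1["i"] + f1["u"]) / (f2["i"] + f1["a"])

    corners = {"i": (300.0, 2300.0), "ae": (700.0, 1800.0),
               "a": (750.0, 1100.0), "u": (350.0, 900.0)}
    print(vowel_space_area(corners), formant_centralization_ratio(corners))
    ```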

  18. Indexical properties influence time-varying amplitude and fundamental frequency contributions of vowels to sentence intelligibility

    PubMed Central

    Fogerty, Daniel

    2015-01-01

    The present study investigated how non-linguistic, indexical information about talker identity interacts with contributions to sentence intelligibility by the time-varying amplitude (temporal envelope) and fundamental frequency (F0). Young normal-hearing adults listened to sentences that preserved the original consonants but replaced the vowels with a single vowel production. This replacement vowel selectively preserved amplitude or F0 cues of the original vowel, but replaced cues to phonetic identity. Original vowel duration was always preserved. Three experiments investigated indexical contributions by replacing vowels with productions from the same or different talker, or by acoustically morphing the original vowel. These stimulus conditions investigated how vowel suprasegmental and indexical properties interact and contribute to intelligibility independently from phonetic information. Results demonstrated that indexical properties influence the relative contribution of suprasegmental properties to sentence intelligibility. F0 variations are particularly important in the presence of conflicting indexical information. Temporal envelope modulations significantly improve sentence intelligibility, but are enhanced when either indexical or F0 cues are available. These findings suggest that F0 and other indexical cues may facilitate perceptually grouping suprasegmental properties of vowels with the remainder of the sentence. Temporal envelope modulations of vowels may contribute to intelligibility once they are successfully integrated with the preserved signal. PMID:26543276
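
    The temporal envelope cue described above is commonly obtained with a Hilbert transform followed by low-pass smoothing, and can then be imposed on a replacement vowel by amplitude modulation; the sketch below shows that generic idea (the cutoff and other parameters are illustrative, not the study's processing chain).

    ```python
    # Sketch: extract a vowel's temporal envelope and impose it on a replacement vowel.
    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt

    def temporal_envelope(x, fs, cutoff_hz=30.0):
        """Slowly varying amplitude envelope of a vowel segment."""
        env = np.abs(hilbert(x))                      # instantaneous amplitude
        b, a = butter(4, cutoff_hz / (fs / 2.0))      # low-pass keeps slow modulations only
        return filtfilt(b, a, env)

    def impose_envelope(replacement, env):
        """Amplitude-modulate a replacement vowel with the original vowel's envelope."""
        env = env / (np.max(env) + 1e-12)
        n = min(len(replacement), len(env))
        return replacement[:n] * env[:n]
    ```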

  19. Target spectral, dynamic spectral, and duration cues in infant perception of German vowels.

    PubMed

    Bohn, O S; Polka, L

    2001-07-01

    Previous studies of vowel perception have shown that adult speakers of American English and of North German identify native vowels by exploiting at least three types of acoustic information contained in consonant-vowel-consonant (CVC) syllables: target spectral information reflecting the articulatory target of the vowel, dynamic spectral information reflecting CV- and -VC coarticulation, and duration information. The present study examined the contribution of each of these three types of information to vowel perception in prelingual infants and adults using a discrimination task. Experiment 1 examined German adults' discrimination of four German vowel contrasts (see text), originally produced in /dVt/ syllables, in eight experimental conditions in which the type of vowel information was manipulated. Experiment 2 examined German-learning infants' discrimination of the same vowel contrasts using a comparable procedure. The results show that German adults and German-learning infants appear able to use either dynamic spectral information or target spectral information to discriminate contrasting vowels. With respect to duration information, the removal of this cue selectively affected the discriminability of two of the vowel contrasts for adults. However, for infants, removal of contrastive duration information had a larger effect on the discrimination of all contrasts tested. PMID:11508975

  20. Acoustic Emission Detection of Impact Damage on Space Shuttle Structures

    NASA Technical Reports Server (NTRS)

    Prosser, William H.; Gorman, Michael R.; Madaras, Eric I.

    2004-01-01

    The loss of the Space Shuttle Columbia as a result of impact damage from foam debris during ascent has led NASA to investigate the feasibility of on-board impact detection technologies. Acoustic emission (AE) sensing has been utilized to monitor a wide variety of impact conditions on Space Shuttle components, ranging from insulating foam, ablator materials, and ice at ascent velocities to simulated hypervelocity micrometeoroid and orbital debris impacts. Impact testing has been performed on both reinforced carbon composite leading-edge materials and Shuttle tile materials on representative aluminum wing structures. Results of these impact tests will be presented with a focus on the acoustic emission sensor responses to these impact conditions. These tests have demonstrated the potential of employing an on-board Shuttle impact detection system. We will describe the present plans for implementation of an initial, very low frequency acoustic impact sensing system using pre-existing flight-qualified hardware. The details of an accompanying flight measurement system to assess the Shuttle's acoustic background noise environment as a function of frequency will be described. The background noise assessment is being performed to optimize the frequency range of sensing for a planned future upgrade to the initial impact sensing system.

  1. Acoustics in the Worship Space: Goals and Strategies

    NASA Astrophysics Data System (ADS)

    Crist, Ernest Vincent, III

    The act of corporate worship, though encompassing nearly all the senses, is primarily aural. Since the transmission process is aural, the experiences must be heard with sufficient volume and quality in order to be effective. The character and magnitude of the sound-producing elements are significant in the process, but it is the nature of the enclosing space that is even more crucial. Every building will, by its size, shape, and materials, act upon all sounds produced, affecting not only the amount, but also the quality of sound reaching the listeners' ears. The purpose of this project was to determine appropriate acoustical goals for worship spaces, and to propose design strategies to achieve these goals. Acoustic goals were determined by examination of the results of previously conducted subjective preference studies, computer plotting of virtual sources, and experimentation in actual spaces. Determinations also take into account the sensory inhibition factors in the processing of sound by the human auditory system. Design strategies incorporate aspects of placement of performing forces, the geometry of the enclosing space, materials and furnishings, and noise control.

  2. The Effect of Stress and Speech Rate on Vowel Coarticulation in Catalan Vowel-Consonant-Vowel Sequences

    ERIC Educational Resources Information Center

    Recasens, Daniel

    2015-01-01

    Purpose: The goal of this study was to ascertain the effect of changes in stress and speech rate on vowel coarticulation in vowel-consonant-vowel sequences. Method: Data on second formant coarticulatory effects as a function of changing /i/ versus /a/ were collected for five Catalan speakers' productions of vowel-consonant-vowel sequences with the…

  3. Virtual Acoustic Space: Space perception for the blind

    NASA Astrophysics Data System (ADS)

    Rodríguez-Ramos, Luis F.

    2011-06-01

    This R&D project implements a new way of perceiving the three-dimensional surrounding space, based exclusively on sounds and thus especially useful for the blind. The innate capability of locating sounds, the externalization of sounds played through headphones, and the machine capture of the 3D environment are the technological pillars used for this purpose. They are analysed, and a summary of their main requirements is presented. A number of laboratory facilities and portable prototypes are described, together with their main characteristics.

  4. Dynamic Articulation of Vowels.

    ERIC Educational Resources Information Center

    Morgan, Willie B.

    1979-01-01

    A series of exercises and a theory of vowel descriptions can help minimize speakers' problems of excessive tension, awareness of tongue height, and tongue retraction. Eight exercises to provide Forward Facial Stretch neutralize tensions in the face and vocal resonator and their effect on the voice. Three experiments in which sounds are repeated…

  5. Acoustics

    NASA Technical Reports Server (NTRS)

    Goodman, Jerry R.; Grosveld, Ferdinand

    2007-01-01

    The acoustic environment in space operations must be maintained at manageable levels so that the crewperson can remain safe, functional, effective, and reasonably comfortable. High acoustic levels can produce temporary or permanent hearing loss, or cause other physiological symptoms such as auditory pain, headaches, discomfort, strain in the vocal cords, or fatigue. Noise is defined as undesirable sound. Excessive noise may result in psychological effects such as irritability, inability to concentrate, decrease in productivity, annoyance, errors in judgment, and distraction. A noisy environment can also result in the inability to sleep, or to sleep well. Elevated noise levels can affect the ability to communicate, understand what is being said, and hear what is going on in the environment; they can also degrade crew performance and operations and create habitability concerns. Superfluous noise emissions can also make it impossible to hear alarms or other important auditory cues such as an equipment malfunction. Recent space flight experience, evaluations of the requirements in crew habitable areas, and lessons learned (Goodman 2003; Allen and Goodman 2003; Pilkinton 2003; Grosveld et al. 2003) show the importance of maintaining an acceptable acoustic environment. This is best accomplished by having a high-quality set of limits/requirements early in the program, by "designing in" acoustics in the development of hardware and systems, and by monitoring, testing and verifying the levels to ensure that they are acceptable.

  6. Effects of Talker Variability on Vowel Recognition in Cochlear Implants

    ERIC Educational Resources Information Center

    Chang, Yi-ping; Fu, Qian-Jie

    2006-01-01

    Purpose: To investigate the effects of talker variability on vowel recognition by cochlear implant (CI) users and by normal-hearing (NH) participants listening to 4-channel acoustic CI simulations. Method: CI users were tested with their clinically assigned speech processors. For NH participants, 3 CI processors were simulated, using different…

  7. Audiovisual Vowel Monitoring and the Word Superiority Effect in Children

    ERIC Educational Resources Information Center

    Fort, Mathilde; Spinelli, Elsa; Savariaux, Christophe; Kandel, Sonia

    2012-01-01

    The goal of this study was to explore whether viewing the speaker's articulatory gestures contributes to lexical access in children (ages 5-10) and in adults. We conducted a vowel monitoring task with words and pseudo-words in audio-only (AO) and audiovisual (AV) contexts with white noise masking the acoustic signal. The results indicated that…

  8. Underwater acoustic source localization using closely spaced hydrophone pairs

    NASA Astrophysics Data System (ADS)

    Sim, Min Seop; Choi, Bok-Kyoung; Kim, Byoung-Nam; Lee, Kyun Kyung

    2016-07-01

    Underwater sound source position is typically determined using a line array; however, performance degradation occurs in a multipath environment, which generates incoherent signals. In this paper, a hydrophone array is proposed for underwater source position estimation that is robust to a multipath environment. The array is composed of three pairs of sensors placed on the same line. The source position is estimated by performing generalized cross-correlation (GCC). The proposed system is not affected by multipath time delays because of the short distance between the closely spaced sensors in each pair. The validity of the array is confirmed by simulation using acoustic signals synthesized by eigenrays.
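
    Generalized cross-correlation, the core operation mentioned above, estimates the inter-sensor time delay from the peak of a weighted cross-spectrum; a minimal PHAT-weighted version (one common GCC weighting, not necessarily the one used in this paper) is sketched below.

    ```python
    # Sketch: GCC-PHAT time-delay estimate between two sensor signals.
    import numpy as np

    def gcc_phat_delay(x, y, fs):
        """Estimated time delay (s) between two sensor signals."""
        n = len(x) + len(y)
        X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
        R = X * np.conj(Y)
        R /= np.abs(R) + 1e-15                                       # PHAT weighting
        cc = np.fft.irfft(R, n)
        max_shift = n // 2
        cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))   # center zero lag
        return (np.argmax(np.abs(cc)) - max_shift) / fs
    ```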

  9. Resolution of lateral acoustic space assessed by electroencephalography and psychoacoustics

    PubMed Central

    Bennemann, Jan; Freigang, Claudia; Schröger, Erich; Rübsamen, Rudolf; Richter, Nicole

    2013-01-01

    The encoding of auditory spatial acuity (measured as the precision with which two spatially distinct stimuli can be distinguished) by neural circuits in both auditory cortices is a matter of ongoing research. Here, the event-related potential (ERP) mismatch negativity (MMN), a sensitive indicator of preattentive auditory change detection, was used to tap into the underlying mechanism of cortical representation of auditory spatial information. We characterized the MMN response as a function of the degree of spatial deviance in lateral acoustic space using a passive oddball paradigm. Two stimulation conditions (SCs), focusing on the mid- and far-lateral acoustic space, were considered: (1) a 65° left standard position with deviant positions at 70, 75, and 80°; and (2) a 95° left standard position with deviant positions at 90, 85, and 80°. Additionally, behavioral data on the minimum audible angle (MAA) were acquired for the respective standard positions (65° and 95° left) to quantify spatial discrimination in separating distinct sound sources. The two measurements revealed the link between the (preattentive) MMN response and the (attentive) behavioral threshold. At 65°, spatial deviations as small as 5° reliably elicited MMNs, and the MMN amplitudes increased monotonically as a function of spatial deviation. At 95°, spatial deviations of 15° were necessary to elicit a valid MMN. The behavioral data, however, yielded no difference in mean MAA thresholds for the 65° and 95° positions. The different effects of laterality on MMN responses and MAA thresholds suggest a role of spatial selective attention mechanisms that is particularly relevant in active discrimination of neighboring sound sources, especially in the lateral acoustic space. PMID:23781211

  10. Characterization of space dust using acoustic impact detection.

    PubMed

    Corsaro, Robert D; Giovane, Frank; Liou, Jer-Chyi; Burchell, Mark J; Cole, Michael J; Williams, Earl G; Lagakos, Nicholas; Sadilek, Albert; Anderson, Christopher R

    2016-08-01

    This paper describes studies leading to the development of an acoustic instrument for measuring properties of micrometeoroids and other dust particles in space. The instrument uses a pair of easily penetrated membranes separated by a known distance. Sensors located on these films detect the transient acoustic signals produced by particle impacts. The arrival times of these signals at the sensor locations are used in a simple multilateration calculation to measure the impact coordinates on each film. Particle direction and speed are found using these impact coordinates and the known membrane separations. This ability to determine particle speed, direction, and time of impact provides the information needed to assign the particle's orbit and identify its likely origin. In many cases additional particle properties can be estimated from the signal amplitudes, including approximate diameter and (for small particles) some indication of composition/morphology. Two versions of this instrument were evaluated in this study. Fiber optic displacement sensors are found advantageous when very thin membranes can be maintained in tension (solar sails, lunar surface). Piezoelectric strain sensors are preferred for thicker films without tension (long duration free flyers). The latter was selected for an upcoming installation on the International Space Station. PMID:27586768
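
    The geometry behind the two-membrane measurement described above is simple: given the impact coordinates on each film, the film separation, and the two impact times, particle speed and direction follow directly. The sketch below illustrates that calculation with hypothetical variable names.

    ```python
    # Sketch: particle speed and direction from impact points on two parallel films.
    import numpy as np

    def particle_vector(p1, t1, p2, t2, separation):
        """p1, p2: (x, y) impact points on the first and second film (same units
        as `separation`); t1, t2: impact times in seconds; separation: film spacing."""
        r1 = np.array([p1[0], p1[1], 0.0])
        r2 = np.array([p2[0], p2[1], separation])
        d = r2 - r1
        speed = np.linalg.norm(d) / (t2 - t1)   # path length over time of flight
        direction = d / np.linalg.norm(d)       # unit vector of travel
        return speed, direction
    ```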

  11. Effects of local lexical competition and regional dialect on vowel production.

    PubMed

    Clopper, Cynthia G; Tamati, Terrin N

    2014-07-01

    Global measures of lexical competition, such as lexical neighborhood density, assume that all phonological contrasts contribute equally to competition. However, effects of local phonetic similarity have also been observed in speech production processes, suggesting that some contrasts may lead to greater competition than others. In the current study, the effect of local lexical competition on vowel production was examined across two dialects of American English that differ in the phonetic similarity of the low-front and low-back vowel pairs. Results revealed a significant interaction between regional dialect and local lexical competition on the acoustic distance within each vowel pair. Local lexical contrast led to greater acoustic distance between vowels, as expected, but this effect was significantly enhanced for acoustically similar dialect-specific variants. These results were independent of global neighborhood density, suggesting that local lexical competition may contribute to the realization of sociolinguistic variation and phonological change. PMID:24993188
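
    The "acoustic distance within each vowel pair" can be operationalized as the Euclidean distance between the category means in the F1 x F2 plane, as sketched below; the study may well have used normalized formant values rather than raw Hz, so treat the units as illustrative.

    ```python
    # Sketch: acoustic distance between two vowel categories in the F1 x F2 plane.
    import numpy as np

    def pair_distance(tokens_a, tokens_b):
        """tokens_a, tokens_b: arrays of shape (n_tokens, 2) holding (F1, F2) per token."""
        return float(np.linalg.norm(tokens_a.mean(axis=0) - tokens_b.mean(axis=0)))

    # Hypothetical low-front pair, values in Hz:
    d = pair_distance(np.array([[690., 1780.], [710., 1820.]]),
                      np.array([[750., 1090.], [760., 1120.]]))
    ```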

  12. Are vowel errors influenced by consonantal context in the speech of persons with aphasia?

    NASA Astrophysics Data System (ADS)

    Gelfer, Carole E.; Bell-Berti, Fredericka; Boyle, Mary

    2001-05-01

    The literature suggests that vowels and consonants may be affected differently in the speech of persons with conduction aphasia (CA) or nonfluent aphasia with apraxia of speech (AOS). Persons with CA have shown similar error rates across vowels and consonants, while those with AOS have shown more errors for consonants than vowels. These data have been interpreted to suggest that consonants have greater gestural complexity than vowels. However, recent research [M. Boyle et al., Proc. International Cong. Phon. Sci., 3265-3268 (2003)] does not support this interpretation: persons with AOS and CA both had a high proportion of vowel errors, and vowel errors almost always occurred in the context of consonantal errors. To examine the notion that vowels are inherently less complex than consonants and are differentially affected in different types of aphasia, vowel production in different consonantal contexts for speakers with AOS or CA was examined. The target utterances, produced in carrier phrases, were bVC and bV syllables, allowing us to examine whether vowel production is influenced by consonantal context. Listener judgments were obtained for each token, and error productions were grouped according to the intended utterance and error type. Acoustical measurements were made from spectrographic displays.

  13. Durational and spectral differences in American English vowels: dialect variation within and across regions.

    PubMed

    Fridland, Valerie; Kendall, Tyler; Farrington, Charlie

    2014-07-01

    Spectral differences among varieties of American English have been widely studied, typically recognizing three major regionally diagnostic vowel shift patterns [Labov, Ash, and Boberg (2006). The Atlas of North American English: Phonetics, Phonology and Sound Change (De Gruyter, Berlin)]. Durational variability across dialects, on the other hand, has received relatively little attention. This paper investigates to what extent regional differences in vowel duration are linked with spectral changes taking place in the Northern, Western, and Southern regions of the U.S. Using F1/F2 and duration measures, the durational correlates of the low back vowel merger, characteristic of Western dialects, and the acoustic reversals of the front tense/lax vowels, characteristic of Southern dialects, are investigated. Results point to a positive correlation between spectral overlap and vowel duration for Northern and Western speakers, suggesting that both F1/F2 measures and durational measures are used for disambiguation of vowel quality. The findings also indicate that, regardless of region, a durational distinction maintains the contrast between the low back vowel classes, particularly in cases of spectral merger. Surprisingly, Southerners show a negative correlation for the vowel shifts most defining of contemporary Southern speech, suggesting that neither spectral position nor durational measures are the most relevant cues for vowel quality in the South. PMID:24993218

  14. Cross-linguistic studies of children’s and adults’ vowel spaces

    PubMed Central

    Chung, Hyunju; Kong, Eun Jong; Edwards, Jan; Weismer, Gary; Fourakis, Marios; Hwang, Youngdeok

    2012-01-01

    This study examines cross-linguistic variation in the location of shared vowels in the vowel space across five languages (Cantonese, American English, Greek, Japanese, and Korean) and three age groups (2-year-olds, 5-year-olds, and adults). The vowels /a/, /i/, and /u/ were elicited in familiar words using a word repetition task. The productions of target words were recorded and transcribed by native speakers of each language. For correctly produced vowels, first and second formant frequencies were measured. In order to remove the effect of vocal tract size on these measurements, a normalization approach that calculates distance and angular displacement from the speaker centroid was adopted. Language-specific differences in the location of shared vowels in formant space, as well as in the shape of the vowel spaces, were observed for both adults and children. PMID:22280606
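
    The normalization described above re-expresses each token as a distance and an angular displacement from the speaker's own centroid in the F1-F2 plane, which factors out overall vocal tract size; a minimal sketch of that idea follows (the axis conventions are an assumption, not taken from the paper).

    ```python
    # Sketch: centroid-based normalization of one speaker's vowel tokens.
    import numpy as np

    def normalize_to_centroid(f1, f2):
        """f1, f2: arrays of formant values (Hz) for one speaker's vowel tokens."""
        c1, c2 = np.mean(f1), np.mean(f2)
        dist = np.hypot(f1 - c1, f2 - c2)        # distance from the speaker centroid
        angle = np.arctan2(f1 - c1, f2 - c2)     # angular displacement around the centroid
        return dist, angle
    ```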

  15. Invariance in vowel systems.

    PubMed

    Funabashi, Masatoshi

    2015-05-01

    This study applies information geometry of normal distribution to model Japanese vowels on the basis of the first and second formants. The distribution of Kullback-Leibler (KL) divergence and its decomposed components were investigated to reveal the statistical invariance in the vowel system. The results suggest that although significant variability exists in individual KL divergence distributions, the population distribution tends to converge into a specific log-normal distribution. This distribution can be considered as an invariant distribution for the standard-Japanese speaking population. Furthermore, it was revealed that the mean and variance components of KL divergence are linearly related in the population distribution. The significance of these invariant features is discussed. PMID:25994716
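
    For univariate normal models of F1 or F2, the KL divergence has a closed form that separates naturally into a mean-related and a variance-related term; the sketch below shows one such decomposition, which may differ in detail from the one used in the paper, and the example values are placeholders.

    ```python
    # Sketch: KL divergence between two univariate normal vowel distributions.
    import numpy as np

    def kl_normal(mu0, s0, mu1, s1):
        """KL( N(mu0, s0^2) || N(mu1, s1^2) ), split into mean and variance terms."""
        mean_term = (mu0 - mu1) ** 2 / (2.0 * s1 ** 2)
        var_term = np.log(s1 / s0) + s0 ** 2 / (2.0 * s1 ** 2) - 0.5
        return mean_term + var_term, mean_term, var_term

    # Hypothetical F1 distributions (Hz) for two speakers' tokens of one vowel:
    total, mean_part, var_part = kl_normal(500.0, 60.0, 550.0, 80.0)
    ```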

  16. Final Vowel-Consonant-e.

    ERIC Educational Resources Information Center

    Burmeister, Lou E.

    The utility value of the final vowel-consonant-e phonic generalization was examined using 2,715 common English words. When the vowel was defined as a single-vowel, the consonant as a single-consonant, and the final e as a single-e the generalization was found to be highly useful, contrary to other recent findings. Using the total sample of 2,715…

  17. Vowel reduction across tasks for male speakers of American English.

    PubMed

    Kuo, Christina; Weismer, Gary

    2016-07-01

    This study examined acoustic variation of vowels within speakers across speech tasks. The overarching goal of the study was to understand within-speaker variation as one index of the range of normal speech motor behavior for American English vowels. Ten male speakers of American English performed four speech tasks including citation form sentence reading with a clear-speech style (clear-speech), citation form sentence reading (citation), passage reading (reading), and conversational speech (conversation). Eight monophthong vowels in a variety of consonant contexts were studied. Clear-speech was operationally defined as the reference point for describing variation. Acoustic measures associated with the conventions of vowel targets were obtained and examined. These included temporal midpoint formant frequencies for the first three formants (F1, F2, and F3) and the derived Euclidean distances in the F1-F2 and F2-F3 planes. Results indicated that reduction toward the center of the F1-F2 and F2-F3 planes increased in magnitude across the tasks in the order of clear-speech, citation, reading, and conversation. The cross-task variation was comparable for all speakers despite fine-grained individual differences. The characteristics of systematic within-speaker acoustic variation across tasks have potential implications for the understanding of the mechanisms of speech motor control and motor speech disorders. PMID:27475161

  18. The Interplay between Input and Initial Biases: Asymmetries in Vowel Perception during the First Year of Life

    ERIC Educational Resources Information Center

    Pons, Ferran; Albareda-Castellot, Barbara; Sebastian-Galles, Nuria

    2012-01-01

    Vowels with extreme articulatory-acoustic properties act as natural referents. Infant perceptual asymmetries point to an underlying bias favoring these referent vowels. However, as language experience is gathered, distributional frequency of speech sounds could modify this initial bias. The perception of the /i/-/e/ contrast was explored in 144…

  19. Call Me Alix, Not Elix: Vowels Are More Important than Consonants in Own-Name Recognition at 5 Months

    ERIC Educational Resources Information Center

    Bouchon, Camillia; Floccia, Caroline; Fux, Thibaut; Adda-Decker, Martine; Nazzi, Thierry

    2015-01-01

    Consonants and vowels differ acoustically and articulatorily, but also functionally: Consonants are more relevant for lexical processing, and vowels for prosodic/syntactic processing. These functional biases could be powerful bootstrapping mechanisms for learning language, but their developmental origin remains unclear. The relative importance of…

  20. A New Acoustic Test Facility at Alcatel Space Test Centre

    NASA Astrophysics Data System (ADS)

    Meurat, A.; Jezequel, L.

    2004-08-01

    Due to the obsolescence of its acoustic test facility, Alcatel Space has initiated the investment in a large acoustic chamber at its test centre located in Cannes, in the south of France. This paper presents the main specification elaborated to design the facility and the solution chosen: it will be located in a dedicated area of the existing test centre and will be based on technical solutions already used in similar facilities around the world. The main structure consists of a chamber linked to an external envelope (a concrete building) through suspensions aimed at decoupling vibration and protecting against seismic risk. The noise generation system is based on the use of Wyle modulators located on the chamber roof. Gaseous nitrogen is produced by a dedicated gas generator developed by Air-Liquide that can deliver a high flow rate with accurate pressure and temperature control. The control and acquisition system is based on the existing solution implemented on the vibration facilities of the test centre. With construction starting in May 2004, the final acceptance tests are planned for April 2005, and the first satellites are planned to be tested in May 2005.

  1. Training Japanese listeners to identify American English vowels

    NASA Astrophysics Data System (ADS)

    Nishi, Kanae; Kewley-Port, Diane

    2005-04-01

    Perception training of phonemes by second-language (L2) learners has been studied primarily using consonant contrasts, where the number of contrasting sounds rarely exceeds five. In order to investigate the effects of stimulus sets, this training study used two conditions: 9 American English vowels covering the entire vowel space (9V), and 3 difficult vowels for problem-focused training (3V). Native speakers of Japanese were trained for nine days. To assess changes in performance due to training, a battery of perception and production tests was given pre- and post-training, as well as 3 months following training. The 9V trainees improved vowel perception on all vowels after training, on average by 23%. Their performance at the 3-month test was slightly worse than at the posttest, but still better than at the pretest. Transfer of the training effect to stimuli spoken by new speakers was observed. The strong response bias observed in the pretest disappeared after training. The preliminary results of the 3V trainees showed substantial improvement only on the trained vowels. The implications of this research for improved training of L2 learners to understand speech will be discussed. [Work supported by NIH-NIDCD DC-006313 & DC-02229.]

  3. Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing

    NASA Technical Reports Server (NTRS)

    Nance, Donald K.; Liever, Peter A.

    2015-01-01

    The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and subsequent propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test (SMAT), conducted at Marshall Space Flight Center (MSFC). The test data quantifies the effectiveness of the SLS IOP suppression system and improves the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.

  4. Perception of steady-state vowels and vowelless syllables by adults and children

    NASA Astrophysics Data System (ADS)

    Nittrouer, Susan

    2005-04-01

    Vowels can be produced as long, isolated, and steady-state, but that is not how they are found in natural speech. Instead, natural speech consists of almost continuously changing (i.e., dynamic) acoustic forms from which mature listeners recover underlying phonetic form. Some theories suggest that children need steady-state information to recognize vowels (and so learn vowel systems), even though that information is sparse in natural speech. The current study examined whether young children can recover vowel targets from dynamic forms, or whether they need steady-state information. Vowel recognition was measured for adults and children (3, 5, and 7 years) for natural productions of /dæd/, /dʊd/, /æ/, and /ʊ/ edited to make six stimulus sets: three dynamic (whole syllables; syllables with the middle 50 percent replaced by a cough; syllables with all but the first and last three pitch periods replaced by a cough), and three steady-state (natural, isolated vowels; reiterated pitch periods from those vowels; reiterated pitch periods from the syllables). Adults scored nearly perfectly on all but the first/last-three-pitch-period stimuli. Children performed nearly perfectly only when the entire syllable was heard, and performed similarly (near 80%) for all other stimuli. Consequently, children need dynamic forms to perceive vowels; steady-state forms are not preferred.

  5. Cross-language comparisons of contextual variation in the production and perception of vowels

    NASA Astrophysics Data System (ADS)

    Strange, Winifred

    2005-04-01

    In the last two decades, a considerable amount of research has investigated second-language (L2) learners' problems with perception and production of non-native vowels. Most studies have been conducted using stimuli in which the vowels are produced and presented in simple, citation-form (lists) monosyllabic or disyllabic utterances. In my laboratory, we have investigated the spectral (static/dynamic formant patterns) and temporal (syllable duration) variation in vowel productions as a function of speech style (list/sentence utterances), speaking rate (normal/rapid), sentence focus (narrow focus/post-focus), and phonetic context (voicing/place of surrounding consonants). Data will be presented for a set of languages that include large and small vowel inventories, stress-, syllable-, and mora-timed prosody, and that vary in the phonological/phonetic function of vowel length, diphthongization, and palatalization. Results show language-specific patterns of contextual variation that affect the cross-language acoustic similarity of vowels. Research on cross-language patterns of perceived phonetic similarity by naive listeners suggests that listeners' knowledge of native-language (L1) patterns of contextual variation influences their L1/L2 similarity judgments and, subsequently, their discrimination of L2 contrasts. Implications of these findings for assessing L2 learners' perception of vowels and for developing laboratory training procedures to improve L2 vowel perception will be discussed. [Work supported by NIDCD.]

  6. Production of a Catalan-specific vowel contrast by early Spanish-Catalan bilinguals.

    PubMed

    Simonet, Miquel

    2011-01-01

    The present study investigates the acoustics (F1 × F2) of Catalan and Spanish mid-back vowels as produced by highly proficient, early Spanish-Catalan bilinguals residing on the island of Majorca, a bilingual speech community. Majorcan Catalan has two phonemic mid-back vowels in stressed positions (/o/ and /ɔ/) while Spanish possesses only one (/o/). Two groups of bilinguals were recruited and asked to produce materials in both languages - one group of Spanish-dominant and one of Catalan-dominant speakers. It was first found that Catalan and Spanish /o/ are virtually indistinguishable. Catalan /ɔ/ is lower and more fronted than the other two vowels. Spanish-dominant bilinguals were found to differ from Catalan-dominant ones in that they did not produce the Catalan-specific /o/-/ɔ/ contrast in their speech; that is, they produced a single, merged Catalan mid-back vowel. A within-subjects analysis of first- and second-language mid-back vowels further suggested, for Spanish-dominant bilinguals, that they had developed a separate vowel category to accommodate their single, merged Catalan mid-back vowel; that is, they possessed a two-category mid-back vowel system, i.e. one for their Spanish /o/ and one for their merged Catalan /o/ + /ɔ/. Potential explanations and theoretical implications are discussed. PMID:21804334

  7. The Vowel Harmony in the Sinhala Language

    ERIC Educational Resources Information Center

    Petryshyn, Ivan

    2005-01-01

    The Sinhala language is characterized by melodic shifting stress or, at its essence, the opposition between long and short vowels, the Ablaut variants of the vowels, and the syllabic alphabet, which, of course, might impact the vowel harmony and can be a feature of all the leveled Indo-European languages. The vowel harmony is a well-known concept in…

  8. Vowel-specific effects in concurrent vowel identification.

    PubMed

    de Cheveigné, A

    1999-07-01

    An experiment investigated the effects of amplitude ratio (-35 to 35 dB in 10-dB steps) and fundamental frequency difference (0%, 3%, 6%, and 12%) on the identification of pairs of concurrent synthetic vowels. Vowels as weak as -25 dB relative to their competitor were easier to identify in the presence of a fundamental frequency difference (delta F0). Vowels as weak as -35 dB were not. Identification was generally the same at delta F0 = 3%, 6%, and 12% for all amplitude ratios: unfavorable amplitude ratios could not be compensated by larger delta F0's. Data for each vowel pair and each amplitude ratio, at delta F0 = 0%, were compared to the spectral envelope of the stimulus at the same ratio, in order to determine which spectral cues determined identification. This information was then used to interpret the pattern of improvement with delta F0 for each vowel pair, to better understand mechanisms of F0-guided segregation. Identification of a vowel was possible in the presence of strong cues belonging to its competitor, as long as cues to its own formants F1 and F2 were prominent. delta F0 enhanced the prominence of a target vowel's cues, even when the spectrum of the target was up to 10 dB below that of its competitor at all frequencies. The results are incompatible with models of segregation based on harmonic enhancement, beats, or channel selection. PMID:10420625

  9. Sparseness of vowel category structure: Evidence from English dialect comparison

    PubMed Central

    Scharinger, Mathias; Idsardi, William J.

    2014-01-01

    Current models of speech perception tend to emphasize either fine-grained acoustic properties or coarse-grained abstract characteristics of speech sounds. We argue for a particular kind of 'sparse' vowel representations and provide new evidence that these representations account for the successful access of the corresponding categories. In an auditory semantic priming experiment, American English listeners made lexical decisions on targets (e.g. load) preceded by semantically related primes (e.g. pack). Changes of the prime vowel that crossed a vowel-category boundary (e.g. peck) were not treated as a tolerable variation, as assessed by a lack of priming, although the phonetic categories of the two different vowels considerably overlap in American English. Compared to the outcome of the same experiment with New Zealand English listeners, where such prime variations were tolerated, our experiment supports the view that phonological representations are important in guiding the mapping process from the acoustic signal to an abstract mental representation. Our findings are discussed with regard to current models of speech perception and recent findings from brain imaging research. PMID:24653528

  10. An ultrasound study of Canadian French rhotic vowels with polar smoothing spline comparisons.

    PubMed

    Mielke, Jeff

    2015-05-01

    This is an acoustic and articulatory study of Canadian French rhotic vowels, i.e., mid front rounded vowels /ø œ̃ œ/ produced with a rhotic perceptual quality, much like English [ɚ] or [ɹ], leading heureux, commun, and docteur to sound like [ɚʁɚ], [kɔmɚ̃], and [dɔktaɹʁ]. Ultrasound, video, and acoustic data from 23 Canadian French speakers are analyzed using several measures of mid-sagittal tongue contours, showing that the low F3 of rhotic vowels is achieved using bunched and retroflex tongue postures and that the articulatory-acoustic mapping of F1 and F2 are rearranged in systems with rhotic vowels. A subset of speakers' French vowels are compared with their English [ɹ]/[ɚ], revealing that the French vowels are consistently less extreme in low F3 and its articulatory correlates, even for the most rhotic speakers. Polar coordinates are proposed as a replacement for Cartesian coordinates in calculating smoothing spline comparisons of mid-sagittal tongue shapes, because they enable comparisons to be roughly perpendicular to the tongue surface, which is critical for comparisons involving tongue root position but appropriate for all comparisons involving mid-sagittal tongue contours. PMID:25994713
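
    The polar-coordinate comparison argued for above amounts to re-expressing each mid-sagittal contour as radius versus angle about a common origin (for ultrasound data, typically a point near the probe), so that spline differences run roughly perpendicular to the tongue surface. A minimal conversion is sketched below; the origin choice is an analyst decision, not something specified here.

    ```python
    # Sketch: convert a mid-sagittal tongue contour to polar coordinates.
    import numpy as np

    def to_polar(x, y, origin):
        """x, y: contour point coordinates; origin: (x0, y0) reference point."""
        dx, dy = np.asarray(x) - origin[0], np.asarray(y) - origin[1]
        theta = np.arctan2(dy, dx)     # angle: the predictor for the smoothing spline
        r = np.hypot(dx, dy)           # radius: the response compared across contours
        return theta, r
    ```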

  11. Investigating interaural frequency-place mismatches via bimodal vowel integration.

    PubMed

    Guérit, François; Santurette, Sébastien; Chalupper, Josef; Dau, Torsten

    2014-01-01

    For patients having residual hearing in one ear and a cochlear implant (CI) in the opposite ear, interaural place-pitch mismatches might be partly responsible for the large variability in individual benefit. Behavioral pitch-matching between the two ears has been suggested as a way to individualize the fitting of the frequency-to-electrode map but is rather tedious and unreliable. Here, an alternative method using two-formant vowels was developed and tested. The interaural spectral shift was inferred by comparing vowel spaces, measured by presenting the first formant (F1) to the nonimplanted ear and the second (F2) on either side. The method was first evaluated with eight normal-hearing listeners and vocoder simulations, before being tested with 11 CI users. Average vowel distributions across subjects showed a similar pattern when presenting F2 on either side, suggesting acclimatization to the frequency map. However, individual vowel spaces with F2 presented to the implant did not allow a reliable estimation of the interaural mismatch. These results suggest that interaural frequency-place mismatches can be derived from such vowel spaces. However, the method remains limited by difficulties in bimodal fusion of the two formants. PMID:25421087

  13. Measurement and Characterization of Space Shuttle Solid Rocket Motor Plume Acoustics

    NASA Technical Reports Server (NTRS)

    Kenny, Robert Jeremy

    2009-01-01

    NASA's models for predicting lift-off acoustics for launch vehicles are currently being updated using several numerical and empirical inputs. One empirical input comes from free-field acoustic data measured at three Space Shuttle Reusable Solid Rocket Motor (RSRM) static firings. The measurements were collected by a joint collaboration between NASA Marshall Space Flight Center, Wyle Labs, and ATK Launch Systems. For the first time, NASA measured large-thrust solid rocket motor plume acoustics for evaluation of both noise sources and acoustic radiation properties. Over sixty acoustic free-field measurements were taken over the three static firings to support evaluation of acoustic radiation near the rocket plume, far-field acoustic radiation patterns, plume acoustic power efficiencies, and apparent noise source locations within the plume. At approximately 67 m off the nozzle centerline and 70 m downstream of the nozzle exit plane, the measured overall sound pressure level of the RSRM was 155 dB. Peak overall levels in the far field were over 140 dB at 300 m and 50 deg off the RSRM thrust centerline. The successful collaboration has yielded valuable data that are being implemented into NASA's lift-off acoustic models, which will then be used to update predictions for Ares I and Ares V lift-off acoustic environments.
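
    The overall sound pressure levels quoted above are root-mean-square pressures expressed in decibels re 20 µPa; the sketch below shows that basic computation for a free-field pressure time series.

    ```python
    # Sketch: overall sound pressure level from a pressure time series.
    import numpy as np

    def oaspl_db(p, p_ref=20e-6):
        """OASPL (dB re 20 uPa) of acoustic pressure samples given in Pa."""
        return 20.0 * np.log10(np.sqrt(np.mean(np.asarray(p) ** 2)) / p_ref)
    ```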

  14. Efficient estimation of decay parameters in acoustically coupled-spaces using slice sampling.

    PubMed

    Jasa, Tomislav; Xiang, Ning

    2009-09-01

    Room-acoustic energy decay analysis of acoustically coupled spaces within the Bayesian framework has proven valuable for architectural acoustics applications. This paper describes an efficient algorithm termed slice sampling Monte Carlo (SSMC) for room-acoustic decay parameter estimation within the Bayesian framework. This work combines the SSMC algorithm and a fast search algorithm in order to efficiently determine decay parameters, their uncertainties, and their inter-relationships with a minimum amount of required user tuning and interaction. The large variations in the posterior probability density functions over multidimensional parameter spaces imply that an adaptive exploration algorithm such as SSMC can have advantages over the existing importance sampling Monte Carlo and Metropolis-Hastings Markov chain Monte Carlo algorithms. This paper discusses implementation of the SSMC algorithm, its initialization, and its convergence using experimental data measured from acoustically coupled spaces. PMID:19739741
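
    Slice sampling itself is a generic MCMC scheme; a minimal univariate stepping-out sampler in the spirit of Neal (2003) is sketched below, applied to a hypothetical one-parameter decay-time posterior. It only illustrates the sampler family, not the authors' SSMC algorithm.

    ```python
    # Sketch: univariate stepping-out slice sampler (after Neal, 2003).
    import numpy as np

    def slice_sample(logp, x0, n_samples, w=1.0, seed=0):
        """Draw n_samples from an unnormalized log-density logp, starting at x0."""
        rng = np.random.default_rng(seed)
        x, out = float(x0), []
        for _ in range(n_samples):
            logy = logp(x) + np.log(rng.uniform())   # auxiliary height under the density
            lo = x - w * rng.uniform()               # random bracket of width w around x
            hi = lo + w
            while logp(lo) > logy:                   # step out until both ends leave the slice
                lo -= w
            while logp(hi) > logy:
                hi += w
            while True:                              # sample inside the bracket, shrinking on rejection
                x_new = rng.uniform(lo, hi)
                if logp(x_new) > logy:
                    x = x_new
                    break
                if x_new < x:
                    lo = x_new
                else:
                    hi = x_new
            out.append(x)
        return np.array(out)

    # Hypothetical use: unnormalized log-posterior over a single decay time (s).
    samples = slice_sample(lambda t: -0.5 * ((t - 1.2) / 0.1) ** 2, x0=1.0, n_samples=500)
    ```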

  15. Changing space and sound: Parametric design and variable acoustics

    NASA Astrophysics Data System (ADS)

    Norton, Christopher William

    This thesis examines the potential for parametric design software to create performance-based design using acoustic metrics as the design criteria. A former soundstage at the University of Southern California used by the Thornton School of Music serves as a case study for a multiuse space for orchestral, percussion, master class, and recital use. The criteria used for each programmatic use include reverberation time, bass ratio, and the early energy ratios of the clarity index and objective support. Using a panelized ceiling as a design element to vary the parameters of volume, panel orientation, and type of absorptive material, the relationships between these parameters and the design criteria are explored. These relationships and subsequently derived equations are applied to the Grasshopper parametric modeling software for Rhino 3D (a NURBS modeling software). Using the target reverberation time and bass ratio for each programmatic use as input to the parametric model, the genetic optimization function of Grasshopper, Galapagos, is run to identify the optimum ceiling geometry and material distribution.

  16. A wave superposition method formulated in digital acoustic space

    NASA Astrophysics Data System (ADS)

    Hwang, Yong-Sin

    In this thesis, a new formulation of the Wave Superposition method is proposed wherein the conventional mesh approach is replaced by a simple 3-D digital work space that easily accommodates shape optimization for minimizing or maximizing radiation efficiency. Because sound quality is in demand in almost all product designs, and because of fierce competition between product manufacturers, a faster and more accurate computational method for shape optimization is always desired. Because the conventional Wave Superposition method relies solely on mesh geometry, it cannot accommodate the fast shape changes needed in the design stage of a consumer product or machinery, where many iterations of shape changes are required. Since the use of a mesh hinders easy shape changes, a new approach for representing geometry is introduced by constructing a uniform lattice in a 3-D digital work space. A voxel (a portmanteau of the words volumetric and pixel) is essentially a volume element defined by the uniform lattice, and does not require separate connectivity information as a mesh element does. In the presented method, geometry is represented with voxels that can easily adapt to shape changes; it is therefore more suitable for shape optimization. The new method was validated by computing the radiated sound power of structures with simple and complex geometries and complex mode shapes. It was shown that matching volume velocity is a key component of an accurate analysis. A sensitivity study showed that at least six elements per acoustic wavelength are required, and a complexity study showed a minimal reduction in computational time.
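
    The sensitivity result above (at least six voxels per acoustic wavelength) translates directly into a maximum voxel edge length for a given analysis frequency, as in the sketch below; the sound speed value assumes air at room temperature.

    ```python
    # Sketch: maximum voxel size under the six-elements-per-wavelength rule.
    def max_voxel_size(freq_hz, c=343.0, elements_per_wavelength=6):
        """Largest voxel edge length resolving the wavelength at freq_hz."""
        return c / (freq_hz * elements_per_wavelength)

    print(max_voxel_size(1000.0))   # about 0.057 m at 1 kHz in air
    ```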

  17. Perception of English vowels by bilingual Chinese-English and corresponding monolingual listeners.

    PubMed

    Yang, Jing; Fox, Robert A

    2014-06-01

    This study compares the underlying perceptual structure of vowel perception in monolingual Chinese, monolingual English, and bilingual Chinese-English listeners. Of particular interest is how listeners' spatial organization of vowels is affected either by their L1 or by their experience with L2. Thirteen English vowels, /i, ɪ, e, ɛ, æ, u, ʊ, o, ɔ, ɑ, ɔɪ, ɑɪ, ɑʊ/, embedded in /hVd/ syllables produced by a male speaker from Ohio, were presented in pairs to three groups of listeners. Each listener rated 312 vowel pairs on a nine-point dissimilarity scale. The responses from each group were analyzed using a multidimensional scaling program (ALSCAL). Results demonstrated that all three groups of listeners used the high/low and front/back distinctions as the two most important dimensions for perceiving English vowels. However, the vowels were distributed in clusters in the perceptual space of the Chinese monolinguals, while they were appropriately separated and located in the spaces of the bilinguals and English monolinguals. Besides the two common perceptual dimensions, each group of listeners utilized a different third dimension to perceive these English vowels. English monolinguals used high-front offset. Bilinguals used a dimension mainly correlated with the monophthong/diphthong distinction. Chinese monolinguals separated the two high vowels, /i/ and /u/, from the rest of the vowels in the third dimension. The difference between English monolinguals and Chinese monolinguals evidenced the effect of listeners' native language on vowel perception. The difference between Chinese monolinguals and bilingual listeners, as well as the approximation of the bilingual listeners' perceptual space to that of the English monolinguals, demonstrated the effect of L2 experience on listeners' perception of L2 vowels. PMID:25102607

  18. Monopitched expression of emotions in different vowels.

    PubMed

    Waaramaa, Teija; Laukkanen, Anne-Maria; Alku, Paavo; Väyrynen, Eero

    2008-01-01

    Fundamental frequency (F0) and intensity are known to be important variables in the communication of emotions in speech. In singing, however, pitch is predetermined and yet the voice should convey emotions. Hence, other vocal parameters are needed to express emotions. This study investigated the role of voice source characteristics and formant frequencies in the communication of emotions in monopitched vowel samples [a:], [i:] and [u:]. Student actors (5 males, 8 females) produced the emotional samples simulating joy, tenderness, sadness, anger and a neutral emotional state. Equivalent sound level (L_eq), alpha ratio [SPL (1-5 kHz) - SPL (50 Hz-1 kHz)] and formant frequencies F1-F4 were measured. The [a:] samples were inverse filtered and the estimated glottal flows were parameterized with the normalized amplitude quotient [NAQ = f_AC/(d_peak · T)]. Interrelations of acoustic variables were studied by ANCOVA, considering the valence and psychophysiological activity of the expressions. Forty participants listened to the randomized samples (n = 210) for identification of the emotions. The capacity of monopitched vowels for conveying emotions differed. L_eq and NAQ differentiated activity levels. NAQ also varied independently of L_eq. In [a:], the filter (formant frequencies F1-F4) was related to valence. The interplay between voice source and F1-F4 warrants a synthesis study. PMID:18765945
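
    The NAQ parameterization mentioned above is computed per glottal cycle from the inverse-filtered flow and its derivative; a minimal sketch, assuming a single cycle of flow sampled at fs, is shown below.

    ```python
    # Sketch: normalized amplitude quotient for one glottal flow cycle.
    import numpy as np

    def naq(flow_cycle, fs):
        """NAQ = f_AC / (d_peak * T) for one cycle of inverse-filtered glottal flow."""
        f_ac = flow_cycle.max() - flow_cycle.min()     # peak-to-peak flow amplitude
        d_peak = -np.min(np.diff(flow_cycle) * fs)     # magnitude of the negative peak of the flow derivative
        T = len(flow_cycle) / fs                       # cycle length in seconds
        return f_ac / (d_peak * T)
    ```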

  19. A comparison of vowel formant frequencies in the babbling of infants exposed to Canadian English and Canadian French

    NASA Astrophysics Data System (ADS)

    Mattock, Karen; Rvachew, Susan; Polka, Linda; Turner, Sara

    2005-04-01

    It is well established that normally developing infants typically enter the canonical babbling stage of production between 6 and 8 months of age. However, whether the linguistic environment affects babbling, either in terms of the phonetic inventory of vowels produced by infants [Oller & Eilers (1982)] or the acoustics of vowel formants [Boysson-Bardies et al. (1989)], is controversial. The spontaneous speech of 42 Canadian English- and Canadian French-learning infants aged 8 to 11, 12 to 15, and 16 to 18 months was recorded and digitized to yield a total of 1253 vowels that were spectrally analyzed and statistically compared for differences in first and second formant frequencies. Language-specific influences on vowel acoustics were hypothesized. Preliminary results reveal changes in formant frequencies as a function of age and language background. There is evidence of decreases over age in the F1 values of French but not English infants' vowels, and decreases over age in the F2 values of English but not French infants' vowels. The notion of an age-related shift in infants' attention to language-specific acoustic features and the implications of this for early vocal development, as well as for the production of Canadian English and Canadian French vowels, will be discussed.

  20. Spectral timbre perception in ferrets; discrimination of artificial vowels under different listening conditions

    PubMed Central

    Bizley, Jennifer K; Walker, Kerry MM; King, Andrew J; Schnupp, Jan WH

    2013-01-01

    Spectral timbre is an acoustic feature that enables human listeners to determine the identity of a spoken vowel. Despite its importance to sound perception, little is known about the neural representation of sound timbre and few psychophysical studies have investigated timbre discrimination in non-human species. In this study, ferrets were positively conditioned to discriminate artificial vowel sounds in a two-alternative-forced-choice paradigm. Animals quickly learned to discriminate the vowel sound /u/ from /ε/, and were immediately able to generalize across a range of voice pitches. They were further tested in a series of experiments designed to assess how well they could discriminate these vowel sounds under different listening conditions. First, a series of morphed vowels was created by systematically shifting the location of the first and second formant frequencies. Second, the ferrets were tested with single formant stimuli designed to assess which spectral cues they could be using to make their decisions. Finally, vowel discrimination thresholds were derived in the presence of noise maskers presented from either the same or a different spatial location. These data indicate that ferrets show robust vowel discrimination behavior across a range of listening conditions and that this ability shares many similarities with human listeners. PMID:23297909

  1. Vowel perception by noise masked normal-hearing young adults

    NASA Astrophysics Data System (ADS)

    Richie, Carolyn; Kewley-Port, Diane; Coughlin, Maureen

    2005-08-01

    This study examined vowel perception by young normal-hearing (YNH) adults in various listening conditions designed to simulate mild-to-moderate sloping sensorineural hearing loss. YNH listeners were individually age- and gender-matched to young hearing-impaired (YHI) listeners tested in a previous study [Richie et al., J. Acoust. Soc. Am. 114, 2923-2933 (2003)]. YNH listeners were tested in three conditions designed to create equal audibility with the YHI listeners: a low signal level with and without a simulated hearing loss, and a high signal level with a simulated hearing loss. Listeners discriminated changes in synthetic vowel tokens /ɪ, e, ɛ, ʌ, æ/ when F1 or F2 varied in frequency. Comparison of YNH with YHI results failed to reveal significant differences between groups in vowel discrimination performance under conditions of similar audibility, achieved by using noise masking to elevate the hearing thresholds of the YNH listeners and frequency-specific gain for the YHI listeners. Further, analysis of learning curves suggests that while the YHI listeners completed an average of 46% more test blocks than YNH listeners, the YHI achieved a level of discrimination similar to that of the YNH within the same number of blocks. Apparently, when age and gender are closely matched between young hearing-impaired and normal-hearing adults, performance on vowel tasks may be explained by audibility alone.

  2. Perceptual effects of dialectal and prosodic variation in vowels

    NASA Astrophysics Data System (ADS)

    Fox, Robert Allen; Jacewicz, Ewa; Hatcher, Kristin; Salmons, Joseph

    2005-09-01

    As was reported earlier [Fox et al., J. Acoust. Soc. Am. 114, 2396 (2003)], certain vowels in the Ohio and Wisconsin dialects of American English are shifting in different directions. In addition, we have found that the acoustic characteristics of these vowels (e.g., duration and formant frequencies) changed systematically under varying degrees of prosodic prominence, with somewhat different changes occurring within each dialect. The question addressed in the current study is whether naive listeners from these two dialects are sensitive to both the dialect variations and the prosodically induced spectral differences. Listeners from Ohio and Wisconsin listened to the stimulus tokens [beIt] and [bɛt] produced in each of three prosodic contexts (representing three different levels of prominence). These words were produced by speakers from Ohio or from Wisconsin (none of the listeners were also speakers). Listeners identified the stimulus tokens in terms of vowel quality and indicated whether each was a good, fair, or poor exemplar of that phonetic category. Results showed that both phonetic quality decisions and goodness ratings were systematically and significantly affected by speaker dialect, listener dialect, and prosodic context. Implications for the source and nature of ongoing vowel changes in these two dialects will be discussed. [Work partially supported by NIDCD R03 DC005560-01.]

  3. Tense-Lax Vowel Classification with Energy Trajectory and Voice Quality Measurements

    NASA Astrophysics Data System (ADS)

    Lee, Suk-Myung; Choi, Jeung-Yoon

    This work examines energy trajectory and voice quality measurements, in addition to conventional formant and duration properties, to classify tense and lax vowels in English. Tense and lax vowels are produced with differing articulatory configurations which can be identified by measuring acoustic cues such as energy peak location, energy convexity, open quotient and spectral tilt. An analysis of variance (ANOVA) is conducted, and dialect effects are observed. An overall 85.2% classification rate is obtained using the proposed features on the TIMIT database, resulting in improvement over using only conventional acoustic features. Adding the proposed features to widely used cepstral features also results in improved classification.
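
    As a rough illustration of the classification setup described above (not the authors' TIMIT pipeline), the sketch below trains a simple linear classifier on a hypothetical per-token feature table combining the proposed cues (energy-peak location, energy convexity, open quotient, spectral tilt) with conventional formant and duration measures; the feature values and labels are random placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

feature_names = ["energy_peak_loc", "energy_convexity", "open_quotient",
                 "spectral_tilt", "duration", "F1", "F2"]

rng = np.random.default_rng(1)
X = rng.normal(size=(200, len(feature_names)))   # placeholder features for 200 vowel tokens
y = rng.integers(0, 2, size=200)                 # placeholder labels: 1 = tense, 0 = lax

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)        # 5-fold cross-validated accuracy
print(f"mean cross-validation accuracy: {scores.mean():.3f}")
```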

  4. Vowel production in Korean, Korean-accented English, and American English

    NASA Astrophysics Data System (ADS)

    Lee, Jimin; Weismer, Gary

    2005-09-01

    The current study compares vowel formant frequencies and durations produced by ten native speakers of Korean, those same speakers producing American English vowels, and ten native speakers of American English. The Korean speakers were chosen carefully to have a minimum of 2 years and a maximum of 5 years of residence in the United States; all speakers were between the ages of 22 and 27. In addition, the native speakers of Korean were chosen from a larger pool of speakers, by means of a small-scale dialect-severity experiment, to achieve some homogeneity in their mastery of English phonetics. The full vowel systems of both languages were explored, and a rate condition was included (conversational versus fast) to test the hypothesis that the English vowel space is modified by rate differently for native speakers of Korean who produce English versus native speakers of English. Results will be discussed in terms of language- and rate-induced adjustments of the vowel systems under study.

  5. Changes in Wisconsin English over 110 Years: A Real-Time Acoustic Account

    ERIC Educational Resources Information Center

    Delahanty, Jennifer

    2011-01-01

    The growing set of studies on American regional dialects has to date focused heavily on vowels, while few examine consonant features and none provides acoustic analysis of both vowel and consonant features. This dissertation uses real-time data on both vowels and consonants to show how Wisconsin English has changed over time. Together, the…

  6. The Processing of Short Vowels, Long Vowels and Vowel Digraphs in Disabled and Non-Disabled Readers.

    ERIC Educational Resources Information Center

    Calhoun, Mary Lynne; Allegretti, Christine L.

    To test F. J. Morrison's (1980) conceptualization of reading disability as the failure to master the complex, irregular system of rules governing sound-symbol correspondence in English, a study investigated the speed with which disabled and normal readers processed short vowels, long vowels, and vowel digraphs. Subjects consisted of two groups of…

  7. Preliminary characterization of a one-axis acoustic system. [acoustic levitation for space processing

    NASA Technical Reports Server (NTRS)

    Oran, W. A.; Reiss, D. A.; Berge, L. H.; Parker, H. W.

    1979-01-01

    The acoustic fields and levitation forces produced along the axis of a single-axis resonance system were measured. The system consisted of a St. Clair generator and a planar reflector. The levitation force was measured for bodies of various sizes and geometries (i.e., spheres, cylinders, and discs). The force was found to be roughly proportional to the volume of the body until the characteristic body radius reaches approximately 2/k (k = wave number). The acoustic pressures along the axis were modeled using Huygens principle and a method of imaging to approximate multiple reflections. The modeled pressures were found to be in reasonable agreement with those measured with a calibrated microphone.
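
    A small worked example of the 2/k size threshold mentioned above, assuming a hypothetical 20 kHz levitator operating in room-temperature air (c ≈ 343 m/s); the actual operating frequency of the St. Clair generator is not given in the abstract.

```python
import math

c = 343.0            # speed of sound in air at ~20 C, m/s (assumed)
f = 20e3             # drive frequency, Hz (assumed for illustration)

k = 2 * math.pi * f / c                   # acoustic wave number, 1/m
r_threshold = 2 / k                       # characteristic radius beyond which force ~ volume breaks down
print(f"k = {k:.0f} 1/m, 2/k = {r_threshold * 1e3:.2f} mm")   # roughly 5.5 mm
```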

  8. Acoustic vibration analysis for utilization of woody plant in space environment

    NASA Astrophysics Data System (ADS)

    Chida, Yukari; Yamashita, Masamichi; Hashimoto, Hirofumi; Sato, Seigo; Tomita-Yokotani, Kaori; Baba, Keiichi; Suzuki, Toshisada; Motohashi, Kyohei; Sakurai, Naoki; Nakagawa-izumi, Akiko

    2012-07-01

    We propose raising woody plants for space agriculture on Mars. Space agriculture would make use of wood within its ecosystem. The shape a tree assumes when grown under the low- or microgravity conditions of the space environment is not known. Angiosperm trees form tension wood to maintain their shape. Tension wood formation is deeply related to gravity, but the details of the mechanism of its formation have not yet been clarified. As a first step toward clarifying the mechanism, an experiment on the International Space Station (ISS) is the best approach, and it is necessary to establish an easy method for the crews who will carry out the experiments on the ISS. Here, we investigate the suitability of acoustic vibration analysis for such an ISS experiment. Two types of Japanese cherry tree, weeping and upright forms of Prunus sp., were analyzed by the acoustic vibration method. The coefficient of variation (CV) of sound speed was calculated from the acoustic vibration measurements. The amounts of lignin and of decomposed lignin were estimated by the Klason and Py-GC/MS methods, respectively. The relationships between the acoustic vibration results and the internal components of the tested woody materials were investigated, and a correlation between them was confirmed. Our results indicate that acoustic vibration analysis would be useful as a nondestructive method for determining internal composition in the space environment.
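
    A minimal sketch of the coefficient-of-variation statistic used above, assuming a hypothetical set of repeated sound-speed estimates from one specimen; the values are illustrative only.

```python
import numpy as np

sound_speeds = np.array([4120.0, 4185.0, 4090.0, 4210.0, 4150.0])   # m/s, illustrative values

cv_percent = sound_speeds.std(ddof=1) / sound_speeds.mean() * 100   # sample CV, in percent
print(f"CV of sound speed: {cv_percent:.2f} %")
```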

  9. Interaction of native- and second-language vowel system(s) in early and late bilinguals.

    PubMed

    Baker, Wendy; Trofimovich, Pavel

    2005-01-01

    The objective of this study was to determine how bilinguals' age at the time of language acquisition influenced the organization of their phonetic system(s). The productions of six English and five Korean vowels by English and Korean monolinguals were compared to the productions of the same vowels by early and late Korean-English bilinguals varying in amount of exposure to their second language. Results indicated that bilinguals' age profoundly influenced both the degree and the direction of the interaction between the phonetic systems of their native (L1) and second (L2) languages. In particular, early bilinguals manifested a bidirectional L1-L2 influence and produced distinct acoustic realizations of L1 and L2 vowels. Late bilinguals, however, showed evidence of a unidirectional influence of the L1 on the L2 and produced L2 vowels that were "colored" by acoustic properties of their L1. The degree and direction of L1-L2 influences in early and late bilinguals appeared to depend on the degree of acoustic similarity between L1 and L2 vowels and the length of their exposure to the L2. Overall, the findings underscored the complex nature of the restructuring of the L1-L2 phonetic system(s) in bilinguals. PMID:16161470

  10. Teaching Pronunciation with the Vowel Colour Chart.

    ERIC Educational Resources Information Center

    Finger, Julianne

    1985-01-01

    Explains the composition of the Vowel Colour Chart, a system for teaching Canadian English vowels in which each sound is represented by a color, the color word being the key word for that vowel sound. Suggests practical ways to use the chart with learners of English as a second language. (SED)

  11. Technical Aspects of Acoustical Engineering for the ISS [International Space Station

    NASA Technical Reports Server (NTRS)

    Allen, Christopher S.

    2009-01-01

    It is important to control acoustic levels on manned space flight vehicles and habitats to protect crew hearing, allow for voice communications, and ensure a healthy and habitable environment in which to work and live. For the International Space Station (ISS) this is critical because of the long-duration crew stays of approximately 6 months. NASA and the JSC Acoustics Office set acoustic requirements that must be met for hardware to be certified for flight. Modules must meet the NC-50 requirement, and other component hardware is given smaller allocations to meet. In order to meet these requirements, many aspects of noise generation and control must be considered. This presentation has been developed to give insight into the various technical activities performed at JSC to ensure that a suitable acoustic environment is provided for the ISS crew. Examples discussed include fan noise, acoustic flight material development, on-orbit acoustic monitoring, and a specific hardware development and acoustical design case, the ISS Crew Quarters.

  12. International Space Station USOS Crew Quarters Ventilation and Acoustic Design Implementation

    NASA Technical Reports Server (NTRS)

    Broyan, James Lee, Jr.

    2009-01-01

    The International Space Station (ISS) United States Operational Segment (USOS) has four permanent rack-sized ISS Crew Quarters (CQ) providing private crewmember space. The CQ uses Node 2 cabin air for ventilation and thermal cooling, as opposed to conditioned ducted air from the ISS Temperature and Humidity Control System or connections to the ISS fluid cooling loop. Consequently, the CQ can only increase the air flow rate to reduce the temperature delta between the cabin and the CQ interior. However, increasing airflow causes increased acoustic noise, so efficient airflow distribution is an important design parameter. The CQ utilizes a two-fan push-pull configuration to ensure fresh air at the crewmember's head position and reduce acoustic exposure. The CQ interior needs to be below Noise Curve 40 (NC-40). The CQ ventilation ducts are open to the significantly louder Node 2 cabin aisle way, which required significant acoustic mitigation controls. The design implementation of the CQ ventilation system and acoustic mitigation are closely interrelated and require consideration of crew comfort balanced against use of interior habitable volume, accommodation of fan failures, and possible crew uses that impact ventilation and acoustic performance. This paper illustrates the types of model analysis, assumptions, vehicle interactions, and trade-offs required for CQ ventilation and acoustics. Additionally, on-orbit ventilation system performance and initial crew feedback are presented. This approach is applicable to any private enclosed space that the crew will occupy.

  13. Phonics Plus, Book B: Short Vowel Patterns, Long Vowel Patterns.

    ERIC Educational Resources Information Center

    Smith, Carl B.; Ruff, Regina

    By actively involving the child in hearing, saying, seeing, and writing the letters and sounds, this workbook develops a child's skill in recognizing consonant sounds as well as the most important short and long vowels through a series of 70 lessons. It is appropriate for parents to use with advanced first grade children. By using this learning…

  14. Effects of acoustic variability in the perceptual learning of non-native-accented speech sounds.

    PubMed

    Wade, Travis; Jongman, Allard; Sereno, Joan

    2007-01-01

    This study addressed whether acoustic variability and category overlap in non-native speech contribute to difficulty in its recognition, and more generally whether the benefits of exposure to acoustic variability during categorization training are stable across differences in category confusability. Three experiments considered a set of Spanish-accented English productions. The set was seen to pose learning and recognition difficulty (experiment 1) and was more variable and confusable than a parallel set of native productions (experiment 2). A training study (experiment 3) probed the relative contributions of category central tendency and variability to difficulty in vowel identification using derived inventories in which these dimensions were manipulated based on the results of experiments 1 and 2. Training and test difficulty related straightforwardly to category confusability but not to location in the vowel space. Benefits of high-variability exposure also varied across vowel categories, and seemed to be diminished for highly confusable vowels. Overall, variability was implicated in perception and learning difficulty in ways that warrant further investigation. PMID:17914280

  15. Acoustics

    NASA Astrophysics Data System (ADS)

    The acoustics research activities of the DLR fluid-mechanics department (Forschungsbereich Stroemungsmechanik) during 1988 are surveyed and illustrated with extensive diagrams, drawings, graphs, and photographs. Particular attention is given to studies of helicopter rotor noise (high-speed impulsive noise, blade/vortex interaction noise, and main/tail-rotor interaction noise), propeller noise (temperature, angle-of-attack, and nonuniform-flow effects), noise certification, and industrial acoustics (road-vehicle flow noise and airport noise-control installations).

  16. Please say what this word is-Vowel-extrinsic normalization in the sensorimotor control of speech.

    PubMed

    Bourguignon, Nicolas J; Baum, Shari R; Shiller, Douglas M

    2016-07-01

    The extent to which the adaptive nature of speech perception influences the acoustic targets underlying speech production is not well understood. For example, listeners can rapidly accommodate to talker-dependent phonetic properties-a process known as vowel-extrinsic normalization-without altering their speech output. Recent evidence, however, shows that reinforcement-based learning in vowel perception alters the processing of speech auditory feedback, impacting sensorimotor control during vowel production. This suggests that more automatic and ubiquitous forms of perceptual plasticity, such as those characterizing perceptual talker normalization, may also impact the sensorimotor control of speech. To test this hypothesis, we set out to examine the possible effects of vowel-extrinsic normalization on experimental subjects' interpretation of their own speech outcomes. By combining a well-known manipulation of vowel-extrinsic normalization with speech auditory-motor adaptation, we show that exposure to different vowel spectral properties subsequently alters auditory feedback processing during speech production, thereby influencing speech motor adaptation. These findings extend the scope of perceptual normalization processes to include auditory feedback and support the idea that naturally occurring adaptations found in speech perception impact speech production. PMID:26820250

  17. The effects of auditory-visual vowel and consonant training on speechreading performance

    NASA Astrophysics Data System (ADS)

    Richie, Carolyn; Kewley-Port, Diane

    2001-05-01

    Recent work examined the effects of a novel approach to speechreading training using vowels, for normal-hearing listeners tested in masking noise [C. Richie and D. Kewley-Port, J. Acoust. Soc. Am. 114, 2337 (2003)]. That study showed significant improvements in sentence-level speechreading abilities for trained listeners compared to untrained listeners. The purpose of the present study was to determine the effects of combining vowel training with consonant training on speechreading abilities. Normal-hearing adults were tested in auditory-visual conditions in noise designed to simulate a mild-to-moderate sloping sensorineural hearing loss. One group of listeners received training on consonants in monosyllable context, and another group received training on both consonants and vowels in monosyllable context. A control group was tested but did not receive any training. All listeners performed speechreading pre- and post-tests on words and sentences. Results are discussed in terms of differences between groups, depending on which type of training was administered: vowel training, consonant training, or vowel and consonant training combined. Comparison is made between these and other speechreading training methods. Finally, the potential benefit of these vowel- and consonant-based speechreading training methods for rehabilitation of hearing-impaired listeners is discussed. [Work supported by NIHDCD02229.]

  18. Phonetic correlates of phonological vowel quantity in Yakut read and spontaneous speech.

    PubMed

    Vasilyeva, Lena; Arnhold, Anja; Järvikivi, Juhani

    2016-05-01

    The quantity language Yakut (Sakha) has a binary distinction between short and long vowels. Disyllabic words with short and long vowels in one or both syllables were extracted from spontaneous speech of native Yakut speakers. In addition, a controlled production by a native speaker of disyllabic words with different short and long vowel combinations along with contrastive minimal pairs was recorded in a phonetics laboratory. Acoustic measurements of the vowels' fundamental frequency, duration, and intensity showed a significant consistent lengthening of phonologically long vowels compared to their short counterparts. However, in addition to evident durational differences between long and short quantities, fundamental frequency and intensity also showed effects of quantity. These results allow the interpretation that similarly to other non-tonal quantity languages like Finnish or Estonian, the Yakut vowel quantity opposition is not based exclusively on durational differences. The data furthermore revealed differences in F0 contours between spontaneous and read speech, providing some first indications of utterance-level prosody in Yakut. PMID:27250149
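
    A hedged sketch of the three acoustic measurements named above (duration, fundamental frequency, intensity), assuming a hypothetical WAV file containing one pre-segmented vowel; it uses the praat-parselmouth package rather than the authors' own measurement procedure.

```python
import parselmouth

snd = parselmouth.Sound("vowel_token.wav")          # hypothetical segmented vowel

duration_ms = (snd.xmax - snd.xmin) * 1000          # vowel duration in milliseconds

pitch = snd.to_pitch()
f0 = pitch.selected_array["frequency"]              # Hz; unvoiced frames are reported as 0
f0_mean = f0[f0 > 0].mean()

intensity = snd.to_intensity()
intensity_mean = float(intensity.values.mean())     # mean intensity in dB

print(f"duration = {duration_ms:.0f} ms, F0 = {f0_mean:.1f} Hz, intensity = {intensity_mean:.1f} dB")
```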

  19. Auditory, Visual, and Auditory-Visual Perception of Vowels by Hearing-Impaired Children.

    ERIC Educational Resources Information Center

    Hack, Zarita Caplan; Erber, Norman P.

    1982-01-01

    Vowels were presented through auditory, visual, and auditory-visual modalities to 18 hearing impaired children (12 to 15 years old) having good, intermediate, and poor auditory word recognition skills. All the groups had difficulty with acoustic information and visual information alone. The first two groups had only moderate difficulty identifying…

  20. Acoustic Shaping: Enabling Technology for a Space-Based Economy

    NASA Astrophysics Data System (ADS)

    Komerath, N. M.; Matos, C. A.; Coker, A.; Wanis, S.; Hausaman, J.; Ames, R. G.; Tan, X. Y.

    1999-01-01

    This abstract presents three points for discussion: (1) Key to the development of civilization in space is a space-based marketplace, where the need to compete in earth-based markets is removed, along with the constraint of launch costs from Earth. (2) A body of technical results obtained by the authors' team indicates promise for non-contact manufacturing in space of low-cost items required for human presence in space. This is presented along with various other techniques which hold promise. (3) The economics of starting a space-based production company are heavily dependent on the presence of a rudimentary infrastructure. A national-level investment in space-based infrastructure would be an essential catalyst for the development of a space-based economy. Some suggestions for the beginnings of this infrastructure are repeated from the literature.

  1. The first radial-mode Lorentzian Landau damping of dust acoustic space-charge waves

    NASA Astrophysics Data System (ADS)

    Lee, Myoung-Jae; Jung, Young-Dae

    2016-05-01

    The dispersion properties and the first radial-mode Lorentzian Landau damping of a dust acoustic space-charge wave propagating in a cylindrical waveguide dusty plasma containing nonthermal electrons and ions are investigated by employing normal mode analysis and the method of separation of variables. It is found that the frequency of the dust acoustic space-charge wave increases with both the wave number and the radius of the cylindrical plasma. However, the nonthermal property of the Lorentzian plasma is found to suppress the wave frequency of the dust acoustic space-charge wave. The Landau damping rate of the dust acoustic space-charge wave is derived in a cylindrical waveguide dusty plasma. The damping of the space-charge wave is found to be enhanced as the radius of the cylindrical plasma and the nonthermal property increase. The maximum Lorentzian Landau damping rate is also found in a cylindrical waveguide dusty plasma. The variation of the wave frequency and the Landau damping rate due to the nonthermal character and geometric effects is also discussed.
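
    For orientation only, the textbook dispersion relation for dust acoustic waves in an unbounded Maxwellian plasma is sketched below; the waveguide geometry and the Lorentzian (kappa) corrections treated in this paper modify this relation and are not reproduced here.

```latex
% Standard (unbounded, Maxwellian) dust acoustic dispersion relation, for background:
\omega^{2} \simeq \frac{k^{2} C_{d}^{2}}{1 + k^{2}\lambda_{D}^{2}},
\qquad C_{d} = \omega_{pd}\,\lambda_{D},
```

    where \omega_{pd} is the dust plasma frequency and \lambda_{D} the plasma Debye length.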

  2. Micropropulsion by an acoustic bubble for navigating microfluidic spaces.

    PubMed

    Feng, Jian; Yuan, Junqi; Cho, Sung Kwon

    2015-03-21

    This paper describes an underwater micropropulsion principle where a gaseous bubble trapped in a suspended microchannel and oscillated by external acoustic excitation generates a propelling force. The propelling swimmer is designed and microfabricated from parylene on the microscale (the equivalent diameter of the cylindrical bubble is around 60 μm) using microphotolithography. The propulsion mechanism is studied and verified by computational fluid dynamics (CFD) simulations as well as experiments. The acoustically excited and thus periodically oscillating bubble generates alternating flows of intake and discharge through an opening of the microchannel. As the Reynolds number of oscillating flow increases, the difference between the intake and discharge flows becomes significant enough to generate a net flow (microstreaming flow) and a propulsion force against the channel. As the size of the device is reduced, however, the Reynolds number is also reduced. To maintain the Reynolds number in a certain range and thus generate a strong propulsion force in the fabricated device, the oscillation amplitude of the bubble is maximized (resonated) and the oscillation frequency is set high (over 10 kHz). Propelling motions by a single bubble as well as an array of bubbles are achieved on the microscale. In addition, the microswimmer demonstrates payload carrying. This propulsion mechanism may be applied to microswimmers that navigate microfluidic environments and possibly narrow passages in human bodies to perform biosensing, drug delivery, imaging, and microsurgery. PMID:25650274

  3. Acoustic response modeling of energetics systems in confined spaces

    NASA Astrophysics Data System (ADS)

    González, David R.; Hixon, Ray; Liou, William W.; Sanford, Matthew

    2007-04-01

    In recent times, warfighting has been taking place not in far-removed areas but within urban environments. As a consequence, the modern warfighter must adapt. Currently, an effort is underway to develop shoulder-mounted rocket launcher rounds with reduced acoustic signatures suitable for use in such environments. Of prime importance is ensuring that the acoustic levels generated by propellant burning, reflections from enclosures, etc., are tolerable without requiring excessive hearing protection. Presented below is a proof-of-concept approach aimed at developing a computational tool to aid in the design process. Unsteady, perfectly expanded jet simulations at two different Mach numbers, and one at an elevated temperature ratio, were conducted using an existing computational aeroacoustics code. From the solutions, sound pressure levels and frequency spectra were then obtained. The results were compared to sound pressure levels collected from a live-fire test of the weapon. Lastly, an outline of work that is to continue and be completed in the near future will be presented.

  4. Acoustic Analysis of Speech of Cochlear Implantees and Its Implications

    PubMed Central

    Patadia, Rajesh; Govale, Prajakta; Rangasayee, R.; Kirtane, Milind

    2012-01-01

    Objectives Cochlear implantees have improved speech production skills compared with those using hearing aids, as reflected in their acoustic measures. When compared to normal-hearing controls, implanted children had a fronted vowel space and their /s/ and /ʃ/ noise frequencies overlapped. Acoustic analysis of speech provides an objective index of perceived differences in speech production, which can be a precursor to planning therapy. The objective of this study was to compare acoustic characteristics of speech in cochlear implantees with those of normal-hearing age-matched peers to understand the implications. Methods Group 1 consisted of 15 children with prelingual bilateral severe-to-profound hearing loss (age, 5-11 years; implanted between 4-10 years). Behind-the-ear hearing aids were used prior to implantation; both before and after implantation, subjects received at least 1 year of aural intervention. Group 2 consisted of 15 normal-hearing age-matched peers. Sustained productions of vowels and words with selected consonants were recorded. Using Praat software for acoustic analysis, digitized speech tokens were measured for F1, F2, and F3 of vowels; centre frequency (Hz) and energy concentration (dB) of the burst; voice onset time (VOT, in ms) for stops; centre frequency (Hz) of noise in /s/; and rise time (ms) for affricates. A t-test was used to find significant differences between groups. Results Significant differences were found in VOT for /b/, F1 and F2 of /e/, and F3 of /u/. No significant differences were found for centre frequency of the burst, energy concentration for stops, centre frequency of noise in /s/, or rise time for affricates. These findings suggest that the auditory feedback provided by cochlear implants enables subjects to monitor their production of speech sounds. Conclusion Acoustic analysis of speech is an essential method for discerning characteristics which have or have not been improved by cochlear implantation and thus for planning intervention. PMID:22701768
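
    A hedged sketch of one of the measurements described above, estimating F1-F3 at the midpoint of a sustained vowel with Praat's Burg algorithm via the praat-parselmouth package; the filename and analysis settings are assumptions, not the study's actual Praat configuration.

```python
import parselmouth

snd = parselmouth.Sound("sustained_vowel.wav")        # hypothetical recording of one vowel

# Burg-method formant analysis; a 5500 Hz ceiling is a common choice for child voices
formant = snd.to_formant_burg(maximum_formant=5500)

t_mid = 0.5 * (snd.xmin + snd.xmax)                   # measure at the vowel midpoint
for n in (1, 2, 3):
    print(f"F{n} = {formant.get_value_at_time(n, t_mid):.0f} Hz")
```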

  5. Comparing measurement errors for formants in synthetic and natural vowels.

    PubMed

    Shadle, Christine H; Nam, Hosung; Whalen, D H

    2016-02-01

    The measurement of formant frequencies of vowels is among the most common measurements in speech studies, but measurements are known to be biased by the particular fundamental frequency (F0) exciting the formants. Approaches to reducing the errors were assessed in two experiments. In the first, synthetic vowels were constructed with five different first formant (F1) values and nine different F0 values; formant bandwidths and higher formant frequencies were constant. Input formant values were compared to manual measurements and to automatic measures using the linear predictive coding (LPC) Burg algorithm, LPC closed-phase covariance, the weighted linear prediction-attenuated main excitation (WLP-AME) algorithm [Alku, Pohjalainen, Vainio, Laukkanen, and Story (2013). J. Acoust. Soc. Am. 134(2), 1295-1313], and spectra smoothed cepstrally and by averaging repeated discrete Fourier transforms. Formants were also measured manually from pruned reassigned spectrograms (RSs) [Fulop (2011). Speech Spectrum Analysis (Springer, Berlin)]. All but WLP-AME and RS had large errors in the direction of the strongest harmonic; the smallest errors occurred with WLP-AME and RS. In the second experiment, these methods were used on vowels in isolated words spoken by four speakers. Results for the natural speech show that F0 bias affects all automatic methods, including WLP-AME; only the formants measured manually from RS appeared to be accurate. In addition, RS coped better with weaker formants and glottal fry. PMID:26936555
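
    A minimal sketch of the general LPC-plus-root-finding family of automatic formant estimators compared above (not any of the specific implementations tested): Burg-method LPC coefficients from librosa, with polynomial roots converted to candidate formant frequencies. The filename, sampling rate, and model order are illustrative assumptions.

```python
import numpy as np
import librosa

fs = 10000
frame, _ = librosa.load("vowel_frame.wav", sr=fs)              # hypothetical vowel segment

frame = np.append(frame[0], frame[1:] - 0.97 * frame[:-1])     # simple pre-emphasis

order = int(fs / 1000) + 2                                     # common rule of thumb for LPC order
a = librosa.lpc(frame, order=order)                            # Burg-method LPC coefficients

roots = [r for r in np.roots(a) if np.imag(r) > 0]             # keep one root per conjugate pair
freqs = sorted(np.angle(roots) * fs / (2 * np.pi))             # convert root angles to Hz
print("candidate formants (Hz):", [round(f) for f in freqs if f > 90])
```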

  6. Effect of frequency-modulation coherence for inharmonic stimuli: frequency-modulation phase discrimination and identification of artificial double vowels.

    PubMed

    Lyzenga, Johannes; Moore, Brian C J

    2005-03-01

    The ability to compare patterns of frequency modulation (FM) in separate frequency regions was explored. In experiment 1, listeners had to distinguish whether the FM applied to two nonharmonically related sinusoidal carriers was in phase or out of phase. The FM rate was the same for each carrier. The starting phase of the modulation was randomized for each stimulus in a three-alternative forced-choice (3AFC) trial. Subjects were sensitive to relative FM phase for modulation rates of 2 and 4 Hz, but not for higher rates. In experiment 2, vowel identification was compared for artificial single and double vowels. The vowels were constructed from complex tones with components spaced at 2-ERB_N (equivalent rectangular bandwidth) intervals, by increasing the levels of three components by 15 dB to create three "formants." In the double vowels, the components of the two vowels were interleaved, to give 1-ERB_N spacing. The three "formant" components were frequency modulated at 2, 4, or 8 Hz, with either the same or different rates for the two vowels. The identification of double vowels was not improved by a difference in FM rate across vowels, suggesting that differences in FM rate do not support perceptual segregation of inharmonic stimuli. PMID:15807020
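
    A small sketch of how component frequencies spaced at 2-ERB_N intervals can be generated using the standard Glasberg and Moore ERB-number (Cam) scale; the frequency range shown and the choice of which components receive the 15 dB "formant" boosts are illustrative assumptions rather than the study's exact stimulus parameters.

```python
import numpy as np

def hz_to_erb_number(f_hz):
    """Glasberg & Moore (1990) ERB-number (Cam) scale."""
    return 21.4 * np.log10(4.37 * f_hz / 1000.0 + 1.0)

def erb_number_to_hz(e):
    """Inverse of the ERB-number scale."""
    return (10.0 ** (e / 21.4) - 1.0) * 1000.0 / 4.37

e_lo, e_hi = hz_to_erb_number(100.0), hz_to_erb_number(5000.0)
erb_numbers = np.arange(e_lo, e_hi, 2.0)          # one component every 2 ERB_N
component_freqs = erb_number_to_hz(erb_numbers)

formant_components = component_freqs[[3, 8, 12]]  # hypothetical components boosted by 15 dB
print(np.round(component_freqs).astype(int))
print("boosted 'formant' components (Hz):", np.round(formant_components).astype(int))
```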

  7. Seeing a voice: Rudolph Koenig's instruments for studying vowel sounds.

    PubMed

    Pantalony, David

    2004-01-01

    The human voice was one of the more elusive acoustical phenomena to study in the 19th century and therefore a crucial test of Hermann von Helmholtz's new theory of sound. This article describes the origins of instruments used to study vowel sounds: synthesizers for production, resonators for detection, and manometric flames for visual display. Instrument maker Rudolph Koenig played a leading role in transforming Helmholtz's ideas into apparatus. In particular, he was the first to make the human voice visible for research and teaching. Koenig's work reveals the rich context of science, craft traditions, experiment, demonstration culture, and commerce in his Paris workshop. PMID:15457810

  8. Effect of Vowel Identity and Onset Asynchrony on Concurrent Vowel Identification

    ERIC Educational Resources Information Center

    Hedrick, Mark S.; Madix, Steven G.

    2009-01-01

    Purpose: The purpose of the current study was to determine the effects of vowel identity and temporal onset asynchrony on identification of vowels overlapped in time. Method: Fourteen listeners with normal hearing, with a mean age of 24 years, participated. The listeners were asked to identify both of a pair of 200-ms vowels (referred to as…

  9. The Effectiveness of Vowel Production Training with Real-Time Spectrographic Displays for Children with Profound Hearing Impairment.

    NASA Astrophysics Data System (ADS)

    Ertmer, David Joseph

    1994-01-01

    The effectiveness of vowel production training that incorporated direct instruction in combination with spectrographic models and feedback was assessed for two children who exhibited profound hearing impairment. A multiple-baseline design across behaviors, with replication across subjects, was implemented to determine whether vowel production accuracy improved following the introduction of treatment. Listener judgments of vowel correctness were obtained during the baseline, training, and follow-up phases of the study. Data were analyzed through visual inspection of changes in levels of accuracy, changes in trends of accuracy, and changes in variability of accuracy within and across phases. One subject showed significant improvement on all three trained vowel targets; the second subject improved on the first trained target only (Kolmogorov-Smirnov two-sample test). Performance trends during training sessions suggest that continued treatment would have resulted in further improvement for both subjects. Vowel duration, fundamental frequency, and the frequency locations of the first and second formants were measured before and after training. Acoustic analysis revealed highly individualized changes in the frequency locations of F1 and F2. Vowels which received the most training were maintained at higher levels than those which were introduced later in training. Some generalization of practiced vowel targets to untrained words was observed in both subjects. A bias toward judging productions as "correct" was observed for both subjects during self-evaluation tasks using spectrographic feedback.

  10. From prosodic structure to acoustic saliency: A fMRI investigation of speech rate, clarity, and emphasis

    NASA Astrophysics Data System (ADS)

    Golfinopoulos, Elisa

    Acoustic variability in fluent speech can arise at many stages in speech production planning and execution. For example, at the phonological encoding stage, the grouping of phonemes into syllables determines which segments are coarticulated and, by consequence, segment-level acoustic variation. Likewise phonetic encoding, which determines the spatiotemporal extent of articulatory gestures, will affect the acoustic detail of segments. Functional magnetic resonance imaging (fMRI) was used to measure brain activity of fluent adult speakers in four speaking conditions: fast, normal, clear, and emphatic (or stressed) speech. These speech manner changes typically result in acoustic variations that do not change the lexical or semantic identity of productions but do affect the acoustic saliency of phonemes, syllables and/or words. Acoustic responses recorded inside the scanner were assessed quantitatively using eight acoustic measures and sentence duration was used as a covariate of non-interest in the neuroimaging analysis. Compared to normal speech, emphatic speech was characterized acoustically by a greater difference between stressed and unstressed vowels in intensity, duration, and fundamental frequency, and neurally by increased activity in right middle premotor cortex and supplementary motor area, and bilateral primary sensorimotor cortex. These findings are consistent with right-lateralized motor planning of prosodic variation in emphatic speech. Clear speech involved an increase in average vowel and sentence durations and average vowel spacing, along with increased activity in left middle premotor cortex and bilateral primary sensorimotor cortex. These findings are consistent with an increased reliance on feedforward control, resulting in hyper-articulation, under clear as compared to normal speech. Fast speech was characterized acoustically by reduced sentence duration and average vowel spacing, and neurally by increased activity in left anterior frontal

  11. Acoustic levitation technique for containerless processing at high temperatures in space

    NASA Technical Reports Server (NTRS)

    Rey, Charles A.; Merkley, Dennis R.; Hammarlund, Gregory R.; Danley, Thomas J.

    1988-01-01

    High temperature processing of a small specimen without a container has been demonstrated in a set of experiments using an acoustic levitation furnace in the microgravity of space. This processing technique includes the positioning, heating, melting, cooling, and solidification of a material supported without physical contact with a container or other surface. The specimen is supported in a potential energy well, created by an acoustic field, which is sufficiently strong to position the specimen in the microgravity environment of space. This containerless processing apparatus was successfully tested on the Space Shuttle during the STS-61A mission. In that experiment, three samples were successfully levitated and processed at temperatures from 600 to 1500 °C. Experiment data and results are presented.

  12. NONLINEAR BEHAVIOR OF BARYON ACOUSTIC OSCILLATIONS IN REDSHIFT SPACE FROM THE ZEL'DOVICH APPROXIMATION

    SciTech Connect

    McCullagh, Nuala; Szalay, Alexander S.

    2015-01-10

    Baryon acoustic oscillations (BAO) are a powerful probe of the expansion history of the universe, which can tell us about the nature of dark energy. In order to accurately characterize the dark energy equation of state using BAO, we must understand the effects of both nonlinearities and redshift space distortions on the location and shape of the acoustic peak. In a previous paper, we introduced a novel approach to second order perturbation theory in configuration space using the Zel'dovich approximation, and presented a simple result for the first nonlinear term of the correlation function. In this paper, we extend this approach to redshift space. We show how to perform the computation and present the analytic result for the first nonlinear term in the correlation function. Finally, we validate our result through comparison with numerical simulations.
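
    As background for the approach described above, the standard Zel'dovich displacement and its plane-parallel redshift-space mapping are sketched below; the paper's actual second-order configuration-space result for the correlation function is not reproduced here.

```latex
% Zel'dovich approximation: particles move along straight lines set by the initial
% displacement field \Psi(q), scaled by the linear growth factor D(t).
\mathbf{x}(\mathbf{q},t) = \mathbf{q} + D(t)\,\boldsymbol{\Psi}(\mathbf{q}),
\qquad
\mathbf{s} = \mathbf{x} + \frac{(\mathbf{v}\cdot\hat{\mathbf{z}})}{aH}\,\hat{\mathbf{z}}
           = \mathbf{q} + D(t)\left[\boldsymbol{\Psi}
             + f\,(\boldsymbol{\Psi}\cdot\hat{\mathbf{z}})\,\hat{\mathbf{z}}\right],
\qquad
f \equiv \frac{d\ln D}{d\ln a}.
```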

  13. Measurement and Characterization of Space Shuttle Solid Rocket Motor Plume Acoustics

    NASA Technical Reports Server (NTRS)

    Kenny, Jeremy; Hobbs, Chris; Plotkin, Ken; Pilkey, Debbie

    2009-01-01

    Lift-off acoustic environments generated by the future Ares I launch vehicle are assessed by the NASA Marshall Space Flight Center (MSFC) acoustics team using several prediction tools. This acoustic environment is directly caused by the Ares I First Stage booster, powered by the five-segment Reusable Solid Rocket Motor (RSRMV). The RSRMV is a larger-thrust derivative design of the currently used Space Shuttle solid rocket motor, the Reusable Solid Rocket Motor (RSRM). Lift-off acoustics is an integral part of the composite launch vibration environment affecting the Ares launch vehicle and must be assessed to help generate hardware qualification levels and ensure structural integrity of the vehicle during launch and lift-off. Available prediction tools that use free field noise source spectrums as a starting point for generation of lift-off acoustic environments are described in the monograph NASA SP-8072: "Acoustic Loads Generated by the Propulsion System." This monograph uses a reference database of free field noise source spectrums consisting of subscale rocket motor firings oriented in horizontal static configurations. The phrase "subscale" is appropriate, since the thrust levels of rockets in the reference database are orders of magnitude lower than the current design thrust for the Ares launch family. Thus, extrapolation is needed to extend the various reference curves to match Ares-scale acoustic levels. This extrapolation process adds uncertainty to the acoustic environment predictions. As the Ares launch vehicle design schedule progresses, it is important to take every opportunity to lower prediction uncertainty and subsequently increase prediction accuracy. Never before in NASA's history has plume acoustics been measured for large-scale solid rocket motors. Approximately twice a year, the RSRM prime vendor, ATK Launch Systems, static fires an assembled RSRM motor in a horizontal configuration at their test facility

  14. The relation between first-graders' reading level and vowel production variability in real and nonsense words: A temporal analysis

    NASA Astrophysics Data System (ADS)

    Lydtin, Kimberly; Fowler, Anne; Bell-Berti, Fredericka

    2002-05-01

    The focus of this study is to determine whether children who are poor readers produce vowels with greater variability than children with normal reading ability, since earlier research has indicated possible links between phonological difficulty, speech production variation, and reading problems. In continuation of our past research [K. Lydtin, A. Fowler, and F. Bell-Berti, J. Acoust. Soc. Am. 110, 2704 (2001)], where we looked at the spectral aspects of vowel production, we will report the results of our study of vowel duration and its variability in poor and good readers. The vowels chosen for this study are /ɪ/, /ɛ/, and /æ/ in real and nonsense words occurring in both blocked and random presentation. [Work supported by U.S. Dept. of Education, McNair Scholars Program.]

  15. The relation between first-graders' reading level and vowel production variability and presentation format: A temporal analysis

    NASA Astrophysics Data System (ADS)

    Baker, Kandice; Fowler, Anne; Bell-Berti, Fredericka

    2002-05-01

    The purpose of this research is to determine whether children with reading difficulties produce vowels with greater variability than children with normal reading ability. The vowels chosen for this study are /ɪ/, /ɛ/, and /æ/, occurring in real and nonsense monosyllabic words. Our past research, examining spectral variability in vowels produced by first grade students as a function of whether they were reading words presented individually in random or blocked format, revealed no systematic effect of presentation format on variability [K. Baker, A. Fowler, and F. Bell-Berti, J. Acoust. Soc. Am. 110, 2704 (2001)]. The purpose of the present study is to determine if good and poor readers differ in vowel duration variability. [Work supported by U.S. Dept. of Education, McNair Scholars Program.]

  16. The role of vowel quality in stress clash

    NASA Astrophysics Data System (ADS)

    Levey, Sandra Kay

    The effect of stress clash between adjacent primary stressed syllables was examined through perceptual and acoustic analysis. Bisyllabic target words with primary stress on the second syllable were produced in citation form and in sentences by ten adult participants. The target words produced in citation form were analyzed for (a) the position of primary stress and (b) the identity of the vowel in the first syllable, the goal being to confirm that primary stress was placed on the final syllable and that the first syllable contained a vowel that could receive primary stress. Words judged not to meet these criteria were eliminated from the corpus. The target words were then placed in stress clash contexts (followed by a primary stressed syllable) and in non-clash contexts (followed by a non-primary stressed syllable). The goal was to determine whether stress clash resolution would occur and, if so, which of three explanations could account for it: (a) stress shift, with primary stress shifted to the first syllable of the target word, or stress deletion, with acoustic features reduced in the second syllable; (b) pitch accent, taking the form of fundamental frequency, assigned to the first syllable of target words produced in early-sentence position; or (c) increased final-syllable duration in the target word. Perceptual judgments showed that stress clash was resolved inconsistently in stress clash contexts, and that stress shift also occurred in non-clash contexts. Acoustic analysis showed that fundamental frequency was higher in the first syllable of target words when stress shift occurred, and that both syllables of the target words were produced with higher fundamental frequency in early-sentence position. A test of the correlation between perceptual judgments and acoustic results showed that fundamental frequency was potentially the primary acoustic feature signaling the presence of stress shift.

  17. Acoustic Correlates of Emphatic Stress in Central Catalan

    ERIC Educational Resources Information Center

    Nadeu, Marianna; Hualde, Jose Ignacio

    2012-01-01

    A common feature of public speech in Catalan is the placement of prominence on lexically unstressed syllables ("emphatic stress"). This paper presents an acoustic study of radio speech data. Instances of emphatic stress were perceptually identified. Within-word comparison between vowels with emphatic stress and vowels with primary lexical stress…

  18. Acoustic Modeling and Analysis for the Space Shuttle Main Propulsion System Liner Crack Investigation

    NASA Technical Reports Server (NTRS)

    Casiano, Matthew J.; Zoladz, Tom F.

    2004-01-01

    Cracks were found on bellows flow liners in the liquid hydrogen feedlines of several space shuttle orbiters in 2002. An effort to characterize the fluid environment upstream of the space shuttle main engine low-pressure fuel pump was undertaken to help identify the cause of the cracks and also provide quantitative environments and loads of the region. Part of this effort was to determine the duct acoustics several inches upstream of the low-pressure fuel pump in the region of a bellows joint. A finite element model of the complicated geometry was made using three-dimensional fluid elements. The model was used to describe acoustics in the complex geometry and played an important role in the investigation. Acoustic mode shapes and natural frequencies of the liquid hydrogen in the duct and in the cavity behind the flow liner were determined. Forced response results were generated also by applying an edgetone-like forcing to the liner slots. Studies were conducted for state conditions and also conditions assuming two-phase entrapment in the backing cavity. Highly instrumented single-engine hot fire data confirms the presence of some of the predicted acoustic modes.

  19. Recreating the real, realizing the imaginary--a composer's preoccupation with acoustic space

    NASA Astrophysics Data System (ADS)

    Godman, Rob

    2002-11-01

    For centuries composers have been concerned with spatialization of sound and with the use of acoustic spaces to create feeling, atmosphere, and musical structure. This paper will explore Rob Godman's own use of sound in space, including (1) his treatment of ancient Vitruvian principles and how they are combined with new technologies; (2) an exploration of virtual journeys through real and imaginary acoustic spaces; (3) how sounds might be perceived in air, liquid, and solids; and (4) how technology has allowed composers to realize ideas that previously had only existed in the imagination. While focusing on artistic concerns, the paper will provide information on research carried out by the composer into acoustic spaces that are able to transform in real time with the aid of digital technology (Max/MSP software with sensor technology) and how these have been used in installation and pre-recorded work. It will also explore digital reconstructions of Vitruvian theatres and how we perceive resonance and ambience in the real and virtual world.

  20. Acoustical Testing Laboratory Developed to Support the Low-Noise Design of Microgravity Space Flight Hardware

    NASA Technical Reports Server (NTRS)

    Cooper, Beth A.

    2001-01-01

    The NASA John H. Glenn Research Center at Lewis Field has designed and constructed an Acoustical Testing Laboratory to support the low-noise design of microgravity space flight hardware. This new laboratory will provide acoustic emissions testing and noise control services for a variety of customers, particularly for microgravity space flight hardware that must meet International Space Station limits on noise emissions. These limits have been imposed by the space station to support hearing conservation, speech communication, and safety goals as well as to prevent noise-induced vibrations that could impact microgravity research data. The Acoustical Testing Laboratory consists of a 23 by 27 by 20 ft (height) convertible hemi/anechoic chamber and separate sound-attenuating test support enclosure. Absorptive 34-in. fiberglass wedges in the test chamber provide an anechoic environment down to 100 Hz. A spring-isolated floor system affords vibration isolation above 3 Hz. These criteria, along with very low design background levels, will enable the acquisition of accurate and repeatable acoustical measurements on test articles, up to a full space station rack in size, that produce very little noise. Removable floor wedges will allow the test chamber to operate in either a hemi/anechoic or anechoic configuration, depending on the size of the test article and the specific test being conducted. The test support enclosure functions as a control room during normal operations but, alternatively, may be used as a noise-control enclosure for test articles that require the operation of noise-generating test support equipment.

  1. Contrastive and contextual vowel nasalization in Ottawa

    NASA Astrophysics Data System (ADS)

    Klopfenstein, Marie

    2005-09-01

    Ottawa is a Central Algonquian language that possesses the recent innovation of contrastive vowel nasalization. Most phonetic studies done to date on contrastive vowel nasalization have investigated Indo-European languages; therefore, a study of Ottawa could prove to be a valuable addition to the literature. To this end, a percentage of nasalization (nasal airflow / (nasal + oral airflow)) was measured during target vowels produced by native Ottawa speakers using a Nasometer 6200-3. Nasalized vowels in the target word set were either contrastively or contextually nasalized; candidates for contextual nasalization were either regressive or perseverative, in word-initial and word-final syllables. Subjects were asked to read words containing target vowels in a carrier sentence. Mean, minimum, and maximum nasalance were obtained for each target vowel across its full duration. Target vowels were compared across contexts (regressive or perseverative, and word-initial or word-final). In addition, contexts were compared to determine whether a significant difference existed between contrastive and contextual nasalization. Results for Ottawa will be compared with results for vowels in similar contexts in other languages, including Hindi, Breton, Bengali, and French.
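
    A tiny sketch of the nasalance measure defined above, assuming hypothetical time-aligned nasal- and oral-channel values for one target vowel; real Nasometer data would be substituted for the placeholder arrays.

```python
import numpy as np

nasal = np.array([0.12, 0.18, 0.22, 0.20, 0.15])   # illustrative nasal-channel samples
oral = np.array([0.55, 0.50, 0.48, 0.52, 0.57])    # illustrative oral-channel samples

percent_nasalization = 100 * nasal / (nasal + oral)
print(f"mean = {percent_nasalization.mean():.1f} %, "
      f"min = {percent_nasalization.min():.1f} %, "
      f"max = {percent_nasalization.max():.1f} %")
```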

  2. Vowel Aperture and Syllable Segmentation in French

    ERIC Educational Resources Information Center

    Goslin, Jeremy; Frauenfelder, Ulrich H.

    2008-01-01

    The theories of Pulgram (1970) suggest that if the vowel of a French syllable is open then it will induce syllable segmentation responses that result in the syllable being closed, and vice versa. After the empirical verification that our target French-speaking population was capable of distinguishing between mid-vowel aperture, we examined the…

  3. Vowel Quantity and Syllabification in English.

    ERIC Educational Resources Information Center

    Hammond, Michael

    1997-01-01

    Argues that there is phonological gemination in English based on distribution of vowel qualities in medial and final syllables. The analysis, cast in terms of optimality theory, has implications in several domains: (1) ambisyllabicity is not the right way to capture aspiration and flapping; (2) languages in which stress depends on vowel quality…

  4. Feedforward and Feedback Control in Apraxia of Speech: Effects of Noise Masking on Vowel Production

    PubMed Central

    Mailend, Marja-Liisa; Guenther, Frank H.

    2015-01-01

    Purpose This study was designed to test two hypotheses about apraxia of speech (AOS) derived from the Directions Into Velocities of Articulators (DIVA) model (Guenther et al., 2006): the feedforward system deficit hypothesis and the feedback system deficit hypothesis. Method The authors used noise masking to minimize auditory feedback during speech. Six speakers with AOS and aphasia, 4 with aphasia without AOS, and 2 groups of speakers without impairment (younger and older adults) participated. Acoustic measures of vowel contrast, variability, and duration were analyzed. Results Younger, but not older, speakers without impairment showed significantly reduced vowel contrast with noise masking. Relative to older controls, the AOS group showed longer vowel durations overall (regardless of masking condition) and a greater reduction in vowel contrast under masking conditions. There were no significant differences in variability. Three of the 6 speakers with AOS demonstrated the group pattern. Speakers with aphasia without AOS did not differ from controls in contrast, duration, or variability. Conclusion The greater reduction in vowel contrast with masking noise for the AOS group is consistent with the feedforward system deficit hypothesis but not with the feedback system deficit hypothesis; however, effects were small and not present in all individual speakers with AOS. Theoretical implications and alternative interpretations of these findings are discussed. PMID:25565143

  5. Vocal register effects on vowel spectral noise and roughness: findings for adult females.

    PubMed

    Emanuel, F; Scarinzi, A

    1979-05-01

    The effects of vocal register (vocal fry, modal, and falsetto) on the relative roughness and spectral noise level (SNL) of isolated test vowels (/u/ and /ae/) were investigated. Each test vowel was produced in each of the three vocal registers; the obtained samples were magnetically recorded. A panel of 11 listeners independently verified by perceptual judgments that each recorded sample was produced in the "desired" vocal register. They also independently rated (on a 5-point equal-appearing intervals scale) the roughness of each test production. The median of the 11 available roughness ratings (MRR) was then obtained as an index of the roughness of each vowel sample. Additionally, a narrow-band (3 Hz) acoustic spectrum of each test production was made, from which measures of vowel spectral noise were obtained. The mean over 25 noise measures per test production (from 100 to 2600 Hz) served as an index of the spectral noise level (SNL) for each vowel sample. The major finding of the study was that the MRR and the SNL for productions of both test vowels diminished significantly across vocal registers, from fry, to modal, to falsetto. PMID:438364
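
    The two criterion measures (the median of 11 listener ratings and the mean of 25 spectral noise measures per production) are simple summary statistics; the sketch below, with made-up values, is only meant to make that arithmetic concrete:

    ```python
    import statistics

    # Hypothetical ratings from 11 listeners on a 5-point equal-appearing-intervals scale.
    roughness_ratings = [2, 3, 3, 2, 4, 3, 3, 2, 3, 4, 3]
    mrr = statistics.median(roughness_ratings)      # median roughness rating (MRR)

    # Hypothetical spectral noise measures (dB) at 25 points between 100 and 2600 Hz.
    noise_measures_db = [18.2, 17.9, 19.1] + [18.5] * 22
    snl = statistics.mean(noise_measures_db)        # spectral noise level (SNL)

    print(f"MRR = {mrr}, SNL = {snl:.2f} dB")
    ```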

  6. Acoustic correlates of timbre space dimensions: A confirmatory study using synthetic tones

    NASA Astrophysics Data System (ADS)

    Caclin, Anne; McAdams, Stephen; Smith, Bennett K.; Winsberg, Suzanne

    2005-07-01

    Timbre spaces represent the organization of perceptual distances, as measured with dissimilarity ratings, among tones equated for pitch, loudness, and perceived duration. A number of potential acoustic correlates of timbre-space dimensions have been proposed in the psychoacoustic literature, including attack time, spectral centroid, spectral flux, and spectrum fine structure. The experiments reported here were designed as direct tests of the perceptual relevance of these acoustical parameters for timbre dissimilarity judgments. Listeners presented with carefully controlled synthetic tones use attack time, spectral centroid, and spectrum fine structure in dissimilarity rating experiments. These parameters thus appear as major determinants of timbre. However, spectral flux appears as a less salient timbre parameter, its salience depending on the number of other dimensions varying concurrently in the stimulus set. Dissimilarity ratings were analyzed with two different multidimensional scaling models (CLASCAL and CONSCAL), the latter providing psychophysical functions constrained by the physical parameters. Their complementarity is discussed.
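
    The study fitted the CLASCAL and CONSCAL multidimensional scaling models, which are not reproduced here; as a rough illustration of the general idea of recovering a low-dimensional timbre space from dissimilarity ratings, a plain metric MDS of a precomputed dissimilarity matrix can be run with scikit-learn (the matrix below is invented):

    ```python
    import numpy as np
    from sklearn.manifold import MDS

    # Hypothetical symmetric dissimilarity matrix for four synthetic tones (A-D).
    dissim = np.array([
        [0.0, 2.1, 3.4, 4.0],
        [2.1, 0.0, 1.8, 3.2],
        [3.4, 1.8, 0.0, 2.5],
        [4.0, 3.2, 2.5, 0.0],
    ])

    # Embed the tones in a 3D "timbre space" that approximates the rated distances.
    mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(dissim)
    print(coords)       # one row of coordinates per tone
    print(mds.stress_)  # badness-of-fit of the embedding
    ```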

  7. Acoustic correlates of timbre space dimensions: a confirmatory study using synthetic tones.

    PubMed

    Caclin, Anne; McAdams, Stephen; Smith, Bennett K; Winsberg, Suzanne

    2005-07-01

    Timbre spaces represent the organization of perceptual distances, as measured with dissimilarity ratings, among tones equated for pitch, loudness, and perceived duration. A number of potential acoustic correlates of timbre-space dimensions have been proposed in the psychoacoustic literature, including attack time, spectral centroid, spectral flux, and spectrum fine structure. The experiments reported here were designed as direct tests of the perceptual relevance of these acoustical parameters for timbre dissimilarity judgments. Listeners presented with carefully controlled synthetic tones use attack time, spectral centroid, and spectrum fine structure in dissimilarity rating experiments. These parameters thus appear as major determinants of timbre. However, spectral flux appears as a less salient timbre parameter, its salience depending on the number of other dimensions varying concurrently in the stimulus set. Dissimilarity ratings were analyzed with two different multidimensional scaling models (CLASCAL and CONSCAL), the latter providing psychophysical functions constrained by the physical parameters. Their complementarity is discussed. PMID:16119366

  8. Haptic Holography: Acoustic Space and the Evolution of the Whole Message

    NASA Astrophysics Data System (ADS)

    Logan, N.

    2013-02-01

    The paper argues that the Haptic Holography Work Station is an example of a medium that fits with McLuhan's notion of Acoustic Space, that is, a medium which stimulates more than one sense of perception at a time. As a result, the Haptic Holography Work Station transmits information about the subject much more rapidly than the media that precede it, be they text, photography, or television.

  9. A wideband fast multipole boundary element method for half-space/plane-symmetric acoustic wave problems

    NASA Astrophysics Data System (ADS)

    Zheng, Chang-Jun; Chen, Hai-Bo; Chen, Lei-Lei

    2013-04-01

    This paper presents a novel wideband fast multipole boundary element approach to 3D half-space/plane-symmetric acoustic wave problems. The half-space fundamental solution is employed in the boundary integral equations so that the tree structure required in the fast multipole algorithm is constructed for the boundary elements in the real domain only. Moreover, a set of symmetric relations between the multipole expansion coefficients of the real and image domains are derived, and the half-space fundamental solution is modified for the purpose of applying such relations to avoid calculating, translating and saving the multipole/local expansion coefficients of the image domain. The wideband adaptive multilevel fast multipole algorithm associated with the iterative solver GMRES is employed so that the present method is accurate and efficient for both low- and high-frequency acoustic wave problems. As for exterior acoustic problems, the Burton-Miller method is adopted to tackle the fictitious eigenfrequency problem involved in the conventional boundary integral equation method. Details on the implementation of the present method are described, and numerical examples are given to demonstrate its accuracy and efficiency.
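
    For orientation, the half-space fundamental solution referred to above is conventionally written as the free-space Green's function plus an image-source term reflected in the infinite plane; the textbook form below is included only as background (the paper further modifies it so the image-domain expansion coefficients need not be stored):

    ```latex
    % Free-space Green's function and its half-space counterpart.
    % \mathbf{y}' is the mirror image of the source \mathbf{y} in the infinite plane;
    % \beta = +1 for a rigid plane and \beta = -1 for a pressure-release plane.
    \[
      G(\mathbf{x},\mathbf{y}) = \frac{e^{\mathrm{i}kr}}{4\pi r}, \quad r = |\mathbf{x}-\mathbf{y}|,
      \qquad
      G_H(\mathbf{x},\mathbf{y}) = \frac{e^{\mathrm{i}kr}}{4\pi r}
        + \beta\,\frac{e^{\mathrm{i}kr'}}{4\pi r'}, \quad r' = |\mathbf{x}-\mathbf{y}'|.
    \]
    ```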

  10. Identification and Multiplicity of Double Vowels in Cochlear Implant Users

    ERIC Educational Resources Information Center

    Kwon, Bomjun J.; Perry, Trevor T.

    2014-01-01

    Purpose: The present study examined cochlear implant (CI) users' perception of vowels presented concurrently (i.e., "double vowels") to further our understanding of auditory grouping in electric hearing. Method: Identification of double vowels and single vowels was measured with 10 CI subjects. Fundamental frequencies (F0s) of…

  11. Virtual adult ears reveal the roles of acoustical factors and experience in auditory space map development

    PubMed Central

    Campbell, Robert A. A.; King, Andrew J.; Nodal, Fernando R.; Schnupp, Jan W. H.; Carlile, Simon; Doubell, Timothy P.

    2009-01-01

    Auditory neurons in the superior colliculus (SC) respond preferentially to sounds from restricted directions to form a map of auditory space. The development of this representation is shaped by sensory experience, but little is known about the relative contribution of peripheral and central factors to the emergence of adult responses. By recording from the SC of anesthetized ferrets at different age points, we show that the map matures gradually after birth; the spatial receptive fields (SRFs) become more sharply tuned and topographic order emerges by the end of the second postnatal month. Principal components analysis of the head-related transfer function revealed that the time course of map development is mirrored by the maturation of the spatial cues generated by the growing head and external ears. However, using virtual acoustic space stimuli, we show that these acoustical changes are not by themselves responsible for the emergence of SC map topography. Presenting stimuli to infant ferrets through virtual adult ears did not improve the order in the representation of sound azimuth in the SC. But using linear discriminant analysis to compare different response properties across age, we found that the SRFs of infant neurons nevertheless became more adult-like when stimuli were delivered through virtual adult ears. Hence, although the emergence of auditory topography is likely to depend on refinements in neural circuitry, maturation of the structure of the SRFs (particularly their spatial extent) can be largely accounted for by changes in the acoustics associated with growth of the head and ears. PMID:18987192

  12. Virtual adult ears reveal the roles of acoustical factors and experience in auditory space map development.

    PubMed

    Campbell, Robert A A; King, Andrew J; Nodal, Fernando R; Schnupp, Jan W H; Carlile, Simon; Doubell, Timothy P

    2008-11-01

    Auditory neurons in the superior colliculus (SC) respond preferentially to sounds from restricted directions to form a map of auditory space. The development of this representation is shaped by sensory experience, but little is known about the relative contribution of peripheral and central factors to the emergence of adult responses. By recording from the SC of anesthetized ferrets at different age points, we show that the map matures gradually after birth; the spatial receptive fields (SRFs) become more sharply tuned and topographic order emerges by the end of the second postnatal month. Principal components analysis of the head-related transfer function revealed that the time course of map development is mirrored by the maturation of the spatial cues generated by the growing head and external ears. However, using virtual acoustic space stimuli, we show that these acoustical changes are not by themselves responsible for the emergence of SC map topography. Presenting stimuli to infant ferrets through virtual adult ears did not improve the order in the representation of sound azimuth in the SC. But by using linear discriminant analysis to compare different response properties across age, we found that the SRFs of infant neurons nevertheless became more adult-like when stimuli were delivered through virtual adult ears. Hence, although the emergence of auditory topography is likely to depend on refinements in neural circuitry, maturation of the structure of the SRFs (particularly their spatial extent) can be largely accounted for by changes in the acoustics associated with growth of the head and ears. PMID:18987192

  13. The effect of helicopter main rotor blade phasing and spacing on performance, blade loads, and acoustics

    NASA Technical Reports Server (NTRS)

    Gangwani, S. T.

    1976-01-01

    The performance, blade loads, and acoustic characteristics of a variable geometry rotor (VGR) system in forward flight and in a pullup maneuver were determined by the use of existing analytical programs. The investigation considered the independent effects of vertical separation of two three-bladed rotor systems as well as the effects of azimuthal spacing between the blades of the two rotors. The computations were done to determine the effects of these parameters on the performance, blade loads, and acoustic characteristics at two advance ratios in steady-state level flight and for two different g pullups at one advance ratio. To evaluate the potential benefits of the VGR concept in forward flight and pullup maneuvers, the results were compared as to performance, oscillatory blade loadings, vibratory forces transmitted to the fixed fuselage, and the rotor noise characteristics of the various VGR configurations with those of the conventional six-bladed rotor system.

  14. Gust Acoustics Computation with a Space-Time CE/SE Parallel 3D Solver

    NASA Technical Reports Server (NTRS)

    Wang, X. Y.; Himansu, A.; Chang, S. C.; Jorgenson, P. C. E.; Reddy, D. R. (Technical Monitor)

    2002-01-01

    The benchmark Problem 2 in Category 3 of the Third Computational Aero-Acoustics (CAA) Workshop is solved using the space-time conservation element and solution element (CE/SE) method. This problem concerns the unsteady response of an isolated finite-span swept flat-plate airfoil bounded by two parallel walls to an incident gust. The acoustic field generated by the interaction of the gust with the flat-plate airfoil is computed by solving the 3D (three-dimensional) Euler equations in the time domain using a parallel version of a 3D CE/SE solver. The effect of the gust orientation on the far-field directivity is studied. Numerical solutions are presented and compared with analytical solutions, showing a reasonable agreement.

  15. Reverberation impairs brainstem temporal representations of voiced vowel sounds: challenging “periodicity-tagged” segregation of competing speech in rooms

    PubMed Central

    Sayles, Mark; Stasiak, Arkadiusz; Winter, Ian M.

    2015-01-01

    The auditory system typically processes information from concurrently active sound sources (e.g., two voices speaking at once), in the presence of multiple delayed, attenuated and distorted sound-wave reflections (reverberation). Brainstem circuits help segregate these complex acoustic mixtures into “auditory objects.” Psychophysical studies demonstrate a strong interaction between reverberation and fundamental-frequency (F0) modulation, leading to impaired segregation of competing vowels when segregation is on the basis of F0 differences. Neurophysiological studies of complex-sound segregation have concentrated on sounds with steady F0s, in anechoic environments. However, F0 modulation and reverberation are quasi-ubiquitous. We examine the ability of 129 single units in the ventral cochlear nucleus (VCN) of the anesthetized guinea pig to segregate the concurrent synthetic vowel sounds /a/ and /i/, based on temporal discharge patterns under closed-field conditions. We address the effects of added real-room reverberation, F0 modulation, and the interaction of these two factors, on brainstem neural segregation of voiced speech sounds. A firing-rate representation of single-vowels' spectral envelopes is robust to the combination of F0 modulation and reverberation: local firing-rate maxima and minima across the tonotopic array code vowel-formant structure. However, single-vowel F0-related periodicity information in shuffled inter-spike interval distributions is significantly degraded in the combined presence of reverberation and F0 modulation. Hence, segregation of double-vowels' spectral energy into two streams (corresponding to the two vowels), on the basis of temporal discharge patterns, is impaired by reverberation; specifically when F0 is modulated. All unit types (primary-like, chopper, onset) are similarly affected. These results offer neurophysiological insights to perceptual organization of complex acoustic scenes under realistically challenging listening

  16. Task-dependent decoding of speaker and vowel identity from auditory cortical response patterns.

    PubMed

    Bonte, Milene; Hausfeld, Lars; Scharke, Wolfgang; Valente, Giancarlo; Formisano, Elia

    2014-03-26

    Selective attention to relevant sound properties is essential for everyday listening situations. It enables the formation of different perceptual representations of the same acoustic input and is at the basis of flexible and goal-dependent behavior. Here, we investigated the role of the human auditory cortex in forming behavior-dependent representations of sounds. We used single-trial fMRI and analyzed cortical responses collected while subjects listened to the same speech sounds (vowels /a/, /i/, and /u/) spoken by different speakers (boy, girl, male) and performed a delayed-match-to-sample task on either speech sound or speaker identity. Univariate analyses showed a task-specific activation increase in the right superior temporal gyrus/sulcus (STG/STS) during speaker categorization and in the right posterior temporal cortex during vowel categorization. Beyond regional differences in activation levels, multivariate classification of single trial responses demonstrated that the success with which single speakers and vowels can be decoded from auditory cortical activation patterns depends on task demands and subject's behavioral performance. Speaker/vowel classification relied on distinct but overlapping regions across the (right) mid-anterior STG/STS (speakers) and bilateral mid-posterior STG/STS (vowels), as well as the superior temporal plane including Heschl's gyrus/sulcus. The task dependency of speaker/vowel classification demonstrates that the informative fMRI response patterns reflect the top-down enhancement of behaviorally relevant sound representations. Furthermore, our findings suggest that successful selection, processing, and retention of task-relevant sound properties relies on the joint encoding of information across early and higher-order regions of the auditory cortex. PMID:24672000

  17. Formant frequency analysis of children's spoken and sung vowels using sweeping fundamental frequency production.

    PubMed

    White, P

    1999-12-01

    High-pitched productions present difficulties in formant frequency analysis due to wide harmonic spacing and poorly defined formants. As a consequence, there is little reliable data regarding children's spoken or sung vowel formants. Twenty-nine 11-year-old Swedish children were asked to produce 4 sustained spoken and sung vowels. In order to circumvent the problem of wide harmonic spacing, F1 and F2 measurements were taken from vowels produced with a sweeping F0. Experienced choir singers were selected as subjects in order to minimize the larynx height adjustments associated with pitch variation in less skilled subjects. Results showed significantly higher formant frequencies for speech than for singing. Formants were consistently higher in girls than in boys suggesting longer vocal tracts in these preadolescent boys. Furthermore, formant scaling demonstrated vowel dependent differences between boys and girls suggesting non-uniform differences in male and female vocal tract dimensions. These vowel-dependent sex differences were not consistent with adult data. PMID:10622522
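
    Formant measurements of this kind are commonly obtained from the roots of a linear-prediction (LPC) polynomial; the abstract does not state which analysis method was used, so the following is only a generic sketch, with a synthetic two-sinusoid "vowel" standing in for a real recording:

    ```python
    import numpy as np
    import librosa

    def lpc_formants(y, sr, order=12, n_formants=2):
        """Rough formant estimates (Hz) from the angles of strong LPC poles."""
        a = librosa.lpc(y, order=order)             # LPC polynomial coefficients
        roots = np.roots(a)
        # Keep positive-frequency poles close to the unit circle.
        roots = roots[(np.imag(roots) > 0) & (np.abs(roots) > 0.9)]
        freqs = np.sort(np.angle(roots) * sr / (2.0 * np.pi))
        return freqs[:n_formants]

    # Toy usage (replace with a real sustained-vowel recording).
    sr = 16000
    t = np.arange(0, 0.2, 1.0 / sr)
    y = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 2300 * t)
    y += 0.01 * np.random.default_rng(0).standard_normal(len(t))
    print(lpc_formants(y, sr, order=4))             # roughly [300, 2300] Hz here
    ```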

  18. Early sound symbolism for vowel sounds

    PubMed Central

    Spector, Ferrinne; Maurer, Daphne

    2013-01-01

    Children and adults consistently match some words (e.g., kiki) to jagged shapes and other words (e.g., bouba) to rounded shapes, providing evidence for non-arbitrary sound–shape mapping. In this study, we investigated the influence of vowels on sound–shape matching in toddlers, using four contrasting pairs of nonsense words differing in vowel sound (/i/ as in feet vs. /o/ as in boat) and four rounded–jagged shape pairs. Crucially, we used reduplicated syllables (e.g., kiki vs. koko) rather than confounding vowel sound with consonant context and syllable variability (e.g., kiki vs. bouba). Toddlers consistently matched words with /o/ to rounded shapes and words with /i/ to jagged shapes (p < 0.01). The results suggest that there may be naturally biased correspondences between vowel sound and shape. PMID:24349684

  19. A comparative analysis of Media Lengua and Quichua vowel production.

    PubMed

    Stewart, Jesse

    2014-01-01

    This study presents a comparative analysis of F1 and F2 vowel frequencies from Pijal Media Lengua (PML) and Imbabura Quichua. Mixed-effects models are used to test Spanish-derived high and low vowels against their Quichua-derived counterparts for statistical significance. Spanish-derived and Quichua-derived high vowels are also tested against Spanish-derived mid vowels. This analysis suggests that PML may be manipulating as many as eight vowels where Spanish-derived high and low vowels coexist as near-mergers with their Quichua-derived counterparts, while high and mid vowels coexist with partial overlap. Quichua, traditionally viewed as a three-vowel system, shows similar results and may be manipulating as many as six vowels. PMID:25721292
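
    A mixed-effects comparison of formant values like the one described (vowel origin as a fixed effect, speaker as a random effect) can be sketched with statsmodels; the column names, factor levels, and values below are hypothetical, not the study's data:

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one row per vowel token.
    df = pd.DataFrame({
        "F1": [310, 320, 300, 305, 455, 460, 445, 450,
               380, 390, 370, 375, 500, 505, 490, 495],
        "origin": ["spanish", "spanish", "quichua", "quichua"] * 4,
        "speaker": ["s1"] * 4 + ["s2"] * 4 + ["s3"] * 4 + ["s4"] * 4,
    })

    # Fixed effect of vowel origin on F1, random intercept per speaker.
    model = smf.mixedlm("F1 ~ origin", df, groups=df["speaker"])
    result = model.fit()
    print(result.summary())
    ```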

  20. Vibro-Acoustic Analysis of NASA's Space Shuttle Launch Pad 39A Flame Trench Wall

    NASA Technical Reports Server (NTRS)

    Margasahayam, Ravi N.

    2009-01-01

    A vital element to NASA's manned space flight launch operations is the Kennedy Space Center Launch Complex 39's launch pads A and B. Originally designed and constructed in the 1960s for the Saturn V rockets used for the Apollo missions, these pads were modified above grade to support Space Shuttle missions. But below grade, each pad's original walls (including the flame trench, a 42-foot-deep, 58-foot-wide, and 450-foot-long tunnel designed to deflect flames and exhaust gases) remained unchanged. On May 31, 2008, during the launch of STS-124, over 3,500 of the 22,000 interlocking refractory bricks that lined the east wall of the flame trench, protecting the pad structure, were liberated from pad 39A. The STS-124 launch anomaly spawned an agency-wide initiative to determine the failure root cause, to assess the impact of debris on vehicle and ground support equipment safety, and to prescribe corrective action. The investigation encompassed radar imaging, infrared video review, debris transport mechanism analysis using computational fluid dynamics, destructive testing, and non-destructive evaluation, including vibro-acoustic analysis, in order to validate the corrective action. The primary focus of this paper is on the analytic approach, including static, modal, and vibro-acoustic analysis, required to certify the corrective action and ensure integrity and operational reliability for future launches. Due to the absence of instrumentation (including pressure transducers, acoustic pressure sensors, and accelerometers) in the flame trench, defining an accurate acoustic signature of the launch environment during shuttle main engine/solid rocket booster ignition and vehicle ascent posed a significant challenge. Details of the analysis, including the derivation of launch environments, the finite element approach taken, and analysis/test/launch data correlation are discussed. Data obtained from the recent launch of STS-126 from Pad 39A was instrumental in validating the

  1. Drop dynamics in space and interference with acoustic field (M-15)

    NASA Technical Reports Server (NTRS)

    Yamanaka, Tatsuo

    1993-01-01

    The objective of the experiment is to study contactless positioning of liquid drops, excitation of capillary waves on the surface of acoustically levitated liquid drops, and deformation of liquid drops by means of acoustic radiation pressure. Contactless positioning technologies are very important in space materials processing because the melt is processed without contacting the wall of a crucible, which can easily contaminate the melt, especially for high-melting-temperature and chemically reactive materials. Among the contactless positioning technologies, acoustic technology is especially important for materials unsusceptible to electromagnetic fields, such as glasses and ceramics. The shape of a levitated liquid drop in the weightless condition is determined by its surface tension and the internal and external pressure distribution. If the surface temperature is constant and there are neither internal nor external pressure perturbations, the levitated liquid drop forms a perfect sphere. If temperature gradients on the surface and internal or external pressure perturbations exist, the liquid drop takes on various mode shapes with proper vibrations. A rotating liquid drop was specifically studied not only as a classical problem of theoretical mechanics, describing the shapes and arrangement of the planets of the solar system, but also as a more contemporary problem of modern non-linear mechanics. In the experiment, we expect to observe various shapes of a liquid drop such as cocoon, tri-lobed, tetrapod, multi-lobed, and doughnut.

  2. Enhancement of temporal periodicity cues in cochlear implants: Effects on prosodic perception and vowel identification

    NASA Astrophysics Data System (ADS)

    Green, Tim; Faulkner, Andrew; Rosen, Stuart; Macherey, Olivier

    2005-07-01

    Standard continuous interleaved sampling processing, and a modified processing strategy designed to enhance temporal cues to voice pitch, were compared on tests of intonation perception, and vowel perception, both in implant users and in acoustic simulations. In standard processing, 400 Hz low-pass envelopes modulated either pulse trains (implant users) or noise carriers (simulations). In the modified strategy, slow-rate envelope modulations, which convey dynamic spectral variation crucial for speech understanding, were extracted by low-pass filtering (32 Hz). In addition, during voiced speech, higher-rate temporal modulation in each channel was provided by 100% amplitude-modulation by a sawtooth-like wave form whose periodicity followed the fundamental frequency (F0) of the input. Channel levels were determined by the product of the lower- and higher-rate modulation components. Both in acoustic simulations and in implant users, the ability to use intonation information to identify sentences as question or statement was significantly better with modified processing. However, while there was no difference in vowel recognition in the acoustic simulation, implant users performed worse with modified processing both in vowel recognition and in formant frequency discrimination. It appears that, while enhancing pitch perception, modified processing harmed the transmission of spectral information.
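
    Below is a much-simplified sketch of the modified channel-envelope computation described above (a 32-Hz low-pass envelope multiplied by a 100%-depth sawtooth-like modulator at F0 during voiced speech); the filter design, envelope extraction, and fixed F0 value are placeholders, not the authors' implementation:

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert, sawtooth

    def modified_channel_envelope(band_signal, sr, f0_hz=None, voiced=False):
        """One channel of a CIS-like strategy: a slow (32 Hz) envelope,
        multiplied by an F0-rate sawtooth modulator during voiced segments."""
        # Slow-rate envelope: magnitude of the analytic signal, low-passed at 32 Hz.
        env = np.abs(hilbert(band_signal))
        b, a = butter(4, 32.0 / (sr / 2.0), btype="low")
        slow_env = filtfilt(b, a, env)

        if not voiced or f0_hz is None:
            return slow_env

        # Higher-rate modulation: 100%-depth sawtooth-like waveform at F0.
        t = np.arange(len(band_signal)) / sr
        mod = 0.5 * (1.0 + sawtooth(2 * np.pi * f0_hz * t))   # ranges 0..1
        return slow_env * mod                                  # product of both components

    # Toy usage: white noise standing in for one analysis band of voiced speech.
    sr = 16000
    x = np.random.default_rng(0).standard_normal(sr // 2)
    y = modified_channel_envelope(x, sr, f0_hz=120.0, voiced=True)
    ```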

  3. The Effect of Acoustic Disturbances on the Operation of the Space Shuttle Main Engine Fuel Flowmeter

    NASA Technical Reports Server (NTRS)

    Marcu, Bogdan; Szabo, Roland; Dorney, Dan; Zoladz, Tom

    2007-01-01

    The Space Shuttle Main Engine (SSME) uses a turbine fuel flowmeter (FFM) in its Low Pressure Fuel Duct (LPFD) to measure liquid hydrogen flowrates during engine operation. The flowmeter is required to provide accurate and robust measurements of flow rates ranging from 10000 to 18000 GPM in an environment contaminated by duct vibration and duct internal acoustic disturbances. Errors exceeding 0.5% can have a significant impact on engine operation and mission completion. The accuracy of each sensor is monitored during hot-fire engine tests on the ground. Flow meters which do not meet requirements are not flown. Among other parameters, the device is screened for a specific behavior in which a small shift in the flow rate reading is registered during a period in which the actual fuel flow as measured by a facility meter does not change. Such behavior has been observed over the years for specific builds of the FFM and must be avoided or limited in magnitude in flight. Various analyses of the recorded data have been made prior to this report in an effort to understand the cause of the phenomenon; however, no conclusive cause for the shift in the instrument behavior has been found. The present report proposes an explanation of the phenomenon based on interactions between acoustic pressure disturbances in the duct and the wakes produced by the FFM flow straightener. Physical insight into the effects of acoustic plane wave disturbances was obtained using a simple analytical model. Based on that model, a series of three-dimensional unsteady viscous flow computational fluid dynamics (CFD) simulations were performed using the MSFC PHANTOM turbomachinery code. The code was customized to allow the FFM rotor speed to change at every time step according to the instantaneous fluid forces on the rotor, that, in turn, are affected by acoustic plane pressure waves propagating through the device. The results of the simulations show the variation in the rotation rate of the flowmeter

  4. Comparative evaluation of Space Transportation System (STS)-3 flight and acoustic test random vibration response of the OSS-1 payload

    NASA Technical Reports Server (NTRS)

    On, F. J.

    1983-01-01

    A comparative evaluation of the Space Transportation System (STS)-3 flight and acoustic test random vibration response of the Office of Space Science-1 (OSS-1) payload is presented. The results provide insight into the characteristics of vibroacoustic response of pallet payload components in the payload bay during STS flights.

  5. Nonnative Speech Perception Training Using Vowel Subsets: Effects of Vowels in Sets and Order of Training

    ERIC Educational Resources Information Center

    Nishi, Kanae; Kewley-Port, Diane

    2008-01-01

    Purpose: K. Nishi and D. Kewley-Port (2007) trained Japanese listeners to perceive 9 American English monophthongs and showed that a protocol using all 9 vowels (fullset) produced better results than the one using only the 3 more difficult vowels (subset). The present study extended the target population to Koreans and examined whether protocols…

  6. The right ear advantage revisited: speech lateralisation in dichotic listening using consonant-vowel and vowel-consonant syllables.

    PubMed

    Sætrevik, Bjørn

    2012-01-01

    The dichotic listening task is typically administered by presenting a consonant-vowel (CV) syllable to each ear and asking the participant to report the syllable heard most clearly. The results tend to show more reports of the right ear syllable than of the left ear syllable, an effect called the right ear advantage (REA). The REA is assumed to be due to the crossing over of auditory fibres and the processing of language stimuli being lateralised to left temporal areas. However, the tendency for most dichotic listening experiments to use only CV syllable stimuli limits the extent to which the conclusions can be generalised to also apply to other speech phonemes. The current study re-examines the REA in dichotic listening by using both CV and vowel-consonant (VC) syllables and combinations thereof. Results showed a replication of the REA response pattern for both CV and VC syllables, thus indicating that the general assumption of left-side localisation of processing can be applied for both types of stimuli. Further, on trials where a CV is presented in one ear and a VC is presented in the other ear, the CV is selected more often than the VC, indicating that these phonemes have an acoustic or processing advantage. PMID:24735233

  7. Specific features of vowel-like signals of white whales

    NASA Astrophysics Data System (ADS)

    Bel'Kovich, V. M.; Kreichi, S. A.

    2004-05-01

    The set of acoustic signals of White-Sea white whales comprises about 70 types of signals. Six of them occur most often and constitute 75% of the total number of signals produced by these animals. According to behavioral reactions, white whales distinguish each other by acoustic signals, which is also typical of other animal species and humans. To investigate this phenomenon, signals perceived as vowel-like sounds of speech, including sounds perceived as a "bleat," were chosen. A sample of 480 signals recorded in June and July 2000 in the White Sea within a reproductive assemblage of white whales near the Large Solovetskii Island was studied. Signals were recorded on a digital data carrier (a SONY minidisk) in the frequency range of 0.06–20 kHz. The purpose of the study was to reveal the perceptive and acoustic features specific to individual animals. The study was carried out using the methods of structural analysis of vocal speech that are employed in lingual criminalistics to identify a speaking person. It was demonstrated that this approach allows one to group the signals by coincident perceptive and acoustic parameters, assigning individual attributes to single parameters. This provided an opportunity to conditionally separate about 40 different sources of acoustic signals according to the totality of coincidences, which corresponded to the number of white whales observed visually. Thus, the application of this method proves to be very promising for the acoustic identification of white whales and other marine mammals, this possibility being very important for biology.

  8. English vowel identification in quiet and noise: effects of listeners' native language background

    PubMed Central

    Jin, Su-Hyun; Liu, Chang

    2014-01-01

    Purpose: To investigate the effect of listener's native language (L1) and the types of noise on English vowel identification in noise. Method: Identification of 12 English vowels was measured in quiet and in long-term speech-shaped noise and multi-talker babble (MTB) noise for English- (EN), Chinese- (CN) and Korean-native (KN) listeners at various signal-to-noise ratios (SNRs). Results: Compared to non-native listeners, EN listeners performed significantly better in quiet and in noise. Vowel identification in long-term speech-shaped noise and in MTB noise was similar between CN and KN listeners. This is different from our previous study in which KN listeners performed better than CN listeners in English sentence recognition in MTB noise. Discussion: Results from the current study suggest that depending on speech materials, the effect of non-native listeners' L1 on speech perception in noise may be different. That is, in the perception of speech materials with little linguistic cues like isolated vowels, the characteristics of non-native listener's native language may not play a significant role. On the other hand, in the perception of running speech in which listeners need to use more linguistic cues (e.g., acoustic-phonetic, semantic, and prosodic cues), the non-native listener's native language background might result in a different masking effect. PMID:25400538

  9. Formant frequencies of Malay vowels produced by Malay children aged between 7 and 12 years.

    PubMed

    Ting, Hua-Nong; Zourmand, Alireza; Chia, See-Yan; Yong, Boon-Fei; Abdul Hamid, Badrulzaman

    2012-09-01

    The formant frequencies of Malaysian Malay children have not been well studied. This article investigates the first four formant frequencies of sustained vowels in 360 Malay children aged between 7 and 12 years using acoustical analysis. Generally, Malay female children had higher formant frequencies than those of their male counterparts. However, no significant differences in all four formant frequencies were observed between the Malay male and female children in most of the vowels and age groups. Significant differences in all formant frequencies were found across the Malay vowels in both Malay male and female children for all age groups except for F4 in female children aged 12 years. Generally, the Malaysian Malay children showed a nonsystematic decrement in formant frequencies with age. Low levels of significant differences in formant frequencies were observed across the age groups in most of the vowels for F1, F3, and F4 in Malay male children and F1 and F4 in Malay female children. PMID:22285457

  10. Parsing the role of consonants versus vowels in the classic Takete-Maluma phenomenon.

    PubMed

    Nielsen, Alan K S; Rendall, Drew

    2013-06-01

    Wolfgang Köhler (1929, Gestalt psychology, New York, NY: Liveright) famously reported a bias in people's choice of nonsense words as labels for novel objects, pointing to possible naïve expectations about language structure. Two accounts have been offered to explain this bias, one focusing on the visuomotor effects of different vowel forms and the other focusing on variation in the acoustic structure and perceptual quality of different consonants. To date, evidence in support of both effects is mixed. Moreover, the veracity of either effect has often been doubted due to perceived limitations in methodologies and stimulus materials. A novel word-construction experiment is presented to test both proposed effects using randomized word- and image-generation techniques to address previous methodological concerns. Results show that participants are sensitive to both vowel and consonant content, constructing novel words of relatively sonorant consonants and rounded vowels to label curved object images, and of relatively plosive consonants and nonrounded vowels to label jagged object images. Results point to additional influences on word construction potentially related to the articulatory affordances or constraints accompanying different word forms. PMID:23205509

  11. The discrimination of baboon grunt calls and human vowel sounds by baboons

    NASA Astrophysics Data System (ADS)

    Hienz, Robert D.; Jones, April M.; Weerts, Elise M.

    2004-09-01

    The ability of baboons to discriminate changes in the formant structures of a synthetic baboon grunt call and an acoustically similar human vowel (/eh/) was examined to determine how comparable baboons are to humans in discriminating small changes in vowel sounds, and whether or not any species-specific advantage in discriminability might exist when baboons discriminate their own vocalizations. Baboons were trained to press and hold down a lever to produce a pulsed train of a standard sound (e.g., /eh/ or a baboon grunt call), and to release the lever only when a variant of the sound occurred. Synthetic variants of each sound had the same first and third through fifth formants (F1 and F3-5), but varied in the location of the second formant (F2). Thresholds for F2 frequency changes were 55 and 67 Hz for the grunt and vowel stimuli, respectively, and were not statistically different from one another. Baboons discriminated changes in vowel formant structures comparable to those discriminated by humans. No distinct advantages in discrimination performances were observed when the baboons discriminated these synthetic grunt vocalizations.

  12. Vocal register effects on vowel spectral noise and roughness: findings for adult males.

    PubMed

    Emanuel, F; Scarinzi, A

    1980-03-01

    This study was the second in a series designed to investigate the effects of vocal register (vocal fry, modal, and falsetto) on the perceived roughness and spectral noise level of isolated test vowels. The first study (reported previously) was concerned with such effects on the phonations of adult females; in this study the phonations of adult males were investigated. Each of 15 male subjects produced at a controlled intensity each of two test vowels (/u/ and /ae/) in each of three vocal registers. Eleven listeners subsequently rated the test samples for roughness on a 5-point equal-appearing intervals scale. The criterion measure of roughness for each sample was the median of listener ratings (MRR). Each sample was also analyzed to produce its 3-Hz bandwidth acoustic spectrum, from which measures of vowel spectral noise were obtained. The criterion measure of spectral noise level (SNL) for each test sample was the mean of 25 measures taken in the frequency range from 100 to 2600 Hz. The major finding was that the MRR and SNL for productions of both test vowels diminished significantly across vocal registers; i.e., from fry, to modal, to falsetto. In general, the present findings for males appeared consistent with those we reported earlier for phonations by adult females. PMID:7358873

  13. Neural representation of three-dimensional acoustic space in the human temporal lobe

    PubMed Central

    Zhang, Xiaolu; Zhang, Qingtian; Hu, Xiaolin; Zhang, Bo

    2015-01-01

    Sound localization is an important function of the human brain, but the underlying cortical mechanisms remain unclear. In this study, we recorded auditory stimuli in three-dimensional space and then replayed the stimuli through earphones during functional magnetic resonance imaging (fMRI). By employing a machine learning algorithm, we successfully decoded sound location from the blood oxygenation level-dependent signals in the temporal lobe. Analysis of the data revealed that different cortical patterns were evoked by sounds from different locations. Specifically, discrimination of sound location along the abscissa axis evoked robust responses in the left posterior superior temporal gyrus (STG) and right mid-STG, discrimination along the elevation (EL) axis evoked robust responses in the left posterior middle temporal lobe (MTL) and right STG, and discrimination along the ordinate axis evoked robust responses in the left mid-MTL and right mid-STG. These results support a distributed representation of acoustic space in human cortex. PMID:25932011

  14. Calibration of International Space Station (ISS) Node 1 Vibro-Acoustic Model

    NASA Technical Reports Server (NTRS)

    Zhang, Weiguo; Raveendra, Ravi

    2014-01-01

    Reported here is the use of the Energy Finite Element Method (E-FEM) to predict the vibro-acoustic sound fields within the International Space Station (ISS) Node 1 and to compare the results with actual measurements of leak sounds made by a one-atmosphere-to-vacuum leak through a small hole in the pressure wall of the Node 1 STA module during its period of storage at Stennis Space Center (SSC). While E-FEM represents a reverberant sound field calculation, of importance to this application is the requirement to also handle the direct field effect of the sound generation. It was also important to be able to compute the sound fields in the ultrasonic frequency range. This report demonstrates the capability of this technology as applied to this type of application.

  15. Minke whale song, spacing, and acoustic communication on the Great Barrier Reef, Australia

    NASA Astrophysics Data System (ADS)

    Gedamke, Jason

    An inquisitive population of minke whale (Balaenoptera acutorostrata) that concentrates on the Great Barrier Reef during its suspected breeding season offered a unique opportunity to conduct a multi-faceted study of a little-known Balaenopteran species' acoustic behavior. Chapter one investigates whether the minke whale is the source of an unusual, complex, and stereotyped recorded sound, the "star-wars" vocalization. A hydrophone array was towed from a vessel to record sounds from circling whales for subsequent localization of sound sources. These acoustic locations were matched with shipboard and in-water observations of the minke whale, demonstrating the minke whale was the source of this unusual sound. Spectral and temporal features of this sound and the source levels at which it is produced are described. The repetitive "star-wars" vocalization appears similar to the songs of other whale species and has characteristics consistent with reproductive advertisement displays. Chapter two investigates whether song (i.e. the "star-wars" vocalization) has a spacing function through passive monitoring of singer spatial patterns with a moored five-sonobuoy array. Active song playback experiments to singers were also conducted to further test song function. This study demonstrated that singers naturally maintain spatial separations between them through a nearest-neighbor analysis and animated tracks of singer movements. In response to active song playbacks, singers generally moved away and repeated song more quickly suggesting that song repetition interval may help regulate spatial interaction and singer separation. These results further indicate the Great Barrier Reef may be an important reproductive habitat for this species. Chapter three investigates whether song is part of a potentially graded repertoire of acoustic signals. Utilizing both vessel-based recordings and remote recordings from the sonobuoy array, temporal and spectral features, source levels, and

  16. Perturbation and Nonlinear Dynamic Analysis of Acoustic Phonatory Signal in Parkinsonian Patients Receiving Deep Brain Stimulation

    ERIC Educational Resources Information Center

    Lee, Victoria S.; Zhou, Xiao Ping; Rahn, Douglas A., III; Wang, Emily Q.; Jiang, Jack J.

    2008-01-01

    Nineteen PD patients who received deep brain stimulation (DBS), 10 non-surgical (control) PD patients, and 11 non-pathologic age- and gender-matched subjects performed sustained vowel phonations. The following acoustic measures were obtained on the sustained vowel phonations: correlation dimension (D[subscript 2]), percent jitter, percent shimmer,…
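
    Percent jitter and percent shimmer are standard perturbation measures; the abstract does not give the exact algorithms used, so the sketch below only shows the usual "local" definitions, assuming cycle-to-cycle period and peak-amplitude sequences have already been extracted from the sustained vowel:

    ```python
    import numpy as np

    def percent_jitter(periods):
        """Local jitter: mean absolute difference between consecutive periods,
        divided by the mean period, in percent."""
        periods = np.asarray(periods, dtype=float)
        return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

    def percent_shimmer(amplitudes):
        """Local shimmer: mean absolute difference between consecutive peak
        amplitudes, divided by the mean amplitude, in percent."""
        amplitudes = np.asarray(amplitudes, dtype=float)
        return 100.0 * np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

    # Hypothetical cycle data from a sustained vowel phonation:
    periods_s = [0.0080, 0.0081, 0.0079, 0.0082, 0.0080]
    peak_amps = [0.92, 0.95, 0.90, 0.94, 0.93]
    print(percent_jitter(periods_s), percent_shimmer(peak_amps))
    ```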

  17. The perception of lexical stress in German: effects of segmental duration and vowel quality in different prosodic patterns.

    PubMed

    Kohler, Klaus J

    2012-01-01

    Several decades of research, focusing on English, Dutch and German, have set up a hierarchy of acoustic properties for cueing lexical stress. It attributes the strongest cue to criterial-level f0 change, followed by duration, but low weight to energy and to stressed-vowel spectra. This paper re-examines the established view with new data from German. In the natural productions of the German word pair Kaffee 'coffee' - Café 'locality' (with initial vs. final stress in a North German pronunciation), vowel duration was manipulated in a complementary fashion across the two syllables in five steps, spanning the continuum from initial to final stress on each word. The two base words provided different vowel qualities as the second variable, the intervocalic fricative was varied in two values, long and short, taken from Café and Kaffee, and the generated test words were inserted in a low f0 tail and in a high f0 hat-pattern plateau, which both eliminated f0 change as a cue to lexical stress. The sentence stimuli were judged in two listening experiments by 16 listeners in each as to whether the first or the second syllable of the test word was stressed. The results show highly significant effects of vowel duration, vowel quality and fricative duration. The combined vowel-quality and fricative variable can outweigh vowel duration as a cue to lexical stress. The effect of the prosodic frame is only marginal, especially related to a rhythmic factor. The paper concludes that there is no general hierarchy with a fixed ranking of the variables traditionally adduced to signal lexical stress. Every prosodic embedding of segmental sequences defines the hierarchy afresh. PMID:23172240

  18. Acoustic puncture assist device versus loss of resistance technique for epidural space identification

    PubMed Central

    Mittal, Amit Kumar; Goel, Nitesh; Chowdhury, Itee; Shah, Shagun Bhatia; Singh, Brijesh Pratap; Jakhar, Pradeep

    2016-01-01

    Background and Aims: The conventional techniques of epidural space (EDS) identification based on loss of resistance (LOR) have a higher chance of complications, patchy analgesia and epidural failure, which can be minimised by objective confirmation of the space before catheter placement. The acoustic puncture assist device (APAD) technique objectively confirms EDS, thus enhancing success, with fewer complications. This study was planned with the objective to evaluate the APAD technique and compare it to the LOR technique for EDS identification and its correlation with ultrasound-guided EDS depth. Methods: In this prospective study, the lumbar vertebral spaces were scanned by ultrasound for measuring the depth of the EDS and later correlated with the procedural depth measured by either technique (APAD or LOR). The data were subjected to descriptive statistics, the concordance correlation coefficient and Bland-Altman analysis with 95% confidence limits. Results: An acoustic dip in pitch and a descent in the pressure tracing on EDS localisation were observed among the patients of the APAD group. Analysis of concordance correlation between the ultrasonography (USG) depth and APAD or LOR depth was significant (r ≥ 0.97 in both groups). Bland-Altman analysis revealed a mean difference of 0.171 cm in group APAD and 0.154 cm in group LOR. The 95% limits of agreement for the difference between the two measurements were −0.569 to 0.226 cm in the APAD group and −0.530 to 0.222 cm in the LOR group. Conclusion: We found APAD to be a precise tool for objective localisation of the EDS, correlating well with the pre-procedural USG depth of the EDS. PMID:27212720
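
    The agreement statistics reported (a mean difference with 95% limits of agreement, and a concordance correlation coefficient) can be reproduced in a few lines; the sketch below uses invented paired depth measurements, not the study's data:

    ```python
    import numpy as np

    def bland_altman(a, b):
        """Mean difference (bias) and 95% limits of agreement between paired measurements."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        diff = a - b
        bias = diff.mean()
        half_width = 1.96 * diff.std(ddof=1)
        return bias, bias - half_width, bias + half_width

    def concordance_ccc(a, b):
        """Lin's concordance correlation coefficient."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        cov = np.cov(a, b, ddof=1)[0, 1]
        return 2 * cov / (a.var(ddof=1) + b.var(ddof=1) + (a.mean() - b.mean()) ** 2)

    # Hypothetical paired depths (cm): procedural depth vs. pre-procedural ultrasound depth.
    proc = [4.1, 4.5, 3.9, 5.0, 4.7, 4.2]
    usg  = [4.0, 4.4, 3.8, 4.8, 4.6, 4.1]
    print(bland_altman(proc, usg))
    print(concordance_ccc(proc, usg))
    ```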

  19. Phoneme Recognition and Confusions with Multichannel Cochlear Implants: Vowels.

    ERIC Educational Resources Information Center

    Valimaa, Taina T.; Maatta, Taisto K.; Lopponen, Heikki J.; Sorri, Martti J.

    2002-01-01

    A study investigated how 19 Finnish adults who were postlingually severely or profoundly hearing impaired would relearn to recognize vowels after receiving multi-channel cochlear implants. Average vowel recognition was 68% 6 months after switch-on, and 80% 24 months after switch-on. Vowels y, e, and o were most difficult. (Contains references.)…

  20. Palatalization and Intrinsic Prosodic Vowel Features in Russian

    ERIC Educational Resources Information Center

    Ordin, Mikhail

    2011-01-01

    The presented study is aimed at investigating the interaction of palatalization and intrinsic prosodic features of the vowel in CVC (consonant+vowel+consonant) syllables in Russian. The universal nature of intrinsic prosodic vowel features was confirmed with the data from the Russian language. It was found that palatalization of the consonants…

  1. Audiovisual Perception of Congruent and Incongruent Dutch Front Vowels

    ERIC Educational Resources Information Center

    Valkenier, Bea; Duyne, Jurriaan Y.; Andringa, Tjeerd C.; Baskent, Deniz

    2012-01-01

    Purpose: Auditory perception of vowels in background noise is enhanced when combined with visually perceived speech features. The objective of this study was to investigate whether the influence of visual cues on vowel perception extends to incongruent vowels, in a manner similar to the McGurk effect observed with consonants. Method:…

  2. The Role of Consonant/Vowel Organization in Perceptual Discrimination

    ERIC Educational Resources Information Center

    Chetail, Fabienne; Drabs, Virginie; Content, Alain

    2014-01-01

    According to a recent hypothesis, the CV pattern (i.e., the arrangement of consonant and vowel letters) constrains the mental representation of letter strings, with each vowel or vowel cluster being the core of a unit. Six experiments with the same/different task were conducted to test whether this structure is extracted prelexically. In the…

  3. Factors Related to the Pronunciation of Vowel Clusters.

    ERIC Educational Resources Information Center

    Johnson, Dale D.

    This research report examines the pronunciation that children give to synthetic words containing vowel-cluster spellings and analyzes the observed pronunciations in relation to common English words containing the same vowel clusters. The pronunciations associated with vowel-cluster spellings are among the most unpredictable letter-sound…

  4. Frequency-space prediction filtering for acoustic clutter and random noise attenuation in ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Shin, Junseob; Huang, Lianjie

    2016-04-01

    Frequency-space prediction filtering (FXPF), also known as FX deconvolution, is a technique originally developed for random noise attenuation in seismic imaging. FXPF attempts to reduce random noise in seismic data by modeling only real signals that appear as linear or quasilinear events in the aperture domain. In medical ultrasound imaging, channel radio frequency (RF) signals from the main lobe appear as horizontal events after receive delays are applied, while acoustic clutter signals from off-axis scatterers and electronic noise do not. Therefore, FXPF is suitable for preserving only the main-lobe signals and attenuating the unwanted contributions from clutter and random noise in medical ultrasound imaging. We adapt FXPF to ultrasound imaging, and evaluate its performance using simulated data sets from a point target and an anechoic cyst. Our simulation results show that using only 5 iterations of FXPF achieves contrast-to-noise ratio (CNR) improvements of 67% in a simulated noise-free anechoic cyst and 228% in a simulated anechoic cyst contaminated with random noise at a 15-dB signal-to-noise ratio (SNR). Our findings suggest that ultrasound imaging with FXPF attenuates contributions from both acoustic clutter and random noise and, therefore, FXPF has great potential to improve ultrasound image contrast for better visualization of important anatomical structures and detection of diseased conditions.
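
    Below is a much-simplified sketch of the general FXPF idea: transform each channel's RF trace to the frequency domain, then linearly predict across the channel axis so that laterally coherent main-lobe energy is kept while incoherent clutter and noise are attenuated. The one-sided forward predictor shown is illustrative only and does not reproduce the published filter design or iteration scheme:

    ```python
    import numpy as np

    def fxpf(channel_rf, filter_len=4, n_iter=5):
        """Toy frequency-space prediction filter.
        channel_rf: real array of shape (n_samples, n_channels) after receive delays."""
        n_samples, n_chan = channel_rf.shape
        spec = np.fft.rfft(channel_rf, axis=0)          # frequency x channel

        for _ in range(n_iter):
            for f in range(spec.shape[0]):
                x = spec[f]                             # complex trace across channels
                # Predict x[n] from the preceding filter_len channel samples (least squares).
                A = np.array([x[i:i + filter_len] for i in range(n_chan - filter_len)])
                y = x[filter_len:]
                coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
                pred = x.copy()
                pred[filter_len:] = A @ coeffs          # keep the first samples unfiltered
                spec[f] = pred

        return np.fft.irfft(spec, n=n_samples, axis=0)

    # Toy usage: 64 channels of 256-sample RF data (random stand-in for real channel data).
    rf = np.random.default_rng(0).standard_normal((256, 64))
    filtered = fxpf(rf, filter_len=4, n_iter=5)
    ```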

  5. Acoustical characteristics of water sounds for soundscape enhancement in urban open spaces.

    PubMed

    Jeon, Jin Yong; Lee, Pyoung Jik; You, Jin; Kang, Jian

    2012-03-01

    The goal of the present study is to characterize water sounds that can be used in urban open spaces to mask road traffic noise. Sounds and visual images of a number of water features located in urban open places were obtained and subsequently analyzed in terms of psychoacoustical metrics and acoustical measures. Laboratory experiments were then conducted to investigate which water sound is appropriate for masking urban noise. The experiments consisted of two sessions: (1) Audio-only condition and (2) combined audio-visual condition. Subjective responses to stimuli were rated through the use of preference scores and 15 adjectives. The results of the experiments revealed that preference scores for the urban soundscape were affected by the acoustical characteristics of water sounds and visual images of water features; Sharpness that was used to explain the spectral envelopes of water sounds was proved to be a dominant factor for urban soundscape perception; and preferences regarding the urban soundscape were significantly related to adjectives describing "freshness" and "calmness." PMID:22423706

  6. Ample active acoustic space of a frog from the South American temperate forest.

    PubMed

    Penna, Mario; Moreno-Gómez, Felipe N

    2014-03-01

    The efficiency of acoustic communication depends on the power generated by the sound source, the attributes of the environment across which signals propagate, the environmental noise and the sensitivity of the intended receivers. Eupsophus emiliopugini, an anuran from the temperate austral forest, communicates by means of an advertisement call of moderate intensity within the range for anurans. To estimate the range over which these frogs communicate effectively, we conducted measurements of call sound levels and of auditory thresholds to pure tones and to synthetic conspecific calls. The results show that E. emiliopugini produces advertisement calls of about 84 dB SPL at 0.25 m from the caller. The signals are affected by attenuation as they propagate, reaching average values of about 47 dB SPL at 8 m from the sound source. Midbrain multi-unit recordings show quite sensitive audiograms within the anuran range, with thresholds of about 44 dB SPL for synthetic imitations of conspecific calls, which would allow communication at distances beyond 8 m. This is an extended range as compared to E. calcaratus, a related syntopic species which a previous study has shown to be restricted to active acoustic spaces shorter than 2 m. The comparison reveals divergent strategies for related taxa communicating amid the same environment. PMID:24356786

  7. Expansion Techniques of Embedding Audio Watermark Data Rate for Constructing Ubiquitous Acoustic Spaces

    NASA Astrophysics Data System (ADS)

    Modegi, Toshio

    We are proposing “Ubiquitous Acoustic Spaces”, where each sound source can emit some address information with audio signals and let us automatically access its related cyber space, using handheld devices such as cellphones. In order to realize this concept, we have considered three types of extraction methods: an acoustic modulation technique, an audio fingerprint technique, and an audio watermark technique. We then proposed a novel audio watermarking technique, which enables contactless asynchronous detection of embedded audio watermarks through speaker and microphone devices. However, its embedding data rate was around 10 [bps], which was not sufficient for embedding generally used URL address texts. Therefore, we have extended the embedding frequency range and proposed a duplicated embedding algorithm, which uses both the previously proposed frequency division method and a temporal division method together. With these improvements, the possible embedding data rate could be extended to 61.5 [bps], and we could extract watermarks through public telephone networks, even from a cell phone sound source. In this paper, we describe abstracts of our improved watermark embedding and extracting algorithms, and experimental results of watermark extraction precision under several audio signal capturing conditions.

  8. An investigation of acoustic noise requirements for the Space Station centrifuge facility

    NASA Technical Reports Server (NTRS)

    Castellano, Timothy

    1994-01-01

    Acoustic noise emissions from the Space Station Freedom (SSF) centrifuge facility hardware represent a potential technical and programmatic risk to the project. The SSF program requires that no payload exceed a Noise Criterion 40 (NC-40) noise contour in any octave band between 63 Hz and 8 kHz as measured 2 feet from the equipment item. Past experience with life science experiment hardware indicates that this requirement will be difficult to meet. The crew has found noise levels on Spacelab flights to be unacceptably high. Many past Ames Spacelab life science payloads have required waivers because of excessive noise. The objectives of this study were (1) to develop an understanding of acoustic measurement theory, instruments, and technique, and (2) to characterize the noise emission of analogous Facility components and previously flown flight hardware. Test results from existing hardware were reviewed and analyzed. Measurements of the spectral and intensity characteristics of fans and other rotating machinery were performed. The literature was reviewed and contacts were made with NASA and industry organizations concerned with or performing research on noise control.

  9. Embedded Vowels: Remedying the Problems Arising out of Embedded Vowels in the English Writings of Arab Learners

    ERIC Educational Resources Information Center

    Khan, Mohamed Fazlulla

    2013-01-01

    L1 habits often tend to interfere with the process of learning a second language. The vowel habits of Arab learners of English are one such interference. Arabic orthography is such that certain vowels indicated by diacritics are often omitted, since an experienced reader of Arabic knows, by habit, the exact vowel sound in each phonetic…

  10. Acoustic space learning for sound-source separation and localization on binaural manifolds.

    PubMed

    Deleforge, Antoine; Forbes, Florence; Horaud, Radu

    2015-02-01

    In this paper, we address the problems of modeling the acoustic space generated by a full-spectrum sound source and using the learned model for the localization and separation of multiple sources that simultaneously emit sparse-spectrum sounds. We lay theoretical and methodological grounds in order to introduce the binaural manifold paradigm. We perform an in-depth study of the latent low-dimensional structure of the high-dimensional interaural spectral data, based on a corpus recorded with a human-like audiomotor robot head. A nonlinear dimensionality reduction technique is used to show that these data lie on a two-dimensional (2D) smooth manifold parameterized by the motor states of the listener, or equivalently, the sound-source directions. We propose a probabilistic piecewise affine mapping model (PPAM) specifically designed to deal with high-dimensional data exhibiting an intrinsic piecewise linear structure. We derive a closed-form expectation-maximization (EM) procedure for estimating the model parameters, followed by Bayes inversion for obtaining the full posterior density function of a sound-source direction. We extend this solution to deal with missing data and redundancy in real-world spectrograms, and hence for 2D localization of natural sound sources such as speech. We further generalize the model to the challenging case of multiple sound sources and we propose a variational EM framework. The associated algorithm, referred to as variational EM for source separation and localization (VESSL), yields a Bayesian estimation of the 2D locations and time-frequency masks of all the sources. Comparisons of the proposed approach with several existing methods reveal that the combination of acoustic-space learning with Bayesian inference enables our method to outperform state-of-the-art methods. PMID:25164245
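
    As a much-simplified illustration of acoustic-space learning (not the PPAM/VESSL models described above), the sketch below fits a Gaussian mixture to synthetic joint direction-feature data and recovers a source direction by Gaussian conditioning; the data dimensions, component count, and the linear "HRTF-like" map are all invented for the example.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for interaural training data: 2D directions mapped to a
# 4D feature vector by an invented linear "HRTF-like" transform plus noise.
rng = np.random.default_rng(0)
n, d_dir, d_feat = 2000, 2, 4
directions = rng.uniform(-1.0, 1.0, size=(n, d_dir))      # azimuth, elevation
A = rng.normal(size=(d_feat, d_dir))
features = directions @ A.T + 0.05 * rng.normal(size=(n, d_feat))

gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0)
gmm.fit(np.hstack([directions, features]))

def predict_direction(x):
    """Posterior-mean direction given a feature vector x (GMM regression)."""
    mu_y, mu_x = gmm.means_[:, :d_dir], gmm.means_[:, d_dir:]
    S_yx = gmm.covariances_[:, :d_dir, d_dir:]
    S_xx = gmm.covariances_[:, d_dir:, d_dir:]
    resp = np.array([w * multivariate_normal.pdf(x, m, S)
                     for w, m, S in zip(gmm.weights_, mu_x, S_xx)])
    resp /= resp.sum()
    cond = np.array([mu_y[k] + S_yx[k] @ np.linalg.solve(S_xx[k], x - mu_x[k])
                     for k in range(len(resp))])
    return resp @ cond

x_test = directions[0] @ A.T
print("true direction:", directions[0], "estimate:", predict_direction(x_test))
```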

  11. The ionization instability and resonant acoustic modes suppression by charge space effects in a dusty plasma

    SciTech Connect

    Conde, L.

    2006-03-15

    The large-wavenumber suppression of unstable modes of the ionization instability by space charge effects in a weakly ionized and unmagnetized dusty plasma is investigated. The charge losses in the initial equilibrium state are balanced by electron impact ionization originating from both the thermal electron population and an additional monoenergetic electron beam. The multifluid dimensionless equations are deduced using the time and length scales for elastic collisions between ions and neutral atoms, and the Poisson equation relates the plasma potential fluctuations to the charged particle densities instead of invoking the quasineutral approximation. A general dimensionless dispersion relation is obtained from the linearized transport equations, in which the ratios between the characteristic velocities, such as the dust ion-acoustic (DIA), dust acoustic (DA), ion sound, and thermal speeds, permit us to evaluate the weight of the different terms. In the long wavelength limit the results obtained using the quasineutral approximation are recovered. The differences found between the roots of both dispersion equations are discussed, as well as those of previous models. The unstable mode of the linear ionization instability originates from the imbalance between ion and electron densities in the rest state caused by the negative charging of dust grains. Contrary to dust-free plasmas, the unstable mode exists even in the absence of the ionizing electron beam. The numerical calculations of the roots of the full dispersion equation present a maximum unstable wavenumber not predicted by the quasineutral approximation, which is related to the minimum allowed length for space charge fluctuations within a fluid model. This upper limit of unstable wavenumbers hinders the predicted resonant coupling between the DA and DIA waves in the large wavenumber regime.

  12. Acoustic puncture assist device: A novel technique to identify the epidural space

    PubMed Central

    Al-Mokaddam, MA; Al-Harbi, MK; El-Jandali, ST; Al-Zahrani, TA

    2016-01-01

    Background: The acoustic puncture assist device (APAD) is designed to detect and signal the loss of resistance during the epidural procedure. We aimed to evaluate this device in terms of successful identification of the epidural space and the incidence of accidental dural puncture. Patients and Methods: Following Institutional Review Board approval and written informed consent obtained from all patients, 200 adult patients (107 males), American Society of Anesthesiologists physical status I-III, who underwent lower limb orthopedic surgery under lumbar epidural anesthesia using the APAD were enrolled in the study. The APAD system was connected to the epidural needle using a normal-saline-prefilled extension tube. The numbers of successful epidural attempts and accidental dural taps were documented. Results: The mean values of the depth of the epidural space and the time to perform the epidural puncture were 5.8 ± 1.0 cm and 3.3 ± 1.4 min, respectively. In 63% of patients, epidural puncture was successful on the first attempt, and in 1% it was successful on the fourth attempt. Epidural anesthesia by APAD was successful in 198 cases (99%). Dural tap occurred in 2 cases (1%). Conclusions: Using the APAD, the success of identifying the epidural space was high and reliable. PMID:27051369

  13. Baryon acoustic oscillations in 2D: Modeling redshift-space power spectrum from perturbation theory

    SciTech Connect

    Taruya, Atsushi; Nishimichi, Takahiro; Saito, Shun

    2010-09-15

    We present an improved prescription for the matter power spectrum in redshift space taking proper account of both nonlinear gravitational clustering and redshift distortion, which are of particular importance for accurately modeling baryon acoustic oscillations (BAOs). Contrary to the models of redshift distortion phenomenologically introduced but frequently used in the literature, the new model includes the corrections arising from the nonlinear coupling between the density and velocity fields associated with two competitive effects of redshift distortion, i.e., Kaiser and Finger-of-God effects. Based on the improved treatment of perturbation theory for gravitational clustering, we compare our model predictions with the monopole and quadrupole power spectra of N-body simulations, and an excellent agreement is achieved over the scales of BAOs. Potential impacts on constraining dark energy and modified gravity from the redshift-space power spectrum are also investigated based on the Fisher-matrix formalism, particularly focusing on the measurements of the Hubble parameter, angular diameter distance, and growth rate for structure formation. We find that the existing phenomenological models of redshift distortion produce a systematic error on measurements of the angular diameter distance and Hubble parameter by 1%-2%, and the growth-rate parameter by approximately 5%, which would become non-negligible for future galaxy surveys. Correctly modeling redshift distortion is thus essential, and the new prescription for the redshift-space power spectrum including the nonlinear corrections can be used as an accurate theoretical template for anisotropic BAOs.
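
    For orientation, the phenomenological redshift-distortion models criticized above are typically written as a Kaiser factor times a Finger-of-God damping term; the form below is the standard textbook baseline, not the improved prescription derived in the paper:

    \[
    P^{S}(k,\mu) = \left(1 + f\,\mu^{2}\right)^{2} P_{\delta}(k)\, D_{\mathrm{FoG}}(k\mu\sigma_{v}),
    \qquad
    D_{\mathrm{FoG}}(x) = e^{-x^{2}} \ \ \text{or} \ \ \frac{1}{1 + x^{2}/2},
    \]

    where \mu is the cosine of the angle between the wavevector and the line of sight, f is the linear growth rate, and \sigma_{v} is a one-dimensional velocity-dispersion parameter. The improved model adds corrections from the nonlinear density-velocity coupling that this factorized form ignores.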

  14. Space Shuttle Orbiter Main Engine Ignition Acoustic Pressure Loads Issue: Recent Actions to Install Wireless Instrumentation on STS-129

    NASA Technical Reports Server (NTRS)

    Wells, Nathan; Studor, George

    2009-01-01

    This slide presentation reviews the development and construction of the wireless acoustic instruments surrounding the space shuttle's main engines in preparation for STS-129. The presentation also includes information on end-of-life processing and the mounting procedure for the devices.

  15. Study for Identification of Beneficial Uses of Space (BUS). Volume 2: Technical report. Book 4: Development and business analysis of space processed surface acoustic wave devices

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Preliminary development plans, analysis of required R and D and production resources, the costs of such resources, and, finally, the potential profitability of a commercial space processing opportunity for the production of very high frequency surface acoustic wave devices are presented.

  16. An Amplitude-Based Estimation Method for International Space Station (ISS) Leak Detection and Localization Using Acoustic Sensor Networks

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Madaras, Eric I.

    2009-01-01

    The development of a robust and efficient leak detection and localization system within a space station environment presents a unique challenge. A plausible approach includes the implementation of an acoustic sensor network system that can successfully detect the presence of a leak and determine the location of the leak source. Traditional acoustic detection and localization schemes rely on the phase and amplitude information collected by the sensor array system. Furthermore, the acoustic source signals are assumed to be airborne and far-field. Likewise, there are similar applications in sonar. In solids, there are specialized methods for locating events that are used in geology and in acoustic emission testing that involve sensor arrays and depend on a discernable phase front to the received signal. These methods are ineffective if applied to a sensor detection system within the space station environment. In the case of acoustic signal location, there are significant baffling and structural impediments to the sound path and the source could be in the near-field of a sensor in this particular setting.
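
    As a toy illustration of amplitude-only ranging (not the estimator developed in this work, which must cope with the near-field and structural effects noted above), the sketch below assumes a free-field inverse-distance amplitude law and recovers a source position by nonlinear least squares; the sensor layout, received amplitudes, and decay law are all hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical sensor layout (metres) and received RMS amplitudes (arbitrary units).
sensors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0], [4.0, 3.0]])
amplitudes = np.array([0.50, 0.25, 0.33, 0.20])

def residuals(params):
    """Residuals for an assumed A_i = s / |x - sensor_i| free-field decay law."""
    x, y, s = params
    dist = np.linalg.norm(sensors - np.array([x, y]), axis=1)
    return s / dist - amplitudes

fit = least_squares(residuals, x0=[2.0, 1.5, 1.0])
print("estimated leak position:", fit.x[:2], "source strength:", fit.x[2])
```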

  17. Learning Vowel Categories from Maternal Speech in Gurindji Kriol

    ERIC Educational Resources Information Center

    Jones, Caroline; Meakins, Felicity; Muawiyath, Shujau

    2012-01-01

    Distributional learning is a proposal for how infants might learn early speech sound categories from acoustic input before they know many words. When categories in the input differ greatly in relative frequency and overlap in acoustic space, research in bilingual development suggests that this affects the course of development. In the present…

  18. Missing-data model of vowel identification.

    PubMed

    de Cheveigné, A; Kawahara, H

    1999-06-01

    Vowel identity correlates well with the shape of the transfer function of the vocal tract, in particular the position of the first two or three formant peaks. However, in voiced speech the transfer function is sampled at multiples of the fundamental frequency (F0), and the short-term spectrum contains peaks at those frequencies, rather than at formants. It is not clear how the auditory system estimates the original spectral envelope from the vowel waveform. Cochlear excitation patterns, for example, resolve harmonics in the low-frequency region and their shape varies strongly with F0. The problem cannot be cured by smoothing: lag-domain components of the spectral envelope are aliased and cause F0-dependent distortion. The problem is severe at high F0's where the spectral envelope is severely undersampled. This paper treats vowel identification as a process of pattern recognition with missing data. Matching is restricted to available data, and missing data are ignored using an F0-dependent weighting function that emphasizes regions near harmonics. The model is presented in two versions: a frequency-domain version based on short-term spectra, or tonotopic excitation patterns, and a time-domain version based on autocorrelation functions. It accounts for the relative F0-independency observed in vowel identification. PMID:10380672
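
    A minimal sketch of the frequency-domain version of this idea: compare an observed short-term spectrum against stored vowel envelope templates, but weight the spectral distance with an F0-dependent function concentrated near the harmonics, so that the undersampled regions between harmonics are effectively treated as missing data. The Gaussian weighting, its width, and the toy templates below are assumptions for illustration, not the paper's exact weighting function.

```python
import numpy as np

def harmonic_weights(freqs_hz, f0_hz, width_hz=30.0):
    """F0-dependent weights that emphasise spectral regions near harmonics.

    A Gaussian bump of (hypothetical) width `width_hz` is centred on every
    harmonic of F0; regions between harmonics get weights near zero."""
    harmonics = np.arange(1, int(freqs_hz[-1] // f0_hz) + 1) * f0_hz
    d = np.abs(freqs_hz[:, None] - harmonics[None, :])
    return np.exp(-0.5 * (d.min(axis=1) / width_hz) ** 2)

def weighted_match(observed_db, templates_db, freqs_hz, f0_hz):
    """Return the index of the best-matching template, ignoring 'missing' regions."""
    w = harmonic_weights(freqs_hz, f0_hz)
    scores = [np.sum(w * (observed_db - t) ** 2) / np.sum(w) for t in templates_db]
    return int(np.argmin(scores))

# Example: 0-4 kHz spectra sampled every 10 Hz, F0 = 200 Hz, two toy templates.
freqs = np.arange(0.0, 4000.0, 10.0)
templates = np.stack([np.interp(freqs, [0, 500, 1500, 4000], [0, 40, 10, 0]),
                      np.interp(freqs, [0, 700, 1200, 4000], [0, 35, 30, 0])])
observed = templates[1] + np.random.default_rng(0).normal(0, 3, freqs.size)
print("best-matching template:", weighted_match(observed, templates, freqs, f0_hz=200.0))
```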

  19. Where Do Illusory Vowels Come from?

    ERIC Educational Resources Information Center

    Dupoux, Emmanuel; Parlato, Erika; Frota, Sonia; Hirose, Yuki; Peperkamp, Sharon

    2011-01-01

    Listeners of various languages tend to perceive an illusory vowel inside consonant clusters that are illegal in their native language. Here, we test whether this phenomenon arises after phoneme categorization or rather interacts with it. We assess the perception of illegal consonant clusters in native speakers of Japanese, Brazilian Portuguese,…

  20. Online Damage Detection on Metal and Composite Space Structures by Active and Passive Acoustic Methods

    NASA Astrophysics Data System (ADS)

    Scheerer, M.; Cardone, T.; Rapisarda, A.; Ottaviano, S.; Ftancesconi, D.

    2012-07-01

    In the frame of the ESA-funded programme Future Launcher Preparatory Programme Period 1 “Preparatory Activities on M&S”, Aerospace & Advanced Composites and Thales Alenia Space-Italia have conceived and tested a structural health monitoring approach based on integrated Acoustic Emission - Active Ultrasound Damage Identification. The monitoring methods implemented in the study are both passive and active, and the purpose is to cover large areas with sufficient damage size detection capability. Two representative space sub-structures have been built and tested: a composite overwrapped pressure vessel (COPV) and a curved, stiffened Al-Li panel. In each structure, typical critical damage was introduced: delaminations caused by impacts in the COPV, and a crack in the stiffener of the Al-Li panel, which was grown during a fatigue test campaign. The location and severity of both types of damage have been successfully assessed online using two commercially available systems: one 6-channel AE system from Vallen and one 64-channel AU system from Acellent.

  1. Calibration of International Space Station (ISS) Node 1 Vibro-Acoustic Model-Report 2

    NASA Technical Reports Server (NTRS)

    Zhang, Weiguo; Raveendra, Ravi

    2014-01-01

    Reported here is the capability of the Energy Finite Element Method (E-FEM) to predict the vibro-acoustic sound fields within the International Space Station (ISS) Node 1 and to compare the results with simulated leak sounds. A series of electronically generated structural ultrasonic noise sources were created in the pressure wall to emulate leak signals at different locations of the Node 1 STA module during its period of storage at Stennis Space Center (SSC). The exact sound source profiles created within the pressure wall at the source were unknown, but were estimated from the closest sensor measurement. The E-FEM method represents a reverberant sound field calculation, and of importance to this application is the requirement to correctly handle the direct field effect of the sound generation. It was also important to be able to compute the sound energy fields in the ultrasonic frequency range. This report demonstrates the capability of this technology as applied to this type of application.

  2. Ethnographic model of acoustic use of space in the southern Andes for an archaeo-musicological investigation

    NASA Astrophysics Data System (ADS)

    Perez de Arce, Jose

    2002-11-01

    Studies of ritual celebrations in central Chile conducted over the past 15 years show that the spatial component of sound is a crucial part of the whole. The sonic compositions of these rituals generate complex musical structures that the author has termed ''multi-orchestral polyphonies.'' Their origins have been documented from archaeological remains across a vast region of the southern Andes (southern Peru, Bolivia, northern Argentina, north-central Chile). These polyphonies combine dance, walking through space, spatial extension, multiple movements between listener and orchestra, and multiple relations between ritual and ambient sounds. The characteristics of these observables reveal a complex schematic relation between space and sound. This schema can be used as a valid hypothesis for the study of pre-Hispanic uses of acoustic ritual space. The acoustic features observed in this study are common in Andean ritual and, to some extent, are seen in Mesoamerica as well.

  3. Dynamic spectral structure specifies vowels for children and adults

    PubMed Central

    Nittrouer, Susan

    2008-01-01

    When it comes to making decisions regarding vowel quality, adults seem to weight dynamic syllable structure more strongly than static structure, although disagreement exists over the nature of the most relevant kind of dynamic structure: spectral change intrinsic to the vowel or structure arising from movements between consonant and vowel constrictions. Results have been even less clear regarding the signal components children use in making vowel judgments. In this experiment, listeners of four different ages (adults, and 3-, 5-, and 7-year-old children) were asked to label stimuli that sounded either like steady-state vowels or like CVC syllables which sometimes had middle sections masked by coughs. Four vowel contrasts were used, crossed for type (front/back or closed/open) and consonant context (strongly or only slightly constraining of vowel tongue position). All listeners recognized vowel quality with high levels of accuracy in all conditions, but children were disproportionately hampered by strong coarticulatory effects when only steady-state formants were available. Results clarified past studies, showing that dynamic structure is critical to vowel perception for all aged listeners, but particularly for young children, and that it is the dynamic structure arising from vocal-tract movement between consonant and vowel constrictions that is most important. PMID:17902868

  4. Vowel Categorization during Word Recognition in Bilingual Toddlers

    PubMed Central

    Ramon-Casas, Marta; Swingley, Daniel; Sebastián-Gallés, Núria; Bosch, Laura

    2009-01-01

    Toddlers’ and preschoolers’ knowledge of the phonological forms of words was tested in Spanish-learning, Catalan-learning, and bilingual children. These populations are of particular interest because of differences in the Spanish and Catalan vowel systems: Catalan has two vowels in a phonetic region where Spanish has only one. The proximity of the Spanish vowel to the Catalan ones might pose special learning problems. Children were shown picture pairs; the target picture’s name was spoken correctly, or a vowel in the target word was altered. Altered vowels either contrasted with the usual vowel in Spanish and Catalan, or only in Catalan. Children’s looking to the target picture was used as a measure of word recognition. Monolinguals’ word recognition was hindered by within-language, but not non-native, vowel changes. Surprisingly, bilingual toddlers did not show sensitivity to changes in vowels contrastive only in Catalan. Among preschoolers, Catalan-dominant bilinguals but not Spanish-dominant bilinguals revealed mispronunciation sensitivity for the Catalan-only contrast. These studies reveal monolingual children’s robust knowledge of native-language vowel categories in words, and show that bilingual children whose two languages contain phonetically overlapping vowel categories may not treat those categories as separate in language comprehension. PMID:19338984

  5. Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex.

    PubMed

    Fishman, Yonatan I; Micheyl, Christophe; Steinschneider, Mitchell

    2016-01-01

    Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are comprised of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate "auditory objects" with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas. PMID:27294198

  6. Perception of vowels by learners of Spanish and English

    NASA Astrophysics Data System (ADS)

    Garcia-Bayonas, Mariche

    2005-04-01

    This study investigates the perception of the English vowels /i I/, /u U/, and /e EI/ and the Spanish vowels /i u e/ by native speakers (NS) and learners (L) and compares these two sets of vowels cross-linguistically. Research on the acquisition of vowels indicates that learners can improve their perception with exposure to the second language [Bohn and Flege (1990)]. Johnson, Flemming, and Wright (1993) investigated the hyperspace effect and how listeners tended to choose extreme vowel qualities in a method of adjustment (MOA) task. The theoretical framework of this study is Flege's (1995) Speech Learning Model. The research question is: Are vowels selected differently by NS and L using synthesized data? Spanish learners (n=54) and English learners (n=17) completed MOA tasks in which they were exposed to 330 synthetically produced vowels, in order to analyze spectral differences in the acquisition of both sound systems and how the learners' vowel system may differ from that of the NS. In the MOA tasks they were asked to select which synthesized vowel sounds most closely resembled the ones whose spelling was presented to them. The results include an overview of the vowel formant analysis performed and identify which vowels are the most challenging for learners.

  7. Vowel categorization during word recognition in bilingual toddlers.

    PubMed

    Ramon-Casas, Marta; Swingley, Daniel; Sebastián-Gallés, Núria; Bosch, Laura

    2009-08-01

    Toddlers' and preschoolers' knowledge of the phonological forms of words was tested in Spanish-learning, Catalan-learning, and bilingual children. These populations are of particular interest because of differences in the Spanish and Catalan vowel systems: Catalan has two vowels in a phonetic region where Spanish has only one. The proximity of the Spanish vowel to the Catalan ones might pose special learning problems. Children were shown picture pairs; the target picture's name was spoken correctly, or a vowel in the target word was altered. Altered vowels either contrasted with the usual vowel in Spanish and Catalan, or only in Catalan. Children's looking to the target picture was used as a measure of word recognition. Monolinguals' word recognition was hindered by within-language, but not non-native, vowel changes. Surprisingly, bilingual toddlers did not show sensitivity to changes in vowels contrastive only in Catalan. Among preschoolers, Catalan-dominant bilinguals but not Spanish-dominant bilinguals revealed mispronunciation sensitivity for the Catalan-only contrast. These studies reveal monolingual children's robust knowledge of native-language vowel categories in words, and show that bilingual children whose two languages contain phonetically overlapping vowel categories may not treat those categories as separate in language comprehension. PMID:19338984

  8. Arabic Phonology: An Acoustical and Physiological Investigation.

    ERIC Educational Resources Information Center

    Al-Ani, Salman H.

    This book presents an acoustical and physiological investigation of contemporary standard Arabic as spoken in Iraq. Spectrograms and X-ray sound films are used to perform the analysis for the study. With this equipment, the author considers the vowels, consonants, pharyngealized consonants, pharyngeals and glottals, duration, gemination, and…

  9. Gust Acoustic Response of a Single Airfoil Using the Space-Time CE/SE Method

    NASA Technical Reports Server (NTRS)

    Scott, James (Technical Monitor); Wang, X. Y.; Chang, S. C.; Himansu, A.; Jorgenson, P. C. E.

    2003-01-01

    A 2D parallel Euler code based on the space-time conservation element and solution element (CE/SE) method is validated by solving benchmark problem I in Category 3 of the Third CAA Workshop. This problem concerns the acoustic field generated by the interaction of a convected harmonic vortical gust with a single airfoil. Three gust frequencies, two gust configurations, and three airfoil geometries are considered. Numerical results at both near and far fields are presented and compared with the analytical solutions, solutions from the frequency-domain solver GUST3D, and solutions from a time-domain high-order Discontinuous Spectral Element Method (DSEM). It is shown that the CE/SE solutions agree well with the GUST3D solution for the lowest frequency, while there are discrepancies between the CE/SE and GUST3D solutions for higher frequencies. However, the CE/SE solution is in good agreement with the DSEM solution for these higher frequencies. This demonstrates that the CE/SE method can produce accurate results for CAA problems involving complex geometries by using unstructured meshes.

  10. Prosodic effects on glide-vowel sequences in three Romance languages

    NASA Astrophysics Data System (ADS)

    Chitoran, Ioana

    2001-05-01

    Glide-vowel sequences occur in many Romance languages. In some they can vary in production, ranging from diphthongal pronunciation [ja,je] to hiatus [ia,ie]. According to native speakers' impressionistic perceptions, Spanish and Romanian both exhibit this variation, but to different degrees. Spanish favors glide-vowel sequences, while Romanian favors hiatus, occasionally resulting in different pronunciations of the same items: Spanish (b[j]ela, ind[j]ana), Romanian (b[i]ela, ind[i]ana). The third language, French, has glide-vowel sequences consistently (b[j]elle). This study tests the effect of position in the word on the acoustic duration of the sequences. Shorter duration indicates diphthong production [jV], while longer duration indicates hiatus [iV]. Eleven speakers (4 Spanish, 4 Romanian, 3 French) were recorded. Spanish and Romanian showed a word position effect. Word-initial sequences were significantly longer than word-medial ones (p<0.001), consistent with native speakers' more frequent description of hiatus word-initially than medially. The effect was not found in French (p>0.05). In the Spanish and Romanian sentences, V in the sequence bears pitch accent, but not in French. It is therefore possible that duration is sensitive not to the presence/absence of the word boundary, but to its position relative to pitch accent. The results suggest that the word position effect is crucially enhanced by pitch accent on V.

  11. On the number of channels needed to classify vowels: Implications for cochlear implants

    NASA Astrophysics Data System (ADS)

    Fourakis, Marios; Hawks, John W.; Davis, Erin

    2005-09-01

    In cochlear implants the incoming signal is analyzed by a bank of filters. Each filter is associated with an electrode to constitute a channel. The present research seeks to determine the number of channels needed for optimal vowel classification. Formant measurements of vowels produced by men and women [Hillenbrand et al., J. Acoust. Soc. Am. 97, 3099-3111 (1995)] were converted to channel assignments. The number of channels varied from 4 to 20 over two frequency ranges (180-4000 and 180-6000 Hz) in equal bark steps. Channel assignments were submitted to linear discriminant analysis (LDA). Classification accuracy increased with the number of channels, ranging from 30% with 4 channels to 98% with 20 channels, both for the female voice. To determine asymptotic performance, LDA classification scores were plotted against the number of channels and fitted with quadratic equations. The number of channels at which no further improvement occurred was determined, averaging 19 across all conditions with little variation. This number of channels seems to resolve the frequency range spanned by the first three formants finely enough to maximize vowel classification. This resolution may not be achieved using six or eight channels as previously proposed. [Work supported by NIH.]
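
    A minimal sketch of the analysis pipeline described here, under stated assumptions: formant frequencies are mapped to channel indices spaced in equal bark steps (using the Zwicker-Terhardt approximation) and the resulting channel vectors are classified with scikit-learn's linear discriminant analysis. The formant values and labels below are invented placeholders, not the Hillenbrand et al. data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def hz_to_bark(f_hz):
    """Zwicker & Terhardt approximation of the bark scale."""
    f_hz = np.asarray(f_hz, dtype=float)
    return 13.0 * np.arctan(0.00076 * f_hz) + 3.5 * np.arctan((f_hz / 7500.0) ** 2)

def to_channels(formants_hz, n_channels=20, f_lo=180.0, f_hi=4000.0):
    """Map formant frequencies to channel indices spaced in equal bark steps."""
    edges = np.linspace(hz_to_bark(f_lo), hz_to_bark(f_hi), n_channels + 1)
    return np.clip(np.digitize(hz_to_bark(formants_hz), edges) - 1, 0, n_channels - 1)

# Invented (F1, F2, F3) values standing in for the published measurements.
formants = np.array([[270, 2290, 3010], [310, 2200, 2950],   # /i/-like
                     [660, 1720, 2410], [690, 1660, 2490],   # /ae/-like
                     [730, 1090, 2440], [760, 1140, 2390]])  # /a/-like
labels = np.array(["i", "i", "ae", "ae", "a", "a"])

X = to_channels(formants)
lda = LinearDiscriminantAnalysis().fit(X, labels)
print("channel vectors:\n", X)
print("resubstitution accuracy:", lda.score(X, labels))
```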

  12. Gust Acoustic Response of a Swept Rectilinear Cascade Using The Space-Time CE/SE Method

    NASA Technical Reports Server (NTRS)

    Wang, X. Y.; Himansu, A.; Jorgenson, P. C.; Chang, S. C.

    2001-01-01

    The benchmark problem 3 in Category 3 of the third Computational Aero-Acoustics (CAA) Workshop sponsored by NASA Glenn Research Center is solved using the space-time conservation element and solution element (CE/SE) method. This problem concerns the unsteady response of a rectilinear swept cascade to an incident gust. The acoustic field generated by the interaction of the gust with the swept flat plates in the cascade is computed by solving the 3D nonlinear Euler equations using the space-time CE/SE method. A parallel version of the 3D CE/SE Euler solver is employed to obtain numerical solutions for several sweep angles. Numerical solutions are presented and compared with the analytical solutions.

  13. Ion- and electron-acoustic solitons in two-electron temperature space plasmas

    SciTech Connect

    Lakhina, G. S.; Kakad, A. P.; Singh, S. V.; Verheest, F.

    2008-06-15

    Properties of ion- and electron-acoustic solitons are investigated in an unmagnetized multicomponent plasma system consisting of cold and hot electrons and hot ions using the Sagdeev pseudopotential technique. The analysis is based on fluid equations and the Poisson equation. Solitary wave solutions are found when the Mach numbers exceed some critical values. The critical Mach numbers for the ion-acoustic solitons are found to be smaller than those for electron-acoustic solitons for a given set of plasma parameters. The critical Mach numbers of ion-acoustic solitons increase with the increase of hot electron temperature and the decrease of cold electron density. On the other hand, the critical Mach numbers of electron-acoustic solitons increase with the increase of the cold electron density as well as the hot electron temperature. The ion-acoustic solitons have positive potentials for the parameters considered. However, the electron-acoustic solitons have positive or negative potentials depending on whether the fractional cold electron density with respect to the ion density is greater or less than a certain critical value. Further, the amplitudes of both the ion- and electron-acoustic solitons increase with the increase of the hot electron temperature. Possible application of this model to electrostatic solitary waves observed on the auroral field lines by the Viking spacecraft is discussed.
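
    In the Sagdeev pseudopotential approach used here, the fluid and Poisson equations are reduced, in a frame moving with the wave at Mach number M, to the energy equation of a pseudoparticle (the symbols below are generic; the specific pseudopotential derived in each paper depends on its particular plasma composition):

    \[
    \frac{1}{2}\left(\frac{d\phi}{d\xi}\right)^{2} + \Psi(\phi, M) = 0, \qquad \xi = x - Mt,
    \]

    where \phi is the electrostatic potential and \Psi the Sagdeev pseudopotential. A soliton of amplitude \phi_{0} exists when \Psi(0,M) = \Psi'(0,M) = 0 and \Psi''(0,M) < 0 (so \phi = 0 is an unstable equilibrium), \Psi(\phi_{0},M) = 0, and \Psi(\phi,M) < 0 for 0 < |\phi| < |\phi_{0}|. The critical Mach numbers quoted in studies of this kind are the values of M at which these conditions begin or cease to hold.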

  14. Existence domains of slow and fast ion-acoustic solitons in two-ion space plasmas

    SciTech Connect

    Maharaj, S. K.; Bharuthram, R.; Singh, S. V. Lakhina, G. S.

    2015-03-15

    A study of large amplitude ion-acoustic solitons is conducted for a model composed of cool and hot ions and cool and hot electrons. Using the Sagdeev pseudo-potential formalism, the scope of earlier studies is extended to consider why upper Mach number limitations arise for slow and fast ion-acoustic solitons. Treating all plasma constituents as adiabatic fluids, slow ion-acoustic solitons are limited, in order of increasing cool ion concentration, by the number densities of the cool and then the hot ions becoming complex valued, followed by positive and then negative potential double layer regions. Only positive potentials are found for fast ion-acoustic solitons, which are limited only by the hot ion number density having to remain real valued. The effect of neglecting as opposed to including inertial effects of the hot electrons is found to induce only minor quantitative changes in the existence regions of slow and fast ion-acoustic solitons.

  15. Intrinsic fundamental frequency of vowels is moderated by regional dialect.

    PubMed

    Jacewicz, Ewa; Fox, Robert Allen

    2015-10-01

    There has been a long-standing debate whether the intrinsic fundamental frequency (IF0) of vowels is an automatic consequence of articulation or whether it is independently controlled by speakers to perceptually enhance vowel contrasts along the height dimension. This paper provides evidence from regional variation in American English that IF0 difference between high and low vowels is, in part, controlled and varies across dialects. The sources of this F0 control are socio-cultural and cannot be attributed to differences in the vowel inventory size. The socially motivated enhancement was found only in prosodically prominent contexts. PMID:26520352

  16. Discrimination of synthesized English vowels by American and Korean listeners

    NASA Astrophysics Data System (ADS)

    Yang, Byunggon

    2001-05-01

    This study explored the discrimination of synthesized English vowel pairs by 27 American and Korean, male and female, listeners. The average formant values of nine monophthongs produced by ten American English male speakers were used to synthesize the vowels. Subjects were then explicitly instructed to respond to AX discrimination tasks in which the standard vowel was followed by another vowel whose original formant values had been incremented or decremented. The highest and lowest formant values still accepted as the same vowel quality were collected and compared to examine patterns of vowel discrimination. Results showed that the American and Korean groups discriminated the vowel pairs almost identically, and the center formant frequencies between their high and low boundaries fell almost exactly on those of the standards. In addition, the acceptable range of the same vowel quality was similar across the language and gender groups. The acceptable thresholds of each vowel formed an oval, maintaining perceptual contrast with adjacent vowels. Pedagogical implications of these findings are discussed.

  17. Intrinsic fundamental frequency of vowels is moderated by regional dialect

    PubMed Central

    Jacewicz, Ewa; Fox, Robert Allen

    2015-01-01

    There has been a long-standing debate whether the intrinsic fundamental frequency (IF0) of vowels is an automatic consequence of articulation or whether it is independently controlled by speakers to perceptually enhance vowel contrasts along the height dimension. This paper provides evidence from regional variation in American English that IF0 difference between high and low vowels is, in part, controlled and varies across dialects. The sources of this F0 control are socio-cultural and cannot be attributed to differences in the vowel inventory size. The socially motivated enhancement was found only in prosodically prominent contexts. PMID:26520352

  18. Formant discrimination in noise for isolated vowels

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Kewley-Port, Diane

    2004-11-01

    Formant discrimination for isolated vowels presented in noise was investigated for normal-hearing listeners. Discrimination thresholds for F1 and F2, for the seven American English vowels /i, ɪ, ɛ, æ, ʌ, ɑ, u/, were measured under two types of noise, long-term speech-shaped noise (LTSS) and multitalker babble, and also under quiet listening conditions. Signal-to-noise ratios (SNR) varied from -4 to +4 dB in steps of 2 dB. All three factors, formant frequency, signal-to-noise ratio, and noise type, had significant effects on vowel formant discrimination. Significant interactions among the three factors showed that threshold-frequency functions depended on SNR and noise type. The thresholds at the lowest levels of SNR were highly elevated by a factor of about 3 compared to those in quiet. The masking functions (threshold vs SNR) were well described by a negative exponential over F1 and F2 for both LTSS and babble noise. Speech-shaped noise was a slightly more effective masker than multitalker babble, presumably reflecting small benefits (1.5 dB) due to the temporal variation of the babble.
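
    To illustrate the kind of masking function described here (threshold as a negative exponential of SNR), the sketch below fits that form to invented threshold data with SciPy; only the functional form follows the abstract, the numbers do not.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical F2 thresholds (Hz) versus SNR (dB); 100 dB stands in for "quiet".
snr_db = np.array([-4.0, -2.0, 0.0, 2.0, 4.0, 100.0])
thresholds = np.array([42.0, 30.0, 24.0, 19.0, 16.0, 14.0])

def masking_fn(snr, a, b, c):
    """threshold = c + a * exp(-b * snr): decays toward the quiet threshold c."""
    return c + a * np.exp(-b * snr)

params, _ = curve_fit(masking_fn, snr_db, thresholds, p0=[20.0, 0.3, 14.0])
print("fitted (a, b, c):", params)
```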

  19. Vowelling and semantic priming effects in Arabic.

    PubMed

    Mountaj, Nadia; El Yagoubi, Radouane; Himmi, Majid; Lakhdar Ghazal, Faouzi; Besson, Mireille; Boudelaa, Sami

    2015-01-01

    In the present experiment we used a semantic judgment task with Arabic words to determine whether semantic priming effects are found in the Arabic language. Moreover, we took advantage of the specificity of the Arabic orthographic system, which is characterized by a shallow (i.e., vowelled words) and a deep orthography (i.e., unvowelled words), to examine the relationship between orthographic and semantic processing. Results showed faster Reaction Times (RTs) for semantically related than unrelated words with no difference between vowelled and unvowelled words. By contrast, Event Related Potentials (ERPs) revealed larger N1 and N2 components to vowelled words than unvowelled words suggesting that visual-orthographic complexity taxes the early word processing stages. Moreover, semantically unrelated Arabic words elicited larger N400 components than related words thereby demonstrating N400 effects in Arabic. Finally, the Arabic N400 effect was not influenced by orthographic depth. The implications of these results for understanding the processing of orthographic, semantic, and morphological structures in Modern Standard Arabic are discussed. PMID:25528401

  20. Vowel and consonant identification tests can be used to compare performances in a multilingual group of cochlear implant patients.

    PubMed

    Pelizzone, M; Boëx, C; Montandon, P

    1993-01-01

    Vowel and consonant identification tests were conducted in the sound-only condition in a multilingual group of 13 totally deaf patients who are users of the Ineraid multichannel cochlear implant. Native languages ranged across French, German, Italian, Spanish, Albanian and Swahili. We found high correlations (r > -0.83) among vowel or consonant identification scores and 'subjective ranking' scores established on the basis of a subjective evaluation of the patient's speech reception abilities in the sound-only condition. Detailed analysis demonstrates that the identification of vowels and consonants is dominated by the perception of acoustic cues characteristic of the set of stimuli used as well as by the strengths and weaknesses of the speech processing of the cochlear implant system. We did not find any systematic pattern in the results that could be related to the native language of the patients. These results suggest that vowel as well as consonant identification tests are effective means to compare the performance of cochlear implant patients even across different native languages. They also indicate that, in the future, one can conduct fewer of the many different (e.g. nonsense-syllable, word, sentence, speech-tracking) tests when evaluating the speech recognition abilities of patients with the implant. PMID:8265119

  1. The encoding of vowels and temporal speech cues in the auditory cortex of professional musicians: an EEG study.

    PubMed

    Kühnis, Jürg; Elmer, Stefan; Meyer, Martin; Jäncke, Lutz

    2013-07-01

    Here, we applied a multi-feature mismatch negativity (MMN) paradigm in order to systematically investigate the neuronal representation of vowels and temporally manipulated CV syllables in a homogeneous sample of string players and non-musicians. Based on previous work indicating an increased sensitivity of the musicians' auditory system, we expected to find that musically trained subjects will elicit increased MMN amplitudes in response to temporal variations in CV syllables, namely voice-onset time (VOT) and duration. In addition, since different vowels are principally distinguished by means of frequency information and musicians are superior in extracting tonal (and thus frequency) information from an acoustic stream, we also expected to provide evidence for an increased auditory representation of vowels in the experts. In line with our hypothesis, we could show that musicians are not only advantaged in the pre-attentive encoding of temporal speech cues, but most notably also in processing vowels. Additional "just noticeable difference" measurements suggested that the musicians' perceptual advantage in encoding speech sounds was more likely driven by the generic constitutional properties of a highly trained auditory system, rather than by its specialisation for speech representations per se. These results shed light on the origin of the often reported advantage of musicians in processing a variety of speech sounds. PMID:23664833

  2. Vibration, acoustic, and shock design and test criteria for components on the Solid Rocket Boosters (SRB), Lightweight External Tank (LWT), and Space Shuttle Main Engines (SSME)

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The vibration, acoustics, and shock design and test criteria for components and subassemblies on the space shuttle solid rocket booster (SRB), lightweight tank (LWT), and main engines (SSME) are presented. Specifications for transportation, handling, and acceptance testing are also provided.

  3. New design of the pulsed electro-acoustic upper electrode for space charge measurements during electronic irradiation

    NASA Astrophysics Data System (ADS)

    Riffaud, J.; Griseri, V.; Berquez, L.

    2016-07-01

    The behaviour of space charges injected in irradiated dielectrics has been studied for many years for space industry applications. In our case, the pulsed electro-acoustic method is chosen in order to determine the spatial distribution of injected electrons. The feasibility of a ring-shaped electrode that allows measurements during irradiation is presented. In this paper, a computer simulation is performed to determine the electrode design parameters and its position above the sample. Experimental results obtained on polyethylene naphthalate samples during electron irradiation and during relaxation under vacuum are presented and discussed.

  4. Arbitrary amplitude fast electron-acoustic solitons in three-electron component space plasmas

    NASA Astrophysics Data System (ADS)

    Mbuli, L. N.; Maharaj, S. K.; Bharuthram, R.; Singh, S. V.; Lakhina, G. S.

    2016-06-01

    We examine the characteristics of fast electron-acoustic solitons in a four-component unmagnetised plasma model consisting of cool, warm, and hot electrons, and cool ions. We retain the inertia and pressure for all the plasma species by assuming adiabatic fluid behaviour for all the species. By using the Sagdeev pseudo-potential technique, the allowable Mach number ranges for fast electron-acoustic solitary waves are explored and discussed. It is found that the cool and warm electron number densities determine the polarity switch of the fast electron-acoustic solitons, which are limited either by the occurrence of fast electron-acoustic double layers or by the warm and hot electron number densities becoming unreal. For the first time in the study of solitons, we report on the coexistence of fast electron-acoustic solitons, in addition to the regular fast electron-acoustic solitons and double layers, in our multi-species plasma model. Our results are applied to the generation of broadband electrostatic noise in the dayside auroral region.

  5. Characterization of Pump-Induced Acoustics in Space Launch System Main Propulsion System Liquid Hydrogen Feedline Using Airflow Test Data

    NASA Technical Reports Server (NTRS)

    Eberhart, C. J.; Snellgrove, L. M.; Zoladz, T. F.

    2015-01-01

    High intensity acoustic edgetones located upstream of the RS-25 Low Pressure Fuel Turbo Pump (LPFTP) were previously observed during Space Transportation System (STS) airflow testing of a model Main Propulsion System (MPS) liquid hydrogen (LH2) feedline mated to a modified LPFTP. MPS hardware has been adapted to mitigate the problematic edgetones as part of the Space Launch System (SLS) program. A follow-on airflow test campaign has subjected the adapted hardware to tests mimicking STS-era airflow conditions, and this manuscript describes acoustic environment identification and characterization born from the latest test results. Fluid dynamics responsible for driving discrete excitations were well reproduced using legacy hardware. The modified design was found insensitive to high intensity edgetone-like discretes over the bandwidth of interest to SLS MPS unsteady environments. Rather, the natural acoustics of the test article were observed to respond in a narrowband-random/mixed discrete manner to broadband noise thought to be generated by the flow field. The intensities of these responses were several orders of magnitude lower than those driven by edgetones.

  6. Discrimination of Phonemic Vowel Length by Japanese Infants

    ERIC Educational Resources Information Center

    Sato, Yutaka; Sogabe, Yuko; Mazuka, Reiko

    2010-01-01

    Japanese has a vowel duration contrast as one component of its language-specific phonemic repertory to distinguish word meanings. It is not clear, however, how a sensitivity to vowel duration can develop in a linguistic context. In the present study, using the visual habituation-dishabituation method, the authors evaluated infants' abilities to…

  7. Congruent and Incongruent Semantic Context Influence Vowel Recognition

    ERIC Educational Resources Information Center

    Wotton, J. M.; Elvebak, R. L.; Moua, L. C.; Heggem, N. M.; Nelson, C. A.; Kirk, K. M.

    2011-01-01

    The influence of sentence context on the recognition of naturally spoken vowels degraded by reverberation and Gaussian noise was investigated. Target words were paired to have similar consonant sounds but different vowels (e.g., map/mop) and were embedded early in sentences which provided three types of semantic context. Fifty-eight…

  8. UTILITY OF VOWEL DIGRAPH GENERALIZATIONS IN GRADES ONE THROUGH SIX.

    ERIC Educational Resources Information Center

    BAILEY, MILDRED HART

    Some vowel digraph generalizations presently taught were investigated to determine the overall utility of the generalizations when applied to a list of representative words met by children in reading instruction in grades 1 through 6, to determine the utility of all possible subgroups of adjacent vowels, and to evolve new digraph generalizations…

  9. Bite Block Vowel Production in Apraxia of Speech

    ERIC Educational Resources Information Center

    Jacks, Adam

    2008-01-01

    Purpose: This study explored vowel production and adaptation to articulatory constraints in adults with acquired apraxia of speech (AOS) plus aphasia. Method: Five adults with acquired AOS plus aphasia and 5 healthy control participants produced the vowels [ɪ], [ɛ], and [æ] in four word-length conditions in unconstrained and bite block…

  10. Native Italian speakers' perception and production of English vowels.

    PubMed

    Flege, J E; MacKay, I R; Meador, D

    1999-11-01

    This study examined the production and perception of English vowels by highly experienced native Italian speakers of English. The subjects were selected on the basis of the age at which they arrived in Canada and began to learn English, and how much they continued to use Italian. Vowel production accuracy was assessed through an intelligibility test in which native English-speaking listeners attempted to identify vowels spoken by the native Italian subjects. Vowel perception was assessed using a categorial discrimination test. The later in life the native Italian subjects began to learn English, the less accurately they produced and perceived English vowels. Neither of two groups of early Italian/English bilinguals differed significantly from native speakers of English either for production or perception. This finding is consistent with the hypothesis of the speech learning model [Flege, in Speech Perception and Linguistic Experience: Theoretical and Methodological Issues (York, Timonium, MD, 1995)] that early bilinguals establish new categories for vowels found in the second language (L2). The significant correlation observed to exist between the measures of L2 vowel production and perception is consistent with another hypothesis of the speech learning model, viz., that the accuracy with which L2 vowels are produced is limited by how accurately they are perceived. PMID:10573909

  11. Variation in vowel duration among southern African American English speakers

    PubMed Central

    Holt, Yolanda Feimster; Jacewicz, Ewa; Fox, Robert Allen

    2015-01-01

    Purpose Atypical duration of speech segments can signal a speech disorder. This study examined variation in vowel duration in African American English (AAE) relative to White American English (WAE) speakers living in the same dialect region in the South in order to characterize the nature of systematic variation between the two groups. The goal was to establish whether segmental durations in minority populations differ from the well-established patterns in mainstream populations. Method Participants were 32 AAE and 32 WAE speakers differing in age who, in their childhood, attended either segregated (older speakers) or integrated (younger speakers) public schools. Speech materials consisted of 14 vowels produced in hVd-frame. Results AAE vowels were significantly longer than WAE vowels. Vowel duration did not differ as a function of age. The temporal tense-lax contrast was minimized for AAE relative to WAE. Female vowels were significantly longer than male vowels for both AAE and WAE. Conclusions African Americans should be expected to produce longer vowels relative to White speakers in a common geographic area. These longer durations are not deviant but represent a typical feature of AAE. This finding has clinical importance in guiding assessments of speech disorders in AAE speakers. PMID:25951511

  12. Mommy, Speak Clearly: Induced Hearing Loss Shapes Vowel Hyperarticulation

    ERIC Educational Resources Information Center

    Lam, Christa; Kitamura, Christine

    2012-01-01

    Talkers hyperarticulate vowels when communicating with listeners that require increased speech intelligibility. Vowel hyperarticulation is said to be motivated by knowledge of the listener's linguistic needs because it typically occurs in speech to infants, foreigners and hearing-impaired listeners, but not to non-verbal pets. However, the degree…

  13. Vowel Categorization during Word Recognition in Bilingual Toddlers

    ERIC Educational Resources Information Center

    Ramon-Casas, Marta; Swingley, Daniel; Sebastian-Galles, Nuria; Bosch, Laura

    2009-01-01

    Toddlers' and preschoolers' knowledge of the phonological forms of words was tested in Spanish-learning, Catalan-learning, and bilingual children. These populations are of particular interest because of differences in the Spanish and Catalan vowel systems: Catalan has two vowels in a phonetic region where Spanish has only one. The proximity of the…

  14. Correlations of decision weights and cognitive function for the masked discrimination of vowels by young and old adults.

    PubMed

    Gilbertson, Lynn; Lutfi, Robert A

    2014-11-01

    Older adults are often reported in the literature to have greater difficulty than younger adults understanding speech in noise [Helfer and Wilber (1988). J. Acoust. Soc. Am, 859-893]. The poorer performance of older adults has been attributed to a general deterioration of cognitive processing, deterioration of cochlear anatomy, and/or greater difficulty segregating speech from noise. The current work used perturbation analysis [Berg (1990). J. Acoust. Soc. Am., 149-158] to provide a more specific assessment of the effect of cognitive factors on speech perception in noise. Sixteen older (age 56-79 years) and seventeen younger (age 19-30 years) adults discriminated a target vowel masked by randomly selected masker vowels immediately preceding and following the target. Relative decision weights on target and maskers resulting from the analysis revealed large individual differences across participants despite similar performance scores in many cases. On the most difficult vowel discriminations, the older adult decision weights were significantly correlated with inhibitory control (Color Word Interference test) and pure-tone threshold averages (PTA). Young adult decision weights were not correlated with any measures of peripheral (PTA) or central function (inhibition or working memory). PMID:25256580
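
    A minimal sketch of how relative decision weights of the kind reported here can be estimated: the per-trial perturbations of the preceding masker, the target, and the following masker are regressed against the listener's binary responses, and the normalized regression coefficients serve as the weights. The simulated listener and its weights below are assumptions; logistic regression is one common way to implement the perturbation analysis, not necessarily the exact procedure of this study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials = 2000

# Synthetic per-trial F1 offsets (arbitrary units) of the preceding masker,
# the target, and the following masker, drawn at random on each trial.
offsets = rng.normal(0.0, 1.0, size=(n_trials, 3))

# Simulated listener: responds "higher" mostly on the target, with some
# intrusion from the maskers plus internal noise (weights are assumptions).
true_w = np.array([0.3, 1.0, 0.2])
resp = (offsets @ true_w + rng.normal(0.0, 1.0, n_trials)) > 0

model = LogisticRegression().fit(offsets, resp)
w = model.coef_.ravel()
print("relative decision weights:", w / np.abs(w).sum())
```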

  15. Automatic pronunciation error detection in non-native speech: the case of vowel errors in Dutch.

    PubMed

    van Doremalen, Joost; Cucchiarini, Catia; Strik, Helmer

    2013-08-01

    This research is aimed at analyzing and improving automatic pronunciation error detection in a second language. Dutch vowels spoken by adult non-native learners of Dutch are used as a test case. A first study on Dutch pronunciation by L2 learners with different L1s revealed that vowel pronunciation errors are relatively frequent and often concern subtle acoustic differences between the realization and the target sound. In a second study automatic pronunciation error detection experiments were conducted to compare existing measures to a metric that takes account of the error patterns observed to capture relevant acoustic differences. The results of the two studies do indeed show that error patterns bear information that can be usefully employed in weighted automatic measures of pronunciation quality. In addition, it appears that combining such a weighted metric with existing measures improves the equal error rate by 6.1 percentage points from 0.297, for the Goodness of Pronunciation (GOP) algorithm, to 0.236. PMID:23927130
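
    The equal error rate (EER) used as the figure of merit here is the operating point at which the false-alarm rate on correctly pronounced vowels equals the miss rate on mispronounced ones. The sketch below computes it from two synthetic score distributions; the scores and the "flag an error when the score is low" convention are assumptions for illustration, not output of the GOP algorithm.

```python
import numpy as np

def equal_error_rate(scores_correct, scores_error):
    """EER for a detector that flags an error when score < threshold.

    Sweeps candidate thresholds and returns the point where the false-alarm
    rate (correct realizations flagged) and the miss rate (errors not
    flagged) are closest."""
    thresholds = np.sort(np.concatenate([scores_correct, scores_error]))
    far = np.array([(scores_correct < t).mean() for t in thresholds])
    miss = np.array([(scores_error >= t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - miss))
    return (far[i] + miss[i]) / 2.0

rng = np.random.default_rng(1)
eer = equal_error_rate(rng.normal(0.0, 1.0, 500),    # scores for correct vowels
                       rng.normal(-1.2, 1.0, 500))   # scores for mispronounced vowels
print(f"EER = {eer:.3f}")
```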

  16. Aging effect on Mandarin Chinese vowel and tone identification.

    PubMed

    Yang, Xiaohu; Wang, Yuxia; Xu, Lilong; Zhang, Hui; Xu, Can; Liu, Chang

    2015-10-01

    Mandarin Chinese speech sounds (vowels × tones) were presented to younger and older Chinese-native speakers with normal hearing. For the identification of vowel-plus-tone, vowel-only, and tone-only, younger listeners significantly outperformed older listeners. The tone 3 identification scores correlated significantly with the age of older listeners. Moreover, for older listeners, the identification rate of vowel-plus-tone was significantly lower than that of vowel-only and tone-only, whereas for younger listeners, there was no difference among the three identification scores. Therefore, aging negatively affected Mandarin vowel and tone perception, especially when listeners needed to process both phonemic and tonal information. PMID:26520353

  17. The multidimensional nature of hyperspeech: evidence from Japanese vowel devoicing.

    PubMed

    Martin, Andrew; Utsugi, Akira; Mazuka, Reiko

    2014-08-01

    We investigate the hypothesis that infant-directed speech is a form of hyperspeech, optimized for intelligibility, by focusing on vowel devoicing in Japanese. Using a corpus of infant-directed and adult-directed Japanese, we show that speakers implement high vowel devoicing less often when speaking to infants than when speaking to adults, consistent with the hyperspeech hypothesis. The same speakers, however, increase vowel devoicing in careful, read speech, a speech style which might be expected to pattern similarly to infant-directed speech. We argue that both infant-directed and read speech can be considered listener-oriented speech styles-each is optimized for the specific needs of its intended listener. We further show that in non-high vowels, this trend is reversed: speakers devoice more often in infant-directed speech and less often in read speech, suggesting that devoicing in the two types of vowels is driven by separate mechanisms in Japanese. PMID:24813573

  18. The Spelling of Vowels Is Influenced by Australian and British English Dialect Differences

    ERIC Educational Resources Information Center

    Kemp, Nenagh

    2009-01-01

    Two experiments examined the influence of dialect on the spelling of vowel sounds. British and Australian children (6 to 8 years) and university students wrote words whose unstressed vowel sound is spelled i or e and pronounced /I/ or /schwa/. Participants often (mis)spelled these vowel sounds as they pronounced them. When vowels were pronounced…

  19. We're Not in Kansas Anymore: The TOTO Strategy for Decoding Vowel Pairs

    ERIC Educational Resources Information Center

    Meese, Ruth Lyn

    2016-01-01

    Vowel teams such as vowel digraphs present a challenge to struggling readers. Some researchers assert that phonics generalizations such as the "two vowels go walking and the first one does the talking" rule do not hold often enough to be reliable for children. Others suggest that some vowel teams are highly regular and that children can…

  20. A Comparative Study of English and Spanish Vowel Systems: Theoretical and Practical Implications for Teaching Pronunciation.

    ERIC Educational Resources Information Center

    Odisho, Edward Y.

    A study examines two major types of vowel systems in languages, centripetal and centrifugal. English is associated with the centripetal system, in which vowel quality and quantity (rhythm) are heavily influenced by stress. In this system, vowels have a strong tendency to move toward the center of the vowel area. Spanish is associated with the…

  1. Arbitrary amplitude slow electron-acoustic solitons in three-electron temperature space plasmas

    SciTech Connect

    Mbuli, L. N.; Maharaj, S. K.; Bharuthram, R.; Singh, S. V.; Lakhina, G. S.

    2015-06-15

    We examine the characteristics of large amplitude slow electron-acoustic solitons supported in a four-component unmagnetised plasma composed of cool, warm, hot electrons, and cool ions. The inertia and pressure for all the species in this plasma system are retained by assuming that they are adiabatic fluids. Our findings reveal that both positive and negative potential slow electron-acoustic solitons are supported in the four-component plasma system. The polarity switch of the slow electron-acoustic solitons is determined by the number densities of the cool and warm electrons. Negative potential solitons, which are limited by the cool and warm electron number densities becoming unreal and the occurrence of negative potential double layers, are found for low values of the cool electron density, while the positive potential solitons occurring for large values of the cool electron density are only limited by positive potential double layers. Both the lower and upper Mach numbers for the slow electron-acoustic solitons are computed and discussed.

  2. Acoustic-phonetic correlates of talker intelligibility for adults and children

    NASA Astrophysics Data System (ADS)

    Hazan, Valerie; Markham, Duncan

    2004-11-01

    This study investigated acoustic-phonetic correlates of intelligibility for adult and child talkers, and whether the relative intelligibility of different talkers was dependent on listener characteristics. In experiment 1, word intelligibility was measured for 45 talkers (18 women, 15 men, 6 boys, 6 girls) from a homogeneous accent group. The material consisted of 124 words familiar to 7-year-olds that adequately covered all frequent consonant confusions; stimuli were presented to 135 adult and child listeners in low-level background noise. Seven- to eight-year-old listeners made significantly more errors than 12-year-olds or adults, but the relative intelligibility of individual talkers was highly consistent across groups. In experiment 2, listener ratings on a number of voice dimensions were obtained for the adult talkers identified in experiment 1 as having the highest and lowest intelligibility. Intelligibility was significantly correlated with subjective dimensions reflecting articulation, voice dynamics, and general quality. Finally, in experiment 3, measures of fundamental frequency, long-term average spectrum, word duration, consonant-vowel intensity ratio, and vowel space size were obtained for all talkers. Overall, word intelligibility was significantly correlated with the total energy in the 1- to 3-kHz region and word duration; these measures predicted 61% of the variability in intelligibility. The fact that the relative intelligibility of individual talkers was remarkably consistent across listener age groups suggests that the acoustic-phonetic characteristics of a talker's utterance are the primary factor in determining talker intelligibility. Although some acoustic-phonetic correlates of intelligibility were identified, variability in the profiles of the ``best'' talkers suggests that high intelligibility can be achieved through a combination of different acoustic-phonetic characteristics.
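
    A hedged sketch of how a 1- to 3-kHz energy measure of the kind reported here could be computed from a recording (the Welch-spectrum approach, band edges, and parameter choices are assumptions for illustration, not the authors' procedure):

      # Sketch: energy in the 1-3 kHz region of a long-term average spectrum.
      # Segment length and band edges are illustrative choices.
      import numpy as np
      from scipy.signal import welch

      def band_energy_db(x, fs, f_lo=1000.0, f_hi=3000.0):
          """Return 10*log10 of the power in [f_lo, f_hi] Hz."""
          f, pxx = welch(x, fs=fs, nperseg=1024)   # long-term average spectrum
          band = (f >= f_lo) & (f <= f_hi)
          power = np.trapz(pxx[band], f[band])     # integrate the PSD over the band
          return 10.0 * np.log10(power + 1e-12)

      # Example with synthetic data standing in for a talker's recording.
      fs = 16000
      t = np.arange(fs) / fs
      x = np.sin(2 * np.pi * 2000 * t) + 0.1 * np.random.randn(fs)
      print(round(band_energy_db(x, fs), 1))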

  3. Effects of coda voicing and aspiration on Hindi vowels

    NASA Astrophysics Data System (ADS)

    Lampp, Claire; Reklis, Heidi

    2001-05-01

    This study reexamines the well-attested coda voicing effect on vowel duration [Chen, Phonetica 22, 125-159 (1970)], in conjunction with the relationship between vowel duration and aspiration of codas. The first step was to replicate the results of Maddieson and Gandour [UCLA Working Papers Phonetics 31, 46-52 (1976)] with a larger, language-specific data set. Four nonsense syllables ending in [ɔ] followed by [k, kʰ, g, gʱ] were read aloud in ten different carrier sentences by four native speakers of Hindi. Results confirm that longer vowels precede voiced word-final consonants and aspirated word-final consonants. Thus, among the syllables, vowel duration would be longest when preceding the voiced aspirate [gʱ]. Coda voicing, and thus vowel duration, have been shown to correlate negatively with vowel F1 in English and Arabic [Wolf, J. Phonetics 6, 299-309 (1978); de Jong and Zawaydeh, ibid., 30, 53-75 (2002)]. It is not known whether vowel F1 depends directly on coda voicing, or is determined indirectly via duration. Since voicing and aspiration both increase duration, F1 measurements of this data set (which will be presented) may answer that question.

  4. Dynamic Spectral Structure Specifies Vowels for Adults and Children

    PubMed Central

    Nittrouer, Susan; Lowenstein, Joanna H.

    2014-01-01

    The dynamic specification account of vowel recognition suggests that formant movement between vowel targets and consonant margins is used by listeners to recognize vowels. This study tested that account by measuring contributions to vowel recognition of dynamic (i.e., time-varying) spectral structure and coarticulatory effects on stationary structure. Adults and children (four- and seven-year-olds) were tested with three kinds of consonant-vowel-consonant syllables: (1) unprocessed; (2) sine waves that preserved both stationary coarticulated and dynamic spectral structure; and (3) vocoded signals that primarily preserved the stationary, but not the dynamic, structure. Sections of two lengths were removed from syllable middles: (1) half the vocalic portion; and (2) all but the first and last three pitch periods. Adults performed accurately with unprocessed and sine-wave signals, as long as half the syllable remained; their recognition was poorer for vocoded signals, but above chance. Seven-year-olds performed more poorly than adults with both sorts of processed signals, but disproportionately worse with vocoded than sine-wave signals. Most four-year-olds were unable to recognize vowels at all with vocoded signals. Conclusions were that both dynamic and stationary coarticulated structures support vowel recognition for adults, but children attend to dynamic spectral structure more strongly because early phonological organization favors whole words. PMID:25536845

  5. The Acoustic Voice Quality Index: Toward Improved Treatment Outcomes Assessment in Voice Disorders

    ERIC Educational Resources Information Center

    Maryn, Youri; De Bodt, Marc; Roy, Nelson

    2010-01-01

    Voice practitioners require an objective index of dysphonia severity as a means to reliably track treatment outcomes. To ensure ecological validity however, such a measure should survey both sustained vowels and continuous speech. In an earlier study, a multivariate acoustic model referred to as the Acoustic Voice Quality Index (AVQI), consisting…

  6. Vocal Attack Time of Different Pitch Levels and Vowels in Mandarin.

    PubMed

    Zhang, Ruifeng; Baken, R J; Kong, Jiangping

    2015-09-01

    The purpose of this study was to investigate how vocal attack time (VAT) varies when young adults articulate the three vertex vowels in Mandarin Chinese at five linguistically unconstrained pitch levels. Sound pressure and electroglottographic signals were recorded simultaneously from 53 male and 53 female subjects saying sustained /A/, /i/, and /u/ at five equally spaced pitch heights, each being higher than the preceding one. Analyses of means, variance, and correlation were then performed to explore the relationships between VAT and pitch level and between VAT and vowel. Findings were as follows: as mean STs (semitones) increase linearly from level 1 to level 5, mean VATs decrease nonlinearly in a large group of subjects but increase nonlinearly in a small group. Based on the body-cover model of F0 control, these data suggest that different speakers tend to use different strategies for increasing pitch height. When males, females, and males plus females are considered as a whole, average STs and VATs tend to be positively correlated among the three vertex vowels. PMID:26231723

  7. Acoustic behavior of ordered droplets in a liquid: A phase space approach

    SciTech Connect

    Rivera, A.L.; Lozada-Cassou, M.; Palomino, M.R.; Icaza, M. de; Castano, V.M.

    2005-03-01

    The transmission of an acoustical signal through a spatial arrangement consisting of a bidimensional crystal of droplets (liquid spheres) immersed in another liquid is analyzed. As a first approximation, the paraxial case is solved by considering a set of acoustical lenses which allow us to model the effect of each droplet on the signal. An expression for the Wigner distribution function is obtained that lets us evaluate the corresponding image, diffraction pattern, and even the output signal for any given paraxial input signal to that crystalline substrate, with particular emphasis on the case of an incoming plane wave. To solve the nonparaxial situation, a generalization of the concept of focal distance is proposed, interpreting every sphere as a superposition of concentric rings of different radii, which permits us to find a general expression for the Wigner distribution function.

  8. Acoustic behavior of ordered droplets in a liquid: a phase space approach.

    PubMed

    Rivera, A L; Palomino, M R; de Icaza, M; Lozada-Cassou, M; Castaño, V M

    2005-03-01

    The transmission of an acoustical signal through a spatial arrangement consisting of a bidimensional crystal of droplets (liquid spheres) immersed in another liquid is analyzed. As a first approximation, the paraxial case is solved by considering a set of acoustical lenses which allow us to model the effect of each droplet on the signal. An expression for the Wigner distribution function is obtained that lets us evaluate the corresponding image, diffraction pattern, and even the output signal for any given paraxial input signal to that crystalline substrate, with particular emphasis on the case of an incoming plane wave. To solve the nonparaxial situation, a generalization of the concept of focal distance is proposed, interpreting every sphere as a superposition of concentric rings of different radii, which permits us to find a general expression for the Wigner distribution function. PMID:15903601

  9. Engaging spaces: Intimate electro-acoustic display in alternative performance venues

    NASA Astrophysics Data System (ADS)

    Bahn, Curtis; Moore, Stephan

    2001-05-01

    In past presentations to the ASA, we have described the design and construction of four generations of unique spherical speakers (multichannel, outward-radiating geodesic speaker arrays) and Sensor-Speaker-Arrays, (SenSAs: combinations of various sensor devices with outward-radiating multichannel speaker arrays). This presentation will detail the ways in which arrays of these speakers have been employed in alternative performance venues-providing presence and intimacy in the performance of electro-acoustic chamber music and sound installation, while engaging natural and unique acoustical qualities of various locations. We will present documentation of the use of multichannel sonic diffusion arrays in small clubs, ``black-box'' theaters, planetariums, and art galleries.

  10. Phonics Skills: Practice Book D, Vowel Combinations. Practice in Basic Phonics Skills for Students of All Ages.

    ERIC Educational Resources Information Center

    1982

    Intended for student use at home as a supplement to schoolwork or at school as a phonics text, this workbook presents the irregular vowel sounds, including the vowel diphthongs, the r-controlled vowels, and other irregular vowel combinations. Each vowel or combination is presented in a lesson of two to six pages, depending on its perceived…

  11. Sound in ecclesiastical spaces in Cordoba. Architectural projects incorporating acoustic methodology (El sonido del espacio eclesial en Cordoba. El proyecto arquitectonico como procedimiento acustico)

    NASA Astrophysics Data System (ADS)

    Suarez, Rafael

    2003-11-01

    This thesis is concerned with the acoustic analysis of ecclesiastical spaces, and the subsequent implementation of acoustic design methodology in architectural renovations. One begins with an adequate architectural design of specific elements (shape, materials, and textures), with the intention of eliminating the acoustic deficiencies that are common in such spaces, namely those that impair good speech intelligibility and good musical audibility. The investigation is limited to churches in the province of Cordoba and to churches built after the reconquest of Spain (1236) and up until the 18th century. Selected churches are those that have undergone architectural renovations to adapt them to new uses or to make them more suitable for liturgical use. The thesis attempts to summarize the acoustic analyses and the acoustical solutions that have been implemented. The results are presented in a manner that should be useful for the adoption of a model for the functional renovation of ecclesiastical spaces. Such a model would allow those involved in architectural projects to specify the nature of the sound, even though somewhat intangible, within the ecclesiastical space. Thesis advisors: Jaime Navarro and Juan J. Sendra. Copies of this thesis, written in Spanish, may be obtained by contacting the advisor, Jaime Navarro, E.T.S. de Arquitectura de Sevilla, Dpto. de Construcciones Arquitectonicas I, Av. Reina Mercedes, 2, 41012 Sevilla, Spain. E-mail address: jnavarro@us.es

  12. Interactions of speaking condition and auditory feedback on vowel production in postlingually deaf adults with cochlear implants.

    PubMed

    Ménard, Lucie; Polak, Marek; Denny, Margaret; Burton, Ellen; Lane, Harlan; Matthies, Melanie L; Marrone, Nicole; Perkell, Joseph S; Tiede, Mark; Vick, Jennell

    2007-06-01

    This study investigates the effects of speaking condition and auditory feedback on vowel production by postlingually deafened adults. Thirteen cochlear implant users produced repetitions of nine American English vowels prior to implantation, and at one month and one year after implantation. There were three speaking conditions (clear, normal, and fast), and two feedback conditions after implantation (implant processor turned on and off). Ten normal-hearing controls were also recorded once. Vowel contrasts in the formant space (expressed in mels) were larger in the clear than in the fast condition, both for controls and for implant users at all three time samples. Implant users also produced differences in duration between clear and fast conditions that were in the range of those obtained from the controls. In agreement with prior work, the implant users had contrast values lower than did the controls. The implant users' contrasts were larger with hearing on than off and improved from one month to one year postimplant. Because the controls and implant users responded similarly to a change in speaking condition, it is inferred that auditory feedback, although demonstrably important for maintaining normative values of vowel contrasts, is not needed to maintain the distinctiveness of those contrasts in different speaking conditions. PMID:17552727
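
    The vowel contrasts "in the formant space (expressed in mels)" can be pictured with a small hedged sketch: convert (F1, F2) values from Hz to mels and measure distances between vowel categories. The mel mapping is the common 2595*log10(1 + f/700) formula; treating contrast as a Euclidean distance between category means, and the example formant values, are illustrative assumptions rather than the study's exact metric.

      # Sketch: vowel contrast distance in the F1-F2 plane expressed in mels.
      import numpy as np

      def hz_to_mel(f_hz):
          return 2595.0 * np.log10(1.0 + np.asarray(f_hz, dtype=float) / 700.0)

      def vowel_contrast(f1f2_a, f1f2_b):
          """Distance in mels between two vowels' (F1, F2) means given in Hz."""
          return float(np.linalg.norm(hz_to_mel(f1f2_a) - hz_to_mel(f1f2_b)))

      # Illustrative formant means (Hz) for an /i/-/I/ pair.
      print(round(vowel_contrast([300, 2300], [430, 2000]), 1))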

  13. Interpreting the Old and Middle English Close Vowels.

    ERIC Educational Resources Information Center

    Stockwell, Robert; Minkova, Donka

    2002-01-01

    Examines the proposal of a simultaneous lengthening and lowering rule for Middle English short high vowels that undergo open syllable lengthening. Argues there are obstacles to reconstructing [I] [u] for Old English. (Author/VWL)

  14. Use of Acoustic Cues by Children with Cochlear Implants

    ERIC Educational Resources Information Center

    Giezen, Marcel R.; Escudero, Paola; Baker, Anne

    2010-01-01

    Purpose: This study examined the use of different acoustic cues in auditory perception of consonant and vowel contrasts by profoundly deaf children with a cochlear implant (CI) in comparison to age-matched children and young adults with normal hearing. Method: A speech sound categorization task in an XAB format was administered to 15 children ages…

  15. The role of selective attention in the acquisition of English tense and lax vowels by native Spanish listeners: comparison of three training methods

    PubMed Central

    Kondaurova, Maria V.; Francis, Alexander L.

    2010-01-01

    This study investigates the role of two processes, cue enhancement (learning to attend to acoustic cues which characterize a speech contrast for native listeners) and cue inhibition (learning to ignore cues that do not), in the acquisition of the American English tense and lax ([i] vs. [I]) vowels by native Spanish listeners. This contrast is acoustically distinguished by both vowel spectrum and duration. However, while native English listeners rely primarily on spectrum, inexperienced Spanish listeners tend to rely exclusively on duration. Twenty-nine native Spanish listeners, initially reliant on vowel duration, received either enhancement training, inhibition training, or training with a natural cue distribution. Results demonstrated that reliance on spectrum properties increased over baseline for all three groups. However, inhibitory training was more effective relative to enhancement training and both inhibitory and enhancement training were more effective relative to natural distribution training in decreasing listeners’ attention to duration. These results suggest that phonetic learning may involve two distinct cognitive processes, cue enhancement and cue inhibition, that function to shift selective attention between separable acoustic dimensions. Moreover, cue-specific training (whether enhancing or inhibitory) appears to be more effective for the acquisition of second language speech contrasts. PMID:21499531

  16. The effect of language immersion education on the preattentive perception of native and non-native vowel contrasts.

    PubMed

    Peltola, Maija S; Tuomainen, Outi; Koskinen, Mira; Aaltonen, Olli

    2007-01-01

    Proficiency in a second language (L2) may depend upon the age of exposure and the continued use of the mother tongue (L1) during L2 acquisition. The effect of early L2 exposure on the preattentive perception of native and non-native vowel contrasts was studied by measuring the mismatch negativity (MMN) response from 14-year-old children. The test group consisted of six Finnish children who had participated in English immersion education. The control group consisted of eight monolingual Finns. The subjects were presented with Finnish and English synthetic vowel contrasts. The aim was to see whether early exposure had resulted in the development of a new language-specific memory trace for the contrast phonemically irrelevant in L1. The results indicated that only the contrast with the largest acoustic distance elicited an MMN response in the Bilingual group, while the Monolingual group showed a response also to the native contrast. This may suggest that native-like memory traces for prototypical vowels were not formed in early language immersion. PMID:17109242

  17. The Basis for Language Acquisition: Congenitally Deaf Infants Discriminate Vowel Length in the First Months after Cochlear Implantation.

    PubMed

    Vavatzanidis, Niki Katerina; Mürbe, Dirk; Friederici, Angela; Hahne, Anja

    2015-12-01

    One main incentive for supplying hearing impaired children with a cochlear implant is the prospect of oral language acquisition. Only scarce knowledge exists, however, of what congenitally deaf children actually perceive when receiving their first auditory input, and specifically what speech-relevant features they are able to extract from the new modality. We therefore presented congenitally deaf infants and young children implanted before the age of 4 years with an oddball paradigm of long and short vowel variants of the syllable /ba/. We measured the EEG in regular intervals to study their discriminative ability starting with the first activation of the implant up to 8 months later. We were thus able to time-track the emerging ability to differentiate one of the most basic linguistic features that bears semantic differentiation and helps in word segmentation, namely, vowel length. Results show that already 2 months after the first auditory input, but not directly after implant activation, these early implanted children differentiate between long and short syllables. Surprisingly, after only 4 months of hearing experience, the ERPs have reached the same properties as those of the normal hearing control group, demonstrating the plasticity of the brain with respect to the new modality. We thus show that a simple but linguistically highly relevant feature such as vowel length reaches age-appropriate electrophysiological levels as fast as 4 months after the first acoustic stimulation, providing an important basis for further language acquisition. PMID:26351863

  18. The effects of indexical and phonetic variation on vowel perception in typically developing 9- to 12-year-old children.

    PubMed

    Jacewicz, Ewa; Fox, Robert Allen

    2014-04-01

    Purpose: The purpose of this study was to investigate how linguistic knowledge interacts with indexical knowledge in older children's perception under demanding listening conditions created by extensive talker variability. Method: Twenty-five 9- to 12-year-old children, 12 from North Carolina (NC) and 13 from Wisconsin (WI), identified 12 vowels in isolated /hVd/ words produced by 120 talkers representing the 2 dialects (NC and WI), both genders, and 3 age groups (generations) of residents from the same geographic locations as the listeners. Results: Identification rates were higher for responses to talkers from the same dialect as the listeners and for female speech. Listeners were sensitive to systematic positional variations in vowels and their dynamic structure (formant movement) associated with generational differences in vowel pronunciation resulting from sound change in a speech community. The overall identification rate was 71.7%, which is 8.5% lower than for the adults responding to the same stimuli in Jacewicz and Fox (2012). Conclusion: Typically developing older children were successful in dealing with both phonetic and indexical variation related to talker dialect, gender, and generation. They were less consistent than the adults, most likely because of less efficient encoding of acoustic-phonetic information in the speech of multiple talkers and relative inexperience with indexical variation. PMID:24686520

  19. Early learners' discrimination of second-language vowels.

    PubMed

    Højen, Anders; Flege, James E

    2006-05-01

    It is uncertain from previous research to what extent the perceptual system retains plasticity after attunement to the native language (L1) sound system. This study evaluated second-language (L2) vowel discrimination by individuals who began learning the L2 as children ("early learners"). Experiment 1 identified procedures that lowered discrimination scores for foreign vowel contrasts in an AXB test (with three physically different stimuli per trial, where "X" was drawn from the same vowel category as "A" or "B"). Experiment 2 examined the AXB discrimination of English vowels by native Spanish early learners and monolingual speakers of Spanish and English (20 per group) at interstimulus intervals (ISIs) of 1000 and 0 ms. The Spanish monolinguals obtained near-chance scores for three difficult vowel contrasts, presumably because they did not perceive the vowels as distinct phonemes and because the experimental design hindered low-level encoding strategies. Like the English monolinguals, the early learners obtained high scores, indicating they had shown considerable perceptual learning. However, statistically significant differences between early learners and English monolinguals for two of three difficult contrasts at the 0-ms ISI suggested that their underlying perceptual systems were not identical. Implications for claims regarding perceptual plasticity following L1 attunement are discussed. PMID:16708962

  20. Greek perception and production of an English vowel contrast: A preliminary study

    NASA Astrophysics Data System (ADS)

    Podlipský, Václav J.

    2005-04-01

    This study focused on language-independent principles functioning in acquisition of second language (L2) contrasts. Specifically, it tested Bohn's Desensitization Hypothesis [in Speech perception and linguistic experience: Issues in Cross Language Research, edited by W. Strange (York Press, Baltimore, 1995)] which predicted that Greek speakers of English as an L2 would base their perceptual identification of English /i/ and /I/ on durational differences. Synthetic vowels differing orthogonally in duration and spectrum between the /i/ and /I/ endpoints served as stimuli for a forced-choice identification test. To assess L2 proficiency and to evaluate the possibility of cross-language category assimilation, productions of English /i/, /I/, and /ɛ/ and of Greek /i/ and /e/ were elicited and analyzed acoustically. The L2 utterances were also rated for the degree of foreign accent. Two native speakers of Modern Greek with low and 2 with intermediate experience in English participated. Six native English (NE) listeners and 6 NE speakers tested in an earlier study constituted the control groups. Heterogeneous perceptual behavior was observed for the L2 subjects. It is concluded that until acquisition in completely naturalistic settings is tested, possible interference of formally induced meta-linguistic differentiation between a ``short'' and a ``long'' vowel cannot be eliminated.

  1. Improved neural representation of vowels in electric stimulation using desynchronizing pulse trains

    NASA Astrophysics Data System (ADS)

    Litvak, Leonid; Delgutte, Bertrand; Eddington, Donald

    2003-10-01

    Current cochlear implant processors poorly represent sound waveforms in the temporal discharge patterns of auditory-nerve fibers (ANFs). A previous study [Litvak et al., J. Acoust. Soc. Am. 114, 2079-2098 (2003)] showed that the temporal representation of sinusoidal stimuli can be improved in a majority of ANFs by encoding the stimuli as small modulations of a sustained, high-rate (5 kpps), desynchronizing pulse train (DPT). Here, these findings are extended to more complex stimuli by recording ANF responses to pulse trains modulated by bandpass filtered vowels. Responses to vowel modulators depended strongly on the discharge pattern evoked by the unmodulated DPT. ANFs that gave sustained responses to the DPT had period histograms that resembled the modulator waveform for low (<5%) modulation depths. Spectra of period histograms contained peaks near the formant frequencies. In contrast, ANFs that gave a transient (<1 min) response to the DPT poorly represented the formant frequencies. A model incorporating a linear modulation filter, a noisy threshold, and neural refractoriness predicts the shapes of period histograms for both types of fibers. These results suggest that a DPT-enhanced strategy may achieve good representation of the stimulus fine structure in the temporal discharge patterns of ANFs for frequencies up to 1000 Hz. It remains to be seen whether these temporal discharge patterns can be utilized by cochlear implant subjects.

  2. The acoustics of uvulars in Tlingit

    NASA Astrophysics Data System (ADS)

    Denzer-King, Ryan E.

    This paper looks at the acoustics of uvulars in Tlingit, an Athabaskan language spoken in Alaska and Canada. Data from five native speakers were used for acoustic analysis of tokens from five phoneme groups (alveolars, plain velars, labialized velars, plain uvulars, and labialized uvulars). The tokens were analyzed by computing spectral moments of plosive bursts and fricatives, and F2 and F3 values for post-consonantal vowels, which were used to calculate locus equations, a descriptive measure of the relationship between F2 at vowel onset and midpoint. Several trends were observed, including a greater difference between F2 and F3 after uvulars than after velars, as well as a higher center of gravity (COG) and lower skew and kurtosis for uvulars than for velars. The comparison of plain versus labialized consonants supports the finding of Suh (2008) that labialization lowers mean burst energy, or COG, and additionally shows that labialization raises skew and kurtosis.
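
    Both measures named here have standard recipes: treat the magnitude spectrum as a distribution over frequency for the spectral moments, and regress F2 at vowel onset on F2 at vowel midpoint for the locus equation. The sketch below illustrates those recipes; the variable names and use of NumPy are assumptions, not the author's scripts.

      # Sketch: spectral moments of a burst/fricative spectrum and a locus equation.
      import numpy as np

      def spectral_moments(freqs, mags):
          p = mags / mags.sum()                      # normalize to a distribution
          cog = np.sum(p * freqs)                    # 1st moment: center of gravity
          var = np.sum(p * (freqs - cog) ** 2)       # 2nd moment: variance
          skew = np.sum(p * (freqs - cog) ** 3) / var ** 1.5
          kurt = np.sum(p * (freqs - cog) ** 4) / var ** 2 - 3.0
          return cog, var, skew, kurt

      def locus_equation(f2_onset, f2_mid):
          """Regress F2 at vowel onset on F2 at vowel midpoint: (slope, intercept)."""
          slope, intercept = np.polyfit(f2_mid, f2_onset, 1)
          return slope, intercept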

  3. The acoustic and visual factors influencing the construction of tranquil space in urban and rural environments tranquil spaces-quiet places?

    PubMed

    Pheasant, Robert; Horoshenkov, Kirill; Watts, Greg; Barrett, Brendan

    2008-03-01

    Prior to this work no structured mechanism existed in the UK to evaluate the tranquillity of open spaces with respect to the characteristics of both acoustic and visual stimuli. This is largely due to the fact that within the context of "tranquil" environments, little is known about the interaction of the audio-visual modalities and how they combine to lead to the perception of tranquillity. This paper presents the findings of a study in which visual and acoustic data, captured from 11 English rural and urban landscapes, were used by 44 volunteers to make subjective assessments of both their perceived tranquillity of a location, and the loudness of five generic soundscape components. The results were then analyzed alongside objective measurements taken in the laboratory. It was found that the maximum sound pressure level (L(Amax)) and the percentage of natural features present at a location were the key factors influencing tranquillity. Engineering formulas for the tranquillity as a function of the noise level and proportion of the natural features are proposed. PMID:18345834

  4. Tutorial on architectural acoustics

    NASA Astrophysics Data System (ADS)

    Shaw, Neil; Talaske, Rick; Bistafa, Sylvio

    2002-11-01

    This tutorial is intended to provide an overview of current knowledge and practice in architectural acoustics. Topics covered will include basic concepts and history, acoustics of small rooms (small rooms for speech such as classrooms and meeting rooms, music studios, small critical listening spaces such as home theatres) and the acoustics of large rooms (larger assembly halls, auditoria, and performance halls).

  5. Acoustic telemetry and network analysis reveal the space use of multiple reef predators and enhance marine protected area design.

    PubMed

    Lea, James S E; Humphries, Nicolas E; von Brandis, Rainer G; Clarke, Christopher R; Sims, David W

    2016-07-13

    Marine protected areas (MPAs) are commonly employed to protect ecosystems from threats like overfishing. Ideally, MPA design should incorporate movement data from multiple target species to ensure sufficient habitat is protected. We used long-term acoustic telemetry and network analysis to determine the fine-scale space use of five shark and one turtle species at a remote atoll in the Seychelles, Indian Ocean, and evaluate the efficacy of a proposed MPA. Results revealed strong, species-specific habitat use in both sharks and turtles, with corresponding variation in MPA use. Defining the MPA's boundary from the edge of the reef flat at low tide instead of the beach at high tide (the current best in Seychelles) significantly increased the MPA's coverage of predator movements by an average of 34%. Informed by these results, the larger MPA was adopted by the Seychelles government, demonstrating how telemetry data can improve shark spatial conservation by affecting policy directly. PMID:27412274

  6. Acoustic profiles of distinct emotional expressions in laughter.

    PubMed

    Szameitat, Diana P; Alter, Kai; Szameitat, André J; Wildgruber, Dirk; Sterr, Annette; Darwin, Chris J

    2009-07-01

    Although listeners are able to decode the underlying emotions embedded in acoustical laughter sounds, little is known about the acoustical cues that differentiate between the emotions. This study investigated the acoustical correlates of laughter expressing four different emotions: joy, tickling, taunting, and schadenfreude. Analysis of 43 acoustic parameters showed that the four emotions could be accurately discriminated on the basis of a small parameter set. Vowel quality contributed only minimally to emotional differentiation whereas prosodic parameters were more effective. Emotions are expressed by similar prosodic parameters in both laughter and speech. PMID:19603892

  7. Massive particles in acoustic space-times: Emergent inertia and passive gravity

    SciTech Connect

    Milgrom, Mordehai

    2006-04-15

    I show that massive-particle dynamics can be simulated by a weak, external perturbation on a potential flow in an ideal fluid. The perturbation defining a particle is dictated in a small (spherical) region that is otherwise free to roam in the fluid. Here I take it as an external potential that couples to the fluid density or as a rigid distribution of sources with vanishing total outflux. The effective Lagrangian for such particles is shown to be of the form mc^2 l(U^2/c^2), where U is the velocity of the particle relative to the fluid and c the speed of sound. This can serve as a model for emergent relativistic inertia a la Mach's principle, with m playing the role of inertial mass, and also of analog gravity where m is also the passive gravitational mass. The mass m depends on the particle type and intrinsic structure (and on position if the background density is not constant), while l is universal: for D-dimensional particles l ∝ F(1, 1/2; D/2; U^2/c^2) (F is the hypergeometric function). These particles have the following interesting dynamics: particles fall in the same way in the analog gravitational field mimicked by the flow, independent of their internal structure, thus satisfying the weak equivalence principle. For D ≤ 5 they all have a relativistic limit with the acquired energy and momentum diverging as U → c. For D ≤ 7 the null geodesics of the standard acoustic metric solve our equation of motion. Interestingly, for D = 4 the dynamics is very nearly Lorentzian: l ∝ -mc^2 γ^(-1) λ(γ) (up to a constant), with λ = (1 + γ^(-1))^(-1) varying between 1/2 and 1 (γ is the 'Lorentz factor' for the particle velocity relative to the fluid). The particles can be said to follow the geodesics of a generalized acoustic metric of a Finslerian type that shares the null geodesics with the standard acoustic metric. In vortex geometries, the ergosphere is

  8. Generalization of von Neumann analysis for a model of two discrete half-spaces: The acoustic case

    USGS Publications Warehouse

    Haney, M.M.

    2007-01-01

    Evaluating the performance of finite-difference algorithms typically uses a technique known as von Neumann analysis. For a given algorithm, application of the technique yields both a dispersion relation valid for the discrete time-space grid and a mathematical condition for stability. In practice, a major shortcoming of conventional von Neumann analysis is that it can be applied only to an idealized numerical model - that of an infinite, homogeneous whole space. Experience has shown that numerical instabilities often arise in finite-difference simulations of wave propagation at interfaces with strong material contrasts. These interface instabilities occur even though the conventional von Neumann stability criterion may be satisfied at each point of the numerical model. To address this issue, I generalize von Neumann analysis for a model of two half-spaces. I perform the analysis for the case of acoustic wave propagation using a standard staggered-grid finite-difference numerical scheme. By deriving expressions for the discrete reflection and transmission coefficients, I study under what conditions the discrete reflection and transmission coefficients become unbounded. I find that instabilities encountered in numerical modeling near interfaces with strong material contrasts are linked to these cases and develop a modified stability criterion that takes into account the resulting instabilities. I test and verify the stability criterion by executing a finite-difference algorithm under conditions predicted to be stable and unstable. ?? 2007 Society of Exploration Geophysicists.
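
    For orientation, conventional von Neumann analysis of the corresponding whole-space problem, the 1-D first-order acoustic system on a standard staggered grid, yields the familiar discrete dispersion relation and stability bound below; this is the textbook result the paper generalizes, not the interface criterion derived in it.

      \[
      \sin\!\left(\frac{\omega\,\Delta t}{2}\right)
        \;=\; \frac{c\,\Delta t}{\Delta x}\,\sin\!\left(\frac{k\,\Delta x}{2}\right),
      \qquad \text{stability:}\quad \frac{c\,\Delta t}{\Delta x} \;\le\; 1 .
      \]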

  9. Nonlinear Dust Acoustic Waves in Dissipative Space Dusty Plasmas with Superthermal Electrons and Nonextensive Ions

    NASA Astrophysics Data System (ADS)

    El-Hanbaly, A. M.; El-Shewy, E. K.; Sallah, M.; Darweesh, H. F.

    2016-05-01

    The nonlinear characteristics of dust acoustic (DA) waves are studied in a homogeneous, collisionless, unmagnetized, and dissipative dusty plasma composed of negatively charged dust grains, superthermal electrons, and nonextensive ions. The Sagdeev pseudopotential technique has been employed to study large-amplitude DA waves; the pseudopotential provides evidence for the existence of both compressive and rarefactive solitons. The global features of the phase portrait are investigated to understand the possible types of solutions of the Sagdeev form. In addition, the reductive perturbation technique has been used to study small-amplitude DA waves and yields the Korteweg-de Vries-Burgers (KdV-Burgers) equation, which exhibits both soliton and shock waves. The behavior of the obtained results, for both large and small amplitudes, is investigated graphically in terms of plasma parameters such as the dust kinematic viscosity and the superthermal and nonextensive parameters.
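
    The KdV-Burgers equation obtained by reductive perturbation generically takes the form below; the coefficients A, B, and C depend on the plasma parameters, and the specific expressions derived in the paper are not reproduced here.

      \[
      \frac{\partial \phi}{\partial \tau}
        \;+\; A\,\phi\,\frac{\partial \phi}{\partial \xi}
        \;+\; B\,\frac{\partial^{3} \phi}{\partial \xi^{3}}
        \;-\; C\,\frac{\partial^{2} \phi}{\partial \xi^{2}} \;=\; 0 ,
      \]

    where the dispersive term (B) supports solitons and the dissipative Burgers term (C), tied here to the dust kinematic viscosity, produces shock structures.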

  10. Semi-analytical modeling of acoustic beam divergence in homogeneous anisotropic half-spaces.

    PubMed

    Kono, Naoyuki; Hirose, Sohichi

    2016-02-01

    Beam divergences of acoustical fields in semi-infinite homogeneous anisotropic media are calculated based on a semi-analytical model. The model for a plane source in a semi-infinite homogeneous anisotropic medium is proposed as an extended model for a point source in an infinite medium. Beam divergences propagating along crystallographic axes 〈100〉, 〈110〉, and 〈111〉 in a cubic crystal, a single crystalline Ni-based alloy, are measured and compared to calculation results for verifying the model. The contribution of beam divergence attenuation to the total attenuation for propagating in anisotropic polycrystalline materials is quantitatively evaluated in isolation from scattering attenuation effects. PMID:26508085

  11. Acoustic emission frequency discrimination

    NASA Technical Reports Server (NTRS)

    Sugg, Frank E. (Inventor); Graham, Lloyd J. (Inventor)

    1988-01-01

    In acoustic emission nondestructive testing, broadband frequency noise is distinguished from narrow banded acoustic emission signals, since the latter are valid events indicative of structural flaws in the material being examined. This is accomplished by separating out those signals which contain frequency components both within and beyond (either above or below) the range of valid acoustic emission events. Application to acoustic emission monitoring during nondestructive bond verification and proof loading of undensified tiles on the Space Shuttle Orbiter is considered.
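
    The separation idea can be sketched as a band-energy comparison: a candidate event is rejected as broadband noise when an appreciable fraction of its energy lies outside the band of valid acoustic emission signals. The band edges and threshold below are illustrative assumptions, not values from the patent.

      # Sketch: keep narrow-band AE events, reject broadband noise.
      import numpy as np

      def is_valid_ae_event(x, fs, band=(100e3, 300e3), ratio_thresh=0.8):
          spec = np.abs(np.fft.rfft(x)) ** 2
          freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
          in_band = (freqs >= band[0]) & (freqs <= band[1])
          in_energy = spec[in_band].sum()
          total = spec.sum() + 1e-12
          # Valid narrow-band AE events concentrate their energy inside the band.
          return (in_energy / total) >= ratio_thresh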

  12. Interaction of cues to vowel identity and consonant voicing: Cross-language perception

    NASA Astrophysics Data System (ADS)

    Morrison, Geoffrey Stewart

    2002-05-01

    Canadian English has two high front vowels differing in spectral and duration properties, Spanish has one high front vowel, and Japanese has two high front vowels differing in duration only. Vowel duration is a major cue to post-vocalic consonant voicing in English, but not in Japanese or Spanish. Canadian English, Japanese, and Mexican Spanish listeners identified members of a multidimensional edited speech continuum covering the English words bit, beat, bid, bead. The continuum was created by systematically varying the spectral properties of the vowel, and the durations of the vowel, the consonant closure, and the carrier sentence. English listeners had a categorical cutoff between /i/ and /I/ based primarily on the spectral properties of the vowel. Half the English listeners identified consonant voicing using vowel duration. Japanese listeners had a categorical cutoff between the English vowels based primarily on the duration of the vowel. The location of the cutoff was the same as the categorical cutoff between Japanese long /i:/ and short /i/. Japanese listeners identified consonant voicing at random. Spanish listeners identified the English vowels using vowel duration but did not have a categorical cutoff. Half the Spanish listeners identified consonant voicing using the spectral properties of the vowel.

  13. Vowel normalization for accent: An investigation of perceptual plasticity in young adults

    NASA Astrophysics Data System (ADS)

    Evans, Bronwen G.; Iverson, Paul

    2001-05-01

    Previous work has emphasized the role of early experience in the ability to accurately perceive and produce foreign or foreign-accented speech. This study examines how listeners at a much later stage in language development-early adulthood-adapt to a non-native accent within the same language. A longitudinal study investigated whether listeners who had had no previous experience of living in multidialectal environments adapted their speech perception and production when attending university. Participants were tested before beginning university and then again 3 months later. An acoustic analysis of production was carried out and perceptual tests were used to investigate changes in word intelligibility and vowel categorization. Preliminary results suggest that listeners are able to adjust their phonetic representations and that these patterns of adjustment are linked to the changes in production that speakers typically make due to sociolinguistic factors when living in multidialectal environments.

  14. An evaluation of Space Shuttle STS-2 payload bay acoustic data and comparison with predictions

    NASA Technical Reports Server (NTRS)

    Wilby, J. F.; Piersol, A. G.; Wilby, E. G.

    1982-01-01

    Space average sound pressure levels computed from measurements at 18 locations in the payload bay of the Space Shuttle orbiter vehicle during the STS-2 launch were compared with predicted levels obtained using the PACES computer program. The comparisons were performed over the frequency range 12.5 Hz to 1000 Hz, since the test data at higher frequencies are contaminated by instrumentation background noise. In general, the PACES computer program tends to overpredict the space average sound levels in the payload bay, although the magnitude of the discrepancy is usually small. Furthermore, the discrepancy depends to some extent on the manner in which the payload is modeled analytically, and on the method used to determine the "measured" space average sound pressure levels. Thus the difference between predicted and measured sound levels, averaged over the 20 one-third octave bands from 12.5 Hz to 1000 Hz, varies from 1 dB to 3.5 dB.
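
    For reference, space-average levels over discrete microphone locations are usually formed by energy averaging the band levels; the sketch below assumes that convention rather than the specific averaging method discussed in the paper.

      # Sketch: energy-average sound pressure levels from N microphone locations,
      # per one-third octave band (rows = locations, columns = bands).
      import numpy as np

      def space_average_spl(levels_db):
          levels_db = np.asarray(levels_db, dtype=float)
          return 10.0 * np.log10(np.mean(10.0 ** (levels_db / 10.0), axis=0))

      # Example: three locations, two bands.
      print(space_average_spl([[120.0, 118.0], [124.0, 121.0], [122.0, 119.0]]).round(1))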

  15. Analysis of Acoustic Features in Speakers with Cognitive Disorders and Speech Impairments

    NASA Astrophysics Data System (ADS)

    Saz, Oscar; Simón, Javier; Rodríguez, W. Ricardo; Lleida, Eduardo; Vaquero, Carlos

    2009-12-01

    This work presents the results of an analysis of the acoustic features (formants and the three suprasegmental features: tone, intensity, and duration) of vowel production in a group of 14 young speakers with different kinds of speech impairments due to physical and cognitive disorders. A corpus of unimpaired children's speech is used to determine the reference values for these features in speakers without any kind of speech impairment within the same domain as the impaired speakers, namely 57 isolated words. The signal processing to extract the formant and pitch values is based on a linear prediction coefficient (LPC) analysis of the segments labeled as vowels by a Hidden Markov Model (HMM) based Viterbi forced alignment. Intensity and duration are also based on the outcome of the automated segmentation. As the main conclusion of the work, it is shown that the intelligibility of vowel production is lowered in impaired speakers even when the vowel is perceived as correct by human labelers. The decrease in intelligibility is due to a 30% increase in confusability in the formant map, a 50% reduction in the discriminative power of energy between stressed and unstressed vowels, and a 50% increase in the standard deviation of vowel length. On the other hand, impaired speakers keep good control of tone in the production of stressed and unstressed vowels.
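
    A hedged sketch of LPC-based formant estimation of the kind described (the library call, model order, and bandwidth threshold are illustrative assumptions, not the authors' pipeline):

      # Sketch: estimate formants of a vowel segment from the roots of an LPC fit.
      import numpy as np
      import librosa

      def lpc_formants(segment, fs, order=12, max_bw=400.0):
          a = librosa.lpc(segment.astype(float), order=order)  # LPC polynomial A(z)
          roots = [r for r in np.roots(a) if np.imag(r) > 0]   # poles in upper half-plane
          freqs = np.angle(roots) * fs / (2 * np.pi)           # pole frequencies (Hz)
          bws = -(fs / np.pi) * np.log(np.abs(roots))          # pole bandwidths (Hz)
          formants = sorted(f for f, b in zip(freqs, bws) if b < max_bw and f > 90.0)
          return formants[:3]                                  # F1-F3 estimates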

  16. Speech Recognition System and Formant Based Analysis of Spoken Arabic Vowels

    NASA Astrophysics Data System (ADS)

    Alotaibi, Yousef Ajami; Hussain, Amir

    Arabic is one of the world's oldest languages and is among the most widely spoken languages in terms of number of speakers. However, it has not received much attention from the traditional speech processing research community. This study is specifically concerned with the analysis of vowels in the Modern Standard Arabic dialect. The first and second formant values of these vowels are investigated, and the differences and similarities between the vowels are explored using consonant-vowel-consonant (CVC) utterances. For this purpose, an HMM-based recognizer was built to classify the vowels, and the performance of the recognizer was analyzed to help understand the similarities and dissimilarities between the phonetic features of the vowels. The vowels are also analyzed in both the time and frequency domains, and the consistent findings of the analysis are expected to facilitate future Arabic speech processing tasks such as vowel and speech recognition and classification.

  17. Reading Arabic Texts: Effects of Text Type, Reader Type and Vowelization.

    ERIC Educational Resources Information Center

    Abu-Rabia, Salim

    1998-01-01

    Investigates the effect of vowels on reading accuracy in Arabic orthography. Finds that vowels had a significant effect on reading accuracy of poor and skilled readers in reading each of four kinds of texts. (NH)

  18. Effects of frequency shifts and visual gender information on vowel category judgments

    NASA Astrophysics Data System (ADS)

    Glidden, Catherine; Assmann, Peter F.

    2003-10-01

    Visual morphing techniques were used together with a high-quality vocoder to study the audiovisual contribution of talker gender to the identification of frequency-shifted vowels. A nine-step continuum ranging from ``bit'' to ``bet'' was constructed from natural recorded syllables spoken by an adult female talker. Upward and downward frequency shifts in spectral envelope (scale factors of 0.85 and 1.0) were applied in combination with shifts in fundamental frequency, F0 (scale factors of 0.5 and 1.0). Downward frequency shifts generally resulted in malelike voices whereas upward shifts were perceived as femalelike. Two separate nine-step visual continua from ``bit'' to ``bet'' were also constructed, one from a male face and the other a female face, each producing the end-point words. Each step along the two visual continua was paired with the corresponding step on the acoustic continuum, creating natural audiovisual utterances. Category boundary shifts were found for both acoustic cues (F0 and formant frequency shifts) and visual cues (visual gender). The visual gender effect was larger when acoustic and visual information were matched appropriately. These results suggest that visual information provided by the speech signal plays an important supplemental role in talker normalization.

  19. Linear and nonlinear analysis of dust acoustic waves in dissipative space dusty plasmas with trapped ions

    NASA Astrophysics Data System (ADS)

    El-Hanbaly, A. M.; El-Shewy, E. K.; Sallah, M.; Darweesh, H. F.

    2015-05-01

    The propagation of linear and nonlinear dust acoustic waves in a homogeneous, unmagnetized, collisionless, and dissipative dusty plasma consisting of extremely massive, micron-sized, negative dust grains has been investigated. A Boltzmann distribution is assumed for the electrons and a vortex-like distribution for the ions. In the linear analysis, the dispersion relation is obtained, and the dependence of the damping rate of the waves on the carrier wave number, the dust kinematic viscosity coefficient, and the ratio of the ion to electron temperatures is discussed. In the nonlinear analysis, the modified Korteweg-de Vries-Burgers (mKdV-Burgers) equation is derived via the reductive perturbation method. A bifurcation analysis is presented for the non-dissipative system in the absence of the Burgers term. For the dissipative system, the tangent hyperbolic method is used to solve the mKdV-Burgers equation, yielding the shock wave solution. The obtained results may be helpful for a better understanding of wave propagation in astrophysical plasmas as well as in inertial confinement fusion laboratory plasmas.

  20. Consonant/vowel asymmetry in early word form recognition.

    PubMed

    Poltrock, Silvana; Nazzi, Thierry

    2015-03-01

    Previous preferential listening studies suggest that 11-month-olds' early word representations are phonologically detailed, such that minor phonetic variations (i.e., mispronunciations) impair recognition. However, these studies focused on infants' sensitivity to mispronunciations (or omissions) of consonants, which have been proposed to be more important for lexical identity than vowels. Even though a lexically related consonant advantage has been consistently found in French from 14 months of age onward, little is known about its developmental onset. The current study asked whether French-learning 11-month-olds exhibit a consonant-vowel asymmetry when recognizing familiar words, which would be reflected in vowel mispronunciations being more tolerated than consonant mispronunciations. In a baseline experiment (Experiment 1), infants preferred listening to familiar words over nonwords, confirming that at 11 months of age infants show a familiarity effect rather than a novelty effect. In Experiment 2, which was constructed using the familiar words of Experiment 1, infants preferred listening to one-feature vowel mispronunciations over one-feature consonant mispronunciations. Given the familiarity preference established in Experiment 1, this pattern of results suggests that recognition of early familiar words is more dependent on their consonants than on their vowels. This adds another piece of evidence that, at least in French, consonants already have a privileged role in lexical processing by 11 months of age, as claimed by Nespor, Peña, and Mehler (2003). PMID:25544396

  1. Acoustic modeling of the speech organ

    NASA Astrophysics Data System (ADS)

    Kacprowski, J.

    The state of research on acoustic modeling of phonational and articulatory speech producing elements is reviewed. Consistent with the physical interpretation of the speech production process, the acoustic theory of speech production is expressed as the product of three factors: laryngeal involvement, sound transmission, and emanations from the mouth and/or nose. Each of these factors is presented in the form of a simplified mathematical description which provides the theoretical basis for the formation of physical models of the appropriate functional members of this complex bicybernetic system. Vocal tract wall impedance, vocal tract synthesizers, laryngeal dysfunction, vowel nasalization, resonance circuits, and sound wave propagation are discussed.
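
    The three factors named here correspond to the familiar source-filter formulation of the acoustic theory of speech production; written in the frequency domain it reads as follows (a standard statement added for orientation, not a formula quoted from the review).

      \[
      P(f) \;=\; S(f)\,T(f)\,R(f),
      \]

    where S(f) is the laryngeal (glottal) source spectrum, T(f) is the transfer function of the vocal and/or nasal tract, and R(f) is the radiation characteristic at the mouth and nostrils.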

  2. Improving L2 Listeners' Perception of English Vowels: A Computer-Mediated Approach

    ERIC Educational Resources Information Center

    Thomson, Ron I.

    2012-01-01

    A high variability phonetic training technique was employed to train 26 Mandarin speakers to better perceive ten English vowels. In eight short training sessions, learners identified 200 English vowel tokens, produced in a post bilabial stop context by 20 native speakers. Learners' ability to identify English vowels significantly improved in the…

  3. Emergence of a Vowel System in a Young Cochlear Implant Recipient.

    ERIC Educational Resources Information Center

    Ertmer, David J.

    2001-01-01

    This article chronicles changes in vowel production by a child with congenital deafness who received a multichannel cochlear implant at 19 months. The child exhibited three vowel types before implantation, however, a total of nine different vowel types were observed during her first year of implant experience. (Contains references.) (Author/CR)

  4. Children's Perception of Conversational and Clear American-English Vowels in Noise

    NASA Astrophysics Data System (ADS)

    Leone, Dorothy

    A handful of studies have examined children's perception of clear speech in the presence of background noise. Although accurate vowel perception is important for listeners' comprehension, no study has focused on whether vowels uttered in clear speech aid intelligibility for child listeners. In the present study, American-English (AE) speaking children repeated the AE vowels /ɛ, æ, ɑ, ʌ/ in the nonsense word /gəbVpə/ in phrases produced in conversational and clear speech by two female AE-speaking adults. The recordings of the adults' speech were presented at a signal-to-noise ratio (SNR) of -6 dB to 15 AE-speaking children (ages 5.0-8.5) in an examination of whether AE school-age children's vowel identification in noise is more accurate when utterances are produced in clear speech than in conversational speech. Effects of the particular vowel uttered and talker effects were also examined. Clear speech vowels were repeated significantly more accurately (87%) than conversational speech vowels (59%), suggesting that clear speech aids children's vowel identification. Results varied as a function of the talker and particular vowel uttered. Child listeners repeated one talker's vowels more accurately than the other's and front vowels more accurately than central and back vowels. The findings support the use of clear speech for enhancing adult-to-child communication in AE, particularly in noisy environments.

  5. The Effect of Hearing Loss on Identification of Asynchronous Double Vowels

    ERIC Educational Resources Information Center

    Lentz, Jennifer J.; Marsh, Shavon L.

    2006-01-01

    This study determined whether listeners with hearing loss received reduced benefits due to an onset asynchrony between sounds. Seven normal-hearing listeners and 7 listeners with hearing impairment (HI) were presented with 2 synthetic, steady-state vowels. One vowel (the late-arriving vowel) was 250 ms in duration, and the other (the…

  6. Textual Input Enhancement for Vowel Blindness: A Study with Arabic ESL Learners

    ERIC Educational Resources Information Center

    Alsadoon, Reem; Heift, Trude

    2015-01-01

    This study explores the impact of textual input enhancement on the noticing and intake of English vowels by Arabic L2 learners of English. Arabic L1 speakers are known to experience "vowel blindness," commonly defined as a difficulty in the textual decoding and encoding of English vowels due to an insufficient decoding of the word form.…

  7. The Role of Vowels in Reading Semitic Scripts: Data from Arabic and Hebrew.

    ERIC Educational Resources Information Center

    Abu-Rabia, Salim

    2001-01-01

    Investigates the effect of vowels and context on reading accuracy of skilled adult native Arabic speakers in Arabic and in Hebrew, their second language. Reveals a significant effect for vowels and for context across all reading conditions in Arabic and Hebrew. Finds that the vowelized texts in Arabic and the pointed and unpointed texts in Hebrew…

  8. Biomechanically Preferred Consonant-Vowel Combinations Fail to Appear in Adult Spoken Corpora

    ERIC Educational Resources Information Center

    Whalen, D. H.; Giulivi, Sara; Nam, Hosung; Levitt, Andrea G.; Halle, Pierre; Goldstein, Louis M.

    2012-01-01

    Certain consonant/vowel (CV) combinations are more frequent than would be expected from the individual C and V frequencies alone, both in babbling and, to a lesser extent, in adult language, based on dictionary counts: Labial consonants co-occur with central vowels more often than chance would dictate; coronals co-occur with front vowels, and…

  9. Vowels, Syllables, and Letter Names: Differences between Young Children's Spelling in English and Portuguese

    ERIC Educational Resources Information Center

    Pollo, Tatiana Cury; Kessler, Brett; Treiman, Rebecca

    2005-01-01

    Young Portuguese-speaking children have been reported to produce more vowel- and syllable-oriented spellings than have English speakers. To investigate the extent and source of such differences, we analyzed children's vocabulary and found that Portuguese words have more vowel letter names and a higher vowel-consonant ratio than do English words.…

  10. Vowel Targeted Intervention for Children with Persisting Speech Difficulties: Impact on Intelligibility

    ERIC Educational Resources Information Center

    Speake, Jane; Stackhouse, Joy; Pascoe, Michelle

    2012-01-01

    Compared to the treatment of consonant segments, the treatment of vowels is infrequently described in the literature on children's speech difficulties. Vowel difficulties occur less frequently than those with consonants but may have significant impact on intelligibility. In order to evaluate the effectiveness of vowel targeted intervention (VTI)…

  11. Vowel Cluster-Phoneme Correspondences in 20,000 English Words.

    ERIC Educational Resources Information Center

    Johnson, Dale D.

    The symbol-sound correspondence status of vowel-cluster (two or more adjacent vowel letters) spelling in American English was investigated. The source of the study was Venezky's 1963 revision of the Thorndike Frequency Count. A computer print-out of the 20,000 word corpus was analyzed to determine the letter-sound correspondence of vowel cluster…

  12. Non-native Speech Perception Training Using Vowel Subsets: Effects of Vowels in Sets and Order of Training

    PubMed Central

    Nishi, Kanae; Kewley-Port, Diane

    2008-01-01

    Purpose Nishi and Kewley-Port (2007) trained Japanese listeners to perceive nine American English monophthongs and showed that a protocol using all nine vowels (fullset) produced better results than the one using only the three more difficult vowels (subset). The present study extended the target population to Koreans and examined whether protocols combining the two stimulus sets would provide more effective training. Method Three groups of five Korean listeners were trained on American English vowels for nine days using one of the three protocols: fullset only, first three days on subset then six days on fullset, or first six days on fullset then three days on subset. Participants' performance was assessed by pre- and post-training tests, as well as by a mid-training test. Results 1) Fullset training was also effective for Koreans; 2) no advantage was found for the two combined protocols over the fullset-only protocol; and 3) sustained “non-improvement” was observed for training using one of the combined protocols. Conclusions In using subsets for training American English vowels, care should be taken not only in the selection of subset vowels but also in the order in which the subsets are trained. PMID:18664694

  13. A neuronal model of vowel normalization and representation.

    PubMed

    Sussman, H M

    1986-05-01

    A speculative neuronal model for vowel normalization and representation is offered. The neurophysiological basis for the premise is the "combination-sensitive" neuron recently documented in the auditory cortex of the mustached bat (N. Suga, W. E. O'Neill, K. Kujirai, and T. Manabe, 1983, Journal of Neurophysiology, 49, 1573-1627). These neurons are specialized to respond to either precise frequency, amplitude, or time differentials between specific harmonic components of the pulse-echo pair comprising the biosonar signal of the bat. Such multiple frequency comparisons lie at the heart of human vowel perception and categorization. A representative vowel normalization algorithm is used to illustrate the operational principles of the neuronal model in accomplishing both normalization and categorization in early infancy. The neurological precursors to a phonemic vocalic system are described based on the neurobiological events characterizing regressive neurogenesis. PMID:3013360

  14. Interaural timing cues do not contribute to the map of space in the ferret superior colliculus: a virtual acoustic space study.

    PubMed

    Campbell, Robert A A; Doubell, Timothy P; Nodal, Fernando R; Schnupp, Jan W H; King, Andrew J

    2006-01-01

    In this study, we used individualized virtual acoustic space (VAS) stimuli to investigate the representation of auditory space in the superior colliculus (SC) of anesthetized ferrets. The VAS stimuli were generated by convolving broadband noise bursts with each animal's own head-related transfer function and presented over earphones. Comparison of the amplitude spectra of the free-field and VAS signals and of the spatial receptive fields of neurons recorded in the inferior colliculus with each form of stimulation confirmed that the VAS provided an accurate simulation of sounds presented in the free field. Units recorded in the deeper layers of the SC responded predominantly to virtual sound directions within the contralateral hemifield. In most cases, increasing the sound level resulted in stronger spike discharges and broader spatial receptive fields. However, the preferred sound directions, as defined by the direction of the centroid vector, remained largely unchanged across different levels and, as observed in previous free-field studies, varied topographically in azimuth along the rostrocaudal axis of the SC. We also examined the contribution of interaural time differences (ITDs) to map topography by digitally manipulating the VAS stimuli so that ITDs were held constant while allowing other spatial cues to vary naturally. The response properties of the majority of units, including centroid direction, remained unchanged with fixed ITDs, indicating that sensitivity to this cue is not responsible for tuning to different sound directions. These results are consistent with previous data suggesting that sensitivity to interaural level differences and spectral cues provides the basis for the map of auditory space in the mammalian SC. PMID:16162823
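
    The central stimulus-generation step described here, convolving a broadband noise burst with a head-related impulse response so that it is heard at a virtual direction, can be sketched as follows. The sampling rate, burst duration, and HRIR arrays are placeholders (the study used each ferret's individually measured head-related transfer functions).

      import numpy as np
      from scipy.signal import fftconvolve

      fs = 48000                          # sampling rate (Hz), placeholder
      dur = 0.1                           # 100-ms noise burst, placeholder duration
      rng = np.random.default_rng(0)
      noise = rng.standard_normal(int(fs * dur))

      # hrir_left / hrir_right would be the measured head-related impulse responses
      # for one virtual sound direction; random filters stand in for them here.
      hrir_left = rng.standard_normal(256) * np.hanning(256)
      hrir_right = rng.standard_normal(256) * np.hanning(256)

      # Convolution places the noise burst at the direction encoded by the HRIRs.
      vas_left = fftconvolve(noise, hrir_left, mode="full")
      vas_right = fftconvolve(noise, hrir_right, mode="full")
      stimulus = np.stack([vas_left, vas_right], axis=1)   # two-channel earphone signal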

  15. Frequency-space domain acoustic wave simulation with the BiCGstab (ℓ) iterative method

    NASA Astrophysics Data System (ADS)

    Du, Zengli; Liu, Jianjun; Liu, Wenge; Li, Chunhong

    2016-02-01

    The vast computational cost and memory requirements of LU decomposition are major obstacles to 3D seismic modelling in the frequency-space domain. BiCGstab(ℓ) is an effective bi-conjugate gradient method for solving giant sparse linear systems, but convergence is extremely slow when the convergence threshold is set very small. In this paper, the BiCGstab(ℓ) iterative method was introduced into 3D numerical simulation to overcome these problems. Numerical examples show that the precision of the BiCGstab(ℓ) iterative method meets the demands of seismic modelling and that its results are equivalent to those of LU decomposition, while its computational cost and memory requirements are lower than those of LU decomposition. It is an effective method for 3D seismic modelling in the frequency-space domain.
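
    For readers unfamiliar with the approach, the sketch below assembles a small 2D frequency-space (Helmholtz-type) system and solves it with SciPy's BiCGSTAB routine. This corresponds to the ℓ = 1 member of the BiCGstab(ℓ) family rather than the scheme used in the paper, and the grid size, frequency, velocity, and damping are toy values chosen only to make the example run.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import bicgstab

      n, h = 100, 10.0                  # grid points per side and spacing (m), toy values
      freq, c = 10.0, 2000.0            # frequency (Hz) and velocity (m/s), toy values
      ksq = (2 * np.pi * freq / c) ** 2 * (1 + 0.05j)   # small imaginary part mimics attenuation
                                                        # and helps the iteration converge

      # Discrete Helmholtz operator: (Laplacian + k^2 I) p = -s on an n-by-n grid
      lap1d = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h ** 2
      eye = sp.identity(n)
      helmholtz = (sp.kron(eye, lap1d) + sp.kron(lap1d, eye)
                   + ksq * sp.identity(n * n)).tocsr().astype(np.complex128)

      rhs = np.zeros(n * n, dtype=np.complex128)
      rhs[(n // 2) * n + n // 2] = -1.0          # point source at the center of the grid

      pressure, info = bicgstab(helmholtz, rhs, maxiter=5000)
      print("converged" if info == 0 else f"bicgstab info = {info}")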

  16. Quantifying loss of acoustic communication space for right whales in and around a U.S. National Marine Sanctuary.

    PubMed

    Hatch, Leila T; Clark, Christopher W; Van Parijs, Sofie M; Frankel, Adam S; Ponirakis, Dimitri W

    2012-12-01

    The effects of chronic exposure to increasing levels of human-induced underwater noise on marine animal populations reliant on sound for communication are poorly understood. We sought to further develop methods of quantifying the effects of communication masking associated with human-induced sound on contact-calling North Atlantic right whales (Eubalaena glacialis) in an ecologically relevant area (~10,000 km²) and time period (peak feeding time). We used an array of temporary, bottom-mounted, autonomous acoustic recorders in the Stellwagen Bank National Marine Sanctuary to monitor ambient noise levels, measure levels of sound associated with vessels, and detect and locate calling whales. We related wind speed, as recorded by regional oceanographic buoys, to ambient noise levels. We used vessel-tracking data from the Automatic Identification System to quantify acoustic signatures of large commercial vessels. On the basis of these integrated sound fields, median signal excess (the difference between the signal-to-noise ratio and the assumed recognition differential) for contact-calling right whales was negative (-1 dB) under current ambient noise levels and was further reduced (-2 dB) by the addition of noise from ships. Compared with potential communication space available under historically lower noise conditions, calling right whales may have lost, on average, 63-67% of their communication space. One or more of the 89 calling whales in the study area was exposed to noise levels ≥120 dB re 1 μPa by ships for 20% of the month, and a maximum of 11 whales were exposed to noise at or above this level during a single 10-min period. These results highlight the limitations of exposure-threshold (i.e., dose-response) metrics for assessing chronic anthropogenic noise effects on communication opportunities. Our methods can be used to integrate chronic and wide-ranging noise effects in emerging ocean-planning forums that seek to improve management of cumulative effects
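
    The masking metric at the center of this record, signal excess, is simply the signal-to-noise ratio minus an assumed recognition differential, and the loss of communication space is a fractional comparison against a quieter reference condition. The sketch below works through that arithmetic with hypothetical numbers; none of the levels or areas are the study's measured values.

      # Signal excess (dB): SE = SNR - recognition differential (RD).
      # All values below are illustrative, not taken from the Stellwagen Bank data.
      received_call_level = 115.0   # dB re 1 uPa, hypothetical received call level
      noise_level = 106.0           # dB re 1 uPa, hypothetical ambient + ship noise
      recognition_differential = 10.0

      snr = received_call_level - noise_level
      signal_excess = snr - recognition_differential      # -1 dB: the call is barely masked

      # Fractional loss of communication space relative to a quieter reference,
      # treating "space" as the area over which SE >= 0 (a simplifying assumption).
      area_reference = 10000.0   # km^2 available under historical noise, hypothetical
      area_current = 3500.0      # km^2 available under current noise, hypothetical
      loss = 1.0 - area_current / area_reference
      print(f"signal excess = {signal_excess:.1f} dB, space lost = {loss:.0%}")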

  17. Asymmetries in the Processing of Vowel Height

    ERIC Educational Resources Information Center

    Scharinger, Mathias; Monahan, Philip J.; Idsardi, William J.

    2012-01-01

    Purpose: Speech perception can be described as the transformation of continuous acoustic information into discrete memory representations. Therefore, research on neural representations of speech sounds is particularly important for a better understanding of this transformation. Speech perception models make specific assumptions regarding the…

  18. Speech recognition: Acoustic, phonetic and lexical knowledge

    NASA Astrophysics Data System (ADS)

    Zue, V. W.

    1985-08-01

    During this reporting period we continued to make progress on the acquisition of acoustic-phonetic and lexical knowledge. We completed development of a continuous digit recognition system. The system was constructed to investigate the use of acoustic-phonetic knowledge in a speech recognition system. The significant achievements of this study include the development of a soft-failure procedure for lexical access and the discovery of a set of acoustic-phonetic features for verification. We completed a study of the constraints that lexical stress imposes on word recognition. We found that lexical stress information alone can, on the average, reduce the number of word candidates from a large dictionary by more than 80 percent. In conjunction with this study, we successfully developed a system that automatically determines the stress pattern of a word from the acoustic signal. We performed an acoustic study on the characteristics of nasal consonants and nasalized vowels. We have also developed recognition algorithms for nasal murmurs and nasalized vowels in continuous speech. We finished the preliminary development of a system that aligns a speech waveform with the corresponding phonetic transcription.

  19. Analysis and analogy in the perception of vowels.

    PubMed

    Remez, Robert E; Fellowes, Jennifer M; Blumenthal, Eva Y; Nagel, Dalia Shoretz

    2003-10-01

    In two experiments, we investigated the creation of conceptual analogies to a contrast between vowels. An ordering procedure was used to determine the reliability of simple sensory and abstract analogies to vowel contrasts composed by naive volunteers. The results indicate that test subjects compose stable and consistent analogies to a meaningless segmental linguistic contrast, some invoking simple and complex relational properties. Although in the literature of psychophysics such facility has been explained as an effect of sensory analysis, the present studies indicate the action of a far subtler and more versatile cognitive function akin to the creation of meaning in figurative language. PMID:14704027

  20. Acoustic source for generating an acoustic beam

    DOEpatents

    Vu, Cung Khac; Sinha, Dipen N.; Pantea, Cristian

    2016-05-31

    An acoustic source for generating an acoustic beam includes a housing; a plurality of spaced-apart piezoelectric layers disposed within the housing; and a non-linear medium filling the space between the plurality of layers. Each of the plurality of piezoelectric layers is configured to generate an acoustic wave. The non-linear medium and the plurality of piezoelectric layers have a matching impedance so as to enhance transmission of the acoustic wave generated by each of the plurality of layers through the remaining plurality of layers.

  1. Structural-acoustic optimum design of shell structures in open/closed space based on a free-form optimization method

    NASA Astrophysics Data System (ADS)

    Shimoda, Masatoshi; Shimoide, Kensuke; Shi, Jin-Xing

    2016-03-01

    Noise reduction by structural geometry optimization has attracted much attention among designers. In the present work, we propose a free-form optimization method for the structural-acoustic design optimization of shell structures to reduce the noise of a targeted frequency or frequency range in an open or closed space. The objective of the design optimization is to minimize the average structural vibration-induced sound pressure at the evaluation points in the acoustic field under a volume constraint. For the shape design optimization, we carry out structural-acoustic coupling analysis and adjoint analysis to calculate the shape gradient functions. Then, we use the shape gradient functions in velocity analysis to update the shape of shell structures. We repeat this process until convergence is confirmed to obtain the optimum shape of the shell structures in a structural-acoustic coupling system. The numerical results for the considered examples showed that the proposed design optimization process can significantly reduce the noise in both open and closed spaces.

  2. Dynamic surface acoustic response to a thermal expansion source on an anisotropic half space.

    PubMed

    Zhao, Peng; Zhao, Ji-Cheng; Weaver, Richard

    2013-05-01

    The surface displacement response to a distributed thermal expansion source is solved using the reciprocity principle. By convolving the strain Green's function with the thermal stress field created by an ultrafast laser illumination, the complete surface displacement on an anisotropic half space induced by laser absorption is calculated in the time domain. This solution applies to the near field surface displacement due to pulse laser absorption. The solution is validated by performing ultrafast laser pump-probe measurements and showing very good agreement between the measured time-dependent probe beam deflection and the computed surface displacement. PMID:23654371

  3. Intelligibility of American English Vowels of Native and Non-Native Speakers in Quiet and Speech-Shaped Noise

    ERIC Educational Resources Information Center

    Liu, Chang; Jin, Su-Hyun

    2013-01-01

    This study examined intelligibility of twelve American English vowels produced by English, Chinese, and Korean native speakers in quiet and speech-shaped noise in which vowels were presented at six sensation levels from 0 dB to 10 dB. The slopes of vowel intelligibility functions and the processing time for listeners to identify vowels were…

  4. Impact of chevron spacing and asymmetric distribution on supersonic jet acoustics and flow

    NASA Astrophysics Data System (ADS)

    Heeb, N.; Gutmark, E.; Kailasanath, K.

    2016-05-01

    An experimental investigation into the effect of chevron spacing and distribution on supersonic jets was performed. Cross-stream and streamwise particle imaging velocimetry measurements were used to relate flow field modification to sound field changes measured by far-field microphones in the overexpanded, ideally expanded, and underexpanded regimes. Drastic modification of the jet cross-section was achieved by the investigated configurations, with both elliptic and triangular shapes attained downstream. Consequently, screech was nearly eliminated with reductions in the range of 10-25 dB depending on the operating condition. Analysis of the streamwise velocity indicated that both the mean shock spacing and strength were reduced resulting in an increase in the broadband shock associated noise spectral peak frequency and a reduction in the amplitude, respectively. Maximum broadband shock associated noise amplitude reductions were in the 5-7 dB range. Chevron proximity was found to be the primary driver of peak vorticity production, though persistence followed the opposite trend. The integrated streamwise vorticity modulus was found to be correlated with peak large scale turbulent mixing noise reduction, though optimal overall sound pressure level reductions did not necessarily follow due to the shock/fine scale mixing noise sources. Optimal large scale mixing noise reductions were in the 5-6 dB range.

  5. Acoustic energy density distribution and sound intensity vector field inside coupled spaces.

    PubMed

    Meissner, Mirosław

    2012-07-01

    In this paper, the modal expansion method supported by a computer implementation has been used to predict steady-state distributions of the potential and kinetic energy densities, and the active and reactive sound intensities inside two coupled enclosures. The numerical study was dedicated to low-frequency room responses. Calculation results have shown that the distribution of energetic quantities in coupled spaces is strongly influenced by the modal localization. Appropriate descriptors of the localization effect were introduced to identify localized modes. As was evidenced by numerical data, the characteristic objects in the active intensity field are vortices positioned irregularly inside the room. It was found that vortex centers lie exactly on the lines corresponding to zeros of the eigenfunction for a dominant mode. Finally, an impact of the wall impedance on the quantitative relationship between the active and reactive intensities was analyzed and it was concluded that for very small sound damping the behavior of the sound intensity inside the room space is essentially only oscillatory. PMID:22779472

  6. Acoustic neuroma

    MedlinePlus

    Vestibular schwannoma; Tumor - acoustic; Cerebellopontine angle tumor; Angle tumor ... Acoustic neuromas have been linked with the genetic disorder neurofibromatosis type 2 (NF2). Acoustic neuromas are uncommon.

  7. Detection and modeling of the acoustic perturbation produced by the launch of the Space Shuttle using the Global Positioning System

    NASA Astrophysics Data System (ADS)

    Bowling, T. J.; Calais, E.; Dautermann, T.

    2010-12-01

    Rocket launches are known to produce infrasonic pressure waves that propagate into the ionosphere, where coupling between electrons and neutral particles induces fluctuations in ionospheric electron density observable in GPS measurements. We have detected ionospheric perturbations following the launch of space shuttle Atlantis on 11 May 2009 using an array of continuously operating GPS stations across the southeastern coast of the United States and in the Caribbean. Detections are prominent to the south of the westward shuttle trajectory in the area of maximum coupling between the acoustic wave and Earth’s magnetic field, move at speeds consistent with the speed of sound, and show coherency between stations covering a large geographic range. We model the perturbation as an explosive source located at the point of closest approach between the shuttle path and each sub-ionospheric point. The neutral pressure wave is propagated using ray tracing, resultant changes in electron density are calculated at points of intersection between rays and satellite-to-receiver lines of sight, and synthetic integrated electron content values are derived. Arrival times of the observed and synthesized waveforms match closely, with discrepancies related to errors in the a priori sound speed model used for ray tracing. Current work includes the estimation of source location and energy.

  8. Higher-order corrections to nonlinear dust-ion-acoustic shock waves in a degenerate dense space plasma

    NASA Astrophysics Data System (ADS)

    El-Labany, S. K.; El-Taibany, W. F.; El-Samahy, A. E.; Hafez, A. M.; Atteya, A.

    2014-12-01

    A reductive perturbation technique is employed to investigate the contribution of higher-order nonlinearity and dissipation to nonlinear dust-ion-acoustic (DIA) shock waves in a three-component degenerate dense space plasma. The model consists of a degenerate electron fluid (either ultrarelativistic or nonrelativistic), a nonrelativistic ion fluid, and stationary heavy dust grains. A nonlinear Burgers equation and a linear inhomogeneous Burgers-type equation are derived. The present model admits only compressive DIA shocks. Including the higher-order corrections creates a new solitary wave structure, the "humped DIA shock" wave. For ultrarelativistic (nonrelativistic) electrons, one (two) humped DIA shock(s) is (are) created. The DIA shock wave amplitude and velocity are larger for ultrarelativistic electrons than for nonrelativistic electrons. It is shown that the kinematic viscosity, the heavy dust grain number density, and the equilibrium ion number density play important roles in the basic features of the produced DIA shocks and the associated electric fields. The implications of our results for dense plasmas in astrophysical objects (e.g., non-rotating white dwarf stars) are briefly discussed.

  9. Kinematic dust viscosity effect on linear and nonlinear dust-acoustic waves in space dusty plasmas with nonthermal ions

    SciTech Connect

    El-Hanbaly, A. M.; Sallah, M.; El-Shewy, E. K.; Darweesh, H. F.

    2015-10-15

    Linear and nonlinear dust-acoustic (DA) waves are studied in a collisionless, unmagnetized and dissipative dusty plasma consisting of negatively charged dust grains, Boltzmann-distributed electrons, and nonthermal ions. The normal mode analysis is used to obtain a linear dispersion relation illustrating the dependence of the wave damping rate on the carrier wave number, the dust viscosity coefficient, the ratio of the ion temperature to the electron temperature, and the nonthermal parameter. The plasma system is analyzed nonlinearly via the reductive perturbation method that gives the KdV-Burgers equation. Some interesting physical solutions are obtained to study the nonlinear waves. These solutions include a soliton, a combination of a shock and a soliton, and monotonic and oscillatory shock waves. Their behaviors are illustrated and shown graphically. The characteristics of the DA solitary and shock waves are significantly modified by the presence of nonthermal (fast) ions, the ratio of the ion temperature to the electron temperature, and the dust kinematic viscosity. The topology of the phase portrait and the potential diagram of the KdV-Burgers equation is illustrated, whose advantage is the ability to predict different classes of traveling wave solutions according to different phase orbits. The energy of the soliton wave and the electric field are calculated. The results in this paper can be generalized to analyze the nature of plasma waves in both space and laboratory plasma systems.
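
    For reference, a KdV-Burgers equation obtained by a reductive perturbation expansion is typically written in the standard form below (LaTeX). The coefficients A, B, and C depend on the plasma parameters (nonthermal ion fraction, temperature ratio, dust kinematic viscosity) and are not given in the abstract, so this is only the generic shape of the equation, not the paper's specific result.

      \frac{\partial \phi_1}{\partial \tau}
        + A\,\phi_1\,\frac{\partial \phi_1}{\partial \xi}
        + B\,\frac{\partial^{3} \phi_1}{\partial \xi^{3}}
        - C\,\frac{\partial^{2} \phi_1}{\partial \xi^{2}} = 0

    Here \phi_1 is the first-order potential perturbation and (\xi, \tau) are the stretched coordinates; the dispersive term (B) and the dissipative term (C) together admit the oscillatory and monotonic shock solutions described above.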

  10. Kinematic dust viscosity effect on linear and nonlinear dust-acoustic waves in space dusty plasmas with nonthermal ions

    NASA Astrophysics Data System (ADS)

    El-Hanbaly, A. M.; Sallah, M.; El-Shewy, E. K.; Darweesh, H. F.

    2015-10-01

    Linear and nonlinear dust-acoustic (DA) waves are studied in a collisionless, unmagnetized and dissipative dusty plasma consisting of negatively charged dust grains, Boltzmann-distributed electrons, and nonthermal ions. The normal mode analysis is used to obtain a linear dispersion relation illustrating the dependence of the wave damping rate on the carrier wave number, the dust viscosity coefficient, the ratio of the ion temperature to the electron temperature, and the nonthermal parameter. The plasma system is analyzed nonlinearly via the reductive perturbation method that gives the KdV-Burgers equation. Some interesting physical solutions are obtained to study the nonlinear waves. These solutions include a soliton, a combination of a shock and a soliton, and monotonic and oscillatory shock waves. Their behaviors are illustrated and shown graphically. The characteristics of the DA solitary and shock waves are significantly modified by the presence of nonthermal (fast) ions, the ratio of the ion temperature to the electron temperature, and the dust kinematic viscosity. The topology of the phase portrait and the potential diagram of the KdV-Burgers equation is illustrated, whose advantage is the ability to predict different classes of traveling wave solutions according to different phase orbits. The energy of the soliton wave and the electric field are calculated. The results in this paper can be generalized to analyze the nature of plasma waves in both space and laboratory plasma systems.

  11. Near noise field characteristics of Nike rocket motors for application to space vehicle payload acoustic qualification

    NASA Technical Reports Server (NTRS)

    Hilton, D. A.; Bruton, D.

    1977-01-01

    Results of a series of noise measurements that were made under controlled conditions during the static firing of two Nike solid propellant rocket motors are presented. The usefulness of these motors as sources for general spacecraft noise testing was assessed, and the noise expected in the cargo bay of the orbiter was reproduced. Brief descriptions of the Nike motor, the general procedures utilized for the noise tests, and representative noise data including overall sound pressure levels, one third octave band spectra, and octave band spectra were reviewed. Data are presented on two motors of different ages in order to show the similarity between noise measurements made on motors having different loading dates. The measured noise from these tests is then compared to that estimated for the space shuttle orbiter cargo bay.

  12. Estimating pore-space gas hydrate saturations from well log acoustic data

    USGS Publications Warehouse

    Lee, Myung W.; Waite, William F.

    2008-01-01

    Relating pore-space gas hydrate saturation to sonic velocity data is important for remotely estimating gas hydrate concentration in sediment. In the present study, sonic velocities of gas hydrate–bearing sands are modeled using a three-phase Biot-type theory in which sand, gas hydrate, and pore fluid form three homogeneous, interwoven frameworks. This theory is developed using well log compressional and shear wave velocity data from the Mallik 5L-38 permafrost gas hydrate research well in Canada and applied to well log data from hydrate-bearing sands in the Alaskan permafrost, Gulf of Mexico, and northern Cascadia margin. Velocity-based gas hydrate saturation estimates are in good agreement with nuclear magnetic resonance and resistivity log estimates over the complete range of observed gas hydrate saturations.

  13. Identification of American English vowels by native Japanese speakers: Talker-and-token-based analysis

    NASA Astrophysics Data System (ADS)

    Nozawa, Takeshi; Frieda, Elaina M.; Wayland, Ratree

    2005-09-01

    Native speakers of Japanese identified the American English vowels /i, ɪ, ɛ, æ, ɑ, ʌ/ produced by four female native speakers of American English in /CVC/ contexts. Native speakers of American English served as the control group, and they outperformed the Japanese subjects in identifying all the English vowels in every /CVC/ context. In another experiment the Japanese subjects equated these English vowels with Japanese vowels. In general, English vowels were equated with phonetically close Japanese vowels, but a significant talker effect was observed. The /i/ tokens equated with the Japanese long high front vowel /ii/ were much more correctly identified as /i/ than those equated with the Japanese short high front vowel /i/. These tokens were more often misidentified as /ɪ/. The /ɑ/ and /ʌ/ tokens were predominantly equated with the Japanese low vowel /a/. The percent-correct identification of /ɑ/ and /ʌ/ was low in most of the /CVC/ contexts; these two vowels were often misidentified as each other, and the Japanese subjects' latency before deciding what vowel they had heard was longer when /ɑ/ or /ʌ/ tokens were presented. The Japanese subjects do not seem to have salient cues to differentiate /ɑ/ and /ʌ/.

  14. The influence of different native language systems on vowel discrimination and identification

    NASA Astrophysics Data System (ADS)

    Kewley-Port, Diane; Bohn, Ocke-Schwen; Nishi, Kanae

    2005-04-01

    The ability to identify the vowel sounds of a language reliably depends on the ability to discriminate between vowels at a more sensory level. This study examined how the complexity of the vowel systems of three native languages (L1s) influenced listeners' perception of American English (AE) vowels. AE has a fairly complex vowel system with 11 monophthongs. In contrast, Japanese has only 5 spectrally different vowels, while Swedish has 9 and Danish has 12. Six listeners from each L1, each with less than 4 months of exposure to English-speaking environments, participated. Their performance in two tasks was compared to that of 6 AE listeners. As expected, there were large differences in a linguistic identification task using 4 confusable AE low vowels. Japanese listeners performed quite poorly compared to listeners with more complex L1 vowel systems. Thresholds for formant discrimination for the 3 groups were very similar to those of native AE listeners. Thus it appears that sensory abilities for discriminating vowels are only slightly affected by native vowel systems, and that vowel confusions occur at a more central, linguistic level. [Work supported by funding from NIHDCD-02229 and the American-Scandinavian Foundation.]

  15. Who's talking now? Infants' perception of vowels with infant vocal properties.

    PubMed

    Polka, Linda; Masapollo, Matthew; Ménard, Lucie

    2014-07-01

    Little is known about infants' abilities to perceive and categorize their own speech sounds or vocalizations produced by other infants. In the present study, prebabbling infants were habituated to /i/ ("ee") or /a/ ("ah") vowels synthesized to simulate men, women, and children, and then were presented with new instances of the habituation vowel and a contrasting vowel on different trials, with all vowels simulating infant talkers. Infants showed greater recovery of interest to the contrasting vowel than to the habituation vowel, which demonstrates recognition of the habituation-vowel category when it was produced by an infant. A second experiment showed that encoding the vowel category and detecting the novel vowel required additional processing when infant vowels were included in the habituation set. Despite these added cognitive demands, infants demonstrated the ability to track vowel categories in a multitalker array that included infant talkers. These findings raise the possibility that young infants can categorize their own vocalizations, which has important implications for early vocal learning. PMID:24890498

  16. Stimulus Variability and Perceptual Learning of Nonnative Vowel Categories

    ERIC Educational Resources Information Center

    Brosseau-Lapre, Francoise; Rvachew, Susan; Clayards, Meghan; Dickson, Daniel

    2013-01-01

    English-speakers' learning of a French vowel contrast (/ə/-/ø/) was examined under six different stimulus conditions in which contrastive and noncontrastive stimulus dimensions were varied orthogonally to each other. The distribution of contrastive cues was varied across training conditions to create single prototype, variable far…

  17. Comparison of Nasal Acceleration and Nasalance across Vowels

    ERIC Educational Resources Information Center

    Thorp, Elias B.; Virnik, Boris T.; Stepp, Cara E.

    2013-01-01

    Purpose: The purpose of this study was to determine the performance of normalized nasal acceleration (NNA) relative to nasalance as estimates of nasalized versus nonnasalized vowel and sentence productions. Method: Participants were 18 healthy speakers of American English. NNA was measured using a custom sensor, and nasalance was measured using…

  18. Hemispheric Differences in the Effects of Context on Vowel Perception

    ERIC Educational Resources Information Center

    Sjerps, Matthias J.; Mitterer, Holger; McQueen, James M.

    2012-01-01

    Listeners perceive speech sounds relative to context. Contextual influences might differ over hemispheres if different types of auditory processing are lateralized. Hemispheric differences in contextual influences on vowel perception were investigated by presenting speech targets and both speech and non-speech contexts to listeners' right or left…

  19. Vowel Epenthesis and Segment Identity in Korean Learners of English

    ERIC Educational Resources Information Center

    de Jong, Kenneth; Park, Hanyong

    2012-01-01

    Recent literature has sought to understand the presence of epenthetic vowels after the productions of postvocalic word-final consonants by second language (L2) learners whose first languages (L1s) restrict the presence of obstruents in coda position. Previous models include those in which epenthesis is seen as a strategy to mitigate the effects of…

  20. Vowels in Early Words: An Event-Related Potential Study

    ERIC Educational Resources Information Center

    Mani, Nivedita; Mills, Debra L.; Plunkett, Kim

    2012-01-01

    Previous behavioural research suggests that infants possess phonologically detailed representations of the vowels and consonants in familiar words. These tasks examine infants' sensitivity to mispronunciations of a target label in the presence of a target and distracter image. Sensitivity to the mispronunciation may, therefore, be contaminated by…

  1. Finding Words in a Language that Allows Words without Vowels

    ERIC Educational Resources Information Center

    El Aissati, Abder; McQueen, James M.; Cutler, Anne

    2012-01-01

    Across many languages from unrelated families, spoken-word recognition is subject to a constraint whereby potential word candidates must contain a vowel. This constraint minimizes competition from embedded words (e.g., in English, disfavoring "win" in "twin" because "t" cannot be a word). However, the constraint would be counter-productive in…

  2. Effect of Audio vs. Video on Aural Discrimination of Vowels

    ERIC Educational Resources Information Center

    McCrocklin, Shannon

    2012-01-01

    Despite the growing use of media in the classroom, the effect of using audio versus video in pronunciation teaching has been largely ignored. To analyze the impact of the use of audio or video training on aural discrimination of vowels, 61 participants (all students at a large American university) took a pre-test followed by two training…

  3. Auditory Spectral Integration in the Perception of Static Vowels

    ERIC Educational Resources Information Center

    Fox, Robert Allen; Jacewicz, Ewa; Chang, Chiung-Yun

    2011-01-01

    Purpose: To evaluate potential contributions of broadband spectral integration in the perception of static vowels. Specifically, can the auditory system infer formant frequency information from changes in the intensity weighting across harmonics when the formant itself is missing? Does this type of integration produce the same results in the lower…

  4. Comparing Identification of Standardized and Regionally Valid Vowels

    ERIC Educational Resources Information Center

    Wright, Richard; Souza, Pamela

    2012-01-01

    Purpose: In perception studies, it is common to use vowel stimuli from standardized recordings or synthetic stimuli created using values from well-known published research. Although the use of standardized stimuli is convenient, unconsidered dialect and regional accent differences may introduce confounding effects. The goal of this study was to…

  5. Vowel Harmony in Palestinian Arabic: A Metrical Perspective.

    ERIC Educational Resources Information Center

    Abu-Salim, I. M.

    1987-01-01

    The autosegmental rule of vowel harmony (VH) in Palestinian Arabic is shown to be constrained simultaneously by metrical and segmental boundaries. The indicative prefix bi- is no longer an exception to VH if a structure is assumed that disallows the prefix from sharing a foot with the stem, consequently blocking VH. (Author/LMO)

  6. Competing Triggers: Transparency and Opacity in Vowel Harmony

    ERIC Educational Resources Information Center

    Kimper, Wendell A.

    2011-01-01

    This dissertation takes up the issue of transparency and opacity in vowel harmony--that is, when a segment is unable to undergo a harmony process, will it be skipped over by harmony (transparent) or will it prevent harmony from propagating further (opaque)? I argue that the choice between transparency and opacity is best understood as a…

  7. Lingual Electromyography Related to Tongue Movements in Swedish Vowel Production.

    ERIC Educational Resources Information Center

    Hirose, Hajime; And Others

    1979-01-01

    In order to investigate the articulatory dynamics of the tongue in the production of Swedish vowels, electromyographic (EMG) and X-ray microbeam studies were performed on a native Swedish subject. The EMG signals were used to obtain an average indication of the muscle activity of the tongue as a function of time. (NCR)

  8. Short Vowels. Fun with Phonics! Book 4. Grades K-1.

    ERIC Educational Resources Information Center

    Eaton, Deborah

    This book is a hands-on activity resource for kindergarten and first grade that makes phonics instruction easy and fun for teachers and children in the classroom. The book offers methods for practice, reinforcement, and assessment of phonetic skills using a poem as a foundation for teaching short vowels. The poem is duplicated so children can work…

  9. Does Size Matter? Subsegmental Cues to Vowel Mispronunciation Detection

    ERIC Educational Resources Information Center

    Mani, Nivedita; Plunkett, Kim

    2011-01-01

    Children look longer at a familiar object when presented with either correct pronunciations or small mispronunciations of consonants in the object's label, but not following larger mispronunciations. The current article examines whether children display a similar graded sensitivity to different degrees of mispronunciations of the vowels in…

  10. Experimental Approach to the Study of Vowel Perception in German

    ERIC Educational Resources Information Center

    Wangler, Hans-Heinrich; Weiss, Rudolf

    1975-01-01

    An experimental phonetic investigation is described whose goal it was to develop a test which could be used to establish norms in the perception of vowels by native speakers of German. Particular emphasis is placed upon the design of the experiment. The test procedure and the results are discussed. Available from Albert J. Phiebig, Inc., P.O. Box…

  11. Fricatives, Affricates, and Vowels in Croatian Children with Cochlear Implants

    ERIC Educational Resources Information Center

    Mildner, Vesna; Liker, Marko

    2008-01-01

    The aim of the research was to analyse the speech of children with cochlear implants over approximately a 46-month period, and compare it with the speech of hearing controls. It focused on three categories of sounds in Croatian: vowels (F1 and F2 of /i/, /e/, /a/, /o/ and /u/), fricatives /s/ and /ʃ/ (spectral differences expressed in terms of…

  12. Vowel Harmony: A Variable Rule in Brazilian Portuguese.

    ERIC Educational Resources Information Center

    Bisol, Leda

    1989-01-01

    Examines vowel harmony in the "Gaucho dialect" of the Brazilian state of Rio Grande do Sul. Informants from four areas of the state were studied: the capital city (Porto Alegre), the border region with Uruguay, and two areas of the interior populated by descendants of nineteenth-century immigrants from Europe, mainly Germans and Italians. (VWL)

  13. Influence of lips on the production of vowels based on finite element simulations and experiments.

    PubMed

    Arnela, Marc; Blandin, Rémi; Dabbaghchian, Saeed; Guasch, Oriol; Alías, Francesc; Pelorson, Xavier; Van Hirtum, Annemie; Engwall, Olov

    2016-05-01

    Three-dimensional (3-D) numerical approaches for voice production are currently being investigated and developed. Radiation losses produced when sound waves emanate from the mouth aperture are one of the key aspects to be modeled. When doing so, the lips are usually removed from the vocal tract geometry in order to impose a radiation impedance on a closed cross-section, which speeds up the numerical simulations compared to free-field radiation solutions. However, lips may play a significant role. In this work, the lips' effects on vowel sounds are investigated by using 3-D vocal tract geometries generated from magnetic resonance imaging. To this end, two configurations for the vocal tract exit are considered: with lips and without lips. The acoustic behavior of each is analyzed and compared by means of time-domain finite element simulations that allow free-field wave propagation and experiments performed using 3-D-printed mechanical replicas. The results show that the lips should be included in order to correctly model vocal tract acoustics not only at high frequencies, as commonly accepted, but also in the low frequency range below 4 kHz, where plane wave propagation occurs. PMID:27250177
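
    When the lips are removed and a radiation impedance is imposed on the mouth cross-section, a classical choice is the impedance of a baffled circular piston; the sketch below evaluates that textbook expression for a mouth-sized aperture. This is a generic approximation offered for orientation, not the specific boundary condition implemented in the paper, and the radius and frequency range are illustrative.

      import numpy as np
      from scipy.special import j1, struve

      rho, c = 1.2, 350.0          # air density (kg/m^3) and speed of sound (m/s)
      radius = 0.01                # 1-cm effective mouth radius, illustrative value
      area = np.pi * radius ** 2

      freqs = np.linspace(100.0, 4000.0, 40)     # frequency range of interest (Hz)
      ka = 2 * np.pi * freqs / c * radius

      # Baffled circular piston: Z = (rho*c/A) * [1 - J1(2ka)/(ka) + j*H1(2ka)/(ka)],
      # with J1 the Bessel function and H1 the Struve function of order one.
      z_radiation = (rho * c / area) * (1.0 - j1(2 * ka) / ka + 1j * struve(1, 2 * ka) / ka)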

  14. Speech of cochlear implant patients: a longitudinal study of vowel production.

    PubMed

    Perkell, J; Lane, H; Svirsky, M; Webster, J

    1992-05-01

    Acoustic parameters were measured for vowels spoken in /hVd/ context by four postlingually deafened recipients of multichannel (Ineraid) cochlear implants. Three of the subjects became totally deaf in adulthood after varying periods of partial hearing loss; the fourth became totally deaf at age four. The subjects received different degrees of perceptual benefit from the prosthesis. Recordings were made before, and at intervals following, speech processor activation. The measured parameters included F1, F2, F0, SPL, duration, and the amplitude difference between the first two harmonic peaks in the log magnitude spectrum (H1-H2). Numerous changes in parameter values were observed from pre- to post-implant, with differences among subjects. Many changes, but not all, were in the direction of normative data, and most changes were consistent with hypotheses about relations among the parameters. Some of the changes tended to enhance phonemic contrasts; others had the opposite effect. For three subjects, H1-H2 changed in a direction consistent with measurements of their average air flow when reading; that relation was more complex for the fourth subject. The results are interpreted with respect to: characteristics of the individual subjects, including vowel identification scores; mechanical interactions among glottal and supraglottal articulations; and hypotheses about the role of auditory feedback in the control of speech production. Almost all the observed differences could be attributed to changes in the average settings of speaking rate, F0 and SPL, which presumably can be perceived without the need for spectral place information. Some observed F2 realignment may be attributable to the reception of spectral cues. PMID:1629489
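
    Among the listed parameters, H1-H2 is the least familiar; one common way to estimate it is the level difference between the spectral peaks nearest F0 and 2F0 in a windowed FFT, as sketched below. The frame length, window, and peak-search tolerance are illustrative choices, not the procedure used in the study.

      import numpy as np

      def h1_minus_h2(frame, fs, f0, search_hz=20.0):
          """Estimate H1-H2 (dB) from one voiced frame with known fundamental f0."""
          windowed = frame * np.hanning(len(frame))
          spectrum = np.abs(np.fft.rfft(windowed))
          log_mag = 20.0 * np.log10(spectrum + 1e-12)
          freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)

          def peak_near(target):
              band = (freqs > target - search_hz) & (freqs < target + search_hz)
              return log_mag[band].max()

          return peak_near(f0) - peak_near(2.0 * f0)

      # Example on a synthetic glottal-like waveform with a falling harmonic spectrum.
      fs, f0 = 16000, 120.0
      t = np.arange(int(0.04 * fs)) / fs
      frame = sum((0.8 ** k) * np.sin(2 * np.pi * k * f0 * t) for k in range(1, 6))
      print(round(h1_minus_h2(frame, fs, f0), 1), "dB")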

  15. Effects of speaking condition and hearing status on vowel production in postlingually deaf adults with cochlear implant

    NASA Astrophysics Data System (ADS)

    Menard, Lucie; Denny, Margaret; Lane, Harlan; Matthies, Melanie L.; Perkell, Joseph S.; Stockmann, Ellen; Vick, Jennell; Zandipour, Majid; Balkany, Thomas; Polack, Marek; Tiede, Mark K.

    2005-09-01

    This study investigates the effects of speaking condition and hearing status on vowel production by postlingually deafened adults. Thirteen cochlear implant users produced repetitions of nine American English vowels in three time samples (prior to implantation, one month, and one year after implantation); in three speaking conditions (clear, normal, and fast); and in two feedback conditions after implantation (implant processor turned on and off). Ten speakers with normal hearing were also recorded. Results show that vowel contrast, in the F1 versus F2 space, in mels, does not differ from the pre-implant stage to the 1-month stage. This finding indicates that shortly after implantation, speakers had not had enough experience with hearing from the implant to adequately retune their auditory feedback systems and use auditory feedback to improve feedforward commands. After 1 year of implant use, contrast distances had increased in both feedback conditions (processor on and off), indicating improvement in feedforward commands for phoneme production. Furthermore, after long-term auditory deprivation, speakers were producing differences in contrast between clear and fast conditions in the range of those found for normal-hearing speakers, leading to the inference that maintenance of these distinctions is not affected by hearing status. [Research supported by NIDCD.]
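
    The contrast measure referred to here, distances between vowels in an F1 versus F2 space expressed in mels, can be sketched as follows. The mel conversion uses the common 2595 * log10(1 + f/700) formula, the average-pairwise-distance summary is only one plausible reading of "contrast distance" (the abstract does not spell out the exact metric), and the formant values are invented for illustration.

      import numpy as np
      from itertools import combinations

      def hz_to_mel(f_hz):
          """Common mel-scale conversion (O'Shaughnessy formula)."""
          return 2595.0 * np.log10(1.0 + np.asarray(f_hz, dtype=float) / 700.0)

      def average_contrast_mel(formants_hz):
          """Mean pairwise Euclidean distance between vowels in mel F1-F2 space."""
          mel_points = [hz_to_mel(fv) for fv in formants_hz]
          dists = [np.linalg.norm(a - b) for a, b in combinations(mel_points, 2)]
          return float(np.mean(dists))

      # Invented (F1, F2) values in Hz for a handful of American English vowels.
      vowels = {"i": (300, 2300), "ae": (660, 1720), "a": (710, 1100), "u": (320, 870)}
      print(round(average_contrast_mel(list(vowels.values())), 1), "mel")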

  16. Liquid Rocket Booster (LRB) for the Space Transportation System (STS) systems study. Appendix B: Liquid rocket booster acoustic and thermal environments

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The ascent thermal environment and propulsion acoustic sources are described for the Liquid Rocket Boosters (LRBs) designed by Martin Marietta Corporation for use with the Space Shuttle Orbiter and External Tank. Two designs were proposed: one using a pump-fed propulsion system and the other using a pressure-fed propulsion system. Both designs use LOX/RP-1 propellants, but differences in performance of the two propulsion systems produce significant differences in the proposed stage geometries, exhaust plumes, and resulting environments. The general characteristics of the two designs which are significant for environmental predictions are described. The methods of analysis and predictions for environments in acoustics, aerodynamic heating, and base heating (from exhaust plume effects) are also described. The acoustic section will compare the proposed exhaust plumes with the current SRB from the standpoint of acoustics and ignition overpressure. The sections on thermal environments will provide details of the LRB heating rates and indications of possible changes in the Orbiter and ET environments as a result of the change from SRBs to LRBs.

  17. Acoustic correlates of vocal quality.

    PubMed

    Eskenazi, L; Childers, D G; Hicks, D M

    1990-06-01

    We have investigated the relationship between various voice qualities and several acoustic measures made from the vowel /i/ phonated by subjects with normal voices and patients with vocal disorders. Among the patients (pathological voices), five qualities were investigated: overall severity, hoarseness, breathiness, roughness, and vocal fry. Six acoustic measures were examined. With one exception, all measures were extracted from the residue signal obtained by inverse filtering the speech signal using the linear predictive coding (LPC) technique. A formal listening test was implemented to rate each pathological voice for each vocal quality. A formal listening test also rated overall excellence of the normal voices. A scale of 1-7 was used. Multiple linear regression analysis between the results of the listening test and the various acoustic measures was used with the prediction sums of squares (PRESS) as the selection criteria. Useful prediction equations of order two or less were obtained relating certain acoustic measures and the ratings of pathological voices for each of the five qualities. The two most useful parameters for predicting vocal quality were the Pitch Amplitude (PA) and the Harmonics-to-Noise Ratio (HNR). No acoustic measure could rank the normal voices. PMID:2359270
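
    The analysis described here starts from the LPC residue obtained by inverse filtering the speech signal; a minimal sketch of that step, using autocorrelation-method LPC solved through a Toeplitz system and an FIR inverse filter, is given below. The order, frame handling, and synthetic test frame are illustrative, and the study's residue-based measures themselves (pitch amplitude, HNR) are not reproduced.

      import numpy as np
      from scipy.linalg import solve_toeplitz
      from scipy.signal import lfilter

      def lpc_residual(frame, order=12):
          """Inverse-filter one speech frame with autocorrelation-method LPC."""
          frame = frame * np.hamming(len(frame))
          # Autocorrelation sequence r[0..order]
          full = np.correlate(frame, frame, mode="full")
          r = full[len(frame) - 1: len(frame) + order]
          # Solve the normal equations R a = r[1:] for the predictor coefficients.
          a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
          # Inverse (analysis) filter A(z) = 1 - sum a_k z^-k gives the residue.
          return lfilter(np.concatenate(([1.0], -a)), [1.0], frame)

      # Example on a synthetic vowel-like frame (decaying harmonics of 120 Hz).
      fs = 16000
      t = np.arange(int(0.03 * fs)) / fs
      frame = sum((0.7 ** k) * np.sin(2 * np.pi * 120 * k * t) for k in range(1, 8))
      residue = lpc_residual(frame)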

  18. Modeling of aerodynamic interaction between vocal folds and vocal tract during production of a vowel-voiceless plosive-vowel sequence.

    PubMed

    Delebecque, Louis; Pelorson, Xavier; Beautemps, Denis

    2016-01-01

    The context of this study is the physical modeling of speech production. The objective is, by using a mechanical replica of the vocal tract, to test quantitatively an aerodynamic model of the interaction between the vocal folds and the vocal tract during the production of a vowel-voiceless plosive-vowel sequence. The first step is to perform acoustic and aerodynamic measurements on a speaker during the production of an /apa/ sequence. The aperture and width of the lips are also derived from a high-speed video recording of the subject's face. Theoretical models to describe the flow through the lips and the effect of an expansion of the supraglottal cavity are proposed and validated by comparison with measurements made using a self-oscillating replica of the phonatory system. Finally, using these models, numerical simulations of an /apa/ sequence are performed using the measured lip parameters as the only time-varying input parameters. The results of these simulations suggest that the realization of an occlusion of the vocal tract produces a passive increase in glottal area associated with a voice offset and that the expansion of the supraglottal cavity is responsible for the extension of the phonation up to 40 ms after closure of the lips. PMID:26827030

  19. AST Launch Vehicle Acoustics

    NASA Technical Reports Server (NTRS)

    Houston, Janice; Counter, D.; Giacomoni, D.

    2015-01-01

    The liftoff phase induces acoustic loading over a broad frequency range for a launch vehicle. These external acoustic environments are then used in the prediction of internal vibration responses of the vehicle and components which result in the qualification levels. Thus, predicting these liftoff acoustic (LOA) environments is critical to the design requirements of any launch vehicle. If there is a significant amount of uncertainty in the predictions or if acoustic mitigation options must be implemented, a subscale acoustic test is a feasible pre-launch test option to verify the LOA environments. The NASA Space Launch System (SLS) program initiated the Scale Model Acoustic Test (SMAT) to verify the predicted SLS LOA environments and to determine the acoustic reduction with an above deck water sound suppression system. The SMAT was conducted at Marshall Space Flight Center and the test article included a 5% scale SLS vehicle model, tower and Mobile Launcher. Acoustic and pressure data were measured by approximately 250 instruments. The SMAT liftoff acoustic results are presented, findings are discussed and a comparison is shown to the Ares I Scale Model Acoustic Test (ASMAT) results.

  20. Space-Time Correlation of Stable Boundary-Layer, Weak Wind Data from Ground Based Acoustic Sensors

    NASA Astrophysics Data System (ADS)

    Smoot, A. R.; Thomas, C. K.

    2011-12-01

    We present data collected using ground-based acoustic sensing in order to connect near-surface motions, including turbulence and sub-meso modes under stable, weak wind conditions, to possible external forcing mechanisms from aloft. Under stable stratification and weak wind conditions, the generation of the weak, intermittent turbulence is poorly understood but critical to understanding and modeling the dispersion and diffusion of pollutants and other trace gases. Recent studies have suggested that the driving processes behind weak wind turbulence may include external forcing on the sub-meso scale. The forcing mechanisms may include gravity waves, two-dimensional horizontal modes, solitons, or interactions between surface flow and low-level jets. Efforts to detect weak wind, sub-meso scale processes have so far failed due to a lack of the spatial coverage necessary for capturing these events. This research has taken an unconventional observational approach by using a pair of SODAR (Sound Detection And Ranging) units. The SODARs have collected data on short time scales with significant vertical (15-300 meters) and horizontal (200-1000 meters) coverage. The experiment took place on Oregon State University's Research Farms, located roughly a mile to the east of OSU's campus. The site was chosen for its homogeneous terrain, which allowed the two SODARs to be separated across the domain without their measurements being contaminated by influence from surface heterogeneity. The experiment has provided a data set comprising more than 3 months of semi-continuous SODAR data. By making use of the multiresolution decomposition method, we will present results on the space-time correlations of the boundary-layer winds on multiple time scales. The results will be a significant step towards improving the predictability of weak wind meanderings, identifying scaling parameters for sub-meso scale motions, and helping to improve air quality and diffusion models.
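
    The multiresolution decomposition mentioned at the end partitions the variance of a record whose length is a power of two into contributions from non-overlapping averaging windows of dyadic width. A hedged numpy sketch of the basic variance version is shown below, with a synthetic random-walk series standing in for the SODAR winds; it is a generic Haar-style decomposition, not necessarily the exact variant used by the authors.

      import numpy as np

      def multiresolution_variance(x):
          """Haar-style multiresolution decomposition of variance by dyadic scale.

          Returns (window_lengths, variance_contributions); x must have length 2**M.
          """
          x = np.asarray(x, dtype=float)
          m = int(np.log2(len(x)))
          assert len(x) == 2 ** m, "series length must be a power of two"
          scales, contributions = [], []
          residual = x - x.mean()                      # remove the record mean first
          for level in range(m, 0, -1):
              width = 2 ** (level - 1)                 # averaging-window length at this scale
              means = residual.reshape(-1, width).mean(axis=1)
              contributions.append(np.mean(means ** 2))
              scales.append(width)
              residual = residual - np.repeat(means, width)   # pass the residual to finer scales
          return np.array(scales), np.array(contributions)

      # Example with 2**12 samples of synthetic "wind" data.
      rng = np.random.default_rng(1)
      series = np.cumsum(rng.standard_normal(4096)) * 0.01
      widths, var_parts = multiresolution_variance(series)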

  1. The effect of hearing loss on identification of asynchronous double vowels.

    PubMed

    Lentz, Jennifer J; Marsh, Shavon L

    2006-12-01

    This study determined whether listeners with hearing loss received reduced benefits due to an onset asynchrony between sounds. Seven normal-hearing listeners and 7 listeners with hearing impairment (HI) were presented with 2 synthetic, steady-state vowels. One vowel (the late-arriving vowel) was 250 ms in duration, and the other (the early-arriving vowel) varied in duration between 350 and 550 ms. The vowels had simultaneous offsets, and therefore an onset asynchrony between the 2 vowels ranged between 100 and 300 ms. The early-arriving and late-arriving vowels also had either the same or different fundamental frequencies. Increases in onset asynchrony and differences in fundamental frequency led to better vowel-identification performance for both groups, with listeners with HI benefiting less from onset asynchrony than normal-hearing listeners. The presence of fundamental frequency differences did not influence the benefit received from onset asynchrony for either group. Excitation pattern modeling indicated that the reduced benefit received from onset asynchrony was not easily predicted by the reduced audibility of the vowel sounds for listeners with HI. Therefore, suprathreshold factors such as loss of the cochlear nonlinearity, reduced temporal integration, and the perception of vowel dominance probably play a greater role in the reduced benefit received from onset asynchrony in listeners with HI. PMID:17197501

  2. The Time Course for Processing Vowels and Lexical Tones: Reading Aloud Thai Words.

    PubMed

    Davis, Chris; Schoknecht, Colin; Kim, Jeesun; Burnham, Denis

    2016-06-01

    Three naming aloud experiments and a lexical decision (LD) experiment used masked priming to index the processing of written Thai vowels and tones. Thai allows for manipulation of the mapping between orthography and phonology not possible in other orthographies, for example, the use of consonants, vowels and tone markers in both horizontal and vertical orthographic positions (HOPs and VOPs). Experiment 1 showed that changing a vowel between prime and target slowed down target naming but changing a tone mark did not. Experiment 1 used an across-item design and a different number of HOPs in the way vowels and tones were specified. Experiment 2 used a within-item design and tested vowel and tone changes for both 2-HOP and 3-HOP targets separately. The 3-HOP words showed the same tone and vowel change effect as Experiment 1, whereas 2-HOP items did not. It was speculated that the 2-HOP result was due to the variable position of the vowel affecting priming. Experiment 3 employed a more stringent control over the 2-HOP vowel and tone items and found priming for the tone changes but not for vowel changes. The final experiment retested the items from Experiment 3 with the LD task and found no priming for the tone change items, indicating that the tone effect in Experiment 3 was due to processes involved in naming aloud. In all, the results supported the view that, for naming a word, the development of tone information is slower than that of vowel information. PMID:27363253

  3. Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex

    PubMed Central

    Micheyl, Christophe; Steinschneider, Mitchell

    2016-01-01

    Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and consist of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate “auditory objects” with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which, in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas. PMID:27294198

  4. Acoustic Predictors of Intelligibility for Segmentally Interrupted Speech: Temporal Envelope, Voicing, and Duration

    ERIC Educational Resources Information Center

    Fogerty, Daniel

    2013-01-01

    Purpose: Temporal interruption limits the perception of speech to isolated temporal glimpses. An analysis was conducted to determine the acoustic parameter that best predicts speech recognition from temporal fragments that preserve different types of speech information--namely, consonants and vowels. Method: Young listeners with normal hearing…

  5. Articulatory-to-Acoustic Relations in Talkers with Dysarthria: A First Analysis

    ERIC Educational Resources Information Center

    Mefferd, Antje

    2015-01-01

    Purpose: The primary purpose of this study was to determine the strength of interspeaker and intraspeaker articulatory-to-acoustic relations of vowel contrast produced by talkers with dysarthria and controls. Methods: Six talkers with amyotrophic lateral sclerosis (ALS), six talkers with Parkinson's disease (PD), and 12 controls repeated a…

  6. Acoustic Cues to Perception of Word Stress by English, Mandarin, and Russian Speakers

    ERIC Educational Resources Information Center

    Chrabaszcz, Anna; Winn, Matthew; Lin, Candise Y.; Idsardi, William J.

    2014-01-01

    Purpose: This study investigated how listeners' native language affects their weighting of acoustic cues (such as vowel quality, pitch, duration, and intensity) in the perception of contrastive word stress. Method: Native speakers (N = 45) of typologically diverse languages (English, Russian, and Mandarin) performed a stress identification…

  7. Toward the Development of an Objective Index of Dysphonia Severity: A Four-Factor Acoustic Model

    ERIC Educational Resources Information Center

    Awan, Shaheen N.; Roy, Nelson

    2006-01-01

    During assessment and management of individuals with voice disorders, clinicians routinely attempt to describe or quantify the severity of a patient's dysphonia. This investigation used acoustic measures derived from sustained vowel samples to predict dysphonia severity (as determined by auditory-perceptual ratings), for a diverse set of voice…

  8. The Acoustic and Perceptual Correlates of Emphasis in Urban Jordanian Arabic

    ERIC Educational Resources Information Center

    Al-Masri, Mohammad

    2009-01-01

    Acoustic and perceptual correlates of emphasis, a secondary articulation in the posterior vocal tract, in Urban Jordanian Arabic were studied. CVC monosyllables and CV.CVC bisyllables with emphatic and plain target consonants in word-initial, word-medial and word-final positions were examined. Spectral measurements on the target vowels at vowel…

  9. On the Synchronization of Acoustic Gravity Waves

    NASA Astrophysics Data System (ADS)

    Lonngren, Karl E.; Bai, Er-Wei

    Using the model proposed by Stenflo, we demonstrate that acoustic gravity waves in one region of space can be synchronized with those in another region using techniques from modern control theory.

  10. Sensitivity of envelope following responses to vowel polarity.

    PubMed

    Easwar, Vijayalakshmi; Beamish, Laura; Aiken, Steven; Choi, Jong Min; Scollie, Susan; Purcell, David

    2015-02-01

    Envelope following responses (EFRs) elicited by stimuli of opposite polarities are often averaged due to their insensitivity to polarity when elicited by amplitude-modulated tones. A recent report illustrates that individuals exhibit varying degrees of polarity-sensitive differences in EFR amplitude when elicited by vowel stimuli (Aiken and Purcell, 2013). The aims of the current study were to evaluate the incidence and degree of polarity-sensitive differences in EFRs recorded in a large group of individuals, and to examine potential factors influencing the polarity-sensitive nature of EFRs. In Experiment I of the present study, we evaluated the incidence and degree of polarity-sensitive differences in EFR amplitude in a group of 39 participants. EFRs were elicited by opposite polarities of the vowel /ε/ in a natural /hVd/ context presented at 80 dB SPL. Nearly 30% of the participants with detectable responses (n = 24) showed a difference of greater than ∼39 nV in EFR response amplitude between the two polarities that was not explained by variations in noise estimates. In Experiment II, we evaluated the effect of vowel, frequency of harmonics and presence of the first harmonic (h1) on the polarity sensitivity of EFRs in 20 participants with normal hearing. For vowels /u/, /a/ and /i/, EFRs were elicited by two simultaneously presented carriers representing the first formant (resolved harmonics), and the second and higher formants (unresolved harmonics). Individual but simultaneous EFRs were elicited by the formant carriers by separating the fundamental frequency in the two carriers by 8 Hz. Vowels were presented as part of a naturally produced, but modified sequence /susaʃi/, at an overall level of 65 dB SPL. To evaluate the effect of h1 on polarity sensitivity of EFRs, EFRs were elicited by the same vowels without h1 in an identical sequence. A repeated measures analysis of variance indicated a significant effect of polarity on EFR amplitudes for the

  11. Vowel production, speech-motor control, and phonological encoding in people who are lesbian, bisexual, or gay, and people who are not

    NASA Astrophysics Data System (ADS)

    Munson, Benjamin; Deboe, Nancy

    2003-10-01

    A recent study (Pierrehumbert, Bent, Munson, and Bailey, submitted) found differences in vowel production between people who are lesbian, bisexual, or gay (LBG) and people who are not. The specific differences (more fronted /u/ and /a/ in the non-LB women; an overall more-contracted vowel space in the non-gay men) were not amenable to an interpretation based on simple group differences in vocal-tract geometry. Rather, they suggested that differences were either due to group differences in some other skill, such as motor control or phonological encoding, or learned. This paper expands on this research by examining vowel production, speech-motor control (measured by diadochokinetic rates), and phonological encoding (measured by error rates in a tongue-twister task) in people who are LBG and people who are not. Analyses focus on whether the findings of Pierrehumbert et al. (submitted) are replicable, and whether group differences in vowel production are related to group differences in speech-motor control or phonological encoding. To date, 20 LB women, 20 non-LB women, 7 gay men, and 7 non-gay men have participated. Preliminary analyses suggest that there are no group differences in speech motor control or phonological encoding, suggesting that the earlier findings of Pierrehumbert et al. reflected learned behaviors.

  12. Capabilities, Design, Construction and Commissioning of New Vibration, Acoustic, and Electromagnetic Capabilities Added to the World's Largest Thermal Vacuum Chamber at NASA's Space Power Facility

    NASA Technical Reports Server (NTRS)

    Motil, Susan M.; Ludwiczak, Damian R.; Carek, Gerald A.; Sorge, Richard N.; Free, James M.; Cikanek, Harry A., III

    2011-01-01

    NASA's human space exploration plans developed under the Exploration System Architecture Studies in 2005 included a Crew Exploration Vehicle launched on an Ares I launch vehicle. The mass of the Crew Exploration Vehicle and the trajectory of the Ares I, coupled with the need to be able to abort across a large percentage of the trajectory, generated unprecedented testing requirements. A future lunar lander added to projected test requirements. In 2006, the basic test plan for Orion was developed. It included several types of environment tests typical of spacecraft development programs. These included thermal-vacuum, electromagnetic interference, mechanical vibration, and acoustic tests. Because of the size of the vehicle and unprecedented acoustics, NASA conducted an extensive assessment of options for testing and, as a result, chose to augment the Space Power Facility at NASA's Plum Brook Station of the John H. Glenn Research Center to provide the needed test capabilities. The augmentation included designing and building the world's highest-mass-capable vibration table, the highest-power large acoustic chamber, and adaptation of the existing world's largest thermal vacuum chamber as a reverberant electromagnetic interference test chamber. These augmentations were accomplished from 2007 through early 2011. Acceptance testing began in Spring 2011 and will be completed in the Fall of 2011. This paper provides an overview of the capabilities, design, construction and acceptance of this extraordinary facility.

  13. Comment on "Existence domains of slow and fast ion-acoustic solitons in two-ion space plasmas" [Phys. Plasmas 22, 032313 (2015)]

    NASA Astrophysics Data System (ADS)

    Olivier, C. P.; Maharaj, S. K.; Bharuthram, R.

    2016-06-01

    In a series of papers by Maharaj et al., including "Existence domains of slow and fast ion-acoustic solitons in two-ion space plasmas" [Phys. Plasmas 22, 032313 (2015)], incorrect expressions for the Sagdeev potential are presented. In this paper, we provide the correct expression of the Sagdeev potential. The correct expression was used to generate the numerical results for the above-mentioned series of papers, so that all results and conclusions are correct, despite the wrong Sagdeev potential expressions printed in the papers. The correct expression of the Sagdeev potential presented here is a very useful generic expression in the sense that a single expression can be used to study nonlinear structures associated with any acoustic mode, despite the fact that the supersonic and subsonic species would vary if solitons associated with different linear modes are studied.

  14. Automated Classification of Vowel Category and Speaker Type in the High-Frequency Spectrum

    PubMed Central

    Donai, Jeremy J.; Motiian, Saeid; Doretto, Gianfranco

    2016-01-01

    The high-frequency region of vowel signals (above the third formant or F3) has received little research attention. Recent evidence, however, has documented the perceptual utility of high-frequency information in the speech signal above the traditional frequency bandwidth known to contain important cues for speech and speaker recognition. The purpose of this study was to determine if high-pass filtered vowels could be separated by vowel category and speaker type in a supervised learning framework. Mel frequency cepstral coefficients (MFCCs) were extracted from productions of six vowel categories produced by two male, two female, and two child speakers. Results revealed that the filtered vowels were well separated by vowel category and speaker type using MFCCs from the high-frequency spectrum. This demonstrates the presence of useful information for automated classification from the high-frequency region and is the first study to report findings of this nature in a supervised learning framework.

  15. Automated Classification of Vowel Category and Speaker Type in the High-Frequency Spectrum.

    PubMed

    Donai, Jeremy J; Motiian, Saeid; Doretto, Gianfranco

    2016-04-20

    The high-frequency region of vowel signals (above the third formant or F3) has received little research attention. Recent evidence, however, has documented the perceptual utility of high-frequency information in the speech signal above the traditional frequency bandwidth known to contain important cues for speech and speaker recognition. The purpose of this study was to determine if high-pass filtered vowels could be separated by vowel category and speaker type in a supervised learning framework. Mel frequency cepstral coefficients (MFCCs) were extracted from productions of six vowel categories produced by two male, two female, and two child speakers. Results revealed that the filtered vowels were well separated by vowel category and speaker type using MFCCs from the high-frequency spectrum. This demonstrates the presence of useful information for automated classification from the high-frequency region and is the first study to report findings of this nature in a supervised learning framework. PMID:27588160
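
    As a hedged sketch of the kind of pipeline the two records above describe (high-pass filtering vowel tokens above F3, extracting MFCCs, and training a supervised classifier), the following uses SciPy, librosa, and scikit-learn. The 3.5 kHz cutoff, the SVM classifier, and the token format are illustrative assumptions; the abstract does not specify these details.

```python
import numpy as np
import librosa                                   # assumed available for MFCCs
from scipy.signal import butter, sosfiltfilt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def highpass_mfcc(y, sr, cutoff_hz=3500.0, n_mfcc=13):
    """High-pass filter one vowel token and summarize it with mean MFCCs."""
    sos = butter(8, cutoff_hz, btype="highpass", fs=sr, output="sos")
    y_hp = sosfiltfilt(sos, y)
    mfcc = librosa.feature.mfcc(y=y_hp.astype(np.float32), sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)                     # one fixed-length vector per token

def train_vowel_classifier(tokens):
    """`tokens` is a hypothetical list of (waveform, sample_rate, label) tuples,
    where the label is either the vowel category or the speaker type."""
    X = np.array([highpass_mfcc(y, sr) for y, sr, _ in tokens])
    labels = [lab for _, _, lab in tokens]
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, labels)
    return clf
```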

  16. Evaluating acoustic speaker normalization algorithms: evidence from longitudinal child data.

    PubMed

    Kohn, Mary Elizabeth; Farrington, Charlie

    2012-03-01

    Speaker vowel formant normalization, a technique that controls for variation introduced by physical differences between speakers, is necessary in variationist studies to compare speakers of different ages, genders, and physiological makeup in order to understand non-physiological variation patterns within populations. Many algorithms have been established to reduce variation introduced into vocalic data from physiological sources. The lack of real-time studies tracking the effectiveness of these normalization algorithms from childhood through adolescence inhibits exploration of child participation in vowel shifts. This analysis compares normalization techniques applied to data collected from ten African American children across five time points. Linear regressions compare the reduction in variation attributable to age and gender for each speaker for the vowels BEET, BAT, BOT, BUT, and BOAR. A normalization technique is successful if it maintains variation attributable to a reference sociolinguistic variable, while reducing variation attributable to age. Results indicate that normalization techniques which rely on both a measure of central tendency and range of the vowel space perform best at reducing variation attributable to age, although some variation attributable to age persists after normalization for some sections of the vowel space. PMID:22423719
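
    One widely used technique of the kind described above, relying on a speaker-specific measure of central tendency and spread, is Lobanov z-score normalization; a minimal sketch follows. The data structure is hypothetical, and this is not necessarily one of the exact algorithms compared in the study.

```python
import numpy as np

def lobanov_normalize(formants_by_speaker):
    """Lobanov z-score normalization of vowel formants.

    `formants_by_speaker` maps a speaker ID to an (n_tokens, 2) array of
    [F1, F2] values in Hz; each formant is re-expressed in speaker-internal
    standard deviations from that speaker's own mean.
    """
    normalized = {}
    for speaker, F in formants_by_speaker.items():
        F = np.asarray(F, dtype=float)
        mu = F.mean(axis=0)            # per-formant mean for this speaker
        sd = F.std(axis=0, ddof=1)     # per-formant spread for this speaker
        normalized[speaker] = (F - mu) / sd
    return normalized
```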

  17. Adaptive Multi-Rate Compression Effects on Vowel Analysis

    PubMed Central

    Ireland, David; Knuepffer, Christina; McBride, Simon J.

    2015-01-01

    Signal processing on digitally sampled vowel sounds for the detection of pathological voices has been firmly established. This work examines compression artifacts in vowel speech samples that have been compressed using the adaptive multi-rate codec at various bit rates. Whereas previous work has used the sensitivity of machine learning algorithms to test for accuracy, this work examines the changes in the extracted speech features themselves and thus reports new findings on the usefulness of a particular feature. We believe this work will have potential impact for future research on remote monitoring, as identifying and excluding an ill-defined speech feature that has hitherto been used will ultimately increase the robustness of the system. PMID:26347863

  18. Constraints of Tones, Vowels and Consonants on Lexical Selection in Mandarin Chinese.

    PubMed

    Wiener, Seth; Turnbull, Rory

    2016-03-01

    Previous studies have shown that when speakers of European languages are asked to turn nonwords into words by altering either a vowel or consonant, they tend to treat vowels as more mutable than consonants. These results inspired the universal vowel mutability hypothesis: listeners learn to cope with vowel variability because vowel information constrains lexical selection less tightly and allows for more potential candidates than does consonant information. The present study extends the word reconstruction paradigm to Mandarin Chinese--a Sino-Tibetan language, which makes use of lexically contrastive tone. Native speakers listened to word-like nonwords (e.g., su3) and were asked to change them into words by manipulating a single consonant (e.g., tu3), vowel (e.g., si3), or tone (e.g., su4). Additionally, items were presented in a fourth condition in which participants could change any part. The participants' reaction times and responses were recorded. Results revealed that participants responded faster and more accurately in both the free response and the tonal change conditions. Unlike previous reconstruction studies on European languages, where vowels were changed faster and more often than consonants, these results demonstrate that, in Mandarin, changes to vowels and consonants were both overshadowed by changes to tone, which was the preferred modification to the stimulus nonwords, while changes to vowels were the slowest and least accurate. Our findings show that the universal vowel mutability hypothesis is not consistent with a tonal language, that Mandarin tonal information is lower-priority than consonants and vowels and that vowel information most tightly constrains Mandarin lexical access. PMID:27089806

  19. Variability of Vowel Formant Frequencies and the Quantal Theory of Speech: A First Report

    PubMed Central

    Pisoni, David B.

    2012-01-01

    This paper reports the results of a study in which variability of formant frequencies for different vowels was examined with regard to several predictions derived from the quantal theory of speech. Two subjects were required to reproduce eight different steady-state synthetic vowels which were presented repeatedly in a randomized order. Spectral analysis was carried out on the vocal responses in order to obtain means and standard deviations of the vowel formant frequencies. In the spirit of the quantal theory, it was predicted that the point vowels /i/, /a/, and /u/ would show lower standard deviations than the nonpoint vowels because these vowels are assumed to be produced at places in the vocal tract where small perturbations in articulation produce only minimal changes in the resulting formant frequencies. That is, these vowels are assumed to be quantal vowels. The results of this study provided little support for the hypothesis under consideration. A discussion of these results, as well as some speculation as to why the study failed to find support for the quantal theory, is provided in the report. Several final comments are also offered about computer simulation studies of speech production and the need for additional empirical studies on vowel production with real talkers. PMID:7280032

  20. Vowel Perception in Listeners With Normal Hearing and in Listeners With Hearing Loss: A Preliminary Study

    PubMed Central

    Charles, Lauren; Street, Nicole Drakopoulos

    2015-01-01

    Objectives To determine the influence of hearing loss on perception of vowel slices. Methods Fourteen listeners aged 20-27 participated; ten (6 males) had hearing within normal limits and four (3 males) had moderate-severe sensorineural hearing loss (SNHL). Stimuli were six naturally-produced words consisting of the vowels /i a u æ ɛ ʌ/ in a /b V b/ context. Each word was presented as a whole and in eight slices: the initial transition, one half and one fourth of initial transition, full central vowel, one-half central vowel, ending transition, one half and one fourth of ending transition. Each of the 54 stimuli was presented 10 times at 70 dB SPL (sound pressure level); listeners were asked to identify the word. Stimuli were shaped using signal processing software for the listeners with SNHL to mimic gain provided by an appropriately-fitting hearing aid. Results Listeners with SNHL had a steeper rate of decreasing vowel identification with decreasing slice duration as compared to listeners with normal hearing, and the listeners with SNHL showed different patterns of vowel identification across vowels when compared to listeners with normal hearing. Conclusion Abnormal temporal integration likely affects vowel identification for listeners with SNHL, which in turn affects the internal representation of vowels at different levels of the auditory system. PMID:25729492

  1. Perception of speaker size and sex of vowel sounds

    NASA Astrophysics Data System (ADS)

    Smith, David R. R.; Patterson, Roy D.

    2005-04-01

    Glottal-pulse rate (GPR) and vocal-tract length (VTL) are both related to speaker size and sex; however, it is unclear how they interact to determine our perception of speaker size and sex. Experiments were designed to measure the relative contribution of GPR and VTL to judgements of speaker size and sex. Vowels were scaled to represent people with different GPRs and VTLs, including many well beyond the normal population values. In a single-interval, two-response rating paradigm, listeners judged the size (using a 7-point scale) and sex/age of the speaker (man, woman, boy, or girl) of these scaled vowels. Results from the size-rating experiments show that VTL has a much greater influence upon judgements of speaker size than GPR. Results from the sex-categorization experiments show that judgements of speaker sex are influenced about equally by GPR and VTL for vowels with normal GPR and VTL values. For abnormal combinations of GPR and VTL, where low GPRs are combined with short VTLs, VTL has more influence than GPR in sex judgements. [Work supported by the UK MRC (G9901257) and the German Volkswagen Foundation (VWF 1/79 783).]

  2. Syllabification effects on the acoustic structure of intervocalic /r/

    NASA Astrophysics Data System (ADS)

    Huffman, Marie

    2005-09-01

    Imaging and modeling studies suggest that American English /r/ has a complex articulatory profile. Gick [Phonology 16, 29-54 (1999)] has proposed that dialectal differences in the presence of /r/ follow from the effects of syllable structure and prosody on component vocalic and consonantal gestures of /r/. This study presents acoustic data on word-medial, intervocalic /r/'s for speakers of two varieties of American English. Both varieties show an effect of /r/ on F3 and/or F4 of a preceding vowel. Where they differ is in the acoustic properties of the constriction portion of intervocalic /r/. For one group, the intervocalic /r/ is very vocalic, with little difference in formant amplitude compared to the preceding vowel. For the other group, intervocalic /r/ is more consonantal, with clearly weaker formant structure than the preceding vowel. These differences in the acoustic profile of intervocalic /r/ co-vary with dialectal differences in production of final coda /r/. These results support a gestural account of /r/ variability, while also demonstrating the need for explicit principles of syllable organization which must be specified for each dialect. [Work supported by NSF Grant No. 0325188.]

  3. On the resolution of phonological constraints in spoken production: Acoustic and response time evidence.

    PubMed

    Bürki, Audrey; Frauenfelder, Ulrich H; Alario, F-Xavier

    2015-10-01

    This study examines the production of words the pronunciation of which depends on the phonological context. Participants produced adjective-noun phrases starting with the French determiner un. The pronunciation of this determiner requires a liaison consonant before vowels. Naming latencies and determiner acoustic durations were shorter when the adjective and the noun both started with vowels or both with consonants, than when they had different onsets. These results suggest that the liaison process is not governed by the application of a local contextual phonological rule; they rather favor the hypothesis that pronunciation variants with and without the liaison consonant are stored in memory. PMID:26520356

  4. Encoding of vowel-like sounds in the auditory nerve: Model predictions of discrimination performance

    NASA Astrophysics Data System (ADS)

    Tan, Qing; Carney, Laurel H.

    2005-03-01

    The sensitivity of listeners to changes in the center frequency of vowel-like harmonic complexes as a function of the center frequency of the complex cannot be explained by changes in the level of the stimulus [Lyzenga and Horst, J. Acoust. Soc. Am. 98, 1943-1955 (1995)]. Rather, a complex pattern of sensitivity is seen; for a spectrum with a triangular envelope, the greatest sensitivity occurs when the center frequency falls between harmonics, whereas for a spectrum with a trapezoidal envelope, greatest sensitivity occurs when the center frequency is aligned with a harmonic. In this study, the thresholds of a population model of auditory-nerve (AN) fibers were quantitatively compared to these trends in psychophysical thresholds. Single-fiber and population model responses were evaluated in terms of both average discharge rate and the combination of rate and timing information. Results indicate that phase-locked responses of AN fibers encode phase transitions associated with minima in these amplitude-modulated stimuli. The temporal response properties of a single AN fiber, tuned to a frequency slightly above the center frequency of the harmonic complex, were able to explain the trends in thresholds for both triangular- and trapezoidal-shaped spectra.

  5. The Role of Secondary-Stressed and Unstressed-Unreduced Syllables in Word Recognition: Acoustic and Perceptual Studies with Russian Learners of English.

    PubMed

    Banzina, Elina; Dilley, Laura C; Hewitt, Lynne E

    2016-08-01

    The importance of secondary-stressed (SS) and unstressed-unreduced (UU) syllable accuracy for spoken word recognition in English is as yet unclear. An acoustic study first investigated Russian learners' of English production of SS and UU syllables. Significant vowel quality and duration reductions in Russian-spoken SS and UU vowels were found, likely due to a transfer of native phonological features. Next, a cross-modal phonological priming technique combined with a lexical decision task assessed the effect of inaccurate SS and UU syllable productions on native American English listeners' speech processing. Inaccurate UU vowels led to significant inhibition of lexical access, while reduced SS vowels revealed less interference. The results have implications for understanding the role of SS and UU syllables for word recognition and English pronunciation instruction. PMID:25980971

  6. Advanced Distributed Measurements and Data Processing at the Vibro-Acoustic Test Facility, GRC Space Power Facility, Sandusky, Ohio - an Architecture and an Example

    NASA Technical Reports Server (NTRS)

    Hill, Gerald M.; Evans, Richard K.

    2009-01-01

    A large-scale, distributed, high-speed data acquisition system (HSDAS) is currently being installed at the Space Power Facility (SPF) at NASA Glenn Research Center's Plum Brook Station in Sandusky, OH. This installation is being done as part of a facility construction project to add Vibro-acoustic Test Capabilities (VTC) to the current thermal-vacuum testing capability of SPF in support of the Orion Project's requirement for Space Environments Testing (SET). The HSDAS architecture is a modular design which utilizes fully remotely managed components and enables the system to support multiple test locations with a wide range of measurement types and a very large system channel count. The architecture of the system is presented along with details on system scalability and measurement verification. In addition, the ability of the system to automate many of its processes, such as measurement verification and measurement system analysis, is also discussed.

  7. Acoustic-Phonetic Differences between Infant- and Adult-Directed Speech: The Role of Stress and Utterance Position

    ERIC Educational Resources Information Center

    Wang, Yuanyuan; Seidl, Amanda; Cristia, Alejandrina

    2015-01-01

    Previous studies have shown that infant-directed speech (IDS) differs from adult-directed speech (ADS) on a variety of dimensions. The aim of the current study was to investigate whether acoustic differences between IDS and ADS in English are modulated by prosodic structure. We compared vowels across the two registers (IDS, ADS) in both stressed…

  8. Point Vowel Duration in Children with Hearing Aids and Cochlear Implants at 4 and 5 Years of Age

    ERIC Educational Resources Information Center

    Vandam, Mark; Ide-Helvie, Dana; Moeller, Mary Pat

    2011-01-01

    This work investigates the developmental aspects of the duration of point vowels in children with normal hearing compared with those with hearing aids and cochlear implants at 4 and 5 years of age. Younger children produced longer vowels than older children, and children with hearing loss (HL) produced longer and more variable vowels than their…

  9. Infant Directed Speech in Natural Interaction--Norwegian Vowel Quantity and Quality

    ERIC Educational Resources Information Center

    Englund, Kjellrun T.; Behne, Dawn M.

    2005-01-01

    An interactive face-to-face setting is used to study natural infant directed speech (IDS) compared to adult directed speech (ADS). With distinctive vowel quantity and vowel quality, Norwegian IDS was used in a natural quasi-experimental design. Six Norwegian mothers were recorded over a period of 6 months alone with their infants and in an adult…

  10. Phonology, Decoding, and Lexical Compensation in Vowel Spelling Errors Made by Children with Dyslexia

    ERIC Educational Resources Information Center

    Bernstein, Stuart E.

    2009-01-01

    A descriptive study of vowel spelling errors made by children first diagnosed with dyslexia (n = 79) revealed that phonological errors, such as "bet" for "bat", outnumbered orthographic errors, such as "bate" for "bait". These errors were more frequent in nonwords than words, suggesting that lexical context helps with vowel spelling. In a second…

  11. Sounds and Stories: A Vowel-Centered Approach to Reading Proficiency. Teacher's Manual.

    ERIC Educational Resources Information Center

    Weinstein, Marcia

    Very often, the greatest source of difficulty for the disabled reader is the inconsistency of the vowel sounds in the English language. This program, intended primarily for remedial use with any reader above the first grade, is designed to attack this problem by providing intense, highly structured practice in the regular vowel sounds while…

  12. Children's Perception of Conversational and Clear American-English Vowels in Noise

    ERIC Educational Resources Information Center

    Leone, Dorothy; Levy, Erika S.

    2015-01-01

    Purpose: Much of a child's day is spent listening to speech in the presence of background noise. Although accurate vowel perception is important for listeners' accurate speech perception and comprehension, little is known about children's vowel perception in noise. "Clear speech" is a speech style frequently used by talkers in the…

  13. The Prosodic Licensing of Coda Consonants in Early Speech: Interactions with Vowel Length

    ERIC Educational Resources Information Center

    Miles, Kelly; Yuen, Ivan; Cox, Felicity; Demuth, Katherine

    2016-01-01

    English has a word-minimality requirement that all open-class lexical items must contain at least two moras of structure, forming a bimoraic foot (Hayes, 1995). Thus, a word with either a long vowel, or a short vowel and a coda consonant, satisfies this requirement. This raises the question of when and how young children might learn this…

  14. Vowel Confusion Patterns in Adults during Initial 4 Years of Implant Use

    ERIC Educational Resources Information Center

    Vaalimaa, Taina T.; Sorri, Martti J.; Laitakari, Jaakko; Sivonen, Ville; Muhli, Arto

    2011-01-01

    This study investigated adult cochlear implant users' (n = 39) vowel recognition and confusions by an open-set syllable test during 4 years of implant use, in a prospective repeated-measures design. Subjects' responses were coded for phoneme errors and estimated by the generalized mixed model. Improvement in overall vowel recognition was highest…

  15. The Influence of Reduced Audible Bandwidth on Asynchronous Double-Vowel Identification

    ERIC Educational Resources Information Center

    Valentine, Susie; Lentz, Jennifer J.

    2012-01-01

    Purpose: In this study, the authors sought to determine whether reduced audible bandwidth associated with hearing loss contributes to difficulty benefiting from an onset asynchrony between sounds. Method: Synthetic double-vowel identification was measured for normal-hearing listeners and listeners with hearing loss. One vowel (Target 2) was 250 ms…

  16. Perception and Production of Five English Front Vowels by College Students

    ERIC Educational Resources Information Center

    Lin, Ching-Ying

    2014-01-01

    This study was to explore whether college students could perceive and produce five English front vowels well or not. It also examined the relationship between English speaking and listening. To be more specific, the study attempted to probe which vowels that learners could be confused easily in speaking and listening. The results revealed that…

  17. Orthographic Context Sensitivity in Vowel Decoding by Portuguese Monolingual and Portuguese-English Bilingual Children

    ERIC Educational Resources Information Center

    Vale, Ana Paula

    2011-01-01

    This study examines the pronunciation of the first vowel in decoding disyllabic pseudowords derived from Portuguese words. Participants were 96 Portuguese monolinguals and 52 Portuguese-English bilinguals of equivalent Portuguese reading levels. The results indicate that sensitivity to vowel context emerges early, both in monolinguals and in…

  18. Linguistic Variation in a Border Town: Palatalization of Dental Stops and Vowel Nasalization in Rivera

    ERIC Educational Resources Information Center

    Castaneda-Molla, Rosa Maria

    2011-01-01

    This study focuses on the analysis of variation at the phonological level, specifically the variable realization of palatalization of dental stops before the high vowel /i/ and vowel nasalization in the speech of bilingual speakers of Uruguayan Portuguese (UP) in the city of Rivera, Uruguay. The data were collected in participant-observation and…

  19. Shallow and Deep Orthographies in Hebrew: The Role of Vowelization in Reading Development for Unvowelized Scripts

    ERIC Educational Resources Information Center

    Schiff, Rachel

    2012-01-01

    The present study explored the speed, accuracy, and reading comprehension of vowelized versus unvowelized scripts among 126 native Hebrew speaking children in second, fourth, and sixth grades. Findings indicated that second graders read and comprehended vowelized scripts significantly more accurately and more quickly than unvowelized scripts,…

  20. Effects of Short- and Long-Term Changes in Auditory Feedback on Vowel and Sibilant Contrasts

    ERIC Educational Resources Information Center

    Lane, Harlan; Matthies, Melanie L.; Guenther, Frank H.; Denny, Margaret; Perkell, Joseph S.; Stockmann, Ellen; Tiede, Mark; Vick, Jennell; Zandipour, Majid

    2007-01-01

    Purpose: To assess the effects of short- and long-term changes in auditory feedback on vowel and sibilant contrasts and to evaluate hypotheses arising from a model of speech motor planning. Method: The perception and production of vowel and sibilant contrasts were measured in 8 postlingually deafened adults prior to activation of their cochlear…

  1. Shallow and deep orthographies in Hebrew: the role of vowelization in reading development for unvowelized scripts.

    PubMed

    Schiff, Rachel

    2012-12-01

    The present study explored the speed, accuracy, and reading comprehension of vowelized versus unvowelized scripts among 126 native Hebrew speaking children in second, fourth, and sixth grades. Findings indicated that second graders read and comprehended vowelized scripts significantly more accurately and more quickly than unvowelized scripts, whereas among fourth and sixth graders reading of unvowelized scripts developed to a greater degree than the reading of vowelized scripts. An analysis of the mediation effect for children's mastery of vowelized reading speed and accuracy on their mastery of unvowelized reading speed and comprehension revealed that in second grade, reading accuracy of vowelized words mediated the reading speed and comprehension of unvowelized scripts. In the fourth grade, accuracy in reading both vowelized and unvowelized words mediated the reading speed and comprehension of unvowelized scripts. By sixth grade, accuracy in reading vowelized words offered no mediating effect, either on reading speed or comprehension of unvowelized scripts. The current outcomes thus suggest that young Hebrew readers undergo a scaffolding process, where vowelization serves as the foundation for building initial reading abilities and is essential for successful and meaningful decoding of unvowelized scripts. PMID:22210537

  2. Regional Dialect Variation in the Vowel Systems of Typically Developing Children

    ERIC Educational Resources Information Center

    Jacewicz, Ewa; Fox, Robert Allen; Salmons, Joseph

    2011-01-01

    Purpose: To investigate regional dialect variation in the vowel systems of typically developing 8- to 12-year-old children. Method: Thirteen vowels in isolated "h_d" words were produced by 94 children and 93 adults (males and females). All participants spoke American English and were born and raised in 1 of 3 distinct dialect regions in the United…

  3. Vowel Representations in the Invented Spellings of Spanish-English Bilingual Kindergartners

    ERIC Educational Resources Information Center

    Raynolds, Laura B.; Uhry, Joanna K.; Brunner, Jessica

    2013-01-01

    The study compared the invented spelling of vowels in kindergarten native Spanish speaking children with that of English monolinguals. It examined whether, after receiving phonics instruction for short vowels, the spelling of native Spanish-speaking kindergartners would contain phonological errors that were influenced by their first language.…

  4. The Acquisition of Phonetic Details: Evidence from the Production of English Reduced Vowels by Korean Learners

    ERIC Educational Resources Information Center

    Han, Jeong-Im; Hwang, Jong-Bai; Choi, Tae-Hwan

    2011-01-01

    The purpose of this study was to evaluate the acquisition of non-contrastive phonetic details of a second language. Reduced vowels in English are realized as a schwa or barred-i depending on their phonological contexts, but Korean has no reduced vowels. Two groups of Korean learners of English who differed according to the experience of residence…

  5. The Effects of Surgical Rapid Maxillary Expansion (SRME) on Vowel Formants

    ERIC Educational Resources Information Center

    Sari, Emel; Kilic, Mehmet Akif

    2009-01-01

    The objective of this study was to investigate the effect of surgical rapid maxillary expansion (SRME) on vowel production. The subjects included 12 patients, whose speech were considered perceptually normal, that had undergone surgical RME for expansion of a narrow maxilla. They uttered the following Turkish vowels, ([a], [[epsilon

  6. Interaction of Native- and Second-Language Vowel System(s) in Early and Late Bilinguals

    ERIC Educational Resources Information Center

    Baker, Wendy; Trofimovich, Pavel

    2005-01-01

    The objective of this study was to determine how bilinguals' age at the time of language acquisition influenced the organization of their phonetic system(s). The productions of six English and five Korean vowels by English and Korean monolinguals were compared to the productions of the same vowels by early and late Korean-English bilinguals…

  7. Cross-Linguistic Differences in the Immediate Serial Recall of Consonants versus Vowels

    ERIC Educational Resources Information Center

    Kissling, Elizabeth M.

    2012-01-01

    The current study investigated native English and native Arabic speakers' phonological short-term memory for sequences of consonants and vowels. Phonological short-term memory was assessed in immediate serial recall tasks conducted in Arabic and English for both groups. Participants (n = 39) heard series of six consonant-vowel syllables and wrote…

  8. Evaluating Computational Models in Cognitive Neuropsychology: The Case from the Consonant/Vowel Distinction

    ERIC Educational Resources Information Center

    Knobel, Mark; Caramazza, Alfonso

    2007-01-01

    Caramazza et al. [Caramazza, A., Chialant, D., Capasso, R., & Miceli, G. (2000). Separable processing of consonants and vowels. "Nature," 403(6768), 428-430.] report two patients who exhibit a double dissociation between consonants and vowels in speech production. The patterning of this double dissociation cannot be explained by appealing to…

  9. Scale Model Thruster Acoustic Measurement Results

    NASA Technical Reports Server (NTRS)

    Kenny, R. Jeremy; Vargas, Magda B.

    2013-01-01

    Subscale rocket acoustic data is used to predict acoustic environments for full-scale rockets. Over the last several years, acoustic data has been collected during horizontal tests of solid rocket motors. The Space Launch System (SLS) Scale Model Acoustic Test (SMAT) was designed to evaluate the acoustics of the SLS vehicle, including the liquid engines and solid rocket boosters. SMAT comprises liquid thrusters scalable to the Space Shuttle Main Engines (SSME) and Rocket Assisted Take Off (RATO) motors scalable to the 5-segment Reusable Solid Rocket Motor (RSTMV). Horizontal testing of the liquid thrusters provided an opportunity to collect acoustic data from liquid thrusters to characterize the acoustic environments. Acoustic data was collected during the horizontal firings of a single thruster and a 4-thruster (Quad) configuration. The presentation discusses the results of the single- and 4-thruster acoustic measurements and compares the measured acoustic levels of the liquid thrusters to the Solid Rocket Test Motor V - Nozzle 2 (SRTMV-N2).

  10. An acoustic and electroglottographic study of V-glottal stop-V in two indigenous American languages

    NASA Astrophysics Data System (ADS)

    Esposito, Christina M.; Scarborough, Rebecca

    2001-05-01

    Both Pima, a Uto-Aztecan language spoken in Arizona, and Santa Ana del Valle Zapotec (SADVZ), an Otomanguean language spoken in Oaxaca, Mexico, have sequences of two vowels separated by an intervening glottal stop. In both languages, this VʔV sequence becomes reduced in certain occurrences, with the perceptual effect of the loss of /ʔ/ in Pima and the loss of V2 in SADVZ. The purpose of this study is to provide an acoustic and electroglottographic (EGG) description of these sequences in both their full and reduced forms, prompted by varying speech rate. Two acoustic measures of phonation type (H1-H2, H1-A3) and two EGG measures (OQ and peak closing velocity) were made at vowel midpoints and adjacent to /ʔ/. For Pima, an issue of interest is what properties of the /VʔV/ sequences (when V1=V2) allow them to be distinguished from phonemic long vowels in the reduced forms where /ʔ/ is lost. It is hypothesized that /ʔ/ will be preserved as vowel glottalization. For SADVZ, an important question is why the vowels sound creaky despite a lack of spectral evidence for creak. It is hoped that more direct EGG measures will show the perceived phonation.

  11. Study of acoustic correlates associate with emotional speech

    NASA Astrophysics Data System (ADS)

    Yildirim, Serdar; Lee, Sungbok; Lee, Chul Min; Bulut, Murtaza; Busso, Carlos; Kazemzadeh, Ebrahim; Narayanan, Shrikanth

    2004-10-01

    This study investigates the acoustic characteristics of four different emotions expressed in speech. The aim is to obtain detailed acoustic knowledge on how a speech signal is modulated by changes from neutral to a certain emotional state. Such knowledge is necessary for automatic emotion recognition and classification and emotional speech synthesis. Speech data obtained from two semi-professional actresses are analyzed and compared. Each subject produces 211 sentences with four different emotions: neutral, sad, angry, and happy. We analyze changes in temporal and acoustic parameters such as magnitude and variability of segmental duration, fundamental frequency and the first three formant frequencies as a function of emotion. Acoustic differences among the emotions are also explored with mutual information computation, multidimensional scaling and acoustic likelihood comparison with normal speech. Results indicate that speech associated with anger and happiness is characterized by longer duration, shorter interword silence, higher pitch and rms energy with wider ranges. Sadness is distinguished from other emotions by lower rms energy and longer interword silence. Interestingly, the difference in formant pattern between [happiness/anger] and [neutral/sadness] is better reflected in back vowels such as /a/ (as in father) than in front vowels. Detailed results on intra- and interspeaker variability will be reported.
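
    For readers who want to reproduce the general flavour of such an analysis, the sketch below extracts a few of the per-utterance cues the abstract mentions (duration, fundamental-frequency level and range, RMS energy) with librosa. The pitch bounds and the feature set are assumptions; formant tracking and the mutual-information and multidimensional-scaling analyses are omitted.

```python
import numpy as np
import librosa   # assumed available

def utterance_features(y, sr):
    """Summarize one utterance with duration, F0 statistics, and RMS energy."""
    f0, voiced, _ = librosa.pyin(y, fmin=75, fmax=500, sr=sr)
    f0 = f0[voiced & ~np.isnan(f0)]              # keep voiced, defined F0 frames
    rms = librosa.feature.rms(y=y)[0]            # frame-wise RMS energy
    return {
        "duration_s": librosa.get_duration(y=y, sr=sr),
        "f0_mean": float(np.mean(f0)) if f0.size else np.nan,
        "f0_range": float(np.ptp(f0)) if f0.size else np.nan,
        "rms_mean": float(np.mean(rms)),
        "rms_range": float(np.ptp(rms)),
    }
```

    Comparing these summaries across sentences labeled neutral, sad, angry, and happy would reproduce the kind of emotion-wise contrasts the abstract reports.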

  12. Pre-attentive detection of vowel contrasts utilizes both phonetic and auditory memory representations.

    PubMed

    Winkler, I; Lehtokoski, A; Alku, P; Vainio, M; Czigler, I; Csépe, V; Aaltonen, O; Raimo, I; Alho, K; Lang, H; Iivonen, A; Näätänen, R

    1999-01-01

    Event-related brain potentials (ERP) were recorded in response to infrequent changes from a synthesized vowel (standard) to another vowel (deviant) in speakers of Hungarian and Finnish, two languages that are remotely related to each other and have rather similar vowel systems. Both language groups were presented with identical stimuli. One standard-deviant pair represented an across-vowel-category contrast in Hungarian, but a within-category contrast in Finnish, with the other pair having the reversed role in the two languages. Both within- and across-category contrasts elicited the mismatch negativity (MMN) ERP component in the native speakers of either language. The MMN amplitude was larger in across- than within-category contrasts in both language groups. These results suggest that the pre-attentive change-detection process generating the MMN utilized both auditory (sensory) and phonetic (categorical) representations of the test vowels. PMID:9838192

  13. Detecting Nasal Vowels in Speech Interfaces Based on Surface Electromyography.

    PubMed

    Freitas, João; Teixeira, António; Silva, Samuel; Oliveira, Catarina; Dias, Miguel Sales

    2015-01-01

    Nasality is a very important characteristic of several languages, European Portuguese being one of them. This paper addresses the challenge of nasality detection in surface electromyography (EMG) based speech interfaces. We explore the existence of useful information about the velum movement and also assess if muscles deeper down in the face and neck region can be measured using surface electrodes, and the best electrode location to do so. The procedure we adopted uses Real-Time Magnetic Resonance Imaging (RT-MRI), collected from a set of speakers, providing a method to interpret EMG data. By ensuring compatible data recording conditions, and proper time alignment between the EMG and the RT-MRI data, we are able to accurately estimate the time when the velum moves and the type of movement when a nasal vowel occurs. The combination of these two sources revealed interesting and distinct characteristics in the EMG signal when a nasal vowel is uttered, which motivated a classification experiment. Overall results of this experiment provide evidence that it is possible to detect velum movement using sensors positioned below the ear, between mastoid process and the mandible, in the upper neck region. In a frame-based classification scenario, error rates as low as 32.5% for all speakers and 23.4% for the best speaker have been achieved, for nasal vowel detection. This outcome stands as an encouraging result, fostering the grounds for deeper exploration of the proposed approach as a promising route to the development of an EMG-based speech interface for languages with strong nasal characteristics. PMID:26069968

  14. Detecting Nasal Vowels in Speech Interfaces Based on Surface Electromyography

    PubMed Central

    Freitas, João; Teixeira, António; Silva, Samuel; Oliveira, Catarina; Dias, Miguel Sales

    2015-01-01

    Nasality is a very important characteristic of several languages, European Portuguese being one of them. This paper addresses the challenge of nasality detection in surface electromyography (EMG) based speech interfaces. We explore the existence of useful information about the velum movement and also assess if muscles deeper down in the face and neck region can be measured using surface electrodes, and the best electrode location to do so. The procedure we adopted uses Real-Time Magnetic Resonance Imaging (RT-MRI), collected from a set of speakers, providing a method to interpret EMG data. By ensuring compatible data recording conditions, and proper time alignment between the EMG and the RT-MRI data, we are able to accurately estimate the time when the velum moves and the type of movement when a nasal vowel occurs. The combination of these two sources revealed interesting and distinct characteristics in the EMG signal when a nasal vowel is uttered, which motivated a classification experiment. Overall results of this experiment provide evidence that it is possible to detect velum movement using sensors positioned below the ear, between mastoid process and the mandible, in the upper neck region. In a frame-based classification scenario, error rates as low as 32.5% for all speakers and 23.4% for the best speaker have been achieved, for nasal vowel detection. This outcome stands as an encouraging result, fostering the grounds for deeper exploration of the proposed approach as a promising route to the development of an EMG-based speech interface for languages with strong nasal characteristics. PMID:26069968

  15. 3D acoustic wave modelling with time-space domain dispersion-relation-based finite-difference schemes and hybrid absorbing boundary conditions

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Sen, Mrinal K.

    2011-09-01

    Most conventional finite-difference methods adopt second-order temporal and (2M)th-order spatial finite-difference stencils to solve the 3D acoustic wave equation. When spatial finite-difference stencils devised from the time-space domain dispersion relation are used to replace these conventional spatial finite-difference stencils devised from the space domain dispersion relation, the accuracy of modelling can be increased from second-order along any directions to (2M)th-order along 48 directions. In addition, the conventional high-order spatial finite-difference modelling accuracy can be improved by using a truncated finite-difference scheme. In this paper, we combine the time-space domain dispersion-relation-based finite difference scheme and the truncated finite-difference scheme to obtain optimised spatial finite-difference coefficients and thus to significantly improve the modelling accuracy without increasing computational cost, compared with the conventional space domain dispersion-relation-based finite difference scheme. We developed absorbing boundary conditions for the 3D acoustic wave equation, based on predicting wavefield values in a transition area by weighing wavefield values from wave equations and one-way wave equations. Dispersion analyses demonstrate that high-order spatial finite-difference stencils have greater accuracy than low-order spatial finite-difference stencils for high frequency components of wavefields, and spatial finite-difference stencils devised in the time-space domain have greater precision than those devised in the space domain under the same discretisation. The modelling accuracy can be improved further by using the truncated spatial finite-difference stencils. Stability analyses show that spatial finite-difference stencils devised in the time-space domain have better stability condition. Numerical modelling experiments for homogeneous, horizontally layered and Society of Exploration Geophysicists/European Association of
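
    To make the finite-difference machinery concrete, here is a deliberately simplified 1-D analogue: a leapfrog (second-order time, fourth-order space) update for the constant-density acoustic wave equation with a Ricker source. It uses conventional space-domain stencil coefficients and fixed (non-absorbing) boundaries, i.e. the baseline that the time-space-domain coefficients and hybrid absorbing boundaries in the paper improve upon; all parameter values are illustrative.

```python
import numpy as np

def ricker(t, f0):
    """Ricker wavelet centered at t = 1/f0, a common synthetic source."""
    a = (np.pi * f0 * (t - 1.0 / f0)) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def simulate_1d_acoustic(nx=801, nt=1500, dx=10.0, dt=1e-3, c=2000.0,
                         src_ix=400, f0=15.0):
    """Leapfrog solution of p_tt = c^2 p_xx on a uniform grid.

    Stability requires c*dt/dx below about 0.87 for this 4th-order stencil;
    the defaults give 0.2.
    """
    # 4th-order central-difference weights for d2/dx2 (space-domain, M = 2)
    w = np.array([-1 / 12, 4 / 3, -5 / 2, 4 / 3, -1 / 12]) / dx ** 2
    p_old = np.zeros(nx)
    p = np.zeros(nx)
    snapshots = []
    for it in range(nt):
        lap = np.zeros(nx)
        for k, wk in enumerate(w):                  # apply the 5-point stencil
            lap[2:-2] += wk * p[k:nx - 4 + k]
        p_new = 2 * p - p_old + (c * dt) ** 2 * lap
        p_new[src_ix] += (c * dt) ** 2 * ricker(it * dt, f0)  # inject source
        p_old, p = p, p_new                         # advance the leapfrog
        if it % 300 == 0:
            snapshots.append(p.copy())
    return snapshots
```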

  16. Speech Coding in the Brain: Representation of Vowel Formants by Midbrain Neurons Tuned to Sound Fluctuations.

    PubMed

    Carney, Laurel H; Li, Tianhao; McDonough, Joyce M

    2015-01-01

    Current models for neural coding of vowels are typically based on linear descriptions of the auditory periphery, and fail at high sound levels and in background noise. These models rely on either auditory nerve discharge rates or phase locking to temporal fine structure. However, both discharge rates and phase locking saturate at moderate to high sound levels, and phase locking is degraded in the CNS at middle to high frequencies. The fact that speech intelligibility is robust over a wide range of sound levels is problematic for codes that deteriorate as the sound level increases. Additionally, a successful neural code must function for speech in background noise at levels that are tolerated by listeners. The model presented here resolves these problems, and incorporates several key response properties of the nonlinear auditory periphery, including saturation, synchrony capture, and phase locking to both fine structure and envelope temporal features. The model also includes the properties of the auditory midbrain, where discharge rates are tuned to amplitude fluctuation rates. The nonlinear peripheral response features create contrasts in the amplitudes of low-frequency neural rate fluctuations across the population. These patterns of fluctuations result in a response profile in the midbrain that encodes vowel formants over a wide range of levels and in background noise. The hypothesized code is supported by electrophysiological recordings from the inferior colliculus of awake rabbits. This model provides information for understanding the structure of cross-linguistic vowel spaces, and suggests strategies for automatic formant detection and speech enhancement for listeners with hearing loss. PMID:26464993

  17. Speech Coding in the Brain: Representation of Vowel Formants by Midbrain Neurons Tuned to Sound Fluctuations

    PubMed Central

    Li, Tianhao; McDonough, Joyce M.

    2015-01-01

    Current models for neural coding of vowels are typically based on linear descriptions of the auditory periphery, and fail at high sound levels and in background noise. These models rely on either auditory nerve discharge rates or phase locking to temporal fine structure. However, both discharge rates and phase locking saturate at moderate to high sound levels, and phase locking is degraded in the CNS at middle to high frequencies. The fact that speech intelligibility is robust over a wide range of sound levels is problematic for codes that deteriorate as the sound level increases. Additionally, a successful neural code must function for speech in background noise at levels that are tolerated by listeners. The model presented here resolves these problems, and incorporates several key response properties of the nonlinear auditory periphery, including saturation, synchrony capture, and phase locking to both fine structure and envelope temporal features. The model also includes the properties of the auditory midbrain, where discharge rates are tuned to amplitude fluctuation rates. The nonlinear peripheral response features create contrasts in the amplitudes of low-frequency neural rate fluctuations across the population. These patterns of fluctuations result in a response profile in the midbrain that encodes vowel formants over a wide range of levels and in background noise. The hypothesized code is supported by electrophysiological recordings from the inferior colliculus of awake rabbits. This model provides information for understanding the structure of cross-linguistic vowel spaces, and suggests strategies for automatic formant detection and speech enhancement for listeners with hearing loss. PMID:26464993

  18. Bayesian State-Space Modelling of Conventional Acoustic Tracking Provides Accurate Descriptors of Home Range Behavior in a Small-Bodied Coastal Fish Species

    PubMed Central

    Alós, Josep; Palmer, Miquel; Balle, Salvador; Arlinghaus, Robert

    2016-01-01

    State-space models (SSMs) are increasingly applied in studies involving biotelemetry-generated positional data because they are able to estimate movement parameters from positions that are unobserved or have been observed with non-negligible observational error. Popular telemetry systems in marine coastal fish consist of arrays of omnidirectional acoustic receivers, which generate a multivariate time-series of detection events across the tracking period. Here we report a novel Bayesian fitting of an SSM that couples mechanistic movement properties within a home range (a specific case of a random walk weighted by an Ornstein-Uhlenbeck process) with a model of observational error typical for data obtained from acoustic receiver arrays. We explored the performance and accuracy of the approach through simulation modelling and extensive sensitivity analyses of the effects of various configurations of movement properties and time-steps among positions. Model results show an accurate and unbiased estimation of the movement parameters, and in most cases the simulated movement parameters were properly retrieved. Only in extreme situations (when fast swimming speeds are combined with pooling detections over long time-steps) did the model produce some bias that needs to be accounted for in field applications. Our method was subsequently applied to real acoustic tracking data collected from a small marine coastal fish species, the pearly razorfish, Xyrichtys novacula. The Bayesian SSM we present here constitutes an alternative for researchers accustomed to the Bayesian way of reasoning. Our Bayesian SSM can be easily adapted and generalized to any species, thereby allowing studies in freely roaming animals of the ecological and evolutionary consequences of home ranges and territory establishment, both in fishes and in other taxa. PMID:27119718
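
    The generative model described in this abstract (an Ornstein-Uhlenbeck home-range walk observed through distance-dependent detections at omnidirectional receivers) can be simulated with a few lines of Python, as sketched below. The sketch only draws from the model; it does not perform the Bayesian fitting reported in the paper, and the movement parameters, receiver grid and logistic detection curve are illustrative assumptions.

      # Simulate the generative model: a 2-D Ornstein-Uhlenbeck home-range walk,
      # observed through Bernoulli detections whose probability decays with distance
      # to each omnidirectional receiver.  All parameter values are assumptions.
      import numpy as np

      rng = np.random.default_rng(1)

      def simulate_ou_track(n_steps, dt, centre, tau, sigma):
          """2-D OU process: drift back toward a home-range centre plus Brownian noise."""
          x = np.empty((n_steps, 2))
          x[0] = centre
          for i in range(1, n_steps):
              drift = -(x[i - 1] - centre) / tau
              x[i] = x[i - 1] + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=2)
          return x

      def detections(track, receivers, p0=0.9, d50=150.0, k=0.05):
          """Per-receiver Bernoulli detections; probability is logistic in distance (m)."""
          d = np.linalg.norm(track[:, None, :] - receivers[None, :, :], axis=-1)
          p = p0 / (1.0 + np.exp(k * (d - d50)))
          return rng.random(p.shape) < p

      track = simulate_ou_track(n_steps=2000, dt=60.0, centre=np.array([0.0, 0.0]),
                                tau=3600.0, sigma=0.5)
      receivers = np.array([[x, y] for x in (-300.0, 0.0, 300.0)
                            for y in (-300.0, 0.0, 300.0)])
      det = detections(track, receivers)   # (time step, receiver) boolean detection matrix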

  19. Bayesian State-Space Modelling of Conventional Acoustic Tracking Provides Accurate Descriptors of Home Range Behavior in a Small-Bodied Coastal Fish Species.

    PubMed

    Alós, Josep; Palmer, Miquel; Balle, Salvador; Arlinghaus, Robert

    2016-01-01

    State-space models (SSMs) are increasingly applied in studies involving biotelemetry-generated positional data because they are able to estimate movement parameters from positions that are unobserved or have been observed with non-negligible observational error. Popular telemetry systems in marine coastal fish consist of arrays of omnidirectional acoustic receivers, which generate a multivariate time-series of detection events across the tracking period. Here we report a novel Bayesian fitting of an SSM that couples mechanistic movement properties within a home range (a specific case of a random walk weighted by an Ornstein-Uhlenbeck process) with a model of observational error typical for data obtained from acoustic receiver arrays. We explored the performance and accuracy of the approach through simulation modelling and extensive sensitivity analyses of the effects of various configurations of movement properties and time-steps among positions. Model results show an accurate and unbiased estimation of the movement parameters, and in most cases the simulated movement parameters were properly retrieved. Only in extreme situations (when fast swimming speeds are combined with pooling detections over long time-steps) did the model produce some bias that needs to be accounted for in field applications. Our method was subsequently applied to real acoustic tracking data collected from a small marine coastal fish species, the pearly razorfish, Xyrichtys novacula. The Bayesian SSM we present here constitutes an alternative for researchers accustomed to the Bayesian way of reasoning. Our Bayesian SSM can be easily adapted and generalized to any species, thereby allowing studies in freely roaming animals of the ecological and evolutionary consequences of home ranges and territory establishment, both in fishes and in other taxa. PMID:27119718

  20. Acoustical heat pumping engine

    DOEpatents

    Wheatley, John C.; Swift, Gregory W.; Migliori, Albert

    1983-08-16

    The disclosure is directed to an acoustical heat pumping engine without moving seals. A tubular housing holds a compressible fluid capable of supporting an acoustical standing wave. An acoustical driver is disposed at one end of the housing and the other end is capped. A second thermodynamic medium is disposed in the housing near to, but spaced from, the capped end. Heat is pumped along the second thermodynamic medium toward the capped end as a consequence of both the pressure oscillation produced by the driver and the imperfect thermal contact between the fluid and the second thermodynamic medium.
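
    For a rough sense of scale, the housing described above (a driver at one end, a cap at the other) behaves approximately as a tube closed at both ends, whose fundamental standing wave has frequency f = c / (2L). The short Python sketch below evaluates that relation for an assumed helium fill, tube length and temperature; none of these values come from the patent.

      # Back-of-the-envelope resonance estimate for a tube driven at one end and
      # capped at the other, treated as closed-closed: f = c / (2 L).
      # Gas, length and temperature are assumed values, not taken from the patent.
      import math

      def sound_speed(gamma, r_specific, temperature):
          """Ideal-gas sound speed c = sqrt(gamma * R_specific * T)."""
          return math.sqrt(gamma * r_specific * temperature)

      c = sound_speed(gamma=5.0 / 3.0, r_specific=2077.0, temperature=300.0)  # helium, ~1019 m/s
      length = 1.0                                    # tube length in metres (assumed)
      print(f"fundamental ~ {c / (2.0 * length):.0f} Hz")   # ~510 Hz for these assumptions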