A Constraint-Based Approach to Acquisition of Word-Final Consonant Clusters in Turkish Children
ERIC Educational Resources Information Center
Gokgoz-Kurt, Burcu
2017-01-01
The current study provides a constraint-based analysis of L1 word-final consonant cluster acquisition in Turkish child language, based on the data originally presented by Topbas and Kopkalli-Yavuz (2008). The present analysis examines sonorant+obstruent consonant cluster acquisition. A comparison of the Gradual Learning Algorithm (GLA) under…
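Since this record invokes the Gradual Learning Algorithm without spelling it out, a minimal sketch of a Boersma-style stochastic-OT learning step may help. The constraint set, violation profiles, plasticity, and noise values below are illustrative assumptions for a word-final cluster toy case, not the grammar analyzed in the study.

import random

def evaluate(ranking, candidates, violations, noise=2.0):
    # Stochastic OT evaluation: perturb each constraint's ranking value
    # with Gaussian noise, then filter candidates constraint by constraint.
    noisy = {c: r + random.gauss(0, noise) for c, r in ranking.items()}
    pool = list(candidates)
    for c in sorted(noisy, key=noisy.get, reverse=True):
        best = min(violations[cand][c] for cand in pool)
        pool = [cand for cand in pool if violations[cand][c] == best]
        if len(pool) == 1:
            break
    return pool[0]

def gla_step(ranking, violations, target, output, plasticity=0.1):
    # GLA update: on an error, demote constraints that penalize the adult
    # target more than the learner's output, and promote the reverse.
    if output == target:
        return
    for c in ranking:
        if violations[target][c] > violations[output][c]:
            ranking[c] -= plasticity
        elif violations[target][c] < violations[output][c]:
            ranking[c] += plasticity

# Toy case: preserving vs. reducing a word-final sonorant+obstruent cluster.
candidates = ["ilk", "ik"]  # faithful form vs. cluster-reduced form
violations = {"ilk": {"*ComplexCoda": 1, "Max": 0},
              "ik":  {"*ComplexCoda": 0, "Max": 1}}
ranking = {"*ComplexCoda": 100.0, "Max": 90.0}  # child initially reduces
for _ in range(2000):  # repeated exposure to the adult form "ilk"
    gla_step(ranking, violations, "ilk",
             evaluate(ranking, candidates, violations))
print(ranking)  # Max should now outrank *ComplexCoda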
Japanese Listeners' Perceptions of Phonotactic Violations
ERIC Educational Resources Information Center
Fais, Laurel; Kajikawa, Sachiyo; Werker, Janet; Amano, Shigeaki
2005-01-01
The canonical form for Japanese words is (Consonant)Vowel(Consonant)Vowel~. However, a regular process of high vowel devoicing between voiceless consonants and word-finally after voiceless consonants results in consonant clusters and word-final consonants, apparent violations of that phonotactic pattern. We investigated Japanese…
Shollenbarger, Amy J; Robinson, Gregory C; Taran, Valentina; Choi, Seo-Eun
2017-10-05
This study explored how typically developing 1st grade African American English (AAE) speakers differ from mainstream American English (MAE) speakers in the completion of 2 common phonological awareness tasks (rhyming and phoneme segmentation) when the stimulus items were consonant-vowel-consonant-consonant (CVCC) words and nonwords. Forty-nine 1st graders met criteria for 2 dialect groups: AAE and MAE. Three conditions were tested in each rhyme and segmentation task: Real Words No Model, Real Words With a Model, and Nonwords With a Model. Across all experimental conditions, the AAE group rhymed CVCC words with consonant-vowel-consonant words and segmented CVCC words as consonant-vowel-consonant significantly more often than the MAE group. In the rhyming task, the presence of a model in the real word condition elicited more reduced final cluster responses from both groups. In the segmentation task, the MAE group was at ceiling, so only the AAE group changed across the different stimulus presentations, reducing the final cluster less often when given a model. Rhyming and phoneme segmentation performance can be influenced by a child's dialect when CVCC words are used.
ERIC Educational Resources Information Center
Shollenbarger, Amy J.; Robinson, Gregory C.; Taran, Valentina; Choi, Seo-eun
2017-01-01
Purpose: This study explored how typically developing 1st grade African American English (AAE) speakers differ from mainstream American English (MAE) speakers in the completion of 2 common phonological awareness tasks (rhyming and phoneme segmentation) when the stimulus items were consonant-vowel-consonant-consonant (CVCC) words and nonwords.…
Reviewing Sonority for Word-Final Sonorant+Obstruent Consonant Cluster Development in Turkish
ERIC Educational Resources Information Center
Topbas, Seyhun; Kopkalli-Yavuz, Handan
2008-01-01
The purpose of this study is to investigate the acquisition patterns of sonorant+obstruent coda clusters in Turkish to determine whether Turkish data support the prediction the Sonority Sequencing Principle (SSP) makes as to which consonant (i.e. C1 or C2) is more likely to be preserved in sonorant+obstruent clusters, and the error patterns of…
Phonological awareness of English by Chinese and Korean bilinguals
NASA Astrophysics Data System (ADS)
Chung, Hyunjoo; Schmidt, Anna; Cheng, Tse-Hsuan
2002-05-01
This study examined non-native speakers' phonological awareness of spoken English. Chinese-speaking adults, Korean-speaking adults, and English-speaking adults were tested. The L2 speakers had been in the US for less than 6 months. Chinese and Korean allow no consonant clusters and have limited numbers of consonants allowable in syllable-final position, whereas English allows a variety of clusters and various consonants in syllable-final position. Subjects participated in eight phonological awareness tasks (4 replacement tasks and 4 deletion tasks) based on English phonology. In addition, digit span was measured. Preliminary analysis indicates that Chinese and Korean speakers' errors appear to reflect L1 influences (such as orthography, phonotactic constraints, and phonology). All three groups of speakers showed more difficulty with manipulation of rime than onset, especially with postvocalic nasals. Results will be discussed in terms of syllable structure, L1 influence, and association with short-term memory.
Acquisition of /S/ Clusters in English-Speaking Children with Phonological Disorders
ERIC Educational Resources Information Center
Yavas, Mehmet; McLeod, Sharynne
2010-01-01
Two member onset consonant clusters with /s/ as the first member (#sC onsets) behave differently from other double onset consonant clusters in English. Phonological explanations of children's consonant cluster production have been posited to predict children's speech acquisition. The aim of this study was to consider the role of the Sonority…
Spanish Dyslexic Spelling Abilities: The Case of Consonant Clusters
ERIC Educational Resources Information Center
Serrano, Francisca; Defior, Sylvia
2012-01-01
This paper investigates Spanish dyslexic spelling abilities: specifically, the influence of syllabic linguistic structure (simple vs consonant cluster) on children's spelling performance. Consonant clusters are phonologically complex structures, so it was anticipated that there would be lower spelling performance for these syllabic structures than…
Infant Discrimination of a Morphologically Relevant Word-Final Contrast
ERIC Educational Resources Information Center
Fais, Laurel; Kajikawa, Sachiyo; Amano, Shigeaki; Werker, Janet F.
2009-01-01
Six-, 12-, and 18-month-old English-hearing infants were tested on their ability to discriminate nonword forms ending in the final stop consonants /k/ and /t/ from their counterparts with final /s/ added, resulting in final clusters /ks/ and /ts/, in a habituation-dishabituation, looking time paradigm. Infants at all 3 ages demonstrated an ability…
Wiese, Richard; Orzechowska, Paula; Alday, Phillip M.; Ulbrich, Christiane
2017-01-01
Phonological knowledge of a language involves knowledge about which segments can be combined under what conditions. Languages vary in the quantity and quality of licensed combinations, in particular sequences of consonants, with Polish being a language with a large inventory of such combinations. The present paper reports on a two-session experiment in which Polish-speaking adult participants learned nonce words with final consonant clusters. The aim was to study the role of two factors which potentially play a role in the learning of phonotactic structures: the phonological principle of sonority (ordering sound segments within the syllable according to their inherent loudness) and the (non-)existence of clusters as a usage-based phenomenon. EEG responses in two different time windows (unlike behavioral responses) show linguistic processing by native speakers of Polish to be sensitive to both distinctions, in spite of the fact that Polish is rich in sonority-violating clusters. In particular, a general learning effect in terms of an N400 effect was found, which differed between sonority-obeying and sonority-violating clusters. Furthermore, significant interactions of well-formedness and session, and of existence and session, demonstrate that both factors, the sonority principle and the frequency pattern, play a role in the learning process. PMID:28119642
ERIC Educational Resources Information Center
Young, Edna Carter; Thompson, Cynthia K.
1987-01-01
The effects of treatment on errors in consonant clusters and in ambisyllabic consonants were investigated in two adults with histories of developmental phonological problems. Results indicated that treatment, consisting of a sound-referenced rebus approach, effected change in production of trained words as well as generalization to untrained words…
Influence of syllable structure on L2 auditory word learning.
Hamada, Megumi; Goya, Hideki
2015-04-01
This study investigated the role of syllable structure in L2 auditory word learning. Based on research on cross-linguistic variation of speech perception and lexical memory, it was hypothesized that Japanese L1 learners of English would learn English words with an open-syllable structure without consonant clusters better than words with a closed-syllable structure and consonant clusters. Two groups of college students (Japanese group, N = 22; and native speakers of English, N = 21) learned paired English pseudowords and pictures. The pseudoword types differed in terms of the syllable structure and consonant clusters (congruent vs. incongruent) and the position of consonant clusters (coda vs. onset). Recall accuracy was higher for the pseudowords in the congruent type and the pseudowords with the coda-consonant clusters. The syllable structure effect was obtained from both participant groups, disconfirming the hypothesized cross-linguistic influence on L2 auditory word learning.
ERIC Educational Resources Information Center
Snellings, Patrick; van der Leij, Aryan; Blok, Henk; de Jong, Peter F.
2010-01-01
This study investigated the role of speech perception accuracy and speed in fluent word decoding of reading disabled (RD) children. A same-different phoneme discrimination task with natural speech tested the perception of single consonants and consonant clusters by young but persistent RD children. RD children were slower than chronological age…
English speech acquisition in 3- to 5-year-old children learning Russian and English.
Gildersleeve-Neumann, Christina E; Wright, Kira L
2010-10-01
English speech acquisition in Russian-English (RE) bilingual children was investigated, exploring the effects of Russian phonetic and phonological properties on English single-word productions. Russian has more complex consonants and clusters and a smaller vowel inventory than English. One hundred thirty-seven single-word samples were phonetically transcribed from 14 RE and 28 English-only (E) children, ages 3;3 (years;months) to 5;7. Language and age differences were compared descriptively for phonetic inventories. Multivariate analyses compared phoneme accuracy and error rates between the two language groups. RE children produced Russian-influenced phones in English, including palatalized consonants and trills, and demonstrated significantly higher rates of trill substitution, final devoicing, and vowel errors than E children, suggesting Russian language effects on English. RE and E children did not differ in their overall production complexity, with similar final consonant deletion and cluster reduction error rates, similar phonetic inventories by age, and similar levels of phonetic complexity. Both older language groups were more accurate than the younger language groups. We observed effects of Russian on English speech acquisition; however, there were similarities between the RE and E children that have not been reported in previous studies of speech acquisition in bilingual children. These findings underscore the importance of knowing the phonological properties of both languages of a bilingual child in assessment.
Syllabification of Final Consonant Clusters: A Salient Pronunciation Problem of Kurdish EFL Learners
ERIC Educational Resources Information Center
Keshavarz, Mohammad Hossein
2017-01-01
While there is a plethora of research on pronunciation problems of EFL learners with different L1 backgrounds, published empirical studies on syllabification errors of Iraqi Kurdish EFL learners are scarce. Therefore, to contribute to this line of research, the present study set out to investigate difficulties of this group of learners in the…
Consonant Cluster Acquisition by L2 Thai Speakers
ERIC Educational Resources Information Center
Rungruang, Apichai
2017-01-01
Attempts to account for consonant cluster acquisition are always made into two aspects. One is transfer of the first language (L1), and another is markedness effects on the developmental processes in second language acquisition. This study has continued these attempts by finding out how well Thai university students were able to perceive English…
Asymmetries in the Acquisition of Word-Initial and Word-Final Consonant Clusters
ERIC Educational Resources Information Center
Kirk, Cecilia; Demuth, Katherine
2005-01-01
Effects of negative input for 13 categories of grammatical error were assessed in a longitudinal study of naturalistic adult-child discourse. Two-hour samples of conversational interaction were obtained at two points in time, separated by a lag of 12 weeks, for 12 children (mean age 2;0 at the start). The data were interpreted within the framework…
The phonological abilities of Cantonese-speaking children with hearing loss.
Dodd, B J; So, L K
1994-06-01
Little is known about the acquisition of phonology by children with hearing loss who learn languages other than English. In this study, the phonological abilities of 12 Cantonese-speaking children (ages 4;2 to 6;11) with prelingual hearing impairment are described. All but 3 children had almost complete syllable-initial consonant repertoires; all but 2 had complete syllable-final consonant and vowel repertoires; and only 1 child failed to produce all nine tones. Children's perception of single words was assessed using sets of words that included tone, consonant, and semantic distractors. Although the performance of the subjects was not age appropriate, they nevertheless most often chose the target, with most errors observed for the tone distractor. The phonological rules used included those that characterize the speech of younger hearing children acquiring Cantonese (e.g., cluster reduction, stopping, and deaspiration). However, most children also used at least one unusual phonological rule (e.g., frication, addition, initial consonant deletion, and/or backing). These rules are common in the speech of Cantonese-speaking children diagnosed as phonologically disordered. The influence of the ambient language on children's patterns of phonological errors is discussed.
Phonetic Effects on the Timing of Gestural Coordination in Modern Greek Consonant Clusters
ERIC Educational Resources Information Center
Yip, Jonathan Chung-Kay
2013-01-01
Theoretical approaches to the principles governing the coordination of speech gestures differ in their assessment of the contributions of biomechanical and perceptual pressures on this coordination. Perceptually-oriented accounts postulate that, for consonant-consonant (C1-C2) sequences, gestural timing patterns arise from speakers' sensitivity to…
Investigation into Korean EFL Learners' Acquisition of English /s/ + Consonant Onset Clusters
ERIC Educational Resources Information Center
Choi, Jungyoun
2016-01-01
This paper investigated the phonological acquisition of English /s/ + consonant onset clusters by Korean learners of English as a Foreign Language (EFL) who varied in their levels of proficiency. The data were collected from twenty eighth-graders in a Korean secondary school, who were divided into two groups according to their proficiency: low-…
ERIC Educational Resources Information Center
Faes, Jolien; Gillis, Steven
2017-01-01
In early word productions, the same types of errors are manifest in children with cochlear implants (CI) as in their normally hearing (NH) peers with respect to consonant clusters. However, the incidence of those types and their longitudinal development have not been examined or quantified in the literature thus far. Furthermore, studies on the…
Phoon, Hooi San; Abdullah, Anna Christina; Lee, Lay Wah; Murugaiah, Puvaneswary
2014-05-01
To date, there has been little research done on phonological acquisition in the Malay language of typically developing Malay-speaking children. This study serves to fill this gap by providing a systematic description of Malay consonant acquisition in a large cohort of preschool-aged children between 4 and 6 years old. In the study, 326 Malay-dominant speaking children were assessed using a picture naming task that elicited 53 single words containing all the primary consonants in Malay. Two main analyses were conducted to study their consonant acquisition: (1) age of customary and mastery production of consonants; and (2) consonant accuracy. Results revealed that Malay children acquired all the syllable-initial and syllable-final consonants before age 4;06, with the exception of syllable-final /s/, /h/ and /l/, which were acquired after age 5;06. The development of Malay consonants increased gradually from 4 to 6 years old, with female children performing better than male children. Accuracy by manner of articulation was higher for glides, affricates, nasals, and stops than for fricatives and liquids. In general, syllable-initial consonants were more accurate than syllable-final consonants, while consonants in monosyllabic and disyllabic words were more accurate than those in polysyllabic words. These findings will provide significant information for speech-language pathologists for assessing Malay-speaking children and designing treatment objectives that reflect the course of phonological development in Malay.
ERIC Educational Resources Information Center
Cho, Taehong; McQueen, James M.
2011-01-01
Two experiments examined whether perceptual recovery from Korean consonant-cluster simplification is based on language-specific phonological knowledge. In tri-consonantal C1C2C3 sequences such as /lkt/ and /lpt/ in Seoul Korean, either C1 or C2 can be completely deleted. Seoul Koreans monitored for C2 targets (/p/ or /k/, deleted or preserved) in…
ERIC Educational Resources Information Center
Chan, Alice Y. W.
2006-01-01
This article discusses the strategies used by Cantonese ESL learners to cope with their problems in pronouncing English initial consonant clusters. A small-scale research study was carried out with six secondary and six university students in Hong Kong, who were asked to perform four speech tasks: the reading of a word list, the description of a…
Acquisition of Japanese contracted sounds in L1 phonology
NASA Astrophysics Data System (ADS)
Tsurutani, Chiharu
2002-05-01
Japanese possesses a group of palatalized consonants, known to Japanese scholars as the contracted sounds, [CjV]. English learners of Japanese appear to treat them initially as consonant + glide clusters, where there is an equivalent [Cj] cluster in English, or otherwise tend to insert an epenthetic vowel [CVjV]. The acquisition of the Japanese contracted sounds by first language (L1) learners has not been widely studied compared with the consonant clusters in English, with which they bear a close phonetic resemblance but have quite a different phonological status. This study investigates the L1 acquisition process of the Japanese contracted sounds (a) to observe how the palatalization gesture is acquired in Japanese and (b) to compare the sound acquisition processes of first and second language (L2) learners: Japanese children compared with English learners. To do this, the productions of Japanese children ranging in age from 2.5 to 3.5 years were transcribed and the pattern of misproduction was observed.
Phonological Systems of Speech-Disordered Clients with Positive/Negative Histories of Otitis Media.
ERIC Educational Resources Information Center
Churchill, Janine D.; And Others
1988-01-01
Evaluation of object-naming utterances of articulation-disordered children (ages 3-6) found that subjects with histories of recurrent otitis media during their first 24 months evidenced stridency deletion (in consonant singletons and in consonant clusters) significantly more than did subjects with negative otitis media histories. (Author/DB)
Infants Learn Phonotactic Regularities from Brief Auditory Experience.
ERIC Educational Resources Information Center
Chambers, Kyle E.; Onishi, Kristine H.; Fisher, Cynthia
2003-01-01
Two experiments investigated whether novel phonotactic regularities, not present in English, could be acquired by 16.5-month-olds from brief auditory experience. Subjects listened to consonant-vowel-consonant syllables in which particular consonants were artificially restricted to either initial or final position. Findings in a subsequent…
ERIC Educational Resources Information Center
Kim, Minjung; Kim, Soo-Jin; Stoel-Gammon, Carol
2017-01-01
This study investigates the phonological acquisition of Korean consonants using conversational speech samples collected from sixty monolingual typically developing Korean children aged two, three, and four years. Phonemic acquisition was examined for syllable-initial and syllable-final consonants. Results showed that Korean children acquired stops…
The Role of Consonant/Vowel Organization in Perceptual Discrimination
ERIC Educational Resources Information Center
Chetail, Fabienne; Drabs, Virginie; Content, Alain
2014-01-01
According to a recent hypothesis, the CV pattern (i.e., the arrangement of consonant and vowel letters) constrains the mental representation of letter strings, with each vowel or vowel cluster being the core of a unit. Six experiments with the same/different task were conducted to test whether this structure is extracted prelexically. In the…
Markedness in the Perception of L2 English Consonant Clusters
ERIC Educational Resources Information Center
AlMahmoud, Mahmoud S.
2011-01-01
The central goal of this dissertation is to explore the relative perceptibility of vowel epenthesis in English onset clusters by second language learners whose native language is averse to onset clusters. The dissertation examines how audible vowel epenthesis in different onset clusters is, whether this perceptibility varies from one cluster to…
The Effect of Orthography on the Lexical Encoding of Palatalized Consonants in L2 Russian.
Simonchyk, Ala; Darcy, Isabelle
2018-03-01
The current study investigated the potential facilitative or inhibiting effects of orthography on the lexical encoding of palatalized consonants in L2 Russian. We hypothesized that learners with stable knowledge of orthographic and metalinguistic representations of palatalized consonants would display more accurate lexical encoding of the plain/palatalized contrast. The participants of the study were 40 American learners of Russian. Ten Russian native speakers served as a control group. The materials of the study comprised 20 real words, familiar to the participants, with target coronal consonants alternating in word-final and intervocalic positions. The participants performed three tasks: written picture naming, metalinguistic, and auditory word-picture matching. Results showed that learners were not entirely familiar with the grapheme-phoneme correspondences in L2 Russian. Even though they spelled almost all of these familiar Russian words accurately, they were able to identify the plain/palatalized status of the target consonants in these words with about 80% accuracy on a metalinguistic task. The effect of orthography on the lexical encoding was found to be dependent on the syllable position of the target consonants. In intervocalic position, learners erroneously relied on vowels following the target consonants rather than the consonants themselves to encode words with plain/palatalized consonants. In word-final position, although learners possessed the orthographic and metalinguistic knowledge of the difference in the palatalization status of the target consonants, and hence had established some aspects of the lexical representations for the words, those representations appeared to lack phonological granularity and detail, perhaps due to the lack of perceptual salience.
Segmentation and Representation of Consonant Blends in Kindergarten Children's Spellings
ERIC Educational Resources Information Center
Werfel, Krystal L.; Schuele, C. Melanie
2012-01-01
Purpose: The purpose of this study was to describe the growth of children's segmentation and representation of consonant blends in the kindergarten year and to evaluate the extent to which linguistic features influence segmentation and representation of consonant blends. Specifically, the roles of word position (initial blends, final blends),…
Choosing between Alternative Spellings of Sounds: The Role of Context
ERIC Educational Resources Information Center
Treiman, Rebecca; Kessler, Brett
2016-01-01
We investigated how university students select between alternative spellings of phonemes in written production by asking them to spell nonwords whose final consonants have extended spellings (e.g., ‹ff› for /f/) and simpler spellings (e.g., ‹f› for /f/). Participants' choices of spellings for the final consonant were influenced by whether they…
Bartle-Meyer, Carly J; Goozee, Justine V; Murdoch, Bruce E
2009-02-01
The current study aimed to use electromagnetic articulography (EMA) to investigate the effect of increasing word length on lingual kinematics in acquired apraxia of speech (AOS). Tongue-tip and tongue-back movement was recorded for five speakers with AOS and a concomitant aphasia (mean age = 53.6 years; SD = 12.60) during target consonant production (i.e. /t, s, k/ singletons; /kl, sk/ clusters), for one and two syllable stimuli. The results obtained for each of the participants with AOS were individually compared to those obtained by a control group (n = 12; mean age = 52.08 years; SD = 12.52). Results indicated that the participants with AOS exhibited longer movement durations and, in some instances, larger tongue movements during consonant singletons and consonant cluster constituents embedded within mono- and multisyllabic utterances. Despite this, two participants with AOS exhibited a word length effect that was comparable with the control speakers, and possibly indicative of an intact phonological system.
Describing Phonological Paraphasias in Three Variants of Primary Progressive Aphasia.
Dalton, Sarah Grace Hudspeth; Shultz, Christine; Henry, Maya L; Hillis, Argye E; Richardson, Jessica D
2018-03-01
The purpose of this study was to describe the linguistic environment of phonological paraphasias in 3 variants of primary progressive aphasia (semantic, logopenic, and nonfluent) and to describe the profiles of paraphasia production for each of these variants. Discourse samples of 26 individuals diagnosed with primary progressive aphasia were investigated for phonological paraphasias using the criteria established for the Philadelphia Naming Test (Moss Rehabilitation Research Institute, 2013). Phonological paraphasias were coded for paraphasia type, part of speech of the target word, target word frequency, type of segment in error, word position of consonant errors, type of error, and degree of change in consonant errors. Eighteen individuals across the 3 variants produced phonological paraphasias. Most paraphasias were nonword, followed by formal, and then mixed, with errors primarily occurring on nouns and verbs, with relatively few on function words. Most errors were substitutions, followed by addition and deletion errors, and few sequencing errors. Errors were evenly distributed across vowels, consonant singletons, and clusters, with more errors occurring in initial and medial positions of words than in the final position of words. Most consonant errors consisted of only a single-feature change, with few 2- or 3-feature changes. Importantly, paraphasia productions by variant differed from these aggregate results, with unique production patterns for each variant. These results suggest that a system where paraphasias are coded as present versus absent may be insufficient to adequately distinguish between the 3 subtypes of PPA. The 3 variants demonstrate patterns that may be used to improve phenotyping and diagnostic sensitivity. These results should be integrated with recent findings on phonological processing and speech rate. Future research should attempt to replicate these results in a larger sample of participants with longer speech samples and varied elicitation tasks. https://doi.org/10.23641/asha.5558107.
Influence of Initial and Final Consonants on Vowel Duration in CVC Syllables.
ERIC Educational Resources Information Center
Naeser, Margaret A.
This study investigates the influence of initial and final consonants /p, b, s, z/ on the duration of four vowels /I, i, u, ae/ in 64 CVC syllables uttered by eight speakers of English from the same dialect area. The CVC stimuli were presented to the subjects in a frame sentence from a master tape. Subjects repeated each sentence immediately after…
Effects of Word Position on the Acoustic Realization of Vietnamese Final Consonants.
Tran, Thi Thuy Hien; Vallée, Nathalie; Granjon, Lionel
2018-05-28
A variety of studies have shown differences between phonetic features of consonants according to their prosodic and/or syllable (onset vs. coda) positions. However, differences are not always found, and interactions between the various factors involved are complex and not well understood. Our study compares acoustical characteristics of coda consonants in Vietnamese taking into account their position within words. Traditionally described as monosyllabic, Vietnamese is partially polysyllabic at the lexical level. In this language, tautosyllabic consonant sequences are prohibited, and adjacent consonants are only found at syllable boundaries either within polysyllabic words (CVC.CVC) or across monosyllabic words (CVC#CVC). This study is designed to examine whether or not syllable boundary types (interword vs. intraword) have an effect on the acoustic realization of codas. The results show significant acoustic differences in consonant realizations according to syllable boundary type, suggesting different coarticulation patterns between nuclei and codas. In addition, as Vietnamese voiceless stops are generally unreleased in coda position, with no burst to carry consonantal information, our results show that a vowel's second half contains acoustic cues which are available to aid in the discrimination of place of articulation of the vowel's following consonant.
Gagnon, Bernadine; Miozzo, Michele
2017-01-01
Purpose: This study aimed to test whether an approach to distinguishing errors arising in phonological processing from those arising in motor planning also predicts the extent to which repetition-based training can lead to improved production of difficult sound sequences. Method: Four individuals with acquired speech production impairment who produced consonant cluster errors involving deletion were examined using a repetition task. We compared the acoustic details of productions with deletion errors in target consonant clusters to singleton consonants. Changes in accuracy over the course of the study were also compared. Results: Two individuals produced deletion errors consistent with a phonological locus of the errors, and 2 individuals produced errors consistent with a motoric locus of the errors. The 2 individuals who made phonologically driven errors showed no change in performance on a repetition training task, whereas the 2 individuals with motoric errors improved in their production of both trained and untrained items. Conclusions: The results extend previous findings about a metric for identifying the source of sound production errors in individuals with both apraxia of speech and aphasia. In particular, this work may provide a tool for identifying predominant error types in individuals with complex deficits. PMID:28655044
Stress Domain Effects in French Phonology and Phonological Development.
Rose, Yvan; Dos Santos, Christophe
In this paper, we discuss two distinct data sets. The first relates to the so-called allophonic process of closed-syllable laxing in Québec French, which targets final (stressed) vowels even though these vowels are arguably syllabified in open syllables in lexical representations. The second is found in the forms produced by a first language learner of European French, who displays an asymmetry in her production of CVC versus CVCV target (adult) forms. The former display full preservation (with concomitant manner harmony) of both consonants. The latter undergo deletion of the initial syllable if the consonants are not manner-harmonic in the input. We argue that both patterns can be explained through a phonological process of prosodic strengthening targeting the head of the prosodic domain which, in the contexts described above, yields the incorporation of final consonants into the coda of the stressed syllable.
ERIC Educational Resources Information Center
Ota, Mitsuhiko; Green, Sam J.
2013-01-01
Although it has been often hypothesized that children learn to produce new sound patterns first in frequently heard words, the available evidence in support of this claim is inconclusive. To re-examine this question, we conducted a survival analysis of word-initial consonant clusters produced by three children in the Providence Corpus (0;11-4;…
Speech characteristics in a Ugandan child with a rare paramedian craniofacial cleft: a case report.
Van Lierde, K M; Bettens, K; Luyten, A; De Ley, S; Tungotyo, M; Balumukad, D; Galiwango, G; Bauters, W; Vermeersch, H; Hodges, A
2013-03-01
The purpose of this study is to describe the speech characteristics in an English-speaking Ugandan boy of 4.5 years who has a rare paramedian craniofacial cleft (unilateral lip, alveolar, palatal, nasal and maxillary cleft, and associated hypertelorism). Closure of the lip together with the closure of the hard and soft palate (one-stage palatal closure) was performed at the age of 5 months. Objective as well as subjective speech assessment techniques were used. The speech samples were perceptually judged for articulation, intelligibility and nasality. The Nasometer was used for the objective measurement of the nasalance values. The most striking communication problems in this child with the rare craniofacial cleft are an incomplete phonetic inventory, a severely impaired speech intelligibility with the presence of very severe hypernasality, mild nasal emission, phonetic disorders (omission of several consonants, decreased intraoral pressure in plosives, insufficient frication of fricatives and the use of a middorsum palatal stop) and phonological disorders (deletion of initial and final consonants and consonant clusters). The increased objective nasalance values are in agreement with the presence of the audible nasality disorders. The results revealed that several phonetic and phonological articulation disorders together with a decreased speech intelligibility and resonance disorders are present in the child with a rare craniofacial cleft. To what extent a secondary surgery for velopharyngeal insufficiency, combined with speech therapy, will improve speech intelligibility, articulation and resonance characteristics is a subject for further research. The results of such analyses may ultimately serve as a starting point for specific surgical and logopedic treatment that addresses the specific needs of children with rare facial clefts.
Phonetic difficulty and stuttering in English
Howell, Peter; Au-Yeung, James; Yaruss, Scott; Eldridge, Kevin
2007-01-01
Previous work has shown that phonetic difficulty affects older, but not younger, speakers who stutter and that older speakers experience more difficulty on content words than function words. The relationship between stuttering rate and a recently developed index of phonetic complexity (IPC, Jakielski, 1998) was examined in this study separately for function and content words for speakers in 6-11, 11-18, and 18+ age groups. The hypothesis that stuttering rate on the content words of older speakers, but not younger speakers, would be related to the IPC score was supported. It is argued that the similarity between results using the IPC scores and a previous analysis that looked at late emerging consonants, consonant strings and multiple syllables (also conducted on function and content words separately) validates the former instrument. In further analyses, the factors that are most likely to lead to stuttering in English and their order of importance were established. The order found was consonant by manner, consonant by place, word length and contiguous consonant clusters. As the effects of phonetic difficulty are evident in the teenage years and adulthood, at least some of the factors may have an acquired influence on stuttering (rather than an innate universal basis, as the theory behind Jakielski's work suggests). This may be established in future work by doing cross-linguistic comparisons to see which factors operate universally. Disfluency on function words in early childhood appears to be responsive to factors other than phonetic complexity. PMID:17342878
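As a rough illustration of how a complexity index over the four ranked factors just listed might be computed, here is a hypothetical, much-simplified scorer. The ASCII phoneme classes, one-point weights, and function name are assumptions for illustration only; Jakielski's actual IPC uses eight weighted parameters.

# Hypothetical IPC-style word scorer: one point per late-emerging-manner
# consonant, per dorsal consonant, per cluster of two or more consonants,
# plus one point for words of three or more syllables (vowel count).
LATE_MANNER = set("fvszSZCJlr")  # fricatives, affricates, liquids (ASCII stand-ins)
DORSAL = set("kgN")              # velar consonants (N stands in for engma)
VOWELS = set("aeiouAEIOU@")      # @ stands in for schwa

def ipc_like(word):
    score = sum(ch in LATE_MANNER for ch in word)  # consonants by manner
    score += sum(ch in DORSAL for ch in word)      # consonants by place
    if sum(ch in VOWELS for ch in word) >= 3:      # word length in syllables
        score += 1
    run = 0
    for ch in word:                                # contiguous consonant clusters
        run = run + 1 if ch not in VOWELS else 0
        if run == 2:  # count each cluster once, at its second consonant
            score += 1
    return score

print(ipc_like("kat"), ipc_like("skwerl"))  # simple CVC vs. cluster-heavy word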
Papers from the Linguistics Laboratory. Working Papers in Linguistics, No. 50.
ERIC Educational Resources Information Center
Ainsworth-Darnell, Kim, Ed.; D'Imperio, Mariapaola, Ed.
Research reports included in this volume of working papers in linguistics are: "Perception of Consonant Clusters and Variable Gap Time" (Mike Cahill); "Near-Merger in Russian Palatalization" (Erin Diehm, Keith Johnson); "Breadth of Focus, Modality, and Prominence Perception in Neapolitan Italian" (Mariapaola…
Intra-oral pressure-based voicing control of electrolaryngeal speech with intra-oral vibrator.
Takahashi, Hirokazu; Nakao, Masayuki; Kikuchi, Yataro; Kaga, Kimitaka
2008-07-01
In normal speech, coordinated activities of intrinsic laryngeal muscles suspend a glottal sound at utterance of voiceless consonants, automatically realizing a voicing control. In electrolaryngeal speech, however, the lack of voicing control is one of the causes of unclear voice, voiceless consonants tending to be misheard as the corresponding voiced consonants. In the present work, we developed an intra-oral vibrator with an intra-oral pressure sensor that detected utterance of voiceless phonemes during intra-oral electrolaryngeal speech, and demonstrated that an intra-oral pressure-based voicing control could improve the intelligibility of the speech. The test voices were obtained from one electrolaryngeal speaker and one normal speaker. We first investigated, using speech analysis software, how the voice onset time (VOT) and first formant (F1) transition of the test consonant-vowel syllables contributed to voiceless/voiced contrasts, and developed an adequate voicing control strategy. We then compared the intelligibility of consonant-vowel syllables in intra-oral electrolaryngeal speech with and without online voicing control. An increase in intra-oral pressure, typically with a peak ranging from 10 to 50 gf/cm2, could reliably identify utterance of voiceless consonants. The speech analysis and intelligibility test then demonstrated that a short VOT caused the misidentification of the voiced consonants due to a clear F1 transition. Finally, taking these results together, the online voicing control, which suspended the prosthetic tone while the intra-oral pressure exceeded 2.5 gf/cm2 and during the 35 milliseconds that followed, proved effective in improving the voiceless/voiced contrast.
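The online control rule in the final sentence reduces to a few lines of logic. Below is a minimal sketch in Python, assuming a uniformly sampled pressure signal; the function name and sampling interface are assumptions, while the threshold and hold time simply restate the values reported above.

def voicing_gate(pressure, fs, threshold=2.5, hold_ms=35.0):
    # Per-sample tone decision: suspend the prosthetic tone while intra-oral
    # pressure (gf/cm2) exceeds the threshold and for hold_ms afterwards.
    hold = int(fs * hold_ms / 1000.0)
    gate, off_until = [], -1
    for i, p in enumerate(pressure):
        if p > threshold:
            off_until = i + hold    # extend the suspension window
        gate.append(i > off_until)  # True = tone on, False = suspended
    return gate

# At 1 kHz, a single pressure spike silences the tone for the next 35 ms.
print(voicing_gate([0.0, 3.0, 0.0, 0.0, 0.0], fs=1000))
# -> [True, False, False, False, False]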
Phonological and Motor Errors in Individuals with Acquired Sound Production Impairment
ERIC Educational Resources Information Center
Buchwald, Adam; Miozzo, Michele
2012-01-01
Purpose: This study aimed to compare sound production errors arising due to phonological processing impairment with errors arising due to motor speech impairment. Method: Two speakers with similar clinical profiles who produced similar consonant cluster simplification errors were examined using a repetition task. We compared both overall accuracy…
Neural Correlates of Sublexical Processing in Phonological Working Memory
ERIC Educational Resources Information Center
McGettigan, Carolyn; Warren, Jane E.; Eisner, Frank; Marshall, Chloe R.; Shanmugalingam, Pradheep; Scott, Sophie K.
2011-01-01
This study investigated links between working memory and speech processing systems. We used delayed pseudoword repetition in fMRI to investigate the neural correlates of sublexical structure in phonological working memory (pWM). We orthogonally varied the number of syllables and consonant clusters in auditory pseudowords and measured the neural…
Owen Van Horne, Amanda J.; Green Fager, Melanie
2015-01-01
Purpose: Children with specific language impairment (SLI) frequently have difficulty producing the past tense. This study aimed to quantify the relative influence of telicity (i.e., the completedness of an event), verb frequency, and stem-final phonemes on the production of past tense by school-age children with SLI and their typically developing (TD) peers. Method: Archival elicited production data from children with SLI between the ages of 6 and 9 and TD peers ages 4 to 8 were reanalyzed. Past tense accuracy was predicted using measures of telicity, verb frequency measures, and properties of the final consonant of the verb stem. Results: All children were highly accurate when verbs were telic, the inflected form was frequently heard in the past tense, and the word ended in a sonorant/non-alveolar consonant. All children were less accurate when verbs were atelic, rarely heard in the past tense, or ended in a word-final obstruent or alveolar consonant. SLI status depressed overall accuracy rates but did not influence how facilitative a given factor was. Conclusion: Some factors that have been believed to be useful only when children are first discovering past tense, such as telicity, appear to be influential in later years as well. PMID:25879455
Computational Approach to Musical Consonance and Dissonance
Trulla, Lluis L.; Di Stefano, Nicola; Giuliani, Alessandro
2018-01-01
In the sixth century BC, Pythagoras discovered the mathematical foundation of musical consonance and dissonance. When auditory frequencies in small-integer ratios are combined, the result is a harmonious perception. In contrast, most frequency combinations result in audible, off-centered by-products labeled “beating” or “roughness”; these are reported by most listeners to sound dissonant. In this paper, we consider second-order beats, a kind of beating recognized as a product of neural processing, and demonstrate that the data-driven approach of Recurrence Quantification Analysis (RQA) allows for the reconstruction of the order in which interval ratios are ranked in music theory and harmony. We take advantage of computer-generated sounds containing all intervals over the span of an octave. To visualize second-order beats, we use a glissando from the unison to the octave. This procedure produces a profile of recurrence values that correspond to subsequent epochs along the original signal. We find that the higher recurrence peaks exactly match the epochs corresponding to just intonation frequency ratios. This result indicates a link between consonance and the dynamical features of the signal. Our findings integrate a new element into the existing theoretical models of consonance, thus providing a computational account of consonance in terms of dynamical systems theory. Finally, as it considers general features of acoustic signals, the present approach demonstrates a universal aspect of consonance and dissonance perception and provides a simple mathematical tool that could serve as a common framework for further neuro-psychological and music theory research. PMID:29670552
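To make the RQA idea concrete, here is a minimal numpy sketch on synthetic two-tone mixtures. The embedding dimension, delay, epsilon choice, and subsampling are illustrative assumptions, a crude stand-in for the paper's glissando-plus-RQA pipeline; one would expect simpler ratios to yield higher recurrence, echoing the reported peaks at just intonation ratios.

import numpy as np

def delay_embed(x, dim=3, tau=5):
    # Stack delayed copies of the signal into embedding vectors.
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def recurrence_rate(x, dim=3, tau=5, eps_frac=0.1, max_pts=400):
    # Simplest RQA measure: fraction of embedded point pairs closer
    # than eps, computed on a subsampled trajectory.
    emb = delay_embed(np.asarray(x, float), dim, tau)
    if len(emb) > max_pts:
        emb = emb[np.linspace(0, len(emb) - 1, max_pts).astype(int)]
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    off_diag = ~np.eye(len(emb), dtype=bool)
    return float((d[off_diag] < eps_frac * d.max()).mean())

fs, f0 = 8000, 220.0
t = np.arange(0, 0.25, 1 / fs)
for name, ratio in [("unison 1:1", 1.0), ("fifth 3:2", 1.5),
                    ("fourth 4:3", 4 / 3), ("tritone 45:32", 45 / 32)]:
    two_tone = np.sin(2 * np.pi * f0 * t) + np.sin(2 * np.pi * f0 * ratio * t)
    print(name, round(recurrence_rate(two_tone), 3))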
Adult perceptions of phonotactic violations in Japanese
NASA Astrophysics Data System (ADS)
Fais, Laurel; Kajikawa, Sachiyo; Werker, Janet; Amano, Shigeaki
2004-05-01
Adult Japanese speakers "hear" epenthetic vowels in productions of Japanese-like words that violate the canonical CVCVCV form by containing internal consonant clusters (CVCCV) [Dupoux et al., J. Exp. Psychol. 25, 1568-1578 (1999)]. Given this finding, this research examined how Japanese adults rated the goodness of Japanese-like words produced without a vowel in the final syllable (CVC), and words produced without vowels in the penultimate and final syllables (CVCC). Furthermore, in some of these contexts, voiceless vowels may appear in fluent, casual Japanese productions, especially in the Kanto dialect, and in some, such voiceless vowels may not appear. Results indicate that both Kanto and Kinki speakers rated CVC productions for contexts in which voiceless vowels are not allowed as the worst; they rated CVC and CVCC contexts in which voiceless vowel productions are allowed as better. In these latter contexts, the CVC words, which result from the loss of one, final, vowel, are judged to be better than the CVCC words, which result from the loss of two (final and penultimate) vowels. These results mirror the relative seriousness of the phonotactic violations and indicate listeners have tacit knowledge of these regularities in their language.
Lidestam, Björn; Rönnberg, Jerker
2016-01-01
The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. PMID:27317667
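The isolation-point measure defined above is straightforward to operationalize. A minimal sketch follows, assuming per-gate correctness has already been scored; the "correct from here through the last gate" criterion is one common operationalization, not necessarily the exact scoring used in this study.

def isolation_point(gate_times_ms, correct):
    # IP: earliest gate time from which identification is correct and
    # remains correct through the final gate; None if never isolated.
    ip = None
    for t, ok in zip(gate_times_ms, correct):
        if ok and ip is None:
            ip = t        # tentative isolation point
        elif not ok:
            ip = None     # a later error cancels the tentative IP
    return ip

# Example: responses become and stay correct from the 120-ms gate onward.
print(isolation_point([40, 80, 120, 160, 200],
                      [False, False, True, True, True]))  # -> 120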
Neural Representations Used by Brain Regions Underlying Speech Production
ERIC Educational Resources Information Center
Segawa, Jennifer Anne
2013-01-01
Speech utterances are phoneme sequences but may not always be represented as such in the brain. For instance, electropalatography evidence indicates that as speaking rate increases, gestures within syllables are manipulated separately but those within consonant clusters act as one motor unit. Moreover, speech error data suggest that a syllable's…
Testing for OO-Faithfulness in the Acquisition of Consonant Clusters
ERIC Educational Resources Information Center
Tessier, Anne-Michelle
2012-01-01
This article provides experimental evidence for the claim in Hayes (2004) and McCarthy (1998) that language learners are biased to assume that morphological paradigms should be phonologically-uniform--that is, that derived words should retain all the phonological properties of their bases. The evidence comes from an artificial language…
English Speech Acquisition in 3- to 5-Year-Old Children Learning Russian and English
ERIC Educational Resources Information Center
Gildersleeve-Neumann, Christina E.; Wright, Kira L.
2010-01-01
Purpose: English speech acquisition in Russian-English (RE) bilingual children was investigated, exploring the effects of Russian phonetic and phonological properties on English single-word productions. Russian has more complex consonants and clusters and a smaller vowel inventory than English. Method: One hundred thirty-seven single-word samples…
Stimulus Characteristics of Single-Word Tests of Children's Speech Sound Production
ERIC Educational Resources Information Center
Macrae, Toby
2017-01-01
Purpose: This clinical focus article provides readers with a description of the stimulus characteristics of 12 popular tests of speech sound production. Method: Using significance testing and descriptive analyses, stimulus items were compared in terms of the number of opportunities for production of all consonant singletons, clusters, and rhotic…
The Relationship between Speech Impairment, Phonological Awareness and Early Literacy Development
ERIC Educational Resources Information Center
Harris, Judy; Botting, Nicola; Myers, Lucy; Dodd, Barbara
2011-01-01
Although children with speech impairment are at increased risk for impaired literacy, many learn to read and spell without difficulty. Around half the children with speech impairment have delayed acquisition, making errors typical of a normally developing younger child (e.g. reducing consonant clusters so that "spoon" is pronounced as…
The Word Frequency Effect on Second Language Vocabulary Learning
ERIC Educational Resources Information Center
Koirala, Cesar
2015-01-01
This study examines several linguistic factors as possible contributors to perceived word difficulty in second language learners in an experimental setting. The investigated factors include: (1) frequency of word usage in the first language, (2) word length, (3) number of syllables in a word, and (4) number of consonant clusters in a word. Word…
On Sources of the Word Length Effect in Young Readers
ERIC Educational Resources Information Center
Gagl, Benjamin; Hawelka, Stefan; Wimmer, Heinz
2015-01-01
We investigated how letter length, phoneme length, and consonant clusters contribute to the word length effect in 2nd- and 4th-grade children. They read words from three different conditions: In one condition, letter length increased but phoneme length did not, due to multiletter graphemes (Haus-Bauch-Schach). In…
Phonotactics Constraints and the Spoken Word Recognition of Chinese Words in Speech
ERIC Educational Resources Information Center
Yip, Michael C.
2016-01-01
Two word-spotting experiments were conducted to examine the question of whether native Cantonese listeners are constrained by phonotactics information in spoken word recognition of Chinese words in speech. Because no legal consonant clusters occurred within an individual Chinese word, this kind of categorical phonotactics information of Chinese…
Tremblay, Pascale; Small, Steven L.
2011-01-01
What is the nature of the interface between speech perception and production, where auditory and motor representations converge? One set of explanations suggests that during perception, the motor circuits involved in producing a perceived action are in some way enacting the action without actually causing movement (covert simulation) or sending along the motor information to be used to predict its sensory consequences (i.e., efference copy). Other accounts either reject entirely the involvement of motor representations in perception, or explain their role as being more supportive than integral, and not employing the identical circuits used in production. Using fMRI, we investigated whether there are brain regions that are conjointly active for both speech perception and production, and whether these regions are sensitive to articulatory (syllabic) complexity during both processes, which is predicted by a covert simulation account. A group of healthy young adults (1) observed a female speaker produce a set of familiar words (perception), and (2) observed and then repeated the words (production). There were two types of words, varying in articulatory complexity, as measured by the presence or absence of consonant clusters. The simple words contained no consonant cluster (e.g. “palace”), while the complex words contained one to three consonant clusters (e.g. “planet”). Results indicate that the left ventral premotor cortex (PMv) was significantly active during speech perception and speech production but that activation in this region was scaled to articulatory complexity only during speech production, revealing an incompletely specified efferent motor signal during speech perception. The right planum temporale (PT) was also active during speech perception and speech production, and activation in this region was scaled to articulatory complexity during both production and perception. These findings are discussed in the context of current theories of speech perception, with particular attention to accounts that include an explanatory role for mirror neurons. PMID:21664275
Lockart, Rebekah; McLeod, Sharynne
2013-08-01
To investigate speech-language pathology students' ability to identify errors and transcribe typical and atypical speech in Cantonese, a nonnative language. Thirty-three English-speaking speech-language pathology students completed 3 tasks in an experimental within-subjects design. Task 1 (baseline) involved transcribing English words. In Task 2, students transcribed 25 words spoken by a Cantonese adult. An average of 59.1% of consonants was transcribed correctly (72.9% when Cantonese-English transfer patterns were allowed). There was higher accuracy on shared English and Cantonese syllable-initial consonants /m,n,f,s,h,j,w,l/ and syllable-final consonants. In Task 3, students identified consonant errors and transcribed 100 words spoken by Cantonese-speaking children under 4 additive conditions: (1) baseline, (2) +adult model, (3) +information about Cantonese phonology, and (4) all variables (2 and 3 were counterbalanced). There was a significant improvement in the students' identification and transcription scores for conditions 2, 3, and 4, with a moderate effect size. Increased skill was not based on listeners' proficiency in speaking another language, perceived transcription skill, musicality, or confidence with multilingual clients. Speech-language pathology students, with no exposure to or specific training in Cantonese, have some skills to identify errors and transcribe Cantonese. Provision of a Cantonese adult model and information about Cantonese phonology increased students' accuracy in transcribing Cantonese speech.
Yoon, Ji Hye; Jeong, Yong
2018-01-01
Background and Purpose: Korean-speaking patients with a brain injury may show agraphia that differs from that of English-speaking patients due to the unique features of Hangul syllabic writing. Each grapheme in Hangul must be arranged from left to right and/or top to bottom within a square space to form a syllable, which requires greater visuospatial abilities than when writing the letters constituting an alphabetic writing system. Among the Hangul grapheme positions within a syllable, the position of a vowel is important because it determines the writing direction and the whole configuration in Korean syllabic writing. Due to the visuospatial characteristics of the Hangul vowel, individuals with early-onset Alzheimer's disease (EOAD) may experience differences between the difficulties of writing Hangul vowels and consonants due to prominent visuospatial dysfunctions caused by parietal lesions. Methods: Eighteen patients with EOAD and 18 age-and-education-matched healthy adults participated in this study. The participants were requested to listen to and write 30 monosyllabic characters that consisted of an initial consonant, medial vowel, and final consonant with a one-to-one phoneme-to-grapheme correspondence. We measured the writing time for each grapheme, the pause time between writing the initial consonant and the medial vowel (P1), and the pause time between writing the medial vowel and the final consonant (P2). Results: All grapheme writing and pause times were significantly longer in the EOAD group than in the controls. P1 was also significantly longer than P2 in the EOAD group. Conclusions: Patients with EOAD might require a higher judgment ability and longer processing time for determining the visuospatial grapheme position before writing medial vowels. This finding suggests that a longer pause time before writing medial vowels is an early marker of visuospatial dysfunction in patients with EOAD. PMID:29504296
Yoon, Ji Hye; Jeong, Yong; Na, Duk L
2018-04-01
Korean-speaking patients with a brain injury may show agraphia that differs from that of English-speaking patients due to the unique features of Hangul syllabic writing. Each grapheme in Hangul must be arranged from left to right and/or top to bottom within a square space to form a syllable, which requires greater visuospatial abilities than when writing the letters constituting an alphabetic writing system. Among the Hangul grapheme positions within a syllable, the position of a vowel is important because it determines the writing direction and the whole configuration in Korean syllabic writing. Due to the visuospatial characteristics of the Hangul vowel, individuals with early-onset Alzheimer's disease (EOAD) may experience differences between the difficulties of writing Hangul vowels and consonants due to prominent visuospatial dysfunctions caused by parietal lesions. Eighteen patients with EOAD and 18 age-and-education-matched healthy adults participated in this study. The participants were requested to listen to and write 30 monosyllabic characters that consisted of an initial consonant, medial vowel, and final consonant with a one-to-one phoneme-to-grapheme correspondence. We measured the writing time for each grapheme, the pause time between writing the initial consonant and the medial vowel (P1), and the pause time between writing the medial vowel and the final consonant (P2). All grapheme writing and pause times were significantly longer in the EOAD group than in the controls. P1 was also significantly longer than P2 in the EOAD group. Patients with EOAD might require a higher judgment ability and longer processing time for determining the visuospatial grapheme position before writing medial vowels. This finding suggests that a longer pause time before writing medial vowels is an early marker of visuospatial dysfunction in patients with EOAD.
An Experimental Approach to Debuccalization and Supplementary Gestures
ERIC Educational Resources Information Center
O'Brien, Jeremy
2012-01-01
Debuccalization is a weakening phenomenon whereby various consonants reduce to laryngeals. Examples include Spanish s-aspiration (s becomes h word-finally) and English t-glottalization (t becomes glottal stop syllable-finally). Previous analyses of debuccalization view it as a lenition process that deletes or manipulates formal phonological…
Spelling in African American Children: The Case of Final Consonant Devoicing
ERIC Educational Resources Information Center
Treiman, Rebecca; Bowman, Margo
2015-01-01
This study examined the effect of dialect variation on children's spelling by using devoicing of final /d/ in African American Vernacular English (AAVE) as a test case. In line with the linguistic interference hypothesis, African American 6-year-olds were significantly poorer at spelling the final "d" of words such as "salad"…
ERIC Educational Resources Information Center
Davidson, Lisa; Wilson, Colin
2016-01-01
Recent research has shown that speakers are sensitive to non-contrastive phonetic detail present in nonnative speech (e.g. Escudero et al. 2012; Wilson et al. 2014). Difficulties in interpreting and implementing unfamiliar phonetic variation can lead nonnative speakers to modify second language forms by vowel epenthesis and other changes. These…
Influence of Syllable Structure on L2 Auditory Word Learning
ERIC Educational Resources Information Center
Hamada, Megumi; Goya, Hideki
2015-01-01
This study investigated the role of syllable structure in L2 auditory word learning. Based on research on cross-linguistic variation of speech perception and lexical memory, it was hypothesized that Japanese L1 learners of English would learn English words with an open-syllable structure without consonant clusters better than words with a…
ERIC Educational Resources Information Center
Sperbeck, Mieko
2010-01-01
The primary aim of this dissertation was to investigate the relationship between speech perception and speech production difficulties among Japanese second language (L2) learners of English, in their learning complex syllable structures. Japanese L2 learners and American English controls were tested in a categorical ABX discrimination task of…
ERIC Educational Resources Information Center
Pouplier, Marianne; Marin, Stefania; Waltl, Susanne
2014-01-01
Purpose: Phonetic accommodation in speech errors has traditionally been used to identify the processing level at which an error has occurred. Recent studies have challenged the view that noncanonical productions may solely be due to phonetic, not phonological, processing irregularities, as previously assumed. The authors of the present study…
Portuguese Lexical Clusters and CVC Sequences in Speech Perception and Production.
Cunha, Conceição
2015-01-01
This paper investigates similarities between lexical consonant clusters and CVC sequences, which differ in the presence or absence of a lexical vowel, in speech perception and production in two Portuguese varieties. The frequent deletion of high vowels in the European variety (EP) and the realization of intervening vocalic elements between lexical clusters in Brazilian Portuguese (BP) may minimize the contrast between lexical clusters and CVC sequences in the two varieties. To test this hypothesis we present a perception experiment with 72 participants and a physiological analysis of 3-dimensional movement data from 5 EP and 4 BP speakers. The perceptual results confirmed a gradual confusion of lexical clusters and CVC sequences in EP, which corresponded roughly to the gradient consonantal overlap found in production. © 2015 S. Karger AG, Basel.
Phoneme Error Pattern by Heritage Speakers of Spanish on an English Word Recognition Test.
Shi, Lu-Feng
2017-04-01
Heritage speakers acquire their native language through home use in early childhood. Because the native language is typically a minority language in the society, these individuals receive their formal education in the majority language and eventually develop greater competency in the majority language than in their native one. To date, there have been no specific research attempts to understand word recognition by heritage speakers, and it is not clear whether, and to what degree, we may infer from evidence based on bilingual listeners in general. This preliminary study investigated how heritage speakers of Spanish perform on an English word recognition test and analyzed their phoneme errors. A prospective, cross-sectional, observational design was employed. Twelve normal-hearing adult Spanish heritage speakers (four men, eight women, 20-38 yr old) participated in the study. Their language background was obtained through the Language Experience and Proficiency Questionnaire. Nine English monolingual listeners (three men, six women, 20-41 yr old) were also included for comparison purposes. Listeners were presented with 200 Northwestern University Auditory Test No. 6 words in quiet. They repeated each word orally and in writing. Their responses were scored by word, word-initial consonant, vowel, and word-final consonant. Performance was compared between groups with Student's t test or analysis of variance. Analyses of group-specific error patterns were primarily descriptive, but intergroup comparisons were made using 95% or 99% confidence intervals for proportional data. The two groups of listeners yielded comparable scores when their responses were examined by word, vowel, and final consonant. However, heritage speakers of Spanish misidentified significantly more word-initial consonants and had significantly more difficulty with initial /p, b, h/ than their monolingual peers. The two groups yielded similar patterns for vowels and word-final consonants, but heritage speakers made significantly fewer errors with /e/ and more errors with word-final /p, k/. Data reported in the present study lead to a twofold conclusion. On the one hand, normal-hearing heritage speakers of Spanish may misidentify English phonemes in patterns different from those of English monolingual listeners. Not all phoneme errors can be readily understood by comparing Spanish and English phonology, suggesting that Spanish heritage speakers differ in performance from other Spanish-English bilingual listeners. On the other hand, the absolute number of errors and the error pattern of most phonemes were comparable between English monolingual listeners and Spanish heritage speakers, suggesting that audiologists may assess word recognition in quiet in the same way for these two groups of listeners, if diagnosis is based on words, not phonemes. American Academy of Audiology
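The intergroup comparisons above use 95% or 99% confidence intervals for proportional data. A minimal sketch of one standard choice, the Wilson score interval, follows; the error counts are invented for illustration, and the exact interval used in the study may differ.

from math import sqrt

def wilson_ci(errors, n, z=1.96):
    """Wilson score interval for a binomial proportion (z = 1.96 -> 95%, 2.576 -> 99%)."""
    p = errors / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Invented counts: 34 initial-consonant errors out of 200 scored items.
print(wilson_ci(34, 200))         # 95% CI
print(wilson_ci(34, 200, 2.576))  # 99% CI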
Moradi, Shahram; Lidestam, Björn; Rönnberg, Jerker
2016-06-17
The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy for audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of adding visual cues. Both participant groups achieved ceiling levels of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group performed worse than the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. © The Author(s) 2016.
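For concreteness, an isolation point of the kind defined above can be computed from per-gate responses as the duration of the first gate from which responses are correct and remain correct through the final gate. The sketch below assumes a simple data format and is an illustration, not the authors' procedure.

def isolation_point(gate_durations_ms, correct_flags):
    """IP = duration of the first gate of a run of correct responses that
    lasts through the final gate; None if identification never stabilizes."""
    ip = None
    for duration, correct in zip(gate_durations_ms, correct_flags):
        if correct and ip is None:
            ip = duration   # candidate IP: first gate of a correct run
        elif not correct:
            ip = None       # a later error cancels the candidate
    return ip

gates = [40, 80, 120, 160, 200, 240]  # invented gate durations (ms)
print(isolation_point(gates, [False, False, True, True, True, True]))  # -> 120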
Consonants, vowels and tones across Vietnamese dialects.
Phạm, Ben; McLeod, Sharynne
2016-04-01
Vietnamese is spoken by over 89 million people in Vietnam and is one of the most commonly spoken languages other than English in the US, Canada and Australia. Depending on the classification, between one and nine different dialects of Vietnamese are spoken in Vietnam. In Vietnamese schools, children learn Standard Vietnamese, which is based on the northern dialect; however, if they live in other regions they may speak a different dialect at home. This paper describes the differences between the consonants, semivowels, vowels, diphthongs and tones of four dialects: Standard, northern, central and southern Vietnamese. The number and type of initial consonants differ per dialect (i.e. Standard = 23, northern = 20, central = 23, southern = 21). For example, the letter "r" is pronounced in the Standard and central dialects as the retroflex /ʐ/, in the northern dialect as the voiced alveolar fricative /z/ or the trilled /r/, and in the southern dialect as the voiced velar fricative /ɣ/. Additionally, the letter "v" is pronounced in the Standard, northern and central dialects as the voiced labiodental fricative /v/, in the southern dialect as the voiced palatal approximant /j/, and in the lower northern dialect (Ninh Binh) as the voiceless labiodental fricative /f/. Similarly, the number of final consonants differs per dialect (i.e. Standard = 6, northern = 10, central = 10, southern = 8). Finally, the number and type of tones differ per dialect (i.e. Standard = 6, northern = 6, central = 5, southern = 5). Understanding differences between Vietnamese dialects is important so that speech-language pathologists and educators provide appropriate services to people who speak Vietnamese.
Lidestam, Björn; Hällgren, Mathias; Rönnberg, Jerker
2014-01-01
This study compared elderly hearing aid (EHA) users and elderly normal-hearing (ENH) individuals on the identification of auditory speech stimuli (consonants, words, and final words in sentences) that differed in their linguistic properties. We measured the accuracy with which the target speech stimuli were identified, as well as the isolation points (IPs: the shortest duration, from onset, required to correctly identify the speech target). The relationships between working memory capacity, the IPs, and speech accuracy were also measured. Twenty-four EHA users (with mild to moderate hearing impairment) and 24 ENH individuals participated in the present study. Despite the use of their regular hearing aids, the EHA users had delayed IPs and were less accurate in identifying consonants and words compared with the ENH individuals. The EHA users also had delayed IPs for final word identification in sentences with lower predictability; however, no significant between-group difference in accuracy was observed. Finally, there were no significant between-group differences in terms of IPs or accuracy for final word identification in highly predictable sentences. Our results also showed that, among EHA users, greater working memory capacity was associated with earlier IPs and improved accuracy in consonant and word identification. Together, our findings demonstrate that the gated speech perception ability of EHA users was not at the level of ENH individuals, in terms of either IPs or accuracy. In addition, gated speech perception was more cognitively demanding for EHA users than for ENH individuals in the absence of semantic context. PMID:25085610
Xia, Jing; Xu, Buye; Pentony, Shareka; Xu, Jingjing; Swaminathan, Jayaganesh
2018-03-01
Many hearing-aid wearers have difficulty understanding speech in reverberant, noisy environments. This study evaluated the effects of reverberation and noise on speech recognition in normal-hearing listeners and hearing-impaired listeners wearing hearing aids. Sixteen typical acoustic scenes with different amounts of reverberation and various types of noise maskers were simulated using a loudspeaker array in an anechoic chamber. Results showed that, across all listening conditions, the speech intelligibility of aided hearing-impaired listeners was poorer than that of their normal-hearing counterparts. Once corrected for ceiling effects, the differences between the two groups in the effects of reverberation on speech intelligibility were much smaller. This suggests that at least part of the difference in susceptibility to reverberation between normal-hearing and hearing-impaired listeners was due to ceiling effects. Across both groups, a complex interaction between the noise characteristics and reverberation was observed in the speech intelligibility scores. Further fine-grained analyses of the perception of consonants showed that, for both listener groups, final consonants were more susceptible to reverberation than initial consonants. However, differences in the perception of specific consonant features were observed between the groups.
ERIC Educational Resources Information Center
Russak, Susie; Saiegh-Haddad, Elinor
2017-01-01
This article examines the effect of phonological context (singleton vs. clustered consonants) on full phoneme segmentation in Hebrew first language (L1) and in English second language (L2) among typically reading adults (TR) and adults with reading disability (RD) (n = 30 per group), using quantitative analysis and a fine-grained analysis of…
The General Phonetic Characteristics of Languages. Final Report-1967-1968.
ERIC Educational Resources Information Center
Delattre, Pierre
In this final stage of a series of three linguistic studies conducted at the University of California, Santa Barbara, four topics are presented. The longest is a study of consonant gemination in German, Spanish, French, and American English from acoustic, perceptual, and radiographic points of view. Pharyngeal features are studied in the…
On the Phonetic Consonance in Quranic Verse-Final "Fawaṣil"
ERIC Educational Resources Information Center
Aldubai, Nadhim Abdulamalek
2015-01-01
The present research aims to discuss the phonological patterns in Quranic verse-final pauses ("fawaṣil") in order to provide an insight into the phonetic network governing the symmetrical and the asymmetrical pauses ("fawaṣil") in terms of concordance ("al-nasaq al-ṣawti"). The data are collected from different parts…
Shi, Lu-Feng; Morozova, Natalia
2012-08-01
Word recognition is a basic component in a comprehensive hearing evaluation, but data are lacking for listeners who speak two languages. This study obtained such data for Russian natives in the US and analysed the data using the perceptual assimilation model (PAM) and the speech learning model (SLM). Listeners were presented with 200 NU-6 words in quiet, in random order. Listeners responded verbally and in writing. Performance was scored on words and phonemes (word-initial consonants, vowels, and word-final consonants). Seven normal-hearing, adult monolingual English natives (NM), 16 English-dominant (ED), and 15 Russian-dominant (RD) Russian natives participated. ED and RD listeners differed significantly in their language background. Consistent with the SLM, NM outperformed ED listeners and ED outperformed RD listeners, whether responses were scored on words or phonemes. NM and ED listeners shared similar phoneme error patterns, whereas RD listeners' errors had unique patterns that could be largely understood via the PAM. RD listeners had particular difficulty differentiating vowel contrasts /i-ɪ/, /æ-ɛ/, and /ɑ-ʌ/, word-initial consonant contrasts /p-h/ and /b-f/, and the word-final contrast /f-v/. Both first-language phonology and second-language learning history affect word and phoneme recognition. Current findings may help clinicians differentiate word recognition errors due to language background from hearing pathologies.
Effects of prosodic boundary on /aC/ sequences: articulatory results
NASA Astrophysics Data System (ADS)
Tabain, Marija
2003-05-01
This study presents EMA (electromagnetic articulography) data on articulation of the vowel /a/ at different prosodic boundaries in French. Three speakers of metropolitan French produced utterances containing the vowel /a/, preceded by /t/ and followed by one of six consonants /b d g f s ʃ/ (three stops and three fricatives), with different prosodic boundaries intervening between the /a/ and the six different consonants. The prosodic boundaries investigated are the Utterance, the Intonational phrase, the Accentual phrase, and the Word. Data for the Tongue Tip, Tongue Body, and Jaw are presented. The articulatory data presented here were recorded at the same time as the acoustic data presented in Tabain [J. Acoust. Soc. Am. 113, 516-531 (2003)]. Analyses show that there is a strong effect on peak displacement of the vowel according to the prosodic hierarchy, with the stronger prosodic boundaries inducing a much lower Tongue Body and Jaw position than the weaker prosodic boundaries. Durations of both the opening movement into and the closing movement out of the vowel are also affected. Peak velocity of the articulatory movements is also examined, and, contrary to results for phrase-final lengthening, it is found that peak velocity of the opening movement into the vowel tends to increase with the higher prosodic boundaries, together with the increased magnitude of the movement between the consonant and the vowel. Results for the closing movement out of the vowel and into the consonant are not so clear. Since one speaker shows evidence of utterance-level articulatory declension, it is suggested that the competing constraints of articulatory declension and prosodic effects might explain some previous results on phrase-final lengthening.
Aided and Unaided Speech Perception by Older Hearing Impaired Listeners
Woods, David L.; Arbogast, Tanya; Doss, Zoe; Younus, Masood; Herron, Timothy J.; Yund, E. William
2015-01-01
The most common complaint of older hearing impaired (OHI) listeners is difficulty understanding speech in the presence of noise. However, tests of consonant-identification and sentence reception threshold (SeRT) provide different perspectives on the magnitude of impairment. Here we quantified speech perception difficulties in 24 OHI listeners in unaided and aided conditions by analyzing (1) consonant-identification thresholds and consonant confusions for 20 onset and 20 coda consonants in consonant-vowel-consonant (CVC) syllables presented at consonant-specific signal-to-noise (SNR) levels, and (2) SeRTs obtained with the Quick Speech in Noise Test (QSIN) and the Hearing in Noise Test (HINT). Compared to older normal hearing (ONH) listeners, nearly all unaided OHI listeners showed abnormal consonant-identification thresholds, abnormal consonant confusions, and reduced psychometric function slopes. Average elevations in consonant-identification thresholds exceeded 35 dB, correlated strongly with impairments in mid-frequency hearing, and were greater for hard-to-identify consonants. Advanced digital hearing aids (HAs) improved average consonant-identification thresholds by more than 17 dB, with significant HA benefit seen in 83% of OHI listeners. HAs partially normalized consonant-identification thresholds, reduced abnormal consonant confusions, and increased the slope of psychometric functions. Unaided OHI listeners showed much smaller elevations in SeRTs (mean 6.9 dB) than in consonant-identification thresholds and SeRTs in unaided listening conditions correlated strongly (r = 0.91) with identification thresholds of easily identified consonants. HAs produced minimal SeRT benefit (2.0 dB), with only 38% of OHI listeners showing significant improvement. HA benefit on SeRTs was accurately predicted (r = 0.86) by HA benefit on easily identified consonants. Consonant-identification tests can accurately predict sentence processing deficits and HA benefit in OHI listeners. PMID:25730423
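Thresholds and psychometric-function slopes like those above are commonly obtained by fitting a psychometric function to percent correct as a function of SNR. A minimal sketch under that assumption follows; the logistic form and all data are invented for illustration, not taken from the study.

import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, thresh, slope, chance=0.05):
    # Percent correct rises from chance toward 1 as SNR passes the threshold.
    return chance + (1.0 - chance) / (1.0 + np.exp(-slope * (snr - thresh)))

snr = np.array([-6.0, -3.0, 0.0, 3.0, 6.0, 9.0])     # invented SNRs (dB)
pc = np.array([0.08, 0.15, 0.38, 0.66, 0.85, 0.94])  # invented proportions correct
(thresh, slope), _ = curve_fit(logistic, snr, pc, p0=[0.0, 0.5])
print(round(thresh, 1), round(slope, 2))  # threshold in dB SNR, slope per dB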
ERIC Educational Resources Information Center
Khanbeiki, Ruhollah; Abdolmanafi-Rokni, Seyed Jalal
2015-01-01
The present study aimed to provide English teachers across Iran with an effective method of teaching pronunciation. To this end, sixty female intermediate EFL learners were placed in three different but equivalent groups of 20 based on the results of a pronunciation pre-test. One of the groups received explicit instruction including…
Individual Differences in the Acquisition of a Complex L2 Phonology: A Training Study
ERIC Educational Resources Information Center
Hanulikova, Adriana; Dediu, Dan; Fang, Zhou; Basnakova, Jana; Huettig, Falk
2012-01-01
Many learners of a foreign language (L2) struggle to correctly pronounce newly learned speech sounds, yet many others achieve this with apparent ease. Here we explored how a training study of learning complex consonant clusters at the very onset of L2 acquisition can inform us about L2 learning in general and individual differences in particular.…
EMA analysis of tongue function in children with dysarthria following traumatic brain injury.
Murdoch, Bruce E; Goozée, Justine V
2003-01-01
To investigate the speed and accuracy of tongue movements during speech in a sample of children with dysarthria following severe traumatic brain injury (TBI), using electromagnetic articulography (EMA). Four children with dysarthria following TBI, aged between 12.75 and 17.17 years, were assessed using the AG-100 electromagnetic articulography system (Carstens Medizinelektronik). The movement trajectories of receiver coils affixed to each child's tongue were examined during consonant productions, together with a range of quantitative kinematic parameters. The children's results were individually compared against the mean values obtained by a group of eight control children (mean age of 14.67 years, SD 1.60). All four TBI children were perceived to exhibit reduced rates of speech and increased word durations. Objective EMA analysis revealed that two of the TBI children exhibited significantly longer consonant durations compared to the control group, resulting from different underlying mechanisms relating to speed generation capabilities and distances travelled. The other two TBI children did not exhibit increased initial consonant movement durations, suggesting that the vowels and/or final consonants may have been contributing to the increased word durations. The finding of different underlying articulatory kinematic profiles has important implications for the treatment of speech rate disturbances in children with dysarthria following TBI.
Discrimination of brief speech sounds is impaired in rats with auditory cortex lesions
Porter, Benjamin A.; Rosenthal, Tara R.; Ranasinghe, Kamalini G.; Kilgard, Michael P.
2011-01-01
Auditory cortex (AC) lesions impair complex sound discrimination. However, a recent study demonstrated spared performance on an acoustic startle response test of speech discrimination following AC lesions (Floody et al., 2010). The current study reports the effects of AC lesions on two operant speech discrimination tasks. AC lesions caused a modest and quickly recovered impairment in the ability of rats to discriminate consonant-vowel-consonant speech sounds. This result seems to suggest that AC does not play a role in speech discrimination. However, the speech sounds used in both studies differed in many acoustic dimensions, and an adaptive change in discrimination strategy could allow the rats to use an acoustic difference that does not require an intact AC to discriminate. Based on our earlier observation that the first 40 ms of the spatiotemporal activity patterns elicited by speech sounds best correlate with behavioral discriminations of these sounds (Engineer et al., 2008), we predicted that eliminating additional cues by truncating speech sounds to the first 40 ms would render the stimuli indistinguishable to a rat with AC lesions. Although the initial discrimination of truncated sounds took longer to learn, the final performance paralleled that of rats using full-length consonant-vowel-consonant sounds. After 20 days of testing, half of the rats using speech onsets received bilateral AC lesions. Lesions severely impaired speech onset discrimination for at least one month post lesion. These results support the hypothesis that auditory cortex is required to accurately discriminate the subtle differences between similar consonant and vowel sounds. PMID:21167211
Effect of Vowel Context on the Recognition of Initial Consonants in Kannada.
Kalaiah, Mohan Kumar; Bhat, Jayashree S
2017-09-01
The present study was carried out to investigate the effect of vowel context on the recognition of Kannada consonants in quiet by young adults. A total of 17 young adults with normal hearing in both ears participated in the study. The stimuli included consonant-vowel syllables spoken by 12 native speakers of Kannada. The consonant recognition task was carried out in a closed-set format (fourteen-alternative forced-choice). The present study showed an effect of vowel context on the perception of consonants. The maximum consonant recognition score was obtained in the /o/ vowel context, followed by the /a/ and /u/ vowel contexts, and then the /e/ context. The poorest consonant recognition score was obtained in the vowel context /i/. Vowel context thus has an effect on the recognition of Kannada consonants, and the vowel effect was unique for Kannada consonants.
Wagner, Monica; Shafer, Valerie L.; Martin, Brett; Steinschneider, Mitchell
2013-01-01
The effect of exposure to the contextual features of the /pt/ cluster was investigated in native-English and native-Polish listeners using behavioral and event-related potential (ERP) methodology. Both groups experience the /pt/ cluster in their languages, but only the Polish group experiences the cluster in the word-onset context examined in the current experiment. The /st/ cluster was used as an experimental control. ERPs were recorded while participants identified the number of syllables in the second word of nonsense word pairs. The results showed that only Polish listeners accurately perceived the /pt/ cluster, and this perception was reflected within a late positive component of the ERP waveform. Furthermore, evidence of discrimination of /pt/ and /pǝt/ onsets in the neural signal was found even for non-native listeners who could not perceive the difference. These findings suggest that exposure to phoneme sequences in highly specific contexts may be necessary for accurate perception. PMID:22867752
Speech Perception in Older Hearing Impaired Listeners: Benefits of Perceptual Training
Woods, David L.; Doss, Zoe; Herron, Timothy J.; Arbogast, Tanya; Younus, Masood; Ettlinger, Marc; Yund, E. William
2015-01-01
Hearing aids (HAs) only partially restore the ability of older hearing impaired (OHI) listeners to understand speech in noise, due in large part to persistent deficits in consonant identification. Here, we investigated whether adaptive perceptual training would improve consonant-identification in noise in sixteen aided OHI listeners who underwent 40 hours of computer-based training in their homes. Listeners identified 20 onset and 20 coda consonants in 9,600 consonant-vowel-consonant (CVC) syllables containing different vowels (/ɑ/, /i/, or /u/) and spoken by four different talkers. Consonants were presented at three consonant-specific signal-to-noise ratios (SNRs) spanning a 12 dB range. Noise levels were adjusted over training sessions based on d’ measures. Listeners were tested before and after training to measure (1) changes in consonant-identification thresholds using syllables spoken by familiar and unfamiliar talkers, and (2) sentence reception thresholds (SeRTs) using two different sentence tests. Consonant-identification thresholds improved gradually during training. Laboratory tests of d’ thresholds showed an average improvement of 9.1 dB, with 94% of listeners showing statistically significant training benefit. Training normalized consonant confusions and improved the thresholds of some consonants into the normal range. Benefits were equivalent for onset and coda consonants, syllables containing different vowels, and syllables presented at different SNRs. Greater training benefits were found for hard-to-identify consonants and for consonants spoken by familiar than unfamiliar talkers. SeRTs, tested with simple sentences, showed less elevation than consonant-identification thresholds prior to training and failed to show significant training benefit, although SeRT improvements did correlate with improvements in consonant thresholds. We argue that the lack of SeRT improvement reflects the dominant role of top-down semantic processing in processing simple sentences and that greater transfer of benefit would be evident in the comprehension of more unpredictable speech material. PMID:25730330
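Noise levels above were adjusted across sessions on the basis of d′ measures. For reference, the standard signal-detection computation of d′ is sketched below in a yes/no form; the study's consonant-specific computation is not detailed here, so treat this form as an assumption, with invented counts.

from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # d' = z(hit rate) - z(false-alarm rate); rates clipped to avoid infinite z.
    hr = min(max(hits / (hits + misses), 1e-3), 1 - 1e-3)
    far = min(max(false_alarms / (false_alarms + correct_rejections), 1e-3), 1 - 1e-3)
    return norm.ppf(hr) - norm.ppf(far)

print(d_prime(78, 22, 12, 88))  # roughly 1.95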
Alexander, Joshua M.
2016-01-01
By varying parameters that control nonlinear frequency compression (NFC), this study examined how different ways of compressing inaudible mid- and/or high-frequency information at lower frequencies influences perception of consonants and vowels. Twenty-eight listeners with mild to moderately severe hearing loss identified consonants and vowels from nonsense syllables in noise following amplification via a hearing aid simulator. Low-pass filtering and the selection of NFC parameters fixed the output bandwidth at a frequency representing a moderately severe (3.3 kHz, group MS) or a mild-to-moderate (5.0 kHz, group MM) high-frequency loss. For each group (n = 14), effects of six combinations of NFC start frequency (SF) and input bandwidth [by varying the compression ratio (CR)] were examined. For both groups, the 1.6 kHz SF significantly reduced vowel and consonant recognition, especially as CR increased; whereas, recognition was generally unaffected if SF increased at the expense of a higher CR. Vowel recognition detriments for group MS were moderately correlated with the size of the second formant frequency shift following NFC. For both groups, significant improvement (33%–50%) with NFC was confined to final /s/ and /z/ and to some VCV tokens, perhaps because of listeners' limited exposure to each setting. No set of parameters simultaneously maximized recognition across all tokens. PMID:26936574
Multiband product rule and consonant identification.
Li, Feipeng; Allen, Jont B
2009-07-01
The multiband product rule, also known as band-independence, is a basic assumption of articulation index and its extension, the speech intelligibility index. Previously Fletcher showed its validity for a balanced mix of 20% consonant-vowel (CV), 20% vowel-consonant (VC), and 60% consonant-vowel-consonant (CVC) sounds. This study repeats Miller and Nicely's version of the hi-/lo-pass experiment with minor changes to study band-independence for the 16 Miller-Nicely consonants. The cut-off frequencies are chosen such that the basilar membrane is evenly divided into 12 segments from 250 to 8000 Hz with the high-pass and low-pass filters sharing the same six cut-off frequencies in the middle. Results show that the multiband product rule is statistically valid for consonants on average. It also applies to subgroups of consonants, such as stops and fricatives, which are characterized by a flat distribution of speech cues along the frequency. It fails for individual consonants.
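In symbols, band-independence states that the total error probability is the product of the per-band error probabilities, which yields the identity that hi-/lo-pass designs test. The notation below is our paraphrase of the articulation-index formulation, not a quotation from the paper:

\[
  e \;=\; \prod_{k=1}^{K} e_k ,
  \qquad
  e_{\mathrm{lo}}(f_c)\, e_{\mathrm{hi}}(f_c) \;\approx\; e_{\mathrm{wideband}}
  \quad \text{for every shared cut-off } f_c ,
\]

where $e_k$ is the error probability when only band $k$ is audible, and the complementary low-pass and high-pass conditions partition the bands at the cut-off $f_c$.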
Using visible speech to train perception and production of speech for individuals with hearing loss.
Massaro, Dominic W; Light, Joanna
2004-04-01
The main goal of this study was to implement a computer-animated talking head, Baldi, as a language tutor for speech perception and production for individuals with hearing loss. Baldi can speak slowly; illustrate articulation by making the skin transparent to reveal the tongue, teeth, and palate; and show supplementary articulatory features, such as vibration of the neck to show voicing and turbulent airflow to show frication. Seven students with hearing loss between the ages of 8 and 13 were trained for 6 hours across 21 weeks on 8 categories of segments (4 voiced vs. voiceless distinctions, 3 consonant cluster distinctions, and 1 fricative vs. affricate distinction). Training included practice at the segment and the word level. Perception and production improved for each of the 7 children. Speech production also generalized to new words not included in the training lessons. Finally, speech production deteriorated somewhat after 6 weeks without training, indicating that the training method rather than some other experience was responsible for the improvement that was found.
Articulation in schoolchildren and adults with neurofibromatosis type 1.
Cosyns, Marjan; Mortier, Geert; Janssens, Sandra; Bogaert, Famke; D'Hondt, Stephanie; Van Borsel, John
2012-01-01
Several authors have mentioned the occurrence of articulation problems in the neurofibromatosis type 1 (NF1) population. However, few studies have undertaken a detailed analysis of the articulation skills of NF1 patients, especially in schoolchildren and adults. Therefore, the aim of the present study was to examine in depth the articulation skills of NF1 schoolchildren and adults, both phonetically and phonologically. Speech samples were collected from 43 Flemish NF1 patients (14 children and 29 adults), ranging in age between 7 and 53 years, using a standardized speech test in which all Flemish single speech sounds and most clusters occur in all their permissible syllable positions. Analyses concentrated on consonants only and included a phonetic inventory, a phonetic analysis, and a phonological analysis. Phonetic inventories were incomplete in 16.28% (7/43) of participants, whose inventories lacked consistently correct realizations of the sibilants /ʃ/ and/or /ʒ/. Phonetic analysis revealed that distortions were the predominant phonetic error type. Sigmatismus stridens, multiple ad- or interdentality, and, in children, rhotacismus non vibrans were frequently observed. From a phonological perspective, the most common error types were substitution and syllable structure errors. In particular, devoicing, cluster simplification, and, in children, deletion of word-final consonants were observed. Further, significantly more men than women presented with an incomplete phonetic inventory, and girls tended to display more articulation errors than boys. Additionally, children exhibited significantly more articulation errors than adults, suggesting that although the articulation skills of NF1 patients evolve positively with age, articulation problems do not resolve completely from childhood to adulthood. As such, the articulation errors made by NF1 adults may be regarded as residual articulation disorders. It can be concluded that the speech of NF1 patients is characterized by mild articulation disorders at an age where this is no longer expected. Readers will be able to describe neurofibromatosis type 1 (NF1) and explain the articulation errors displayed by schoolchildren and adults with this genetic syndrome. © 2011 Elsevier Inc. All rights reserved.
Bouchon, Camillia; Floccia, Caroline; Fux, Thibaut; Adda-Decker, Martine; Nazzi, Thierry
2015-07-01
Consonants and vowels differ acoustically and articulatorily, but also functionally: Consonants are more relevant for lexical processing, and vowels for prosodic/syntactic processing. These functional biases could be powerful bootstrapping mechanisms for learning language, but their developmental origin remains unclear. The relative importance of consonants and vowels at the onset of lexical acquisition was assessed in French-learning 5-month-olds by testing sensitivity to minimal phonetic changes in their own name. Infants' reactions to mispronunciations revealed sensitivity to vowel but not consonant changes. Vowels were also more salient (on duration and intensity) but less distinct (on spectrally based measures) than consonants. Lastly, vowel (but not consonant) mispronunciation detection was modulated by acoustic factors, in particular spectrally based distance. These results establish that consonant changes do not affect lexical recognition at 5 months, while vowel changes do; the consonant bias observed later in development does not emerge until after 5 months through additional language exposure. © 2014 John Wiley & Sons Ltd.
Soares, Ana Paula; Perea, Manuel; Comesaña, Montserrat
2014-01-01
Recent research with skilled adult readers has consistently revealed an advantage of consonants over vowels in visual-word recognition (i.e., the so-called “consonant bias”). Nevertheless, little is known about how early in development the consonant bias emerges. This work aims to address this issue by studying the relative contribution of consonants and vowels at the early stages of visual-word recognition in developing readers (2nd and 4th Grade children) and skilled adult readers (college students) using a masked priming lexical decision task. Target words starting either with a consonant or a vowel were preceded by a briefly presented masked prime (50 ms) that could be the same as the target (e.g., pirata-PIRATA [pirate-PIRATE]), a consonant-preserving prime (e.g., pureto-PIRATA), a vowel-preserving prime (e.g., gicala-PIRATA), or an unrelated prime (e.g., bocelo-PIRATA). Results revealed significant priming effects for the identity and consonant-preserving conditions in adult readers and 4th Grade children, whereas 2nd graders only showed priming for the identity condition. In adult readers, the advantage of consonants was observed both for words starting with a consonant or a vowel, while in 4th graders this advantage was restricted to words with an initial consonant. Thus, the present findings suggest that a Consonant/Vowel skeleton should be included in future (developmental) models of visual-word recognition and reading. PMID:24523917
Consonant Acquisition in Young Cochlear Implant Recipients and Their Typically Developing Peers
Jung, Jongmin; Ertmer, David J.
2017-01-01
Purpose Consonant acquisition was examined in 13 young cochlear implant (CI) recipients and 11 typically developing (TD) children. Method A longitudinal research design was implemented to determine the rate and nature of consonant acquisition during the first 2 years of robust hearing experience. Twenty-minute adult–child (typically a parent) interactions were video and audio recorded at 3-month intervals following implantation until 24 months of robust hearing experience was achieved. TD children were similarly recorded between 6 and 24 months of age. Consonants that were produced twice within a 50-utterance sample were considered “established” within a child's consonant inventory. Results Although the groups showed similar trajectories, the CI group produced larger consonant inventories than the TD group at each interval except for 21 and 24 months. A majority of children with CIs also showed more rapid acquisition of consonants and more diverse consonant inventories than TD children. Conclusions These results suggest that early auditory deprivation does not significantly affect consonant acquisition for most CI recipients. Tracking early consonant development appears to be a useful way to assess the effectiveness of cochlear implantation in young recipients. PMID:28474085
Now you hear it, now you don't: vowel devoicing in Japanese infant-directed speech.
Fais, Laurel; Kajikawa, Sachiyo; Amano, Shigeaki; Werker, Janet F
2010-03-01
In this work, we examine a context in which a conflict arises between two roles that infant-directed speech (IDS) plays: making language structure salient and modeling the adult form of a language. Vowel devoicing in fluent adult Japanese creates violations of the canonical Japanese consonant-vowel word structure pattern by systematically devoicing particular vowels, yielding surface consonant clusters. We measured vowel devoicing rates in a corpus of infant- and adult-directed Japanese speech, for both read and spontaneous speech, and found that the mothers in our study preserve the fluent adult form of the language and mask underlying phonological structure by devoicing vowels in infant-directed speech at virtually the same rates as those for adult-directed speech. The results highlight the complex interrelationships among the modifications to adult speech that comprise infant-directed speech, and that form the input from which infants begin to build the eventual mature form of their native language.
Cho, Taehong; McQueen, James M
2011-08-01
Two experiments examined whether perceptual recovery from Korean consonant-cluster simplification is based on language-specific phonological knowledge. In tri-consonantal C1C2C3 sequences such as /lkt/ and /lpt/ in Seoul Korean, either C1 or C2 can be completely deleted. Seoul Koreans monitored for C2 targets (/p/ or /k/, deleted or preserved) in the second word of a two-word phrase with an underlying /l/-C2-/t/ sequence. In Experiment 1 the target-bearing words had contextual lexical-semantic support. Listeners recovered deleted targets as fast and as accurately as preserved targets with both Word and Intonational Phrase (IP) boundaries between the two words. In Experiment 2, contexts were low-pass filtered. Listeners were still able to recover deleted targets as well as preserved targets in IP-boundary contexts, but performed better with physically present targets than with deleted targets in Word-boundary contexts. This suggests that the benefit of having acoustic-phonetic information about the target emerges only when higher-order (contextual and phrase-boundary) information is not available. The strikingly efficient recovery of deleted phonemes with neither acoustic-phonetic cues nor contextual support demonstrates that language-specific phonological knowledge, rather than language-universal perceptual processes that rely on fine-grained phonetic details, is employed when the listener perceives the results of a continuous-speech process in which reduction is phonetically complete.
Does perceived stress mediate the effect of cultural consonance on depression?
Balieiro, Mauro C; Dos Santos, Manoel Antônio; Dos Santos, José Ernesto; Dressler, William W
2011-11-01
The importance of appraisal in the stress process is unquestioned. Experiences in the social environment that affect outcomes such as depression are thought to have these effects because they are appraised as threats to the individual and overwhelm the individual's capacity to cope. In terms of the nature of the social experience associated with depression, several recent studies have examined the impact of cultural consonance. Cultural consonance is the degree to which individuals, in their own beliefs and behaviors, approximate the prototypes for belief and behavior encoded in shared cultural models. Low cultural consonance is associated with more depressive symptoms both cross-sectionally and longitudinally. In this paper we ask: does perceived stress mediate the effects of cultural consonance on depression? Data are drawn from a longitudinal study of depressive symptoms in the urban community of Ribeirão Preto, Brazil. A sample of 210 individuals was followed for 2 years. Cultural consonance was assessed in four cultural domains, using a mixed-methods research design that integrated techniques of cultural domain analysis with social survey research. Perceived stress was measured with Cohen's Perceived Stress Scale. When cultural consonance was examined separately for each domain, perceived stress partially mediated the impact of cultural consonance in family life and cultural consonance in lifestyle on depressive symptoms. When generalized cultural consonance (combining consonance in all four domains) was examined, there was no evidence of mediation. These results raise questions about how culturally salient experience rises to the level of conscious reflection.
Rødvik, Arne Kirkhorn; von Koss Torkildsen, Janne; Wie, Ona Bø; Storaker, Marit Aarvaag; Silvola, Juha Tapio
2018-04-17
The purpose of this systematic review and meta-analysis was to establish a baseline of the vowel and consonant identification scores in prelingually and postlingually deaf users of multichannel cochlear implants (CIs) tested with consonant-vowel-consonant and vowel-consonant-vowel nonsense syllables. Six electronic databases were searched for peer-reviewed articles reporting consonant and vowel identification scores in CI users measured with nonsense words. Relevant studies were independently assessed and screened by 2 reviewers. Consonant and vowel identification scores were presented in forest plots and compared between studies in a meta-analysis. Forty-seven articles with 50 studies, including 647 participants, of whom 581 were postlingually deaf and 66 prelingually deaf, met the inclusion criteria of this study. The mean performance on vowel identification tasks for the postlingually deaf CI users was 76.8% (N = 5), which was higher than the mean performance for the prelingually deaf CI users (67.7%; N = 1). The mean performance on consonant identification tasks was likewise higher for the postlingually deaf CI users (58.4%; N = 44) than for the prelingually deaf CI users (46.7%; N = 6), although the difference between the scores for the two groups was not statistically significant. The most common consonant confusions were found between those with the same manner of articulation (/k/ as /t/, /m/ as /n/, and /p/ as /t/). The consonants that were incorrectly identified were typically confused with other consonants sharing the same acoustic properties, namely, voicing, duration, nasality, and silent gaps. A univariate metaregression model, although not statistically significant, indicated that duration of implant use in postlingually deaf adults predicts a substantial portion of their consonant identification ability. As there is no ceiling effect, a nonsense syllable identification test may be a useful addition to the standard test battery in audiology clinics when assessing the speech perception of CI users.
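A univariate metaregression of the kind mentioned above can be sketched as an inverse-variance weighted regression of study-level scores on duration of implant use. All numbers below are invented; this is an illustration of the technique, not the authors' model code.

import numpy as np

# Hypothetical study-level data: mean consonant score (%) and sampling variance.
years = np.array([1.0, 2.5, 4.0, 6.0, 9.0])      # duration of implant use (years)
score = np.array([48.0, 55.0, 57.0, 62.0, 66.0])
var = np.array([25.0, 16.0, 30.0, 12.0, 20.0])

w = 1.0 / var                                     # inverse-variance weights
X = np.column_stack([np.ones_like(years), years])
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * score))
print(beta)  # [intercept, slope in % per year of implant use]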
Phase locked neural activity in the human brainstem predicts preference for musical consonance.
Bones, Oliver; Hopkins, Kathryn; Krishnan, Ananthanarayan; Plack, Christopher J
2014-05-01
When musical notes are combined to make a chord, the closeness of fit of the combined spectrum to a single harmonic series (the 'harmonicity' of the chord) predicts the perceived consonance (how pleasant and stable the chord sounds; McDermott, Lehr, & Oxenham, 2010). The distinction between consonance and dissonance is central to Western musical form. Harmonicity is represented in the temporal firing patterns of populations of brainstem neurons. The current study investigates the role of brainstem temporal coding of harmonicity in the perception of consonance. Individual preference for consonant over dissonant chords was measured using a rating scale for pairs of simultaneous notes. In order to investigate the effects of cochlear interactions, notes were presented in two ways: both notes to both ears, or each note to a different ear. The electrophysiological frequency following response (FFR), reflecting sustained neural activity in the brainstem synchronised to the stimulus, was also measured. When both notes were presented to both ears, the perceptual distinction between consonant and dissonant chords was stronger than when the notes were presented to different ears. In the condition in which both notes were presented to both ears, additional low-frequency components, corresponding to difference tones resulting from nonlinear cochlear processing, were observable in the FFR, effectively enhancing the neural harmonicity of consonant chords but not dissonant chords. Suppressing the cochlear envelope component of the FFR also suppressed the additional frequency components. This suggests that, in the case of consonant chords, difference tones generated by interactions between notes in the cochlea enhance the perception of consonance. Furthermore, individuals with a greater distinction between consonant and dissonant chords in the FFR to individual harmonics had a stronger preference for consonant over dissonant chords. Overall, the results provide compelling evidence for the role of neural temporal coding in the perception of consonance, and suggest that the representation of harmonicity in phase-locked neural firing drives the perception of consonance. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
The Morphophonemics of Japanese Verbal Conjugation: An Autosegmental Account.
ERIC Educational Resources Information Center
Tsujimura, Natsuko; Davis, Stuart
Problems emerging from previous analyses of epenthesis in Japanese verbal endings are discussed and a crucial relationship between epenthesis and assimilation is argued. The focus is on the occurrence of /i/-epenthesis with certain root-final consonants. The analysis, which incorporates the view that assimilation is accomplished by means of…
Teaching Pronunciation in the Learner-Centered Classroom.
ERIC Educational Resources Information Center
Lin, Hsiang-Pao; And Others
Specific tools and techniques to help students of English as a Second Language overcome pronunciation problems are presented. The selection of problems addressed is based on the frequency and seriousness of errors that many native Chinese-speaking learners produce. Ways to resolve various problems (e.g., missing final consonants, misplaced stress…
Crespo-Bojorque, Paola; Toro, Juan M
2016-05-01
Consonance is a salient perceptual feature in harmonic music associated with pleasantness. Besides being deeply rooted in how we experience music, research suggests that consonant intervals are more easily processed than dissonant intervals. In the present work we explore, from a comparative perspective, whether this processing advantage extends to more complex tasks such as the detection of abstract rules. We ran experiments on rule learning over consonant and dissonant intervals with nonhuman animals and human participants. Results show differences across species in the extent to which they benefit from differences in consonance. Animals learn abstract rules with the same ease regardless of whether the rules are implemented over consonant intervals (Experiment 1), dissonant intervals (Experiment 2), or a combination of them (Experiment 3). Humans, on the contrary, learn an abstract rule better when it is implemented over consonant (Experiment 4) than over dissonant intervals (Experiment 5). Moreover, their performance improves when there is a mapping between the abstract categories defining a rule and consonant and dissonant intervals (Experiments 6 and 7). The results suggest that for humans, consonance might serve as a perceptual anchor for other cognitive processes, facilitating the detection of abstract patterns. Lacking extensive experience with harmonic stimuli, the nonhuman animals tested here do not seem to benefit from a processing advantage for consonant intervals. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
The privileged status of locality in consonant harmony
Finley, Sara
2011-01-01
While the vast majority of linguistic processes apply locally, consonant harmony appears to be an exception. In this phonological process, consonants share the same value of a phonological feature, such as secondary place of articulation. In sibilant harmony, [s] and [ʃ] (‘sh’) alternate such that if a word contains the sound [ʃ], all [s] sounds become [ʃ]. This can apply locally, as a first-order pattern, or non-locally, as a second-order pattern. In the first-order case, no consonants intervene between the two sibilants (e.g., [pisasu], [piʃaʃu]). In the second-order case, a consonant may intervene (e.g., [sipasu], [ʃipaʃu]). The fact that there are languages that allow second-order non-local agreement of consonant features has led some to question whether locality constraints apply to consonant harmony. This paper presents the results of two artificial grammar learning experiments that demonstrate the privileged role of locality constraints, even in patterns that allow second-order non-local interactions. In Experiment 1, we show that learners do not extend first-order non-local relationships in consonant harmony to second-order non-local relationships. In Experiment 2, we show that learners will extend a consonant harmony pattern with second-order long-distance relationships to one with first-order long-distance relationships. Because second-order non-local application implies first-order non-local application, but not vice versa, we establish that locality constraints are privileged even in consonant harmony. PMID:21686094
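The sibilant harmony pattern described above is easy to state procedurally. A toy, distance-blind implementation (ours, for illustration only) makes the first-order/second-order distinction concrete:

def sibilant_harmony(word: str) -> str:
    """If the word contains [ʃ], every [s] assimilates to [ʃ], regardless of
    how much material intervenes (the rule itself is distance-blind)."""
    return word.replace("s", "ʃ") if "ʃ" in word else word

print(sibilant_harmony("piʃasu"))  # first-order: no consonant intervenes -> piʃaʃu
print(sibilant_harmony("ʃipasu"))  # second-order: [p] intervenes -> ʃipaʃu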
Structural Generalizations over Consonants and Vowels in 11-Month-Old Infants
ERIC Educational Resources Information Center
Pons, Ferran; Toro, Juan M.
2010-01-01
Recent research has suggested consonants and vowels serve different roles during language processing. While statistical computations are preferentially made over consonants but not over vowels, simple structural generalizations are easily made over vowels but not over consonants. Nevertheless, the origins of this asymmetry are unknown. Here we…
Acoustical study of the development of stop consonants in children
NASA Astrophysics Data System (ADS)
Imbrie, Annika K.
2003-10-01
This study focuses on the acoustic patterns of stop consonants and adjacent vowels as they develop in young children (ages 2.6-3.3) over a six-month period. The acoustic properties that are being measured for stop consonants include spectra of bursts, frication noise and aspiration noise, and formant movements. Additionally, acoustic landmarks are labeled for measurements of durations of events determined by these landmarks. These acoustic measurements are being interpreted in terms of the supraglottal, laryngeal, and respiratory actions that give rise to them. Preliminary data show that some details of the child's gestures are still far from achieving the adult pattern. The burst of frication noise at the release tends to be shorter than adult values, and often consists of multiple bursts. From the burst spectrum, the place of articulation appears to be normal. Finally, coordination of closure of the glottis and release of the primary articulator is still quite variable, as is apparent from a large standard deviation in VOT. Analysis of longitudinal data on young children will result in better models of the development of the coordination of articulation, phonation, and respiration for motor speech production. [Work supported by NIH Grants Nos. DC00038 and DC00075.]
Acoustical study of the development of stop consonants in children
NASA Astrophysics Data System (ADS)
Imbrie, Annika K.
2004-05-01
This study focuses on the acoustic patterns of stop consonants and adjacent vowels as they develop in young children (ages 2.6-3.3) over a 6-month period. The acoustic properties that are being measured for stop consonants include spectra of bursts, frication noise and aspiration noise, and formant movements. Additionally, acoustic landmarks are labeled for measurements of durations of events determined by these landmarks. These acoustic measurements are being interpreted in terms of the supraglottal, laryngeal, and respiratory actions that give rise to them. Preliminary data show that some details of the child's gestures are still far from achieving the adult pattern. The burst of frication noise at the release tends to be shorter than adult values, and often consists of multiple bursts, possibly due to greater compliance of the active articulator. From the burst spectrum, the place of articulation appears to be normal. Finally, coordination of closure of the glottis and release of the primary articulator is still quite variable, as is apparent from a large standard deviation in VOT. Analysis of longitudinal data on young children will result in better models of the development of motor speech production. [Work supported by NIH Grants DC00038 and DC00075.]
A mathematical model of medial consonant identification by cochlear implant users.
Svirsky, Mario A; Sagi, Elad; Meyer, Ted A; Kaiser, Adam R; Teoh, Su Wooi
2011-04-01
The multidimensional phoneme identification model is applied to consonant confusion matrices obtained from 28 postlingually deafened cochlear implant users. This model predicts consonant matrices based on these subjects' ability to discriminate a set of postulated spectral, temporal, and amplitude speech cues as presented to them by their device. The model produced confusion matrices that matched many aspects of individual subjects' consonant matrices, including information transfer for the voicing, manner, and place features, despite individual differences in age at implantation, implant experience, device and stimulation strategy used, as well as overall consonant identification level. The model was able to match the general pattern of errors between consonants, but not the full complexity of all consonant errors made by each individual. The present study represents an important first step in developing a model that can be used to test specific hypotheses about the mechanisms cochlear implant users employ to understand speech.
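One quantity mentioned above, information transfer for a phonetic feature, has a standard computation (after Miller and Nicely, 1955): collapse the confusion matrix into feature categories and measure the transmitted information relative to the stimulus entropy. The following Python sketch shows that computation on an invented four-consonant voicing example; the matrix values are hypothetical, and this is not the authors' model code.

import numpy as np

def relative_info_transfer(conf, groups):
    # conf[i, j] = count of stimulus i heard as response j;
    # groups[i] = feature category of consonant i.
    conf = np.asarray(conf, dtype=float)
    cats = sorted(set(groups))
    k = len(cats)
    idx = {c: n for n, c in enumerate(cats)}
    collapsed = np.zeros((k, k))
    for i, gi in enumerate(groups):
        for j, gj in enumerate(groups):
            collapsed[idx[gi], idx[gj]] += conf[i, j]
    p = collapsed / collapsed.sum()
    px = p.sum(axis=1)  # stimulus-category probabilities
    py = p.sum(axis=0)  # response-category probabilities
    t = sum(p[x, y] * np.log2(p[x, y] / (px[x] * py[y]))
            for x in range(k) for y in range(k) if p[x, y] > 0)
    hx = -sum(q * np.log2(q) for q in px if q > 0)  # stimulus entropy
    return t / hx  # 1.0 = the feature is transmitted perfectly

# Toy matrix for /p t b d/; voicing feature: 0 = voiceless, 1 = voiced.
conf = [[40, 8, 2, 0],
        [10, 38, 0, 2],
        [1, 0, 42, 7],
        [0, 3, 9, 38]]
print(relative_info_transfer(conf, [0, 0, 1, 1]))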
Differential processing of consonants and vowels in lexical access through reading.
New, Boris; Araújo, Verónica; Nazzi, Thierry
2008-12-01
Do consonants and vowels have the same importance during reading? Recently, it has been proposed that consonants play a more important role than vowels for language acquisition and adult speech processing. This proposal has started receiving developmental support from studies showing that infants are better at processing specific consonantal than vocalic information while learning new words. This proposal also received support from adult speech processing. In our study, we directly investigated the relative contributions of consonants and vowels to lexical access while reading by using a visual masked-priming lexical decision task. Test items were presented following four different primes: identity (e.g., for the word joli, joli), unrelated (vabu), consonant-related (jalu), and vowel-related (vobi). Priming was found for the identity and consonant-related conditions, but not for the vowel-related condition. These results establish the privileged role of consonants during lexical access while reading.
Moradi, Shahram; Lidestam, Björn; Danielsson, Henrik; Ng, Elaine Hoi Ning; Rönnberg, Jerker
2017-09-18
We sought to examine the contribution of visual cues in audiovisual identification of consonants and vowels, in terms of isolation points (the shortest time required for correct identification of a speech stimulus), accuracy, and cognitive demands, in listeners with hearing impairment using hearing aids. The study comprised 199 participants with hearing impairment (mean age = 61.1 years) with bilateral, symmetrical, mild-to-severe sensorineural hearing loss. Gated Swedish consonants and vowels were presented aurally and audiovisually to participants. Linear amplification was adjusted for each participant to assure audibility. The reading span test was used to measure participants' working memory capacity. Audiovisual presentation resulted in shortened isolation points and improved accuracy for consonants and vowels relative to auditory-only presentation. This benefit was more evident for consonants than vowels. In addition, correlations and subsequent analyses revealed that listeners with higher scores on the reading span test identified both consonants and vowels earlier in auditory-only presentation, but only vowels (not consonants) in audiovisual presentation. Consonants and vowels differed in terms of the benefits afforded by their associated visual cues, as indicated by the degree of audiovisual benefit and the reduction in cognitive demands linked to the identification of consonants and vowels presented audiovisually.
Ota, Mitsuhiko; Green, Sam J
2013-06-01
Although it has often been hypothesized that children learn to produce new sound patterns first in frequently heard words, the available evidence in support of this claim is inconclusive. To re-examine this question, we conducted a survival analysis of word-initial consonant clusters produced by three children in the Providence Corpus (0;11-4;0). The analysis took account of several lexical factors in addition to lexical input frequency, including the age of first production, production frequency, neighborhood density and number of phonemes. The results showed that lexical input frequency was a significant predictor of the age at which the accuracy level of cluster production in each word first reached 80%. The magnitude of the frequency effect differed across cluster types. Our findings indicate that some of the between-word variance found in the development of sound production can indeed be attributed to the frequency of words in the child's ambient language.
ERIC Educational Resources Information Center
Campbell, Tasha M.
2017-01-01
This dissertation explores Spanish nominal plural formation from a morphophonological perspective. The primary objective is to better understand heritage bilinguals' (HBs') phonological categorization of the morphological element of number in their heritage language. This is done by way of picture-naming elicitation tasks of consonant-final nouns…
TESL Reporter, Vol. 3, Nos. 1-4.
ERIC Educational Resources Information Center
Pack, Alice C., Ed.
Four issues of "TESL Reporter" are presented. Contents include the following articles: "Feedback: An Anti-Madeirization Compound" by Henry M. Schaafsma; "Using the Personal Pronoun 'I' as a Compound Subject" by G. Pang and D. Chu; "The Consonant'L' in Initial and Final Positions" by Maybelle Chong; "Sentence Expansion for the Elementary Level" by…
French Liaison: Linguistic and Sociolinguistic Influences on Speech Perception
ERIC Educational Resources Information Center
Dautricourt, Robin Guillaume
2010-01-01
French liaison is a phonological process that takes place when an otherwise silent word-final consonant is pronounced before a following vowel-initial word. It is a process that has been evolving for centuries, and whose patterns of realization are influenced by a wide range of interacting linguistic and social factors. French speakers therefore…
Brennan, Marc A; Lewis, Dawna; McCreery, Ryan; Kopun, Judy; Alexander, Joshua M
2017-10-01
Nonlinear frequency compression (NFC) can improve the audibility of high-frequency sounds by lowering them to a frequency where audibility is better; however, this lowering results in spectral distortion. Consequently, performance is a combination of the effects of increased access to high-frequency sounds and the detrimental effects of spectral distortion. Previous work has demonstrated positive benefits of NFC on speech recognition when NFC is set to improve audibility while minimizing distortion. However, the extent to which NFC impacts listening effort is not well understood, especially for children with sensorineural hearing loss (SNHL). The purpose of this study was to examine the impact of NFC on recognition and listening effort for speech in adults and children with SNHL, using a within-subject, quasi-experimental design. Participants listened to amplified nonsense words that were (1) frequency-lowered using NFC, (2) low-pass filtered at 5 kHz to simulate the restricted bandwidth (RBW) of conventional hearing aid processing, or (3) low-pass filtered at 10 kHz to simulate extended bandwidth (EBW) amplification. Participants were 14 children (8-16 yr) and 14 adults (19-65 yr) with mild-to-severe SNHL. They listened to speech processed by a hearing aid simulator that amplified input signals to fit a prescriptive target fitting procedure, and they were blinded to the type of processing. Participants' responses to each nonsense word were analyzed for accuracy and verbal-response time (VRT; listening effort). A multivariate analysis of variance and a linear mixed model were used to determine the effect of hearing-aid signal processing on nonsense word recognition and VRT. Both children and adults identified the nonsense words and initial consonants better with EBW and NFC than with RBW. The type of processing did not affect the identification of the vowels or final consonants. There was no effect of age on recognition of the nonsense words, initial consonants, medial vowels, or final consonants. VRT did not change significantly with the type of processing or age. Both adults and children demonstrated improved speech recognition with access to the high-frequency sounds in speech. Listening effort as measured by VRT was not affected by access to high-frequency sounds. American Academy of Audiology
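The frequency-lowering operation that NFC performs is often described as a piecewise mapping: frequencies below a start frequency pass through unchanged, and frequencies above it are compressed toward the start frequency by a fixed ratio. A minimal Python sketch, with a hypothetical 2 kHz start frequency and 2:1 ratio (not the settings used in this study):

def nfc_map(f_in, start_freq=2000.0, ratio=2.0):
    # Illustrative nonlinear frequency compression mapping: frequencies
    # below start_freq are untouched; frequencies above it are compressed
    # toward start_freq by the given ratio (example settings, hypothetical).
    if f_in <= start_freq:
        return f_in
    return start_freq + (f_in - start_freq) / ratio

for f in (1000, 4000, 8000):
    print(f, "->", nfc_map(f))  # an 8 kHz input lands at 5 kHz output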
Non-Adjacent Consonant Sequence Patterns in English Target Words during the First-Word Period
ERIC Educational Resources Information Center
Aoyama, Katsura; Davis, Barbara L.
2017-01-01
The goal of this study was to investigate non-adjacent consonant sequence patterns in target words during the first-word period in infants learning American English. In the spontaneous speech of eighteen participants, target words with a Consonant-Vowel-Consonant (C1VC2) shape were analyzed. Target words were grouped into…
The Perceptibility of Duration in the Phonetics and Phonology of Contrastive Consonant Length
ERIC Educational Resources Information Center
Hansen, Benjamin Bozzell
2012-01-01
This dissertation investigates the hypothesis that the more vowel-like a consonant is, the more difficult it is for listeners to classify it as geminate or singleton. A perceptual account of this observation holds that more vowel-like consonants lack clear markers to signal the beginning and ending of the consonant, so listeners don't perceive the…
Bentin, S; Mouchetant-Rostaing, Y; Giard, M H; Echallier, J F; Pernier, J
1999-05-01
The aim of the present study was to examine the time course and scalp distribution of electrophysiological manifestations of the visual word recognition mechanism. Event-related potentials (ERPs) elicited by visually presented lists of words were recorded while subjects were involved in a series of oddball tasks. The distinction between the designated target and nontarget stimuli was manipulated to induce a different level of processing in each session (visual, phonological/phonetic, phonological/lexical, and semantic). The ERPs of main interest in this study were those elicited by nontarget stimuli. In the visual task the targets were twice as big as the nontargets. Words, pseudowords, strings of consonants, strings of alphanumeric symbols, and strings of forms elicited a sharp negative peak at 170 msec (N170); their distribution was limited to the occipito-temporal sites. For the left hemisphere electrode sites, the N170 was larger for orthographic than for nonorthographic stimuli and vice versa for the right hemisphere. The ERPs elicited by all orthographic stimuli formed a clearly distinct cluster that was different from the ERPs elicited by nonorthographic stimuli. In the phonological/phonetic decision task the targets were words and pseudowords rhyming with the French word vitrail, whereas the nontargets were words, pseudowords, and strings of consonants that did not rhyme with vitrail. The most conspicuous potential was a negative peak at 320 msec, which was elicited by pronounceable stimuli but not by nonpronounceable stimuli. The N320 was bilaterally distributed over the middle temporal lobe and was significantly larger over the left than over the right hemisphere. In the phonological/lexical processing task, we compared the ERPs elicited by strings of consonants (among which words were selected), pseudowords (among which words were selected), and words (among which pseudowords were selected). The most conspicuous potential in these tasks was a negative potential peaking at 350 msec (N350) elicited by phonologically legal but not by phonologically illegal stimuli. The distribution of the N350 was similar to that of the N320, but broader, including temporo-parietal areas that were not activated in the "rhyme" task. Finally, in the semantic task the targets were abstract words, and the nontargets were concrete words, pseudowords, and strings of consonants. The negative potential in this task peaked at 450 msec. Unlike the lexical decision, the negative peak in this task significantly distinguished not only between phonologically legal and illegal stimuli but also between meaningful (words) and meaningless (pseudowords) phonologically legal structures. The distribution of the N450 included the areas activated in the lexical decision task but also areas in the fronto-central regions. The present data corroborated the functional neuroanatomy of word recognition systems suggested by other neuroimaging methods and described their time course, supporting a cascade-type process that involves different but interconnected neural modules, each responsible for a different level of processing word-related information.
Phonetic Aspects of Children's Elicited Word Revisions.
ERIC Educational Resources Information Center
Paul-Brown, Diane; Yeni-Komshian, Grace H.
A study of the phonetic changes occurring when a speaker attempts to revise an unclear word for a listener focuses on changes made in the sound segment duration to maximize differences between phonemes. In the study, five-year-olds were asked by adults to revise words differing in voicing of initial and final stop consonants; a control group of…
Twenty-Four-Month-Olds' Perception of Word-Medial Onsets and Codas
ERIC Educational Resources Information Center
Wang, Yuanyuan; Seidl, Amanda
2016-01-01
Recent work has shown that children have detailed phonological representations of consonants at both word-initial and word-final edges. Nonetheless, it remains unclear whether onsets and codas are equally represented by young learners since word edges are isomorphic with syllable edges in this work. The current study sought to explore toddler's…
Perceptual invariance of coarticulated vowels over variations in speaking rate.
Stack, Janet W; Strange, Winifred; Jenkins, James J; Clarke, William D; Trent, Sonja A
2006-04-01
This study examined the perception and acoustics of a large corpus of vowels spoken in consonant-vowel-consonant syllables produced in citation-form (lists) and spoken in sentences at normal and rapid rates by a female adult. Listeners correctly categorized the speaking rate of sentence materials as normal or rapid (2% errors) but did not accurately classify the speaking rate of the syllables when they were excised from the sentences (25% errors). In contrast, listeners accurately identified the vowels produced in sentences spoken at both rates when presented with the sentences and when presented with the excised syllables blocked by speaking rate or randomized. Acoustical analysis showed that formant frequencies at syllable midpoint for vowels in sentence materials showed "target undershoot" relative to citation-form values, but little change over speech rate. Syllable durations varied systematically with vowel identity, speaking rate, and voicing of final consonant. Vowel-inherent spectral change was invariant in direction of change over rate and context for most vowels. The temporal location of maximum F1 frequency further differentiated spectrally adjacent lax and tense vowels. It was concluded that listeners were able to utilize these rate- and context-independent dynamic spectrotemporal parameters to identify coarticulated vowels, even when sentential information about speaking rate was not available.
Non Linear Assessment of Musical Consonance
NASA Astrophysics Data System (ADS)
Trulla, Lluis Lligoña; Guiliani, Alessandro; Zimatore, Giovanna; Colosimo, Alfredo; Zbilut, Joseph P.
2005-12-01
The position of intervals and the degree of musical consonance can be objectively explained by temporal series formed by mixing two pure sounds covering an octave. This result is achieved by means of Recurrence Quantification Analysis (RQA) without considering either overtones or physiological hypotheses. The obtained prediction of consonance can be considered a novel solution to Galileo's conjecture on the nature of consonance. It constitutes an objective link between musical performance and listeners' hearing activity.
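The stimulus construction and the flavor of an RQA measure can be sketched briefly. The Python code below mixes two pure tones at a given frequency ratio and computes a crude recurrence rate on the delay-embedded signal; the embedding parameters and threshold are illustrative guesses, not those of the published analysis.

import numpy as np

def two_tone(ratio, f0=220.0, fs=8000, dur=0.25):
    # Mix two pure tones whose frequencies stand in the given ratio.
    t = np.arange(int(fs * dur)) / fs
    return np.sin(2 * np.pi * f0 * t) + np.sin(2 * np.pi * f0 * ratio * t)

def recurrence_rate(x, dim=3, delay=8, radius=0.3, n=1200):
    # Crude recurrence rate: fraction of pairs of delay-embedded points
    # closer than `radius` (parameters are illustrative only).
    x = x[:n]
    m = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay:i * delay + m] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (d < radius).mean()

print("fifth (3:2):", recurrence_rate(two_tone(3 / 2)))
print("tritone (45:32):", recurrence_rate(two_tone(45 / 32)))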
Kuwaiti Arabic: acquisition of singleton consonants.
Ayyad, Hadeel Salama; Bernhardt, B May; Stemberger, Joseph P
2016-09-01
Arabic, a Semitic language of the Afro-Asiatic variety, has a rich consonant inventory. Previous studies on Arabic phonological acquisition have focused primarily on dialects in Jordan and Egypt. Because Arabic varies considerably across regions, information is also needed for other dialects. The aim was to determine acquisition benchmarks for singleton consonants for Kuwaiti Arabic-speaking 4-year-olds. Participants were 80 monolingual Kuwaiti Arabic-speaking children divided into two age groups: 46-54 and 55-62 months. Post hoc, eight children were identified as possibly at risk for protracted phonological development. A native Kuwaiti Arabic speaker audio-recorded and transcribed single-word speech samples (88 words) that tested consonants across word positions within a variety of word lengths and structures. Transcription reliability (point-to-point) was 95% amongst the authors, and 87% with an external consultant. Three acquisition levels were designated that indicated the proportion of children with no mismatches ('errors') for a given consonant: 90%+ of children, 75-89%, and fewer than 75%. Mismatch patterns were described in terms of a phonological feature framework previously described in the literature. The Kuwaiti 4-year-olds produced many singleton consonants accurately, including pharyngeals and uvulars. Although the older age group had fewer manner and laryngeal mismatches than the younger age group, consonants still developing at age 5 included coronal fricatives and affricates, trilled /r/ and some uvularized consonants ('emphatics'). The possible at-risk group showed mastery of fewer consonants than the other children. By feature category, place mismatches were the most common, primarily de-emphasis and lack of contrast for [coronal, grooved] (distinguishing alveolar from interdental fricatives). Manner mismatches were next most common: the most frequent substitutions were [+lateral] [l] or other rhotics for /r/, and stops for fricatives. Laryngeal mismatches were few, and involved partial or full devoicing. Group differences generally reflected proportions of mismatches rather than types. Compared with studies for Jordanian and Egyptian Arabic, Kuwaiti 4-year-olds showed a somewhat more advanced consonant inventory than same-age peers, especially with respect to uvulars, pharyngeals and uvularized (emphatic) consonants. Similar to the other studies, consonant categories yet to be mastered were: [+trilled] /r/, the coronal fricative feature [grooved], [+voiced] fricatives /ʕ, z/, the affricate /d͡ʒ/ and some emphatics. Common mismatch patterns generally accorded with previous studies. This study provides criterion-referenced benchmarks for Kuwaiti Arabic consonant singleton acquisition in 4-year-olds. © 2016 Royal College of Speech and Language Therapists.
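The three-level benchmark criterion is simple to operationalize. A minimal Python sketch, using hypothetical per-consonant proportions rather than the study's data:

def acquisition_level(prop_children_correct):
    # Assign the three acquisition levels from the proportion of children
    # who produced a consonant with no mismatches.
    if prop_children_correct >= 0.90:
        return "acquired (90%+ of children)"
    if prop_children_correct >= 0.75:
        return "acquired by most (75-89%)"
    return "still developing (<75%)"

# Hypothetical per-consonant proportions, not the study's results.
for cons, p in {"b": 0.96, "r": 0.41, "z": 0.78}.items():
    print(cons, "->", acquisition_level(p))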
The basis of musical consonance as revealed by congenital amusia
Cousineau, Marion; McDermott, Josh H.; Peretz, Isabelle
2012-01-01
Some combinations of musical notes sound pleasing and are termed “consonant,” but others sound unpleasant and are termed “dissonant.” The distinction between consonance and dissonance plays a central role in Western music, and its origins have posed one of the oldest and most debated problems in perception. In modern times, dissonance has been widely believed to be the product of “beating”: interference between frequency components in the cochlea that is thought to be more pronounced in dissonant than in consonant sounds. However, harmonic frequency relations, a higher-order sound attribute closely related to pitch perception, have also been proposed to account for consonance. To tease apart theories of musical consonance, we tested sound preferences in individuals with congenital amusia, a neurogenetic disorder characterized by abnormal pitch perception. We assessed amusics’ preferences for musical chords as well as for the isolated acoustic properties of beating and harmonicity. In contrast to control subjects, amusic listeners showed no preference for consonance, rating the pleasantness of consonant chords no higher than that of dissonant chords. Amusics also failed to exhibit the normally observed preference for harmonic over inharmonic tones, and could not discriminate such tones from each other. Despite these abnormalities, amusics exhibited normal preferences and discrimination for stimuli with and without beating. This dissociation indicates that, contrary to classic theories, beating is unlikely to underlie consonance. Our results instead suggest the need to integrate harmonicity as a foundation of music preferences, and illustrate how amusia may be used to investigate normal auditory function. PMID:23150582
Production and Perception of Temporal Patterns in Native and Non-Native Speech
Bent, Tessa; Bradlow, Ann R.; Smith, Bruce L.
2012-01-01
Two experiments examined production and perception of English temporal patterns by native and non-native participants. Experiment 1 indicated that native and non-native (L1 = Chinese) talkers differed significantly in their production of one English duration pattern (i.e., vowel lengthening before voiced versus voiceless consonants) but not another (i.e., tense versus lax vowels). Experiment 2 tested native and non-native listener identification of words that differed in voicing of the final consonant by the native and non-native talkers whose productions were substantially different in experiment 1. Results indicated that differences in native and non-native intelligibility may be partially explained by temporal pattern differences in vowel duration although other cues such as presence of stop releases and burst duration may also contribute. Additionally, speech intelligibility depends on shared phonetic knowledge between talkers and listeners rather than only on accuracy relative to idealized production norms. PMID:18679042
Vergara-Martínez, Marta; Perea, Manuel; Marín, Alejandro; Carreiras, Manuel
2011-09-01
Recent research suggests that there is a processing distinction between consonants and vowels in visual-word recognition. Here we conjointly examine the time course of consonants and vowels in processes of letter identity and letter position assignment. Event-related potentials (ERPs) were recorded while participants read words and pseudowords in a lexical decision task. The stimuli were displayed under different conditions in a masked priming paradigm with a 50-ms SOA: (i) identity/baseline condition (e.g., chocolate-CHOCOLATE); (ii) vowels-delayed condition (e.g., choc_l_te-CHOCOLATE); (iii) consonants-delayed condition (cho_o_ate-CHOCOLATE); (iv) consonants-transposed condition (cholocate-CHOCOLATE); (v) vowels-transposed condition (chocalote-CHOCOLATE); and (vi) unrelated condition (editorial-CHOCOLATE). Results showed earlier ERP effects and longer reaction times for the delayed-letter compared to the transposed-letter conditions. Furthermore, at early stages of processing, consonants may play a greater role during letter identity processing. Differences between vowels and consonants regarding letter position assignment are discussed in terms of a later phonological level involved in lexical retrieval. Copyright © 2010 Elsevier Inc. All rights reserved.
Measuring Musical Consonance and Dissonance
ERIC Educational Resources Information Center
LoPresto, Michael C.
2015-01-01
Most combinations of musical tones are perceived as either "consonant," "pleasing" to the human ear, or "dissonant," which is "not pleasing." Despite being largely subjective in nature, sensations of consonance and dissonance can be quantified and then compared to the judgments of human subjects. The…
Spencer, Caroline; Weber-Fox, Christine
2014-09-01
In preschool children, we investigated whether expressive and receptive language, phonological, articulatory, and/or verbal working memory proficiencies aid in predicting eventual recovery or persistence of stuttering. Participants were 65 children: 25 who do not stutter (CWNS) and 40 who stutter (CWS), recruited at ages 3;9-5;8. At initial testing, participants were administered the Test of Auditory Comprehension of Language, 3rd edition (TACL-3), the Structured Photographic Expressive Language Test, 3rd edition (SPELT-3), the Bankson-Bernthal Test of Phonology-Consonant Inventory subtest (BBTOP-CI), the Nonword Repetition Test (NRT; Dollaghan & Campbell, 1998), and the Test of Auditory Perceptual Skills-Revised (TAPS-R) auditory number memory and auditory word memory subtests. Stuttering behaviors of CWS were assessed in subsequent years, forming groups whose stuttering eventually persisted (CWS-Per; n=19) or recovered (CWS-Rec; n=21). Proficiency scores in morphosyntactic skills, consonant production, verbal working memory for known words, and phonological working memory and speech production for novel nonwords obtained at the initial testing were analyzed for each group. CWS-Per were less proficient than CWNS and CWS-Rec in measures of consonant production (BBTOP-CI) and repetition of novel phonological sequences (NRT). In contrast, receptive language, expressive language, and verbal working memory abilities did not distinguish CWS-Rec from CWS-Per. Binary logistic regression analysis indicated that preschool BBTOP-CI scores and overall NRT proficiency significantly predicted future recovery status. Results suggest that phonological and speech articulation abilities in the preschool years should be considered with other predictive factors as part of a comprehensive risk assessment for the development of chronic stuttering. At the end of this activity the reader will be able to: (1) describe the current status of nonlinguistic and linguistic predictors for recovery and persistence of stuttering; (2) summarize current evidence regarding the potential value of consonant cluster articulation and nonword repetition abilities in helping to predict stuttering outcome in preschool children; (3) discuss the current findings in relation to potential implications for theories of developmental stuttering; (4) discuss the current findings in relation to potential considerations for the evaluation and treatment of developmental stuttering. Copyright © 2014 Elsevier Inc. All rights reserved.
Bartle, Carly J; Goozée, Justine V; Murdoch, Bruce E
2007-03-01
The effect of increasing word length on the articulatory dynamics (i.e., duration, distance, maximum acceleration, maximum deceleration, and maximum velocity) of consonant production in acquired apraxia of speech (AOS) was investigated using electromagnetic articulography (EMA). Tongue-tip and tongue-back movement of one apraxic patient was recorded using the AG-200 EMA system during word-initial consonant productions in one-, two-, and three-syllable words. Significantly deviant articulatory parameters were recorded for each of the target consonants in one-, two-, and three-syllable words. Word length effects were most evident during the release phase of target consonant productions. The results are discussed with respect to theories of speech motor control as they relate to AOS.
Wang, M D; Reed, C M; Bilger, R C
1978-03-01
It has been found that listeners with sensorineural hearing loss who show similar patterns of consonant confusions also tend to have similar audiometric profiles. The present study determined whether normal listeners, presented with filtered speech, would produce consonant confusions similar to those previously reported for hearing-impaired listeners. Consonant confusion matrices were obtained from eight normal-hearing subjects for four sets of CV and VC nonsense syllables presented under six high-pass and six low-pass filtering conditions. Patterns of consonant confusion for each condition were described using phonological features in sequential information analysis. Severe low-pass filtering produced consonant confusions comparable to those of listeners with high-frequency hearing loss. Severe high-pass filtering gave a result comparable to that of patients with flat or rising audiograms. Mild filtering resulted in confusion patterns comparable to those of listeners with essentially normal hearing. An explanation in terms of the spectrum, the level of speech, and the configuration of the individual listener's audiogram is given.
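The filtering manipulation is straightforward to reproduce. A minimal Python sketch using Butterworth filters from scipy; the filter order and cutoff frequencies are illustrative, not the study's exact conditions.

import numpy as np
from scipy.signal import butter, sosfilt

def filtered_speech(x, fs, cutoff, kind):
    # High- or low-pass Butterworth filtering, as a stand-in for the
    # study's filtering conditions (order and cutoffs are illustrative).
    sos = butter(4, cutoff, btype=kind, fs=fs, output="sos")
    return sosfilt(sos, x)

fs = 16000
signal = np.random.default_rng(1).normal(size=fs)  # placeholder for a CV syllable
lowpassed = filtered_speech(signal, fs, 800.0, "lowpass")    # mimics high-frequency loss
highpassed = filtered_speech(signal, fs, 2500.0, "highpass")  # mimics flat/rising audiograms
print(np.sqrt((lowpassed ** 2).mean()), np.sqrt((highpassed ** 2).mean()))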
Individual differences reveal the basis of consonance.
McDermott, Josh H; Lehr, Andriana J; Oxenham, Andrew J
2010-06-08
Some combinations of musical notes are consonant (pleasant), whereas others are dissonant (unpleasant), a distinction central to music. Explanations of consonance in terms of acoustics, auditory neuroscience, and enculturation have been debated for centuries. We utilized individual differences to distinguish the candidate theories. We measured preferences for musical chords as well as nonmusical sounds that isolated particular acoustic factors, specifically the beating and the harmonic relationships between frequency components, two factors that have long been thought to potentially underlie consonance. Listeners preferred stimuli without beats and with harmonic spectra, but across more than 250 subjects, only the preference for harmonic spectra was consistently correlated with preferences for consonant over dissonant chords. Harmonicity preferences were also correlated with the number of years subjects had spent playing a musical instrument, suggesting that exposure to music amplifies preferences for harmonic frequencies because of their musical importance. Harmonic spectra are prominent features of natural sounds, and our results indicate that they also underlie the perception of consonance. 2010 Elsevier Ltd. All rights reserved.
Dressler, William W; Balieiro, Mauro C; Ribeiro, Rosane P; Dos Santos, José Ernesto
2016-06-01
In this article, we examine the distribution of a marker of immune system stimulation, C-reactive protein, in urban Brazil. Social relationships are associated with immunostimulation, and we argue that cultural dimensions of social support, assessed by cultural consonance, are important in this process. Cultural consonance is the degree to which individuals, in their own beliefs and behaviors, approximate shared cultural models. A measure of cultural consonance in social support, based on a cultural consensus analysis regarding sources and patterns of social support in Brazil, was developed. In a survey of 258 persons, the association of cultural consonance in social support and C-reactive protein was examined, controlling for age, sex, body mass index, low-density lipoprotein cholesterol, depressive symptoms, and a social network index. Lower cultural consonance in social support was associated with higher C-reactive protein. Implications of these results for future research are discussed. © 2016 by the American Anthropological Association.
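The consonance measure itself is essentially an agreement score between an individual's answers and a shared answer key. The Python sketch below uses a simple modal answer key as a stand-in for formal cultural consensus analysis; the items and responses are hypothetical.

from collections import Counter

def consensus_key(responses):
    # Stand-in for cultural consensus analysis: take the modal answer to
    # each item as the culturally agreed answer (the published measure
    # uses a formal consensus model, not a simple majority).
    n_items = len(responses[0])
    return [Counter(r[i] for r in responses).most_common(1)[0][0]
            for i in range(n_items)]

def cultural_consonance(individual, key):
    # Proportion of items on which an individual's reported beliefs and
    # behaviors match the consensus model.
    return sum(a == b for a, b in zip(individual, key)) / len(key)

# Hypothetical yes/no items about sources of social support.
sample = [[1, 1, 0, 1], [1, 0, 0, 1], [1, 1, 1, 1], [0, 1, 0, 1]]
key = consensus_key(sample)
print(cultural_consonance([1, 1, 0, 0], key))  # 0.75 agreement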
Mapping the cortical representation of speech sounds in a syllable repetition task.
Markiewicz, Christopher J; Bohland, Jason W
2016-11-01
Speech repetition relies on a series of distributed cortical representations and functional pathways. A speaker must map auditory representations of incoming sounds onto learned speech items, maintain an accurate representation of those items in short-term memory, interface that representation with the motor output system, and fluently articulate the target sequence. A "dorsal stream" consisting of posterior temporal, inferior parietal and premotor regions is thought to mediate auditory-motor representations and transformations, but the nature and activation of these representations for different portions of speech repetition tasks remains unclear. Here we mapped the correlates of phonetic and/or phonological information related to the specific phonemes and syllables that were heard, remembered, and produced using a series of cortical searchlight multi-voxel pattern analyses trained on estimates of BOLD responses from individual trials. Based on responses linked to input events (auditory syllable presentation), predictive vowel-level information was found in the left inferior frontal sulcus, while syllable prediction revealed significant clusters in the left ventral premotor cortex and central sulcus and the left mid superior temporal sulcus. Responses linked to output events (the GO signal cueing overt production) revealed strong clusters of vowel-related information bilaterally in the mid to posterior superior temporal sulcus. For the prediction of onset and coda consonants, input-linked responses yielded distributed clusters in the superior temporal cortices, which were further informative for classifiers trained on output-linked responses. Output-linked responses in the Rolandic cortex made strong predictions for the syllables and consonants produced, but their predictive power was reduced for vowels. The results of this study provide a systematic survey of how cortical response patterns covary with the identity of speech sounds, which will help to constrain and guide theoretical models of speech perception, speech production, and phonological working memory. Copyright © 2016 Elsevier Inc. All rights reserved.
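The searchlight logic amounts to asking, for each small neighborhood of voxels, whether single-trial response patterns predict the heard or produced speech sound. A minimal Python sketch using leave-one-trial-out nearest-centroid classification on simulated patterns; the classifier and data are illustrative stand-ins, not the study's pipeline.

import numpy as np

def searchlight_accuracy(patterns, labels):
    # Leave-one-trial-out nearest-centroid classification of single-trial
    # response patterns within one searchlight neighborhood.
    patterns = np.asarray(patterns, dtype=float)
    labels = np.asarray(labels)
    correct = 0
    for i in range(len(labels)):
        keep = np.arange(len(labels)) != i
        classes = np.unique(labels[keep])
        centroids = [patterns[keep & (labels == c)].mean(axis=0) for c in classes]
        pred = classes[np.argmin([np.linalg.norm(patterns[i] - c) for c in centroids])]
        correct += pred == labels[i]
    return correct / len(labels)

rng = np.random.default_rng(0)
# 40 trials x 33 voxels, two vowel classes with a small mean difference.
X = rng.normal(size=(40, 33)) + np.repeat([0.0, 0.5], 20)[:, None]
y = np.repeat(["a", "i"], 20)
print(searchlight_accuracy(X, y))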
How culture shapes the body: cultural consonance and body mass in urban Brazil.
Dressler, William W; Oths, Kathryn S; Balieiro, Mauro C; Ribeiro, Rosane P; Dos Santos, José Ernesto
2012-01-01
The aim of this article is to develop a model of how culture shapes the body, based on two studies conducted in urban Brazil. Research was conducted in 1991 and 2001 in four socioeconomically distinct neighborhoods. First, cultural domain analyses were conducted with samples of key informants. The cultural domains investigated included lifestyle, social support, family life, national identity, and food. Cultural consensus analysis was used to confirm shared knowledge in each domain and to derive measures of cultural consonance. Cultural consonance assesses how closely an individual matches the cultural consensus model for each domain. Second, body composition, cultural consonance, and related variables were assessed in community surveys. Multiple regression analysis was used to examine the association of cultural consonance and body composition, controlling for standard covariates and competing explanatory variables. In 1991, in a survey of 260 individuals, cultural consonance had a curvilinear association with the body mass index that differed for men and women, controlling for sociodemographic and dietary variables. In 2001, in a survey of 267 individuals, cultural consonance had a linear association with abdominal circumference that differed for men and women, controlling for sociodemographic and dietary variables. In general, as cultural consonance increases, body mass index and abdominal circumference decline, more strongly for women than men. As individuals, in their own beliefs and behaviors, more closely approximate shared cultural models in socially salient domains, body composition also more closely approximates the cultural prototype of the body. Copyright © 2012 Wiley Periodicals, Inc.
Cousineau, Marion; Bidelman, Gavin M.; Peretz, Isabelle; Lehmann, Alexandre
2015-01-01
Some combinations of musical tones sound pleasing to Western listeners, and are termed consonant, while others sound discordant, and are termed dissonant. The perceptual phenomenon of consonance has been traced to the acoustic property of harmonicity. It has been repeatedly shown that neural correlates of consonance can be found as early as the auditory brainstem as reflected in the harmonicity of the scalp-recorded frequency-following response (FFR). “Neural Pitch Salience” (NPS) measured from FFRs—essentially a time-domain equivalent of the classic pattern recognition models of pitch—has been found to correlate with behavioral judgments of consonance for synthetic stimuli. Following the idea that the auditory system has evolved to process behaviorally relevant natural sounds, and in order to test the generalizability of this finding made with synthetic tones, we recorded FFRs for consonant and dissonant intervals composed of synthetic and natural stimuli. We found that NPS correlated with behavioral judgments of consonance and dissonance for synthetic but not for naturalistic sounds. These results suggest that while some form of harmonicity can be computed from the auditory brainstem response, the general percept of consonance and dissonance is not captured by this measure. It might either be represented in the brainstem in a different code (such as place code) or arise at higher levels of the auditory pathway. Our findings further illustrate the importance of using natural sounds, as a complementary tool to fully-controlled synthetic sounds, when probing auditory perception. PMID:26720000
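A time-domain pitch-salience measure of the kind described can be approximated with an autocorrelation peak. The Python sketch below is a simplification of the published FFR-based measure, applied to synthetic harmonic and inharmonic complexes.

import numpy as np

def pitch_salience(signal, fs, fmin=80.0, fmax=400.0):
    # Toy "neural pitch salience": height of the tallest normalized
    # autocorrelation peak in the candidate pitch-period range.
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / ac[0]  # normalize so lag 0 equals 1
    lo, hi = int(fs / fmax), int(fs / fmin)
    return ac[lo:hi].max()

fs = 8000
t = np.arange(int(0.2 * fs)) / fs
harmonic = sum(np.sin(2 * np.pi * 200 * k * t) for k in (1, 2, 3))
inharmonic = sum(np.sin(2 * np.pi * f * t) for f in (200, 413, 635))
print(pitch_salience(harmonic, fs), pitch_salience(inharmonic, fs))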
Enhancing Vowel Discrimination Using Constructed Spelling
ERIC Educational Resources Information Center
Stewart, Katherine; Hayashi, Yusuke; Saunders, Kathryn
2010-01-01
In a computerized task, an adult with intellectual disabilities learned to construct consonant-vowel-consonant words in the presence of corresponding spoken words. During the initial assessment, the participant demonstrated high accuracy on one word group (containing the vowel-consonant units "it" and "un") but low accuracy on the other group…
Production of Consonants by Prelinguistically Deaf Children with Cochlear Implants
ERIC Educational Resources Information Center
Bouchard, Marie-Eve Gaul; Le Normand, Marie-Therese; Cohen, Henri
2007-01-01
Consonant production following the sensory restoration of audition was investigated in 22 prelinguistically deaf French children who received cochlear implants. Spontaneous speech productions were recorded at 6, 12, and 18 months post-surgery and consonant inventories were derived from both glossable and non-glossable phones using two acquisition…
Palatalization in Romanian: Experimental and Theoretical Approaches
ERIC Educational Resources Information Center
Spinu, Laura
2010-01-01
Within the larger context of the Romance languages, Romanian stands alone in exhibiting a surface contrast between plain and palatalized consonants (that is, consonants with a secondary palatal articulation). While the properties of secondary palatalization are well known for language families in which the set of palatalized consonants is…
A Vowel Is a Vowel: Generalizing Newly Learned Phonotactic Constraints to New Contexts
ERIC Educational Resources Information Center
Chambers, Kyle E.; Onishi, Kristine H.; Fisher, Cynthia
2010-01-01
Adults can learn novel phonotactic constraints from brief listening experience. We investigated the representations underlying phonotactic learning by testing generalization to syllables containing new vowels. Adults heard consonant-vowel-consonant study syllables in which particular consonants were artificially restricted to the onset or coda…
An EPG Study of Palatal Consonants in Two Australian Languages
ERIC Educational Resources Information Center
Tabain, Marija; Fletcher, Janet; Butcher, Andrew
2011-01-01
This study presents EPG (electro-palatographic) data on (alveo-)palatal consonants from two Australian languages, Arrernte and Warlpiri. (Alveo-)palatal consonants are phonemic for stop, lateral and nasal manners of articulation in both languages, and are laminal articulations. However, in Arrernte, these lamino-(alveo-)palatals contrast with…
Bones, Oliver; Plack, Christopher J
2015-03-04
When two musical notes with simple frequency ratios are played simultaneously, the resulting musical chord is pleasing and evokes a sense of resolution or "consonance". Complex frequency ratios, on the other hand, evoke feelings of tension or "dissonance". Consonance and dissonance form the basis of harmony, a central component of Western music. In earlier work, we provided evidence that consonance perception is based on neural temporal coding in the brainstem (Bones et al., 2014). Here, we show that for listeners with clinically normal hearing, aging is associated with a decline in both the perceptual distinction and the distinctiveness of the neural representations of different categories of two-note chords. Compared with younger listeners, older listeners rated consonant chords as less pleasant and dissonant chords as more pleasant. Older listeners also had less distinct neural representations of consonant and dissonant chords as measured using a Neural Consonance Index derived from the electrophysiological "frequency-following response." The results withstood a control for the effect of age on general affect, suggesting that different mechanisms are responsible for the perceived pleasantness of musical chords and affective voices and that, for listeners with clinically normal hearing, age-related differences in consonance perception are likely to be related to differences in neural temporal coding. Copyright © 2015 Bones and Plack.
Locus equations and coarticulation in three Australian languages.
Graetzer, Simone; Fletcher, Janet; Hajek, John
2015-02-01
Locus equations were applied to F2 data for bilabial, alveolar, retroflex, palatal, and velar plosives in three Australian languages. In addition, F2 variance at the vowel-consonant boundary, and, by extension, consonantal coarticulatory sensitivity, was measured. The locus equation slopes revealed that there were place-dependent differences in the magnitude of vowel-to-consonant coarticulation. As in previous studies, the non-coronal (bilabial and velar) consonants tended to be associated with the highest slopes, palatal consonants tended to be associated with the lowest slopes, and alveolar and retroflex slopes tended to be low to intermediate. Similarly, F2 variance measurements indicated that non-coronals displayed greater coarticulatory sensitivity to adjacent vowels than did coronals. Thus, both the magnitude of vowel-to-consonant coarticulation and the magnitude of consonantal coarticulatory sensitivity were seen to vary inversely with the magnitude of consonantal articulatory constraint. The findings indicated that, unlike results reported previously for European languages such as English, anticipatory vowel-to-consonant coarticulation tends to exceed carryover coarticulation in these Australian languages. Accordingly, on the F2 variance measure, consonants tended to be more sensitive to the coarticulatory effects of the following vowel. Prosodic prominence of vowels was a less significant factor in general, although certain language-specific patterns were observed.
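A locus equation is a linear regression of F2 at the consonant-vowel boundary on F2 at the vowel midpoint, fitted across vowel contexts for one consonant; the slope indexes the magnitude of vowel-to-consonant coarticulation. A minimal Python sketch with invented formant values:

import numpy as np

# Hypothetical F2 measurements (Hz) for one consonant across five vowel
# contexts: f2_mid at the vowel midpoint, f2_onset at the CV boundary.
f2_mid = np.array([2300, 1900, 1500, 1100, 900], dtype=float)
f2_onset = np.array([2000, 1750, 1500, 1250, 1120], dtype=float)

slope, intercept = np.polyfit(f2_mid, f2_onset, 1)
print(f"locus equation: F2_onset = {slope:.2f} * F2_mid + {intercept:.0f}")
# A slope near 1 signals strong coarticulation (typical of bilabials and
# velars); a slope near 0 signals coarticulatory resistance (palatals).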
The Pedagogical Use of Mobile Speech Synthesis (TTS): Focus on French Liaison
ERIC Educational Resources Information Center
Liakin, Denis; Cardoso, Walcir; Liakina, Natallia
2017-01-01
We examine the impact of the pedagogical use of mobile TTS on the L2 acquisition of French liaison, a process by which a word-final consonant is pronounced at the beginning of the following word if the latter is vowel-initial (e.g. peti/t.a/mi => peti[ta]mi "boyfriend"). The study compares three groups of L2 French students learning…
Indifference to dissonance in native Amazonians reveals cultural variation in music perception.
McDermott, Josh H; Schultz, Alan F; Undurraga, Eduardo A; Godoy, Ricardo A
2016-07-28
Music is present in every culture, but the degree to which it is shaped by biology remains debated. One widely discussed phenomenon is that some combinations of notes are perceived by Westerners as pleasant, or consonant, whereas others are perceived as unpleasant, or dissonant. The contrast between consonance and dissonance is central to Western music and its origins have fascinated scholars since the ancient Greeks. Aesthetic responses to consonance are commonly assumed by scientists to have biological roots, and thus to be universally present in humans. Ethnomusicologists and composers, in contrast, have argued that consonance is a creation of Western musical culture. The issue has remained unresolved, partly because little is known about the extent of cross-cultural variation in consonance preferences. Here we report experiments with the Tsimane', a native Amazonian society with minimal exposure to Western culture, and comparison populations in Bolivia and the United States that varied in exposure to Western music. Participants rated the pleasantness of sounds. Despite exhibiting Western-like discrimination abilities and Western-like aesthetic responses to familiar sounds and acoustic roughness, the Tsimane' rated consonant and dissonant chords and vocal harmonies as equally pleasant. By contrast, Bolivian city- and town-dwellers exhibited significant preferences for consonance, albeit to a lesser degree than US residents. The results indicate that consonance preferences can be absent in cultures sufficiently isolated from Western music, and are thus unlikely to reflect innate biases or exposure to harmonic natural sounds. The observed variation in preferences is presumably determined by exposure to musical harmony, suggesting that culture has a dominant role in shaping aesthetic responses to music.
Voice Onset Time Production in Speakers with Alzheimer's Disease
ERIC Educational Resources Information Center
Baker, Julie; Ryalls, Jack; Brice, Alejandro; Whiteside, Janet
2007-01-01
In the present study, voice onset time (VOT) measurements were compared between a group of individuals with moderate Alzheimer's disease (AD) and a group of healthy age- and gender-matched peers. Participants read a list of consonant-vowel-consonant (CVC) words, which included the six stop consonants. The VOT measurements were made from…
Cross-Linguistic Differences in the Immediate Serial Recall of Consonants versus Vowels
ERIC Educational Resources Information Center
Kissling, Elizabeth M.
2012-01-01
The current study investigated native English and native Arabic speakers' phonological short-term memory for sequences of consonants and vowels. Phonological short-term memory was assessed in immediate serial recall tasks conducted in Arabic and English for both groups. Participants (n = 39) heard series of six consonant-vowel syllables and wrote…
ERIC Educational Resources Information Center
Kurowski, Kathleen M.; Blumstein, Sheila E.; Palumbo, Carole L.; Waldstein, Robin S.; Burton, Martha W.
2007-01-01
The present study investigated the articulatory implementation deficits of Broca's and Wernicke's aphasics and their potential neuroanatomical correlates. Five Broca's aphasics, two Wernicke's aphasics, and four age-matched normal speakers produced consonant-vowel-(consonant) real word tokens consisting of [m, n] followed by [i, e, a, o, u]. Three…
ERIC Educational Resources Information Center
Knobel, Mark; Caramazza, Alfonso
2007-01-01
Caramazza et al. [Caramazza, A., Chialant, D., Capasso, R., & Miceli, G. (2000). Separable processing of consonants and vowels. "Nature," 403(6768), 428-430.] report two patients who exhibit a double dissociation between consonants and vowels in speech production. The patterning of this double dissociation cannot be explained by appealing to…
ERIC Educational Resources Information Center
Kambuziya, Aliyeh Kord-e Zafaranlu; Dehghan, Masoud
2011-01-01
This paper investigates the process of epenthesis in Persian, with results concerning vowel and consonant insertion in the Persian lexicon. The survey is closely related to the description of epenthetic consonants and the conditions in which these consonants are used. Since no word in Persian may begin with a vowel, hiatus cannot be…
ERIC Educational Resources Information Center
Tamura, Shunsuke; Ito, Kazuhito; Hirose, Nobuyuki; Mori, Shuji
2018-01-01
Purpose: The purpose of this study was to investigate the psychophysical boundary used for categorization of voiced-voiceless stop consonants in native Japanese speakers. Method: Twelve native Japanese speakers participated in the experiment. The stimuli were synthetic stop consonant-vowel stimuli varying in voice onset time (VOT) with…
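The psychophysical boundary referred to here is conventionally estimated by fitting a logistic function to the identification curve and taking its 50% crossover. A minimal Python sketch with hypothetical identification data for a /da/-/ta/ VOT continuum:

import numpy as np
from scipy.optimize import curve_fit

def logistic(vot, boundary, slope):
    # Proportion of "voiceless" responses as a function of VOT (ms).
    return 1.0 / (1.0 + np.exp(-slope * (vot - boundary)))

# Hypothetical identification proportions, not the study's data.
vot = np.array([0, 10, 20, 30, 40, 50], dtype=float)
p_voiceless = np.array([0.02, 0.08, 0.35, 0.80, 0.95, 0.99])

(boundary, slope), _ = curve_fit(logistic, vot, p_voiceless, p0=[25.0, 0.3])
print(f"category boundary ~ {boundary:.1f} ms VOT")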
Double Consonants in English: Graphemic, Morphological, Prosodic and Etymological Determinants
ERIC Educational Resources Information Center
Berg, Kristian
2016-01-01
What determines consonant doubling in English? This question is pursued by using a large lexical database to establish systematic correlations between spelling, phonology and morphology. The main insights are: Consonant doubling is most regular at morpheme boundaries. It can be described in graphemic terms alone, i.e. without reference to…
ERIC Educational Resources Information Center
Bouchon, Camillia; Floccia, Caroline; Fux, Thibaut; Adda-Decker, Martine; Nazzi, Thierry
2015-01-01
Consonants and vowels differ acoustically and articulatorily, but also functionally: Consonants are more relevant for lexical processing, and vowels for prosodic/syntactic processing. These functional biases could be powerful bootstrapping mechanisms for learning language, but their developmental origin remains unclear. The relative importance of…
Predictions interact with missing sensory evidence in semantic processing areas.
Scharinger, Mathias; Bendixen, Alexandra; Herrmann, Björn; Henry, Molly J; Mildner, Toralf; Obleser, Jonas
2016-02-01
Human brain function draws on predictive mechanisms that exploit higher-level context during lower-level perception. These mechanisms are particularly relevant for situations in which sensory information is compromised or incomplete, as for example in natural speech where speech segments may be omitted due to sluggish articulation. Here, we investigate which brain areas support the processing of incomplete words that were predictable from semantic context, compared with incomplete words that were unpredictable. During functional magnetic resonance imaging (fMRI), participants heard sentences that orthogonally varied in predictability (semantically predictable vs. unpredictable) and completeness (complete vs. incomplete, i.e. missing their final consonant cluster). The effects of predictability and completeness interacted in heteromodal semantic processing areas, including left angular gyrus and left precuneus, where activity did not differ between complete and incomplete words when they were predictable. The same regions showed stronger activity for incomplete than for complete words when they were unpredictable. The interaction pattern suggests that for highly predictable words, the speech signal does not need to be complete for neural processing in semantic processing areas. Hum Brain Mapp 37:704-716, 2016. © 2015 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Gerlach, Sharon Ruth
2010-01-01
This dissertation examines three processes affecting consonants in child speech: harmony (long-distance assimilation) involving major place features as in "coat" [kouk]; long-distance metathesis as in "cup" [p[wedge]k]; and initial consonant deletion as in "fish" [is]. These processes are unattested in adult phonology, leading to proposals for…
The Role of Geminates in Infants' Early Word Production and Word-Form Recognition
ERIC Educational Resources Information Center
Vihman, Marilyn; Majoran, Marinella
2017-01-01
Infants learning languages with long consonants, or geminates, have been found to "overselect" and "overproduce" these consonants in early words and also to commonly omit the word-initial consonant. A production study with thirty Italian children recorded at 1;3 and 1;9 strongly confirmed both of these tendencies. To test the…
Vocalization Rate and Consonant Production in Toddlers at High and Low Risk for Autism
ERIC Educational Resources Information Center
Chenausky, Karen; Nelson, Charles, III.; Tager-Flusberg, Helen
2017-01-01
Background: Previous work has documented lower vocalization rate and consonant acquisition delays in toddlers with autism spectrum disorder (ASD). We investigated differences in these variables at 12, 18, and 24 months in toddlers at high and low risk for ASD. Method: Vocalization rate and number of different consonants were obtained from speech…
Biomechanically Preferred Consonant-Vowel Combinations Fail to Appear in Adult Spoken Corpora
ERIC Educational Resources Information Center
Whalen, D. H.; Giulivi, Sara; Nam, Hosung; Levitt, Andrea G.; Halle, Pierre; Goldstein, Louis M.
2012-01-01
Certain consonant/vowel (CV) combinations are more frequent than would be expected from the individual C and V frequencies alone, both in babbling and, to a lesser extent, in adult language, based on dictionary counts: Labial consonants co-occur with central vowels more often than chance would dictate; coronals co-occur with front vowels, and…
Orthography affects second language speech: Double letters and geminate production in English.
Bassetti, Bene
2017-11-01
Second languages (L2s) are often learned through spoken and written input, and L2 orthographic forms (spellings) can lead to non-native-like pronunciation. The present study investigated whether orthography can lead experienced learners of English L2 to make a phonological contrast in their speech production that does not exist in English. Double consonants represent geminate (long) consonants in Italian but not in English. In Experiment 1, native English speakers and English L2 speakers (Italians) were asked to read aloud English words spelled with a single or double target consonant letter, and consonant duration was compared. The English L2 speakers produced the same consonant as shorter when it was spelled with a single letter, and longer when spelled with a double letter. Spelling did not affect consonant duration in native English speakers. In Experiment 2, effects of orthographic input were investigated by comparing 2 groups of English L2 speakers (Italians) performing a delayed word repetition task with or without orthographic input; the same orthographic effects were found in both groups. These results provide arguably the first evidence that L2 orthographic forms can lead experienced L2 speakers to make a contrast in their L2 production that does not exist in the language. The effect arises because L2 speakers are affected by the interaction between the L2 orthographic form (number of letters), and their native orthography-phonology mappings, whereby double consonant letters represent geminate consonants. These results have important implications for future studies investigating the effects of orthography on native phonology and for L2 phonological development models. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Cannito, Michael P; Chorna, Lesya B; Kahane, Joel C; Dworkin, James P
2014-05-01
This study evaluated the hypotheses that sentence production by speakers with adductor (AD) and abductor (AB) spasmodic dysphonia (SD) may be differentially influenced by consonant voicing and manner features, in comparison with healthy, matched, nondysphonic controls. This was a prospective, single-blind study, using a between-groups, repeated measures design for the dependent variables of perceived voice quality and sentence duration. Sixteen subjects with ADSD and 10 subjects with ABSD, as well as 26 matched healthy controls, produced four short, simple sentences that were systematically loaded with voiced or voiceless consonants of either obstruent or continuant manner categories. Experienced voice clinicians, who were "blind" as to speakers' group affiliations, used visual analog scaling to judge the overall voice quality of each sentence. Acoustic sentence durations were also measured. Speakers with ABSD or ADSD demonstrated significantly poorer than normal voice quality on all sentences. Speakers with ABSD exhibited longer than normal durations for voiceless consonant sentences. Speakers with ADSD had poorer voice quality for voiced than for voiceless consonant sentences. Speakers with ABSD had longer durations for voiceless than for voiced consonant sentences. The two subtypes of SD thus exhibit differential performance on the basis of consonant voicing in short, simple sentences; however, each subgroup manifested voicing-related differences on a different variable (voice quality vs sentence duration). Findings suggest different underlying pathophysiological mechanisms for ABSD and ADSD. Findings also support the inclusion of short, simple sentences containing voiced or voiceless consonants as part of the diagnostic protocol for SD, with measurement of sentence duration in addition to judgments of voice quality severity. Copyright © 2014 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
Albustanji, Yusuf M; Albustanji, Mahmoud M; Hegazi, Mohamed M; Amayreh, Mousa M
2014-10-01
The purpose of this study was to assess the prevalence and types of consonant production errors and phonological processes in Saudi Arabic-speaking children with repaired cleft lip and palate (CLP), and to determine the relationship between the frequency of errors and the type of cleft. Possible relationships between age, gender and frequency of errors were also investigated. Eighty Saudi children with repaired cleft lip and palate, aged 6-15 years (mean 6.7 years), underwent speech, language, and hearing evaluation. The diagnosis of articulation deficits was based on the results of an Arabic articulation test. Phonological processes were reported based on a productivity criterion of a minimum 20% occurrence. Nasality was rated on a 5-point scale reflecting severity from 0 through 4. All participants underwent intraoral examination, informal language assessment, and hearing evaluation to assess their speech and language abilities. The chi-square test for independence was used to analyze the results of consonant production as a function of type of CLP and age. Of the 80 participants with CLP, 21 had normal articulation and resonance, while 59 (74%) showed speech abnormalities. Twenty-one of these 59 participants showed only articulation errors; 17 showed only hypernasality; and 21 showed both articulation and resonance deficits. CAs were observed in 20 participants. The productive phonological processes were consonant backing, final consonant deletion, gliding, and stopping. At age 6 and older, 37% of participants had persisting hearing loss. Despite the early age at time of surgery (mean 6.7 months) for the CLP participants in this study, a substantial number of them demonstrated articulation errors and hypernasality. The results offer useful comparative data across languages; it is especially interesting to consider the prevalence of glottal stops and pharyngeal fricatives in a population for whom these sounds are phonemic. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
ERIC Educational Resources Information Center
Vergara-Martinez, Marta; Perea, Manuel; Marin, Alejandro; Carreiras, Manuel
2011-01-01
Recent research suggests that there is a processing distinction between consonants and vowels in visual-word recognition. Here we conjointly examine the time course of consonants and vowels in processes of letter identity and letter position assignment. Event related potentials (ERPs) were recorded while participants read words and pseudowords in…
ERIC Educational Resources Information Center
Recasens, Daniel
2015-01-01
Purpose: The goal of this study was to ascertain the effect of changes in stress and speech rate on vowel coarticulation in vowel-consonant-vowel sequences. Method: Data on second formant coarticulatory effects as a function of changing /i/ versus /a/ were collected for five Catalan speakers' productions of vowel-consonant-vowel sequences with the…
ERIC Educational Resources Information Center
Jin, Su-Hyun; Liu, Chang
2014-01-01
Purpose: The purpose of this study was to examine the intelligibility of English consonants and vowels produced by Chinese-native (CN), and Korean-native (KN) students enrolled in American universities. Method: 16 English-native (EN), 32 CN, and 32 KN speakers participated in this study. The intelligibility of 16 American English consonants and 16…
Cognitive interference can be mitigated by consonant music and facilitated by dissonant music
Masataka, Nobuo; Perlovsky, Leonid
2013-01-01
Debates on the origins of consonance and dissonance in music have a long history. While some scientists argue that consonance judgments are an acquired competence based on exposure to the musical-system-specific knowledge of a particular culture, others favor a biological explanation for the observed preference for consonance. Here we provide experimental confirmation that this preference plays an adaptive role in human cognition: it reduces cognitive interference. The results of our experiment reveal that exposure to a Mozart minuet mitigates interference, whereas, conversely, when the music is modified to consist of mostly dissonant intervals the interference effect is intensified. PMID:23778307
Mild Dissonance Preferred Over Consonance in Single Chord Perception
Eerola, Tuomas
2016-01-01
Previous research on harmony perception has mainly been concerned with horizontal aspects of harmony, paying less attention to how listeners perceive psychoacoustic qualities and emotions in single isolated chords. A recent study found mild dissonances to be more preferred than consonances in single chord perception, although the authors did not systematically vary register and consonance in their study; these omissions were explored here. An online empirical experiment was conducted where participants (N = 410) evaluated chords on the dimensions of Valence, Tension, Energy, Consonance, and Preference; 15 different chords were played with piano timbre across two octaves. The results suggest significant differences on all dimensions across chord types, and a strong correlation between perceived dissonance and tension. The register and inversions contributed to the evaluations significantly, with nonmusicians distinguishing between triadic inversions similarly to musicians. The mildly dissonant minor ninth, major ninth, and minor seventh chords were rated highest for preference, regardless of musical sophistication. The role of theoretical explanations such as aggregate dyadic consonance, the inverted-U hypothesis, and psychoacoustic roughness, harmonicity, and sharpness will be discussed to account for the preference of mild dissonance over consonance in single chord perception. PMID:27433333
Perea, Manuel; Acha, Joana
2009-02-01
Recently, a number of input coding schemes (e.g., SOLAR model, SERIOL model, open-bigram model, overlap model) have been proposed that capture the transposed-letter priming effect (i.e., faster response times for jugde-JUDGE than for jupte-JUDGE). In their current version, these coding schemes do not assume any processing differences between vowels and consonants. However, in a lexical decision task, Perea and Lupker (2004, JML; Lupker, Perea, & Davis, 2008, L&CP) reported that transposed-letter priming effects occurred for consonant transpositions but not for vowel transpositions. This finding poses a challenge for these recently proposed coding schemes. Here, we report four masked priming experiments that examine whether this consonant/vowel dissociation in transposed-letter priming is task-specific. In Experiment 1, we used a lexical decision task and found a transposed-letter priming effect only for consonant transpositions. In Experiments 2-4, we employed a same-different task - a task which taps early perceptual processes - and found a robust transposed-letter priming effect that did not interact with consonant/vowel status. We examine the implications of these findings for the front-end of the models of visual word recognition.
Functional organization for musical consonance and tonal pitch hierarchy in human auditory cortex.
Bidelman, Gavin M; Grall, Jeremy
2014-11-01
Pitch relationships in music are characterized by their degree of consonance, a hierarchical perceptual quality that distinguishes how pleasant musical chords/intervals sound to the ear. The origins of consonance have been debated since the ancient Greeks. To elucidate the neurobiological mechanisms underlying these musical fundamentals, we recorded neuroelectric brain activity while participants listened passively to various chromatic musical intervals (simultaneously sounding pitches) varying in their perceptual pleasantness (i.e., consonance/dissonance). Dichotic presentation eliminated acoustic and peripheral contributions that often confound explanations of consonance. We found that neural representations for pitch in early human auditory cortex code perceptual features of musical consonance and follow a hierarchical organization according to music-theoretic principles. These neural correlates emerge pre-attentively within ~ 150 ms after the onset of pitch, are segregated topographically in superior temporal gyrus with a rightward hemispheric bias, and closely mirror listeners' behavioral valence preferences for the chromatic tone combinations inherent to music. A perceptual-based organization implies that parallel to the phonetic code for speech, elements of music are mapped within early cerebral structures according to higher-order, perceptual principles and the rules of Western harmony rather than simple acoustic attributes. Copyright © 2014 Elsevier Inc. All rights reserved.
Dickinson, Ann-Marie; Baker, Richard; Siciliano, Catherine; Munro, Kevin J
2014-10-01
To identify which training approach, if any, is most effective for improving perception of frequency-compressed speech. A between-subjects design using repeated measures. Forty young adults with normal hearing were randomly allocated to one of four groups: a training group (sentence or consonant) or a control group (passive exposure or test-only). Test and training material differed in terms of material and speaker. On average, sentence training and passive exposure led to significantly improved sentence recognition (11.0% and 11.7%, respectively) compared with the consonant training group (2.5%) and the test-only group (0.4%), whilst consonant training led to significantly improved consonant recognition (8.8%) compared with the sentence training group (1.9%), the passive exposure group (2.8%), and the test-only group (0.8%). Sentence training led to improved sentence recognition, whilst consonant training led to improved consonant recognition. This suggests that learning transferred across speakers and material but not stimuli. Passive exposure to sentence material led to an improvement in sentence recognition that was equivalent to gains from active training. This suggests that it may be possible to adapt passively to frequency-compressed speech.
Perception of resyllabification in French.
Gaskell, M Gareth; Spinelli, Elsa; Meunier, Fanny
2002-07-01
In three experiments, we examined the effects of phonological resyllabification processes on the perception of French speech. Enchainment involves the resyllabification of a word-final consonant across a syllable boundary (e.g., in chaque avion, the /k/ crosses the syllable boundary to become syllable initial). Liaison involves a further process of realization of a latent consonant, alongside resyllabification (e.g., the /t/ in petit avion). If the syllable is a dominant unit of perception in French (Mehler, Dommergues, Frauenfelder, & Segui, 1981), these processes should cause problems for recognition of the following word. A cross-modal priming experiment showed no cost attached to either type of resyllabification in terms of reduced activation of the following word. Furthermore, word- and sequence-monitoring experiments again showed no cost and suggested that the recognition of vowel-initial words may be facilitated when they are preceded by a word that had undergone resyllabification through enchainment or liaison. We examine the sources of information that could underpin facilitation and propose a refinement of the syllable's role in the perception of French speech.
Psychophysical basis for consonant musical intervals
NASA Astrophysics Data System (ADS)
Resnick, L.
1981-06-01
A suggestion is made to explain the acceptance of certain musical intervals as consonant and others as dissonant. The proposed explanation involves the relation between the time required to perceive a definite pitch and the period of a complex tone. If the former time is greater than the latter, the tone is consonant; otherwise it is dissonant. A quantitative examination leads to agreement with empirical data.
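The criterion described above is concrete enough to sketch numerically. Below is a minimal Python illustration, not the paper's own model: for an interval whose frequencies stand in the ratio p:q (lowest terms), the two-tone complex repeats at the common fundamental f_low/q, and the interval counts as consonant when the pitch-perception time exceeds that period. The 30 ms pitch-perception time is an assumed illustrative value, not a figure taken from the paper.

```python
from math import gcd

def common_period_ms(f_low_hz: float, p: int, q: int) -> float:
    """Period of the two-tone complex f_low and f_low * p/q.

    For a frequency ratio p:q in lowest terms, the complex repeats
    at the common fundamental f_low / q.
    """
    g = gcd(p, q)
    p, q = p // g, q // g
    return 1000.0 * q / f_low_hz  # period of f_low/q, in ms

def is_consonant(f_low_hz: float, p: int, q: int,
                 pitch_time_ms: float = 30.0) -> bool:
    # pitch_time_ms is an assumed illustrative integration time.
    return pitch_time_ms > common_period_ms(f_low_hz, p, q)

# A perfect fifth (3:2) on 220 Hz repeats every ~9 ms -> consonant;
# a 16:15 semitone repeats every ~68 ms -> dissonant under this criterion.
print(is_consonant(220.0, 3, 2))    # True
print(is_consonant(220.0, 16, 15))  # False
```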
Different Timescales for the Neural Coding of Consonant and Vowel Sounds
Perez, Claudia A.; Engineer, Crystal T.; Jakkamsetti, Vikram; Carraway, Ryan S.; Perry, Matthew S.
2013-01-01
Psychophysical, clinical, and imaging evidence suggests that consonant and vowel sounds have distinct neural representations. This study tests the hypothesis that consonant and vowel sounds are represented on different timescales within the same population of neurons by comparing behavioral discrimination with neural discrimination based on activity recorded in rat inferior colliculus and primary auditory cortex. Performance on 9 vowel discrimination tasks was highly correlated with neural discrimination based on spike count and was not correlated when spike timing was preserved. In contrast, performance on 11 consonant discrimination tasks was highly correlated with neural discrimination when spike timing was preserved and not when spike timing was eliminated. These results suggest that in the early stages of auditory processing, spike count encodes vowel sounds and spike timing encodes consonant sounds. These distinct coding strategies likely contribute to the robust nature of speech sound representations and may help explain some aspects of developmental and acquired speech processing disorders. PMID:22426334
Bernhardt, B May; Hanson, R; Perez, D; Ávila, C; Lleó, C; Stemberger, J P; Carballo, G; Mendoza, E; Fresneda, D; Chávez-Peón, M
2015-01-01
Research on children's word structure development is limited. Yet, phonological intervention aims to accelerate the acquisition of both speech-sounds and word structure, such as word length, stress or shapes in CV sequences. Until normative studies and meta-analyses provide in-depth information on this topic, smaller investigations can provide initial benchmarks for clinical purposes. To provide preliminary reference data for word structure development in a variety of Spanish with highly restricted coda use: Granada Spanish (similar to many Hispano-American varieties). To be clinically applicable, such data would need to show differences by age, developmental typicality and word structure complexity. Thus, older typically developing (TD) children were expected to show higher accuracy than younger children and those with protracted phonological development (PPD). Complex or phonologically marked forms (e.g. multisyllabic words, clusters) were expected to be late developing. Participants were 59 children aged 3-5 years in Granada, Spain: 30 TD children, and 29 with PPD and no additional language impairments. Single words were digitally recorded by a native Spanish speaker using a 103-word list and transcribed by native Spanish speakers, with confirmation by a second transcriber team and acoustic analysis. The program Phon 1.5 provided quantitative data. In accordance with expectations, the TD and older age groups had better-established word structures than the younger children and those with PPD. Complexity was also relevant: more structural mismatches occurred in multisyllabic words, initial unstressed syllables and clusters. Heterosyllabic consonant sequences were more accurate than syllable-initial sequences. The most common structural mismatch pattern overall was consonant deletion, with syllable deletion most common in 3-year-olds and children with PPD. The current study provides preliminary reference data for word structure development in a Spanish variety with restricted coda use, both by age and types of word structures. Between ages 3 and 5 years, global measures (whole word match, word shape match) distinguished children with typical versus protracted phonological development. By age 4, children with typical development showed near-mastery of word structures, whereas 4- and 5-year-olds with PPD continued to show syllable deletion and cluster reduction, especially in multisyllabic words. The results underline the relevance of multisyllabic words and words with clusters in Spanish phonological assessment and the utility of word structure data for identification of protracted phonological development. © 2014 Royal College of Speech and Language Therapists.
Comesaña, Montserrat; Soares, Ana P; Marcet, Ana; Perea, Manuel
2016-11-01
In skilled adult readers, transposed-letter effects (jugde-JUDGE) are greater for consonant than for vowel transpositions. These differences are often attributed to phonological rather than orthographic processing. To examine this issue, we employed a scenario in which phonological involvement varies as a function of reading experience: A masked priming lexical decision task with 50-ms primes in adult and developing readers. Indeed, masked phonological priming at this prime duration has been consistently reported in adults, but not in developing readers (Davis, Castles, & Iakovidis, 1998). Thus, if consonant/vowel asymmetries in letter position coding with adults are due to phonological influences, transposed-letter priming should occur for both consonant and vowel transpositions in developing readers. Results with adults (Experiment 1) replicated the usual consonant/vowel asymmetry in transposed-letter priming. In contrast, no signs of an asymmetry were found with developing readers (Experiments 2-3). However, Experiments 1-3 did not directly test the existence of phonological involvement. To study this question, Experiment 4 manipulated the phonological prime-target relationship in developing readers. As expected, we found no signs of masked phonological priming. Thus, the present data favour an interpretation of the consonant/vowel dissociation in letter position coding as due to phonological rather than orthographic processing. © 2016 The British Psychological Society.
Chen, Fei; Loizou, Philipos C.
2012-01-01
Recent evidence suggests that spectral change, as measured by cochlea-scaled entropy (CSE), predicts speech intelligibility better than the information carried by vowels or consonants in sentences. Motivated by this finding, the present study investigates whether intelligibility indices implemented to include segments marked with significant spectral change better predict speech intelligibility in noise than measures that include all phonetic segments paying no attention to vowels/consonants or spectral change. The prediction of two intelligibility measures [normalized covariance measure (NCM), coherence-based speech intelligibility index (CSII)] is investigated using three sentence-segmentation methods: relative root-mean-square (RMS) levels, CSE, and traditional phonetic segmentation of obstruents and sonorants. While the CSE method makes no distinction between spectral changes occurring within vowels/consonants, the RMS-level segmentation method places more emphasis on the vowel-consonant boundaries wherein the spectral change is often most prominent, and perhaps most robust, in the presence of noise. Higher correlation with intelligibility scores was obtained when including sentence segments containing a large number of consonant-vowel boundaries than when including segments with highest entropy or segments based on obstruent/sonorant classification. These data suggest that in the context of intelligibility measures the type of spectral change captured by the measure is important. PMID:22559382
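The relative-RMS segmentation referred to above can be illustrated with a short frame-based sketch. This is a loose interpretation, not the study's exact procedure: the frame length and the level band (0 to -10 dB relative to the loudest frame) are assumptions chosen for illustration.

```python
import numpy as np

def rms_level_mask(x, fs, frame_ms=16.0, lo_db=-10.0, hi_db=0.0):
    """Mark frames whose short-time RMS lies in a band defined
    relative to the loudest frame of the sentence (in dB)."""
    n = int(fs * frame_ms / 1000.0)
    n_frames = len(x) // n
    frames = np.reshape(x[:n_frames * n], (n_frames, n))
    rms = np.sqrt(np.mean(frames ** 2, axis=1) + 1e-12)
    rel_db = 20.0 * np.log10(rms / rms.max())
    return (rel_db >= lo_db) & (rel_db <= hi_db)

# Toy signal: a loud half followed by a quiet half; only the loud
# frames fall inside the 0 to -10 dB band.
fs = 16000
x = np.concatenate([np.random.randn(fs // 2),
                    0.01 * np.random.randn(fs // 2)])
mask = rms_level_mask(x, fs)
print(int(mask.sum()), "of", mask.size, "frames selected")
```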
Bidelman, Gavin M.; Heinz, Michael G.
2011-01-01
Human listeners prefer consonant over dissonant musical intervals and the perceived contrast between these classes is reduced with cochlear hearing loss. Population-level activity of normal and impaired model auditory-nerve (AN) fibers was examined to determine (1) if peripheral auditory neurons exhibit correlates of consonance and dissonance and (2) if the reduced perceptual difference between these qualities observed for hearing-impaired listeners can be explained by impaired AN responses. In addition, acoustical correlates of consonance-dissonance were also explored including periodicity and roughness. Among the chromatic pitch combinations of music, consonant intervals/chords yielded more robust neural pitch-salience magnitudes (determined by harmonicity/periodicity) than dissonant intervals/chords. In addition, AN pitch-salience magnitudes correctly predicted the ordering of hierarchical pitch and chordal sonorities described by Western music theory. Cochlear hearing impairment compressed pitch salience estimates between consonant and dissonant pitch relationships. The reduction in contrast of neural responses following cochlear hearing loss may explain the inability of hearing-impaired listeners to distinguish musical qualia as clearly as normal-hearing individuals. Of the neural and acoustic correlates explored, AN pitch salience was the best predictor of behavioral data. Results ultimately show that basic pitch relationships governing music are already present in initial stages of neural processing at the AN level. PMID:21895089
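The pitch-salience magnitudes here are derived from simulated auditory-nerve responses; as a rough acoustic stand-in (not the authors' AN model), salience can be approximated by the height of the strongest non-zero-lag peak of the waveform's normalized autocorrelation, which indexes periodicity/harmonicity. In this sketch a consonant fifth scores higher than a tritone, as the study's hierarchy would predict.

```python
import numpy as np

def periodicity_salience(x, fs, f_min=50.0, f_max=500.0):
    """Crude pitch salience: maximum of the normalized autocorrelation
    within the lag range corresponding to [f_min, f_max] Hz."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / ac[0]                      # lag 0 normalized to 1
    lo, hi = int(fs / f_max), int(fs / f_min)
    return float(ac[lo:hi].max())

fs, dur = 16000, 0.2
t = np.arange(int(fs * dur)) / fs
fifth = np.sin(2 * np.pi * 220 * t) + np.sin(2 * np.pi * 330 * t)    # 3:2
tritone = np.sin(2 * np.pi * 220 * t) + np.sin(2 * np.pi * 311 * t)  # ~45:32
print(periodicity_salience(fifth, fs) > periodicity_salience(tritone, fs))  # True
```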
Fogerty, Daniel
2014-01-01
The present study investigated the importance of overall segment amplitude and intrinsic segment amplitude modulation of consonants and vowels to sentence intelligibility. Sentences were processed according to three conditions that replaced consonant or vowel segments with noise matched to the long-term average speech spectrum. Segments were replaced with (1) low-level noise that distorted the overall sentence envelope, (2) segment-level noise that restored the overall syllabic amplitude modulation of the sentence, and (3) segment-modulated noise that further restored faster temporal envelope modulations during the vowel. Results from the first experiment demonstrated an incremental benefit with increasing resolution of the vowel temporal envelope. However, amplitude modulations of replaced consonant segments had a comparatively minimal effect on overall sentence intelligibility scores. A second experiment selectively noise-masked preserved vowel segments in order to equate overall performance of consonant-replaced sentences to that of the vowel-replaced sentences. Results demonstrated no significant effect of restoring consonant modulations during the interrupting noise when existing vowel cues were degraded. A third experiment demonstrated greater perceived sentence continuity with the preservation or addition of vowel envelope modulations. Overall, results support previous investigations demonstrating the importance of vowel envelope modulations to the intelligibility of interrupted sentences. PMID:24606291
Plack, Christopher J.
2015-01-01
When two musical notes with simple frequency ratios are played simultaneously, the resulting musical chord is pleasing and evokes a sense of resolution or “consonance”. Complex frequency ratios, on the other hand, evoke feelings of tension or “dissonance”. Consonance and dissonance form the basis of harmony, a central component of Western music. In earlier work, we provided evidence that consonance perception is based on neural temporal coding in the brainstem (Bones et al., 2014). Here, we show that for listeners with clinically normal hearing, aging is associated with a decline in both the perceptual distinction and the distinctiveness of the neural representations of different categories of two-note chords. Compared with younger listeners, older listeners rated consonant chords as less pleasant and dissonant chords as more pleasant. Older listeners also had less distinct neural representations of consonant and dissonant chords as measured using a Neural Consonance Index derived from the electrophysiological “frequency-following response.” The results withstood a control for the effect of age on general affect, suggesting that different mechanisms are responsible for the perceived pleasantness of musical chords and affective voices and that, for listeners with clinically normal hearing, age-related differences in consonance perception are likely to be related to differences in neural temporal coding. PMID:25740534
Koda, Hiroki; Basile, Muriel; Olivier, Marion; Remeuf, Kevin; Nagumo, Sumiharu; Blois-Heulin, Catherine; Lemasson, Alban
2013-08-01
The central position and universality of music in human societies raises the question of its phylogenetic origin. One of the most important properties of music involves harmonic musical intervals, in response to which humans show a spontaneous preference for consonant over dissonant sounds starting from early human infancy. Comparative studies conducted with organisms at different levels of the primate lineage are needed to understand the evolutionary scenario under which this phenomenon emerged. Although previous research found no preference for consonance in a New World monkey species, the question remained open for Old World monkeys. We used an experimental paradigm based on a sensory reinforcement procedure to test auditory preferences for consonant sounds in Campbell's monkeys (Cercopithecus campbelli campbelli), an Old World monkey species. Although a systematic preference for soft (70 dB) over loud (90 dB) control white noise was found, Campbell's monkeys showed no preference for either consonant or dissonant sounds. The preference for soft white noise validates our noninvasive experimental paradigm, which can be easily reused in any captive facility to test for auditory preferences. This would suggest that human preference for consonant sounds is not systematically shared with New and Old World monkeys. The sensitivity for harmonic musical intervals probably emerged very late in the primate lineage.
Hashemi, Nassim; Ghorbani, Ali; Soleymani, Zahra; Kamali, Mohammad; Ahmadi, Zohreh Ziatabar; Mahmoudian, Saeid
2018-07-01
Auditory discrimination of speech sounds is an important perceptual ability and a precursor to the acquisition of language. Auditory information is at least partially necessary for the acquisition and organization of phonological rules. There are few standardized behavioral tests to evaluate phonemic distinctive features in children with or without speech and language disorders. The main objective of the present study was to develop the Persian version of the auditory word discrimination test (P-AWDT) for 4-8-year-old children and to establish its validity and reliability. A total of 120 typical children and 40 children with speech sound disorder (SSD) participated. The test comprised 160 monosyllabic paired words, distributed in Forms A-1 and A-2 for the initial consonants (80 words) and Forms B-1 and B-2 for the final consonants (80 words); the discrimination of vowels was randomly included in all forms. Content validity was calculated, and 50 children repeated the test twice with a two-week interval (test-retest reliability). Further analyses included the intraclass correlation coefficient (ICC), Cronbach's alpha (internal consistency), and comparisons across age groups and gender. The content validity index (CVI) and the test-retest reliability of the P-AWDT were 63%-86% and 81%-96%, respectively, and the total Cronbach's alpha for internal consistency was relatively high (0.93). Comparison of the mean P-AWDT scores of the typical children and the children with SSD revealed a significant difference: the group with SSD showed a greater severity of deficit in auditory word discrimination than the typical group. In addition, the difference between the age groups was statistically significant, especially in 4-4.11-year-old children, while the performance of the two gender groups was essentially the same. The comparison of P-AWDT scores between the typical children and the children with SSD demonstrated differences in auditory phonological discrimination in both initial and final positions. The findings suggest that the P-AWDT meets appropriate validity and reliability criteria; it can be used to measure the distinctive features of phonemes and the auditory discrimination of initial and final consonants and middle vowels of words in 4-8-year-old typical children and children with SSD. Copyright © 2018. Published by Elsevier B.V.
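The Cronbach's alpha reported above (0.93) follows a standard formula: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). Below is a minimal sketch with made-up 0/1 item scores; the function is generic and not tied to the P-AWDT materials.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal consistency for a (participants x items) score matrix."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1.0 - item_var / total_var)

# Invented 0/1 scores for 6 children on 4 paired-word items.
scores = np.array([[1, 1, 1, 1],
                   [1, 1, 0, 1],
                   [0, 0, 0, 0],
                   [1, 1, 1, 0],
                   [0, 0, 1, 0],
                   [1, 0, 1, 1]], dtype=float)
print(round(cronbach_alpha(scores), 3))  # 0.667 on this toy data
```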
Kim, Kwang S; Max, Ludo
2014-01-01
To estimate the contributions of feedforward vs. feedback control systems in speech articulation, we analyzed the correspondence between initial and final kinematics in unperturbed tongue and jaw movements for consonant-vowel (CV) and vowel-consonant (VC) syllables. If movement extents and endpoints are highly predictable from early kinematic information, then the movements were most likely completed without substantial online corrections (feedforward control); if the correspondence between early kinematics and final amplitude or position is low, online adjustments may have altered the planned trajectory (feedback control) (Messier and Kalaska, 1999). Five adult speakers produced CV and VC syllables with high, mid, or low vowels while movements of the tongue and jaw were tracked electromagnetically. The correspondence between the kinematic parameters peak acceleration or peak velocity and movement extent as well as between the articulators' spatial coordinates at those kinematic landmarks and movement endpoint was examined both for movements across different target distances (i.e., across vowel height) and within target distances (i.e., within vowel height). Taken together, results suggest that jaw and tongue movements for these CV and VC syllables are mostly under feedforward control but with feedback-based contributions. One type of feedback-driven compensatory adjustment appears to regulate movement duration based on variation in peak acceleration. Results from a statistical model based on multiple regression are presented to illustrate how the relative strength of these feedback contributions can be estimated.
Getting the beat: entrainment of brain activity by musical rhythm and pleasantness.
Trost, Wiebke; Frühholz, Sascha; Schön, Daniele; Labbé, Carolina; Pichon, Swann; Grandjean, Didier; Vuilleumier, Patrik
2014-12-01
Rhythmic entrainment is an important component of emotion induction by music, but brain circuits recruited during spontaneous entrainment of attention by music and the influence of the subjective emotional feelings evoked by music remain still largely unresolved. In this study we used fMRI to test whether the metric structure of music entrains brain activity and how music pleasantness influences such entrainment. Participants listened to piano music while performing a speeded visuomotor detection task in which targets appeared time-locked to either strong or weak beats. Each musical piece was presented in both a consonant/pleasant and dissonant/unpleasant version. Consonant music facilitated target detection and targets presented synchronously with strong beats were detected faster. FMRI showed increased activation of bilateral caudate nucleus when responding on strong beats, whereas consonance enhanced activity in attentional networks. Meter and consonance selectively interacted in the caudate nucleus, with greater meter effects during dissonant than consonant music. These results reveal that the basal ganglia, involved both in emotion and rhythm processing, critically contribute to rhythmic entrainment of subcortical brain circuits by music. Copyright © 2014 Elsevier Inc. All rights reserved.
Lin, Mengxi; Francis, Alexander L
2014-11-01
Both long-term native language experience and immediate linguistic expectations can affect listeners' use of acoustic information when making a phonetic decision. In this study, a Garner selective attention task was used to investigate differences in attention to consonants and tones by American English-speaking listeners (N = 20) and Mandarin Chinese-speaking listeners hearing speech in either American English (N = 17) or Mandarin Chinese (N = 20). To minimize the effects of lexical differences and differences in the linguistic status of pitch across the two languages, stimuli and response conditions were selected such that all tokens constitute legitimate words in both languages and all responses required listeners to make decisions that were linguistically meaningful in their native language. Results showed that regardless of ambient language, Chinese listeners processed consonant and tone in a combined manner, consistent with previous research. In contrast, English listeners treated tones and consonants as perceptually separable. Results are discussed in terms of the role of sub-phonemic differences in acoustic cues across language, and the linguistic status of consonants and pitch contours in the two languages.
Consonant-recognition patterns and self-assessment of hearing handicap.
Hustedde, C G; Wiley, T L
1991-12-01
Two companion experiments were conducted with normal-hearing subjects and subjects with high-frequency, sensorineural hearing loss. In Experiment 1, the validity of a self-assessment device of hearing handicap was evaluated in two groups of hearing-impaired listeners with significantly different consonant-recognition ability. Data for the Hearing Performance Inventory--Revised (Lamb, Owens, & Schubert, 1983) did not reveal differences in self-perceived handicap for the two groups of hearing-impaired listeners; the inventory was, however, sensitive to perceived differences in hearing abilities between listeners who did and did not have a hearing loss. Experiment 2 was aimed at evaluating the consonant error patterns that accounted for the observed group differences in consonant-recognition ability. Error patterns on the Nonsense-Syllable Test (NST) across the two subject groups differed in both degree and type of error. Listeners in the group with poorer NST performance always demonstrated greater difficulty with selected low-frequency and high-frequency syllables than did listeners in the group with better NST performance. Overall, the NST was sensitive to differences in consonant-recognition ability for normal-hearing and hearing-impaired listeners.
Analysis of Spanish consonant recognition in 8-talker babble.
Moreno-Torres, Ignacio; Otero, Pablo; Luna-Ramírez, Salvador; Garayzábal Heinze, Elena
2017-05-01
This paper presents the results of a closed-set recognition task for 80 Spanish consonant-vowel sounds (16 C × 5 V, spoken by 2 talkers) in 8-talker babble (-6, -2, +2 dB). A ranking of resistance to noise was obtained using the signal detection d' measure, and confusion patterns were analyzed using a graphical method (confusion graphs). The resulting ranking indicated the existence of three resistance groups: (1) high resistance: /ʧ, s, ʝ/; (2) mid resistance: /r, l, m, n/; and (3) low resistance: /t, θ, x, ɡ, b, d, k, f, p/. Confusions involved mostly place of articulation and voicing errors, and occurred especially among consonants in the same resistance group. Three perceptual confusion groups were identified: the three low-energy fricatives (i.e., /f, θ, x/), the six stops (i.e., /p, t, k, b, d, ɡ/), and three consonants with clear formant structure (i.e., /m, n, l/). The factors underlying consonant resistance and confusion patterns are discussed. The results are compared with data from other languages.
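The d' ranking above is standard signal-detection arithmetic: d' = z(hit rate) - z(false-alarm rate). Below is a minimal sketch with invented counts; the 0.5/1.0 correction that keeps rates away from 0 and 1 is a common convention, not necessarily the authors' exact choice.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index: z(hit rate) - z(false-alarm rate)."""
    hr = (hits + 0.5) / (hits + misses + 1.0)                      # corrected rate
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hr) - norm.ppf(far)

# Invented counts for one consonant at a single SNR.
print(round(d_prime(hits=78, misses=22,
                    false_alarms=10, correct_rejections=90), 2))
```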
Speech sound disorders in a community study of preschool children.
McLeod, Sharynne; Harrison, Linda J; McAllister, Lindy; McCormack, Jane
2013-08-01
To undertake a community (nonclinical) study to describe the speech of preschool children who had been identified by parents/teachers as having difficulties "talking and making speech sounds" and compare the speech characteristics of those who had and had not accessed the services of a speech-language pathologist (SLP). Stage 1: Parent/teacher concern regarding the speech skills of 1,097 4- to 5-year-old children attending early childhood centers was documented. Stage 2a: One hundred forty-three children who had been identified with concerns were assessed. Stage 2b: Parents returned questionnaires about service access for 109 children. The majority of the 143 children (86.7%) achieved a standard score below the normal range for the percentage of consonants correct (PCC) on the Diagnostic Evaluation of Articulation and Phonology (Dodd, Hua, Crosbie, Holm, & Ozanne, 2002). Consonants produced incorrectly were consistent with the late-8 phonemes (Shriberg, 1993). Common phonological patterns were fricative simplification (82.5%), cluster simplification (49.0%)/reduction (19.6%), gliding (41.3%), and palatal fronting (15.4%). Interdental lisps on /s/ and /z/ were produced by 39.9% of the children, dentalization of other sibilants by 17.5%, and lateral lisps by 13.3%. Despite parent/teacher concern, only 41/109 children had contact with an SLP. These children were more likely to be unintelligible to strangers, to express distress about their speech, and to have a lower PCC and a smaller consonant inventory compared with the children who had no contact with an SLP. A significant number of preschool-age children with speech sound disorders (SSD) have not had contact with an SLP. These children have mild-severe SSD and would benefit from SLP intervention. Integrated SLP services within early childhood communities would enable earlier identification of SSD and access to intervention to reduce the potential educational and social impacts associated with SSD.
González-García, Nadia; Rendón, Pablo L
2017-05-23
The neural correlates of consonance and dissonance perception have been widely studied, but not the neural correlates of consonance and dissonance production. The most straightforward manner of musical production is singing, but, from an imaging perspective, it still presents more challenges than listening because it involves motor activity. The accurate singing of musical intervals requires integration between auditory feedback processing and vocal motor control in order to correctly produce each note. This protocol presents a method that permits the monitoring of neural activations associated with the vocal production of consonant and dissonant intervals. Four musical intervals, two consonant and two dissonant, are used as stimuli, both for an auditory discrimination test and a task that involves first listening to and then reproducing given intervals. Participants, all female vocal students at the conservatory level, were studied using functional Magnetic Resonance Imaging (fMRI) during the performance of the singing task, with the listening task serving as a control condition. In this manner, the activity of both the motor and auditory systems was observed, and a measure of vocal accuracy during the singing task was also obtained. Thus, the protocol can also be used to track activations associated with singing different types of intervals or with singing the required notes more accurately. The results indicate that singing dissonant intervals requires greater participation of the neural mechanisms responsible for the integration of external feedback from the auditory and sensorimotor systems than does singing consonant intervals.
Impaired Perception of Sensory Consonance and Dissonance in Cochlear Implant Users.
Caldwell, Meredith T; Jiradejvong, Patpong; Limb, Charles J
2016-03-01
In light of previous research demonstrating poor pitch perception in cochlear implant (CI) users, we hypothesized that the presence of consonant versus dissonant chord accompaniment in real-world musical stimuli would not impact subjective assessment of degree of pleasantness in CI users. Consonance/dissonance are perceptual features of harmony resulting from pitch relationships between simultaneously presented musical notes. Generally, consonant sounds are perceived as pleasant and dissonant ones as unpleasant. CI users exhibit impairments in pitch perception, making music listening difficult and often unenjoyable. To our knowledge, consonance/dissonance perception has not been studied in the CI population. Twelve novel melodies were created for this study. By altering the harmonic structures of the accompanying chords, we created three permutations of varying dissonance for each melody (36 stimuli in all). Ten CI users and 12 normal-hearing (NH) listeners provided Likert scale ratings from -5 (very unpleasant) to +5 (very pleasant) for each of the stimuli. A two-way ANOVA showed main effects of Dissonance Level and Subject Type as well as a two-way interaction between the two. Pairwise comparisons indicated that NH pleasantness ratings decreased with increasing dissonance, whereas CI ratings did not. NH pleasantness ratings were consistently lower than CI ratings. For CI users, consonant versus dissonant chord accompaniment had no significant impact on whether a melody was considered pleasant or unpleasant. This finding may be partially responsible for the decreased enjoyment of many CI users during music perception and is another manifestation of impaired pitch perception in CI users.
The perception of syllable affiliation of singleton stops in repetitive speech.
de Jong, Kenneth J; Lim, Byung-Jin; Nagao, Kyoko
2004-01-01
Stetson (1951) noted that repeating singleton coda consonants at fast speech rates causes them to be perceived as onset consonants affiliated with a following vowel. The current study documents the perception of rate-induced resyllabification, as well as the temporal properties that give rise to the perception of syllable affiliation. Stimuli were extracted from a previous study of repeated stop + vowel and vowel + stop syllables (de Jong, 2001a, 2001b). Forced-choice identification tasks show that slow repetitions are clearly distinguished. As speakers increase rate, they reach a point after which listeners disagree as to the affiliation of the stop. This pattern is found for voiced and voiceless consonants using different stimulus extraction techniques. Acoustic models of the identifications indicate that the sudden shift in syllabification occurs with the loss of an acoustic hiatus between successive syllables. Acoustic models of the fast-rate identifications indicate that various other qualities, such as consonant voicing, affect the probability that the consonants will be perceived as onsets. These results indicate a model of syllabic affiliation where specific juncture-marking aspects of the signal dominate parsing, and in their absence other differences provide additional, weaker cues to syllabic affiliation.
An acoustic study of nasal consonants in three Central Australian languages.
Tabain, Marija; Butcher, Andrew; Breen, Gavan; Beare, Richard
2016-02-01
This study presents nasal consonant data from 21 speakers of three Central Australian languages: Arrernte, Pitjantjatjara and Warlpiri. The six nasals considered are bilabial /m/, dental /n̪/, alveolar /n/, retroflex /ɳ/, alveo-palatal /ɲ/, and velar /ŋ/. Nasal formant and bandwidth values are examined, as are the locations of spectral minima. Several differences are found between the bilabial /m/ and the velar /ŋ/, and also the palatal /ɲ/. The remaining coronal nasals /n̪ n ɳ/ are not well differentiated within the nasal murmur, but their average bandwidths are lower than for the other nasal consonants. Broader spectral shape measures (Centre of Gravity and Standard Deviation) are also considered, and comparisons are made with data for stops and laterals in these languages based on the same spectral measures. It is suggested that nasals are not as easily differentiated using the various measures examined here as are stops and laterals. It is also suggested that existing models of nasal consonants do not fully account for the observed differences between the various nasal places of articulation, and that oral formants, in addition to anti-formants, contribute substantially to the output spectrum of nasal consonants.
Relationship between consonant recognition in noise and hearing threshold.
Yoon, Yang-soo; Allen, Jont B; Gooler, David M
2012-04-01
Although poorer understanding of speech in noise by listeners who are hearing-impaired (HI) is known not to be directly related to the audiometric hearing threshold, HT(f), grouping HI listeners by HT(f) is widely practiced. In this article, the relationship between consonant recognition and HT(f) is considered over a range of signal-to-noise ratios (SNRs). Confusion matrices (CMs) from 25 HI ears were generated in response to 16 consonant-vowel syllables presented at 6 different SNRs. Individual differences scaling (INDSCAL) was applied to both feature-based matrices and CMs in order to evaluate the relationship between HT(f) and consonant recognition among HI listeners. The results showed no predictive relationship between the percent error scores (Pe) and HT(f) across SNRs. Multiple regression models showed that HT(f) accounted for 39% of the total variance of the slopes of the Pe. Feature-based INDSCAL analysis showed consistent grouping of listeners across SNRs, but not in terms of HT(f). Nor did CM-based INDSCAL analysis identify a systematic relationship between the measures across SNRs. Thus HT(f) did not account for the majority of the variance in consonant recognition in noise when the complete body of the CM was considered.
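The "39% of the total variance" figure is an R-squared from multiple regression. The sketch below shows that computation on made-up data; the predictor layout (thresholds at three frequencies) and all values are illustrative assumptions, not the study's data.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on predictors X."""
    X1 = np.column_stack([np.ones(len(y)), X])      # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

# Made-up example: thresholds at 3 frequencies (rows = 25 ears)
# predicting each ear's percent-error slope.
rng = np.random.default_rng(0)
X = rng.normal(40, 15, size=(25, 3))             # HT(f) in dB HL (invented)
y = 0.02 * X[:, 0] + rng.normal(0, 0.5, 25)      # Pe slope (invented)
print(round(r_squared(X, y), 2))
```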
Lexical representation of novel L2 contrasts
NASA Astrophysics Data System (ADS)
Hayes-Harb, Rachel; Masuda, Kyoko
2005-04-01
There is much interest among psychologists and linguists in the influence of the native language sound system on the acquisition of second languages (Best, 1995; Flege, 1995). Most studies of second language (L2) speech focus on how learners perceive and produce L2 sounds, but we know of only two that have considered how novel sound contrasts are encoded in learners' lexical representations of L2 words (Pallier et al., 2001; Ota et al., 2002). In this study we investigated how native speakers of English encode Japanese consonant quantity contrasts in their developing Japanese lexicons at different stages of acquisition (Japanese contrasts singleton versus geminate consonants but English does not). Monolingual English speakers, native English speakers learning Japanese for one year, and native speakers of Japanese were taught a set of Japanese nonwords containing singleton and geminate consonants. Subjects then performed memory tasks eliciting perception and production data to determine whether they encoded the Japanese consonant quantity contrast lexically. Overall accuracy in these tasks was a function of Japanese language experience, and acoustic analysis of the production data revealed non-native-like patterns of differentiation of singleton and geminate consonants among the L2 learners of Japanese. Implications for theories of L2 speech are discussed.
Are vowel errors influenced by consonantal context in the speech of persons with aphasia?
NASA Astrophysics Data System (ADS)
Gelfer, Carole E.; Bell-Berti, Fredericka; Boyle, Mary
2004-05-01
The literature suggests that vowels and consonants may be affected differently in the speech of persons with conduction aphasia (CA) or nonfluent aphasia with apraxia of speech (AOS). Persons with CA have shown similar error rates across vowels and consonants, while those with AOS have shown more errors for consonants than vowels. These data have been interpreted to suggest that consonants have greater gestural complexity than vowels. However, recent research [M. Boyle et al., Proc. International Cong. Phon. Sci., 3265-3268 (2003)] does not support this interpretation: persons with AOS and CA both had a high proportion of vowel errors, and vowel errors almost always occurred in the context of consonantal errors. To examine the notion that vowels are inherently less complex than consonants and are differentially affected in different types of aphasia, vowel production in different consonantal contexts for speakers with AOS or CA was examined. The target utterances, produced in carrier phrases, were bVC and bV syllables, allowing us to examine whether vowel production is influenced by consonantal context. Listener judgments were obtained for each token, and error productions were grouped according to the intended utterance and error type. Acoustical measurements were made from spectrographic displays.
Vowel bias in Danish word-learning: processing biases are language-specific.
Højen, Anders; Nazzi, Thierry
2016-01-01
The present study explored whether the phonological bias favoring consonants found in French-learning infants and children when learning new words (Havy & Nazzi, 2009; Nazzi, 2005) is language-general, as proposed by Nespor, Peña and Mehler (2003), or varies across languages, perhaps as a function of the phonological or lexical properties of the language in acquisition. To do so, we used the interactive word-learning task set up by Havy and Nazzi (2009), teaching Danish-learning 20-month-olds pairs of phonetically similar words that contrasted either on one of their consonants or one of their vowels, by either one or two phonological features. Danish was chosen because it has more vowels than consonants, and is characterized by extensive consonant lenition. Both phenomena could disfavor a consonant bias. Evidence of word-learning was found only for vocalic information, irrespective of whether one or two phonological features were changed. The implication of these findings is that the phonological biases found in early lexical processing are not language-general but develop during language acquisition, depending on the phonological or lexical properties of the native language. © 2015 John Wiley & Sons Ltd.
Luo, Hao; Ni, Jing-Tian; Li, Zhi-Hao; Li, Xiao-Ou; Zhang, Da-Ren; Zeng, Fan-Gang; Chen, Lin
2006-01-01
In tonal languages such as Mandarin Chinese, a lexical tone carries semantic information and is preferentially processed in the left brain hemisphere of native speakers as revealed by the functional MRI or positron emission tomography studies, which likely measure the temporally aggregated neural events including those at an attentive stage of auditory processing. Here, we demonstrate that early auditory processing of a lexical tone at a preattentive stage is actually lateralized to the right hemisphere. We frequently presented to native Mandarin Chinese speakers a meaningful auditory word with a consonant-vowel structure and infrequently varied either its lexical tone or initial consonant using an odd-ball paradigm to create a contrast resulting in a change in word meaning. The lexical tone contrast evoked a stronger preattentive response, as revealed by whole-head electric recordings of the mismatch negativity, in the right hemisphere than in the left hemisphere, whereas the consonant contrast produced an opposite pattern. Given the distinct acoustic features between a lexical tone and a consonant, this opposite lateralization pattern suggests the dependence of hemisphere dominance mainly on acoustic cues before speech input is mapped into a semantic representation in the processing stream. PMID:17159136
Influence of consonant frequency on Icelandic-speaking children's speech acquisition.
Másdóttir, Thóra; Stokes, Stephanie F
2016-04-01
A developmental hierarchy of phonetic feature complexity has been proposed, suggesting that later emerging sounds have greater articulatory complexity than those learned earlier. The aim of this research was to explore this hierarchy in a relatively unexplored language, Icelandic. Twenty-eight typically developing Icelandic-speaking children were tested at ages 2;4 and 3;4. Word-initial and word-medial phonemic inventories and a phonemic implicational hierarchy are described. The frequency of occurrence of Icelandic consonants in the speech of the 2;4- and 3;4-year-old children was, from most to least frequent, n, s, t, p, r, m, l, k, f, ʋ, j, ɵ, h, kʰ, c, [Formula: see text], ɰ, pʰ, tʰ, cʰ, ç, [Formula: see text], [Formula: see text], [Formula: see text]. Consonant frequency was a strong predictor of consonant accuracy at age 2;4 (r(23) = -0.75), but the effect was weaker at age 3;4 (r(23) = -0.51). Acquisition of /c/, /[Formula: see text]/ and /l/ occurred earlier, relative to English, Swedish, Dutch and German. A frequency-bound practice effect on emerging consonants is proposed to account for the early emergence of /c/, /[Formula: see text]/ and /l/ in Icelandic.
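The r(23) = -0.75 reported above is a Pearson correlation; below is a minimal sketch of the computation on invented values (the sign here simply mirrors a negative frequency-error relationship; the study's actual data are not reproduced).

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return float((xd @ yd) / np.sqrt((xd @ xd) * (yd @ yd)))

# Invented values: more frequent consonants, fewer production errors.
freq = [120, 95, 80, 60, 40, 25, 10, 5]    # occurrences per sample
errors = [5, 8, 9, 14, 18, 22, 27, 30]     # percent errors
print(round(pearson_r(freq, errors), 2))   # strongly negative
```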
ERIC Educational Resources Information Center
Rochette, Claude; Simard, Claude
A study of the phonetic combination of a constrictive consonant (specifically, [f], [v], and [r]) and a vowel in French using x-ray and oscillograph technology focused on the speed and process of articulation between the consonant and the vowel. The study considered aperture size, nasality, labiality, and accent. Articulation of a total of 407…
Vocal effort modulates the motor planning of short speech structures
NASA Astrophysics Data System (ADS)
Taitz, Alan; Shalom, Diego E.; Trevisan, Marcos A.
2018-05-01
Speech requires programming the sequence of vocal gestures that produce the sounds of words. Here we explored the timing of this program by asking our participants to pronounce, as quickly as possible, a sequence of consonant-consonant-vowel (CCV) structures appearing on screen. We measured the delay between visual presentation and voice onset. In the case of plosive consonants, produced by sharp and well-defined movements of the vocal tract, we found that delays are positively correlated with the duration of the transition between consonants. We then used a battery of statistical tests and mathematical vocal models to show that delays reflect the motor planning of CCVs and that transitions are proxy indicators of the vocal effort needed to produce them. These results support the idea that the effort required to produce the sequence of movements of a vocal gesture modulates the onset of the motor plan.
NASA Astrophysics Data System (ADS)
Pei, Xiaomei; Barbour, Dennis L.; Leuthardt, Eric C.; Schalk, Gerwin
2011-08-01
Several stories in the popular media have speculated that it may be possible to infer from the brain which word a person is speaking or even thinking. While recent studies have demonstrated that brain signals can give detailed information about actual and imagined actions, such as different types of limb movements or spoken words, concrete experimental evidence for the possibility to 'read the mind', i.e. to interpret internally-generated speech, has been scarce. In this study, we found that it is possible to use signals recorded from the surface of the brain (electrocorticography) to discriminate the vowels and consonants embedded in spoken and in imagined words, and we defined the cortical areas that held the most information about discrimination of vowels and consonants. The results shed light on the distinct mechanisms associated with production of vowels and consonants, and could provide the basis for brain-based communication using imagined speech.
Onomatopoeias: a new perspective around space, image schemas and phoneme clusters.
Catricalà, Maria; Guidi, Annarita
2015-09-01
Onomatopoeias (
Consonant and Vowel Processing in Word Form Segmentation: An Infant ERP Study.
Von Holzen, Katie; Nishibayashi, Leo-Lyuki; Nazzi, Thierry
2018-01-31
Segmentation skill and the preferential processing of consonants (C-bias) develop during the second half of the first year of life, and it has been proposed that both facilitate language acquisition. We used event-related brain potentials (ERPs) to investigate the neural bases of early word form segmentation and of the early processing of onset consonants, medial vowels, and coda consonants, exploring how differences in these early skills might relate to later language outcomes. Our results with French-learning eight-month-old infants primarily support previous studies that found that the word familiarity effect in segmentation develops from a positive to a negative polarity at this age. Although as a group infants exhibited an anterior-localized negative effect, inspection of individual results revealed that a majority of infants showed a negative-going response (Negative Responders), while a minority showed a positive-going response (Positive Responders). Furthermore, all infants demonstrated sensitivity to onset consonant mispronunciations, while Negative Responders demonstrated a lack of sensitivity to vowel mispronunciations, a developmental pattern similar to previous literature. Responses to coda consonant mispronunciations revealed neither sensitivity nor lack of sensitivity. We found that infants showing a more mature, negative response to newly segmented words compared to control words (evaluating segmentation skill) and mispronunciations (evaluating phonological processing) at test also showed greater growth in word production over the second year of life than infants showing a more positive response. These results establish a relationship between early segmentation skills and phonological processing (not modulated by the type of mispronunciation) and later lexical skills.
Differential Processing of Consonance and Dissonance within the Human Superior Temporal Gyrus.
Foo, Francine; King-Stephens, David; Weber, Peter; Laxer, Kenneth; Parvizi, Josef; Knight, Robert T
2016-01-01
The auditory cortex is well-known to be critical for music perception, including the perception of consonance and dissonance. Studies on the neural correlates of consonance and dissonance perception have largely employed non-invasive electrophysiological and functional imaging techniques in humans as well as neurophysiological recordings in animals, but the fine-grained spatiotemporal dynamics within the human auditory cortex remain unknown. We recorded electrocorticographic (ECoG) signals directly from the lateral surface of either the left or right temporal lobe of eight patients undergoing neurosurgical treatment as they passively listened to highly consonant and highly dissonant musical chords. We assessed ECoG activity in the high gamma (γhigh, 70-150 Hz) frequency range within the superior temporal gyrus (STG) and observed two types of cortical sites of interest in both hemispheres: one type showed no significant difference in γhigh activity between consonant and dissonant chords, and another type showed increased γhigh responses to dissonant chords between 75 and 200 ms post-stimulus onset. Furthermore, a subset of these sites exhibited additional sensitivity towards different types of dissonant chords, and a positive correlation between changes in γhigh power and the degree of stimulus roughness was observed in both hemispheres. We also observed a distinct spatial organization of cortical sites in the right STG, with dissonant-sensitive sites located anterior to non-sensitive sites. In sum, these findings demonstrate differential processing of consonance and dissonance in bilateral STG with the right hemisphere exhibiting robust and spatially organized sensitivity toward dissonance.
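The high-gamma measure used in this and similar ECoG studies is commonly computed by band-pass filtering each channel to 70-150 Hz and taking the magnitude of the analytic signal. A generic sketch of that step with SciPy follows; the filter order and sampling rate are assumptions for illustration, not the authors' exact pipeline:

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def high_gamma_envelope(x, sr, lo=70.0, hi=150.0, order=4):
        """Band-pass x to [lo, hi] Hz and return its Hilbert envelope."""
        b, a = butter(order, [lo / (sr / 2), hi / (sr / 2)], btype="band")
        return np.abs(hilbert(filtfilt(b, a, x)))

    sr = 1000.0                       # assumed ECoG sampling rate (Hz)
    t = np.arange(0, 1.0, 1 / sr)
    x = np.sin(2 * np.pi * 100 * t)   # toy 100 Hz signal, inside the band
    envelope = high_gamma_envelope(x, sr)

Power changes are then usually expressed per trial relative to a pre-stimulus baseline before comparing chord types.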
Dressler, William W; Balieiro, Mauro C; Ribeiro, Rosane P; Dos Santos, José Ernesto
2009-01-01
In this study in urban Brazil we examine, as a predictor of depressive symptoms, the interaction between a single nucleotide polymorphism in the 2A receptor in the serotonin system (-1438G/A) and cultural consonance in family life, a measure of the degree to which an individual perceives her family as corresponding to a widely shared cultural model of the prototypical family. A community sample of 144 adults was followed over a 2-year period. Cultural consonance in family life was assessed by linking individuals' perceptions of their own families with a shared cultural model of the family derived from cultural consensus analysis. The -1438G/A polymorphism in the 2A serotonin receptor was genotyped using a standard protocol for DNA extracted from leukocytes. Covariates included age, sex, socioeconomic status, and stressful life events. Cultural consonance in family life was prospectively associated with depressive symptoms. In addition, the interaction between genotype and cultural consonance in family life was significant. For individuals with the A/A variant of the -1438G/A polymorphism of the 2A receptor gene, the effect of cultural consonance in family life on depressive symptoms over the 2-year period was larger (β = -0.533, P < 0.01) than the effects for individuals with either the G/A (β = -0.280, P < 0.10) or G/G (β = -0.272, P < 0.05) variants. These results are consistent with a process in which genotype moderates the effects of culturally meaningful social experience on depressive symptoms.
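The moderation result corresponds to a regression of symptoms on consonance, genotype, and their product term, with genotype-specific slopes recovered from the coefficients. A minimal hypothetical sketch with statsmodels follows; the variable names and simulated data are invented for illustration, and the published model also includes the covariates listed above:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 144
    df = pd.DataFrame({
        "consonance": rng.normal(size=n),               # standardized score
        "genotype": rng.choice(["GG", "GA", "AA"], n),  # -1438G/A variants
        "symptoms": rng.normal(size=n),                 # follow-up symptoms
    })

    # Gene x environment interaction: the slope of consonance on symptoms
    # is allowed to differ by genotype via the product term.
    model = smf.ols("symptoms ~ consonance * C(genotype)", data=df).fit()
    print(model.summary().tables[1])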
Are consonant intervals music to their ears? Spontaneous acoustic preferences in a nonhuman primate.
McDermott, Josh; Hauser, Marc
2004-12-01
Humans find some sounds more pleasing than others; such preferences may underlie our enjoyment of music. To gain insight into the evolutionary origins of these preferences, we explored whether they are present in other animals. We designed a novel method to measure the spontaneous sound preferences of cotton-top tamarins, a species that has been extensively tested for other perceptual abilities. Animals were placed in a V-shaped maze, and their position within the maze controlled their auditory environment. One sound was played when they were in one branch of the maze, and a different sound when they were in the other branch; no food was delivered during testing. We used the proportion of time spent in each branch as a measure of preference. The first two experiments were designed as tests of our method. In Experiment 1, we used loud and soft white noise as stimuli; all animals spent most of their time on the side with soft noise. In Experiment 2, tamarins spent more time on the side playing species-specific feeding chirps than on the side playing species-specific distress calls. Together, these two experiments suggest that the method is effective, providing a spontaneous measure of preference. In Experiment 3, however, subjects showed no preference for consonant over dissonant intervals. Finally, tamarins showed no preference in Experiment 4 for a screeching sound (comparable to fingernails on a blackboard) over amplitude-matched white noise. In contrast, humans showed clear preferences for the consonant intervals of Experiment 3 and the white noise of Experiment 4 using the same stimuli and a similar method. We conclude that tamarins' preferences differ qualitatively from those of humans. The preferences that support our capacity for music may, therefore, be unique among the primates, and could be music-specific adaptations.
Hazan, Valerie; Messaoud-Galusi, Souhila; Rosen, Stuart
2013-01-01
Purpose: To determine whether children with dyslexia (DYS) are more affected than age-matched average readers (AR) by talker and intonation variability when perceiving speech in noise. Method: Thirty-four DYS and 25 AR children were tested on their perception of consonants in naturally produced consonant-vowel (CV) tokens in multi-talker babble. Twelve CVs were presented for identification in four conditions varying in the degree of talker and intonation variability. Consonant place (/bi/-/di/) and voicing (/bi/-/pi/) discrimination was investigated with the same conditions. Results: DYS children made slightly more identification errors than AR children, but only in conditions with variable intonation. Errors were more frequent for a subset of consonants, generally weakly encoded for AR children, for tokens with intonation patterns (steady and rise-fall) that occur infrequently in connected discourse. In discrimination tasks, which have a greater memory and cognitive load, DYS children scored lower than AR children across all conditions. Conclusions: Unusual intonation patterns had a disproportionate (but small) effect on consonant intelligibility in noise for DYS children, but adding talker variability did not. DYS children do not appear to have a general problem in perceiving speech in degraded conditions, which makes it unlikely that they lack robust phonological representations. PMID:22761322
Fritz, Thomas Hans; Renders, Wiske; Müller, Karsten; Schmude, Paul; Leman, Marc; Turner, Robert; Villringer, Arno
2013-10-01
Helmholtz himself speculated about a role of the cochlea in the perception of musical dissonance. Here we indirectly investigated this issue, assessing the valence judgment of musical stimuli with variable consonance/dissonance, presented either diotically (exactly the same dissonant signal was presented to both ears) or dichotically (a consonant signal was presented to each ear; both consonant signals were rhythmically identical but differed by a semitone in pitch). Differences in brain organisation underlying inter-subject differences in the percept of dichotically presented dissonance were determined with voxel-based morphometry. Behavioral results showed that diotic dissonant stimuli were perceived as more unpleasant than dichotically presented dissonance, indicating that interactions within the cochlea modulated the valence percept during dissonance. However, the behavioral data also suggested that the dissonance percept did not depend crucially on the cochlea, but also occurred as a result of binaural integration when listening to dichotic dissonance. These results also showed substantial between-participant variation in the valence response to dichotic dissonance. In a voxel-based morphometry analysis, these differences were related to differences in gray matter density in the inferior colliculus, which strongly substantiates a key role of the inferior colliculus in consonance/dissonance representation in humans.
Nonhomogeneous transfer reveals specificity in speech motor learning.
Rochet-Capellan, Amélie; Richer, Lara; Ostry, David J
2012-03-01
Does motor learning generalize to new situations that are not experienced during training, or is motor learning essentially specific to the training situation? In the present experiments, we use speech production as a model to investigate generalization in motor learning. We tested for generalization from training to transfer utterances by varying the acoustical similarity between these two sets of utterances. During the training phase of the experiment, subjects received auditory feedback that was altered in real time as they repeated a single consonant-vowel-consonant utterance. Different groups of subjects were trained with different consonant-vowel-consonant utterances, which differed from a subsequent transfer utterance in terms of the initial consonant or vowel. During the adaptation phase of the experiment, we observed that subjects in all groups progressively changed their speech output to compensate for the perturbation (altered auditory feedback). After learning, we tested for generalization by having all subjects produce the same single transfer utterance while receiving unaltered auditory feedback. We observed limited transfer of learning, which depended on the acoustical similarity between the training and the transfer utterances. The gradients of generalization observed here are comparable to those observed in limb movement. The present findings are consistent with the conclusion that speech learning remains specific to individual instances of learning.
The role of tone and segmental information in visual-word recognition in Thai.
Winskel, Heather; Ratitamkul, Theeraporn; Charoensit, Akira
2017-07-01
Tone languages represent a large proportion of the spoken languages of the world, and yet lexical tone is understudied. Thai offers a unique opportunity to investigate the role of lexical tone processing during visual-word recognition, as tone is explicitly expressed in its script. We used colour words and their orthographic neighbours as stimuli to investigate facilitation (Experiment 1) and interference (Experiment 2) Stroop effects. Five experimental conditions were created: (a) the colour word (e.g., ขาว /kʰã:w/ [white]), (b) tone different word (e.g., ข่าว /kʰà:w/ [news]), (c) initial consonant phonologically same word (e.g., คาว /kʰa:w/ [fishy]), where the initial consonant of the word was phonologically the same but orthographically different, (d) initial consonant different, tone same word (e.g., หาว /hã:w/ [yawn]), where the initial consonant was orthographically different but the tone of the word was the same, and (e) initial consonant different, tone different word (e.g., กาว /ka:w/ [glue]), where the initial consonant was orthographically different and the tone was different. In order to examine whether tone information per se had a facilitative effect, we also included in Experiment 2 a colour congruent word condition in which the segmental (S) information was different but the tone (T) matched the colour word (S-T+). Facilitation/interference effects were found for all five conditions when compared with a neutral control word. Results of the critical comparisons revealed that tone information comes into play at a later stage in lexical processing, and that orthographic information contributes more than phonological information.
Stippekohl, Bastian; Winkler, Markus H; Walter, Bertram; Kagerer, Sabine; Mucha, Ronald F; Pauli, Paul; Vaitl, Dieter; Stark, Rudolf
2012-01-01
An important feature of addiction is the high drug craving that may promote the continuation of consumption. Environmental stimuli classically conditioned to drug-intake have a strong motivational power for addicts and can elicit craving. However, addicts differ in the attitudes towards their own consumption behavior: some are content with drug taking (consonant users) whereas others are discontent (dissonant users). Such differences may be important for clinical practice because the experience of dissonance might enhance the likelihood to consider treatment. This fMRI study investigated in smokers whether these different attitudes influence subjective and neural responses to smoking stimuli. Based on self-characterization, smokers were divided into consonant and dissonant smokers. These two groups were presented with smoking stimuli and neutral stimuli. Former studies have suggested differences in the impact of smoking stimuli depending on the temporal stage of the smoking ritual they are associated with. Therefore, we used stimuli associated with the beginning (BEGIN-smoking-stimuli) and stimuli associated with the terminal stage (END-smoking-stimuli) of the smoking ritual as distinct stimulus categories. Stimulus ratings did not differ between both groups. Brain data showed that BEGIN-smoking-stimuli led to enhanced mesolimbic responses (amygdala, hippocampus, insula) in dissonant compared to consonant smokers. In response to END-smoking-stimuli, dissonant smokers showed reduced mesocortical responses (orbitofrontal cortex, subcallosal cortex) compared to consonant smokers. These results suggest that smoking stimuli with a high incentive value (BEGIN-smoking-stimuli) are more appetitive for dissonant than consonant smokers, at least on the neural level. On the contrary, smoking stimuli with low incentive value (END-smoking-stimuli) seem to be less appetitive for dissonant smokers than consonant smokers. These differences might be one reason why dissonant smokers experience difficulties in translating their attitudes into an actual behavior change.
Theoretical Aspects of Speech Production.
ERIC Educational Resources Information Center
Stevens, Kenneth N.
1992-01-01
This paper on speech production in children and youth with hearing impairments summarizes theoretical aspects, including the speech production process, sound sources in the vocal tract, vowel production, and consonant production. Examples of spectra for several classes of vowel and consonant sounds in simple syllables are given. (DB)
Hisagi, Miwako; Shafer, Valerie L.; Strange, Winifred; Sussman, Elyse S.
2015-01-01
This study examined automaticity of discrimination of a Japanese length contrast for consonants (miʃi vs. miʃʃi) in native (Japanese) and non-native (American-English) listeners using behavioral measures and the event-related potential (ERP) mismatch negativity (MMN). Attention to the auditory input was manipulated either away from the auditory input via a visual oddball task (Visual Attend), or to the input by asking the listeners to count auditory deviants (Auditory Attend). Results showed a larger MMN when attention was focused on the consonant contrast than away from it for both groups. The MMN was larger for consonant duration increments than decrements. No difference in MMN between the language groups was observed, but the Japanese listeners did show better behavioral discrimination than the American English listeners. In addition, behavioral responses showed a weak but significant correlation with MMN amplitude. These findings suggest that both acoustic-phonetic properties and phonological experience affect automaticity of speech processing. PMID:26119918
Presentation of words to separate hemispheres prevents interword illusory conjunctions.
Liederman, J; Sohn, Y S
1999-03-01
We tested the hypothesis that division of inputs between the hemispheres could prevent interword letter migrations in the form of illusory conjunctions. The task was to decide whether a centrally presented consonant-vowel-consonant (CVC) target word matched one of four CVC words presented to a single hemisphere or divided between the hemispheres in a subsequent test display. During half of the target-absent trials, known as conjunction trials, letters from two separate words (e.g., "tag" and "cop") in the test display could be mistaken for a target word (e.g., "top"). For the other half of the target-absent trials, the test display did not match any target consonants (Experiment 1, N = 16) or it matched one target consonant (Experiment 2, N = 29), the latter constituting true "feature" trials. Bi- as compared to unihemispheric presentation significantly reduced the number of conjunction, but not feature, errors. Illusory conjunctions did not occur when the words were presented to separate hemispheres.
Visual Influences on Perception of Speech and Nonspeech Vocal-Tract Events
Brancazio, Lawrence; Best, Catherine T.; Fowler, Carol A.
2009-01-01
We report four experiments designed to determine whether visual information affects judgments of acoustically-specified nonspeech events as well as speech events (the “McGurk effect”). Previous findings have shown only weak McGurk effects for nonspeech stimuli, whereas strong effects are found for consonants. We used click sounds that serve as consonants in some African languages, but that are perceived as nonspeech by American English listeners. We found a significant McGurk effect for clicks presented in isolation that was much smaller than that found for stop-consonant-vowel syllables. In subsequent experiments, we found strong McGurk effects, comparable to those found for English syllables, for click-vowel syllables, and weak effects, comparable to those found for isolated clicks, for excised release bursts of stop consonants presented in isolation. We interpret these findings as evidence that the potential contributions of speech-specific processes on the McGurk effect are limited, and discuss the results in relation to current explanations for the McGurk effect. PMID:16922061
The influence of the level of formants on the perception of synthetic vowel sounds
NASA Astrophysics Data System (ADS)
Kubzdela, Henryk; Owsianny, Mariuz
A computer model of a generator of periodic complex sounds simulating consonants was developed. The system makes possible independent regulation of the level of each formant and instant generation of the sound; a trapezoid approximates the spectral curve within the range of each formant. Using this model, each person in a group of six listeners experimentally selected synthesis parameters for six sounds that seemed to him optimal approximations of Polish consonants. From these, another six sounds were selected that were identified by a majority of the six persons and several additional listeners as best qualified to serve as prototypes of Polish consonants. These prototypes were then used to randomly create sounds with various combinations of second- and third-formant levels, and these were presented to seven listeners for identification. The results of the identifications are presented in tabular form in three variants and are described from the point of view of the requirements of automatic recognition of consonants in continuous speech.
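The trapezoidal spectral envelope described here is easy to reproduce: each harmonic of the fundamental is weighted by a piecewise-linear (trapezoid) function centred on a formant band, and the weighted sinusoids are summed. The sketch below is a toy reconstruction of that idea, with all frequencies, bandwidths, and skirt slopes invented for illustration:

    import numpy as np

    def trapezoid(f, lo, hi, skirt=200.0):
        """Trapezoidal formant weight: 1 inside [lo, hi] Hz, linear skirts
        of width `skirt` Hz on either side, 0 beyond."""
        rise = (f - (lo - skirt)) / skirt
        fall = ((hi + skirt) - f) / skirt
        return np.clip(np.minimum(rise, fall), 0.0, 1.0)

    sr, f0, dur = 16000, 120.0, 0.5          # sample rate, pitch, seconds
    t = np.arange(0, dur, 1 / sr)
    freqs = np.arange(1, int((sr / 2) / f0)) * f0   # harmonic frequencies

    # Two invented formant bands with independently adjustable levels.
    env = 1.0 * trapezoid(freqs, 500, 700) + 0.5 * trapezoid(freqs, 1400, 1700)
    sound = sum(a * np.sin(2 * np.pi * f * t)
                for a, f in zip(env, freqs) if a > 0)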
Beautemps, D; Badin, P; Bailly, G
2001-05-01
The following contribution addresses several issues concerning speech degrees of freedom in French oral vowels, stop consonants, and fricative consonants, based on an analysis of tongue and lip shapes extracted from cineradio- and labio-films. The midsagittal tongue shapes were submitted to a linear decomposition in which some of the loading factors, such as jaw and larynx position, were specified directly, while four other components were derived from principal component analysis (PCA). For the lips, in addition to the more traditional protrusion and opening components, a supplementary component was extracted to explain the upward movement of both the upper and lower lips in [v] production. A linear articulatory model was developed; the six tongue degrees of freedom were used as the articulatory control parameters of the midsagittal tongue contours and explained 96% of the tongue data variance. These control parameters were also used to specify the frontal lip width dimension derived from the labio-film front views. Finally, this model was complemented by a conversion model going from the midsagittal contour to the area function, based on a fitting of the midsagittal distances and the formant frequencies for both vowels and consonants.
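The PCA step in such articulatory models treats each film frame as a flattened vector of contour coordinates and asks how few orthogonal components reproduce the observed shape variance. A generic sketch with scikit-learn follows; the data here are random placeholders, so the printed variance figure will be far below the 96% obtained from real, highly correlated tongue contours:

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(2)
    # Hypothetical data: 400 frames x 50 coordinates (25 x,y contour points).
    contours = rng.normal(size=(400, 50))

    pca = PCA(n_components=6)            # six degrees of freedom, as above
    scores = pca.fit_transform(contours) # per-frame control parameters
    print("variance explained:", pca.explained_variance_ratio_.sum())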
The Unexpected Side-Effects of Dissonance
ERIC Educational Resources Information Center
Bodner, Ehud; Gilboa, Avi; Amir, Dorit
2007-01-01
The effects of dissonant and consonant music on cognitive performance were examined. Situational dissonance and consonance were also tested, defined as the states in which one's opinion contrasts or matches with the majority's opinion, respectively. Subjects performed several cognitive tasks while listening to a melody arranged dissonantly,…
[Velopharyngeal closure pattern and speech performance among submucous cleft palate patients].
Heng, Yin; Chunli, Guo; Bing, Shi; Yang, Li; Jingtao, Li
2017-06-01
To characterize the velopharyngeal closure patterns and speech performance of submucous cleft palate patients, patients visiting the Department of Cleft Lip and Palate Surgery, West China Hospital of Stomatology, Sichuan University between 2008 and 2016 were reviewed. Outcomes of subjective speech evaluation (velopharyngeal function and consonant articulation) and objective nasopharyngeal endoscopy (mobility of the soft palate and pharyngeal walls) were retrospectively analyzed. A total of 353 cases were retrieved, among which 138 (39.09%) demonstrated velopharyngeal competence, 176 (49.86%) velopharyngeal incompetence, and 39 (11.05%) marginal velopharyngeal incompetence. A total of 268 cases underwent nasopharyngeal endoscopy, of whom 167 (62.31%) demonstrated a circular closure pattern, 89 (33.21%) a coronal pattern, and 12 (4.48%) a sagittal pattern. Passavant's ridge was present in 45.51% (76/167) of patients with circular closure and 13.48% (12/89) of patients with coronal closure. Among the 353 patients included in this study, 137 (38.81%) presented normal articulation, 124 (35.13%) consonant elimination, 51 (14.45%) compensatory articulation, 36 (10.20%) consonant weakening, 25 (7.08%) consonant replacement, and 36 (10.20%) multiple articulation errors. Circular closure was the most prevalent velopharyngeal closure pattern among patients with submucous cleft palate, and high-pressure consonant deletion was the most common articulation abnormality. Articulation errors occurred more frequently among patients with a low velopharyngeal closure rate.
Consonance in Information System Projects: A Relationship Marketing Perspective
ERIC Educational Resources Information Center
Lin, Pei-Ying
2010-01-01
Different stakeholders in the information system project usually have different perceptions and expectations of the projects. There is seldom consistency in the stakeholders' evaluations of the project outcome. Thus the outcomes of information system projects are usually disappointing to one or more stakeholders. Consonance is a process that can…
Factors Influencing Consonant Acquisition in Brazilian Portuguese-Speaking Children
ERIC Educational Resources Information Center
Ceron, Marizete Ilha; Gubiani, Marileda Barichello; de Oliveira, Camila Rosa; Keske-Soares, Márcia
2017-01-01
Purpose: We sought to provide valid and reliable data on the acquisition of consonant sounds in speakers of Brazilian Portuguese. Method: The sample comprised 733 typically developing monolingual speakers of Brazilian Portuguese (ages 3;0-8;11 [years;months]). The presence of surface speech error patterns, the revised percentage consonants…
Palatalization and Intrinsic Prosodic Vowel Features in Russian
ERIC Educational Resources Information Center
Ordin, Mikhail
2011-01-01
The presented study is aimed at investigating the interaction of palatalization and intrinsic prosodic features of the vowel in CVC (consonant+vowel+consonant) syllables in Russian. The universal nature of intrinsic prosodic vowel features was confirmed with the data from the Russian language. It was found that palatalization of the consonants…
Relationship between Consonant Recognition in Noise and Hearing Threshold
ERIC Educational Resources Information Center
Yoon, Yang-soo; Allen, Jont B.; Gooler, David M.
2012-01-01
Purpose: Although poorer understanding of speech in noise by listeners who are hearing-impaired (HI) is known not to be directly related to the audiometric hearing threshold HT(f), grouping HI listeners by HT(f) is widely practiced. In this article, the relationship between consonant recognition and HT(f) is…
ERIC Educational Resources Information Center
Bennett, Ruth, Ed.; And Others
This modified alphabet booklet belongs to a series of bilingual instructional materials in Hupa and English. The booklet begins with a Hupa Unifon alphabet chart giving the symbols used to reproduce the most simple version of the sounds in the Hupa language. Nearly 200 basic vocabulary words and phrases are given. A Hupa consonant is followed by…
ERIC Educational Resources Information Center
Vanden Bergh, Bruce G.; And Others
A study was conducted to determine if brand names that begin with consonants called "plosives" (B, C, D, G, K, P, and T) are more readily recalled and recognized than names that begin with other consonants or vowels. Additionally, the study investigated the relationship between name length and memorability, ability to associate names…
Hemispheric Differences in Processing Handwritten Cursive
ERIC Educational Resources Information Center
Hellige, Joseph B.; Adamson, Maheen M.
2007-01-01
Hemispheric asymmetry was examined for native English speakers identifying consonant-vowel-consonant (CVC) non-words presented in standard printed form, in standard handwritten cursive form or in handwritten cursive with the letters separated by small gaps. For all three conditions, fewer errors occurred when stimuli were presented to the right…
Variation in /?/ Outcomes in the Speech of U.S
ERIC Educational Resources Information Center
Figueroa, Nicholas James
2017-01-01
This dissertation investigated the speech productions of the implosive -r consonant by U.S.-born Puerto Rican and Dominican Heritage Language Spanish speakers in New York. The following main research questions were addressed: 1) Do heritage language Caribbean Spanish speakers evidence the same variation with the /?/ consonant in the implosive…
Linking working memory and long-term memory: a computational model of the learning of new words.
Jones, Gary; Gobet, Fernand; Pine, Julian M
2007-11-01
The nonword repetition (NWR) test has been shown to be a good predictor of children's vocabulary size. NWR performance has been explained using phonological working memory, which is seen as a critical component in the learning of new words. However, no detailed specification of the link between phonological working memory and long-term memory (LTM) has been proposed. In this paper, we present a computational model of children's vocabulary acquisition (EPAM-VOC) that specifies how phonological working memory and LTM interact. The model learns phoneme sequences, which are stored in LTM and mediate how much information can be held in working memory. The model's behaviour is compared with that of children in a new study of NWR, conducted in order to ensure the same nonword stimuli and methodology across ages. EPAM-VOC shows a pattern of results similar to that of children: performance is better for shorter nonwords and for wordlike nonwords, and performance improves with age. EPAM-VOC also simulates the superior performance for single consonant nonwords over clustered consonant nonwords found in previous NWR studies. EPAM-VOC provides a simple and elegant computational account of some of the key processes involved in the learning of new words: it specifies how phonological working memory and LTM interact; makes testable predictions; and suggests that developmental changes in NWR performance may reflect differences in the amount of information that has been encoded in LTM rather than developmental changes in working memory capacity.
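EPAM-VOC's central mechanism, counting working-memory load in chunks retrieved from long-term memory so that larger stored chunks allow longer nonwords to be repeated, can be caricatured in a few lines. The following toy sketch illustrates the chunking idea only; it is not the published model, and the capacity limit and lexicon are invented:

    def chunks_needed(phonemes, ltm_chunks, capacity=4):
        """Greedily parse a phoneme sequence into the longest chunks known
        in LTM; repetition succeeds if the parse fits in working memory."""
        i, used = 0, 0
        while i < len(phonemes):
            for size in range(len(phonemes) - i, 0, -1):
                if size == 1 or tuple(phonemes[i:i + size]) in ltm_chunks:
                    i += size
                    used += 1
                    break
        return used, used <= capacity

    ltm = {("b", "a"), ("l", "o", "n")}            # chunks learned so far
    print(chunks_needed(list("balon"), ltm))       # (2, True): two chunks

As more and larger sequences are stored, the same nonword costs fewer chunks, which reproduces improvement with age without any change in raw working-memory capacity.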
Cued Dichotic Listening with Right-Handed, Left-Handed, Bilingual and Learning-Disabled Children.
ERIC Educational Resources Information Center
Obrzut, John E.; And Others
This study used cued dichotic listening to investigate differences in language lateralization among right-handed (control), left handed, bilingual, and learning disabled children. Subjects (N=60) ranging in age from 7-13 years were administered a consonant-vowel-consonant dichotic paradigm with three experimental conditions (free recall, directed…
Speech-Language Pathologists' Knowledge of Tongue/Palate Contact for Consonants
ERIC Educational Resources Information Center
McLeod, Sharynne
2011-01-01
Speech-language pathologists (SLPs) rely on knowledge of tongue placement to assess and provide intervention. A total of 175 SLPs who worked with children with speech sound disorders (SSDs) drew coronal diagrams of tongue/palate contact for 24 English consonants. Comparisons were made between their responses and typical English-speaking adults'…
ERIC Educational Resources Information Center
Marcer, D.; And Others
1977-01-01
Compares the rates of forgetting of five-item sequences of acoustically similar and dissimilar consonants and words in the absence of proactive and retroactive interference in order to test whether within sequence similarity rather than stimulus length would have a greater influence on retention. (Author/RK)
Infants' Discrimination of Consonants: Interplay between Word Position and Acoustic Saliency
ERIC Educational Resources Information Center
Archer, Stephanie L.; Zamuner, Tania; Engel, Kathleen; Fais, Laurel; Curtin, Suzanne
2016-01-01
Research has shown that young infants use contrasting acoustic information to distinguish consonants. This has been used to argue that by 12 months, infants have homed in on their native language sound categories. However, this ability seems to be positionally constrained, with contrasts at the beginning of words (onsets) discriminated earlier.…
The Mechanics of Fingerspelling: Analyzing Ethiopian Sign Language
ERIC Educational Resources Information Center
Duarte, Kyle
2010-01-01
Ethiopian Sign Language utilizes a fingerspelling system that represents Amharic orthography. Just as each character of the Amharic abugida encodes a consonant-vowel sound pair, each sign in the Ethiopian Sign Language fingerspelling system uses handshape to encode a base consonant, as well as a combination of timing, placement, and orientation to…
The Effect of Anatomic Factors on Tongue Position Variability during Consonants
ERIC Educational Resources Information Center
Rudy, Krista; Yunusova, Yana
2013-01-01
Purpose: This study sought to investigate the effect of palate morphology and anthropometric measures of the head on positional variability of the tongue during consonants. Method: An electromagnetic tracking system was used to record tongue movements of 21 adults. Each talker produced a series of symmetrical VCV syllables containing one of the…
Vowel and Consonant Lessening: A Study of Articulating Reductions and Their Relations to Genders
ERIC Educational Resources Information Center
Lin, Grace Hui Chin; Chien, Paul Shih Chieh
2011-01-01
Using English as a global communicating tool makes Taiwanese people have to speak in English in diverse international situations. However, consonants and vowels in English are not all effortless for them to articulate. This phonological reduction study explores concepts about phonological (articulating system) approximation. From Taiwanese folks'…
ERIC Educational Resources Information Center
Becker, Frank; Reinvang, Ivar
2007-01-01
This study used the event-related brain potential mismatch negativity (MMN) to investigate preconscious discrimination of harmonically rich tones (differing in duration) and consonant-vowel syllables (differing in the initial consonant) in aphasia. Eighteen Norwegian aphasic patients, examined on average 3 months after brain injury, were compared…
Letter-Sound Reading: Teaching Preschool Children Print-to-Sound Processing
ERIC Educational Resources Information Center
Wolf, Gail Marie
2016-01-01
This intervention study investigated the growth of letter sound reading and growth of consonant-vowel-consonant (CVC) word decoding abilities for a representative sample of 41 US children in preschool settings. Specifically, the study evaluated the effectiveness of a 3-step letter-sound teaching intervention in teaching preschool children to…
Consonant Inventories in the Spontaneous Speech of Young Children: A Bootstrapping Procedure
ERIC Educational Resources Information Center
Van Severen, Lieve; Van Den Berg, Renate; Molemans, Inge; Gillis, Steven
2012-01-01
Consonant inventories are commonly drawn to assess the phonological acquisition of toddlers. However, the spontaneous speech data that are analysed often vary substantially in size and composition. Consequently, comparisons between children and across studies are fundamentally hampered. This study aims to examine the effect of sample size on the…
Psychoacoustic Assessment of Speech Communication Systems. The Diagnostic Discrimination Test.
ERIC Educational Resources Information Center
Grether, Craig Blaine
The present report traces the rationale, development, and experimental evaluation of the Diagnostic Discrimination Test (DDT). The DDT is a three-choice test of the discriminability of consonant phonemes along perceptual/acoustic dimensions within specific vowel contexts. The DDT was created and developed in an attempt to provide a…
Frequency, Gradience, and Variation in Consonant Insertion
ERIC Educational Resources Information Center
An, Young-ran
2010-01-01
This dissertation addresses the extent to which linguistic behavior can be described in terms of the projection of patterns from existing lexical items, through an investigation of Korean reduplication. Korean has a productive pattern of reduplication in which a consonant is inserted in a vowel-initial base, illustrated by forms such as "alok"--"t…
On the role of perception in shaping phonological assimilation rules.
Hura, S L; Lindblom, B; Diehl, R L
1992-01-01
Assimilation of nasals to the place of articulation of following consonants is a common and natural process among the world's languages. Recent phonological theory attributes this naturalness to the postulated geometry of articulatory features and the notion of spreading (McCarthy, 1988). Others view assimilation as a result of perception (Ohala, 1990), or as perceptually tolerated articulatory simplification (Kohler, 1990). Kohler notes that certain consonant classes (such as nasals and stops) are more likely than other classes (such as fricatives) to undergo place assimilation to a following consonant. To explain this pattern, he proposes that assimilation tends not to occur when the members of a consonant class are relatively distinctive perceptually, such that their articulatory reduction would be particularly salient. This explanation, of course, presupposes that the stops and nasals which undergo place assimilation are less distinctive than fricatives, which tend not to assimilate. We report experimental results that confirm Kohler's perceptual assumption: in the context of a following word-initial stop, fricatives were less confusable than nasals or unreleased stops. We conclude, in agreement with Ohala and Kohler, that perceptual factors are likely to shape phonological assimilation rules.
When Less is More: Feedback, Priming, and the Pseudoword Superiority Effect
Massol, Stéphanie; Midgley, Katherine J.; Holcomb, Phillip J.; Grainger, Jonathan
2011-01-01
The present study combined masked priming with electrophysiological recordings to investigate orthographic priming effects with nonword targets. Targets were pronounceable nonwords (e.g., STRENG) or consonant strings (e.g., STRBNG), both of which differed from a real word by a single letter substitution (STRONG). Targets were preceded by related primes that could be the same as the target (e.g., streng – STRENG, strbng – STRBNG) or the real word neighbor of the target (e.g., strong – STRENG, strong – STRBNG). Independently of priming, pronounceable nonwords were associated with larger negativities than consonant strings, starting at 290 ms post-target onset. Overall, priming effects were stronger and more long-lasting with pronounceable nonwords than consonant strings. However, consonant string targets showed an early effect of word neighbor priming in the absence of an effect of repetition priming, whereas pronounceable nonwords showed both repetition and word neighbor priming effects in the same time window. This pattern of priming effects is taken as evidence for feedback from whole-word orthographic representations activated by the prime stimulus that influences bottom-up processing of prelexical representations during target processing. PMID:21354110
Is Attention Shared Between the Ears?
Shiffrin, Richard M.; Pisoni, David B.; Castaneda-Mendez, Kicab
2012-01-01
This study tests the locus of attention during selective listening for speech-like stimuli. Can processing be differentially allocated to the two ears? Two conditions were used. The simultaneous condition involved one of four randomly chosen stop-consonants being presented to one of the ears chosen at random. The sequential condition involved two intervals; in the first S listened to the right ear; in the second S listened to the left ear. One of the four consonants was presented to an attended ear during one of these intervals. Experiment I used no distracting stimuli. Experiment II utilized a distracting consonant not confusable with any of the four target consonants. This distractor was always presented to any ear not containing a target. In both experiments, simultaneous and sequential performance were essentially identical, despite the need for attention sharing between the two ears during the simultaneous condition. We conclude that selective attention does not occur during perceptual processing of speech sounds presented to the two ears. We suggest that attentive effects arise in short-term memory following processing. PMID:23226838
Stop and Fricative Devoicing in European Portuguese, Italian and German.
Pape, Daniel; Jesus, Luis M T
2015-06-01
This paper describes a cross-linguistic production study of devoicing in European Portuguese (EP), Italian, and German. We recorded all stops and fricatives in four vowel contexts and two word positions, and computed time-varying devoicing patterns throughout the stop and fricative durations. Our results show that, in devoicing behaviour, EP is more similar to German than to Italian. While Italian shows almost no devoicing of phonologically voiced consonants, both EP and German show strong and consistent devoicing through the entire consonant. Consonant position had no effect for EP and Italian but a significant effect for German. The height of the vowel context had an effect for German and EP. For EP, we showed that a more posterior place of articulation and a low vowel context led to significantly more devoicing; however, in contrast to German, we could not find an influence of consonant position on devoicing. The high devoicing of all phonologically voiced stops and fricatives and the influence of vowel context are surprising new results. With respect to voicing maintenance, EP behaves more like German than like other Romance languages.
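A time-varying devoicing measure of this kind can be approximated by sampling a binary voicing decision (e.g., from an F0 tracker) at fixed proportional points through each consonant and averaging across tokens. A sketch under that assumption; the voicing tracks below are invented examples:

    import numpy as np

    def devoicing_profile(voicing_tracks, points=10):
        """voicing_tracks: one boolean array per consonant token
        (True = voiced frame), of varying lengths. Returns the mean
        proportion of devoiced frames at `points` normalized time points."""
        profiles = []
        for v in voicing_tracks:
            idx = np.linspace(0, len(v) - 1, points).round().astype(int)
            profiles.append(~np.asarray(v, dtype=bool)[idx])
        return np.mean(profiles, axis=0)

    tokens = [np.array([1, 1, 0, 0, 0], dtype=bool),   # devoices halfway
              np.array([1, 0, 0, 0, 0, 0, 0], dtype=bool)]
    print(devoicing_profile(tokens, points=5))          # rises toward 1.0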
Bach Is the Father of Harmony: Revealed by a 1/f Fluctuation Analysis across Musical Genres.
Wu, Dan; Kendrick, Keith M; Levitin, Daniel J; Li, Chaoyi; Yao, Dezhong
2015-01-01
Harmony is a fundamental attribute of music. Close connections exist between music and mathematics, since both pursue harmony and unity. In music, the consonance of notes played simultaneously partly determines our perception of harmony, is associated with aesthetic responses, and influences emotional expression. Consonance can thus be considered a window through which to understand and analyze harmony. Here, for the first time, we used a 1/f fluctuation analysis to investigate whether the consonance fluctuation structure in music across a wide range of composers and genres follows the scale-free pattern that has been found for pitch, melody, rhythm, human body movements, brain activity, natural images, and geographical features. We then used a network graph approach to investigate which composers were the most influential both within and across genres. Our results showed that patterns of consonance in music did follow scale-free characteristics, suggesting that this feature is a universally evolved one in both music and the living world. Furthermore, our network analysis revealed that Bach's harmony patterns had the most influence on those used by other composers, followed closely by Mozart's.
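A 1/f fluctuation analysis of this kind reduces to estimating the slope of the power spectral density of the consonance time series on log-log axes: an exponent near -1 indicates scale-free (1/f) structure, near 0 white noise, and near -2 a random walk. A generic sketch follows; the stand-in series is white noise, whereas a real analysis would use per-chord consonance values extracted from the score:

    import numpy as np
    from scipy.signal import welch

    rng = np.random.default_rng(3)
    series = rng.normal(size=4096)        # placeholder consonance series

    f, pxx = welch(series, nperseg=512)   # power spectral density
    mask = f > 0                          # drop the DC bin before the log
    slope, _ = np.polyfit(np.log10(f[mask]), np.log10(pxx[mask]), 1)
    print(f"spectral exponent: {slope:.2f}")   # ~0 here; ~-1 means 1/f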
Crespo-Bojorque, Paola; Toro, Juan M
2015-02-01
Traditionally, physical features of musical chords have been proposed to be at the root of consonance perception. Alternatively, recent studies suggest that different types of experience modulate some perceptual foundations for musical sounds. The present study tested whether the mechanisms involved in the perception of consonance are present in an animal with no extensive experience with harmonic stimuli and a relatively limited vocal repertoire. In Experiment 1, rats were trained to discriminate consonant from dissonant chords and were tested to explore whether they could generalize this discrimination to novel chords. In Experiment 2, we tested whether rats could discriminate between chords differing only in their interval ratios and generalize them across octaves. To contrast the observed pattern of results, human adults were tested with the same stimuli in Experiment 3. Rats successfully discriminated across chords in both experiments, but they did not generalize to novel items in either Experiment 1 or Experiment 2. Humans, on the contrary, not only discriminated between the consonance and dissonance categories and among sets of interval ratios, but also generalized their responses to novel items. These results suggest that experience with harmonic sounds may be required for the construction of categories among stimuli varying in frequency ratios. However, the discriminative capacity observed in rats suggests that at least some components of the auditory processing needed to distinguish chords based on their interval ratios are shared across species.
Development of a Serial Order in Speech Constrained by Articulatory Coordination
Oohashi, Hiroki; Watanabe, Hama; Taga, Gentaro
2013-01-01
Universal linguistic constraints seem to govern the organization of sound sequences in words. However, our understanding of the origin and development of these constraints is incomplete. One possibility is that the development of neuromuscular control of articulators acts as a constraint for the emergence of sequences in words. Repetitions of the same consonant observed in early infancy and an increase in variation of consonantal sequences over months of age have been interpreted as a consequence of the development of neuromuscular control. Yet, it is not clear how sequential coordination of articulators such as lips, tongue apex and tongue dorsum constrains sequences of labial, coronal and dorsal consonants in words over the course of development. We examined longitudinal development of consonant-vowel-consonant(-vowel) sequences produced by Japanese children between 7 and 60 months of age. The sequences were classified according to places of articulation for corresponding consonants. The analyses of individual and group data show that infants prefer repetitive and fronting articulations, as shown in previous studies. Furthermore, we reveal that serial order of different places of articulations within the same organ appears earlier and then gradually develops, whereas serial order of different articulatory organs appears later and then rapidly develops. In the same way, we also analyzed the sequences produced by English children and obtained similar developmental trends. These results suggest that the development of intra- and inter-articulator coordination constrains the acquisition of serial orders in speech with the complexity that characterizes adult language. PMID:24223827
Specht, Karsten; Baumgartner, Florian; Stadler, Jörg; Hugdahl, Kenneth; Pollmann, Stefan
2014-01-01
To differentiate between stop-consonants, the auditory system has to detect subtle place of articulation (PoA) and voice-onset time (VOT) differences between stop-consonants. How this differential processing is represented at the cortical level remains unclear. The present functional magnetic resonance imaging (fMRI) study takes advantage of the superior spatial resolution and high sensitivity of ultra-high-field 7 T MRI. Subjects attentively listened to consonant–vowel (CV) syllables with an alveolar or bilabial stop-consonant and either a short or long VOT. The results showed an overall bilateral activation pattern in the posterior temporal lobe during the processing of the CV syllables. This was, however, modulated most strongly by PoA, such that syllables with an alveolar stop-consonant showed more left-lateralized activation. In addition, analysis of the underlying functional and effective connectivity revealed an inhibitory effect of the left planum temporale (PT) on the right auditory cortex (AC) during the processing of alveolar CV syllables. The connectivity results also indicated a directed information flow from the right to the left AC, and further to the left PT, for all syllables. These results indicate that auditory speech perception relies on an interplay between the left and right ACs, with the left PT as modulator, and that the degree of functional asymmetry is determined by the acoustic properties of the CV syllables. PMID:24966841
Lohmander, A; Willadsen, E; Persson, C; Henningsson, G; Bowden, M; Hutters, B
2009-07-01
Objective: To present the methodology for speech assessment in the Scandcleft project and discuss issues from a pilot study. Design: Description of methodology and a blinded test for speech assessment. Speech samples and instructions for data collection and analysis, allowing comparisons of speech outcomes across the five included languages, were developed and tested. Participants and materials: Randomly selected video recordings of ten 5-year-old children from each language (n = 50) were included in the project. Speech material consisted of test consonants in single words, connected speech, and syllable chains with nasal consonants. Five experienced speech and language pathologists participated as observers. Analyses comprised narrow phonetic transcription of test consonants translated into cleft speech characteristics, ordinal-scale rating of resonance, and perceived velopharyngeal closure (VPC); a velopharyngeal composite score (VPC-sum) was extrapolated from the raw data. Intra-agreement comparisons were performed. Results: Intra-agreement for consonant analysis ranged from 53% to 89%; for hypernasality on high vowels in single words it ranged from 20% to 80%; and agreement between the VPC-sum and the overall rating of VPC was 78%. Conclusions: Pooling data from speakers of different languages in the same trial and comparing speech outcomes across trials seem possible if the assessment concerns consonants and is confined to speech units that are phonetically similar across languages. Agreed conventions and rules are important. A composite variable for perceptual assessment of velopharyngeal function during speech seems usable, whereas the method for hypernasality evaluation requires further testing.
González-García, Nadia; González, Martha A; Rendón, Pablo L
2016-07-15
Relationships between musical pitches are described as either consonant, when associated with a pleasant and harmonious sensation, or dissonant, when associated with an inharmonious feeling. The accurate singing of musical intervals requires communication between auditory feedback processing and vocal motor control (i.e. audio-vocal integration) to ensure that each note is produced correctly. The objective of this study is to investigate the neural mechanisms through which trained musicians produce consonant and dissonant intervals. We utilized 4 musical intervals (specifically, an octave, a major seventh, a fifth, and a tritone) as the main stimuli for auditory discrimination testing, and we used the same interval tasks to assess vocal accuracy in a group of musicians (11 subjects, all female vocal students at conservatory level). The intervals were chosen so as to test for differences in recognition and production of consonant and dissonant intervals, as well as narrow and wide intervals. The subjects were studied using fMRI during performance of the interval tasks; the control condition consisted of passive listening. Singing dissonant intervals as opposed to singing consonant intervals led to an increase in activation in several regions, most notably the primary auditory cortex, the primary somatosensory cortex, the amygdala, the left putamen, and the right insula. Singing wide intervals as opposed to singing narrow intervals resulted in the activation of the right anterior insula. Moreover, we also observed a correlation between singing in tune and brain activity in the premotor cortex, and a positive correlation between training and activation of primary somatosensory cortex, primary motor cortex, and premotor cortex during singing. When singing dissonant intervals, a higher degree of training correlated with the right thalamus and the left putamen. Our results indicate that singing dissonant intervals requires greater involvement of neural mechanisms associated with integrating external feedback from auditory and sensorimotor systems than singing consonant intervals, and it would then seem likely that dissonant intervals are intoned by adjusting the neural mechanisms used for the production of consonant intervals. Singing wide intervals requires a greater degree of control than singing narrow intervals, as it involves neural mechanisms which again involve the integration of internal and external feedback.
An Analysis of the Most Frequently Occurring Words in Spoken American English.
ERIC Educational Resources Information Center
Plant, Geoff
1999-01-01
A study analyzed the frequency of occurrence of consonants, vowels, and diphthongs, the syllabic structure of the words, and the segmental structure of the 311 monosyllabic words among the 500 words that occur most frequently in English. Three manners of articulation accounted for nearly 75 percent of all consonant occurrences: stops, semi-vowels, and nasals.…
Perception of Non-Native Consonant Length Contrast: The Role of Attention in Phonetic Processing
ERIC Educational Resources Information Center
Porretta, Vincent J.; Tucker, Benjamin V.
2015-01-01
The present investigation examines English speakers' ability to identify and discriminate non-native consonant length contrast. Three groups (L1 English No-Instruction, L1 English Instruction, and L1 Finnish control) performed a speeded forced-choice identification task and a speeded AX discrimination task on Finnish non-words (e.g.…
Mismatch Responses to Lexical Tone, Initial Consonant, and Vowel in Mandarin-Speaking Preschoolers
ERIC Educational Resources Information Center
Lee, Chia-Ying; Yen, Huei-ling; Yeh, Pei-wen; Lin, Wan-Hsuan; Cheng, Ying-Ying; Tzeng, Yu-Lin; Wu, Hsin-Chi
2012-01-01
The present study investigates how age, phonological saliency, and deviance size affect the presence of mismatch negativity (MMN) and positive mismatch response (P-MMR). This work measured the auditory mismatch responses to Mandarin lexical tones, initial consonants, and vowels in 4- to 6-year-old preschoolers using the multiple-deviant oddball…
Level 2 Foundation Units. Key Stage 3: National Strategy.
ERIC Educational Resources Information Center
Department for Education and Skills, London (England).
These foundation units are aimed at pupils working within Level 2 entry to Year 7. They are designed to remind pupils what they know and take them forward. The units also will teach phonics knowledge from consonant-vowel-consonant (CVC) words to long vowel phonemes. The writing units focus on developing the following skills: understanding what a…
The Relative Position Priming Effect Depends on Whether Letters Are Vowels or Consonants
ERIC Educational Resources Information Center
Dunabeitia, Jon Andoni; Carreiras, Manuel
2011-01-01
The relative position priming effect is a type of subset priming in which target word recognition is facilitated as a consequence of priming the word with some of its letters, maintaining their relative position (e.g., "csn" as a prime for "casino"). Five experiments were conducted to test whether vowel-only and consonant-only…
The Labial-Coronal Effect Revisited: Japanese Adults Say Pata, but Hear Tapa
ERIC Educational Resources Information Center
Tsuji, Sho; Gomez, Nayeli Gonzalez; Medina, Victoria; Nazzi, Thierry; Mazuka, Reiko
2012-01-01
The labial-coronal effect was originally described as a bias to initiate a word with a labial consonant-vowel-coronal consonant (LC) sequence. This bias has been explained by constraints on the human speech production system, and its perceptual correlates have motivated the suggestion of a perception-production link. However, previous…
Consonants and Vowels: Different Roles in Early Language Acquisition
ERIC Educational Resources Information Center
Hochmann, Jean-Remy; Benavides-Varela, Silvia; Nespor, Marina; Mehler, Jacques
2011-01-01
Language acquisition involves both acquiring a set of words (i.e. the lexicon) and learning the rules that combine them to form sentences (i.e. syntax). Here, we show that consonants are mainly involved in word processing, whereas vowels are favored for extracting and generalizing structural relations. We demonstrate that such a division of labor…
Perceptual Confusions of American-English Vowels and Consonants by Native Arabic Bilinguals
ERIC Educational Resources Information Center
Shafiro, Valeriy; Levy, Erika S.; Khamis-Dakwar, Reem; Kharkhurin, Anatoliy
2013-01-01
This study investigated the perception of American-English (AE) vowels and consonants by young adults who were either (a) early Arabic-English bilinguals whose native language was Arabic or (b) native speakers of the English dialects spoken in the United Arab Emirates (UAE), where both groups were studying. In a closed-set format, participants…
Perception of Consonants in Reverberation and Noise by Adults Fitted with Bimodal Devices
ERIC Educational Resources Information Center
Mason, Michelle; Kokkinakis, Kostas
2014-01-01
Purpose: The purpose of this study was to evaluate the contribution of a contralateral hearing aid to the perception of consonants, in terms of voicing, manner, and place-of-articulation cues in reverberation and noise by adult cochlear implantees aided by bimodal fittings. Method: Eight postlingually deafened adult cochlear implant (CI) listeners…
Learning about Spelling Sequences: The Role of Onsets and Rimes in Analogies in Reading.
ERIC Educational Resources Information Center
Goswami, Usha
1991-01-01
In one experiment, children learned more about consonant blends at the onset than at the end of words. In a second experiment, children learned more about rhyming vowel-consonant blend sequences at the end of words than about those at the beginning of words, where the vowel extended the onset. (BC)
The Effects of Background Noise on Dichotic Listening to Consonant-Vowel Syllables
ERIC Educational Resources Information Center
Sequeira, Sarah Dos Santos; Specht, Karsten; Hamalainen, Heikki; Hugdahl, Kenneth
2008-01-01
Lateralization of verbal processing is frequently studied with the dichotic listening technique, yielding a so called right ear advantage (REA) to consonant-vowel (CV) syllables. However, little is known about how background noise affects the REA. To address this issue, we presented CV-syllables either in silence or with traffic background noise…
Strategies for the Production of Spanish Stop Consonants by Native Speakers of English.
ERIC Educational Resources Information Center
Zampini, Mary L.
A study examined patterns in production of Spanish voiced and voiceless stop consonants by native English speakers, focusing on the interaction between two acoustic cues of stops: voice closure interval and voice onset time (VOT). The study investigated whether learners acquire the appropriate phonetic categories with regard to these stops and if…
Consonant Accuracy after Severe Pediatric Traumatic Brain Injury: A Prospective Cohort Study
ERIC Educational Resources Information Center
Campbell, Thomas F.; Dollaghan, Christine; Janosky, Janine; Rusiewicz, Heather Leavy; Small, Steven L.; Dick, Frederic; Vick, Jennell; Adelson, P. David
2013-01-01
Purpose: The authors sought to describe longitudinal changes in Percentage of Consonants Correct--Revised (PCC-R) after severe pediatric traumatic brain injury (TBI), to compare the odds of normal-range PCC-R in children injured at older and younger ages, and to correlate predictor variables and PCC-R outcomes. Method: In 56 children injured…
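PCC-R itself is a simple ratio: the percentage of target consonants produced correctly, with distortions scored as correct so that only omissions and substitutions count as errors. A minimal Python sketch, assuming pre-aligned target/production consonant labels (the alignment step and the distortion convention are simplifications, not the study's scoring protocol):

def pcc_r(targets, productions):
    # targets/productions: aligned lists of consonant labels from a
    # transcription, with "-" marking an omitted consonant. Under PCC-R,
    # distorted tokens are scored as correct, so they are assumed here to
    # carry the same base label as their target.
    if not targets or len(targets) != len(productions):
        raise ValueError("need non-empty, aligned transcriptions")
    correct = sum(1 for t, p in zip(targets, productions) if t == p)
    return 100.0 * correct / len(targets)

# One substitution (k -> t) and one omission (s -> -): 3 of 5 correct
print(pcc_r(["k", "t", "s", "n", "d"], ["t", "t", "-", "n", "d"]))  # 60.0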
Treisman, A; Souther, J
1986-02-01
When attention is divided among four briefly exposed syllables, subjects mistakenly detect targets whose letters are present in the display but in the wrong combinations. These illusory conjunctions are somewhat more frequent when the target is a word and when the distractors are nonwords, but the effects of lexical status are small, and no longer reach significance in free report of the same displays. Search performance is further impaired if the nonwords are unpronounceable consonant strings rather than consonant-vowel-consonant strings, but the decrement is due to missed targets rather than to increased conjunction errors. The results are discussed in relation to feature-integration theory and to current models of word perception.
Pilot Non-Conformance to Alerting System Commands
NASA Technical Reports Server (NTRS)
Pritchett, Amy R.; Hansman, R. John
1997-01-01
Instances of pilot non-conformance to alerting system commands have been identified in previous studies. Pilot non-conformance changes the final behavior of the system, and therefore may reduce actual performance from that anticipated. A simulator study examined pilot non-conformance, using the task of collision avoidance during closely spaced parallel approaches as a case study. Consonance between the display and the alerting system was found to significantly improve subject agreement with automatic alerts. Based on these results, a more general discussion of the factors involved in pilot conformance is presented, along with design guidelines for alerting systems.
Effects of blocking and presentation on the recognition of word and nonsense syllables in noise
NASA Astrophysics Data System (ADS)
Benkí, José R.
2003-10-01
Listener expectations may have significant effects on spoken word recognition, modulating word similarity effects from the lexicon. This study investigates the effect of blocking by lexical status on the recognition of word and nonsense syllables in noise. 240 phonemically matched word and nonsense CVC syllables [Boothroyd and Nittrouer, J. Acoust. Soc. Am. 84, 101-108 (1988)] were presented to listeners at different S/N ratios for identification. In the mixed condition, listeners were presented with blocks containing both words and nonwords, while listeners in the blocked condition were presented with the trials in blocks containing either words or nonwords. The targets were presented in isolation with 50 ms of preceding and following noise. Preliminary results indicate no effect of blocking on accuracy for either word or nonsense syllables; results from neighborhood density analyses will be presented. Consistent with previous studies, a j-factor analysis indicates that words are perceived as containing at least 0.5 fewer independent units than nonwords in both conditions. Relative to previous work on syllables presented in a frame sentence [Benkí, J. Acoust. Soc. Am. 113, 1689-1705 (2003)], initial consonants were perceived significantly less accurately, while vowels and final consonants were perceived at comparable rates.
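The j-factor analysis mentioned here models whole-item recognition as p_whole = p_part ** j, so j = log(p_whole) / log(p_part) estimates how many statistically independent units a listener effectively perceives (near 3 for CVC nonwords, lower for real words). A minimal sketch with made-up probabilities:

import math

def j_factor(p_whole, p_part):
    # Number of effectively independent perceptual units,
    # defined by p_whole == p_part ** j
    return math.log(p_whole) / math.log(p_part)

# Toy numbers, not the study's data: phonemes at 70%, whole items at 40%
print(j_factor(0.40, 0.70))  # ~2.57, i.e. words behave as < 3 units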
Neural-scaled entropy predicts the effects of nonlinear frequency compression on speech perception
Rallapalli, Varsha H.; Alexander, Joshua M.
2015-01-01
The Neural-Scaled Entropy (NSE) model quantifies information in the speech signal that has been altered beyond simple gain adjustments by sensorineural hearing loss (SNHL) and various signal processing. An extension of Cochlear-Scaled Entropy (CSE) [Stilp, Kiefte, Alexander, and Kluender (2010). J. Acoust. Soc. Am. 128(4), 2112–2126], NSE quantifies information as the change in 1-ms neural firing patterns across frequency. To evaluate the model, data from a study that examined nonlinear frequency compression (NFC) in listeners with SNHL were used because NFC can recode the same input information in multiple ways in the output, resulting in different outcomes for different speech classes. Overall, predictions were more accurate for NSE than CSE. The NSE model accurately described the observed degradation in recognition, and lack thereof, for consonants in a vowel-consonant-vowel context that had been processed in different ways by NFC. While NSE accurately predicted recognition of vowel stimuli processed with NFC, it underestimated them relative to a low-pass control condition without NFC. In addition, without modifications, it could not predict the observed improvement in recognition for word final /s/ and /z/. Findings suggest that model modifications that include information from slower modulations might improve predictions across a wider variety of conditions. PMID:26627780
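At its core, this family of entropy metrics quantifies potential information as frame-to-frame change in an auditory spectrum. A rough sketch of that core computation, assuming a precomputed filterbank spectrogram; the published CSE/NSE models add cochlear and neural scaling details (e.g., 1-ms neural firing patterns) omitted here:

import numpy as np

def spectral_change(spec):
    # spec: (n_frames, n_bands) band magnitudes from an auditory
    # filterbank (assumed precomputed). Returns the Euclidean change
    # between successive frames; larger values mark frames that carry
    # more new spectral information.
    diffs = np.diff(spec, axis=0)
    return np.sqrt((diffs ** 2).sum(axis=1))

rng = np.random.default_rng(0)
change = spectral_change(rng.random((100, 30)))  # toy 100-frame input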
Raud Westberg, Liisi; Höglund Santamarta, Lena; Karlsson, Jenny; Nyberg, Jill; Neovius, Erik; Lohmander, Anette
2017-10-25
The aim of this study was to describe speech at 1, 1;6, and 3 years of age in children born with unilateral cleft lip and palate (UCLP) and relate the findings to operation method and the amount of early intervention received. A prospective trial (Scandcleft) was undertaken of children born with UCLP operated on with either a one-stage (OS) palatal repair at 12 months or a two-stage repair (TS) with soft palate closure at 3-4 months and hard palate closure at 12 months. At 1 and 1;6 years, the place and manner of articulation and the number of different consonants produced in babbling were reported in 33 children. At 3 years of age, percentage of consonants correct adjusted for age (PCC-A) and cleft speech errors were assessed in 26 of the 33 children. Early intervention was not provided as part of the trial but according to clinical routine, and was extracted from patient records. At age 3, the mean PCC-A was 68%, and 46% of the children produced articulation errors, with no significant difference between the two groups. At one year, there was a significantly higher occurrence of oral stops and anterior place consonants in the TS group. There were significant correlations between consonant production at one and three years of age, but not with the amount of early intervention received. The TS method was beneficial for consonant production at age 1, but the benefit was not evident at 1;6 or 3 years. Behaviourally based early intervention still needs to be evaluated.
The Role of the Auditory Brainstem in Processing Musically Relevant Pitch
Bidelman, Gavin M.
2013-01-01
Neuroimaging work has shed light on the cerebral architecture involved in processing the melodic and harmonic aspects of music. Here, recent evidence is reviewed illustrating that subcortical auditory structures contribute to the early formation and processing of musically relevant pitch. Electrophysiological recordings from the human brainstem and population responses from the auditory nerve reveal that nascent features of tonal music (e.g., consonance/dissonance, pitch salience, harmonic sonority) are evident at early, subcortical levels of the auditory pathway. The salience and harmonicity of brainstem activity are strongly correlated with listeners' perceptual preferences and perceived consonance for the tonal relationships of music. Moreover, the hierarchical ordering of pitch intervals/chords described in Western music practice, and their perceptual consonance, is well predicted by the salience with which pitch combinations are encoded in subcortical auditory structures. While the neural correlates of consonance can be tuned and exaggerated with musical training, they persist even in the absence of musicianship or long-term enculturation. As such, it is posited that the structural foundations of musical pitch might result from innate processing performed by the central auditory system. A neurobiological predisposition for consonant, pleasant-sounding pitch relationships may be one reason why these pitch combinations have been favored by composers and listeners for centuries. It is suggested that important perceptual dimensions of music emerge well before the auditory signal reaches cerebral cortex and prior to attentional engagement. While cortical mechanisms are no doubt critical to the perception, production, and enjoyment of music, the contribution of subcortical structures implicates a more integrated, hierarchically organized network underlying music processing within the brain. PMID:23717294
Coarticulation in Catalan Dark ["l"] and the Alveolar Trill: General Implications for Sound Change
ERIC Educational Resources Information Center
Recasens, Daniel
2013-01-01
Coarticulation data for Catalan reveal that, while being less sensitive to vowel effects at the consonant period, the alveolar trill [r] exerts more prominent effects than [dark "l"] on both adjacent [a] and [i]. This coarticulatory pattern may be related to strict manner demands on the production of the trill. Both consonants also differ…
Changes Over Time in Global Foreign Accent and Liquid Identifiability and Accuracy.
ERIC Educational Resources Information Center
Riney, Timothy J.; Flege, James E.
1998-01-01
Assessed global foreign accent in sentences and production of two English consonants by Japanese college students during their freshman and senior years (T1, T2). Auditory evaluations by native English-speaking listeners were used to determine to what extent the consonants produced could be identified as intended at T1 and T2; and whether the two…
ERIC Educational Resources Information Center
Woynaroski, Tiffany; Watson, Linda; Gardner, Elizabeth; Newsom, Cassandra R.; Keceli-Kaysili, Bahar; Yoder, Paul J.
2016-01-01
Diversity of key consonants used in communication (DKCC) is a value-added predictor of expressive language growth in initially preverbal children with autism spectrum disorder (ASD). Studying the predictors of DKCC growth in young children with ASD might inform treatment of this under-studied aspect of prelinguistic development. Eighty-seven…
ERIC Educational Resources Information Center
Hoover, Eric C.; Souza, Pamela E.; Gallun, Frederick J.
2012-01-01
Purpose: The benefits of amplitude compression in hearing aids may be limited by distortion resulting from rapid gain adjustment. To evaluate this, it is convenient to quantify distortion by using a metric that is sensitive to the changes in the processed signal that decrease consonant recognition, such as the Envelope Difference Index (EDI;…
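The EDI compares the temporal envelopes of an unprocessed and a processed signal, with 0 meaning identical envelopes and larger values meaning more envelope distortion. The sketch below uses one plausible formulation (Hilbert envelopes normalized to unit mean, half the mean absolute difference); treat the normalization details as assumptions rather than the canonical definition:

import numpy as np
from scipy.signal import hilbert

def envelope_difference_index(x, y):
    e1 = np.abs(hilbert(x))  # amplitude envelope of the reference signal
    e2 = np.abs(hilbert(y))  # amplitude envelope of the processed signal
    e1 = e1 / e1.mean()      # unit-mean normalization so overall level
    e2 = e2 / e2.mean()      # differences are not counted as distortion
    return float(np.mean(np.abs(e1 - e2)) / 2.0)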
ERIC Educational Resources Information Center
Zajac, David J.; Weissler, Mark C.
2004-01-01
Two studies were conducted to evaluate short-latency vocal tract air pressure responses to sudden pressure bleeds during production of voiceless bilabial stop consonants. It was hypothesized that the occurrence of respiratory reflexes would be indicated by distinct patterns of responses as a function of bleed magnitude. In Study 1, 19 adults…
The Prosodic Licensing of Coda Consonants in Early Speech: Interactions with Vowel Length
ERIC Educational Resources Information Center
Miles, Kelly; Yuen, Ivan; Cox, Felicity; Demuth, Katherine
2016-01-01
English has a word-minimality requirement that all open-class lexical items must contain at least two moras of structure, forming a bimoraic foot (Hayes, 1995). Thus, a word with either a long vowel, or a short vowel and a coda consonant, satisfies this requirement. This raises the question of when and how young children might learn this…
ERIC Educational Resources Information Center
van Severen, Lieve; Gillis, Joris J. M.; Molemans, Inge; van den Berg, Renate; De Maeyer, Sven; Gillis, Steven
2013-01-01
The impact of input frequency (IF) and functional load (FL) of segments in the ambient language on the acquisition order of word-initial consonants is investigated. Several definitions of IF/FL are compared and implemented. The impact of IF/FL and their components are computed using a longitudinal corpus of interactions between thirty…
ERIC Educational Resources Information Center
Folker, Joanne E.; Murdoch, Bruce E.; Cahill, Louise M.; Delatycki, Martin B.; Corben, Louise A.; Vogel, Adam P.
2011-01-01
Articulatory kinematics were investigated using electromagnetic articulography (EMA) in four dysarthric speakers with Friedreich's ataxia (FRDA). Specifically, tongue-tip and tongue-back movements were recorded by the AG-200 EMA system during production of the consonants t and k as produced within a sentence utterance and during a rapid syllable…
On Pitch Lowering Not Linked to Voicing: Nguni and Shona Group Depressors
ERIC Educational Resources Information Center
Downing, Laura J.
2009-01-01
This paper tests how well two theories of tone-segment interactions account for the lowering effect of so-called depressor consonants on tone in languages of the Shona and Nguni groups of Southern Bantu. I show that single source theories, which propose that pitch lowering is inextricably linked to consonant voicing, as they are reflexes of the…
Harmonic Domains and Synchronization in Typically and Atypically Developing Hebrew-Speaking Children
ERIC Educational Resources Information Center
Bat-El, Outi
2009-01-01
This paper presents a comparative study of typical and atypical consonant harmony (onset-onset place harmony), with emphasis on (i) the size of the harmonic domain, (ii) the position of the harmonic domain within the prosodic word, and (iii) the maximal size of the prosodic word that exhibits consonant harmony. The data, drawn from typically and…
ERIC Educational Resources Information Center
Haapala, Sini; Niemitalo-Haapola, Elina; Raappana, Antti; Kujala, Tiia; Kujala, Teija; Jansson-Verkasalo, Eira
2015-01-01
Many children experience recurrent acute otitis media (RAOM) in early childhood. In a previous study, 2-year-old children with RAOM were shown to have immature neural patterns for speech sound discrimination. The present study further investigated the consonant inventories of these same children using natural speech samples. The results showed…
Children's Identification of Consonants in a Speech-Shaped Noise or a Two-Talker Masker
ERIC Educational Resources Information Center
Leibold, Lori J.; Buss, Emily
2013-01-01
Purpose: To evaluate child-adult differences for consonant identification in a noise or a 2-talker masker. Error patterns were compared across age and masker type to test the hypothesis that errors with the noise masker reflect limitations in the peripheral encoding of speech, whereas errors with the 2-talker masker reflect target-masker…
ERIC Educational Resources Information Center
Uiboleht, Kaire; Karm, Mari; Postareff, Liisa
2016-01-01
Teaching approaches in higher education are well researched at the general level; research has identified not only the two broad categories of content-focused and learning-focused approaches to teaching but also consonance and dissonance between the aspects of teaching. Consonance means that theoretically coherent teaching practices are employed, but…
ERIC Educational Resources Information Center
McCaffrey Morrison, Helen
2008-01-01
Locus equations (LEs) were derived from consonant-vowel-consonant (CVC) syllables produced by four speakers with profound hearing loss. Group data indicated that LE functions obtained for the separate CVC productions initiated by /b/, /d/, and /g/ were less well-separated in acoustic space than those obtained from speakers with normal hearing. A…
ERIC Educational Resources Information Center
Zascavage, Victoria Selden; McKenzie, Ginger Kelley; Buot, Max; Woods, Carol; Orton-Gillingham, Fellow
2012-01-01
This study compared word recognition for words written in a traditional flat font to the same words written in a three-dimensional appearing font determined to create a right hemispheric stimulation. The participants were emergent readers enrolled in Montessori schools in the United States learning to read basic CVC (consonant, vowel, consonant)…
ERIC Educational Resources Information Center
Shosted, Ryan; Hualde, Jose Ignacio; Scarpace, Daniel
2012-01-01
Are palatal consonants articulated by multiple tongue gestures (coronal and dorsal) or by a single gesture that brings the tongue into contact with the palate at several places of articulation? The lenition of palatal consonants (resulting in approximants) has been presented as evidence that palatals are simple, not complex: When reduced, they do…
ERIC Educational Resources Information Center
Redhair, Emily
2011-01-01
This study compared a stimulus fading (SF) procedure with a constant time delay (CTD) procedure for identification of consonant-vowel-consonant (CVC) nonsense words for a participant with autism. An alternating treatments design was utilized through a computer-based format. Receptive identification of target words was evaluated using a computer…
ERIC Educational Resources Information Center
Redhair, Emily I.; McCoy, Kathleen M.; Zucker, Stanley H.; Mathur, Sarup R.; Caterino, Linda
2013-01-01
This study compared a stimulus fading (SF) procedure with a constant time delay (CTD) procedure for identification of consonant-vowel-consonant (CVC) nonsense words for a participant with autism. An alternating treatments design was utilized through a computer-based format. Receptive identification of target words was evaluated using a computer…
Perfect harmony: A mathematical analysis of four historical tunings
NASA Astrophysics Data System (ADS)
Page, Michael F.
2004-10-01
In Western music, a musical interval defined by the frequency ratio of two notes is generally considered consonant when the ratio is composed of small integers. Perfect harmony or an "ideal just scale," which has no exact solution, would require the division of an octave into 12 notes, each of which would be used to create six other consonant intervals. The purpose of this study is to analyze four well-known historical tunings to evaluate how well each one approximates perfect harmony. The analysis consists of a general evaluation in which all consonant intervals are given equal weighting and a specific evaluation for three preludes from Bach's "Well-Tempered Clavier," for which intervals are weighted in proportion to the duration of their occurrence. The four tunings, 5-limit just intonation, quarter-comma meantone temperament, well temperament (Werckmeister III), and equal temperament, are evaluated by measures of centrality, dispersion, distance, and dissonance. When all keys and consonant intervals are equally weighted, equal temperament demonstrates the strongest performance across a variety of measures, although it is not always the best tuning. Given C as the starting note for each tuning, equal temperament and well temperament perform strongly for the three "Well-Tempered Clavier" preludes examined.
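The arithmetic behind such comparisons is compact: a just interval is a small-integer frequency ratio, its equal-tempered counterpart is 2**(n/12) for n semitones, and discrepancies are conveniently expressed in cents (1200 times the base-2 logarithm of a ratio). A minimal sketch over a few consonant intervals (chosen for illustration, not the paper's exact measures):

import math

JUST = {"m3": 6/5, "M3": 5/4, "P4": 4/3, "P5": 3/2, "m6": 8/5, "M6": 5/3}
SEMITONES = {"m3": 3, "M3": 4, "P4": 5, "P5": 7, "m6": 8, "M6": 9}

def cents(ratio):
    return 1200.0 * math.log2(ratio)

for name, just_ratio in JUST.items():
    et_ratio = 2 ** (SEMITONES[name] / 12)
    print(f"{name}: just {cents(just_ratio):6.1f} c, "
          f"ET {cents(et_ratio):6.1f} c, "
          f"deviation {cents(et_ratio / just_ratio):+5.1f} c")

For example, the equal-tempered fifth comes out about 2 cents flat of the just 3:2, while the equal-tempered major third is roughly 14 cents sharp of 5:4.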
2013-01-01
Background Previous studies have demonstrated functional and structural temporal lobe abnormalities located close to the auditory cortical regions in schizophrenia. The goal of this study was to determine whether functional abnormalities exist in the cortical processing of musical sound in schizophrenia. Methods Twelve schizophrenic patients and twelve age- and sex-matched healthy controls were recruited, and participants listened to a random sequence of two kinds of sonic entities, intervals (tritones and perfect fifths) and chords (atonal chords, diminished chords, and major triads), of varying degrees of complexity and consonance. The perception of musical sound was investigated by the auditory evoked potentials technique. Results Our results showed that schizophrenic patients exhibited significant reductions in the amplitudes of the N1 and P2 components elicited by musical stimuli, to which consonant sounds contributed more significantly than dissonant sounds. Schizophrenic patients could not perceive the dissimilarity between interval and chord stimuli based on the evoked potentials responses as compared with the healthy controls. Conclusion This study provided electrophysiological evidence of functional abnormalities in the cortical processing of sound complexity and music consonance in schizophrenia. The preliminary findings warrant further investigations for the underlying mechanisms. PMID:23721126
Verschuur, Carl
2009-03-01
Difficulties in speech recognition experienced by cochlear implant users may be attributed both to information loss caused by signal processing and to information loss associated with the interface between the electrode array and auditory nervous system, including cross-channel interaction. The objective of the work reported here was to attempt to partial out the relative contribution of these different factors to consonant recognition. This was achieved by comparing patterns of consonant feature recognition as a function of channel number and presence/absence of background noise in users of the Nucleus 24 device with normal hearing subjects listening to acoustic models that mimicked processing of that device. Additionally, in the acoustic model experiment, a simulation of cross-channel spread of excitation, or "channel interaction," was varied. Results showed that acoustic model experiments were highly correlated with patterns of performance in better-performing cochlear implant users. Deficits to consonant recognition in this subgroup could be attributed to cochlear implant processing, whereas channel interaction played a much smaller role in determining performance errors. The study also showed that large changes to channel number in the Advanced Combination Encoder signal processing strategy led to no substantial changes in performance.
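Acoustic models of implant processing are commonly built as channel vocoders: split the signal into analysis bands, keep each band's envelope, and use it to modulate a band-limited carrier. The sketch below is a generic noise vocoder with assumed band edges, not the study's exact model of Nucleus 24 processing:

import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(x, fs, edges=(100, 300, 700, 1500, 3000, 6000)):
    # Generic noise vocoder: per-band envelope-modulated noise
    noise = np.random.default_rng(0).standard_normal(len(x))
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, (lo, hi), btype="bandpass", fs=fs, output="sos")
        env = np.abs(hilbert(sosfilt(sos, x)))  # analysis-band envelope
        carrier = sosfilt(sos, noise)           # band-limited noise carrier
        out += env * carrier                    # resynthesized channel
    return out

fs = 16000
t = np.arange(int(0.2 * fs)) / fs
vocoded = noise_vocode(np.sin(2 * np.pi * 500 * t), fs)  # toy usage

Varying the number of band edges is the direct analogue of varying channel number in the experiment.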
Fels, S S; Hinton, G E
1997-01-01
Glove-Talk II is a system which translates hand gestures to speech through an adaptive interface. Hand gestures are mapped continuously to ten control parameters of a parallel formant speech synthesizer. The mapping allows the hand to act as an artificial vocal tract that produces speech in real time. This gives an unlimited vocabulary in addition to direct control of fundamental frequency and volume. Currently, the best version of Glove-Talk II uses several input devices, a parallel formant speech synthesizer, and three neural networks. The gesture-to-speech task is divided into vowel and consonant production by using a gating network to weight the outputs of a vowel and a consonant neural network. The gating network and the consonant network are trained with examples from the user. The vowel network implements a fixed user-defined relationship between hand position and vowel sound and does not require any training examples from the user. Volume, fundamental frequency, and stop consonants are produced with a fixed mapping from the input devices. With Glove-Talk II, the subject can speak slowly but with far more natural sounding pitch variations than a text-to-speech synthesizer.
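The vowel/consonant division of labor described here is a small mixture-of-experts arrangement: a gating network outputs a weight that blends the two expert networks' outputs. A minimal numpy sketch with linear stand-ins for the trained networks (the dimensions and random weights are illustrative assumptions, not the system's actual networks):

import numpy as np

rng = np.random.default_rng(0)
N_IN, N_OUT = 16, 10  # hand-state features -> synthesizer control parameters
W_vowel = rng.normal(size=(N_OUT, N_IN))      # stand-in vowel network
W_consonant = rng.normal(size=(N_OUT, N_IN))  # stand-in consonant network
w_gate = rng.normal(size=N_IN)                # stand-in gating network

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hand_to_speech_params(hand_state):
    # Gate estimates how vowel-like the gesture is, then blends experts
    g = sigmoid(w_gate @ hand_state)
    return g * (W_vowel @ hand_state) + (1.0 - g) * (W_consonant @ hand_state)

params = hand_to_speech_params(rng.normal(size=N_IN))  # 10 control values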
Skaalvik, Einar M; Skaalvik, Sidsel
2011-07-01
In their daily teaching and classroom management, teachers inevitably communicate and represent values. The purpose of this study was to explore relations between teachers' perception of school level values represented by the goal structure of the school and value consonance (the degree to which they felt that they shared the prevailing norms and values at the school), teachers' feeling of belonging, emotional exhaustion, job satisfaction, and motivation to leave the teaching profession. The participants were 231 Norwegian teachers in elementary school and middle school. Data were analyzed by means of structural equation modeling (SEM). Teachers' perception of mastery goal structure was strongly and positively related to value consonance and negatively related to emotional exhaustion, whereas performance goal structure, in the SEM model, was not significantly related to these constructs. Furthermore, value consonance was positively related to teachers' feeling of belonging and job satisfaction, whereas emotional exhaustion was negatively associated with job satisfaction. Job satisfaction was the strongest predictor of motivation to leave the teaching profession. A practical implication of the study is that educational goals and values should be explicitly discussed and clarified, both by education authorities and at the school level.
Correlational Analysis of Speech Intelligibility Tests and Metrics for Speech Transmission
2017-12-04
[Extraction-damaged record; only fragments are recoverable. Figure captions include "…frequency scale (male voice; normal voice effort)" and "Fig. 2. Diagram of a speech communication system (Letowski…)".] Consonants contain mostly high-frequency (above 1500 Hz) speech energy, but this energy is relatively small in comparison to that of the whole … voices (Letowski et al. 1993). Since the mid-frequency spectral region contains mostly vowel energy while consonants are high-frequency sounds, an…
ERIC Educational Resources Information Center
Storkel, Holly L.; Hoover, Jill R.
2011-01-01
The goal of this study was to examine the influence of part-word phonotactic probability/neighborhood density on word learning by preschool children with normal vocabularies that varied in size. Ninety-eight children (ages 2;11-6;0) were taught consonant-vowel-consonant (CVC) nonwords orthogonally varying in the probability/density of the CV…
Pratt, Hillel; Bleich, Naomi; Mittelman, Nomi
2015-11-01
Spatio-temporal distributions of cortical activity to audio-visual presentations of meaningless vowel-consonant-vowels and the effects of audio-visual congruence/incongruence were studied, with emphasis on the McGurk effect. The McGurk effect occurs when a clearly audible syllable with one consonant is presented simultaneously with a visual presentation of a face articulating a syllable with a different consonant, and the resulting percept is a syllable with a consonant other than the auditorily presented one. Twenty subjects listened to pairs of audio-visually congruent or incongruent utterances and indicated whether pair members were the same or not. Source current densities of event-related potentials to the first utterance in the pair were estimated, and the effects of stimulus-response combinations, brain area, hemisphere, and clarity of visual articulation were assessed. Auditory cortex, superior parietal cortex, and middle temporal cortex were the most consistently involved areas across experimental conditions. Early (<200 msec) processing of the consonant was overall prominent in the left hemisphere, except for right-hemisphere prominence in superior parietal cortex and secondary visual cortex. Clarity of visual articulation impacted activity in secondary visual cortex and Wernicke's area. McGurk perception was associated with decreased activity in primary and secondary auditory cortices and Wernicke's area before 100 msec, and with increased activity around 100 msec that decreased again around 180 msec. Activity in Broca's area was unaffected by McGurk perception and was increased only to congruent audio-visual stimuli 30-70 msec following consonant onset. The results suggest left hemisphere prominence in the effects of stimulus and response conditions on eight brain areas involved in dynamically distributed parallel processing of audio-visual integration. Initially (30-70 msec), subcortical contributions to auditory cortex, superior parietal cortex, and middle temporal cortex occur. During 100-140 msec, peristriate visual influences and Wernicke's area join in the processing. Resolution of incongruent audio-visual inputs is then attempted and, if successful, McGurk perception occurs and cortical activity in the left hemisphere further increases between 170 and 260 msec.
[The role of inter-dental consonant si in treating articulation disorders].
Jiang, Li-Ping; Wang, Guo-Min; Yang, Yu-Sheng; Liu, Qiong
2010-12-01
The aim of this study was to rectify deviant tongue position and achieve accurate pronunciation by making use of the protrusion and containment effect on the tongue of the interdental consonant [si]. One hundred and fifty-seven patients with articulation disorders (postpalatoplasty and non-cleft palate) who were diagnosed as having velopharyngeal sufficiency were included in this study. There were 111 males and 46 females, aged from 5 to 28 years. Among them, 29 patients showed pharyngeal fricatives, 73 palatalized misarticulation, 36 lateralized misarticulation, and 19 misarticulation mixed with palatalization and lateralization. During the treatment, the patients were asked to stick out the tongue, with the teeth gently biting it, and to pronounce the interdental consonant [si] smoothly. When the tongue was fully protracted, it was retracted to the lingual side of the mandibular anterior teeth to produce a normal apex linguae consonant [s]. This training method had a significant effect for patients with articulation disorders. The effect was most significant for patients with pharyngeal fricatives, with an effective rate of 96.55% (28/29), followed by 91.78% (67/73) in palatalized misarticulation, 84.21% (16/19) in misarticulation mixed with palatalization and lateralization, and 77.78% (28/36) in lateralized misarticulation. Training the pronunciation of the interdental consonant [si] may control the retrusion, arching, and curling movement of the tongue, which therefore provides an effective treatment for articulation disorders such as pharyngeal fricatives and palatalized and lateralized misarticulation. Supported by Research Fund of Science and Technology Commission of Shanghai Municipality (Grant No.08DZ2271100), Shanghai Leading Academic Discipline Project (Grant No.S30206), Research Fund of Bureau of Health of Shanghai Municipality (Grant No.2008160) and Phosphor Science Foundation of Educational Commission of Shanghai Municipality (Grant No.2000SG41).
Vocal similarity predicts the relative attraction of musical chords
Purves, Dale; Gill, Kamraan Z.
2018-01-01
Musical chords are combinations of two or more tones played together. While many different chords are used in music, some are heard as more attractive (consonant) than others. We have previously suggested that, for reasons of biological advantage, human tonal preferences can be understood in terms of the spectral similarity of tone combinations to harmonic human vocalizations. Using the chromatic scale, we tested this theory further by assessing the perceived consonance of all possible dyads, triads, and tetrads within a single octave. Our results show that the consonance of chords is predicted by their relative similarity to voiced speech sounds. These observations support the hypothesis that the relative attraction of musical tone combinations is due, at least in part, to the biological advantages that accrue from recognizing and responding to conspecific vocal stimuli. PMID:29255031
ERIC Educational Resources Information Center
Lohmander, Anette; Lillvik, Malin; Friede, Hans
2004-01-01
The purpose of the study was to investigate the impact of pre-surgical Infant Orthopaedics (IO) on consonant production at 18 months of age in children with Unilateral Cleft Lip and Palate (UCLP) and to compare the consonant production to that of age-matched children without clefts. The first ten children in a consecutive series of 20 with UCLP…
ERIC Educational Resources Information Center
Moradi, Shahram; Lidestam, Bjorn; Danielsson, Henrik; Ng, Elaine Hoi Ning; Ronnberg, Jerker
2017-01-01
Purpose: We sought to examine the contribution of visual cues in audiovisual identification of consonants and vowels--in terms of isolation points (the shortest time required for correct identification of a speech stimulus), accuracy, and cognitive demands--in listeners with hearing impairment using hearing aids. Method: The study comprised 199…
ERIC Educational Resources Information Center
Meerschman, Iris; Van Lierde, Kristiane; Peeters, Karen; Meersman, Eline; Claeys, Sofie; D'haeseleer, Evelien
2017-01-01
Purpose: The purpose of this study was to determine the short-term effect of 2 semi-occluded vocal tract training programs, "resonant voice training using nasal consonants" versus "straw phonation," on the vocal quality of vocally healthy future occupational voice users. Method: A multigroup pretest--posttest randomized control…
Articulatory Control in Childhood Apraxia of Speech in a Novel Word-Learning Task.
Case, Julie; Grigos, Maria I
2016-12-01
Articulatory control and speech production accuracy were examined in children with childhood apraxia of speech (CAS) and typically developing (TD) controls within a novel word-learning task to better understand the influence of planning and programming deficits in the production of unfamiliar words. Participants included 16 children between the ages of 5 and 6 years (8 CAS, 8 TD). Short- and long-term changes in lip and jaw movement, consonant and vowel accuracy, and token-to-token consistency were measured for 2 novel words that differed in articulatory complexity. Children with CAS displayed short- and long-term changes in consonant accuracy and consistency. Lip and jaw movements did not change over time. Jaw movement duration was longer in children with CAS than in TD controls. Movement stability differed between low- and high-complexity words in both groups. Children with CAS displayed a learning effect for consonant accuracy and consistency. Lack of change in movement stability may indicate that children with CAS require additional practice to demonstrate changes in speech motor control, even within production of novel word targets with greater consonant and vowel accuracy and consistency. The longer movement duration observed in children with CAS is believed to give children additional time to plan and program movements within a novel skill.
Fogerty, Daniel; Ahlstrom, Jayne B.; Bologna, William J.; Dubno, Judy R.
2015-01-01
This study investigated how single-talker modulated noise impacts consonant and vowel cues to sentence intelligibility. Younger normal-hearing, older normal-hearing, and older hearing-impaired listeners completed speech recognition tests. All listeners received spectrally shaped speech matched to their individual audiometric thresholds to ensure sufficient audibility, with the exception of a second younger listener group who received spectral shaping that matched the mean audiogram of the hearing-impaired listeners. Results demonstrated minimal declines in intelligibility for older listeners with normal hearing and more evident declines for older hearing-impaired listeners, possibly related to impaired temporal processing. A correlational analysis suggests a common underlying ability to process information during vowels that is predictive of speech-in-modulated-noise abilities. In contrast, the ability to use consonant cues appears specific to the particular characteristics of the noise and interruption. Performance declines for older listeners were mostly confined to consonant conditions. Spectral shaping accounted for the primary contributions of audibility. However, comparison with the young spectral controls who received identical spectral shaping suggests that this procedure may reduce wideband temporal modulation cues due to frequency-specific amplification that affected high-frequency consonants more than low-frequency vowels. These spectral changes may impact speech intelligibility in certain modulation masking conditions. PMID:26093436
Dressler, William W.; Balieiro, Mauro C.; dos Santos, José E.
2018-01-01
Describing the link between culture (as a phenomenon pertaining to social aggregates) and the beliefs and behaviors of individuals has eluded satisfactory resolution; however, contemporary cognitive culture theory offers hope. In this theory, culture is conceptualized as cognitive models describing specific domains of life that are shared by members of a social group. It is sharing that gives culture its aggregate properties. There are two aspects to these cultural models at the level of the individual. Persons have their own representations of the world that correspond incompletely to the shared model—this is their ‘cultural competence.’ Persons are also variable in the degree to which they can put cultural models into practice in their own lives—this is their ‘cultural consonance.’ Low cultural consonance is a stressful experience and has been linked to higher psychological distress. The relationship of cultural competence per se and psychological distress is less clear. In the research reported here, cultural competence and cultural consonance are measured on the same sample and their associations with psychological distress are examined using multiple regression analysis. Results indicate that, with respect to psychological distress, while it is good to know the cultural model, it is better to put it into practice. PMID:29379460
Controller design and consonantal contrast coding using a multi-finger tactual display
Israr, Ali; Meckl, Peter H.; Reed, Charlotte M.; Tan, Hong Z.
2009-01-01
This paper presents the design and evaluation of a new controller for a multi-finger tactual display in speech communication. A two-degree-of-freedom controller consisting of a feedback controller and a prefilter and its application in a consonant contrasting experiment are presented. The feedback controller provides stable, fast, and robust response of the fingerpad interface and the prefilter shapes the frequency-response of the closed-loop system to match with the human detection-threshold function. The controller is subsequently used in a speech communication system that extracts spectral features from recorded speech signals and presents them as vibrational-motional waveforms to three digits on a receiver’s left hand. Performance from a consonantal contrast test suggests that participants are able to identify tactual cues necessary for discriminating consonants in the initial position of consonant-vowel-consonant (CVC) segments. The average sensitivity indices for contrasting voicing, place, and manner features are 3.5, 2.7, and 3.4, respectively. The results show that the consonantal features can be successfully transmitted by utilizing a broad range of the kinesthetic-cutaneous sensory system. The present study also demonstrates the validity of designing controllers that take into account not only the electromechanical properties of the hardware, but the sensory characteristics of the human user. PMID:19507975
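A two-degree-of-freedom controller separates the feedback path (stability, disturbance rejection) from a prefilter that shapes the commanded trajectory, which is how a closed-loop response can be matched to a target function such as a detection threshold without compromising loop stability. A minimal discrete-time sketch with a PI feedback law, a first-order prefilter, and a first-order lag standing in for the fingerpad actuator (all coefficients are illustrative assumptions, not the paper's design):

def two_dof_step_response(kp=2.0, ki=50.0, alpha=0.98,
                          a=0.99, b=0.01, dt=0.001, steps=1000):
    # Unit-step response of prefilter + PI feedback around a lag plant
    y, integ, r_f, trace = 0.0, 0.0, 0.0, []
    for _ in range(steps):
        r_f = alpha * r_f + (1.0 - alpha) * 1.0  # prefilter shapes reference
        e = r_f - y                              # feedback error
        integ += e * dt
        u = kp * e + ki * integ                  # PI feedback controller
        y = a * y + b * u                        # first-order plant model
        trace.append(y)
    return trace

response = two_dof_step_response()  # settles toward 1.0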
Neural mechanisms underlying valence inferences to sound: The role of the right angular gyrus.
Bravo, Fernando; Cross, Ian; Hawkins, Sarah; Gonzalez, Nadia; Docampo, Jorge; Bruno, Claudio; Stamatakis, Emmanuel Andreas
2017-07-28
We frequently infer others' intentions based on non-verbal auditory cues. Although the brain underpinnings of social cognition have been extensively studied, no empirical work has yet examined the impact of musical structure manipulation on the neural processing of emotional valence during mental state inferences. We used a novel sound-based theory-of-mind paradigm in which participants categorized stimuli of different sensory dissonance level in terms of positive/negative valence. Whilst consistent with previous studies which propose facilitated encoding of consonances, our results demonstrated that distinct levels of consonance/dissonance elicited differential influences on the right angular gyrus, an area implicated in mental state attribution and attention reorienting processes. Functional and effective connectivity analyses further showed that consonances modulated a specific inhibitory interaction from associative memory to mental state attribution substrates. Following evidence suggesting that individuals with autism may process social affective cues differently, we assessed the relationship between participants' task performance and self-reported autistic traits in clinically typical adults. Higher scores on the social cognition scales of the AQ were associated with deficits in recognising positive valence in consonant sound cues. These findings are discussed with respect to Bayesian perspectives on autistic perception, which highlight a functional failure to optimize precision in relation to prior beliefs. Copyright © 2017 Elsevier Ltd. All rights reserved.
Hazan, Valerie; Messaoud-Galusi, Souhila; Rosen, Stuart
2013-02-01
In this study, the authors aimed to determine whether children with dyslexia (hereafter referred to as "DYS children") are more affected than children with average reading ability (hereafter referred to as "AR children") by talker and intonation variability when perceiving speech in noise. Thirty-four DYS and 25 AR children were tested on their perception of consonants in naturally produced CV tokens in multitalker babble. Twelve CVs were presented for identification in four conditions varying in the degree of talker and intonation variability. Consonant place (/bi/-/di/) and voicing (/bi/-/pi/) discrimination were investigated with the same conditions. DYS children made slightly more identification errors than AR children but only for conditions with variable intonation. Errors were more frequent for a subset of consonants, generally weakly encoded for AR children, for tokens with intonation patterns (steady and rise-fall) that occur infrequently in connected discourse. In discrimination tasks, which have a greater memory and cognitive load, DYS children scored lower than AR children across all conditions. Unusual intonation patterns had a disproportionate (but small) effect on consonant intelligibility in noise for DYS children, but adding talker variability did not. DYS children do not appear to have a general problem in perceiving speech in degraded conditions, which makes it unlikely that they lack robust phonological representations.
Fels, S S; Hinton, G E
1998-01-01
Glove-TalkII is a system which translates hand gestures to speech through an adaptive interface. Hand gestures are mapped continuously to ten control parameters of a parallel formant speech synthesizer. The mapping allows the hand to act as an artificial vocal tract that produces speech in real time. This gives an unlimited vocabulary in addition to direct control of fundamental frequency and volume. Currently, the best version of Glove-TalkII uses several input devices (including a Cyberglove, a ContactGlove, a three-space tracker, and a foot pedal), a parallel formant speech synthesizer, and three neural networks. The gesture-to-speech task is divided into vowel and consonant production by using a gating network to weight the outputs of a vowel and a consonant neural network. The gating network and the consonant network are trained with examples from the user. The vowel network implements a fixed user-defined relationship between hand position and vowel sound and does not require any training examples from the user. Volume, fundamental frequency, and stop consonants are produced with a fixed mapping from the input devices. One subject has trained to speak intelligibly with Glove-TalkII. He speaks slowly but with far more natural sounding pitch variations than a text-to-speech synthesizer.
The production and phonetic representation of fake geminates in English
Oh, Grace E.; Redford, Melissa A.
2011-01-01
The current study focused on the production of non-contrastive geminates across different boundary types in English to investigate the hypothesis that word-internal heteromorphemic geminates may differ from those that arise across a word boundary. In this study, word-internal geminates arising from affixation, and described as either assimilated or concatenated, were matched to heteromorphemic geminates arising from sequences of identical consonants that spanned a word boundary and to word-internal singletons. Word-internal geminates were found to be longer than matched singletons in absolute and relative terms. By contrast, heteromorphemic geminates that occurred at word boundaries were only longer than matched singletons in absolute terms. In addition, heteromorphemic geminates in two word phrases were typically “pulled apart” in careful speech; that is, speakers marked the boundaries between free morphemes with pitch changes and pauses. Morpheme boundaries in words with bound affixes were very rarely highlighted in this way. These results are taken to indicate that most word-internal heteromorphemic geminates are represented as a single long consonant in the speech plan rather than as a consonant sequence. Only those geminates that arise in two word phrases exhibit phonetic characteristics that are fully consistent with the representation of two identical consonants crossing a morpheme boundary. PMID:22611293
Sætrevik, Bjørn
2012-01-01
The dichotic listening task is typically administered by presenting a consonant-vowel (CV) syllable to each ear and asking the participant to report the syllable heard most clearly. The results tend to show more reports of the right ear syllable than of the left ear syllable, an effect called the right ear advantage (REA). The REA is assumed to be due to the crossing over of auditory fibres and the processing of language stimuli being lateralised to left temporal areas. However, the tendency for most dichotic listening experiments to use only CV syllable stimuli limits the extent to which the conclusions can be generalised to also apply to other speech phonemes. The current study re-examines the REA in dichotic listening by using both CV and vowel-consonant (VC) syllables and combinations thereof. Results showed a replication of the REA response pattern for both CV and VC syllables, thus indicating that the general assumption of left-side localisation of processing can be applied for both types of stimuli. Further, on trials where a CV is presented in one ear and a VC is presented in the other ear, the CV is selected more often than the VC, indicating that these phonemes have an acoustic or processing advantage.
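Ear advantages like the REA are conventionally summarized with a laterality index computed over correct reports from each ear, with positive values indicating a right ear advantage. A minimal sketch (the index is a standard convention in dichotic listening work, not a formula given in this abstract):

def laterality_index(right_correct, left_correct):
    # (R - L) / (R + L): +1 = all right-ear reports, -1 = all left-ear
    total = right_correct + left_correct
    if total == 0:
        raise ValueError("no correct reports to score")
    return (right_correct - left_correct) / total

print(laterality_index(34, 22))  # ~+0.21, a right ear advantage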
Léger, Agnès C.; Reed, Charlotte M.; Desloge, Joseph G.; Swaminathan, Jayaganesh; Braida, Louis D.
2015-01-01
Consonant-identification ability was examined in normal-hearing (NH) and hearing-impaired (HI) listeners in the presence of steady-state and 10-Hz square-wave interrupted speech-shaped noise. The Hilbert transform was used to process speech stimuli (16 consonants in a-C-a syllables) to present envelope cues, temporal fine-structure (TFS) cues, or envelope cues recovered from TFS speech. The performance of the HI listeners was inferior to that of the NH listeners both in terms of lower levels of performance in the baseline condition and in the need for higher signal-to-noise ratio to yield a given level of performance. For NH listeners, scores were higher in interrupted noise than in steady-state noise for all speech types (indicating substantial masking release). For HI listeners, masking release was typically observed for TFS and recovered-envelope speech but not for unprocessed and envelope speech. For both groups of listeners, TFS and recovered-envelope speech yielded similar levels of performance and consonant confusion patterns. The masking release observed for TFS and recovered-envelope speech may be related to level effects associated with the manner in which the TFS processing interacts with the interrupted noise signal, rather than to the contributions of TFS cues per se. PMID:26233038
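The Hilbert decomposition used here splits a band-limited signal into a slowly varying envelope and a rapidly varying temporal fine structure carrier. A minimal single-band sketch; the study applies this within an analysis filterbank and with its own stimulus-construction details:

import numpy as np
from scipy.signal import hilbert

fs = 16000
t = np.arange(int(0.05 * fs)) / fs
x = (1 + 0.5 * np.sin(2 * np.pi * 30 * t)) * np.sin(2 * np.pi * 1000 * t)

analytic = hilbert(x)             # analytic signal via Hilbert transform
envelope = np.abs(analytic)       # envelope cue: slow amplitude modulation
tfs = np.cos(np.angle(analytic))  # TFS cue: unit-amplitude fast carrier
env_stim = envelope * np.sin(2 * np.pi * 1000 * t)  # envelope on a fixed carrier (assumed)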
ERIC Educational Resources Information Center
Yurtbasi, Metin
2016-01-01
The voiceless allophones of the (alveolo-)palatal stop consonant [c] and the velar stop consonant [k] of the phoneme /k/, represented by the letter "K", exist in almost all languages of the world. Which of these is sounded in speech is determined by the type of vowel adjacent to them. In Turkish, the dark variant [k] occurs…
Green, K P; Gerdeman, A
1995-12-01
Two experiments examined the impact of a discrepancy in vowel quality between the auditory and visual modalities on the perception of a syllable-initial consonant. One experiment examined the effect of such a discrepancy on the McGurk effect by cross-dubbing auditory /bi/ tokens onto visual /ga/ articulations (and vice versa). A discrepancy in vowel category significantly reduced the magnitude of the McGurk effect and changed the pattern of responses. A 2nd experiment investigated the effect of such a discrepancy on the speeded classification of the initial consonant. Mean reaction times to classify the tokens increased when the vowel information was discrepant between the 2 modalities but not when the vowel information was consistent. These experiments indicate that the perceptual system is sensitive to cross-modal discrepancies in the coarticulatory information between a consonant and its following vowel during phonetic perception.
Cheng, Bing; Zhang, Yang
2015-01-01
The present study investigated how syllable structure differences between the first language (L1) and the second language (L2) affect L2 consonant perception and production at syllable-initial and syllable-final positions. The participants were Mandarin-speaking college students who studied English as a second language. Monosyllabic English words were used in the perception test. Production was recorded from each Chinese subject and rated for accentedness by two native speakers of English. Consistent with previous studies, significant positional asymmetry effects were found across speech sound categories in terms of voicing, place of articulation, and manner of articulation. Furthermore, significant correlations between perception and accentedness ratings were found at the syllable onset position but not for the coda. Many exceptions were also found, which could not be solely accounted for by differences in L1-L2 syllabic structures. The results show a strong effect of language experience at the syllable level, which joins forces with acoustic, phonetic, and phonemic properties of individual consonants in influencing positional asymmetry in both domains of L2 segmental perception and production. The complexities and exceptions call for further systematic studies on the interactions between syllable structure universals and native language interference, with refined theoretical models to specify the links between perception and production in second language acquisition. PMID:26635699
Meyer, Ted A; Frisch, Stefan A; Pisoni, David B; Miyamoto, Richard T; Svirsky, Mario A
2003-07-01
Do cochlear implants provide enough information to allow adult cochlear implant users to understand words in ways that are similar to listeners with acoustic hearing? Can we use a computational model to gain insight into the underlying mechanisms used by cochlear implant users to recognize spoken words? The Neighborhood Activation Model has been shown to be a reasonable model of word recognition for listeners with normal hearing. The Neighborhood Activation Model assumes that words are recognized in relation to other similar-sounding words in a listener's lexicon. The probability of correctly identifying a word is based on the phoneme perception probabilities from a listener's closed-set consonant and vowel confusion matrices modified by the relative frequency of occurrence of the target word compared with similar-sounding words (neighbors). Common words with few similar-sounding neighbors are more likely to be selected as responses than less common words with many similar-sounding neighbors. Recent studies have shown that several of the assumptions of the Neighborhood Activation Model also hold true for cochlear implant users. Closed-set consonant and vowel confusion matrices were obtained from 26 postlingually deafened adults who use cochlear implants. Confusion matrices were used to represent input errors to the Neighborhood Activation Model. Responses to the different stimuli were then generated by the Neighborhood Activation Model after incorporating the frequency of occurrence counts of the stimuli and their neighbors. Model outputs were compared with obtained performance measures on the Consonant-Vowel Nucleus-Consonant word test. Information transmission analysis was used to assess whether the Neighborhood Activation Model was able to successfully generate and predict word and individual phoneme recognition by cochlear implant users. The Neighborhood Activation Model predicted Consonant-Vowel Nucleus-Consonant test words at levels similar to those correctly identified by the cochlear implant users. The Neighborhood Activation Model also predicted phoneme feature information well. The results obtained suggest that the Neighborhood Activation Model provides a reasonable explanation of word recognition by postlingually deafened adults after cochlear implantation. It appears that multichannel cochlear implants give cochlear implant users access to their mental lexicons in a manner that is similar to listeners with acoustic hearing. The lexical properties of the test stimuli used to assess performance are important to spoken-word recognition and should be included in further models of the word recognition process.
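The frequency-weighted decision rule described above lends itself to a compact sketch. The following is an illustrative Python rendering, not the authors' implementation: the confusion probabilities, words, and frequency counts are invented, and the real model runs over a full lexicon rather than a two-word candidate set.

```python
# Sketch of the decision rule described above: a candidate word's activation
# is its phoneme-by-phoneme confusion probability times its frequency,
# normalized over the stimulus word and its similar-sounding neighbors.
def phoneme_prob(stimulus, candidate, confusion):
    """P(candidate heard | stimulus said), multiplied across positions."""
    p = 1.0
    for s, c in zip(stimulus, candidate):
        p *= confusion[s].get(c, 0.0)
    return p

def nam_probability(stimulus, candidates, confusion, freq):
    """Frequency-weighted activations, normalized over the candidate set."""
    act = {w: phoneme_prob(stimulus, w, confusion) * freq[w] for w in candidates}
    total = sum(act.values())
    return {w: a / total for w, a in act.items()}

confusion = {"k": {"k": 0.8, "t": 0.2}, "t": {"t": 0.7, "k": 0.3}, "a": {"a": 1.0}}
freq = {"kat": 50.0, "tat": 5.0}     # hypothetical occurrence counts
print(nam_probability("kat", ["kat", "tat"], confusion, freq))
```

Note how the common word ("kat") dominates the response distribution even though some phoneme confusions favor its rarer neighbor, which is the lexical-frequency effect the model is built around.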
Serbo-Croatian. SC-15A. Part 1. Basic Structure,
1983-09-30
regularly replace certain others in specific linguistic conditions. There are two main kinds of such mutations. One set is associated with sonority (voicing... or non-voicing), the other with palatalization. (a) Mutations caused by voicing or devoicing concern the plosive and sibilant consonants, since no... mutations exist of the consonants f, h, c, J, r, v. The plosives and sibilants mutate as follows: p alternates with b (p/b); s alternates with z (s/z); t alternates…
Patterns of phonological disability in Cantonese-speaking children in Hong Kong.
Cheung, P; Abberton, E
2000-01-01
Tone, vowel and consonant production are described for a large group of Cantonese-speaking children assessed in speech and language therapy clinics in Hong Kong. The patterns of disability follow predictions made on the basis of work on normal phonological development in Cantonese, and on psychoacoustic factors in acquisition: consonants account for more disability than vowels, and tones are least problematic. Possible articulatory and auditory contributions to explanation of the observed patterns are discussed.
ERIC Educational Resources Information Center
Eshghi, Marziye; Vallino, Linda D.; Baylis, Adriane L.; Preisser, John S.; Zajac, David J.
2017-01-01
Purpose: The objective was to determine velopharyngeal (VP) status of stop consonants and vowels produced by young children with repaired cleft palate (CP) and typically developing (TD) children from 12 to 18 months of age. Method: Nasal ram pressure (NRP) was monitored in 9 children (5 boys, 4 girls) with repaired CP with or without cleft lip and…
Speech training alters consonant and vowel responses in multiple auditory cortex fields
Engineer, Crystal T.; Rahebi, Kimiya C.; Buell, Elizabeth P.; Fink, Melyssa K.; Kilgard, Michael P.
2015-01-01
Speech sounds evoke unique neural activity patterns in primary auditory cortex (A1). Extensive speech sound discrimination training alters A1 responses. While the neighboring auditory cortical fields each contain information about speech sound identity, each field processes speech sounds differently. We hypothesized that while all fields would exhibit training-induced plasticity following speech training, there would be unique differences in how each field changes. In this study, rats were trained to discriminate speech sounds by consonant or vowel in quiet and in varying levels of background speech-shaped noise. Local field potential and multiunit responses were recorded from four auditory cortex fields in rats that had received 10 weeks of speech discrimination training. Our results reveal that training alters speech evoked responses in each of the auditory fields tested. The neural response to consonants was significantly stronger in anterior auditory field (AAF) and A1 following speech training. The neural response to vowels following speech training was significantly weaker in ventral auditory field (VAF) and posterior auditory field (PAF). This differential plasticity of consonant and vowel sound responses may result from the greater paired pulse depression, expanded low frequency tuning, reduced frequency selectivity, and lower tone thresholds, which occurred across the four auditory fields. These findings suggest that alterations in the distributed processing of behaviorally relevant sounds may contribute to robust speech discrimination. PMID:25827927
Lotto, A J; Kluender, K R
1998-05-01
When members of a series of synthesized stop consonants varying acoustically in F3 characteristics and varying perceptually from /da/ to /ga/ are preceded by /al/, subjects report hearing more /ga/ syllables relative to when each member is preceded by /ar/ (Mann, 1980). It has been suggested that this result demonstrates the existence of a mechanism that compensates for coarticulation via tacit knowledge of articulatory dynamics and constraints, or through perceptual recovery of vocal-tract dynamics. The present study was designed to assess the degree to which these perceptual effects are specific to qualities of human articulatory sources. In three experiments, series of consonant-vowel (CV) stimuli varying in F3-onset frequency (/da/-/ga/) were preceded by speech versions or nonspeech analogues of /al/ and /ar/. The effect of liquid identity on stop consonant labeling remained when the preceding VC was produced by a female speaker and the CV syllable was modeled after a male speaker's productions. Labeling boundaries also shifted when the CV was preceded by a sine wave glide modeled after F3 characteristics of /al/ and /ar/. Identifications shifted even when the preceding sine wave was of constant frequency equal to the offset frequency of F3 from a natural production. These results suggest an explanation in terms of general auditory processes as opposed to recovery of or knowledge of specific articulatory dynamics.
Remote programming of cochlear implants: a telecommunications model.
McElveen, John T; Blackburn, Erin L; Green, J Douglas; McLear, Patrick W; Thimsen, Donald J; Wilson, Blake S
2010-09-01
The objective was to evaluate the effectiveness of remote programming for cochlear implants, through a retrospective review of cochlear implant performance in patients who had undergone mapping and programming of their cochlear implant via remote connection through the Internet. Postoperative Hearing in Noise Test and Consonant/Nucleus/Consonant word scores for 7 patients who had undergone remote mapping and programming of their cochlear implant were compared with the mean scores of 7 patients who had been programmed directly by the same audiologist over a 12-month period. Times required for remote and direct programming were also compared, and the quality of the Internet connection was assessed using standardized measures. Remote programming was performed via a virtual private network, with a separate software program used for video and audio linkage. All 7 patients were programmed successfully via remote connectivity, and no untoward patient experiences were encountered. No statistically significant differences were found in comparing postoperative Hearing in Noise Test and Consonant/Nucleus/Consonant word scores for patients who had undergone remote programming versus a similar group of patients whose cochlear implants had been programmed directly. Remote programming did not require significantly more programming time from the audiologist with these 7 patients. Remote programming of a cochlear implant can thus be performed safely, without any deterioration in the quality of the programming, and offers the potential to extend cochlear implantation to underserved areas in the United States and elsewhere.
Adaptation to an electropalatograph palate: acoustic, impressionistic, and perceptual data.
McLeod, Sharynne; Searl, Jeff
2006-05-01
The purpose of this study was to evaluate adaptation to the electropalatograph (EPG) from the perspective of consonant acoustics, listener perceptions, and speaker ratings. Seven adults with typical speech wore an EPG and pseudo-EPG palate over 2 days and produced syllables, read a passage, counted, and rated their adaptation to the palate. Consonant acoustics, listener ratings, and speaker ratings were analyzed. The spectral mean for the burst (/t/) and frication (/s/) was reduced for the first 60-120 min of wearing the pseudo-EPG palate. Temporal features (stop gap, frication, and syllable duration) were unaffected by wearing the pseudo-EPG palate. The EPG palate had a similar effect on consonant acoustics as the pseudo-EPG palate. Expert listener ratings indicated minimal to no change in speech naturalness or distortion from the pseudo-EPG or EPG palate. The sounds [see text] were most likely to be affected. Speaker self-ratings related to oral comfort, speech, tongue movement, appearance, and oral sensation were negatively affected by the presence of the palatal devices. Speakers detected a substantial difference when wearing a palatal device, but the effects on speech were minimal based on listener ratings. Spectral features of consonants were initially affected, although adaptation occurred. Wearing an EPG or pseudo-EPG palate for approximately 2 hr results in relatively normal-sounding speech with acoustic features similar to a no-palate condition.
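The spectral mean tracked in this study is the amplitude-weighted first spectral moment of a burst or frication frame. A small sketch under assumed analysis settings (Hann window, one frame; the study's exact windowing is not specified here):

```python
# Sketch: the spectral mean as the amplitude-weighted centroid of a
# Hann-windowed FFT magnitude spectrum; a drop in this value signals
# energy shifting downward in frequency during adaptation to the palate.
import numpy as np

def spectral_mean(frame, fs):
    """Amplitude-weighted mean frequency (Hz) of one analysis frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1 / fs)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

fs = 22050
noise_burst = np.random.default_rng(0).normal(size=1024)  # stand-in for a /t/ burst
print(f"spectral mean ≈ {spectral_mean(noise_burst, fs):.0f} Hz")
```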
Speech Perception Deficits in Mandarin-Speaking School-Aged Children with Poor Reading Comprehension
Liu, Huei-Mei; Tsao, Feng-Ming
2017-01-01
Previous studies have shown that children learning alphabetic writing systems who have language impairment or dyslexia exhibit speech perception deficits. However, whether such deficits exist in children learning logographic writing systems who have poor reading comprehension remains uncertain. To further explore this issue, the present study examined speech perception deficits in Mandarin-speaking children with poor reading comprehension. Two self-designed tasks, a consonant categorical perception task and a lexical tone discrimination task, were used to compare speech perception performance in children (n = 31, age range = 7;4–10;2) with poor reading comprehension and an age-matched typically developing group (n = 31, age range = 7;7–9;10). Results showed that the children with poor reading comprehension were less accurate in the consonant and lexical tone discrimination tasks and perceived speech contrasts less categorically than the matched group. The correlations between speech perception skills (i.e., consonant and lexical tone discrimination sensitivities and the slope of the consonant identification curve) and individuals' oral language and reading comprehension were stronger than the correlations between speech perception ability and word recognition ability. In conclusion, the results revealed that Mandarin-speaking children with poor reading comprehension exhibit less-categorical speech perception, suggesting that imprecise speech perception, especially lexical tone perception, is essential to accounting for difficulties in learning to read in Mandarin. PMID:29312031
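The "slope of the consonant identification curve" reported above is conventionally estimated by fitting a logistic function to identification proportions along a stimulus continuum; a shallower slope indicates less categorical perception. A sketch with invented continuum data, not the study's stimuli:

```python
# Sketch: estimating the identification-curve slope by fitting a logistic
# function to identification proportions along a 7-step continuum.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))   # boundary x0, slope k

steps = np.arange(1, 8, dtype=float)                       # 7-step continuum
p_ident = np.array([0.02, 0.05, 0.15, 0.55, 0.88, 0.96, 0.99])
(x0, k), _ = curve_fit(logistic, steps, p_ident, p0=[4.0, 1.0])
print(f"category boundary ≈ step {x0:.2f}; slope k ≈ {k:.2f}")
```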
Vowel reduction in word-final position by early and late Spanish-English bilinguals.
Byers, Emily; Yavas, Mehmet
2017-01-01
Vowel reduction is a prominent feature of American English, as well as other stress-timed languages. As a phonological process, vowel reduction neutralizes multiple vowel quality contrasts in unstressed syllables. For bilinguals whose native language is not characterized by large spectral and durational differences between tonic and atonic vowels, systematically reducing unstressed vowels to the central vowel space can be problematic. Failure to maintain this pattern of stressed-unstressed syllables in American English is one key element that contributes to a "foreign accent" in second language speakers. Reduced vowels, or "schwas," have also been identified as particularly vulnerable to the co-articulatory effects of adjacent consonants. The current study examined the effects of adjacent sounds on the spectral and temporal qualities of schwa in word-final position. Three groups of English-speaking adults were tested: Miami-based monolingual English speakers, early Spanish-English bilinguals, and late Spanish-English bilinguals. Subjects performed a reading task to examine their schwa productions in fluent speech when schwas were preceded by consonants from various points of articulation. Results indicated that monolingual English and late Spanish-English bilingual groups produced targeted vowel qualities for schwa, whereas early Spanish-English bilinguals lacked homogeneity in their vowel productions. This extends, to highly proficient bilingual speakers, prior claims that schwa is targetless in F2 position for native speakers. Though spectral qualities lacked homogeneity for early Spanish-English bilinguals, early bilinguals produced schwas with near native-like vowel duration. In contrast, late bilinguals produced schwas with significantly longer durations than English monolinguals or early Spanish-English bilinguals. Our results suggest that the temporal properties of a language are better integrated into second language phonologies than spectral qualities. Finally, we examined the role of nonstructural variables (e.g., linguistic history measures) in predicting native-like vowel duration. These factors included age of L2 learning, amount of L1 use, and self-reported bilingual dominance. Our results suggested that the sociolinguistic factors that predicted native-like reduced-vowel duration differed from those that predicted native-like vowel quality across multiple phonetic environments. PMID:28384234
Derakhshandeh, Fatemeh; Nikmaram, Mohammadreza; Hosseinabad, Hedieh Hashemi; Memarzadeh, Mehrdad; Taheri, Masoud; Omrani, Mohammadreza; Jalaie, Shohreh; Bijankhan, Mahmood; Sell, Debbie
2016-07-01
The aim of this study was to investigate the impact of an intensive 10-week course of articulation therapy on articulation errors in cleft lip and palate patients who have velopharyngeal insufficiency (VPI) and non-oral and passive cleft speech characteristics (CSCs). Five children with cleft palate (+/- cleft lip) with VPI and non-oral and passive CSCs underwent 40 intensive articulation therapy sessions over 10 weeks in a single-case experimental design. The percentages of non-oral CSCs (NCSCs), passive CSCs (PCSCs), stimulable consonants (SC), correct consonants in word imitation (CCI), and correct consonants in picture naming (CCN) were captured at baseline, during intervention, and in follow-up phases. Visual analysis and two effect-size indexes, Percentage of Nonoverlapping Data and Percentage of Improvement Rate Difference, were analyzed. Articulation therapy resulted in a visible decrease in NCSCs for all 5 participants across the intervention phases. Intervention was effective in changing the percentage of passive CSCs in two different ways: it reduced the PCSCs in three cases and resulted in an increase in PCSCs in the other two cases. This was interpreted as intervention having changed the non-oral CSCs to consonants produced within the oral cavity but with passive characteristics affecting manner of production, including weakness, nasalized plosives, and nasal realizations of plosives and fricatives. Percent SC increased throughout the intervention period in all five patients. All participants demonstrated an increase in the percentages of CCI and CCN, suggesting an increase in the consonant inventory. Follow-up data showed that all the subjects were able to maintain their ability to articulate learned phonemes correctly even after a 4-week break from intervention. This single-case experimental study supports the hypothesis that speech intervention in patients with VPI can result in an improvement in oral placements and passive CSCs.
Mühlenbeck, Cordelia; Liebal, Katja; Pritsch, Carla; Jacobsen, Thomas
2015-01-01
Research on colour preferences in humans and non-human primates suggests similar patterns of biases for and avoidance of specific colours, indicating that these colours are connected to a psychological reaction. Similarly, in the acoustic domain, approach reactions to consonant sounds (considered as positive) and avoidance reactions to dissonant sounds (considered as negative) have been found in human adults and children, and it has been demonstrated that non-human primates are able to discriminate between consonant and dissonant sounds. Yet it remains unclear whether the visual and acoustic approach–avoidance patterns remain consistent when both types of stimuli are combined, how they relate to and influence each other, and whether these are similar for humans and other primates. Therefore, to investigate whether gaze duration biases for colours are similar across primates and whether reactions to consonant and dissonant sounds cumulate with reactions to specific colours, we conducted an eye-tracking study in which we compared humans with one species of great apes, the orangutans. We presented four different colours either in isolation or in combination with consonant and dissonant sounds. We hypothesised that the viewing time for specific colours should be influenced by dissonant sounds and that previously existing avoidance behaviours with regard to colours should be intensified, reflecting their association with negative acoustic information. The results showed that the humans had constant gaze durations which were independent of the auditory stimulus, with a clear avoidance of yellow. In contrast, the orangutans did not show any clear gaze duration bias or avoidance of colours, and they were also not influenced by the auditory stimuli. In conclusion, our findings only partially support the previously identified pattern of biases for and avoidance of specific colours in humans and do not confirm such a pattern for orangutans. PMID:26466351
Auditory word identification in dyslexic and normally achieving readers.
Bruno, Jennifer L; Manis, Franklin R; Keating, Patricia; Sperling, Anne J; Nakamoto, Jonathan; Seidenberg, Mark S
2007-07-01
The integrity of phonological representation/processing in dyslexic children was explored with a gating task in which children listened to successively longer segments (gates) of a word. At each gate, the task was to decide what the entire word was. Responses were scored for overall accuracy as well as the children's sensitivity to coarticulation from the final consonant. As a group, dyslexic children were less able than normally achieving readers to detect coarticulation present in the vowel portion of the word, particularly on the most difficult items, namely those ending in a nasal sound. Hierarchical regression and path analyses indicated that phonological awareness mediated the relation of gating and general language ability to word and pseudoword reading ability.
Differences between conduction aphasia and Wernicke's aphasia.
Anzaki, F; Izumi, S
2001-07-01
Conduction aphasia and Wernicke's aphasia have been differentiated by the degree of auditory language comprehension. We quantitatively compared the speech sound errors of two conduction aphasia patients and three Wernicke's aphasia patients on various language modality tests. All of the patients were Japanese. The two conduction aphasia patients made "conduites d'approche" errors and phonological paraphasias. The patient with mild Wernicke's aphasia made various errors. In the patient with severe Wernicke's aphasia, neologism was observed. Phonological paraphasia in the two conduction aphasia patients seemed to occur when the examinee searched for the target word. They made more errors in vowels than in consonants of target words on the naming and repetition tests; they seemed to search for the target word via the correct consonant phoneme but an incorrect vowel phoneme in the table of the Japanese alphabet. The Wernicke's aphasia patients, who had severe impairment of auditory comprehension, made more errors in consonants than in vowels of target words. In conclusion, the utterances of conduction aphasia and those of Wernicke's aphasia are qualitatively distinct.
Lohmander, Anette; Lundeborg, Inger; Persson, Christina
2017-01-01
Normative language-based data are important for comparing the speech performance of clinical groups. The Swedish Articulation and Nasality Test (SVANTE) was developed to enable a detailed speech assessment. This study's aim was to present normative data on articulation and nasality in Swedish speakers. Single-word production, sentence repetition and connected speech were collected using SVANTE in 443 individuals. Means (SDs) and prevalences in the groups of 3-, 5-, 7-, 10-, 16- and 19-year-olds were calculated from phonetic transcriptions or ordinal rating. For the 3- and 5-year-olds, a consonant inventory was also determined. The mean percentage of oral consonants correct ranged from 77% at age 3 to 99% at age 19. At age 5, a mean of 96% was already reached, and the consonant inventory was established except for /s/, /r/, /ɕ/. The norms on the SVANTE, which also includes a short version, will be useful in the interpretation of speech outcomes.
Carvalho Lima, Vania L C; Collange Grecco, Luanda A; Marques, Valéria C; Fregni, Felipe; Brandão de Ávila, Clara R
2016-04-01
The aim of this study was to describe the results of the first case combining integrative speech therapy with anodal transcranial direct current stimulation (tDCS) over Broca's area in a child with cerebral palsy. The ABFW phonology test was used to analyze speech based on the Percentage of Correct Consonants (PCC) and Percentage of Correct Consonants - Revised (PCC-R). After treatment, increases were found in both PCC (Imitation: 53.63%-78.10%; Nomination: 53.19%-70.21%) and PCC-R (Imitation: 64.54%-83.63%; Nomination: 61.70%-77.65%). Moreover, distortions and substitutions decreased, and oral performance improved, especially tongue mobility (AMIOFE-mobility: before = 4, after = 7). The child demonstrated a clinically important improvement in speech fluency, as shown by the number of correct consonants and phonemes acquired in imitation. Based on these promising findings, continuing research in this field should be conducted with controlled clinical trials.
The development of motor synergies in children: Ultrasound and acoustic measurements
Noiray, Aude; Ménard, Lucie; Iskarous, Khalil
2013-01-01
The present study focuses on differences in lingual coarticulation between French children and adults. The specific question pursued is whether 4–5 year old children have already acquired a synergy observed in adults in which the tongue back helps the tip in the formation of alveolar consonants. Locus equations, estimated from acoustic and ultrasound imaging data, were used to compare coarticulation degree between adults and children and further investigate differences in motor synergy between the front and back parts of the tongue. Results show similar slope and intercept patterns for adults and children in both the acoustic and articulatory domains, with an effect of place of articulation in both groups between alveolar and non-alveolar consonants. These results suggest that 4–5 year old children (1) have learned the motor synergy investigated and (2) have developed a pattern of coarticulatory resistance depending on a consonant's place of articulation. Also, results show that acoustic locus equations can be used to gauge the presence of motor synergies in children. PMID:23297916
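A locus equation, as used here, is a linear regression of F2 at consonant release on F2 at the vowel midpoint, fit per consonant; slopes near 1 indicate strong CV coarticulation and slopes near 0 indicate coarticulatory resistance. A sketch with invented formant values:

```python
# Sketch: fitting a locus equation by ordinary least squares. Each data
# point is one CV token of the same consonant with a different vowel.
import numpy as np

f2_vowel = np.array([2200.0, 1800.0, 1400.0, 1000.0])  # F2 at vowel midpoint (Hz)
f2_onset = np.array([1900.0, 1750.0, 1500.0, 1350.0])  # F2 at CV release (Hz)

slope, intercept = np.polyfit(f2_vowel, f2_onset, 1)
print(f"locus equation: F2onset = {slope:.2f} * F2vowel + {intercept:.0f} Hz")
```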
Quantization noise in digital speech. M.S. Thesis- Houston Univ.
NASA Technical Reports Server (NTRS)
Schmidt, O. L.
1972-01-01
The amount of quantization noise generated in a digital-to-analog converter is dependent on the number of bits or quantization levels used to digitize the analog signal in the analog-to-digital converter. The minimum number of quantization levels and the minimum sample rate were derived for a digital voice channel. A sample rate of 6000 samples per second and lowpass filters with a 3 dB cutoff of 2400 Hz are required for 100 percent sentence intelligibility. Consonant sounds are the first speech components to be degraded by quantization noise. A compression amplifier can be used to increase the weighting of the consonant sound amplitudes in the analog-to-digital converter. An expansion network must be installed at the output of the digital-to-analog converter to restore the original weighting of the consonant sounds. This technique results in 100 percent sentence intelligibility for a sample rate of 5000 samples per second, eight quantization levels, and lowpass filters with a 3 dB cutoff of 2000 Hz.
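The bit-count/noise trade-off in this thesis follows from uniform quantization, where each added bit buys roughly 6 dB of signal-to-quantization-noise ratio. A sketch of the 8-level (3-bit) case quoted above, with an invented test tone standing in for speech:

```python
# Sketch: uniform mid-tread quantization to 2**bits levels and the
# resulting signal-to-quantization-noise ratio (SQNR). Low-amplitude
# consonant sounds sit closer to the noise floor, which is why the
# compressor/expander pair described above helps them survive.
import numpy as np

def quantize(x, bits):
    step = 2.0 / (2 ** bits)              # full scale assumed to be [-1, 1)
    return np.clip(np.round(x / step) * step, -1.0, 1.0 - step)

fs = 5000
t = np.arange(0, 0.1, 1 / fs)
x = 0.9 * np.sin(2 * np.pi * 440 * t)     # stand-in for a voiced sound
y = quantize(x, 3)                        # eight quantization levels
sqnr = 10 * np.log10(np.mean(x ** 2) / np.mean((x - y) ** 2))
print(f"SQNR at 3 bits ≈ {sqnr:.1f} dB")  # near the classic ~6 dB per bit
```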
Articulation generalization of voiced-voiceless sounds in hearing-impaired children.
McReynolds, L V; Jetzke, E
1986-11-01
Eight hearing-impaired children participated in a study exploring the effect of training (+) or (-) voicing on generalization to cognates. In an experimental multiple baseline study across behaviors, children were trained on pairs of voiced and voiceless target sounds that they had previously omitted in final position. The pairs consisted of the /t/ and /g/ and the /d/ and /k/. When /t/ was trained, generalization was tested to (a) untrained words with the /t/ in the final position and (b) untrained words containing /d/ (the cognate) of the /t/. In like manner, when /d/ was trained, generalization was tested to both the /d/ and /t/ words. The /g/ and /k/ received identical treatment. A contrast procedure was used to teach the children to produce the final consonants. When training criterion was reached, generalization was tested. Results showed that 6 of the 8 children generalized both the voiced and unvoiced target sounds to 50% or more of the target sound probe items. Results also indicated that more generalization occurred to the voiceless cognate from voiced target sound training than occurred to voiced cognates from voiceless target sound training.
Biomechanically Preferred Consonant-Vowel Combinations Fail to Appear in Adult Spoken Corpora
Whalen, D. H.; Giulivi, Sara; Nam, Hosung; Levitt, Andrea G.; Hallé, Pierre; Goldstein, Louis M.
2012-01-01
Certain consonant/vowel (CV) combinations are more frequent than would be expected from the individual C and V frequencies alone, both in babbling and, to a lesser extent, in adult language, based on dictionary counts: Labial consonants co-occur with central vowels more often than chance would dictate; coronals co-occur with front vowels, and velars with back vowels (Davis & MacNeilage, 1994). Plausible biomechanical explanations have been proposed, but it is also possible that infants are mirroring the frequency of the CVs that they hear. As noted, previous assessments of adult language were based on dictionaries; these “type” counts are incommensurate with the babbling measures, which are necessarily “token” counts. We analyzed the tokens in two spoken corpora for English, two for French and one for Mandarin. We found that the adult spoken CV preferences correlated with the type counts for Mandarin and French, not for English. Correlations between the adult spoken corpora and the babbling results had all three possible outcomes: significantly positive (French), uncorrelated (Mandarin), and significantly negative (English). There were no correlations of the dictionary data with the babbling results when we consider all nine combinations of consonants and vowels. The results indicate that spoken frequencies of CV combinations can differ from dictionary (type) counts and that the CV preferences apparent in babbling are biomechanically driven and can ignore the frequencies of CVs in the ambient spoken language. PMID:23420980
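The over-representation test used in this literature compares observed CV token counts against counts expected if consonants and vowels combined independently. A sketch with an invented toy corpus, not the study's data:

```python
# Sketch: observed-to-expected ratios for CV tokens. Under independence,
# expected count = count(C) * count(V) / N; ratios above 1 flag
# over-represented combinations (e.g., labial C + central V).
from collections import Counter

tokens = ["ba", "ba", "ba", "di", "di", "gu", "gu", "gu", "da", "bi"]
cv = Counter(tokens)
c_count = Counter(t[0] for t in tokens)
v_count = Counter(t[1] for t in tokens)
n = len(tokens)

for pair, observed in sorted(cv.items()):
    expected = c_count[pair[0]] * v_count[pair[1]] / n
    print(f"{pair}: O/E = {observed / expected:.2f}")
```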
Hodges, Rosemary; Munro, Natalie; Baker, Elise; McGregor, Karla; Heard, Rob
2017-01-01
Although verbal imitation can provide a valuable window into the developing language abilities of toddlers, some toddlers find verbal imitation challenging and will not comply with tests that involve elicited verbal imitation. The characteristics of stimuli that are offered to toddlers for imitation may influence how easy or hard it is for them to imitate. This study presents a new test of elicited imitation, the Monosyllable Imitation Test for Toddlers (MITT), comprising stimuli of varying characteristics and test features designed to optimize compliance. The aims were to investigate whether the stimulus characteristics of neighbourhood density and consonant complexity have independent and/or convergent influences on imitation accuracy, and to examine non-compliance rates and diagnostic accuracy of the MITT and an existing test, the Test of Early Nonword Repetition (TENR) (Stokes and Klee 2009a). Fifty-two toddlers (25-35 months) participated. Twenty-six had typically developing language (TDs) and 26 were defined as late talkers (LTs) based on parent-reported vocabulary. The MITT stimuli were created by manipulating both neighbourhood density (dense or sparse) and consonant complexity (early- or late-developing initial consonant). The MITT was designed to maximize compliance by: (1) using eight monosyllabic stimuli, (2) providing three exposures to stimuli and (3) embedding imitation in a motivating context: a computer animation with reasons for imitation. Stimulus characteristics influenced imitation accuracy in TDs and LTs. For TDs, neighbourhood density had an independent influence, whereas for LTs consonant complexity had an independent influence. These characteristics also had convergent influences. For TDs, stimuli were all equally easy to imitate, except those that were both sparse and contained a late-developing consonant, which were harder to imitate. For LTs, stimuli that were both dense and contained an early-developing consonant were easier to imitate than any other stimuli. Two LTs and no TDs were non-compliant with the MITT. With the TENR, five LTs and two TDs were non-compliant. The MITT and TENR yielded similar levels of diagnostic sensitivity, but the TENR offered higher specificity rates. Subsets of stimuli from the MITT and the TENR also showed diagnostic promise when explored post hoc. Stimulus characteristics converge to influence imitation accuracy in both TD and LT toddlers and therefore should be considered when designing stimuli. The MITT resulted in better compliance than the TENR, but the TENR offered higher specificity. Insights about late talking, elicited imitation and speech production capabilities are discussed.
Possible-word constraints in Cantonese speech segmentation.
Yip, Michael C
2004-03-01
A Cantonese syllable-spotting experiment was conducted to examine whether the Possible-Word Constraint (PWC), proposed by Norris, McQueen, Cutler, and Butterfield (1997), applies in Cantonese speech segmentation. In the experiment, listeners were asked to spot the target Cantonese syllable in a series of nonsense sound strings. Results suggested that listeners found it more difficult to spot the target syllable [kDm1] in nonsense strings where it was attached to a single consonant [tkDm1] than in nonsense strings where it was attached to a vowel [a:kDm1] or a pseudo-syllable [khow1kDm1]. The results further support the view that the PWC is a language-universal mechanism for segmenting continuous speech.
2018-01-01
This study tested the hypothesis that object-based attention modulates the discrimination of level increments in stop-consonant noise bursts. With consonant-vowel-consonant (CvC) words consisting of an ≈80-dB vowel (v), a pre-vocalic (Cv) and a post-vocalic (vC) stop-consonant noise burst (≈60-dB SPL), we measured discrimination thresholds (LDTs) for level increments (ΔL) in the noise bursts presented either in CvC context or in isolation. In the 2-interval 2-alternative forced-choice task, each observation interval presented a CvC word (e.g., /pæk/ /pæk/), and normal-hearing participants had to discern ΔL in the Cv or vC burst. Based on the linguistic word labels, the auditory events of each trial were perceived as two auditory objects (Cv-v-vC and Cv-v-vC) that group together the bursts and vowels, hindering selective attention to ΔL. To discern ΔL in Cv or vC, the events must be reorganized into three auditory objects: the to-be-attended pre-vocalic (Cv–Cv) or post-vocalic burst pair (vC–vC), and the to-be-ignored vowel pair (v–v). Our results suggest that instead of being automatic, this reorganization requires training, in spite of using familiar CvC words. Relative to bursts in isolation, bursts in context always produced inferior ΔL discrimination accuracy (a context effect), which depended strongly on the acoustic separation between the bursts and the vowel, with discrimination being much keener for the object apart from (post-vocalic) than for the object adjoining (pre-vocalic) the vowel (a temporal-position effect). Variability in CvC dimensions that did not alter the noise-burst perceptual grouping had minor effects on discrimination accuracy. In addition to being robust and persistent, these effects are relatively general, emerging in forced-choice tasks with one or two observation intervals, with or without variability in the temporal position of ΔL, and with either fixed or roving CvC standards. The results lend support to the hypothesis. PMID:29364931
Dressler, William W; Balieiro, Mauro C; Ferreira de Araújo, Luiza; Silva, Wilson A; Ernesto Dos Santos, José
2016-07-01
Research on gene-environment interaction was facilitated by breakthroughs in molecular biology in the late 20th century, especially in the study of mental health. There is a reliable interaction between candidate genes for depression and childhood adversity in relation to mental health outcomes. The aim of this paper is to explore the role of culture in this process in an urban community in Brazil. The specific cultural factor examined is cultural consonance, or the degree to which individuals are able to successfully incorporate salient cultural models into their own beliefs and behaviors. It was hypothesized that cultural consonance in family life would mediate the interaction of genotype and childhood adversity. In a study of 402 adult Brazilians from diverse socioeconomic backgrounds, conducted from 2011 to 2014, the interaction of reported childhood adversity and a polymorphism in the 2A serotonin receptor was associated with higher depressive symptoms. Further analysis showed that the gene-environment interaction was mediated by cultural consonance in family life, and that these effects were more pronounced in lower social class neighborhoods. The findings reinforce the role of the serotonergic system in the regulation of stress response and learning and memory, and how these processes in turn interact with environmental events and circumstances. Furthermore, these results suggest that gene-environment interaction models should incorporate a wider range of environmental experience and more complex pathways to better understand how genes and the environment combine to influence mental health outcomes.
Maternal Vocal Feedback to 9-Month-Old Infant Siblings of Children with ASD
Talbott, Meagan R.; Nelson, Charles A.; Tager-Flusberg, Helen
2016-01-01
Infant siblings of children with autism spectrum disorder display differences in early language and social communication skills beginning as early as the first year of life. While environmental influences on early language development are well documented in other infant populations, they have received relatively little attention inside of the infant sibling context. In this study, we analyzed home video diaries collected prospectively as part of a longitudinal study of infant siblings. Infant vowel and consonant-vowel vocalizations and maternal language-promoting and non-promoting verbal responses were scored for 30 infant siblings and 30 low risk control infants at 9 months of age. Analyses evaluated whether infant siblings or their mothers exhibited differences from low risk dyads in vocalization frequency or distribution, and whether mothers’ responses were associated with other features of the high risk context. Analyses were conducted with respect to both initial risk group and preliminary outcome classification. Overall, we found no differences in infants’ consonant-vowel vocalizations, the frequency of overall maternal utterances, or the distribution of mothers’ response types. Both groups of infants produced more vowel than consonant-vowel vocalizations, and both groups of mothers responded to consonant-vowel vocalizations with more language-promoting than non-promoting responses. These results indicate that as a group, mothers of high risk infants provide equally high quality linguistic input to their infants in the first year of life and suggest that impoverished maternal linguistic input does not contribute to high risk infants’ initial language difficulties. Implications for intervention strategies are also discussed. PMID:26174704
Hallé, Pierre A.; Ridouane, Rachid; Best, Catherine T.
2016-01-01
In a discrimination experiment on several Tashlhiyt Berber singleton-geminate contrasts, we find that French listeners encounter substantial difficulty compared to native speakers. Native listeners of Tashlhiyt perform near ceiling level on all contrasts. French listeners perform better on final contrasts such as fit-fitt than initial contrasts such as bi-bbi or sir-ssir. That is, French listeners are more sensitive to silent closure duration in word-final voiceless stops than to either voiced murmur or frication duration of fully voiced stops or voiceless fricatives in word-initial position. We propose, tentatively, that native speakers of French, a language in which gemination is usually not considered to be phonemic, have not acquired quantity contrasts but yet exhibit a presumably universal sensitivity to rhythm, whereby listeners are able to perceive and compare the relative temporal distance between beats given by successive salient phonetic events such as a sequence of vowel nuclei. PMID:26973551
The development of visual speech perception in Mandarin Chinese-speaking children.
Chen, Liang; Lei, Jianghua
2017-01-01
The present study aimed to investigate the development of visual speech perception in Chinese-speaking children. Children aged 7, 13 and 16 were asked to visually identify both consonant and vowel sounds in Chinese as quickly and accurately as possible. Results revealed (1) an increase in accuracy of visual speech perception between ages 7 and 13, after which the accuracy rate either stagnates or drops; and (2) a U-shaped development pattern in speed of perception, with peak performance in 13-year-olds. Results also showed that across all age groups, the overall levels of accuracy rose, whereas the response times fell for simplex finals, complex finals and initials. These findings suggest that (1) visual speech perception in Chinese is a developmental process that is acquired over time and is still fine-tuned well into late adolescence; and (2) factors other than cross-linguistic differences in phonological complexity and degrees of reliance on visual information are involved in the development of visual speech perception.
DETECTION AND IDENTIFICATION OF SPEECH SOUNDS USING CORTICAL ACTIVITY PATTERNS
Centanni, T.M.; Sloan, A.M.; Reed, A.C.; Engineer, C.T.; Rennaker, R.; Kilgard, M.P.
2014-01-01
We have developed a classifier capable of locating and identifying speech sounds using activity from rat auditory cortex with an accuracy equivalent to behavioral performance without the need to specify the onset time of the speech sounds. This classifier can identify speech sounds from a large speech set within 40 ms of stimulus presentation. To compare the temporal limits of the classifier to behavior, we developed a novel task that requires rats to identify individual consonant sounds from a stream of distracter consonants. The classifier successfully predicted the ability of rats to accurately identify speech sounds for syllable presentation rates up to 10 syllables per second (up to 17.9 ± 1.5 bits/sec), which is comparable to human performance. Our results demonstrate that the spatiotemporal patterns generated in primary auditory cortex can be used to quickly and accurately identify consonant sounds from a continuous speech stream without prior knowledge of the stimulus onset times. Improved understanding of the neural mechanisms that support robust speech processing in difficult listening conditions could improve the identification and treatment of a variety of speech processing disorders. PMID:24286757
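As a loose illustration of onset-free classification of the kind described above, the sketch below slides per-consonant templates along a multichannel activity stream and labels each window by nearest template. The data, window size, and Euclidean distance are all stand-in assumptions; the published classifier's details may differ.

```python
# Sketch: nearest-template labeling of sliding windows over a multichannel
# activity stream, so no stimulus onset time is needed. `templates` holds
# one (channels x window) pattern per consonant; all values are invented.
import numpy as np

def classify_stream(activity, templates, step=1):
    """activity: (channels, time). Returns (window_start, label) pairs."""
    win = next(iter(templates.values())).shape[1]
    labels = []
    for start in range(0, activity.shape[1] - win + 1, step):
        window = activity[:, start:start + win]
        best = min(templates, key=lambda c: np.linalg.norm(window - templates[c]))
        labels.append((start, best))
    return labels

rng = np.random.default_rng(0)
templates = {"d": rng.normal(size=(8, 40)), "g": rng.normal(size=(8, 40))}
stream = np.concatenate([templates["d"], templates["g"]], axis=1)
print(classify_stream(stream, templates, step=40))  # -> [(0, 'd'), (40, 'g')]
```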
Conditioned Place Preference and Aversion for Music in a Virtual Reality Environment
Molet, Mikaël; Billiet, Gauthier; Bardo, Michael T.
2012-01-01
The use of a virtual reality environment (VRE) enables behavioral scientists to create different spatial contexts in which human participants behave freely, while still confined to the laboratory. In this article, VRE was used to study conditioned place preference (CPP) and aversion (CPA). In Experiment 1, half of the participants were asked to visit a house for two min with consonant music and then they were asked to visit an alternate house with static noise for two min, whereas the remaining participants did the visits in reverse order. In Experiment 2, we used the same design as Experiment 1, except for replacing consonant music with dissonant music. After conditioning in both experiments, the participants were given a choice between spending time in the two houses. In Experiment 1, participants spent more time in the house associated with the consonant music, thus showing a CPP toward that house. In Experiment 2, participants spent less time in the house associated with the dissonant music, thus showing a CPA for that house. These results support VRE as a tool to extend research on CPP/CPA in humans. PMID:23089383
Ebeling, Martin
2008-10-01
A mathematical model is presented here to explain the sensation of consonance and dissonance on the basis of neuronal coding and the properties of a neuronal periodicity detection mechanism. This mathematical model makes use of physiological data from a neuronal model of periodicity analysis in the midbrain, whose operation can be described mathematically by autocorrelation functions with regard to time windows. Musical intervals produce regular firing patterns in the auditory nerve that depend on the vibration ratio of the two tones. The mathematical model makes it possible to define a measure for the degree of these regularities for each vibration ratio. It turns out that this measure value is in line with the degree of tonal fusion as described by Stumpf [Tonpsychologie (Psychology of Tones) (Knuf, Hilversum), reprinted 1965]. This finding makes it probable that tonal fusion is a consequence of certain properties of the neuronal periodicity detection mechanism. Together with strong roughness resulting from interval tones with fundamentals close together or close to the octave, this neuronal mechanism may be regarded as the basis of consonance and dissonance.
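As a rough waveform-level analogue of the model's windowed autocorrelation (the real model operates on neuronal firing patterns, not waveforms), the sketch below scores periodicity regularity for two intervals; the decaying (biased) autocorrelation estimate loosely plays the role of the finite time window. The tone frequencies, durations, and lag range are invented for display.

```python
# Sketch: a regularity score from the biased autocorrelation of a two-tone
# mixture. A simple vibration ratio (3:2, a fifth) yields a stronger
# periodicity peak than an irrational ratio (a tritone), echoing the
# consonance ordering the model derives from neuronal firing regularity.
import numpy as np

def periodicity_strength(f1, f2, fs=16000, dur=0.2):
    t = np.arange(0, dur, 1 / fs)
    x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
    ac /= ac[0]                                        # normalize: ac[0] = 1
    return ac[20:400].max()    # best peak in the pitch range (1.25-25 ms)

print(periodicity_strength(440.0, 660.0))  # perfect fifth (3:2): higher
print(periodicity_strength(440.0, 622.0))  # tritone (~1:1.414): lower
```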
Measurement of voice onset time in maxillectomy patients.
Hattori, Mariko; Sumita, Yuka I; Taniguchi, Hisashi
2014-01-01
Objective speech evaluation using acoustic measurement is needed for the proper rehabilitation of maxillectomy patients. For digital evaluation of consonants, measurement of voice onset time is one option. However, voice onset time has not been measured in maxillectomy patients, as their consonant sound spectra exhibit unique characteristics that make the measurement of voice onset time challenging. In this study, we established criteria for measuring voice onset time in maxillectomy patients for objective speech evaluation. We examined voice onset time for /ka/ and /ta/ in 13 maxillectomy patients by calculating the number of valid measurements of voice onset time out of three trials for each syllable. Wilcoxon's signed rank test showed that voice onset time measurements were more successful for /ka/ and /ta/ when a prosthesis was used (Z = -2.232, P = 0.026 and Z = -2.401, P = 0.016, respectively) than when a prosthesis was not used. These results indicate that prosthesis use affected voice onset time measurement in these patients. Although more research in this area is needed, measurement of voice onset time has the potential to be used to evaluate consonant production in maxillectomy patients wearing a prosthesis.
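Voice onset time is the interval between the stop release burst and the onset of voicing. The sketch below estimates it from a synthetic waveform with two crude landmark detectors; the thresholds, frame size, and /ta/-like test signal are invented, not the clinical criteria established in the study.

```python
# Sketch: VOT as the lag between the release burst and voicing onset,
# using an amplitude threshold for the burst and a sustained-energy
# (RMS) threshold for voicing.
import numpy as np

def measure_vot(x, fs, burst_thresh=0.1, voice_thresh=0.3, frame=0.005):
    """Return VOT in ms: time from the burst to the first voiced frame."""
    burst_idx = np.argmax(np.abs(x) > burst_thresh)   # first transient sample
    n = int(frame * fs)
    rms = np.array([np.sqrt(np.mean(x[i:i + n] ** 2))
                    for i in range(burst_idx, len(x) - n, n)])
    return np.argmax(rms > voice_thresh) * n / fs * 1000.0

fs = 16000
burst = 0.2 * np.random.default_rng(1).normal(size=int(0.01 * fs))  # 10 ms noise
vowel = 0.8 * np.sin(2 * np.pi * 120 * np.arange(int(0.1 * fs)) / fs)
print(f"VOT ≈ {measure_vot(np.concatenate([burst, vowel]), fs):.1f} ms")  # ≈ 10 ms
```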
Jürgens, Tim; Ewert, Stephan D; Kollmeier, Birger; Brand, Thomas
2014-03-01
Consonant recognition was assessed in normal-hearing (NH) and hearing-impaired (HI) listeners in quiet as a function of speech level using a nonsense logatome test. Average recognition scores were analyzed and compared to recognition scores of a speech recognition model. In contrast to commonly used spectral speech recognition models operating on long-term spectra, a "microscopic" model operating in the time domain was used. Variations of the model (accounting for hearing impairment) and different model parameters (reflecting cochlear compression) were tested. Using these model variations this study examined whether speech recognition performance in quiet is affected by changes in cochlear compression, namely, a linearization, which is often observed in HI listeners. Consonant recognition scores for HI listeners were poorer than for NH listeners. The model accurately predicted the speech reception thresholds of the NH and most HI listeners. A partial linearization of the cochlear compression in the auditory model, while keeping audibility constant, produced higher recognition scores and improved the prediction accuracy. However, including listener-specific information about the exact form of the cochlear compression did not improve the prediction further.
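The "linearization" of cochlear compression tested in this model can be pictured with a broken-stick input-output function: linear growth below a knee and compressed growth above it, with the compressive slope pushed toward 1 to mimic a damaged cochlea. A generic sketch, not the study's auditory model; the knee and ratio values are invented:

```python
# Sketch: a broken-stick cochlear input-output function. `ratio` is the
# compressive slope above the knee; pushing it toward 1 "linearizes" the
# compression while low-level gain (audibility) stays fixed.
import numpy as np

def cochlear_io(level_db, knee=30.0, ratio=0.25):
    """Output level (dB) for input level (dB SPL)."""
    level_db = np.asarray(level_db, dtype=float)
    return np.where(level_db <= knee,
                    level_db,                          # linear below the knee
                    knee + ratio * (level_db - knee))  # compressive above it

levels = np.array([20.0, 50.0, 80.0])
print(cochlear_io(levels, ratio=0.25))  # normal compression
print(cochlear_io(levels, ratio=0.90))  # partially linearized (HI-like)
```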
Subglottal resonances of adult male and female native speakers of American English.
Lulich, Steven M; Morton, John R; Arsikere, Harish; Sommers, Mitchell S; Leung, Gary K F; Alwan, Abeer
2012-10-01
This paper presents a large-scale study of subglottal resonances (SGRs; the resonant frequencies of the tracheo-bronchial tree) and their relations to various acoustical and physiological characteristics of speakers. The paper presents data from a corpus of simultaneous microphone and accelerometer recordings of consonant-vowel-consonant (CVC) words embedded in a carrier phrase spoken by 25 male and 25 female native speakers of American English ranging in age from 18 to 24 yr. The corpus contains 17,500 utterances of 14 American English monophthongs, diphthongs, and the rhotic approximant [ɹ]…
Infant word recognition: Insights from TRACE simulations.
Mayor, Julien; Plunkett, Kim
2014-02-01
The TRACE model of speech perception (McClelland & Elman, 1986) is used to simulate results from the infant word recognition literature, to provide a unified, theoretical framework for interpreting these findings. In a first set of simulations, we demonstrate how TRACE can reconcile apparently conflicting findings suggesting, on the one hand, that consonants play a pre-eminent role in lexical acquisition (Nespor, Peña & Mehler, 2003; Nazzi, 2005), and on the other, that there is a symmetry in infant sensitivity to vowel and consonant mispronunciations of familiar words (Mani & Plunkett, 2007). In a second series of simulations, we use TRACE to simulate infants' graded sensitivity to mispronunciations of familiar words as reported by White and Morgan (2008). An unexpected outcome is that TRACE fails to demonstrate graded sensitivity for White and Morgan's stimuli unless the inhibitory parameters in TRACE are substantially reduced. We explore the ramifications of this finding for theories of lexical development. Finally, TRACE mimics the impact of phonological neighbourhoods on early word learning reported by Swingley and Aslin (2007). TRACE offers an alternative explanation of these findings in terms of mispronunciations of lexical items rather than imputing word learning to infants. Together these simulations provide an evaluation of Developmental (Jusczyk, 1993) and Familiarity (Metsala, 1999) accounts of word recognition by infants and young children. The findings point to a role for both theoretical approaches whereby vocabulary structure and content constrain infant word recognition in an experience-dependent fashion, and highlight the continuity in the processes and representations involved in lexical development during the second year of life.
McNeil, M.R.; Katz, W.F.; Fossett, T.R.D.; Garst, D.M.; Szuminsky, N.J.; Carter, G.; Lim, K.Y.
2010-01-01
Apraxia of speech (AOS) is a motor speech disorder characterized by disturbed spatial and temporal parameters of movement. Research on motor learning suggests that augmented feedback may provide a beneficial effect for training movement. This study examined the effects of the presence and frequency of online augmented visual kinematic feedback (AVKF) and clinician-provided perceptual feedback on speech accuracy in 2 adults with acquired AOS. Within a single-subject multiple-baseline design, AVKF was provided using electromagnetic midsagittal articulography (EMA) in 2 feedback conditions (50 or 100%). Articulator placement was specified for speech motor targets (SMTs). Treated and baselined SMTs were in the initial or final position of single-syllable words, in varying consonant-vowel or vowel-consonant contexts. SMTs were selected based on each participant's pre-assessed erred productions. Productions were digitally recorded and online perceptual judgments of accuracy (including segment and intersegment distortions) were made. Inter- and intra-judge reliability for perceptual accuracy was high. Results measured by visual inspection and effect size revealed positive acquisition and generalization effects for both participants. Generalization occurred across vowel contexts and to untreated probes. Results of the frequency manipulation were confounded by presentation order. Maintenance of learned and generalized effects was demonstrated for 1 participant. These data provide support for the role of augmented feedback in treating speech movements that result in perceptually accurate speech production. Future investigations will explore the independent contributions of each feedback type (i.e. kinematic and perceptual) in producing efficient and effective training of SMTs in persons with AOS. PMID:20424468
Attention effects on the processing of task-relevant and task-irrelevant speech sounds and letters
Mittag, Maria; Inauri, Karina; Huovilainen, Tatu; Leminen, Miika; Salo, Emma; Rinne, Teemu; Kujala, Teija; Alho, Kimmo
2013-01-01
We used event-related brain potentials (ERPs) to study effects of selective attention on the processing of attended and unattended spoken syllables and letters. Participants were presented with syllables randomly occurring in the left or right ear and spoken by different voices and with a concurrent foveal stream of consonant letters written in darker or lighter fonts. During auditory phonological (AP) and non-phonological tasks, they responded to syllables in a designated ear starting with a vowel and spoken by female voices, respectively. These syllables occurred infrequently among standard syllables starting with a consonant and spoken by male voices. During visual phonological and non-phonological tasks, they responded to consonant letters with names starting with a vowel and to letters written in dark fonts, respectively. These letters occurred infrequently among standard letters with names starting with a consonant and written in light fonts. To examine genuine effects of attention and task on ERPs not overlapped by ERPs associated with target processing or deviance detection, these effects were studied only in ERPs to auditory and visual standards. During selective listening to syllables in a designated ear, ERPs to the attended syllables were negatively displaced during both phonological and non-phonological auditory tasks. Selective attention to letters elicited an early negative displacement and a subsequent positive displacement (Pd) of ERPs to attended letters that was larger during the visual phonological than the non-phonological task, suggesting a higher demand for attention during the visual phonological task. Active suppression of unattended speech during the AP and non-phonological tasks and during the visual phonological task was suggested by a rejection positivity (RP) to unattended syllables. We also found evidence for suppression of the processing of task-irrelevant visual stimuli in visual ERPs during auditory tasks involving left-ear syllables. PMID:24348324
Koerner, Tess K; Zhang, Yang; Nelson, Peggy B; Wang, Boxiang; Zou, Hui
2017-07-01
This study examined how speech babble noise differentially affected the auditory P3 responses and the associated neural oscillatory activities for consonant and vowel discrimination in relation to segmental- and sentence-level speech perception in noise. The data were collected from 16 normal-hearing participants in a double-oddball paradigm that contained a consonant (/ba/ to /da/) and vowel (/ba/ to /bu/) change in quiet and noise (speech-babble background at a -3 dB signal-to-noise ratio) conditions. Time-frequency analysis was applied to obtain inter-trial phase coherence (ITPC) and event-related spectral perturbation (ERSP) measures in delta, theta, and alpha frequency bands for the P3 response. Behavioral measures included percent correct phoneme detection and reaction time as well as percent correct IEEE sentence recognition in quiet and in noise. Linear mixed-effects models were applied to determine possible brain-behavior correlates. A significant noise-induced reduction in P3 amplitude was found, accompanied by significantly longer P3 latency and decreases in ITPC across all frequency bands of interest. There was a differential effect of noise on consonant discrimination and vowel discrimination in both ERP and behavioral measures, such that noise impacted the detection of the consonant change more than the vowel change. The P3 amplitude and some of the ITPC and ERSP measures were significant predictors of speech perception at the segmental and sentence levels across listening conditions and stimuli. These data demonstrate that the P3 response with its associated cortical oscillations represents a potential neurophysiological marker for speech perception in noise.
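The inter-trial phase coherence measure is, in its standard formulation, ITPC(f, t) = |(1/N) Σ_k exp(iφ_k(f, t))| over N trials. A minimal sketch, assuming a complex Morlet wavelet for phase extraction (the study's exact time-frequency settings are not reproduced here):

    import numpy as np

    def itpc(trials, fs, freq, n_cycles=5):
        # Phase per trial via convolution with a complex Morlet wavelet,
        # then ITPC(t) = |mean over trials of exp(i * phase)|.
        n_trials, n_samples = trials.shape
        sigma_t = n_cycles / (2 * np.pi * freq)
        t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
        wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
        phases = np.empty((n_trials, n_samples))
        for k in range(n_trials):
            phases[k] = np.angle(np.convolve(trials[k], wavelet, mode='same'))
        return np.abs(np.mean(np.exp(1j * phases), axis=0))

    # Illustrative: 16 noisy trials with a phase-locked 5 Hz (theta) component.
    fs, n = 250, 500
    time = np.arange(n) / fs
    rng = np.random.default_rng(0)
    trials = np.sin(2 * np.pi * 5 * time) + rng.normal(0, 1.0, (16, n))
    print(itpc(trials, fs, freq=5.0).mean())  # well above the ~0.25 chance level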
Chemotherapy as language: sound symbolism in cancer medication names.
Abel, Gregory A; Glinert, Lewis H
2008-04-01
The concept of sound symbolism proposes that even the tiniest sounds comprising a word may suggest the qualities of the object which that word represents. Cancer-related medication names, which are likely to be charged with emotional meaning for patients, might be expected to contain such sound-symbolic associations. We analyzed the sounds in the names of 60 frequently-used cancer-related medications, focusing on the medications' trade names as well as the names (trade or generic) commonly used in the clinic. We assessed the frequency of common voiced consonants (/b/, /d/, /g/, /v/, /z/; thought to be associated with slowness and heaviness) and voiceless consonants (/p/, /t/, /k/, /f/, /s/; thought to be associated with fastness and lightness), and compared them to what would be expected in standard American English using a reference dataset. A Fisher's exact test for independence showed the chemotherapy consonantal frequencies to be significantly different from standard English (p=0.009 for trade; p<0.001 for "common usage"). For the trade names, the majority of the voiceless consonants were significantly increased compared to standard English; this effect was more pronounced with the "common usage" names (for the group, O/E=1.62; 95% CI [1.37, 1.89]). Hormonal and targeted therapy trade names showed the greatest frequency of voiceless consonants (for the group, O/E=1.76; 95% CI [1.20, 2.49]). Our results suggest that taken together, the names of chemotherapy medications contain an increased frequency of certain sounds associated with lightness, smallness and fastness. This finding raises important questions about the possible role of the names of medications in the experiences of cancer patients and providers.
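The core statistic is an observed/expected comparison of voiceless versus voiced consonant counts against a reference corpus, tested with Fisher's exact test. A sketch with invented counts (the actual tallies and reference dataset are those of the study):

    from scipy.stats import fisher_exact

    # Hypothetical counts: voiceless vs. voiced consonant tokens in a
    # medication-name sample and in a standard-English reference corpus.
    names_voiceless, names_voiced = 180, 90
    ref_voiceless, ref_voiced = 5200, 4800

    _, p_value = fisher_exact([[names_voiceless, names_voiced],
                               [ref_voiceless, ref_voiced]])

    # Observed/expected (O/E) ratio for voiceless consonants in the names.
    expected = ref_voiceless / (ref_voiceless + ref_voiced)
    observed = names_voiceless / (names_voiceless + names_voiced)
    print(observed / expected, p_value)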
Visual speech discrimination and identification of natural and synthetic consonant stimuli
Files, Benjamin T.; Tjan, Bosco S.; Jiang, Jintao; Bernstein, Lynne E.
2015-01-01
From phonetic features to connected discourse, every level of psycholinguistic structure including prosody can be perceived through viewing the talking face. Yet a longstanding notion in the literature is that visual speech perceptual categories comprise groups of phonemes (referred to as visemes), such as /p, b, m/ and /f, v/, whose internal structure is not informative to the visual speech perceiver. This conclusion has not to our knowledge been evaluated using a psychophysical discrimination paradigm. We hypothesized that perceivers can discriminate the phonemes within typical viseme groups, and that discrimination measured with d-prime (d’) and response latency is related to visual stimulus dissimilarities between consonant segments. In Experiment 1, participants performed speeded discrimination for pairs of consonant-vowel spoken nonsense syllables that were predicted to be same, near, or far in their perceptual distances, and that were presented as natural or synthesized video. Near pairs were within-viseme consonants. Natural within-viseme stimulus pairs were discriminated significantly above chance (except for /k/-/h/). Sensitivity (d’) increased and response times decreased with distance. Discrimination and identification were superior with natural stimuli, which comprised more phonetic information. We suggest that the notion of the viseme as a unitary perceptual category is incorrect. Experiment 2 probed the perceptual basis for visual speech discrimination by inverting the stimuli. Overall reductions in d’ with inverted stimuli but a persistent pattern of larger d’ for far than for near stimulus pairs are interpreted as evidence that visual speech is represented by both its motion and configural attributes. The methods and results of this investigation open up avenues for understanding the neural and perceptual bases for visual and audiovisual speech perception and for development of practical applications such as visual lipreading/speechreading speech synthesis. PMID:26217249
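Sensitivity in such discrimination tasks is commonly summarized as d' = z(hit rate) - z(false-alarm rate). A small sketch of that standard computation, using the yes/no approximation and hypothetical trial counts (the authors' same-different modeling may differ in detail):

    from scipy.stats import norm

    def dprime(hits, misses, false_alarms, correct_rejections):
        # d' = z(H) - z(F), with a 1/(2N) correction for rates of 0 or 1.
        n_sig = hits + misses
        n_noise = false_alarms + correct_rejections
        h = min(max(hits / n_sig, 1 / (2 * n_sig)), 1 - 1 / (2 * n_sig))
        f = min(max(false_alarms / n_noise, 1 / (2 * n_noise)),
                1 - 1 / (2 * n_noise))
        return norm.ppf(h) - norm.ppf(f)

    # Hypothetical within-viseme pair: 34/48 hits on "different" trials,
    # 12/48 false alarms on "same" trials.
    print(dprime(34, 14, 12, 36))  # > 0 means above-chance discrimination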
Making Sense of a Sequence of Events: A Psychologically Supported AI Implementation
NASA Astrophysics Data System (ADS)
Chassy, Philippe; Prade, Henri
People try to make sense of the usually incomplete reports they receive about events that take place. To do this, they make use of what they believe the normal course of things should be. An agent's beliefs may be consonant or dissonant with what is reported. To make sense of a report, people usually ascribe different types of relations between events. A prototypical example is the ascription of causality between events. The paper proposes a systematic study of consonance and dissonance between beliefs and reports. The approach is shown to be consistent with findings in psychology. An implementation is presented with some illustrative examples.
[Error analysis of functional articulation disorders in children].
Zhou, Qiao-juan; Yin, Heng; Shi, Bing
2008-08-01
To explore the clinical characteristics of functional articulation disorders in children and provide more evidence for differential diagnosis and speech therapy. 172 children with functional articulation disorders were grouped by age: children aged 4-5 years were assigned to one group, and those aged 6-10 years to another. Their phonological samples were collected and analyzed. In both groups, substitution and omission (deletion) were the main articulation errors, dental consonants were the most frequently misarticulated sounds, and bilabial and labio-dental consonants were rarely in error. In the 4-5 age group, the order of error frequency, from highest to lowest, was dental, velar, lingual, apical, bilabial, and labio-dental. In the 6-10 age group, the order was dental, lingual, apical, velar, bilabial, and labio-dental. Lateral misarticulation and palatalized misarticulation occurred more often in the 6-10 age group than in the 4-5 age group, and in both groups were found only in lingual and dental consonants. Misarticulation in functional articulation disorders thus occurs mainly on dental consonants and rarely on bilabial and labio-dental consonants. Substitution and omission are the most frequent errors. Lateral misarticulation and palatalized misarticulation occur mainly in lingual and dental consonants.
Identification and discrimination of Spanish front vowels
NASA Astrophysics Data System (ADS)
Castellanos, Isabel; Lopez-Bascuas, Luis E.
2004-05-01
The idea that vowels are perceived less categorically than consonants is widely accepted. Ades [Psychol. Rev. 84, 524-530 (1977)] tried to explain this fact on the basis of the Durlach and Braida [J. Acoust. Soc. Am. 46, 372-383 (1969)] theory of intensity resolution. Since vowels seem to cover a broader perceptual range, context-coding noise for vowels should be greater than for consonants leading to a less categorical performance on the vocalic segments. However, relatively recent work by Macmillan et al. [J. Acoust. Soc. Am. 84, 1262-1280 (1988)] has cast doubt on the assumption of different perceptual ranges for vowels and consonants even though context variance is acknowledged to be greater for the former. A possibility is that context variance increases as the number of long-term phonemic categories increases. To test this hypothesis, we focused on Spanish as the target language. Spanish has fewer vowel categories than English, and the implication is that Spanish vowels will be more categorically perceived. Identification and discrimination experiments were conducted on a synthetic /i/-/e/ continuum and the obtained functions were studied to assess whether Spanish vowels are more categorically perceived than English vowels. The results are discussed in the context of different theories of speech perception.
Perception of temporally modified speech in auditory neuropathy.
Hassan, Dalia Mohamed
2011-01-01
Disrupted auditory nerve activity in auditory neuropathy (AN) significantly impairs the sequential processing of auditory information, resulting in poor speech perception. This study investigated the ability of AN subjects to perceive temporally modified consonant-vowel (CV) pairs and shed light on their phonological awareness skills. Four Arabic CV pairs were selected: /ki/-/gi/, /to/-/do/, /si/-/sti/ and /so/-/zo/. The formant transitions in consonants and the pauses between CV pairs were prolonged. Rhyming, segmentation and blending skills were tested using words at a natural rate of speech and with prolongation of the speech stream. Fourteen adult AN subjects were compared to a matched group of cochlear-impaired patients in their perception of acoustically processed speech. The AN group distinguished the CV pairs at a low speech rate, in particular with modification of the consonant duration. Phonological awareness skills deteriorated in adult AN subjects but improved with prolongation of the speech inter-syllabic time interval. A rehabilitation program for AN should consider temporal modification of speech, training for auditory temporal processing and the use of devices with innovative signal processing schemes. Verbal modifications as well as visual imaging appear to be promising compensatory strategies for remediating the affected phonological processing skills.
Human phoneme recognition depending on speech-intrinsic variability.
Meyer, Bernd T; Jürgens, Tim; Wesker, Thorsten; Brand, Thomas; Kollmeier, Birger
2010-11-01
The influence of different sources of speech-intrinsic variation (speaking rate, effort, style and dialect or accent) on human speech perception was investigated. In listening experiments with 16 listeners, confusions of consonant-vowel-consonant (CVC) and vowel-consonant-vowel (VCV) sounds in speech-weighted noise were analyzed. Experiments were based on the OLLO logatome speech database, which was designed for a man-machine comparison. It contains utterances spoken by 50 speakers from five dialect/accent regions and covers several intrinsic variations. By comparing results depending on intrinsic and extrinsic variations (i.e., different levels of masking noise), the degradation induced by variabilities can be expressed in terms of the SNR. The spectral level distance between the respective speech segment and the long-term spectrum of the masking noise was found to be a good predictor for recognition rates, while phoneme confusions were influenced by the distance to spectrally close phonemes. An analysis based on transmitted information of articulatory features showed that voicing and manner of articulation are comparatively robust cues in the presence of intrinsic variations, whereas the coding of place is more degraded. The database and detailed results have been made available for comparisons between human speech recognition (HSR) and automatic speech recognizers (ASR).
Acoustics of contrastive prosody in children
NASA Astrophysics Data System (ADS)
Patel, Rupal; Piel, Jordan; Grigos, Maria
2005-04-01
Empirical data on the acoustics of prosodic control in children are limited, particularly for linguistically contrastive tasks. Twelve children aged 4, 7, and 11 years were asked to produce two utterances, "Show Bob a bot" (voiced consonants) and "Show Pop a pot" (voiceless consonants), 10 times each with emphasis placed on the second word (Bob/Pop) and 10 times with emphasis placed on the last word (bot/pot). A total of 40 utterances were analyzed per child. The following acoustic measures were obtained for each word within each utterance: average fundamental frequency (f0), peak f0, average intensity, peak intensity, and duration. Preliminary results suggest that 4 year olds are unable to modulate prosodic cues to signal the linguistic contrast. The 7 year olds, however, not only signaled the appropriate stress location, but did so with the most contrastive differences in f0, intensity, and duration, of all age groups. Prosodic differences between stressed and unstressed words were more pronounced for the utterance with voiced consonants. These findings suggest that the acoustics of linguistic prosody begin to differentiate between age 4 and 7 and may be highly influenced by changes in physiological control and flexibility that may also affect segmental features.
Identification of speech transients using variable frame rate analysis and wavelet packets.
Rasetshwane, Daniel M; Boston, J Robert; Li, Ching-Chung
2006-01-01
Speech transients are important cues for identifying and discriminating speech sounds. Yoo et al. and Tantibundhit et al. were successful in identifying speech transients and, by emphasizing them, in improving the intelligibility of speech in noise. However, their methods are computationally intensive and unsuitable for real-time applications. This paper presents a method to identify and emphasize speech transients that combines subband decomposition by the wavelet packet transform with variable frame rate (VFR) analysis and unvoiced consonant detection. The VFR analysis is applied to each wavelet packet to define a transitivity function that describes the extent to which the wavelet coefficients of that packet are changing. Unvoiced consonant detection is used to identify unvoiced consonant intervals and the transitivity function is amplified during these intervals. The wavelet coefficients are multiplied by the transitivity function for that packet, amplifying the coefficients localized at times when they are changing and attenuating coefficients at times when they are steady. Inverse transform of the modified wavelet packet coefficients produces a signal corresponding to speech transients similar to the transients identified by Yoo et al. and Tantibundhit et al. A preliminary implementation of the algorithm runs more efficiently than these earlier methods.
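A rough sketch of the coefficient-weighting idea, assuming the PyWavelets package: each packet's coefficients are scaled by a local-change ("transitivity") function before reconstruction. The smoothing, the baseline gain, and the omission of the variable-frame-rate analysis and the unvoiced-consonant detector are all simplifications relative to the published method.

    import numpy as np
    import pywt

    def emphasize_transients(x, wavelet='db4', level=4, smooth=8):
        wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
        for node in wp.get_level(level, order='natural'):
            mag = np.abs(node.data)
            env = np.convolve(mag, np.ones(smooth) / smooth, mode='same')
            change = np.abs(np.diff(env, prepend=env[0]))
            transitivity = change / (change.max() + 1e-12)
            # Boost rapidly changing coefficients, keep a baseline elsewhere.
            node.data = node.data * (0.5 + transitivity)
        return wp.reconstruct(update=True)

    # Illustrative use: a click embedded in a low-level steady tone.
    fs = 8000
    t = np.arange(fs) / fs
    sig = 0.1 * np.sin(2 * np.pi * 440 * t)
    sig[4000:4016] += 0.8
    print(np.abs(emphasize_transients(sig)).max())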
Won, Jong Ho; Lorenzi, Christian; Nie, Kaibao; Li, Xing; Jameyson, Elyse M; Drennan, Ward R; Rubinstein, Jay T
2012-08-01
Previous studies have demonstrated that normal-hearing listeners can understand speech using the recovered "temporal envelopes," i.e., amplitude modulation (AM) cues from frequency modulation (FM). This study evaluated this mechanism in cochlear implant (CI) users for consonant identification. Stimuli containing only FM cues were created using 1, 2, 4, and 8-band FM-vocoders to determine if consonant identification performance would improve as the recovered AM cues become more available. A consistent improvement was observed as the band number decreased from 8 to 1, supporting the hypothesis that (1) the CI sound processor generates recovered AM cues from broadband FM, and (2) CI users can use the recovered AM cues to recognize speech. The correlation between the intact and the recovered AM components at the output of the sound processor was also generally higher when the band number was low, supporting the consonant identification results. Moreover, CI subjects who were better at using recovered AM cues from broadband FM cues showed better identification performance with intact (unprocessed) speech stimuli. This suggests that speech perception performance variability in CI users may be partly caused by differences in their ability to use AM cues recovered from FM speech cues.
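The "recovered envelope" mechanism can be demonstrated in a few lines: a constant-envelope FM carrier acquires amplitude modulation after narrowband filtering, because the instantaneous frequency sweeps across the filter's skirts. A sketch with assumed, illustrative parameters:

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    # Constant-envelope FM carrier: information is in frequency only.
    fs = 16000
    t = np.arange(fs) / fs
    inst_freq = 1000 + 300 * np.sin(2 * np.pi * 4 * t)   # 4 Hz FM
    fm = np.cos(2 * np.pi * np.cumsum(inst_freq) / fs)

    # A narrow analysis band converts the FM into output-envelope
    # fluctuation: the recovered AM cue.
    sos = butter(4, [900, 1100], btype='bandpass', fs=fs, output='sos')
    recovered_am = np.abs(hilbert(sosfiltfilt(sos, fm)))

    # Flat input envelope vs. strongly modulated band envelope.
    print(np.abs(hilbert(fm))[100:-100].std(), recovered_am[100:-100].std())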
Bugge, Anna; Möller, Sören; Westfall, Daniel R; Tarp, Jakob; Gejl, Anne K; Wedderkopp, Niels; Hillman, Charles H
2018-01-01
The main objective of this study was to investigate the associations between waist circumference, metabolic risk factors, and executive function in adolescents. The study was cross-sectional and included 558 adolescents (mean age 14.2 years). Anthropometrics and systolic blood pressure (sysBP) were measured and fasting blood samples were analyzed for metabolic risk factors. A metabolic risk factor cluster score (MetS-cluster score) was computed from the sum of standardized sysBP, triglycerides (TG), inverse high-density lipid cholesterol (HDLc) and insulin resistance (homeostasis model assessment). Cognitive control was measured with a modified flanker task. Regression analyses indicated that after controlling for demographic variables, HDLc exhibited a negative and TG a positive association with flanker reaction time (RT). Waist circumference did not demonstrate a statistically significant total association with the cognitive outcomes. In structural equation modeling, waist circumference displayed an indirect positive association with incongruent RT through a higher MetS-cluster score and through lower HDLc. The only statistically significant direct association between waist circumference and the cognitive outcomes was for incongruent RT in the model including HDLc as mediator. These findings are consonant with the previous literature reporting an adverse association between certain metabolic risk factors and cognitive control. Accordingly, these results suggest specificity between metabolic risk factors and cognitive control outcomes. Further, results of the present study, although cross-sectional, provide new evidence that specific metabolic risk factors may mediate an indirect association between adiposity and cognitive control in adolescents, even though a direct association between these variables was not observed. However, taking the cross-sectional study design into consideration, these results should be interpreted with caution and future longitudinal or experimental studies should verify the findings of this study.
Schelonka, Kathryn; Graulty, Christian; Canseco-Gonzalez, Enriqueta; Pitts, Michael A
2017-09-01
A three-phase inattentional blindness paradigm was combined with ERPs. While participants performed a distracter task, line segments in the background formed words or consonant-strings. Nearly half of the participants failed to notice these word-forms and were deemed inattentionally blind. All participants noticed the word-forms in phase 2 of the experiment while they performed the same distracter task. In the final phase, participants performed a task on the word-forms. In all phases, including during inattentional blindness, word-forms elicited distinct ERPs during early latencies (∼200-280ms) suggesting unconscious orthographic processing. A subsequent ERP (∼320-380ms) similar to the visual awareness negativity appeared only when subjects were aware of the word-forms, regardless of the task. Finally, word-forms elicited a P3b (∼400-550ms) only when these stimuli were task-relevant. These results are consistent with previous inattentional blindness studies and help distinguish brain activity associated with pre- and post-perceptual processing from correlates of conscious perception.
Lexical reorganization in Brazilian Portuguese: an articulatory study
Meireles, A. R.; Barbosa, P. A.
2008-01-01
This work, which is couched in the theoretical framework of Articulatory Phonology, deals with the influence of speech rate on the change/variation from antepenultimate stress words into penultimate stress words in Brazilian Portuguese. Both acoustic and articulatory (EMMA) studies were conducted. On the acoustic side, results show different patterns of post-stressed vowel reduction according to the word type. Some words reduced their medial post-stressed vowels more than their final post-stressed vowels, and others reduced their final post-stressed vowels more than their medial post-stressed vowels. On the articulatory side, results show that the coarticulation degree of the post-stressed consonants increases with speech rate. Also, using a measure called the proportional consonantal interval (PCI), it was found that this measure is influenced by word type. Three different groups of words were found according to their PCI. These results show how dynamical aspects influenced by speech rate increase are related to the lexical process of change/variation from antepenultimate stress words into penultimate ones. PMID:19885366
Kokkinakis, Kostas; Loizou, Philipos C
2011-09-01
The purpose of this study is to determine the relative impact of reverberant self-masking and overlap-masking effects on speech intelligibility by cochlear implant listeners. Sentences were presented in one condition wherein reverberant consonant segments were replaced with clean consonants, and in another condition wherein reverberant vowel segments were replaced with clean vowels. The underlying assumption is that self-masking effects would dominate in the first condition, whereas overlap-masking effects would dominate in the second condition. Results indicated that the degradation of speech intelligibility in reverberant conditions is caused primarily by self-masking effects that give rise to flattened formant transitions.
Effects of gender on the production of emphasis in Jordanian Arabic: A sociophonetic study
NASA Astrophysics Data System (ADS)
Abudalbuh, Mujdey D.
Emphasis, or pharyngealization, is a distinctive phonetic phenomenon and a phonemic feature of Semitic languages such as Arabic and Hebrew. The goal of this study is to investigate the effect of gender on the production of emphasis in Jordanian Arabic as manifested on the consonants themselves as well as on the adjacent vowels. To this end, 22 speakers of Jordanian Arabic, 12 males and 10 females, participated in a production experiment where they produced monosyllabic minimal CVC pairs contrasted on the basis of the presence of a word-initial plain or emphatic consonant. Several acoustic parameters were measured including Voice Onset Time (VOT), friction duration, the spectral mean of the friction noise, vowel duration and the formant frequencies (F1-F3) of the target vowels. The results of this study indicated that VOT is a reliable acoustic correlate of emphasis in Jordanian Arabic only for voiceless stops whose emphatic VOT was significantly shorter than their plain VOT. Also, emphatic fricatives were shorter than plain fricatives. Emphatic vowels were found to be longer than plain vowels. Overall, the results showed that emphatic vowels were characterized by a raised F1 at the onset and midpoint of the vowel, lowered F2 throughout the vowel, and raised F3 at the onset and offset of the vowel relative to the corresponding values of the plain vowels. Finally, results using Nearey's (1978) normalization algorithm indicated that emphasis was more acoustically evident in the speech of males than in the speech of females in terms of the F-pattern. The results are discussed from a sociolinguistic perspective in light of the previous literature and the notion of linguistic feminism.
Spoken Word Recognition Errors in Speech Audiometry: A Measure of Hearing Performance?
Coene, Martine; van der Lee, Anneke; Govaerts, Paul J.
2015-01-01
This report provides a detailed analysis of incorrect responses from an open-set spoken word-repetition task which is part of a Dutch speech audiometric test battery. Single-consonant confusions were analyzed from 230 normal hearing participants in terms of the probability of choice of a particular response on the basis of acoustic-phonetic, lexical, and frequency variables. The results indicate that consonant confusions are better predicted by lexical knowledge than by acoustic properties of the stimulus word. A detailed analysis of the transmission of phonetic features indicates that “voicing” is best preserved whereas “manner of articulation” yields most perception errors. As consonant confusion matrices are often used to determine the degree and type of a patient's hearing impairment, to predict a patient's gain in hearing performance with hearing devices and to optimize the device settings in view of maximum output, the observed findings are highly relevant for the audiological practice. Based on our findings, speech audiometric outcomes provide a combined auditory-linguistic profile of the patient. The use of confusion matrices might therefore not be the method best suited to measure hearing performance. Ideally, they should be complemented by other listening task types that are known to have less linguistic bias, such as phonemic discrimination. PMID:26557717
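The feature-transmission analysis mentioned here is conventionally computed Miller-and-Nicely-style: collapse the confusion matrix by one feature (say, voicing) and take the relative transmitted information I(stimulus; response) / H(stimulus). A minimal sketch with a hypothetical matrix:

    import numpy as np

    def transmitted_info(confusions):
        # Relative transmitted information for a stimulus-response matrix.
        p = confusions / confusions.sum()
        px, py = p.sum(axis=1), p.sum(axis=0)
        mi = 0.0
        for i in range(p.shape[0]):
            for j in range(p.shape[1]):
                if p[i, j] > 0:
                    mi += p[i, j] * np.log2(p[i, j] / (px[i] * py[j]))
        h_x = -np.sum(px[px > 0] * np.log2(px[px > 0]))
        return mi / h_x

    # Hypothetical matrix after collapsing consonants by voicing:
    # rows = stimulus class, columns = response class.
    voicing = np.array([[90., 10.],
                        [15., 85.]])
    print(transmitted_info(voicing))  # 1.0 would be perfect transmission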
Dynamic Spectral Structure Specifies Vowels for Adults and Children
Nittrouer, Susan; Lowenstein, Joanna H.
2014-01-01
The dynamic specification account of vowel recognition suggests that formant movement between vowel targets and consonant margins is used by listeners to recognize vowels. This study tested that account by measuring contributions to vowel recognition of dynamic (i.e., time-varying) spectral structure and coarticulatory effects on stationary structure. Adults and children (four- and seven-year-olds) were tested with three kinds of consonant-vowel-consonant syllables: (1) unprocessed; (2) sine waves that preserved both stationary coarticulated and dynamic spectral structure; and (3) vocoded signals that primarily preserved the stationary, but not the dynamic, structure. Sections of two lengths were removed from syllable middles: (1) half the vocalic portion; and (2) all but the first and last three pitch periods. Adults performed accurately with unprocessed and sine-wave signals, as long as half the syllable remained; their recognition was poorer for vocoded signals, but above chance. Seven-year-olds performed more poorly than adults with both sorts of processed signals, but disproportionately worse with vocoded than sine-wave signals. Most four-year-olds were unable to recognize vowels at all with vocoded signals. Conclusions were that both dynamic and stationary coarticulated structures support vowel recognition for adults, but children attend to dynamic spectral structure more strongly because early phonological organization favors whole words. PMID:25536845
Callahan, Brandy L; Belleville, Sylvie; Ferland, Guylaine; Potvin, Olivier; Tremblay, Marie-Pier; Hudon, Carol; Macoir, Joël
2014-01-01
The Brown-Peterson task is used to assess verbal short-term memory as well as divided attention. In its auditory three-consonant version, trigrams are presented to participants who must recall the items in correct order after variable delays, during which an interference task is performed. The present study aimed to establish normative data for this test in the elderly French-Quebec population based on cross-sectional data from a retrospective, multi-center convenience sample. A total of 595 elderly native French-speakers from the province of Quebec performed the Memoria version of the auditory three-consonant Brown-Peterson test. For both series and item-by-item scoring methods, age, education, and, in most cases, recall after a 0-second interval were found to be significantly associated with recall performance after 10-second, 20-second, and 30-second interference intervals. Based on regression model results, equations to calculate Z scores are presented for the 10-second, 20-second and 30-second intervals and for each scoring method to allow estimation of expected performance based on participants' individual characteristics. As an important ceiling effect was observed at the 0-second interval, norms for this interference interval are presented in percentiles.
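Regression-based norms of this kind are applied by converting an observed score to Z = (observed - predicted) / SD of the residuals. The sketch below shows only the mechanics; every coefficient is a placeholder, not a published norm.

    def brown_peterson_z(observed, age, education, recall_0s,
                         b0=8.2, b_age=-0.05, b_edu=0.15, b_0s=0.4, sd=1.9):
        # Z score relative to the performance predicted from demographics
        # and 0-second recall (all coefficient values are hypothetical).
        predicted = b0 + b_age * age + b_edu * education + b_0s * recall_0s
        return (observed - predicted) / sd

    # A 72-year-old with 12 years of education and a 0-second recall score
    # of 12 who recalled 7 items after the 30-second interference interval:
    print(brown_peterson_z(observed=7, age=72, education=12, recall_0s=12))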
Dynamic spectral structure specifies vowels for children and adults
Nittrouer, Susan
2008-01-01
When it comes to making decisions regarding vowel quality, adults seem to weight dynamic syllable structure more strongly than static structure, although disagreement exists over the nature of the most relevant kind of dynamic structure: spectral change intrinsic to the vowel or structure arising from movements between consonant and vowel constrictions. Results have been even less clear regarding the signal components children use in making vowel judgments. In this experiment, listeners of four different ages (adults, and 3-, 5-, and 7-year-old children) were asked to label stimuli that sounded either like steady-state vowels or like CVC syllables which sometimes had middle sections masked by coughs. Four vowel contrasts were used, crossed for type (front/back or closed/open) and consonant context (strongly or only slightly constraining of vowel tongue position). All listeners recognized vowel quality with high levels of accuracy in all conditions, but children were disproportionately hampered by strong coarticulatory effects when only steady-state formants were available. Results clarified past studies, showing that dynamic structure is critical to vowel perception for all aged listeners, but particularly for young children, and that it is the dynamic structure arising from vocal-tract movement between consonant and vowel constrictions that is most important. PMID:17902868
Testing the limits of long-distance learning: Learning beyond a three-segment window
Finley, Sara
2012-01-01
Traditional flat-structured bigram and trigram models of phonotactics are useful because they capture a large number of facts about phonological processes. Additionally, these models predict that local interactions should be easier to learn than long-distance ones since long-distance dependencies are difficult to capture with these models. Long-distance phonotactic patterns have been observed by linguists in many languages, who have proposed different kinds of models, including feature-based bigram and trigram models, as well as precedence models. Contrary to flat-structured bigram and trigram models, these alternatives capture unbounded dependencies because at an abstract level of representation, the relevant elements are locally dependent, even if they are not adjacent at the observable level. Using an artificial grammar learning paradigm, we provide additional support for these alternative models of phonotactics. Participants in two experiments were exposed to a long-distance consonant harmony pattern in which the first consonant of a five-syllable word was [s] or [ʃ] ('sh') and triggered a suffix that was either [-su] or [-ʃu] depending on the sibilant quality of this first consonant. Participants learned this pattern, despite the large distance between the trigger and the target, suggesting that when participants learn long-distance phonological patterns, that pattern is learned without specific reference to distance. PMID:22303815
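The contrast between window-limited and precedence models is easy to make concrete: a precedence model evaluates every ordered pair of sibilants, so agreement is enforced at any distance, whereas a trigram window cannot relate a word-initial trigger to a suffix five syllables away. A toy checker with ad hoc segment symbols ('S' standing in for [ʃ]):

    def harmonic(word, sibilants=('s', 'S')):
        # Precedence-style constraint: all sibilants in the word must
        # match, no matter how much material intervenes.
        found = [c for c in word if c in sibilants]
        return len(set(found)) <= 1

    print(harmonic('sopitelamusu'))   # True: s ... -su harmony holds
    print(harmonic('sopitelamuSu'))   # False: s ... -Su disagrees at a distance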
Effects of obstruent consonants on the F0 contour
NASA Astrophysics Data System (ADS)
Hanson, Helen M.
2003-10-01
When a vowel follows an obstruent consonant, the fundamental frequency in the first few tens of milliseconds of the vowel is influenced by the voicing characteristics of the consonant. The goal of the research reported here is to model this influence, with the intention of improving generation of F0 contours in rule-based speech synthesis. Data have been recorded from 10 subjects. Stops, fricatives, and the nasal /m/ were paired with the vowels /i, ɑ/ to form CVm syllables. The syllables mVm served as baselines with which to compare the obstruents. The target syllables were embedded in carrier sentences. Intonation was varied so that each target syllable was produced with either a high, low, or no pitch accent. Results vary among subjects, but in general, obstruent effects on F0 primarily occur when the syllable carries a high pitch. In that case, F0 is increased relative to the baseline following voiceless obstruents, but F0 closely follows the baseline following voiced obstruents. After voiceless obstruents, F0 may be increased for up to 80 ms following voicing onset. When a syllable carries a low or no pitch accent, F0 is increased slightly following all obstruents. [Work supported by NIH Grant No. DC04331.]
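For rule-based synthesis, the reported pattern suggests a simple post-onset perturbation rule: raise F0 after a voiceless obstruent and let the boost decay over tens of milliseconds, leaving the contour unchanged after voiced obstruents. A sketch with invented magnitudes (the abstract reports effects lasting up to about 80 ms but does not prescribe these numbers):

    import numpy as np

    def obstruent_f0_rule(f0, onset_frame, voiceless,
                          boost_hz=15.0, decay_frames=8):
        # Linearly decaying F0 boost after voicing onset; 8 frames at a
        # 10-ms frame rate approximates the ~80 ms effect duration.
        f0 = f0.copy()
        if voiceless:
            for k in range(decay_frames):
                if onset_frame + k < len(f0):
                    f0[onset_frame + k] += boost_hz * (1 - k / decay_frames)
        return f0

    contour = np.full(40, 120.0)   # flat 120 Hz baseline, 10-ms frames
    print(obstruent_f0_rule(contour, onset_frame=10, voiceless=True)[8:20])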
NASA Astrophysics Data System (ADS)
Apoux, Frédéric; Bacon, Sid P.
2004-09-01
The relative importance of temporal information in broad spectral regions for consonant identification was assessed in normal-hearing listeners. For the purpose of forcing listeners to use primarily temporal-envelope cues, speech sounds were spectrally degraded using four-noise-band vocoder processing. Frequency-weighting functions were determined using two methods. The first method consisted of measuring the intelligibility of speech with a hole in the spectrum either in quiet or in noise. The second method consisted of correlating performance with the randomly and independently varied signal-to-noise ratio within each band. Results demonstrated that all bands contributed equally to consonant identification when presented in quiet. In noise, however, both methods indicated that listeners consistently placed relatively more weight upon the highest frequency band. It is proposed that the explanation for the difference in results between quiet and noise relates to the shape of the modulation spectra in adjacent frequency bands. Overall, the results suggest that normal-hearing listeners use a common listening strategy in a given condition. However, this strategy may be influenced by the competing sounds, and thus may vary according to the context. Some implications of the results for cochlear implantees and hearing-impaired listeners are discussed.
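The second weighting method, correlating trial-by-trial correctness with the independently roved per-band SNR, can be simulated directly: the point-biserial correlation per band recovers the relative weights. All numbers below are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    n_trials, n_bands = 400, 4

    # Each band's SNR is roved independently per trial; the simulated
    # listener relies most on the highest band (index 3).
    band_snr = rng.uniform(-6, 6, (n_trials, n_bands))
    true_weights = np.array([0.1, 0.1, 0.2, 0.6])
    p_correct = 1 / (1 + np.exp(-(band_snr @ true_weights) / 2))
    correct = (rng.random(n_trials) < p_correct).astype(float)

    # Estimated weight per band = correlation of that band's SNR with
    # response correctness across trials.
    weights = np.array([np.corrcoef(band_snr[:, b], correct)[0, 1]
                        for b in range(n_bands)])
    print(weights / weights.sum())  # roughly recovers the relative weights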
Sound Symbolism in the Languages of Australia
Haynie, Hannah; Bowern, Claire; LaPalombara, Hannah
2014-01-01
The notion that linguistic forms and meanings are related only by convention and not by any direct relationship between sounds and semantic concepts is a foundational principle of modern linguistics. Though the principle generally holds across the lexicon, systematic exceptions have been identified. These “sound symbolic” forms have been identified in lexical items and linguistic processes in many individual languages. This paper examines sound symbolism in the languages of Australia. We conduct a statistical investigation of the evidence for several common patterns of sound symbolism, using data from a sample of 120 languages. The patterns examined here include the association of meanings denoting “smallness” or “nearness” with front vowels or palatal consonants, and the association of meanings denoting “largeness” or “distance” with back vowels or velar consonants. Our results provide evidence for the expected associations of vowels and consonants with meanings of “smallness” and “proximity” in Australian languages. However, the patterns uncovered in this region are more complicated than predicted. Several sound-meaning relationships are only significant for segments in prominent positions in the word, and the prevailing mapping between vowel quality and magnitude meaning cannot be characterized by a simple link between gradients of magnitude and vowel F2, contrary to the claims of previous studies. PMID:24752356
Lin, Chi-Yueh; Wang, Hsiao-Chuan
2011-07-01
The voice onset time (VOT) of a stop consonant is the interval between its burst onset and voicing onset. Among a variety of research topics on VOT, one that has been studied for years is how VOTs are efficiently measured. Manual annotation is a feasible way, but it becomes a time-consuming task when the corpus size is large. This paper proposes an automatic VOT estimation method based on an onset detection algorithm. At first, a forced alignment is applied to identify the locations of stop consonants. Then a random forest based onset detector searches each stop segment for its burst and voicing onsets to estimate a VOT. The proposed onset detection can detect the onsets in an efficient and accurate manner with only a small amount of training data. The evaluation data extracted from the TIMIT corpus were 2344 words with a word-initial stop. The experimental results showed that 83.4% of the estimations deviate less than 10 ms from their manually labeled values, and 96.5% of the estimations deviate by less than 20 ms. Some factors that influence the proposed estimation method, such as place of articulation, voicing of a stop consonant, and quality of succeeding vowel, were also investigated.
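A toy version of the two-stage idea (segment the stop, then locate burst and voicing onsets) can be written with plain energy and periodicity heuristics standing in for the paper's forced alignment and random-forest onset detector; the thresholds and frame sizes here are arbitrary assumptions.

    import numpy as np

    def estimate_vot_ms(x, fs, frame_ms=5):
        # Burst onset: first frame whose energy jumps well above the floor.
        frame = int(fs * frame_ms / 1000)
        n_frames = len(x) // frame
        energy = np.array([np.sum(x[i * frame:(i + 1) * frame] ** 2)
                           for i in range(n_frames)])
        floor = energy[:3].mean() + 1e-9
        burst = int(np.argmax(energy > 20 * floor))
        # Voicing onset: first later frame with strong short-lag periodicity.
        voicing = burst
        for i in range(burst + 1, n_frames - 4):
            seg = x[i * frame:(i + 4) * frame]            # 20-ms window
            ac = np.correlate(seg, seg, mode='full')[len(seg) - 1:]
            lags = ac[int(fs / 300):int(fs / 80)]         # 80-300 Hz pitch range
            if ac[0] > 0 and lags.max() / ac[0] > 0.5:
                voicing = i
                break
        return (voicing - burst) * frame_ms

    # Synthetic /ta/-like token: 50 ms silence, 30 ms noise burst, vowel.
    fs = 16000
    rng = np.random.default_rng(2)
    x = np.concatenate([
        np.zeros(int(0.05 * fs)),
        0.3 * rng.standard_normal(int(0.03 * fs)),
        0.8 * np.sign(np.sin(2 * np.pi * 120 * np.arange(int(0.2 * fs)) / fs)),
    ])
    print(estimate_vot_ms(x, fs))   # close to the 30-ms burst-to-voicing gap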
Vaz, Suellen; Pezarini, Isabela de Oliveira; Paschoal, Larissa; Chacon, Lourenço
2015-01-01
To describe the spelling performance of children with regard to the writing of sonorant consonants in Brazilian Portuguese, to verify whether their errors were influenced by word stress, and to categorize the kinds of errors found. For the current survey, 801 text productions were selected, written in response to 14 different thematic prompts by 76 first-grade children from two schools in a city in São Paulo state, Brazil, in 2001. From these productions, all words with sonorant consonants in the syllabic position of simple onset were selected. They were then organized according to whether the consonant appeared in a pre-tonic, tonic, or post-tonic syllable, or in an unstressed or tonic monosyllable. The following was observed: the number of correct spellings was far higher than that of errors; errors occurred more often in non-accented syllables; phonological substitutions were the most frequent errors, followed by omissions and, last, orthographic substitutions; and most substitutions involved graphemes referring to the sonorant class. Considering the distribution of orthographic data between correct and erroneous spellings, as well as their relationship with phonetic-phonological aspects, may contribute to the understanding of school difficulties that are usually found in the first years of literacy instruction.
Electrophysiological and hemodynamic mismatch responses in rats listening to human speech syllables.
Mahmoudzadeh, Mahdi; Dehaene-Lambertz, Ghislaine; Wallois, Fabrice
2017-01-01
Speech is a complex auditory stimulus which is processed according to several time-scales. Whereas consonant discrimination is required to resolve rapid acoustic events, voice perception relies on slower cues. Humans, right from preterm ages, are particularly efficient at encoding temporal cues. To compare the capacities of preterm infants to those observed in other mammals, we tested anesthetized adult rats by using exactly the same paradigm as that used in preterm neonates. We simultaneously recorded neural (using ECoG) and hemodynamic responses (using fNIRS) to series of human speech syllables and investigated the brain response to a change of consonant (ba vs. ga) and to a change of voice (male vs. female). Both methods revealed concordant results, although ECoG measures were more sensitive than fNIRS. Responses to syllables were bilateral, but with marked right-hemispheric lateralization. Responses to voice changes were observed with both methods, while only ECoG was sensitive to consonant changes. These results suggest that rats more effectively processed the speech envelope than fine temporal cues, in contrast with human preterm neonates, in whom the opposite effects were observed. Cross-species comparisons constitute a very valuable tool to define the singularities of the human brain and species-specific bias that may help human infants to learn their native language.
A comparative intelligibility study of single-microphone noise reduction algorithms.
Hu, Yi; Loizou, Philipos C
2007-09-01
An evaluation of the intelligibility of speech processed by noise reduction algorithms is reported. IEEE sentences and consonants were corrupted by four types of noise including babble, car, street and train at two signal-to-noise ratio levels (0 and 5 dB), and then processed by eight speech enhancement methods encompassing four classes of algorithms: spectral subtractive, sub-space, statistical model based and Wiener-type algorithms. The enhanced speech was presented to normal-hearing listeners for identification. With the exception of a single noise condition, no algorithm produced significant improvements in speech intelligibility. Information transmission analysis of the consonant confusion matrices indicated that no algorithm significantly improved the place feature score, which is critically important for speech recognition. The algorithms which were found in previous studies to perform the best in terms of overall quality were not the same algorithms that performed the best in terms of speech intelligibility. The subspace algorithm, for instance, was previously found to perform the worst in terms of overall quality, but performed well in the present study in terms of preserving speech intelligibility. Overall, the analysis of consonant confusion matrices suggests that in order for noise reduction algorithms to improve speech intelligibility, they need to improve the place and manner feature scores.
Nunthayanon, Kulthida; Honda, Ei-ichi; Shimazaki, Kazuo; Ohmori, Hiroko; Inoue-Arai, Maristela Sayuri; Kurabayashi, Tohru; Ono, Takashi
2015-01-01
Different bony structures can affect the function of the velopharyngeal muscles. Asian populations differ morphologically, including the morphologies of their bony structures. The purpose of this study was to compare the velopharyngeal structures during speech in two Asian populations: Japanese and Thai. Ten healthy Japanese and Thai females (five each) were evaluated with a 3-Tesla (3 T) magnetic resonance imaging (MRI) scanner while they produced a vowel-consonant-vowel syllable (/asa/). A gradient-echo sequence, fast low-angle shot with segmented cine and parallel imaging technique was used to obtain sagittal images of the velopharyngeal structures. MRI was carried out in real time during speech production, allowing investigation of the moment-to-moment changes in the velopharyngeal structures. Thai subjects had a significantly longer hard palate and produced a shorter consonant than the Japanese subjects. The velum of the Thai participants showed significant thickening during consonant production and their retroglossal space was significantly wider at rest, whereas the dimensional change during task performance was similar in the two populations. The 3 T MRI movie method can be used to investigate velopharyngeal function and diagnose velopharyngeal insufficiency. The racial differences may include differences in skeletal patterns and soft-tissue morphology that result in functional differences for the affected structures.
Yanagida, Saori; Nishizawa, Noriko; Mizoguchi, Kenji; Hatakeyama, Hiromitsu; Fukuda, Satoshi
2015-07-01
Voice onset time (VOT) for word-initial voiceless consonants in adductor spasmodic dysphonia (ADSD) and abductor spasmodic dysphonia (ABSD) patients was measured to determine (1) which acoustic measures differed from the controls and (2) whether acoustic measures were related to the pause or silence between the test word and the preceding word. Forty-eight patients with ADSD and nine patients with ABSD, as well as 20 matched normal controls, read a story in which the word "taiyo" (the sun) was repeated three times, each occurrence differentiated by the position of the word in the sentence. The target of measurement was the VOT for the word-initial voiceless consonant /t/. When the target syllable appeared in a sentence following a comma, or at the beginning of a sentence following a period, the ABSD patients' VOTs were significantly longer than those of the ADSD patients and controls. Abnormal prolongation of the VOTs was related to the pause or silence between the test word and the preceding word. VOTs in spasmodic dysphonia (SD) may vary according to the SD subtype or speaking conditions. VOT measurement was suggested to be a useful method for quantifying voice symptoms in SD.
Neurodiversity, Giftedness, and Aesthetic Perceptual Judgment of Music in Children with Autism
Masataka, Nobuo
2017-01-01
The author investigated the capability of aesthetic perceptual judgment of music in male children diagnosed with autism spectrum disorder (ASD) when compared to age-matched typically developing (TD) male children. Nineteen boys between 4 and 7 years of age with ASD were compared to 28 TD boys while listening to musical stimuli of different aesthetic levels. The results from two musical experiments using these participants are described here. In the first study, responses to a Mozart minuet and a dissonant altered version of the same minuet were compared. The results indicated that both ASD and TD males preferred listening to the original consonant version of the minuet over the altered dissonant version. With the same participants, the second experiment included musical stimuli from four renowned composers: Mozart and Bach's musical works, both considered consonant in their harmonic structure, were compared with music from Schoenberg and Albinoni, two composers who wrote musical works considered exceedingly harmonically dissonant. In the second study, when the stimuli included consonant or dissonant musical stimuli from different composers, the children with ASD showed greater preference for the aesthetic quality of the highly dissonant music compared to the TD children. While children in both of the groups listened to the consonant stimuli of Mozart and Bach music for the same amount of time, the children with ASD listened to the dissonant music of Schoenberg and Albinoni longer than the TD children. As preferring dissonant music is more aesthetically demanding perceptually, these results suggest that ASD male children demonstrate an enhanced capability of aesthetic judgment of music. Subsidiary data collected after the completion of the experiment revealed that absolute pitch ability was prevalent only in the children with ASD, some of whom also possessed extraordinary musical memory. The implications of these results are discussed with reference to the broader notion of neurodiversity, a term coined to capture potentially gifted qualities in individuals diagnosed with ASD. PMID:29018372
Gangji, Nazneen; Pascoe, Michelle; Smouse, Mantoa
2015-01-01
Swahili is widely spoken in East Africa, but to date there are no culturally and linguistically appropriate materials available for speech-language therapists working in the region. The challenges are further exacerbated by the limited research available on the typical acquisition of Swahili phonology. To describe the speech development of 24 typically developing first language Swahili-speaking children between the ages of 3;0 and 5;11 years in Dar es Salaam, Tanzania. A cross-sectional design was used with six groups of four children in 6-month age bands. Single-word speech samples were obtained from each child using a set of culturally appropriate pictures designed to elicit all consonants and vowels of Swahili. Each child's speech was audio-recorded and phonetically transcribed using International Phonetic Alphabet (IPA) conventions. Children's speech development is described in terms of (1) phonetic inventory, (2) syllable structure inventory, (3) phonological processes and (4) percentage consonants correct (PCC) and percentage vowels correct (PVC). Results suggest a gradual progression in the acquisition of speech sounds and syllables between the ages of 3;0 and 5;11 years. Vowel acquisition was complete, and most consonants were acquired, by age 3;0. The fricatives /z, s, h/ were acquired later, at age 4, and /θ/ and /r/ were the last consonants acquired, at age 5;11. Older children were able to produce speech sounds more accurately and had fewer phonological processes in their speech than younger children. Common phonological processes included lateralization and sound preference substitutions. The study contributes a preliminary set of normative data on speech development of Swahili-speaking children. Findings are discussed in relation to theories of phonological development, and may be used as a basis for further normative studies with larger numbers of children and ultimately the development of a contextually relevant assessment of the phonology of Swahili-speaking children. © 2014 Royal College of Speech and Language Therapists.
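The PCC and PVC measures used here reduce to counting over aligned target/produced segment pairs. A minimal sketch, assuming the segments are already aligned and that Swahili's five vowels /a, e, i, o, u/ delimit the consonant set (the alignment step and diacritic handling are omitted):

```python
# PCC/PVC as counting over pre-aligned segment pairs (alignment assumed done upstream).
VOWELS = set("aeiou")

def pcc_pvc(target, produced):
    c_tot = c_ok = v_tot = v_ok = 0
    for t_seg, p_seg in zip(target, produced):
        if t_seg in VOWELS:
            v_tot += 1
            v_ok += (t_seg == p_seg)
        else:
            c_tot += 1
            c_ok += (t_seg == p_seg)
    pcc = 100.0 * c_ok / c_tot if c_tot else float("nan")
    pvc = 100.0 * v_ok / v_tot if v_tot else float("nan")
    return pcc, pvc

# e.g. target "simba" produced as "timba": 2 of 3 consonants correct
print(pcc_pvc("simba", "timba"))   # -> (66.66..., 100.0)
```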
Speech-Like Rhythm in a Voiced and Voiceless Orangutan Call
Lameira, Adriano R.; Hardus, Madeleine E.; Bartlett, Adrian M.; Shumaker, Robert W.; Wich, Serge A.; Menken, Steph B. J.
2015-01-01
The evolutionary origins of speech remain obscure. Recently, it was proposed that speech derived from monkey facial signals which exhibit a speech-like rhythm of ∼5 open-close lip cycles per second. In monkeys, these signals may also be vocalized, offering a plausible evolutionary stepping stone towards speech. Three essential predictions remain, however, to be tested to assess this hypothesis's validity: (i) great apes, our closest relatives, should likewise produce 5 Hz-rhythm signals; (ii) speech-like rhythm should involve calls articulatorily similar to consonants and vowels, given that speech rhythm is the direct product of stringing together these two basic elements; and (iii) speech-like rhythm should be experience-based. Via cinematic analyses we demonstrate that an ex-entertainment orangutan produces two calls at a speech-like rhythm, coined "clicks" and "faux-speech." Like voiceless consonants, clicks required no vocal fold action, but did involve independent manoeuvring of the lips and tongue. In parallel to vowels, faux-speech showed harmonic and formant modulations, implying vocal fold and supralaryngeal action. This rhythm was several times faster than orangutan chewing rates, as observed in monkeys and humans. Critically, this rhythm was seven-fold faster than, and contextually distinct from, any other known rhythmic calls described to date in the largest database of the orangutan repertoire ever assembled. The first two predictions advanced by this study are validated and, based on parsimony and exclusion of potential alternative explanations, initial support is given to the third prediction. Irrespective of the putative origins of these calls and underlying mechanisms, our findings demonstrate that great apes are not respiratorily, articulatorily, or neurologically constrained for the production of consonant- and vowel-like calls at speech rhythm. Orangutan clicks and faux-speech confirm the importance of rhythmic speech antecedents within the primate lineage, and highlight potential articulatory homologies between great ape calls and human consonants and vowels. PMID:25569211
Willadsen, Elisabeth; Boers, Maria; Schöps, Antje; Kisling-Møller, Mia; Nielsen, Joan Bogh; Jørgensen, Line Dahl; Andersen, Mikael; Bolund, Stig; Andersen, Helene Søgaard
2018-01-01
Differing results regarding articulation skills in young children with cleft palate (CP) have been reported and often interpreted as a consequence of different surgical protocols. To assess the influence of different timing of hard palate closure in a two-stage procedure on articulation skills in 3-year-olds born with unilateral cleft lip and palate (UCLP). Secondary aims were to compare results with peers without CP, and to investigate whether there are gender differences in articulation skills. Furthermore, burden of treatment was estimated in terms of secondary surgery, hearing and speech therapy. A randomized controlled trial (RCT). Early hard palate closure (EHPC) at 12 months versus late hard palate closure (LHPC) at 36 months in a two-stage procedure was tested in a cohort of 126 Danish-speaking children born with non-syndromic UCLP. All participants had the lip and soft palate closed around 4 months of age. Audio and video recordings of a naming test were available from 113 children (32 girls and 81 boys) and were transcribed phonetically. Recordings were obtained prior to hard palate closure in the LHPC group. The main outcome measures were percentage consonants correct adjusted (PCC-A) and consonant errors from blinded assessments. Results from 36 Danish-speaking children without CP obtained previously by Willadsen in 2012 were used for comparison. Children with EHPC produced significantly more target consonants correctly (83%) than children with LHPC (48%; p < .001). In addition, children with LHPC produced significantly more active cleft speech characteristics than children with EHPC (p < .001). Boys achieved significantly lower PCC-A scores than girls (p = .04) and produced significantly more consonant errors than girls (p = .02). No significant differences were found between groups regarding burden of treatment. The control group performed significantly better than the EHPC and LHPC groups on all compared variables. © 2017 Royal College of Speech and Language Therapists.
Why aftershock duration matters for probabilistic seismic hazard assessment
Toda, Shinji; Stein, Ross S.
2018-01-01
Most hazard assessments assume that high background seismicity rates indicate a higher probability of large shocks and, therefore, of strong shaking. However, in slowly deforming regions, such as eastern North America, Australia, and inner Honshu, this assumption breaks down if the seismicity clusters are instead aftershocks of historic and prehistoric mainshocks. Here, therefore, we probe the circumstances under which aftershocks can last for 100–1000 years. Basham and Adams (1983) and Ebel et al. (2000) proposed that intraplate seismicity in eastern North America could be aftershocks of mainshocks that struck hundreds of years beforehand, a view consonant with rate–state friction (Dieterich, 1994), in which aftershock duration varies inversely with fault-stressing rate. To test these hypotheses, we estimate aftershock durations of the 2011 Mw 9 Tohoku‐Oki rupture at 12 sites up to 250 km from the source, as well as for the near-fault aftershocks of eight large Japanese mainshocks, sampling faults slipping 0.01 to 80 mm/yr. Whereas aftershock productivity increases with mainshock magnitude, we find that aftershock duration, the time until the aftershock rate decays to the premainshock rate, does not. Instead, aftershock sequences lasted a month on the fastest-slipping faults and are projected to persist for more than 2000 years on the slowest. Thus, long aftershock sequences can misguide and inflate hazard assessments in intraplate regions if misinterpreted as background seismicity, whereas areas between seismicity clusters may instead harbor a higher chance of large mainshocks, the opposite of what is being assumed today.
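The inverse scaling of aftershock duration with stressing rate that the authors invoke (Dieterich, 1994) can be made concrete with rough numbers. In the sketch below, both the rate-state parameter Aσ and the conversion from fault slip rate to stressing rate are illustrative assumptions, not values from the paper; the point is only the orders of magnitude, from weeks on fast faults to millennia on slow ones.

```python
# Order-of-magnitude illustration of t_a ~ A*sigma / stressing-rate (Dieterich, 1994).
# Both constants below are assumptions chosen only to show the scaling.
A_SIGMA_MPA = 0.05            # assumed rate-state parameter A*sigma [MPa]
MPA_PER_YR_PER_MM = 0.005     # assumed stressing rate per mm/yr of fault slip

for slip_mm_yr in (0.01, 1.0, 80.0):          # slowest to fastest faults sampled
    tau_dot = MPA_PER_YR_PER_MM * slip_mm_yr  # stressing rate [MPa/yr]
    t_a = A_SIGMA_MPA / tau_dot               # aftershock duration [yr]
    print(f"slip {slip_mm_yr:6.2f} mm/yr -> aftershocks last ~{t_a:10.2f} yr")
```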
Skoruppa, Katrin; Rosen, Stuart
2014-06-01
In this study, the authors explored phonological processing in connected speech in children with hearing loss. Specifically, the authors investigated these children's sensitivity to English place assimilation, by which alveolar consonants like t and n can adapt to following sounds (e.g., the word ten can be realized as tem in the phrase ten pounds). Twenty-seven 4- to 8-year-old children with moderate to profound hearing impairments, using hearing aids (n = 10) or cochlear implants (n = 17), and 19 children with normal hearing participated. They were asked to choose between pictures of familiar (e.g., pen) and unfamiliar objects (e.g., astrolabe) after hearing t- and n-final words in sentences. Standard pronunciations (Can you find the pen dear?) and assimilated forms in correct (… pem please?) and incorrect contexts (… pem dear?) were presented. As expected, the children with normal hearing chose the familiar object more often for standard forms and correct assimilations than for incorrect assimilations. Thus, they are sensitive to word-final place changes and compensate for assimilation. However, the children with hearing impairment demonstrated reduced sensitivity to word-final place changes, and no compensation for assimilation. Restricted analyses revealed that children with hearing aids who showed good perceptual skills compensated for assimilation in plosives only.
Theodore, Rachel M; Demuth, Katherine; Shattuck-Hufnagel, Stefanie
2015-06-01
Prosodic and articulatory factors influence children's production of inflectional morphemes. For example, plural -s is produced more reliably in utterance-final compared to utterance-medial position (i.e., the positional effect), which has been attributed to the increased planning time in utterance-final position. In previous investigations of plural -s, utterance-medial plurals were followed by a stop consonant (e.g., dogs bark), inducing high articulatory complexity. We examined whether the positional effect would be observed if the utterance-medial context were simplified to a following vowel. An elicited imitation task was used to collect productions of plural nouns from 2-year-old children. Nouns were elicited utterance-medially and utterance-finally, with the medial plural followed by either a stressed or an unstressed vowel. Acoustic analysis was used to identify evidence of morpheme production. The positional effect was absent when the morpheme was followed by a vowel (e.g., dogs eat). However, it returned when the vowel-initial word contained 2 syllables (e.g., dogs arrive), suggesting that the increased processing load in the latter condition negated the facilitative effect of the easy articulatory context. Children's productions of grammatical morphemes reflect a rich interaction between emerging levels of linguistic competence, raising considerations for diagnosis and rehabilitation of language disorders.
Conrad, Markus; Carreiras, Manuel; Tamm, Sascha; Jacobs, Arthur M
2009-04-01
Over the last decade, there has been increasing evidence for syllabic processing during visual word recognition. If syllabic effects prove to be independent from orthographic redundancy, this would seriously challenge the ability of current computational models to account for the processing of polysyllabic words. Three experiments are presented to disentangle effects of the frequency of syllabic units and orthographic segments in lexical decision. In Experiment 1 the authors obtained an inhibitory syllable frequency effect that was unaffected by the presence or absence of a bigram trough at the syllable boundary. In Experiments 2 and 3 an inhibitory effect of initial syllable frequency but a facilitative effect of initial bigram frequency emerged when manipulating 1 of the 2 measures and controlling for the other in Spanish words starting with consonant-vowel syllables. The authors conclude that effects of syllable frequency and letter-cluster frequency are independent and arise at different processing levels of visual word recognition. Results are discussed within the framework of an interactive activation model of visual word recognition. (c) 2009 APA, all rights reserved.
Classification via Clustering for Predicting Final Marks Based on Student Participation in Forums
ERIC Educational Resources Information Center
Lopez, M. I.; Luna, J. M.; Romero, C.; Ventura, S.
2012-01-01
This paper proposes a classification via clustering approach to predict the final marks in a university course on the basis of forum data. The objective is twofold: to determine if student participation in the course forum can be a good predictor of the final marks for the course and to examine whether the proposed classification via clustering…
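Generically, "classification via clustering" means fitting an unsupervised clustering to the feature data and then assigning each cluster the majority class of its members. A minimal sketch with invented forum-activity features and pass/fail marks (scikit-learn's KMeans stands in for whichever algorithms the paper evaluates):

```python
# "Classification via clustering": cluster feature vectors, then label each
# cluster by the majority mark of its members. Toy data are invented.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.poisson(lam=(5, 40, 3), size=(60, 3)).astype(float)  # posts, reads, replies
marks = (X[:, 0] + 0.1 * X[:, 1] > 9).astype(int)            # toy pass(1)/fail(0)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
cluster_to_mark = {c: np.bincount(marks[km.labels_ == c], minlength=2).argmax()
                   for c in range(km.n_clusters)}            # majority mark per cluster
pred = np.array([cluster_to_mark[c] for c in km.predict(X)])
print("training accuracy:", (pred == marks).mean())
```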
Cortical activity patterns predict speech discrimination ability
Engineer, Crystal T; Perez, Claudia A; Chen, YeTing H; Carraway, Ryan S; Reed, Amanda C; Shetake, Jai A; Jakkamsetti, Vikram; Chang, Kevin Q; Kilgard, Michael P
2010-01-01
Neural activity in the cerebral cortex can explain many aspects of sensory perception. Extensive psychophysical and neurophysiological studies of visual motion and vibrotactile processing show that the firing rate of cortical neurons averaged across 50–500 ms is well correlated with discrimination ability. In this study, we tested the hypothesis that primary auditory cortex (A1) neurons use temporal precision on the order of 1–10 ms to represent speech sounds shifted into the rat hearing range. Neural discrimination was highly correlated with behavioral performance on 11 consonant-discrimination tasks when spike timing was preserved and was not correlated when spike timing was eliminated. This result suggests that spike timing contributes to the auditory cortex representation of consonant sounds. PMID:18425123
NASA Astrophysics Data System (ADS)
Long, Derle Ray
Coincidence theory states that when the components of harmony are in enhanced alignment, the sound will be more consonant to the human auditory system. The components of harmony can be examined objectively by investigating the mathematical relationships among the components of a particular sound or harmony. The study examined preference responses to excerpts tuned in just intonation, Pythagorean intonation, and equal temperament. Musical excerpts were presented in pairs, and study subjects picked the version of each pair that they perceived as most consonant. Results of the study revealed an overall preference for equal temperament, in contradiction to coincidence theory. Several additional areas for research are suggested to further investigate the results of this study.
Deng, Xingjuan; Chen, Ji; Shuai, Jie
2009-08-01
For the purpose of improving the efficiency of aphasia rehabilitation training, an artificial intelligence scheduling function was added to the aphasia rehabilitation software, improving the software's performance. Taking into account the characteristics of aphasic patients' speech as well as the needs of the scheduling function, the present authors designed an endpoint detection algorithm. It determines reference endpoints, then extracts every word and establishes reasonable segmentation points between consonants and vowels using those reference endpoints. Experimental results show that the algorithm achieves endpoint detection at a high accuracy rate and is therefore applicable to endpoint detection in aphasic patients' speech.
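Short-time energy and zero-crossing framing is the usual backbone of this kind of endpoint detection; the sketch below illustrates that style of algorithm with assumed frame sizes and thresholds (the authors' actual criteria for reference endpoints and consonant/vowel segmentation points are not specified in the abstract):

```python
# Energy/zero-crossing endpoint-detection sketch with assumed parameters.
import numpy as np

def frame_features(x, fs, frame_ms=25, hop_ms=10):
    n, h = int(fs * frame_ms / 1000), int(fs * hop_ms / 1000)
    frames = [x[i:i + n] for i in range(0, len(x) - n, h)]
    energy = np.array([np.mean(f ** 2) for f in frames])
    zcr = np.array([np.mean(np.abs(np.diff(np.sign(f)))) / 2 for f in frames])
    return energy, zcr   # high ZCR + low energy suggests frication (consonant)

def reference_endpoints(energy, thresh_ratio=0.1):
    """First/last frame above a fraction of peak energy (assumed threshold)."""
    active = np.nonzero(energy > thresh_ratio * energy.max())[0]
    return (int(active[0]), int(active[-1])) if active.size else (None, None)
```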
Stuttering may start with repeating consonants (k, g, t). If stuttering becomes worse, words and phrases are repeated. Later, vocal spasms develop. There is a forced, almost explosive sound to speech. The ...
Efficient Agent-Based Cluster Ensembles
NASA Technical Reports Server (NTRS)
Agogino, Adrian; Tumer, Kagan
2006-01-01
Numerous domains ranging from distributed data acquisition to knowledge reuse need to solve the cluster ensemble problem of combining multiple clusterings into a single unified clustering. Unfortunately current non-agent-based cluster combining methods do not work in a distributed environment, are not robust to corrupted clusterings and require centralized access to all original clusterings. Overcoming these issues will allow cluster ensembles to be used in fundamentally distributed and failure-prone domains such as data acquisition from satellite constellations, in addition to domains demanding confidentiality such as combining clusterings of user profiles. This paper proposes an efficient, distributed, agent-based clustering ensemble method that addresses these issues. In this approach each agent is assigned a small subset of the data and votes on which final cluster its data points should belong to. The final clustering is then evaluated by a global utility, computed in a distributed way. This clustering is also evaluated using an agent-specific utility that is shown to be easier for the agents to maximize. Results show that agents using the agent-specific utility can achieve better performance than traditional non-agent based methods and are effective even when up to 50% of the agents fail.
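Stripped of the agent and utility machinery, the core ensemble step is combining several label vectors whose cluster names are arbitrary. A common simplified stand-in (not the paper's utility-based method): align each clustering's labels to a reference via Hungarian matching, then take a per-point majority vote.

```python
# Voting-based combination of multiple clusterings after label alignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def align(ref, other, k):
    """Relabel `other` so its labels best match `ref` (Hungarian matching)."""
    cost = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            cost[i, j] = -np.sum((ref == i) & (other == j))
    row, col = linear_sum_assignment(cost)          # maximize label overlap
    mapping = {int(col[i]): int(row[i]) for i in range(k)}
    return np.array([mapping[l] for l in other])

def vote(clusterings, k):
    """Per-point majority vote over label-aligned clusterings."""
    ref = clusterings[0]
    aligned = np.stack([ref] + [align(ref, c, k) for c in clusterings[1:]])
    return np.array([np.bincount(aligned[:, p], minlength=k).argmax()
                     for p in range(aligned.shape[1])])

# three clusterings of 6 points, with permuted label names and one disagreement
a = np.array([0, 0, 1, 1, 2, 2])
b = np.array([1, 1, 2, 2, 0, 0])   # same partition, labels rotated
c = np.array([0, 0, 1, 2, 2, 2])   # one disagreement
print(vote([a, b, c], k=3))        # -> [0 0 1 1 2 2]
```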
Li, Feipeng; Trevino, Andrea; Menon, Anjali; Allen, Jont B
2012-10-01
In a previous study on plosives, the 3-Dimensional Deep Search (3DDS) method for the exploration of the necessary and sufficient cues for speech perception was introduced (Li et al. (2010), J. Acoust. Soc. Am. 127(4), 2599-2610). Here, this method is used to isolate the spectral cue regions for perception of the American English fricatives /ʃ, ʒ, s, z, f, v, θ, ð/ in time, frequency, and intensity. The fricatives are analyzed in the context of consonant-vowel utterances, using the vowel /ɑ/. The necessary cues were found to be contained in the frication noise for /ʃ, ʒ, s, z, f, v/. 3DDS analysis isolated the cue regions of /s, z/ between 3.6 and 8 [kHz] and /ʃ, ʒ/ between 1.4 and 4.2 [kHz]. Some utterances were found to contain acoustic components that were unnecessary for correct perception, but caused listeners to hear non-target consonants when the primary cue region was removed; such acoustic components are labeled "conflicting cue regions." The amplitude modulation of the high-frequency frication region by the fundamental F0 was found to be a sufficient cue for voicing. Overall, the 3DDS method allows one to analyze the effects of natural speech components without initial assumptions about where perceptual cues lie in time-frequency space or which elements of production they correspond to.
Testing the limits of long-distance learning: learning beyond a three-segment window.
Finley, Sara
2012-01-01
Traditional flat-structured bigram and trigram models of phonotactics are useful because they capture a large number of facts about phonological processes. Additionally, these models predict that local interactions should be easier to learn than long-distance ones because long-distance dependencies are difficult to capture with these models. Long-distance phonotactic patterns have been observed by linguists in many languages, who have proposed different kinds of models, including feature-based bigram and trigram models, as well as precedence models. Contrary to flat-structured bigram and trigram models, these alternatives capture unbounded dependencies because at an abstract level of representation, the relevant elements are locally dependent, even if they are not adjacent at the observable level. Using an artificial grammar learning paradigm, we provide additional support for these alternative models of phonotactics. Participants in two experiments were exposed to a long-distance consonant-harmony pattern in which the first consonant of a five-syllable word was [s] or [ʃ] ("sh") and triggered a suffix that was either [-su] or [-ʃu] depending on the sibilant quality of this first consonant. Participants learned this pattern, despite the large distance between the trigger and the target, suggesting that when participants learn long-distance phonological patterns, that pattern is learned without specific reference to distance. Copyright © 2012 Cognitive Science Society, Inc.
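The learned dependency is easy to state procedurally: the suffix alternant copies the sibilant quality of the word-initial consonant, regardless of the intervening material. A toy sketch with invented stems (assumed sibilant-initial, as in the experimental stimuli):

```python
# The taught harmony pattern as a procedure: the trigger is the word-initial
# sibilant, four syllables away from the suffix.
def harmony_suffix(stem):
    return "-su" if stem[0] == "s" else "-ʃu"   # stems assumed sibilant-initial

print("sopodemu" + harmony_suffix("sopodemu"))  # sopodemu-su
print("ʃopodemu" + harmony_suffix("ʃopodemu"))  # ʃopodemu-ʃu
```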
Shimokura, Ryota; Akasaka, Sakie; Nishimura, Tadashi; Hosoi, Hiroshi; Matsui, Toshie
2017-02-01
Some Japanese monosyllables contain consonants that are not easily discernible for individuals with sensorineural hearing loss. However, the acoustic features that make these monosyllables difficult to discern have not been clearly identified. Here, this study used the autocorrelation function (ACF), which can capture temporal features of signals, to clarify the factors influencing speech intelligibility. For each monosyllable, five factors extracted from the ACF [Φ(0): total energy; τ1 and φ1: delay time and amplitude of the maximum peak; τe: effective duration; Wφ(0): spectral centroid], voice onset time, speech intelligibility index, and loudness level were compared with the percentage of correctly perceived articulations (144 ears) obtained with 50 Japanese vowel and consonant-vowel monosyllables produced by one female speaker. Results showed that the median effective duration, (τe)med, was strongly correlated with the percentage of correctly perceived articulations of the consonants (r = 0.87, p < 0.01). (τe)med values were computed from running ACFs as the time lag at which the magnitude of the logarithmic ACF envelope had decayed to -10 dB. Effective duration is a measure of temporal pattern persistence, i.e., the duration over which the waveform maintains a stable pattern. The authors postulate that low recognition ability is related to degraded perception of temporal fluctuation patterns.
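The key predictor, effective duration, can be sketched directly from the definition given: the lag at which the normalized ACF, expressed in dB, first decays below -10 dB. The sketch below computes τe for a single frame; the paper's (τe)med would be the median of this value across running frames, and the envelope treatment here is a simplification.

```python
# tau_e from one analysis frame: first lag where the normalized ACF magnitude
# in dB falls below -10 dB (simplified envelope handling).
import numpy as np

def effective_duration(x, fs):
    x = np.asarray(x, dtype=float) - np.mean(x)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]   # one-sided ACF
    acf /= acf[0]                                        # normalize by Phi(0)
    acf_db = 10 * np.log10(np.maximum(np.abs(acf), 1e-12))
    below = np.nonzero(acf_db <= -10.0)[0]
    return below[0] / fs if below.size else len(x) / fs  # seconds
```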
Perceptual assessment of fricative--stop coarticulation.
Repp, B H; Mann, V A
1981-04-01
The perceptual dependence of stop consonants on preceding fricatives [Mann and Repp, J. Acoust. Soc. Am. 69, 548-558 (1981)] was further investigated in two experiments employing both natural and synthetic speech. These experiments consistently replicated our original finding that listeners report velar stops more often following [s] than following [ʃ]. In addition, our data confirmed earlier reports that natural fricative noises (excerpted from utterances of [stɑ], [skɑ], [ʃkɑ]) contain cues to the following stop consonants; this was revealed in subjects' identifications of stops from isolated fricative noises and from stimuli consisting of these noises followed by synthetic CV portions drawn from a [tɑ]-[kɑ] continuum. However, these cues in the noise portion could not account for the contextual effect of fricative identity ([ʃ] versus [s]) on stop perception (more "k" responses following [s]). Rather, this effect seems to be related to a coarticulatory influence of a preceding fricative on stop production: subjects' responses to excised natural CV portions (with bursts and aspiration removed) were biased towards a relatively more forward place of stop articulation when the CVs had originally been preceded by [s], and the identification of a preceding ambiguous fricative was biased in the direction of the original fricative context in which a given CV portion had been produced. These findings support an articulatory explanation for the effect of preceding fricatives on stop consonant perception.
Effects of stimulus response compatibility on covert imitation of vowels.
Adank, Patti; Nuttall, Helen; Bekkering, Harold; Maegherman, Gwijde
2018-03-13
When we observe someone else speaking, we tend to automatically activate the corresponding speech motor patterns. When listening, we therefore covertly imitate the observed speech. Simulation theories of speech perception propose that covert imitation of speech motor patterns supports speech perception. Covert imitation of speech has been studied with interference paradigms, including the stimulus-response compatibility paradigm (SRC). The SRC paradigm measures covert imitation by comparing articulation of a prompt following exposure to a distracter. Responses tend to be faster for congruent than for incongruent distracters; thus, showing evidence of covert imitation. Simulation accounts propose a key role for covert imitation in speech perception. However, covert imitation has thus far only been demonstrated for a select class of speech sounds, namely consonants, and it is unclear whether covert imitation extends to vowels. We aimed to demonstrate that covert imitation effects as measured with the SRC paradigm extend to vowels, in two experiments. We examined whether covert imitation occurs for vowels in a consonant-vowel-consonant context in visual, audio, and audiovisual modalities. We presented the prompt at four time points to examine how covert imitation varied over the distracter's duration. The results of both experiments clearly demonstrated covert imitation effects for vowels, thus supporting simulation theories of speech perception. Covert imitation was not affected by stimulus modality and was maximal for later time points.
Maïonchi-Pino, Norbert; de Cara, Bruno; Ecalle, Jean; Magnan, Annie
2012-04-01
In this study, the authors queried whether French-speaking children with dyslexia were sensitive to consonant sonority and position within syllable boundaries to influence a phonological syllable-based segmentation in silent reading. Participants included 15 French-speaking children with dyslexia, compared with 30 chronological age-matched and reading level-matched controls. Children were tested with an audiovisual recognition task. A target pseudoword (TOLPUDE) was simultaneously presented visually and auditorily and then was compared with a printed test pseudoword that either was identical or differed after the coda deletion (TOPUDE) or the onset deletion (TOLUDE). The intervocalic consonant sequences had either a sonorant coda-sonorant onset (TOR.LADE), sonorant coda-obstruent onset (TOL.PUDE), obstruent coda-sonorant onset (DOT.LIRE), or obstruent coda-obstruent onset (BIC.TADE) sonority profile. All children processed identity better than they processed deletion, especially with the optimal sonorant coda-obstruent onset sonority profile. However, children preserved syllabification (coda deletion; TO.PUDE) rather than resyllabification (onset deletion; TO.LUDE) with intervocalic consonant sequence reductions, especially when sonorant codas were deleted but the optimal intersyllable contact was respected. It was surprising to find that although children with dyslexia generally exhibit phonological and acoustic-phonetic impairments (voicing), they showed sensitivity to the optimal sonority profile and a preference for preserved syllabification. The authors proposed a sonority-modulated explanation to account for phonological syllable-based processing. Educational implications are discussed.
Wada, Junichiro; Hideshima, Masayuki; Inukai, Shusuke; Matsuura, Hiroshi; Wakabayashi, Noriyuki
2014-01-01
To investigate the effects of the width and cross-sectional shape of the major connectors of maxillary dentures located in the middle area of the palate on the accuracy of phonetic output of consonants using an originally developed speech recognition system. Nine adults (4 males and 5 females, aged 24-26 years) with sound dentition were recruited. The following six sounds were considered: [ʃi], [tʃi], [ɾi], [ni], [çi], and [ki]. The experimental connectors were fabricated to simulate bars (narrow, 8-mm width) and plates (wide, 20-mm width). Two types of cross-sectional shapes in the sagittal plane were specified: flat and plump edge. The appearance ratio of phonetic segment labels was calculated with the speech recognition system to indicate the accuracy of phonetic output. Statistical analysis was conducted using one-way ANOVA and Tukey's test. The mean appearance ratio of correct labels (MARC) significantly decreased for [ni] with the plump edge (narrow connector) and for [ki] with both the flat and plump edge (wide connectors). For [çi], the MARCs tended to be lower with flat plates. There were no significant differences for the other consonants. The width and cross-sectional shape of the connectors had limited effects on the articulation of consonants at the palate. © 2015 S. Karger AG, Basel.
Gated audiovisual speech identification in silence vs. noise: effects on time and accuracy
Moradi, Shahram; Lidestam, Björn; Rönnberg, Jerker
2013-01-01
This study investigated the degree to which audiovisual presentation (compared to auditory-only presentation) affected isolation points (IPs; the amount of time required for the correct identification of speech stimuli using a gating paradigm) in silence and noise conditions. The study expanded on the findings of Moradi et al. (under revision), using the same stimuli but presented audiovisually instead of auditorily only. The results showed that noise impeded the identification of consonants and words (i.e., delayed IPs and lowered accuracy), but not the identification of final words in sentences. In comparison with the previous study by Moradi et al., it can be concluded that the provision of visual cues expedited IPs and increased the accuracy of speech stimuli identification in both silence and noise. The implications of the results are discussed in terms of models for speech understanding. PMID:23801980
Influence of musical expertise on segmental and tonal processing in Mandarin Chinese.
Marie, Céline; Delogu, Franco; Lampis, Giulia; Belardinelli, Marta Olivetti; Besson, Mireille
2011-10-01
A same-different task was used to test the hypothesis that musical expertise improves the discrimination of tonal and segmental (consonant, vowel) variations in a tone language, Mandarin Chinese. Two four-word sequences (prime and target) were presented to French musicians and nonmusicians unfamiliar with Mandarin, and event-related brain potentials were recorded. Musicians detected both tonal and segmental variations more accurately than nonmusicians. Moreover, tonal variations were associated with higher error rate than segmental variations and elicited an increased N2/N3 component that developed 100 msec earlier in musicians than in nonmusicians. Finally, musicians also showed enhanced P3b components to both tonal and segmental variations. These results clearly show that musical expertise influenced the perceptual processing as well as the categorization of linguistic contrasts in a foreign language. They show positive music-to-language transfer effects and open new perspectives for the learning of tone languages.
Toward An Understanding of Cluster Evolution: A Deep X-Ray Selected Cluster Catalog from ROSAT
NASA Technical Reports Server (NTRS)
Jones, Christine; Oliversen, Ronald (Technical Monitor)
2002-01-01
In the past year, we have focused on studying individual clusters found in this sample with Chandra, as well as using Chandra to measure the luminosity-temperature relation for a sample of distant clusters identified through the ROSAT study, and finally we are continuing our study of fossil groups. For the luminosity-temperature study, we compared a sample of nearby clusters with a sample of distant clusters and, for the first time, measured a significant change in the relation as a function of redshift (Vikhlinin et al. in final preparation for submission). We also used our ROSAT analysis to select individual clusters and propose them for Chandra observations. We are now analyzing the Chandra observations of the distant cluster A520, which appears to have undergone a recent merger. Finally, we have completed the analysis of the fossil groups identified in the ROSAT observations. In the past few months, we have derived X-ray fluxes and luminosities as well as X-ray extents for an initial sample of 89 objects. Based on the X-ray extents and the lack of bright galaxies, we have identified 16 fossil groups. We are comparing their X-ray and optical properties with those of optically rich groups. A paper is being readied for submission (Jones, Forman, and Vikhlinin in preparation).
Cognitive dissonance reduction as constraint satisfaction.
Shultz, T R; Lepper, M R
1996-04-01
A constraint satisfaction neural network model (the consonance model) simulated data from the two major cognitive dissonance paradigms of insufficient justification and free choice. In several cases, the model fit the human data better than did cognitive dissonance theory. Superior fits were due to the inclusion of constraints that were not part of dissonance theory and to the increased precision inherent to this computational approach. Predictions generated by the model for a free choice between undesirable alternatives were confirmed in a new psychological experiment. The success of the consonance model underscores important, unforeseen similarities between what had been formerly regarded as the rather exotic process of dissonance reduction and a variety of other, more mundane psychological processes. Many of these processes can be understood as the progressive application of constraints supplied by beliefs and attitudes.
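The consonance model's core mechanism, maximizing the satisfaction of weighted constraints among cognitions, can be caricatured in a few lines: symmetric positive weights for consonant links, negative weights for dissonant links, and activation updates that climb the consonance function ½·aᵀWa. The three "cognitions" and the weights below are invented for illustration, not taken from the model's simulations.

```python
# Toy constraint-satisfaction relaxation toward maximal consonance C(a) = 0.5 * a^T W a.
import numpy as np

W = np.array([[0.0,  0.5, -0.6],
              [0.5,  0.0,  0.4],
              [-0.6, 0.4,  0.0]])       # +: consonant link, -: dissonant link (invented)
a = np.array([0.2, 0.9, 0.8])           # initial activations in [0, 1]

for _ in range(100):                    # gradient ascent on consonance, clipped to [0, 1]
    a = np.clip(a + 0.05 * (W @ a), 0.0, 1.0)

print("settled activations:", a.round(2))
print("consonance:", 0.5 * a @ W @ a)
```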
Bratakos, M S; Reed, C M; Delhorne, L A; Denesvich, G
2001-06-01
The objective of this study was to compare the effects of a single-band envelope cue as a supplement to speechreading of segmentals and sentences when presented through either the auditory or tactual modality. The supplementary signal, which consisted of a 200-Hz carrier amplitude-modulated by the envelope of an octave band of speech centered at 500 Hz, was presented through a high-performance single-channel vibrator for tactual stimulation or through headphones for auditory stimulation. Normal-hearing subjects were trained and tested on the identification of a set of 16 medial vowels in /b/-V-/d/ context and a set of 24 initial consonants in C-/a/-C context under five conditions: speechreading alone (S), auditory supplement alone (A), tactual supplement alone (T), speechreading combined with the auditory supplement (S+A), and speechreading combined with the tactual supplement (S+T). Performance on various speech features was examined to determine the contribution of different features toward improvements under the aided conditions for each modality. Performance on the combined conditions (S+A and S+T) was compared with predictions generated from a quantitative model of multi-modal performance. To explore the relationship between benefits for segmentals and for connected speech within the same subjects, sentence reception was also examined for the three conditions of S, S+A, and S+T. For segmentals, performance generally followed the pattern of T < A < S < S+T < S+A. Significant improvements to speechreading were observed with both the tactual and auditory supplements for consonants (10 and 23 percentage-point improvements, respectively), but only with the auditory supplement for vowels (a 10 percentage-point improvement). The results of the feature analyses indicated that improvements to speechreading arose primarily from improved performance on the features low and tense for vowels and on the features voicing, nasality, and plosion for consonants. These improvements were greater for auditory relative to tactual presentation. When predicted percent-correct scores for the multi-modal conditions were compared with observed scores, the predicted values always exceeded observed values and the predictions were somewhat more accurate for the S+A than for the S+T conditions. For sentences, significant improvements to speechreading were observed with both the auditory and tactual supplements for high-context materials but again only with the auditory supplement for low-context materials. The tactual supplement provided a relative gain to speechreading of roughly 25% for all materials except low-context sentences (where gain was only 10%), whereas the auditory supplement provided relative gains of roughly 50% (for vowels, consonants, and low-context sentences) to 75% (for high-context sentences). The envelope cue provides a significant benefit to the speechreading of consonant segments when presented through either the auditory or tactual modality and of vowel segments through audition only. These benefits were found to be related to the reception of the same types of features under both modalities (voicing, manner, and plosion for consonants and low and tense for vowels); however, benefits were larger for auditory compared with tactual presentation. The benefits observed for segmentals appear to carry over into benefits for sentence reception under both modalities.
NASA Astrophysics Data System (ADS)
Qian, Yibin; Ren, Zhongzhou; Ni, Dongdong
2016-08-01
We further investigate the cluster emission from heavy nuclei beyond the lead region in the framework of the preformed cluster model. The refined cluster-core potential is constructed by the double-folding integral of the density distributions of the daughter nucleus and the emitted cluster, where the radius or the diffuseness parameter in the Fermi density distribution formula is determined according to the available experimental data on charge radii and neutron skin thickness. The Schrödinger equation of the cluster-daughter relative motion is then solved with outgoing Coulomb wave-function boundary conditions to obtain the decay width. The present decay width of cluster emitters is found to be clearly enhanced compared to that in the previous case, which involved a fixed parametrization for the density distributions of daughter nuclei and clusters. In addition, the nuclear deformation of clusters is introduced into the calculations, and the degree of its influence on the final decay half-life is assessed. Moreover, the effect of a bubble (centrally depressed) density distribution of clusters on the final decay width is carefully discussed.
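For reference, the double-folding construction described has the standard form below, where ρ_d and ρ_c are the daughter and cluster density distributions of two-parameter Fermi form, with half-density radius R_{1/2} and diffuseness a, the quantities the authors constrain with charge-radius and neutron-skin data; the effective nucleon-nucleon interaction v_NN is not specified in the abstract:

```latex
V_{\mathrm{DF}}(\mathbf{R}) =
  \int \mathrm{d}^{3}r_{1} \int \mathrm{d}^{3}r_{2}\,
  \rho_{d}(\mathbf{r}_{1})\,\rho_{c}(\mathbf{r}_{2})\,
  v_{NN}\!\left(\left|\mathbf{R}+\mathbf{r}_{2}-\mathbf{r}_{1}\right|\right),
\qquad
\rho(r) = \frac{\rho_{0}}{1+\exp\!\left[(r-R_{1/2})/a\right]}
```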
The development of phonological skills in late and early talkers
KEHOE, Margaret; CHAPLIN, Elisa; MUDRY, Pauline; FRIEND, Margaret
2016-01-01
This study examined the relationship between phonological and lexical development in a group of French-speaking children (n=30), aged 29 months. The participants were divided into three sub-groups based on the number of words in their expressive vocabulary: low vocabulary (below the 15th percentile) ("late talkers"); average-sized vocabulary (40th-60th percentile) ("middle group"); and advanced vocabulary (above the 90th percentile) ("precocious" or "early talkers"). The phonological abilities (e.g., phonemic inventory, percentage of correct consonants, and phonological processes) of the three groups were compared. The comparison was based on analyses of spontaneous language samples. Most findings were consistent with previous results found in English-speaking children, indicating that the phonological abilities of late talkers are less well developed than those of children with average-sized vocabularies, which in turn are less well developed than those of children with advanced vocabularies. Nevertheless, several phonological measures were not related to vocabulary size, in particular those concerning syllable-final position. These findings differ from those obtained in English. The article finally discusses the clinical implications of the findings for children with delayed language development. PMID:26924855
The Effects of Emotion on Second Formant Frequency Fluctuations in Adults Who Stutter.
Bauerly, Kim R
2018-06-05
Changes in second formant frequency fluctuations (FFF2) were examined in adults who stutter (AWS) and adults who do not stutter (ANS) when producing nonwords under varying emotional conditions. Ten AWS and 10 ANS viewed images selected from the International Affective Picture System representing dimensions of arousal (e.g., excited versus bored) and hedonic valence (e.g., happy versus sad). Immediately following picture presentation, participants produced a consonant-vowel + final /t/ (CVt) nonword consisting of an initial /p/, /b/, /s/, or /z/, followed by a vowel (/i/, /u/, /ɛ/) and a final /t/. CVt tokens were assessed for word duration and FFF2. Word durations were significantly longer in the AWS than in the ANS across conditions. Although these differences appeared to increase under arousing conditions, no interaction was found. Results for FFF2 revealed a significant group-by-condition interaction. Post hoc analysis indicated that this was due to the AWS showing significantly greater FFF2 when speaking under conditions eliciting increases in arousal and unpleasantness. ANS showed little change in FFF2 across conditions. The results suggest that the articulatory stability of AWS is more susceptible to breakdown under negative emotional influences. © 2018 S. Karger AG, Basel.
ERIC Educational Resources Information Center
Gessman, Albert M.
1990-01-01
Discusses phonic shifting or sound shifts through an examination of Grimm's Law, or the Germanic Consonant Shift. The discussion includes comments on why the phonic shift developed and its pattern. (10 references) (GLR)
Bernstein, Lynne E.; Eberhardt, Silvio P.; Auer, Edward T.
2014-01-01
Training with audiovisual (AV) speech has been shown to promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. In Experiment 1, we investigated whether AV speech promotes auditory-only (AO) perceptual learning in prelingually deafened adults with late-acquired cochlear implants. Participants were assigned to learn associations between spoken disyllabic C(=consonant)V(=vowel)CVC non-sense words and non-sense pictures (fribbles), under AV and then AO (AV-AO; or counter-balanced AO then AV, AO-AV, during Periods 1 then 2) training conditions. After training on each list of paired-associates (PA), testing was carried out AO. Across all training, AO PA test scores improved (7.2 percentage points) as did identification of consonants in new untrained CVCVC stimuli (3.5 percentage points). However, there was evidence that AV training impeded immediate AO perceptual learning: During Period-1, training scores across AV and AO conditions were not different, but AO test scores were dramatically lower in the AV-trained participants. During Period-2 AO training, the AV-AO participants obtained significantly higher AO test scores, demonstrating their ability to learn the auditory speech. Across both orders of training, whenever training was AV, AO test scores were significantly lower than training scores. Experiment 2 repeated the procedures with vocoded speech and 43 normal-hearing adults. Following AV training, their AO test scores were as high as or higher than following AO training. Also, their CVCVC identification scores patterned differently than those of the cochlear implant users. In Experiment 1, initial consonants were most accurate, and in Experiment 2, medial consonants were most accurate. We suggest that our results are consistent with a multisensory reverse hierarchy theory, which predicts that, whenever possible, perceivers carry out perceptual tasks immediately based on the experience and biases they bring to the task. We point out that while AV training could be an impediment to immediate unisensory perceptual learning in cochlear implant patients, it was also associated with higher scores during training. PMID:25206344
Bootstrap Percolation on Homogeneous Trees Has 2 Phase Transitions
NASA Astrophysics Data System (ADS)
Fontes, L. R. G.; Schonmann, R. H.
2008-09-01
We study the threshold-θ bootstrap percolation model on the homogeneous tree with degree b+1, 2 ≤ θ ≤ b, and initial density p. It is known that there exists a nontrivial critical value for p, which we call p_f, such that a) for p > p_f, the final bootstrapped configuration is fully occupied for almost every initial configuration, and b) if p < p_f, then for almost every initial configuration, the final bootstrapped configuration has density of occupied vertices less than 1. In this paper, we establish the existence of a distinct critical value for p, p_c, such that 0 < p_c < p_f, with the following properties: 1) if p ≤ p_c, then for almost every initial configuration there is no infinite cluster of occupied vertices in the final bootstrapped configuration; 2) if p > p_c, then for almost every initial configuration there are infinite clusters of occupied vertices in the final bootstrapped configuration. Moreover, we show that 3) for p < p_c, the distribution of the occupied cluster size in the final bootstrapped configuration has an exponential tail; 4) at p = p_c, the expected occupied cluster size in the final bootstrapped configuration is infinite; 5) the probability of percolation of occupied vertices in the final bootstrapped configuration is continuous on [0, p_f] and analytic on (p_c, p_f), admitting an analytic continuation from the right at p_c and, only in the case θ = b, also from the left at p_f.
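The dynamics are easy to probe numerically on finite trees. The Monte Carlo sketch below runs threshold-θ bootstrap percolation on a depth-limited b-ary tree (interior degree b+1) and reports the final occupied density; the depth and parameter values are arbitrary illustrative choices, and a finite tree only approximates the infinite homogeneous tree analyzed in the paper.

```python
# Monte Carlo sketch of threshold-theta bootstrap percolation on a finite b-ary tree.
import numpy as np

def bootstrap_density(b=3, theta=2, p=0.6, depth=10, seed=0):
    rng = np.random.default_rng(seed)
    n = (b ** (depth + 1) - 1) // (b - 1)        # nodes of a full b-ary tree
    occ = rng.random(n) < p                      # initial occupation at density p
    changed = True
    while changed:                               # occupy any vacant vertex with
        changed = False                          # >= theta occupied neighbours
        for i in range(n):
            if occ[i]:
                continue
            nbrs = list(range(b * i + 1, min(b * i + 1 + b, n)))  # children
            if i > 0:
                nbrs.append((i - 1) // b)                         # parent
            if sum(occ[j] for j in nbrs) >= theta:
                occ[i] = changed = True
    return occ.mean()

print(bootstrap_density(p=0.55), bootstrap_density(p=0.80))
```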
Hemispatial neglect and serial order in verbal working memory.
Antoine, Sophie; Ranzini, Mariagrazia; van Dijck, Jean-Philippe; Slama, Hichem; Bonato, Mario; Tousch, Ann; Dewulf, Myrtille; Bier, Jean-Christophe; Gevers, Wim
2018-01-09
Working memory refers to our ability to actively maintain and process a limited amount of information during a brief period of time. Often, not only the information itself but also its serial order is crucial for good task performance. It was recently proposed that serial order is grounded in spatial cognition. Here, we compared the performance of a group of right hemisphere-damaged patients with hemispatial neglect to healthy controls in verbal working memory tasks. Participants memorized sequences of consonants at span level and had to judge whether a target consonant belonged to the memorized sequence (item task) or whether a pair of consonants was presented in the same order as in the memorized sequence (order task). In line with the idea that serial order is grounded in spatial cognition, we found that neglect patients made significantly more errors in the order task than in the item task compared to healthy controls. Furthermore, this deficit seemed functionally related to neglect severity and was more frequently observed following right posterior brain damage. Interestingly, this specific impairment for serial order in verbal working memory was not lateralized. We hypothesize that the serial-order deficit in neglect patients reflects either or both of (1) a reduced spatial working memory capacity for keeping track of the spatial codes that provide memorized items with a positional context, and (2) a spatial compression of these codes in the intact part of representational space. © 2018 The British Psychological Society.
Early lexical characteristics of toddlers with cleft lip and palate.
Hardin-Jones, Mary; Chapman, Kathy L
2014-11-01
Objective: To examine development of early expressive lexicons in toddlers with cleft palate to determine whether they differ from those of noncleft toddlers in terms of size and lexical selectivity. Design: Retrospective. Patients: A total of 37 toddlers with cleft palate and 22 noncleft toddlers. Main Outcome Measures: The groups were compared for size of expressive lexicon reported on the MacArthur Communicative Development Inventory and the percentage of words beginning with obstruents and sonorants produced in a language sample. Differences between groups in the percentage of word-initial consonants correct on the language sample were also examined. Results: Although expressive vocabulary was comparable at 13 months of age for both groups, size of the lexicon for the cleft group was significantly smaller than that for the noncleft group at 21 and 27 months of age. Toddlers with cleft palate produced significantly more words beginning with sonorants and fewer words beginning with obstruents in their spontaneous speech samples. They were also less accurate when producing word-initial obstruents compared with the noncleft group. Conclusions: Toddlers with cleft palate demonstrate a slower rate of lexical development compared with their noncleft peers. The preference that toddlers with cleft palate demonstrate for words beginning with sonorants could suggest they are selecting words that begin with consonants that are easier for them to produce. An alternative explanation might be that because these children are less accurate in the production of obstruent consonants, listeners may not always identify obstruents when they occur.
Vatakis, Argiro; Maragos, Petros; Rodomagoulakis, Isidoros; Spence, Charles
2012-01-01
We investigated how the physical differences associated with the articulation of speech affect the temporal aspects of audiovisual speech perception. Video clips of consonants and vowels uttered by three different speakers were presented. The video clips were analyzed using an auditory-visual signal saliency model in order to compare signal saliency and behavioral data. Participants made temporal order judgments (TOJs) regarding which speech stream (auditory or visual) had been presented first. The sensitivity of participants' TOJs and the point of subjective simultaneity (PSS) were analyzed as a function of the place and manner of articulation and voicing for consonants, and the height/backness of the tongue and lip-roundedness for vowels. We expected that in the case of place of articulation and roundedness, where the visual speech signal is more salient, temporal perception of speech would be modulated by the visual speech signal. No such effect was expected for manner of articulation or height. The results demonstrate that for place and manner of articulation, participants' temporal percept was affected (although not always significantly) by highly salient speech signals, with the visual signals requiring smaller visual leads at the PSS. This was not the case when height was evaluated. These findings suggest that in the case of audiovisual speech perception, a highly salient visual speech signal may lead to higher probabilities regarding the identity of the auditory signal that modulate the temporal window of multisensory integration of the speech stimulus. PMID:23060756
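PSS and TOJ sensitivity are conventionally read off a psychometric function fitted to the order judgments. A generic analysis sketch with invented data; the sign convention (positive SOA meaning the visual stream leads) and the logistic form are assumptions, not details from the paper:

```python
# Generic TOJ analysis: fit a logistic to "visual first" proportions over SOA;
# the PSS is the 50% point and the JND ~ slope * ln 3. Data are invented.
import numpy as np
from scipy.optimize import curve_fit

soa_ms = np.array([-240.0, -120.0, -60.0, 0.0, 60.0, 120.0, 240.0])
p_visual_first = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.98])

def logistic(x, pss, slope):
    return 1.0 / (1.0 + np.exp(-(x - pss) / slope))

(pss, slope), _ = curve_fit(logistic, soa_ms, p_visual_first, p0=(0.0, 50.0))
print(f"PSS = {pss:.1f} ms, JND ~ {slope * np.log(3):.1f} ms")
```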
Context cue focality influences strategic prospective memory monitoring.
Hunter Ball, B; Bugg, Julie M
2018-02-12
Monitoring the environment for the occurrence of prospective memory (PM) targets is a resource-demanding process that produces cost (e.g., slower responding) to ongoing activities. However, research suggests that individuals are able to monitor strategically by using contextual cues to reduce monitoring in contexts in which PM targets are not expected to occur. In the current study, we investigated the processes supporting context identification (i.e., determining whether or not the context is appropriate for monitoring) by testing the context cue focality hypothesis. This hypothesis predicts that the ability to monitor strategically depends on whether the ongoing task orients attention to the contextual cues that are available to guide monitoring. In Experiment 1, participants performed an ongoing lexical decision task and were told that PM targets (TOR syllable) would only occur in word trials (focal context cue condition) or in items starting with consonants (nonfocal context cue condition). In Experiment 2, participants performed an ongoing first letter judgment (consonant/vowel) task and were told that PM targets would only occur in items starting with consonants (focal context cue condition) or in word trials (nonfocal context cue condition). Consistent with the context cue focality hypothesis, strategic monitoring was only observed during focal context cue conditions in which the type of ongoing task processing automatically oriented attention to the relevant features of the contextual cue. These findings suggest that strategic monitoring is dependent on limited-capacity processing resources and may be relatively limited when the attentional demands of context identification are sufficiently high.
Enhanced Sensitivity to Subphonemic Segments in Dyslexia: A New Instance of Allophonic Perception
Serniclaes, Willy; Seck, M’ballo
2018-01-01
Although dyslexia can be individuated in many different ways, it has only three discernable sources: a visual deficit that affects the perception of letters, a phonological deficit that affects the perception of speech sounds, and an audio-visual deficit that disturbs the association of letters with speech sounds. However, the very nature of each of these core deficits remains debatable. The phonological deficit in dyslexia, which is generally attributed to a deficit of phonological awareness, might result from a specific mode of speech perception characterized by the use of allophonic (i.e., subphonemic) units. Here we will summarize the available evidence and present new data in support of the “allophonic theory” of dyslexia. Previous studies have shown that the dyslexia deficit in the categorical perception of phonemic features (e.g., the voicing contrast between /t/ and /d/) is due to the enhanced sensitivity to allophonic features (e.g., the difference between two variants of /d/). Another consequence of allophonic perception is that it should also give rise to an enhanced sensitivity to allophonic segments, such as those that take place within a consonant cluster. This latter prediction is validated by the data presented in this paper. PMID:29587419
[Development and equivalence evaluation of spondee lists of mandarin speech test materials].
Zhang, Hua; Wang, Shuo; Wang, Liang; Chen, Jing; Chen, Ai-ting; Guo, Lian-sheng; Zhao, Xiao-yan; Ji, Chen
2006-06-01
To edit spondee (disyllabic) word lists as part of the Mandarin speech test materials (MSTM), to serve as basic speech materials for routine tests in clinics and laboratories. Two groups of professionals (audiologists, Chinese and Mandarin phonologists, linguists and statisticians) were first assembled, and the editing principles were established after three round-table meetings. Ten spondee lists, each with 50 words, were edited and recorded onto cassettes. All lists were phonemically balanced along three dimensions: vowels, consonants and Chinese tones. Seventy-three normal-hearing college students were tested, with speech presented monaurally by earphone. Three statistical methods were used for the equivalence analysis. Correlation analysis showed that all lists except List 5 were strongly correlated. Cluster analysis showed that the ten lists could be classified into two groups, but the kappa test showed that the lists' homogeneity was poor. Spondee lists are among the most routine speech test materials; their editing, recording and equivalence evaluation are affected by many factors and require multidisciplinary cooperation. All lists edited in the present study need further modification in recording and testing before they can be used clinically and in research.
Spasmodic Dysphonia: a Laryngeal Control Disorder Specific to Speech
Ludlow, Christy L.
2016-01-01
Spasmodic dysphonia (SD) is a rare neurological disorder that emerges in middle age, is usually sporadic, and affects intrinsic laryngeal muscle control only during speech. Spasmodic bursts in particular laryngeal muscles disrupt voluntary control during vowel sounds in adductor SD and interfere with voice onset after voiceless consonants in abductor SD. Little is known about its origins; it is classified as a focal dystonia secondary to an unknown neurobiological mechanism that produces a chronic abnormality of laryngeal motor neuron regulation during speech. It develops primarily in females and does not interfere with breathing, crying, laughter, and shouting. Recent postmortem studies have implicated the accumulation of clusters in the parenchyma and perivascular regions with inflammatory changes in the brainstem in one to two cases. A few cases with single mutations in THAP1, a gene involved in transcription regulation, suggest that a weak genetic predisposition may contribute to mechanisms causing a nonprogressive abnormality in laryngeal motor neuron control for speech but not for vocal emotional expression. Research is needed to address the basic cellular and proteomic mechanisms that produce this disorder to provide intervention that could target the pathogenesis of the disorder rather than only providing temporary symptom relief. PMID:21248101
Spasmodic dysphonia: a laryngeal control disorder specific to speech.
Ludlow, Christy L
2011-01-19
Spasmodic dysphonia (SD) is a rare neurological disorder that emerges in middle age, is usually sporadic, and affects intrinsic laryngeal muscle control only during speech. Spasmodic bursts in particular laryngeal muscles disrupt voluntary control during vowel sounds in adductor SD and interfere with voice onset after voiceless consonants in abductor SD. Little is known about its origins; it is classified as a focal dystonia secondary to an unknown neurobiological mechanism that produces a chronic abnormality of laryngeal motor neuron regulation during speech. It develops primarily in females and does not interfere with breathing, crying, laughter, and shouting. Recent postmortem studies have implicated the accumulation of clusters in the parenchyma and perivascular regions with inflammatory changes in the brainstem in one to two cases. A few cases with single mutations in THAP1, a gene involved in transcription regulation, suggest that a weak genetic predisposition may contribute to mechanisms causing a nonprogressive abnormality in laryngeal motor neuron control for speech but not for vocal emotional expression. Research is needed to address the basic cellular and proteomic mechanisms that produce this disorder to provide intervention that could target the pathogenesis of the disorder rather than only providing temporary symptom relief.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-01
... unit is a block cluster, which consists of one or more geographically contiguous census blocks. As in... a number of distinct processes, ranging from forming block clusters, selecting the block clusters... sample of block clusters, while the E Sample is the census of housing units and enumerations in the same...
High Performance Computer Cluster for Theoretical Studies of Roaming in Chemical Reactions
2016-08-30
Final report. A dedicated high-performance computer cluster was... (results published in peer-reviewed journals). Sponsoring/monitoring agency: U.S. Army Research Office, P.O. Box 12211, Research Triangle Park, NC 27709-2211.
ERIC Educational Resources Information Center
Mayr, Robert; Howells, Gwennan; Lewis, Rhonwen
2015-01-01
This study provides the first systematic account of word-final cluster acquisition in bilingual children. To this end, forty Welsh-English bilingual children differing in language dominance and age (2;6 to 5;0) participated in a picture-naming task in English and Welsh. The results revealed significant age and dominance effects on cluster…
Constraints on the Transfer of Perceptual Learning in Accented Speech
Eisner, Frank; Melinger, Alissa; Weber, Andrea
2013-01-01
The perception of speech sounds can be re-tuned through a mechanism of lexically driven perceptual learning after exposure to instances of atypical speech production. This study asked whether this re-tuning is sensitive to the position of the atypical sound within the word. We investigated perceptual learning using English voiced stop consonants, which are commonly devoiced in word-final position by Dutch learners of English. After exposure to a Dutch learner’s productions of devoiced stops in word-final position (but not in any other positions), British English (BE) listeners showed evidence of perceptual learning in a subsequent cross-modal priming task, where auditory primes with devoiced final stops (e.g., “seed”, pronounced [siːtʰ]) facilitated recognition of visual targets with voiced final stops (e.g., SEED). In Experiment 1, this learning effect generalized to test pairs where the critical contrast was in word-initial position, e.g., auditory primes such as “town” facilitated recognition of visual targets like DOWN. Control listeners, who had not heard any stops by the speaker during exposure, showed no learning effects. The generalization to word-initial position did not occur when participants had also heard correctly voiced, word-initial stops during exposure (Experiment 2), and when the speaker was a native BE speaker who mimicked the word-final devoicing (Experiment 3). The readiness of the perceptual system to generalize a previously learned adjustment to other positions within the word thus appears to be modulated by distributional properties of the speech input, as well as by the perceived sociophonetic characteristics of the speaker. The results suggest that the transfer of pre-lexical perceptual adjustments that occur through lexically driven learning can be affected by a combination of acoustic, phonological, and sociophonetic factors. PMID:23554598
Sheffield, Benjamin M; Schuchman, Gerald; Bernstein, Joshua G W
2015-01-01
As cochlear implant (CI) acceptance increases and candidacy criteria are expanded, these devices are increasingly recommended for individuals with less than profound hearing loss. As a result, many individuals who receive a CI also retain acoustic hearing, often in the low frequencies, in the nonimplanted ear (i.e., bimodal hearing) and in some cases in the implanted ear (i.e., hybrid hearing) which can enhance the performance achieved by the CI alone. However, guidelines for clinical decisions pertaining to cochlear implantation are largely based on expectations for postsurgical speech-reception performance with the CI alone in auditory-only conditions. A more comprehensive prediction of postimplant performance would include the expected effects of residual acoustic hearing and visual cues on speech understanding. An evaluation of auditory-visual performance might be particularly important because of the complementary interaction between the speech information relayed by visual cues and that contained in the low-frequency auditory signal. The goal of this study was to characterize the benefit provided by residual acoustic hearing to consonant identification under auditory-alone and auditory-visual conditions for CI users. Additional information regarding the expected role of residual hearing in overall communication performance by a CI listener could potentially lead to more informed decisions regarding cochlear implantation, particularly with respect to recommendations for or against bilateral implantation for an individual who is functioning bimodally. Eleven adults 23 to 75 years old with a unilateral CI and air-conduction thresholds in the nonimplanted ear equal to or better than 80 dB HL for at least one octave frequency between 250 and 1000 Hz participated in this study. Consonant identification was measured for conditions involving combinations of electric hearing (via the CI), acoustic hearing (via the nonimplanted ear), and speechreading (visual cues). The results suggest that the benefit to CI consonant-identification performance provided by the residual acoustic hearing is even greater when visual cues are also present. An analysis of consonant confusions suggests that this is because the voicing cues provided by the residual acoustic hearing are highly complementary with the mainly place-of-articulation cues provided by the visual stimulus. These findings highlight the need for a comprehensive prediction of trimodal (acoustic, electric, and visual) postimplant speech-reception performance to inform implantation decisions. The increased influence of residual acoustic hearing under auditory-visual conditions should be taken into account when considering surgical procedures or devices that are intended to preserve acoustic hearing in the implanted ear. This is particularly relevant when evaluating the candidacy of a current bimodal CI user for a second CI (i.e., bilateral implantation). Although recent developments in CI technology and surgical techniques have increased the likelihood of preserving residual acoustic hearing, preservation cannot be guaranteed in each individual case. Therefore, the potential gain to be derived from bilateral implantation needs to be weighed against the possible loss of the benefit provided by residual acoustic hearing.
Theoretical Analysis of Optical Absorption and Emission in Mixed Noble Metal Nanoclusters.
Day, Paul N; Pachter, Ruth; Nguyen, Kiet A
2018-04-26
In this work, we studied theoretically two hybrid gold-silver clusters, which were reported to have dual-band emission, using density functional theory (DFT) and linear and quadratic response time-dependent DFT (TDDFT). Hybrid functionals were found to successfully predict absorption and emission, although explanation of the NIR emission from the larger cluster (cluster 1) requires significant vibrational excitation in the final state. For the smaller cluster (cluster 2), the ΔH(0-0) value calculated for the T1 → S0 transition, using the PBE0 functional, is in good agreement with the measured NIR emission, and the calculated T2 → S0 value is in fair agreement with the measured visible emission. The calculated T1 → S0 phosphorescence ΔH(0-0) for cluster 1 is close to the measured visible emission energy. In order for the calculated phosphorescence for cluster 1 to agree with the intense NIR emission reported experimentally, the vibrational energy of the final state (S0) is required to be about 0.7 eV greater than the zero-point vibrational energy.
On the interaction of deaffrication and consonant harmony*
Dinnsen, Daniel A.; Gierut, Judith A.; Morrisette, Michele L.; Green, Christopher R.; Farris-Trimble, Ashley W.
2010-01-01
Error patterns in children’s phonological development are often described as simplifying processes that can interact with one another with different consequences. Some interactions limit the applicability of an error pattern, and others extend it to more words. Theories predict that error patterns interact to their full potential. While specific interactions have been documented for certain pairs of processes, no developmental study has shown that the range of typologically predicted interactions occurs for those processes. To determine whether this anomaly is an accidental gap or a systematic peculiarity of particular error patterns, two commonly occurring processes were considered, namely Deaffrication and Consonant Harmony. Results are reported from a cross-sectional and longitudinal study of 12 children (age 3;0 – 5;0) with functional phonological delays. Three interaction types were attested to varying degrees. The longitudinal results further instantiated the typology and revealed a characteristic trajectory of change. Implications of these findings are explored. PMID:20513256
Speech outcomes in Cantonese patients after glossectomy.
Wong, Ripley Kit; Poon, Esther Sok-Man; Woo, Cynthia Yuen-Man; Chan, Sabina Ching-Shun; Wong, Elsa Siu-Ping; Chu, Ada Wai-Sze
2007-08-01
We sought to determine the major factors affecting speech production in Cantonese-speaking glossectomized patients, and analyzed the error patterns. Forty-one Cantonese-speaking subjects who had undergone glossectomy ≥6 months previously were recruited. Speech production evaluation included (1) phonetic error analysis in nonsense syllables; (2) speech intelligibility in sentences evaluated by naive listeners; and (3) overall speech intelligibility in conversation evaluated by experienced speech therapists. Patients receiving adjuvant radiotherapy had significantly poorer segmental and connected speech production. Total or subtotal glossectomy also resulted in poor speech outcomes. Patients having free flap reconstruction showed the best speech outcomes. Patients without lymph node metastasis had significantly better speech scores than patients with lymph node metastasis. Initial consonant production had the worst scores, while vowel production was the least affected. Speech outcomes of Cantonese-speaking glossectomized patients depended on the severity of the disease. Initial consonants had the greatest effect on speech intelligibility.
Sandberg, Petra; Rönnlund, Michael; Derwinger-Hallberg, Anna; Stigsdotter Neely, Anna
2016-10-01
The study investigated the relationship between cognitive factors and gains in number recall following training in a number-consonant mnemonic in a sample of 112 older adults (M = 70.9 years). The cognitive factors examined included baseline episodic memory, working memory, processing speed, and verbal knowledge. In addition, predictors of maintenance of gains to a follow-up assessment, eight months later, were examined. Whereas working memory was a prominent predictor of baseline recall, the magnitude of gains in recall from pre- to post-test assessments was predicted by baseline episodic memory, processing speed, and verbal knowledge. Verbal knowledge was the only significant predictor of maintenance. Collectively, the results indicate the need to consider multiple factors to account for individual differences in memory plasticity. The potential contribution of additional factors to individual differences in memory plasticity is discussed.
[Observation of oral actions using digital image processing system].
Ichikawa, T; Komoda, J; Horiuchi, M; Ichiba, H; Hada, M; Matsumoto, N
1990-04-01
A new digital image processing system to observe oral actions is proposed. The system provides analyses of motion pictures along with other physiological signals. The major components are a video tape recorder, a digital image processor, a percept scope, a CCD camera, an A/D converter and a personal computer. Five reference points were marked on the lip and eyeglasses of 9 adult subjects. Lip movements were recorded and analyzed using the system when uttering five vowels and [ka, sa, ta, ha, ra, ma, pa, ba]. 1. Positions of the lip when uttering the five vowels were clearly classified. 2. Active articulatory movements of the lip were not recognized when uttering the consonants [k, s, t, h, r]; lip movements appeared to depend on tongue and mandibular movements. Downward and rearward movements of the upper lip, and upward and forward movements of the lower lip, were observed when uttering the consonants [m, p, b].
Fricative-stop coarticulation: acoustic and perceptual evidence.
Repp, B H; Mann, V A
1982-06-01
Eight native speakers of American English each produced ten tokens of all possible CV, FCV, and VFCV utterances with V = [a] or [u], F = [s] or [ʃ], and C = [t] or [k]. Acoustic analysis showed that the formant transition onsets following the stop consonant release were systematically influenced by the preceding fricative, although there were large individual differences. In particular, F3 and F4 tended to be higher following [s] than following [ʃ]. The coarticulatory effects were equally large in FCV (e.g., /sta/) and VFCV (e.g., /asda/) utterances; that is, they were not reduced when a syllable boundary intervened between fricative and stop. In a parallel perceptual study, the CV portions of these utterances (with release bursts removed to provoke errors) were presented to listeners for identification of the stop consonant. The pattern of place-of-articulation confusions, too, revealed coarticulatory effects due to the excised fricative context.
The Neural Representation of Consonant-Vowel Transitions in Adults Who Wear Hearing Aids
Tremblay, Kelly L.; Kalstein, Laura; Billings, Curtis J.; Souza, Pamela E.
2006-01-01
Hearing aids help compensate for disorders of the ear by amplifying sound; however, their effectiveness also depends on the central auditory system's ability to represent and integrate spectral and temporal information delivered by the hearing aid. The authors report that the neural detection of time-varying acoustic cues contained in speech can be recorded in adult hearing aid users using the acoustic change complex (ACC). Seven adults (50–76 years) with mild to severe sensorineural hearing loss participated in the study. When presented with 2 identifiable consonant-vowel (CV) syllables (“shee” and “see”), the neural detection of CV transitions (as indicated by the presence of a P1-N1-P2 response) was different for each speech sound. More specifically, the latency of the evoked neural response coincided in time with the onset of the vowel, similar to the latency patterns the authors previously reported in normal-hearing listeners. PMID:16959736
Thompson, Jennifer A; Fielding, Katherine; Hargreaves, James; Copas, Andrew
2017-12-01
Background/Aims We sought to optimise the design of stepped wedge trials with an equal allocation of clusters to sequences and explored sample size comparisons with alternative trial designs. Methods We developed a new expression for the design effect for a stepped wedge trial, assuming that observations are equally correlated within clusters and that there is an equal number of observations in each period between sequence switches to the intervention. We minimised the design effect with respect to (1) the fraction of observations before the first and after the final sequence switches (the periods with all clusters in the control or intervention condition, respectively) and (2) the number of sequences. We compared the design effect of this optimised stepped wedge trial to the design effects of a parallel cluster-randomised trial, a cluster-randomised trial with baseline observations, and a hybrid trial design (a mixture of cluster-randomised trial and stepped wedge trial) with the same total cluster size for all designs. Results We found that a stepped wedge trial with an equal allocation to sequences is optimised by obtaining all observations after the first sequence switches and before the final sequence switches to the intervention; this means that, during the observation period, the first sequence is always in the intervention condition and the last sequence is always in the control condition. With this design, the optimal number of sequences is [Formula: see text], where [Formula: see text] is the cluster-mean correlation, [Formula: see text] is the intracluster correlation coefficient, and m is the total cluster size. The optimal number of sequences is small when the intracluster correlation coefficient and cluster size are small and large when the intracluster correlation coefficient or cluster size is large. A cluster-randomised trial remains more efficient than the optimised stepped wedge trial when the intracluster correlation coefficient or cluster size is small. A cluster-randomised trial with baseline observations always requires a larger sample size than the optimised stepped wedge trial. The hybrid design can always match or exceed the efficiency of the optimised stepped wedge trial, but by at most 5%. We provide a strategy for selecting a design if the optimal number of sequences is unfeasible. For a non-optimal number of sequences, the sample size may be reduced by allowing a proportion of observations before the first or after the final sequence has switched. Conclusion The standard stepped wedge trial is inefficient. To reduce sample sizes when a hybrid design is unfeasible, stepped wedge trial designs should have no observations before the first sequence switches or after the final sequence switches.
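The record above elides the paper's own formulas ("[Formula: see text]"), so they are not reconstructed here. As a hedged illustration of the quantities being compared, the Python sketch below computes the standard design effect of a parallel cluster-randomised trial, DE = 1 + (m - 1)ρ, together with one common definition of the cluster-mean correlation, mρ / (1 + (m - 1)ρ); that definition is an assumption of this sketch, not taken from the record.

```python
# A minimal sketch of the quantities the abstract compares. Only the
# parallel cluster-randomised trial formula is standard; the stepped
# wedge optimum is elided in the record and is not reconstructed here.

def design_effect_crt(m: int, icc: float) -> float:
    """Design effect of a parallel cluster-randomised trial with
    cluster size m and intracluster correlation coefficient icc."""
    return 1 + (m - 1) * icc

def cluster_mean_correlation(m: int, icc: float) -> float:
    """One common definition of the cluster-mean correlation,
    m*icc / (1 + (m - 1)*icc) -- an assumption of this sketch."""
    return m * icc / (1 + (m - 1) * icc)

# Small icc and cluster size favour the parallel design, per the abstract.
for icc in (0.01, 0.05, 0.2):
    for m in (10, 100):
        print(f"icc={icc:<5} m={m:<4} "
              f"DE_crt={design_effect_crt(m, icc):6.2f} "
              f"cluster-mean corr={cluster_mean_correlation(m, icc):.3f}")
```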
The Common Prescription Patterns Based on the Hierarchical Clustering of Herb-Pairs Efficacies
2016-01-01
Prescription patterns are rules or regularities used to generate, recognize, or judge a prescription. Most existing studies have focused on prescription patterns specific to particular diseases or syndromes, while little attention has been paid to the common patterns, which reflect a global view of the regularities of prescriptions. In this paper, we designed a method, CPPM, to find the common prescription patterns. CPPM is based on the hierarchical clustering of herb-pair efficacies (HPEs). Firstly, the HPEs were hierarchically clustered; secondly, the individual herbs were labeled by their HPEC (the clusters of HPEs); then the prescription patterns were extracted from the combinations of HPECs; finally, the common patterns were recognized statistically. The results showed that HPEs have a hierarchical clustering structure. When the clustering level is 2 and the HPEs are classified into two clusters, the common prescription patterns are obvious. Among 332 candidate prescriptions, 319 follow the common patterns. The patterns can be described as follows: if a prescription contains herbs of one cluster (C1), it is very likely to also contain herbs of the other cluster (C2), whereas a prescription containing herbs of C2 may have no herbs of C1. Finally, we discuss how the common patterns are mathematically consistent with the Blood-Qi theory. PMID:27190534
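As a rough illustration of the CPPM pipeline summarized above, the Python sketch below hierarchically clusters herb-pair-efficacy (HPE) feature vectors, cuts the tree at two clusters, labels herbs by cluster, and checks the common pattern stated in the abstract (cluster-1 herbs imply cluster-2 herbs). The feature encoding, linkage method, and all data are illustrative assumptions; only the pipeline structure and the two-cluster cut come from the abstract.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical HPE feature matrix: one row per herb-pair efficacy.
rng = np.random.default_rng(0)
hpe_features = rng.random((40, 6))

# Step 1: hierarchically cluster the HPEs and cut at level 2 (two clusters).
tree = linkage(hpe_features, method="average")
hpec = fcluster(tree, t=2, criterion="maxclust")   # HPEC labels: 1 or 2

# Step 2: label each herb by the HPECs of the herb pairs it appears in
# (each HPE row is mapped to a hypothetical herb id for illustration).
herb_of_hpe = rng.integers(0, 15, size=len(hpec))
herb_labels = {h: set(hpec[herb_of_hpe == h]) for h in np.unique(herb_of_hpe)}

# Step 3: a prescription "follows the common pattern" if, whenever it
# contains a cluster-1 herb, it also contains a cluster-2 herb.
def follows_common_pattern(prescription_herbs):
    clusters = set().union(*(herb_labels.get(h, set())
                             for h in prescription_herbs))
    return (1 not in clusters) or (2 in clusters)

print(follows_common_pattern([0, 3, 7]))
```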
Willadsen, Elisabeth; Lohmander, Anette; Persson, Christina; Lundeborg, Inger; Alaluusua, Suvi; Aukner, Ragnhild; Bau, Anja; Boers, Maria; Bowden, Melanie; Davies, Julie; Emborg, Berit; Havstam, Christina; Hayden, Christine; Henningsson, Gunilla; Holmefjord, Anders; Hölttä, Elina; Kisling-Møller, Mia; Kjøll, Lillian; Lundberg, Maria; McAleer, Eilish; Nyberg, Jill; Paaso, Marjukka; Pedersen, Nina Helen; Rasmussen, Therese; Reisæter, Sigvor; Andersen, Helene Søgaard; Schöps, Antje; Tørdal, Inger-Beate; Semb, Gunvor
2017-02-01
Normal articulation before school start is a main objective in cleft palate treatment. The aim was to investigate whether differences exist in consonant proficiency at age 5 years between children with unilateral cleft lip and palate (UCLP) randomised to different surgical protocols for primary palatal repair. A secondary aim was to estimate burden of care in terms of received additional secondary surgeries and speech therapy. Three parallel-group, randomised clinical trials were undertaken as an international multicentre study by 10 cleft teams in five countries: Denmark, Finland, Norway, Sweden, and the UK. Three different surgical protocols for primary palatal repair were tested against a common procedure in the total cohort of 448 children born with non-syndromic UCLP. Speech audio- and video-recordings of 391 children (136 girls and 255 boys) were available and transcribed phonetically. The main outcome measure was Percent Consonants Correct (PCC) from blinded assessments. In Trial 1, arm A showed significantly higher PCC scores (82%) than arm B (78%) (p = .045). No significant differences were found between arms in Trial 2 (A: 79%, C: 82%) or Trial 3 (A: 80%, D: 85%). Across all trials, girls achieved better PCC scores, excluding s-errors, than boys (91.0% and 87.5%, respectively) (p = .01). PCC scores were higher in arm A than B in Trial 1, whereas no differences were found between arms in Trials 2 or 3. The burden of care in terms of secondary pharyngeal surgeries, number of fistulae, and speech therapy visits differed. ISRCTN29932826.
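Percent Consonants Correct, the main outcome above, is conventionally the number of correctly produced consonants divided by the number of consonants attempted, times 100. The Python sketch below implements that arithmetic; the position-by-position comparison stands in for the proper transcription alignment used in real scoring and is an assumption of this sketch.

```python
def pcc(target_consonants, produced_consonants, exclude=()):
    """Percent Consonants Correct over one or more words, optionally
    excluding segments (e.g., exclude=('s',) to report PCC excluding
    s-errors, as in the trial above)."""
    pairs = [(t, p) for t, p in zip(target_consonants, produced_consonants)
             if t not in exclude]
    if not pairs:
        return float("nan")
    correct = sum(t == p for t, p in pairs)
    return 100.0 * correct / len(pairs)

# Target consonants /b l s t/ produced as [b l t t]: 3 of 4 correct.
print(pcc("blst", "bltt"))                  # 75.0
print(pcc("blst", "bltt", exclude=("s",)))  # 100.0
```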
African Y chromosome and mtDNA divergence provides insight into the history of click languages.
Knight, Alec; Underhill, Peter A; Mortensen, Holly M; Zhivotovsky, Lev A; Lin, Alice A; Henn, Brenna M; Louis, Dorothy; Ruhlen, Merritt; Mountain, Joanna L
2003-03-18
About 30 languages of southern Africa, spoken by Khwe and San, are characterized by a repertoire of click consonants and phonetic accompaniments. The Juǀ'hoansi (!Kung) San carry multiple deeply coalescing gene lineages. The deep genetic diversity of the San parallels the diversity among the languages they speak. Intriguingly, the language of the Hadzabe of eastern Africa, although not closely related to any other language, shares click consonants and accompaniments with languages of Khwe and San. We present original Y chromosome and mtDNA variation of Hadzabe and other ethnic groups of Tanzania and Y chromosome variation of San and peoples of the central African forests: Biaka, Mbuti, and Lisongo. In the context of comparable published data for other African populations, analyses of each of these independently inherited DNA segments indicate that click-speaking Hadzabe and Juǀ'hoansi are separated by a genetic distance as great as or greater than that between any other pair of African populations. Phylogenetic tree topology indicates a basal separation of the ancient ancestors of these click-speaking peoples. That genetic divergence does not appear to be the result of recent gene flow from neighboring groups. The deep genetic divergence among click-speaking peoples of Africa and mounting linguistic evidence suggest that click consonants date to early in the history of modern humans. At least two explanations remain viable. Clicks may have persisted for tens of thousands of years, independently in multiple populations, as a neutral trait. Alternatively, clicks may have been retained, because they confer an advantage during hunting in certain environments.
Sharp and round shapes of seen objects have distinct influences on vowel and consonant articulation.
Vainio, L; Tiainen, M; Tiippana, K; Rantala, A; Vainio, M
2017-07-01
The shape- and size-related sound symbolism phenomena assume that, for example, the vowel [i] and the consonant [t] are associated with sharp-shaped and small-sized objects, whereas [ɑ] and [m] are associated with round and large objects. It has been proposed that these phenomena are mostly based on the involvement of articulatory processes in representing shape and size properties of objects. For example, [i] might be associated with sharp and small objects because it is produced by a specific front-close shape of the articulators. Nevertheless, very little work has examined whether these object properties indeed have an impact on speech sound vocalization. In the present study, the participants were presented with a sharp- or round-shaped object in a small or large size. They were required to pronounce one of two meaningless speech units (e.g., [i] or [ɑ]) according to the size or shape of the object. We investigated how a task-irrelevant object property (e.g., the shape when responses are made according to size) influences reaction times, accuracy, intensity, fundamental frequency, and formants 1 and 2 of the vocalizations. Size did not influence vocal responses, but shape did. Specifically, the vowel [i] and consonant [t] were vocalized relatively rapidly when the object was sharp-shaped, whereas [u] and [m] were vocalized relatively rapidly when the object was round-shaped. The study supports the view that the shape-related sound symbolism phenomena might reflect a mapping of the perceived shape onto the corresponding articulatory gestures.
Early phonology revealed by international adoptees' birth language retention.
Choi, Jiyoun; Broersma, Mirjam; Cutler, Anne
2017-07-11
Until at least 6 mo of age, infants show good discrimination for familiar phonetic contrasts (i.e., those heard in the environmental language) and contrasts that are unfamiliar. Adult-like discrimination (significantly worse for nonnative than for native contrasts) appears only later, by 9-10 mo. This has been interpreted as indicating that infants have no knowledge of phonology until vocabulary development begins, after 6 mo of age. Recently, however, word recognition has been observed before age 6 mo, apparently decoupling the vocabulary and phonology acquisition processes. Here we show that phonological acquisition is also in progress before 6 mo of age. The evidence comes from retention of birth-language knowledge in international adoptees. In the largest ever such study, we recruited 29 adult Dutch speakers who had been adopted from Korea when young and had no conscious knowledge of Korean language at all. Half were adopted at age 3-5 mo (before native-specific discrimination develops) and half at 17 mo or older (after word learning has begun). In a short intensive training program, we observe that adoptees (compared with 29 matched controls) more rapidly learn tripartite Korean consonant distinctions without counterparts in their later-acquired Dutch, suggesting that the adoptees retained phonological knowledge about the Korean distinction. The advantage is equivalent for the younger-adopted and the older-adopted groups, and both groups not only acquire the tripartite distinction for the trained consonants but also generalize it to untrained consonants. Although infants younger than 6 mo can still discriminate unfamiliar phonetic distinctions, this finding indicates that native-language phonological knowledge is nonetheless being acquired at that age.
Nan, Yun; Liu, Li; Geiser, Eveline; Shu, Hua; Gong, Chen Chen; Dong, Qi; Gabrieli, John D E; Desimone, Robert
2018-06-25
Musical training confers advantages in speech-sound processing, which could play an important role in early childhood education. To understand the mechanisms of this effect, we used event-related potential and behavioral measures in a longitudinal design. Seventy-four Mandarin-speaking children aged 4-5 y were pseudorandomly assigned to piano training, reading training, or a no-contact control group. Six months of piano training improved behavioral auditory word discrimination in general as well as word discrimination based on vowels compared with the controls. The reading group yielded similar trends. However, the piano group demonstrated unique advantages over the reading and control groups in consonant-based word discrimination and in enhanced positive mismatch responses (pMMRs) to lexical tone and musical pitch changes. The improved word discrimination based on consonants correlated with the enhancements in musical pitch pMMRs among the children in the piano group. In contrast, all three groups improved equally on general cognitive measures, including tests of IQ, working memory, and attention. The results suggest strengthened common sound processing across domains as an important mechanism underlying the benefits of musical training on language processing. In addition, although we failed to find far-transfer effects of musical training to general cognition, the near-transfer effects to speech perception establish the potential for musical training to help children improve their language skills. Piano training was not inferior to reading training on direct tests of language function, and it even seemed superior to reading training in enhancing consonant discrimination.
Shannon, Robert V.; Cruz, Rachel J.; Galvin, John J.
2011-01-01
High stimulation rates in cochlear implants (CI) offer better temporal sampling, can induce stochastic-like firing of auditory neurons and can increase the electric dynamic range, all of which could improve CI speech performance. While commercial CI have employed increasingly high stimulation rates, no clear or consistent advantage has been shown for high rates. In this study, speech recognition was acutely measured with experimental processors in 7 CI subjects (Clarion CII users). The stimulation rate varied between (approx.) 600 and 4800 pulses per second per electrode (ppse) and the number of active electrodes varied between 4 and 16. Vowel, consonant, consonant-nucleus-consonant word and IEEE sentence recognition was acutely measured in quiet and in steady noise (+10 dB signal-to-noise ratio). Subjective quality ratings were obtained for each of the experimental processors in quiet and in noise. Except for a small difference for vowel recognition in quiet, there were no significant differences in performance among the experimental stimulation rates for any of the speech measures. There was also a small but significant increase in subjective quality rating as stimulation rates increased from 1200 to 2400 ppse in noise. Consistent with previous studies, performance significantly improved as the number of electrodes was increased from 4 to 8, but no significant difference showed between 8, 12 and 16 electrodes. Altogether, there was little-to-no advantage of high stimulation rates in quiet or in noise, at least for the present speech tests and conditions. PMID:20639631
Maruthy, Santosh; Feng, Yongqiang; Max, Ludo
2018-03-01
A longstanding hypothesis about the sensorimotor mechanisms underlying stuttering suggests that stuttered speech dysfluencies result from a lack of coarticulation. Formant-based measures of either the stuttered or fluent speech of children and adults who stutter have generally failed to obtain compelling evidence in support of the hypothesis that these individuals differ in the timing or degree of coarticulation. Here, we used a sensitive acoustic technique, spectral coefficient analyses, that allowed us to compare stuttering and nonstuttering speakers with regard to vowel-dependent anticipatory influences as early as the onset burst of a preceding voiceless stop consonant. Eight adults who stutter and eight matched adults who do not stutter produced C1VC2 words, and the first four spectral coefficients were calculated for one analysis window centered on the burst of C1 and two subsequent windows covering the beginning of the aspiration phase. Findings confirmed that the combined use of four spectral coefficients is an effective method for detecting the anticipatory influence of a vowel on the initial burst of a preceding voiceless stop consonant. However, the observed patterns of anticipatory coarticulation showed no statistically significant differences, or trends toward such differences, between the stuttering and nonstuttering groups. Combining the present results for fluent speech in one given phonetic context with prior findings from both stuttered and fluent speech in a variety of other contexts, we conclude that there is currently no support for the hypothesis that the fluent speech of individuals who stutter is characterized by limited coarticulation.
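The abstract does not define its "spectral coefficients", so the Python sketch below shows one plausible operationalization only: the first four spectral moments (centroid, standard deviation, skewness, kurtosis) of a windowed burst spectrum. Treat this as an illustrative assumption, not the authors' method.

```python
import numpy as np

def spectral_moments(frame: np.ndarray, sr: int):
    """First four spectral moments of a Hamming-windowed frame --
    one common stand-in for 'spectral coefficients' (an assumption)."""
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    p = spectrum / spectrum.sum()                       # normalize to a distribution
    centroid = np.sum(freqs * p)                        # 1st moment
    sd = np.sqrt(np.sum((freqs - centroid) ** 2 * p))   # 2nd
    skew = np.sum(((freqs - centroid) / sd) ** 3 * p)   # 3rd
    kurt = np.sum(((freqs - centroid) / sd) ** 4 * p)   # 4th
    return centroid, sd, skew, kurt

# Hypothetical 20-ms analysis window at 44.1 kHz centered on the C1 burst.
sr = 44100
frame = np.random.randn(int(0.02 * sr))
print(spectral_moments(frame, sr))
```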
ERIC Educational Resources Information Center
Lupi, Marsha Mead
1979-01-01
The article illustrates the use of commercial jingles as high interest, low-level reading and language arts materials for primary age mildly retarded students. It is pointed out that jingles can be used in teaching initial consonants, vocabulary words, and arithmetic concepts. (SBH)
Code of Federal Regulations, 2011 CFR
2011-10-01
... OFFICE OF SCIENCE AND TECHNOLOGY POLICY AND NATIONAL SECURITY COUNCIL EMERGENCY RESTORATION PRIORITY PROCEDURES FOR TELECOMMUNICATIONS SERVICES § 211.0 Purpose. This part establishes policies and procedures.... 820), policies, plans, and procedures developed pursuant to the Executive order shall be in consonance...
Code of Federal Regulations, 2012 CFR
2012-10-01
... OFFICE OF SCIENCE AND TECHNOLOGY POLICY AND NATIONAL SECURITY COUNCIL EMERGENCY RESTORATION PRIORITY PROCEDURES FOR TELECOMMUNICATIONS SERVICES § 211.0 Purpose. This part establishes policies and procedures.... 820), policies, plans, and procedures developed pursuant to the Executive order shall be in consonance...
Code of Federal Regulations, 2014 CFR
2014-10-01
... OFFICE OF SCIENCE AND TECHNOLOGY POLICY AND NATIONAL SECURITY COUNCIL EMERGENCY RESTORATION PRIORITY PROCEDURES FOR TELECOMMUNICATIONS SERVICES § 211.0 Purpose. This part establishes policies and procedures.... 820), policies, plans, and procedures developed pursuant to the Executive order shall be in consonance...
Code of Federal Regulations, 2013 CFR
2013-10-01
... OFFICE OF SCIENCE AND TECHNOLOGY POLICY AND NATIONAL SECURITY COUNCIL EMERGENCY RESTORATION PRIORITY PROCEDURES FOR TELECOMMUNICATIONS SERVICES § 211.0 Purpose. This part establishes policies and procedures.... 820), policies, plans, and procedures developed pursuant to the Executive order shall be in consonance...
Code of Federal Regulations, 2010 CFR
2010-10-01
... OFFICE OF SCIENCE AND TECHNOLOGY POLICY AND NATIONAL SECURITY COUNCIL EMERGENCY RESTORATION PRIORITY PROCEDURES FOR TELECOMMUNICATIONS SERVICES § 211.0 Purpose. This part establishes policies and procedures.... 820), policies, plans, and procedures developed pursuant to the Executive order shall be in consonance...
An analysis of the job of railroad train dispatcher.
DOT National Transportation Integrated Search
1974-04-01
This report constitutes a detailed study of the job of railroad train dispatcher, conducted to provide a data base for the derivation of criteria of job knowledge, skills and training consonant with safe operations. Documentation was reviewed; specia...
Music and emotion: an EEG connectivity study in patients with disorders of consciousness.
Varotto, G; Fazio, P; Rossi Sebastiano, D; Avanzini, G; Franceschetti, S; Panzica, F; CRC
2012-01-01
Human emotion perception is a topic of great interest for both cognitive and clinical neuroscience, but its electrophysiological correlates are still poorly understood. The present study aimed to evaluate whether measures of synchronization and indexes based on graph theory are suitable tools to study and quantify electrophysiological changes due to the perception of emotional stimuli. In particular, we evaluated whether different EEG connectivity patterns can be induced by pleasant (consonant) or unpleasant (dissonant) music in a population of healthy subjects and in patients with severe disorders of consciousness (DOCs), namely vegetative state (VS) patients. In the control group, pleasant music induced an increase in the number of network connections compared with the resting condition, while no changes were caused by the unpleasant stimuli. However, clustering coefficient and path length, two indexes derived from graph theory that characterise the segregation and integration properties of a network, were not affected by either the pleasant or the unpleasant stimuli. In the VS group, changes were found only in those patients with the less severe consciousness impairment, according to the clinical assessment. In these patients a stronger synchronization was found during the unpleasant condition; moreover, we observed changes in the network topology, with decreased values of clustering coefficient and path length during both musical stimuli. Our results show that measures of synchronization can provide new insights into the study of the electrophysiological correlates of emotion perception, indicating that these tools can be used to study patients with DOCs, in whom the objective measurement and quantification of the degree of impairment is still an open question.
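The two graph-theory indexes used above, clustering coefficient and characteristic path length, are standard network measures. The Python sketch below computes them from a hypothetical thresholded EEG synchronization matrix using networkx; the matrix values and the 0.5 binarization threshold are assumptions, while the metrics are networkx's standard implementations.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
sync = rng.random((19, 19))            # hypothetical channel-pair synchronization
sync = (sync + sync.T) / 2             # symmetrize
np.fill_diagonal(sync, 0.0)

# Binarize: an edge exists where synchronization exceeds the threshold.
G = nx.from_numpy_array((sync > 0.5).astype(int))

print("connections:", G.number_of_edges())
print("clustering coefficient:", nx.average_clustering(G))
if nx.is_connected(G):                 # path length is defined only if connected
    print("characteristic path length:", nx.average_shortest_path_length(G))
```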
Revisiting the destiny compulsion.
Potamianou, Anna
2017-02-01
This paper is an attempt to deal with some questions raised by the so-called 'compulsion of destiny' constellation. In presenting the standpoints of Freud and of the psychoanalysts who after him were concerned with this problematic, the author takes the view that several aspects of the configuration merit further discussion. Accordingly, the dynamics of repetition compulsion, the complexity of the projective strategy, and the coexistence of passive and omnipotent trends are considered. Concerning compulsive repetitions, the dimension of drive intrication is underlined, thus moderating the understanding of this clinical entity as mainly related to death drive trends. Projection is understood as serving complex psychic demands. The coexistence of passive and omnipotent trends is envisaged as manifested in patients' phantasies of submission to, or participation in, a force that exceeds human limitations. For certain cases the consonance of somatic and psychic experiences is noted. Finally, elements from the material of two cases are presented which pertain to the problematic of the compulsion of destiny, in which random events are subjected to heavy psychic necessities. Copyright © 2016 Institute of Psychoanalysis.
The recent process of decentralization and democratic management of education in Brazil
NASA Astrophysics Data System (ADS)
Santos Filho, José Camilo Dos
1993-09-01
Brazilian society is beginning a new historical period in which the principle of decentralization is beginning to predominate over centralization, which held sway during the last 25 years. In contrast to recent Brazilian history, there is now a search for political, democratic and participatory decentralization more consonant with grass-roots aspirations. The first section of this article presents a brief analysis of some decentralization policies implemented by the military regime of 1964, and discusses relevant facts related to the resistance of civil society to state authoritarianism, and to the struggle for the democratization and organization of civil society up to the end of the 1970s. The second section analyzes some new experiences of democratic public school administration initiated in the 1970s and 1980s. The final section discusses the move toward decentralization and democratization of public school administration in the new Federal and State Constitutions, and in the draft of the new Law of National Education.
The [Mo₆Cl14]2- Cluster is Biologically Secure and Has Anti-Rotavirus Activity In Vitro.
Rojas-Mancilla, Edgardo; Oyarce, Alexis; Verdugo, Viviana; Morales-Verdejo, Cesar; Echeverria, Cesar; Velásquez, Felipe; Chnaiderman, Jonas; Valiente-Echeverría, Fernando; Ramirez-Tagle, Rodrigo
2017-07-05
The molybdenum cluster [Mo₆Cl₁₄]²⁻ is a fluorescent component with potential for use in cell labelling and pharmacology. The biological safety and antiviral properties of the cluster are as yet unknown. Here, we show the effect of acute exposure of human cells and red blood cells to the molybdenum cluster, its interaction with proteins, and its antiviral activity in vitro. We measured cell viability of HepG2 and EA.hy926 cell lines exposed to increasing concentrations of the cluster (0.1 to 250 µM) by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) colorimetric assay. Hemolysis and morphological alterations of red blood cells, obtained from healthy donors, exposed to the cluster (10 to 200 µM) at 37 °C were analyzed. Furthermore, quenching of the tryptophan residues of albumin was performed. Finally, plaque formation by rotavirus SA11 in MA104 cells treated with the cluster (100 to 300 µM) was analyzed. We found that all doses of the cluster showed cell viability, hemolysis, and morphology values similar to control. Quenching of the tryptophan residues of albumin suggests formation of a protein-cluster complex. Finally, the cluster showed antiviral activity at 300 µM. These results indicate that the cluster [Mo₆Cl₁₄]²⁻ could be intravenously administered in animals at therapeutic doses for further in vivo studies and might be studied as an antiviral agent.
Dynamics of cD Clusters of Galaxies. 4; Conclusion of a Survey of 25 Abell Clusters
NASA Technical Reports Server (NTRS)
Oegerle, William R.; Hill, John M.; Fisher, Richard R. (Technical Monitor)
2001-01-01
We present the final results of a spectroscopic study of a sample of cD galaxy clusters. The goal of this program has been to study the dynamics of the clusters, with emphasis on determining the nature and frequency of cD galaxies with peculiar velocities. Redshifts measured with the MX Spectrometer have been combined with those obtained from the literature to obtain typically 50-150 observed velocities in each of 25 galaxy clusters containing a central cD galaxy. We present a dynamical analysis of the final 11 clusters to be observed in this sample. All 25 clusters are analyzed in a uniform manner to test for the presence of substructure, and to determine peculiar velocities and their statistical significance for the central cD galaxy. These peculiar velocities were used to determine whether or not the central cD galaxy is at rest in the cluster potential well. We find that 30-50% of the clusters in our sample possess significant subclustering (depending on the cluster radius used in the analysis), which is in agreement with other studies of non-cD clusters. Hence, the dynamical state of cD clusters is not different from that of other present-day clusters. After careful study, four of the clusters appear to have a cD galaxy with a significant peculiar velocity. Dressler-Shectman tests indicate that three of these four clusters have statistically significant substructure within 1.5 h_75^-1 Mpc of the cluster center. The dispersion of the cD peculiar velocities is 164 +41/-34 km/s around the mean cluster velocity. This represents a significant detection of peculiar cD velocities, but at a level which is far below the mean velocity dispersion for this sample of clusters. The picture that emerges is one in which cD galaxies are nearly at rest with respect to the cluster potential well, but have small residual velocities due to subcluster mergers.
32 CFR 2700.3 - Applicability.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Defense Other Regulations Relating to National Defense OFFICE FOR MICRONESIAN STATUS NEGOTIATIONS SECURITY INFORMATION REGULATIONS Introduction § 2700.3 Applicability. This Regulation supplements E.O. 12065 within OMSN with regard to National Security Information. In consonance with the authorities listed in § 2700...
Kansas Working Papers in Linguistics, Volume 20.
ERIC Educational Resources Information Center
Goodell, Melissa, Ed.; Choi, Dong-Ik, Ed.
1995-01-01
Papers in this volume include the following: "Evidence for Foot Structure in Hausa" (Ousseina Alidou); "Korean 'Tense' Consonants as Geminates" (Dong-Ik Choi); "Gemination Processes: Motivation, Form, and Constraints" (Mamadou Niang); "Syllable 'Sonority' Hierarchy and Pulaar Stress: A Metrical Approach"…
Formation of intermediate-mass black holes through runaway collisions in the first star clusters
NASA Astrophysics Data System (ADS)
Sakurai, Yuya; Yoshida, Naoki; Fujii, Michiko S.; Hirano, Shingo
2017-12-01
We study the formation of massive black holes in the first star clusters. We first locate star-forming gas clouds in protogalactic haloes of ≳10⁷ M⊙ in cosmological hydrodynamics simulations and use them to generate the initial conditions for star clusters with masses of ∼10⁵ M⊙. We then perform a series of direct-tree hybrid N-body simulations to follow runaway stellar collisions in the dense star clusters. In all the cluster models except one, runaway collisions occur within a few million years, and the mass of the central, most massive star reaches ∼400-1900 M⊙. Such very massive stars collapse to leave intermediate-mass black holes (IMBHs). The diversity of the final masses may be attributed to the differences in a few basic properties of the host haloes such as mass, central gas velocity dispersion and mean gas density of the central core. Finally, we derive the IMBH mass to cluster mass ratios, and compare them with the observed black hole to bulge mass ratios in the present-day Universe.
White Dwarfs in Star Clusters: The Initial-Final Mass Relation for Stars from 0.85 to 8 M⊙
NASA Astrophysics Data System (ADS)
Cummings, Jeffrey; Kalirai, Jason; Tremblay, P.-E.; Ramírez-Ruiz, Enrico
2018-01-01
The spectroscopic study of white dwarfs provides their mass, cooling age, and intrinsic photometric properties. For white dwarfs in the field of well-studied star clusters, this intrinsic photometry can be used to determine whether they are members of that star cluster. Comparison of a member white dwarf's cooling age to its host cluster's total age provides the evolutionary timescale of its progenitor star, and hence the progenitor mass. This is the initial-final mass relation (IFMR) for stars, which gives critical information on how a progenitor star evolves and loses mass throughout its lifetime, and how this changes with progenitor mass. Our work, for the first time, presents a uniform analysis of 85 white dwarf cluster members spanning progenitor masses of 0.85 to 8 M⊙. Comparison of our work to theoretical IFMRs shows remarkable consistency in shape, but differences remain. We will discuss possible explanations for these differences, including the effects of stellar rotation.
Donaldson, Gail S; Dawson, Patricia K; Borden, Lamar Z
2011-01-01
Previous studies have confirmed that current steering can increase the number of discriminable pitches available to many cochlear implant (CI) users; however, the ability to perceive additional pitches has not been linked to improved speech perception. The primary goals of this study were to determine (1) whether adult CI users can achieve higher levels of spectral cue transmission with a speech processing strategy that implements current steering (Fidelity120) than with a predecessor strategy (HiRes) and, if so, (2) whether the magnitude of improvement can be predicted from individual differences in place-pitch sensitivity. A secondary goal was to determine whether Fidelity120 supports higher levels of speech recognition in noise than HiRes. A within-subjects repeated measures design evaluated speech perception performance with Fidelity120 relative to HiRes in 10 adult CI users. Subjects used the novel strategy (either HiRes or Fidelity120) for 8 wks during the main study; a subset of five subjects used Fidelity120 for three additional months after the main study. Speech perception was assessed for the spectral cues related to vowel F1 frequency, vowel F2 frequency, and consonant place of articulation; overall transmitted information for vowels and consonants; and sentence recognition in noise. Place-pitch sensitivity was measured for electrode pairs in the apical, middle, and basal regions of the implanted array using a psychophysical pitch-ranking task. With one exception, there was no effect of strategy (HiRes versus Fidelity120) on the speech measures tested, either during the main study (N = 10) or after extended use of Fidelity120 (N = 5). The exception was a small but significant advantage for HiRes over Fidelity120 for consonant perception during the main study. Examination of individual subjects' data revealed that 3 of 10 subjects demonstrated improved perception of one or more spectral cues with Fidelity120 relative to HiRes after 8 wks or longer experience with Fidelity120. Another three subjects exhibited initial decrements in spectral cue perception with Fidelity120 at the 8-wk time point; however, evidence from one subject suggested that such decrements may resolve with additional experience. Place-pitch thresholds were inversely related to improvements in vowel F2 frequency perception with Fidelity120 relative to HiRes. However, no relationship was observed between place-pitch thresholds and the other spectral measures (vowel F1 frequency or consonant place of articulation). Findings suggest that Fidelity120 supports small improvements in the perception of spectral speech cues in some Advanced Bionics CI users; however, many users show no clear benefit. Benefits are more likely to occur for vowel spectral cues (related to F1 and F2 frequency) than for consonant spectral cues (related to place of articulation). There was an inconsistent relationship between place-pitch sensitivity and improvements in spectral cue perception with Fidelity120 relative to HiRes. This may partly reflect the small number of sites at which place-pitch thresholds were measured. Contrary to some previous reports, there was no clear evidence that Fidelity120 supports improved sentence recognition in noise.
Sometimes the Tail Should Wag the Dog (The Printout).
ERIC Educational Resources Information Center
Miller, Larry
1989-01-01
Argues that an understanding of how children acquire reading and writing processes should be the prime basis for making decisions about new information technology. Notes that technology can be selected that is consonant with one's understanding of teaching and learning. (MM)
Can Disability Studies and Psychology Join Hands?
ERIC Educational Resources Information Center
Olkin, Rhoda; Pledger, Constance
2003-01-01
Although the field of disabilities studies incorporates psychology within its interdisciplinary purview, it embodies a distinct perspective consonant with the new paradigm of disability. Although psychology has begun embracing diversity, disability remains marginalized. Examines the foundational ideas of disability studies, training in disability…
An empirical method to cluster objective nebulizer adherence data among adults with cystic fibrosis.
Hoo, Zhe H; Campbell, Michael J; Curley, Rachael; Wildman, Martin J
2017-01-01
The purpose of using preventative inhaled treatments in cystic fibrosis is to improve health outcomes. Therefore, understanding the relationship between adherence to treatment and health outcome is crucial. Temporal variability, as well as the absolute magnitude of adherence, affects health outcomes, and there is likely to be a threshold effect in the relationship between adherence and outcomes. We therefore propose a pragmatic algorithm-based clustering method for objective nebulizer adherence data to better understand this relationship and, potentially, to guide clinical decisions. This clustering method consists of three related steps. The first step is to split adherence data for the previous 12 months into four 3-monthly sections. The second step is to calculate the mean adherence for each section and to score the section based on mean adherence. The third step is to aggregate the individual scores to determine the final cluster ("cluster 1" = very low adherence; "cluster 2" = low adherence; "cluster 3" = moderate adherence; "cluster 4" = high adherence), taking into account the adherence trend as represented by the sequence of individual scores. The individual scores should be displayed along with the final cluster for clinicians to fully understand the adherence data. We present three cases to illustrate the use of the proposed clustering method. This pragmatic clustering method can deal with adherence data of variable duration (i.e., it can be used even if 12 months' worth of data are unavailable) and can cluster adherence data in real time. Empirical support for some of the clustering parameters is not yet available, but the suggested classifications provide a structure for investigating these parameters in future prospective datasets with accurate measurements of nebulizer adherence and health outcomes.
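The three steps above translate directly into code. The Python sketch below follows the stated structure (four 3-month sections, a score per section, aggregation into clusters 1-4, tolerance of shorter records); the numeric adherence cut-offs and the rounded-mean aggregation rule are illustrative assumptions, since the abstract leaves them unspecified.

```python
def section_score(mean_adherence: float) -> int:
    """Score one 3-month section from its mean adherence (%).
    The cut-offs are assumed for illustration, not taken from the paper."""
    if mean_adherence >= 80: return 4   # high
    if mean_adherence >= 50: return 3   # moderate
    if mean_adherence >= 20: return 2   # low
    return 1                            # very low

def cluster_adherence(daily_adherence: list[float]) -> tuple[list[int], int]:
    """Return the section scores (oldest first) and a final cluster.
    Works with fewer than 12 months of data by scoring whatever
    sections are available, as the method above allows."""
    # Step 1: split the (up to) 12 months into four 3-month sections.
    n = len(daily_adherence)
    sections = [daily_adherence[max(0, n - 90 * (i + 1)): n - 90 * i]
                for i in range(3, -1, -1)]
    sections = [s for s in sections if s]
    # Step 2: mean adherence and score per section.
    scores = [section_score(sum(s) / len(s)) for s in sections]
    # Step 3: aggregate; the rounded mean is a stand-in for the paper's
    # aggregation, which also inspects the trend in the score sequence.
    final = round(sum(scores) / len(scores))
    return scores, final

scores, cluster = cluster_adherence([75.0] * 270 + [30.0] * 90)
print(scores, "-> cluster", cluster)   # [3, 3, 3, 2] -> cluster 3
```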
Detecting Statistically Significant Communities of Triangle Motifs in Undirected Networks
2016-04-26
Final report, 15 Oct 2014 to 14 Jan 2015. We extend the work of Perry et al. [6] by developing a statistical framework that supports the detection of triangle motif-based clusters in complex networks. Contributions: (1) established, a priori, the need for triangle motif-based clustering; (2) developed an algorithm for clustering undirected networks, where the triangle configuration was...
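As a minimal, hedged illustration of the report's starting point, the Python sketch below counts triangle motifs in an undirected test network with networkx; the report's statistical-significance framework itself is not reconstructed here.

```python
import networkx as nx

G = nx.karate_club_graph()     # a standard undirected test network
tri = nx.triangles(G)          # triangles through each node

# Each triangle is counted once per vertex, hence the division by 3.
print("total triangles:", sum(tri.values()) // 3)
print("most triangle-dense nodes:",
      sorted(tri, key=tri.get, reverse=True)[:5])
```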
The improvement and simulation for LEACH clustering routing protocol
NASA Astrophysics Data System (ADS)
Ji, Ai-guo; Zhao, Jun-xiang
2017-01-01
An energy-balanced unequal multi-hop clustering routing protocol, LEACH-EUMC, is proposed in this paper. Candidate cluster-head nodes are elected first; they then compete to become formal cluster-head nodes through a rule that adds energy and distance factors; finally, the data are transferred to the sink through multi-hop routing. Simulation results show that the improved algorithm outperforms LEACH in network lifetime, energy consumption and the amount of data transmitted.
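A hedged sketch of the unequal-clustering step attributed to LEACH-EUMC follows: candidate cluster heads are elected at random and then compete using residual-energy and distance-to-sink factors. The weighting formula, parameter values, and the simplified one-dimensional distance check are illustrative assumptions, not the paper's published design.

```python
import random

def competition_radius(d_to_sink, e_residual, e_max, r_max=30.0,
                       alpha=0.5, beta=0.5, d_min=50.0, d_max=200.0):
    """Smaller radius near the sink (smaller clusters there relay more
    traffic) and for low-energy nodes -- the 'unequal' idea. The weights
    alpha and beta are assumed, not taken from the paper."""
    d_norm = (d_to_sink - d_min) / (d_max - d_min)
    e_norm = e_residual / e_max
    return r_max * (1 - alpha * (1 - d_norm) - beta * (1 - e_norm))

random.seed(2)
nodes = [{"id": i,
          "d": random.uniform(50, 200),    # distance to sink (m)
          "e": random.uniform(0.2, 1.0)}   # residual energy (J)
         for i in range(100)]

# Election: each node becomes a candidate cluster head with probability 0.1.
candidates = [n for n in nodes if random.random() < 0.1]
for n in candidates:
    n["radius"] = competition_radius(n["d"], n["e"], e_max=1.0)

# Competition: a candidate wins unless a higher-energy candidate lies
# within its radius (inter-node distance is approximated here by the
# difference in distance-to-sink, a simplification of this sketch).
winners = [c for c in candidates
           if not any(o["e"] > c["e"] and abs(o["d"] - c["d"]) < c["radius"]
                      for o in candidates if o is not c)]
print(len(candidates), "candidates ->", len(winners), "cluster heads")
```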
ERIC Educational Resources Information Center
Hoffman, Louis J.
The Cluster Program at Benjamin Franklin High School, funded under Title I of the 1965 Elementary Secondary Education Act, is designed to be a school within a school in which 249 ninth grade students attend classes in two separate clusters. Each cluster is formulated such that all students receive instruction from five teachers in classes whose…
ERIC Educational Resources Information Center
Helgeson, Stanley L., Ed.; Blosser, Patricia E., Ed.
This issue contains expanded abstracts of research reports grouped into two clusters and a section of individual studies. The first cluster contains abstracts of two research reports dealing with trait-treatment interaction studies. The second cluster deals with examination items categorized according to Bloom's Taxonomy. The final section,…
78 FR 32161 - Oklahoma: Final Authorization of State Hazardous Waste Management Program Revision
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-29
... statutory and regulatory provisions necessary to administer the provisions of RCRA Cluster XXI, and... July 1, 2010 Through June 30, 2011 RCRA Cluster XXI prepared on June 14, 2012. The DEQ incorporates the... the authorizations at 77 FR 1236-1262, 75 FR 15273 through 15276 for RCRA Cluster XXI. The Federal...
Kluender, K R; Lotto, A J
1994-02-01
When F1-onset frequency is lower, a longer F1 cut-back (VOT) is required for human listeners to perceive synthesized stop consonants as voiceless. K. R. Kluender [J. Acoust. Soc. Am. 90, 83-96 (1991)] found comparable effects of F1-onset frequency on the "labeling" of stop consonants by Japanese quail (Coturnix coturnix japonica) trained to distinguish stop consonants varying in F1 cut-back. In that study, CVs were synthesized with natural-like rising F1 transitions, and endpoint training stimuli differed in the onset frequency of F1 because a longer cut-back resulted in a higher F1 onset. In order to assess whether the earlier results were due to auditory predispositions or to the animals having learned the natural covariance between F1 cut-back and F1-onset frequency, the present experiment was conducted with synthetic continua having either a relatively low (375 Hz) or high (750 Hz) constant-frequency F1. Six birds were trained to respond differentially to endpoint stimuli from three series of synthesized /CV/s varying in duration of F1 cut-back. Second and third formant transitions were appropriate for labial, alveolar, or velar stops. Despite the fact that there was no opportunity for the animal subjects to use experienced covariation of F1-onset frequency and F1 cut-back, quail typically exhibited shorter labeling boundaries (more voiceless stops) for intermediate stimuli of the continua when F1 frequency was higher. Responses by human subjects listening to the same stimuli were also collected. Results lend support to the earlier conclusion that part or all of the effect of F1-onset frequency on perception of voicing may be adequately explained by general auditory processes. (ABSTRACT TRUNCATED AT 250 WORDS)
Optimizing the Combination of Acoustic and Electric Hearing in the Implanted Ear
Karsten, Sue A.; Turner, Christopher W.; Brown, Carolyn J.; Jeon, Eun Kyung; Abbas, Paul J.; Gantz, Bruce J.
2016-01-01
Objectives: The aim of this study was to determine an optimal approach to program combined acoustic plus electric (A+E) hearing devices in the same ear to maximize speech-recognition performance. Design: Ten participants with at least 1 year of experience using Nucleus Hybrid (short electrode) A+E devices were evaluated across three different fitting conditions that varied in the frequency ranges assigned to the acoustically and electrically presented portions of the spectrum. Real-ear measurements were used to optimize the acoustic component for each participant, and the acoustic stimulation was then held constant across conditions. The lower boundary of the electric frequency range was systematically varied to create three conditions with respect to the upper boundary of the acoustic spectrum: Meet, Overlap, and Gap programming. Consonant recognition in quiet and speech recognition in competing-talker babble were evaluated after participants were given the opportunity to adapt by using the experimental programs in their typical everyday listening situations. Participants provided subjective ratings and evaluations for each fitting condition. Results: There were no significant differences in performance between conditions (Meet, Overlap, Gap) for consonant recognition in quiet. A significant decrement in performance was measured for the Overlap fitting condition for speech recognition in babble. Subjective ratings indicated a significant preference for the Meet fitting regimen. Conclusions: Participants using the Hybrid ipsilateral A+E device generally performed better when the acoustic and electric spectra were programmed to meet at a single frequency region, as opposed to a gap or overlap. Although there is no particular advantage for the Meet fitting strategy for recognition of consonants in quiet, the advantage becomes evident for speech recognition in competing-talker babble and in patient preferences. PMID:23059851
Psychophysical and Neural Correlates of Auditory Attraction and Aversion
NASA Astrophysics Data System (ADS)
Patten, Kristopher Jakob
This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists across a set of 20 stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristic of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds, compared with one neutral baseline control, elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and in the left dorsomedial prefrontal cortex; the latter is consistent with a frontal decision-making process common in identification tasks. The negatively valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. Both the psychophysical findings of Experiment 1 and the neural findings of Experiment 2 support the conclusion that consonance is an important dimension of sound, processed in a manner that aids auditory parsing and the functional representation of acoustic objects, and a principal feature of pleasing auditory stimuli.
Spencer, Caroline; Weber-Fox, Christine
2014-01-01
Purpose: In preschool children, we investigated whether expressive and receptive language, phonological, articulatory, and/or verbal working memory proficiencies aid in predicting eventual recovery or persistence of stuttering. Methods: Participants were 65 children: 25 children who do not stutter (CWNS) and 40 who stutter (CWS), recruited at ages 3;9–5;8. At initial testing, participants were administered the Test of Auditory Comprehension of Language, 3rd edition (TACL-3), the Structured Photographic Expressive Language Test, 3rd edition (SPELT-3), the Bankson-Bernthal Test of Phonology-Consonant Inventory subtest (BBTOP-CI), the Nonword Repetition Test (NRT; Dollaghan & Campbell, 1998), and the Test of Auditory Perceptual Skills-Revised (TAPS-R) auditory number memory and auditory word memory subtests. Stuttering behaviors of CWS were assessed in subsequent years, forming groups whose stuttering eventually persisted (CWS-Per; n=19) or recovered (CWS-Rec; n=21). Proficiency scores in morphosyntactic skills, consonant production, verbal working memory for known words, and phonological working memory and speech production for novel nonwords obtained at the initial testing were analyzed for each group. Results: CWS-Per were less proficient than CWNS and CWS-Rec on measures of consonant production (BBTOP-CI) and repetition of novel phonological sequences (NRT). In contrast, receptive language, expressive language, and verbal working memory abilities did not distinguish CWS-Rec from CWS-Per. Binary logistic regression analysis indicated that preschool BBTOP-CI scores and overall NRT proficiency significantly predicted future recovery status. Conclusion: Results suggest that phonological and speech articulation abilities in the preschool years should be considered along with other predictive factors as part of a comprehensive risk assessment for the development of chronic stuttering. PMID:25173455
Wen, Yushi; Zhang, Chaoyang; Xue, Xianggui; Long, Xinping
2015-05-14
Clustering has been experimentally and theoretically verified during the complicated processes involved in heating high explosives, and it is thought to influence their detonation properties. However, the clustering that occurs has not been fully elucidated. We used molecular dynamics simulations with an improved reactive force field, ReaxFF_lg, to carry out a comparative study of cluster evolution during the early stages of heating for three representative explosives: 1,3,5-triamino-2,4,6-trinitrobenzene (TATB), β-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX) and pentaerythritol tetranitrate (PETN). These representatives vary greatly in their oxygen balance (OB), molecular structure, stability and experimental sensitivity. We found that when heated, TATB, HMX and PETN differ in the size, amount, proportion and lifetime of their clusters. We also found that the clustering tendency of explosives decreases as their OB becomes less negative. We propose that the relationship between OB and clustering can be attributed to the role of clustering in detonation. That is, clusters form more readily in a high explosive with a more negative OB; they retard its energy release, secondary decomposition and further decomposition to final small-molecule products, and they widen its detonation reaction zone. Moreover, we found that the carbon content of the clusters increases during clustering, in accordance with the observed soot, which is mainly composed of carbon, as the final product of detonation or deflagration.
Childhood apraxia of speech: A survey of praxis and typical speech characteristics.
Malmenholt, Ann; Lohmander, Anette; McAllister, Anita
2017-07-01
The purpose of this study was to investigate current knowledge of the diagnosis of childhood apraxia of speech (CAS) in Sweden and to compare speech characteristics and symptoms with earlier survey findings from mainly English speakers. In a web-based questionnaire, 178 Swedish speech-language pathologists (SLPs) anonymously answered questions about their perception of typical speech characteristics for CAS. They graded their own assessment skills and estimated the clinical occurrence. The seven top speech characteristics reported as typical for children with CAS were: inconsistent speech production (85%), sequencing difficulties (71%), oro-motor deficits (63%), vowel errors (62%), voicing errors (61%), consonant cluster deletions (54%), and prosodic disturbance (53%). Motor-programming deficits, described as a lack of automatization of speech movements, were perceived by 82%. All listed characteristics were consistent with the American Speech-Language-Hearing Association (ASHA) consensus-based features, Strand's 10-point checklist, and the diagnostic model proposed by Ozanne. The mode for clinical occurrence was 5%. The number of suspected cases of CAS in the clinical caseload was approximately one new patient per year per SLP. The results support and add to findings from studies of CAS in English-speaking children, with similar speech characteristics regarded as typical. These findings could contribute to a cross-linguistic consensus on CAS characteristics.
Distinct developmental profiles in typical speech acquisition
Campbell, Thomas F.; Shriberg, Lawrence D.; Green, Jordan R.; Abdi, Hervé; Rusiewicz, Heather Leavy; Venkatesh, Lakshmi; Moore, Christopher A.
2012-01-01
Three- to five-year-old children produce speech that is characterized by a high level of variability within and across individuals. This variability, which is manifest in speech movements, acoustics, and overt behaviors, can be input to subgroup discovery methods to identify cohesive subgroups of speakers or to reveal distinct developmental pathways or profiles. This investigation characterized three distinct groups of typically developing children and provided normative benchmarks for speech development. These speech development profiles, identified among 63 typically developing preschool-aged speakers (ages 36–59 mo), were derived from the children's performance on multiple measures. The profiles were obtained by submitting to a k-means cluster analysis 72 measures spanning three levels of speech analysis: behavioral (e.g., task accuracy, percentage of consonants correct), acoustic (e.g., syllable duration, syllable stress), and kinematic (e.g., variability of movements of the upper lip, lower lip, and jaw). Two of the discovered group profiles were distinguished by measures of variability but not by phonemic accuracy; the third group of children was characterized by relatively low phonemic accuracy but not by an increase in measures of variability. Analyses revealed that, of the original 72 measures, 8 key measures were sufficient to best distinguish the 3 profile groups. PMID:22357794
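As a rough illustration of the analysis pipeline described above (not the authors' code or data), the following sketch standardizes a speakers-by-measures matrix and partitions the speakers into three profiles with k-means; the random matrix is a placeholder, and the step that reduced 72 measures to 8 key measures is omitted.

```python
# Sketch of the profile-discovery analysis: z-score a speakers-by-measures
# matrix, then partition speakers with k-means (k = 3). The random matrix
# stands in for the real 63 speakers x 72 measures.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(63, 72))               # placeholder behavioral/acoustic/kinematic measures

X_std = StandardScaler().fit_transform(X)   # put all 72 measures on a common scale
profiles = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_std)
print(np.bincount(profiles))                # sizes of the 3 developmental profile groups
```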
Minicucci, Domenic; Guediche, Sara; Blumstein, Sheila E
2013-08-01
The current study explored how factors of acoustic-phonetic and lexical competition affect access to the lexical-semantic network during spoken word recognition. An auditory semantic priming lexical decision task was presented to subjects while they were in the MR scanner. Prime-target pairs consisted of prime words with the initial voiceless stop consonants /p/, /t/, and /k/ followed by word and nonword targets. To examine the neural consequences of lexical and sound structure competition, primes either had voiced minimal pair competitors or they did not, and they were either acoustically modified to be poorer exemplars of the voiceless phonetic category or not. Neural activation associated with semantic priming (Unrelated-Related conditions) revealed a bilateral fronto-temporo-parietal network. Within this network, clusters in the left insula/inferior frontal gyrus (IFG), left superior temporal gyrus (STG), and left posterior middle temporal gyrus (pMTG) showed sensitivity to lexical competition. The pMTG also demonstrated sensitivity to acoustic modification, and the insula/IFG showed an interaction between lexical competition and acoustic modification. These findings suggest that the posterior lexical-semantic network is modulated by both acoustic-phonetic and lexical structure, and that the resolution of these two sources of competition recruits frontal structures. Copyright © 2013 Elsevier Ltd. All rights reserved.
Code of Federal Regulations, 2011 CFR
2011-01-01
... development programs and policies. (d) Regional Programs will be consonant with all rural development... Secretary of Agriculture STATE AND REGIONAL ANNUAL PLANS OF WORK Regional Program § 23.9 General. (a... “Regional Programs.” (b) The Regional Programs shall develop and provide knowledge essential to assist and...
Code of Federal Regulations, 2010 CFR
2010-01-01
... responsive to rural development needs and activities. (c) The Regional Programs will concentrate on the high... development programs and policies. (d) Regional Programs will be consonant with all rural development... Secretary of Agriculture STATE AND REGIONAL ANNUAL PLANS OF WORK Regional Program § 23.9 General. (a...
Improving Mathematics Instruction Using Technology: A Vygotskian Perspective.
ERIC Educational Resources Information Center
Harvey, Francis A.; Charnitski, Christina Wotell
Strategies and programs for improving mathematics instruction should be derived from sound educational theory. The sociocultural learning theories of Vygotsky may offer guidance in developing technology-based mathematics curriculum materials consonant with the NCTM (National Council of Teachers of Mathematics) goals and objectives. Vygotsky's…
Tres mitos de la fonetica espanola (Three Myths of Spanish Phonetics).
ERIC Educational Resources Information Center
Dalbor, John B.
1980-01-01
Contrasts current pronunciation of some Spanish consonants with the teachings and theory of pronunciation manuals, advocating more realistic standards of instruction. Gives a detailed phonetic description of common variants of the sounds discussed, covering both Spanish and Latin American dialects. (MES)
77 FR 52701 - Board on Coastal Engineering Research
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-30
... DEPARTMENT OF DEFENSE Department of the Army; Corps of Engineers Board on Coastal Engineering... following committee meeting: Name of Committee: Board on Coastal Engineering Research. Date of Meeting... consonance with the needs of the coastal engineering field and the objectives of the Chief of Engineers...
77 FR 3240 - Board on Coastal Engineering Research
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-23
... DEPARTMENT OF DEFENSE Department of the Army; Corps of Engineers Board on Coastal Engineering... following committee meeting: Name of Committee: Board on Coastal Engineering Research. DATES: Date of... development of research projects in consonance with the needs of the coastal engineering field and the...
75 FR 62113 - Board on Coastal Engineering Research
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-07
... DEPARTMENT OF DEFENSE Department of the Army; Corps of Engineers Board on Coastal Engineering... following committee meeting: Name of Committee: Board on Coastal Engineering Research. Date of Meeting... development of research projects in consonance with the needs of the coastal engineering field and the...
The Adolescent Dyslexic: Strategies for Spelling.
ERIC Educational Resources Information Center
Stirling, Eileen
1989-01-01
The spelling difficulties of the adolescent dyslexic student are described, and techniques are presented to provide the student with the tools needed to cope with spelling requirements, including the study of vowel sounds, doubling the consonant following a short vowel, root words, and laws of probabilities. (JDD)
Physiological aspects of a vocal exercise.
Elliot, N; Sundberg, J; Gramming, P
1997-06-01
The physiological aim of vocal exercises is mostly understood in intuitive terms only. This article presents an attempt to document the phonatory behavior induced by a vocal exercise. An elevated vertical position of the larynx is frequently associated with hyperfunctional phonatory habits, presumably because it induces an exaggerated vocal fold adduction. Using the multichannel electroglottograph (MEGG), the laryngeal position was determined in a group of subjects who performed a voice exercise that contained extremely prolonged versions of the consonant /b:/. This exercise is used by the coauthor (N.E.) as part of a standard vocal exercise program. Two of the seven subjects were dysphonic phonasthenic patients, and the rest were normal trained or untrained persons. Different attempts to calibrate the MEGG confirmed a linear relationship with larynx height, provided electrode positioning was correct. The results showed that the exercise induced substantial vertical displacements of the larynx. Comparison with larynx height during voicing of other consonants showed that the /b/, in particular, tended to lower the larynx.
NASA Astrophysics Data System (ADS)
Fredouille, Corinne; Pouchoulin, Gilles; Ghio, Alain; Revis, Joana; Bonastre, Jean-François; Giovanni, Antoine
2009-12-01
This paper addresses voice disorder assessment. It proposes an original back-and-forth methodology involving an automatic classification system as well as the knowledge of human experts (machine learning experts, phoneticians, and pathologists). The goal of this methodology is to bring a better understanding of the acoustic phenomena related to dysphonia. The automatic system was validated on a dysphonic corpus (80 female voices) rated according to the GRBAS perceptual scale by an expert jury. First, focusing on the frequency domain, the classification system demonstrated the relevance of the 0-3000 Hz frequency band for the classification task based on the GRBAS scale. Subsequently, an automatic phonemic analysis underlined the significance of consonants, and more surprisingly of unvoiced consonants, for the same classification task. Submitted to the human experts, these observations led to a manual analysis of unvoiced plosives, which highlighted a lengthening of VOT with dysphonia severity, validated by a preliminary statistical analysis.
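As a hedged illustration of the first finding, the sketch below restricts a voice signal to the 0-3000 Hz band before any feature extraction; the filter type and order are assumptions on my part, not the paper's design.

```python
# Assumed pre-processing: keep only the 0-3000 Hz band the classifier
# found informative. The Butterworth design and order are my choices.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandlimit_3khz(x: np.ndarray, fs: float) -> np.ndarray:
    """Zero-phase low-pass at 3 kHz before extracting voice features."""
    sos = butter(8, 3000, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

fs = 16_000
x = np.random.default_rng(0).normal(size=fs)  # placeholder 1-s voice signal
x_band = bandlimit_3khz(x, fs)
print(x_band.shape)  # (16000,)
```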
Formula recollection through a WORLDLY recognized mnemonic technique
NASA Astrophysics Data System (ADS)
Schunicht, Shannon
2009-10-01
Physics may be made fun, and further learning encouraged, through ease of recollection of complicated formulas, all the while increasing students' comfort with their algebraic skills. Examples will be shown of how ANY complicated formula may be made into a memorable acronym using this author's mnemonic technique, i.e., allowing each vowel to represent a mathematical operation: ``a'' for multiplication, implying ``@''; ``o'' for division, implying ``over''; ``i'' for subtraction, implying ``minus''; ``u'' for addition, implying ``plus''; and ``e'' implying ``equals''. Most constants and variables are indeed consonants; ``c'' = ``speed of light'' & ``z'' = ``altitude''. With this mnemonic technique, ANY formula may be algebraically manipulated into a word, or series of words, for ease of recollection. Additional letters may be added to enhance the intelligibility of such a letter combination, but these additional letters need be consonants ONLY. This mnemonic technique was developed as a compensatory memory method when taking physics at Texas A&M University following a severe head injury (19 days of unconsciousness!) suffered by this author.
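A toy encoder makes the vowel-as-operator scheme concrete (an editor's sketch, not the author's implementation; the token format and function name are invented): operators become vowels while single-letter variables, mostly consonants, pass through, so v = d / t reads as the pronounceable "vedot".

```python
# Toy encoder for the vowel-as-operator mnemonic (editor's sketch).
OP_TO_VOWEL = {"*": "a", "/": "o", "-": "i", "+": "u", "=": "e"}

def formula_to_mnemonic(tokens: list[str]) -> str:
    """Operators map to vowels; variable letters pass through unchanged."""
    return "".join(OP_TO_VOWEL.get(tok, tok) for tok in tokens)

print(formula_to_mnemonic(["v", "=", "d", "/", "t"]))  # vedot: v equals d over t
print(formula_to_mnemonic(["W", "=", "F", "*", "d"]))  # WeFad: W equals F at (times) d
```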
de Souza Armini, Rubia; Bernabé, Cristian Setúbal; Rosa, Caroline Azevedo; Siller, Carlos Antônio; Schimitel, Fagna Giacomin; Tufik, Sérgio; Klein, Donald Franklin; Schenberg, Luiz Carlos
2015-03-01
Panic disorder patients are exquisitely and specifically sensitive to hypercapnia. The demonstration that carbon dioxide provokes panic in fear-unresponsive, amygdala-calcified Urbach-Wiethe patients emphasizes that panic is not fear, nor does it require the activation of the amygdala. This is consonant with increasing evidence suggesting that panic is mediated caudally, at the midbrain's dorsal periaqueductal gray matter (DPAG). Another startling feature of apparently spontaneous clinical panic is the counterintuitive lack of increments in corticotropin, cortisol and prolactin, hormones generally considered 'stress hormones'. Here we show that the stress hormones are not changed during DPAG-evoked panic when escape is prevented by stimulating the rat in a small compartment. Nor did corticotropin increase when physical exertion, as measured by plasma lactate levels, was statistically adjusted to the level of non-stimulated controls. Conversely, neuroendocrine responses to foot-shocks were independent of muscular effort. The data are consonant with DPAG mediation of panic attacks. Copyright © 2015 Elsevier Ltd. All rights reserved.
Spectral analysis method and sample generation for real time visualization of speech
NASA Astrophysics Data System (ADS)
Hobohm, Klaus
A method for translating speech signals into optical representations, characterized by high sound discriminability and learnability and designed to give deaf persons feedback for controlling their manner of speaking, is presented. Important properties of the speech production and perception processes, and of the organs involved in these mechanisms, are reviewed in order to define requirements for speech visualization. It is established that the spectral representation must do justice to the time, frequency, and amplitude resolution of hearing, and that continuous variations in the acoustic parameters of the speech signal must be depicted by continuous variations of the images. A color table was developed for dynamic illustration, and sonograms were generated with five spectral analysis methods, including Fourier transformation and linear predictive coding. To evaluate sonogram quality, test persons had to recognize consonant/vowel/consonant words; an optimized analysis method was achieved with a fast Fourier transformation and a postprocessor. A hardware concept for a real-time speech visualization system, based on multiprocessor technology in a personal computer, is presented.
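For readers unfamiliar with the FFT-based sonogram computation the abstract refers to, here is a minimal sketch under stated assumptions: the sampling rate, window parameters, and test signal are illustrative, and the thesis's color table and postprocessor are not reproduced.

```python
# Minimal FFT sonogram under stated assumptions (rate, window, test tone).
import numpy as np
from scipy.signal import spectrogram

fs = 16_000                              # typical speech sampling rate
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 440 * t)          # placeholder for a speech signal

# Short-time FFT: window length sets the time/frequency resolution
# trade-off that a hearing-like display has to balance.
freqs, times, Sxx = spectrogram(x, fs=fs, nperseg=512, noverlap=384)
level_db = 10 * np.log10(Sxx + 1e-12)    # log power, as in a sonogram display
print(level_db.shape)                    # (257 frequency bins, time frames)
```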
Nonlocal screening effects on core-level photoemission spectra investigated by large-cluster models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okada, K.; Kotani, A.
1995-08-15
The copper 2p core-level x-ray photoemission spectrum in CuO₂-plane systems is calculated by means of large-cluster models to investigate in detail the nonlocal screening effects, which were pointed out by van Veenendaal et al. [Phys. Rev. B 47, 11462 (1993)]. Calculating the hole distributions for the initial and final states of photoemission, we show that the atomic coordination in a cluster strongly affects the accessible final states. Accordingly, we point out that the interpretation given for Cu₃O₁₀ by van Veenendaal et al. is not always general. Moreover, it is shown that the spectrum can be remarkably affected by whether or not the O 2pπ orbitals are taken into account in the calculations. We also introduce a Hartree-Fock approximation in order to treat much larger cluster models.
A Computational Cluster for Multiscale Simulations of Ionic Liquids
2008-09-16
Final report documentation (DURIP grant FA955007-1-0512; PI: Gregory A. Voth; title: DURIP: A Computational Cluster for Multiscale Simulations of Ionic Liquids). Abstract: The focus of this project was to acquire and use computer cluster nodes…
Janevska, Slavica; Arndt, Birgit; Baumann, Leonie; Apken, Lisa Helene; Mauriz Marques, Lucas Maciel; Humpf, Hans-Ulrich; Tudzynski, Bettina
2017-01-01
The PKS-NRPS-derived tetramic acid equisetin and its N-desmethyl derivative trichosetin exhibit remarkable biological activities against a variety of organisms, including plants and bacteria, e.g., Staphylococcus aureus. The equisetin biosynthetic gene cluster was first described in Fusarium heterosporum, a species distantly related to the notorious rice pathogen Fusarium fujikuroi. Here we present the activation and characterization of a homologous, but silent, gene cluster in F. fujikuroi. Bioinformatic analysis revealed that this cluster does not contain the equisetin N-methyltransferase gene eqxD; consequently, trichosetin was isolated as the final product. The adaptation of the inducible, tetracycline-dependent Tet-on promoter system from Aspergillus niger achieved a controlled overproduction of this toxic metabolite and a functional characterization of each cluster gene in F. fujikuroi. Overexpression of one of the two cluster-specific transcription factor (TF) genes, TF22, led to an activation of the three biosynthetic cluster genes, including the PKS-NRPS key gene. In contrast, overexpression of TF23, encoding a second Zn(II)2Cys6 TF, did not activate adjacent cluster genes. Instead, TF23 was induced by the final product trichosetin and was required for expression of the transporter-encoding gene MFS-T. TF23 and MFS-T likely act in concert and contribute to the detoxification of trichosetin and, therefore, to the self-protection of the producing fungus. PMID:28379186
Janevska, Slavica; Arndt, Birgit; Baumann, Leonie; Apken, Lisa Helene; Mauriz Marques, Lucas Maciel; Humpf, Hans-Ulrich; Tudzynski, Bettina
2017-04-05
The PKS-NRPS-derived tetramic acid equisetin and its N-desmethyl derivative trichosetin exhibit remarkable biological activities against a variety of organisms, including plants and bacteria, e.g., Staphylococcus aureus. The equisetin biosynthetic gene cluster was first described in Fusarium heterosporum, a species distantly related to the notorious rice pathogen Fusarium fujikuroi. Here we present the activation and characterization of a homologous, but silent, gene cluster in F. fujikuroi. Bioinformatic analysis revealed that this cluster does not contain the equisetin N-methyltransferase gene eqxD; consequently, trichosetin was isolated as the final product. The adaptation of the inducible, tetracycline-dependent Tet-on promoter system from Aspergillus niger achieved a controlled overproduction of this toxic metabolite and a functional characterization of each cluster gene in F. fujikuroi. Overexpression of one of the two cluster-specific transcription factor (TF) genes, TF22, led to an activation of the three biosynthetic cluster genes, including the PKS-NRPS key gene. In contrast, overexpression of TF23, encoding a second Zn(II)₂Cys₆ TF, did not activate adjacent cluster genes. Instead, TF23 was induced by the final product trichosetin and was required for expression of the transporter-encoding gene MFS-T. TF23 and MFS-T likely act in concert and contribute to the detoxification of trichosetin and, therefore, to the self-protection of the producing fungus.