ERIC Educational Resources Information Center
Snellings, Patrick; van der Leij, Aryan; Blok, Henk; de Jong, Peter F.
2010-01-01
This study investigated the role of speech perception accuracy and speed in fluent word decoding of reading disabled (RD) children. A same-different phoneme discrimination task with natural speech tested the perception of single consonants and consonant clusters by young but persistent RD children. RD children were slower than chronological age…
ERIC Educational Resources Information Center
Chan, Alice Y. W.
2006-01-01
This article discusses the strategies used by Cantonese ESL learners to cope with their problems in pronouncing English initial consonant clusters. A small-scale research study was carried out with six secondary and six university students in Hong Kong, who were asked to perform four speech tasks: the reading of a word list, the description of a…
Acquisition of /S/ Clusters in English-Speaking Children with Phonological Disorders
ERIC Educational Resources Information Center
Yavas, Mehmet; McLeod, Sharynne
2010-01-01
Two-member onset consonant clusters with /s/ as the first member (#sC onsets) behave differently from other two-member onset consonant clusters in English. Phonological explanations of children's consonant cluster production have been posited to predict children's speech acquisition. The aim of this study was to consider the role of the Sonority…
Spanish Dyslexic Spelling Abilities: The Case of Consonant Clusters
ERIC Educational Resources Information Center
Serrano, Francisca; Defior, Sylvia
2012-01-01
This paper investigates Spanish dyslexic spelling abilities: specifically, the influence of syllabic linguistic structure (simple vs consonant cluster) on children's spelling performance. Consonant clusters are phonologically complex structures, so it was anticipated that there would be lower spelling performance for these syllabic structures than…
Acquisition of Japanese contracted sounds in L1 phonology
NASA Astrophysics Data System (ADS)
Tsurutani, Chiharu
2002-05-01
Japanese possesses a group of palatalized consonants, known to Japanese scholars as the contracted sounds, [CjV]. English learners of Japanese appear to treat them initially as consonant + glide clusters where English has an equivalent [Cj] cluster, or otherwise tend to insert an epenthetic vowel, [CVjV]. The acquisition of the Japanese contracted sounds by first language (L1) learners has not been widely studied, in contrast to the English consonant clusters with which they bear a close phonetic resemblance but have quite a different phonological status. This study investigates the L1 acquisition of the Japanese contracted sounds (a) to observe how the palatalization gesture is acquired in Japanese and (b) to compare the sound acquisition processes of first and second language (L2) learners, that is, Japanese children versus English learners. To this end, the productions of Japanese children ranging in age from 2.5 to 3.5 years were transcribed and the patterns of misproduction were observed.
A Constraint-Based Approach to Acquisition of Word-Final Consonant Clusters in Turkish Children
ERIC Educational Resources Information Center
Gokgoz-Kurt, Burcu
2017-01-01
The current study provides a constraint-based analysis of L1 word-final consonant cluster acquisition in Turkish child language, based on the data originally presented by Topbas and Kopkalli-Yavuz (2008). The present analysis was done using [?]+obstruent consonant cluster acquisition. A comparison of Gradual Learning Algorithm (GLA) under…
ERIC Educational Resources Information Center
Young, Edna Carter; Thompson, Cynthia K.
1987-01-01
The effects of treatment on errors in consonant clusters and in ambisyllabic consonants were investigated in two adults with histories of developmental phonological problems. Results indicated that treatment, consisting of a sound-referenced rebus approach, effected change in production of trained words as well as generalization to untrained words…
Influence of syllable structure on L2 auditory word learning.
Hamada, Megumi; Goya, Hideki
2015-04-01
This study investigated the role of syllable structure in L2 auditory word learning. Based on research on cross-linguistic variation of speech perception and lexical memory, it was hypothesized that Japanese L1 learners of English would learn English words with an open-syllable structure without consonant clusters better than words with a closed-syllable structure and consonant clusters. Two groups of college students (Japanese group, N = 22; and native speakers of English, N = 21) learned paired English pseudowords and pictures. The pseudoword types differed in terms of the syllable structure and consonant clusters (congruent vs. incongruent) and the position of consonant clusters (coda vs. onset). Recall accuracy was higher for the pseudowords in the congruent type and the pseudowords with the coda-consonant clusters. The syllable structure effect was obtained from both participant groups, disconfirming the hypothesized cross-linguistic influence on L2 auditory word learning.
Japanese Listeners' Perceptions of Phonotactic Violations
ERIC Educational Resources Information Center
Fais, Laurel; Kajikawa, Sachiyo; Werker, Janet; Amano, Shigeaki
2005-01-01
The canonical form for Japanese words is (Consonant)Vowel(Consonant)Vowel~. However, a regular process of high vowel devoicing between voiceless consonants and word-finally after voiceless consonants results in consonant clusters and word-final consonants, apparent violations of that phonotactic pattern. We investigated Japanese…
Shollenbarger, Amy J; Robinson, Gregory C; Taran, Valentina; Choi, Seo-Eun
2017-10-05
This study explored how typically developing 1st grade African American English (AAE) speakers differ from mainstream American English (MAE) speakers in the completion of 2 common phonological awareness tasks (rhyming and phoneme segmentation) when the stimulus items were consonant-vowel-consonant-consonant (CVCC) words and nonwords. Forty-nine 1st graders met criteria for 2 dialect groups: AAE and MAE. Three conditions were tested in each rhyme and segmentation task: Real Words No Model, Real Words With a Model, and Nonwords With a Model. The AAE group had significantly more responses that rhymed CVCC words with consonant-vowel-consonant words and segmented CVCC words as consonant-vowel-consonant than the MAE group across all experimental conditions. In the rhyming task, the presence of a model in the real word condition elicited more reduced final cluster responses for both groups. In the segmentation task, the MAE group was at ceiling, so only the AAE group changed across the different stimulus presentations and reduced the final cluster less often when given a model. Rhyming and phoneme segmentation performance can be influenced by a child's dialect when CVCC words are used.
The phonological abilities of Cantonese-speaking children with hearing loss.
Dodd, B J; So, L K
1994-06-01
Little is known about the acquisition of phonology by children with hearing loss who learn languages other than English. In this study, the phonological abilities of 12 Cantonese-speaking children (ages 4;2 to 6;11) with prelingual hearing impairment are described. All but 3 children had almost complete syllable-initial consonant repertoires; all but 2 had complete syllable-final consonant and vowel repertoires; and only 1 child failed to produce all nine tones. Children's perception of single words was assessed using sets of words that included tone, consonant, and semantic distractors. Although the performance of the subjects was not age appropriate, they nevertheless most often chose the target, with most errors observed for the tone distractor. The phonological rules used included those that characterize the speech of younger hearing children acquiring Cantonese (e.g., cluster reduction, stopping, and deaspiration). However, most children also used at least one unusual phonological rule (e.g., frication, addition, initial consonant deletion, and/or backing). These rules are common in the speech of Cantonese-speaking children diagnosed as phonologically disordered. The influence of the ambient language on children's patterns of phonological errors is discussed.
Reviewing Sonority for Word-Final Sonorant+Obstruent Consonant Cluster Development in Turkish
ERIC Educational Resources Information Center
Topbas, Seyhun; Kopkalli-Yavuz, Handan
2008-01-01
The purpose of this study is to investigate the acquisition patterns of sonorant+obstruent coda clusters in Turkish to determine whether Turkish data support the prediction the Sonority Sequencing Principle (SSP) makes as to which consonant (i.e. C1 or C2) is more likely to be preserved in sonorant+obstruent clusters, and the error patterns of…
Consonant Cluster Acquisition by L2 Thai Speakers
ERIC Educational Resources Information Center
Rungruang, Apichai
2017-01-01
Accounts of consonant cluster acquisition typically appeal to two factors. One is transfer from the first language (L1); the other is markedness effects on the developmental processes of second language acquisition. This study continues these attempts by examining how well Thai university students were able to perceive English…
Phonetic Effects on the Timing of Gestural Coordination in Modern Greek Consonant Clusters
ERIC Educational Resources Information Center
Yip, Jonathan Chung-Kay
2013-01-01
Theoretical approaches to the principles governing the coordination of speech gestures differ in their assessment of the contributions of biomechanical and perceptual pressures on this coordination. Perceptually-oriented accounts postulate that, for consonant-consonant (C1-C2) sequences, gestural timing patterns arise from speakers' sensitivity to…
Investigation into Korean EFL Learners' Acquisition of English /s/ + Consonant Onset Clusters
ERIC Educational Resources Information Center
Choi, Jungyoun
2016-01-01
This paper investigated the phonological acquisition of English /s/ + consonant onset clusters by Korean learners of English as a Foreign Language (EFL) who varied in their levels of proficiency. The data were collected from twenty eighth-graders in a Korean secondary school, who were divided into two groups according to their proficiency: low-…
ERIC Educational Resources Information Center
Faes, Jolien; Gillis, Steven
2017-01-01
In early word productions, the same types of errors are manifest in children with cochlear implants (CI) as in their normally hearing (NH) peers with respect to consonant clusters. However, the incidence of those types and their longitudinal development have not been examined or quantified in the literature thus far. Furthermore, studies on the…
Asymmetries in the Acquisition of Word-Initial and Word-Final Consonant Clusters
ERIC Educational Resources Information Center
Kirk, Cecilia; Demuth, Katherine
2005-01-01
Effects of negative input for 13 categories of grammatical error were assessed in a longitudinal study of naturalistic adult-child discourse. Two-hour samples of conversational interaction were obtained at two points in time, separated by a lag of 12 weeks, for 12 children (mean age 2;0 at the start). The data were interpreted within the framework…
ERIC Educational Resources Information Center
Cho, Taehong; McQueen, James M.
2011-01-01
Two experiments examined whether perceptual recovery from Korean consonant-cluster simplification is based on language-specific phonological knowledge. In tri-consonantal C1C2C3 sequences such as /lkt/ and /lpt/ in Seoul Korean, either C1 or C2 can be completely deleted. Seoul Koreans monitored for C2 targets (/p/ or /k/, deleted or preserved) in…
Ota, Mitsuhiko; Green, Sam J
2013-06-01
Although it has often been hypothesized that children learn to produce new sound patterns first in frequently heard words, the available evidence in support of this claim is inconclusive. To re-examine this question, we conducted a survival analysis of word-initial consonant clusters produced by three children in the Providence Corpus (0;11-4;0). The analysis took account of several lexical factors in addition to lexical input frequency, including the age of first production, production frequency, neighborhood density and number of phonemes. The results showed that lexical input frequency was a significant predictor of the age at which the accuracy level of cluster production in each word first reached 80%. The magnitude of the frequency effect differed across cluster types. Our findings indicate that some of the between-word variance found in the development of sound production can indeed be attributed to the frequency of words in the child's ambient language.
Phoon, Hooi San; Abdullah, Anna Christina; Lee, Lay Wah; Murugaiah, Puvaneswary
2014-05-01
To date, there has been little research on phonological acquisition in the Malay language by typically developing Malay-speaking children. This study serves to fill this gap by providing a systematic description of Malay consonant acquisition in a large cohort of preschool-aged children between 4 and 6 years old. In the study, 326 Malay-dominant speaking children were assessed using a picture-naming task that elicited 53 single words containing all the primary consonants in Malay. Two main analyses were conducted to study their consonant acquisition: (1) age of customary and mastery production of consonants; and (2) consonant accuracy. Results revealed that Malay children acquired all the syllable-initial and syllable-final consonants before 4;06, with the exception of syllable-final /s/, /h/ and /l/, which were acquired after 5;06. The development of Malay consonants increased gradually from 4 to 6 years old, with female children performing better than male children. Accuracy by manner of articulation showed that glides, affricates, nasals, and stops were higher than fricatives and liquids. In general, syllable-initial consonants were more accurate than syllable-final consonants, while consonants in monosyllabic and disyllabic words were more accurate than those in polysyllabic words. These findings provide significant information for speech-language pathologists assessing Malay-speaking children and designing treatment objectives that reflect the course of phonological development in Malay.
Wiese, Richard; Orzechowska, Paula; Alday, Phillip M.; Ulbrich, Christiane
2017-01-01
Phonological knowledge of a language involves knowledge about which segments can be combined under what conditions. Languages vary in the quantity and quality of licensed combinations, in particular sequences of consonants, with Polish being a language with a large inventory of such combinations. The present paper reports on a two-session experiment in which Polish-speaking adult participants learned nonce words with final consonant clusters. The aim was to study the role of two factors which potentially play a role in the learning of phonotactic structures: the phonological principle of sonority (the ordering of sound segments within the syllable according to their inherent loudness) and the (non-)existence of the clusters as a usage-based phenomenon. EEG responses in two different time windows (unlike behavioral responses) showed linguistic processing by native speakers of Polish to be sensitive to both distinctions, despite the fact that Polish is rich in sonority-violating clusters. In particular, a general learning effect in terms of an N400 effect was found, and this effect differed between sonority-obeying and sonority-violating clusters. Furthermore, significant interactions of well-formedness and session, and of existence and session, demonstrate that both factors, the sonority principle and the frequency pattern, play a role in the learning process. PMID:28119642
Phonological Systems of Speech-Disordered Clients with Positive/Negative Histories of Otitis Media.
ERIC Educational Resources Information Center
Churchill, Janine D.; And Others
1988-01-01
Evaluation of object-naming utterances of articulation-disordered children (ages 3-6) found that subjects with histories of recurrent otitis media during their first 24 months evidenced stridency deletion (in consonant singletons and in consonant clusters) significantly more than did subjects with negative otitis media histories. (Author/DB)
The Role of Consonant/Vowel Organization in Perceptual Discrimination
ERIC Educational Resources Information Center
Chetail, Fabienne; Drabs, Virginie; Content, Alain
2014-01-01
According to a recent hypothesis, the CV pattern (i.e., the arrangement of consonant and vowel letters) constrains the mental representation of letter strings, with each vowel or vowel cluster being the core of a unit. Six experiments with the same/different task were conducted to test whether this structure is extracted prelexically. In the…
Markedness in the Perception of L2 English Consonant Clusters
ERIC Educational Resources Information Center
AlMahmoud, Mahmoud S.
2011-01-01
The central goal of this dissertation is to explore the relative perceptibility of vowel epenthesis in English onset clusters by second language learners whose native language is averse to onset clusters. The dissertation examines how audible vowel epenthesis in different onset clusters is, whether this perceptibility varies from one cluster to…
Phonological awareness of English by Chinese and Korean bilinguals
NASA Astrophysics Data System (ADS)
Chung, Hyunjoo; Schmidt, Anna; Cheng, Tse-Hsuan
2002-05-01
This study examined non-native speakers' phonological awareness of spoken English. Chinese-speaking adults, Korean-speaking adults, and English-speaking adults were tested. The L2 speakers had been in the US for less than 6 months. Chinese and Korean allow no consonant clusters and have limited numbers of consonants allowable in syllable-final position, whereas English allows a variety of clusters and various consonants in syllable-final position. Subjects participated in eight phonological awareness tasks (4 replacement tasks and 4 deletion tasks) based on English phonology. In addition, digit span was measured. Preliminary analysis indicates that Chinese and Korean speakers' errors appear to reflect L1 influences (such as orthography, phonotactic constraints, and phonology). All three groups of speakers showed more difficulty with manipulation of rime than onset, especially with postvocalic nasals. Results will be discussed in terms of syllable structure, L1 influence, and association with short-term memory.
Effect of Vowel Context on the Recognition of Initial Consonants in Kannada.
Kalaiah, Mohan Kumar; Bhat, Jayashree S
2017-09-01
The present study was carried out to investigate the effect of vowel context on the recognition of Kannada consonants in quiet by young adults. A total of 17 young adults with normal hearing in both ears participated in the study. The stimuli were consonant-vowel syllables spoken by 12 native speakers of Kannada. The consonant recognition task was carried out as a closed-set, fourteen-alternative forced-choice task. The present study showed an effect of vowel context on the perception of consonants. The maximum consonant recognition score was obtained in the /o/ vowel context, followed by the /a/ and /u/ vowel contexts, and then the /e/ context. The poorest consonant recognition score was obtained in the vowel context /i/. Vowel context thus has an effect on the recognition of Kannada consonants, and the vowel effect was unique for Kannada consonants.
Infants Learn Phonotactic Regularities from Brief Auditory Experience.
ERIC Educational Resources Information Center
Chambers, Kyle E.; Onishi, Kristine H.; Fisher, Cynthia
2003-01-01
Two experiments investigated whether novel phonotactic regularities, not present in English, could be acquired by 16.5-month-olds from brief auditory experience. Subjects listened to consonant-vowel-consonant syllables in which particular consonants were artificially restricted to either initial or final position. Findings in a subsequent…
Enhancing Vowel Discrimination Using Constructed Spelling
ERIC Educational Resources Information Center
Stewart, Katherine; Hayashi, Yusuke; Saunders, Kathryn
2010-01-01
In a computerized task, an adult with intellectual disabilities learned to construct consonant-vowel-consonant words in the presence of corresponding spoken words. During the initial assessment, the participant demonstrated high accuracy on one word group (containing the vowel-consonant units "it" and "un") but low accuracy on the other group…
ERIC Educational Resources Information Center
Kim, Minjung; Kim, Soo-Jin; Stoel-Gammon, Carol
2017-01-01
This study investigates the phonological acquisition of Korean consonants using conversational speech samples collected from sixty monolingual typically developing Korean children aged two, three, and four years. Phonemic acquisition was examined for syllable-initial and syllable-final consonants. Results showed that Korean children acquired stops…
The role of tone and segmental information in visual-word recognition in Thai.
Winskel, Heather; Ratitamkul, Theeraporn; Charoensit, Akira
2017-07-01
Tone languages represent a large proportion of the spoken languages of the world and yet lexical tone is understudied. Thai offers a unique opportunity to investigate the role of lexical tone processing during visual-word recognition, as tone is explicitly expressed in its script. We used colour words and their orthographic neighbours as stimuli to investigate facilitation (Experiment 1) and interference (Experiment 2) Stroop effects. Five experimental conditions were created: (a) the colour word (e.g., ขาว /kʰã:w/ [white]), (b) tone different word (e.g., ข่าว /kʰà:w/ [news]), (c) initial consonant phonologically same word (e.g., คาว /kʰa:w/ [fishy]), where the initial consonant of the word was phonologically the same but orthographically different, (d) initial consonant different, tone same word (e.g., หาว /hã:w/ [yawn]), where the initial consonant was orthographically different but the tone of the word was the same, and (e) initial consonant different, tone different word (e.g., กาว /ka:w/ [glue]), where the initial consonant was orthographically different, and the tone was different. In order to examine whether tone information per se had a facilitative effect, we also included a colour congruent word condition where the segmental (S) information was different but the tone (T) matched the colour word (S-T+) in Experiment 2. Facilitation/interference effects were found for all five conditions when compared with a neutral control word. Results of the critical comparisons revealed that tone information comes into play at a later stage in lexical processing, and orthographic information contributes more than phonological information.
Segmentation and Representation of Consonant Blends in Kindergarten Children's Spellings
ERIC Educational Resources Information Center
Werfel, Krystal L.; Schuele, C. Melanie
2012-01-01
Purpose: The purpose of this study was to describe the growth of children's segmentation and representation of consonant blends in the kindergarten year and to evaluate the extent to which linguistic features influence segmentation and representation of consonant blends. Specifically, the roles of word position (initial blends, final blends),…
Bartle-Meyer, Carly J; Goozee, Justine V; Murdoch, Bruce E
2009-02-01
The current study aimed to use electromagnetic articulography (EMA) to investigate the effect of increasing word length on lingual kinematics in acquired apraxia of speech (AOS). Tongue-tip and tongue-back movement was recorded for five speakers with AOS and a concomitant aphasia (mean age = 53.6 years; SD = 12.60) during target consonant production (i.e. /t, s, k/ singletons; /kl, sk/ clusters), for one and two syllable stimuli. The results obtained for each of the participants with AOS were individually compared to those obtained by a control group (n = 12; mean age = 52.08 years; SD = 12.52). Results indicated that the participants with AOS exhibited longer movement durations and, in some instances, larger tongue movements during consonant singletons and consonant cluster constituents embedded within mono- and multisyllabic utterances. Despite this, two participants with AOS exhibited a word length effect that was comparable with the control speakers, and possibly indicative of an intact phonological system.
ERIC Educational Resources Information Center
Gerlach, Sharon Ruth
2010-01-01
This dissertation examines three processes affecting consonants in child speech: harmony (long-distance assimilation) involving major place features as in "coat" [kouk]; long-distance metathesis as in "cup" [pʌk]; and initial consonant deletion as in "fish" [is]. These processes are unattested in adult phonology, leading to proposals for…
The Role of Geminates in Infants' Early Word Production and Word-Form Recognition
ERIC Educational Resources Information Center
Vihman, Marilyn; Majorano, Marinella
2017-01-01
Infants learning languages with long consonants, or geminates, have been found to "overselect" and "overproduce" these consonants in early words and also to commonly omit the word-initial consonant. A production study with thirty Italian children recorded at 1;3 and 1;9 strongly confirmed both of these tendencies. To test the…
Mismatch Responses to Lexical Tone, Initial Consonant, and Vowel in Mandarin-Speaking Preschoolers
ERIC Educational Resources Information Center
Lee, Chia-Ying; Yen, Huei-ling; Yeh, Pei-wen; Lin, Wan-Hsuan; Cheng, Ying-Ying; Tzeng, Yu-Lin; Wu, Hsin-Chi
2012-01-01
The present study investigates how age, phonological saliency, and deviance size affect the presence of mismatch negativity (MMN) and positive mismatch response (P-MMR). This work measured the auditory mismatch responses to Mandarin lexical tones, initial consonants, and vowels in 4- to 6-year-old preschoolers using the multiple-deviant oddball…
Describing Phonological Paraphasias in Three Variants of Primary Progressive Aphasia.
Dalton, Sarah Grace Hudspeth; Shultz, Christine; Henry, Maya L; Hillis, Argye E; Richardson, Jessica D
2018-03-01
The purpose of this study was to describe the linguistic environment of phonological paraphasias in 3 variants of primary progressive aphasia (semantic, logopenic, and nonfluent) and to describe the profiles of paraphasia production for each of these variants. Discourse samples of 26 individuals diagnosed with primary progressive aphasia were investigated for phonological paraphasias using the criteria established for the Philadelphia Naming Test (Moss Rehabilitation Research Institute, 2013). Phonological paraphasias were coded for paraphasia type, part of speech of the target word, target word frequency, type of segment in error, word position of consonant errors, type of error, and degree of change in consonant errors. Eighteen individuals across the 3 variants produced phonological paraphasias. Most paraphasias were nonword, followed by formal, and then mixed, with errors primarily occurring on nouns and verbs, with relatively few on function words. Most errors were substitutions, followed by addition and deletion errors, and few sequencing errors. Errors were evenly distributed across vowels, consonant singletons, and clusters, with more errors occurring in initial and medial positions of words than in the final position of words. Most consonant errors consisted of only a single-feature change, with few 2- or 3-feature changes. Importantly, paraphasia productions by variant differed from these aggregate results, with unique production patterns for each variant. These results suggest that a system where paraphasias are coded as present versus absent may be insufficient to adequately distinguish between the 3 subtypes of PPA. The 3 variants demonstrate patterns that may be used to improve phenotyping and diagnostic sensitivity. These results should be integrated with recent findings on phonological processing and speech rate. 
Future research should attempt to replicate these results in a larger sample of participants with longer speech samples and varied elicitation tasks. https://doi.org/10.23641/asha.5558107.
Soares, Ana Paula; Perea, Manuel; Comesaña, Montserrat
2014-01-01
Recent research with skilled adult readers has consistently revealed an advantage of consonants over vowels in visual-word recognition (i.e., the so-called “consonant bias”). Nevertheless, little is known about how early in development the consonant bias emerges. This work aims to address this issue by studying the relative contribution of consonants and vowels at the early stages of visual-word recognition in developing readers (2nd and 4th Grade children) and skilled adult readers (college students) using a masked priming lexical decision task. Target words starting either with a consonant or a vowel were preceded by a briefly presented masked prime (50 ms) that could be the same as the target (e.g., pirata-PIRATA [pirate-PIRATE]), a consonant-preserving prime (e.g., pureto-PIRATA), a vowel-preserving prime (e.g., gicala-PIRATA), or an unrelated prime (e.g., bocelo-PIRATA). Results revealed significant priming effects for the identity and consonant-preserving conditions in adult readers and 4th Grade children, whereas 2nd graders only showed priming for the identity condition. In adult readers, the advantage of consonants was observed both for words starting with a consonant or a vowel, while in 4th graders this advantage was restricted to words with an initial consonant. Thus, the present findings suggest that a Consonant/Vowel skeleton should be included in future (developmental) models of visual-word recognition and reading. PMID:24523917
Bartle, Carly J; Goozée, Justine V; Murdoch, Bruce E
2007-03-01
The effect of increasing word length on the articulatory dynamics (i.e. duration, distance, maximum acceleration, maximum deceleration, and maximum velocity) of consonant production in acquired apraxia of speech (AOS) was investigated using electromagnetic articulography (EMA). Tongue-tip and tongue-back movement of one apraxic patient was recorded using the AG-200 EMA system during word-initial consonant productions in one-, two-, and three-syllable words. Significantly deviant articulatory parameters were recorded for each of the target consonants in one-, two-, and three-syllable words. Word length effects were most evident during the release phase of target consonant productions. The results are discussed with respect to theories of speech motor control as they relate to AOS.
ERIC Educational Resources Information Center
Woynaroski, Tiffany; Watson, Linda; Gardner, Elizabeth; Newsom, Cassandra R.; Keceli-Kaysili, Bahar; Yoder, Paul J.
2016-01-01
Diversity of key consonants used in communication (DKCC) is a value-added predictor of expressive language growth in initially preverbal children with autism spectrum disorder (ASD). Studying the predictors of DKCC growth in young children with ASD might inform treatment of this under-studied aspect of prelinguistic development. Eighty-seven…
ERIC Educational Resources Information Center
van Severen, Lieve; Gillis, Joris J. M.; Molemans, Inge; van den Berg, Renate; De Maeyer, Sven; Gillis, Steven
2013-01-01
The impact of input frequency (IF) and functional load (FL) of segments in the ambient language on the acquisition order of word-initial consonants is investigated. Several definitions of IF/FL are compared and implemented. The impact of IF/FL and their components are computed using a longitudinal corpus of interactions between thirty…
Bernhardt, B May; Hanson, R; Perez, D; Ávila, C; Lleó, C; Stemberger, J P; Carballo, G; Mendoza, E; Fresneda, D; Chávez-Peón, M
2015-01-01
Research on children's word structure development is limited. Yet, phonological intervention aims to accelerate the acquisition of both speech-sounds and word structure, such as word length, stress or shapes in CV sequences. Until normative studies and meta-analyses provide in-depth information on this topic, smaller investigations can provide initial benchmarks for clinical purposes. To provide preliminary reference data for word structure development in a variety of Spanish with highly restricted coda use: Granada Spanish (similar to many Hispano-American varieties). To be clinically applicable, such data would need to show differences by age, developmental typicality and word structure complexity. Thus, older typically developing (TD) children were expected to show higher accuracy than younger children and those with protracted phonological development (PPD). Complex or phonologically marked forms (e.g. multisyllabic words, clusters) were expected to be late developing. Participants were 59 children aged 3-5 years in Granada, Spain: 30 TD children, and 29 with PPD and no additional language impairments. Single words were digitally recorded by a native Spanish speaker using a 103-word list and transcribed by native Spanish speakers, with confirmation by a second transcriber team and acoustic analysis. The program Phon 1.5 provided quantitative data. In accordance with expectations, the TD and older age groups had better-established word structures than the younger children and those with PPD. Complexity was also relevant: more structural mismatches occurred in multisyllabic words, initial unstressed syllables and clusters. Heterosyllabic consonant sequences were more accurate than syllable-initial sequences. The most common structural mismatch pattern overall was consonant deletion, with syllable deletion most common in 3-year-olds and children with PPD. 
The current study provides preliminary reference data for word structure development in a Spanish variety with restricted coda use, both by age and types of word structures. Between ages 3 and 5 years, global measures (whole word match, word shape match) distinguished children with typical versus protracted phonological development. By age 4, children with typical development showed near-mastery of word structures, whereas 4- and 5-year-olds with PPD continued to show syllable deletion and cluster reduction, especially in multisyllabic words. The results underline the relevance of multisyllabic words and words with clusters in Spanish phonological assessment and the utility of word structure data for identification of protracted phonological development. © 2014 Royal College of Speech and Language Therapists.
Spencer, Caroline; Weber-Fox, Christine
2014-09-01
In preschool children, we investigated whether expressive and receptive language, phonological, articulatory, and/or verbal working memory proficiencies aid in predicting eventual recovery or persistence of stuttering. Participants were 65 children: 25 who do not stutter (CWNS) and 40 who stutter (CWS), recruited at ages 3;9-5;8. At initial testing, participants were administered the Test of Auditory Comprehension of Language, 3rd edition (TACL-3), the Structured Photographic Expressive Language Test, 3rd edition (SPELT-3), the Bankson-Bernthal Test of Phonology-Consonant Inventory subtest (BBTOP-CI), the Nonword Repetition Test (NRT; Dollaghan & Campbell, 1998), and the Test of Auditory Perceptual Skills-Revised (TAPS-R) auditory number memory and auditory word memory subtests. Stuttering behaviors of CWS were assessed in subsequent years, forming groups whose stuttering eventually persisted (CWS-Per; n=19) or recovered (CWS-Rec; n=21). Proficiency scores in morphosyntactic skills, consonant production, verbal working memory for known words, and phonological working memory and speech production for novel nonwords obtained at the initial testing were analyzed for each group. CWS-Per were less proficient than CWNS and CWS-Rec in measures of consonant production (BBTOP-CI) and repetition of novel phonological sequences (NRT). In contrast, receptive language, expressive language, and verbal working memory abilities did not distinguish CWS-Rec from CWS-Per. Binary logistic regression analysis indicated that preschool BBTOP-CI scores and overall NRT proficiency significantly predicted future recovery status. Results suggest that phonological and speech articulation abilities in the preschool years should be considered with other predictive factors as part of a comprehensive risk assessment for the development of chronic stuttering.
At the end of this activity the reader will be able to: (1) describe the current status of nonlinguistic and linguistic predictors for recovery and persistence of stuttering; (2) summarize current evidence regarding the potential value of consonant cluster articulation and nonword repetition abilities in helping to predict stuttering outcome in preschool children; (3) discuss the current findings in relation to potential implications for theories of developmental stuttering; (4) discuss the current findings in relation to potential considerations for the evaluation and treatment of developmental stuttering. Copyright © 2014 Elsevier Inc. All rights reserved.
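Binary logistic regression, as used in the study above to predict recovery status from BBTOP-CI and NRT scores, models the probability of a binary outcome as a logistic function of a weighted sum of predictors. A minimal sketch, with the caveat that the coefficient values (`b0`, `b1`, `b2`) and score ranges below are hypothetical illustrations, not the study's fitted model:

```python
import math

def logistic(z):
    """Standard logistic (sigmoid) function, mapping any real z to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def recovery_probability(bbtop_ci, nrt, b0=-8.0, b1=0.05, b2=0.04):
    """P(recovery) as a logistic function of two predictor scores.

    All coefficients here are made-up placeholders for illustration;
    a real model would estimate them from data (e.g., by maximum likelihood).
    """
    return logistic(b0 + b1 * bbtop_ci + b2 * nrt)

# A child with higher consonant-production (BBTOP-CI) and nonword-repetition
# (NRT) scores receives a higher predicted probability of recovery.
print(recovery_probability(100, 90) > recovery_probability(60, 50))
```

The direction of the effect (higher phonological/articulatory scores, higher predicted recovery probability) is what the abstract reports; everything quantitative in the sketch is invented.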
Influence of Initial and Final Consonants on Vowel Duration in CVC Syllables.
ERIC Educational Resources Information Center
Naeser, Margaret A.
This study investigates the influence of initial and final consonants /p, b, s, z/ on the duration of four vowels /I, i, u, ae/ in 64 CVC syllables uttered by eight speakers of English from the same dialect area. The CVC stimuli were presented to the subjects in a frame sentence from a master tape. Subjects repeated each sentence immediately after…
Gagnon, Bernadine; Miozzo, Michele
2017-01-01
Purpose: This study aimed to test whether an approach to distinguishing errors arising in phonological processing from those arising in motor planning also predicts the extent to which repetition-based training can lead to improved production of difficult sound sequences. Method: Four individuals with acquired speech production impairment who produced consonant cluster errors involving deletion were examined using a repetition task. We compared the acoustic details of productions with deletion errors in target consonant clusters to singleton consonants. Changes in accuracy over the course of the study were also compared. Results: Two individuals produced deletion errors consistent with a phonological locus of the errors, and 2 individuals produced errors consistent with a motoric locus of the errors. The 2 individuals who made phonologically driven errors showed no change in performance on a repetition training task, whereas the 2 individuals with motoric errors improved in their production of both trained and untrained items. Conclusions: The results extend previous findings about a metric for identifying the source of sound production errors in individuals with both apraxia of speech and aphasia. In particular, this work may provide a tool for identifying predominant error types in individuals with complex deficits. PMID:28655044
ERIC Educational Resources Information Center
Becker, Frank; Reinvang, Ivar
2007-01-01
This study used the event-related brain potential mismatch negativity (MMN) to investigate preconscious discrimination of harmonically rich tones (differing in duration) and consonant-vowel syllables (differing in the initial consonant) in aphasia. Eighteen Norwegian aphasic patients, examined on average 3 months after brain injury, were compared…
Frequency, Gradience, and Variation in Consonant Insertion
ERIC Educational Resources Information Center
An, Young-ran
2010-01-01
This dissertation addresses the extent to which linguistic behavior can be described in terms of the projection of patterns from existing lexical items, through an investigation of Korean reduplication. Korean has a productive pattern of reduplication in which a consonant is inserted in a vowel-initial base, illustrated by forms such as "alok"--"t…
The Labial-Coronal Effect Revisited: Japanese Adults Say Pata, but Hear Tapa
ERIC Educational Resources Information Center
Tsuji, Sho; Gomez, Nayeli Gonzalez; Medina, Victoria; Nazzi, Thierry; Mazuka, Reiko
2012-01-01
The labial-coronal effect was originally described as a bias to initiate a word with a labial consonant-vowel-coronal consonant (LC) sequence. This bias has been explained by constraints on the human speech production system, and its perceptual correlates have motivated the suggestion of a perception-production link. However, previous…
ERIC Educational Resources Information Center
McCaffrey Morrison, Helen
2008-01-01
Locus equations (LEs) were derived from consonant-vowel-consonant (CVC) syllables produced by four speakers with profound hearing loss. Group data indicated that LE functions obtained for the separate CVC productions initiated by /b/, /d/, and /g/ were less well-separated in acoustic space than those obtained from speakers with normal hearing. A…
Green, K P; Gerdeman, A
1995-12-01
Two experiments examined the impact of a discrepancy in vowel quality between the auditory and visual modalities on the perception of a syllable-initial consonant. One experiment examined the effect of such a discrepancy on the McGurk effect by cross-dubbing auditory /bi/ tokens onto visual /ga/ articulations (and vice versa). A discrepancy in vowel category significantly reduced the magnitude of the McGurk effect and changed the pattern of responses. A 2nd experiment investigated the effect of such a discrepancy on the speeded classification of the initial consonant. Mean reaction times to classify the tokens increased when the vowel information was discrepant between the 2 modalities but not when the vowel information was consistent. These experiments indicate that the perceptual system is sensitive to cross-modal discrepancies in the coarticulatory information between a consonant and its following vowel during phonetic perception.
English speech acquisition in 3- to 5-year-old children learning Russian and English.
Gildersleeve-Neumann, Christina E; Wright, Kira L
2010-10-01
English speech acquisition in Russian-English (RE) bilingual children was investigated, exploring the effects of Russian phonetic and phonological properties on English single-word productions. Russian has more complex consonants and clusters and a smaller vowel inventory than English. One hundred thirty-seven single-word samples were phonetically transcribed from 14 RE and 28 English-only (E) children, ages 3;3 (years;months) to 5;7. Language and age differences were compared descriptively for phonetic inventories. Multivariate analyses compared phoneme accuracy and error rates between the two language groups. RE children produced Russian-influenced phones in English, including palatalized consonants and trills, and demonstrated significantly higher rates of trill substitution, final devoicing, and vowel errors than E children, suggesting Russian language effects on English. RE and E children did not differ in their overall production complexity, with similar final consonant deletion and cluster reduction error rates, similar phonetic inventories by age, and similar levels of phonetic complexity. Both older language groups were more accurate than the younger language groups. We observed effects of Russian on English speech acquisition; however, there were similarities between the RE and E children that have not been reported in previous studies of speech acquisition in bilingual children. These findings underscore the importance of knowing the phonological properties of both languages of a bilingual child in assessment.
Phonetic difficulty and stuttering in English
Howell, Peter; Au-Yeung, James; Yaruss, Scott; Eldridge, Kevin
2007-01-01
Previous work has shown that phonetic difficulty affects older, but not younger, speakers who stutter and that older speakers experience more difficulty on content words than function words. The relationship between stuttering rate and a recently developed index of phonetic complexity (IPC; Jakielski, 1998) was examined in this study separately for function and content words for speakers in 6-11, 11+ to 18, and 18+ age groups. The hypothesis that stuttering rate on the content words of older speakers, but not younger speakers, would be related to the IPC score was supported. It is argued that the similarity between results using the IPC scores with a previous analysis that looked at late emerging consonants, consonant strings and multiple syllables (also conducted on function and content words separately) validates the former instrument. In further analyses, the factors that are most likely to lead to stuttering in English and their order of importance were established. The order found was consonant by manner, consonant by place, word length and contiguous consonant clusters. As the effects of phonetic difficulty are evident in the teenage years and adulthood, at least some of the factors may have an acquired influence on stuttering (rather than an innate universal basis, as the theory behind Jakielski's work suggests). This may be established in future work by doing cross-linguistic comparisons to see which factors operate universally. Disfluency on function words in early childhood appears to be responsive to factors other than phonetic complexity. PMID:17342878
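An IPC-style measure assigns a word points for each complexity factor it contains. As a toy sketch only: the factor set below is taken from the abstract (consonant manner, consonant place, word length, contiguous consonants), but the point values and phoneme classes are simplified placeholders, not Jakielski's actual scoring scheme:

```python
# Hypothetical, simplified complexity classes (NOT Jakielski's real categories):
LATE_MANNER = set("szfv")   # treat these fricatives as complex manners
DORSAL_PLACE = set("kg")    # treat dorsals as complex places
VOWELS = set("aeiou")

def ipc_score(word):
    """Toy phonetic-complexity score over orthographic letters."""
    score = 0
    consonants = [ch for ch in word if ch not in VOWELS]
    score += sum(1 for c in consonants if c in LATE_MANNER)   # manner factor
    score += sum(1 for c in consonants if c in DORSAL_PLACE)  # place factor
    syllables = sum(1 for ch in word if ch in VOWELS)
    if syllables >= 3:                                        # word-length factor
        score += 1
    # Contiguous-consonant factor: each adjacent consonant pair adds a point.
    score += sum(1 for a, b in zip(word, word[1:])
                 if a not in VOWELS and b not in VOWELS)
    return score

print(ipc_score("cat"), ipc_score("basketball"))
```

Under such a scheme, longer words with clusters and complex consonants accumulate higher scores, which is the property the stuttering-rate analysis above relies on; a real implementation would of course operate on phonemic transcriptions rather than letters.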
Speech characteristics in a Ugandan child with a rare paramedian craniofacial cleft: a case report.
Van Lierde, K M; Bettens, K; Luyten, A; De Ley, S; Tungotyo, M; Balumukad, D; Galiwango, G; Bauters, W; Vermeersch, H; Hodges, A
2013-03-01
The purpose of this study is to describe the speech characteristics in an English-speaking Ugandan boy of 4.5 years who has a rare paramedian craniofacial cleft (unilateral lip, alveolar, palatal, nasal and maxillary cleft, and associated hypertelorism). Closure of the lip together with the closure of the hard and soft palate (one-stage palatal closure) was performed at the age of 5 months. Objective as well as subjective speech assessment techniques were used. The speech samples were perceptually judged for articulation, intelligibility and nasality. The Nasometer was used for the objective measurement of the nasalance values. The most striking communication problems in this child with the rare craniofacial cleft are an incomplete phonetic inventory, severely impaired speech intelligibility with the presence of very severe hypernasality, mild nasal emission, phonetic disorders (omission of several consonants, decreased intraoral pressure in plosives, insufficient frication of fricatives and the use of a middorsum palatal stop) and phonological disorders (deletion of initial and final consonants and consonant clusters). The increased objective nasalance values are in agreement with the presence of the audible nasality disorders. The results revealed that several phonetic and phonological articulation disorders together with decreased speech intelligibility and resonance disorders are present in the child with a rare craniofacial cleft. To what extent secondary surgery for velopharyngeal insufficiency, combined with speech therapy, will improve speech intelligibility, articulation and resonance characteristics is a subject for further research. The results of such analyses may ultimately serve as a starting point for specific surgical and logopedic treatment that addresses the specific needs of children with rare facial clefts. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Bidelman, Gavin M.; Heinz, Michael G.
2011-01-01
Human listeners prefer consonant over dissonant musical intervals, and the perceived contrast between these classes is reduced with cochlear hearing loss. Population-level activity of normal and impaired model auditory-nerve (AN) fibers was examined to determine (1) if peripheral auditory neurons exhibit correlates of consonance and dissonance and (2) if the reduced perceptual difference between these qualities observed for hearing-impaired listeners can be explained by impaired AN responses. In addition, acoustical correlates of consonance-dissonance were also explored, including periodicity and roughness. Among the chromatic pitch combinations of music, consonant intervals/chords yielded more robust neural pitch-salience magnitudes (determined by harmonicity/periodicity) than dissonant intervals/chords. In addition, AN pitch-salience magnitudes correctly predicted the ordering of hierarchical pitch and chordal sonorities described by Western music theory. Cochlear hearing impairment compressed pitch salience estimates between consonant and dissonant pitch relationships. The reduction in contrast of neural responses following cochlear hearing loss may explain the inability of hearing-impaired listeners to distinguish musical qualia as clearly as normal-hearing individuals. Of the neural and acoustic correlates explored, AN pitch salience was the best predictor of behavioral data. Results ultimately show that basic pitch relationships governing music are already present in initial stages of neural processing at the AN level. PMID:21895089
Papers from the Linguistics Laboratory. Working Papers in Linguistics, No. 50.
ERIC Educational Resources Information Center
Ainsworth-Darnell, Kim, Ed.; D'Imperio, Mariapaola, Ed.
Research reports included in this volume of working papers in linguistics are: "Perception of Consonant Clusters and Variable Gap Time" (Mike Cahill); "Near-Merger in Russian Palatalization" (Erin Diehm, Keith Johnson); "Breadth of Focus, Modality, and Prominence Perception in Neapolitan Italian" (Mariapaola…
Phonological and Motor Errors in Individuals with Acquired Sound Production Impairment
ERIC Educational Resources Information Center
Buchwald, Adam; Miozzo, Michele
2012-01-01
Purpose: This study aimed to compare sound production errors arising due to phonological processing impairment with errors arising due to motor speech impairment. Method: Two speakers with similar clinical profiles who produced similar consonant cluster simplification errors were examined using a repetition task. We compared both overall accuracy…
Neural Correlates of Sublexical Processing in Phonological Working Memory
ERIC Educational Resources Information Center
McGettigan, Carolyn; Warren, Jane E.; Eisner, Frank; Marshall, Chloe R.; Shanmugalingam, Pradheep; Scott, Sophie K.
2011-01-01
This study investigated links between working memory and speech processing systems. We used delayed pseudoword repetition in fMRI to investigate the neural correlates of sublexical structure in phonological working memory (pWM). We orthogonally varied the number of syllables and consonant clusters in auditory pseudowords and measured the neural…
Luo, Hao; Ni, Jing-Tian; Li, Zhi-Hao; Li, Xiao-Ou; Zhang, Da-Ren; Zeng, Fan-Gang; Chen, Lin
2006-01-01
In tonal languages such as Mandarin Chinese, a lexical tone carries semantic information and is preferentially processed in the left brain hemisphere of native speakers as revealed by the functional MRI or positron emission tomography studies, which likely measure the temporally aggregated neural events including those at an attentive stage of auditory processing. Here, we demonstrate that early auditory processing of a lexical tone at a preattentive stage is actually lateralized to the right hemisphere. We frequently presented to native Mandarin Chinese speakers a meaningful auditory word with a consonant-vowel structure and infrequently varied either its lexical tone or initial consonant using an odd-ball paradigm to create a contrast resulting in a change in word meaning. The lexical tone contrast evoked a stronger preattentive response, as revealed by whole-head electric recordings of the mismatch negativity, in the right hemisphere than in the left hemisphere, whereas the consonant contrast produced an opposite pattern. Given the distinct acoustic features between a lexical tone and a consonant, this opposite lateralization pattern suggests the dependence of hemisphere dominance mainly on acoustic cues before speech input is mapped into a semantic representation in the processing stream. PMID:17159136
Influence of consonant frequency on Icelandic-speaking children's speech acquisition.
Másdóttir, Thóra; Stokes, Stephanie F
2016-04-01
A developmental hierarchy of phonetic feature complexity has been proposed, suggesting that later emerging sounds have greater articulatory complexity than those learned earlier. The aim of this research was to explore this hierarchy in a relatively unexplored language, Icelandic. Twenty-eight typically-developing Icelandic-speaking children were tested at 2;4 and 3;4 years. Word-initial and word-medial phonemic inventories and a phonemic implicational hierarchy are described. The frequency of occurrence of Icelandic consonants in the speech of 2;4 and 3;4 year old children was, from most to least frequent, n, s, t, p, r, m, l, k, f, ʋ, j, ɵ, h, kʰ, c, [Formula: see text], ɰ, pʰ, tʰ, cʰ, ç, [Formula: see text], [Formula: see text], [Formula: see text]. Consonant frequency was a strong predictor of consonant accuracy at age 2;4 (r(23) = -0.75), but the effect was weaker at age 3;4 (r(23) = -0.51). Acquisition of /c/, /[Formula: see text]/ and /l/ occurred earlier, relative to English, Swedish, Dutch and German. A frequency-bound practice effect on emerging consonants is proposed to account for the early emergence of /c/, /[Formula: see text]/ and /l/ in Icelandic.
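The frequency-accuracy relationship reported above is a Pearson product-moment correlation. As an illustration only (the numbers below are made-up toy data, not the Icelandic dataset), the coefficient can be computed with a short pure-Python function:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy illustration: more frequent consonants tend to have fewer errors,
# giving a negative r, the same direction as the r(23) = -0.75 result above.
freq =   [90, 70, 50, 30, 10]   # hypothetical ambient-language frequencies
errors = [ 8,  2, 25, 15, 40]   # hypothetical error counts
print(round(pearson_r(freq, errors), 3))
```

In practice one would use a library routine (e.g. `scipy.stats.pearsonr`), which also returns a p-value; the hand-rolled version above just makes the formula explicit.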
Neural Representations Used by Brain Regions Underlying Speech Production
ERIC Educational Resources Information Center
Segawa, Jennifer Anne
2013-01-01
Speech utterances are phoneme sequences but may not always be represented as such in the brain. For instance, electropalatography evidence indicates that as speaking rate increases, gestures within syllables are manipulated separately but those within consonant clusters act as one motor unit. Moreover, speech error data suggest that a syllable's…
Testing for OO-Faithfulness in the Acquisition of Consonant Clusters
ERIC Educational Resources Information Center
Tessier, Anne-Michelle
2012-01-01
This article provides experimental evidence for the claim in Hayes (2004) and McCarthy (1998) that language learners are biased to assume that morphological paradigms should be phonologically-uniform--that is, that derived words should retain all the phonological properties of their bases. The evidence comes from an artificial language…
English Speech Acquisition in 3- to 5-Year-Old Children Learning Russian and English
ERIC Educational Resources Information Center
Gildersleeve-Neumann, Christina E.; Wright, Kira L.
2010-01-01
Purpose: English speech acquisition in Russian-English (RE) bilingual children was investigated, exploring the effects of Russian phonetic and phonological properties on English single-word productions. Russian has more complex consonants and clusters and a smaller vowel inventory than English. Method: One hundred thirty-seven single-word samples…
Stimulus Characteristics of Single-Word Tests of Children's Speech Sound Production
ERIC Educational Resources Information Center
Macrae, Toby
2017-01-01
Purpose: This clinical focus article provides readers with a description of the stimulus characteristics of 12 popular tests of speech sound production. Method: Using significance testing and descriptive analyses, stimulus items were compared in terms of the number of opportunities for production of all consonant singletons, clusters, and rhotic…
The Relationship between Speech Impairment, Phonological Awareness and Early Literacy Development
ERIC Educational Resources Information Center
Harris, Judy; Botting, Nicola; Myers, Lucy; Dodd, Barbara
2011-01-01
Although children with speech impairment are at increased risk for impaired literacy, many learn to read and spell without difficulty. Around half the children with speech impairment have delayed acquisition, making errors typical of a normally developing younger child (e.g. reducing consonant clusters so that "spoon" is pronounced as…
The Word Frequency Effect on Second Language Vocabulary Learning
ERIC Educational Resources Information Center
Koirala, Cesar
2015-01-01
This study examines several linguistic factors as possible contributors to perceived word difficulty in second language learners in an experimental setting. The investigated factors include: (1) frequency of word usage in the first language, (2) word length, (3) number of syllables in a word, and (4) number of consonant clusters in a word. Word…
On Sources of the Word Length Effect in Young Readers
ERIC Educational Resources Information Center
Gagl, Benjamin; Hawelka, Stefan; Wimmer, Heinz
2015-01-01
We investigated how letter length, phoneme length, and consonant clusters contribute to the word length effect in 2nd- and 4th-grade children. They read words from three different conditions: In one condition, letter length increased but phoneme length did not due to multiletter graphemes (H"aus"-B"auch"-S"chach"). In…
Phonotactics Constraints and the Spoken Word Recognition of Chinese Words in Speech
ERIC Educational Resources Information Center
Yip, Michael C.
2016-01-01
Two word-spotting experiments were conducted to examine the question of whether native Cantonese listeners are constrained by phonotactics information in spoken word recognition of Chinese words in speech. Because no legal consonant clusters occurred within an individual Chinese word, this kind of categorical phonotactics information of Chinese…
Tremblay, Pascale; Small, Steven L.
2011-01-01
What is the nature of the interface between speech perception and production, where auditory and motor representations converge? One set of explanations suggests that during perception, the motor circuits involved in producing a perceived action are in some way enacting the action without actually causing movement (covert simulation) or sending along the motor information to be used to predict its sensory consequences (i.e., efference copy). Other accounts either reject entirely the involvement of motor representations in perception, or explain their role as being more supportive than integral, and not employing the identical circuits used in production. Using fMRI, we investigated whether there are brain regions that are conjointly active for both speech perception and production, and whether these regions are sensitive to articulatory (syllabic) complexity during both processes, which is predicted by a covert simulation account. A group of healthy young adults (1) observed a female speaker produce a set of familiar words (perception), and (2) observed and then repeated the words (production). There were two types of words, varying in articulatory complexity, as measured by the presence or absence of consonant clusters. The simple words contained no consonant cluster (e.g. “palace”), while the complex words contained one to three consonant clusters (e.g. “planet”). Results indicate that the left ventral premotor cortex (PMv) was significantly active during speech perception and speech production but that activation in this region was scaled to articulatory complexity only during speech production, revealing an incompletely specified efferent motor signal during speech perception. The right planum temporale (PT) was also active during speech perception and speech production, and activation in this region was scaled to articulatory complexity during both production and perception.
These findings are discussed in the context of current theories of speech perception, with particular attention to accounts that include an explanatory role for mirror neurons. PMID:21664275
Nonhomogeneous transfer reveals specificity in speech motor learning.
Rochet-Capellan, Amélie; Richer, Lara; Ostry, David J
2012-03-01
Does motor learning generalize to new situations that are not experienced during training, or is motor learning essentially specific to the training situation? In the present experiments, we use speech production as a model to investigate generalization in motor learning. We tested for generalization from training to transfer utterances by varying the acoustical similarity between these two sets of utterances. During the training phase of the experiment, subjects received auditory feedback that was altered in real time as they repeated a single consonant-vowel-consonant utterance. Different groups of subjects were trained with different consonant-vowel-consonant utterances, which differed from a subsequent transfer utterance in terms of the initial consonant or vowel. During the adaptation phase of the experiment, we observed that subjects in all groups progressively changed their speech output to compensate for the perturbation (altered auditory feedback). After learning, we tested for generalization by having all subjects produce the same single transfer utterance while receiving unaltered auditory feedback. We observed limited transfer of learning, which depended on the acoustical similarity between the training and the transfer utterances. The gradients of generalization observed here are comparable to those observed in limb movement. The present findings are consistent with the conclusion that speech learning remains specific to individual instances of learning.
Yanagida, Saori; Nishizawa, Noriko; Mizoguchi, Kenji; Hatakeyama, Hiromitsu; Fukuda, Satoshi
2015-07-01
Voice onset times (VOTs) for word-initial voiceless consonants in adductor spasmodic dysphonia (ADSD) and abductor spasmodic dysphonia (ABSD) patients were measured to determine (1) which acoustic measures differed from those of the controls and (2) whether acoustic measures were related to the pause or silence between the test word and the preceding word. Forty-eight patients with ADSD and nine patients with ABSD, as well as 20 matched normal controls, read a story in which the word "taiyo" (the sun) was repeated three times, each occurrence differentiated by the position of the word in the sentence. The target of measurement was the VOT for the word-initial voiceless consonant /t/. When the target syllable appeared in a sentence following a comma, or at the beginning of a sentence following a period, the ABSD patients' VOTs were significantly longer than those of the ADSD patients and controls. Abnormal prolongation of the VOTs was related to the pause or silence between the test word and the preceding word. VOTs in spasmodic dysphonia (SD) may vary according to the SD subtype or speaking conditions. VOT measurement was suggested to be a useful method for quantifying voice symptoms in SD. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Davidson, Lisa; Wilson, Colin
2016-01-01
Recent research has shown that speakers are sensitive to non-contrastive phonetic detail present in nonnative speech (e.g. Escudero et al. 2012; Wilson et al. 2014). Difficulties in interpreting and implementing unfamiliar phonetic variation can lead nonnative speakers to modify second language forms by vowel epenthesis and other changes. These…
Influence of Syllable Structure on L2 Auditory Word Learning
ERIC Educational Resources Information Center
Hamada, Megumi; Goya, Hideki
2015-01-01
This study investigated the role of syllable structure in L2 auditory word learning. Based on research on cross-linguistic variation of speech perception and lexical memory, it was hypothesized that Japanese L1 learners of English would learn English words with an open-syllable structure without consonant clusters better than words with a…
ERIC Educational Resources Information Center
Sperbeck, Mieko
2010-01-01
The primary aim of this dissertation was to investigate the relationship between speech perception and speech production difficulties among Japanese second language (L2) learners of English, in their learning complex syllable structures. Japanese L2 learners and American English controls were tested in a categorical ABX discrimination task of…
ERIC Educational Resources Information Center
Pouplier, Marianne; Marin, Stefania; Waltl, Susanne
2014-01-01
Purpose: Phonetic accommodation in speech errors has traditionally been used to identify the processing level at which an error has occurred. Recent studies have challenged the view that noncanonical productions may solely be due to phonetic, not phonological, processing irregularities, as previously assumed. The authors of the present study…
Syllabification of Final Consonant Clusters: A Salient Pronunciation Problem of Kurdish EFL Learners
ERIC Educational Resources Information Center
Keshavarz, Mohammad Hossein
2017-01-01
While there is a plethora of research on pronunciation problems of EFL learners with different L1 backgrounds, published empirical studies on syllabification errors of Iraqi Kurdish EFL learners are scarce. Therefore, to contribute to this line of research, the present study set out to investigate difficulties of this group of learners in the…
Infant Discrimination of a Morphologically Relevant Word-Final Contrast
ERIC Educational Resources Information Center
Fais, Laurel; Kajikawa, Sachiyo; Amano, Shigeaki; Werker, Janet F.
2009-01-01
Six-, 12-, and 18-month-old English-hearing infants were tested on their ability to discriminate nonword forms ending in the final stop consonants /k/ and /t/ from their counterparts with final /s/ added, resulting in final clusters /ks/ and /ts/, in a habituation-dishabituation, looking time paradigm. Infants at all 3 ages demonstrated an ability…
Portuguese Lexical Clusters and CVC Sequences in Speech Perception and Production.
Cunha, Conceição
2015-01-01
This paper investigates similarities between lexical consonant clusters and CVC sequences differing in the presence or absence of a lexical vowel in speech perception and production in two Portuguese varieties. The frequent high vowel deletion in the European variety (EP) and the realization of intervening vocalic elements between lexical clusters in Brazilian Portuguese (BP) may minimize the contrast between lexical clusters and CVC sequences in the two Portuguese varieties. In order to test this hypothesis we present a perception experiment with 72 participants and a physiological analysis of 3-dimensional movement data from 5 EP and 4 BP speakers. The perceptual results confirmed a gradual confusion of lexical clusters and CVC sequences in EP, which corresponded roughly to the gradient consonantal overlap found in production. © 2015 S. Karger AG, Basel.
On the role of perception in shaping phonological assimilation rules.
Hura, S L; Lindblom, B; Diehl, R L
1992-01-01
Assimilation of nasals to the place of articulation of following consonants is a common and natural process among the world's languages. Recent phonological theory attributes this naturalness to the postulated geometry of articulatory features and the notion of spreading (McCarthy, 1988). Others view assimilation as a result of perception (Ohala, 1990), or as perceptually tolerated articulatory simplification (Kohler, 1990). Kohler notes that certain consonant classes (such as nasals and stops) are more likely than other classes (such as fricatives) to undergo place assimilation to a following consonant. To explain this pattern, he proposes that assimilation tends not to occur when the members of a consonant class are relatively distinctive perceptually, such that their articulatory reduction would be particularly salient. This explanation, of course, presupposes that the stops and nasals which undergo place assimilation are less distinctive than fricatives, which tend not to assimilate. We report experimental results that confirm Kohler's perceptual assumption: in the context of a following word-initial stop, fricatives were less confusable than nasals or unreleased stops. We conclude, in agreement with Ohala and Kohler, that perceptual factors are likely to shape phonological assimilation rules.
Lockart, Rebekah; McLeod, Sharynne
2013-08-01
This study investigated speech-language pathology students' ability to identify errors and transcribe typical and atypical speech in Cantonese, a nonnative language. Thirty-three English-speaking speech-language pathology students completed 3 tasks in an experimental within-subjects design. Task 1 (baseline) involved transcribing English words. In Task 2, students transcribed 25 words spoken by a Cantonese adult. An average of 59.1% of consonants were transcribed correctly (72.9% when Cantonese-English transfer patterns were allowed). There was higher accuracy on shared English and Cantonese syllable-initial consonants /m,n,f,s,h,j,w,l/ and syllable-final consonants. In Task 3, students identified consonant errors and transcribed 100 words spoken by Cantonese-speaking children under 4 additive conditions: (1) baseline, (2) +adult model, (3) +information about Cantonese phonology, and (4) all variables (conditions 2 and 3 were counterbalanced). There was a significant improvement in the students' identification and transcription scores for conditions 2, 3, and 4, with a moderate effect size. Increased skill was not based on listeners' proficiency in speaking another language, perceived transcription skill, musicality, or confidence with multilingual clients. Speech-language pathology students, with no exposure to or specific training in Cantonese, have some skills to identify errors and transcribe Cantonese. Provision of a Cantonese adult model and information about Cantonese phonology increased students' accuracy in transcribing Cantonese speech.
ERIC Educational Resources Information Center
Russak, Susie; Saiegh-Haddad, Elinor
2017-01-01
This article examines the effect of phonological context (singleton vs. clustered consonants) on full phoneme segmentation in Hebrew first language (L1) and in English second language (L2) among typically reading adults (TR) and adults with reading disability (RD) (n = 30 per group), using quantitative analysis and a fine-grained analysis of…
Yoon, Ji Hye; Jeong, Yong
2018-01-01
Background and Purpose: Korean-speaking patients with a brain injury may show agraphia that differs from that of English-speaking patients due to the unique features of Hangul syllabic writing. Each grapheme in Hangul must be arranged from left to right and/or top to bottom within a square space to form a syllable, which requires greater visuospatial abilities than when writing the letters constituting an alphabetic writing system. Among the Hangul grapheme positions within a syllable, the position of a vowel is important because it determines the writing direction and the whole configuration in Korean syllabic writing. Due to the visuospatial characteristics of the Hangul vowel, individuals with early-onset Alzheimer's disease (EOAD) may experience differences between the difficulties of writing Hangul vowels and consonants due to prominent visuospatial dysfunctions caused by parietal lesions. Methods: Eighteen patients with EOAD and 18 age- and education-matched healthy adults participated in this study. The participants were requested to listen to and write 30 monosyllabic characters that consisted of an initial consonant, medial vowel, and final consonant with a one-to-one phoneme-to-grapheme correspondence. We measured the writing time for each grapheme, the pause time between writing the initial consonant and the medial vowel (P1), and the pause time between writing the medial vowel and the final consonant (P2). Results: All grapheme writing and pause times were significantly longer in the EOAD group than in the controls. P1 was also significantly longer than P2 in the EOAD group. Conclusions: Patients with EOAD might require a higher judgment ability and longer processing time for determining the visuospatial grapheme position before writing medial vowels. This finding suggests that a longer pause time before writing medial vowels is an early marker of visuospatial dysfunction in patients with EOAD. PMID:29504296
Yoon, Ji Hye; Jeong, Yong; Na, Duk L
2018-04-01
Korean-speaking patients with a brain injury may show agraphia that differs from that of English-speaking patients due to the unique features of Hangul syllabic writing. Each grapheme in Hangul must be arranged from left to right and/or top to bottom within a square space to form a syllable, which requires greater visuospatial abilities than when writing the letters constituting an alphabetic writing system. Among the Hangul grapheme positions within a syllable, the position of a vowel is important because it determines the writing direction and the whole configuration in Korean syllabic writing. Due to the visuospatial characteristics of the Hangul vowel, individuals with early-onset Alzheimer's disease (EOAD) may experience differences between the difficulties of writing Hangul vowels and consonants due to prominent visuospatial dysfunctions caused by parietal lesions. Eighteen patients with EOAD and 18 age- and education-matched healthy adults participated in this study. The participants were requested to listen to and write 30 monosyllabic characters that consisted of an initial consonant, medial vowel, and final consonant with a one-to-one phoneme-to-grapheme correspondence. We measured the writing time for each grapheme, the pause time between writing the initial consonant and the medial vowel (P1), and the pause time between writing the medial vowel and the final consonant (P2). All grapheme writing and pause times were significantly longer in the EOAD group than in the controls. P1 was also significantly longer than P2 in the EOAD group. Patients with EOAD might require a higher judgment ability and longer processing time for determining the visuospatial grapheme position before writing medial vowels. This finding suggests that a longer pause time before writing medial vowels is an early marker of visuospatial dysfunction in patients with EOAD. Copyright © 2018 Korean Neurological Association.
Stress Domain Effects in French Phonology and Phonological Development.
Rose, Yvan; Dos Santos, Christophe
In this paper, we discuss two distinct data sets. The first relates to the so-called allophonic process of closed-syllable laxing in Québec French, which targets final (stressed) vowels even though these vowels are arguably syllabified in open syllables in lexical representations. The second is found in the forms produced by a first language learner of European French, who displays an asymmetry in her production of CVC versus CVCV target (adult) forms. The former display full preservation (with concomitant manner harmony) of both consonants. The latter undergo deletion of the initial syllable if the consonants are not manner-harmonic in the input. We argue that both patterns can be explained through a phonological process of prosodic strengthening targeting the head of the prosodic domain which, in the contexts described above, yields the incorporation of final consonants into the coda of the stressed syllable.
Xia, Jing; Xu, Buye; Pentony, Shareka; Xu, Jingjing; Swaminathan, Jayaganesh
2018-03-01
Many hearing-aid wearers have difficulties understanding speech in reverberant noisy environments. This study evaluated the effects of reverberation and noise on speech recognition in normal-hearing listeners and hearing-impaired listeners wearing hearing aids. Sixteen typical acoustic scenes with different amounts of reverberation and various types of noise maskers were simulated using a loudspeaker array in an anechoic chamber. Results showed that, across all listening conditions, speech intelligibility of aided hearing-impaired listeners was poorer than that of their normal-hearing counterparts. Once corrected for ceiling effects, the differences in the effects of reverberation on speech intelligibility between the two groups were much smaller. This suggests that at least part of the difference in susceptibility to reverberation between normal-hearing and hearing-impaired listeners was due to ceiling effects. Across both groups, a complex interaction between the noise characteristics and reverberation was observed on the speech intelligibility scores. Further fine-grained analyses of the perception of consonants showed that, for both listener groups, final consonants were more susceptible to reverberation than initial consonants. However, differences in the perception of specific consonant features were observed between the groups.
Maternal Vocal Feedback to 9-Month-Old Infant Siblings of Children with ASD
Talbott, Meagan R.; Nelson, Charles A.; Tager-Flusberg, Helen
2016-01-01
Infant siblings of children with autism spectrum disorder display differences in early language and social communication skills beginning as early as the first year of life. While environmental influences on early language development are well documented in other infant populations, they have received relatively little attention within the infant sibling context. In this study, we analyzed home video diaries collected prospectively as part of a longitudinal study of infant siblings. Infant vowel and consonant-vowel vocalizations and maternal language-promoting and non-promoting verbal responses were scored for 30 infant siblings and 30 low risk control infants at 9 months of age. Analyses evaluated whether infant siblings or their mothers exhibited differences from low risk dyads in vocalization frequency or distribution, and whether mothers' responses were associated with other features of the high risk context. Analyses were conducted with respect to both initial risk group and preliminary outcome classification. Overall, we found no differences in infants' consonant-vowel vocalizations, the frequency of overall maternal utterances, or the distribution of mothers' response types. Both groups of infants produced more vowel than consonant-vowel vocalizations, and both groups of mothers responded to consonant-vowel vocalizations with more language-promoting than non-promoting responses. These results indicate that as a group, mothers of high risk infants provide equally high quality linguistic input to their infants in the first year of life and suggest that impoverished maternal linguistic input does not contribute to high risk infants' initial language difficulties. Implications for intervention strategies are also discussed. PMID:26174704
Aided and Unaided Speech Perception by Older Hearing Impaired Listeners
Woods, David L.; Arbogast, Tanya; Doss, Zoe; Younus, Masood; Herron, Timothy J.; Yund, E. William
2015-01-01
The most common complaint of older hearing impaired (OHI) listeners is difficulty understanding speech in the presence of noise. However, tests of consonant-identification and sentence reception threshold (SeRT) provide different perspectives on the magnitude of impairment. Here we quantified speech perception difficulties in 24 OHI listeners in unaided and aided conditions by analyzing (1) consonant-identification thresholds and consonant confusions for 20 onset and 20 coda consonants in consonant-vowel-consonant (CVC) syllables presented at consonant-specific signal-to-noise (SNR) levels, and (2) SeRTs obtained with the Quick Speech in Noise Test (QSIN) and the Hearing in Noise Test (HINT). Compared to older normal hearing (ONH) listeners, nearly all unaided OHI listeners showed abnormal consonant-identification thresholds, abnormal consonant confusions, and reduced psychometric function slopes. Average elevations in consonant-identification thresholds exceeded 35 dB, correlated strongly with impairments in mid-frequency hearing, and were greater for hard-to-identify consonants. Advanced digital hearing aids (HAs) improved average consonant-identification thresholds by more than 17 dB, with significant HA benefit seen in 83% of OHI listeners. HAs partially normalized consonant-identification thresholds, reduced abnormal consonant confusions, and increased the slope of psychometric functions. Unaided OHI listeners showed much smaller elevations in SeRTs (mean 6.9 dB) than in consonant-identification thresholds, and SeRTs in unaided listening conditions correlated strongly (r = 0.91) with identification thresholds of easily identified consonants. HAs produced minimal SeRT benefit (2.0 dB), with only 38% of OHI listeners showing significant improvement. HA benefit on SeRTs was accurately predicted (r = 0.86) by HA benefit on easily identified consonants. Consonant-identification tests can accurately predict sentence processing deficits and HA benefit in OHI listeners. PMID:25730423
ERIC Educational Resources Information Center
Khanbeiki, Ruhollah; Abdolmanafi-Rokni, Seyed Jalal
2015-01-01
The present study was aimed at providing the English teachers across Iran with a good and fruitful method of teaching pronunciation. To this end, sixty female intermediate EFL learners were put in three different but equivalent groups of 20 based on the results of a pronunciation pre-test. One of the groups received explicit instruction including…
Individual Differences in the Acquisition of a Complex L2 Phonology: A Training Study
ERIC Educational Resources Information Center
Hanulikova, Adriana; Dediu, Dan; Fang, Zhou; Basnakova, Jana; Huettig, Falk
2012-01-01
Many learners of a foreign language (L2) struggle to correctly pronounce newly learned speech sounds, yet many others achieve this with apparent ease. Here we explored how a training study of learning complex consonant clusters at the very onset of L2 acquisition can inform us about L2 learning in general and individual differences in particular.…
Conrad, Markus; Carreiras, Manuel; Tamm, Sascha; Jacobs, Arthur M
2009-04-01
Over the last decade, there has been increasing evidence for syllabic processing during visual word recognition. If syllabic effects prove to be independent of orthographic redundancy, this would seriously challenge the ability of current computational models to account for the processing of polysyllabic words. Three experiments are presented to disentangle effects of the frequency of syllabic units and orthographic segments in lexical decision. In Experiment 1 the authors obtained an inhibitory syllable frequency effect that was unaffected by the presence or absence of a bigram trough at the syllable boundary. In Experiments 2 and 3 an inhibitory effect of initial syllable frequency but a facilitative effect of initial bigram frequency emerged when manipulating 1 of the 2 measures and controlling for the other in Spanish words starting with consonant-vowel syllables. The authors conclude that effects of syllable frequency and letter-cluster frequency are independent and arise at different processing levels of visual word recognition. Results are discussed within the framework of an interactive activation model of visual word recognition. (c) 2009 APA, all rights reserved.
Shi, Lu-Feng; Morozova, Natalia
2012-08-01
Word recognition is a basic component in a comprehensive hearing evaluation, but data are lacking for listeners speaking two languages. This study obtained such data for Russian natives in the US and analysed the data using the perceptual assimilation model (PAM) and speech learning model (SLM). Listeners were randomly presented 200 NU-6 words in quiet. Listeners responded verbally and in writing. Performance was scored on words and phonemes (word-initial consonants, vowels, and word-final consonants). Seven normal-hearing, adult monolingual English natives (NM), 16 English-dominant (ED), and 15 Russian-dominant (RD) Russian natives participated. ED and RD listeners differed significantly in their language background. Consistent with the SLM, NM outperformed ED listeners and ED outperformed RD listeners, whether responses were scored on words or phonemes. NM and ED listeners shared similar phoneme error patterns, whereas RD listeners' errors had unique patterns that could be largely understood via the PAM. RD listeners had particular difficulty differentiating vowel contrasts /i-I/, /æ-ε/, and /ɑ-Λ/, word-initial consonant contrasts /p-h/ and /b-f/, and word-final contrasts /f-v/. Both first-language phonology and second-language learning history affect word and phoneme recognition. Current findings may help clinicians differentiate word recognition errors due to language background from hearing pathologies.
Lin, Chi-Yueh; Wang, Hsiao-Chuan
2011-07-01
The voice onset time (VOT) of a stop consonant is the interval between its burst onset and voicing onset. Among a variety of research topics on VOT, one that has been studied for years is how VOTs can be efficiently measured. Manual annotation is feasible, but it becomes a time-consuming task when the corpus size is large. This paper proposes an automatic VOT estimation method based on an onset detection algorithm. First, forced alignment is applied to identify the locations of stop consonants. Then a random-forest-based onset detector searches each stop segment for its burst and voicing onsets to estimate a VOT. The proposed onset detection can detect the onsets in an efficient and accurate manner with only a small amount of training data. The evaluation data extracted from the TIMIT corpus were 2344 words with a word-initial stop. The experimental results showed that 83.4% of the estimations deviate by less than 10 ms from their manually labeled values, and 96.5% of the estimations deviate by less than 20 ms. Some factors that influence the proposed estimation method, such as place of articulation, voicing of a stop consonant, and quality of the succeeding vowel, were also investigated. © 2011 Acoustical Society of America
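Once the burst and voicing onsets are located, the estimation and evaluation described above reduce to simple arithmetic: subtract the burst onset from the voicing onset, then score the estimates against manual labels at fixed tolerances. A minimal sketch of that scoring step, with invented onset times and reference labels (the forced alignment and random-forest onset detector themselves are not reproduced here):

```python
# Hedged sketch: toy VOT computation and deviation scoring.
# All onset times and manual labels below are invented for illustration.

def estimate_vot_ms(burst_onset_s, voicing_onset_s):
    """VOT is the interval between burst onset and voicing onset, in ms."""
    return (voicing_onset_s - burst_onset_s) * 1000.0

def deviation_stats(estimated_ms, manual_ms, tolerances=(10.0, 20.0)):
    """Fraction of estimates deviating by no more than each tolerance (ms)."""
    pairs = list(zip(estimated_ms, manual_ms))
    return {tol: sum(abs(e - m) <= tol for e, m in pairs) / len(pairs)
            for tol in tolerances}

# Three hypothetical word-initial stops (onset times in seconds):
est = [estimate_vot_ms(0.120, 0.155),   # ~35 ms
       estimate_vot_ms(0.300, 0.312),   # ~12 ms
       estimate_vot_ms(0.510, 0.601)]   # ~91 ms
ref = [30.0, 15.0, 60.0]                # hypothetical manual labels (ms)
stats = deviation_stats(est, ref)
```

In the paper's evaluation, this fraction was 0.834 at the 10 ms tolerance and 0.965 at 20 ms over 2344 TIMIT words.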
Controller design and consonantal contrast coding using a multi-finger tactual display
Israr, Ali; Meckl, Peter H.; Reed, Charlotte M.; Tan, Hong Z.
2009-01-01
This paper presents the design and evaluation of a new controller for a multi-finger tactual display in speech communication. A two-degree-of-freedom controller consisting of a feedback controller and a prefilter and its application in a consonant contrasting experiment are presented. The feedback controller provides stable, fast, and robust response of the fingerpad interface and the prefilter shapes the frequency-response of the closed-loop system to match with the human detection-threshold function. The controller is subsequently used in a speech communication system that extracts spectral features from recorded speech signals and presents them as vibrational-motional waveforms to three digits on a receiver’s left hand. Performance from a consonantal contrast test suggests that participants are able to identify tactual cues necessary for discriminating consonants in the initial position of consonant-vowel-consonant (CVC) segments. The average sensitivity indices for contrasting voicing, place, and manner features are 3.5, 2.7, and 3.4, respectively. The results show that the consonantal features can be successfully transmitted by utilizing a broad range of the kinesthetic-cutaneous sensory system. The present study also demonstrates the validity of designing controllers that take into account not only the electromechanical properties of the hardware, but the sensory characteristics of the human user. PMID:19507975
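The sensitivity indices quoted above are presumably d' values from signal detection theory; the standard computation is d' = z(hit rate) − z(false-alarm rate), with z the inverse standard-normal CDF. A minimal sketch, using invented hit and false-alarm rates rather than data from the study:

```python
# Hedged sketch: computing a sensitivity index (d') from invented rates.
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """d' = z(H) - z(FA), with z the inverse standard-normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

dp = d_prime(0.95, 0.05)  # symmetric example, d' of about 3.29
```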
EMA analysis of tongue function in children with dysarthria following traumatic brain injury.
Murdoch, Bruce E; Goozée, Justine V
2003-01-01
This study investigated the speed and accuracy of tongue movements exhibited by a sample of children with dysarthria following severe traumatic brain injury (TBI) during speech, using electromagnetic articulography (EMA). Four children, aged between 12.75 and 17.17 years, with dysarthria following TBI were assessed using the AG-100 electromagnetic articulography system (Carstens Medizinelektronik). The movement trajectories of receiver coils affixed to each child's tongue were examined during consonant productions, together with a range of quantitative kinematic parameters. The children's results were individually compared against the mean values obtained by a group of eight control children (mean age of 14.67 years, SD 1.60). All four TBI children were perceived to exhibit reduced rates of speech and increased word durations. Objective EMA analysis revealed that two of the TBI children exhibited significantly longer consonant durations compared to the control group, resulting from different underlying mechanisms relating to speed generation capabilities and distances travelled. The other two TBI children did not exhibit increased initial consonant movement durations, suggesting that the vowels and/or final consonants may have been contributing to the increased word durations. The finding of different underlying articulatory kinematic profiles has important implications for the treatment of speech rate disturbances in children with dysarthria following TBI.
ERIC Educational Resources Information Center
Forts, Ann M.; Luckasson, Ruth
2011-01-01
Reading and literacy are important not only for instrumental reasons such as knowing exit signs and recognizing initial consonants but also have tremendous human functioning implications in areas such as initiating and sustaining friendships, communicating care and affection, and enhancing work, leisure, and play. Many people with intellectual…
ERIC Educational Resources Information Center
McDaniel, Jena; Yoder, Paul; Watson, Linda R.
2017-01-01
We examined direct and indirect paths involving receptive vocabulary and diversity of key consonants used in communication (DKCC) to improve understanding of why previously identified value-added predictors are associated with later expressive vocabulary for initially preverbal children with autism spectrum disorder (ASD; n = 87). Intentional…
Discrimination of brief speech sounds is impaired in rats with auditory cortex lesions
Porter, Benjamin A.; Rosenthal, Tara R.; Ranasinghe, Kamalini G.; Kilgard, Michael P.
2011-01-01
Auditory cortex (AC) lesions impair complex sound discrimination. However, a recent study demonstrated spared performance on an acoustic startle response test of speech discrimination following AC lesions (Floody et al., 2010). The current study reports the effects of AC lesions on two operant speech discrimination tasks. AC lesions caused a modest and quickly recovered impairment in the ability of rats to discriminate consonant-vowel-consonant speech sounds. This result seems to suggest that AC does not play a role in speech discrimination. However, the speech sounds used in both studies differed in many acoustic dimensions, and an adaptive change in discrimination strategy could allow the rats to use an acoustic difference that does not require an intact AC to discriminate. Based on our earlier observation that the first 40 ms of the spatiotemporal activity patterns elicited by speech sounds best correlate with behavioral discriminations of these sounds (Engineer et al., 2008), we predicted that eliminating additional cues by truncating speech sounds to the first 40 ms would render the stimuli indistinguishable to a rat with AC lesions. Although the initial discrimination of truncated sounds took longer to learn, the final performance paralleled that of rats using full-length consonant-vowel-consonant sounds. After 20 days of testing, half of the rats using speech onsets received bilateral AC lesions. Lesions severely impaired speech-onset discrimination for at least one month post-lesion. These results support the hypothesis that auditory cortex is required to accurately discriminate the subtle differences between similar consonant and vowel sounds. PMID:21167211
Wagner, Monica; Shafer, Valerie L.; Martin, Brett; Steinschneider, Mitchell
2013-01-01
The effect of exposure to the contextual features of the /pt/ cluster was investigated in native-English and native-Polish listeners using behavioral and event-related potential (ERP) methodology. Both groups experience the /pt/ cluster in their languages, but only the Polish group experiences the cluster in the context of word onset examined in the current experiment. The /st/ cluster was used as an experimental control. ERPs were recorded while participants identified the number of syllables in the second word of nonsense word pairs. The results found that only Polish listeners accurately perceived the /pt/ cluster and perception was reflected within a late positive component of the ERP waveform. Furthermore, evidence of discrimination of /pt/ and /pǝt/ onsets in the neural signal was found even for non-native listeners who could not perceive the difference. These findings suggest that exposure to phoneme sequences in highly specific contexts may be necessary for accurate perception. PMID:22867752
Speech Perception in Older Hearing Impaired Listeners: Benefits of Perceptual Training
Woods, David L.; Doss, Zoe; Herron, Timothy J.; Arbogast, Tanya; Younus, Masood; Ettlinger, Marc; Yund, E. William
2015-01-01
Hearing aids (HAs) only partially restore the ability of older hearing impaired (OHI) listeners to understand speech in noise, due in large part to persistent deficits in consonant identification. Here, we investigated whether adaptive perceptual training would improve consonant-identification in noise in sixteen aided OHI listeners who underwent 40 hours of computer-based training in their homes. Listeners identified 20 onset and 20 coda consonants in 9,600 consonant-vowel-consonant (CVC) syllables containing different vowels (/ɑ/, /i/, or /u/) and spoken by four different talkers. Consonants were presented at three consonant-specific signal-to-noise ratios (SNRs) spanning a 12 dB range. Noise levels were adjusted over training sessions based on d’ measures. Listeners were tested before and after training to measure (1) changes in consonant-identification thresholds using syllables spoken by familiar and unfamiliar talkers, and (2) sentence reception thresholds (SeRTs) using two different sentence tests. Consonant-identification thresholds improved gradually during training. Laboratory tests of d’ thresholds showed an average improvement of 9.1 dB, with 94% of listeners showing statistically significant training benefit. Training normalized consonant confusions and improved the thresholds of some consonants into the normal range. Benefits were equivalent for onset and coda consonants, syllables containing different vowels, and syllables presented at different SNRs. Greater training benefits were found for hard-to-identify consonants and for consonants spoken by familiar than unfamiliar talkers. SeRTs, tested with simple sentences, showed less elevation than consonant-identification thresholds prior to training and failed to show significant training benefit, although SeRT improvements did correlate with improvements in consonant thresholds. 
We argue that the lack of SeRT improvement reflects the dominant role of top-down semantic processing in processing simple sentences and that greater transfer of benefit would be evident in the comprehension of more unpredictable speech material. PMID:25730330
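The noise-level adaptation described in this abstract was driven by d′ (d-prime), the standard signal-detection sensitivity index. As a hedged illustration (textbook arithmetic, not the authors' code; the hit and false-alarm rates below are hypothetical), d′ is the difference between the normal-deviate transforms of the hit rate and the false-alarm rate:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float, n_trials: int = 100) -> float:
    """Signal-detection sensitivity index d' = z(hits) - z(false alarms).

    Rates of exactly 0 or 1 are nudged inward by the common 1/(2N)
    correction so the inverse normal CDF stays finite.
    """
    def clamp(p: float) -> float:
        lo = 1 / (2 * n_trials)
        return min(max(p, lo), 1 - lo)

    z = NormalDist().inv_cdf
    return z(clamp(hit_rate)) - z(clamp(fa_rate))

# A listener who identifies 84% of targets but false-alarms on 16% of
# foils has d' close to 2 (z(0.84) is about +0.99, z(0.16) about -0.99).
print(round(d_prime(0.84, 0.16), 2))
```

An adaptive procedure of the kind the abstract sketches would raise the noise level when d′ exceeds a criterion and lower it otherwise; the study's exact schedule is not specified here.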
Multiband product rule and consonant identification.
Li, Feipeng; Allen, Jont B
2009-07-01
The multiband product rule, also known as band independence, is a basic assumption of the articulation index and its extension, the speech intelligibility index. Previously, Fletcher showed its validity for a balanced mix of 20% consonant-vowel (CV), 20% vowel-consonant (VC), and 60% consonant-vowel-consonant (CVC) sounds. This study repeats Miller and Nicely's version of the hi-/lo-pass experiment with minor changes to study band independence for the 16 Miller-Nicely consonants. The cut-off frequencies are chosen such that the basilar membrane is evenly divided into 12 segments from 250 to 8000 Hz, with the high-pass and low-pass filters sharing the same six cut-off frequencies in the middle. Results show that the multiband product rule is statistically valid for consonants on average. It also applies to subgroups of consonants, such as stops and fricatives, which are characterized by a flat distribution of speech cues along the frequency axis. It fails for individual consonants.
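The product rule under test reduces to one line of arithmetic: if frequency bands contribute independently to recognition, the per-band error probabilities multiply. A minimal sketch of the rule itself (not the study's analysis code; the error values are hypothetical):

```python
import math

def predict_wideband_error(band_errors):
    """Multiband product rule (band independence): each frequency band is
    assumed to give an independent chance of recognizing the consonant,
    so the wideband error probability is the product of per-band errors."""
    return math.prod(band_errors)

# Complementary hi-/lo-pass example: 20% error below the cut-off
# frequency and 30% error above it predict about 6% wideband error.
print(f"{predict_wideband_error([0.2, 0.3]):.2f}")
```

Testing the rule then amounts to comparing this prediction against the measured wideband error, which the abstract reports works on average and for subgroups but not for individual consonants.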
Speech outcomes in Cantonese patients after glossectomy.
Wong, Ripley Kit; Poon, Esther Sok-Man; Woo, Cynthia Yuen-Man; Chan, Sabina Ching-Shun; Wong, Elsa Siu-Ping; Chu, Ada Wai-Sze
2007-08-01
We sought to determine the major factors affecting speech production of Cantonese-speaking glossectomized patients. Error pattern was analyzed. Forty-one Cantonese-speaking subjects who had undergone glossectomy ≥ 6 months previously were recruited. Speech production evaluation included (1) phonetic error analysis in nonsense syllables; (2) speech intelligibility in sentences evaluated by naive listeners; (3) overall speech intelligibility in conversation evaluated by experienced speech therapists. Patients receiving adjuvant radiotherapy had significantly poorer segmental and connected speech production. Total or subtotal glossectomy also resulted in poor speech outcomes. Patients having free flap reconstruction showed the best speech outcomes. Patients without lymph node metastasis had significantly better speech scores when compared with patients with lymph node metastasis. Initial consonant production had the worst scores, while vowel production was the least affected. Speech outcomes of Cantonese-speaking glossectomized patients depended on the severity of the disease. Initial consonants had the greatest effect on speech intelligibility.
Phoneme Error Pattern by Heritage Speakers of Spanish on an English Word Recognition Test.
Shi, Lu-Feng
2017-04-01
Heritage speakers acquire their native language from home use in their early childhood. As the native language is typically a minority language in the society, these individuals receive their formal education in the majority language and eventually develop greater competency with the majority than their native language. To date, there have not been specific research attempts to understand word recognition by heritage speakers. It is not clear if and to what degree we may infer from evidence based on bilingual listeners in general. This preliminary study investigated how heritage speakers of Spanish perform on an English word recognition test and analyzed their phoneme errors. A prospective, cross-sectional, observational design was employed. Twelve normal-hearing adult Spanish heritage speakers (four men, eight women, 20-38 yr old) participated in the study. Their language background was obtained through the Language Experience and Proficiency Questionnaire. Nine English monolingual listeners (three men, six women, 20-41 yr old) were also included for comparison purposes. Listeners were presented with 200 Northwestern University Auditory Test No. 6 words in quiet. They repeated each word orally and in writing. Their responses were scored by word, word-initial consonant, vowel, and word-final consonant. Performance was compared between groups with Student's t test or analysis of variance. Group-specific error patterns were primarily descriptive, but intergroup comparisons were made using 95% or 99% confidence intervals for proportional data. The two groups of listeners yielded comparable scores when their responses were examined by word, vowel, and final consonant. However, heritage speakers of Spanish misidentified significantly more word-initial consonants and had significantly more difficulty with initial /p, b, h/ than their monolingual peers. 
The two groups yielded similar patterns for vowel and word-final consonants, but heritage speakers made significantly fewer errors with /e/ and more errors with word-final /p, k/. Data reported in the present study lead to a twofold conclusion. On the one hand, normal-hearing heritage speakers of Spanish may misidentify English phonemes in patterns different from those of English monolingual listeners. Not all phoneme errors can be readily understood by comparing Spanish and English phonology, suggesting that Spanish heritage speakers differ in performance from other Spanish-English bilingual listeners. On the other hand, the absolute number of errors and the error pattern of most phonemes were comparable between English monolingual listeners and Spanish heritage speakers, suggesting that audiologists may assess word recognition in quiet in the same way for these two groups of listeners, if diagnosis is based on words, not phonemes. American Academy of Audiology
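The intergroup comparisons above rest on 95% or 99% confidence intervals for proportional data. One common choice for such intervals is the Wilson score interval, sketched below (an assumption for illustration; the abstract does not name the interval formula it used, and the counts are hypothetical):

```python
from math import sqrt
from statistics import NormalDist

def wilson_ci(successes: int, n: int, confidence: float = 0.95):
    """Wilson score confidence interval for a binomial proportion."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# Hypothetical: 30 word-initial consonant errors out of 200 test words.
lo, hi = wilson_ci(30, 200)
print(f"95% CI for the error proportion: {lo:.3f}-{hi:.3f}")
```

Non-overlapping intervals for the two listener groups would then support the kind of significant group difference the abstract reports.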
ERIC Educational Resources Information Center
Friedrich, Claudia K.; Lahiri, Aditi; Eulitz, Carsten
2008-01-01
How does the mental lexicon cope with phonetic variants in recognition of spoken words? Using a lexical decision task with and without fragment priming, the authors compared the processing of German words and pseudowords that differed only in the place of articulation of the initial consonant (place). Across both experiments, event-related brain…
Bouchon, Camillia; Floccia, Caroline; Fux, Thibaut; Adda-Decker, Martine; Nazzi, Thierry
2015-07-01
Consonants and vowels differ acoustically and articulatorily, but also functionally: Consonants are more relevant for lexical processing, and vowels for prosodic/syntactic processing. These functional biases could be powerful bootstrapping mechanisms for learning language, but their developmental origin remains unclear. The relative importance of consonants and vowels at the onset of lexical acquisition was assessed in French-learning 5-month-olds by testing sensitivity to minimal phonetic changes in their own name. Infants' reactions to mispronunciations revealed sensitivity to vowel but not consonant changes. Vowels were also more salient (on duration and intensity) but less distinct (on spectrally based measures) than consonants. Lastly, vowel (but not consonant) mispronunciation detection was modulated by acoustic factors, in particular spectrally based distance. These results establish that consonant changes do not affect lexical recognition at 5 months, while vowel changes do; the consonant bias observed later in development does not emerge until after 5 months through additional language exposure. © 2014 John Wiley & Sons Ltd.
Consonant Acquisition in Young Cochlear Implant Recipients and Their Typically Developing Peers
Jung, Jongmin; Ertmer, David J.
2017-01-01
Purpose Consonant acquisition was examined in 13 young cochlear implant (CI) recipients and 11 typically developing (TD) children. Method A longitudinal research design was implemented to determine the rate and nature of consonant acquisition during the first 2 years of robust hearing experience. Twenty-minute adult–child (typically a parent) interactions were video and audio recorded at 3-month intervals following implantation until 24 months of robust hearing experience was achieved. TD children were similarly recorded between 6 and 24 months of age. Consonants that were produced twice within a 50-utterance sample were considered “established” within a child's consonant inventory. Results Although the groups showed similar trajectories, the TD group produced larger consonant inventories than the CI group at each interval except for 21 and 24 months. A majority of children with CIs nonetheless showed rapid acquisition of consonants and increasingly diverse consonant inventories, closing the gap with TD children. Conclusions These results suggest that early auditory deprivation does not significantly affect consonant acquisition for most CI recipients. Tracking early consonant development appears to be a useful way to assess the effectiveness of cochlear implantation in young recipients. PMID:28474085
ERIC Educational Resources Information Center
Lupi, Marsha Mead
1979-01-01
The article illustrates the use of commercial jingles as high interest, low-level reading and language arts materials for primary age mildly retarded students. It is pointed out that jingles can be used in teaching initial consonants, vocabulary words, and arithmetic concepts. (SBH)
Segmentation of Vowel-Initial Words Is Facilitated by Function Words
ERIC Educational Resources Information Center
Kim, Yun Jung; Sundara, Megha
2015-01-01
Within the first year of life, infants learn to segment words from fluent speech. Previous research has shown that infants at 0;7.5 can segment consonant-initial words, yet the ability to segment vowel-initial words does not emerge until the age of 1;1-1;4 (0;11 in some restricted cases). In five experiments, we show that infants aged 0;11 but not…
Adaptation to an electropalatograph palate: acoustic, impressionistic, and perceptual data.
McLeod, Sharynne; Searl, Jeff
2006-05-01
The purpose of this study was to evaluate adaptation to the electropalatograph (EPG) from the perspective of consonant acoustics, listener perceptions, and speaker ratings. Seven adults with typical speech wore an EPG and pseudo-EPG palate over 2 days and produced syllables, read a passage, counted, and rated their adaptation to the palate. Consonant acoustics, listener ratings, and speaker ratings were analyzed. The spectral mean for the burst (/t/) and frication (/s/) was reduced for the first 60-120 min of wearing the pseudo-EPG palate. Temporal features (stop gap, frication, and syllable duration) were unaffected by wearing the pseudo-EPG palate. The EPG palate had a similar effect on consonant acoustics as the pseudo-EPG palate. Expert listener ratings indicated minimal to no change in speech naturalness or distortion from the pseudo-EPG or EPG palate. The sounds [see text] were most likely to be affected. Speaker self-ratings related to oral comfort, speech, tongue movement, appearance, and oral sensation were negatively affected by the presence of the palatal devices. Speakers detected a substantial difference when wearing a palatal device, but the effects on speech were minimal based on listener ratings. Spectral features of consonants were initially affected, although adaptation occurred. Wearing an EPG or pseudo-EPG palate for approximately 2 hr results in relatively normal-sounding speech with acoustic features similar to a no-palate condition.
Now you hear it, now you don't: vowel devoicing in Japanese infant-directed speech.
Fais, Laurel; Kajikawa, Sachiyo; Amano, Shigeaki; Werker, Janet F
2010-03-01
In this work, we examine a context in which a conflict arises between two roles that infant-directed speech (IDS) plays: making language structure salient and modeling the adult form of a language. Vowel devoicing in fluent adult Japanese creates violations of the canonical Japanese consonant-vowel word structure pattern by systematically devoicing particular vowels, yielding surface consonant clusters. We measured vowel devoicing rates in a corpus of infant- and adult-directed Japanese speech, for both read and spontaneous speech, and found that the mothers in our study preserve the fluent adult form of the language and mask underlying phonological structure by devoicing vowels in infant-directed speech at virtually the same rates as those for adult-directed speech. The results highlight the complex interrelationships among the modifications to adult speech that comprise infant-directed speech, and that form the input from which infants begin to build the eventual mature form of their native language.
Cho, Taehong; McQueen, James M
2011-08-01
Two experiments examined whether perceptual recovery from Korean consonant-cluster simplification is based on language-specific phonological knowledge. In tri-consonantal C1C2C3 sequences such as /lkt/ and /lpt/ in Seoul Korean, either C1 or C2 can be completely deleted. Seoul Koreans monitored for C2 targets (/p/ or /k/, deleted or preserved) in the second word of a two-word phrase with an underlying /l/-C2-/t/ sequence. In Experiment 1 the target-bearing words had contextual lexical-semantic support. Listeners recovered deleted targets as fast and as accurately as preserved targets with both Word and Intonational Phrase (IP) boundaries between the two words. In Experiment 2, contexts were low-pass filtered. Listeners were still able to recover deleted targets as well as preserved targets in IP-boundary contexts, but better with physically present targets than with deleted targets in Word-boundary contexts. This suggests that the benefit of having target acoustic-phonetic information emerges only when higher-order (contextual and phrase-boundary) information is not available. The strikingly efficient recovery of deleted phonemes with neither acoustic-phonetic cues nor contextual support demonstrates that language-specific phonological knowledge, rather than language-universal perceptual processes which rely on fine-grained phonetic details, is employed when the listener perceives the results of a continuous-speech process in which reduction is phonetically complete.
Does perceived stress mediate the effect of cultural consonance on depression?
Balieiro, Mauro C; Dos Santos, Manoel Antônio; Dos Santos, José Ernesto; Dressler, William W
2011-11-01
The importance of appraisal in the stress process is unquestioned. Experiences in the social environment that impact outcomes such as depression are thought to have these effects because they are appraised as a threat to the individual and overwhelm the individual's capacity to cope. In terms of the nature of social experience that is associated with depression, several recent studies have examined the impact of cultural consonance. Cultural consonance is the degree to which individuals, in their own beliefs and behaviors, approximate the prototypes for belief and behavior encoded in shared cultural models. Low cultural consonance is associated with more depressive symptoms both cross-sectionally and longitudinally. In this paper we ask the question: does perceived stress mediate the effects of cultural consonance on depression? Data are drawn from a longitudinal study of depressive symptoms in the urban community of Ribeirão Preto, Brazil. A sample of 210 individuals was followed for 2 years. Cultural consonance was assessed in four cultural domains, using a mixed-methods research design that integrated techniques of cultural domain analysis with social survey research. Perceived stress was measured with Cohen's Perceived Stress Scale. When cultural consonance was examined separately for each domain, perceived stress partially mediated the impact of cultural consonance in family life and cultural consonance in lifestyle on depressive symptoms. When generalized cultural consonance (combining consonance in all four domains) was examined, there was no evidence of mediation. These results raise questions about how culturally salient experience rises to the level of conscious reflection.
Rødvik, Arne Kirkhorn; von Koss Torkildsen, Janne; Wie, Ona Bø; Storaker, Marit Aarvaag; Silvola, Juha Tapio
2018-04-17
The purpose of this systematic review and meta-analysis was to establish a baseline of the vowel and consonant identification scores in prelingually and postlingually deaf users of multichannel cochlear implants (CIs) tested with consonant-vowel-consonant and vowel-consonant-vowel nonsense syllables. Six electronic databases were searched for peer-reviewed articles reporting consonant and vowel identification scores in CI users measured by nonsense words. Relevant studies were independently assessed and screened by 2 reviewers. Consonant and vowel identification scores were presented in forest plots and compared between studies in a meta-analysis. Forty-seven articles with 50 studies, including 647 participants, of whom 581 were postlingually deaf and 66 prelingually deaf, met the inclusion criteria of this study. The mean performance on vowel identification tasks for the postlingually deaf CI users was 76.8% (N = 5), which was higher than the mean performance for the prelingually deaf CI users (67.7%; N = 1). The mean performance on consonant identification tasks for the postlingually deaf CI users was higher (58.4%; N = 44) than for the prelingually deaf CI users (46.7%; N = 6). The most common consonant confusions were found between those with the same manner of articulation (/k/ as /t/, /m/ as /n/, and /p/ as /t/). However, none of the differences between the scores for prelingually and postlingually deaf CI users were statistically significant. The consonants that were incorrectly identified were typically confused with other consonants with the same acoustic properties, namely, voicing, duration, nasality, and silent gaps. A univariate metaregression model, although not statistically significant, indicated that duration of implant use in postlingually deaf adults predicts a substantial portion of their consonant identification ability.
As there is no ceiling effect, a nonsense syllable identification test may be a useful addition to the standard test battery in audiology clinics when assessing the speech perception of CI users.
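At its simplest, pooling per-study identification scores as in this meta-analysis is a weighted mean. The sketch below weights by participant count (a fixed-effect-style simplification for illustration, not the authors' model; the scores and counts are hypothetical):

```python
def pooled_score(scores, ns):
    """Participant-count-weighted mean of per-study percent-correct
    scores: a minimal stand-in for a pooled meta-analytic estimate."""
    return sum(s * n for s, n in zip(scores, ns)) / sum(ns)

# Three hypothetical consonant-identification studies:
# 60% correct (n = 20), 55% (n = 10), 58% (n = 30).
print(round(pooled_score([60.0, 55.0, 58.0], [20, 10, 30]), 1))  # 58.2
```

A full random-effects meta-analysis would instead weight each study by inverse variance and include a between-study variance term; the simple weighted mean only conveys the idea.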
Phase locked neural activity in the human brainstem predicts preference for musical consonance.
Bones, Oliver; Hopkins, Kathryn; Krishnan, Ananthanarayan; Plack, Christopher J
2014-05-01
When musical notes are combined to make a chord, the closeness of fit of the combined spectrum to a single harmonic series (the 'harmonicity' of the chord) predicts the perceived consonance (how pleasant and stable the chord sounds; McDermott, Lehr, & Oxenham, 2010). The distinction between consonance and dissonance is central to Western musical form. Harmonicity is represented in the temporal firing patterns of populations of brainstem neurons. The current study investigates the role of brainstem temporal coding of harmonicity in the perception of consonance. Individual preference for consonant over dissonant chords was measured using a rating scale for pairs of simultaneous notes. In order to investigate the effects of cochlear interactions, notes were presented in two ways: both notes to both ears or each note to different ears. The electrophysiological frequency following response (FFR), reflecting sustained neural activity in the brainstem synchronised to the stimulus, was also measured. When both notes were presented to both ears the perceptual distinction between consonant and dissonant chords was stronger than when the notes were presented to different ears. In the condition in which both notes were presented to both ears, additional low-frequency components, corresponding to difference tones resulting from nonlinear cochlear processing, were observable in the FFR, effectively enhancing the neural harmonicity of consonant chords but not dissonant chords. Suppressing the cochlear envelope component of the FFR also suppressed the additional frequency components. This suggests that, in the case of consonant chords, difference tones generated by interactions between notes in the cochlea enhance the perception of consonance. Furthermore, individuals with a greater distinction between consonant and dissonant chords in the FFR to individual harmonics had a stronger preference for consonant over dissonant chords.
Overall, the results provide compelling evidence for the role of neural temporal coding in the perception of consonance, and suggest that the representation of harmonicity in phase-locked neural firing drives the perception of consonance. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Crespo-Bojorque, Paola; Toro, Juan M
2016-05-01
Consonance is a salient perceptual feature in harmonic music associated with pleasantness. Research suggests that, besides being deeply rooted in how we experience music, consonant intervals are more easily processed than dissonant intervals. In the present work we explore from a comparative perspective whether this processing advantage extends to more complex tasks such as the detection of abstract rules. We ran experiments on rule learning over consonant and dissonant intervals with nonhuman animals and human participants. Results show differences across species regarding the extent to which they benefit from differences in consonance. Animals learn abstract rules with the same ease independently of whether they are implemented over consonant intervals (Experiment 1), dissonant intervals (Experiment 2), or over a combination of them (Experiment 3). Humans, on the contrary, learn an abstract rule better when it is implemented over consonant (Experiment 4) than over dissonant intervals (Experiment 5). Moreover, their performance improves when there is a mapping between the abstract categories defining a rule and consonant and dissonant intervals (Experiments 6 and 7). The results suggest that for humans, consonance might serve as a perceptual anchor for other cognitive processes, facilitating the detection of abstract patterns. Lacking extensive experience with harmonic stimuli, the nonhuman animals tested here do not seem to benefit from a processing advantage for consonant intervals. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Pratt, Hillel; Bleich, Naomi; Mittelman, Nomi
2015-11-01
Spatio-temporal distributions of cortical activity to audio-visual presentations of meaningless vowel-consonant-vowels and the effects of audio-visual congruence/incongruence, with emphasis on the McGurk effect, were studied. The McGurk effect occurs when a clearly audible syllable with one consonant is presented simultaneously with a visual presentation of a face articulating a syllable with a different consonant, and the resulting percept is a syllable with a consonant other than the auditorily presented one. Twenty subjects listened to pairs of audio-visually congruent or incongruent utterances and indicated whether pair members were the same or not. Source current densities of event-related potentials to the first utterance in the pair were estimated and effects of stimulus-response combinations, brain area, hemisphere, and clarity of visual articulation were assessed. Auditory cortex, superior parietal cortex, and middle temporal cortex were the most consistently involved areas across experimental conditions. Early (<200 msec) processing of the consonant was overall prominent in the left hemisphere, except right hemisphere prominence in superior parietal cortex and secondary visual cortex. Clarity of visual articulation impacted activity in secondary visual cortex and Wernicke's area. McGurk perception was associated with decreased activity in primary and secondary auditory cortices and Wernicke's area before 100 msec, increased activity around 100 msec which decreased again around 180 msec. Activity in Broca's area was unaffected by McGurk perception and was only increased to congruent audio-visual stimuli 30-70 msec following consonant onset. The results suggest left hemisphere prominence in the effects of stimulus and response conditions on eight brain areas involved in dynamically distributed parallel processing of audio-visual integration.
Initially (30-70 msec) subcortical contributions to auditory cortex, superior parietal cortex, and middle temporal cortex occur. During 100-140 msec, peristriate visual influences and Wernicke's area join in the processing. Resolution of incongruent audio-visual inputs is then attempted, and if successful, McGurk perception occurs and cortical activity in left hemisphere further increases between 170 and 260 msec.
The privileged status of locality in consonant harmony
Finley, Sara
2011-01-01
While the vast majority of linguistic processes apply locally, consonant harmony appears to be an exception. In this phonological process, consonants share the same value of a phonological feature, such as secondary place of articulation. In sibilant harmony, [s] and [ʃ] (‘sh’) alternate such that if a word contains the sound [ʃ], all [s] sounds become [ʃ]. This can apply locally, as a first-order pattern, or non-locally, as a second-order pattern. In the first-order case, no consonants intervene between the two sibilants (e.g., [pisasu], [piʃaʃu]). In the second-order case, a consonant may intervene (e.g., [sipasu], [ʃipaʃu]). The fact that there are languages that allow second-order non-local agreement of consonant features has led some to question whether locality constraints apply to consonant harmony. This paper presents the results from two artificial grammar learning experiments that demonstrate the privileged role of locality constraints, even in patterns that allow second-order non-local interactions. In Experiment 1, we show that learners do not extend first-order non-local relationships in consonant harmony to second-order non-local relationships. In Experiment 2, we show that learners will extend a consonant harmony pattern with second-order long distance relationships to a consonant harmony with first-order long distance relationships. Because second-order non-local application implies first-order non-local application, but first-order non-local application does not imply second-order non-local application, we establish that local constraints are privileged even in consonant harmony. PMID:21686094
Structural Generalizations over Consonants and Vowels in 11-Month-Old Infants
ERIC Educational Resources Information Center
Pons, Ferran; Toro, Juan M.
2010-01-01
Recent research has suggested consonants and vowels serve different roles during language processing. While statistical computations are preferentially made over consonants but not over vowels, simple structural generalizations are easily made over vowels but not over consonants. Nevertheless, the origins of this asymmetry are unknown. Here we…
Differential processing of consonants and vowels in lexical access through reading.
New, Boris; Araújo, Verónica; Nazzi, Thierry
2008-12-01
Do consonants and vowels have the same importance during reading? Recently, it has been proposed that consonants play a more important role than vowels for language acquisition and adult speech processing. This proposal has started receiving developmental support from studies showing that infants are better at processing specific consonantal than vocalic information while learning new words. This proposal also received support from adult speech processing. In our study, we directly investigated the relative contributions of consonants and vowels to lexical access while reading by using a visual masked-priming lexical decision task. Test items were presented following four different primes: identity (e.g., for the word joli, joli), unrelated (vabu), consonant-related (jalu), and vowel-related (vobi). Priming was found for the identity and consonant-related conditions, but not for the vowel-related condition. These results establish the privileged role of consonants during lexical access while reading.
A mathematical model of medial consonant identification by cochlear implant users
Svirsky, Mario A.; Sagi, Elad; Meyer, Ted A.; Kaiser, Adam R.; Teoh, Su Wooi
2011-01-01
The multidimensional phoneme identification model is applied to consonant confusion matrices obtained from 28 postlingually deafened cochlear implant users. This model predicts consonant matrices based on these subjects’ ability to discriminate a set of postulated spectral, temporal, and amplitude speech cues as presented to them by their device. The model produced confusion matrices that matched many aspects of individual subjects’ consonant matrices, including information transfer for the voicing, manner, and place features, despite individual differences in age at implantation, implant experience, device and stimulation strategy used, as well as overall consonant identification level. The model was able to match the general pattern of errors between consonants, but not the full complexity of all consonant errors made by each individual. The present study represents an important first step in developing a model that can be used to test specific hypotheses about the mechanisms cochlear implant users employ to understand speech. PMID:21476674
Lidestam, Björn; Rönnberg, Jerker
2016-01-01
The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. PMID:27317667
Early lexical characteristics of toddlers with cleft lip and palate.
Hardin-Jones, Mary; Chapman, Kathy L
2014-11-01
Objective: To examine development of early expressive lexicons in toddlers with cleft palate to determine whether they differ from those of noncleft toddlers in terms of size and lexical selectivity. Design: Retrospective. Patients: A total of 37 toddlers with cleft palate and 22 noncleft toddlers. Main Outcome Measures: The groups were compared for size of expressive lexicon reported on the MacArthur Communicative Development Inventory and the percentage of words beginning with obstruents and sonorants produced in a language sample. Differences between groups in the percentage of word initial consonants correct on the language sample were also examined. Results: Although expressive vocabulary was comparable at 13 months of age for both groups, size of the lexicon for the cleft group was significantly smaller than that for the noncleft group at 21 and 27 months of age. Toddlers with cleft palate produced significantly more words beginning with sonorants and fewer words beginning with obstruents in their spontaneous speech samples. They were also less accurate when producing word initial obstruents compared with the noncleft group. Conclusions: Toddlers with cleft palate demonstrate a slower rate of lexical development compared with their noncleft peers. The preference that toddlers with cleft palate demonstrate for words beginning with sonorants could suggest they are selecting words that begin with consonants that are easier for them to produce. An alternative explanation might be that because these children are less accurate in the production of obstruent consonants, listeners may not always identify obstruents when they occur.
Perception of resyllabification in French.
Gaskell, M Gareth; Spinelli, Elsa; Meunier, Fanny
2002-07-01
In three experiments, we examined the effects of phonological resyllabification processes on the perception of French speech. Enchainment involves the resyllabification of a word-final consonant across a syllable boundary (e.g., in chaque avion, the /k/ crosses the syllable boundary to become syllable initial). Liaison involves a further process of realization of a latent consonant, alongside resyllabification (e.g., the /t/ in petit avion). If the syllable is a dominant unit of perception in French (Mehler, Dommergues, Frauenfelder, & Segui, 1981), these processes should cause problems for recognition of the following word. A cross-modal priming experiment showed no cost attached to either type of resyllabification in terms of reduced activation of the following word. Furthermore, word- and sequence-monitoring experiments again showed no cost and suggested that the recognition of vowel-initial words may be facilitated when they are preceded by a word that had undergone resyllabification through enchainment or liaison. We examine the sources of information that could underpin facilitation and propose a refinement of the syllable's role in the perception of French speech.
Moradi, Shahram; Lidestam, Björn; Danielsson, Henrik; Ng, Elaine Hoi Ning; Rönnberg, Jerker
2017-09-18
We sought to examine the contribution of visual cues in audiovisual identification of consonants and vowels-in terms of isolation points (the shortest time required for correct identification of a speech stimulus), accuracy, and cognitive demands-in listeners with hearing impairment using hearing aids. The study comprised 199 participants with hearing impairment (mean age = 61.1 years) with bilateral, symmetrical, mild-to-severe sensorineural hearing loss. Gated Swedish consonants and vowels were presented aurally and audiovisually to participants. Linear amplification was adjusted for each participant to assure audibility. The reading span test was used to measure participants' working memory capacity. Audiovisual presentation resulted in shortened isolation points and improved accuracy for consonants and vowels relative to auditory-only presentation. This benefit was more evident for consonants than vowels. In addition, correlations and subsequent analyses revealed that listeners with higher scores on the reading span test identified both consonants and vowels earlier in auditory-only presentation, but only vowels (not consonants) in audiovisual presentation. Consonants and vowels differed in terms of the benefits afforded from their associative visual cues, as indicated by the degree of audiovisual benefit and reduction in cognitive demands linked to the identification of consonants and vowels presented audiovisually.
The Effect of Orthography on the Lexical Encoding of Palatalized Consonants in L2 Russian.
Simonchyk, Ala; Darcy, Isabelle
2018-03-01
The current study investigated the potential facilitative or inhibiting effects of orthography on the lexical encoding of palatalized consonants in L2 Russian. We hypothesized that learners with stable knowledge of orthographic and metalinguistic representations of palatalized consonants would display more accurate lexical encoding of the plain/palatalized contrast. The participants of the study were 40 American learners of Russian. Ten Russian native speakers served as a control group. The materials of the study comprised 20 real words, familiar to the participants, with target coronal consonants alternating in word-final and intervocalic positions. The participants performed three tasks: written picture naming, metalinguistic, and auditory word-picture matching. Results showed that learners were not entirely familiar with the grapheme-phoneme correspondences in L2 Russian. Even though they spelled almost all of these familiar Russian words accurately, they identified the plain/palatalized status of the target consonants in these words with only about 80% accuracy on a metalinguistic task. The effect of orthography on lexical encoding was found to depend on the syllable position of the target consonants. In intervocalic position, learners erroneously relied on the vowels following the target consonants rather than on the consonants themselves to encode words with plain/palatalized consonants. In word-final position, although learners possessed the orthographic and metalinguistic knowledge of the difference in the palatalization status of the target consonants-and hence had established some aspects of the lexical representations for the words-those representations appeared to lack phonological granularity and detail, perhaps due to the lack of perceptual salience.
Non-Adjacent Consonant Sequence Patterns in English Target Words during the First-Word Period
ERIC Educational Resources Information Center
Aoyama, Katsura; Davis, Barbara L.
2017-01-01
The goal of this study was to investigate non-adjacent consonant sequence patterns in target words during the first-word period in infants learning American English. In the spontaneous speech of eighteen participants, target words with a Consonant-Vowel-Consonant (C[subscript 1]VC[subscript 2]) shape were analyzed. Target words were grouped into…
The Perceptibility of Duration in the Phonetics and Phonology of Contrastive Consonant Length
ERIC Educational Resources Information Center
Hansen, Benjamin Bozzell
2012-01-01
This dissertation investigates the hypothesis that the more vowel-like a consonant is, the more difficult it is for listeners to classify it as geminate or singleton. A perceptual account of this observation holds that more vowel-like consonants lack clear markers to signal the beginning and ending of the consonant, so listeners don't perceive the…
Li, Chuchu; Wang, Min
2017-08-01
Three sets of experiments using picture naming tasks with the form preparation paradigm investigated the influence of orthographic experience on the development of the phonological preparation unit in spoken word production in native Mandarin-speaking children. Participants included kindergarten children who had not received formal literacy instruction; Grade 1 children who were comparatively more exposed to the alphabetic pinyin system and had very limited Chinese character knowledge; Grades 2 and 4 children who had better character knowledge and more exposure to characters; and skilled adult readers who had the most advanced character knowledge and most exposure to characters. Only Grade 1 children showed the form preparation effect in the same initial consonant condition (i.e., when a list of target words shared the initial consonant). Both Grade 4 children and adults showed the preparation effect when the initial syllable (but not tone) among target words was shared. Kindergartners and Grade 2 children only showed the preparation effect when the initial syllable including tonal information was shared. These developmental changes in phonological preparation could be interpreted as a joint function of the modification of phonological representation and attentional shift. Extensive pinyin experience encourages speakers to attend to and select the onset phoneme in phonological preparation, whereas extensive character experience encourages speakers to prepare spoken words in syllables.
Concept of Tone in Mandarin Revisited: A Perceptual Study on Tonal Coarticulation.
ERIC Educational Resources Information Center
Shen, Xiaonan Susan; Lin, Maocan
1991-01-01
Examination of the perceptibility of carryover coarticulatory perturbations occurring at syllabic vowels in Mandarin Chinese suggests that, in connected speech, a portion of fundamental frequency at intertonemic onset is perturbed, including initial voiced consonants and vowels, and that the perturbations result from perseverative as well as…
Perception of the Voicing Distinction in Speech Produced during Simultaneous Communication
ERIC Educational Resources Information Center
MacKenzie, Douglas J.; Schiavetti, Nicholas; Whitehead, Robert L.; Metz, Dale Evan
2006-01-01
This study investigated the perception of voice onset time (VOT) in speech produced during simultaneous communication (SC). Four normally hearing, experienced sign language users were recorded under SC and speech alone (SA) conditions speaking stimulus words with voiced and voiceless initial consonants embedded in a sentence. Twelve…
Consonants, vowels and tones across Vietnamese dialects.
Phạm, Ben; McLeod, Sharynne
2016-04-01
Vietnamese is spoken by over 89 million people in Vietnam and it is one of the most commonly spoken languages other than English in the US, Canada and Australia. Researchers have defined between one and nine different dialects of Vietnamese spoken in Vietnam. In Vietnamese schools, children learn Standard Vietnamese, which is based on the northern dialect; however, if they live in other regions they may speak a different dialect at home. This paper describes the differences between the consonants, semivowels, vowels, diphthongs and tones for four dialects: Standard, northern, central and southern Vietnamese. The number and type of initial consonants differs per dialect (i.e. Standard = 23, northern = 20, central = 23, southern = 21). For example, the letter "r" is pronounced in the Standard and central dialects as the retroflex /ʐ/, in the northern dialect as the voiced alveolar fricative /z/ or the trill /r/, and in the southern dialect as the voiced velar fricative /ɣ/. Additionally, the letter "v" is pronounced in the Standard, northern and central dialects as the voiced labiodental fricative /v/, in the southern dialect as the voiced palatal approximant /j/ and in the lower northern dialect (Ninh Binh) as the voiceless labiodental fricative /f/. Similarly, the number of final consonants differs per dialect (i.e. Standard = 6, northern = 10, central = 10, southern = 8). Finally, the number and type of tones differs per dialect (i.e. Standard = 6, northern = 6, central = 5, southern = 5). Understanding differences between Vietnamese dialects is important so that speech-language pathologists and educators can provide appropriate services to people who speak Vietnamese.
Maruthy, Santosh; Feng, Yongqiang; Max, Ludo
2018-03-01
A longstanding hypothesis about the sensorimotor mechanisms underlying stuttering suggests that stuttered speech dysfluencies result from a lack of coarticulation. Formant-based measures of either the stuttered or fluent speech of children and adults who stutter have generally failed to obtain compelling evidence in support of the hypothesis that these individuals differ in the timing or degree of coarticulation. Here, we used a sensitive acoustic technique-spectral coefficient analyses-that allowed us to compare stuttering and nonstuttering speakers with regard to vowel-dependent anticipatory influences as early as the onset burst of a preceding voiceless stop consonant. Eight adults who stutter and eight matched adults who do not stutter produced C1VC2 words, and the first four spectral coefficients were calculated for one analysis window centered on the burst of C1 and two subsequent windows covering the beginning of the aspiration phase. Findings confirmed that the combined use of four spectral coefficients is an effective method for detecting the anticipatory influence of a vowel on the initial burst of a preceding voiceless stop consonant. However, the observed patterns of anticipatory coarticulation showed no statistically significant differences, or trends toward such differences, between the stuttering and nonstuttering groups. Combining the present results for fluent speech in one given phonetic context with prior findings from both stuttered and fluent speech in a variety of other contexts, we conclude that there is currently no support for the hypothesis that the fluent speech of individuals who stutter is characterized by limited coarticulation.
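The "first four spectral coefficients" used in analyses of this kind are commonly the four spectral moments of the windowed burst spectrum: centroid, standard deviation, skewness, and kurtosis. A hedged sketch of that computation, assuming a magnitude spectrum has already been extracted for each analysis window (the function name and toy spectrum are ours, not the study's code):

```python
from math import sqrt

def spectral_moments(freqs, mags):
    """First four spectral moments of a magnitude spectrum, treating the
    normalized spectrum as a probability distribution over frequency:
    centroid (mean), standard deviation, skewness, and kurtosis."""
    total = sum(mags)
    p = [m / total for m in mags]
    centroid = sum(f * w for f, w in zip(freqs, p))
    var = sum(w * (f - centroid) ** 2 for f, w in zip(freqs, p))
    sd = sqrt(var)
    skew = sum(w * (f - centroid) ** 3 for f, w in zip(freqs, p)) / sd ** 3
    kurt = sum(w * (f - centroid) ** 4 for f, w in zip(freqs, p)) / sd ** 4
    return centroid, sd, skew, kurt
```

A symmetric spectrum gives zero skewness; vowel-dependent coarticulation would surface as systematic shifts in these moments across burst windows.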
Non Linear Assessment of Musical Consonance
NASA Astrophysics Data System (ADS)
Trulla, Lluis Lligoña; Guiliani, Alessandro; Zimatore, Giovanna; Colosimo, Alfredo; Zbilut, Joseph P.
2005-12-01
The position of intervals and the degree of musical consonance can be objectively explained by analyzing the time series formed by mixing two pure tones spanning an octave. This result is achieved by means of Recurrence Quantification Analysis (RQA) without considering either overtones or physiological hypotheses. The resulting prediction of consonance can be considered a novel solution to Galileo's conjecture on the nature of consonance. It constitutes an objective link between musical performance and listeners' hearing activity.
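The RQA approach described above can be sketched as follows: time-delay embed the two-tone mixture and count close returns in the embedded space. This is an illustrative toy; the embedding parameters, radius, and the `dyad` helper are our assumptions, not the authors' settings:

```python
from math import pi, sin, sqrt

def recurrence_rate(signal, dim=3, delay=1, radius=0.3):
    """Percent recurrence from a time-delay embedding of the signal:
    fraction of point pairs whose embedded distance falls below radius."""
    n = len(signal) - (dim - 1) * delay
    pts = [[signal[i + k * delay] for k in range(dim)] for i in range(n)]
    rec = 0
    for i in range(n):
        for j in range(i + 1, n):
            d = sqrt(sum((a - b) ** 2 for a, b in zip(pts[i], pts[j])))
            if d < radius:
                rec += 1
    return 2 * rec / (n * (n - 1))

def dyad(ratio, n=400, f0=0.02):
    # Mixture of two pure tones at the given frequency ratio
    # (f0 in cycles per sample).
    return [sin(2 * pi * f0 * t) + sin(2 * pi * f0 * ratio * t)
            for t in range(n)]
```

Per the abstract, RQA measures computed on such mixtures peak at consonant integer ratios (octave 2:1, fifth 3:2) as the interval is swept across an octave.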
Kuwaiti Arabic: acquisition of singleton consonants.
Ayyad, Hadeel Salama; Bernhardt, B May; Stemberger, Joseph P
2016-09-01
Arabic, a Semitic language of the Afro-Asiatic variety, has a rich consonant inventory. Previous studies on Arabic phonological acquisition have focused primarily on dialects in Jordan and Egypt. Because Arabic varies considerably across regions, information is also needed for other dialects. To determine acquisition benchmarks for singleton consonants for Kuwaiti Arabic-speaking 4-year-olds. Participants were 80 monolingual Kuwaiti Arabic-speaking children divided into two age groups: 46-54 and 55-62 months. Post-hoc, eight children were identified as possibly at risk for protracted phonological development. A native Kuwaiti Arabic speaker audio-recorded and transcribed single-word speech samples (88 words) that tested consonants across word positions within a variety of word lengths and structures. Transcription reliability (point-to-point) was 95% amongst the authors, and 87% with an external consultant. Three acquisition levels were designated that indicated the proportion of children with no mismatches ('errors') for a given consonant: 90%+ of children, 75-89%, fewer than 75%. Mismatch patterns were described in terms of a phonological feature framework previously described in the literature. The Kuwaiti 4-year-olds produced many singleton consonants accurately, including pharyngeals and uvulars. Although the older age group had fewer manner and laryngeal mismatches than the younger age group, consonants still developing at age 5 included coronal fricatives and affricates, trilled /r/ and some uvularized consonants ('emphatics'). The possible at-risk group showed mastery of fewer consonants than the other children. By feature category, place mismatches were the most common, primarily de-emphasis and lack of contrast for [coronal, grooved] (distinguishing alveolar from interdental fricatives). Manner mismatches were next most common: the most frequent substitutions were [+lateral] [l] or other rhotics for /r/, and stops for fricatives. 
Laryngeal mismatches were few, and involved partial or full devoicing. Group differences generally reflected proportions of mismatches rather than types. Compared with studies of Jordanian and Egyptian Arabic, Kuwaiti 4-year-olds showed a somewhat more advanced consonant inventory than same-age peers, especially with respect to uvulars, pharyngeals and uvularized (emphatic) consonants. Similar to the other studies, consonant categories yet to be mastered were: [+trilled] /r/, the coronal fricative feature [grooved], [+voiced] fricatives /ʕ, z/, the affricate /d͡ʒ/ and some emphatics. Common mismatch patterns generally accorded with previous studies. This study provides criterion-referenced benchmarks for singleton consonant acquisition in Kuwaiti Arabic-speaking 4-year-olds. © 2016 Royal College of Speech and Language Therapists.
Brennan, Marc A; Lewis, Dawna; McCreery, Ryan; Kopun, Judy; Alexander, Joshua M
2017-10-01
Nonlinear frequency compression (NFC) can improve the audibility of high-frequency sounds by lowering them to a frequency where audibility is better; however, this lowering results in spectral distortion. Consequently, performance is a combination of the effects of increased access to high-frequency sounds and the detrimental effects of spectral distortion. Previous work has demonstrated positive benefits of NFC on speech recognition when NFC is set to improve audibility while minimizing distortion. However, the extent to which NFC impacts listening effort is not well understood, especially for children with sensorineural hearing loss (SNHL). To examine the impact of NFC on recognition and listening effort for speech in adults and children with SNHL. Within-subject, quasi-experimental study. Participants listened to amplified nonsense words that were (1) frequency-lowered using NFC, (2) low-pass filtered at 5 kHz to simulate the restricted bandwidth (RBW) of conventional hearing aid processing, or (3) low-pass filtered at 10 kHz to simulate extended bandwidth (EBW) amplification. Fourteen children (8-16 yr) and 14 adults (19-65 yr) with mild-to-severe SNHL. Participants listened to speech processed by a hearing aid simulator that amplified input signals to fit a prescriptive target fitting procedure. Participants were blinded to the type of processing. Participants' responses to each nonsense word were analyzed for accuracy and verbal-response time (VRT; listening effort). A multivariate analysis of variance and linear mixed model were used to determine the effect of hearing-aid signal processing on nonsense word recognition and VRT. Both children and adults identified the nonsense words and initial consonants better with EBW and NFC than with RBW. The type of processing did not affect the identification of the vowels or final consonants. There was no effect of age on recognition of the nonsense words, initial consonants, medial vowels, or final consonants. 
VRT did not change significantly with the type of processing or age. Both adults and children demonstrated improved speech recognition with access to the high-frequency sounds in speech. Listening effort as measured by VRT was not affected by access to high-frequency sounds. American Academy of Audiology
The basis of musical consonance as revealed by congenital amusia
Cousineau, Marion; McDermott, Josh H.; Peretz, Isabelle
2012-01-01
Some combinations of musical notes sound pleasing and are termed “consonant,” but others sound unpleasant and are termed “dissonant.” The distinction between consonance and dissonance plays a central role in Western music, and its origins have posed one of the oldest and most debated problems in perception. In modern times, dissonance has been widely believed to be the product of “beating”: interference between frequency components in the cochlea that is more pronounced in dissonant than in consonant sounds. However, harmonic frequency relations, a higher-order sound attribute closely related to pitch perception, have also been proposed to account for consonance. To tease apart theories of musical consonance, we tested sound preferences in individuals with congenital amusia, a neurogenetic disorder characterized by abnormal pitch perception. We assessed amusics’ preferences for musical chords as well as for the isolated acoustic properties of beating and harmonicity. In contrast to control subjects, amusic listeners showed no preference for consonance, rating the pleasantness of consonant chords no higher than that of dissonant chords. Amusics also failed to exhibit the normally observed preference for harmonic over inharmonic tones, nor could they discriminate such tones from each other. Despite these abnormalities, amusics exhibited normal preferences and discrimination for stimuli with and without beating. This dissociation indicates that, contrary to classic theories, beating is unlikely to underlie consonance. Our results instead suggest the need to integrate harmonicity as a foundation of music preferences, and illustrate how amusia may be used to investigate normal auditory function. PMID:23150582
TESL Reporter, Vol. 3, Nos. 1-4.
ERIC Educational Resources Information Center
Pack, Alice C., Ed.
Four issues of "TESL Reporter" are presented. Contents include the following articles: "Feedback: An Anti-Madeirization Compound" by Henry M. Schaafsma; "Using the Personal Pronoun 'I' as a Compound Subject" by G. Pang and D. Chu; "The Consonant'L' in Initial and Final Positions" by Maybelle Chong; "Sentence Expansion for the Elementary Level" by…
French Liaison: Linguistic and Sociolinguistic Influences on Speech Perception
ERIC Educational Resources Information Center
Dautricourt, Robin Guillaume
2010-01-01
French liaison is a phonological process that takes place when an otherwise silent word-final consonant is pronounced before a following vowel-initial word. It is a process that has been evolving for centuries, and whose patterns of realization are influenced by a wide range of interacting linguistic and social factors. French speakers therefore…
Most, Tova; Gaon-Sivan, Gal; Shpak, Talma; Luntz, Michal
2012-01-01
Binaural hearing in cochlear implant (CI) users can be achieved either by bilateral implantation or bimodally with a contralateral hearing aid (HA). Binaural-bimodal hearing has the advantage of complementing the high-frequency electric information from the CI by low-frequency acoustic information from the HA. We examined the contribution of a contralateral HA to the perception of fundamental frequency-cued speech characteristics (initial consonant voicing, intonation, and emotions) in 25 adult implantees. Testing with CI alone, HA alone, and bimodal hearing showed that all three characteristics were best perceived under the bimodal condition. Significant differences were recorded between bimodal and HA conditions in the initial voicing test, between bimodal and CI conditions in the intonation test, and between both bimodal and CI conditions and between bimodal and HA conditions in the emotion-in-speech test. These findings confirmed that such binaural-bimodal hearing enhances perception of these speech characteristics and suggest that implantees with residual hearing in the contralateral ear may benefit from a HA in that ear.
Kinematic analysis of jaw function in children following traumatic brain injury.
Loh, E W L; Goozée, J V; Murdoch, B E
2005-07-01
To investigate jaw movements in children following traumatic brain injury (TBI) during speech using electromagnetic articulography (EMA). Jaw movements of two non-dysarthric children (aged 12.75 and 13.08 years) who had sustained a TBI were recorded using the AG-100 EMA system (Carstens Medizinelektronik) during word-initial consonant productions. Mean quantitative kinematic parameters and coefficient of variation (variability) values were calculated and individually compared to the mean values obtained by a group of six control children (mean age 12.57 years, SD 1.52). The two children with TBI exhibited word-initial consonant jaw movement durations that were comparable to the control children, with sub-clinical reductions in speed being offset by reduced distances. Differences were observed between the two children in jaw kinematic variability, with one child exhibiting increased variability, while the other child demonstrated reduced or comparable variability compared to the control group. Possible sub-clinical impairments of jaw movement for speech were exhibited by two children who had sustained a TBI, providing insight into the consequences of TBI on speech motor control development.
Vergara-Martínez, Marta; Perea, Manuel; Marín, Alejandro; Carreiras, Manuel
2011-09-01
Recent research suggests that there is a processing distinction between consonants and vowels in visual-word recognition. Here we conjointly examine the time course of consonants and vowels in processes of letter identity and letter position assignment. Event related potentials (ERPs) were recorded while participants read words and pseudowords in a lexical decision task. The stimuli were displayed under different conditions in a masked priming paradigm with a 50-ms SOA: (i) identity/baseline condition (e.g., chocolate-CHOCOLATE); (ii) vowels-delayed condition (e.g., choc_l_te-CHOCOLATE); (iii) consonants-delayed condition (cho_o_ate-CHOCOLATE); (iv) consonants-transposed condition (cholocate-CHOCOLATE); (v) vowels-transposed condition (chocalote-CHOCOLATE), and (vi) unrelated condition (editorial-CHOCOLATE). Results showed earlier ERP effects and longer reaction times for the delayed-letter compared to the transposed-letter conditions. Furthermore, at early stages of processing, consonants may play a greater role during letter identity processing. Differences between vowels and consonants regarding letter position assignment are discussed in terms of a later phonological level involved in lexical retrieval. Copyright © 2010 Elsevier Inc. All rights reserved.
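The prime conditions in this abstract are mechanical transformations of the base word: masking or swapping specific vowels or consonants. A small sketch that reproduces the published examples; the helper names and their occurrence-index parameters are our illustrations, not the authors' stimulus-generation code:

```python
VOWELS = set("aeiou")

def delay(word, kind, which=(1, 2)):
    """Mask selected occurrences (0-based, counted within the class) of
    vowels or consonants with underscores, as in the delayed-letter primes."""
    out, seen = [], 0
    for ch in word:
        if (ch in VOWELS) == (kind == "vowels"):
            out.append("_" if seen in which else ch)
            seen += 1
        else:
            out.append(ch)
    return "".join(out)

def transpose(word, kind, i, j):
    """Swap the i-th and j-th occurrences (0-based) of the chosen class,
    as in the transposed-letter primes."""
    idx = [k for k, ch in enumerate(word)
           if (ch in VOWELS) == (kind == "vowels")]
    chars = list(word)
    chars[idx[i]], chars[idx[j]] = chars[idx[j]], chars[idx[i]]
    return "".join(chars)
```

Here `delay("chocolate", "vowels")` yields "choc_l_te" (condition ii) and `transpose("chocolate", "vowels", 1, 2)` yields "chocalote" (condition v).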
Effects of Word Position on the Acoustic Realization of Vietnamese Final Consonants.
Tran, Thi Thuy Hien; Vallée, Nathalie; Granjon, Lionel
2018-05-28
A variety of studies have shown differences between phonetic features of consonants according to their prosodic and/or syllable (onset vs. coda) positions. However, differences are not always found, and interactions between the various factors involved are complex and not well understood. Our study compares acoustical characteristics of coda consonants in Vietnamese taking into account their position within words. Traditionally described as monosyllabic, Vietnamese is partially polysyllabic at the lexical level. In this language, tautosyllabic consonant sequences are prohibited, and adjacent consonants are only found at syllable boundaries, either within polysyllabic words (CVC.CVC) or across monosyllabic words (CVC#CVC). This study is designed to examine whether or not syllable boundary types (interword vs. intraword) have an effect on the acoustic realization of codas. The results show significant acoustic differences in consonant realizations according to syllable boundary type, suggesting different coarticulation patterns between nuclei and codas. In addition, as Vietnamese voiceless stops are generally unreleased in coda position, with no burst to carry consonantal information, our results show that the second half of the vowel contains acoustic cues that aid discrimination of the place of articulation of the following consonant. © 2018 S. Karger AG, Basel.
Measuring Musical Consonance and Dissonance
ERIC Educational Resources Information Center
LoPresto, Michael C.
2015-01-01
Most combinations of musical tones are perceived as either "consonant," "pleasing" to the human ear, or "dissonant," which is "not pleasing." Despite being largely subjective in nature, sensations of consonance and dissonance can be quantified and then compared to the judgments of human subjects. The…
Wang, M D; Reed, C M; Bilger, R C
1978-03-01
It has been found that listeners with sensorineural hearing loss who show similar patterns of consonant confusions also tend to have similar audiometric profiles. The present study determined whether normal listeners, presented with filtered speech, would produce consonant confusions similar to those previously reported for the hearing-impaired listener. Consonant confusion matrices were obtained from eight normal-hearing subjects for four sets of CV and VC nonsense syllables presented under six high-pass and six low-pass filtering conditions. Patterns of consonant confusion for each condition were described using phonological features in sequential information analysis. Severe low-pass filtering produced consonant confusions comparable to those of listeners with high-frequency hearing loss. Severe high-pass filtering gave a result comparable to that of patients with flat or rising audiograms. Mild filtering resulted in confusion patterns comparable to those of listeners with essentially normal hearing. An explanation in terms of the spectrum, the level of speech, and the configuration of the individual listener's audiogram is given.
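Filtered-speech conditions like these can be simulated with complementary low- and high-pass filters. A deliberately simple first-order sketch; the study's actual filters were far steeper, and the cutoff values below are placeholders:

```python
from math import exp, pi

def one_pole_lowpass(x, cutoff_hz, fs):
    """First-order (6 dB/octave) low-pass filter over a list of samples;
    a simple stand-in for the much steeper experimental filters."""
    a = exp(-2 * pi * cutoff_hz / fs)  # pole location from the cutoff
    y, prev = [], 0.0
    for s in x:
        prev = (1 - a) * s + a * prev
        y.append(prev)
    return y

def highpass(x, cutoff_hz, fs):
    # Complementary high-pass: the input minus its low-passed version.
    low = one_pole_lowpass(x, cutoff_hz, fs)
    return [s - l for s, l in zip(x, low)]
```

A low-passed signal keeps its DC component while the complementary high-pass removes it, mirroring how the two conditions spare opposite ends of the speech spectrum.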
Individual differences reveal the basis of consonance.
McDermott, Josh H; Lehr, Andriana J; Oxenham, Andrew J
2010-06-08
Some combinations of musical notes are consonant (pleasant), whereas others are dissonant (unpleasant), a distinction central to music. Explanations of consonance in terms of acoustics, auditory neuroscience, and enculturation have been debated for centuries. We utilized individual differences to distinguish the candidate theories. We measured preferences for musical chords as well as nonmusical sounds that isolated particular acoustic factors--specifically, the beating and the harmonic relationships between frequency components, two factors that have long been thought to potentially underlie consonance. Listeners preferred stimuli without beats and with harmonic spectra, but across more than 250 subjects, only the preference for harmonic spectra was consistently correlated with preferences for consonant over dissonant chords. Harmonicity preferences were also correlated with the number of years subjects had spent playing a musical instrument, suggesting that exposure to music amplifies preferences for harmonic frequencies because of their musical importance. Harmonic spectra are prominent features of natural sounds, and our results indicate that they also underlie the perception of consonance. 2010 Elsevier Ltd. All rights reserved.
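The two acoustic factors the study isolated, beating and harmonicity, can be illustrated with toy measures: the beat rate of a pure-tone pair is its difference frequency, and a crude harmonicity check asks whether all components share a plausible common fundamental. Both functions are our illustrations, not the study's stimulus-generation code:

```python
from math import gcd

def beat_rate(f1, f2):
    """Beats per second between two pure tones: the amplitude envelope of
    their sum fluctuates at the difference frequency."""
    return abs(f1 - f2)

def common_fundamental(freqs, resolution=1):
    """Largest f0 (at the given resolution in Hz) of which every component
    is an integer multiple -- a crude harmonicity check. An implausibly low
    result signals an inharmonic combination."""
    g = 0
    for f in freqs:
        g = gcd(g, round(f / resolution))
    return g * resolution
```

A perfect fifth (440 and 660 Hz) shares the fundamental 220 Hz, whereas a minor second (440 and 466 Hz) yields only a 2 Hz "fundamental", i.e. no plausible harmonic relation.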
Dressler, William W; Balieiro, Mauro C; Ribeiro, Rosane P; Dos Santos, José Ernesto
2016-06-01
In this article, we examine the distribution of a marker of immune system stimulation, C-reactive protein, in urban Brazil. Social relationships are associated with immunostimulation, and we argue that cultural dimensions of social support, assessed by cultural consonance, are important in this process. Cultural consonance is the degree to which individuals, in their own beliefs and behaviors, approximate shared cultural models. A measure of cultural consonance in social support, based on a cultural consensus analysis regarding sources and patterns of social support in Brazil, was developed. In a survey of 258 persons, the association of cultural consonance in social support and C-reactive protein was examined, controlling for age, sex, body mass index, low-density lipoprotein cholesterol, depressive symptoms, and a social network index. Lower cultural consonance in social support was associated with higher C-reactive protein. Implications of these results for future research are discussed. © 2016 by the American Anthropological Association.
Mapping the cortical representation of speech sounds in a syllable repetition task.
Markiewicz, Christopher J; Bohland, Jason W
2016-11-01
Speech repetition relies on a series of distributed cortical representations and functional pathways. A speaker must map auditory representations of incoming sounds onto learned speech items, maintain an accurate representation of those items in short-term memory, interface that representation with the motor output system, and fluently articulate the target sequence. A "dorsal stream" consisting of posterior temporal, inferior parietal and premotor regions is thought to mediate auditory-motor representations and transformations, but the nature and activation of these representations for different portions of speech repetition tasks remain unclear. Here we mapped the correlates of phonetic and/or phonological information related to the specific phonemes and syllables that were heard, remembered, and produced using a series of cortical searchlight multi-voxel pattern analyses trained on estimates of BOLD responses from individual trials. Based on responses linked to input events (auditory syllable presentation), predictive vowel-level information was found in the left inferior frontal sulcus, while syllable prediction revealed significant clusters in the left ventral premotor cortex and central sulcus and the left mid superior temporal sulcus. Responses linked to output events (the GO signal cueing overt production) revealed strong clusters of vowel-related information bilaterally in the mid to posterior superior temporal sulcus. For the prediction of onset and coda consonants, input-linked responses yielded distributed clusters in the superior temporal cortices, which were further informative for classifiers trained on output-linked responses. Output-linked responses in the Rolandic cortex made strong predictions for the syllables and consonants produced, but their predictive power was reduced for vowels.
The results of this study provide a systematic survey of how cortical response patterns covary with the identity of speech sounds, which will help to constrain and guide theoretical models of speech perception, speech production, and phonological working memory. Copyright © 2016 Elsevier Inc. All rights reserved.
How culture shapes the body: cultural consonance and body mass in urban Brazil.
Dressler, William W; Oths, Kathryn S; Balieiro, Mauro C; Ribeiro, Rosane P; Dos Santos, José Ernesto
2012-01-01
The aim of this article is to develop a model of how culture shapes the body, based on two studies conducted in urban Brazil. Research was conducted in 1991 and 2001 in four socioeconomically distinct neighborhoods. First, cultural domain analyses were conducted with samples of key informants. The cultural domains investigated included lifestyle, social support, family life, national identity, and food. Cultural consensus analysis was used to confirm shared knowledge in each domain and to derive measures of cultural consonance. Cultural consonance assesses how closely an individual matches the cultural consensus model for each domain. Second, body composition, cultural consonance, and related variables were assessed in community surveys. Multiple regression analysis was used to examine the association of cultural consonance and body composition, controlling for standard covariates and competing explanatory variables. In 1991, in a survey of 260 individuals, cultural consonance had a curvilinear association with the body mass index that differed for men and women, controlling for sociodemographic and dietary variables. In 2001, in a survey of 267 individuals, cultural consonance had a linear association with abdominal circumference that differed for men and women, controlling for sociodemographic and dietary variables. In general, as cultural consonance increases, body mass index and abdominal circumference decline, more strongly for women than men. As individuals, in their own beliefs and behaviors, more closely approximate shared cultural models in socially salient domains, body composition also more closely approximates the cultural prototype of the body. Copyright © 2012 Wiley Periodicals, Inc.
Cousineau, Marion; Bidelman, Gavin M.; Peretz, Isabelle; Lehmann, Alexandre
2015-01-01
Some combinations of musical tones sound pleasing to Western listeners, and are termed consonant, while others sound discordant, and are termed dissonant. The perceptual phenomenon of consonance has been traced to the acoustic property of harmonicity. It has been repeatedly shown that neural correlates of consonance can be found as early as the auditory brainstem as reflected in the harmonicity of the scalp-recorded frequency-following response (FFR). “Neural Pitch Salience” (NPS) measured from FFRs—essentially a time-domain equivalent of the classic pattern recognition models of pitch—has been found to correlate with behavioral judgments of consonance for synthetic stimuli. Following the idea that the auditory system has evolved to process behaviorally relevant natural sounds, and in order to test the generalizability of this finding made with synthetic tones, we recorded FFRs for consonant and dissonant intervals composed of synthetic and natural stimuli. We found that NPS correlated with behavioral judgments of consonance and dissonance for synthetic but not for naturalistic sounds. These results suggest that while some form of harmonicity can be computed from the auditory brainstem response, the general percept of consonance and dissonance is not captured by this measure. It might either be represented in the brainstem in a different code (such as place code) or arise at higher levels of the auditory pathway. Our findings further illustrate the importance of using natural sounds, as a complementary tool to fully-controlled synthetic sounds, when probing auditory perception. PMID:26720000
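The "Neural Pitch Salience" metric above is derived from the FFR with a pattern-recognition pitch model; as a loose, simplified stand-in for the idea, a time-domain salience score can be sketched as the peak of the normalized autocorrelation within the pitch range. The sample rate, dyad frequencies, and lag bounds below are illustrative assumptions, not the authors' parameters:

```python
import math

SR = 8000  # sample rate in Hz (assumed for this sketch)

def dyad(f1, f2, dur=0.5):
    """Synthesize a two-tone dyad as a list of samples."""
    n = int(SR * dur)
    return [math.sin(2 * math.pi * f1 * t / SR) +
            math.sin(2 * math.pi * f2 * t / SR) for t in range(n)]

def pitch_salience(x, fmin=50.0, fmax=400.0):
    """Peak of the normalized autocorrelation over lags in the pitch range:
    near 1 for a strongly periodic signal, lower for aperiodic ones."""
    n = len(x)
    mean_power = sum(v * v for v in x) / n
    best = 0.0
    for lag in range(int(SR / fmax), int(SR / fmin) + 1):
        r = sum(x[i] * x[i + lag] for i in range(n - lag)) / (n - lag)
        best = max(best, r / mean_power)
    return best

s_fifth = pitch_salience(dyad(200, 300))        # 3:2 ratio, periodic at 100 Hz
s_second = pitch_salience(dyad(200, 213.3333))  # ~16:15 ratio, less periodic
```

On this toy measure the consonant fifth scores higher than the dissonant minor second, mirroring the ordering reported for synthetic stimuli; the study's point is that such measures fail to generalize to naturalistic sounds.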
Spencer, Caroline; Weber-Fox, Christine
2014-01-01
Purpose In preschool children, we investigated whether expressive and receptive language, phonological, articulatory, and/or verbal working memory proficiencies aid in predicting eventual recovery or persistence of stuttering. Methods Participants were 65 children: 25 who do not stutter (CWNS) and 40 who stutter (CWS), recruited at ages 3;9–5;8. At initial testing, participants were administered the Test of Auditory Comprehension of Language, 3rd edition (TACL-3), Structured Photographic Expressive Language Test, 3rd edition (SPELT-3), Bankson-Bernthal Test of Phonology-Consonant Inventory subtest (BBTOP-CI), Nonword Repetition Test (NRT; Dollaghan & Campbell, 1998), and Test of Auditory Perceptual Skills-Revised (TAPS-R) auditory number memory and auditory word memory subtests. Stuttering behaviors of CWS were assessed in subsequent years, forming groups whose stuttering eventually persisted (CWS-Per; n=19) or recovered (CWS-Rec; n=21). Proficiency scores in morphosyntactic skills, consonant production, verbal working memory for known words, and phonological working memory and speech production for novel nonwords obtained at the initial testing were analyzed for each group. Results CWS-Per were less proficient than CWNS and CWS-Rec in measures of consonant production (BBTOP-CI) and repetition of novel phonological sequences (NRT). In contrast, receptive language, expressive language, and verbal working memory abilities did not distinguish CWS-Rec from CWS-Per. Binary logistic regression analysis indicated that preschool BBTOP-CI scores and overall NRT proficiency significantly predicted future recovery status. Conclusion Results suggest that phonological and speech articulation abilities in the preschool years should be considered with other predictive factors as part of a comprehensive risk assessment for the development of chronic stuttering. PMID:25173455
Phonetic Aspects of Children's Elicited Word Revisions.
ERIC Educational Resources Information Center
Paul-Brown, Diane; Yeni-Komshian, Grace H.
A study of the phonetic changes occurring when a speaker attempts to revise an unclear word for a listener focuses on changes made in the sound segment duration to maximize differences between phonemes. In the study, five-year-olds were asked by adults to revise words differing in voicing of initial and final stop consonants; a control group of…
Twenty-Four-Month-Olds' Perception of Word-Medial Onsets and Codas
ERIC Educational Resources Information Center
Wang, Yuanyuan; Seidl, Amanda
2016-01-01
Recent work has shown that children have detailed phonological representations of consonants at both word-initial and word-final edges. Nonetheless, it remains unclear whether onsets and codas are equally represented by young learners, since word edges are isomorphic with syllable edges in this work. The current study sought to explore toddlers'…
Data from Russian Help to Determine in Which Languages the Possible Word Constraint Applies
ERIC Educational Resources Information Center
Alexeeva, Svetlana; Frolova, Anastasia; Slioussar, Natalia
2017-01-01
The Possible Word Constraint, or PWC, is a speech segmentation principle that prohibits postulating word boundaries if a remaining segment contains only consonants. The PWC was initially formulated for English, where all words contain a vowel, and was claimed to hold universally after being confirmed for various other languages. However, it is crucial to…
Production of Consonants by Prelinguistically Deaf Children with Cochlear Implants
ERIC Educational Resources Information Center
Bouchard, Marie-Eve Gaul; Le Normand, Marie-Therese; Cohen, Henri
2007-01-01
Consonant production following the sensory restoration of audition was investigated in 22 prelinguistically deaf French children who received cochlear implants. Spontaneous speech productions were recorded at 6, 12, and 18 months post-surgery and consonant inventories were derived from both glossable and non-glossable phones using two acquisition…
Palatalization in Romanian: Experimental and Theoretical Approaches
ERIC Educational Resources Information Center
Spinu, Laura
2010-01-01
Within the larger context of the Romance languages, Romanian stands alone in exhibiting a surface contrast between plain and palatalized consonants (that is, consonants with a secondary palatal articulation). While the properties of secondary palatalization are well known for language families in which the set of palatalized consonants is…
A Vowel Is a Vowel: Generalizing Newly Learned Phonotactic Constraints to New Contexts
ERIC Educational Resources Information Center
Chambers, Kyle E.; Onishi, Kristine H.; Fisher, Cynthia
2010-01-01
Adults can learn novel phonotactic constraints from brief listening experience. We investigated the representations underlying phonotactic learning by testing generalization to syllables containing new vowels. Adults heard consonant-vowel-consonant study syllables in which particular consonants were artificially restricted to the onset or coda…
An EPG Study of Palatal Consonants in Two Australian Languages
ERIC Educational Resources Information Center
Tabain, Marija; Fletcher, Janet; Butcher, Andrew
2011-01-01
This study presents EPG (electro-palatographic) data on (alveo-)palatal consonants from two Australian languages, Arrernte and Warlpiri. (Alveo-)palatal consonants are phonemic for stop, lateral and nasal manners of articulation in both languages, and are laminal articulations. However, in Arrernte, these lamino-(alveo-)palatals contrast with…
Bones, Oliver; Plack, Christopher J
2015-03-04
When two musical notes with simple frequency ratios are played simultaneously, the resulting musical chord is pleasing and evokes a sense of resolution or "consonance". Complex frequency ratios, on the other hand, evoke feelings of tension or "dissonance". Consonance and dissonance form the basis of harmony, a central component of Western music. In earlier work, we provided evidence that consonance perception is based on neural temporal coding in the brainstem (Bones et al., 2014). Here, we show that for listeners with clinically normal hearing, aging is associated with a decline in both the perceptual distinction and the distinctiveness of the neural representations of different categories of two-note chords. Compared with younger listeners, older listeners rated consonant chords as less pleasant and dissonant chords as more pleasant. Older listeners also had less distinct neural representations of consonant and dissonant chords as measured using a Neural Consonance Index derived from the electrophysiological "frequency-following response." The results withstood a control for the effect of age on general affect, suggesting that different mechanisms are responsible for the perceived pleasantness of musical chords and affective voices and that, for listeners with clinically normal hearing, age-related differences in consonance perception are likely to be related to differences in neural temporal coding. Copyright © 2015 Bones and Plack.
Locus equations and coarticulation in three Australian languages.
Graetzer, Simone; Fletcher, Janet; Hajek, John
2015-02-01
Locus equations were applied to F2 data for bilabial, alveolar, retroflex, palatal, and velar plosives in three Australian languages. In addition, F2 variance at the vowel-consonant boundary, and, by extension, consonantal coarticulatory sensitivity, was measured. The locus equation slopes revealed that there were place-dependent differences in the magnitude of vowel-to-consonant coarticulation. As in previous studies, the non-coronal (bilabial and velar) consonants tended to be associated with the highest slopes, palatal consonants tended to be associated with the lowest slopes, and alveolar and retroflex slopes tended to be low to intermediate. Similarly, F2 variance measurements indicated that non-coronals displayed greater coarticulatory sensitivity to adjacent vowels than did coronals. Thus, both the magnitude of vowel-to-consonant coarticulation and the magnitude of consonantal coarticulatory sensitivity were seen to vary inversely with the magnitude of consonantal articulatory constraint. The findings indicated that, unlike results reported previously for European languages such as English, anticipatory vowel-to-consonant coarticulation tends to exceed carryover coarticulation in these Australian languages. Accordingly, on the F2 variance measure, consonants tended to be more sensitive to the coarticulatory effects of the following vowel. Prosodic prominence of vowels was a less significant factor in general, although certain language-specific patterns were observed.
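A locus equation is a linear regression of F2 at the consonant-vowel boundary on F2 at the vowel midpoint, and its slope is read as an index of vowel-to-consonant coarticulation (near 1 = strong coarticulation, near 0 = coarticulation-resistant). A minimal sketch of the fit, using invented formant values rather than the study's measurements:

```python
def locus_equation(f2_vowel, f2_onset):
    """Least-squares fit of the locus equation F2_onset = k * F2_vowel + c.
    Returns the slope k (coarticulation index) and intercept c (in Hz)."""
    n = len(f2_vowel)
    mx = sum(f2_vowel) / n
    my = sum(f2_onset) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(f2_vowel, f2_onset))
    sxx = sum((x - mx) ** 2 for x in f2_vowel)
    k = sxy / sxx
    c = my - k * mx
    return k, c

# Hypothetical F2 values (Hz) at vowel midpoint vs consonant boundary
# for one consonant across several vowel contexts:
f2_v = [800, 1200, 1700, 2100, 2400]
f2_o = [900, 1250, 1650, 2000, 2250]
k, c = locus_equation(f2_v, f2_o)  # slope ≈ 0.84: fairly high coarticulation
```

On this measure, the high slopes reported for bilabials and velars and the low slopes for palatals would fall out as differences in k across consonant places.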
Perception of initial obstruent voicing is influenced by gestural organization
Best, Catherine T.; Hallé, Pierre A.
2009-01-01
Cross-language differences in phonetic settings for phonological contrasts of stop voicing have posed a challenge for attempts to relate specific phonological features to specific phonetic details. We probe the phonetic-phonological relationship for voicing contrasts more broadly, analyzing in particular their relevance to nonnative speech perception, from two theoretical perspectives: feature geometry and articulatory phonology. Because these perspectives differ in assumptions about temporal/phasing relationships among features/gestures within syllable onsets, we undertook a cross-language investigation on perception of obstruent (stop, fricative) voicing contrasts in three nonnative onsets that use a common set of features/gestures but with differing time-coupling. Listeners of English and French, which differ in their phonetic settings for word-initial stop voicing distinctions, were tested on perception of three onset types, all nonnative to both English and French, that differ in how initial obstruent voicing is coordinated with a lateral feature/gesture and additional obstruent features/gestures. The targets, listed from least complex to most complex onsets, were: a lateral fricative voicing distinction (Zulu /ɬ/-ɮ/), a laterally-released affricate voicing distinction (Tlingit /tɬ/-/dɮ/), and a coronal stop voicing distinction in stop+/l/ clusters (Hebrew /tl/-/dl/). English and French listeners' performance reflected the differences in their native languages' stop voicing distinctions, compatible with prior perceptual studies on singleton consonant onsets. However, both groups' abilities to perceive voicing as a separable parameter also varied systematically with the structure of the target onsets, supporting the notion that the gestural organization of syllable onsets systematically affects perception of initial voicing distinctions. PMID:20228878
Hodges, Rosemary; Munro, Natalie; Baker, Elise; McGregor, Karla; Heard, Rob
2017-01-01
Although verbal imitation can provide a valuable window into the developing language abilities of toddlers, some toddlers find verbal imitation challenging and will not comply with tests that involve elicited verbal imitation. The characteristics of stimuli that are offered to toddlers for imitation may influence how easy or hard it is for them to imitate. This study presents a new test of elicited imitation-the Monosyllable Imitation Test for Toddlers (MITT)-comprising stimuli of varying characteristics and test features designed to optimize compliance. To investigate whether the stimulus characteristics of neighbourhood density and consonant complexity have independent and/or convergent influences on imitation accuracy; and to examine non-compliance rates and diagnostic accuracy of the MITT and an existing test, the Test of Early Nonword Repetition (TENR) (Stokes and Klee 2009a). Fifty-two toddlers (25-35 months) participated. Twenty-six had typically developing language (TDs) and 26 were defined as late talkers (LTs) based on parent-reported vocabulary. The MITT stimuli were created by manipulating both neighbourhood density (dense or sparse) and consonant complexity (early- or late-developing initial consonant). The MITT was designed to maximize compliance by: (1) using eight monosyllabic stimuli, (2) providing three exposures to stimuli and (3) embedding imitation in a motivating context: a computer animation with reasons for imitation. Stimulus characteristics influenced imitation accuracy in TDs and LTs. For TDs, neighbourhood density had an independent influence, whereas for LTs consonant complexity had an independent influence. These characteristics also had convergent influences. For TDs, stimuli were all equally easy to imitate, except those that were both sparse and contained a late-developing consonant which were harder to imitate. 
For LTs, stimuli that were both dense and contained an early-developing consonant were easier to imitate than any other stimuli. Two LTs and no TDs were non-compliant with the MITT. With the TENR, five LTs and two TDs were non-compliant. The MITT and TENR yielded similar levels of diagnostic sensitivity, but the TENR offered higher specificity rates. Subsets of stimuli from the MITT and the TENR also showed diagnostic promise when explored post-hoc. Stimulus characteristics converge to influence imitation accuracy in both TD and LT toddlers and therefore should be considered when designing stimuli. The MITT resulted in better compliance than the TENR, but the TENR offered higher specificity. Insights about late talking, elicited imitation and speech production capabilities are discussed. © 2016 Royal College of Speech and Language Therapists.
Indifference to dissonance in native Amazonians reveals cultural variation in music perception.
McDermott, Josh H; Schultz, Alan F; Undurraga, Eduardo A; Godoy, Ricardo A
2016-07-28
Music is present in every culture, but the degree to which it is shaped by biology remains debated. One widely discussed phenomenon is that some combinations of notes are perceived by Westerners as pleasant, or consonant, whereas others are perceived as unpleasant, or dissonant. The contrast between consonance and dissonance is central to Western music and its origins have fascinated scholars since the ancient Greeks. Aesthetic responses to consonance are commonly assumed by scientists to have biological roots, and thus to be universally present in humans. Ethnomusicologists and composers, in contrast, have argued that consonance is a creation of Western musical culture. The issue has remained unresolved, partly because little is known about the extent of cross-cultural variation in consonance preferences. Here we report experiments with the Tsimane'--a native Amazonian society with minimal exposure to Western culture--and comparison populations in Bolivia and the United States that varied in exposure to Western music. Participants rated the pleasantness of sounds. Despite exhibiting Western-like discrimination abilities and Western-like aesthetic responses to familiar sounds and acoustic roughness, the Tsimane' rated consonant and dissonant chords and vocal harmonies as equally pleasant. By contrast, Bolivian city- and town-dwellers exhibited significant preferences for consonance, albeit to a lesser degree than US residents. The results indicate that consonance preferences can be absent in cultures sufficiently isolated from Western music, and are thus unlikely to reflect innate biases or exposure to harmonic natural sounds. The observed variation in preferences is presumably determined by exposure to musical harmony, suggesting that culture has a dominant role in shaping aesthetic responses to music.
Voice Onset Time Production in Speakers with Alzheimer's Disease
ERIC Educational Resources Information Center
Baker, Julie; Ryalls, Jack; Brice, Alejandro; Whiteside, Janet
2007-01-01
In the present study, voice onset time (VOT) measurements were compared between a group of individuals with moderate Alzheimer's disease (AD) and a group of healthy age- and gender-matched peers. Participants read a list of consonant-vowel-consonant (CVC) words, which included the six stop consonants. The VOT measurements were made from…
Cross-Linguistic Differences in the Immediate Serial Recall of Consonants versus Vowels
ERIC Educational Resources Information Center
Kissling, Elizabeth M.
2012-01-01
The current study investigated native English and native Arabic speakers' phonological short-term memory for sequences of consonants and vowels. Phonological short-term memory was assessed in immediate serial recall tasks conducted in Arabic and English for both groups. Participants (n = 39) heard series of six consonant-vowel syllables and wrote…
ERIC Educational Resources Information Center
Kurowski, Kathleen M.; Blumstein, Sheila E.; Palumbo, Carole L.; Waldstein, Robin S.; Burton, Martha W.
2007-01-01
The present study investigated the articulatory implementation deficits of Broca's and Wernicke's aphasics and their potential neuroanatomical correlates. Five Broca's aphasics, two Wernicke's aphasics, and four age-matched normal speakers produced consonant-vowel-(consonant) real word tokens consisting of [m, n] followed by [i, e, a, o, u]. Three…
ERIC Educational Resources Information Center
Knobel, Mark; Caramazza, Alfonso
2007-01-01
Caramazza et al. [Caramazza, A., Chialant, D., Capasso, R., & Miceli, G. (2000). Separable processing of consonants and vowels. "Nature," 403(6768), 428-430.] report two patients who exhibit a double dissociation between consonants and vowels in speech production. The patterning of this double dissociation cannot be explained by appealing to…
ERIC Educational Resources Information Center
Kambuziya, Aliyeh Kord-e Zafaranlu; Dehghan, Masoud
2011-01-01
This paper investigates the epenthesis process in Persian, with the aim of characterizing vowel and consonant insertion in the Persian lexicon. The survey is closely tied to the description of epenthetic consonants and the conditions in which they are used. Since no word in Persian may begin with a vowel, hiatus can't be…
ERIC Educational Resources Information Center
Tamura, Shunsuke; Ito, Kazuhito; Hirose, Nobuyuki; Mori, Shuji
2018-01-01
Purpose: The purpose of this study was to investigate the psychophysical boundary used for categorization of voiced-voiceless stop consonants in native Japanese speakers. Method: Twelve native Japanese speakers participated in the experiment. The stimuli were synthetic stop consonant-vowel stimuli varying in voice onset time (VOT) with…
Double Consonants in English: Graphemic, Morphological, Prosodic and Etymological Determinants
ERIC Educational Resources Information Center
Berg, Kristian
2016-01-01
What determines consonant doubling in English? This question is pursued by using a large lexical database to establish systematic correlations between spelling, phonology and morphology. The main insights are: Consonant doubling is most regular at morpheme boundaries. It can be described in graphemic terms alone, i.e. without reference to…
ERIC Educational Resources Information Center
Bouchon, Camillia; Floccia, Caroline; Fux, Thibaut; Adda-Decker, Martine; Nazzi, Thierry
2015-01-01
Consonants and vowels differ acoustically and articulatorily, but also functionally: Consonants are more relevant for lexical processing, and vowels for prosodic/syntactic processing. These functional biases could be powerful bootstrapping mechanisms for learning language, but their developmental origin remains unclear. The relative importance of…
Vocalization Rate and Consonant Production in Toddlers at High and Low Risk for Autism
ERIC Educational Resources Information Center
Chenausky, Karen; Nelson, Charles, III.; Tager-Flusberg, Helen
2017-01-01
Background: Previous work has documented lower vocalization rate and consonant acquisition delays in toddlers with autism spectrum disorder (ASD). We investigated differences in these variables at 12, 18, and 24 months in toddlers at high and low risk for ASD. Method: Vocalization rate and number of different consonants were obtained from speech…
Biomechanically Preferred Consonant-Vowel Combinations Fail to Appear in Adult Spoken Corpora
ERIC Educational Resources Information Center
Whalen, D. H.; Giulivi, Sara; Nam, Hosung; Levitt, Andrea G.; Halle, Pierre; Goldstein, Louis M.
2012-01-01
Certain consonant/vowel (CV) combinations are more frequent than would be expected from the individual C and V frequencies alone, both in babbling and, to a lesser extent, in adult language, based on dictionary counts: Labial consonants co-occur with central vowels more often than chance would dictate; coronals co-occur with front vowels, and…
Perception of Voicing Cues by Children with Early Otitis Media with and without Language Impairment.
ERIC Educational Resources Information Center
Groenen, Paul; And Others
1996-01-01
This study examined identification and discrimination of initial bilabial stop consonants differing in voicing by 10 9-year-old children with a history of severe otitis media with effusion (OME). Long-term effects of OME were found for both identification and discrimination performance. In cases of language impairment with early OME, no additional…
Voice Onset Time for Female Trained and Untrained Singers during Speech and Singing
ERIC Educational Resources Information Center
McCrea, Christopher R.; Morris, Richard J.
2007-01-01
The purpose of this study was to examine the voice onset times of female trained and untrained singers during spoken and sung tasks. Thirty females were digitally recorded speaking and singing short phrases containing the English stop consonants /p/ and /b/ in the word-initial position. Voice onset time was measured for each phoneme and…
ERIC Educational Resources Information Center
Yeni-Komshian, Grace; And Others
This study was designed to compare children and adults on their initial ability to identify and reproduce novel speech sounds and to evaluate their performance after receiving several training sessions in producing these sounds. The novel speech sounds used were two voiceless fricatives which are consonant phonemes in Arabic but which are…
Orthography affects second language speech: Double letters and geminate production in English.
Bassetti, Bene
2017-11-01
Second languages (L2s) are often learned through spoken and written input, and L2 orthographic forms (spellings) can lead to non-native-like pronunciation. The present study investigated whether orthography can lead experienced learners of English L2 to make a phonological contrast in their speech production that does not exist in English. Double consonants represent geminate (long) consonants in Italian but not in English. In Experiment 1, native English speakers and English L2 speakers (Italians) were asked to read aloud English words spelled with a single or double target consonant letter, and consonant duration was compared. The English L2 speakers produced the same consonant as shorter when it was spelled with a single letter, and longer when spelled with a double letter. Spelling did not affect consonant duration in native English speakers. In Experiment 2, effects of orthographic input were investigated by comparing 2 groups of English L2 speakers (Italians) performing a delayed word repetition task with or without orthographic input; the same orthographic effects were found in both groups. These results provide arguably the first evidence that L2 orthographic forms can lead experienced L2 speakers to make a contrast in their L2 production that does not exist in the language. The effect arises because L2 speakers are affected by the interaction between the L2 orthographic form (number of letters), and their native orthography-phonology mappings, whereby double consonant letters represent geminate consonants. These results have important implications for future studies investigating the effects of orthography on native phonology and for L2 phonological development models. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Cannito, Michael P; Chorna, Lesya B; Kahane, Joel C; Dworkin, James P
2014-05-01
This study evaluated the hypotheses that sentence production by speakers with adductor (AD) and abductor (AB) spasmodic dysphonia (SD) may be differentially influenced by consonant voicing and manner features, in comparison with healthy, matched, nondysphonic controls. This was a prospective, single blind study, using a between-groups, repeated measures design for the independent variables of perceived voice quality and sentence duration. Sixteen subjects with ADSD and 10 subjects with ABSD, as well as 26 matched healthy controls produced four short, simple sentences that were systematically loaded with voiced or voiceless consonants of either obstruent or continuant manner categories. Experienced voice clinicians, who were "blind" as to speakers' group affiliations, used visual analog scaling to judge the overall voice quality of each sentence. Acoustic sentence durations were also measured. Speakers with ABSD or ADSD demonstrated significantly poorer than normal voice quality on all sentences. Speakers with ABSD exhibited longer than normal duration for voiceless consonant sentences. Speakers with ADSD had poorer voice quality for voiced than for voiceless consonant sentences. Speakers with ABSD had longer durations for voiceless than for voiced consonant sentences. The two subtypes of SD exhibit differential performance on the basis of consonant voicing in short, simple sentences; however, each subgroup manifested voicing-related differences on a different variable (voice quality vs sentence duration). Findings suggest different underlying pathophysiological mechanisms for ABSD and ADSD. Findings also support inclusion of short, simple sentences containing voiced or voiceless consonants as part of the diagnostic protocol for SD, with measurement of sentence duration in addition to judgments of voice quality severity. Copyright © 2014 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
Li, Feipeng; Trevino, Andrea; Menon, Anjali; Allen, Jont B
2012-10-01
In a previous study on plosives, the 3-Dimensional Deep Search (3DDS) method for the exploration of the necessary and sufficient cues for speech perception was introduced (Li et al., 2010, J. Acoust. Soc. Am. 127(4), 2599-2610). Here, this method is used to isolate the spectral cue regions for perception of the American English fricatives /ʃ, ʒ, s, z, f, v, θ, ð/ in time, frequency, and intensity. The fricatives are analyzed in the context of consonant-vowel utterances, using the vowel /ɑ/. The necessary cues were found to be contained in the frication noise for /ʃ, ʒ, s, z, f, v/. 3DDS analysis isolated the cue regions of /s, z/ between 3.6 and 8 kHz and /ʃ, ʒ/ between 1.4 and 4.2 kHz. Some utterances were found to contain acoustic components that were unnecessary for correct perception, but caused listeners to hear non-target consonants when the primary cue region was removed; such acoustic components are labeled "conflicting cue regions." The amplitude modulation of the high-frequency frication region by the fundamental frequency (F0) was found to be a sufficient cue for voicing. Overall, the 3DDS method allows one to analyze the effects of natural speech components without initial assumptions about where perceptual cues lie in time-frequency space or which elements of production they correspond to.
ERIC Educational Resources Information Center
Vergara-Martinez, Marta; Perea, Manuel; Marin, Alejandro; Carreiras, Manuel
2011-01-01
Recent research suggests that there is a processing distinction between consonants and vowels in visual-word recognition. Here we conjointly examine the time course of consonants and vowels in processes of letter identity and letter position assignment. Event related potentials (ERPs) were recorded while participants read words and pseudowords in…
ERIC Educational Resources Information Center
Recasens, Daniel
2015-01-01
Purpose: The goal of this study was to ascertain the effect of changes in stress and speech rate on vowel coarticulation in vowel-consonant-vowel sequences. Method: Data on second formant coarticulatory effects as a function of changing /i/ versus /a/ were collected for five Catalan speakers' productions of vowel-consonant-vowel sequences with the…
ERIC Educational Resources Information Center
Jin, Su-Hyun; Liu, Chang
2014-01-01
Purpose: The purpose of this study was to examine the intelligibility of English consonants and vowels produced by Chinese-native (CN), and Korean-native (KN) students enrolled in American universities. Method: 16 English-native (EN), 32 CN, and 32 KN speakers participated in this study. The intelligibility of 16 American English consonants and 16…
Cognitive interference can be mitigated by consonant music and facilitated by dissonant music.
Masataka, Nobuo; Perlovsky, Leonid
2013-01-01
Debates on the origins of consonance and dissonance in music have a long history. While some scientists argue that consonance judgments are an acquired competence based on exposure to the musical-system-specific knowledge of a particular culture, others favor a biological explanation for the observed preference for consonance. Here we provide experimental confirmation that this preference plays an adaptive role in human cognition: it reduces cognitive interference. The results of our experiment reveal that exposure to a Mozart minuet mitigates interference, whereas, conversely, when the music is modified to consist of mostly dissonant intervals the interference effect is intensified.
Mild Dissonance Preferred Over Consonance in Single Chord Perception
Eerola, Tuomas
2016-01-01
Previous research on harmony perception has mainly been concerned with horizontal aspects of harmony, turning less attention to how listeners perceive psychoacoustic qualities and emotions in single isolated chords. A recent study found mild dissonances to be more preferred than consonances in single chord perception, although the authors did not systematically vary register and consonance in their study; these omissions were explored here. An online empirical experiment was conducted where participants (N = 410) evaluated chords on the dimensions of Valence, Tension, Energy, Consonance, and Preference; 15 different chords were played with piano timbre across two octaves. The results suggest significant differences on all dimensions across chord types, and a strong correlation between perceived dissonance and tension. The register and inversions contributed to the evaluations significantly, nonmusicians distinguishing between triadic inversions similarly to musicians. The mildly dissonant minor ninth, major ninth, and minor seventh chords were rated highest for preference, regardless of musical sophistication. The role of theoretical explanations such as aggregate dyadic consonance, the inverted-U hypothesis, and psychoacoustic roughness, harmonicity, and sharpness will be discussed to account for the preference of mild dissonance over consonance in single chord perception. PMID:27433333
Perea, Manuel; Acha, Joana
2009-02-01
Recently, a number of input coding schemes (e.g., SOLAR model, SERIOL model, open-bigram model, overlap model) have been proposed that capture the transposed-letter priming effect (i.e., faster response times for jugde-JUDGE than for jupte-JUDGE). In their current version, these coding schemes do not assume any processing differences between vowels and consonants. However, in a lexical decision task, Perea and Lupker (2004, JML; Lupker, Perea, & Davis, 2008, L&CP) reported that transposed-letter priming effects occurred for consonant transpositions but not for vowel transpositions. This finding poses a challenge for these recently proposed coding schemes. Here, we report four masked priming experiments that examine whether this consonant/vowel dissociation in transposed-letter priming is task-specific. In Experiment 1, we used a lexical decision task and found a transposed-letter priming effect only for consonant transpositions. In Experiments 2-4, we employed a same-different task - a task which taps early perceptual processes - and found a robust transposed-letter priming effect that did not interact with consonant/vowel status. We examine the implications of these findings for the front-end of the models of visual word recognition.
Functional organization for musical consonance and tonal pitch hierarchy in human auditory cortex.
Bidelman, Gavin M; Grall, Jeremy
2014-11-01
Pitch relationships in music are characterized by their degree of consonance, a hierarchical perceptual quality that distinguishes how pleasant musical chords/intervals sound to the ear. The origins of consonance have been debated since the ancient Greeks. To elucidate the neurobiological mechanisms underlying these musical fundamentals, we recorded neuroelectric brain activity while participants listened passively to various chromatic musical intervals (simultaneously sounding pitches) varying in their perceptual pleasantness (i.e., consonance/dissonance). Dichotic presentation eliminated acoustic and peripheral contributions that often confound explanations of consonance. We found that neural representations for pitch in early human auditory cortex code perceptual features of musical consonance and follow a hierarchical organization according to music-theoretic principles. These neural correlates emerge pre-attentively within ~ 150 ms after the onset of pitch, are segregated topographically in superior temporal gyrus with a rightward hemispheric bias, and closely mirror listeners' behavioral valence preferences for the chromatic tone combinations inherent to music. A perceptual-based organization implies that parallel to the phonetic code for speech, elements of music are mapped within early cerebral structures according to higher-order, perceptual principles and the rules of Western harmony rather than simple acoustic attributes. Copyright © 2014 Elsevier Inc. All rights reserved.
Dickinson, Ann-Marie; Baker, Richard; Siciliano, Catherine; Munro, Kevin J
2014-10-01
To identify which training approach, if any, is most effective for improving perception of frequency-compressed speech. A between-subject design using repeated measures. Forty young adults with normal hearing were randomly allocated to one of four groups: a training group (sentence or consonant) or a control group (passive exposure or test-only). Test and training material differed in terms of material and speaker. On average, sentence training and passive exposure led to significantly improved sentence recognition (11.0% and 11.7%, respectively) compared with the consonant training group (2.5%) and test-only group (0.4%), whilst consonant training led to significantly improved consonant recognition (8.8%) compared with the sentence training group (1.9%), passive exposure group (2.8%), and test-only group (0.8%). Sentence training led to improved sentence recognition, whilst consonant training led to improved consonant recognition. This suggests learning transferred between speakers and material but not stimuli. Passive exposure to sentence material led to an improvement in sentence recognition that was equivalent to gains from active training. This suggests that it may be possible to adapt passively to frequency-compressed speech.
Psychophysical basis for consonant musical intervals
NASA Astrophysics Data System (ADS)
Resnick, L.
1981-06-01
A suggestion is made to explain the acceptance of certain musical intervals as consonant and others as dissonant. The proposed explanation involves the relation between the time required to perceive a definite pitch and the period of a complex tone. If the former time is greater than the latter, the tone is consonant; otherwise it is dissonant. A quantitative examination leads to agreement with empirical data.
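The proposed criterion lends itself to a quick numerical check. In the sketch below (offered only as an illustration of the idea, not as the author's model), an interval whose frequencies stand in the ratio num:den in lowest terms repeats with the period of a common fundamental, and is classed as consonant when that period fits inside an assumed pitch-perception time; the ~30 ms figure is a hypothetical placeholder, not a value from the paper.

```python
from math import gcd

def interval_period_ms(f_lower_hz: float, num: int, den: int) -> float:
    """Period (ms) of the combined waveform of two tones at f and
    f * (num/den): with the ratio in lowest terms, the pair shares a
    common fundamental of f/den, so the joint period is den/f."""
    den //= gcd(num, den)
    return 1000.0 * den / f_lower_hz

# Hypothetical pitch-perception time; tens of ms is the order of
# magnitude at issue, but the exact figure here is a placeholder.
PERCEPTION_TIME_MS = 30.0

def is_consonant(f_lower_hz: float, num: int, den: int) -> bool:
    """Resnick-style rule: consonant if the joint period is no longer
    than the time needed to perceive a definite pitch."""
    return interval_period_ms(f_lower_hz, num, den) <= PERCEPTION_TIME_MS

# Perfect fifth (3:2) on 220 Hz: period ≈ 9.1 ms -> consonant.
# Minor second (16:15) on 220 Hz: period ≈ 68.2 ms -> dissonant.
```

Under this rule the simple-ratio fifth comes out consonant and the complex-ratio minor second dissonant, matching the qualitative claim of the abstract.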
ERIC Educational Resources Information Center
Bedoin, Nathalie; Ferragne, Emmanuel; Marsico, Egidio
2010-01-01
Dichotic listening experiments show a right-ear advantage (REA), reflecting a left-hemisphere (LH) dominance. However, we found a decrease in REA when the initial stop consonants of two simultaneous French CVC words differed in voicing rather than place of articulation (Experiment 1). This result suggests that the right hemisphere (RH) is more…
ERIC Educational Resources Information Center
Misiurski, Cara; Blumstein, Sheila E.; Rissman, Jesse; Berman, Daniel
2005-01-01
This study examined the effects that the acoustic-phonetic structure of a stimulus exerts on the processes by which lexical candidates compete for activation. An auditory lexical decision paradigm was used to investigate whether shortening the VOT of an initial voiceless stop consonant in a real word results in the activation of the…
The Pedagogical Use of Mobile Speech Synthesis (TTS): Focus on French Liaison
ERIC Educational Resources Information Center
Liakin, Denis; Cardoso, Walcir; Liakina, Natallia
2017-01-01
We examine the impact of the pedagogical use of mobile TTS on the L2 acquisition of French liaison, a process by which a word-final consonant is pronounced at the beginning of the following word if the latter is vowel-initial (e.g. peti/t.a/mi => peti[ta]mi "boyfriend"). The study compares three groups of L2 French students learning…
Chenausky, Karen; Kernbach, Julius; Norton, Andrea; Schlaug, Gottfried
2017-01-01
We investigated the relationship between imaging variables for two language/speech-motor tracts and speech fluency variables in 10 minimally verbal (MV) children with autism. Specifically, we tested whether measures of white matter integrity-fractional anisotropy (FA) of the arcuate fasciculus (AF) and frontal aslant tract (FAT)-were related to change in percent syllable-initial consonants correct, percent items responded to, and percent syllable insertion errors (from best baseline to post 25 treatment sessions). Twenty-three MV children with autism spectrum disorder (ASD) received Auditory-Motor Mapping Training (AMMT), an intonation-based treatment to improve fluency in spoken output, and we report on seven who received a matched control treatment. Ten of the AMMT participants were able to undergo a magnetic resonance imaging study at baseline; their performance on baseline speech production measures is compared to that of the other two groups. No baseline differences were found between groups. A canonical correlation analysis (CCA) relating FA values for left- and right-hemisphere AF and FAT to speech production measures showed that FA of the left AF and right FAT were the largest contributors to the synthetic independent imaging-related variable. Change in percent syllable-initial consonants correct and percent syllable-insertion errors were the largest contributors to the synthetic dependent fluency-related variable. Regression analyses showed that FA values in left AF significantly predicted change in percent syllable-initial consonants correct, no FA variables significantly predicted change in percent items responded to, and FA of right FAT significantly predicted change in percent syllable-insertion errors. Results are consistent with previously identified roles for the AF in mediating bidirectional mapping between articulation and acoustics, and the FAT in its relationship to speech initiation and fluency. 
They further suggest a division of labor between the hemispheres, implicating the left hemisphere in accuracy of speech production and the right hemisphere in fluency in this population. Changes in response rate are interpreted as stemming from factors other than the integrity of these two fiber tracts. This study is the first to document the existence of a subgroup of MV children who experience increases in syllable-insertion errors as their speech develops in response to therapy.
Different Timescales for the Neural Coding of Consonant and Vowel Sounds
Perez, Claudia A.; Engineer, Crystal T.; Jakkamsetti, Vikram; Carraway, Ryan S.; Perry, Matthew S.
2013-01-01
Psychophysical, clinical, and imaging evidence suggests that consonant and vowel sounds have distinct neural representations. This study tests the hypothesis that consonant and vowel sounds are represented on different timescales within the same population of neurons by comparing behavioral discrimination with neural discrimination based on activity recorded in rat inferior colliculus and primary auditory cortex. Performance on 9 vowel discrimination tasks was highly correlated with neural discrimination based on spike count and was not correlated when spike timing was preserved. In contrast, performance on 11 consonant discrimination tasks was highly correlated with neural discrimination when spike timing was preserved and not when spike timing was eliminated. These results suggest that in the early stages of auditory processing, spike count encodes vowel sounds and spike timing encodes consonant sounds. These distinct coding strategies likely contribute to the robust nature of speech sound representations and may help explain some aspects of developmental and acquired speech processing disorders. PMID:22426334
Comesaña, Montserrat; Soares, Ana P; Marcet, Ana; Perea, Manuel
2016-11-01
In skilled adult readers, transposed-letter effects (jugde-JUDGE) are greater for consonant than for vowel transpositions. These differences are often attributed to phonological rather than orthographic processing. To examine this issue, we employed a scenario in which phonological involvement varies as a function of reading experience: A masked priming lexical decision task with 50-ms primes in adult and developing readers. Indeed, masked phonological priming at this prime duration has been consistently reported in adults, but not in developing readers (Davis, Castles, & Iakovidis, 1998). Thus, if consonant/vowel asymmetries in letter position coding with adults are due to phonological influences, transposed-letter priming should occur for both consonant and vowel transpositions in developing readers. Results with adults (Experiment 1) replicated the usual consonant/vowel asymmetry in transposed-letter priming. In contrast, no signs of an asymmetry were found with developing readers (Experiments 2-3). However, Experiments 1-3 did not directly test the existence of phonological involvement. To study this question, Experiment 4 manipulated the phonological prime-target relationship in developing readers. As expected, we found no signs of masked phonological priming. Thus, the present data favour an interpretation of the consonant/vowel dissociation in letter position coding as due to phonological rather than orthographic processing. © 2016 The British Psychological Society.
Chen, Fei; Loizou, Philipos C.
2012-01-01
Recent evidence suggests that spectral change, as measured by cochlea-scaled entropy (CSE), predicts speech intelligibility better than the information carried by vowels or consonants in sentences. Motivated by this finding, the present study investigates whether intelligibility indices implemented to include segments marked with significant spectral change better predict speech intelligibility in noise than measures that include all phonetic segments paying no attention to vowels/consonants or spectral change. The prediction of two intelligibility measures [normalized covariance measure (NCM), coherence-based speech intelligibility index (CSII)] is investigated using three sentence-segmentation methods: relative root-mean-square (RMS) levels, CSE, and traditional phonetic segmentation of obstruents and sonorants. While the CSE method makes no distinction between spectral changes occurring within vowels/consonants, the RMS-level segmentation method places more emphasis on the vowel-consonant boundaries wherein the spectral change is often most prominent, and perhaps most robust, in the presence of noise. Higher correlation with intelligibility scores was obtained when including sentence segments containing a large number of consonant-vowel boundaries than when including segments with highest entropy or segments based on obstruent/sonorant classification. These data suggest that in the context of intelligibility measures the type of spectral change captured by the measure is important. PMID:22559382
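The RMS-level segmentation mentioned above can be sketched as a frame classifier: each frame's RMS is expressed in dB relative to the whole-utterance RMS and binned into high/mid/low regions. The frame length and band edges (at or above -10 dB, -10 to -30 dB) below are illustrative assumptions, not the study's exact parameters.

```python
from math import log10, sqrt

def relative_rms_classes(samples, frame_len):
    """Label fixed-length frames 'high', 'mid', or 'low' by their RMS
    level relative to the whole-utterance RMS, a simplified stand-in
    for relative-RMS sentence segmentation. Band edges of -10 dB and
    -30 dB are conventional but assumed here."""
    overall = sqrt(sum(s * s for s in samples) / len(samples))
    labels = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        rms = sqrt(sum(s * s for s in frame) / frame_len)
        rel_db = 20 * log10(rms / overall) if rms > 0 else float("-inf")
        if rel_db >= -10:
            labels.append("high")      # vowel-dominated, high-energy frames
        elif rel_db >= -30:
            labels.append("mid")       # often consonant-vowel transitions
        else:
            labels.append("low")
    return labels
```

A segment-selection scheme of this kind would then feed only the chosen frames into an intelligibility index such as the NCM or CSII.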
Fogerty, Daniel
2014-01-01
The present study investigated the importance of overall segment amplitude and intrinsic segment amplitude modulation of consonants and vowels to sentence intelligibility. Sentences were processed according to three conditions that replaced consonant or vowel segments with noise matched to the long-term average speech spectrum. Segments were replaced with (1) low-level noise that distorted the overall sentence envelope, (2) segment-level noise that restored the overall syllabic amplitude modulation of the sentence, and (3) segment-modulated noise that further restored faster temporal envelope modulations during the vowel. Results from the first experiment demonstrated an incremental benefit with increasing resolution of the vowel temporal envelope. However, amplitude modulations of replaced consonant segments had a comparatively minimal effect on overall sentence intelligibility scores. A second experiment selectively noise-masked preserved vowel segments in order to equate overall performance of consonant-replaced sentences to that of the vowel-replaced sentences. Results demonstrated no significant effect of restoring consonant modulations during the interrupting noise when existing vowel cues were degraded. A third experiment demonstrated greater perceived sentence continuity with the preservation or addition of vowel envelope modulations. Overall, results support previous investigations demonstrating the importance of vowel envelope modulations to the intelligibility of interrupted sentences. PMID:24606291
Plack, Christopher J.
2015-01-01
When two musical notes with simple frequency ratios are played simultaneously, the resulting musical chord is pleasing and evokes a sense of resolution or “consonance”. Complex frequency ratios, on the other hand, evoke feelings of tension or “dissonance”. Consonance and dissonance form the basis of harmony, a central component of Western music. In earlier work, we provided evidence that consonance perception is based on neural temporal coding in the brainstem (Bones et al., 2014). Here, we show that for listeners with clinically normal hearing, aging is associated with a decline in both the perceptual distinction and the distinctiveness of the neural representations of different categories of two-note chords. Compared with younger listeners, older listeners rated consonant chords as less pleasant and dissonant chords as more pleasant. Older listeners also had less distinct neural representations of consonant and dissonant chords as measured using a Neural Consonance Index derived from the electrophysiological “frequency-following response.” The results withstood a control for the effect of age on general affect, suggesting that different mechanisms are responsible for the perceived pleasantness of musical chords and affective voices and that, for listeners with clinically normal hearing, age-related differences in consonance perception are likely to be related to differences in neural temporal coding. PMID:25740534
Koda, Hiroki; Basile, Muriel; Olivier, Marion; Remeuf, Kevin; Nagumo, Sumiharu; Blois-Heulin, Catherine; Lemasson, Alban
2013-08-01
The central position and universality of music in human societies raise the question of its phylogenetic origin. One of the most important properties of music involves harmonic musical intervals, in response to which humans show a spontaneous preference for consonant over dissonant sounds starting from early human infancy. Comparative studies conducted with organisms at different levels of the primate lineage are needed to understand the evolutionary scenario under which this phenomenon emerged. Although previous research found no preference for consonance in a New World monkey species, the question remained open for Old World monkeys. We used an experimental paradigm based on a sensory reinforcement procedure to test auditory preferences for consonant sounds in Campbell's monkeys (Cercopithecus campbelli campbelli), an Old World monkey species. Although a systematic preference for soft (70 dB) over loud (90 dB) control white noise was found, Campbell's monkeys showed no preference for either consonant or dissonant sounds. The preference for soft white noise validates our noninvasive experimental paradigm, which can be easily reused in any captive facility to test for auditory preferences. This would suggest that human preference for consonant sounds is not systematically shared with New and Old World monkeys. The sensitivity for harmonic musical intervals probably emerged very late in the primate lineage.
Speech-Like Rhythm in a Voiced and Voiceless Orangutan Call
Lameira, Adriano R.; Hardus, Madeleine E.; Bartlett, Adrian M.; Shumaker, Robert W.; Wich, Serge A.; Menken, Steph B. J.
2015-01-01
The evolutionary origins of speech remain obscure. Recently, it was proposed that speech derived from monkey facial signals which exhibit a speech-like rhythm of ∼5 open-close lip cycles per second. In monkeys, these signals may also be vocalized, offering a plausible evolutionary stepping stone towards speech. Three essential predictions remain, however, to be tested to assess this hypothesis' validity: (i) Great apes, our closest relatives, should likewise produce 5 Hz-rhythm signals, (ii) speech-like rhythm should involve calls articulatorily similar to consonants and vowels given that speech rhythm is the direct product of stringing together these two basic elements, and (iii) speech-like rhythm should be experience-based. Via cinematic analyses we demonstrate that an ex-entertainment orangutan produces two calls at a speech-like rhythm, coined “clicks” and “faux-speech.” Like voiceless consonants, clicks required no vocal fold action, but did involve independent manoeuvring over lips and tongue. In parallel to vowels, faux-speech showed harmonic and formant modulations, implying vocal fold and supralaryngeal action. This rhythm was several times faster than orangutan chewing rates, as observed in monkeys and humans. Critically, this rhythm was seven-fold faster, and contextually distinct, than any other known rhythmic calls described to date in the largest database of the orangutan repertoire ever assembled. The first two predictions advanced by this study are validated and, based on parsimony and exclusion of potential alternative explanations, initial support is given to the third prediction. Irrespective of the putative origins of these calls and underlying mechanisms, our findings demonstrate irrevocably that great apes are not respiratorily, articulatorily, or neurologically constrained for the production of consonant- and vowel-like calls at speech rhythm.
Orangutan clicks and faux-speech confirm the importance of rhythmic speech antecedents within the primate lineage, and highlight potential articulatory homologies between great ape calls and human consonants and vowels. PMID:25569211
Intra-oral pressure-based voicing control of electrolaryngeal speech with intra-oral vibrator.
Takahashi, Hirokazu; Nakao, Masayuki; Kikuchi, Yataro; Kaga, Kimitaka
2008-07-01
In normal speech, coordinated activities of intrinsic laryngeal muscles suspend a glottal sound at utterance of voiceless consonants, automatically realizing a voicing control. In electrolaryngeal speech, however, the lack of voicing control is one of the causes of unclear voice, voiceless consonants tending to be misheard as the corresponding voiced consonants. In the present work, we developed an intra-oral vibrator with an intra-oral pressure sensor that detected utterance of voiceless phonemes during the intra-oral electrolaryngeal speech, and demonstrated that an intra-oral pressure-based voicing control could improve the intelligibility of the speech. The test voices were obtained from one electrolaryngeal speaker and one normal speaker. We first investigated on the speech analysis software how a voice onset time (VOT) and first formant (F1) transition of the test consonant-vowel syllables contributed to voiceless/voiced contrasts, and developed an adequate voicing control strategy. We then compared the intelligibility of consonant-vowel syllables among the intra-oral electrolaryngeal speech with and without online voicing control. The increase of intra-oral pressure, typically with a peak ranging from 10 to 50 gf/cm2, could reliably identify utterance of voiceless consonants. The speech analysis and intelligibility test then demonstrated that a short VOT caused the misidentification of the voiced consonants due to a clear F1 transition. Finally, taking these results together, the online voicing control, which suspended the prosthetic tone while the intra-oral pressure exceeded 2.5 gf/cm2 and during the 35 milliseconds that followed, proved efficient to improve the voiceless/voiced contrast.
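The control rule reported here (suspend the prosthetic tone while intra-oral pressure exceeds 2.5 gf/cm² and for the following 35 ms) is simple enough to sketch as a per-sample gate. The sampling rate and per-sample processing below are illustrative assumptions about the implementation, not details from the study.

```python
def voicing_gate(pressure_samples, fs_hz, threshold=2.5, hold_ms=35.0):
    """Per-sample on/off gate for the electrolarynx tone.

    The tone is suspended while intra-oral pressure (gf/cm^2) exceeds
    `threshold` and for `hold_ms` afterwards, mirroring the voicing
    control rule described in the abstract."""
    hold = int(fs_hz * hold_ms / 1000.0)
    since_high = hold + 1  # samples since pressure last exceeded threshold
    gate = []
    for p in pressure_samples:
        since_high = 0 if p > threshold else since_high + 1
        gate.append(since_high > hold)  # True = drive the vibrator
    return gate

# At 1 kHz sampling, a 5 gf/cm^2 pressure burst (a voiceless consonant)
# suspends the tone for the burst itself plus 35 ms afterwards.
```

The hold window approximates the voice onset time contrast the study found necessary for voiceless/voiced distinctions to survive.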
Getting the beat: entrainment of brain activity by musical rhythm and pleasantness.
Trost, Wiebke; Frühholz, Sascha; Schön, Daniele; Labbé, Carolina; Pichon, Swann; Grandjean, Didier; Vuilleumier, Patrik
2014-12-01
Rhythmic entrainment is an important component of emotion induction by music, but brain circuits recruited during spontaneous entrainment of attention by music and the influence of the subjective emotional feelings evoked by music remain still largely unresolved. In this study we used fMRI to test whether the metric structure of music entrains brain activity and how music pleasantness influences such entrainment. Participants listened to piano music while performing a speeded visuomotor detection task in which targets appeared time-locked to either strong or weak beats. Each musical piece was presented in both a consonant/pleasant and dissonant/unpleasant version. Consonant music facilitated target detection and targets presented synchronously with strong beats were detected faster. FMRI showed increased activation of bilateral caudate nucleus when responding on strong beats, whereas consonance enhanced activity in attentional networks. Meter and consonance selectively interacted in the caudate nucleus, with greater meter effects during dissonant than consonant music. These results reveal that the basal ganglia, involved both in emotion and rhythm processing, critically contribute to rhythmic entrainment of subcortical brain circuits by music. Copyright © 2014 Elsevier Inc. All rights reserved.
Lin, Mengxi; Francis, Alexander L
2014-11-01
Both long-term native language experience and immediate linguistic expectations can affect listeners' use of acoustic information when making a phonetic decision. In this study, a Garner selective attention task was used to investigate differences in attention to consonants and tones by American English-speaking listeners (N = 20) and Mandarin Chinese-speaking listeners hearing speech in either American English (N = 17) or Mandarin Chinese (N = 20). To minimize the effects of lexical differences and differences in the linguistic status of pitch across the two languages, stimuli and response conditions were selected such that all tokens constitute legitimate words in both languages and all responses required listeners to make decisions that were linguistically meaningful in their native language. Results showed that regardless of ambient language, Chinese listeners processed consonant and tone in a combined manner, consistent with previous research. In contrast, English listeners treated tones and consonants as perceptually separable. Results are discussed in terms of the role of sub-phonemic differences in acoustic cues across language, and the linguistic status of consonants and pitch contours in the two languages.
Consonant-recognition patterns and self-assessment of hearing handicap.
Hustedde, C G; Wiley, T L
1991-12-01
Two companion experiments were conducted with normal-hearing subjects and subjects with high-frequency, sensorineural hearing loss. In Experiment 1, the validity of a self-assessment device of hearing handicap was evaluated in two groups of hearing-impaired listeners with significantly different consonant-recognition ability. Data for the Hearing Performance Inventory--Revised (Lamb, Owens, & Schubert, 1983) did not reveal differences in self-perceived handicap for the two groups of hearing-impaired listeners; it was sensitive to perceived differences in hearing abilities for listeners who did and did not have a hearing loss. Experiment 2 was aimed at evaluation of consonant error patterns that accounted for observed group differences in consonant-recognition ability. Error patterns on the Nonsense-Syllable Test (NST) across the two subject groups differed in both degree and type of error. Listeners in the group with poorer NST performance always demonstrated greater difficulty with selected low-frequency and high-frequency syllables than did listeners in the group with better NST performance. Overall, the NST was sensitive to differences in consonant-recognition ability for normal-hearing and hearing-impaired listeners.
Analysis of Spanish consonant recognition in 8-talker babble.
Moreno-Torres, Ignacio; Otero, Pablo; Luna-Ramírez, Salvador; Garayzábal Heinze, Elena
2017-05-01
This paper presents the results of a closed-set recognition task for 80 Spanish consonant-vowel sounds (16 C × 5 V, spoken by 2 talkers) in 8-talker babble (-6, -2, +2 dB). A ranking of resistance to noise was obtained using the signal detection d' measure, and confusion patterns were analyzed using a graphical method (confusion graphs). The resulting ranking indicated the existence of three resistance groups: (1) high resistance: /ʧ, s, ʝ/; (2) mid resistance: /r, l, m, n/; and (3) low resistance: /t, θ, x, ɡ, b, d, k, f, p/. Confusions involved mostly place of articulation and voicing errors, and occurred especially among consonants in the same resistance group. Three perceptual confusion groups were identified: the three low-energy fricatives (i.e., /f, θ, x/), the six stops (i.e., /p, t, k, b, d, ɡ/), and three consonants with clear formant structure (i.e., /m, n, l/). The factors underlying consonant resistance and confusion patterns are discussed. The results are compared with data from other languages.
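The signal-detection d′ measure used above to rank noise resistance can be sketched in a few lines. This is a generic illustration with a standard log-linear correction for extreme rates, not the authors' code, and the response counts are hypothetical:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Compute d' from a 2x2 detection table, applying a log-linear
    correction so rates of 0 or 1 do not yield infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one consonant in -6 dB babble
print(round(d_prime(hits=70, misses=10,
                    false_alarms=12, correct_rejections=68), 2))
```

Higher d′ indicates better separation of the target consonant from its confusions, independent of response bias.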
Speech sound disorders in a community study of preschool children.
McLeod, Sharynne; Harrison, Linda J; McAllister, Lindy; McCormack, Jane
2013-08-01
To undertake a community (nonclinical) study to describe the speech of preschool children who had been identified by parents/teachers as having difficulties "talking and making speech sounds" and to compare the speech characteristics of those who had and had not accessed the services of a speech-language pathologist (SLP). Stage 1: Parent/teacher concern regarding the speech skills of 1,097 4- to 5-year-old children attending early childhood centers was documented. Stage 2a: One hundred forty-three children who had been identified with concerns were assessed. Stage 2b: Parents returned questionnaires about service access for 109 children. The majority of the 143 children (86.7%) achieved a standard score below the normal range for the percentage of consonants correct (PCC) on the Diagnostic Evaluation of Articulation and Phonology (Dodd, Hua, Crosbie, Holm, & Ozanne, 2002). Consonants produced incorrectly were consistent with the late-8 phonemes (Shriberg, 1993). Common phonological patterns were fricative simplification (82.5%), cluster simplification (49.0%)/reduction (19.6%), gliding (41.3%), and palatal fronting (15.4%). Interdental lisps on /s/ and /z/ were produced by 39.9% of the children, dentalization of other sibilants by 17.5%, and lateral lisps by 13.3%. Despite parent/teacher concern, only 41 of 109 children had had contact with an SLP. These children were more likely to be unintelligible to strangers, to express distress about their speech, and to have a lower PCC and a smaller consonant inventory than the children who had had no contact with an SLP. A significant number of preschool-age children with speech sound disorders (SSD) have not had contact with an SLP. These children have mild to severe SSD and would benefit from SLP intervention. Integrated SLP services within early childhood communities would enable earlier identification of SSD and access to intervention to reduce the potential educational and social impacts associated with SSD.
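The percentage of consonants correct (PCC) reported above is simply the share of target consonants produced correctly in an aligned transcription. A minimal sketch (the aligned target/production consonants below are invented for illustration, loosely mimicking a lisped and fronted sample):

```python
def pcc(targets, productions):
    """Percentage of Consonants Correct over aligned
    (target, produced) consonant pairs from a speech sample."""
    pairs = list(zip(targets, productions))
    if not pairs:
        raise ValueError("empty sample")
    correct = sum(t == p for t, p in pairs)
    return 100.0 * correct / len(pairs)

# Hypothetical sample: /s/ lisped to [θ], /ʃ/ fronted to [s]
targets     = ["s", "n", "ʃ", "n", "z"]
productions = ["θ", "n", "s", "n", "z"]
print(round(pcc(targets, productions), 1))  # 60.0
```

In practice PCC is computed over a full conversational sample and then compared against age-normed standard scores, as with the DEAP assessment cited above.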
ERIC Educational Resources Information Center
Haskins Labs., New Haven, CT.
This report is one of a regular series about the status and progress of studies on the nature of speech, instrumentation for its investigation, and practical applications. The 11 papers discuss the dissociation of spectral and temporal cues to the voicing distinction in initial stop consonants; perceptual integration and selective attention in…
González-García, Nadia; Rendón, Pablo L
2017-05-23
The neural correlates of consonance and dissonance perception have been widely studied, but not the neural correlates of consonance and dissonance production. The most straightforward manner of musical production is singing, but, from an imaging perspective, it still presents more challenges than listening because it involves motor activity. The accurate singing of musical intervals requires integration between auditory feedback processing and vocal motor control in order to correctly produce each note. This protocol presents a method that permits the monitoring of neural activations associated with the vocal production of consonant and dissonant intervals. Four musical intervals, two consonant and two dissonant, are used as stimuli, both for an auditory discrimination test and a task that involves first listening to and then reproducing given intervals. Participants, all female vocal students at the conservatory level, were studied using functional Magnetic Resonance Imaging (fMRI) during the performance of the singing task, with the listening task serving as a control condition. In this manner, the activity of both the motor and auditory systems was observed, and a measure of vocal accuracy during the singing task was also obtained. Thus, the protocol can also be used to track activations associated with singing different types of intervals or with singing the required notes more accurately. The results indicate that singing dissonant intervals requires greater participation of the neural mechanisms responsible for the integration of external feedback from the auditory and sensorimotor systems than does singing consonant intervals.
Computational Approach to Musical Consonance and Dissonance
Trulla, Lluis L.; Di Stefano, Nicola; Giuliani, Alessandro
2018-01-01
In the sixth century BC, Pythagoras discovered the mathematical foundation of musical consonance and dissonance. When auditory frequencies in small-integer ratios are combined, the result is a harmonious perception. In contrast, most frequency combinations result in audible, off-centered by-products labeled “beating” or “roughness”; these are reported by most listeners to sound dissonant. In this paper, we consider second-order beats, a kind of beating recognized as a product of neural processing, and demonstrate that the data-driven approach of Recurrence Quantification Analysis (RQA) allows for the reconstruction of the order in which interval ratios are ranked in music theory and harmony. We take advantage of computer-generated sounds containing all intervals over the span of an octave. To visualize second-order beats, we use a glissando from the unison to the octave. This procedure produces a profile of recurrence values that correspond to subsequent epochs along the original signal. We find that the higher recurrence peaks exactly match the epochs corresponding to just intonation frequency ratios. This result indicates a link between consonance and the dynamical features of the signal. Our findings integrate a new element into the existing theoretical models of consonance, thus providing a computational account of consonance in terms of dynamical systems theory. Finally, as it considers general features of acoustic signals, the present approach demonstrates a universal aspect of consonance and dissonance perception and provides a simple mathematical tool that could serve as a common framework for further neuro-psychological and music theory research. PMID:29670552
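The link between small-integer ratios and beating can be illustrated numerically: nearby partials of two tones beat at their difference frequency, which vanishes for an exact 3:2 just fifth but not for its equal-tempered approximation. A sketch (the 220 Hz base tone is illustrative and unrelated to the study's stimuli):

```python
def beat_frequency(f1, f2):
    """First-order beat rate between two pure tones, in Hz."""
    return abs(f1 - f2)

base = 220.0                           # lower tone (A3)
just_fifth = base * 3 / 2              # 330 Hz, exact 3:2 ratio
tempered_fifth = base * 2 ** (7 / 12)  # ~329.63 Hz, equal temperament

# Nearest coinciding partials of a fifth: the lower tone's 3rd
# harmonic against the upper tone's 2nd harmonic.
print(round(beat_frequency(3 * base, 2 * just_fifth), 2))      # 0.00
print(round(beat_frequency(3 * base, 2 * tempered_fifth), 2))  # 0.74
```

The study itself concerns second-order beats, which arise in neural processing rather than from physical partial interference, but the same ratio logic determines where the recurrence peaks fall.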
Impaired Perception of Sensory Consonance and Dissonance in Cochlear Implant Users.
Caldwell, Meredith T; Jiradejvong, Patpong; Limb, Charles J
2016-03-01
In light of previous research demonstrating poor pitch perception in cochlear implant (CI) users, we hypothesized that the presence of consonant versus dissonant chord accompaniment in real-world musical stimuli would not affect CI users' subjective ratings of pleasantness. Consonance and dissonance are perceptual features of harmony resulting from pitch relationships between simultaneously presented musical notes. Generally, consonant sounds are perceived as pleasant and dissonant ones as unpleasant. CI users exhibit impairments in pitch perception, making music listening difficult and often unenjoyable. To our knowledge, consonance/dissonance perception has not been studied in the CI population. Twelve novel melodies were created for this study. By altering the harmonic structures of the accompanying chords, we created three permutations of varying dissonance for each melody (36 stimuli in all). Ten CI users and 12 normal-hearing (NH) listeners provided Likert scale ratings from -5 (very unpleasant) to +5 (very pleasant) for each of the stimuli. A two-way ANOVA showed main effects of Dissonance Level and Subject Type as well as a two-way interaction between the two. Pairwise comparisons indicated that NH listeners' pleasantness ratings decreased with increasing dissonance, whereas CI users' ratings did not. NH pleasantness ratings were consistently lower than CI ratings. For CI users, consonant versus dissonant chord accompaniment had no significant impact on whether a melody was considered pleasant or unpleasant. This finding may be partially responsible for the decreased enjoyment of music among many CI users and is another manifestation of impaired pitch perception in this population.
The perception of syllable affiliation of singleton stops in repetitive speech.
de Jong, Kenneth J; Lim, Byung-Jin; Nagao, Kyoko
2004-01-01
Stetson (1951) noted that repeating singleton coda consonants at fast speech rates causes them to be perceived as onset consonants affiliated with the following vowel. The current study documents the perception of rate-induced resyllabification, as well as the temporal properties that give rise to the perception of syllable affiliation. Stimuli were extracted from a previous study of repeated stop + vowel and vowel + stop syllables (de Jong, 2001a, 2001b). Forced-choice identification tasks show that slow repetitions are clearly distinguished. As speakers increase rate, they reach a point after which listeners disagree as to the affiliation of the stop. This pattern is found for voiced and voiceless consonants using different stimulus extraction techniques. Acoustic models of the identifications indicate that the sudden shift in syllabification occurs with the loss of an acoustic hiatus between successive syllables. Acoustic models of the fast-rate identifications indicate that various other qualities, such as consonant voicing, affect the probability that the consonants will be perceived as onsets. These results suggest a model of syllabic affiliation in which specific juncture-marking aspects of the signal dominate parsing, and in their absence other differences provide additional, weaker cues to syllabic affiliation.
An acoustic study of nasal consonants in three Central Australian languages.
Tabain, Marija; Butcher, Andrew; Breen, Gavan; Beare, Richard
2016-02-01
This study presents nasal consonant data from 21 speakers of three Central Australian languages: Arrernte, Pitjantjatjara and Warlpiri. The six nasals considered are bilabial /m/, dental /n̪/, alveolar /n/, retroflex /ɳ/, alveo-palatal /ɲ/, and velar /ŋ/. Nasal formant and bandwidth values are examined, as are the locations of spectral minima. Several differences are found between the bilabial /m/ and the velar /ŋ/, and also the palatal /ɲ/. The remaining coronal nasals /n̪ n ɳ/ are not well differentiated within the nasal murmur, but their average bandwidths are lower than for the other nasal consonants. Broader spectral shape measures (Centre of Gravity and Standard Deviation) are also considered, and comparisons are made with data for stops and laterals in these languages based on the same spectral measures. It is suggested that the nasals are not as easily differentiated by the various measures examined here as are stops and laterals. It is also suggested that existing models of nasal consonants do not fully account for the observed differences between the various nasal places of articulation, and that oral formants, in addition to anti-formants, contribute substantially to the output spectrum of nasal consonants.
Relationship between consonant recognition in noise and hearing threshold.
Yoon, Yang-soo; Allen, Jont B; Gooler, David M
2012-04-01
Although poorer understanding of speech in noise by hearing-impaired (HI) listeners is known not to be directly related to the audiometric hearing threshold, HT(f), grouping HI listeners by HT(f) is widely practiced. In this article, the relationship between consonant recognition and HT(f) is considered over a range of signal-to-noise ratios (SNRs). Confusion matrices (CMs) from 25 HI ears were generated in response to 16 consonant-vowel syllables presented at 6 different SNRs. Individual differences scaling (INDSCAL) was applied to both feature-based matrices and CMs in order to evaluate the relationship between HT(f) and consonant recognition among HI listeners. The results showed no predictive relationship between the percent error scores (Pe) and HT(f) across SNRs. Multiple regression models showed that HT(f) accounted for only 39% of the total variance in the slopes of the Pe. Feature-based INDSCAL analysis showed consistent grouping of listeners across SNRs, but not in terms of HT(f), and CM-based INDSCAL analysis likewise revealed no systematic relationship between the measures across SNRs. Thus HT(f) did not account for the majority of the variance in consonant recognition in noise when the complete body of the CM was considered.
Lexical representation of novel L2 contrasts
NASA Astrophysics Data System (ADS)
Hayes-Harb, Rachel; Masuda, Kyoko
2005-04-01
There is much interest among psychologists and linguists in the influence of the native language sound system on the acquisition of second languages (Best, 1995; Flege, 1995). Most studies of second language (L2) speech focus on how learners perceive and produce L2 sounds, but we know of only two that have considered how novel sound contrasts are encoded in learners' lexical representations of L2 words (Pallier et al., 2001; Ota et al., 2002). In this study we investigated how native speakers of English encode Japanese consonant quantity contrasts in their developing Japanese lexicons at different stages of acquisition (Japanese contrasts singleton versus geminate consonants but English does not). Monolingual English speakers, native English speakers learning Japanese for one year, and native speakers of Japanese were taught a set of Japanese nonwords containing singleton and geminate consonants. Subjects then performed memory tasks eliciting perception and production data to determine whether they encoded the Japanese consonant quantity contrast lexically. Overall accuracy in these tasks was a function of Japanese language experience, and acoustic analysis of the production data revealed non-native-like patterns of differentiation of singleton and geminate consonants among the L2 learners of Japanese. Implications for theories of L2 speech are discussed.
Are vowel errors influenced by consonantal context in the speech of persons with aphasia?
NASA Astrophysics Data System (ADS)
Gelfer, Carole E.; Bell-Berti, Fredericka; Boyle, Mary
2004-05-01
The literature suggests that vowels and consonants may be affected differently in the speech of persons with conduction aphasia (CA) or nonfluent aphasia with apraxia of speech (AOS). Persons with CA have shown similar error rates across vowels and consonants, while those with AOS have shown more errors for consonants than vowels. These data have been interpreted to suggest that consonants have greater gestural complexity than vowels. However, recent research [M. Boyle et al., Proc. International Cong. Phon. Sci., 3265-3268 (2003)] does not support this interpretation: persons with AOS and CA both had a high proportion of vowel errors, and vowel errors almost always occurred in the context of consonantal errors. To examine the notion that vowels are inherently less complex than consonants and are differentially affected in different types of aphasia, vowel production in different consonantal contexts for speakers with AOS or CA was examined. The target utterances, produced in carrier phrases, were bVC and bV syllables, allowing us to examine whether vowel production is influenced by consonantal context. Listener judgments were obtained for each token, and error productions were grouped according to the intended utterance and error type. Acoustical measurements were made from spectrographic displays.
Vowel bias in Danish word-learning: processing biases are language-specific.
Højen, Anders; Nazzi, Thierry
2016-01-01
The present study explored whether the phonological bias favoring consonants found in French-learning infants and children when learning new words (Havy & Nazzi, 2009; Nazzi, 2005) is language-general, as proposed by Nespor, Peña and Mehler (2003), or varies across languages, perhaps as a function of the phonological or lexical properties of the language in acquisition. To do so, we used the interactive word-learning task set up by Havy and Nazzi (2009), teaching Danish-learning 20-month-olds pairs of phonetically similar words that contrasted either on one of their consonants or one of their vowels, by either one or two phonological features. Danish was chosen because it has more vowels than consonants, and is characterized by extensive consonant lenition. Both phenomena could disfavor a consonant bias. Evidence of word-learning was found only for vocalic information, irrespective of whether one or two phonological features were changed. The implication of these findings is that the phonological biases found in early lexical processing are not language-general but develop during language acquisition, depending on the phonological or lexical properties of the native language. © 2015 John Wiley & Sons Ltd.
Stop identity cue as a cue to language identity
NASA Astrophysics Data System (ADS)
Castonguay, Paula Lisa
The purpose of the present study was to determine whether language membership could potentially be cued by the acoustic-phonetic detail of word-initial stops and retained all the way through the process of lexical access to aid in language identification. Of particular interest were language-specific differences in Canadian English (CE) and Canadian French (CF) word-initial stops. Experiment 1 consisted of an interlingual homophone production task designed to examine how word-initial stop consonants differ in acoustic properties across CE and CF interlingual homophones. The analyses from the bilingual speakers in Experiment 1 indicate that bilinguals do produce language-specific differences in CE and CF word-initial stops, and that closure duration, voice onset time (VOT), and burst spectral standard deviation may provide cues to language identity in CE and CF stops. Experiment 2 consisted of a phoneme and language categorization task designed to examine how stop identity cues, such as VOT and closure duration, lead a listener to identify word-initial stop consonants as belonging to CE or CF. The reaction times from the bilingual listeners in this experiment indicate that bilinguals do perceive language-specific differences in CE and CF word-initial stops, and that VOT may provide cues to phoneme and language membership in CE and CF stops. Experiment 3 consisted of a phonological-semantic priming task designed to examine how subphonetic variations, such as changes in VOT, affect lexical access. The results of Experiment 3 suggest that language-specific cues, such as VOT, affect the composition of the bilingual cohort, and that the extent to which English and/or French words are activated depends on the language-specific cues present in a word. The findings of this study enhance our theoretical understanding of lexical structure and lexical access in bilingual speakers.
In addition, this study provides further insight into cross-language effects at the subphonetic level.
ERIC Educational Resources Information Center
Rochette, Claude; Simard, Claude
A study of the phonetic combination of a constrictive consonant (specifically, [f], [v], and [r]) and a vowel in French using x-ray and oscillograph technology focused on the speed and process of articulation between the consonant and the vowel. The study considered aperture size, nasality, labiality, and accent. Articulation of a total of 407…
Hashemi, Nassim; Ghorbani, Ali; Soleymani, Zahra; Kamali, Mohmmad; Ahmadi, Zohreh Ziatabar; Mahmoudian, Saeid
2018-07-01
Auditory discrimination of speech sounds is an important perceptual ability and a precursor to the acquisition of language. Auditory information is at least partially necessary for the acquisition and organization of phonological rules. There are few standardized behavioral tests to evaluate phonemic distinctive features in children with or without speech and language disorders. The main objective of the present study was to develop the Persian version of the auditory word discrimination test (P-AWDT) for 4-8-year-old children and to assess its validity and reliability. A total of 120 typical children and 40 children with speech sound disorder (SSD) participated in the study. The test comprised 160 monosyllabic paired words, distributed across Forms A-1 and A-2 for the initial consonants (80 words) and Forms B-1 and B-2 for the final consonants (80 words); discrimination of vowels was randomly included in all forms. Content validity was calculated, and 50 children repeated the test twice with a two-week interval (test-retest reliability). Further analyses included validity, the intraclass correlation coefficient (ICC), Cronbach's alpha (internal consistency), age groups, and gender. The content validity index (CVI) and the test-retest reliability of the P-AWDT were 63%-86% and 81%-96%, respectively, and the total Cronbach's alpha for internal consistency was relatively high (0.93). Comparison of the mean P-AWDT scores of the typical children and the children with SSD revealed a significant difference: the SSD group showed a greater deficit in auditory word discrimination than the typical group. In addition, the difference between the age groups was statistically significant, especially for the 4-4.11-year-old children. The performance of the two gender groups was similar.
The comparison of the P-AWDT scores between the typical children and the children with SSD demonstrated differences in auditory phonological discrimination in both initial and final positions. These results suggest that the P-AWDT meets appropriate validity and reliability criteria. The P-AWDT can be utilized to measure the distinctive features of phonemes and the auditory discrimination of initial and final consonants and middle vowels of words in 4-8-year-old typical children and children with SSD. Copyright © 2018. Published by Elsevier B.V.
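Cronbach's alpha, the internal-consistency statistic reported above (0.93), can be computed directly from item-by-participant scores. A self-contained sketch with invented scores (three items, four participants), not the study's data:

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for a list of item score columns
    (each inner list holds one item's scores across participants)."""
    k = len(item_scores)               # number of items
    n = len(item_scores[0])            # number of participants

    def var(xs):                       # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var_sum = sum(var(item) for item in item_scores)
    # Total score per participant across all items
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

# Hypothetical scores: 3 items x 4 participants
items = [[1, 2, 3, 4], [1, 2, 3, 3], [2, 2, 3, 4]]
print(round(cronbach_alpha(items), 3))  # 0.957
```

Values near 1 indicate that the items vary together, i.e. the test behaves as a coherent scale, which is what the 0.93 reported for the P-AWDT conveys.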
Minicucci, Domenic; Guediche, Sara; Blumstein, Sheila E
2013-08-01
The current study explored how factors of acoustic-phonetic and lexical competition affect access to the lexical-semantic network during spoken word recognition. An auditory semantic priming lexical decision task was presented to subjects while in the MR scanner. Prime-target pairs consisted of prime words with the initial voiceless stop consonants /p/, /t/, and /k/ followed by word and nonword targets. To examine the neural consequences of lexical and sound structure competition, primes either had voiced minimal pair competitors or they did not, and they were either acoustically modified to be poorer exemplars of the voiceless phonetic category or not. Neural activation associated with semantic priming (Unrelated-Related conditions) revealed a bilateral fronto-temporo-parietal network. Within this network, clusters in the left insula/inferior frontal gyrus (IFG), left superior temporal gyrus (STG), and left posterior middle temporal gyrus (pMTG) showed sensitivity to lexical competition. The pMTG also demonstrated sensitivity to acoustic modification, and the insula/IFG showed an interaction between lexical competition and acoustic modification. These findings suggest the posterior lexical-semantic network is modulated by both acoustic-phonetic and lexical structure, and that the resolution of these two sources of competition recruits frontal structures. Copyright © 2013 Elsevier Ltd. All rights reserved.
Vocal effort modulates the motor planning of short speech structures
NASA Astrophysics Data System (ADS)
Taitz, Alan; Shalom, Diego E.; Trevisan, Marcos A.
2018-05-01
Speech requires programming the sequence of vocal gestures that produce the sounds of words. Here we explored the timing of this program by asking participants to pronounce, as quickly as possible, a sequence of consonant-consonant-vowel (CCV) structures appearing on screen. We measured the delay between visual presentation and voice onset. In the case of plosive consonants, produced by sharp and well-defined movements of the vocal tract, we found that delays are positively correlated with the duration of the transition between consonants. We then used a battery of statistical tests and mathematical vocal models to show that delays reflect the motor planning of CCVs and that transitions are proxy indicators of the vocal effort needed to produce them. These results support the view that the effort required to produce the sequence of movements of a vocal gesture modulates the onset of the motor plan.
NASA Astrophysics Data System (ADS)
Pei, Xiaomei; Barbour, Dennis L.; Leuthardt, Eric C.; Schalk, Gerwin
2011-08-01
Several stories in the popular media have speculated that it may be possible to infer from the brain which word a person is speaking or even thinking. While recent studies have demonstrated that brain signals can give detailed information about actual and imagined actions, such as different types of limb movements or spoken words, concrete experimental evidence for the possibility to 'read the mind', i.e. to interpret internally-generated speech, has been scarce. In this study, we found that it is possible to use signals recorded from the surface of the brain (electrocorticography) to discriminate the vowels and consonants embedded in spoken and in imagined words, and we defined the cortical areas that held the most information about discrimination of vowels and consonants. The results shed light on the distinct mechanisms associated with production of vowels and consonants, and could provide the basis for brain-based communication using imagined speech.
Onomatopoeias: a new perspective around space, image schemas and phoneme clusters.
Catricalà, Maria; Guidi, Annarita
2015-09-01
Onomatopoeias (
Consonant and Vowel Processing in Word Form Segmentation: An Infant ERP Study.
Von Holzen, Katie; Nishibayashi, Leo-Lyuki; Nazzi, Thierry
2018-01-31
Segmentation skill and the preferential processing of consonants (C-bias) develop during the second half of the first year of life, and it has been proposed that both facilitate language acquisition. We used event-related brain potentials (ERPs) to investigate the neural bases of early word form segmentation and of the early processing of onset consonants, medial vowels, and coda consonants, exploring how differences in these early skills might be related to later language outcomes. Our results with French-learning eight-month-old infants primarily support previous studies that found that the word familiarity effect in segmentation is developing from a positive to a negative polarity at this age. Although as a group the infants exhibited an anterior-localized negative effect, inspection of individual results revealed that a majority of infants showed a negative-going response (Negative Responders), while a minority showed a positive-going response (Positive Responders). Furthermore, all infants demonstrated sensitivity to onset consonant mispronunciations, while Negative Responders demonstrated a lack of sensitivity to vowel mispronunciations, a developmental pattern similar to previous literature. Responses to coda consonant mispronunciations revealed neither sensitivity nor lack of sensitivity. We found that infants showing a more mature, negative response to newly segmented words compared to control words (evaluating segmentation skill) and mispronunciations (evaluating phonological processing) at test also had greater growth in word production over the second year of life than infants showing a more positive response. These results establish a relationship between early segmentation skills and phonological processing (not modulated by the type of mispronunciation) and later lexical skills.
Differential Processing of Consonance and Dissonance within the Human Superior Temporal Gyrus.
Foo, Francine; King-Stephens, David; Weber, Peter; Laxer, Kenneth; Parvizi, Josef; Knight, Robert T
2016-01-01
The auditory cortex is well-known to be critical for music perception, including the perception of consonance and dissonance. Studies on the neural correlates of consonance and dissonance perception have largely employed non-invasive electrophysiological and functional imaging techniques in humans as well as neurophysiological recordings in animals, but the fine-grained spatiotemporal dynamics within the human auditory cortex remain unknown. We recorded electrocorticographic (ECoG) signals directly from the lateral surface of either the left or right temporal lobe of eight patients undergoing neurosurgical treatment as they passively listened to highly consonant and highly dissonant musical chords. We assessed ECoG activity in the high gamma (γhigh, 70-150 Hz) frequency range within the superior temporal gyrus (STG) and observed two types of cortical sites of interest in both hemispheres: one type showed no significant difference in γhigh activity between consonant and dissonant chords, and another type showed increased γhigh responses to dissonant chords between 75 and 200 ms post-stimulus onset. Furthermore, a subset of these sites exhibited additional sensitivity towards different types of dissonant chords, and a positive correlation between changes in γhigh power and the degree of stimulus roughness was observed in both hemispheres. We also observed a distinct spatial organization of cortical sites in the right STG, with dissonant-sensitive sites located anterior to non-sensitive sites. In sum, these findings demonstrate differential processing of consonance and dissonance in bilateral STG with the right hemisphere exhibiting robust and spatially organized sensitivity toward dissonance.
Dressler, William W; Balieiro, Mauro C; Ribeiro, Rosane P; Dos Santos, José Ernesto
2009-01-01
In this study in urban Brazil we examine, as a predictor of depressive symptoms, the interaction between a single nucleotide polymorphism in the 2A receptor in the serotonin system (-1438G/A) and cultural consonance in family life, a measure of the degree to which an individual perceives her family as corresponding to a widely shared cultural model of the prototypical family. A community sample of 144 adults was followed over a 2-year period. Cultural consonance in family life was assessed by linking individuals' perceptions of their own families with a shared cultural model of the family derived from cultural consensus analysis. The -1438G/A polymorphism in the 2A serotonin receptor was genotyped using a standard protocol for DNA extracted from leukocytes. Covariates included age, sex, socioeconomic status, and stressful life events. Cultural consonance in family life was prospectively associated with depressive symptoms. In addition, the interaction between genotype and cultural consonance in family life was significant. For individuals with the A/A variant of the -1438G/A polymorphism of the 2A receptor gene, the effect of cultural consonance in family life on depressive symptoms over a 2-year period was larger (beta = -0.533, P < 0.01) than the effects for individuals with either the G/A (beta = -0.280, P < 0.10) or G/G (beta = -0.272, P < 0.05) variants. These results are consistent with a process in which genotype moderates the effects of culturally meaningful social experience on depressive symptoms. (c) 2008 Wiley-Liss, Inc.
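The reported gene-by-environment moderation amounts to fitting a regression with an interaction term. The sketch below does this on simulated data; the variable names and effect sizes are hypothetical, chosen only to loosely echo the sign and relative size of the reported betas, and are not the study's data or code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 144
consonance = rng.normal(size=n)            # standardized cultural consonance
aa = rng.integers(0, 2, size=n)            # 1 = simulated A/A carriers
# simulate a steeper negative slope for A/A carriers, echoing the reported pattern
y = -0.27 * consonance - 0.26 * consonance * aa + rng.normal(scale=0.5, size=n)

# design matrix: intercept, main effects, and the gene-by-consonance interaction
X = np.column_stack([np.ones(n), consonance, aa, consonance * aa])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"interaction beta: {beta[3]:.2f}")  # negative: A/A slope is steeper
```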
On the unity of children’s phonological error patterns: Distinguishing symptoms from the problem
Dinnsen, Daniel A.
2012-01-01
This article compares the claims of rule- and constraint-based accounts of three seemingly distinct error patterns, namely, Deaffrication, Consonant Harmony and Assibilation, in the sound system of a child with a phonological delay. It is argued that these error patterns are not separate problems, but rather are symptoms of a larger conspiracy to avoid word-initial coronal stops. The clinical implications of these findings are also considered. PMID:21787147
Bernstein, Lynne E.; Eberhardt, Silvio P.; Auer, Edward T.
2014-01-01
Training with audiovisual (AV) speech has been shown to promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. In Experiment 1, we investigated whether AV speech promotes auditory-only (AO) perceptual learning in prelingually deafened adults with late-acquired cochlear implants. Participants were assigned to learn associations between spoken disyllabic C(=consonant)V(=vowel)CVC non-sense words and non-sense pictures (fribbles), under AV and then AO (AV-AO; or counter-balanced AO then AV, AO-AV, during Periods 1 then 2) training conditions. After training on each list of paired-associates (PA), testing was carried out AO. Across all training, AO PA test scores improved (7.2 percentage points) as did identification of consonants in new untrained CVCVC stimuli (3.5 percentage points). However, there was evidence that AV training impeded immediate AO perceptual learning: During Period-1, training scores across AV and AO conditions were not different, but AO test scores were dramatically lower in the AV-trained participants. During Period-2 AO training, the AV-AO participants obtained significantly higher AO test scores, demonstrating their ability to learn the auditory speech. Across both orders of training, whenever training was AV, AO test scores were significantly lower than training scores. Experiment 2 repeated the procedures with vocoded speech and 43 normal-hearing adults. Following AV training, their AO test scores were as high as or higher than following AO training. Also, their CVCVC identification scores patterned differently than those of the cochlear implant users. In Experiment 1, initial consonants were most accurate, and in Experiment 2, medial consonants were most accurate. We suggest that our results are consistent with a multisensory reverse hierarchy theory, which predicts that, whenever possible, perceivers carry out perceptual tasks immediately based on the experience and biases they bring to the task. 
We point out that while AV training could be an impediment to immediate unisensory perceptual learning in cochlear implant patients, it was also associated with higher scores during training. PMID:25206344
Steenbeek-Planting, Esther G; van Bon, Wim H J; Schreuder, Robert
2012-10-01
The effect of two training procedures on the development of reading speed in poor readers is examined. One training concentrates on the words the children read correctly (successes), the other on the words they read incorrectly (failures). Children were either informed or not informed about the training focus. A randomized controlled trial was conducted with 79 poor readers. They repeatedly read regularly spelled Dutch consonant-vowel-consonant words: some children read their successes, others their failures. The training used a computerized flashcard format. The exposure duration of the words was varied to maintain an accuracy rate at a constant level. Reading speed improved and transferred to untrained, orthographically more complex words. These transfer effects were characterized by an Aptitude-Treatment Interaction. Poor readers with a low initial reading level improved most in the training focused on successes. For poor readers with a high initial reading level, however, it appeared to be more profitable to practice with their failures. Informing students about the focus of the training positively affected training: the exposure duration needed for children informed about the focus of the training decreased more than for children who were not informed. This study suggests that neither of the two interventions is superior to the other in general. Rather, the improvement of general reading speed in a transparent orthography is closely related to both the children's initial reading level and the type of words they practice with: common and familiar words when training their successes and uncommon and less familiar words when training their failures.
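Holding accuracy at a constant level by varying exposure duration is, in effect, an adaptive staircase. A minimal sketch follows; the step size, bounds, and the `update_exposure` helper are hypothetical, not taken from the study.

```python
# 1-up/1-down staircase: shorten exposure after a correct reading,
# lengthen it after an error, holding accuracy near a constant level.
def update_exposure(ms, correct, step=50, floor=100, ceiling=2000):
    ms = ms - step if correct else ms + step
    return max(floor, min(ceiling, ms))

exposure = 1000
for correct in [True, True, False, True]:
    exposure = update_exposure(exposure, correct)
print(exposure)  # 1000 -> 950 -> 900 -> 950 -> 900
```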
Hazan, Valerie; Messaoud-Galusi, Souhila; Rosen, Stuart
2013-01-01
Purpose To determine whether children with dyslexia (DYS) are more affected than age-matched average readers (AR) by talker and intonation variability when perceiving speech in noise. Method Thirty-four DYS and 25 AR children were tested on their perception of consonants in naturally-produced consonant-vowel (CV) tokens in multi-talker babble. Twelve CVs were presented for identification in four conditions varying in the degree of talker and intonation variability. Consonant place (/bi/-/di/) and voicing (/bi/-/pi/) discrimination was investigated with the same conditions. Results DYS children made slightly more identification errors than AR children but only for conditions with variable intonation. Errors were more frequent for a subset of consonants, generally weakly-encoded for AR children, for tokens with intonation patterns (steady and rise-fall) that occur infrequently in connected discourse. In discrimination tasks, which have a greater memory and cognitive load, DYS children scored lower than AR children across all conditions. Conclusions Unusual intonation patterns had a disproportionate (but small) effect on consonant intelligibility in noise for DYS children but adding talker variability did not. DYS children do not appear to have a general problem in perceiving speech in degraded conditions, which makes it unlikely that they lack robust phonological representations. PMID:22761322
Fritz, Thomas Hans; Renders, Wiske; Müller, Karsten; Schmude, Paul; Leman, Marc; Turner, Robert; Villringer, Arno
2013-10-01
Helmholtz himself speculated about a role of the cochlea in the perception of musical dissonance. Here we indirectly investigated this issue, assessing the valence judgment of musical stimuli with variable consonance/dissonance and presented diotically (exactly the same dissonant signal was presented to both ears) or dichotically (a consonant signal was presented to each ear; both consonant signals were rhythmically identical but differed by a semitone in pitch). Differences in brain organisation underlying inter-subject differences in the percept of dichotically presented dissonance were determined with voxel-based morphometry. Behavioral results showed that diotic dissonant stimuli were perceived as more unpleasant than dichotically presented dissonance, indicating that interactions within the cochlea modulated the valence percept during dissonance. However, the behavioral data also suggested that the dissonance percept did not depend crucially on the cochlea, but also occurred as a result of binaural integration when listening to dichotic dissonance. These results also showed substantial between-participant variation in the valence response to dichotic dissonance. In a voxel-based morphometry analysis, these differences were related to differences in gray matter density in the inferior colliculus, strongly substantiating a key role of this structure in consonance/dissonance representation in humans. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Stippekohl, Bastian; Winkler, Markus H; Walter, Bertram; Kagerer, Sabine; Mucha, Ronald F; Pauli, Paul; Vaitl, Dieter; Stark, Rudolf
2012-01-01
An important feature of addiction is the high drug craving that may promote the continuation of consumption. Environmental stimuli classically conditioned to drug-intake have a strong motivational power for addicts and can elicit craving. However, addicts differ in the attitudes towards their own consumption behavior: some are content with drug taking (consonant users) whereas others are discontent (dissonant users). Such differences may be important for clinical practice because the experience of dissonance might enhance the likelihood to consider treatment. This fMRI study investigated in smokers whether these different attitudes influence subjective and neural responses to smoking stimuli. Based on self-characterization, smokers were divided into consonant and dissonant smokers. These two groups were presented smoking stimuli and neutral stimuli. Former studies have suggested differences in the impact of smoking stimuli depending on the temporal stage of the smoking ritual they are associated with. Therefore, we used stimuli associated with the beginning (BEGIN-smoking-stimuli) and stimuli associated with the terminal stage (END-smoking-stimuli) of the smoking ritual as distinct stimulus categories. Stimulus ratings did not differ between both groups. Brain data showed that BEGIN-smoking-stimuli led to enhanced mesolimbic responses (amygdala, hippocampus, insula) in dissonant compared to consonant smokers. In response to END-smoking-stimuli, dissonant smokers showed reduced mesocortical responses (orbitofrontal cortex, subcallosal cortex) compared to consonant smokers. These results suggest that smoking stimuli with a high incentive value (BEGIN-smoking-stimuli) are more appetitive for dissonant than consonant smokers at least on the neural level. To the contrary, smoking stimuli with low incentive value (END-smoking-stimuli) seem to be less appetitive for dissonant smokers than consonant smokers. 
These differences might be one reason why dissonant smokers experience difficulties in translating their attitudes into an actual behavior change. PMID:23155368
Theoretical Aspects of Speech Production.
ERIC Educational Resources Information Center
Stevens, Kenneth N.
1992-01-01
This paper on speech production in children and youth with hearing impairments summarizes theoretical aspects, including the speech production process, sound sources in the vocal tract, vowel production, and consonant production. Examples of spectra for several classes of vowel and consonant sounds in simple syllables are given. (DB)
Hisagi, Miwako; Shafer, Valerie L.; Strange, Winifred; Sussman, Elyse S.
2015-01-01
This study examined automaticity of discrimination of a Japanese length contrast for consonants (miʃi vs. miʃʃi) in native (Japanese) and non-native (American-English) listeners using behavioral measures and the event-related potential (ERP) mismatch negativity (MMN). Attention to the auditory input was manipulated either away from the auditory input via a visual oddball task (Visual Attend), or to the input by asking the listeners to count auditory deviants (Auditory Attend). Results showed a larger MMN when attention was focused on the consonant contrast than away from it for both groups. The MMN was larger for consonant duration increments than decrements. No difference in MMN between the language groups was observed, but the Japanese listeners did show better behavioral discrimination than the American English listeners. In addition, behavioral responses showed a weak but significant correlation with MMN amplitude. These findings suggest that both acoustic-phonetic properties and phonological experience affect the automaticity of speech processing. PMID:26119918
Presentation of words to separate hemispheres prevents interword illusory conjunctions.
Liederman, J; Sohn, Y S
1999-03-01
We tested the hypothesis that division of inputs between the hemispheres could prevent interword letter migrations in the form of illusory conjunctions. The task was to decide whether a centrally presented consonant-vowel-consonant (CVC) target word matched one of four CVC words presented to a single hemisphere or divided between the hemispheres in a subsequent test display. During half of the target-absent trials, known as conjunction trials, letters from two separate words (e.g., "tag" and "cop") in the test display could be mistaken for a target word (e.g., "top"). For the other half of the target-absent trials, the test display did not match any target consonants (Experiment 1, N = 16) or it matched one target consonant (Experiment 2, N = 29), the latter constituting true "feature" trials. Bi- as compared to unihemispheric presentation significantly reduced the number of conjunction, but not feature, errors. Illusory conjunctions did not occur when the words were presented to separate hemispheres.
Visual Influences on Perception of Speech and Nonspeech Vocal-Tract Events
Brancazio, Lawrence; Best, Catherine T.; Fowler, Carol A.
2009-01-01
We report four experiments designed to determine whether visual information affects judgments of acoustically-specified nonspeech events as well as speech events (the “McGurk effect”). Previous findings have shown only weak McGurk effects for nonspeech stimuli, whereas strong effects are found for consonants. We used click sounds that serve as consonants in some African languages, but that are perceived as nonspeech by American English listeners. We found a significant McGurk effect for clicks presented in isolation that was much smaller than that found for stop-consonant-vowel syllables. In subsequent experiments, we found strong McGurk effects, comparable to those found for English syllables, for click-vowel syllables, and weak effects, comparable to those found for isolated clicks, for excised release bursts of stop consonants presented in isolation. We interpret these findings as evidence that the potential contributions of speech-specific processes on the McGurk effect are limited, and discuss the results in relation to current explanations for the McGurk effect. PMID:16922061
The influence of the level of formants on the perception of synthetic vowel sounds
NASA Astrophysics Data System (ADS)
Kubzdela, Henryk; Owsianny, Mariuz
A computer model of a generator of periodic complex sounds simulating consonants was developed. The system allows independent regulation of the level of each formant and instant generation of the sound. A trapezoid approximates the curve of the spectrum within the range of each formant. Using this model, each person in a group of six listeners experimentally selected synthesis parameters for six sounds that seemed to him optimal approximations of Polish consonants. From these, another six sounds were selected that were identified by a majority of the six persons and several additional listeners as best qualified to serve as prototypes of Polish consonants. These prototypes were then used to randomly create sounds with various combinations of second- and third-formant levels, which were presented to seven listeners for identification. The identification results are presented in table form in three variants and are described from the point of view of the requirements of automatic recognition of consonants in continuous speech.
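The trapezoidal spectral envelope described in this record is easy to sketch: a flat top around the formant frequency with linear skirts falling to zero. The function and parameter values below are illustrative guesses, not the model's actual settings.

```python
import numpy as np

# Gain of a trapezoidal spectral envelope centered on a formant: flat top,
# linear skirts, zero outside (widths here are illustrative, in Hz).
def trapezoid_gain(freq, f_center, top=100.0, skirt=300.0):
    d = abs(freq - f_center)
    if d <= top / 2:
        return 1.0
    if d >= top / 2 + skirt:
        return 0.0
    return 1.0 - (d - top / 2) / skirt

# Periodic complex tone whose harmonics are weighted by the envelope.
fs, f0, formant = 16000, 120, 500
t = np.arange(0, 0.5, 1 / fs)
tone = sum(trapezoid_gain(h, formant) * np.sin(2 * np.pi * h * t)
           for h in range(f0, fs // 2, f0))
print(trapezoid_gain(500, 500), trapezoid_gain(1000, 500))  # 1.0 0.0
```

Raising or lowering a formant's level, as the listeners in the study did, corresponds to scaling the gains of the harmonics under that formant's trapezoid.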
The Unexpected Side-Effects of Dissonance
ERIC Educational Resources Information Center
Bodner, Ehud; Gilboa, Avi; Amir, Dorit
2007-01-01
The effects of dissonant and consonant music on cognitive performance were examined. Situational dissonance and consonance were also tested and determined as the state where one's opinion is contrasted or matched with the majority's opinion, respectively. Subjects performed several cognitive tasks while listening to a melody arranged dissonantly,…
[Velopharyngeal closure pattern and speech performance among submucous cleft palate patients].
Heng, Yin; Chunli, Guo; Bing, Shi; Yang, Li; Jingtao, Li
2017-06-01
To characterize the velopharyngeal closure patterns and speech performance of submucous cleft palate patients, patients with submucous cleft palate visiting the Department of Cleft Lip and Palate Surgery, West China Hospital of Stomatology, Sichuan University between 2008 and 2016 were reviewed. Outcomes of subjective speech evaluation (velopharyngeal function and consonant articulation) and of objective nasopharyngeal endoscopy (mobility of the soft palate and pharyngeal walls) were retrospectively analyzed. A total of 353 cases were retrieved, among which 138 (39.09%) demonstrated velopharyngeal competence, 176 (49.86%) velopharyngeal incompetence, and 39 (11.05%) marginal velopharyngeal incompetence. A total of 268 cases underwent nasopharyngeal endoscopy, of whom 167 (62.31%) demonstrated a circular closure pattern, 89 (33.21%) a coronal pattern, and 12 (4.48%) a sagittal pattern. A Passavant's ridge was present in 45.51% (76/167) of patients with circular closure and 13.48% (12/89) of patients with coronal closure. Among the 353 patients included in this study, 137 (38.81%) presented normal articulation, 124 (35.13%) consonant elimination, 51 (14.45%) compensatory articulation, 36 (10.20%) consonant weakening, 25 (7.08%) consonant replacement, and 36 (10.20%) multiple articulation errors. Circular closure was the most prevalent velopharyngeal closure pattern among patients with submucous cleft palate, and high-pressure consonant deletion was the most common articulation abnormality. Articulation errors occurred more frequently among patients with a low velopharyngeal closure rate.
Consonance in Information System Projects: A Relationship Marketing Perspective
ERIC Educational Resources Information Center
Lin, Pei-Ying
2010-01-01
Different stakeholders in the information system project usually have different perceptions and expectations of the projects. There is seldom consistency in the stakeholders' evaluations of the project outcome. Thus the outcomes of information system projects are usually disappointing to one or more stakeholders. Consonance is a process that can…
Factors Influencing Consonant Acquisition in Brazilian Portuguese-Speaking Children
ERIC Educational Resources Information Center
Ceron, Marizete Ilha; Gubiani, Marileda Barichello; de Oliveira, Camila Rosa; Keske-Soares, Márcia
2017-01-01
Purpose: We sought to provide valid and reliable data on the acquisition of consonant sounds in speakers of Brazilian Portuguese. Method: The sample comprised 733 typically developing monolingual speakers of Brazilian Portuguese (ages 3;0-8;11 [years;months]). The presence of surface speech error patterns, the revised percentage consonants…
Palatalization and Intrinsic Prosodic Vowel Features in Russian
ERIC Educational Resources Information Center
Ordin, Mikhail
2011-01-01
The presented study is aimed at investigating the interaction of palatalization and intrinsic prosodic features of the vowel in CVC (consonant+vowel+consonant) syllables in Russian. The universal nature of intrinsic prosodic vowel features was confirmed with the data from the Russian language. It was found that palatalization of the consonants…
Relationship between Consonant Recognition in Noise and Hearing Threshold
ERIC Educational Resources Information Center
Yoon, Yang-soo; Allen, Jont B.; Gooler, David M.
2012-01-01
Purpose: Although poorer understanding of speech in noise by listeners who are hearing-impaired (HI) is known not to be directly related to audiometric hearing threshold, "HT" (f), grouping HI listeners with "HT" (f) is widely practiced. In this article, the relationship between consonant recognition and "HT" (f) is…
ERIC Educational Resources Information Center
Bennett, Ruth, Ed.; And Others
This modified alphabet booklet belongs to a series of bilingual instructional materials in Hupa and English. The booklet begins with a Hupa Unifon alphabet chart giving the symbols used to reproduce the most simple version of the sounds in the Hupa language. Nearly 200 basic vocabulary words and phrases are given. A Hupa consonant is followed by…
ERIC Educational Resources Information Center
Vanden Bergh, Bruce G.; And Others
A study was conducted to determine if brand names that begin with consonants called "plosives" (B, C, D, G, K, P, and T) are more readily recalled and recognized than names that begin with other consonants or vowels. Additionally, the study investigated the relationship between name length and memorability, ability to associate names…
Hemispheric Differences in Processing Handwritten Cursive
ERIC Educational Resources Information Center
Hellige, Joseph B.; Adamson, Maheen M.
2007-01-01
Hemispheric asymmetry was examined for native English speakers identifying consonant-vowel-consonant (CVC) non-words presented in standard printed form, in standard handwritten cursive form or in handwritten cursive with the letters separated by small gaps. For all three conditions, fewer errors occurred when stimuli were presented to the right…
Variation in /r/ Outcomes in the Speech of U.S
ERIC Educational Resources Information Center
Figueroa, Nicholas James
2017-01-01
This dissertation investigated the speech productions of the implosive -r consonant by U.S.-born Puerto Rican and Dominican Heritage Language Spanish speakers in New York. The following main research questions were addressed: 1) Do heritage language Caribbean Spanish speakers evidence the same variation with the /r/ consonant in the implosive…
Linking working memory and long-term memory: a computational model of the learning of new words.
Jones, Gary; Gobet, Fernand; Pine, Julian M
2007-11-01
The nonword repetition (NWR) test has been shown to be a good predictor of children's vocabulary size. NWR performance has been explained using phonological working memory, which is seen as a critical component in the learning of new words. However, no detailed specification of the link between phonological working memory and long-term memory (LTM) has been proposed. In this paper, we present a computational model of children's vocabulary acquisition (EPAM-VOC) that specifies how phonological working memory and LTM interact. The model learns phoneme sequences, which are stored in LTM and mediate how much information can be held in working memory. The model's behaviour is compared with that of children in a new study of NWR, conducted in order to ensure the same nonword stimuli and methodology across ages. EPAM-VOC shows a pattern of results similar to that of children: performance is better for shorter nonwords and for wordlike nonwords, and performance improves with age. EPAM-VOC also simulates the superior performance for single consonant nonwords over clustered consonant nonwords found in previous NWR studies. EPAM-VOC provides a simple and elegant computational account of some of the key processes involved in the learning of new words: it specifies how phonological working memory and LTM interact; makes testable predictions; and suggests that developmental changes in NWR performance may reflect differences in the amount of information that has been encoded in LTM rather than developmental changes in working memory capacity.
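The chunk-mediated capacity idea behind EPAM-VOC can be illustrated with a toy parser: the more of a nonword that is covered by chunks already stored in LTM, the fewer working-memory slots it consumes. This is an illustrative sketch, not the EPAM-VOC implementation; `parse_into_chunks` and the sample chunk inventories are invented.

```python
# Greedy longest-match parse of a phoneme string into chunks known in LTM;
# fewer chunks means a lighter working-memory load, as in EPAM-VOC's account.
def parse_into_chunks(phonemes, ltm_chunks):
    chunks, i = [], 0
    while i < len(phonemes):
        # take the longest known chunk starting at i (single symbols always parse)
        for size in range(len(phonemes) - i, 0, -1):
            piece = phonemes[i:i + size]
            if size == 1 or piece in ltm_chunks:
                chunks.append(piece)
                i += size
                break
    return chunks

novice = {"ba"}
expert = {"ba", "lu", "balu", "nos"}
print(parse_into_chunks("balunos", novice))  # ['ba', 'l', 'u', 'n', 'o', 's']
print(parse_into_chunks("balunos", expert))  # ['balu', 'nos']
```

On this account, older children repeat nonwords better not because working memory grows, but because a richer LTM chunk inventory makes each nonword parse into fewer pieces.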
Cued Dichotic Listening with Right-Handed, Left-Handed, Bilingual and Learning-Disabled Children.
ERIC Educational Resources Information Center
Obrzut, John E.; And Others
This study used cued dichotic listening to investigate differences in language lateralization among right-handed (control), left handed, bilingual, and learning disabled children. Subjects (N=60) ranging in age from 7-13 years were administered a consonant-vowel-consonant dichotic paradigm with three experimental conditions (free recall, directed…
Speech-Language Pathologists' Knowledge of Tongue/Palate Contact for Consonants
ERIC Educational Resources Information Center
McLeod, Sharynne
2011-01-01
Speech-language pathologists (SLPs) rely on knowledge of tongue placement to assess and provide intervention. A total of 175 SLPs who worked with children with speech sound disorders (SSDs) drew coronal diagrams of tongue/palate contact for 24 English consonants. Comparisons were made between their responses and typical English-speaking adults'…
ERIC Educational Resources Information Center
Marcer, D.; And Others
1977-01-01
Compares the rates of forgetting of five-item sequences of acoustically similar and dissimilar consonants and words in the absence of proactive and retroactive interference in order to test whether within sequence similarity rather than stimulus length would have a greater influence on retention. (Author/RK)
Infants' Discrimination of Consonants: Interplay between Word Position and Acoustic Saliency
ERIC Educational Resources Information Center
Archer, Stephanie L.; Zamuner, Tania; Engel, Kathleen; Fais, Laurel; Curtin, Suzanne
2016-01-01
Research has shown that young infants use contrasting acoustic information to distinguish consonants. This has been used to argue that by 12 months, infants have homed in on their native language sound categories. However, this ability seems to be positionally constrained, with contrasts at the beginning of words (onsets) discriminated earlier.…
The Mechanics of Fingerspelling: Analyzing Ethiopian Sign Language
ERIC Educational Resources Information Center
Duarte, Kyle
2010-01-01
Ethiopian Sign Language utilizes a fingerspelling system that represents Amharic orthography. Just as each character of the Amharic abugida encodes a consonant-vowel sound pair, each sign in the Ethiopian Sign Language fingerspelling system uses handshape to encode a base consonant, as well as a combination of timing, placement, and orientation to…
The Effect of Anatomic Factors on Tongue Position Variability during Consonants
ERIC Educational Resources Information Center
Rudy, Krista; Yunusova, Yana
2013-01-01
Purpose: This study sought to investigate the effect of palate morphology and anthropometric measures of the head on positional variability of the tongue during consonants. Method: An electromagnetic tracking system was used to record tongue movements of 21 adults. Each talker produced a series of symmetrical VCV syllables containing one of the…
Vowel and Consonant Lessening: A Study of Articulating Reductions and Their Relations to Genders
ERIC Educational Resources Information Center
Lin, Grace Hui Chin; Chien, Paul Shih Chieh
2011-01-01
The use of English as a global communication tool means that Taiwanese speakers must use English in diverse international situations. However, not all English consonants and vowels are effortless for them to articulate. This study of phonological reduction explores concepts of phonological (articulatory-system) approximation. From Taiwanese folks'…
Letter-Sound Reading: Teaching Preschool Children Print-to-Sound Processing
ERIC Educational Resources Information Center
Wolf, Gail Marie
2016-01-01
This intervention study investigated the growth of letter sound reading and growth of consonant-vowel-consonant (CVC) word decoding abilities for a representative sample of 41 US children in preschool settings. Specifically, the study evaluated the effectiveness of a 3-step letter-sound teaching intervention in teaching preschool children to…
Consonant Inventories in the Spontaneous Speech of Young Children: A Bootstrapping Procedure
ERIC Educational Resources Information Center
Van Severen, Lieve; Van Den Berg, Renate; Molemans, Inge; Gillis, Steven
2012-01-01
Consonant inventories are commonly drawn to assess the phonological acquisition of toddlers. However, the spontaneous speech data that are analysed often vary substantially in size and composition. Consequently, comparisons between children and across studies are fundamentally hampered. This study aims to examine the effect of sample size on the…
Psychoacoustic Assessment of Speech Communication Systems. The Diagnostic Discrimination Test.
ERIC Educational Resources Information Center
Grether, Craig Blaine
The present report traces the rationale, development and experimental evaluation of the Diagnostic Discrimination Test (DDT). The DDT is a three-choice test of consonant discriminability of the perceptual/acoustic dimensions of consonant phonemes within specific vowel contexts. The DDT was created and developed in an attempt to provide a…
When Less is More: Feedback, Priming, and the Pseudoword Superiority Effect
Massol, Stéphanie; Midgley, Katherine J.; Holcomb, Phillip J.; Grainger, Jonathan
2011-01-01
The present study combined masked priming with electrophysiological recordings to investigate orthographic priming effects with nonword targets. Targets were pronounceable nonwords (e.g., STRENG) or consonant strings (e.g., STRBNG), both of which differed from a real word (STRONG) by a single letter substitution. Targets were preceded by related primes that could be the same as the target (e.g., streng – STRENG, strbng – STRBNG) or the real-word neighbor of the target (e.g., strong – STRENG, strong – STRBNG). Independently of priming, pronounceable nonwords were associated with larger negativities than consonant strings, starting at 290 ms post-target onset. Overall, priming effects were stronger and longer-lasting with pronounceable nonwords than with consonant strings. However, consonant-string targets showed an early effect of word-neighbor priming in the absence of an effect of repetition priming, whereas pronounceable nonwords showed both repetition and word-neighbor priming effects in the same time window. This pattern of priming effects is taken as evidence for feedback from whole-word orthographic representations, activated by the prime stimulus, that influences bottom-up processing of prelexical representations during target processing. PMID:21354110
Is Attention Shared Between the Ears?
Shiffrin, Richard M.; Pisoni, David B.; Castaneda-Mendez, Kicab
2012-01-01
This study tests the locus of attention during selective listening for speech-like stimuli. Can processing be differentially allocated to the two ears? Two conditions were used. In the simultaneous condition, one of four randomly chosen stop consonants was presented to one ear, chosen at random. The sequential condition involved two intervals: in the first, the subject listened to the right ear; in the second, to the left ear. One of the four consonants was presented to an attended ear during one of these intervals. Experiment I used no distracting stimuli. Experiment II utilized a distracting consonant not confusable with any of the four target consonants; this distractor was always presented to any ear not containing a target. In both experiments, simultaneous and sequential performance were essentially identical, despite the need for attention sharing between the two ears in the simultaneous condition. We conclude that selective attention does not occur during perceptual processing of speech sounds presented to the two ears, and suggest that attentive effects arise in short-term memory following processing. PMID:23226838
Stop and Fricative Devoicing in European Portuguese, Italian and German.
Pape, Daniel; Jesus, Luis M T
2015-06-01
This paper describes a cross-linguistic production study of devoicing in European Portuguese (EP), Italian, and German. We recorded all stops and fricatives in four vowel contexts and two word positions, and computed time-varying devoicing patterns across the entire stop and fricative durations. Our results show that, with regard to devoicing behaviour, EP is more similar to German than to Italian. While Italian shows almost no devoicing of phonologically voiced consonants, both EP and German show strong and consistent devoicing throughout the entire consonant. Consonant position had no effect for EP and Italian but a significant effect for German. The height of the vowel context had an effect for German and EP: for EP, a more posterior place of articulation and a low vowel context led to significantly more devoicing. The strong devoicing of all phonologically voiced stops and fricatives, and the influence of vowel context, are surprising new results. With respect to voicing maintenance, EP behaves more like German than like other Romance languages.
Bach Is the Father of Harmony: Revealed by a 1/f Fluctuation Analysis across Musical Genres.
Wu, Dan; Kendrick, Keith M; Levitin, Daniel J; Li, Chaoyi; Yao, Dezhong
2015-01-01
Harmony is a fundamental attribute of music, and close connections exist between music and mathematics, since both pursue harmony and unity. In music, the consonance of notes played simultaneously partly determines our perception of harmony, is associated with aesthetic responses, and influences emotional expression; consonance can thus be considered a window through which to understand and analyze harmony. Here, for the first time, we used a 1/f fluctuation analysis to investigate whether the consonance fluctuation structure in music, across a wide range of composers and genres, follows the scale-free pattern that has been found for pitch, melody, rhythm, human body movements, brain activity, natural images, and geographical features. We then used a network graph approach to investigate which composers were the most influential, both within and across genres. Our results showed that patterns of consonance in music did follow scale-free characteristics, suggesting that this feature is universally evolved in both music and the living world. Furthermore, our network analysis revealed that Bach's harmony patterns had the most influence on those used by other composers, followed closely by Mozart's.
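The core of a 1/f fluctuation analysis is fitting a power-law exponent to the spectrum of a time series. The sketch below is an illustrative reconstruction, not the authors' code: the `spectral_slope` helper and the synthetic noise signals are assumptions, and in the study a consonance series extracted from a score would take the place of the noise examples.

```python
import numpy as np

def spectral_slope(series):
    """Estimate the 1/f exponent of a time series.

    Fits a line to the log-log power spectrum; a slope near -1
    suggests 1/f ("pink") fluctuation, a slope near 0 white noise.
    """
    series = np.asarray(series, dtype=float)
    series = series - series.mean()           # drop the DC offset
    power = np.abs(np.fft.rfft(series)) ** 2
    freqs = np.fft.rfftfreq(series.size)
    keep = freqs > 0                          # exclude the zero frequency
    slope, _ = np.polyfit(np.log10(freqs[keep]), np.log10(power[keep]), 1)
    return slope

# Synthetic sanity checks: white noise should be roughly flat (slope ~ 0),
# while integrated noise falls off steeply (slope ~ -2).
rng = np.random.default_rng(0)
white = rng.standard_normal(4096)
brown = np.cumsum(white)
print(spectral_slope(white), spectral_slope(brown))
```

A consonance series whose slope sits near -1, between these two extremes, is what the scale-free ("1/f") claim in the abstract refers to.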
Crespo-Bojorque, Paola; Toro, Juan M
2015-02-01
Traditionally, physical features of musical chords have been proposed to be at the root of consonance perception. Alternatively, recent studies suggest that different types of experience modulate some perceptual foundations for musical sounds. The present study tested whether the mechanisms involved in the perception of consonance are present in an animal with no extensive experience with harmonic stimuli and a relatively limited vocal repertoire. In Experiment 1, rats were trained to discriminate consonant from dissonant chords and were then tested to explore whether they could generalize this discrimination to novel chords. In Experiment 2, we tested whether rats could discriminate between chords differing only in their interval ratios and generalize across octaves. To contrast the observed pattern of results, human adults were tested with the same stimuli in Experiment 3. Rats successfully discriminated across chords in both experiments, but did not generalize to novel items in either Experiment 1 or Experiment 2. In contrast, humans not only discriminated between the consonance and dissonance categories and among sets of interval ratios, but also generalized their responses to novel items. These results suggest that experience with harmonic sounds may be required for the construction of categories among stimuli varying in frequency ratios. However, the discriminative capacity observed in rats suggests that at least some components of the auditory processing needed to distinguish chords by their interval ratios are shared across species.
Development of a Serial Order in Speech Constrained by Articulatory Coordination
Oohashi, Hiroki; Watanabe, Hama; Taga, Gentaro
2013-01-01
Universal linguistic constraints seem to govern the organization of sound sequences in words. However, our understanding of the origin and development of these constraints is incomplete. One possibility is that the development of neuromuscular control of articulators acts as a constraint for the emergence of sequences in words. Repetitions of the same consonant observed in early infancy and an increase in variation of consonantal sequences over months of age have been interpreted as a consequence of the development of neuromuscular control. Yet, it is not clear how sequential coordination of articulators such as the lips, tongue apex, and tongue dorsum constrains sequences of labial, coronal, and dorsal consonants in words over the course of development. We examined the longitudinal development of consonant-vowel-consonant(-vowel) sequences produced by Japanese children between 7 and 60 months of age. The sequences were classified according to places of articulation for the corresponding consonants. The analyses of individual and group data show that infants prefer repetitive and fronting articulations, as shown in previous studies. Furthermore, we reveal that serial order of different places of articulation within the same organ appears earlier and then gradually develops, whereas serial order of different articulatory organs appears later and then rapidly develops. We also analyzed the sequences produced by English children in the same way and obtained similar developmental trends. These results suggest that the development of intra- and inter-articulator coordination constrains the acquisition of serial order in speech with the complexity that characterizes adult language. PMID:24223827
Specht, Karsten; Baumgartner, Florian; Stadler, Jörg; Hugdahl, Kenneth; Pollmann, Stefan
2014-01-01
To differentiate between stop consonants, the auditory system has to detect subtle place of articulation (PoA) and voice-onset time (VOT) differences between them. How this differential processing is represented at the cortical level remains unclear. The present functional magnetic resonance imaging (fMRI) study takes advantage of the superior spatial resolution and high sensitivity of ultra-high-field 7 T MRI. Subjects attentively listened to consonant–vowel (CV) syllables with an alveolar or bilabial stop consonant and either a short or long VOT. The results showed an overall bilateral activation pattern in the posterior temporal lobe during processing of the CV syllables. This was, however, modulated most strongly by PoA, such that syllables with an alveolar stop consonant showed more strongly left-lateralized activation. In addition, analysis of the underlying functional and effective connectivity revealed an inhibitory effect of the left planum temporale (PT) on the right auditory cortex (AC) during the processing of alveolar CV syllables. The connectivity results also indicated a directed information flow from the right to the left AC, and further to the left PT, for all syllables. These results indicate that auditory speech perception relies on an interplay between the left and right ACs, with the left PT as modulator, and that the degree of functional asymmetry is determined by the acoustic properties of the CV syllables. PMID:24966841
Lohmander, A; Willadsen, E; Persson, C; Henningsson, G; Bowden, M; Hutters, B
2009-07-01
Objective: To present the methodology for speech assessment in the Scandcleft project and to discuss issues arising from a pilot study. Design: Description of the methodology and a blinded test of speech assessment; speech samples and instructions for data collection and analysis, for comparisons of speech outcomes across the five included languages, were developed and tested. Participants and Materials: Randomly selected video recordings of 10 five-year-old children from each language (n = 50) were included. Speech material consisted of test consonants in single words, connected speech, and syllable chains with nasal consonants. Five experienced speech and language pathologists participated as observers. Analyses comprised narrow phonetic transcription of test consonants translated into cleft speech characteristics, ordinal-scale rating of resonance, and perceived velopharyngeal closure (VPC); a velopharyngeal composite score (VPC-sum) was extrapolated from the raw data, and intra-rater agreement comparisons were performed. Results: Intra-rater agreement for the consonant analysis ranged from 53% to 89%; for hypernasality on high vowels in single words, from 20% to 80%; agreement between the VPC-sum and the overall rating of VPC was 78%. Conclusions: Pooling data from speakers of different languages in the same trial, and comparing speech outcomes across trials, seems possible if the assessment concerns consonants and is confined to speech units that are phonetically similar across languages; agreed conventions and rules are important. A composite variable for perceptual assessment of velopharyngeal function during speech seems usable, whereas the method for hypernasality evaluation requires further testing.
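The intra-rater agreement figures quoted in this record are simple proportion-agreement percentages. As a hypothetical sketch (not the Scandcleft protocol), agreement between two transcription passes over the same test consonants can be computed like this:

```python
def percent_agreement(pass1, pass2):
    """Percentage of items coded identically across two rating passes."""
    if len(pass1) != len(pass2):
        raise ValueError("both passes must rate the same items")
    matches = sum(a == b for a, b in zip(pass1, pass2))
    return 100.0 * matches / len(pass1)

# Two hypothetical transcription passes over five test consonants.
first = ["t", "d", "t", "k", "t"]
second = ["t", "t", "t", "k", "d"]
print(percent_agreement(first, second))  # 60.0
```

Ranges such as 53% to 89% arise when this figure is computed separately per rater or per speech variable.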
Contrast effects on stop consonant identification.
Diehl, R L; Elman, J L; McCusker, S B
1978-11-01
Changes in the identification of speech sounds following selective adaptation are usually attributed to a reduction in sensitivity of auditory feature detectors. An alternative explanation of these effects is based on the notion of response contrast. In several experiments, subjects identified the initial segment of synthetic consonant-vowel syllables as either the voiced stop [b] or the voiceless stop [pʰ]. Each test syllable had a value of voice onset time (VOT) that placed it near the English voiced-voiceless boundary. When the test syllables were preceded by a single clear [b] (VOT = -100 msec), subjects tended to identify them as [pʰ], whereas when they were preceded by an unambiguous [pʰ] (VOT = 100 msec), the syllables were predominantly labeled [b]. This contrast effect occurred even when the contextual stimuli were velar and the test stimuli were bilabial, which suggests a featural rather than a phonemic basis for the effect. To discount the possibility that these might be instances of single-trial sensory adaptation, we conducted a similar experiment in which the contextual stimuli followed the test items. Reliable contrast effects were still obtained. In view of these results, it appears likely that response contrast accounts for at least some component of the adaptation effects reported in the literature.
Interaction of attention and acoustic factors in dichotic listening for fused words.
McCulloch, Katie; Lachner Bass, Natascha; Dial, Heather; Hiscock, Merrill; Jansen, Ben
2017-07-01
Two dichotic listening experiments examined the degree to which the right-ear advantage (REA) for linguistic stimuli is altered by a "top-down" variable (i.e., directed attention) in conjunction with selected "bottom-up" (acoustic) variables. Halwes fused dichotic words were administered to 99 right-handed adults with instructions to attend to the left or right ear, or to divide attention equally. Stimuli in Experiment 1 were presented without noise or mixed with noise that was high-pass or low-pass filtered, or unfiltered. The stimuli themselves in Experiment 2 were high-pass or low-pass filtered, or unfiltered. The initial consonants of each dichotic pair were categorized according to voice onset time (VOT) and place of articulation (PoA). White noise extinguished both the REA and selective attention, and filtered noise nullified selective attention without extinguishing the REA. Frequency filtering of the words themselves did not alter performance. VOT effects were inconsistent across experiments but PoA analyses indicated that paired velar consonants (/k/ and /g/) yield a left-ear advantage and paradoxical selective-attention results. The findings show that ear asymmetry and the effectiveness of directed attention can be altered by bottom-up variables.
Edwards, Jan; Beckman, Mary E.
2009-01-01
While broad-focus comparisons of consonant inventories across children acquiring different languages can suggest that phonological development follows a universal sequence, finer-grained statistical comparisons can reveal systematic differences. This cross-linguistic study of word-initial lingual obstruents examined some effects of language-specific frequencies on consonant mastery. Repetitions of real words were elicited from 2- and 3-year-old children who were monolingual speakers of English, Cantonese, Greek, or Japanese. The repetitions were recorded and transcribed by an adult native speaker of each language. The results support both language-universal effects in phonological acquisition and language-specific influences related to phoneme and phoneme-sequence frequency. These results suggest that acquisition patterns that are common across languages arise in two ways. One influence is direct, via the universal constraints imposed by the physiology and physics of speech production and perception, and how these predict which contrasts will be easy and which will be difficult for the child to learn to control. The other influence is indirect, via the way universal principles of ease of perception and production tend to influence the lexicons of many languages through commonly attested sound changes. PMID:19890438
McCathren, R B; Yoder, P J; Warren, S F
1999-08-01
This study tested the relationship between prelinguistic vocalization and expressive vocabulary 1 year later in young children with mild to moderate developmental delays. Three vocalization variables were tested: rate of all vocalizations, rate of vocalizations with consonants, and rate of vocalizations used interactively. The 58 toddlers in the study were 17-34 months old, had no sensory impairments, and had Bayley Mental Development Indices (Bayley, 1969; Bayley, 1993) of 35-85. In addition, the children had fewer than 3 words in their expressive vocabularies, and each showed at least one instance of intentional prelinguistic communication during classroom observation before testing. Selected sections of the Communication and Symbolic Behavior Scales procedures (CSBS; Wetherby & Prizant, 1993) were administered at the beginning and at the end of the study. The vocal measures were obtained in the initial CSBS session. One measure of expressive vocabulary was obtained in the CSBS session at the end of the study; expressive vocabulary was also measured in a nonstructured play session at the end of the study. We predicted that rate of vocalization, rate of vocalizations with consonants, and rate of vocalizations used interactively would all be positively related to later expressive vocabulary. The results confirmed the predictions.
González-García, Nadia; González, Martha A; Rendón, Pablo L
2016-07-15
Relationships between musical pitches are described as either consonant, when associated with a pleasant and harmonious sensation, or dissonant, when associated with an inharmonious feeling. The accurate singing of musical intervals requires communication between auditory feedback processing and vocal motor control (i.e., audio-vocal integration) to ensure that each note is produced correctly. The objective of this study was to investigate the neural mechanisms through which trained musicians produce consonant and dissonant intervals. We used four musical intervals (an octave, a major seventh, a fifth, and a tritone) as the main stimuli for auditory discrimination testing, and the same interval tasks to assess vocal accuracy in a group of musicians (11 subjects, all female vocal students at conservatory level). The intervals were chosen to test for differences in the recognition and production of consonant versus dissonant intervals, as well as narrow versus wide intervals. The subjects were studied using fMRI during performance of the interval tasks; the control condition consisted of passive listening. Singing dissonant intervals, as opposed to consonant intervals, led to increased activation in several regions, most notably the primary auditory cortex, the primary somatosensory cortex, the amygdala, the left putamen, and the right insula. Singing wide intervals, as opposed to narrow intervals, resulted in activation of the right anterior insula. Moreover, we observed a correlation between singing in tune and brain activity in the premotor cortex, and a positive correlation between training and activation of the primary somatosensory, primary motor, and premotor cortices during singing. When singing dissonant intervals, a higher degree of training correlated with activation in the right thalamus and the left putamen.
Our results indicate that singing dissonant intervals requires greater involvement of neural mechanisms associated with integrating external feedback from the auditory and sensorimotor systems than singing consonant intervals; it therefore seems likely that dissonant intervals are intoned by adjusting the neural mechanisms used for the production of consonant intervals. Singing wide intervals requires a greater degree of control than singing narrow intervals, as it again involves neural mechanisms that integrate internal and external feedback.
Bratakos, M S; Reed, C M; Delhorne, L A; Denesvich, G
2001-06-01
The objective of this study was to compare the effects of a single-band envelope cue as a supplement to speechreading of segmentals and sentences when presented through either the auditory or tactual modality. The supplementary signal, which consisted of a 200-Hz carrier amplitude-modulated by the envelope of an octave band of speech centered at 500 Hz, was presented through a high-performance single-channel vibrator for tactual stimulation or through headphones for auditory stimulation. Normal-hearing subjects were trained and tested on the identification of a set of 16 medial vowels in /b/-V-/d/ context and a set of 24 initial consonants in C-/a/-C context under five conditions: speechreading alone (S), auditory supplement alone (A), tactual supplement alone (T), speechreading combined with the auditory supplement (S+A), and speechreading combined with the tactual supplement (S+T). Performance on various speech features was examined to determine the contribution of different features toward improvements under the aided conditions for each modality. Performance on the combined conditions (S+A and S+T) was compared with predictions generated from a quantitative model of multi-modal performance. To explore the relationship between benefits for segmentals and for connected speech within the same subjects, sentence reception was also examined for the three conditions of S, S+A, and S+T. For segmentals, performance generally followed the pattern of T < A < S < S+T < S+A. Significant improvements to speechreading were observed with both the tactual and auditory supplements for consonants (10 and 23 percentage-point improvements, respectively), but only with the auditory supplement for vowels (a 10 percentage-point improvement). The results of the feature analyses indicated that improvements to speechreading arose primarily from improved performance on the features low and tense for vowels and on the features voicing, nasality, and plosion for consonants. 
These improvements were greater for auditory relative to tactual presentation. When predicted percent-correct scores for the multi-modal conditions were compared with observed scores, the predicted values always exceeded observed values and the predictions were somewhat more accurate for the S+A than for the S+T conditions. For sentences, significant improvements to speechreading were observed with both the auditory and tactual supplements for high-context materials but again only with the auditory supplement for low-context materials. The tactual supplement provided a relative gain to speechreading of roughly 25% for all materials except low-context sentences (where gain was only 10%), whereas the auditory supplement provided relative gains of roughly 50% (for vowels, consonants, and low-context sentences) to 75% (for high-context sentences). The envelope cue provides a significant benefit to the speechreading of consonant segments when presented through either the auditory or tactual modality and of vowel segments through audition only. These benefits were found to be related to the reception of the same types of features under both modalities (voicing, manner, and plosion for consonants and low and tense for vowels); however, benefits were larger for auditory compared with tactual presentation. The benefits observed for segmentals appear to carry over into benefits for sentence reception under both modalities.
An Analysis of the Most Frequently Occurring Words in Spoken American English.
ERIC Educational Resources Information Center
Plant, Geoff
1999-01-01
A study analyzed the frequency of occurrence of consonants, vowels, and diphthongs, the syllabic structure of the words, and the segmental structure of the 311 monosyllabic words among the 500 words that occur most frequently in English. Three manners of articulation accounted for nearly 75 percent of all consonant occurrences: stops, semi-vowels, and nasals.…
Perception of Non-Native Consonant Length Contrast: The Role of Attention in Phonetic Processing
ERIC Educational Resources Information Center
Porretta, Vincent J.; Tucker, Benjamin V.
2015-01-01
The present investigation examines English speakers' ability to identify and discriminate non-native consonant length contrast. Three groups (L1 English No-Instruction, L1 English Instruction, and L1 Finnish control) performed a speeded forced-choice identification task and a speeded AX discrimination task on Finnish non-words (e.g.…
Level 2 Foundation Units. Key Stage 3: National Strategy.
ERIC Educational Resources Information Center
Department for Education and Skills, London (England).
These foundation units are aimed at pupils working within Level 2 entry to Year 7. They are designed to remind pupils what they know and take them forward. The units also will teach phonics knowledge from consonant-vowel-consonant (CVC) words to long vowel phonemes. The writing units focus on developing the following skills: understanding what a…
The Relative Position Priming Effect Depends on Whether Letters Are Vowels or Consonants
ERIC Educational Resources Information Center
Dunabeitia, Jon Andoni; Carreiras, Manuel
2011-01-01
The relative position priming effect is a type of subset priming in which target word recognition is facilitated as a consequence of priming the word with some of its letters, maintaining their relative position (e.g., "csn" as a prime for "casino"). Five experiments were conducted to test whether vowel-only and consonant-only…
Consonants and Vowels: Different Roles in Early Language Acquisition
ERIC Educational Resources Information Center
Hochmann, Jean-Remy; Benavides-Varela, Silvia; Nespor, Marina; Mehler, Jacques
2011-01-01
Language acquisition involves both acquiring a set of words (i.e. the lexicon) and learning the rules that combine them to form sentences (i.e. syntax). Here, we show that consonants are mainly involved in word processing, whereas vowels are favored for extracting and generalizing structural relations. We demonstrate that such a division of labor…
Perceptual Confusions of American-English Vowels and Consonants by Native Arabic Bilinguals
ERIC Educational Resources Information Center
Shafiro, Valeriy; Levy, Erika S.; Khamis-Dakwar, Reem; Kharkhurin, Anatoliy
2013-01-01
This study investigated the perception of American-English (AE) vowels and consonants by young adults who were either (a) early Arabic-English bilinguals whose native language was Arabic or (b) native speakers of the English dialects spoken in the United Arab Emirates (UAE), where both groups were studying. In a closed-set format, participants…
Perception of Consonants in Reverberation and Noise by Adults Fitted with Bimodal Devices
ERIC Educational Resources Information Center
Mason, Michelle; Kokkinakis, Kostas
2014-01-01
Purpose: The purpose of this study was to evaluate the contribution of a contralateral hearing aid to the perception of consonants, in terms of voicing, manner, and place-of-articulation cues in reverberation and noise by adult cochlear implantees aided by bimodal fittings. Method: Eight postlingually deafened adult cochlear implant (CI) listeners…
Choosing between Alternative Spellings of Sounds: The Role of Context
ERIC Educational Resources Information Center
Treiman, Rebecca; Kessler, Brett
2016-01-01
We investigated how university students select between alternative spellings of phonemes in written production by asking them to spell nonwords whose final consonants have extended spellings (e.g., ‹ff› for /f/) and simpler spellings (e.g., ‹f› for /f/). Participants' choices of spellings for the final consonant were influenced by whether they…
Learning about Spelling Sequences: The Role of Onsets and Rimes in Analogies in Reading.
ERIC Educational Resources Information Center
Goswami, Usha
1991-01-01
In one experiment, children learned more about consonant blends at the onset than at the end of words. In a second experiment, children learned more about rhyming vowel-consonant blend sequences at the end of words than those at the beginning of words, where the vowel extended the onset. (BC)
The Effects of Background Noise on Dichotic Listening to Consonant-Vowel Syllables
ERIC Educational Resources Information Center
Sequeira, Sarah Dos Santos; Specht, Karsten; Hamalainen, Heikki; Hugdahl, Kenneth
2008-01-01
Lateralization of verbal processing is frequently studied with the dichotic listening technique, yielding a so called right ear advantage (REA) to consonant-vowel (CV) syllables. However, little is known about how background noise affects the REA. To address this issue, we presented CV-syllables either in silence or with traffic background noise…
Strategies for the Production of Spanish Stop Consonants by Native Speakers of English.
ERIC Educational Resources Information Center
Zampini, Mary L.
A study examined patterns in production of Spanish voiced and voiceless stop consonants by native English speakers, focusing on the interaction between two acoustic cues of stops: voice closure interval and voice onset time (VOT). The study investigated whether learners acquire the appropriate phonetic categories with regard to these stops and if…
Consonant Accuracy after Severe Pediatric Traumatic Brain Injury: A Prospective Cohort Study
ERIC Educational Resources Information Center
Campbell, Thomas F.; Dollaghan, Christine; Janosky, Janine; Rusiewicz, Heather Leavy; Small, Steven L.; Dick, Frederic; Vick, Jennell; Adelson, P. David
2013-01-01
Purpose: The authors sought to describe longitudinal changes in Percentage of Consonants Correct--Revised (PCC-R) after severe pediatric traumatic brain injury (TBI), to compare the odds of normal-range PCC-R in children injured at older and younger ages, and to correlate predictor variables and PCC-R outcomes. Method: In 56 children injured…
Treisman, A; Souther, J
1986-02-01
When attention is divided among four briefly exposed syllables, subjects mistakenly detect targets whose letters are present in the display but in the wrong combinations. These illusory conjunctions are somewhat more frequent when the target is a word and when the distractors are nonwords, but the effects of lexical status are small, and no longer reach significance in free report of the same displays. Search performance is further impaired if the nonwords are unpronounceable consonant strings rather than consonant-vowel-consonant strings, but the decrement is due to missed targets rather than to increased conjunction errors. The results are discussed in relation to feature-integration theory and to current models of word perception.
Articulation in schoolchildren and adults with neurofibromatosis type 1.
Cosyns, Marjan; Mortier, Geert; Janssens, Sandra; Bogaert, Famke; D'Hondt, Stephanie; Van Borsel, John
2012-01-01
Several authors have mentioned the occurrence of articulation problems in the neurofibromatosis type 1 (NF1) population. However, few studies have undertaken a detailed analysis of the articulation skills of NF1 patients, especially in schoolchildren and adults. Therefore, the aim of the present study was to examine in depth the articulation skills of NF1 schoolchildren and adults, both phonetically and phonologically. Speech samples were collected from 43 Flemish NF1 patients (14 children and 29 adults), ranging in age from 7 to 53 years, using a standardized speech test in which all Flemish single speech sounds and most clusters occur in all their permissible syllable positions. Analyses concentrated on consonants only and included a phonetic inventory, a phonetic analysis, and a phonological analysis. Phonetic inventories were incomplete in 16.28% (7/43) of participants, in whom fully correct realizations of the sibilants /ʃ/ and/or /ʒ/ were missing. Phonetic analysis revealed that distortions were the predominant phonetic error type. Sigmatismus stridens, multiple ad- or interdentality, and, in children, rhotacismus non vibrans were frequently observed. From a phonological perspective, the most common error types were substitution and syllable structure errors. In particular, devoicing, cluster simplification, and, in children, deletion of word-final consonants were observed. Further, significantly more men than women presented with an incomplete phonetic inventory, and girls tended to display more articulation errors than boys. Additionally, children exhibited significantly more articulation errors than adults, suggesting that although the articulation skills of NF1 patients evolve positively with age, articulation problems do not resolve completely from childhood to adulthood. As such, the articulation errors made by NF1 adults may be regarded as residual articulation disorders.
It can be concluded that the speech of NF1 patients is characterized by mild articulation disorders at an age where this is no longer expected. Readers will be able to describe neurofibromatosis type 1 (NF1) and explain the articulation errors displayed by schoolchildren and adults with this genetic syndrome. © 2011 Elsevier Inc. All rights reserved.
Raud Westberg, Liisi; Höglund Santamarta, Lena; Karlsson, Jenny; Nyberg, Jill; Neovius, Erik; Lohmander, Anette
2017-10-25
The aim of this study was to describe speech at 1, 1;6 and 3 years of age in children born with unilateral cleft lip and palate (UCLP) and relate the findings to operation method and amount of early intervention received. A prospective trial of children born with UCLP operated with a one-stage (OS) palatal repair at 12 months or a two-stage repair (TS) with soft palate closure at 3-4 months and hard palate closure at 12 months was undertaken (Scandcleft). At 1 and 1;6 years the place and manner of articulation and number of different consonants produced in babbling were reported in 33 children. At 3 years of age, percentage of consonants correct adjusted for age (PCC-A) and cleft speech errors were assessed in 26 of the 33 children. Early intervention was not provided as part of the trial but according to the clinical routine and was extracted from patient records. At age 3, the mean PCC-A was 68% and 46% of the children produced articulation errors, with no significant difference between the two groups. At one year there was a significantly higher occurrence of oral stops and anterior place consonants in the TS group. There were significant correlations between consonant production at one and three years of age, but not with the amount of early intervention received. The TS method was beneficial for consonant production at age 1, but this benefit was no longer evident at 1;6 or 3 years. Behaviourally based early intervention still needs to be evaluated.
The Role of the Auditory Brainstem in Processing Musically Relevant Pitch
Bidelman, Gavin M.
2013-01-01
Neuroimaging work has shed light on the cerebral architecture involved in processing the melodic and harmonic aspects of music. Here, recent evidence is reviewed illustrating that subcortical auditory structures contribute to the early formation and processing of musically relevant pitch. Electrophysiological recordings from the human brainstem and population responses from the auditory nerve reveal that nascent features of tonal music (e.g., consonance/dissonance, pitch salience, harmonic sonority) are evident at early, subcortical levels of the auditory pathway. The salience and harmonicity of brainstem activity are strongly correlated with listeners’ perceptual preferences and perceived consonance for the tonal relationships of music. Moreover, the hierarchical ordering of pitch intervals/chords described by Western music practice and their perceptual consonance is well-predicted by the salience with which pitch combinations are encoded in subcortical auditory structures. While the neural correlates of consonance can be tuned and exaggerated with musical training, they persist even in the absence of musicianship or long-term enculturation. As such, it is posited that the structural foundations of musical pitch might result from innate processing performed by the central auditory system. A neurobiological predisposition for consonant, pleasant sounding pitch relationships may be one reason why these pitch combinations have been favored by composers and listeners for centuries. It is suggested that important perceptual dimensions of music emerge well before the auditory signal reaches cerebral cortex and prior to attentional engagement. While cortical mechanisms are no doubt critical to the perception, production, and enjoyment of music, the contribution of subcortical structures implicates a more integrated, hierarchically organized network underlying music processing within the brain. PMID:23717294
Coarticulation in Catalan Dark ["l"] and the Alveolar Trill: General Implications for Sound Change
ERIC Educational Resources Information Center
Recasens, Daniel
2013-01-01
Coarticulation data for Catalan reveal that, while being less sensitive to vowel effects at the consonant period, the alveolar trill [r] exerts more prominent effects than [dark "l"] on both adjacent [a] and [i]. This coarticulatory pattern may be related to strict manner demands on the production of the trill. Both consonants also differ…
Changes Over Time in Global Foreign Accent and Liquid Identifiability and Accuracy.
ERIC Educational Resources Information Center
Riney, Timothy J.; Flege, James E.
1998-01-01
Assessed global foreign accent in sentences and production of two English consonants by Japanese college students during their freshman and senior years (T1, T2). Auditory evaluations by native English-speaking listeners were used to determine to what extent the consonants produced could be identified as intended at T1 and T2; and whether the two…
ERIC Educational Resources Information Center
Hoover, Eric C.; Souza, Pamela E.; Gallun, Frederick J.
2012-01-01
Purpose: The benefits of amplitude compression in hearing aids may be limited by distortion resulting from rapid gain adjustment. To evaluate this, it is convenient to quantify distortion by using a metric that is sensitive to the changes in the processed signal that decrease consonant recognition, such as the Envelope Difference Index (EDI;…
ERIC Educational Resources Information Center
Zajac, David J.; Weissler, Mark C.
2004-01-01
Two studies were conducted to evaluate short-latency vocal tract air pressure responses to sudden pressure bleeds during production of voiceless bilabial stop consonants. It was hypothesized that the occurrence of respiratory reflexes would be indicated by distinct patterns of responses as a function of bleed magnitude. In Study 1, 19 adults…
The Prosodic Licensing of Coda Consonants in Early Speech: Interactions with Vowel Length
ERIC Educational Resources Information Center
Miles, Kelly; Yuen, Ivan; Cox, Felicity; Demuth, Katherine
2016-01-01
English has a word-minimality requirement that all open-class lexical items must contain at least two moras of structure, forming a bimoraic foot (Hayes, 1995).Thus, a word with either a long vowel, or a short vowel and a coda consonant, satisfies this requirement. This raises the question of when and how young children might learn this…
ERIC Educational Resources Information Center
Folker, Joanne E.; Murdoch, Bruce E.; Cahill, Louise M.; Delatycki, Martin B.; Corben, Louise A.; Vogel, Adam P.
2011-01-01
Articulatory kinematics were investigated using electromagnetic articulography (EMA) in four dysarthric speakers with Friedreich's ataxia (FRDA). Specifically, tongue-tip and tongue-back movements were recorded by the AG-200 EMA system during production of the consonants t and k as produced within a sentence utterance and during a rapid syllable…
On Pitch Lowering Not Linked to Voicing: Nguni and Shona Group Depressors
ERIC Educational Resources Information Center
Downing, Laura J.
2009-01-01
This paper tests how well two theories of tone-segment interactions account for the lowering effect of so-called depressor consonants on tone in languages of the Shona and Nguni groups of Southern Bantu. I show that single source theories, which propose that pitch lowering is inextricably linked to consonant voicing, as they are reflexes of the…
Harmonic Domains and Synchronization in Typically and Atypically Developing Hebrew-Speaking Children
ERIC Educational Resources Information Center
Bat-El, Outi
2009-01-01
This paper presents a comparative study of typical and atypical consonant harmony (onset-onset place harmony), with emphasis on (i) the size of the harmonic domain, (ii) the position of the harmonic domain within the prosodic word, and (iii) the maximal size of the prosodic word that exhibits consonant harmony. The data, drawn from typically and…
ERIC Educational Resources Information Center
Haapala, Sini; Niemitalo-Haapola, Elina; Raappana, Antti; Kujala, Tiia; Kujala, Teija; Jansson-Verkasalo, Eira
2015-01-01
Many children experience recurrent acute otitis media (RAOM) in early childhood. In a previous study, 2-year-old children with RAOM were shown to have immature neural patterns for speech sound discrimination. The present study further investigated the consonant inventories of these same children using natural speech samples. The results showed…
Children's Identification of Consonants in a Speech-Shaped Noise or a Two-Talker Masker
ERIC Educational Resources Information Center
Leibold, Lori J.; Buss, Emily
2013-01-01
Purpose: To evaluate child-adult differences for consonant identification in a noise or a 2-talker masker. Error patterns were compared across age and masker type to test the hypothesis that errors with the noise masker reflect limitations in the peripheral encoding of speech, whereas errors with the 2-talker masker reflect target-masker…
ERIC Educational Resources Information Center
Uiboleht, Kaire; Karm, Mari; Postareff, Liisa
2016-01-01
Teaching approaches in higher education are well researched at the general level; research has identified not only the two broad categories of content-focused and learning-focused approaches to teaching but also consonance and dissonance between the aspects of teaching. Consonance means that theoretically coherent teaching practices are employed, but…
ERIC Educational Resources Information Center
Zascavage, Victoria Selden; McKenzie, Ginger Kelley; Buot, Max; Woods, Carol; Orton-Gillingham, Fellow
2012-01-01
This study compared word recognition for words written in a traditional flat font to the same words written in a three-dimensional appearing font determined to create a right hemispheric stimulation. The participants were emergent readers enrolled in Montessori schools in the United States learning to read basic CVC (consonant, vowel, consonant)…
ERIC Educational Resources Information Center
Shosted, Ryan; Hualde, Jose Ignacio; Scarpace, Daniel
2012-01-01
Are palatal consonants articulated by multiple tongue gestures (coronal and dorsal) or by a single gesture that brings the tongue into contact with the palate at several places of articulation? The lenition of palatal consonants (resulting in approximants) has been presented as evidence that palatals are simple, not complex: When reduced, they do…
ERIC Educational Resources Information Center
Redhair, Emily
2011-01-01
This study compared a stimulus fading (SF) procedure with a constant time delay (CTD) procedure for identification of consonant-vowel-consonant (CVC) nonsense words for a participant with autism. An alternating treatments design was utilized through a computer-based format. Receptive identification of target words was evaluated using a computer…
ERIC Educational Resources Information Center
Redhair, Emily I.; McCoy, Kathleen M.; Zucker, Stanley H.; Mathur, Sarup R.; Caterino, Linda
2013-01-01
This study compared a stimulus fading (SF) procedure with a constant time delay (CTD) procedure for identification of consonant-vowel-consonant (CVC) nonsense words for a participant with autism. An alternating treatments design was utilized through a computer-based format. Receptive identification of target words was evaluated using a computer…
Perfect harmony: A mathematical analysis of four historical tunings
NASA Astrophysics Data System (ADS)
Page, Michael F.
2004-10-01
In Western music, a musical interval defined by the frequency ratio of two notes is generally considered consonant when the ratio is composed of small integers. Perfect harmony or an ``ideal just scale,'' which has no exact solution, would require the division of an octave into 12 notes, each of which would be used to create six other consonant intervals. The purpose of this study is to analyze four well-known historical tunings to evaluate how well each one approximates perfect harmony. The analysis consists of a general evaluation in which all consonant intervals are given equal weighting and a specific evaluation for three preludes from Bach's ``Well-Tempered Clavier,'' for which intervals are weighted in proportion to the duration of their occurrence. The four tunings, 5-limit just intonation, quarter-comma meantone temperament, well temperament (Werckmeister III), and equal temperament, are evaluated by measures of centrality, dispersion, distance, and dissonance. When all keys and consonant intervals are equally weighted, equal temperament demonstrates the strongest performance across a variety of measures, although it is not always the best tuning. Given C as the starting note for each tuning, equal temperament and well temperament perform strongly for the three ``Well-Tempered Clavier'' preludes examined.
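The kind of comparison this abstract describes can be illustrated with a minimal sketch (not the paper's actual measures of centrality, dispersion, distance, or dissonance): computing how far equal temperament deviates, in cents, from the small-integer just-intonation ratios of the consonant intervals within an octave.

```python
import math

# Just-intonation ratios for six consonant intervals within an octave
just_ratios = {
    "minor third":    6 / 5,
    "major third":    5 / 4,
    "perfect fourth": 4 / 3,
    "perfect fifth":  3 / 2,
    "minor sixth":    8 / 5,
    "major sixth":    5 / 3,
}
# Equal-temperament semitone counts for the same intervals
semitones = {
    "minor third": 3, "major third": 4, "perfect fourth": 5,
    "perfect fifth": 7, "minor sixth": 8, "major sixth": 9,
}

def cents(ratio):
    """Interval size in cents (1200 cents per octave)."""
    return 1200 * math.log2(ratio)

for name, ratio in just_ratios.items():
    et_ratio = 2 ** (semitones[name] / 12)      # equal-tempered ratio
    deviation = cents(et_ratio) - cents(ratio)  # ET minus just, in cents
    print(f"{name:14s} just = {cents(ratio):7.2f} c, ET dev = {deviation:+6.2f} c")
```

For example, the equal-tempered fifth (700 cents) falls about 2 cents short of the just fifth 3/2 (≈ 701.96 cents), while the equal-tempered major third overshoots 5/4 by roughly 14 cents.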
2013-01-01
Background Previous studies have demonstrated functional and structural temporal lobe abnormalities located close to the auditory cortical regions in schizophrenia. The goal of this study was to determine whether functional abnormalities exist in the cortical processing of musical sound in schizophrenia. Methods Twelve schizophrenic patients and twelve age- and sex-matched healthy controls were recruited, and participants listened to a random sequence of two kinds of sonic entities, intervals (tritones and perfect fifths) and chords (atonal chords, diminished chords, and major triads), of varying degrees of complexity and consonance. The perception of musical sound was investigated by the auditory evoked potentials technique. Results Our results showed that schizophrenic patients exhibited significant reductions in the amplitudes of the N1 and P2 components elicited by musical stimuli, to which consonant sounds contributed more significantly than dissonant sounds. Schizophrenic patients could not perceive the dissimilarity between interval and chord stimuli based on the evoked potentials responses as compared with the healthy controls. Conclusion This study provided electrophysiological evidence of functional abnormalities in the cortical processing of sound complexity and music consonance in schizophrenia. The preliminary findings warrant further investigations for the underlying mechanisms. PMID:23721126
Verschuur, Carl
2009-03-01
Difficulties in speech recognition experienced by cochlear implant users may be attributed both to information loss caused by signal processing and to information loss associated with the interface between the electrode array and auditory nervous system, including cross-channel interaction. The objective of the work reported here was to attempt to partial out the relative contribution of these different factors to consonant recognition. This was achieved by comparing patterns of consonant feature recognition as a function of channel number and presence/absence of background noise in users of the Nucleus 24 device with normal hearing subjects listening to acoustic models that mimicked processing of that device. Additionally, in the acoustic model experiment, a simulation of cross-channel spread of excitation, or "channel interaction," was varied. Results showed that acoustic model experiments were highly correlated with patterns of performance in better-performing cochlear implant users. Deficits to consonant recognition in this subgroup could be attributed to cochlear implant processing, whereas channel interaction played a much smaller role in determining performance errors. The study also showed that large changes to channel number in the Advanced Combination Encoder signal processing strategy led to no substantial changes in performance.
Fels, S S; Hinton, G E
1997-01-01
Glove-Talk II is a system which translates hand gestures to speech through an adaptive interface. Hand gestures are mapped continuously to ten control parameters of a parallel formant speech synthesizer. The mapping allows the hand to act as an artificial vocal tract that produces speech in real time. This gives an unlimited vocabulary in addition to direct control of fundamental frequency and volume. Currently, the best version of Glove-Talk II uses several input devices, a parallel formant speech synthesizer, and three neural networks. The gesture-to-speech task is divided into vowel and consonant production by using a gating network to weight the outputs of a vowel and a consonant neural network. The gating network and the consonant network are trained with examples from the user. The vowel network implements a fixed user-defined relationship between hand position and vowel sound and does not require any training examples from the user. Volume, fundamental frequency, and stop consonants are produced with a fixed mapping from the input devices. With Glove-Talk II, the subject can speak slowly but with far more natural sounding pitch variations than a text-to-speech synthesizer.
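The division of labor described above, with a gating network weighting the outputs of a vowel network and a consonant network, resembles a mixture-of-experts scheme. A minimal sketch with hypothetical layer sizes and untrained random weights (not the authors' actual networks or training setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, b1, W2, b2):
    """One-hidden-layer network; a stand-in for each expert/gating net."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

# Hypothetical sizes: 10 gesture features in, 10 synthesizer controls out
n_in, n_hid, n_out = 10, 16, 10

def make_params():
    return (rng.normal(scale=0.5, size=(n_in, n_hid)), np.zeros(n_hid),
            rng.normal(scale=0.5, size=(n_hid, n_out)), np.zeros(n_out))

vowel_net = make_params()
consonant_net = make_params()
gate_net = make_params()  # only its first output is used as the gate

x = rng.normal(size=n_in)                      # current hand-gesture features
g = 1 / (1 + np.exp(-mlp(x, *gate_net)[0]))    # scalar gate in (0, 1)

# Blend the two experts' synthesizer controls by the gate value
controls = g * mlp(x, *vowel_net) + (1 - g) * mlp(x, *consonant_net)
print(controls.shape)  # (10,)
```

The gate output near 1 hands control to the vowel expert, near 0 to the consonant expert; intermediate values blend the two, which is what lets the hand move smoothly between sound classes.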
Skaalvik, Einar M; Skaalvik, Sidsel
2011-07-01
In their daily teaching and classroom management, teachers inevitably communicate and represent values. The purpose of this study was to explore relations between teachers' perception of school level values represented by the goal structure of the school and value consonance (the degree to which they felt that they shared the prevailing norms and values at the school), teachers' feeling of belonging, emotional exhaustion, job satisfaction, and motivation to leave the teaching profession. The participants were 231 Norwegian teachers in elementary school and middle school. Data were analyzed by means of structural equation modeling (SEM). Teachers' perception of mastery goal structure was strongly and positively related to value consonance and negatively related to emotional exhaustion, whereas performance goal structure, in the SEM model, was not significantly related to these constructs. Furthermore, value consonance was positively related to teachers' feeling of belonging and job satisfaction, whereas emotional exhaustion was negatively associated with job satisfaction. Job satisfaction was the strongest predictor of motivation to leave the teaching profession. A practical implication of the study is that educational goals and values should be explicitly discussed and clarified, both by education authorities and at the school level.
Rusz, Jan; Tykalová, Tereza; Klempíř, Jiří; Čmejla, Roman; Růžička, Evžen
2016-04-01
Although speech disorders represent an early and common manifestation of Parkinson's disease (PD), little is known about their progression and relationship to dopaminergic replacement therapy. The aim of the current study was to examine longitudinal motor speech changes after the initiation of pharmacotherapy in PD. Fifteen newly-diagnosed, untreated PD patients and ten healthy controls of comparable age were investigated. PD patients were tested before the introduction of antiparkinsonian therapy and then twice within the following 6 years. Quantitative acoustic analyses of seven key speech dimensions of hypokinetic dysarthria were performed. At baseline, PD patients showed significantly altered speech including imprecise consonants, monopitch, inappropriate silences, decreased quality of voice, slow alternating motion rates, imprecise vowels and monoloudness. At follow-up assessment, preservation or slight improvement of speech performance was objectively observed in two-thirds of PD patients within the first 3-6 years of dopaminergic treatment, primarily associated with the improvement of stop consonant articulation. The extent of speech improvement correlated with L-dopa equivalent dose (r = 0.66, p = 0.008) as well as with reduction in principal motor manifestations based on the Unified Parkinson's Disease Rating Scale (r = -0.61, p = 0.02), particularly reflecting treatment-related changes in bradykinesia but not in rigidity, tremor, or axial motor manifestations. While speech disorders are frequently present in drug-naive PD patients, they tend to improve or remain relatively stable after the initiation of dopaminergic treatment and appear to be related to the dopaminergic responsiveness of bradykinesia.
Correlational Analysis of Speech Intelligibility Tests and Metrics for Speech Transmission
2017-12-04
…frequency scale (male voice; normal voice effort). Fig. 2: Diagram of a speech communication system (Letowski…). …languages. Consonants contain mostly high-frequency (above 1500 Hz) speech energy, but this energy is relatively small in comparison to that of the whole …voices (Letowski et al. 1993). Since the mid-frequency spectral region contains mostly vowel energy while consonants are high-frequency sounds, an…
ERIC Educational Resources Information Center
Storkel, Holly L.; Hoover, Jill R.
2011-01-01
The goal of this study was to examine the influence of part-word phonotactic probability/neighborhood density on word learning by preschool children with normal vocabularies that varied in size. Ninety-eight children (age 2 ; 11-6 ; 0) were taught consonant-vowel-consonant (CVC) nonwords orthogonally varying in the probability/density of the CV…
[The role of inter-dental consonant si in treating articulation disorders].
Jiang, Li-Ping; Wang, Guo-Min; Yang, Yu-Sheng; Liu, Qiong
2010-12-01
The aim of this study was to rectify deviant tongue position and achieve accurate pronunciation by making use of the protrusion and containment effect of the interdental consonant [si] on the tongue. One hundred and fifty-seven patients with articulation disorders (postpalatoplasty and non-cleft palate) who were diagnosed with velopharyngeal sufficiency were included in this study. There were 111 males and 46 females, aged from 5 to 28 years. Among them, 29 patients presented pharyngeal fricative, 73 palatalized misarticulation, 36 lateralization misarticulation, and 19 misarticulation mixed with palatalized and lateralization errors. During the treatment, the patients were asked to stick out the tongue, with the teeth gently biting it, and pronounce the interdental consonant [si] smoothly. When the tongue was fully protracted, it was retracted to the lingual side of the mandibular anterior teeth to produce a normal apex linguae consonant [s]. This training method had a significant effect for patients with articulation disorders. The effect was most significant for patients with pharyngeal fricative, with an effective rate of 96.55% (28/29), followed by 91.78% (67/73) in palatalized misarticulation, 84.21% (16/19) in palatalized mixed with lateralization misarticulation, and 77.78% (28/36) in lateralization misarticulation. Training the pronunciation of the interdental consonant [si] may control the retrusion, arching, and curling movements of the tongue, thereby providing an effective treatment for articulation disorders such as pharyngeal fricative, palatalized, and lateralization misarticulation.
Supported by Research Fund of Science and Technology Commission of Shanghai Municipality (Grant No.08DZ2271100), Shanghai leading Academic Discipline Project (Grant No.S30206), Research Fund of Bureau of Health of Shanghai Municipality(Grant No.2008160) and Phosphor Science Foundation of Educational Commission of Shanghai Municipality (Grant No.2000SG41).
Bentin, S; Mouchetant-Rostaing, Y; Giard, M H; Echallier, J F; Pernier, J
1999-05-01
The aim of the present study was to examine the time course and scalp distribution of electrophysiological manifestations of the visual word recognition mechanism. Event-related potentials (ERPs) elicited by visually presented lists of words were recorded while subjects were involved in a series of oddball tasks. The distinction between the designated target and nontarget stimuli was manipulated to induce a different level of processing in each session (visual, phonological/phonetic, phonological/lexical, and semantic). The ERPs of main interest in this study were those elicited by nontarget stimuli. In the visual task the targets were twice as big as the nontargets. Words, pseudowords, strings of consonants, strings of alphanumeric symbols, and strings of forms elicited a sharp negative peak at 170 msec (N170); their distribution was limited to the occipito-temporal sites. For the left hemisphere electrode sites, the N170 was larger for orthographic than for nonorthographic stimuli and vice versa for the right hemisphere. The ERPs elicited by all orthographic stimuli formed a clearly distinct cluster that was different from the ERPs elicited by nonorthographic stimuli. In the phonological/phonetic decision task the targets were words and pseudowords rhyming with the French word vitrail, whereas the nontargets were words, pseudowords, and strings of consonants that did not rhyme with vitrail. The most conspicuous potential was a negative peak at 320 msec, which was similarly elicited by pronounceable stimuli but not by nonpronounceable stimuli. The N320 was bilaterally distributed over the middle temporal lobe and was significantly larger over the left than over the right hemisphere. In the phonological/lexical processing task we compared the ERPs elicited by strings of consonants (among which words were selected), pseudowords (among which words were selected), and by words (among which pseudowords were selected). 
The most conspicuous potential in these tasks was a negative potential peaking at 350 msec (N350) elicited by phonologically legal but not by phonologically illegal stimuli. The distribution of the N350 was similar to that of the N320, but broader, including temporo-parietal areas that were not activated in the "rhyme" task. Finally, in the semantic task the targets were abstract words, and the nontargets were concrete words, pseudowords, and strings of consonants. The negative potential in this task peaked at 450 msec. Unlike the lexical decision, the negative peak in this task significantly distinguished not only between phonologically legal and illegal words but also between meaningful (words) and meaningless (pseudowords) phonologically legal structures. The distribution of the N450 included the areas activated in the lexical decision task but also areas in the fronto-central regions. The present data corroborated the functional neuroanatomy of word recognition systems suggested by other neuroimaging methods and described their time course, supporting a cascade-type process that involves different but interconnected neural modules, each responsible for a different level of processing word-related information.
Vocal similarity predicts the relative attraction of musical chords
Purves, Dale; Gill, Kamraan Z.
2018-01-01
Musical chords are combinations of two or more tones played together. While many different chords are used in music, some are heard as more attractive (consonant) than others. We have previously suggested that, for reasons of biological advantage, human tonal preferences can be understood in terms of the spectral similarity of tone combinations to harmonic human vocalizations. Using the chromatic scale, we tested this theory further by assessing the perceived consonance of all possible dyads, triads, and tetrads within a single octave. Our results show that the consonance of chords is predicted by their relative similarity to voiced speech sounds. These observations support the hypothesis that the relative attraction of musical tone combinations is due, at least in part, to the biological advantages that accrue from recognizing and responding to conspecific vocal stimuli. PMID:29255031
Francis, Alexander L.; Kaganovich, Natalya; Driscoll-Huber, Courtney
2008-01-01
In English, voiced and voiceless syllable-initial stop consonants differ in both fundamental frequency at the onset of voicing (onset F0) and voice onset time (VOT). Although both correlates, alone, can cue the voicing contrast, listeners weight VOT more heavily when both are available. Such differential weighting may arise from differences in the perceptual distance between voicing categories along the VOT versus onset F0 dimensions, or it may arise from a bias to pay more attention to VOT than to onset F0. The present experiment examines listeners’ use of these two cues when classifying stimuli in which perceptual distance was artificially equated along the two dimensions. Listeners were also trained to categorize stimuli based on one cue at the expense of another. Equating perceptual distance eliminated the expected bias toward VOT before training, but successfully learning to base decisions more on VOT and less on onset F0 was easier than vice versa. Perceptual distance along both dimensions increased for both groups after training, but only VOT-trained listeners showed a decrease in Garner interference. Results lend qualified support to an attentional model of phonetic learning in which learning involves strategic redeployment of selective attention across integral acoustic cues. PMID:18681610
Phonetic basis of phonemic paraphasias in aphasia: Evidence for cascading activation.
Kurowski, Kathleen; Blumstein, Sheila E
2016-02-01
Phonemic paraphasias are a common presenting symptom in aphasia and are thought to reflect a deficit in which selecting an incorrect phonemic segment results in the clear-cut substitution of one phonemic segment for another. The current study re-examines the basis of these paraphasias. Seven left-hemisphere-damaged aphasic speakers, with a range of lesions and clinical diagnoses including Broca's, conduction, and Wernicke's aphasia, were asked to produce syllable-initial voiced and voiceless fricative consonants, [z] and [s], in CV syllables followed by one of five vowels [i e a o u], in isolation and in a carrier phrase. Acoustic analyses focused on two acoustic parameters signaling voicing in fricative consonants: duration and amplitude properties of the fricative noise. Results show that for all participants, regardless of clinical diagnosis or lesion site, phonemic paraphasias leave an acoustic trace of the original target in the error production. These findings challenge the traditional view that phonemic paraphasias arise from mis-selection of a phonemic unit followed by correct implementation of the mis-selected unit. Rather, they appear to derive from a common mechanism with speech errors, reflecting the co-activation of a target and a competitor that yields speech output with some phonetic properties of both segments. Copyright © 2015 Elsevier Ltd. All rights reserved.
Effects of blocking and presentation on the recognition of word and nonsense syllables in noise
NASA Astrophysics Data System (ADS)
Benkí, José R.
2003-10-01
Listener expectations may have significant effects on spoken word recognition, modulating word similarity effects from the lexicon. This study investigates the effect of blocking by lexical status on the recognition of word and nonsense syllables in noise. 240 phonemically matched word and nonsense CVC syllables [Boothroyd and Nittrouer, J. Acoust. Soc. Am. 84, 101-108 (1988)] were presented to listeners at different S/N ratios for identification. In the mixed condition, listeners were presented with blocks containing both words and nonwords, while listeners in the blocked condition were presented with the trials in blocks containing either words or nonwords. The targets were presented in isolation with 50 ms of preceding and following noise. Preliminary results indicate no effect of blocking on accuracy for either word or nonsense syllables; results from neighborhood density analyses will be presented. Consistent with previous studies, a j-factor analysis indicates that words are perceived as containing at least 0.5 fewer independent units than nonwords in both conditions. Relative to previous work on syllables presented in a frame sentence [Benkí, J. Acoust. Soc. Am. 113, 1689-1705 (2003)], initial consonants were perceived significantly less accurately, while vowels and final consonants were perceived at comparable rates.
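The j-factor analysis referenced above relates whole-item recognition probability to part (phoneme) recognition probability. A minimal sketch, assuming the standard Boothroyd-Nittrouer relation p_whole = p_part ** j; the probabilities below are hypothetical illustrations, not the study's data:

```python
import math

def j_factor(p_whole: float, p_part: float) -> float:
    """Estimate j, the effective number of independent perceptual units
    in a whole item, from p_whole = p_part ** j."""
    return math.log(p_whole) / math.log(p_part)

# Hypothetical recognition probabilities at a single S/N ratio:
p_phoneme = 0.80   # mean probability of identifying a single phoneme
p_word = 0.55      # probability of identifying a whole CVC word
p_nonword = 0.51   # probability of identifying a whole CVC nonword

j_words = j_factor(p_word, p_phoneme)
j_nonwords = j_factor(p_nonword, p_phoneme)

# Lexical context lowers j: words behave as if they contain fewer
# independent units than phonemically matched nonwords.
print(round(j_words, 2), round(j_nonwords, 2))
```

A smaller j for words than for nonwords is the signature of lexical constraint, which is the "at least 0.5 fewer independent units" effect the abstract reports.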
Are stimulus-response rules represented phonologically for task-set preparation and maintenance?
van 't Wout, Félice; Lavric, Aureliu; Monsell, Stephen
2013-09-01
Accounts of task-set control generally assume that the current task's stimulus-response (S-R) rules must be elevated to a privileged state of activation. How are they represented in this state? In 3 task-cuing experiments, we tested the hypothesis that phonological working memory is used to represent S-R rules for task-set control by getting participants to switch between 2 sets of arbitrary S-R rules and manipulating the articulatory duration (Experiment 1) or phonological similarity (Experiments 2 and 3) of the names of the stimulus terms. The task cue specified which of 2 objects (Experiment 1) or consonants (Experiment 2) in a display to identify with a key press. In Experiment 3, participants switched between identifying an object/consonant and its color/visual texture. After practice, neither the duration nor the similarity of the stimulus terms had detectable effects on overall performance, task-switch cost, or its reduction with preparation. Only in the initial single-task training blocks was phonological similarity a significant handicap. Hence, beyond a very transient role, there is no evidence that (declarative) phonological working memory makes a functional contribution to representing S-R rules for task-set control, arguably because once learned, they are represented in nonlinguistic procedural working memory. PsycINFO Database Record (c) 2013 APA, all rights reserved.
ERIC Educational Resources Information Center
Lohmander, Anette; Lillvik, Malin; Friede, Hans
2004-01-01
The purpose of this study was to investigate the impact of pre-surgical Infant Orthopaedics (IO) on consonant production at 18 months of age in children with Unilateral Cleft Lip and Palate (UCLP) and to compare the consonant production to that of age-matched children without clefts. The first ten children in a consecutive series of 20 with UCLP…
ERIC Educational Resources Information Center
Moradi, Shahram; Lidestam, Bjorn; Danielsson, Henrik; Ng, Elaine Hoi Ning; Ronnberg, Jerker
2017-01-01
Purpose: We sought to examine the contribution of visual cues in audiovisual identification of consonants and vowels--in terms of isolation points (the shortest time required for correct identification of a speech stimulus), accuracy, and cognitive demands--in listeners with hearing impairment using hearing aids. Method: The study comprised 199…
ERIC Educational Resources Information Center
Meerschman, Iris; Van Lierde, Kristiane; Peeters, Karen; Meersman, Eline; Claeys, Sofie; D'haeseleer, Evelien
2017-01-01
Purpose: The purpose of this study was to determine the short-term effect of 2 semi-occluded vocal tract training programs, "resonant voice training using nasal consonants" versus "straw phonation," on the vocal quality of vocally healthy future occupational voice users. Method: A multigroup pretest--posttest randomized control…
Articulatory Control in Childhood Apraxia of Speech in a Novel Word-Learning Task.
Case, Julie; Grigos, Maria I
2016-12-01
Articulatory control and speech production accuracy were examined in children with childhood apraxia of speech (CAS) and typically developing (TD) controls within a novel word-learning task to better understand the influence of planning and programming deficits in the production of unfamiliar words. Participants included 16 children between the ages of 5 and 6 years (8 CAS, 8 TD). Short- and long-term changes in lip and jaw movement, consonant and vowel accuracy, and token-to-token consistency were measured for 2 novel words that differed in articulatory complexity. Children with CAS displayed short- and long-term changes in consonant accuracy and consistency. Lip and jaw movements did not change over time. Jaw movement duration was longer in children with CAS than in TD controls. Movement stability differed between low- and high-complexity words in both groups. Children with CAS displayed a learning effect for consonant accuracy and consistency. Lack of change in movement stability may indicate that children with CAS require additional practice to demonstrate changes in speech motor control, even within production of novel word targets with greater consonant and vowel accuracy and consistency. The longer movement duration observed in children with CAS is believed to give children additional time to plan and program movements within a novel skill.
Fogerty, Daniel; Ahlstrom, Jayne B.; Bologna, William J.; Dubno, Judy R.
2015-01-01
This study investigated how single-talker modulated noise impacts consonant and vowel cues to sentence intelligibility. Younger normal-hearing, older normal-hearing, and older hearing-impaired listeners completed speech recognition tests. All listeners received spectrally shaped speech matched to their individual audiometric thresholds to ensure sufficient audibility, with the exception of a second younger listener group who received spectral shaping that matched the mean audiogram of the hearing-impaired listeners. Results demonstrated minimal declines in intelligibility for older listeners with normal hearing and more evident declines for older hearing-impaired listeners, possibly related to impaired temporal processing. A correlational analysis suggests a common underlying ability to process information during vowels that is predictive of speech-in-modulated-noise abilities, whereas the ability to use consonant cues appears specific to the particular characteristics of the noise and interruption. Performance declines for older listeners were mostly confined to consonant conditions. Spectral shaping accounted for the primary contributions of audibility. However, comparison with the young spectral controls who received identical spectral shaping suggests that this procedure may reduce wideband temporal modulation cues due to frequency-specific amplification that affected high-frequency consonants more than low-frequency vowels. These spectral changes may impact speech intelligibility in certain modulation masking conditions. PMID:26093436
Dressler, William W.; Balieiro, Mauro C.; dos Santos, José E.
2018-01-01
Describing the link between culture (as a phenomenon pertaining to social aggregates) and the beliefs and behaviors of individuals has eluded satisfactory resolution; however, contemporary cognitive culture theory offers hope. In this theory, culture is conceptualized as cognitive models describing specific domains of life that are shared by members of a social group. It is sharing that gives culture its aggregate properties. There are two aspects to these cultural models at the level of the individual. Persons have their own representations of the world that correspond incompletely to the shared model—this is their ‘cultural competence.’ Persons are also variable in the degree to which they can put cultural models into practice in their own lives—this is their ‘cultural consonance.’ Low cultural consonance is a stressful experience and has been linked to higher psychological distress. The relationship of cultural competence per se and psychological distress is less clear. In the research reported here, cultural competence and cultural consonance are measured on the same sample and their associations with psychological distress are examined using multiple regression analysis. Results indicate that, with respect to psychological distress, while it is good to know the cultural model, it is better to put it into practice. PMID:29379460
Neural mechanisms underlying valence inferences to sound: The role of the right angular gyrus.
Bravo, Fernando; Cross, Ian; Hawkins, Sarah; Gonzalez, Nadia; Docampo, Jorge; Bruno, Claudio; Stamatakis, Emmanuel Andreas
2017-07-28
We frequently infer others' intentions based on non-verbal auditory cues. Although the brain underpinnings of social cognition have been extensively studied, no empirical work has yet examined the impact of musical structure manipulation on the neural processing of emotional valence during mental state inferences. We used a novel sound-based theory-of-mind paradigm in which participants categorized stimuli of different sensory dissonance levels in terms of positive/negative valence. While consistent with previous studies proposing facilitated encoding of consonances, our results demonstrated that distinct levels of consonance/dissonance elicited differential influences on the right angular gyrus, an area implicated in mental state attribution and attention reorienting processes. Functional and effective connectivity analyses further showed that consonances modulated a specific inhibitory interaction from associative memory to mental state attribution substrates. Following evidence suggesting that individuals with autism may process social affective cues differently, we assessed the relationship between participants' task performance and self-reported autistic traits in clinically typical adults. Higher scores on the social cognition scales of the Autism-Spectrum Quotient (AQ) were associated with deficits in recognising positive valence in consonant sound cues. These findings are discussed with respect to Bayesian perspectives on autistic perception, which highlight a functional failure to optimize precision in relation to prior beliefs. Copyright © 2017 Elsevier Ltd. All rights reserved.
Hazan, Valerie; Messaoud-Galusi, Souhila; Rosen, Stuart
2013-02-01
In this study, the authors aimed to determine whether children with dyslexia (hereafter referred to as "DYS children") are more affected than children with average reading ability (hereafter referred to as "AR children") by talker and intonation variability when perceiving speech in noise. Thirty-four DYS and 25 AR children were tested on their perception of consonants in naturally produced CV tokens in multitalker babble. Twelve CVs were presented for identification in four conditions varying in the degree of talker and intonation variability. Consonant place (/bi/-/di/) and voicing (/bi/-/pi/) discrimination were investigated with the same conditions. DYS children made slightly more identification errors than AR children but only for conditions with variable intonation. Errors were more frequent for a subset of consonants, generally weakly encoded for AR children, for tokens with intonation patterns (steady and rise-fall) that occur infrequently in connected discourse. In discrimination tasks, which have a greater memory and cognitive load, DYS children scored lower than AR children across all conditions. Unusual intonation patterns had a disproportionate (but small) effect on consonant intelligibility in noise for DYS children, but adding talker variability did not. DYS children do not appear to have a general problem in perceiving speech in degraded conditions, which makes it unlikely that they lack robust phonological representations.
Fels, S S; Hinton, G E
1998-01-01
Glove-TalkII is a system which translates hand gestures to speech through an adaptive interface. Hand gestures are mapped continuously to ten control parameters of a parallel formant speech synthesizer. The mapping allows the hand to act as an artificial vocal tract that produces speech in real time. This gives an unlimited vocabulary in addition to direct control of fundamental frequency and volume. Currently, the best version of Glove-TalkII uses several input devices (including a Cyberglove, a ContactGlove, a three-space tracker, and a foot pedal), a parallel formant speech synthesizer, and three neural networks. The gesture-to-speech task is divided into vowel and consonant production by using a gating network to weight the outputs of a vowel and a consonant neural network. The gating network and the consonant network are trained with examples from the user. The vowel network implements a fixed user-defined relationship between hand position and vowel sound and does not require any training examples from the user. Volume, fundamental frequency, and stop consonants are produced with a fixed mapping from the input devices. One subject has trained to speak intelligibly with Glove-TalkII. He speaks slowly but with far more natural sounding pitch variations than a text-to-speech synthesizer.
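The gating architecture described above is a mixture of experts: the gating network weights the vowel and consonant experts' outputs to produce the synthesizer's control parameters. A minimal sketch with hypothetical stand-in networks (the real system's trained networks and Cyberglove feature set are not reproduced here):

```python
import numpy as np

def glove_talk_output(x, vowel_net, consonant_net, gate_net):
    """Mixture-of-experts combination (sketch): the gating network's
    vowel probability g blends the two experts' outputs into the
    10 control parameters of a formant synthesizer."""
    g = gate_net(x)  # scalar in (0, 1)
    return g * vowel_net(x) + (1.0 - g) * consonant_net(x)

# Hypothetical stand-ins: linear maps from a 3-D hand-feature vector to
# 10 synthesizer parameters, and a sigmoid gate on hand openness (x[0]).
rng = np.random.default_rng(0)
W_v = rng.normal(size=(10, 3))
W_c = rng.normal(size=(10, 3))
vowel_net = lambda x: W_v @ x
consonant_net = lambda x: W_c @ x
gate_net = lambda x: 1.0 / (1.0 + np.exp(-4.0 * (x[0] - 0.5)))

params = glove_talk_output(np.array([0.9, 0.2, 0.1]), vowel_net,
                           consonant_net, gate_net)
print(params.shape)  # 10 control parameters
```

In the actual system only the gate and consonant expert are trained from user examples; the vowel expert encodes a fixed hand-position-to-vowel map, but the blending step takes this general form.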
The production and phonetic representation of fake geminates in English
Oh, Grace E.; Redford, Melissa A.
2011-01-01
The current study focused on the production of non-contrastive geminates across different boundary types in English to investigate the hypothesis that word-internal heteromorphemic geminates may differ from those that arise across a word boundary. In this study, word-internal geminates arising from affixation, and described as either assimilated or concatenated, were matched to heteromorphemic geminates arising from sequences of identical consonants that spanned a word boundary and to word-internal singletons. Word-internal geminates were found to be longer than matched singletons in absolute and relative terms. By contrast, heteromorphemic geminates that occurred at word boundaries were only longer than matched singletons in absolute terms. In addition, heteromorphemic geminates in two word phrases were typically “pulled apart” in careful speech; that is, speakers marked the boundaries between free morphemes with pitch changes and pauses. Morpheme boundaries in words with bound affixes were very rarely highlighted in this way. These results are taken to indicate that most word-internal heteromorphemic geminates are represented as a single long consonant in the speech plan rather than as a consonant sequence. Only those geminates that arise in two word phrases exhibit phonetic characteristics that are fully consistent with the representation of two identical consonants crossing a morpheme boundary. PMID:22611293
Sætrevik, Bjørn
2012-01-01
The dichotic listening task is typically administered by presenting a consonant-vowel (CV) syllable to each ear and asking the participant to report the syllable heard most clearly. The results tend to show more reports of the right ear syllable than of the left ear syllable, an effect called the right ear advantage (REA). The REA is assumed to be due to the crossing over of auditory fibres and the processing of language stimuli being lateralised to left temporal areas. However, the tendency for most dichotic listening experiments to use only CV syllable stimuli limits the extent to which the conclusions can be generalised to also apply to other speech phonemes. The current study re-examines the REA in dichotic listening by using both CV and vowel-consonant (VC) syllables and combinations thereof. Results showed a replication of the REA response pattern for both CV and VC syllables, thus indicating that the general assumption of left-side localisation of processing can be applied for both types of stimuli. Further, on trials where a CV is presented in one ear and a VC is presented in the other ear, the CV is selected more often than the VC, indicating that these phonemes have an acoustic or processing advantage.
Léger, Agnès C.; Reed, Charlotte M.; Desloge, Joseph G.; Swaminathan, Jayaganesh; Braida, Louis D.
2015-01-01
Consonant-identification ability was examined in normal-hearing (NH) and hearing-impaired (HI) listeners in the presence of steady-state and 10-Hz square-wave interrupted speech-shaped noise. The Hilbert transform was used to process speech stimuli (16 consonants in a-C-a syllables) to present envelope cues, temporal fine-structure (TFS) cues, or envelope cues recovered from TFS speech. The performance of the HI listeners was inferior to that of the NH listeners both in terms of lower levels of performance in the baseline condition and in the need for higher signal-to-noise ratio to yield a given level of performance. For NH listeners, scores were higher in interrupted noise than in steady-state noise for all speech types (indicating substantial masking release). For HI listeners, masking release was typically observed for TFS and recovered-envelope speech but not for unprocessed and envelope speech. For both groups of listeners, TFS and recovered-envelope speech yielded similar levels of performance and consonant confusion patterns. The masking release observed for TFS and recovered-envelope speech may be related to level effects associated with the manner in which the TFS processing interacts with the interrupted noise signal, rather than to the contributions of TFS cues per se. PMID:26233038
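The envelope/TFS split via the Hilbert transform can be sketched as follows. This is a numpy-only illustration of the standard analytic-signal construction (equivalent to scipy.signal.hilbert), not the study's exact processing chain:

```python
import numpy as np

def analytic_signal(x: np.ndarray) -> np.ndarray:
    """FFT-based analytic signal: zero negative frequencies,
    double positive ones (same construction as scipy.signal.hilbert)."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(spectrum * h)

def envelope_and_tfs(x):
    """Split a band-limited signal into its Hilbert envelope (slow
    amplitude modulations) and temporal fine structure (rapid carrier)."""
    a = analytic_signal(x)
    return np.abs(a), np.cos(np.angle(a))

# Toy example: a 100 Hz carrier with a 5 Hz amplitude modulation.
fs = 8000
t = np.arange(fs) / fs
x = (1.0 + 0.5 * np.sin(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 100 * t)
env, tfs = envelope_and_tfs(x)

# By construction, envelope * TFS reconstructs the original signal.
print(np.allclose(env * tfs, x))
```

In envelope speech only `env` (imposed on a carrier) is retained; in TFS speech only `tfs` is; "recovered envelope" refers to the envelope that reappears when TFS speech is passed through narrow auditory-like filters.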
ERIC Educational Resources Information Center
Yurtbasi, Metin
2016-01-01
The phoneme /k/, represented by the letter "K", has two voiceless allophones, the (alveolo)palatal stop [c] and the velar stop [k], which exist in almost all languages of the world. Which of these is sounded in speech is determined by the vowels adjacent to them. In Turkish, the dark variant [k] occurs…
Donaldson, Gail S; Dawson, Patricia K; Borden, Lamar Z
2011-01-01
Previous studies have confirmed that current steering can increase the number of discriminable pitches available to many cochlear implant (CI) users; however, the ability to perceive additional pitches has not been linked to improved speech perception. The primary goals of this study were to determine (1) whether adult CI users can achieve higher levels of spectral cue transmission with a speech processing strategy that implements current steering (Fidelity120) than with a predecessor strategy (HiRes) and, if so, (2) whether the magnitude of improvement can be predicted from individual differences in place-pitch sensitivity. A secondary goal was to determine whether Fidelity120 supports higher levels of speech recognition in noise than HiRes. A within-subjects repeated measures design evaluated speech perception performance with Fidelity120 relative to HiRes in 10 adult CI users. Subjects used the novel strategy (either HiRes or Fidelity120) for 8 wks during the main study; a subset of five subjects used Fidelity120 for three additional months after the main study. Speech perception was assessed for the spectral cues related to vowel F1 frequency, vowel F2 frequency, and consonant place of articulation; overall transmitted information for vowels and consonants; and sentence recognition in noise. Place-pitch sensitivity was measured for electrode pairs in the apical, middle, and basal regions of the implanted array using a psychophysical pitch-ranking task. With one exception, there was no effect of strategy (HiRes versus Fidelity120) on the speech measures tested, either during the main study (N = 10) or after extended use of Fidelity120 (N = 5). The exception was a small but significant advantage for HiRes over Fidelity120 for consonant perception during the main study. 
Examination of individual subjects' data revealed that 3 of 10 subjects demonstrated improved perception of one or more spectral cues with Fidelity120 relative to HiRes after 8 wks or longer experience with Fidelity120. Another three subjects exhibited initial decrements in spectral cue perception with Fidelity120 at the 8-wk time point; however, evidence from one subject suggested that such decrements may resolve with additional experience. Place-pitch thresholds were inversely related to improvements in vowel F2 frequency perception with Fidelity120 relative to HiRes. However, no relationship was observed between place-pitch thresholds and the other spectral measures (vowel F1 frequency or consonant place of articulation). Findings suggest that Fidelity120 supports small improvements in the perception of spectral speech cues in some Advanced Bionics CI users; however, many users show no clear benefit. Benefits are more likely to occur for vowel spectral cues (related to F1 and F2 frequency) than for consonant spectral cues (related to place of articulation). There was an inconsistent relationship between place-pitch sensitivity and improvements in spectral cue perception with Fidelity120 relative to HiRes. This may partly reflect the small number of sites at which place-pitch thresholds were measured. Contrary to some previous reports, there was no clear evidence that Fidelity120 supports improved sentence recognition in noise.
Meyer, Ted A; Frisch, Stefan A; Pisoni, David B; Miyamoto, Richard T; Svirsky, Mario A
2003-07-01
Do cochlear implants provide enough information to allow adult cochlear implant users to understand words in ways that are similar to listeners with acoustic hearing? Can we use a computational model to gain insight into the underlying mechanisms used by cochlear implant users to recognize spoken words? The Neighborhood Activation Model has been shown to be a reasonable model of word recognition for listeners with normal hearing. The Neighborhood Activation Model assumes that words are recognized in relation to other similar-sounding words in a listener's lexicon. The probability of correctly identifying a word is based on the phoneme perception probabilities from a listener's closed-set consonant and vowel confusion matrices modified by the relative frequency of occurrence of the target word compared with similar-sounding words (neighbors). Common words with few similar-sounding neighbors are more likely to be selected as responses than less common words with many similar-sounding neighbors. Recent studies have shown that several of the assumptions of the Neighborhood Activation Model also hold true for cochlear implant users. Closed-set consonant and vowel confusion matrices were obtained from 26 postlingually deafened adults who use cochlear implants. Confusion matrices were used to represent input errors to the Neighborhood Activation Model. Responses to the different stimuli were then generated by the Neighborhood Activation Model after incorporating the frequency of occurrence counts of the stimuli and their neighbors. Model outputs were compared with obtained performance measures on the Consonant-Vowel Nucleus-Consonant word test. Information transmission analysis was used to assess whether the Neighborhood Activation Model was able to successfully generate and predict word and individual phoneme recognition by cochlear implant users. 
The Neighborhood Activation Model predicted Consonant-Vowel Nucleus-Consonant test words at levels similar to those correctly identified by the cochlear implant users. The Neighborhood Activation Model also predicted phoneme feature information well. The results obtained suggest that the Neighborhood Activation Model provides a reasonable explanation of word recognition by postlingually deafened adults after cochlear implantation. It appears that multichannel cochlear implants give cochlear implant users access to their mental lexicons in a manner that is similar to listeners with acoustic hearing. The lexical properties of the test stimuli used to assess performance are important to spoken-word recognition and should be included in further models of the word recognition process.
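The NAM decision rule described above, phoneme confusion probabilities weighted by relative word frequency and normalized over similar-sounding candidates, can be sketched as follows. The lexicon, frequency counts, and confusion matrix here are hypothetical toys, not the study's materials:

```python
def phoneme_match_prob(stimulus, candidate, confusion):
    """Probability that each stimulus phoneme is heard as the
    corresponding candidate phoneme, per the confusion matrix."""
    p = 1.0
    for s, c in zip(stimulus, candidate):
        p *= confusion[s].get(c, 0.0)
    return p

def nam_response_probs(stimulus, lexicon, confusion):
    """Frequency-weighted response probability for each candidate word
    (sketch of the Neighborhood Activation Model decision rule)."""
    activations = {
        word: phoneme_match_prob(stimulus, word, confusion) * freq
        for word, freq in lexicon.items()
        if len(word) == len(stimulus)
    }
    total = sum(activations.values()) or 1.0
    return {w: a / total for w, a in activations.items()}

# Toy confusion matrix in which /b/ and /p/ are sometimes confused,
# and a two-word lexicon where "bat" is far more frequent than "pat".
confusion = {
    "b": {"b": 0.8, "p": 0.2},
    "p": {"p": 0.8, "b": 0.2},
    "a": {"a": 1.0},
    "t": {"t": 1.0},
}
lexicon = {("b", "a", "t"): 100, ("p", "a", "t"): 10}

probs = nam_response_probs(("b", "a", "t"), lexicon, confusion)
# The common word captures most responses from its sparse neighborhood.
print(probs[("b", "a", "t")] > probs[("p", "a", "t")])
```

This captures the abstract's point that common words with few similar-sounding neighbors are more likely to be selected as responses than less common words with many neighbors.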
Serbo-Croatian. SC-15A. Part 1. Basic Structure,
1983-09-30
regularly replace certain others in specific linguistic conditions. There are two main kinds of such mutations. One set is associated with sonority (voicing or non-voicing), the other with palatalization. (a) Mutations caused by voicing or devoicing concern the plosive and sibilant consonants; no such mutations exist for the consonants f, h, c, J, r, v. The plosives and sibilants mutate as follows: p alternates with b (p/b), s alternates with z (s/z), t alternates
Patterns of phonological disability in Cantonese-speaking children in Hong Kong.
Cheung, P; Abberton, E
2000-01-01
Tone, vowel and consonant production are described for a large group of Cantonese-speaking children assessed in speech and language therapy clinics in Hong Kong. The patterns of disability follow predictions made on the basis of work on normal phonological development in Cantonese, and on psychoacoustic factors in acquisition: consonants account for more disability than vowels, and tones are least problematic. Possible articulatory and auditory contributions to explanation of the observed patterns are discussed.
ERIC Educational Resources Information Center
Eshghi, Marziye; Vallino, Linda D.; Baylis, Adriane L.; Preisser, John S.; Zajac, David J.
2017-01-01
Purpose: The objective was to determine velopharyngeal (VP) status of stop consonants and vowels produced by young children with repaired cleft palate (CP) and typically developing (TD) children from 12 to 18 months of age. Method: Nasal ram pressure (NRP) was monitored in 9 children (5 boys, 4 girls) with repaired CP with or without cleft lip and…
Alexander, Joshua M.
2016-01-01
By varying parameters that control nonlinear frequency compression (NFC), this study examined how different ways of compressing inaudible mid- and/or high-frequency information at lower frequencies influences perception of consonants and vowels. Twenty-eight listeners with mild to moderately severe hearing loss identified consonants and vowels from nonsense syllables in noise following amplification via a hearing aid simulator. Low-pass filtering and the selection of NFC parameters fixed the output bandwidth at a frequency representing a moderately severe (3.3 kHz, group MS) or a mild-to-moderate (5.0 kHz, group MM) high-frequency loss. For each group (n = 14), effects of six combinations of NFC start frequency (SF) and input bandwidth [by varying the compression ratio (CR)] were examined. For both groups, the 1.6 kHz SF significantly reduced vowel and consonant recognition, especially as CR increased; whereas, recognition was generally unaffected if SF increased at the expense of a higher CR. Vowel recognition detriments for group MS were moderately correlated with the size of the second formant frequency shift following NFC. For both groups, significant improvement (33%–50%) with NFC was confined to final /s/ and /z/ and to some VCV tokens, perhaps because of listeners' limited exposure to each setting. No set of parameters simultaneously maximized recognition across all tokens. PMID:26936574
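The start frequency (SF) and compression ratio (CR) jointly define an input-output frequency map. A minimal linear-in-frequency sketch follows; commercial NFC implementations typically compress on a logarithmic frequency scale, so this illustrates only the shape of the idea, and the CR value is illustrative rather than one of the study's settings:

```python
def nfc_map(f_in: float, start_freq: float, ratio: float) -> float:
    """Nonlinear frequency compression input-output map (sketch):
    frequencies at or below the start frequency pass unchanged;
    frequencies above it are compressed toward it by `ratio`."""
    if f_in <= start_freq:
        return f_in
    return start_freq + (f_in - start_freq) / ratio

# Example: SF = 1.6 kHz (one of the study's start frequencies) with a
# hypothetical CR of 3. High-frequency energy lands inside a restricted
# output bandwidth, at the cost of distorting formant spacing above SF.
sf, cr = 1600.0, 3.0
for f in (1000.0, 2000.0, 6000.0, 9000.0):
    print(f, "->", nfc_map(f, sf, cr))
```

Raising SF protects more of the vowel formant region from remapping but requires a higher CR to fit the same input bandwidth, which is the trade-off the study manipulates.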
Speech training alters consonant and vowel responses in multiple auditory cortex fields
Engineer, Crystal T.; Rahebi, Kimiya C.; Buell, Elizabeth P.; Fink, Melyssa K.; Kilgard, Michael P.
2015-01-01
Speech sounds evoke unique neural activity patterns in primary auditory cortex (A1). Extensive speech sound discrimination training alters A1 responses. While the neighboring auditory cortical fields each contain information about speech sound identity, each field processes speech sounds differently. We hypothesized that while all fields would exhibit training-induced plasticity following speech training, there would be unique differences in how each field changes. In this study, rats were trained to discriminate speech sounds by consonant or vowel in quiet and in varying levels of background speech-shaped noise. Local field potential and multiunit responses were recorded from four auditory cortex fields in rats that had received 10 weeks of speech discrimination training. Our results reveal that training alters speech evoked responses in each of the auditory fields tested. The neural response to consonants was significantly stronger in anterior auditory field (AAF) and A1 following speech training. The neural response to vowels following speech training was significantly weaker in ventral auditory field (VAF) and posterior auditory field (PAF). This differential plasticity of consonant and vowel sound responses may result from the greater paired pulse depression, expanded low frequency tuning, reduced frequency selectivity, and lower tone thresholds, which occurred across the four auditory fields. These findings suggest that alterations in the distributed processing of behaviorally relevant sounds may contribute to robust speech discrimination. PMID:25827927
Lotto, A J; Kluender, K R
1998-05-01
When members of a series of synthesized stop consonants varying acoustically in F3 characteristics and varying perceptually from /da/ to /ga/ are preceded by /al/, subjects report hearing more /ga/ syllables relative to when each member is preceded by /ar/ (Mann, 1980). It has been suggested that this result demonstrates the existence of a mechanism that compensates for coarticulation via tacit knowledge of articulatory dynamics and constraints, or through perceptual recovery of vocal-tract dynamics. The present study was designed to assess the degree to which these perceptual effects are specific to qualities of human articulatory sources. In three experiments, series of consonant-vowel (CV) stimuli varying in F3-onset frequency (/da/-/ga/) were preceded by speech versions or nonspeech analogues of /al/ and /ar/. The effect of liquid identity on stop consonant labeling remained when the preceding VC was produced by a female speaker and the CV syllable was modeled after a male speaker's productions. Labeling boundaries also shifted when the CV was preceded by a sine wave glide modeled after F3 characteristics of /al/ and /ar/. Identifications shifted even when the preceding sine wave was of constant frequency equal to the offset frequency of F3 from a natural production. These results suggest an explanation in terms of general auditory processes as opposed to recovery of or knowledge of specific articulatory dynamics.
Remote programming of cochlear implants: a telecommunications model.
McElveen, John T; Blackburn, Erin L; Green, J Douglas; McLear, Patrick W; Thimsen, Donald J; Wilson, Blake S
2010-09-01
Evaluate the effectiveness of remote programming for cochlear implants. Retrospective review of the cochlear implant performance for patients who had undergone mapping and programming of their cochlear implant via remote connection through the Internet. Postoperative Hearing in Noise Test and Consonant/Nucleus/Consonant word scores for 7 patients who had undergone remote mapping and programming of their cochlear implant were compared with the mean scores of 7 patients who had been programmed by the same audiologist over a 12-month period. Times required for remote and direct programming were also compared. The quality of the Internet connection was assessed using standardized measures. Remote programming was performed via a virtual private network with a separate software program used for video and audio linkage. All 7 patients were programmed successfully via remote connectivity. No untoward patient experiences were encountered. No statistically significant differences could be found in comparing postoperative Hearing in Noise Test and Consonant/Nucleus/Consonant word scores for patients who had undergone remote programming versus a similar group of patients who had their cochlear implant programmed directly. Remote programming did not require a significantly longer programming time for the audiologist with these 7 patients. Remote programming of a cochlear implant can be performed safely without any deterioration in the quality of the programming. This ability to remotely program cochlear implant patients gives the potential to extend cochlear implantation to underserved areas in the United States and elsewhere.
Speech Perception Deficits in Mandarin-Speaking School-Aged Children with Poor Reading Comprehension
Liu, Huei-Mei; Tsao, Feng-Ming
2017-01-01
Previous studies have shown that children learning alphabetic writing systems who have language impairment or dyslexia exhibit speech perception deficits. However, whether such deficits exist in children learning logographic writing systems who have poor reading comprehension remains uncertain. To further explore this issue, the present study examined speech perception deficits in Mandarin-speaking children with poor reading comprehension. Two self-designed tasks, a consonant categorical perception task and a lexical tone discrimination task, were used to compare speech perception performance in children (n = 31, age range = 7;4–10;2) with poor reading comprehension and an age-matched typically developing group (n = 31, age range = 7;7–9;10). Results showed that the children with poor reading comprehension were less accurate in consonant and lexical tone discrimination tasks and perceived speech contrasts less categorically than the matched group. The correlations between speech perception skills (i.e., consonant and lexical tone discrimination sensitivities and slope of the consonant identification curve) and individuals’ oral language and reading comprehension were stronger than the correlations between speech perception ability and word recognition ability. In conclusion, the results revealed that Mandarin-speaking children with poor reading comprehension exhibit less categorical speech perception, suggesting that imprecise speech perception, especially lexical tone perception, is essential to account for difficulties in learning to read in Mandarin-speaking children. PMID:29312031
Derakhshandeh, Fatemeh; Nikmaram, Mohammadreza; Hosseinabad, Hedieh Hashemi; Memarzadeh, Mehrdad; Taheri, Masoud; Omrani, Mohammadreza; Jalaie, Shohreh; Bijankhan, Mahmood; Sell, Debbie
2016-07-01
The aim of this study was to investigate the impact of an intensive 10-week course of articulation therapy on articulation errors in cleft lip and palate patients who have Velopharyngeal Insufficiency (VPI) and non-oral and passive cleft speech characteristics (CSCs). Five children with cleft palate (+/- cleft lip) with VPI and non-oral and passive CSCs underwent 40 intensive articulation therapy sessions over 10 weeks in a single-case experimental design. The percentages of non-oral CSCs (NCSCs), passive CSCs (PCSCs), stimulable consonants (SC), correct consonants in word imitation (CCI), and correct consonants in picture naming (CCN) were captured at baseline, during intervention and in follow-up phases. Visual analysis and two effect size indexes, Percentage of Nonoverlapping Data and Percentage of Improvement Rate Difference, were analyzed. Articulation therapy resulted in a visible decrease in NCSCs for all 5 participants across the intervention phases. Intervention was effective in changing the percentage of passive CSCs in two different ways: it reduced the PCSCs in three cases and resulted in an increase in PCSCs in the other two cases. This was interpreted as intervention having changed the non-oral CSCs to consonants produced within the oral cavity but with passive characteristics affecting manner of production, including weakness, nasalized plosives and nasal realizations of plosives and fricatives. Percent SC increased throughout the intervention period in all five patients. All participants demonstrated an increase in percentage of CCI and CCN, suggesting an increase in the consonant inventory. Follow-up data showed that all the subjects were able to maintain their ability to articulate learned phonemes correctly even after a 4-week break from intervention. This single-case experimental study supports the hypothesis that speech intervention in patients with VPI can result in an improvement in oral placements and passive CSCs.
Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Mühlenbeck, Cordelia; Liebal, Katja; Pritsch, Carla; Jacobsen, Thomas
2015-01-01
Research on colour preferences in humans and non-human primates suggests similar patterns of biases for and avoidance of specific colours, indicating that these colours are connected to a psychological reaction. Similarly, in the acoustic domain, approach reactions to consonant sounds (considered as positive) and avoidance reactions to dissonant sounds (considered as negative) have been found in human adults and children, and it has been demonstrated that non-human primates are able to discriminate between consonant and dissonant sounds. Yet it remains unclear whether the visual and acoustic approach–avoidance patterns remain consistent when both types of stimuli are combined, how they relate to and influence each other, and whether these are similar for humans and other primates. Therefore, to investigate whether gaze duration biases for colours are similar across primates and whether reactions to consonant and dissonant sounds cumulate with reactions to specific colours, we conducted an eye-tracking study in which we compared humans with one species of great apes, the orangutans. We presented four different colours either in isolation or in combination with consonant and dissonant sounds. We hypothesised that the viewing time for specific colours should be influenced by dissonant sounds and that previously existing avoidance behaviours with regard to colours should be intensified, reflecting their association with negative acoustic information. The results showed that the humans had constant gaze durations which were independent of the auditory stimulus, with a clear avoidance of yellow. In contrast, the orangutans did not show any clear gaze duration bias or avoidance of colours, and they were also not influenced by the auditory stimuli. In conclusion, our findings only partially support the previously identified pattern of biases for and avoidance of specific colours in humans and do not confirm such a pattern for orangutans. PMID:26466351
Cheng, Bing; Zhang, Yang
2015-01-01
The present study investigated how syllable structure differences between the first Language (L1) and the second language (L2) affect L2 consonant perception and production at syllable-initial and syllable-final positions. The participants were Mandarin-speaking college students who studied English as a second language. Monosyllabic English words were used in the perception test. Production was recorded from each Chinese subject and rated for accentedness by two native speakers of English. Consistent with previous studies, significant positional asymmetry effects were found across speech sound categories in terms of voicing, place of articulation, and manner of articulation. Furthermore, significant correlations between perception and accentedness ratings were found at the syllable onset position but not for the coda. Many exceptions were also found, which could not be solely accounted for by differences in L1–L2 syllabic structures. The results show a strong effect of language experience at the syllable level, which joins forces with acoustic, phonetic, and phonemic properties of individual consonants in influencing positional asymmetry in both domains of L2 segmental perception and production. The complexities and exceptions call for further systematic studies on the interactions between syllable structure universals and native language interference with refined theoretical models to specify the links between perception and production in second language acquisition. PMID:26635699
Kim, Kwang S; Max, Ludo
2014-01-01
To estimate the contributions of feedforward vs. feedback control systems in speech articulation, we analyzed the correspondence between initial and final kinematics in unperturbed tongue and jaw movements for consonant-vowel (CV) and vowel-consonant (VC) syllables. If movement extents and endpoints are highly predictable from early kinematic information, then the movements were most likely completed without substantial online corrections (feedforward control); if the correspondence between early kinematics and final amplitude or position is low, online adjustments may have altered the planned trajectory (feedback control) (Messier and Kalaska, 1999). Five adult speakers produced CV and VC syllables with high, mid, or low vowels while movements of the tongue and jaw were tracked electromagnetically. The correspondence between the kinematic parameters peak acceleration or peak velocity and movement extent as well as between the articulators' spatial coordinates at those kinematic landmarks and movement endpoint was examined both for movements across different target distances (i.e., across vowel height) and within target distances (i.e., within vowel height). Taken together, results suggest that jaw and tongue movements for these CV and VC syllables are mostly under feedforward control but with feedback-based contributions. One type of feedback-driven compensatory adjustment appears to regulate movement duration based on variation in peak acceleration. Results from a statistical model based on multiple regression are presented to illustrate how the relative strength of these feedback contributions can be estimated.
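The feedforward/feedback logic used here — if movement extent is highly predictable from early kinematic landmarks such as peak velocity, online corrections were probably minimal — can be sketched numerically. The minimum-jerk-like trajectories, amplitudes, and sampling step below are illustrative assumptions, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def peak_velocity_and_extent(positions, dt):
    """Finite-difference velocity; return (peak speed, movement extent)."""
    vels = [(b - a) / dt for a, b in zip(positions, positions[1:])]
    return max(abs(v) for v in vels), abs(positions[-1] - positions[0])

# Toy trials: smooth minimum-jerk-like trajectories of varying amplitude.
trials = []
for amp in (0.8, 1.0, 1.2, 1.4, 1.6):
    pos = [amp * (t / 100 - math.sin(2 * math.pi * t / 100) / (2 * math.pi))
           for t in range(101)]
    trials.append(peak_velocity_and_extent(pos, dt=0.01))

peaks = [p for p, _ in trials]
extents = [e for _, e in trials]
r = pearson_r(peaks, extents)
print(round(r, 3))  # high r: extent predictable from peak velocity (feedforward-like)
```

A low correspondence across real trials, by contrast, would point to online feedback-based adjustments altering the planned trajectory after movement onset.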
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-07
DEPARTMENT OF ENERGY: notice regarding the Energy Efficient Building Systems Regional Innovation Cluster Initiative (February 8, 2010), concerning an economically dynamic regional innovation cluster focused on energy-efficient building technologies and systems.
Differences between conduction aphasia and Wernicke's aphasia.
Anzaki, F; Izumi, S
2001-07-01
Conduction aphasia and Wernicke's aphasia have been differentiated by the degree of auditory language comprehension. We quantitatively compared the speech sound errors of two conduction aphasia patients and three Wernicke's aphasia patients on various language modality tests. All of the patients were Japanese. The two conduction aphasia patients made "conduites d'approche" errors and phonological paraphasias. The patient with mild Wernicke's aphasia made various errors, and in the patient with severe Wernicke's aphasia, neologism was observed. Phonological paraphasia in the two conduction aphasia patients seemed to occur while the patient searched for the target word. They made more errors in vowels than in consonants of target words on the naming and repetition tests, and appeared to search for the target word using the correct consonant phoneme but an incorrect vowel phoneme in the table of the Japanese syllabary. The Wernicke's aphasia patients, who had severe impairment of auditory comprehension, made more errors in consonants than in vowels of target words. In conclusion, the utterances of conduction aphasia patients and those of Wernicke's aphasia patients are qualitatively distinct.
Lohmander, Anette; Lundeborg, Inger; Persson, Christina
2017-01-01
Normative language-based data are important for comparing speech performances of clinical groups. The Swedish Articulation and Nasality Test (SVANTE) was developed to enable a detailed speech assessment. This study's aim was to present normative data on articulation and nasality in Swedish speakers. Single word production, sentence repetition and connected speech were collected using SVANTE in 443 individuals. Mean (SD) and prevalences in the groups of 3-, 5-, 7-, 10-, 16- and 19-year-olds were calculated from phonetic transcriptions or ordinal rating. For the 3- and 5-year-olds, a consonant inventory was also determined. The mean percent of oral consonants correct ranged from 77% at age 3 to 99% at age 19. At age 5, a mean of 96% was already reached, and the consonant inventory was established except for /s/, /r/, /ɕ/. The norms on the SVANTE, also including a short version, will be useful in the interpretation of speech outcomes.
Carvalho Lima, Vania L C; Collange Grecco, Luanda A; Marques, Valéria C; Fregni, Felipe; Brandão de Ávila, Clara R
2016-04-01
The aim of this study was to describe the results of the first case combining integrative speech therapy with anodal transcranial direct current stimulation (tDCS) over Broca's area in a child with cerebral palsy. The ABFW phonology test was used to analyze speech based on the Percentage of Correct Consonants (PCC) and Percentage of Correct Consonants - Revised (PCC-R). After treatment, increases were found in both PCC (Imitation: 53.63%-78.10%; Nomination: 53.19%-70.21%) and PCC-R (Imitation: 64.54%-83.63%; Nomination: 61.70%-77.65%). Moreover, reductions occurred in distortions and substitutions, and improvement was found in oral performance, especially tongue mobility (AMIOFE mobility: before = 4, after = 7). The child demonstrated a clinically important improvement in speech fluency, as shown by the number of correct consonants produced in imitation and the phonemes acquired. Based on these promising findings, continuing research in this field should be conducted with controlled clinical trials. Copyright © 2015 Elsevier Ltd. All rights reserved.
The development of motor synergies in children: Ultrasound and acoustic measurements
Noiray, Aude; Ménard, Lucie; Iskarous, Khalil
2013-01-01
The present study focuses on differences in lingual coarticulation between French children and adults. The specific question pursued is whether 4–5 year old children have already acquired a synergy observed in adults in which the tongue back helps the tip in the formation of alveolar consonants. Locus equations, estimated from acoustic and ultrasound imaging data were used to compare coarticulation degree between adults and children and further investigate differences in motor synergy between the front and back parts of the tongue. Results show similar slope and intercept patterns for adults and children in both the acoustic and articulatory domains, with an effect of place of articulation in both groups between alveolar and non-alveolar consonants. These results suggest that 4–5 year old children (1) have learned the motor synergy investigated and (2) have developed a pattern of coarticulatory resistance depending on a consonant place of articulation. Also, results show that acoustic locus equations can be used to gauge the presence of motor synergies in children. PMID:23297916
Quantization noise in digital speech. M.S. Thesis- Houston Univ.
NASA Technical Reports Server (NTRS)
Schmidt, O. L.
1972-01-01
The amount of quantization noise generated in a digital-to-analog converter is dependent on the number of bits or quantization levels used to digitize the analog signal in the analog-to-digital converter. The minimum number of quantization levels and the minimum sample rate were derived for a digital voice channel. A sample rate of 6000 samples per second and lowpass filters with a 3 dB cutoff of 2400 Hz are required for 100 percent sentence intelligibility. Consonant sounds are the first speech components to be degraded by quantization noise. A compression amplifier can be used to increase the weighting of the consonant sound amplitudes in the analog-to-digital converter. An expansion network must be installed at the output of the digital-to-analog converter to restore the original weighting of the consonant sounds. This technique results in 100 percent sentence intelligibility for a sample rate of 5000 samples per second, eight quantization levels, and lowpass filters with a 3 dB cutoff of 2000 Hz.
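The compression/expansion scheme described — boost weak consonant-like amplitudes before quantization and invert the boost after conversion — can be sketched with μ-law companding. The μ value, level count, and sample amplitude below are illustrative assumptions; the thesis does not specify the compression curve:

```python
import math

MU = 255.0  # mu-law constant; an assumption, not taken from the thesis

def compress(x):
    """Mu-law compression: boosts low-amplitude (consonant-like) components."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y):
    """Inverse mu-law: restores the original amplitude weighting after the DAC."""
    return math.copysign((math.pow(1.0 + MU, abs(y)) - 1.0) / MU, y)

def quantize(x, levels):
    """Uniform mid-tread quantizer on [-1, 1] with the given number of levels."""
    half = levels / 2.0
    return max(-1.0, min(1.0, round(x * half) / half))

# A weak consonant-like sample: eight uniform levels lose it entirely,
# while companding preserves most of it.
x = 0.05
plain = quantize(x, 8)
companded = expand(quantize(compress(x), 8))
print(abs(x - plain), round(abs(x - companded), 4))  # plain error 0.05, companded ~0.0088
```

The same eight levels thus spend their resolution where weak consonant energy lives, which is the effect the compression amplifier and expansion network achieve in hardware.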
Biomechanically Preferred Consonant-Vowel Combinations Fail to Appear in Adult Spoken Corpora
Whalen, D. H.; Giulivi, Sara; Nam, Hosung; Levitt, Andrea G.; Hallé, Pierre; Goldstein, Louis M.
2012-01-01
Certain consonant/vowel (CV) combinations are more frequent than would be expected from the individual C and V frequencies alone, both in babbling and, to a lesser extent, in adult language, based on dictionary counts: Labial consonants co-occur with central vowels more often than chance would dictate; coronals co-occur with front vowels, and velars with back vowels (Davis & MacNeilage, 1994). Plausible biomechanical explanations have been proposed, but it is also possible that infants are mirroring the frequency of the CVs that they hear. As noted, previous assessments of adult language were based on dictionaries; these “type” counts are incommensurate with the babbling measures, which are necessarily “token” counts. We analyzed the tokens in two spoken corpora for English, two for French and one for Mandarin. We found that the adult spoken CV preferences correlated with the type counts for Mandarin and French, not for English. Correlations between the adult spoken corpora and the babbling results had all three possible outcomes: significantly positive (French), uncorrelated (Mandarin), and significantly negative (English). There were no correlations of the dictionary data with the babbling results when we consider all nine combinations of consonants and vowels. The results indicate that spoken frequencies of CV combinations can differ from dictionary (type) counts and that the CV preferences apparent in babbling are biomechanically driven and can ignore the frequencies of CVs in the ambient spoken language. PMID:23420980
Meyer, Ted A.; Frisch, Stefan A.; Pisoni, David B.; Miyamoto, Richard T.; Svirsky, Mario A.
2012-01-01
Hypotheses: Do cochlear implants provide enough information to allow adult cochlear implant users to understand words in ways that are similar to listeners with acoustic hearing? Can we use a computational model to gain insight into the underlying mechanisms used by cochlear implant users to recognize spoken words? Background: The Neighborhood Activation Model has been shown to be a reasonable model of word recognition for listeners with normal hearing. The Neighborhood Activation Model assumes that words are recognized in relation to other similar-sounding words in a listener’s lexicon. The probability of correctly identifying a word is based on the phoneme perception probabilities from a listener’s closed-set consonant and vowel confusion matrices, modified by the relative frequency of occurrence of the target word compared with similar-sounding words (neighbors). Common words with few similar-sounding neighbors are more likely to be selected as responses than less common words with many similar-sounding neighbors. Recent studies have shown that several of the assumptions of the Neighborhood Activation Model also hold true for cochlear implant users. Methods: Closed-set consonant and vowel confusion matrices were obtained from 26 postlingually deafened adults who use cochlear implants. Confusion matrices were used to represent input errors to the Neighborhood Activation Model. Responses to the different stimuli were then generated by the Neighborhood Activation Model after incorporating the frequency of occurrence counts of the stimuli and their neighbors. Model outputs were compared with obtained performance measures on the Consonant-Vowel Nucleus-Consonant word test. Information transmission analysis was used to assess whether the Neighborhood Activation Model was able to successfully generate and predict word and individual phoneme recognition by cochlear implant users.
Results: The Neighborhood Activation Model predicted Consonant-Vowel Nucleus-Consonant test words at levels similar to those correctly identified by the cochlear implant users. The Neighborhood Activation Model also predicted phoneme feature information well. Conclusion: The results obtained suggest that the Neighborhood Activation Model provides a reasonable explanation of word recognition by postlingually deafened adults after cochlear implantation. It appears that multichannel cochlear implants give cochlear implant users access to their mental lexicons in a manner that is similar to listeners with acoustic hearing. The lexical properties of the test stimuli used to assess performance are important to spoken-word recognition and should be included in further models of the word recognition process. PMID:12851554
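The frequency-weighted probability rule at the heart of the Neighborhood Activation Model can be sketched as follows. The confusion probabilities and the tiny lexicon are invented toy values; a real application would plug in a listener's measured closed-set consonant and vowel confusion matrices and corpus frequency counts:

```python
# Toy phoneme confusion probabilities P(perceived | presented); invented values.
CONFUSION = {
    ('b', 'b'): 0.8, ('b', 'p'): 0.2,
    ('p', 'p'): 0.7, ('p', 'b'): 0.3,
    ('a', 'a'): 0.9, ('a', 'o'): 0.1,
    ('o', 'o'): 0.9, ('o', 'a'): 0.1,
    ('t', 't'): 0.85, ('t', 'd'): 0.15,
    ('d', 'd'): 0.75, ('d', 't'): 0.25,
}

# A tiny lexicon with relative frequencies of occurrence (also invented).
LEXICON = {'bat': 50.0, 'pat': 30.0, 'bad': 10.0, 'pot': 5.0}

def evidence(candidate, stimulus):
    """Frequency-weighted product of phoneme confusion probabilities."""
    p = LEXICON[candidate]
    for s, c in zip(stimulus, candidate):
        p *= CONFUSION.get((s, c), 0.0)
    return p

def nam_probability(word, stimulus):
    """NAM rule: a word's evidence relative to all activated candidates."""
    total = sum(evidence(w, stimulus) for w in LEXICON)
    return evidence(word, stimulus) / total

print(round(nam_probability('bat', 'bat'), 3))  # prints 0.842
```

Because 'bat' is both common and perceptually well transmitted, it dominates its similar-sounding neighbors, which is exactly the behavior the model attributes to listeners.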
Clustering Categorical Data Using Community Detection Techniques
2017-01-01
With the advent of the k-modes algorithm, the toolbox for clustering categorical data has an efficient tool that scales linearly in the number of data items. However, random initialization of cluster centers in k-modes makes it hard to reach a good clustering without resorting to many trials. Recently proposed methods for better initialization are deterministic and reduce the clustering cost considerably. These initialization methods differ in the heuristic used to choose the set of initial centers. In this paper, we address the clustering problem for categorical data from the perspective of community detection. Instead of initializing k modes and running several iterations, our scheme, CD-Clustering, builds an unweighted graph and detects highly cohesive groups of nodes using a fast community detection technique. The top k detected communities by size define the k modes. Evaluation on ten real categorical datasets shows that our method outperforms the existing initialization methods for k-modes in terms of accuracy, precision, and recall in most of the cases. PMID:29430249
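A minimal sketch of the CD-Clustering idea, with two simplifications flagged up front: connected components stand in for the paper's fast community detection step, and a shared-attribute threshold stands in for its graph construction. Data and threshold are invented toy values:

```python
from collections import Counter, defaultdict

def cd_cluster(items, k, min_overlap=2):
    """Link items sharing >= min_overlap attribute values, find cohesive
    groups, and take the top-k groups' attribute-wise modes as centers.
    (Connected components approximate the community detection step.)"""
    n = len(items)
    adj = defaultdict(set)
    for i in range(n):
        for j in range(i + 1, n):
            shared = sum(a == b for a, b in zip(items[i], items[j]))
            if shared >= min_overlap:
                adj[i].add(j)
                adj[j].add(i)
    seen, groups = set(), []
    for start in range(n):            # connected components via DFS
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(v)
            stack.extend(adj[v] - seen)
        groups.append(comp)
    groups.sort(key=len, reverse=True)
    modes = []
    for comp in groups[:k]:           # per-attribute most common value
        mode = tuple(Counter(items[i][a] for i in comp).most_common(1)[0][0]
                     for a in range(len(items[0])))
        modes.append(mode)
    return modes

data = [('red', 'small', 'round'), ('red', 'small', 'oval'),
        ('red', 'big', 'round'), ('blue', 'big', 'square'),
        ('blue', 'big', 'oval'), ('blue', 'small', 'square')]
print(cd_cluster(data, k=2))  # one red-ish and one blue-ish mode
```

The returned modes would then seed a standard k-modes pass, replacing its random initialization.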
Greene, Beth G; Logan, John S; Pisoni, David B
1986-03-01
We present the results of studies designed to measure the segmental intelligibility of eight text-to-speech systems and a natural speech control, using the Modified Rhyme Test (MRT). Results indicated that the voices tested could be grouped into four categories: natural speech, high-quality synthetic speech, moderate-quality synthetic speech, and low-quality synthetic speech. The overall performance of the best synthesis system, DECtalk-Paul, was equivalent to natural speech only in terms of performance on initial consonants. The findings are discussed in terms of recent work investigating the perception of synthetic speech under more severe conditions. Suggestions for future research on improving the quality of synthetic speech are also considered.
NASA Astrophysics Data System (ADS)
Several articles addressing topics in speech research are presented. The topics include: exploring the functional significance of physiological tremor: a biospectroscopic approach; differences between experienced and inexperienced listeners to deaf speech; a language-oriented view of reading and its disabilities; phonetic factors in letter detection; categorical perception; short-term recall by deaf signers of American Sign Language; a common basis for auditory sensory storage in perception and immediate memory; phonological awareness and verbal short-term memory; initiation versus execution time during manual and oral counting by stutterers; trading relations in the perception of speech by five-year-old children; the role of the strap muscles in pitch lowering; phonetic validation of distinctive features; consonants and syllable boundaries; and vowel information in postvocalic frictions.
2018-01-01
This study tested the hypothesis that object-based attention modulates the discrimination of level increments in stop-consonant noise bursts. With consonant-vowel-consonant (CvC) words consisting of an ≈80-dB vowel (v), a pre-vocalic (Cv) and a post-vocalic (vC) stop-consonant noise burst (≈60-dB SPL), we measured discrimination thresholds (LDTs) for level increments (ΔL) in the noise bursts presented either in CvC context or in isolation. In the 2-interval 2-alternative forced-choice task, each observation interval presented a CvC word (e.g., /pæk/ /pæk/), and normal-hearing participants had to discern ΔL in the Cv or vC burst. Based on the linguistic word labels, the auditory events of each trial were perceived as two auditory objects (Cv-v-vC and Cv-v-vC) that group together the bursts and vowels, hindering selective attention to ΔL. To discern ΔL in Cv or vC, the events must be reorganized into three auditory objects: the to-be-attended pre-vocalic (Cv–Cv) or post-vocalic burst pair (vC–vC), and the to-be-ignored vowel pair (v–v). Our results suggest that instead of being automatic, this reorganization requires training, in spite of using familiar CvC words. Relative to bursts in isolation, bursts in context always produced inferior ΔL discrimination accuracy (a context effect), which depended strongly on the acoustic separation between the bursts and the vowel, being much keener for the object apart from (post-vocalic) than for the object adjoining (pre-vocalic) the vowel (a temporal-position effect). Variability in CvC dimensions that did not alter the noise-burst perceptual grouping had minor effects on discrimination accuracy. In addition to being robust and persistent, these effects are relatively general, being evident in forced-choice tasks with one or two observation intervals, with or without variability in the temporal position of ΔL, and with either fixed or roving CvC standards. The results lend support to the hypothesis. PMID:29364931
An improved initialization center k-means clustering algorithm based on distance and density
NASA Astrophysics Data System (ADS)
Duan, Yanling; Liu, Qun; Xia, Shuyin
2018-04-01
To address the problem that the random initial cluster centers of the k-means algorithm leave the clustering results influenced by outlier samples and unstable across repeated runs, a center initialization method based on larger distance and higher density is proposed. The reciprocal of the weighted average distance to other samples is used to represent a sample's density, and the samples with larger distance and higher density are selected as the initial cluster centers to optimize the clustering results. A clustering evaluation method based on distance and density is then designed to verify the feasibility and practicality of the algorithm. Experimental results on UCI data sets show that the algorithm has a certain stability and practicality.
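The initialization rule described — density as the reciprocal of a sample's average distance to the others, with centers chosen for both high density and large mutual distance — might be sketched as follows. The exact weighting and selection criterion are assumptions, since the abstract gives no formulas; here each further center maximizes density times distance to the nearest chosen center:

```python
import math

def density_distance_init(points, k):
    """Pick the densest point first, then points that are both dense and far
    from the chosen centers, so sparse outliers never become initial centers."""
    n = len(points)
    dens = []
    for i, p in enumerate(points):
        avg = sum(math.dist(p, q) for j, q in enumerate(points) if j != i) / (n - 1)
        dens.append(1.0 / avg)   # density = reciprocal of mean distance
    centers = [max(range(n), key=lambda i: dens[i])]
    while len(centers) < k:
        def score(i):
            if i in centers:
                return -1.0
            return dens[i] * min(math.dist(points[i], points[c]) for c in centers)
        centers.append(max(range(n), key=score))
    return [points[i] for i in centers]

# Two tight groups plus one isolated outlier at (30, 0).
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (30, 0)]
print(density_distance_init(pts, 2))  # one center per group; outlier skipped
```

Random initialization could easily seed a center on the outlier; the density factor suppresses it, which is the stability claim the abstract makes.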
Dressler, William W; Balieiro, Mauro C; Ferreira de Araújo, Luiza; Silva, Wilson A; Ernesto Dos Santos, José
2016-07-01
Research on gene-environment interaction was facilitated by breakthroughs in molecular biology in the late 20th century, especially in the study of mental health. There is a reliable interaction between candidate genes for depression and childhood adversity in relation to mental health outcomes. The aim of this paper is to explore the role of culture in this process in an urban community in Brazil. The specific cultural factor examined is cultural consonance, or the degree to which individuals are able to successfully incorporate salient cultural models into their own beliefs and behaviors. It was hypothesized that cultural consonance in family life would mediate the interaction of genotype and childhood adversity. In a study of 402 adult Brazilians from diverse socioeconomic backgrounds, conducted from 2011 to 2014, the interaction of reported childhood adversity and a polymorphism in the 2A serotonin receptor was associated with higher depressive symptoms. Further analysis showed that the gene-environment interaction was mediated by cultural consonance in family life, and that these effects were more pronounced in lower social class neighborhoods. The findings reinforce the role of the serotonergic system in the regulation of stress response and learning and memory, and how these processes in turn interact with environmental events and circumstances. Furthermore, these results suggest that gene-environment interaction models should incorporate a wider range of environmental experience and more complex pathways to better understand how genes and the environment combine to influence mental health outcomes. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Parmentier, Geneviève; Baumgardt, Holger
2012-12-01
We highlight the impact of cluster-mass-dependent evolutionary rates upon the evolution of the cluster mass function during violent relaxation, that is, while clusters dynamically respond to the expulsion of their residual star-forming gas. Mass-dependent evolutionary rates arise when the mean volume density of cluster-forming regions is mass-dependent. In that case, even if the initial conditions are such that the cluster mass function at the end of violent relaxation has the same shape as the embedded-cluster mass function (i.e. infant weight-loss is mass-independent), the shape of the cluster mass function does change transiently during violent relaxation. In contrast, for cluster-forming regions of constant mean volume density, the cluster mass function shape is preserved all through violent relaxation since all clusters then evolve at the same mass-independent rate. On the scale of individual clusters, we model the evolution of the ratio of the dynamical mass to luminous mass of a cluster after gas expulsion. Specifically, we map the radial dependence of the time-scale for a star cluster to return to equilibrium. We stress that fields of view a few pc in size only, typical of compact clusters with rapid evolutionary rates, are likely to reveal cluster regions which have returned to equilibrium even if the cluster experienced a major gas expulsion episode a few Myr earlier. We provide models with the aperture and time expressed in units of the initial half-mass radius and initial crossing-time, respectively, so that our results can be applied to clusters with initial densities, sizes, and apertures different from ours.
Takeuchi, Hiroshi
2018-05-08
Since searching for the global minimum on the potential energy surface of a cluster is very difficult, many geometry optimization methods have been proposed, in which initial geometries are randomly generated and subsequently improved with different algorithms. In this study, a size-guided multi-seed heuristic method is developed and applied to benzene clusters. It produces initial configurations of the cluster with n molecules from the lowest-energy configurations of the cluster with n - 1 molecules (seeds). The initial geometries are further optimized with the geometrical perturbations previously used for molecular clusters. These steps are repeated until the size n reaches a predefined target. The method locates putative global minima of benzene clusters with up to 65 molecules. The performance of the method is discussed using the computational cost, rates to locate the global minima, and energies of initial geometries. © 2018 Wiley Periodicals, Inc.
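The growth loop described in this abstract (seed configurations of size n - 1, attach a new unit, locally re-optimize, keep the best, repeat) can be sketched in miniature. The sketch below is not the paper's method: it uses a toy 2-D Lennard-Jones energy instead of benzene-benzene interactions, a crude finite-difference gradient descent in place of the cited geometrical perturbations, and keeps only a single lowest-energy seed per size. All function names are invented for illustration.

```python
import math

def lj_energy(coords):
    """Total pairwise Lennard-Jones energy of a 2-D configuration."""
    e = 0.0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            r2 = (coords[i][0] - coords[j][0]) ** 2 + (coords[i][1] - coords[j][1]) ** 2
            inv6 = 1.0 / r2 ** 3
            e += 4.0 * (inv6 * inv6 - inv6)
    return e

def local_opt(coords, steps=2000, h=1e-4, lr=1e-3):
    """Crude finite-difference gradient descent (stand-in for a real optimizer)."""
    coords = [list(p) for p in coords]
    for _ in range(steps):
        for p in coords:
            for k in (0, 1):
                p[k] += h; ep = lj_energy(coords)
                p[k] -= 2 * h; em = lj_energy(coords)
                p[k] += h
                p[k] -= lr * (ep - em) / (2 * h)
    return [tuple(p) for p in coords]

def grow_cluster(target_n, n_trials=5):
    """Size-guided growth: size-n candidates are seeded from the size n-1 minimum."""
    seed, seed_e = [(0.0, 0.0)], 0.0
    for n in range(2, target_n + 1):
        best, best_e = None, float("inf")
        for t in range(n_trials):            # attach one particle at trial angles
            ang = 2.0 * math.pi * t / n_trials
            cand = seed + [(seed[-1][0] + 1.1 * math.cos(ang),
                            seed[-1][1] + 1.1 * math.sin(ang))]
            cand = local_opt(cand)
            e = lj_energy(cand)
            if e < best_e:
                best, best_e = cand, e
        seed, seed_e = best, best_e          # lowest-energy config seeds size n+1
    return seed, seed_e
```

For a 2-D Lennard-Jones trimer this recovers a near-triangular configuration with energy close to -3, whereas random restarts would have to rediscover the dimer geometry at every size.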
DETECTION AND IDENTIFICATION OF SPEECH SOUNDS USING CORTICAL ACTIVITY PATTERNS
Centanni, T.M.; Sloan, A.M.; Reed, A.C.; Engineer, C.T.; Rennaker, R.; Kilgard, M.P.
2014-01-01
We have developed a classifier capable of locating and identifying speech sounds using activity from rat auditory cortex with an accuracy equivalent to behavioral performance without the need to specify the onset time of the speech sounds. This classifier can identify speech sounds from a large speech set within 40 ms of stimulus presentation. To compare the temporal limits of the classifier to behavior, we developed a novel task that requires rats to identify individual consonant sounds from a stream of distracter consonants. The classifier successfully predicted the ability of rats to accurately identify speech sounds for syllable presentation rates up to 10 syllables per second (up to 17.9 ± 1.5 bits/sec), which is comparable to human performance. Our results demonstrate that the spatiotemporal patterns generated in primary auditory cortex can be used to quickly and accurately identify consonant sounds from a continuous speech stream without prior knowledge of the stimulus onset times. Improved understanding of the neural mechanisms that support robust speech processing in difficult listening conditions could improve the identification and treatment of a variety of speech processing disorders. PMID:24286757
Conditioned Place Preference and Aversion for Music in a Virtual Reality Environment
Molet, Mikaël; Billiet, Gauthier; Bardo, Michael T.
2012-01-01
The use of a virtual reality environment (VRE) enables behavioral scientists to create different spatial contexts in which human participants behave freely, while still confined to the laboratory. In this article, VRE was used to study conditioned place preference (CPP) and aversion (CPA). In Experiment 1, half of the participants were asked to visit a house for two min with consonant music and then they were asked to visit an alternate house with static noise for two min, whereas the remaining participants did the visits in reverse order. In Experiment 2, we used the same design as Experiment 1, except for replacing consonant music with dissonant music. After conditioning in both experiments, the participants were given a choice between spending time in the two houses. In Experiment 1, participants spent more time in the house associated with the consonant music, thus showing a CPP toward that house. In Experiment 2, participants spent less time in the house associated with the dissonant music, thus showing a CPA for that house. These results support VRE as a tool to extend research on CPP/CPA in humans. PMID:23089383
Ebeling, Martin
2008-10-01
A mathematical model is presented here to explain the sensation of consonance and dissonance on the basis of neuronal coding and the properties of a neuronal periodicity detection mechanism. This mathematical model makes use of physiological data from a neuronal model of periodicity analysis in the midbrain, whose operation can be described mathematically by autocorrelation functions with regard to time windows. Musical intervals produce regular firing patterns in the auditory nerve that depend on the vibration ratio of the two tones. The mathematical model makes it possible to define a measure for the degree of these regularities for each vibration ratio. It turns out that this measure value is in line with the degree of tonal fusion as described by Stumpf [Tonpsychologie (Psychology of Tones) (Knuf, Hilversum), reprinted 1965]. This finding makes it probable that tonal fusion is a consequence of certain properties of the neuronal periodicity detection mechanism. Together with strong roughness resulting from interval tones with fundamentals close together or close to the octave, this neuronal mechanism may be regarded as the basis of consonance and dissonance.
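The core intuition of autocorrelation-with-time-window models of consonance can be illustrated with a toy score. The sketch below is not Ebeling's neuronal model: it uses the analytic autocorrelation of two equal-amplitude pure tones and an assumed exponential "memory" window, which rewards simple frequency ratios because their full coincidence peak falls at a short common period.

```python
import numpy as np

def consonance_score(ratio, f1=220.0, tau0=0.03):
    """Peak of an exponentially windowed autocorrelation of two pure tones.

    The ACF of sin(2*pi*f1*t) + sin(2*pi*f2*t) is proportional to
    0.5*(cos(2*pi*f1*tau) + cos(2*pi*f2*tau)); simple ratios reach a full
    coincidence peak at a short lag, which the window w(tau) = exp(-tau/tau0)
    rewards.  tau0 is an assumed 30 ms memory constant, not a fitted value.
    """
    f2 = ratio * f1
    tau = np.arange(0.002, 0.05, 1e-5)       # skip the trivial peak at lag 0
    acf = 0.5 * (np.cos(2 * np.pi * f1 * tau) + np.cos(2 * np.pi * f2 * tau))
    return float(np.max(acf * np.exp(-tau / tau0)))
```

With f1 = 220 Hz this ranks the octave (2/1) above the perfect fifth (3/2), and the fifth above the tritone (45/32), in line with the ordering of tonal fusion the abstract describes.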
Measurement of voice onset time in maxillectomy patients.
Hattori, Mariko; Sumita, Yuka I; Taniguchi, Hisashi
2014-01-01
Objective speech evaluation using acoustic measurement is needed for the proper rehabilitation of maxillectomy patients. For digital evaluation of consonants, measurement of voice onset time is one option. However, voice onset time has not been measured in maxillectomy patients as their consonant sound spectra exhibit unique characteristics that make the measurement of voice onset time challenging. In this study, we established criteria for measuring voice onset time in maxillectomy patients for objective speech evaluation. We examined voice onset time for /ka/ and /ta/ in 13 maxillectomy patients by calculating the number of valid measurements of voice onset time out of three trials for each syllable. Wilcoxon's signed rank test showed that voice onset time measurements were more successful for /ka/ and /ta/ when a prosthesis was used (Z = -2.232, P = 0.026 and Z = -2.401, P = 0.016, resp.) than when a prosthesis was not used. These results indicate a prosthesis affected voice onset measurement in these patients. Although more research in this area is needed, measurement of voice onset time has the potential to be used to evaluate consonant production in maxillectomy patients wearing a prosthesis.
Jürgens, Tim; Ewert, Stephan D; Kollmeier, Birger; Brand, Thomas
2014-03-01
Consonant recognition was assessed in normal-hearing (NH) and hearing-impaired (HI) listeners in quiet as a function of speech level using a nonsense logatome test. Average recognition scores were analyzed and compared to recognition scores of a speech recognition model. In contrast to commonly used spectral speech recognition models operating on long-term spectra, a "microscopic" model operating in the time domain was used. Variations of the model (accounting for hearing impairment) and different model parameters (reflecting cochlear compression) were tested. Using these model variations this study examined whether speech recognition performance in quiet is affected by changes in cochlear compression, namely, a linearization, which is often observed in HI listeners. Consonant recognition scores for HI listeners were poorer than for NH listeners. The model accurately predicted the speech reception thresholds of the NH and most HI listeners. A partial linearization of the cochlear compression in the auditory model, while keeping audibility constant, produced higher recognition scores and improved the prediction accuracy. However, including listener-specific information about the exact form of the cochlear compression did not improve the prediction further.
Do /s/-Initial Clusters Imply CVCC Sequences? Evidence from Disordered Speech
ERIC Educational Resources Information Center
Pan, Ning; Roussel, Nancye
2008-01-01
The structure of /s/-initial clusters is debated in developmental phonology. Pan and Snyder (2004) took the Government Phonology (GP) framework and proposed that production of /s/-initial clusters requires the positive setting of two binary parameters [+/-Branching rhyme (BR)] and [+/-Magic empty nucleus (MEN)] and the initial /s/ is treated as a…
Acquisition process of typing skill using hierarchical materials in the Japanese language.
Ashitaka, Yuki; Shimada, Hiroyuki
2014-08-01
In the present study, using a new keyboard layout with only eight keys, we conducted typing training for unskilled typists. In this task, Japanese college students received training in typing words consisting of a pair of hiragana characters with four keystrokes, using the alphabetic input method, while keeping the association between the keys and typists' finger movements; the task was constructed so that chunking was readily available. We manipulated the association between the hiragana characters and alphabet letters (hierarchical materials: overlapped and nonoverlapped mappings). Our alphabet letter materials corresponded to the regular order within each hiragana word (within the four letters, the first and third referred to consonants, and the second and fourth referred to vowels). Only the interkeystroke intervals involved in the initiation of typing vowel letters showed an overlapping effect, which revealed that the effect was markedly large only during the early period of skill development (the effect for the overlapped mapping being larger than that for the nonoverlapped mapping), but that it had diminished by the time of late training. Conversely, the response time and the third interkeystroke interval, which are both involved in the latency of typing a consonant letter, did not reveal an overlapped effect, suggesting that chunking might be useful with hiragana characters rather than hiragana words. These results are discussed in terms of the fan effect and skill acquisition. Furthermore, we discuss whether there is a need for further research on unskilled and skilled Japanese typists.
Maganzini, Anthony L; Schroetter, Sarah B; Freeman, Kathy
2014-05-01
To quantify smile esthetics following orthodontic treatment and determine whether these changes are correlated to the severity of the initial malocclusion. A standardized smile mesh analysis that evaluated nine lip-tooth characteristics was applied to two groups of successfully treated patients: group 1 (initial American Board of Orthodontics Discrepancy Index [DI] score<20) and group 2 (initial DI score>20). T-tests were used to detect significant differences between the low-DI and high-DI groups for baseline pretreatment measurements, baseline posttreatment measurements, and changes from pre- to posttreatment. A Spearman correlation test compared the initial DI values with the changes in the nine smile measurements. Five of the smile measurements were improved in both groups following orthodontic treatment. Both groups demonstrated improved incisor exposure, an improved gingival smile line, an increase in smile width, a decreased buccal corridor space, and an improvement in smile consonance. Spearman correlation tests showed that initial DI value was not correlated to changes in any of the individual smile measurements. Smile esthetics is improved by orthodontic treatment regardless of the initial severity of the malocclusion. In other words, patients with more complex orthodontic issues and their counterparts with minor malocclusions benefitted equally from treatment in terms of their smile esthetics.
Using visible speech to train perception and production of speech for individuals with hearing loss.
Massaro, Dominic W; Light, Joanna
2004-04-01
The main goal of this study was to implement a computer-animated talking head, Baldi, as a language tutor for speech perception and production for individuals with hearing loss. Baldi can speak slowly; illustrate articulation by making the skin transparent to reveal the tongue, teeth, and palate; and show supplementary articulatory features, such as vibration of the neck to show voicing and turbulent airflow to show frication. Seven students with hearing loss between the ages of 8 and 13 were trained for 6 hours across 21 weeks on 8 categories of segments (4 voiced vs. voiceless distinctions, 3 consonant cluster distinctions, and 1 fricative vs. affricate distinction). Training included practice at the segment and the word level. Perception and production improved for each of the 7 children. Speech production also generalized to new words not included in the training lessons. Finally, speech production deteriorated somewhat after 6 weeks without training, indicating that the training method rather than some other experience was responsible for the improvement that was found.
Subglottal resonances of adult male and female native speakers of American English.
Lulich, Steven M; Morton, John R; Arsikere, Harish; Sommers, Mitchell S; Leung, Gary K F; Alwan, Abeer
2012-10-01
This paper presents a large-scale study of subglottal resonances (SGRs) (the resonant frequencies of the tracheo-bronchial tree) and their relations to various acoustical and physiological characteristics of speakers. The paper presents data from a corpus of simultaneous microphone and accelerometer recordings of consonant-vowel-consonant (CVC) words embedded in a carrier phrase spoken by 25 male and 25 female native speakers of American English ranging in age from 18 to 24 yr. The corpus contains 17,500 utterances of 14 American English monophthongs, diphthongs, and the rhotic approximant [ɹ].
NASA Technical Reports Server (NTRS)
Barnes, J.; Dekel, A.; Efstathiou, G.; Frenk, C. S.
1985-01-01
The cluster correlation function ξ_c(r) is compared with the particle correlation function ξ(r) in cosmological N-body simulations with a wide range of initial conditions. The experiments include scale-free initial conditions, pancake models with a coherence length in the initial density field, and hybrid models. Three N-body techniques and two cluster-finding algorithms are used. In scale-free models with white noise initial conditions, ξ_c and ξ are essentially identical. In scale-free models with more power on large scales, it is found that the amplitude of ξ_c increases with cluster richness; in this case the clusters give a biased estimate of the particle correlations. In the pancake and hybrid models (with n = 0 or 1), ξ_c is steeper than ξ, but the cluster correlation length exceeds that of the points by less than a factor of 2, independent of cluster richness. Thus the high amplitude of ξ_c found in studies of rich clusters of galaxies is inconsistent with white noise and pancake models and may indicate a primordial fluctuation spectrum with substantial power on large scales.
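The two-point correlation function compared in this abstract can be estimated for any point set with a simple pair-counting sketch. This is the textbook "natural" estimator ξ(r) = DD/RR - 1 on a toy clustered catalogue, not the paper's N-body pipeline or cluster-finding algorithms; the box size, bin edges, and random-catalogue size below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def xi_estimate(points, box, r_edges, n_random=None):
    """Natural two-point correlation estimator xi(r) = DD/RR - 1.

    points: (N, 3) positions inside a cube of side `box`.  A clustered set
    has an excess of close pairs over the uniform random catalogue, so
    xi > 0 at small separations.
    """
    n_random = n_random or len(points)
    randoms = rng.uniform(0, box, size=(n_random, 3))

    def pair_fractions(p):
        d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
        d = d[np.triu_indices(len(p), k=1)]          # unique pairs only
        counts = np.histogram(d, bins=r_edges)[0].astype(float)
        return counts / (len(p) * (len(p) - 1) / 2)

    dd = pair_fractions(points)
    rr = pair_fractions(randoms)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(rr > 0, dd / rr - 1.0, 0.0)
```

Applied to points drawn in tight clumps, the small-separation bin shows a strong positive ξ while wide bins hover near zero, which is the qualitative contrast between clustered and unclustered catalogues the abstract relies on.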
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-19
... DEPARTMENT OF ENERGY Energy Efficient Building Systems Regional Innovation Cluster Initiative... Energy Efficient Building Systems Regional Innovation Cluster Initiative. A single proposal submitted by... systems design. The DOE funded Energy Efficient Building Systems Design Hub (the ``Hub'') will serve as a...
Attention effects on the processing of task-relevant and task-irrelevant speech sounds and letters
Mittag, Maria; Inauri, Karina; Huovilainen, Tatu; Leminen, Miika; Salo, Emma; Rinne, Teemu; Kujala, Teija; Alho, Kimmo
2013-01-01
We used event-related brain potentials (ERPs) to study effects of selective attention on the processing of attended and unattended spoken syllables and letters. Participants were presented with syllables randomly occurring in the left or right ear and spoken by different voices and with a concurrent foveal stream of consonant letters written in darker or lighter fonts. During auditory phonological (AP) and non-phonological tasks, they responded to syllables in a designated ear starting with a vowel and spoken by female voices, respectively. These syllables occurred infrequently among standard syllables starting with a consonant and spoken by male voices. During visual phonological and non-phonological tasks, they responded to consonant letters with names starting with a vowel and to letters written in dark fonts, respectively. These letters occurred infrequently among standard letters with names starting with a consonant and written in light fonts. To examine genuine effects of attention and task on ERPs not overlapped by ERPs associated with target processing or deviance detection, these effects were studied only in ERPs to auditory and visual standards. During selective listening to syllables in a designated ear, ERPs to the attended syllables were negatively displaced during both phonological and non-phonological auditory tasks. Selective attention to letters elicited an early negative displacement and a subsequent positive displacement (Pd) of ERPs to attended letters being larger during the visual phonological than non-phonological task suggesting a higher demand for attention during the visual phonological task. Active suppression of unattended speech during the AP and non-phonological tasks and during the visual phonological tasks was suggested by a rejection positivity (RP) to unattended syllables. We also found evidence for suppression of the processing of task-irrelevant visual stimuli in visual ERPs during auditory tasks involving left-ear syllables. 
PMID:24348324
Koerner, Tess K; Zhang, Yang; Nelson, Peggy B; Wang, Boxiang; Zou, Hui
2017-07-01
This study examined how speech babble noise differentially affected the auditory P3 responses and the associated neural oscillatory activities for consonant and vowel discrimination in relation to segmental- and sentence-level speech perception in noise. The data were collected from 16 normal-hearing participants in a double-oddball paradigm that contained a consonant (/ba/ to /da/) and vowel (/ba/ to /bu/) change in quiet and noise (speech-babble background at a -3 dB signal-to-noise ratio) conditions. Time-frequency analysis was applied to obtain inter-trial phase coherence (ITPC) and event-related spectral perturbation (ERSP) measures in delta, theta, and alpha frequency bands for the P3 response. Behavioral measures included percent correct phoneme detection and reaction time as well as percent correct IEEE sentence recognition in quiet and in noise. Linear mixed-effects models were applied to determine possible brain-behavior correlates. A significant noise-induced reduction in P3 amplitude was found, accompanied by significantly longer P3 latency and decreases in ITPC across all frequency bands of interest. There was a differential effect of noise on consonant discrimination and vowel discrimination in both ERP and behavioral measures, such that noise impacted the detection of the consonant change more than the vowel change. The P3 amplitude and some of the ITPC and ERSP measures were significant predictors of speech perception at segmental- and sentence-levels across listening conditions and stimuli. These data demonstrate that the P3 response with its associated cortical oscillations represents a potential neurophysiological marker for speech perception in noise. Copyright © 2017 Elsevier B.V. All rights reserved.
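The inter-trial phase coherence (ITPC) measure used in this study has a compact definition: the length of the mean unit phase vector across trials at a given time-frequency bin. The sketch below shows only that core formula on synthetic phase samples; the paper derives its phases from a time-frequency decomposition of EEG trials, which is not reproduced here.

```python
import numpy as np

def itpc(phases):
    """Inter-trial phase coherence: length of the mean unit phase vector.

    phases: trial-wise phase angles (radians) at one time-frequency bin.
    1.0 = perfectly phase-locked across trials; ~0 = random phase.
    """
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

rng = np.random.default_rng(7)
locked = rng.normal(0.5, 0.2, 1000)            # tight phase distribution ("quiet")
jittered = rng.uniform(-np.pi, np.pi, 1000)    # near-uniform phase ("noise")
```

The noise-induced ITPC decrease the study reports corresponds to moving from the first case toward the second: added jitter in single-trial phases shrinks the mean vector.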
Effects of prosodic boundary on /aC/ sequences: articulatory results
NASA Astrophysics Data System (ADS)
Tabain, Marija
2003-05-01
This study presents EMA (electromagnetic articulography) data on articulation of the vowel /a/ at different prosodic boundaries in French. Three speakers of metropolitan French produced utterances containing the vowel /a/, preceded by /t/ and followed by one of six consonants /b d g f s ʃ/ (three stops and three fricatives), with different prosodic boundaries intervening between the /a/ and the six different consonants. The prosodic boundaries investigated are the Utterance, the Intonational phrase, the Accentual phrase, and the Word. Data for the Tongue Tip, Tongue Body, and Jaw are presented. The articulatory data presented here were recorded at the same time as the acoustic data presented in Tabain [J. Acoust. Soc. Am. 113, 516-531 (2003)]. Analyses show that there is a strong effect on peak displacement of the vowel according to the prosodic hierarchy, with the stronger prosodic boundaries inducing a much lower Tongue Body and Jaw position than the weaker prosodic boundaries. Durations of both the opening movement into and the closing movement out of the vowel are also affected. Peak velocity of the articulatory movements is also examined, and, contrary to results for phrase-final lengthening, it is found that peak velocity of the opening movement into the vowel tends to increase with the higher prosodic boundaries, together with the increased magnitude of the movement between the consonant and the vowel. Results for the closing movement out of the vowel and into the consonant are not so clear. Since one speaker shows evidence of utterance-level articulatory declension, it is suggested that the competing constraints of articulatory declension and prosodic effects might explain some previous results on phrase-final lengthening.
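The kinematic landmarks analyzed in this abstract (peak displacement and peak velocity of a movement) can be extracted from a sampled articulator trajectory with a short sketch. This is an illustrative computation, not the study's EMA pipeline: real pipelines low-pass filter the sensor signal and segment opening/closing movements at velocity thresholds, and the synthetic gesture below is an assumed cosine dip.

```python
import numpy as np

def movement_landmarks(y, fs):
    """Peak displacement and peak (absolute) velocity of one movement.

    y: sampled position of a fleshpoint (e.g. Tongue Body); fs: sample rate
    in Hz.  Velocity comes from central differences via np.gradient.
    """
    y = np.asarray(y, dtype=float)
    vel = np.gradient(y) * fs                 # position/sample -> units/s
    return float(np.ptp(y)), float(np.max(np.abs(vel)))

# A synthetic 200 ms opening-closing gesture sampled at 500 Hz,
# with a 1-unit peak-to-peak lowering (e.g. 1 cm of Jaw opening).
fs = 500.0
t = np.arange(0.0, 0.2, 1.0 / fs)
trajectory = -0.5 * (1.0 - np.cos(2.0 * np.pi * t / 0.2))
```

For this cosine gesture the analytic peak velocity is 5π ≈ 15.7 units/s, and the numerical estimate lands very close to that, which is the kind of per-boundary quantity compared across prosodic conditions in the study.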
Chemotherapy as language: sound symbolism in cancer medication names.
Abel, Gregory A; Glinert, Lewis H
2008-04-01
The concept of sound symbolism proposes that even the tiniest sounds comprising a word may suggest the qualities of the object which that word represents. Cancer-related medication names, which are likely to be charged with emotional meaning for patients, might be expected to contain such sound-symbolic associations. We analyzed the sounds in the names of 60 frequently-used cancer-related medications, focusing on the medications' trade names as well as the names (trade or generic) commonly used in the clinic. We assessed the frequency of common voiced consonants (/b/, /d/, /g/, /v/, /z/; thought to be associated with slowness and heaviness) and voiceless consonants (/p/, /t/, /k/, /f/, /s/; thought to be associated with fastness and lightness), and compared them to what would be expected in standard American English using a reference dataset. A Fisher's exact test for independence showed the chemotherapy consonantal frequencies to be significantly different from standard English (p=0.009 for trade; p<0.001 for "common usage"). For the trade names, the majority of the voiceless consonants were significantly increased compared to standard English; this effect was more pronounced with the "common usage" names (for the group, O/E=1.62; 95% CI [1.37, 1.89]). Hormonal and targeted therapy trade names showed the greatest frequency of voiceless consonants (for the group, O/E=1.76; 95% CI [1.20, 2.49]). Our results suggest that taken together, the names of chemotherapy medications contain an increased frequency of certain sounds associated with lightness, smallness and fastness. This finding raises important questions about the possible role of the names of medications in the experiences of cancer patients and providers.
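The tallying step of such an analysis can be sketched with a naive letter count. This is a deliberately rough proxy, not the authors' method: the study worked from the medications' sounds against a reference dataset of standard English, whereas a letter count ignores spelling-sound mismatches (the "c" in "cisplatin" is /s/, "x" is /ks/). The drug names below are a small arbitrary sample for illustration.

```python
VOICED = set("bdgvz")      # /b d g v z/: "slow/heavy" associations
VOICELESS = set("ptkfs")   # /p t k f s/: "fast/light" associations

def consonant_profile(names):
    """Rough letter-based tally of voiced vs voiceless stops/fricatives."""
    voiced = voiceless = 0
    for name in names:
        for ch in name.lower():
            voiced += ch in VOICED
            voiceless += ch in VOICELESS
    return voiced, voiceless

names = ["paclitaxel", "cisplatin", "tamoxifen", "letrozole", "erlotinib"]
v, u = consonant_profile(names)   # (voiced, voiceless) counts
```

Even this toy sample leans voiceless (2 voiced vs 9 voiceless letters), but a real replication would need phonetic transcription and a comparison against expected English frequencies, as in the abstract's Fisher's exact test.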
Visual speech discrimination and identification of natural and synthetic consonant stimuli
Files, Benjamin T.; Tjan, Bosco S.; Jiang, Jintao; Bernstein, Lynne E.
2015-01-01
From phonetic features to connected discourse, every level of psycholinguistic structure including prosody can be perceived through viewing the talking face. Yet a longstanding notion in the literature is that visual speech perceptual categories comprise groups of phonemes (referred to as visemes), such as /p, b, m/ and /f, v/, whose internal structure is not informative to the visual speech perceiver. This conclusion has not to our knowledge been evaluated using a psychophysical discrimination paradigm. We hypothesized that perceivers can discriminate the phonemes within typical viseme groups, and that discrimination measured with d-prime (d’) and response latency is related to visual stimulus dissimilarities between consonant segments. In Experiment 1, participants performed speeded discrimination for pairs of consonant-vowel spoken nonsense syllables that were predicted to be same, near, or far in their perceptual distances, and that were presented as natural or synthesized video. Near pairs were within-viseme consonants. Natural within-viseme stimulus pairs were discriminated significantly above chance (except for /k/-/h/). Sensitivity (d’) increased and response times decreased with distance. Discrimination and identification were superior with natural stimuli, which comprised more phonetic information. We suggest that the notion of the viseme as a unitary perceptual category is incorrect. Experiment 2 probed the perceptual basis for visual speech discrimination by inverting the stimuli. Overall reductions in d’ with inverted stimuli but a persistent pattern of larger d’ for far than for near stimulus pairs are interpreted as evidence that visual speech is represented by both its motion and configural attributes. 
The methods and results of this investigation open up avenues for understanding the neural and perceptual bases for visual and audiovisual speech perception and for development of practical applications such as lipreading/speechreading and visual speech synthesis. PMID:26217249
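The sensitivity measure d′ reported in this study comes from standard signal detection theory. The sketch below is the common yes/no formula with a simple correction for extreme rates; it is an approximation here, since dedicated same-different models apply a further adjustment that is not reproduced.

```python
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity d' = z(hit rate) - z(false-alarm rate).

    Adds 0.5 to each cell (a log-linear correction) so that observed rates
    of exactly 0 or 1 do not produce infinite z-scores.
    """
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(h) - z(f)
```

Chance performance (equal hit and false-alarm rates) gives d′ = 0, and the correction keeps a perfect 50/50 score finite rather than infinite.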
GREENE, BETH G.; LOGAN, JOHN S.; PISONI, DAVID B.
2012-01-01
We present the results of studies designed to measure the segmental intelligibility of eight text-to-speech systems and a natural speech control, using the Modified Rhyme Test (MRT). Results indicated that the voices tested could be grouped into four categories: natural speech, high-quality synthetic speech, moderate-quality synthetic speech, and low-quality synthetic speech. The overall performance of the best synthesis system, DECtalk-Paul, was equivalent to natural speech only in terms of performance on initial consonants. The findings are discussed in terms of recent work investigating the perception of synthetic speech under more severe conditions. Suggestions for future research on improving the quality of synthetic speech are also considered. PMID:23225916
NASA Astrophysics Data System (ADS)
Portegies Zwart, S. F.; Chen, H.-C.
2008-06-01
We reconstruct the initial two-body relaxation time at the half-mass radius for a sample of young (≲300 Myr) star clusters in the Large Magellanic Cloud. We achieve this by simulating star clusters with 12288 to 131072 stars using direct N-body integration. The equations of motion of all stars are calculated with high-precision direct N-body simulations that include the effects of the evolution of single stars and binaries. We find that the initial relaxation times of the sample of observed clusters in the Large Magellanic Cloud range from about 200 Myr to about 2 Gyr. The reconstructed initial half-mass relaxation times for these clusters have a much narrower distribution than the currently observed distribution, which ranges over more than two orders of magnitude.
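The half-mass relaxation time at the centre of this abstract has a commonly used closed-form estimate (Spitzer's t_rh ≈ 0.138 √(N r_h³ / (G ⟨m⟩)) / ln(γN)). The sketch below evaluates that textbook formula in astrophysical units; it is an order-of-magnitude illustration, not the paper's N-body reconstruction, and the mean stellar mass and γ defaults are assumptions.

```python
import math

G = 4.498e-3  # gravitational constant in pc^3 / (Msun Myr^2)

def t_rh_myr(n_stars, r_half_pc, mean_mass_msun=0.5, gamma=0.4):
    """Half-mass two-body relaxation time (Spitzer-style estimate), in Myr.

    t_rh = 0.138 * sqrt(N * r_h^3 / (G * <m>)) / ln(gamma * N)
    gamma ~ 0.4 for equal-mass systems; smaller values (~0.02) are often
    used when a stellar mass spectrum is present.
    """
    return (0.138 * math.sqrt(n_stars * r_half_pc ** 3 / (G * mean_mass_msun))
            / math.log(gamma * n_stars))
```

For N = 10⁵ stars of 0.5 M⊙ inside a 1 pc half-mass radius this gives roughly 90 Myr, and t_rh grows with N (√N outpaces the Coulomb logarithm), which is why the 12288-131072 star models in the abstract span a wide range of relaxation times.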
Significant locations in auxiliary data as seeds for typical use cases of point clustering
NASA Astrophysics Data System (ADS)
Kröger, Johannes
2018-05-01
Random greedy clustering and grid-based clustering are highly sensitive to their initial parameters. When used to cluster point data in maps, they often change the apparent distribution of the underlying data. We propose a process that uses precomputed weighted seed points for the initialization of clusters, derived for example from local maxima in population density data. Exemplary results from the clustering of a dataset of petrol stations are presented.
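The proposal can be illustrated with a minimal sketch of greedy point clustering whose clusters are opened at precomputed seed locations before any data-order-dependent behaviour kicks in. This is an assumed toy formulation, not the authors' algorithm: seeds stand in for significant locations (e.g. population-density maxima), and the radius threshold is an illustrative parameter.

```python
import math

def seeded_greedy_clusters(points, seeds, radius):
    """Greedy point clustering initialized from precomputed seed locations.

    Each seed opens a cluster; a point joins the nearest existing centre if
    it lies within `radius`.  Leftover points open new clusters in input
    order, which is the order-dependent behaviour seeding is meant to tame.
    """
    centres = list(seeds)
    clusters = [[] for _ in centres]
    for p in points:
        dists = [math.dist(p, c) for c in centres]
        best = min(range(len(centres)), key=dists.__getitem__)
        if dists[best] <= radius:
            clusters[best].append(p)
        else:
            centres.append(p)        # fall back to plain greedy clustering
            clusters.append([p])
    return centres, clusters
```

With fixed seeds, nearby points always land in the same seed cluster regardless of input order; only the points far from every seed remain subject to greedy, order-dependent cluster creation.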
Making Sense of a Sequence of Events: A Psychologically Supported AI Implementation
NASA Astrophysics Data System (ADS)
Chassy, Philippe; Prade, Henri
People try to make sense of the usually incomplete reports they receive about events that take place. For doing this, they make use of what they believe the normal course of things should be. An agent's beliefs may be consonant or dissonant with what is reported. For making sense, people usually ascribe different types of relations between events. A prototypical example is the ascription of causality between events. The paper proposes a systematic study of consonance and dissonance between beliefs and reports. The approach is shown to be consistent with findings in psychology. An implementation is presented with some illustrative examples.
[Error analysis of functional articulation disorders in children].
Zhou, Qiao-juan; Yin, Heng; Shi, Bing
2008-08-01
To explore the clinical characteristics of functional articulation disorders in children and provide more evidence for differential diagnosis and speech therapy. 172 children with functional articulation disorders were grouped by age: children aged 4-5 years were assigned to one group, and those aged 6-10 years to another. Their phonological samples were collected and analyzed. In both groups, substitution and omission (deletion) were the main articulation errors, dental consonants were the most frequently misarticulated sounds, and bilabial and labio-dental consonants were rarely in error. In the age 4-5 group, the order of error frequency, from highest to lowest, was dental, velar, lingual, apical, bilabial, labio-dental; in the age 6-10 group it was dental, lingual, apical, velar, bilabial, labio-dental. Lateral misarticulation and palatalized misarticulation occurred more often in the age 6-10 group than in the age 4-5 group, and in both groups were found only in lingual and dental consonants. Misarticulation in functional articulation disorders occurs mainly in dental consonants and rarely in bilabial and labio-dental consonants; substitution and omission are the most frequent errors; lateral and palatalized misarticulations occur mainly in lingual and dental consonants.
Identification and discrimination of Spanish front vowels
NASA Astrophysics Data System (ADS)
Castellanos, Isabel; Lopez-Bascuas, Luis E.
2004-05-01
The idea that vowels are perceived less categorically than consonants is widely accepted. Ades [Psychol. Rev. 84, 524-530 (1977)] tried to explain this fact on the basis of the Durlach and Braida [J. Acoust. Soc. Am. 46, 372-383 (1969)] theory of intensity resolution. Since vowels seem to cover a broader perceptual range, context-coding noise for vowels should be greater than for consonants, leading to less categorical performance on the vocalic segments. However, relatively recent work by Macmillan et al. [J. Acoust. Soc. Am. 84, 1262-1280 (1988)] has cast doubt on the assumption of different perceptual ranges for vowels and consonants even though context variance is acknowledged to be greater for the former. A possibility is that context variance increases as the number of long-term phonemic categories increases. To test this hypothesis we focused on Spanish as the target language. Spanish has fewer vowel categories than English, and the implication is that Spanish vowels will be more categorically perceived. Identification and discrimination experiments were conducted on a synthetic /i/-/e/ continuum and the obtained functions were studied to assess whether Spanish vowels are more categorically perceived than English vowels. The results are discussed in the context of different theories of speech perception.
Perception of temporally modified speech in auditory neuropathy.
Hassan, Dalia Mohamed
2011-01-01
Disrupted auditory nerve activity in auditory neuropathy (AN) significantly impairs the sequential processing of auditory information, resulting in poor speech perception. This study investigated the ability of AN subjects to perceive temporally modified consonant-vowel (CV) pairs and shed light on their phonological awareness skills. Four Arabic CV pairs were selected: /ki/-/gi/, /to/-/do/, /si/-/sti/ and /so/-/zo/. The formant transitions in consonants and the pauses between CV pairs were prolonged. Rhyming, segmentation and blending skills were tested using words at a natural rate of speech and with prolongation of the speech stream. Fourteen adult AN subjects were compared to a matched group of cochlear-impaired patients in their perception of acoustically processed speech. The AN group distinguished the CV pairs at a low speech rate, in particular with modification of the consonant duration. Phonological awareness skills deteriorated in adult AN subjects but improved with prolongation of the speech inter-syllabic time interval. A rehabilitation program for AN should consider temporal modification of speech, training for auditory temporal processing and the use of devices with innovative signal processing schemes. Verbal modifications as well as visual imaging appear to be promising compensatory strategies for remediating the affected phonological processing skills.
Measurement of Voice Onset Time in Maxillectomy Patients
Hattori, Mariko; Sumita, Yuka I.; Taniguchi, Hisashi
2014-01-01
Objective speech evaluation using acoustic measurement is needed for the proper rehabilitation of maxillectomy patients. For digital evaluation of consonants, measurement of voice onset time is one option. However, voice onset time has not been measured in maxillectomy patients as their consonant sound spectra exhibit unique characteristics that make the measurement of voice onset time challenging. In this study, we established criteria for measuring voice onset time in maxillectomy patients for objective speech evaluation. We examined voice onset time for /ka/ and /ta/ in 13 maxillectomy patients by calculating the number of valid measurements of voice onset time out of three trials for each syllable. Wilcoxon's signed rank test showed that voice onset time measurements were more successful for /ka/ and /ta/ when a prosthesis was used (Z = −2.232, P = 0.026 and Z = −2.401, P = 0.016, resp.) than when a prosthesis was not used. These results indicate that the prosthesis affected voice onset time measurement in these patients. Although more research in this area is needed, measurement of voice onset time has the potential to be used to evaluate consonant production in maxillectomy patients wearing a prosthesis. PMID:24574934
Human phoneme recognition depending on speech-intrinsic variability.
Meyer, Bernd T; Jürgens, Tim; Wesker, Thorsten; Brand, Thomas; Kollmeier, Birger
2010-11-01
The influence of different sources of speech-intrinsic variation (speaking rate, effort, style and dialect or accent) on human speech perception was investigated. In listening experiments with 16 listeners, confusions of consonant-vowel-consonant (CVC) and vowel-consonant-vowel (VCV) sounds in speech-weighted noise were analyzed. Experiments were based on the OLLO logatome speech database, which was designed for a man-machine comparison. It contains utterances spoken by 50 speakers from five dialect/accent regions and covers several intrinsic variations. By comparing results depending on intrinsic and extrinsic variations (i.e., different levels of masking noise), the degradation induced by variabilities can be expressed in terms of the SNR. The spectral level distance between the respective speech segment and the long-term spectrum of the masking noise was found to be a good predictor for recognition rates, while phoneme confusions were influenced by the distance to spectrally close phonemes. An analysis based on transmitted information of articulatory features showed that voicing and manner of articulation are comparatively robust cues in the presence of intrinsic variations, whereas the coding of place is more degraded. The database and detailed results have been made available for comparisons between human speech recognition (HSR) and automatic speech recognizers (ASR).
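The "transmitted information of articulatory features" analysis mentioned above is classically computed, following Miller and Nicely, as the mutual information between stimulus and response categories in a confusion matrix. A minimal sketch of that computation; the 2x2 voicing confusion matrix here is invented for illustration and is not data from the OLLO experiments:

```python
import numpy as np

def transmitted_information(confusions):
    """Mutual information (in bits) between stimulus and response,
    estimated from a matrix of confusion counts
    (rows: stimuli, columns: responses)."""
    p = confusions / confusions.sum()        # joint probabilities
    px = p.sum(axis=1, keepdims=True)        # stimulus marginals
    py = p.sum(axis=0, keepdims=True)        # response marginals
    # p * log2(p / (px*py)), with 0 * log(0) treated as 0
    terms = np.where(p > 0, p * np.log2(np.where(p > 0, p, 1.0) / (px * py)), 0.0)
    return terms.sum()

# Hypothetical voicing confusions: 200 trials, mostly correct
voicing = np.array([[90, 10],
                    [15, 85]], dtype=float)
ti = transmitted_information(voicing)   # fraction of the 1 available bit
```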
Acoustics of contrastive prosody in children
NASA Astrophysics Data System (ADS)
Patel, Rupal; Piel, Jordan; Grigos, Maria
2005-04-01
Empirical data on the acoustics of prosodic control in children is limited, particularly for linguistically contrastive tasks. Twelve children aged 4, 7, and 11 years were asked to produce two utterances ``Show Bob a bot'' (voiced consonants) and ``Show Pop a pot'' (voiceless consonants) 10 times each with emphasis placed on the second word (Bob/Pop) and 10 times with emphasis placed on the last word (bot/pot). A total of 40 utterances were analyzed per child. The following acoustic measures were obtained for each word within each utterance: average fundamental frequency (f0), peak f0, average intensity, peak intensity, and duration. Preliminary results suggest that 4 year olds are unable to modulate prosodic cues to signal the linguistic contrast. The 7 year olds, however, not only signaled the appropriate stress location, but did so with the most contrastive differences in f0, intensity, and duration, of all age groups. Prosodic differences between stressed and unstressed words were more pronounced for the utterance with voiced consonants. These findings suggest that the acoustics of linguistic prosody begin to differentiate between age 4 and 7 and may be highly influenced by changes in physiological control and flexibility that may also affect segmental features.
Acoustical study of the development of stop consonants in children
NASA Astrophysics Data System (ADS)
Imbrie, Annika K.
2003-10-01
This study focuses on the acoustic patterns of stop consonants and adjacent vowels as they develop in young children (ages 2.6-3.3) over a six month period. The acoustic properties that are being measured for stop consonants include spectra of bursts, frication noise and aspiration noise, and formant movements. Additionally, acoustic landmarks are labeled for measurements of durations of events determined by these landmarks. These acoustic measurements are being interpreted in terms of the supraglottal, laryngeal, and respiratory actions that give rise to them. Preliminary data show that some details of the child's gestures are still far from achieving the adult pattern. The burst of frication noise at the release tends to be shorter than adult values, and often consists of multiple bursts. From the burst spectrum, the place of articulation appears to be normal. Finally, coordination of closure of the glottis and release of the primary articulator is still quite variable, as is apparent from a large standard deviation in VOT. Analysis of longitudinal data on young children will result in better models of the development of the coordination of articulation, phonation, and respiration for motor speech production. [Work supported by NIH Grants Nos. DC00038 and DC00075.]
Acoustical study of the development of stop consonants in children
NASA Astrophysics Data System (ADS)
Imbrie, Annika K.
2004-05-01
This study focuses on the acoustic patterns of stop consonants and adjacent vowels as they develop in young children (ages 2.6-3.3) over a 6-month period. The acoustic properties that are being measured for stop consonants include spectra of bursts, frication noise and aspiration noise, and formant movements. Additionally, acoustic landmarks are labeled for measurements of durations of events determined by these landmarks. These acoustic measurements are being interpreted in terms of the supraglottal, laryngeal, and respiratory actions that give rise to them. Preliminary data show that some details of the child's gestures are still far from achieving the adult pattern. The burst of frication noise at the release tends to be shorter than adult values, and often consists of multiple bursts, possibly due to greater compliance of the active articulator. From the burst spectrum, the place of articulation appears to be normal. Finally, coordination of closure of the glottis and release of the primary articulator is still quite variable, as is apparent from a large standard deviation in VOT. Analysis of longitudinal data on young children will result in better models of the development of motor speech production. [Work supported by NIH Grants DC00038 and DC00075.]
Identification of speech transients using variable frame rate analysis and wavelet packets.
Rasetshwane, Daniel M; Boston, J Robert; Li, Ching-Chung
2006-01-01
Speech transients are important cues for identifying and discriminating speech sounds. Yoo et al. and Tantibundhit et al. were successful in identifying speech transients and, by emphasizing them, in improving the intelligibility of speech in noise. However, their methods are computationally intensive and unsuitable for real-time applications. This paper presents a method to identify and emphasize speech transients that combines subband decomposition by the wavelet packet transform with variable frame rate (VFR) analysis and unvoiced consonant detection. The VFR analysis is applied to each wavelet packet to define a transitivity function that describes the extent to which the wavelet coefficients of that packet are changing. Unvoiced consonant detection is used to identify unvoiced consonant intervals, and the transitivity function is amplified during these intervals. The wavelet coefficients are multiplied by the transitivity function for that packet, amplifying the coefficients localized at times when they are changing and attenuating coefficients at times when they are steady. Inverse transform of the modified wavelet packet coefficients produces a signal corresponding to speech transients similar to the transients identified by Yoo et al. and Tantibundhit et al. A preliminary implementation of the algorithm runs more efficiently than these earlier methods.
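As a rough, self-contained illustration of the transitivity idea (not the authors' actual VFR or wavelet-packet implementation), the sketch below computes a per-frame measure of how much a coefficient sequence is changing, normalizes it to [0, 1], and uses it to amplify the changing, transient-like regions; the frame length and the synthetic test signal are arbitrary choices:

```python
import numpy as np

def transitivity(coeffs, frame=32):
    """Per-sample 'transitivity': mean absolute frame-to-frame change of
    the coefficients, normalized to [0, 1] and expanded back to samples."""
    n = len(coeffs) // frame
    frames = coeffs[:n * frame].reshape(n, frame)
    change = np.abs(np.diff(frames, axis=0)).mean(axis=1)
    change = np.concatenate([[change[0]], change])     # pad the first frame
    return np.repeat(change / (change.max() + 1e-12), frame)

# Synthetic "subband": a steady segment followed by a rapidly varying one
sig = np.concatenate([np.ones(256), np.sin(np.linspace(0.0, 40.0, 256))])
t = transitivity(sig)
emphasized = sig[:len(t)] * (1.0 + t)   # boost where coefficients change
```

The steady half of the signal yields a transitivity near zero, so it is left nearly untouched, while the rapidly varying half is amplified.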
Won, Jong Ho; Lorenzi, Christian; Nie, Kaibao; Li, Xing; Jameyson, Elyse M; Drennan, Ward R; Rubinstein, Jay T
2012-08-01
Previous studies have demonstrated that normal-hearing listeners can understand speech using the recovered "temporal envelopes," i.e., amplitude modulation (AM) cues from frequency modulation (FM). This study evaluated this mechanism in cochlear implant (CI) users for consonant identification. Stimuli containing only FM cues were created using 1, 2, 4, and 8-band FM-vocoders to determine if consonant identification performance would improve as the recovered AM cues become more available. A consistent improvement was observed as the band number decreased from 8 to 1, supporting the hypothesis that (1) the CI sound processor generates recovered AM cues from broadband FM, and (2) CI users can use the recovered AM cues to recognize speech. The correlation between the intact and the recovered AM components at the output of the sound processor was also generally higher when the band number was low, supporting the consonant identification results. Moreover, CI subjects who were better at using recovered AM cues from broadband FM cues showed better identification performance with intact (unprocessed) speech stimuli. This suggests that speech perception performance variability in CI users may be partly caused by differences in their ability to use AM cues recovered from FM speech cues.
Bugge, Anna; Möller, Sören; Westfall, Daniel R; Tarp, Jakob; Gejl, Anne K; Wedderkopp, Niels; Hillman, Charles H
2018-01-01
The main objective of this study was to investigate the associations between waist circumference, metabolic risk factors, and executive function in adolescents. The study was cross-sectional and included 558 adolescents (mean age 14.2 years). Anthropometrics and systolic blood pressure (sysBP) were measured and fasting blood samples were analyzed for metabolic risk factors. A metabolic risk factor cluster score (MetS-cluster score) was computed from the sum of standardized sysBP, triglycerides (TG), inverse high-density lipid cholesterol (HDLc) and insulin resistance (homeostasis model assessment). Cognitive control was measured with a modified flanker task. Regression analyses indicated that after controlling for demographic variables, HDLc exhibited a negative and TG a positive association with flanker reaction time (RT). Waist circumference did not demonstrate a statistically significant total association with the cognitive outcomes. In structural equation modeling, waist circumference displayed an indirect positive association with incongruent RT through a higher MetS-cluster score and through lower HDLc. The only statistically significant direct association between waist circumference and the cognitive outcomes was for incongruent RT in the model including HDLc as mediator. These findings are consonant with the previous literature reporting an adverse association between certain metabolic risk factors and cognitive control. Accordingly, these results suggest specificity between metabolic risk factors and cognitive control outcomes. Further, results of the present study, although cross-sectional, provide new evidence that specific metabolic risk factors may mediate an indirect association between adiposity and cognitive control in adolescents, even though a direct association between these variables was not observed. 
However, taking the cross-sectional study design into consideration, these results should be interpreted with caution and future longitudinal or experimental studies should verify the findings of this study.
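The cluster score described above is a sum of sample z-scores, with HDLc entering inversely because higher HDLc is protective. A minimal sketch with invented values for four adolescents (not the study's data; the HOMA column stands in for the insulin-resistance term):

```python
import numpy as np

def z(x):
    """Standardize a variable to zero mean and unit SD within the sample."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

# Hypothetical measurements for four adolescents (units illustrative only)
sysbp = [110, 125, 118, 102]   # systolic blood pressure, mmHg
tg    = [0.8, 1.6, 1.1, 0.7]   # triglycerides, mmol/L
hdlc  = [1.6, 1.0, 1.3, 1.7]   # HDL cholesterol, mmol/L
homa  = [1.2, 3.1, 1.9, 0.9]   # HOMA insulin-resistance index

# Inverse HDLc: its z score is negated before summing
mets_cluster = z(sysbp) + z(tg) - z(hdlc) + z(homa)
```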
The global Minmax k-means algorithm.
Wang, Xiaoyan; Bai, Yanping
2016-01-01
The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes produces singleton clusters, and its initial positions can be poor; after a bad initialization, the k-means algorithm easily converges to a poor local optimum. In this paper, we first modified the global k-means algorithm to eliminate singleton clusters, and then applied the MinMax k-means clustering error criterion to the global k-means algorithm to overcome the effect of bad initialization, yielding the proposed global Minmax k-means algorithm. The proposed clustering method was tested on several popular data sets and compared to the k-means algorithm, the global k-means algorithm, and the MinMax k-means algorithm. The experimental results show that the proposed algorithm outperforms the other algorithms considered.
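A compact sketch of the incremental "global" search the abstract builds on: clusters are grown from 1 to k, each data point is tried as the position of the new center, and plain k-means refines every candidate. This is baseline global k-means only; the paper's singleton elimination and MinMax error weighting are not reproduced here:

```python
import numpy as np

def kmeans(X, centers, iters=50):
    """Plain k-means refinement from the given initial centers."""
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    sse = ((X - centers[labels]) ** 2).sum()
    return centers, labels, sse

def global_kmeans(X, k):
    """Incremental global k-means: grow from 1 to k clusters, trying every
    data point as the candidate position for the newly added center."""
    centers = X.mean(axis=0, keepdims=True)
    for _ in range(2, k + 1):
        best = None
        for x in X:                          # each point is a candidate
            c, _, sse = kmeans(X, np.vstack([centers, x]))
            if best is None or sse < best[1]:
                best = (c, sse)
        centers = best[0]
    return kmeans(X, centers)

# Three well-separated Gaussian blobs as a toy data set
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (40, 2)),
               rng.normal(3, 0.3, (40, 2)),
               rng.normal([0, 3], 0.3, (40, 2))])
centers, labels, sse = global_kmeans(X, 3)
```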
NASA Astrophysics Data System (ADS)
Sirait, Kamson; Tulus; Budhiarti Nababan, Erna
2017-12-01
Clustering methods that have high accuracy and time efficiency are necessary for the filtering process. One method that has been widely known and applied in clustering is K-Means Clustering. In its application, the determination of the initial cluster centers greatly affects the results of the K-Means algorithm. This research discusses the results of K-Means Clustering with starting centroids determined randomly and with the KD-Tree method. On a data set of 1000 student academic records used to identify students at risk of dropping out, random initial centroid selection gave an SSE value of 952972 for the quality variable and 232.48 for the GPA variable, whereas initial centroid determination by KD-Tree gave an SSE value of 504302 for the quality variable and 214.37 for the GPA variable. The smaller SSE values indicate that K-Means Clustering with initial KD-Tree centroid selection has better accuracy than K-Means Clustering with random initial centroid selection.
Hallé, Pierre A.; Ridouane, Rachid; Best, Catherine T.
2016-01-01
In a discrimination experiment on several Tashlhiyt Berber singleton-geminate contrasts, we find that French listeners encounter substantial difficulty compared to native speakers. Native listeners of Tashlhiyt perform near ceiling level on all contrasts. French listeners perform better on final contrasts such as fit-fitt than initial contrasts such as bi-bbi or sir-ssir. That is, French listeners are more sensitive to silent closure duration in word-final voiceless stops than to either voiced murmur or frication duration of fully voiced stops or voiceless fricatives in word-initial position. We propose, tentatively, that native speakers of French, a language in which gemination is usually not considered to be phonemic, have not acquired quantity contrasts but yet exhibit a presumably universal sensitivity to rhythm, whereby listeners are able to perceive and compare the relative temporal distance between beats given by successive salient phonetic events such as a sequence of vowel nuclei. PMID:26973551
Electropalatographic analysis of apraxia of speech in a left hander and in a right hander.
Sugishita, M; Konno, K; Kabe, S; Yunoki, K; Togashi, O; Kawamura, M
1987-10-01
Two cases with 'pure' apraxia of speech are reported. The articulatory disturbances were quite similar. One of the two cases was a left-handed male with a subcortical haemorrhage and the other a right-handed male with a cerebral infarct. The MRI and CT scans showed that the first case had a lesion that mainly involved the right precentral gyrus and its deep white matter, and that the second had a lesion mainly affecting the lower parts of the left precentral and postcentral gyri and their deep white matter. These findings and a literature review suggest that a corticosubcortical lesion of the lower part of the left precentral gyrus in most right handers and a lesion of the symmetric region in the right hemisphere in some left handers cause apraxia of speech. The omission errors for sounds articulated by the tongue and the hard palate were analysed using electropalatography, which records visually the dynamics of the palatolingual contact. The results demonstrated that there were three kinds of omission errors: true omissions (no palatolingual contact); omissions with incorrect contact (palatolingual contact for a different sound or undifferentiated sound); and omissions with correct contact (correct palatolingual contact for a target sound). The latter two types of omission error were observed for initial consonants and they were probably caused by a delay in air flow. The patients also showed a tendency to substitute one of the two consonants /t, t/ for other sounds, which suggested that they had difficulty in the inhibition of tongue activity.
McNeil, M.R.; Katz, W.F.; Fossett, T.R.D.; Garst, D.M.; Szuminsky, N.J.; Carter, G.; Lim, K.Y.
2010-01-01
Apraxia of speech (AOS) is a motor speech disorder characterized by disturbed spatial and temporal parameters of movement. Research on motor learning suggests that augmented feedback may provide a beneficial effect for training movement. This study examined the effects of the presence and frequency of online augmented visual kinematic feedback (AVKF) and clinician-provided perceptual feedback on speech accuracy in 2 adults with acquired AOS. Within a single-subject multiple-baseline design, AVKF was provided using electromagnetic midsagittal articulography (EMA) in 2 feedback conditions (50 or 100%). Articulator placement was specified for speech motor targets (SMTs). Treated and baselined SMTs were in the initial or final position of single-syllable words, in varying consonant-vowel or vowel-consonant contexts. SMTs were selected based on each participant's pre-assessed erred productions. Productions were digitally recorded and online perceptual judgments of accuracy (including segment and intersegment distortions) were made. Inter- and intra-judge reliability for perceptual accuracy was high. Results measured by visual inspection and effect size revealed positive acquisition and generalization effects for both participants. Generalization occurred across vowel contexts and to untreated probes. Results of the frequency manipulation were confounded by presentation order. Maintenance of learned and generalized effects were demonstrated for 1 participant. These data provide support for the role of augmented feedback in treating speech movements that result in perceptually accurate speech production. Future investigations will explore the independent contributions of each feedback type (i.e. kinematic and perceptual) in producing efficient and effective training of SMTs in persons with AOS. PMID:20424468
The cause of striae distensae.
Shuster, S
1979-01-01
Striae are always initiated by stretch, whether the stretch is excessive or minimal: spontaneous striae do not occur. Cross-linkage of collagen appears to be more important than the amount of collagen in permitting striae in response to stretch. An increase in cross-linkage, as in age, increases the resistance to stretch deformation, but this rigidity leads ultimately to tearing of the skin and not striae. At the other extreme, the absence of cross-linkage leads to "elasticity" and excessive stretching, with eventual rupture of the skin if the stretch goes beyond the elastic limit, but again, no striae. Striae therefore appear to occur only in skin in which rigid cross-linked collagen and "elastic" unlinked collagen are balanced, thus permitting a limited degree of stretch and a limited intradermal rupture, i.e. striae. (Although rigidity and elasticity are presented here in terms of collagen cross-linkage, it seems probable that changes in interfibrillary materials such as glycosaminoglycans will prove important in this respect.) This balance of stretch and limited tear is a continuous process and is an adaptation to the needs of growth in adolescence and change in body mass in early adult life; there are many subclinical "striae" for each gross tear that is recognised clinically. Rate of stretch likewise appears to be an important factor, since if it is very slow, striae are less likely; there is "give" and new collagen formation. Although this working hypothesis is consonant with the facts, only further work will show whether this smooth consonance is that of the fable or the weathered rock of fact.
Kokkinakis, Kostas; Loizou, Philipos C
2011-09-01
The purpose of this study is to determine the relative impact of reverberant self-masking and overlap-masking effects on speech intelligibility by cochlear implant listeners. Sentences were presented in one condition wherein reverberant consonant segments were replaced with clean consonants and in another condition wherein reverberant vowel segments were replaced with clean vowels. The underlying assumption is that self-masking effects would dominate in the first condition, whereas overlap-masking effects would dominate in the second condition. Results indicated that the degradation of speech intelligibility in reverberant conditions is caused primarily by self-masking effects that give rise to flattened formant transitions. © 2011 Acoustical Society of America
Business Clusters: Building on Local Strengths.
ERIC Educational Resources Information Center
Baldwin, Fred D.
2001-01-01
The Northwest Pennsylvania Industrial Resource Center's "wood cluster initiative" illustrates the benefits of rural business clusters. The initiative is turning a loose grouping of timber and forest-product firms into a competitive system by providing technical assistance, helping businesses plan and conduct job training programs,…
Spoken Word Recognition Errors in Speech Audiometry: A Measure of Hearing Performance?
Coene, Martine; van der Lee, Anneke; Govaerts, Paul J.
2015-01-01
This report provides a detailed analysis of incorrect responses from an open-set spoken word-repetition task which is part of a Dutch speech audiometric test battery. Single-consonant confusions were analyzed from 230 normal hearing participants in terms of the probability of choice of a particular response on the basis of acoustic-phonetic, lexical, and frequency variables. The results indicate that consonant confusions are better predicted by lexical knowledge than by acoustic properties of the stimulus word. A detailed analysis of the transmission of phonetic features indicates that “voicing” is best preserved whereas “manner of articulation” yields most perception errors. As consonant confusion matrices are often used to determine the degree and type of a patient's hearing impairment, to predict a patient's gain in hearing performance with hearing devices and to optimize the device settings in view of maximum output, the observed findings are highly relevant for the audiological practice. Based on our findings, speech audiometric outcomes provide a combined auditory-linguistic profile of the patient. The use of confusion matrices might therefore not be the method best suited to measure hearing performance. Ideally, they should be complemented by other listening task types that are known to have less linguistic bias, such as phonemic discrimination. PMID:26557717
Dynamic Spectral Structure Specifies Vowels for Adults and Children
Nittrouer, Susan; Lowenstein, Joanna H.
2014-01-01
The dynamic specification account of vowel recognition suggests that formant movement between vowel targets and consonant margins is used by listeners to recognize vowels. This study tested that account by measuring contributions to vowel recognition of dynamic (i.e., time-varying) spectral structure and coarticulatory effects on stationary structure. Adults and children (four- and seven-year-olds) were tested with three kinds of consonant-vowel-consonant syllables: (1) unprocessed; (2) sine waves that preserved both stationary coarticulated and dynamic spectral structure; and (3) vocoded signals that primarily preserved the stationary, but not the dynamic, structure. Sections of two lengths were removed from syllable middles: (1) half the vocalic portion; and (2) all but the first and last three pitch periods. Adults performed accurately with unprocessed and sine-wave signals, as long as half the syllable remained; their recognition was poorer for vocoded signals, but above chance. Seven-year-olds performed more poorly than adults with both sorts of processed signals, but disproportionately worse with vocoded than sine-wave signals. Most four-year-olds were unable to recognize vowels at all with vocoded signals. Conclusions were that both dynamic and stationary coarticulated structures support vowel recognition for adults, but children attend to dynamic spectral structure more strongly because early phonological organization favors whole words. PMID:25536845
Callahan, Brandy L; Belleville, Sylvie; Ferland, Guylaine; Potvin, Olivier; Tremblay, Marie-Pier; Hudon, Carol; Macoir, Joël
2014-01-01
The Brown-Peterson task is used to assess verbal short-term memory as well as divided attention. In its auditory three-consonant version, trigrams are presented to participants who must recall the items in correct order after variable delays, during which an interference task is performed. The present study aimed to establish normative data for this test in the elderly French-Quebec population based on cross-sectional data from a retrospective, multi-center convenience sample. A total of 595 elderly native French-speakers from the province of Quebec performed the Memoria version of the auditory three-consonant Brown-Peterson test. For both series and item-by-item scoring methods, age, education, and, in most cases, recall after a 0-second interval were found to be significantly associated with recall performance after 10-second, 20-second, and 30-second interference intervals. Based on regression model results, equations to calculate Z scores are presented for the 10-second, 20-second and 30-second intervals and for each scoring method to allow estimation of expected performance based on participants' individual characteristics. As an important ceiling effect was observed at the 0-second interval, norms for this interference interval are presented in percentiles.
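Regression-based norms of this kind are typically applied as Z = (observed - predicted) / SD(residual), with predicted recall a linear function of age, education, and 0-second recall. The coefficients and SD below are invented for illustration and are not the published norms:

```python
def brown_peterson_z(score, age, education, score_0s,
                     coefs=(9.5, -0.05, 0.12, 0.55), sd=1.8):
    """Hypothetical regression-based norm: predicted recall is a linear
    function of age, education, and 0-second recall; the Z score compares
    the observed score to that prediction. Coefficients are made up for
    illustration only."""
    intercept, b_age, b_edu, b_0s = coefs
    predicted = intercept + b_age * age + b_edu * education + b_0s * score_0s
    return (score - predicted) / sd

# A 75-year-old with 12 years of education, 0-s recall of 10, scoring 7
z_example = brown_peterson_z(score=7, age=75, education=12, score_0s=10)
```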
Dynamic spectral structure specifies vowels for children and adults
Nittrouer, Susan
2008-01-01
When it comes to making decisions regarding vowel quality, adults seem to weight dynamic syllable structure more strongly than static structure, although disagreement exists over the nature of the most relevant kind of dynamic structure: spectral change intrinsic to the vowel or structure arising from movements between consonant and vowel constrictions. Results have been even less clear regarding the signal components children use in making vowel judgments. In this experiment, listeners of four different ages (adults, and 3-, 5-, and 7-year-old children) were asked to label stimuli that sounded either like steady-state vowels or like CVC syllables which sometimes had middle sections masked by coughs. Four vowel contrasts were used, crossed for type (front/back or closed/open) and consonant context (strongly or only slightly constraining of vowel tongue position). All listeners recognized vowel quality with high levels of accuracy in all conditions, but children were disproportionately hampered by strong coarticulatory effects when only steady-state formants were available. Results clarified past studies, showing that dynamic structure is critical to vowel perception for all aged listeners, but particularly for young children, and that it is the dynamic structure arising from vocal-tract movement between consonant and vowel constrictions that is most important. PMID:17902868
Testing the limits of long-distance learning: Learning beyond a three-segment window
Finley, Sara
2012-01-01
Traditional flat-structured bigram and trigram models of phonotactics are useful because they capture a large number of facts about phonological processes. Additionally, these models predict that local interactions should be easier to learn than long-distance ones since long-distance dependencies are difficult to capture with these models. Long-distance phonotactic patterns have been observed by linguists in many languages, who have proposed different kinds of models, including feature-based bigram and trigram models, as well as precedence models. Contrary to flat-structured bigram and trigram models, these alternatives capture unbounded dependencies because at an abstract level of representation, the relevant elements are locally dependent, even if they are not adjacent at the observable level. Using an artificial grammar learning paradigm, we provide additional support for these alternative models of phonotactics. Participants in two experiments were exposed to a long-distance consonant harmony pattern in which the first consonant of a five-syllable word was [s] or [ʃ] ('sh') and triggered a suffix that was either [-su] or [-ʃu] depending on the sibilant quality of this first consonant. Participants learned this pattern, despite the large distance between the trigger and the target, suggesting that when participants learn long-distance phonological patterns, that pattern is learned without specific reference to distance. PMID:22303815
Effects of obstruent consonants on the F0 contour
NASA Astrophysics Data System (ADS)
Hanson, Helen M.
2003-10-01
When a vowel follows an obstruent consonant, the fundamental frequency in the first few tens of milliseconds of the vowel is influenced by the voicing characteristics of the consonant. The goal of the research reported here is to model this influence, with the intention of improving generation of F0 contours in rule-based speech synthesis. Data have been recorded from 10 subjects. Stops, fricatives, and the nasal /m/ were paired with the vowels /i, ɑ/ to form CVm syllables. The syllables mVm served as baselines with which to compare the obstruents. The target syllables were embedded in carrier sentences. Intonation was varied so that each target syllable was produced with either a high, low, or no pitch accent. Results vary among subjects, but in general, obstruent effects on F0 primarily occur when the syllable carries a high pitch accent. In that case, F0 is increased relative to the baseline following voiceless obstruents, but F0 closely follows the baseline following voiced obstruents. After voiceless obstruents, F0 may be increased for up to 80 ms following voicing onset. When a syllable carries a low or no pitch accent, F0 is increased slightly following all obstruents. [Work supported by NIH Grant No. DC04331.]
NASA Astrophysics Data System (ADS)
Apoux, Frédéric; Bacon, Sid P.
2004-09-01
The relative importance of temporal information in broad spectral regions for consonant identification was assessed in normal-hearing listeners. For the purpose of forcing listeners to use primarily temporal-envelope cues, speech sounds were spectrally degraded using four-noise-band vocoder processing. Frequency-weighting functions were determined using two methods. The first method consisted of measuring the intelligibility of speech with a hole in the spectrum either in quiet or in noise. The second method consisted of correlating performance with the randomly and independently varied signal-to-noise ratio within each band. Results demonstrated that all bands contributed equally to consonant identification when presented in quiet. In noise, however, both methods indicated that listeners consistently placed relatively more weight upon the highest frequency band. It is proposed that the explanation for the difference in results between quiet and noise relates to the shape of the modulation spectra in adjacent frequency bands. Overall, the results suggest that normal-hearing listeners use a common listening strategy in a given condition. However, this strategy may be influenced by the competing sounds, and thus may vary according to the context. Some implications of the results for cochlear implantees and hearing-impaired listeners are discussed.
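The second (correlational) method lends itself to a brief simulation. The sketch below is illustrative only: the listener model, band count, and parameter values are assumptions, not the study's data. Per-band SNR is varied randomly and independently across trials, and each band's SNR is correlated with trial correctness to recover a frequency-weighting function.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_bands = 5000, 4
snr = rng.uniform(-10.0, 10.0, size=(n_trials, n_bands))  # per-band SNR, dB

# Hypothetical listener whose decisions are dominated by the highest band,
# mimicking the extra weight the study found on high frequencies in noise.
true_weights = np.array([0.15, 0.15, 0.20, 0.50])
correct = (snr @ true_weights + rng.normal(0.0, 3.0, n_trials)) > 0.0

# Point-biserial correlation of each band's SNR with trial correctness,
# normalized so the estimated weights sum to one.
r = np.array([np.corrcoef(snr[:, b], correct)[0, 1] for b in range(n_bands)])
weights = r / r.sum()
print(weights.round(2))  # the largest weight falls on the highest band
```

The recovered weighting function tracks the simulated listener's internal weights; in the real experiment the same correlation is computed against identification accuracy rather than a simulated decision.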
Owen Van Horne, Amanda J.; Green Fager, Melanie
2015-01-01
Purpose Children with specific language impairment (SLI) frequently have difficulty producing the past tense. This study aimed to quantify the relative influence of telicity (i.e., the completedness of an event), verb frequency, and stem-final phonemes on the production of past tense by school-age children with SLI and their typically developing (TD) peers. Method Archival elicited production data from children with SLI between the ages of 6 and 9 and TD peers ages 4 to 8 were reanalyzed. Past tense accuracy was predicted using measures of telicity, verb frequency measures, and properties of the final consonant of the verb stem. Results All children were highly accurate when verbs were telic, the inflected form was frequently heard in the past tense, and the word ended in a sonorant/non-alveolar consonant. All children were less accurate when verbs were atelic, rarely heard in the past tense, or ended in a word-final obstruent or alveolar consonant. SLI status depressed overall accuracy rates, but did not influence how facilitative a given factor was. Conclusion Some factors that have been believed to be useful only when children are first discovering past tense, such as telicity, appear to be influential in later years as well. PMID:25879455
Sound Symbolism in the Languages of Australia
Haynie, Hannah; Bowern, Claire; LaPalombara, Hannah
2014-01-01
The notion that linguistic forms and meanings are related only by convention and not by any direct relationship between sounds and semantic concepts is a foundational principle of modern linguistics. Though the principle generally holds across the lexicon, systematic exceptions have been identified. These “sound symbolic” forms have been identified in lexical items and linguistic processes in many individual languages. This paper examines sound symbolism in the languages of Australia. We conduct a statistical investigation of the evidence for several common patterns of sound symbolism, using data from a sample of 120 languages. The patterns examined here include the association of meanings denoting “smallness” or “nearness” with front vowels or palatal consonants, and the association of meanings denoting “largeness” or “distance” with back vowels or velar consonants. Our results provide evidence for the expected associations of vowels and consonants with meanings of “smallness” and “proximity” in Australian languages. However, the patterns uncovered in this region are more complicated than predicted. Several sound-meaning relationships are only significant for segments in prominent positions in the word, and the prevailing mapping between vowel quality and magnitude meaning cannot be characterized by a simple link between gradients of magnitude and vowel F2, contrary to the claims of previous studies. PMID:24752356
Vaz, Suellen; Pezarini, Isabela de Oliveira; Paschoal, Larissa; Chacon, Lourenço
2015-01-01
To describe children's spelling of sonorant consonants in Brazilian Portuguese, to verify whether their spelling errors were influenced by word stress, and to categorize the kinds of errors found. For this study, 801 texts were selected, produced in response to 14 different thematic prompts by 76 first-grade primary-school children from two schools in a city in São Paulo state, Brazil, in 2001. From these texts, all words with sonorant consonants in simple-onset syllabic position were selected and organized according to whether they appeared in pre-tonic, tonic, or post-tonic syllables, or in unstressed or tonic monosyllables. The following was observed: correct spellings far outnumbered errors; errors occurred more often in unstressed syllables; phonological substitutions were the most frequent error type, followed by omissions and, lastly, orthographic substitutions; and most substitutions involved graphemes representing the sonorant class. Considering the distribution of correct and incorrect spellings, as well as their relationship with phonetic-phonological aspects, may contribute to the comprehension of the school difficulties usually found in the first years of literacy instruction.
Electrophysiological and hemodynamic mismatch responses in rats listening to human speech syllables.
Mahmoudzadeh, Mahdi; Dehaene-Lambertz, Ghislaine; Wallois, Fabrice
2017-01-01
Speech is a complex auditory stimulus which is processed according to several time-scales. Whereas consonant discrimination is required to resolve rapid acoustic events, voice perception relies on slower cues. Humans, right from preterm ages, are particularly efficient at encoding temporal cues. To compare the capacities of preterm infants to those observed in other mammals, we tested anesthetized adult rats by using exactly the same paradigm as that used in preterm neonates. We simultaneously recorded neural (using ECoG) and hemodynamic responses (using fNIRS) to series of human speech syllables and investigated the brain response to a change of consonant (ba vs. ga) and to a change of voice (male vs. female). Both methods revealed concordant results, although ECoG measures were more sensitive than fNIRS. Responses to syllables were bilateral, but with marked right-hemispheric lateralization. Responses to voice changes were observed with both methods, while only ECoG was sensitive to consonant changes. These results suggest that rats more effectively processed the speech envelope than fine temporal cues, in contrast with human preterm neonates, in whom the opposite effects were observed. Cross-species comparisons constitute a very valuable tool to define the singularities of the human brain and the species-specific biases that may help human infants to learn their native language.
A comparative intelligibility study of single-microphone noise reduction algorithms.
Hu, Yi; Loizou, Philipos C
2007-09-01
The evaluation of intelligibility of noise reduction algorithms is reported. IEEE sentences and consonants were corrupted by four types of noise including babble, car, street and train at two signal-to-noise ratio levels (0 and 5 dB), and then processed by eight speech enhancement methods encompassing four classes of algorithms: spectral subtractive, subspace, statistical-model-based and Wiener-type algorithms. The enhanced speech was presented to normal-hearing listeners for identification. With the exception of a single noise condition, no algorithm produced significant improvements in speech intelligibility. Information transmission analysis of the consonant confusion matrices indicated that no algorithm significantly improved the place feature score, which is critically important for speech recognition. The algorithms found in previous studies to perform best in terms of overall quality were not the same algorithms that performed best in terms of speech intelligibility. The subspace algorithm, for instance, was previously found to perform the worst in terms of overall quality, but performed well in the present study in terms of preserving speech intelligibility. Overall, the analysis of consonant confusion matrices suggests that in order for noise reduction algorithms to improve speech intelligibility, they need to improve the place and manner feature scores.
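Of the four algorithm classes compared, spectral subtraction is the simplest to sketch. The minimal magnitude-subtraction implementation below is illustrative only: the frame size, overlap, and spectral-floor factor `beta` are assumed values, and it is not any of the specific algorithms evaluated in the study.

```python
import numpy as np

def spectral_subtraction(noisy, noise_only, frame=256, hop=128, beta=0.02):
    """Subtract an average noise magnitude spectrum, frame by frame,
    keeping the noisy phase and flooring the magnitude at beta*|X|."""
    win = np.hanning(frame)
    # Average noise magnitude spectrum from a noise-only excerpt.
    noise_mag = np.mean(
        [np.abs(np.fft.rfft(noise_only[i:i + frame] * win))
         for i in range(0, len(noise_only) - frame, hop)], axis=0)

    out = np.zeros(len(noisy))
    for i in range(0, len(noisy) - frame, hop):
        spec = np.fft.rfft(noisy[i:i + frame] * win)
        # Subtract the noise estimate, then apply a spectral floor to
        # limit "musical noise" from over-subtraction.
        mag = np.maximum(np.abs(spec) - noise_mag, beta * np.abs(spec))
        out[i:i + frame] += np.fft.irfft(mag * np.exp(1j * np.angle(spec)))
    return out
```

With 50% overlap the Hann analysis window overlap-adds to approximately unity, so no explicit synthesis window is needed. The study's finding that such schemes rarely improve intelligibility, despite improving quality, follows from the distortion this subtraction imposes on place-of-articulation cues.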
Nunthayanon, Kulthida; Honda, Ei-ichi; Shimazaki, Kazuo; Ohmori, Hiroko; Inoue-Arai, Maristela Sayuri; Kurabayashi, Tohru; Ono, Takashi
2015-01-01
Different bony structures can affect the function of the velopharyngeal muscles. Asian populations differ morphologically, including in the morphologies of their bony structures. The purpose of this study was to compare the velopharyngeal structures during speech in two Asian populations: Japanese and Thai. Ten healthy Japanese and Thai females (five each) were evaluated with a 3-Tesla (3 T) magnetic resonance imaging (MRI) scanner while they produced a vowel-consonant-vowel sequence (/asa/). A gradient-echo sequence, fast low-angle shot with segmented cine and a parallel imaging technique, was used to obtain sagittal images of the velopharyngeal structures. MRI was carried out in real time during speech production, allowing investigation of the time-to-time changes in the velopharyngeal structures. Thai subjects had a significantly longer hard palate and produced a shorter consonant than Japanese subjects. The velum of the Thai participants showed significant thickening during consonant production and their retroglossal space was significantly wider at rest, whereas the dimensional change during task performance was similar in the two populations. The 3 T MRI movie method can be used to investigate velopharyngeal function and diagnose velopharyngeal insufficiency. The racial differences may include differences in skeletal patterns and soft-tissue morphology that result in functional differences for the affected structures.
Won, Jong Ho; Lorenzi, Christian; Nie, Kaibao; Li, Xing; Jameyson, Elyse M.; Drennan, Ward R.; Rubinstein, Jay T.
2012-01-01
Previous studies have demonstrated that normal-hearing listeners can understand speech using the recovered “temporal envelopes,” i.e., amplitude modulation (AM) cues from frequency modulation (FM). This study evaluated this mechanism in cochlear implant (CI) users for consonant identification. Stimuli containing only FM cues were created using 1, 2, 4, and 8-band FM-vocoders to determine if consonant identification performance would improve as the recovered AM cues become more available. A consistent improvement was observed as the band number decreased from 8 to 1, supporting the hypothesis that (1) the CI sound processor generates recovered AM cues from broadband FM, and (2) CI users can use the recovered AM cues to recognize speech. The correlation between the intact and the recovered AM components at the output of the sound processor was also generally higher when the band number was low, supporting the consonant identification results. Moreover, CI subjects who were better at using recovered AM cues from broadband FM cues showed better identification performance with intact (unprocessed) speech stimuli. This suggests that speech perception performance variability in CI users may be partly caused by differences in their ability to use AM cues recovered from FM speech cues. PMID:22894230
Perceptual invariance of coarticulated vowels over variations in speaking rate.
Stack, Janet W; Strange, Winifred; Jenkins, James J; Clarke, William D; Trent, Sonja A
2006-04-01
This study examined the perception and acoustics of a large corpus of vowels spoken in consonant-vowel-consonant syllables produced in citation-form (lists) and spoken in sentences at normal and rapid rates by a female adult. Listeners correctly categorized the speaking rate of sentence materials as normal or rapid (2% errors) but did not accurately classify the speaking rate of the syllables when they were excised from the sentences (25% errors). In contrast, listeners accurately identified the vowels produced in sentences spoken at both rates when presented the sentences and when presented the excised syllables blocked by speaking rate or randomized. Acoustical analysis showed that formant frequencies at syllable midpoint for vowels in sentence materials showed "target undershoot" relative to citation-form values, but little change over speech rate. Syllable durations varied systematically with vowel identity, speaking rate, and voicing of final consonant. Vowel-inherent-spectral-change was invariant in direction of change over rate and context for most vowels. The temporal location of maximum F1 frequency further differentiated spectrally adjacent lax and tense vowels. It was concluded that listeners were able to utilize these rate- and context-independent dynamic spectrotemporal parameters to identify coarticulated vowels, even when sentential information about speaking rate was not available.
Beniya, Atsushi; Hirata, Hirohito; Watanabe, Yoshihide
2016-11-17
Relaxation dynamics of hot metal clusters on oxide surfaces play a crucial role in a variety of physical and chemical processes. However, their transient mobility has not been investigated as much as that of other systems, such as atoms and molecules on metal surfaces, due to experimental difficulties. To study the role of the transient mobility of clusters on the oxide surface, we investigated the initial adsorption process of size-selected Pt clusters on a thin Al2O3 film. Soft-landing the size-selected clusters while suppressing thermal migration revealed, using scanning tunneling microscopy, that transient migration controls whether the initial adsorption state is an isolated or an aggregated cluster. We demonstrate that transient migration significantly contributes to the initial cluster adsorption process; the cross section for aggregation is seven times larger than the value expected from geometrical considerations, indicating that metal clusters are highly mobile during the energy dissipation process on the oxide surface.
Albustanji, Yusuf M; Albustanji, Mahmoud M; Hegazi, Mohamed M; Amayreh, Mousa M
2014-10-01
The purpose of this study was to assess the prevalence and types of consonant production errors and phonological processes in Saudi Arabic-speaking children with repaired cleft lip and palate, and to determine the relationship between the frequency of errors and the type of cleft. A possible relationship between age, gender and frequency of errors was also investigated. Eighty Saudi children with repaired cleft lip and palate aged 6-15 years (mean 6.7 years) underwent speech, language, and hearing evaluation. The diagnosis of articulation deficits was based on the results of an Arabic articulation test. Phonological processes were reported based on a productivity criterion of a minimum 20% occurrence. Diagnosis of nasality was based on a 5-point scale reflecting severity from 0 through 4. All participants underwent intraoral examination, informal language assessment, and hearing evaluation to assess their speech and language abilities. The chi-square test for independence was used to analyze the results of consonant production as a function of type of CLP and age. Of the 80 participants with CLP, 21 had normal articulation and resonance, while 59 (74%) showed speech abnormalities. Twenty-one of these 59 participants showed only articulation errors; 17 showed only hypernasality; and 21 showed both articulation and resonance deficits. Compensatory articulations (CAs) were observed in 20 participants. The productive phonological processes were consonant backing, final consonant deletion, gliding, and stopping. At age 6 and older, 37% of participants had persisting hearing loss. Despite the early age at time of surgery (mean 6.7 months) for the CLP participants in this study, a substantial number of them demonstrated articulation errors and hypernasality. The results provide findings of value for comparison across diverse languages.
It is especially interesting to consider the prevalence of glottal stops and pharyngeal fricatives in a population for whom these sounds are phonemic. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Neurodiversity, Giftedness, and Aesthetic Perceptual Judgment of Music in Children with Autism
Masataka, Nobuo
2017-01-01
The author investigated the capability of aesthetic perceptual judgment of music in male children diagnosed with autism spectrum disorder (ASD) when compared to age-matched typically developing (TD) male children. Nineteen boys between 4 and 7 years of age with ASD were compared to 28 TD boys while listening to musical stimuli of different aesthetic levels. The results from two musical experiments using these participants are described here. In the first study, responses to a Mozart minuet and a dissonant altered version of the same Mozart minuet were compared. In this first study, the results indicated that both ASD and TD males preferred listening to the original consonant version of the minuet over the altered dissonant version. With the same participants, the second experiment included musical stimuli from four renowned composers: Mozart's and Bach's musical works, both considered consonant in their harmonic structure, were compared with music from Schoenberg and Albinoni, two composers who wrote musical works considered exceedingly harmonically dissonant. In the second study, when the stimuli included consonant or dissonant musical stimuli from different composers, the children with ASD showed greater preference for the aesthetic quality of the highly dissonant music compared to the TD children. While children in both of the groups listened to the consonant stimuli of Mozart and Bach for the same amount of time, the children with ASD listened to the dissonant music of Schoenberg and Albinoni longer than the TD children. As preferring dissonant music is more aesthetically demanding perceptually, these results suggest that ASD male children demonstrate an enhanced capability of aesthetic judgment of music. Subsidiary data collected after the completion of the experiment revealed that absolute pitch ability was prevalent only in the children with ASD, some of whom also possessed extraordinary musical memory.
The implications of these results are discussed with reference to the broader notion of neurodiversity, a term coined to capture potentially gifted qualities in individuals diagnosed with ASD. PMID:29018372
Gangji, Nazneen; Pascoe, Michelle; Smouse, Mantoa
2015-01-01
Swahili is widely spoken in East Africa, but to date there are no culturally and linguistically appropriate materials available for speech-language therapists working in the region. The challenges are further exacerbated by the limited research available on the typical acquisition of Swahili phonology. To describe the speech development of 24 typically developing first-language Swahili-speaking children between the ages of 3;0 and 5;11 years in Dar es Salaam, Tanzania. A cross-sectional design was used with six groups of four children in 6-month age bands. Single-word speech samples were obtained from each child using a set of culturally appropriate pictures designed to elicit all consonants and vowels of Swahili. Each child's speech was audio-recorded and phonetically transcribed using International Phonetic Alphabet (IPA) conventions. Children's speech development is described in terms of (1) phonetic inventory, (2) syllable structure inventory, (3) phonological processes and (4) percentage consonants correct (PCC) and percentage vowels correct (PVC). Results suggest a gradual progression in the acquisition of speech sounds and syllables between the ages of 3;0 and 5;11 years. Vowel acquisition was complete and most of the consonants were acquired by age 3;0. The fricatives /z, s, h/ were acquired later, at 4 years, and /θ/ and /r/ were the last consonants acquired, at age 5;11. Older children were able to produce speech sounds more accurately and had fewer phonological processes in their speech than younger children. Common phonological processes included lateralization and sound preference substitutions. The study contributes a preliminary set of normative data on speech development of Swahili-speaking children. 
Findings are discussed in relation to theories of phonological development, and may be used as a basis for further normative studies with larger numbers of children and ultimately the development of a contextually relevant assessment of the phonology of Swahili-speaking children. © 2014 Royal College of Speech and Language Therapists.
Willadsen, Elisabeth; Boers, Maria; Schöps, Antje; Kisling-Møller, Mia; Nielsen, Joan Bogh; Jørgensen, Line Dahl; Andersen, Mikael; Bolund, Stig; Andersen, Helene Søgaard
2018-01-01
Differing results regarding articulation skills in young children with cleft palate (CP) have been reported and often interpreted as a consequence of different surgical protocols. To assess the influence of different timing of hard palate closure in a two-stage procedure on articulation skills in 3-year-olds born with unilateral cleft lip and palate (UCLP). Secondary aims were to compare results with peers without CP, and to investigate if there are gender differences in articulation skills. Furthermore, burden of treatment was to be estimated in terms of secondary surgery, hearing and speech therapy. A randomized controlled trial (RCT). Early hard palate closure (EHPC) at 12 months versus late hard palate closure (LHPC) at 36 months in a two-stage procedure was tested in a cohort of 126 Danish-speaking children born with non-syndromic UCLP. All participants had the lip and soft palate closed around 4 months of age. Audio and video recordings of a naming test were available from 113 children (32 girls and 81 boys) and were transcribed phonetically. Recordings were obtained prior to hard palate closure in the LHPC group. The main outcome measures were percentage consonants correct adjusted (PCC-A) and consonant errors from blinded assessments. Results from 36 Danish-speaking children without CP obtained previously by Willadsen in 2012 were used for comparison. Children with EHPC produced significantly more target consonants correctly (83%) than children with LHPC (48%; p < .001). In addition, children with LHPC produced significantly more active cleft speech characteristics than children with EHPC (p < .001). Boys achieved significantly lower PCC-A scores than girls (p = .04) and produced significantly more consonant errors than girls (p = .02). No significant differences were found between groups regarding burden of treatment. The control group performed significantly better than the EHPC and LHPC groups on all compared variables. 
© 2017 Royal College of Speech and Language Therapists.
Why aftershock duration matters for probabilistic seismic hazard assessment
Toda, Shinji; Stein, Ross S.
2018-01-01
Most hazard assessments assume that high background seismicity rates indicate a higher probability of large shocks and, therefore, of strong shaking. However, in slowly deforming regions, such as eastern North America, Australia, and inner Honshu, this assumption breaks down if the seismicity clusters are instead aftershocks of historic and prehistoric mainshocks. We therefore probe the circumstances under which aftershocks can last for 100–1000 years. Basham and Adams (1983) and Ebel et al. (2000) proposed that intraplate seismicity in eastern North America could be aftershocks of mainshocks that struck hundreds of years beforehand, a view consonant with rate–state friction (Dieterich, 1994), in which aftershock duration varies inversely with fault-stressing rate. To test these hypotheses, we estimate aftershock durations of the 2011 Mw 9 Tohoku-Oki rupture at 12 sites up to 250 km from the source, as well as for the near-fault aftershocks of eight large Japanese mainshocks, sampling faults slipping 0.01 to 80 mm/yr. Whereas aftershock productivity increases with mainshock magnitude, we find that aftershock duration, the time until the aftershock rate decays to the premainshock rate, does not. Instead, aftershock sequences lasted a month on the fastest-slipping faults and are projected to persist for more than 2000 years on the slowest. Thus, long aftershock sequences can mislead and inflate hazard assessments in intraplate regions if misinterpreted as background seismicity, whereas areas between seismicity clusters may instead harbor a higher chance of large mainshocks, the opposite of what is assumed today.
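The inverse scaling with stressing rate invoked above can be put in rough numbers. In the rate–state framework (Dieterich, 1994), aftershock duration is approximately t_a = Aσ / τ̇, where τ̇ is the fault stressing rate, itself proportional to slip rate. The sketch below is purely illustrative: Aσ, the shear modulus, and the fault width are assumed round-number values, not parameters from the study.

```python
SHEAR_MODULUS = 3e10   # Pa (assumed value)
A_SIGMA = 5e5          # A * effective normal stress, Pa (assumed value)

def aftershock_duration_years(slip_rate_mm_per_yr, fault_width_m=1e4):
    """t_a = A*sigma / tau_dot, with stressing rate tau_dot ~ (G / W) * V."""
    v = slip_rate_mm_per_yr * 1e-3               # slip rate, m/yr
    tau_dot = SHEAR_MODULUS / fault_width_m * v  # stressing rate, Pa/yr
    return A_SIGMA / tau_dot

for v in (80.0, 1.0, 0.01):   # fast plate-boundary to slow intraplate faults
    print(f"{v:6.2f} mm/yr -> {aftershock_duration_years(v):12.1f} yr")
```

Whatever the assumed constants, the durations scale as the inverse ratio of the slip rates, so an 8000-fold difference in slip rate (80 vs. 0.01 mm/yr) yields an 8000-fold difference in projected aftershock duration, which is the qualitative contrast the abstract reports between the fastest and slowest faults.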
Kinematical evolution of tidally limited star clusters: rotational properties
NASA Astrophysics Data System (ADS)
Tiongco, Maria A.; Vesperini, Enrico; Varri, Anna Lisa
2017-07-01
We present the results of a set of N-body simulations following the long-term evolution of the rotational properties of star cluster models evolving in the external tidal field of their host galaxy, after an initial phase of violent relaxation. The effects of two-body relaxation and escape of stars lead to a redistribution of the ordered kinetic energy from the inner to the outer regions, ultimately determining a progressive general loss of angular momentum; these effects are reflected in the overall decline of the rotation curve as the cluster evolves and loses stars. We show that all of our models share the same dependence of the remaining fraction of the initial rotation on the fraction of the initial mass lost. As the cluster evolves and loses part of its initial angular momentum, it becomes increasingly dominated by random motions, but even after several tens of relaxation times, and losing a significant fraction of its initial mass, a cluster can still be characterized by a non-negligible ratio of the rotational velocity to the velocity dispersion. This result is in qualitative agreement with the recently observed kinematical complexity that characterizes several Galactic globular clusters.
ERIC Educational Resources Information Center
Huang, Yifen
2010-01-01
Mixed-initiative clustering is a task where a user and a machine work collaboratively to analyze a large set of documents. We hypothesize that a user and a machine can both learn better clustering models through enriched communication and interactive learning from each other. The first contribution of this thesis is providing a framework of…
Imprints of feedback in young gasless clusters?
NASA Astrophysics Data System (ADS)
Parker, Richard J.; Dale, James E.
2013-06-01
We present the results of N-body simulations in which we take the masses, positions and velocities of sink particles from five pairs of hydrodynamical simulations of star formation by Dale et al. and evolve them for a further 10 Myr. We compare the dynamical evolution of star clusters that formed under the influence of mass-loss driven by photoionization feedback to the evolution of clusters that formed without feedback. We remove any remaining gas and follow the evolution of structure in the clusters (measured by the Q-parameter), half-mass radius, central density, surface density and the fraction of bound stars. There is little discernible difference in the evolution of clusters that formed with feedback compared to those that formed without. The only clear trend is that all clusters which form without feedback in the hydrodynamical simulations lose any initial structure over 10 Myr, whereas some of the clusters which form with feedback retain structure for the duration of the subsequent N-body simulation. This is due to lower initial densities (and hence longer relaxation times) in the clusters from Dale et al. which formed with feedback, which prevents dynamical mixing from erasing substructure. However, several other conditions (such as supervirial initial velocities) also preserve substructure, so at a given epoch one would require knowledge of the initial density and virial state of the cluster in order to determine whether star formation in a cluster has been strongly influenced by feedback.
Cortical activity patterns predict speech discrimination ability
Engineer, Crystal T; Perez, Claudia A; Chen, YeTing H; Carraway, Ryan S; Reed, Amanda C; Shetake, Jai A; Jakkamsetti, Vikram; Chang, Kevin Q; Kilgard, Michael P
2010-01-01
Neural activity in the cerebral cortex can explain many aspects of sensory perception. Extensive psychophysical and neurophysiological studies of visual motion and vibrotactile processing show that the firing rate of cortical neurons averaged across 50–500 ms is well correlated with discrimination ability. In this study, we tested the hypothesis that primary auditory cortex (A1) neurons use temporal precision on the order of 1–10 ms to represent speech sounds shifted into the rat hearing range. Neural discrimination was highly correlated with behavioral performance on 11 consonant-discrimination tasks when spike timing was preserved and was not correlated when spike timing was eliminated. This result suggests that spike timing contributes to the auditory cortex representation of consonant sounds. PMID:18425123
NASA Astrophysics Data System (ADS)
Long, Derle Ray
Coincidence theory states that when the components of harmony are in enhanced alignment, the sound will be more consonant to the human auditory system. An objective way to examine the components of harmony is to investigate the mathematical alignment of a particular sound or harmony. The study examined preference responses to excerpts tuned in just intonation, Pythagorean intonation, and equal temperament. Musical excerpts were presented in pairs, and study subjects simply picked the version from each pair that they perceived as the most consonant. Results of the study revealed an overall preference for equal temperament, in contradiction to coincidence theory. Several additional areas of research are suggested to further investigate the results of this study.
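The "mathematical alignment" at issue can be illustrated by comparing interval ratios across tunings. A just interval is a small whole-number frequency ratio, so the partials of its two tones coincide periodically; its equal-tempered counterpart is an irrational power of 2^(1/12), so they never line up exactly. A brief sketch (the choice of intervals is illustrative):

```python
import math

# Just-intonation intervals as whole-number frequency ratios.
just = {"major third": 5 / 4, "perfect fifth": 3 / 2, "octave": 2 / 1}
semitones = {"major third": 4, "perfect fifth": 7, "octave": 12}

for name, ratio in just.items():
    tempered = 2 ** (semitones[name] / 12)        # equal-tempered ratio
    cents = 1200 * math.log2(tempered / ratio)    # deviation from just
    print(f"{name:14s} just={ratio:.4f} equal={tempered:.4f} "
          f"deviation={cents:+.2f} cents")
```

The equal-tempered fifth is only about 2 cents narrow of 3:2, while the major third is nearly 14 cents wide of 5:4. Coincidence theory would therefore predict a preference for the just versions, which makes the study's overall preference for equal temperament a notable negative result.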
Deng, Xingjuan; Chen, Ji; Shuai, Jie
2009-08-01
To improve the efficiency of aphasia rehabilitation training, an artificial-intelligence scheduling function was added to the aphasia rehabilitation software, improving the software's performance. Taking into account the characteristics of aphasic patients' voices as well as the requirements of the artificial-intelligence scheduling function, the authors designed an endpoint detection algorithm. It determines reference endpoints, then extracts every word and establishes reasonable segmentation points between consonants and vowels using the reference endpoints. Experimental results show that the algorithm detects endpoints at a high accuracy rate. It is therefore applicable to endpoint detection in the speech of patients with aphasia.
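The abstract does not give the algorithm's details, but a minimal short-time-energy endpoint detector conveys the general technique. Everything below (frame length, threshold ratio, the toy signal) is an assumption for illustration, not the authors' algorithm.

```python
import numpy as np

def detect_endpoints(x, sr, frame_ms=20, energy_ratio=0.1):
    """Return (start, end) sample indices of the detected speech region,
    or None if no frame's energy exceeds the threshold."""
    frame = int(sr * frame_ms / 1000)
    n = len(x) // frame
    # Short-time energy per non-overlapping frame.
    energy = np.array([np.sum(x[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n)])
    active = np.flatnonzero(energy > energy_ratio * energy.max())
    if active.size == 0:
        return None
    return active[0] * frame, (active[-1] + 1) * frame

# A 440 Hz burst between samples 2000 and 4000 in 1 s of silence at 8 kHz.
sr = 8000
x = np.zeros(sr)
x[2000:4000] = np.sin(2 * np.pi * 440 * np.arange(2000) / sr)
print(detect_endpoints(x, sr))  # frame-quantized bounds near (2000, 4000)
```

A production detector would add a zero-crossing-rate criterion to catch low-energy unvoiced consonants, which is presumably where the consonant/vowel segmentation the paper describes comes in.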
Stuttering may start with repeating consonants (k, g, t). If stuttering becomes worse, words and phrases are repeated. Later, vocal spasms develop. There is a forced, almost explosive sound to speech. The ...
Effects of gender on the production of emphasis in Jordanian Arabic: A sociophonetic study
NASA Astrophysics Data System (ADS)
Abudalbuh, Mujdey D.
Emphasis, or pharyngealization, is a distinctive phonetic phenomenon and a phonemic feature of Semitic languages such as Arabic and Hebrew. The goal of this study is to investigate the effect of gender on the production of emphasis in Jordanian Arabic as manifested on the consonants themselves as well as on the adjacent vowels. To this end, 22 speakers of Jordanian Arabic, 12 males and 10 females, participated in a production experiment where they produced monosyllabic minimal CVC pairs contrasted on the basis of the presence of a word-initial plain or emphatic consonant. Several acoustic parameters were measured including Voice Onset Time (VOT), friction duration, the spectral mean of the friction noise, vowel duration and the formant frequencies (F1-F3) of the target vowels. The results of this study indicated that VOT is a reliable acoustic correlate of emphasis in Jordanian Arabic only for voiceless stops whose emphatic VOT was significantly shorter than their plain VOT. Also, emphatic fricatives were shorter than plain fricatives. Emphatic vowels were found to be longer than plain vowels. Overall, the results showed that emphatic vowels were characterized by a raised F1 at the onset and midpoint of the vowel, lowered F2 throughout the vowel, and raised F3 at the onset and offset of the vowel relative to the corresponding values of the plain vowels. Finally, results using Nearey's (1978) normalization algorithm indicated that emphasis was more acoustically evident in the speech of males than in the speech of females in terms of the F-pattern. The results are discussed from a sociolinguistic perspective in light of the previous literature and the notion of linguistic feminism.
Bullock-Rest, Natasha; Cerny, Alissa; Sweeney, Carol; Palumbo, Carole; Kurowski, Kathleen; Blumstein, Sheila E
2013-08-01
Previous behavioral work has shown that the phonetic realization of words in spoken word production is influenced by sound shape properties of the lexicon. A recent fMRI study (Peramunage, Blumstein, Myers, Goldrick, & Baese-Berk, 2011) showed that this influence of lexical structure on phonetic implementation recruited a network of areas that included the supramarginal gyrus (SMG) extending into the posterior superior temporal gyrus (pSTG) and the inferior frontal gyrus (IFG). The current study examined whether lesions in these areas result in a concomitant functional deficit. Ten individuals with aphasia and 8 normal controls read words aloud in which half had a voiced stop consonant minimal pair (e.g. tame; dame), and the other half did not (e.g. tooth; (*)dooth). Voice onset time (VOT) analysis of the initial voiceless stop consonant revealed that aphasic participants with lesions including the IFG and/or the SMG behaved as did normals, showing VOT lengthening effects for minimal pair words compared to non-minimal pair words. The failure to show a functional deficit in the production of VOT as a function of the lexical properties of a word with damage in the IFG or SMG suggests that fMRI findings do not always predict effects of lesions on behavioral deficits in aphasia. Nonetheless, the pattern of production errors made by the aphasic participants did reflect properties of the lexicon, supporting the view that the SMG and IFG are part of a lexical network involved in spoken word production. Copyright © 2013 Elsevier Inc. All rights reserved.
Reliability Measure of a Clinical Test: Appreciation of Music in Cochlear Implantees (AMICI)
Cheng, Min-Yu; Spitzer, Jaclyn B.; Shafiro, Valeriy; Sheft, Stanley; Mancuso, Dean
2014-01-01
Purpose The goals of this study were (1) to investigate the reliability of a clinical music perception test, Appreciation of Music in Cochlear Implantees (AMICI), and (2) to examine associations between the perception of music and speech. AMICI was developed as a clinical instrument for assessing music perception in persons with cochlear implants (CIs). The test consists of four subtests: (1) music versus environmental noise discrimination, (2) musical instrument identification (closed-set), (3) musical style identification (closed-set), and (4) identification of musical pieces (open-set). To be clinically useful, it is crucial for AMICI to demonstrate high test-retest reliability, so that CI users can be assessed and retested after changes in maps or programming strategies. Research Design Thirteen CI subjects were tested with AMICI at the initial visit and retested 10–14 days later. Two speech perception tests (consonant-nucleus-consonant [CNC] and Bamford-Kowal-Bench Speech-in-Noise [BKB-SIN]) were also administered. Data Analysis Test-retest reliability and equivalence of the test’s three forms were analyzed using paired t-tests and correlation coefficients, respectively. Correlation analysis was also conducted between results from the music and speech perception tests. Results Results showed no significant difference between test and retest (p > 0.05) with adequate power (0.9), as well as high correlations between the three forms (Forms A and B, r = 0.91; Forms A and C, r = 0.91; Forms B and C, r = 0.95). Correlation analysis showed a high correlation between AMICI and BKB-SIN (r = −0.71) and a moderate correlation between AMICI and CNC (r = 0.4). Conclusions The study showed AMICI is highly reliable for assessing music perception in CI users. PMID:24384082
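Test-retest agreement of the kind reported above rests on the Pearson correlation between first-visit and retest scores. A minimal NumPy sketch (the function name and data handling are illustrative; the study additionally used paired t-tests for mean differences, which are not shown):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two score vectors, e.g. AMICI
    scores at the initial visit (x) and at retest 10-14 days later (y).
    Illustrative sketch; real analyses should also check assumptions."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))
```

A perfectly reproducible test would give r = 1; values near 0.9, as for AMICI's alternate forms, indicate high reliability.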
Carlson, Matthew T
2018-04-01
Language-specific restrictions on sound sequences in words can lead to automatic perceptual repair of illicit sound sequences. For example, no Spanish words begin with /s/-consonant sequences ([#sC]), and where necessary, [#sC] is repaired by inserting an initial [e] (e.g., in foreign loanwords: esnob, from English snob). As a result, Spanish speakers tend to perceive an illusory [e] before [#sC] sequences. Interestingly, this perceptual illusion is weaker in early Spanish-English bilinguals, whose other language, English, allows [#sC]. The present study explored whether this apparent influence of English on Spanish is restricted to early bilinguals, whose early language experience includes a mixture of both languages, or whether later learning of second language (L2) English can also induce a weakening of the first language (L1) perceptual illusion. Two groups of late Spanish-English bilinguals, immersed in Spanish or English, were tested on the same Spanish AX (same-different) discrimination task used by Carlson et al. (2016), and their results were compared with those of the Spanish monolinguals from that study. Like early bilinguals, late bilinguals exhibited a reduced impact of perceptual prothesis on discrimination accuracy. Additionally, late bilinguals, particularly in English immersion, were slowest when responding against the Spanish perceptual illusion. Robust L1 perceptual illusions thus appear to be malleable in the face of later L2 learning. It is argued that these results are consonant with the need for late bilinguals to navigate alternative, conflicting representations of the same acoustic material, even in unilingual L1 speech perception tasks.
Computer program documentation: ISOCLS iterative self-organizing clustering program, program C094
NASA Technical Reports Server (NTRS)
Minter, R. T. (Principal Investigator)
1972-01-01
The author has identified the following significant results. This program implements an algorithm which, ideally, sorts a given set of multivariate data points into similar groups or clusters. The program is intended for use in the evaluation of multispectral scanner data; however, the algorithm could be used for other data types as well. The user may specify a set of initial estimated cluster means to begin the procedure, or he may begin with the assumption that all the data belongs to one cluster. The procedure is initialized by assigning each data point to the nearest (in absolute distance) cluster mean. If no initial cluster means were input, all of the data is assigned to cluster 1. The means and standard deviations are calculated for each cluster.
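The assignment/update pass described above can be sketched in NumPy. The function name `isocls_iteration` is hypothetical, and the real ISOCLS program adds split/merge heuristics and iteration control not shown here; this is only the core step of assigning points to the nearest mean (in absolute, i.e. L1, distance) and recomputing per-cluster means and standard deviations:

```python
import numpy as np

def isocls_iteration(data, means):
    """One assignment/update pass of an ISOCLS-style clustering step.

    data:  (n, d) array of multivariate data points
    means: (k, d) array of current cluster means
    Returns updated means, per-cluster standard deviations, and labels.
    """
    # Assign each point to the nearest cluster mean, using the absolute
    # (L1) distance described in the program documentation.
    dists = np.abs(data[:, None, :] - means[None, :, :]).sum(axis=2)
    labels = dists.argmin(axis=1)

    new_means = np.empty_like(means)
    stds = np.empty_like(means)
    for k in range(means.shape[0]):
        members = data[labels == k]
        if len(members) == 0:               # empty cluster: keep old mean
            new_means[k], stds[k] = means[k], 0.0
        else:
            new_means[k] = members.mean(axis=0)
            stds[k] = members.std(axis=0)
    return new_means, stds, labels
```

Iterating this step until labels stop changing yields the self-organizing behaviour the documentation describes.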
Testing the limits of long-distance learning: learning beyond a three-segment window.
Finley, Sara
2012-01-01
Traditional flat-structured bigram and trigram models of phonotactics are useful because they capture a large number of facts about phonological processes. Additionally, these models predict that local interactions should be easier to learn than long-distance ones, because long-distance dependencies are difficult to capture with these models. Linguists have observed long-distance phonotactic patterns in many languages and have proposed different kinds of models, including feature-based bigram and trigram models, as well as precedence models. Contrary to flat-structured bigram and trigram models, these alternatives capture unbounded dependencies because, at an abstract level of representation, the relevant elements are locally dependent, even if they are not adjacent at the observable level. Using an artificial grammar learning paradigm, we provide additional support for these alternative models of phonotactics. Participants in two experiments were exposed to a long-distance consonant-harmony pattern in which the first consonant of a five-syllable word was [s] or [∫] ("sh") and triggered a suffix that was either [-su] or [-∫u], depending on the sibilant quality of this first consonant. Participants learned this pattern despite the large distance between the trigger and the target, suggesting that long-distance phonological patterns are learned without specific reference to distance. Copyright © 2012 Cognitive Science Society, Inc.
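The unbounded dependency at issue can be made concrete with a toy suffix-selection rule: the first sibilant of the stem, however distant, determines the suffix. This is exactly what a precedence model captures and a flat trigram model cannot, since the trigger can lie outside any fixed window. The ASCII transcription ('S' for the postalveolar "sh" sibilant) and the function itself are illustrative, not the study's stimuli:

```python
def harmonic_suffix(stem):
    """Pick the suffix (-su vs -Su, i.e. [-su] vs [-SHu]) agreeing with
    the FIRST sibilant of the stem, no matter how many segments
    intervene. 's' = alveolar sibilant, 'S' = postalveolar sibilant
    (an ASCII convention assumed here for illustration)."""
    for seg in stem:
        if seg == "s":
            return "su"
        if seg == "S":
            return "Su"
    return "su"  # default when no sibilant triggers harmony
```

A bigram or trigram over surface segments would need the trigger within two or three positions of the suffix; here it may be four syllables away.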
Shimokura, Ryota; Akasaka, Sakie; Nishimura, Tadashi; Hosoi, Hiroshi; Matsui, Toshie
2017-02-01
Some Japanese monosyllables contain consonants that are not easily discernible for individuals with sensorineural hearing loss. However, the acoustic features that make these monosyllables difficult to discern have not been clearly identified. Here, this study used the autocorrelation function (ACF), which can capture temporal features of signals, to clarify the factors influencing speech intelligibility. For each monosyllable, five factors extracted from the ACF [Φ(0): total energy; τ1 and φ1: delay time and amplitude of the maximum peak; τe: effective duration; Wφ(0): spectral centroid], voice onset time, speech intelligibility index, and loudness level were compared with the percentage of correctly perceived articulations (144 ears) obtained for 50 Japanese vowel and consonant-vowel monosyllables produced by one female speaker. Results showed that the median effective duration [(τe)med] was strongly correlated with the percentage of correctly perceived articulations of the consonants (r = 0.87, p < 0.01). (τe)med values were computed by running ACFs with the time lag at which the magnitude of the logarithmic-ACF envelope had decayed to -10 dB. Effective duration is a measure of temporal pattern persistence, i.e., the duration over which the waveform maintains a stable pattern. The authors postulate that low recognition ability is related to degraded perception of temporal fluctuation patterns.
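The effective-duration measure τe can be sketched as: normalize the ACF so φ(0) = 1, take a decaying envelope of its magnitude, and report the lag at which that envelope first falls to -10 dB. This is a simplified single-frame sketch; the authors used running (windowed) ACFs and an envelope fit whose details may differ:

```python
import numpy as np

def effective_duration(x, fs):
    """Estimate tau_e: the lag (seconds) at which the normalized ACF
    envelope decays to -10 dB. Sketch only, under the assumptions above."""
    x = np.asarray(x, float)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf = acf / acf[0]                                   # phi(0) = 1
    # Non-increasing envelope: max of |ACF| over all later lags.
    env = np.maximum.accumulate(np.abs(acf)[::-1])[::-1]
    below = np.where(10 * np.log10(env + 1e-12) <= -10)[0]
    if len(below) == 0:
        return len(x) / fs        # envelope never decays within the frame
    return below[0] / fs
```

White noise has almost no temporal pattern persistence (tiny τe), while a sustained tone keeps a stable waveform for most of the analysis frame (large τe), matching the interpretation in the abstract.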
Perceptual assessment of fricative--stop coarticulation.
Repp, B H; Mann, V A
1981-04-01
The perceptual dependence of stop consonants on preceding fricatives [Mann and Repp, J. Acoust. Soc. Am. 69, 548-558 (1981)] was further investigated in two experiments employing both natural and synthetic speech. These experiments consistently replicated our original finding that listeners report velar stops following [s]. In addition, our data confirmed earlier reports that natural fricative noises (excerpted from utterances of [st alpha], [sk alpha], [(formula: see text)k alpha]) contain cues to the following stop consonants; this was revealed in subjects' identifications of stops from isolated fricative noises and from stimuli consisting of these noises followed by synthetic CV portions drawn from a [t alpha]-[k alpha] continuum. However, these cues in the noise portion could not account for the contextual effect of fricative identity ([formula: see text] versus [s]) on stop perception (more "k" responses following [s]). Rather, this effect seems to be related to a coarticulatory influence of a preceding fricative on stop production: subjects' responses to excised natural CV portions (with bursts and aspiration removed) were biased towards a relatively more forward place of stop articulation when the CVs had originally been preceded by [s], and the identification of a preceding ambiguous fricative was biased in the direction of the original fricative context in which a given CV portion had been produced. These findings support an articulatory explanation for the effect of preceding fricatives on stop consonant perception.
Effects of stimulus response compatibility on covert imitation of vowels.
Adank, Patti; Nuttall, Helen; Bekkering, Harold; Maegherman, Gwijde
2018-03-13
When we observe someone else speaking, we tend to automatically activate the corresponding speech motor patterns. When listening, we therefore covertly imitate the observed speech. Simulation theories of speech perception propose that covert imitation of speech motor patterns supports speech perception. Covert imitation of speech has been studied with interference paradigms, including the stimulus-response compatibility paradigm (SRC). The SRC paradigm measures covert imitation by comparing articulation of a prompt following exposure to a distracter. Responses tend to be faster for congruent than for incongruent distracters; thus, showing evidence of covert imitation. Simulation accounts propose a key role for covert imitation in speech perception. However, covert imitation has thus far only been demonstrated for a select class of speech sounds, namely consonants, and it is unclear whether covert imitation extends to vowels. We aimed to demonstrate that covert imitation effects as measured with the SRC paradigm extend to vowels, in two experiments. We examined whether covert imitation occurs for vowels in a consonant-vowel-consonant context in visual, audio, and audiovisual modalities. We presented the prompt at four time points to examine how covert imitation varied over the distracter's duration. The results of both experiments clearly demonstrated covert imitation effects for vowels, thus supporting simulation theories of speech perception. Covert imitation was not affected by stimulus modality and was maximal for later time points.
Maïonchi-Pino, Norbert; de Cara, Bruno; Ecalle, Jean; Magnan, Annie
2012-04-01
In this study, the authors examined whether French-speaking children with dyslexia are sensitive to consonant sonority and position within syllable boundaries, such that these factors influence phonological syllable-based segmentation in silent reading. Participants included 15 French-speaking children with dyslexia, compared with 30 chronological age-matched and reading level-matched controls. Children were tested with an audiovisual recognition task. A target pseudoword (TOLPUDE) was simultaneously presented visually and auditorily and then was compared with a printed test pseudoword that either was identical or differed after the coda deletion (TOPUDE) or the onset deletion (TOLUDE). The intervocalic consonant sequences had either a sonorant coda-sonorant onset (TOR.LADE), sonorant coda-obstruent onset (TOL.PUDE), obstruent coda-sonorant onset (DOT.LIRE), or obstruent coda-obstruent onset (BIC.TADE) sonority profile. All children processed identity better than they processed deletion, especially with the optimal sonorant coda-obstruent onset sonority profile. However, children preserved syllabification (coda deletion; TO.PUDE) rather than resyllabification (onset deletion; TO.LUDE) with intervocalic consonant sequence reductions, especially when sonorant codas were deleted but the optimal intersyllable contact was respected. It was surprising to find that although children with dyslexia generally exhibit phonological and acoustic-phonetic impairments (voicing), they showed sensitivity to the optimal sonority profile and a preference for preserved syllabification. The authors proposed a sonority-modulated explanation to account for phonological syllable-based processing. Educational implications are discussed.
Wada, Junichiro; Hideshima, Masayuki; Inukai, Shusuke; Matsuura, Hiroshi; Wakabayashi, Noriyuki
2014-01-01
To investigate the effects of the width and cross-sectional shape of the major connectors of maxillary dentures located in the middle area of the palate on the accuracy of phonetic output of consonants using an originally developed speech recognition system. Nine adults (4 males and 5 females, aged 24-26 years) with sound dentition were recruited. The following six sounds were considered: [∫i], [t∫i], [ɾi], [ni], [çi], and [ki]. The experimental connectors were fabricated to simulate bars (narrow, 8-mm width) and plates (wide, 20-mm width). Two types of cross-sectional shapes in the sagittal plane were specified: flat and plump edge. The appearance ratio of phonetic segment labels was calculated with the speech recognition system to indicate the accuracy of phonetic output. Statistical analysis was conducted using one-way ANOVA and Tukey's test. The mean appearance ratio of correct labels (MARC) significantly decreased for [ni] with the plump edge (narrow connector) and for [ki] with both the flat and plump edge (wide connectors). For [çi], the MARCs tended to be lower with flat plates. There were no significant differences for the other consonants. The width and cross-sectional shape of the connectors had limited effects on the articulation of consonants at the palate. © 2015 S. Karger AG, Basel.
NASA Astrophysics Data System (ADS)
Bekki, Kenji
2017-05-01
Most old globular clusters (GCs) in the Galaxy are observed to have internal chemical abundance spreads in light elements. We discuss a new GC formation scenario based on hierarchical star formation within fractal molecular clouds. In the new scenario, a cluster of bound and unbound star clusters ('star cluster complex', SCC) that have a power-law cluster mass function with a slope (β) of 2 is first formed from a massive gas clump developed in a dwarf galaxy. Such cluster complexes and β = 2 are observed and expected from hierarchical star formation. The most massive star cluster ('main cluster'), which is the progenitor of a GC, can accrete gas ejected from asymptotic giant branch (AGB) stars initially in the cluster and other low-mass clusters before the clusters are tidally stripped or destroyed to become field stars in the dwarf. The SCC is initially embedded in a giant gas hole created by numerous supernovae of the SCC so that cold gas outside the hole can be accreted on to the main cluster later. New stars formed from the accreted gas have chemical abundances that are different from those of the original SCC. Using hydrodynamical simulations of GC formation based on this scenario, we show that the main cluster with an initial mass as large as [2-5] × 10^5 M⊙ can accrete more than 10^5 M⊙ of gas from AGB stars of the SCC. We suggest that merging of hierarchical SCCs can play key roles in stellar halo formation around GCs and self-enrichment processes in the early phase of GC formation.
ERIC Educational Resources Information Center
Spielberger, Julie; Baker, Stephen; Winje, Carolyn; Mayers, Leifa
2009-01-01
Chapin Hall has been conducting an implementation and evaluability study of the ECCI (Early Childhood Cluster Initiative) project since the midway point of its first year. As described in the authors' first report (Spielberger & Goyette, 2006), the initiative made considerable progress in its initial year, particularly in implementing the…
Global survey of star clusters in the Milky Way. VI. Age distribution and cluster formation history
NASA Astrophysics Data System (ADS)
Piskunov, A. E.; Just, A.; Kharchenko, N. V.; Berczik, P.; Scholz, R.-D.; Reffert, S.; Yen, S. X.
2018-06-01
Context. The all-sky Milky Way Star Clusters (MWSC) survey provides uniform and precise ages, along with other relevant parameters, for a wide variety of clusters in the extended solar neighbourhood. Aims: In this study we aim to construct the cluster age distribution, investigate its spatial variations, and discuss constraints on cluster formation scenarios of the Galactic disk during the last 5 Gyrs. Methods: Due to the spatial extent of the MWSC, we have considered spatial variations of the age distribution along galactocentric radius RG, and along Z-axis. For the analysis of the age distribution we used 2242 clusters, which all lie within roughly 2.5 kpc of the Sun. To connect the observed age distribution to the cluster formation history we built an analytical model based on simple assumptions on the cluster initial mass function and on the cluster mass-lifetime relation, fit it to the observations, and determined the parameters of the cluster formation law. Results: Comparison with the literature shows that earlier results strongly underestimated the number of evolved clusters with ages t ≳ 100 Myr. Recent studies based on all-sky catalogues agree better with our data, but still lack the oldest clusters with ages t ≳ 1 Gyr. We do not observe a strong variation in the age distribution along RG, though we find an enhanced fraction of older clusters (t > 1 Gyr) in the inner disk. In contrast, the distribution strongly varies along Z. The high altitude distribution practically does not contain clusters with t < 1 Gyr. With simple assumptions on the cluster formation history, the cluster initial mass function and the cluster lifetime we can reproduce the observations. The cluster formation rate and the cluster lifetime are strongly degenerate, which does not allow us to disentangle different formation scenarios. In all cases the cluster formation rate is strongly declining with time, and the cluster initial mass function is very shallow at the high mass end.
Internal Cluster Validation on Earthquake Data in the Province of Bengkulu
NASA Astrophysics Data System (ADS)
Rini, D. S.; Novianti, P.; Fransiska, H.
2018-04-01
The K-means method is an algorithm for clustering n objects into k partitions based on their attributes, where k < n. A deficiency of the algorithm is that the k initial points are chosen randomly before execution, so the resulting clustering can differ from run to run; if the random initialization is poor, the clustering is suboptimal. Cluster validation is a technique to determine the optimum number of clusters without prior information about the data. There are two types of cluster validation: internal cluster validation and external cluster validation. This study aims to examine and apply several internal cluster validation indices, including the Calinski-Harabasz (CH) index, Silhouette (S) index, Davies-Bouldin (DB) index, Dunn (D) index, and S-Dbw index, to earthquake data from Bengkulu Province. Based on internal cluster validation, the CH, S, and S-Dbw indices yield an optimum of k = 2 clusters, the DB index yields k = 6, and the D index yields k = 15. The optimum clustering (k = 6) based on the DB index gives good results for clustering earthquakes in Bengkulu Province.
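One of the internal indices named above, the Davies-Bouldin index, can be computed directly from the data and a labelling: for each cluster, take its average within-cluster scatter, pair it with its worst (most similar) neighbour, and average. Lower is better. This is a minimal NumPy sketch; library versions (e.g. scikit-learn's `davies_bouldin_score`) handle edge cases more carefully:

```python
import numpy as np

def davies_bouldin(data, labels):
    """Davies-Bouldin internal validation index (lower is better)."""
    ks = np.unique(labels)
    cents = np.array([data[labels == k].mean(axis=0) for k in ks])
    # Within-cluster scatter S_i: mean distance to the centroid.
    scatter = np.array([
        np.linalg.norm(data[labels == k] - cents[i], axis=1).mean()
        for i, k in enumerate(ks)
    ])
    db = 0.0
    for i in range(len(ks)):
        # Worst-case similarity ratio against every other cluster.
        ratios = [
            (scatter[i] + scatter[j]) / np.linalg.norm(cents[i] - cents[j])
            for j in range(len(ks)) if j != i
        ]
        db += max(ratios)
    return db / len(ks)
```

Choosing k is then a matter of evaluating the index over candidate partitions (k = 2, 3, ...) and picking the minimum, which is how the study arrives at k = 6 for the DB index.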
Cognitive dissonance reduction as constraint satisfaction.
Shultz, T R; Lepper, M R
1996-04-01
A constraint satisfaction neural network model (the consonance model) simulated data from the two major cognitive dissonance paradigms of insufficient justification and free choice. In several cases, the model fit the human data better than did cognitive dissonance theory. Superior fits were due to the inclusion of constraints that were not part of dissonance theory and to the increased precision inherent to this computational approach. Predictions generated by the model for a free choice between undesirable alternatives were confirmed in a new psychological experiment. The success of the consonance model underscores important, unforeseen similarities between what had been formerly regarded as the rather exotic process of dissonance reduction and a variety of other, more mundane psychological processes. Many of these processes can be understood as the progressive application of constraints supplied by beliefs and attitudes.
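Constraint-satisfaction settling of the kind the consonance model uses can be sketched as units nudging their activations toward the net input from consonant (positive-weight) and dissonant (negative-weight) links, until the network maximizes total satisfied constraint. The weights, update rule, and parameter values below are illustrative, not the published model's:

```python
import numpy as np

def settle(w, a, rate=0.1, steps=200, lo=-1.0, hi=1.0):
    """Iteratively adjust activations a toward the net input w @ a,
    clipped to [lo, hi]. Converges to a state of high consonance for
    symmetric weight matrices (a sketch, not the published parameters)."""
    for _ in range(steps):
        a = np.clip(a + rate * (w @ a), lo, hi)
    return a

def consonance(w, a):
    """Total satisfied constraint: sum of w_ij * a_i * a_j over pairs."""
    return a @ w @ a / 2.0
```

Two units joined by a positive (consonant) link, for example, settle into agreement, mirroring how mutually supportive cognitions stabilize each other.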
A model of the evaporation of binary-fuel clusters of drops
NASA Technical Reports Server (NTRS)
Harstad, K.; Bellan, J.
1991-01-01
A formulation has been developed to describe the evaporation of dense or dilute clusters of binary-fuel drops. The binary fuel is assumed to be made of a solute and a solvent whose volatility is much lower than that of the solute. Convective flow effects, inducing a circulatory motion inside the drops, are taken into account, as well as turbulence external to the cluster volume. Results obtained with this model show that, similar to the conclusions for single isolated drops, the evaporation of the volatile is controlled by liquid mass diffusion when the cluster is dilute. In contrast, when the cluster is dense, the evaporation of the volatile is controlled by surface layer stripping, that is, by the regression rate of the drop, which is in fact controlled by the evaporation rate of the solvent. These conclusions are in agreement with existing experimental observations. Parametric studies show that these conclusions remain valid with changes in ambient temperature, initial slip velocity between drops and gas, initial drop size, initial cluster size, initial liquid mass fraction of the solute, and various combinations of solvent and solute. The implications of these results for computationally intensive combustor calculations are discussed.
Early dynamical evolution of young substructured clusters
NASA Astrophysics Data System (ADS)
Dorval, Julien; Boily, Christian
2017-03-01
Stellar clusters form with a high level of substructure, inherited from the molecular cloud and the star formation process. Evidence from observations and simulations also indicates that the stars in such young clusters form a subvirial system. The subsequent dynamical evolution can cause important mass loss, ejecting a large part of the birth population into the field. It can also imprint the stellar population and still be inferred from observations of evolved clusters. N-body simulations allow a better understanding of these early twists and turns, given realistic initial conditions. Nowadays, substructured, clumpy young clusters are usually obtained through pseudo-fractal growth and velocity inheritance. We introduce a new way to create clumpy initial conditions through a ''Hubble expansion'' which naturally produces clumps that are self-consistent velocity-wise. In-depth analysis of the resulting clumps shows consistency with hydrodynamical simulations of young star clusters. We use these initial conditions to investigate the dynamical evolution of young subvirial clusters. We find the collapse to be soft, with hierarchical merging leading to a high level of mass segregation. The subsequent evolution is less pronounced than the equilibrium achieved from a cold collapse formation scenario.
Impact of a star formation efficiency profile on the evolution of open clusters
NASA Astrophysics Data System (ADS)
Shukirgaliyev, B.; Parmentier, G.; Berczik, P.; Just, A.
2017-09-01
Aims: We study the effect of the instantaneous expulsion of residual star-forming gas on star clusters in which the residual gas has a density profile that is shallower than that of the embedded cluster. This configuration is expected if star formation proceeds with a given star-formation efficiency per free-fall time in a centrally concentrated molecular gas clump. Methods: We performed direct N-body simulations whose initial conditions were generated by the program "mkhalo" from the package "falcON", adapted for our models. Our model clusters initially had a Plummer profile and are in virial equilibrium with the gravitational potential of the cluster-forming clump. The residual gas contribution was computed based on a local-density driven clustered star formation model. Our simulations included mass loss by stellar evolution and the tidal field of a host galaxy. Results: We find that a star cluster with a minimum global star formation efficiency (SFE) of 15 percent is able to survive instantaneous gas expulsion and to produce a bound cluster. Its violent relaxation lasts no longer than 20 Myr, independently of its global SFE and initial stellar mass. At the end of violent relaxation, the bound fractions of the surviving clusters with the same global SFEs are similar, regardless of their initial stellar mass. Their subsequent lifetime in the gravitational field of the Galaxy depends on their bound stellar masses. Conclusions: We therefore conclude that the critical SFE needed to produce a bound cluster is 15 percent, which is roughly half the earlier estimates of 33 percent. Thus we have improved the survival likelihood of young clusters after instantaneous gas expulsion. Young clusters can now survive instantaneous gas expulsion with global SFEs as low as the SFEs observed for embedded clusters in the solar neighborhood (15-30 percent). The reason is that the star cluster density profile is steeper than that of the residual gas.
However, in terms of the effective SFE, measured by the virial ratio of the cluster at gas expulsion, our results are in agreement with previous studies.
Spatial pattern recognition of seismic events in South West Colombia
NASA Astrophysics Data System (ADS)
Benítez, Hernán D.; Flórez, Juan F.; Duque, Diana P.; Benavides, Alberto; Lucía Baquero, Olga; Quintero, Jiber
2013-09-01
Recognition of seismogenic zones in geographical regions supports seismic hazard studies. This recognition is usually based on visual, qualitative and subjective analysis of data. Spatial pattern recognition provides a well-founded means to obtain relevant information from large amounts of data. The purpose of this work is to identify and classify spatial patterns in instrumental data of the South West Colombian seismic database. In this research, clustering tendency analysis validates whether the seismic database possesses a clustering structure. A non-supervised fuzzy clustering algorithm creates groups of seismic events. Given the sensitivity of fuzzy clustering algorithms to initial centroid positions, we propose a methodology to initialize centroids that generates partitions stable with respect to centroid initialization. As a result of this work, a public software tool provides the user with the routines developed for the clustering methodology. The analysis of the seismogenic zones obtained reveals meaningful spatial patterns in South-West Colombia. The clustering analysis provides a quantitative location and dispersion of seismogenic zones that facilitates seismological interpretation of seismic activity in South West Colombia.
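The initialization sensitivity the abstract describes can be seen in a minimal fuzzy c-means implementation: the membership and centroid updates are deterministic, so the final partition depends only on the data and the initial centroids. Passing explicit seed centroids, as below, is what makes the partition reproducible; the fuzzifier m and stopping rule here are conventional defaults, not necessarily the authors' choices:

```python
import numpy as np

def fuzzy_cmeans(data, init_centroids, m=2.0, iters=100, eps=1e-9):
    """Minimal fuzzy c-means with explicit centroid seeding (a sketch).

    Alternates the standard membership update u_ik ~ 1/d_ik^(2/(m-1))
    and the fuzzy-weighted centroid update. Returns centroids and the
    (n, k) membership matrix."""
    data = np.asarray(data, float)
    c = np.asarray(init_centroids, float)
    for _ in range(iters):
        d = np.linalg.norm(data[:, None, :] - c[None, :, :], axis=2) + eps
        u = 1.0 / d ** (2.0 / (m - 1.0))       # unnormalised memberships
        u = u / u.sum(axis=1, keepdims=True)
        um = u ** m
        c = (um.T @ data) / um.sum(axis=0)[:, None]
    return c, u
```

Running this twice with the same seed centroids always reproduces the same seismogenic zones, which is precisely the stability property the proposed initialization methodology targets.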
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitehead, Alfred J.; McMillan, Stephen L. W.; Vesperini, Enrico
2013-12-01
We perform a series of simulations of evolving star clusters using the Astrophysical Multipurpose Software Environment (AMUSE), a new community-based multi-physics simulation package, and compare our results to existing work. These simulations model a star cluster beginning with a King model distribution and a selection of power-law initial mass functions and contain a tidal cutoff. They are evolved using collisional stellar dynamics and include mass loss due to stellar evolution. After confirming that the differences between AMUSE results and results from previous studies are understood, we explored the variation in cluster lifetimes due to the random realization noise introduced by transforming a King model to specific initial conditions. This random realization noise can affect the lifetime of a simulated star cluster by up to 30%. Two modes of star cluster dissolution were identified: a mass evolution curve that contains a runaway cluster dissolution with a sudden loss of mass, and a dissolution mode that does not contain this feature. We refer to these dissolution modes as 'dynamical' and 'relaxation' dominated, respectively. For Salpeter-like initial mass functions, we determined the boundary between these two modes in terms of the dynamical and relaxation timescales.
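The "random realization" step, drawing a specific set of stars from a power-law initial mass function, can be sketched by inverse-transform sampling. The mass limits below are illustrative, not those of the cited runs; alpha = 2.35 is the Salpeter slope:

```python
import numpy as np

def sample_powerlaw_imf(n, alpha=2.35, m_min=0.1, m_max=100.0, rng=None):
    """Draw n stellar masses from a power-law IMF dN/dm ~ m^-alpha by
    inverting the cumulative distribution (sketch with assumed limits)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random(n)
    a1 = 1.0 - alpha                    # exponent of the integrated CDF
    lo, hi = m_min ** a1, m_max ** a1
    return (lo + u * (hi - lo)) ** (1.0 / a1)
```

Two realizations with different random seeds give different total masses and different numbers of massive stars, which is the realization noise found to shift simulated cluster lifetimes by up to 30%.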
The early dynamical evolution of star clusters near the Galactic Centre
NASA Astrophysics Data System (ADS)
Park, So-Myoung; Goodwin, Simon P.; Kim, Sungsoo S.
2018-07-01
We examine the dynamical evolution of both Plummer sphere and substructured (fractal) star-forming regions in Galactic Centre (GC) strong tidal fields to see what initial conditions could give rise to an Arches-like massive star cluster by ˜2 Myr. We find that any initial distribution has to be contained within its initial tidal radius to survive, which sets a lower limit on the initial density of the Arches of ˜600 M⊙ pc^-3 if the Arches is at 30 pc from the GC, or ˜200 M⊙ pc^-3 if the Arches is at 100 pc from the GC. Plummer spheres that survive change little other than to dynamically mass segregate, but initially fractal distributions rapidly erase substructure, dynamically mass segregate, and by 2 Myr look extremely similar to initial Plummer spheres; it is therefore almost impossible to determine the initial conditions of clusters in strong tidal fields.
ERIC Educational Resources Information Center
Gessman, Albert M.
1990-01-01
Discusses phonic shifting or sound shifts through an examination of Grimm's Law, or the Germanic Consonant Shift. The discussion includes comments on why the phonic shift developed and its pattern. (10 references) (GLR)
Clusters of circulating tumor cells traverse capillary-sized vessels
Au, Sam H.; Storey, Brian D.; Moore, John C.; Tang, Qin; Chen, Yeng-Long; Javaid, Sarah; Sarioglu, A. Fatih; Sullivan, Ryan; Madden, Marissa W.; O’Keefe, Ryan; Haber, Daniel A.; Maheswaran, Shyamala; Langenau, David M.; Stott, Shannon L.; Toner, Mehmet
2016-01-01
Multicellular aggregates of circulating tumor cells (CTC clusters) are potent initiators of distant organ metastasis. However, it is currently assumed that CTC clusters are too large to pass through narrow vessels to reach these organs. Here, we present evidence that challenges this assumption through the use of microfluidic devices designed to mimic human capillary constrictions and CTC clusters obtained from patient and cancer cell origins. Over 90% of clusters containing up to 20 cells successfully traversed 5- to 10-μm constrictions even in whole blood. Clusters rapidly and reversibly reorganized into single-file chain-like geometries that substantially reduced their hydrodynamic resistances. Xenotransplantation of human CTC clusters into zebrafish showed similar reorganization and transit through capillary-sized vessels in vivo. Preliminary experiments demonstrated that clusters could be disrupted during transit using drugs that affected cellular interaction energies. These findings suggest that CTC clusters may play a greater role in tumor dissemination than previously believed and may point to strategies for combating CTC cluster-initiated metastasis. PMID:27091969
Acquisition of /s/-Clusters in Dutch-Speaking Children with Phonological Disorders
ERIC Educational Resources Information Center
Gerrits, Ellen
2010-01-01
This study investigated the acquisition of word-initial /s/-clusters by 3-5-year-old Dutch children with phonological disorders. Within these clusters, /sl/ was produced correctly most often, whereas /sn/ and /sx/ were the more difficult clusters. In cluster reductions, the /s/+obstruent and /sl/ cluster reduction patterns followed the Sonority Sequencing…
Norman, I D; Aikins, M; Binka, F N
2011-12-01
Hospitals and other health facilities in Ghana do not appear to have standardized practices for quarantine and isolation in public health emergency management. This paper reviews the legislative framework governing the medico-legal prerequisites for initiating quarantine and isolation procedures as articulated in the Infectious Disease Act (Cap 78) 1908, amended 1935; the Quarantine Act (Cap 77) 1915, amended 1938; the Emergency Powers Act of 1994 (Act 472); and the National Disaster Management Act, 1996 (Act 517), in consonance with the 1992 Constitution of Ghana. The findings show that (1) the legislative framework outlines systematic standards and protocols to be followed in committing a person or persons to quarantine and isolation during public health emergencies; (2) these standards and protocols regard as imperative the creation of standardized national templates for the initiation of quarantine and isolation measures; and (3) non-compliance with the standards and protocols leaves medical facilities and hospitals, along with their personnel, vulnerable to the threat of medical malpractice suits and breaches of professional ethics. This paper offers suggestions to hospital administrators and medical personnel on how to develop administrative templates in compliance with the law when managing public health emergencies. It also provides examples of such templates for possible adoption by hospitals and other health administrators.
Allen, J S; Miller, J L
1999-10-01
Two speech production experiments tested the validity of the traditional method of creating voice-onset-time (VOT) continua for perceptual studies in which the systematic increase in VOT across the continuum is accompanied by a concomitant decrease in the duration of the following vowel. In experiment 1, segmental durations were measured for matched monosyllabic words beginning with either a voiced stop (e.g., big, duck, gap) or a voiceless stop (e.g., pig, tuck, cap). Results from four talkers showed that the change from voiced to voiceless stop produced not only an increase in VOT, but also a decrease in vowel duration. However, the decrease in vowel duration was consistently less than the increase in VOT. In experiment 2, results from four new talkers replicated these findings at two rates of speech, as well as highlighted the contrasting temporal effects on vowel duration of an increase in VOT due to a change in syllable-initial voicing versus a change in speaking rate. It was concluded that the traditional method of creating VOT continua for perceptual experiments, although not perfect, approximates natural speech by capturing the basic trade-off between VOT and vowel duration in syllable-initial voiced versus voiceless stop consonants.
Can a model of overlapping gestures account for scanning speech patterns?
Tjaden, K
1999-06-01
A simple acoustic model of overlapping, sliding gestures was used to evaluate whether coproduction was reduced for neurologic speakers with scanning speech patterns. F2 onset frequency was used as an acoustic measure of coproduction or gesture overlap. The effects of speaking rate (habitual versus fast) and utterance position (initial versus medial) on F2 frequency, and presumably gesture overlap, were examined. Regression analyses also were used to evaluate the extent to which across-repetition temporal variability in F2 trajectories could be explained as variation in coproduction for consonants and vowels. The lower F2 onset frequencies for disordered speakers suggested that gesture overlap was reduced for neurologic individuals with scanning speech. Speaking rate change did not influence F2 onset frequencies, and presumably gesture overlap, for healthy or disordered speakers. F2 onset frequency differences for utterance-initial and -medial repetitions were interpreted to suggest reduced coproduction for the utterance-initial position. The utterance-position effects on F2 onset frequency, however, likely were complicated by position-related differences in articulatory scaling. The results of the regression analysis indicated that gesture sliding accounts, in part, for temporal variability in F2 trajectories. Taken together, the results of this study provide support for the idea that speech production theory for healthy talkers helps to account for disordered speech production.
A Weight-Adaptive Laplacian Embedding for Graph-Based Clustering.
Cheng, De; Nie, Feiping; Sun, Jiande; Gong, Yihong
2017-07-01
Graph-based clustering methods perform clustering on a fixed input data graph, so the clustering results are sensitive to the particular graph construction. If this initial construction is of low quality, the resulting clustering may also be of low quality. We address this drawback by allowing the data graph itself to be adaptively adjusted during the clustering procedure. In particular, our proposed weight-adaptive Laplacian (WAL) method learns a new data similarity matrix that can adaptively adjust the initial graph according to the similarity weights in the input data graph. We develop three versions of the method, based on the L2-norm, a fuzzy-entropy regularizer, and an exponential-based weight strategy, yielding three new graph-based clustering objectives, and derive optimization algorithms to solve these objectives. Experimental results on synthetic data sets and real-world benchmark data sets exhibit the effectiveness of these new graph-based clustering methods.
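For context, the fixed-graph pipeline that WAL generalizes can be sketched in a few lines. The function name, the Gaussian affinity, and all parameters below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.linalg import eigh
from scipy.cluster.vq import kmeans2

def spectral_clusters(X, k, sigma=1.0):
    """Baseline fixed-graph spectral clustering (the setting WAL improves on).

    Builds a Gaussian similarity graph once, then clusters its Laplacian
    embedding; WAL would instead re-learn the graph weights iteratively.
    """
    # Pairwise squared distances -> Gaussian affinity matrix W
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Normalized graph Laplacian L = I - D^{-1/2} W D^{-1/2}
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(X)) - D_inv_sqrt @ W @ D_inv_sqrt
    # Embedding: eigenvectors of the k smallest eigenvalues of L
    _, vecs = eigh(L, subset_by_index=[0, k - 1])
    # Cluster the embedded points with k-means
    _, labels = kmeans2(vecs, k, minit="++", seed=0)
    return labels
```

A weight-adaptive variant would replace the fixed `W` with a similarity matrix re-estimated inside the clustering objective, which is the drawback the abstract addresses.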
Getting Ready for School: Palm Beach County's Early Childhood Cluster Initiative
ERIC Educational Resources Information Center
Spielberger, Julie; Baker, Stephen; Winje, Carolyn
2008-01-01
This publication reports findings from the second year of an implementation study of the Early Childhood Cluster Initiative (ECCI). ECCI is a prekindergarten program in ten elementary schools and a community child care center in Palm Beach County, based on the design of the High/Scope Perry Preschool model. The initiative is characterized by low…
Seelig, Amber D; Bensley, Kara M; Williams, Emily C; Armenta, Richard F; Rivera, Anna C; Peterson, Arthur V; Jacobson, Isabel G; Littman, Alyson J; Maynard, Charles; Bricker, Jonathan B; Rull, Rudolph P; Boyko, Edward J
2018-06-06
The aim of this study was to determine whether specific individual posttraumatic stress disorder (PTSD) symptoms or symptom clusters predict cigarette smoking initiation. Longitudinal data from the Millennium Cohort Study were used to estimate the relative risk for smoking initiation associated with PTSD symptoms among 2 groups: (1) all individuals who initially indicated they were nonsmokers (n = 44,968, main sample) and (2) a subset of the main sample who screened positive for PTSD (n = 1622). Participants were military service members who completed triennial comprehensive surveys that included assessments of smoking and PTSD symptoms. Complementary log-log models were fit to estimate the relative risk for subsequent smoking initiation associated with each of the 17 symptoms that comprise the PTSD Checklist and 5 symptom clusters. Models were adjusted for demographics, military factors, comorbid conditions, and other PTSD symptoms or clusters. In the main sample, no individual symptoms or clusters predicted smoking initiation. However, in the subset with PTSD, the symptoms "feeling irritable or having angry outbursts" (relative risk [RR] 1.41, 95% confidence interval [CI] 1.13-1.76) and "feeling as though your future will somehow be cut short" (RR 1.19, 95% CI 1.02-1.40) were associated with increased risk for subsequent smoking initiation. Certain PTSD symptoms were associated with higher risk for smoking initiation among current and former service members with PTSD. These results may help identify individuals who might benefit from more intensive smoking prevention efforts included with PTSD treatment.
Hemispatial neglect and serial order in verbal working memory.
Antoine, Sophie; Ranzini, Mariagrazia; van Dijck, Jean-Philippe; Slama, Hichem; Bonato, Mario; Tousch, Ann; Dewulf, Myrtille; Bier, Jean-Christophe; Gevers, Wim
2018-01-09
Working memory refers to our ability to actively maintain and process a limited amount of information during a brief period of time. Often, not only the information itself but also its serial order is crucial for good task performance. It was recently proposed that serial order is grounded in spatial cognition. Here, we compared the performance of a group of right hemisphere-damaged patients with hemispatial neglect to that of healthy controls in verbal working memory tasks. Participants memorized sequences of consonants at span level and had to judge either whether a target consonant belonged to the memorized sequence (item task) or whether a pair of consonants was presented in the same order as in the memorized sequence (order task). In line with the idea that serial order is grounded in spatial cognition, we found that neglect patients made significantly more errors in the order task than in the item task compared to healthy controls. Furthermore, this deficit seemed functionally related to neglect severity and was more frequently observed following right posterior brain damage. Interestingly, this specific impairment of serial order in verbal working memory was not lateralized. We advance the hypothesis that the serial-order deficit in neglect patients reflects either or both of (1) a reduced spatial working memory capacity for keeping track of the spatial codes that provide memorized items with a positional context, and (2) a spatial compression of these codes in the intact representational space. © 2018 The British Psychological Society.
Vatakis, Argiro; Maragos, Petros; Rodomagoulakis, Isidoros; Spence, Charles
2012-01-01
We investigated how the physical differences associated with the articulation of speech affect the temporal aspects of audiovisual speech perception. Video clips of consonants and vowels uttered by three different speakers were presented. The video clips were analyzed using an auditory-visual signal saliency model in order to compare signal saliency and behavioral data. Participants made temporal order judgments (TOJs) regarding which speech stream (auditory or visual) had been presented first. The sensitivity of participants' TOJs and the point of subjective simultaneity (PSS) were analyzed as a function of the place, manner of articulation, and voicing for consonants, and the height/backness of the tongue and lip-roundedness for vowels. We expected that in the case of the place of articulation and roundedness, where the visual speech signal is more salient, temporal perception of speech would be modulated by the visual speech signal. No such effect was expected for the manner of articulation or height. The results demonstrate that for place and manner of articulation, participants' temporal percept was affected (although not always significantly) by highly salient speech signals, with the visual signals requiring smaller visual leads at the PSS. This was not the case when height was evaluated. These findings suggest that in the case of audiovisual speech perception, a highly salient visual speech signal may lead to higher probabilities regarding the identity of the auditory signal that modulate the temporal window of multisensory integration of the speech stimulus. PMID:23060756
Context cue focality influences strategic prospective memory monitoring.
Hunter Ball, B; Bugg, Julie M
2018-02-12
Monitoring the environment for the occurrence of prospective memory (PM) targets is a resource-demanding process that produces cost (e.g., slower responding) to ongoing activities. However, research suggests that individuals are able to monitor strategically by using contextual cues to reduce monitoring in contexts in which PM targets are not expected to occur. In the current study, we investigated the processes supporting context identification (i.e., determining whether or not the context is appropriate for monitoring) by testing the context cue focality hypothesis. This hypothesis predicts that the ability to monitor strategically depends on whether the ongoing task orients attention to the contextual cues that are available to guide monitoring. In Experiment 1, participants performed an ongoing lexical decision task and were told that PM targets (TOR syllable) would only occur in word trials (focal context cue condition) or in items starting with consonants (nonfocal context cue condition). In Experiment 2, participants performed an ongoing first letter judgment (consonant/vowel) task and were told that PM targets would only occur in items starting with consonants (focal context cue condition) or in word trials (nonfocal context cue condition). Consistent with the context cue focality hypothesis, strategic monitoring was only observed during focal context cue conditions in which the type of ongoing task processing automatically oriented attention to the relevant features of the contextual cue. These findings suggest that strategic monitoring is dependent on limited-capacity processing resources and may be relatively limited when the attentional demands of context identification are sufficiently high.
Šonka, Karel; Šusta, Marek; Billiard, Michel
2015-02-01
The successive editions of the International Classification of Sleep Disorders (ICSD) reflect the evolution of the concepts of various sleep disorders. This is particularly the case for central disorders of hypersomnolence, with continuous changes in terminology and divisions of narcolepsy, idiopathic hypersomnia, and recurrent hypersomnia. According to the ICSD 2nd Edition (ICSD-2), narcolepsy with cataplexy (NwithC), narcolepsy without cataplexy (Nw/oC), idiopathic hypersomnia with long sleep time (IHwithLST), and idiopathic hypersomnia without long sleep time (IHw/oLST) are four, well-defined hypersomnias of central origin. However, in the absence of biological markers, doubts have been raised as to the relevance of a division of idiopathic hypersomnia into two forms, and it is not yet clear whether Nw/oC and IHw/oLST are two distinct entities. With this in mind, it was decided to empirically review the ICSD-2 classification by using a hierarchical cluster analysis to see whether this division has some relevance, even though the terms "with long sleep time" and "without long sleep time" are inappropriate. The cluster analysis differentiated three main clusters: Cluster 1, "combined monosymptomatic hypersomnia/narcolepsy type 2" (people initially diagnosed with IHw/oLST and Nw/oC); Cluster 2 "polysymptomatic hypersomnia" (people initially diagnosed with IHwithLST); and Cluster 3, narcolepsy type 1 (people initially diagnosed with NwithC). Cluster analysis confirmed that narcolepsy type 1 and polysymptomatic hypersomnia are independent sleep disorders. People who were initially diagnosed with Nw/oC and IHw/oLST formed a single cluster, referred to as "combined monosymptomatic hypersomnia/narcolepsy type 2." Copyright © 2014 Elsevier B.V. All rights reserved.
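A hierarchical cluster analysis of this kind can be sketched with scipy's agglomerative tools. The patient features below are fabricated stand-ins for the clinical variables, chosen only so that three groups emerge as in the study; the column interpretations are invented for illustration:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# Fabricated patient profiles: columns could stand for, e.g., sleep latency,
# nocturnal sleep time, and a cataplexy score -- illustrative only.
patients = np.vstack([
    rng.normal([2.0, 8.0, 0.0], 0.3, (10, 3)),   # "monosymptomatic"-like
    rng.normal([6.0, 11.0, 0.0], 0.3, (10, 3)),  # "polysymptomatic"-like
    rng.normal([1.0, 7.0, 5.0], 0.3, (10, 3)),   # narcolepsy-type-1-like
])
Z = linkage(patients, method="ward")             # agglomerative tree (Ward)
labels = fcluster(Z, t=3, criterion="maxclust")  # cut the tree into 3 clusters
```

Cutting the dendrogram with `criterion="maxclust"` yields the fixed number of clusters compared against the ICSD-2 diagnostic categories.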
NASA Astrophysics Data System (ADS)
Webb, Jeremy J.; Vesperini, Enrico
2017-01-01
We make use of N-body simulations to determine the relationship between two observable parameters that are used to quantify mass segregation and energy equipartition in star clusters. Mass segregation can be quantified by measuring how the slope of a cluster's stellar mass function α changes with clustercentric distance r, and then calculating δ_α = dα(r)/d ln(r/r_m), where r_m is the cluster's half-mass radius. The degree of energy equipartition in a cluster is quantified by η, which is a measure of how stellar velocity dispersion σ depends on stellar mass m via σ(m) ∝ m^-η. Through a suite of N-body star cluster simulations with a range of initial sizes, binary fractions, orbits, black hole retention fractions, and initial mass functions, we present the co-evolution of δ_α and η. We find that measurements of the global η are strongly affected by the radial dependence of σ and mean stellar mass, and the relationship between η and δ_α depends mainly on the cluster's initial conditions and the tidal field. Within r_m, where these effects are minimized, we find that η and δ_α initially share a linear relationship. However, once the degree of mass segregation increases such that the radial dependence of σ and mean stellar mass become a factor within r_m, or the cluster undergoes core collapse, the relationship breaks down. We propose a method for determining η within r_m from an observational measurement of δ_α. In cases where η and δ_α can be measured independently, this new method offers a way of measuring the cluster's dynamical state.
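The equipartition index can be estimated from a simulation snapshot by regressing log velocity dispersion on log mass. This is a minimal sketch under our own assumptions (an invented quantile binning scheme and synthetic data obeying σ(m) ∝ m^-η with η = 0.25), not the authors' pipeline:

```python
import numpy as np

def estimate_eta(masses, velocities, nbins=10):
    """Fit sigma(m) ∝ m^-eta by binning stars in mass and regressing
    log(velocity dispersion) on log(mean mass) per bin."""
    order = np.argsort(masses)
    m, v = masses[order], velocities[order]
    bins = np.array_split(np.arange(len(m)), nbins)  # equal-count mass bins
    log_m = np.log([m[b].mean() for b in bins])
    log_sigma = np.log([v[b].std() for b in bins])
    slope, _ = np.polyfit(log_m, log_sigma, 1)
    return -slope  # eta > 0: heavier stars have lower dispersion

# Synthetic cluster: log-uniform masses, 1-D velocities with sigma = m**-0.25
rng = np.random.default_rng(0)
m = 10.0 ** rng.uniform(-1, 1, 5000)
v = rng.normal(0.0, m ** -0.25)
eta = estimate_eta(m, v)
```

Restricting `masses` and `velocities` to stars inside r_m before calling the routine corresponds to the within-half-mass-radius measurement the abstract recommends.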
Word-initial rhotic clusters in typically developing children: European Portuguese.
Ramalho, Ana Margarida; Freitas, M João
2018-01-01
Rhotic clusters are complex structures segmentally and prosodically and are frequently one of the last structures acquired by Portuguese-speaking children. This paper describes cross-sectional data for word-initial (WI) rhotic tap clusters in typically developing 3-4- and 5-year-olds in Portugal. Additional information is provided on WI /l/ as a singleton and in clusters. A native speaker audio-recorded and transcribed single words in a story-telling task. Results for WI rhotic clusters show an age effect consistent with previous research on European Portuguese. Singleton /l/ was in advance of /l/-clusters as expected, but the tap clusters were in advance of the /l/-clusters, possibly reflecting the velarized characteristics of the lateral. The prosodic variables word stress and word length were relevant for the WI rhotic clusters: shorter words and stressed syllables showed higher accuracy. Finally, mismatches ('errors') mainly reflected negative structural constraints (deletion of C2 and epenthesis) rather than segmental constraints (substitutions).
Formation of new stellar populations from gas accreted by massive young star clusters.
Li, Chengyuan; de Grijs, Richard; Deng, Licai; Geller, Aaron M; Xin, Yu; Hu, Yi; Faucher-Giguère, Claude-André
2016-01-28
Stars in clusters are thought to form in a single burst from a common progenitor cloud of molecular gas. However, massive, old 'globular' clusters--those with ages greater than ten billion years and masses several hundred thousand times that of the Sun--often harbour multiple stellar populations, indicating that more than one star-forming event occurred during their lifetimes. Colliding stellar winds from late-stage, asymptotic-giant-branch stars are often suggested to be triggers of second-generation star formation. For this to occur, the initial cluster masses need to be greater than a few million solar masses. Here we report observations of three massive relatively young star clusters (1-2 billion years old) in the Magellanic Clouds that show clear evidence of burst-like star formation that occurred a few hundred million years after their initial formation era. We show that such clusters could have accreted sufficient gas to form new stars if they had orbited in their host galaxies' gaseous disks throughout the period between their initial formation and the more recent bursts of star formation. This process may eventually give rise to the ubiquitous multiple stellar populations in globular clusters.
Getting Ready for School: Palm Beach County's Early Childhood Cluster Initiative. Executive Summary
ERIC Educational Resources Information Center
Spielberger, Julie; Baker, Stephen; Winje, Carolyn
2008-01-01
This report summarizes findings from the second year of an implementation study of the Early Childhood Cluster Initiative (ECCI). ECCI is a prekindergarten program in ten elementary schools and a community child care center in Palm Beach County, based on the design of the High/Scope Perry Preschool model. The initiative is characterized by low…
Rastle, Kathleen; Croot, Karen P; Harrington, Jonathan M; Coltheart, Max
2005-10-01
The research described in this article had 2 aims: to permit greater precision in the conduct of naming experiments and to contribute to a characterization of the motor execution stage of speech production. The authors report an exhaustive inventory of consonantal and postconsonantal influences on delayed naming latency and onset acoustic duration, derived from a hand-labeled corpus of single-syllable consonant-vowel utterances. Five talkers produced 6 repetitions each of a set of 168 prepared monosyllables, a set that comprised each of the consonantal onsets of English in 3 vowel contexts. Strong and significant effects associated with phonetic characteristics of initial and noninitial phonemes were observed on both delayed naming latency and onset acoustic duration. Results are discussed in terms of the biomechanical properties of the articulatory system that may give rise to these effects and in terms of their methodological implications for naming experiments.
Motivation and appraisal in perception of poorly specified speech.
Lidestam, Björn; Beskow, Jonas
2006-04-01
Normal-hearing students (n = 72) performed sentence, consonant, and word identification in either the A (auditory), V (visual), or AV (audiovisual) modality. The auditory signal was presented at unfavorable speech-to-noise ratios. Talker (human vs. synthetic), topic (no cue vs. cue words), and emotion (no cue vs. facially displayed vs. cue words) were varied within groups. After the first block, the effects of modality, face, topic, and emotion on initial appraisal and motivation were assessed. After the entire session, the effects of modality on longer-term appraisal and motivation were assessed. The results from both assessments showed that V identification was more positively appraised than A identification. Correlations were tentatively interpreted to suggest that evaluation of self-rated performance may depend on a subjective standard and be reflected in motivation (if below the subjective standard; AV group) or in appraisal (if above the subjective standard; A group). Suggestions for further research are presented.
Food for Life/Comida para la Vida: Creating a Food Festival to Raise Diabetes Awareness
Lancaster, Kristie; Walker, Willie; Vance, Thomas; Kaskel, Phyllis; Arniella, Guedy; Horowitz, Carol
2012-01-01
African and Latino Americans have higher rates of diabetes and its complications than White Americans. Identifying people with undiagnosed diabetes and helping them obtain care can help to prevent complications and mortality. To kick off a screening initiative, our community-academic partnership created the “Food for Life Festival,” or “Festival Comida para la Vida.” This article will describe the community’s perspective on the Festival, which was designed to screen residents, and demonstrate that eating healthy can be fun, tasty, and affordable in a community-centered, culturally consonant setting. More than 1,000 residents attended the event; 382 adults were screened for diabetes, and 181 scored as high risk. Fifteen restaurants distributed free samples of healthy versions of their popular dishes. Community residents, restaurateurs, and clinicians commented that the event transformed many of their preconceived ideas about healthy foods and patient care. PMID:20097997
Adult perceptions of phonotactic violations in Japanese
NASA Astrophysics Data System (ADS)
Fais, Laurel; Kajikawa, Sachiyo; Werker, Janet; Amano, Shigeaki
2004-05-01
Adult Japanese speakers "hear" epenthetic vowels in productions of Japanese-like words that violate the canonical CVCVCV form by containing internal consonant clusters (CVCCV) [Dupoux et al., J. Exp. Psychol. 25, 1568-1578 (1999)]. Given this finding, this research examined how Japanese adults rated the goodness of Japanese-like words produced without a vowel in the final syllable (CVC), and words produced without vowels in the penultimate and final syllables (CVCC). Furthermore, in some of these contexts, voiceless vowels may appear in fluent, casual Japanese productions, especially in the Kanto dialect, and in some, such voiceless vowels may not appear. Results indicate that both Kanto and Kinki speakers rated CVC productions for contexts in which voiceless vowels are not allowed as the worst; they rated CVC and CVCC contexts in which voiceless vowel productions are allowed as better. In these latter contexts, the CVC words, which result from the loss of one (final) vowel, are judged to be better than the CVCC words, which result from the loss of two (final and penultimate) vowels. These results mirror the relative seriousness of the phonotactic violations and indicate listeners have tacit knowledge of these regularities in their language.
Enhanced Sensitivity to Subphonemic Segments in Dyslexia: A New Instance of Allophonic Perception
Serniclaes, Willy; Seck, M’ballo
2018-01-01
Although dyslexia can be individuated in many different ways, it has only three discernable sources: a visual deficit that affects the perception of letters, a phonological deficit that affects the perception of speech sounds, and an audio-visual deficit that disturbs the association of letters with speech sounds. However, the very nature of each of these core deficits remains debatable. The phonological deficit in dyslexia, which is generally attributed to a deficit of phonological awareness, might result from a specific mode of speech perception characterized by the use of allophonic (i.e., subphonemic) units. Here we will summarize the available evidence and present new data in support of the “allophonic theory” of dyslexia. Previous studies have shown that the dyslexia deficit in the categorical perception of phonemic features (e.g., the voicing contrast between /t/ and /d/) is due to the enhanced sensitivity to allophonic features (e.g., the difference between two variants of /d/). Another consequence of allophonic perception is that it should also give rise to an enhanced sensitivity to allophonic segments, such as those that take place within a consonant cluster. This latter prediction is validated by the data presented in this paper. PMID:29587419
[Development and equivalence evaluation of spondee lists of mandarin speech test materials].
Zhang, Hua; Wang, Shuo; Wang, Liang; Chen, Jing; Chen, Ai-ting; Guo, Lian-sheng; Zhao, Xiao-yan; Ji, Chen
2006-06-01
To edit spondee (disyllabic) word lists as part of the Mandarin speech test materials (MSTM), which will serve as basic speech materials for routine tests in clinics and laboratories. Two groups of professionals (audiologists, Chinese and Mandarin scientists, linguists, and a statistician) were first set up. The editing principles were established after three round-table meetings. Ten spondee lists, each with 50 words, were edited and recorded onto cassettes. All lists were phonemically balanced along three dimensions: vowels, consonants, and Chinese tones. Seventy-three college students with normal hearing were tested, with the speech presented monaurally by earphone. Three statistical methods were used for the equivalence analysis. Correlation analysis showed that all lists were highly correlated, except List 5. Cluster analysis showed that the ten lists could be classified into two groups, but the Kappa test showed that the lists' homogeneity was not good. Spondee lists are among the most routine speech test materials. Their editing, recording, and equivalence evaluation are affected by many factors and require multi-disciplinary cooperation. All lists edited in the present study need further modification in recording and testing before they can be used clinically and in research. Phonemic balance should be maintained.
Spasmodic Dysphonia: a Laryngeal Control Disorder Specific to Speech
Ludlow, Christy L.
2016-01-01
Spasmodic dysphonia (SD) is a rare neurological disorder that emerges in middle age, is usually sporadic, and affects intrinsic laryngeal muscle control only during speech. Spasmodic bursts in particular laryngeal muscles disrupt voluntary control during vowel sounds in adductor SD and interfere with voice onset after voiceless consonants in abductor SD. Little is known about its origins; it is classified as a focal dystonia secondary to an unknown neurobiological mechanism that produces a chronic abnormality of laryngeal motor neuron regulation during speech. It develops primarily in females and does not interfere with breathing, crying, laughter, and shouting. Recent postmortem studies have implicated the accumulation of clusters in the parenchyma and perivascular regions with inflammatory changes in the brainstem in one to two cases. A few cases with single mutations in THAP1, a gene involved in transcription regulation, suggest that a weak genetic predisposition may contribute to mechanisms causing a nonprogressive abnormality in laryngeal motor neuron control for speech but not for vocal emotional expression. Research is needed to address the basic cellular and proteomic mechanisms that produce this disorder to provide intervention that could target the pathogenesis of the disorder rather than only providing temporary symptom relief. PMID:21248101
Predictions interact with missing sensory evidence in semantic processing areas.
Scharinger, Mathias; Bendixen, Alexandra; Herrmann, Björn; Henry, Molly J; Mildner, Toralf; Obleser, Jonas
2016-02-01
Human brain function draws on predictive mechanisms that exploit higher-level context during lower-level perception. These mechanisms are particularly relevant for situations in which sensory information is compromised or incomplete, as for example in natural speech where speech segments may be omitted due to sluggish articulation. Here, we investigate which brain areas support the processing of incomplete words that were predictable from semantic context, compared with incomplete words that were unpredictable. During functional magnetic resonance imaging (fMRI), participants heard sentences that orthogonally varied in predictability (semantically predictable vs. unpredictable) and completeness (complete vs. incomplete, i.e. missing their final consonant cluster). The effects of predictability and completeness interacted in heteromodal semantic processing areas, including left angular gyrus and left precuneus, where activity did not differ between complete and incomplete words when they were predictable. The same regions showed stronger activity for incomplete than for complete words when they were unpredictable. The interaction pattern suggests that for highly predictable words, the speech signal does not need to be complete for neural processing in semantic processing areas. Hum Brain Mapp 37:704-716, 2016. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Yin, Gang; Zhang, Yingtang; Fan, Hongbo; Ren, Guoquan; Li, Zhining
2017-12-01
We have developed a method for automatically detecting UXO-like targets based on magnetic anomaly inversion and self-adaptive fuzzy c-means clustering. Magnetic anomaly inversion methods are used to estimate the initial locations of multiple UXO-like sources. Although these initial locations have some error with respect to the real positions, they form dense clouds around the actual positions of the magnetic sources. We then use the self-adaptive fuzzy c-means clustering algorithm to cluster these initial locations. The estimated number of cluster centroids represents the number of targets, and the cluster centroids are taken as the locations of the magnetic targets. The effectiveness of the method has been demonstrated using synthetic datasets. Computational results show that the proposed method can be applied to the case of several UXO-like targets randomly scattered within a confined, shallow subsurface volume. A field test was carried out to verify the validity of the proposed method, and the experimental results show that the prearranged magnets can be detected unambiguously and located precisely.
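A minimal sketch of plain fuzzy c-means clustering applied to two artificial clouds of estimated source locations. Note this omits the paper's self-adaptive estimation of the number of clusters (here c is fixed at 2), and all data and names are invented:

```python
import random

def fuzzy_c_means(points, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means for 2-D points: returns the c cluster centres."""
    rng = random.Random(seed)
    # Random initial memberships, normalized per point
    u = [[rng.random() for _ in range(c)] for _ in points]
    u = [[v / sum(row) for v in row] for row in u]
    centres = [(0.0, 0.0)] * c
    for _ in range(iters):
        # Update centres as membership-weighted means
        for j in range(c):
            w = [row[j] ** m for row in u]
            sw = sum(w)
            centres[j] = (sum(wi * p[0] for wi, p in zip(w, points)) / sw,
                          sum(wi * p[1] for wi, p in zip(w, points)) / sw)
        # Update memberships from distances to the centres
        for i, p in enumerate(points):
            d = [max(((p[0] - cx) ** 2 + (p[1] - cy) ** 2) ** 0.5, 1e-12)
                 for cx, cy in centres]
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / dk) ** (2.0 / (m - 1.0)) for dk in d)
    return centres

# Two artificial "estimated source location" clouds around (0, 0) and (10, 10)
rng = random.Random(1)
pts = [(rng.gauss(0, 0.5), rng.gauss(0, 0.5)) for _ in range(50)] + \
      [(rng.gauss(10, 0.5), rng.gauss(10, 0.5)) for _ in range(50)]
centres = fuzzy_c_means(pts, c=2)
print(sorted(round(x) for x, _ in centres))
```

The recovered centres sit near the two cloud centres, mirroring how the paper treats cluster centroids as target locations.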
Yang, Jing
2018-03-01
This study investigated the durational features of English word-initial /s/+stop clusters produced by bilingual Mandarin (L1)-English (L2) children and monolingual English children and adults. The participants included two groups of five- to six-year-old bilingual children: low proficiency in the L2 (Bi-low) and high proficiency in the L2 (Bi-high), one group of age-matched English children, and one group of English adults. Each participant produced a list of English words containing /sp, st, sk/ at the word-initial position followed by /a, i, u/, respectively. The absolute durations of the clusters and cluster elements and the durational proportions of elements to the overall cluster were measured. The results revealed that Bi-high children behaved similarly to the English monolinguals whereas Bi-low children used a different strategy of temporal organization to coordinate the cluster components in comparison to the English monolinguals and Bi-high children. The influence of language experience and continuing development of temporal features in children were discussed.
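The durational measures described above reduce to simple ratios; a toy illustration, with invented durations rather than the study's data:

```python
# Absolute segment durations and each element's proportion of the overall
# cluster duration, as in the measures described above. Numbers are invented.

def durational_proportions(segments):
    """segments: dict of element -> duration in ms; returns proportions."""
    total = sum(segments.values())
    return {seg: dur / total for seg, dur in segments.items()}

# e.g. a production of word-initial /sp/: 120 ms of /s/, 80 ms of /p/
props = durational_proportions({"s": 120.0, "p": 80.0})
print({k: round(v, 2) for k, v in props.items()})
```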
Li, Zheng; Vendrell, Oriol
2016-01-01
The ultrafast nuclear and electronic dynamics of protonated water clusters H+(H2O)n after extreme ultraviolet photoionization is investigated. In particular, we focus on cluster cations with n = 3, 6, and 21. Upon ionization, two positive charges are present in the cluster, related to the excess proton and the missing electron, respectively. A correlation is found between the cluster's geometrical conformation and initial electronic energy and the size of the final fragments produced. For situations in which the electron hole and proton are initially spatially close, the two entities become correlated and separate on a time-scale of 20 to 40 fs, driven by strong non-adiabatic effects. PMID:26798842
NASA Technical Reports Server (NTRS)
Charlton, Jane C.; Laguna, Pablo
1995-01-01
The globular clusters that we observe in galaxies may be only a fraction of the initial population. Among the evolutionary influences on the population is the destruction of globular clusters by tidal forces as each cluster moves through the field of influence of a disk, a bulge, and/or a putative nuclear component (black hole). We have conducted a series of N-body simulations of globular clusters on bound and marginally bound orbits through potentials that include black hole and spheroidal components. The degree of concentration of the spheroidal component can have a considerable impact on the extent to which a globular cluster is disrupted. If half the mass of a 10^10 solar mass spheroid is concentrated within 800 pc, then only black holes with masses greater than 10^9 solar masses can have a significant tidal influence over that already exerted by the bulge. However, if the matter in the spheroidal component is not so strongly concentrated toward the center of the galaxy, a more modest central black hole (down to 10^8 solar masses) could have a dominant influence on the globular cluster distribution, particularly if many of the clusters were initially on highly radial orbits. Our simulations show that the stars stripped from a globular cluster follow orbits with roughly the same eccentricity as the initial cluster orbit, spreading out along the orbit like a 'string of pearls.' Since only clusters on nearly radial orbits will suffer substantial disruption, the population of stripped stars will be on orbits of high eccentricity.
Schaffer, Jessica N; Norsworthy, Allison N; Sun, Tung-Tien; Pearson, Melanie M
2016-04-19
The catheter-associated uropathogen Proteus mirabilis frequently causes urinary stones, but little has been known about the initial stages of bladder colonization and stone formation. We found that P. mirabilis rapidly invades the bladder urothelium, but generally fails to establish an intracellular niche. Instead, it forms extracellular clusters in the bladder lumen, which form foci of mineral deposition consistent with development of urinary stones. These clusters elicit a robust neutrophil response, and we present evidence of neutrophil extracellular trap generation during experimental urinary tract infection. We identified two virulence factors required for cluster development: urease, which is required for urolithiasis, and mannose-resistant Proteus-like fimbriae. The extracellular cluster formation by P. mirabilis stands in direct contrast to uropathogenic Escherichia coli, which readily formed intracellular bacterial communities but not luminal clusters or urinary stones. We propose that extracellular clusters are a key mechanism of P. mirabilis survival and virulence in the bladder.
NASA Astrophysics Data System (ADS)
Capuzzo-Dolcetta, Roberto
1993-10-01
Among the possible phenomena inducing evolution of the globular cluster system in an elliptical galaxy, dynamical friction due to field stars and tidal disruption caused by a central nucleus are of crucial importance. The aim of this paper is to study the evolution of the globular cluster system in a triaxial galaxy in the presence of these phenomena. In particular, we examine the possibility that some galactic nuclei have been formed by frictionally decayed globular clusters moving in a triaxial potential. We find that the initial rapid growth of the nucleus, due mainly to massive clusters on box orbits falling in a short time scale into the galactic center, is later slowed by the tidal disruption that the nucleus itself induces on less massive clusters, in the manner described by Ostriker, Binney, and Saha. The efficiency of dynamical friction is sufficient to carry to the center of the galaxy enough globular cluster mass to form a compact nucleus, but the actual modes and outcomes of cluster-cluster encounters in the central potential well are complicated phenomena that remain to be investigated. The mass of the resulting nucleus is determined by the mutual feedback of the described processes, together with the initial spatial, velocity, and mass distributions of the globular cluster family. The effect on the system mass function is studied, showing the development of a low- and high-mass turnover even with an initially flat mass function. Moreover, this paper discusses the possibility that the fall of globular clusters to the galactic center was a cause of primordial violent galactic activity. An application of the model to M31 is presented.
Exponents of non-linear clustering in scale-free one-dimensional cosmological simulations
NASA Astrophysics Data System (ADS)
Benhaiem, David; Joyce, Michael; Sicard, François
2013-03-01
One-dimensional versions of dissipationless cosmological N-body simulations have been shown to share many qualitative behaviours of the three-dimensional problem. Their interest lies in the fact that they can resolve a much greater range of time and length scales, and admit exact numerical integration. We use such models here to study how non-linear clustering depends on initial conditions and cosmology. More specifically, we consider a family of models which, like the three-dimensional Einstein-de Sitter (EdS) model, lead for power-law initial conditions to self-similar clustering characterized in the strongly non-linear regime by power-law behaviour of the two-point correlation function. We study how the corresponding exponent γ depends on the initial conditions, characterized by the exponent n of the power spectrum of initial fluctuations, and on a single parameter κ controlling the rate of expansion. The space of initial conditions/cosmology divides very clearly into two parts: (1) a region in which γ depends strongly on both n and κ and where it agrees very well with a simple generalization of the so-called stable clustering hypothesis in three dimensions; and (2) a region in which γ is more or less independent of both the spectrum and the expansion of the universe. The boundary in (n, κ) space dividing the `stable clustering' region from the `universal' region is very well approximated by a `critical' value of the predicted stable clustering exponent itself. We explain how this division of the (n, κ) space can be understood as a simple physical criterion which might indeed be expected to control the validity of the stable clustering hypothesis. We compare and contrast our findings to results in three dimensions, and discuss in particular the light they may throw on the question of `universality' of non-linear clustering in this context.
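For context on the "stable clustering" prediction mentioned above: in the three-dimensional EdS case with an initial power-law spectrum P(k) ∝ k^n, self-similar evolution gives a power-law two-point correlation in the strongly non-linear regime, with the standard stable clustering exponent (the paper studies a one-dimensional generalization of this, parameterized additionally by the expansion-rate parameter κ; the exact 1D form is given there, not here):

```latex
\xi(x) \propto x^{-\gamma},
\qquad
\gamma_{\mathrm{3D,\ stable}} \;=\; \frac{3\,(3+n)}{5+n}
```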
ERIC Educational Resources Information Center
Spielberger, Julie; Goyette, Paul
2006-01-01
This report summarizes findings from the first year of an implementation study of the Early Childhood Cluster Initiative (ECCI). ECCI is a prekindergarten program in ten elementary schools and a community child care center in Palm Beach County, based on the design of the High/Scope Perry Preschool model. The initiative is characterized by low…
NASA Astrophysics Data System (ADS)
Hans, Andreas; Stumpf, Vasili; Holzapfel, Xaver; Wiegandt, Florian; Schmidt, Philipp; Ozga, Christian; Reiß, Philipp; Ben Ltaief, Ltaief; Küstner-Wetekam, Catmarna; Jahnke, Till; Ehresmann, Arno; Demekhin, Philipp V.; Gokhberg, Kirill; Knie, André
2018-01-01
We directly observe radiative charge transfer (RCT) in Ne clusters by dispersed vacuum-ultraviolet photon detection. The doubly ionized Ne2+-{{{N}}{{e}}}n-1 initial states of RCT are populated after resonant 1s-3p photoexcitation or 1s photoionization of Ne n clusters with < n> ≈ 2800. These states relax further producing Ne+-Ne+-{{{N}}{{e}}}n-2 final states, and the RCT photon is emitted. Ab initio calculations assign the observed RCT signal to the{}{{{N}}{{e}}}2+(2{{{p}}}-2{[}1{{D}}]){--}{{{N}}{{e}}}n-1 initial state, while transitions from other possible initial states are proposed to be quenched by competing relaxation processes. The present results are in agreement with the commonly discussed scenario, where the doubly ionized atom in a noble gas cluster forms a dimer which dissipates its vibrational energy on a picosecond timescale. Our study complements the picture of the RCT process in weakly bound clusters, providing information which is inaccessible by charged particle detection techniques.
Qian, Linping; Wang, Zhen; Beletskiy, Evgeny V.; ...
2017-03-28
Here, the ability of Au catalysts to effect the challenging task of utilizing molecular oxygen for the selective epoxidation of cyclooctene is fascinating. Although supported nanometre-size Au particles are poorly active, here we show that solubilized atomic Au clusters, present in ng ml⁻¹ concentrations and stabilized by ligands derived from the oxidized hydrocarbon products, are active. They can be formed from various Au sources. They generate initiators and propagators to trigger the onset of the auto-oxidation reaction with an apparent turnover frequency of 440 s⁻¹, and continue to generate additional initiators throughout the auto-oxidation cycle without direct participation in the cycle. Spectroscopic characterization suggests that 7-8 atom clusters are effective catalytically. Extension of work based on these understandings leads to the demonstration that these Au clusters are also effective in selective oxidation of cyclohexene, and that solubilized Pt clusters are also capable of generating initiators for cyclooctene epoxidation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheffield, Benjamin M; Schuchman, Gerald; Bernstein, Joshua G W
2015-01-01
As cochlear implant (CI) acceptance increases and candidacy criteria are expanded, these devices are increasingly recommended for individuals with less than profound hearing loss. As a result, many individuals who receive a CI also retain acoustic hearing, often in the low frequencies, in the nonimplanted ear (i.e., bimodal hearing) and in some cases in the implanted ear (i.e., hybrid hearing) which can enhance the performance achieved by the CI alone. However, guidelines for clinical decisions pertaining to cochlear implantation are largely based on expectations for postsurgical speech-reception performance with the CI alone in auditory-only conditions. A more comprehensive prediction of postimplant performance would include the expected effects of residual acoustic hearing and visual cues on speech understanding. An evaluation of auditory-visual performance might be particularly important because of the complementary interaction between the speech information relayed by visual cues and that contained in the low-frequency auditory signal. The goal of this study was to characterize the benefit provided by residual acoustic hearing to consonant identification under auditory-alone and auditory-visual conditions for CI users. Additional information regarding the expected role of residual hearing in overall communication performance by a CI listener could potentially lead to more informed decisions regarding cochlear implantation, particularly with respect to recommendations for or against bilateral implantation for an individual who is functioning bimodally. Eleven adults 23 to 75 years old with a unilateral CI and air-conduction thresholds in the nonimplanted ear equal to or better than 80 dB HL for at least one octave frequency between 250 and 1000 Hz participated in this study. Consonant identification was measured for conditions involving combinations of electric hearing (via the CI), acoustic hearing (via the nonimplanted ear), and speechreading (visual cues). 
The results suggest that the benefit to CI consonant-identification performance provided by the residual acoustic hearing is even greater when visual cues are also present. An analysis of consonant confusions suggests that this is because the voicing cues provided by the residual acoustic hearing are highly complementary with the mainly place-of-articulation cues provided by the visual stimulus. These findings highlight the need for a comprehensive prediction of trimodal (acoustic, electric, and visual) postimplant speech-reception performance to inform implantation decisions. The increased influence of residual acoustic hearing under auditory-visual conditions should be taken into account when considering surgical procedures or devices that are intended to preserve acoustic hearing in the implanted ear. This is particularly relevant when evaluating the candidacy of a current bimodal CI user for a second CI (i.e., bilateral implantation). Although recent developments in CI technology and surgical techniques have increased the likelihood of preserving residual acoustic hearing, preservation cannot be guaranteed in each individual case. Therefore, the potential gain to be derived from bilateral implantation needs to be weighed against the possible loss of the benefit provided by residual acoustic hearing.
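Consonant-confusion analyses of the kind mentioned above are commonly quantified with Miller-and-Nicely-style information-transfer measures. A hedged sketch (not the study's actual analysis; the confusion matrices are invented) computing the information transmitted about a feature such as voicing, relative to the stimulus entropy:

```python
import math

def relative_info_transmitted(confusions):
    """Mutual information between stimulus and response, relative to the
    stimulus entropy, from a confusion matrix of counts (rows = stimuli)."""
    n = sum(sum(row) for row in confusions)
    pi = [sum(row) / n for row in confusions]               # stimulus probs
    pj = [sum(row[j] for row in confusions) / n
          for j in range(len(confusions[0]))]               # response probs
    mi = 0.0
    for i, row in enumerate(confusions):
        for j, c in enumerate(row):
            if c:
                pij = c / n
                mi += pij * math.log2(pij / (pi[i] * pj[j]))
    hx = -sum(p * math.log2(p) for p in pi if p)
    return mi / hx

# Invented voicing confusions: row/column 0 = voiceless, 1 = voiced
perfect = [[50, 0], [0, 50]]    # voicing always identified correctly
chance  = [[25, 25], [25, 25]]  # responses unrelated to the stimulus
print(relative_info_transmitted(perfect), relative_info_transmitted(chance))
```

A value near 1 means the feature is fully transmitted, near 0 that it is lost, which is how complementarity between acoustic voicing cues and visual place cues is typically demonstrated.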
Lundeborg Hammarström, Inger
2018-01-01
The present study investigated word-initial (WI) /r/-clusters in Central Swedish-speaking children with and without protracted phonological development (PPD). Data for WI singleton /r/ and singleton and cluster /l/ served as comparisons. Participants were twelve 4-year-olds with PPD and twelve age- and gender-matched typically developing (TD) controls. Native speakers audio-recorded and transcribed 109 target single words using a Swedish phonology test with 12 WI C+/r/-clusters and three WI CC+/r/-clusters. The results showed significantly higher match scores for the TD children, a lower match proportion for the /r/ targets and for singletons compared with clusters, and differences in mismatch patterns between the groups. There were no matches for /r/-cluster targets in the PPD group, with all children except two in that group showing deletions for both /r/-cluster types. The differences in mismatch proportions and types between the PPD group and controls suggest new directions for future clinical practice.
On the interaction of deaffrication and consonant harmony*
Dinnsen, Daniel A.; Gierut, Judith A.; Morrisette, Michele L.; Green, Christopher R.; Farris-Trimble, Ashley W.
2010-01-01
Error patterns in children’s phonological development are often described as simplifying processes that can interact with one another with different consequences. Some interactions limit the applicability of an error pattern, and others extend it to more words. Theories predict that error patterns interact to their full potential. While specific interactions have been documented for certain pairs of processes, no developmental study has shown that the range of typologically predicted interactions occurs for those processes. To determine whether this anomaly is an accidental gap or a systematic peculiarity of particular error patterns, two commonly occurring processes were considered, namely Deaffrication and Consonant Harmony. Results are reported from a cross-sectional and longitudinal study of 12 children (age 3;0 – 5;0) with functional phonological delays. Three interaction types were attested to varying degrees. The longitudinal results further instantiated the typology and revealed a characteristic trajectory of change. Implications of these findings are explored. PMID:20513256