Savundranayagam, Marie Y; Moore-Nielsen, Kelsey
2015-10-01
There are many recommended language-based strategies for effective communication with persons with dementia. What is unknown is whether effective language-based strategies are also person centered. Accordingly, the objective of this study was to examine whether language-based strategies for effective communication with persons with dementia overlapped with the following indicators of person-centered communication: recognition, negotiation, facilitation, and validation. Conversations (N = 46) between staff-resident dyads were audio-recorded during routine care tasks over 12 weeks. Staff utterances were coded twice, using language-based and person-centered categories. There were 21 language-based categories and 4 person-centered categories. There were 5,800 utterances transcribed: 2,409 without indicators, 1,699 coded as either language based or person centered, and 1,692 overlapping utterances. For recognition, 26% of utterances were greetings, 21% were affirmations, 13% were questions (yes/no and open-ended), and 15% involved rephrasing. Questions (yes/no, choice, and open-ended) comprised 74% of utterances coded as negotiation. A similar pattern was observed for facilitation, where yes/no, open-ended, and choice questions together comprised 51% of utterances; however, 21% of facilitative utterances were affirmations and 13% involved rephrasing. Finally, 89% of utterances coded as validation were affirmations. The findings identify specific language-based strategies that support person-centered communication. However, only 1 to 4 of the 21 possible language-based strategies overlapped with at least 10% of the utterances coded for each person-centered indicator. This finding suggests that staff need training to use more diverse language strategies that support the personhood of residents with dementia.
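The double-coding procedure described above, in which each staff utterance receives both a language-based code and a person-centered code, reduces to a simple cross-tabulation. A minimal sketch; the category names and the coded sample below are hypothetical illustrations, not data from the study:

```python
from collections import Counter

def overlap_counts(coded_utterances):
    """Tally co-occurrences of language-based codes and person-centered
    indicators in doubly coded utterances.

    coded_utterances: list of (language_code, person_centered_code) pairs,
    where either element may be None if no code applied.
    """
    tally = Counter()
    for lang, pc in coded_utterances:
        if lang is not None and pc is not None:
            tally[(lang, pc)] += 1
    return tally

def percent_by_indicator(tally, indicator):
    """Percentage breakdown of one indicator's overlapping utterances
    across language-based categories."""
    total = sum(n for (_, pc), n in tally.items() if pc == indicator)
    return {lang: round(100 * n / total, 1)
            for (lang, pc), n in tally.items() if pc == indicator}

# Hypothetical coded sample
sample = [("greeting", "recognition"), ("affirmation", "validation"),
          ("yes/no question", "negotiation"), ("greeting", "recognition"),
          ("rephrasing", None), ("affirmation", "validation")]
tally = overlap_counts(sample)
print(percent_by_indicator(tally, "validation"))  # {'affirmation': 100.0}
```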
Learn Locally, Act Globally: Learning Language from Variation Set Cues
Onnis, Luca; Waterfall, Heidi R.; Edelman, Shimon
2011-01-01
Variation set structure — partial overlap of successive utterances in child-directed speech — has been shown to correlate with progress in children’s acquisition of syntax. We demonstrate the benefits of variation set structure directly: in miniature artificial languages, arranging a certain proportion of utterances in a training corpus in variation sets facilitated word and phrase constituent learning in adults. Our findings have implications for understanding the mechanisms of L1 acquisition by children, and for the development of more efficient algorithms for automatic language acquisition, as well as better methods for L2 instruction. PMID:19019350
Girolametto, Luigi; Weitzman, Elaine; Greenberg, Janice
2012-02-01
This study examined the efficacy of a professional development program for early childhood educators that facilitated emergent literacy skills in preschoolers. The program, led by a speech-language pathologist, focused on teaching alphabet knowledge, print concepts, sound awareness, and decontextualized oral language within naturally occurring classroom interactions. Twenty educators were randomly assigned to experimental and control groups. Educators each recruited 3 to 4 children from their classrooms to participate. The experimental group participated in 18 hr of group training and 3 individual coaching sessions with a speech-language pathologist. The effects of intervention were examined in 30 min of videotaped interaction, including storybook reading and a post-story writing activity. At posttest, educators in the experimental group used a higher rate of utterances that included print/sound references and decontextualized language than the control group. Similarly, the children in the experimental group used a significantly higher rate of utterances that included print/sound references and decontextualized language compared to the control group. These findings suggest that professional development provided by a speech-language pathologist can yield short-term changes in the facilitation of emergent literacy skills in early childhood settings. Future research is needed to determine the impact of this program on the children's long-term development of conventional literacy skills.
Marks, Nicola J
2014-07-01
Scientists play an important role in framing public engagement with science. Their language can facilitate or impede particular interactions with particular citizens: scientists' "speech acts" can "perform" different types of "scientific citizenship". This paper examines how scientists in Australia talked about therapeutic cloning during interviews and during the 2006 parliamentary debates on stem cell research. Some avoided complex labels, thereby facilitating public examination of this field. Others drew on language that opens a space for publics only to become educated, not to participate in a more meaningful way. Importantly, the public utterances made by these scientists contrast with common international utterances: they focused not on the therapeutic promises but on the research promises of therapeutic cloning. Social scientists need to pay attention to the performative aspects of language in order to promote genuine citizen involvement in techno-science; Speech Act Theory is a useful analytical tool for this purpose.
Rhythm's Gonna Get You: Regular Meter Facilitates Semantic Sentence Processing
ERIC Educational Resources Information Center
Rothermich, Kathrin; Schmidt-Kassow, Maren; Kotz, Sonja A.
2012-01-01
Rhythm is a phenomenon that fundamentally affects the perception of events unfolding in time. In language, we define "rhythm" as the temporal structure that underlies the perception and production of utterances, whereas "meter" is defined as the regular occurrence of beats (i.e. stressed syllables). In stress-timed languages such as German, this…
Uptake in Incidental Focus on Form in Meaning-Focused ESL Lessons
ERIC Educational Resources Information Center
Loewen, Shawn
2004-01-01
Uptake is a term used to describe learners' responses to the provision of feedback after either an erroneous utterance or a query about a linguistic item within the context of meaning-focused language activities. Some researchers argue that uptake may contribute to second language acquisition by facilitating noticing and pushing learners to…
Visual Grouping in Accordance With Utterance Planning Facilitates Speech Production.
Zhao, Liming; Paterson, Kevin B; Bai, Xuejun
2018-01-01
Research on language production has focused on the process of utterance planning and involved studying the synchronization between visual gaze and the production of sentences that refer to objects in the immediate visual environment. However, it remains unclear how the visual grouping of these objects might influence this process. To shed light on this issue, the present research examined the effects of the visual grouping of objects in a visual display on utterance planning in two experiments. Participants produced utterances of the form "The snail and the necklace are above/below/on the left/right side of the toothbrush" for objects containing these referents (e.g., a snail, a necklace and a toothbrush). These objects were grouped using classic Gestalt principles of color similarity (Experiment 1) and common region (Experiment 2) so that the induced perceptual grouping was congruent or incongruent with the required phrasal organization. The results showed that speech onset latencies were shorter in congruent than incongruent conditions. The findings therefore reveal that the congruency between the visual grouping of referents and the required phrasal organization can influence speech production. Such findings suggest that, when language is produced in a visual context, speakers make use of both visual and linguistic cues to plan utterances.
How language production shapes language form and comprehension
MacDonald, Maryellen C.
2012-01-01
Language production processes can provide insight into how language comprehension works and language typology—why languages tend to have certain characteristics more often than others. Drawing on work in memory retrieval, motor planning, and serial order in action planning, the Production-Distribution-Comprehension (PDC) account links work in the fields of language production, typology, and comprehension: (1) faced with substantial computational burdens of planning and producing utterances, language producers implicitly follow three biases in utterance planning that promote word order choices that reduce these burdens, thereby improving production fluency. (2) These choices, repeated over many utterances and individuals, shape the distributions of utterance forms in language. The claim that language form stems in large degree from producers' attempts to mitigate utterance planning difficulty is contrasted with alternative accounts in which form is driven by language use more broadly, language acquisition processes, or producers' attempts to create language forms that are easily understood by comprehenders. (3) Language perceivers implicitly learn the statistical regularities in their linguistic input, and they use this prior experience to guide comprehension of subsequent language. In particular, they learn to predict the sequential structure of linguistic signals, based on the statistics of previously-encountered input. Thus, key aspects of comprehension behavior are tied to lexico-syntactic statistics in the language, which in turn derive from utterance planning biases promoting production of comparatively easy utterance forms over more difficult ones. This approach contrasts with classic theories in which comprehension behaviors are attributed to innate design features of the language comprehension system and associated working memory. The PDC instead links basic features of comprehension to a different source: production processes that shape language form. 
PMID:23637689
Investigating Joint Attention Mechanisms through Spoken Human-Robot Interaction
ERIC Educational Resources Information Center
Staudte, Maria; Crocker, Matthew W.
2011-01-01
Referential gaze during situated language production and comprehension is tightly coupled with the unfolding speech stream (Griffin, 2001; Meyer, Sleiderink, & Levelt, 1998; Tanenhaus, Spivey-Knowlton, Eberhard, & Sedivy, 1995). In a shared environment, utterance comprehension may further be facilitated when the listener can exploit the speaker's…
ERIC Educational Resources Information Center
Rice, Mabel L.; Smolik, Filip; Perpich, Denise; Thompson, Travis; Rytting, Nathan; Blossom, Megan
2010-01-01
Purpose: The mean length of children's utterances is a valuable estimate of their early language acquisition. The available normative data lack documentation of language and nonverbal intelligence levels of the samples. This study reports age-referenced mean length of utterance (MLU) data from children with specific language impairment (SLI) and…
Theodore, Rachel M; Demuth, Katherine; Shattuck-Hufnagel, Stefanie
2015-06-01
Prosodic and articulatory factors influence children's production of inflectional morphemes. For example, plural -s is produced more reliably in utterance-final compared to utterance-medial position (i.e., the positional effect), which has been attributed to the increased planning time in utterance-final position. In previous investigations of plural -s, utterance-medial plurals were followed by a stop consonant (e.g., dogs bark), inducing high articulatory complexity. We examined whether the positional effect would be observed if the utterance-medial context were simplified to a following vowel. An elicited imitation task was used to collect productions of plural nouns from 2-year-old children. Nouns were elicited utterance-medially and utterance-finally, with the medial plural followed by either a stressed or an unstressed vowel. Acoustic analysis was used to identify evidence of morpheme production. The positional effect was absent when the morpheme was followed by a vowel (e.g., dogs eat). However, it returned when the vowel-initial word contained 2 syllables (e.g., dogs arrive), suggesting that the increased processing load in the latter condition negated the facilitative effect of the easy articulatory context. Children's productions of grammatical morphemes reflect a rich interaction between emerging levels of linguistic competence, raising considerations for diagnosis and rehabilitation of language disorders.
Semiotic diversity in utterance production and the concept of ‘language’
Kendon, Adam
2014-01-01
Sign language descriptions that use an analytic model borrowed from spoken language structural linguistics have proved to be not fully appropriate. Pictorial and action-like modes of expression are integral to how signed utterances are constructed and to how they work. However, observation shows that speakers likewise use kinesic and vocal expressions that are not accommodated by spoken language structural linguistic models, including pictorial and action-like modes of expression. These, also, are integral to how speaker utterances in face-to-face interaction are constructed and to how they work. Accordingly, the object of linguistic inquiry should be revised, so that it comprises not only an account of the formal abstract systems that utterances make use of, but also an account of how the semiotically diverse resources that all languaging individuals use are organized in relation to one another. Both language as an abstract system and languaging should be the concern of linguistics. PMID:25092661
The Utterance as Speech Genre in Mikhail Bakhtin's Philosophy of Language.
ERIC Educational Resources Information Center
McCord, Michael A.
This paper focuses on one of the central concepts of Mikhail Bakhtin's philosophy of language: his theory of the utterance as speech genre. Before exploring speech genres, the paper discusses Bakhtin's ideas concerning language--both language as a general system, and the use of language as particular speech communication. The paper considers…
Stuttering Frequency in Relation to Lexical Diversity, Syntactic Complexity, and Utterance Length
ERIC Educational Resources Information Center
Wagovich, Stacy A.; Hall, Nancy E.
2018-01-01
Children's frequency of stuttering can be affected by utterance length, syntactic complexity, and lexical content of language. Using a unique small-scale within-subjects design, this study explored whether language samples that contain more stuttering have (a) longer, (b) syntactically more complex, and (c) lexically more diverse utterances than…
Persistence of Emphasis in Language Production: A Cross-Linguistic Approach
ERIC Educational Resources Information Center
Bernolet, Sarah; Hartsuiker, Robert J.; Pickering, Martin J.
2009-01-01
This study investigates the way in which speakers determine which aspects of an utterance to emphasize and how this affects the form of utterances. To do this, we ask whether the binding between emphasis and thematic roles persists between utterances. In one within-language (Dutch-Dutch) and three cross-linguistic (Dutch-English) structural…
The sounds of sarcasm in English and Cantonese: A cross-linguistic production and perception study
NASA Astrophysics Data System (ADS)
Cheang, Henry S.
Three studies were conducted to examine the acoustic markers of sarcasm in English and in Cantonese, and the manner in which such markers are perceived across these languages. The first study consisted of acoustic analyses of sarcastic utterances spoken in English to verify whether particular prosodic cues correspond to English sarcastic speech. Native English speakers produced utterances expressing sarcasm, sincerity, humour, or neutrality. Measures taken from each utterance included fundamental frequency (F0), amplitude, speech rate, harmonics-to-noise ratio (HNR, to probe voice quality), and one-third octave spectral values (to probe resonance). The second study was conducted to ascertain whether specific acoustic features marked sarcasm in Cantonese and how such features compare with English sarcastic prosody. The elicitation and acoustic analysis methods from the first study were applied to similarly-constructed Cantonese utterances spoken by native Cantonese speakers. Direct acoustic comparisons between Cantonese and English sarcasm exemplars were also made. To further test for cross-linguistic prosodic cues of sarcasm and to assess whether sarcasm could be conveyed across languages, a cross-linguistic perceptual study was then performed. A subset of utterances from the first two studies was presented to naive listeners fluent in either Cantonese or English. Listeners had to identify the attitude in each utterance regardless of language of presentation. Sarcastic utterances in English (regardless of text) were marked by lower mean F0 and reductions in HNR and F0 standard deviation (relative to comparison attitudes). Resonance changes, reductions in both speech rate and F0 range signalled sarcasm in conjunction with some vocabulary terms. By contrast, higher mean F0, amplitude range reductions, and F0 range restrictions corresponded with sarcastic utterances spoken in Cantonese regardless of text. 
For Cantonese, reduced speech rate and higher HNR interacted with certain vocabulary to mark sarcasm. Sarcastic prosody was most distinguished from acoustic features corresponding to sincere utterances in both languages. Direct English-Cantonese comparisons between sarcasm tokens confirmed cross-linguistic differences in sarcastic prosody. Finally, Cantonese and English listeners could identify sarcasm in their native languages but identified sarcastic utterances spoken in the unfamiliar language at chance levels. It was concluded that particular acoustic cues marked sarcastic speech in Cantonese and English, and these patterns of sarcastic prosody were specific to each language.
The Role of Speech Rhythm in Language Discrimination: Further Tests with a Non-Human Primate
ERIC Educational Resources Information Center
Tincoff, Ruth; Hauser, Marc; Tsao, Fritz; Spaepen, Geertrui; Ramus, Franck; Mehler, Jacques
2005-01-01
Human newborns discriminate languages from different rhythmic classes, fail to discriminate languages from the same rhythmic class, and fail to discriminate languages when the utterances are played backwards. Recent evidence showing that cotton-top tamarins discriminate Dutch from Japanese, but not when utterances are played backwards, is…
Reasonable Language: An Integrative Study of Paul Grice's Theories of Meaning, Reasoning, and Value
ERIC Educational Resources Information Center
Kurle, BonnieJean
2012-01-01
Three worries seem to plague Grice's theory of meaning. If, as Grice seems to hold, utterer intentions, including the meaning intention (M-intention), are to be epistemically prior to what some utterance--what some sentence or phrase-means, then one should be able to translate utterances from a language radically different from one's…
Brain basis of communicative actions in language
Egorova, Natalia; Shtyrov, Yury; Pulvermüller, Friedemann
2016-01-01
Although language is a key tool for communication in social interaction, most studies in the neuroscience of language have focused on language structures such as words and sentences. Here, the neural correlates of speech acts, that is, the actions performed by using language, were investigated with functional magnetic resonance imaging (fMRI). Participants were shown videos, in which the same critical utterances were used in different communicative contexts, to Name objects, or to Request them from communication partners. Understanding of critical utterances as Requests was accompanied by activation in bilateral premotor, left inferior frontal and temporo-parietal cortical areas known to support action-related and social interactive knowledge. Naming, however, activated the left angular gyrus implicated in linking information about word forms and related reference objects mentioned in critical utterances. These findings show that understanding of utterances as different communicative actions is reflected in distinct brain activation patterns, and thus suggest different neural substrates for different speech act types. PMID:26505303
ERIC Educational Resources Information Center
Liu, Huei-Mei
2014-01-01
Research Findings: I examined the long-term association between the lexical and acoustic features of maternal utterances during book reading and the language skills of infants and children. Maternal utterances were collected from 22 mother-child dyads in picture book-reading episodes when children were ages 6-12 months and 5 years. Two aspects of…
ERIC Educational Resources Information Center
Rydell, Patrick J.; Mirenda, Pat
1991-01-01
This study of 3 boys (ages 5-6) with autism found that adult high-constraint antecedent utterances elicited more verbal utterances in general, including subjects' echolalia; adult low-constraint utterances elicited more subject high-constraint utterances; and the degree of adult-utterance constraint did not influence the mean lengths of subjects'…
ERIC Educational Resources Information Center
Hollander, Michelle A.; Gelman, Susan A.; Raman, Lakshmi
2009-01-01
Many languages distinguish generic utterances (e.g., "Tigers are ferocious") from non-generic utterances (e.g., "Those tigers are ferocious"). Two studies examined how generic language specially links properties and categories. We used a novel-word extension task to ask if 4- to 5-year-old children and adults distinguish…
Listener Reliability in Assigning Utterance Boundaries in Children's Spontaneous Speech
ERIC Educational Resources Information Center
Stockman, Ida J.
2010-01-01
Research and clinical practices often rely on an utterance unit for spoken language analysis. This paper calls attention to the problems encountered when identifying utterance boundaries in young children's spontaneous conversational speech. The results of a reliability study of utterance boundary assignment are described for 20 females with…
Calculating Mean Length of Utterance for Eastern Canadian Inuktitut
ERIC Educational Resources Information Center
Allen, Shanley E. M.; Dench, Catherine
2015-01-01
Although virtually all Inuit children in eastern Arctic Canada learn Inuktitut as their native language, there is a critical lack of tools to assess their level of language ability. This article investigates how mean length of utterance (MLU), a widely-used assessment measure in English and other languages, can be best applied in Inuktitut. The…
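MLU is conventionally computed as total morphemes divided by total utterances. A minimal sketch, assuming the utterances arrive already segmented into morphemes (for a polysynthetic language like Inuktitut, that segmentation is precisely the hard methodological question the article addresses); the sample is hypothetical:

```python
def mean_length_of_utterance(utterances):
    """MLU in morphemes: total morpheme count / number of utterances.

    utterances: list of utterances, each a list of morphemes.
    """
    if not utterances:
        raise ValueError("need at least one utterance")
    total_morphemes = sum(len(u) for u in utterances)
    return total_morphemes / len(utterances)

# Hypothetical pre-segmented child sample (morpheme boundaries assumed)
sample = [["doggy"], ["want", "cookie"], ["I", "want", "-ed", "juice"]]
print(mean_length_of_utterance(sample))  # 7 morphemes / 3 utterances ≈ 2.33
```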
Language Sample Measures and Language Ability in Spanish English Bilingual Kindergarteners
Bedore, Lisa M.; Peña, Elizabeth D.; Gillam, Ronald B.; Ho, Tsung-Han
2010-01-01
Measures of productivity and sentence organization are useful metrics for quantifying language development and language impairments in monolingual and bilingual children. It is not yet known what measures within and across languages are most informative when evaluating the language skills of bilingual children. The purpose of this study was to evaluate how measures of language productivity and organization in two languages converge with children’s measured language abilities on the Bilingual English Spanish Assessment (BESA), a standardized measure of language ability. A total of 170 kindergarten-age children who produced narrative language samples in Spanish and in English based on a wordless picture book were included in the analysis. Samples were analyzed for number of utterances, number of different words, mean length of utterance, and percentage of grammatical utterances. The best predictors of language ability as measured by the BESA scores were English MLU, English grammaticality, and Spanish grammaticality. Results are discussed in relation to the nature of the measures in each of the languages and in regard to their potential utility for identifying low language ability in bilingual children. PMID:20955835
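The four sample measures named in this abstract (number of utterances, number of different words, MLU, and percentage of grammatical utterances) can be computed from a coded transcript in a few lines. A sketch assuming each utterance has already been tokenized and judged for grammaticality; the sample is invented for illustration:

```python
def sample_measures(utterances):
    """Productivity and organization measures from a coded language sample.

    utterances: list of (tokens, is_grammatical) pairs, where tokens is a
    list of words and is_grammatical is a coder's judgment.
    """
    n = len(utterances)
    words = [w.lower() for toks, _ in utterances for w in toks]
    return {
        "utterances": n,                                        # sample size
        "ndw": len(set(words)),                                 # number of different words
        "mlu_words": len(words) / n,                            # MLU in words
        "pct_grammatical": 100 * sum(1 for _, g in utterances if g) / n,
    }

# Hypothetical coded narrative sample
sample = [(["the", "frog", "jumped"], True),
          (["him", "runned", "away"], False),
          (["the", "boy", "looked", "for", "the", "frog"], True),
          (["frog", "gone"], False)]
print(sample_measures(sample))
```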
The Developmental Trajectory of Nonadjacent Dependency Learning
ERIC Educational Resources Information Center
Gomez, Rebecca; Maye, Jessica
2005-01-01
We investigated the developmental trajectory of nonadjacent dependency learning in an artificial language. Infants were exposed to 1 of 2 artificial languages with utterances of the form [aXc or bXd] (Grammar 1) or [aXd or bXc] (Grammar 2). In both languages, the grammaticality of an utterance depended on the relation between the 1st and 3rd…
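The two artificial grammars can be written down as first-to-third-element mappings, which makes the nonadjacent dependency explicit: grammaticality is decided by the outer pair, with the middle element free to vary. A sketch; the middle-element vocabulary here is an invented placeholder:

```python
import itertools

# Nonadjacent dependencies: the 1st element predicts the 3rd, with a
# variable middle element X.
GRAMMAR_1 = {"a": "c", "b": "d"}   # utterances of the form aXc or bXd
GRAMMAR_2 = {"a": "d", "b": "c"}   # utterances of the form aXd or bXc
MIDDLES = ["X1", "X2", "X3"]       # hypothetical middle-element vocabulary

def generate(grammar, middles):
    """All grammatical three-element utterances licensed by a grammar."""
    return [(first, mid, last)
            for (first, last), mid in itertools.product(grammar.items(), middles)]

def is_grammatical(utterance, grammar):
    """An utterance is grammatical iff its 3rd element matches the one its
    1st element predicts, regardless of the middle element."""
    first, _, third = utterance
    return grammar.get(first) == third

print(generate(GRAMMAR_1, MIDDLES))
print(is_grammatical(("a", "X2", "c"), GRAMMAR_1))  # True
print(is_grammatical(("a", "X2", "c"), GRAMMAR_2))  # False
```

Note that every string grammatical in Grammar 1 is ungrammatical in Grammar 2 and vice versa, which is what lets the test phase diagnose which dependency the infants learned.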
ERIC Educational Resources Information Center
Vigil, Vannesa T.; Eyer, Julia A.; Hardee, W Paul
2005-01-01
Responding relevantly to an information-soliciting utterance (ISU) is required of a school-age child many times daily. For the child with pragmatic language difficulties, this may be especially problematic, yet clinicians have had few data to design intervention for improving these skills. This small-scale study looks at the ability of a child…
Sound representation in higher language areas during language generation
Magrassi, Lorenzo; Aromataris, Giuseppe; Cabrini, Alessandro; Annovazzi-Lodi, Valerio; Moro, Andrea
2015-01-01
How language is encoded by neural activity in the higher-level language areas of humans is still largely unknown. We investigated whether the electrophysiological activity of Broca’s area correlates with the sound of the utterances produced. During speech perception, the electric cortical activity of the auditory areas correlates with the sound envelope of the utterances. In our experiment, we compared the electrocorticogram recorded during awake neurosurgical operations in Broca’s area and in the dominant temporal lobe with the sound envelope of single words versus sentences read aloud or mentally by the patients. Our results indicate that the electrocorticogram correlates with the sound envelope of the utterances, starting before any sound is produced and even in the absence of speech, when the patient is reading mentally. No correlations were found when the electrocorticogram was recorded in the superior parietal gyrus, an area not directly involved in language generation, or in Broca’s area when the participants were executing a repetitive motor task, which did not include any linguistic content, with their dominant hand. The distribution of suprathreshold correlations across frequencies of cortical activities varied depending on whether the sound envelope derived from words or from sentences. Our results suggest the activity of language areas is organized by sound when language is generated, before any utterance is produced or heard. PMID:25624479
NASA Astrophysics Data System (ADS)
Jawahar, Kavish; Dempster, Edith R.
2013-06-01
In this study, the sociocultural view of science as a language and some quantitative language features of the complementary theoretical framework of systemic functional linguistics are employed to analyse the utterances of three South African Physical Sciences teachers. Using a multi-case study methodology, this study provides a sophisticated description of the utterances of Pietermaritzburg Physical Sciences teachers in language contexts characterised by varying proportions of English Second Language (ESL) students in each class. The results reveal that, as expected, lexical cohesion as measured by the cohesive harmony index and proportion of repeated content words relative to total words, increased with an increasing proportion of ESL students. However, the use of nominalisation by the teachers and the lexical density of their utterances did not decrease with an increasing proportion of ESL students. Furthermore, the results reveal that each individual Physical Sciences teacher had a 'signature' talk, unrelated to the language context in which they taught. This study signals the urgent and critical need for South African science teacher training programmes to place a greater emphasis on the functional use of language for different language contexts in order to empower South African Physical Sciences teachers to adequately apprentice their students into the use of the register of scientific English.
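Two of the systemic-functional measures named above reduce to simple token ratios: lexical density is the proportion of content words to total words, and repeated content words relative to total words gives a crude index of lexical cohesion. A sketch assuming content-word status has already been assigned (normally by a POS tagger); the teacher utterance is invented:

```python
def lexical_density(tokens):
    """Lexical density as the proportion of content words to total words.

    tokens: list of (word, is_content_word) pairs.
    """
    if not tokens:
        raise ValueError("empty utterance")
    return sum(1 for _, is_content in tokens if is_content) / len(tokens)

def repeated_content_proportion(tokens):
    """Proportion of content-word tokens that repeat an earlier content
    word, relative to total words."""
    seen, repeats = set(), 0
    for word, is_content in tokens:
        if is_content:
            w = word.lower()
            if w in seen:
                repeats += 1
            seen.add(w)
    return repeats / len(tokens)

# Hypothetical teacher utterance, content-word flags assumed
utt = [("the", False), ("current", True), ("flows", True), ("because", False),
       ("the", False), ("current", True), ("needs", True), ("a", False),
       ("circuit", True)]
print(lexical_density(utt))              # 5/9 ≈ 0.56
print(repeated_content_proportion(utt))  # 1/9 ≈ 0.11
```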
Pronunciation difficulty, temporal regularity, and the speech-to-song illusion.
Margulis, Elizabeth H; Simchy-Gross, Rhimmon; Black, Justin L
2015-01-01
The speech-to-song illusion (Deutsch et al., 2011) tracks the perceptual transformation from speech to song across repetitions of a brief spoken utterance. Because it involves no change in the stimulus itself, but a dramatic change in its perceived affiliation to speech or to music, it presents a unique opportunity to comparatively investigate the processing of language and music. In this study, native English-speaking participants were presented with brief spoken utterances that were subsequently repeated ten times. The utterances were drawn either from languages that are relatively difficult for a native English speaker to pronounce, or languages that are relatively easy for a native English speaker to pronounce. Moreover, the repetition could occur at regular or irregular temporal intervals. Participants rated the utterances before and after the repetitions on a 5-point Likert-like scale ranging from "sounds exactly like speech" to "sounds exactly like singing." The difference in ratings before and after was taken as a measure of the strength of the speech-to-song illusion in each case. The speech-to-song illusion occurred regardless of whether the repetitions were spaced at regular temporal intervals or not; however, it occurred more readily if the utterance was spoken in a language difficult for a native English speaker to pronounce. Speech circuitry seemed more liable to capture native and easy-to-pronounce languages, and more reluctant to relinquish them to perceived song across repetitions.
ERIC Educational Resources Information Center
Herr-Israel, Ellen; McCune, Lorraine
2011-01-01
In the period between sole use of single words and majority use of multiword utterances, children draw from their existing productive capability and conversational input to facilitate the eventual outcome of majority use of multiword utterances. During this period, children use word combinations that are not yet mature multiword utterances, termed…
Don't Underestimate the Benefits of Being Misunderstood.
Gibson, Edward; Tan, Caitlin; Futrell, Richard; Mahowald, Kyle; Konieczny, Lars; Hemforth, Barbara; Fedorenko, Evelina
2017-06-01
Being a nonnative speaker of a language poses challenges. Individuals often feel embarrassed by the errors they make when talking in their second language. However, here we report an advantage of being a nonnative speaker: Native speakers give foreign-accented speakers the benefit of the doubt when interpreting their utterances; as a result, apparently implausible utterances are more likely to be interpreted in a plausible way when delivered in a foreign than in a native accent. Across three replicated experiments, we demonstrated that native English speakers are more likely to interpret implausible utterances, such as "the mother gave the candle the daughter," as similar plausible utterances ("the mother gave the candle to the daughter") when the speaker has a foreign accent. This result follows from the general model of language interpretation in a noisy channel, under the hypothesis that listeners assume a higher error rate in foreign-accented than in nonaccented speech.
Pragmatic Functions in Late Talkers: A 1-Year Follow-Up Study
ERIC Educational Resources Information Center
MacRoy-Higgins, Michelle; Kliment, Sarah
2017-01-01
This study analyzed spontaneous language samples of three-year-olds with a history of expressive language delay (late talkers) and age-matched controls using Dore's Conversational Acts analysis (1978) and Mean Length of Utterance (MLU; Brown, 1973). Differences were observed between groups in utterances classified as organizational device and…
Causal Inference and Language Comprehension: Event-Related Potential Investigations
ERIC Educational Resources Information Center
Davenport, Tristan S.
2014-01-01
The most important information conveyed by language is often contained not in the utterance itself, but in the interaction between the utterance and the comprehender's knowledge of the world and the current situation. This dissertation uses psycholinguistic methods to explore the effects of a common type of inference--causal inference--on language…
Early syntactic creativity: a usage-based approach.
Lieven, Elena; Behrens, Heike; Speares, Jennifer; Tomasello, Michael
2003-05-01
The aim of the current study was to determine the degree to which a sample of one child's creative utterances related to utterances that the child previously produced. The utterances to be accounted for were all of the intelligible, multi-word utterances produced by the child in a single hour of interaction with her mother early in her third year of life (at age 2;1.11). We used a high-density database consisting of 5 hours of recordings per week together with a maternal diary for the previous 6 weeks. Of the 295 multi-word utterances on tape, 37% were 'novel' in the sense that they had not been said in their entirety before. Using a morpheme-matching method, we identified the way(s) in which each novel utterance differed from its closest match in the preceding corpus. In 74% of the cases we required only one operation to match the previous utterance and the great majority of these consisted of the substitution of a word (usually a noun) into a previous utterance or schema. Almost all the other single-operation utterances involved adding a word onto the beginning or end of a previous utterance. 26% of the novel, multi-word utterances required more than one operation to match the closest previous utterance, although many of these only involved a combination of the two operations seen for the single-operation utterances. Some others were, however, more complex to match. The results suggest that the relatively high degree of creativity in early English child language could be at least partially based upon entrenched schemas and a small number of simple operations to modify them. We discuss the implications of these results for the interplay in language production between strings registered in memory and categorial knowledge.
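The matching method described above, finding the prior utterance that a novel utterance most closely resembles and counting the operations (word substitution, word addition) needed to bridge the two, can be approximated with word-level edit distance. This is a rough sketch under stated assumptions: the study worked at the morpheme level with a richer operation inventory, whereas this toy version treats whole words as units.

```python
def edit_ops(novel, prior):
    """Word-level edit distance between two utterances given as word
    lists: the minimum number of substitutions, additions, and
    deletions needed to turn `prior` into `novel`."""
    n, m = len(novel), len(prior)
    # dp[i][j] = min ops to turn prior[:j] into novel[:i]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if novel[i - 1] == prior[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # add a word
                           dp[i][j - 1] + 1,         # drop a word
                           dp[i - 1][j - 1] + cost)  # substitute a word
    return dp[n][m]

def closest_match(novel_utt, corpus):
    """Return the prior utterance requiring the fewest operations,
    together with that operation count."""
    novel = novel_utt.split()
    best = min(corpus, key=lambda u: edit_ops(novel, u.split()))
    return best, edit_ops(novel, best.split())

# Hypothetical prior corpus and novel utterance
corpus = ["more juice", "want juice", "want milk please"]
match, ops = closest_match("more milk", corpus)
# "more milk" matches "more juice" with one substitution (juice -> milk)
```

A single-substitution match of this kind corresponds to the majority pattern the study reports: slotting a new noun into an entrenched schema.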
Siu, Elaine; Man, David W K
2006-09-01
Children with Specific Language Impairment present with delayed language development, but do not have a history of hearing impairment, mental deficiency, or associated social or behavioral problems. Non-word repetition was suggested as an index to reflect the capacity of phonological working memory. There is a paucity of such studies among Hong Kong Chinese children. This preliminary study aimed to examine the relationship between phonological working memory and Specific Language Impairment, through the processes of non-word repetition and sentence comprehension, in children with Specific Language Impairment and pre-school children with normal language development. Both groups of children were screened by a standardized language test. A list of Cantonese (the commonest dialect used in Hong Kong) multisyllabic nonsense utterances and a set of 18 sentences were developed for this study. t-tests and Pearson correlations were used to study the relationship between non-word repetition, working memory, and Specific Language Impairment. Twenty-three pre-school children with Specific Language Impairment (mean age = 68.30 months; SD = 6.90) and another 23 pre-school children (mean age = 67.30 months; SD = 6.16) participated in the study. Significant differences in performance were found between the Specific Language Impairment group and the normal language group on the multisyllabic nonsense utterance repetition task and the sentence comprehension task. A length effect was noted in the Specific Language Impairment group, consistent with the findings of other literature. In addition, correlations were observed between the number of nonsense utterances repeated and the number of elements comprehended. Cantonese multisyllabic nonsense utterances might be worth further developing as a screening tool for the early detection of children with Specific Language Impairment.
Kasari, Connie; Kaiser, Ann; Goods, Kelly; Nietfeld, Jennifer; Mathy, Pamela; Landa, Rebecca; Murphy, Susan; Almirall, Daniel
2014-06-01
This study tested the effect of beginning treatment with a speech-generating device (SGD) in the context of a blended, adaptive treatment design for improving spontaneous, communicative utterances in school-aged, minimally verbal children with autism. A total of 61 minimally verbal children with autism, aged 5 to 8 years, were randomized to a blended developmental/behavioral intervention (JASP+EMT) with or without the augmentation of a SGD for 6 months with a 3-month follow-up. The intervention consisted of 2 stages. In stage 1, all children received 2 sessions per week for 3 months. Stage 2 intervention was adapted (by increased sessions or adding the SGD) based on the child's early response. The primary outcome was the total number of spontaneous communicative utterances; secondary measures were the total number of novel words and total comments from a natural language sample. Primary aim results found improvements in spontaneous communicative utterances, novel words, and comments that all favored the blended behavioral intervention that began by including an SGD (JASP+EMT+SGD) as opposed to spoken words alone (JASP+EMT). Secondary aim results suggest that the adaptive intervention beginning with JASP+EMT+SGD and intensifying JASP+EMT+SGD for children who were slow responders led to better posttreatment outcomes. Minimally verbal school-aged children can make significant and rapid gains in spoken spontaneous language with a novel, blended intervention that focuses on joint engagement and play skills and incorporates an SGD. Future studies should further explore the tailoring design used in this study to better understand children's response to treatment. Clinical trial registration information-Developmental and Augmented Intervention for Facilitating Expressive Language (CCNIA); http://clinicaltrials.gov/; NCT01013545. Copyright © 2014 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.
Revising Segmentation Hypotheses in First and Second Language Listening
ERIC Educational Resources Information Center
Field, John
2008-01-01
Any on-line processing that takes place while an utterance is unfolding is extremely tentative, with early-formed hypotheses having to be revised as the utterance proceeds. The hypotheses in question relate not only to the words that are present but also to where their boundaries fall. This study examines how first and second language listeners…
Language as Description, Indication, and Depiction.
Ferrara, Lindsay; Hodge, Gabrielle
2018-01-01
Signers and speakers coordinate a broad range of intentionally expressive actions within the spatiotemporal context of their face-to-face interactions (Parmentier, 1994; Clark, 1996; Johnston, 1996; Kendon, 2004). Varied semiotic repertoires combine in different ways, the details of which are rooted in the interactions occurring in a specific time and place (Goodwin, 2000; Kusters et al., 2017). However, intense focus in linguistics on conventionalized symbolic form/meaning pairings (especially those which are arbitrary) has obscured the importance of other semiotics in face-to-face communication. A consequence is that the communicative practices resulting from diverse ways of being (e.g., deaf, hearing) are not easily united into a global theoretical framework. Here we promote a theory of language that accounts for how diverse humans coordinate their semiotic repertoires in face-to-face communication, bringing together evidence from anthropology, semiotics, gesture studies and linguistics. Our aim is to facilitate direct comparison of different communicative ecologies. We build on Clark's (1996) theory of language use as 'actioned' via three methods of signaling: describing, indicating, and depicting. Each method is fundamentally different to the other, and they can be used alone or in combination with others during the joint creation of multimodal 'composite utterances' (Enfield, 2009). We argue that a theory of language must be able to account for all three methods of signaling as they manifest within and across composite utterances. From this perspective, language-and not only language use-can be viewed as intentionally communicative action involving the specific range of semiotic resources available in situated human interactions.
ERIC Educational Resources Information Center
Jerger, Sara; Thorne, John C.
2016-01-01
Purpose: This research attempted to replicate Hoffman's 2009 finding that the proportion of narrative utterances with semantic or syntactic errors (i.e., ≥ 14% "restricted utterances") can differentiate school-age children with typical development from those with language impairment with a sensitivity of 83% and specificity of 88%.…
Frey, Jennifer R; Kaiser, Ann P; Scherer, Nancy J
2018-02-01
The purpose of this study was to investigate the influences of child speech intelligibility and rate on caregivers' linguistic responses. This study compared the language use of children with cleft palate with or without cleft lip (CP±L) and their caregivers' responses. Descriptive analyses of children's language and caregivers' responses and a multilevel analysis of caregiver responsivity were conducted to determine whether there were differences in children's productive language and caregivers' responses to different types of child utterances. Play-based caregiver-child interactions were video recorded in a clinic setting. Thirty-eight children (19 toddlers with nonsyndromic repaired CP±L and 19 toddlers with typical language development) between 17 and 37 months old and their primary caregivers participated. Child and caregiver measures were obtained from transcribed and coded video recordings and included the rate, total number of words, and number of different words spoken by children and their caregivers, the intelligibility of child utterances, and the form of caregiver responses. Findings from this study suggest that caregivers are highly responsive to toddlers' communication attempts, regardless of the intelligibility of those utterances. However, opportunities to respond were fewer for children with CP±L. Significant differences were observed in children's intelligibility and productive language and in caregivers' use of questions in response to unintelligible utterances of children with and without CP±L. This study provides information about differences in the language use of children with CP±L and in caregivers' responses to the spoken language of toddlers with and without CP±L.
ERIC Educational Resources Information Center
Richardson, Tanya; Murray, Jane
2017-01-01
Within English early childhood education, there is emphasis on improving speech and language development as well as a drive for outdoor learning. This paper synthesises both aspects to consider whether or not links exist between the environment and the quality of young children's utterances as part of their speech and language development and if…
ERIC Educational Resources Information Center
Pavelko, Stacey L.; Owens, Robert E., Jr.
2017-01-01
Purpose: The purpose of this study was to document whether mean length of utterance (MLU[subscript S]), total number of words (TNW), clauses per sentence (CPS), and/or words per sentence (WPS) demonstrated age-related changes in children with typical language and to document the average time to collect, transcribe, and analyze conversational…
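The four sample measures named above (MLU, TNW, CPS, WPS) are simple ratios over a transcribed sample. The sketch below is a toy illustration, not the study's procedure: morpheme and clause counts are passed in as given, since in practice they require hand coding or a parser, and only the word counts are derived from the text itself.

```python
def sample_measures(sentences, morphemes_per_sentence, clauses_per_sentence):
    """Compute four language-sample measures from transcribed sentences.
    `morphemes_per_sentence` and `clauses_per_sentence` are pre-coded
    counts, one per sentence (an assumption of this sketch)."""
    word_counts = [len(s.split()) for s in sentences]
    n = len(sentences)
    tnw = sum(word_counts)                       # total number of words
    wps = tnw / n                                # words per sentence
    mlu = sum(morphemes_per_sentence) / n        # mean length of utterance (morphemes)
    cps = sum(clauses_per_sentence) / n          # clauses per sentence
    return tnw, wps, mlu, cps

sentences = ["the dog runs", "he jumped over the fence"]
tnw, wps, mlu, cps = sample_measures(sentences, [4, 6], [1, 1])
# tnw = 8, wps = 4.0, mlu = 5.0, cps = 1.0
```

Because each measure is a per-sentence average or total, age-related change shows up directly as these ratios grow with longer, more clause-dense sentences.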
Nyman, Anna; Lohmander, Anette
2018-01-01
Babbling is an important precursor to speech, but has not yet been thoroughly investigated in children with neurodevelopmental disabilities. Canonical babbling ratio (CBR) is a commonly used but time-consuming measure for quantifying babbling. The aim of this study was twofold: to validate a simplified version of the CBR (CBR_UTTER), and to use this measure to determine whether early precursors to speech and language development could be detected in children with different neurodevelopmental disabilities. Two different data sets were used. In Part I, CBR_UTTER was compared to two other CBR measures using previously obtained phonetic transcriptions of 3571 utterances from 38 audio recordings of 12-18-month-old children with and without cleft palate. In CBR_UTTER, the number of canonical utterances was divided by the total number of utterances. In CBR_syl, the number of canonical syllables was divided by the total number of syllables. In CBR_utt, the number of canonical syllables was divided by the total number of utterances. High agreement was seen between CBR_UTTER and CBR_syl, suggesting CBR_UTTER as an alternative. In Part II, babbling in children with neurodevelopmental disability was examined. Eighteen children aged 12-22 months with Down syndrome, cerebral palsy or developmental delay were audio-video recorded during interaction with a parent. Recordings were analysed by observation of babbling, consonant production, and calculation of CBR_UTTER, and compared to data from controls. The study group showed significantly lower occurrence of all variables, except for plosives. The long-term relevance of the findings for the speech and language development of the children needs to be investigated.
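The three CBR variants defined in the abstract above can be computed mechanically once syllables are tagged as canonical or not. The sketch below is a toy illustration under stated assumptions: the tagging would come from phonetic transcription, and the criterion that an utterance is canonical if it contains at least one canonical syllable is an assumption of this sketch, not a definition from the study.

```python
def cbr_measures(utterances):
    """Compute three canonical babbling ratios. Each utterance is a
    list of syllables tagged True (canonical) or False; an utterance
    counts as canonical here if any of its syllables is canonical
    (an assumed criterion)."""
    total_utts = len(utterances)
    total_syls = sum(len(u) for u in utterances)
    canonical_syls = sum(sum(u) for u in utterances)   # True counts as 1
    canonical_utts = sum(1 for u in utterances if any(u))
    return {
        "CBR_UTTER": canonical_utts / total_utts,  # canonical utts / total utts
        "CBR_syl": canonical_syls / total_syls,    # canonical syls / total syls
        "CBR_utt": canonical_syls / total_utts,    # canonical syls / total utts
    }

# Four utterances; syllables tagged canonical (True) / non-canonical (False)
sample = [[True, False], [False], [True, True, False], [False, False]]
ratios = cbr_measures(sample)
# CBR_UTTER = 2/4 = 0.5; CBR_syl = 3/8 = 0.375; CBR_utt = 3/4 = 0.75
```

The simplification the study validates is visible here: CBR_UTTER only requires a yes/no judgment per utterance, whereas CBR_syl requires counting every syllable.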
Parent-Child Interaction Therapy (PCIT) in school-aged children with specific language impairment.
Allen, Jessica; Marshall, Chloë R
2011-01-01
Parents play a critical role in their child's language development. Therefore, advising parents of a child with language difficulties on how to facilitate their child's language might benefit the child. Parent-Child Interaction Therapy (PCIT) has been developed specifically for this purpose. In PCIT, the speech-and-language therapist (SLT) works collaboratively with parents, altering interaction styles to make interaction more appropriate to their child's level of communicative needs. This study investigates the effectiveness of PCIT in 8-10-year-old children with specific language impairment (SLI) in the expressive domain. It aimed to identify whether PCIT had any significant impact on the following communication parameters of the child: verbal initiations, verbal and non-verbal responses, mean length of utterance (MLU), and proportion of child-to-parent utterances. Sixteen children with SLI and their parents were randomly assigned to two groups: treated or delayed treatment (control). The treated group took part in PCIT over a 4-week block, and then returned to the clinic for a final session after a 6-week consolidation period with no input from the therapist. The treated and control group were assessed in terms of the different communication parameters at three time points: pre-therapy, post-therapy (after the 4-week block) and at the final session (after the consolidation period), through video analysis. It was hypothesized that all communication parameters would significantly increase in the treated group over time and that no significant differences would be found in the control group. All the children in the treated group made language gains during spontaneous interactions with their parents. In comparison with the control group, PCIT had a positive effect on three of the five communication parameters: verbal initiations, MLU and the proportion of child-to-parent utterances. 
There was a marginal effect on verbal responses, and a trend towards such an effect for non-verbal responses. Despite the small group sizes, this study provides preliminary evidence that PCIT can achieve its treatment goals with 8-10-year-olds who have expressive language impairments. This has potentially important implications for how mainstream speech and language services provide intervention to school-aged children. In contrast to direct one-to-one therapy, PCIT offers a single block of therapy where the parents' communication and interaction skills are developed to provide the child with an appropriate language-rich environment, which in turn could be more cost-effective for the service provider. © 2010 Royal College of Speech & Language Therapists.
ERIC Educational Resources Information Center
Rice, Mabel L.; Redmond, Sean M.; Hoffman, Lesa
2006-01-01
Purpose: Although mean length of utterance (MLU) is a useful benchmark in studies of children with specific language impairment (SLI), some empirical and interpretive issues are unresolved. The authors report on 2 studies examining, respectively, the concurrent validity and temporal stability of MLU equivalency between children with SLI and…
Language discrimination without language: Experiments on tamarin monkeys
NASA Astrophysics Data System (ADS)
Tincoff, Ruth; Hauser, Marc; Spaepen, Geertrui; Tsao, Fritz; Mehler, Jacques
2002-05-01
Human newborns can discriminate spoken languages differing on prosodic characteristics such as the timing of rhythmic units [T. Nazzi et al., JEP:HPP 24, 756-766 (1998)]. Cotton-top tamarins have also demonstrated a similar ability to discriminate a morae- (Japanese) vs a stress-timed (Dutch) language [F. Ramus et al., Science 288, 349-351 (2000)]. The finding that tamarins succeed in this task when either natural or synthesized utterances are played in a forward direction, but fail on backward utterances which disrupt the rhythmic cues, suggests that sensitivity to language rhythm may rely on general processes of the primate auditory system. However, the rhythm hypothesis also predicts that tamarins would fail to discriminate languages from the same rhythm class, such as English and Dutch. To assess the robustness of this ability, tamarins were tested on a different-rhythm-class distinction, Polish vs Japanese, and a new same-rhythm-class distinction, English vs Dutch. The stimuli were natural forward utterances produced by multiple speakers. As predicted by the rhythm hypothesis, tamarins discriminated between Polish and Japanese, but not English and Dutch. These findings strengthen the claim that discriminating the rhythmic cues of language does not require mechanisms specialized for human speech. [Work supported by NSF.]
Suskind, Dana L; Graf, Eileen; Leffel, Kristin R; Hernandez, Marc W; Suskind, Elizabeth; Webber, Robert; Tannenbaum, Sally; Nevins, Mary Ellen
2016-02-01
Objective: To investigate the impact of a spoken language intervention curriculum aiming to improve the language environments that caregivers of low socioeconomic status (SES) provide for their D/HH children with CI & HA, to support children's spoken language development. Study design: Quasiexperimental. Setting: Tertiary. Participants: Thirty-two caregiver-child dyads of low SES (as defined by caregiver education ≤ MA/MS and the income proxies Medicaid or WIC/LINK) with children aged < 4.5 years, hearing loss of ≥ 30 dB between 500 and 4000 Hz, using at least one amplification device with adequate amplification (hearing aid, cochlear implant, osseo-integrated device). Intervention: Behavioral; a caregiver-directed educational intervention curriculum designed to improve D/HH children's early language environments. Main outcome measures: Changes in caregiver knowledge of child language development (questionnaire scores) and language behavior (word types, word tokens, utterances, mean length of utterance [MLU], LENA Adult Word Count [AWC], Conversational Turn Count [CTC]). Results: Significant increases in caregiver questionnaire scores as well as utterances, word types, word tokens, and MLU in the treatment but not the control group. No significant changes in LENA outcomes. Conclusions: Results partially support the notion that caregiver-directed language enrichment interventions can change the home language environments of D/HH children from low-SES backgrounds. Further longitudinal studies are necessary.
ERIC Educational Resources Information Center
Buium, Nissan; And Others
Speech samples were collected from three 48-month-old children with Down's Syndrome over an 11-month period after Ss had reached the one word utterance stage. Each S's linguistic utterances were semantically evaluated in terms of M. Bowerman's, R. Brown's, and I. Schlesinger's semantic relational concepts. Generally, findings suggested that Ss…
Echolalia and comprehension in autistic children.
Roberts, J M
1989-06-01
The research reported in this paper investigates the phenomenon of echolalia in the speech of autistic children by examining the relationship between the frequency of echolalia and receptive language ability. The receptive language skills of 10 autistic children were assessed, and spontaneous speech samples were recorded. Analysis of these data showed that those children with poor receptive language skills produced significantly more echolalic utterances than those children whose receptive skills were more age-appropriate. Children who produced fewer echolalic utterances, and had more advanced receptive language ability, evidenced a higher proportion of mitigated echolalia. The most common type of mitigation was echo plus affirmation or denial.
Influence of interlocutor/reader on utterance in reflective writing and interview
NASA Astrophysics Data System (ADS)
Collyer, Vivian M.
2010-03-01
The influence of the Other on utterance is foundational to language study. This analysis contrasts that influence across two modes of communication: reflective writing and interview. The data source is derived from the reflective writings and interview transcripts of a twelfth-grade physics student. In this student's case, reflective writing includes extensive utterances, utilizing rhetorical devices to persuade and reconcile with his reader. In the interview, ongoing back-and-forth utterances allow the two participants to negotiate a co-constructed meaning for religion. Implications for the classroom are briefly discussed.
Direct and Indirect Effects of Behavioral Parent Training on Infant Language Production
Bagner, Daniel M.; Garcia, Dainelys; Hill, Ryan
2016-01-01
Given the strong association between early behavior problems and language impairment, we examined the effect of a brief home-based adaptation of Parent–child Interaction Therapy on infant language production. Sixty infants (55% male; mean age 13.47 ± 1.31 months) were recruited at a large urban primary care clinic and were included if their scores exceeded the 75th percentile on a brief screener of early behavior problems. Families were randomly assigned to receive the home-based parenting intervention or standard pediatric primary care. The observed number of infant total (i.e., token) and different (i.e., type) utterances spoken during an observation of an infant-led play and a parent-report measure of infant externalizing behavior problems were examined at pre- and post-intervention and at 3- and 6-month follow-ups. Infants receiving the intervention demonstrated a significantly higher number of observed different and total utterances at the 6-month follow-up compared to infants in standard care. Furthermore, there was an indirect effect of the intervention on infant language production, such that the intervention led to decreases in infant externalizing behavior problems from pre- to post-intervention, which, in turn, led to increases in infant different utterances at the 3- and 6-month follow-ups and total utterances at the 6-month follow-up. Results provide initial evidence for the effect of this brief and home-based intervention on infant language production, including the indirect effect of the intervention on infant language through improvements in infant behavior, highlighting the importance of targeting behavior problems in early intervention. PMID:26956651
ERIC Educational Resources Information Center
te Kaat-van den Os, Danielle J. A.; Jongmans, Marian J.; Volman, M (Chiel) J. M.; Lauteslager, Peter E. M.
2015-01-01
Expressive language problems are common among children with Down syndrome (DS). In typically developing (TD) children, gestures play an important role in supporting the transition from one-word utterances to two-word utterances. As far as we know, an overview of the role of gestures in supporting expressive language development in children with DS is…
Sandbank, Micheal; Yoder, Paul
2016-05-01
The purpose of this correlational meta-analysis was to examine the association between parental utterance length and language outcomes in children with disabilities and whether this association varies according to other child characteristics, such as age and disability type. This association can serve as a starting point for language intervention practices for children with disabilities. We conducted a systematic search of 42 electronic databases to identify relevant studies. Twelve studies reporting on a total of 13 populations (including 257 participants) were identified. A random-effects model was used to estimate a combined effect size across all studies as well as separate effect sizes across studies in each disability category. The combined effect size across all studies suggests a weak positive association between parental input length and child language outcomes. However, subgroup analyses within disability categories suggest that this association may differ for children with autism. Results of 4 studies including 47 children with autism show that parental input length is strongly associated with positive language outcomes in this population. Present evidence suggests that clinicians should reconsider intervention practices that prescribe shorter, grammatically incomplete utterances, particularly when working with children with autism.
2013-01-01
Background: Individuals suffering from vision loss of a peripheral origin may learn to understand spoken language at a rate of up to about 22 syllables (syl) per second - exceeding by far the maximum performance level of normal-sighted listeners (ca. 8 syl/s). To further elucidate the brain mechanisms underlying this extraordinary skill, functional magnetic resonance imaging (fMRI) was performed in blind subjects of varying ultra-fast speech comprehension capabilities and sighted individuals while listening to sentence utterances of a moderately fast (8 syl/s) or ultra-fast (16 syl/s) syllabic rate. Results: Besides left inferior frontal gyrus (IFG), bilateral posterior superior temporal sulcus (pSTS) and left supplementary motor area (SMA), blind people highly proficient in ultra-fast speech perception showed significant hemodynamic activation of right-hemispheric primary visual cortex (V1), contralateral fusiform gyrus (FG), and bilateral pulvinar (Pv). Conclusions: Presumably, FG supports the left-hemispheric perisylvian “language network”, i.e., IFG and superior temporal lobe, during the (segmental) sequencing of verbal utterances whereas the collaboration of bilateral pulvinar, right auditory cortex, and ipsilateral V1 implements a signal-driven timing mechanism related to syllabic (suprasegmental) modulation of the speech signal. These data structures, conveyed via left SMA to the perisylvian “language zones”, might facilitate – under time-critical conditions – the consolidation of linguistic information at the level of verbal working memory. PMID:23879896
The interactional significance of formulas in autistic language.
Dobbinson, Sushie; Perkins, Mick; Boucher, Jill
2003-01-01
The phenomenon of echolalia in autistic language is well documented. Whilst much early research dismissed echolalia as merely an indicator of cognitive limitation, later work identified particular discourse functions of echolalic utterances. The work reported here extends the study of the interactional significance of echolalia to formulaic utterances. Audio and video recordings of conversations between the first author and two research participants were transcribed and analysed according to a Conversation Analysis framework and a multi-layered linguistic framework. Formulaic language was found to have predictable interactional significance within the language of an individual with autism, and the generic phenomenon of formulaicity in company with predictable discourse function was seen to hold across the research participants, regardless of cognitive ability. The implications of formulaicity in autistic language for acquisition and processing mechanisms are discussed.
Second Language Learners and Speech Act Comprehension
ERIC Educational Resources Information Center
Holtgraves, Thomas
2007-01-01
Recognizing the specific speech act ( Searle, 1969) that a speaker performs with an utterance is a fundamental feature of pragmatic competence. Past research has demonstrated that native speakers of English automatically recognize speech acts when they comprehend utterances (Holtgraves & Ashley, 2001). The present research examined whether this…
Finding the music of speech: Musical knowledge influences pitch processing in speech.
Vanden Bosch der Nederlanden, Christina M; Hannon, Erin E; Snyder, Joel S
2015-10-01
Few studies comparing music and language processing have adequately controlled for low-level acoustical differences, making it unclear whether differences in music and language processing arise from domain-specific knowledge, acoustic characteristics, or both. We controlled acoustic characteristics by using the speech-to-song illusion, which often results in a perceptual transformation to song after several repetitions of an utterance. Participants performed a same-different pitch discrimination task for the initial repetition (heard as speech) and the final repetition (heard as song). Better detection was observed for pitch changes that violated rather than conformed to Western musical scale structure, but only when utterances transformed to song, indicating that music-specific pitch representations were activated and influenced perception. This shows that music-specific processes can be activated when an utterance is heard as song, suggesting that the high-level status of a stimulus as either language or music can be behaviorally dissociated from low-level acoustic factors. Copyright © 2015 Elsevier B.V. All rights reserved.
The Importance of Form in Skinner's Analysis of Verbal Behavior and a Further Step
Vargas, E. A.
2013-01-01
A series of quotes from B. F. Skinner illustrates the importance of form in his analysis of verbal behavior. In that analysis, form plays an important part in contingency control. Form and function complement each other. Function, the array of variables that control a verbal utterance, dictates the meaning of a specified form; form, as stipulated by a verbal community, indicates that meaning. The mediational actions that shape verbal utterances do not necessarily encounter their controlling variables. These are inferred from the form of the verbal utterance. Form carries the burden of implied meaning and underscores the importance of the verbal community in the expression of all the forms of language. Skinner's analysis of verbal behavior and the importance of form within that analysis provide the foundation by which to investigate language. But a further step needs to be undertaken to examine and to explain the abstractions of language as an outcome of action at an aggregate level. PMID:23814376
Rydell, P J; Mirenda, P
1991-06-01
The effects of specific types of adult antecedent utterances (high vs. low constraint) on the verbal behaviors produced by three subjects with autism were examined. Adult utterance types were differentiated in terms of the amount of control the adults exhibited in their verbal interactions with the subjects during a free play setting. Videotaped interactions were analyzed and coded according to a predetermined categorical system. The results of this investigation suggest that the level of linguistic constraint exerted on the child interactants during naturalistic play sessions affected their communicative output. The overall findings suggest that (a) adult high constraint utterances elicited more verbal utterances in general, as well as a majority of the subjects' echolalia; (b) adult low constraint utterances elicited more subject high constraint utterances; and (c) the degree of constraint of adult utterances did not appear to influence the mean lengths of subjects' utterances. The results are discussed in terms of their implications for educational interventions, and suggestions are made for future research concerning the dynamics of echolalia in interactive contexts.
Lee, Chia-Cheng; Jhang, Yuna; Chen, Li-mei; Relyea, George; Oller, D. Kimbrough
2016-01-01
Prior research on ambient-language effects in babbling has often suggested infants produce language-specific phonological features within the first year. These results have been questioned in research failing to find such effects and challenging the positive findings on methodological grounds. We studied English- and Chinese-learning infants at 8, 10, and 12 months and found listeners could not detect ambient-language effects in the vast majority of infant utterances, but only in items deemed to be words or to contain canonical syllables that may have made them sound like words with language-specific shapes. Thus, the present research suggests the earliest ambient-language effects may be found in emerging lexical items or in utterances influenced by language-specific features of lexical items. Even the ambient-language effects for infant canonical syllables and words were very small compared with ambient-language effects for meaningless but phonotactically well-formed syllable sequences spoken by adult native speakers of English and Chinese. PMID:28496393
Konst, Emmy M; Rietveld, Toni; Peters, Herman F M; Kuijpers-Jagtman, Anne Marie
2003-07-01
To investigate the effects of infant orthopedics (IO) on the language skills of children with complete unilateral cleft lip and palate (UCLP). In a prospective randomized clinical trial (Dutchcleft), two groups of children with complete UCLP were followed up longitudinally: one group was treated with IO based on a modified Zurich approach in the first year of life (IO group); the other group did not receive this treatment (non-IO group). At the ages of 2, 2½, 3, and 6 years, language development was evaluated in 12 children (six IO and six non-IO). Receptive language skills were assessed using the Reynell test. Expressive language skills of the toddlers were evaluated by calculating mean length of utterance (MLU) and mean length of longest utterances (MLLU); in the 6-year-olds, the expressive language skills were measured using standardized Dutch language tests. The participants had complete UCLP without soft tissue bands or other malformations. IO did not affect the receptive language skills. However, the expressive language measures MLU and MLLU were influenced by IO. At ages 2½ and 3 years, the IO group produced longer utterances than the non-IO group. In the follow-up, the difference in expressive language between the two groups was no longer significant. Children treated with IO during their first year of life produced longer sentences than non-IO children at the ages of 2½ and 3 years. At 6 years of age, both groups presented similar expressive language skills. Hence, IO treatment did not have long-lasting effects on language development.
Using a Language Generation System for Second Language Learning.
ERIC Educational Resources Information Center
Levison, Michael; Lessard, Greg
1996-01-01
Describes a language generation system, which, given data files describing a natural language, generates utterances of the class the user has specified. The system can exercise control over the syntax, lexicon, morphology, and semantics of the language. This article explores a range of the system's potential applications to second-language…
Ambrose, Sophie E; Walker, Elizabeth A; Unflat-Berry, Lauren M; Oleson, Jacob J; Moeller, Mary Pat
2015-01-01
The primary objective of this study was to examine the quantity and quality of caregiver talk directed to children who are hard of hearing (CHH) compared with children with normal hearing (CNH). For the CHH only, the study explored how caregiver input changed as a function of child age (18 months versus 3 years), which child and family factors contributed to variance in caregiver linguistic input at 18 months and 3 years, and how caregiver talk at 18 months related to child language outcomes at 3 years. Participants were 59 CNH and 156 children with bilateral, mild-to-severe hearing loss. When children were approximately 18 months and/or 3 years of age, caregivers and children participated in a 5-min semistructured, conversational interaction. Interactions were transcribed and coded for two features of caregiver input representing quantity (number of total utterances and number of total words) and four features representing quality (number of different words, mean length of utterance in morphemes, proportion of utterances that were high level, and proportion of utterances that were directing). In addition, at the 18-month visit, parents completed a standardized questionnaire regarding their child's communication development. At the 3-year visit, a clinician administered a standardized language measure. At the 18-month visit, the CHH were exposed to a greater proportion of directing utterances than the CNH. At the 3-year visit, there were significant differences between the CNH and CHH for number of total words and all four of the quality variables, with the CHH being exposed to fewer words and lower quality input. Caregivers generally provided higher quality input to CHH at the 3-year visit compared with the 18-month visit. At the 18-month visit, quantity variables, but not quality variables, were related to several child and family factors. At the 3-year visit, the variable most strongly related to caregiver input was child language. 
Longitudinal analyses indicated that quality, but not quantity, of caregiver linguistic input at 18 months was related to child language abilities at 3 years, with directing utterances accounting for significant unique variance in child language outcomes. Although caregivers of CHH increased their use of quality features of linguistic input over time, the differences when compared with CNH suggest that some caregivers may need additional support to provide their children with optimal language learning environments. This is particularly important given the relationships that were identified between quality features of caregivers' linguistic input and children's language abilities. Family supports should include a focus on developing a style that is conversational eliciting as opposed to directive.
Reference in Action: Links between Pointing and Language
ERIC Educational Resources Information Center
Cooperrider, Kensy Andrew
2011-01-01
When referring to things in the world, speakers produce utterances that are composites of speech and action. Pointing gestures are a pervasive part of such composite utterances, but many questions remain about exactly how pointing is integrated with speech. In this dissertation I present three strands of research that investigate relations of…
Utterance in the Classroom: Dialogic Motives for Invention.
ERIC Educational Resources Information Center
Hunt, Russell A.
Collaboration in writing is not confined to conventional multiple authorship and peer editing, but extends across the text to include its readers. Strong support for language development comes from dialogic situations in which student writing is created as a response to some other utterance, and yet classrooms rarely support such situations. The…
Passionate Utterance and Moral Education
ERIC Educational Resources Information Center
Munday, Ian
2009-01-01
This paper explores Stanley Cavell's notion of "passionate utterance", which acts as an extension of/departure from (we might read it as both) J. L. Austin's theory of the performative. Cavell argues that Austin, having made the revolutionary discovery that truth claims in language are bound up with how words perform, then gets bogged down in convention…
Differentiating Children with and without Language Impairment Based on Grammaticality
ERIC Educational Resources Information Center
Eisenberg, Sarita L.; Guo, Ling-Yu
2013-01-01
Purpose: This study compared the diagnostic accuracy of a general grammaticality measure (i.e., percentage grammatical utterance; PGU) to 2 less comprehensive measures of grammaticality--a measure that excluded utterances without a subject and/or main verb (i.e., percentage sentence point; PSP) and a measure that looked only at verb tense errors…
Prosodic skills in children with Down syndrome and in typically developing children.
Zampini, Laura; Fasolo, Mirco; Spinelli, Maria; Zanchi, Paola; Suttora, Chiara; Salerni, Nicoletta
2016-01-01
Many studies have analysed language development in children with Down syndrome to understand better the nature of their linguistic delays and the reason why these delays, particularly those in the morphosyntactic area, seem greater than their cognitive impairment. However, the prosodic characteristics of language development in children with Down syndrome have been scarcely investigated. To analyse the prosodic skills of children with Down syndrome in the production of multi-word utterances. Data on the prosodic skills of these children were compared with data on typically developing children matched on developmental age and vocabulary size. Between-group differences and the relationships between prosodic and syntactic skills were investigated. The participants were nine children with Down syndrome (who ranged in chronological age from 45 to 63 months and had a mean developmental age of 30 months) and 12 30-month-old typically developing children. The children in both groups had a vocabulary size of approximately 450 words. The children's spontaneous productions were recorded during observations of mother-child play sessions. Data analyses showed that despite their morphosyntactic difficulties, children with Down syndrome were able to master some aspects of prosody in multi-word utterances. They were able to produce single intonation multi-word utterances on the same level as typically developing children. In addition, the intonation contour of their utterances was not negatively influenced by syntactic complexity, contrary to what occurred in typically developing children, although it has to be considered that the utterances produced by children with Down syndrome were less complex than those produced by children in the control group. However, children with Down syndrome appeared to be less able than typically developing children to use intonation to express the pragmatic interrogative function. 
The findings are discussed in light of the effects of social experience on the prosodic realization of utterances. © 2015 Royal College of Speech and Language Therapists.
ERIC Educational Resources Information Center
Hooshyar, Nahid T.
Maternal language directed to 21 nonhandicapped, 21 Down syndrome, and 19 language impaired preschool children was examined. The three groups (all Caucasian and middle-class) were matched in mean length of utterance (MLU) and in developmental skills as measured on the Vineland Adaptive Behavior Scale. Mother-child language interaction was…
Trudeau, Natacha; Sutton, Ann; Dagenais, Emmanuelle; de Broeck, Sophie; Morford, Jill
2007-10-01
This study investigated the impact of syntactic complexity and task demands on construction of utterances using picture communication symbols by participants from 3 age groups with no communication disorders. Participants were 30 children (7;0 [years;months] to 8;11), 30 teenagers (12;0 to 13;11), and 30 adults (18 years and above). All participants constructed graphic symbol utterances to describe photographs presented with spoken French stimuli. Stimuli included simple and complex (object relative and subject relative) utterances describing the photographs, which were presented either 1 at a time (neutral condition) or in an array of 4 (contrast condition). Simple utterances led to more uniform response patterns than complex utterances. Among complex utterances, subject relative sentences appeared more difficult to convey. Increasing the need for message clarity (i.e., contrast condition) elicited changes in the production of graphic symbol sequences for complex propositions. The effects of syntactic complexity and task demands were more pronounced for children. Graphic symbol utterance construction appears to involve more than simply transferring spoken language skills. One possible explanation is that this type of task requires higher levels of metalinguistic ability. Clinical implications and directions for further research are discussed.
Translingual Literacy, Language Difference, and Matters of Agency
ERIC Educational Resources Information Center
Lu, Min-Zhan; Horner, Bruce
2013-01-01
We argue that composition scholarship's defenses of language differences in student writing reinforce dominant ideology's spatial framework conceiving language difference as deviation from a norm of sameness. We argue instead for adopting a temporal-spatial framework defining difference as the norm of utterances, and defining languages,…
Generative and Item-Specific Knowledge of Language
ERIC Educational Resources Information Center
Morgan, Emily Ida Popper
2016-01-01
The ability to generate novel utterances compositionally using generative knowledge is a hallmark property of human language. At the same time, languages contain non-compositional or idiosyncratic items, such as irregular verbs, idioms, etc. This dissertation asks how and why language achieves a balance between these two systems--generative and…
Bean Soup Translation: Flexible, Linguistically-Motivated Syntax for Machine Translation
ERIC Educational Resources Information Center
Mehay, Dennis Nolan
2012-01-01
Machine translation (MT) systems attempt to translate texts from one language into another by translating words from a "source language" and rearranging them into fluent utterances in a "target language." When the two languages organize concepts in very different ways, knowledge of their general sentence structure, or…
Dilemmas in Implementing Language Rights in Multilingual Uganda
ERIC Educational Resources Information Center
Namyalo, Saudah; Nakayiza, Judith
2015-01-01
Even after decades of uttering platitudes about the languages of Uganda, language policy pronouncements have invariably turned out to be public relations statements rather than blueprints for action. A serious setback for the right to linguistic equality and the right to use Uganda's indigenous languages has largely hinged on the language…
Assessing the Impact of Conversational Overlap in Content on Child Language Growth
ERIC Educational Resources Information Center
Che, Elizabeth S.; Brooks, Patricia J.; Alarcon, Maria F.; Yannaco, Francis D.; Donnelly, Seamus
2018-01-01
When engaged in conversation, both parents and children tend to re-use words that their partner has just said. This study explored whether proportions of maternal and/or child utterances that overlapped in content with what their partner had just said contributed to growth in mean length of utterance (MLU), developmental sentence score, and…
Longobardi, Emiddia; Rossi-Arnaud, Clelia; Spataro, Pietro; Putnick, Diane L; Bornstein, Marc H
2015-01-01
Because of its structural characteristics, specifically the prevalence of verb types in infant-directed speech and frequent pronoun-dropping, the Italian language offers an attractive opportunity to investigate the predictive effects of input frequency and positional salience on children's acquisition of nouns and verbs. We examined this issue in a sample of twenty-six mother-child dyads whose spontaneous conversations were recorded, transcribed, and coded at 1;4 and 1;8. The percentages of nouns occurring in the final position of maternal utterances at 1;4 predicted children's production of noun types at 1;8. For verbs, children's growth rates were positively predicted by the percentages of input verbs occurring in utterance-initial position, but negatively predicted by the percentages of verbs located in the final position of maternal utterances at 1;4. These findings clearly illustrate that the effects of positional salience vary across lexical categories.
Kubota, Yoshie; Yano, Yoshitaka; Seki, Susumu; Takada, Kaori; Sakuma, Mio; Morimoto, Takeshi; Akaike, Akinori; Hiraide, Atsushi
2011-04-11
To determine the value of using the Roter Interaction Analysis System during objective structured clinical examinations (OSCEs) to assess pharmacy students' communication competence. As pharmacy students completed a clinical OSCE involving an interview with a simulated patient, 3 experts used a global rating scale to assess students' overall performance in the interview, and both the student's and patient's languages were coded using the Roter Interaction Analysis System (RIAS). The coders recorded the number of utterances (ie, units of spoken language) in each RIAS category. Correlations between the raters' scores and the number and types of utterances were examined. There was a significant correlation between students' global rating scores on the OSCE and the number of utterances in the RIAS socio-emotional category but not the RIAS business category. The RIAS proved to be a useful tool for assessing the socio-emotional aspect of students' interview skills.
Gesture and Motor Skill in Relation to Language in Children with Language Impairment
ERIC Educational Resources Information Center
Iverson, Jana M.; Braddock, Barbara A.
2011-01-01
Purpose: To examine gesture and motor abilities in relation to language in children with language impairment (LI). Method: Eleven children with LI (aged 2;7 to 6;1 [years;months]) and 16 typically developing (TD) children of similar chronological ages completed 2 picture narration tasks, and their language (rate of verbal utterances, mean length…
Language Sampling for Preschoolers With Severe Speech Impairments
Binger, Cathy; Ragsdale, Jamie; Bustos, Aimee
2016-01-01
Purpose The purposes of this investigation were to determine if measures such as mean length of utterance (MLU) and percentage of comprehensible words can be derived reliably from language samples of children with severe speech impairments and if such measures correlate with tools that measure constructs assumed to be related. Method Language samples of 15 preschoolers with severe speech impairments (but receptive language within normal limits) were transcribed independently by 2 transcribers. Nonparametric statistics were used to determine which measures, if any, could be transcribed reliably and to determine if correlations existed between language sample measures and standardized measures of speech, language, and cognition. Results Reliable measures were extracted from the majority of the language samples, including MLU in words, mean number of syllables per utterance, and percentage of comprehensible words. Language sample comprehensibility measures were correlated with a single word comprehensibility task. Also, language sample MLUs and mean length of the participants' 3 longest sentences from the MacArthur–Bates Communicative Development Inventory (Fenson et al., 2006) were correlated. Conclusion Language sampling, given certain modifications, may be used for some 3- to 5-year-old children with normal receptive language who have severe speech impairments to provide reliable expressive language and comprehensibility information. PMID:27552110
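The sample measures described in this record are straightforward to compute once a sample is transcribed. The following is an illustrative sketch, not code from the study: the toy transcript and the "xxx" marker for unintelligible words (a common transcription convention) are assumptions.

```python
# Sketch: MLU in words and percentage of comprehensible words
# from a transcribed language sample (one string per utterance).

def mlu_in_words(utterances):
    """MLU in words = total words / number of utterances."""
    total_words = sum(len(u.split()) for u in utterances)
    return total_words / len(utterances)

def pct_comprehensible(utterances, unintelligible="xxx"):
    """Percentage of transcribed words not marked unintelligible."""
    words = [w for u in utterances for w in u.split()]
    clear = [w for w in words if w != unintelligible]
    return 100.0 * len(clear) / len(words)

# Hypothetical 4-utterance sample with 2 unintelligible words.
sample = [
    "want ball",
    "mommy xxx go",
    "xxx",
    "big dog run fast",
]

print(mlu_in_words(sample))       # 10 words / 4 utterances = 2.5
print(pct_comprehensible(sample)) # 8 of 10 words clear = 80.0
```

In practice such measures are derived with tools like SALT or CLAN over much larger samples, and morpheme-based MLU requires morphological segmentation beyond simple word splitting.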
Attention to Language: Lessons Learned at the Dinner Table.
ERIC Educational Resources Information Center
Ely, Richard; Gleason, Jean Berko; MacGibbon, Ann; Zaretsky, Elena
2001-01-01
Studied the dinner table conversations of 22 families with young children. Analyzed utterances for language-focused terms. Reported that metalinguistic uses exceeded pragmatic uses. Found that during routine social interactions, parents provide children with potentially important information about the communicative functions of language.…
Listening Skill Development through Massive Comprehensible Input.
ERIC Educational Resources Information Center
Kalivoda, Theodore B.
Foreign language listening comprehension instruction too often relies on brief selections read aloud or sporadic teacher talk interspersed with native language (NL) utterances, which fail to provide sustained listening practice. NL is overused for grammar-related talk, reducing target language exposure, encouraging translation, and hindering…
All Together Now: Concurrent Learning of Multiple Structures in an Artificial Language
ERIC Educational Resources Information Center
Romberg, Alexa R.; Saffran, Jenny R.
2013-01-01
Natural languages contain many layers of sequential structure, from the distribution of phonemes within words to the distribution of phrases within utterances. However, most research modeling language acquisition using artificial languages has focused on only one type of distributional structure at a time. In two experiments, we investigated adult…
Human-Level Natural Language Understanding: False Progress and Real Challenges
ERIC Educational Resources Information Center
Bignoli, Perrin G.
2013-01-01
The field of Natural Language Processing (NLP) focuses on the study of how utterances composed of human-level languages can be understood and generated. Typically, there are considered to be three intertwined levels of structure that interact to create meaning in language: syntax, semantics, and pragmatics. Not only is a large amount of…
Iconicity and the Emergence of Combinatorial Structure in Language
ERIC Educational Resources Information Center
Verhoef, Tessa; Kirby, Simon; de Boer, Bart
2016-01-01
In language, recombination of a discrete set of meaningless building blocks forms an unlimited set of possible utterances. How such combinatorial structure emerged in the evolution of human language is increasingly being studied. It has been shown that it can emerge when languages culturally evolve and adapt to human cognitive biases. How the…
Pickering, Martin J; Garrod, Simon
2013-08-01
Our target article proposed that language production and comprehension are interwoven, with speakers making predictions of their own utterances and comprehenders making predictions of other people's utterances at different linguistic levels. Here, we respond to comments about such issues as cognitive architecture and its neural basis, learning and development, monitoring, the nature of forward models, communicative intentions, and dialogue.
Understanding Student Language: An Unsupervised Dialogue Act Classification Approach
ERIC Educational Resources Information Center
Ezen-Can, Aysu; Boyer, Kristy Elizabeth
2015-01-01
Within the landscape of educational data, textual natural language is an increasingly vast source of learning-centered interactions. In natural language dialogue, student contributions hold important information about knowledge and goals. Automatically modeling the dialogue act of these student utterances is crucial for scaling natural language…
Voice recognition through phonetic features with Punjabi utterances
NASA Astrophysics Data System (ADS)
Kaur, Jasdeep; Juglan, K. C.; Sharma, Vishal; Upadhyay, R. K.
2017-07-01
This paper deals with perception and disorders of speech with reference to the Punjabi language. Given the importance of voice identification, various parameters of speaker identification have been studied. The speech material was recorded with a tape recorder in both normal and disguised modes of utterance. From the recorded speech materials, utterances free from noise were selected for auditory and acoustic spectrographic analysis. The comparison of normal and disguised speech of seven subjects is reported. The fundamental frequency (F0) at similar places, plosive duration at certain phonemes, amplitude ratio (A1:A2), etc. were compared in normal and disguised speech. It was found that the formant frequency of normal and disguised speech remains almost similar only if it is compared at the position of the same vowel quality and quantity. If the vowel is more closed or more open in the disguised utterance, the formant frequency will change in comparison to the normal utterance. The amplitude ratio (A1:A2) is found to be speaker dependent and remains unchanged in the disguised utterance. However, this value may shift in a disguised utterance if cross-sectioning is not done at the same location.
Language change in a multiple group society
NASA Astrophysics Data System (ADS)
Pop, Cristina-Maria; Frey, Erwin
2013-08-01
The processes leading to change in languages are manifold. In order to reduce ambiguity in the transmission of information, agreement on a set of conventions for recurring problems is favored. In addition to that, speakers tend to use particular linguistic variants associated with the social groups they identify with. The influence of other groups propagating across the speech community as new variant forms sustains the competition between linguistic variants. With the utterance selection model, an evolutionary description of language change, Baxter et al. [Phys. Rev. E 73, 046118 (2006)] have provided a mathematical formulation of the interactions inside a group of speakers, exploring the mechanisms that lead to or inhibit the fixation of linguistic variants. In this paper, we take the utterance selection model one step further by describing a speech community consisting of multiple interacting groups. Tuning the interaction strength between groups allows us to gain deeper understanding about the way in which linguistic variants propagate and how their distribution depends on the group partitioning. Both for the group size and the number of groups we find scaling behaviors with two asymptotic regimes. If groups are strongly connected, the dynamics is that of the standard utterance selection model, whereas if their coupling is weak, its magnitude along with the system size governs the way consensus is reached. Furthermore, we find that a high influence of the interlocutor on a speaker's utterances can act as a counterweight to group segregation.
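The multiple-group dynamics described in this abstract can be sketched in a heavily simplified toy form. This is not Baxter et al.'s or Pop and Frey's exact formulation: the binary variant, the single learning rate `lam`, and the in-/out-group weights `h_in`/`h_out` standing in for the papers' coupling parameters are all illustrative assumptions.

```python
import random

def step(x, groups, T=10, lam=0.05, h_in=1.0, h_out=0.1):
    """One round of a simplified utterance-selection dynamics: every
    speaker i meets a random partner j; each produces T tokens of a
    binary variant sampled from its current usage frequency, and i
    nudges its frequency toward a blend of its own and j's observed
    usage. Same-group partners carry more weight (h_in) than
    out-group partners (h_out)."""
    new = x[:]
    for i in range(len(x)):
        j = random.choice([k for k in range(len(x)) if k != i])
        ni = sum(random.random() < x[i] for _ in range(T))  # i's tokens
        nj = sum(random.random() < x[j] for _ in range(T))  # j's tokens
        h = h_in if groups[i] == groups[j] else h_out
        perceived = (ni / T + h * nj / T) / (1 + h)
        new[i] = x[i] + lam * (perceived - x[i])  # convex update
    return new

random.seed(0)
# Two groups of 5 speakers starting with opposite preferred variants.
x = [0.9] * 5 + [0.1] * 5
groups = [0] * 5 + [1] * 5
for _ in range(2000):
    x = step(x, groups)
# Frequencies stay in [0, 1] because each update is a convex
# combination of the old frequency and an observed frequency.
```

Raising `h_out` toward `h_in` strengthens inter-group coupling and pushes the community toward the single-group regime described in the abstract.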
ERIC Educational Resources Information Center
Casey, Laura Baylot; Bicard, David F.
2009-01-01
Language development in typically developing children has a very predictable pattern beginning with crying, cooing, babbling, and gestures along with the recognition of spoken words, comprehension of spoken words, and then one word utterances. This predictable pattern breaks down for children with language disorders. This article will discuss…
Ways of looking ahead: hierarchical planning in language production.
Lee, Eun-Kyung; Brown-Schmidt, Sarah; Watson, Duane G
2013-12-01
It is generally assumed that language production proceeds incrementally, with chunks of linguistic structure planned ahead of speech. Extensive research has examined the scope of language production and suggests that the size of planned chunks varies across contexts (Ferreira & Swets, 2002; Wagner & Jescheniak, 2010). By contrast, relatively little is known about the structure of advance planning, specifically whether planning proceeds incrementally according to the surface structure of the utterance, or whether speakers plan according to the hierarchical relationships between utterance elements. In two experiments, we examine the structure and scope of lexical planning in language production using a picture description task. Analyses of speech onset times and word durations show that speakers engage in hierarchical planning such that structurally dependent lexical items are planned together and that hierarchical planning occurs for both direct and indirect dependencies. Copyright © 2013 Elsevier B.V. All rights reserved.
Dickinson, David K; Porche, Michelle V
2011-01-01
Indirect effects of preschool classroom indexes of teacher talk were tested on fourth-grade outcomes for 57 students from low-income families in a longitudinal study of classroom and home influences on reading. Detailed observations and audiotaped teacher and child language data were coded to measure content and quantity of verbal interactions in preschool classrooms. Preschool teachers' use of sophisticated vocabulary during free play predicted fourth-grade reading comprehension and word recognition (mean age = 9;7), with effects mediated by kindergarten child language measures (mean age = 5;6). In large group preschool settings, teachers' attention-getting utterances were directly related to later comprehension. Preschool teachers' correcting utterances and analytic talk about books, and early support in the home for literacy predicted fourth-grade vocabulary, as mediated by kindergarten receptive vocabulary. © 2011 The Authors. Child Development © 2011 Society for Research in Child Development, Inc.
Phrase frequency effects in language production.
Janssen, Niels; Barber, Horacio A
2012-01-01
A classic debate in the psychology of language concerns the question of the grain-size of the linguistic information that is stored in memory. One view is that only morphologically simple forms are stored (e.g., 'car', 'red'), and that more complex forms of language such as multi-word phrases (e.g., 'red car') are generated on-line from the simple forms. In two experiments we tested this view. In Experiment 1, participants produced noun+adjective and noun+noun phrases that were elicited by experimental displays consisting of colored line drawings and two superimposed line drawings. In Experiment 2, participants produced noun+adjective and determiner+noun+adjective utterances elicited by colored line drawings. In both experiments, naming latencies decreased with increasing frequency of the multi-word phrase, and were unaffected by the frequency of the object name in the utterance. These results suggest that the language system is sensitive to the distribution of linguistic information at grain-sizes beyond individual words.
Development of a Mandarin-English Bilingual Speech Recognition System for Real World Music Retrieval
NASA Astrophysics Data System (ADS)
Zhang, Qingqing; Pan, Jielin; Lin, Yang; Shao, Jian; Yan, Yonghong
In recent decades, there has been a great deal of research into the problem of bilingual speech recognition: developing a recognizer that can handle inter- and intra-sentential language switching between two languages. This paper presents our recent work on the development of a grammar-constrained, Mandarin-English bilingual Speech Recognition System (MESRS) for real world music retrieval. Two of the main difficulties in building bilingual speech recognition systems for real world applications are tackled in this paper. One is to balance the performance and the complexity of the bilingual speech recognition system; the other is to effectively deal with matrix-language accents in the embedded language. In order to process the intra-sentential language switching and reduce the amount of data required to robustly estimate statistical models, a compact single set of bilingual acoustic models derived by phone set merging and clustering is developed instead of using two separate monolingual models for each language. In our study, a novel two-pass phone clustering method based on a Confusion Matrix (TCM) is presented and compared with the log-likelihood measure method. Experiments show that TCM achieves better performance. Since potential system users' native language is Mandarin, which is regarded as the matrix language in our application, their pronunciations of English as the embedded language usually contain Mandarin accents. In order to deal with these accents, different non-native adaptation approaches are investigated. Experiments show that the model retraining method outperforms other common adaptation methods such as Maximum A Posteriori (MAP).
With the effective incorporation of the phone clustering and non-native adaptation approaches, the Phrase Error Rate (PER) of MESRS for English utterances was reduced by a relative 24.47% compared to the baseline monolingual English system, while the PER on Mandarin utterances was comparable to that of the baseline monolingual Mandarin system. For bilingual utterances, a 22.37% relative PER reduction was achieved.
Perception of English Intonation by English, Spanish, and Chinese Listeners
ERIC Educational Resources Information Center
Grabe, Esther; Rosner, Burton S.; Garcia-Albea, Jose E.; Zhou, Xiaolin
2003-01-01
Native language affects the perception of segmental phonetic structure, of stress, and of semantic and pragmatic effects of intonation. Similarly, native language might influence the perception of similarities and differences among intonation contours. To test this hypothesis, a cross-language experiment was conducted. An English utterance was…
Production of Infinitival Complements by Children with Specific Language Impairment
ERIC Educational Resources Information Center
Arndt, Karen Barako; Schuele, C. Melanie
2012-01-01
The purpose of this study was to explore the production of infinitival complements by children with specific language impairment (SLI) as compared with mean length of utterance (MLU)-matched children in an effort to clarify inconsistencies in the literature. Spontaneous language samples were analysed for infinitival complements (reduced…
ERIC Educational Resources Information Center
Hilger, Allison I.; Loucks, Torrey M. J.; Quinto-Pozos, David; Dye, Matthew W. G.
2015-01-01
A study was conducted to examine production variability in American Sign Language (ASL) in order to gain insight into the development of motor control in a language produced in another modality. Production variability was characterized through the spatiotemporal index (STI), which represents production stability in whole utterances and is a…
Interaction of Language Processing and Motor Skill in Children with Specific Language Impairment
ERIC Educational Resources Information Center
DiDonato Brumbach, Andrea C.; Goffman, Lisa
2014-01-01
Purpose: To examine how language production interacts with speech motor and gross and fine motor skill in children with specific language impairment (SLI). Method: Eleven children with SLI and 12 age-matched peers (4-6 years) produced structurally primed sentences containing particles and prepositions. Utterances were analyzed for errors and for…
Teaching strategies in inclusive classrooms with deaf students.
Cawthon, S W
2001-01-01
The purpose of this study was to investigate teacher speech and educational philosophies in inclusive classrooms with deaf and hearing students. Data were collected from language transcripts, classroom observations, and teacher interviews. Total speech output, Mean Length of Utterance, proportion of questions to statements, and proportion of open to closed questions were calculated for each teacher. Teachers directed fewer utterances, on average, to deaf than to hearing students but showed different language patterns on the remaining measures. Inclusive philosophies focused on an individualized approach to teaching, attention to deaf culture, advocacy, smaller class sizes, and an openness to diversity in the classroom. The interpreters' role in the classroom included translating teacher speech, voicing student sign language, mediating communication between deaf students and their peers, and monitoring overall classroom behavior.
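Measures like those computed per teacher above can be sketched as follows. The wh-word test for classifying "open" questions is our own illustrative heuristic, not the study's actual coding scheme.

```python
def speech_measures(utterances):
    """Compute simple teacher-talk measures: mean length of utterance
    (MLU, in words), question-to-statement ratio, and open-to-closed
    question ratio. 'Open' questions are approximated as those starting
    with a wh-word (an illustrative heuristic only)."""
    wh = {"what", "why", "how", "who", "where", "when", "which"}
    n_words = sum(len(u.split()) for u in utterances)
    questions = [u for u in utterances if u.rstrip().endswith("?")]
    open_qs = [q for q in questions if q.split()[0].lower() in wh]
    n_statements = len(utterances) - len(questions)
    return {
        "total_utterances": len(utterances),
        "mlu_words": n_words / len(utterances),
        "question_to_statement": len(questions) / max(n_statements, 1),
        "open_to_closed": len(open_qs) / max(len(questions) - len(open_qs), 1),
    }
```

For example, the four utterances "What do you see?", "Sit down please.", "Is it red?", "Good job." yield an MLU of 3.0 words, a question/statement ratio of 1.0, and one open question against one closed question.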
[Big data, medical language and biomedical terminology systems].
Schulz, Stefan; López-García, Pablo
2015-08-01
A variety of rich terminology systems, such as thesauri, classifications, nomenclatures and ontologies support information and knowledge processing in health care and biomedical research. Nevertheless, human language, manifested as individually written texts, persists as the primary carrier of information, in the description of disease courses or treatment episodes in electronic medical records, and in the description of biomedical research in scientific publications. In the context of the discussion about big data in biomedicine, we hypothesize that the abstraction of the individuality of natural language utterances into structured and semantically normalized information facilitates the use of statistical data analytics to distil new knowledge out of textual data from biomedical research and clinical routine. Computerized human language technologies are constantly evolving and are increasingly ready to annotate narratives with codes from biomedical terminology. However, this depends heavily on linguistic and terminological resources. The creation and maintenance of such resources is labor-intensive. Nevertheless, it is sensible to assume that big data methods can be used to support this process. Examples include the learning of hierarchical relationships, the grouping of synonymous terms into concepts and the disambiguation of homonyms. Although clear evidence is still lacking, the combination of natural language technologies, semantic resources, and big data analytics is promising.
Knowledge and implicature: modeling language understanding as social cognition.
Goodman, Noah D; Stuhlmüller, Andreas
2013-01-01
Is language understanding a special case of social cognition? To help evaluate this view, we can formalize it as the rational speech-act theory: Listeners assume that speakers choose their utterances approximately optimally, and listeners interpret an utterance by using Bayesian inference to "invert" this model of the speaker. We apply this framework to model scalar implicature ("some" implies "not all," and "N" implies "not more than N"). This model predicts an interaction between the speaker's knowledge state and the listener's interpretation. We test these predictions in two experiments and find good fit between model predictions and human judgments. Copyright © 2013 Cognitive Science Society, Inc.
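The rational speech-act framework described above can be written down compactly. The sketch below derives the scalar implicature for a toy domain (how many of 3 apples are red); the utterance set, uniform priors, and the rationality parameter alpha are illustrative choices, not values from the paper.

```python
import math

# Worlds: how many of 3 apples are red; utterances with literal meanings.
WORLDS = [0, 1, 2, 3]
MEANINGS = {"none": lambda w: w == 0,
            "some": lambda w: w >= 1,
            "all":  lambda w: w == 3}

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def literal_listener(u):
    # L0: Bayesian update of a uniform prior with the literal meaning.
    return normalize({w: float(MEANINGS[u](w)) for w in WORLDS})

def speaker(w, alpha=1.0):
    # S1: chooses among true utterances, softmax-optimally w.r.t. L0.
    return normalize({u: math.exp(alpha * math.log(literal_listener(u)[w]))
                      for u in MEANINGS if MEANINGS[u](w)})

def pragmatic_listener(u, alpha=1.0):
    # L1: inverts the speaker model by Bayes' rule (uniform world prior).
    return normalize({w: speaker(w, alpha).get(u, 0.0) for w in WORLDS})
```

Because the speaker would have preferred "all" in the all-red world, the pragmatic listener hearing "some" shifts probability away from that world, which is the "some implies not all" implicature.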
Quantifying repetitive speech in autism spectrum disorders and language impairment.
van Santen, Jan P H; Sproat, Richard W; Hill, Alison Presmanes
2013-10-01
We report on an automatic technique for quantifying two types of repetitive speech: repetitions of what the child says him/herself (self-repeats) and of what is uttered by an interlocutor (echolalia). We apply this technique to a sample of 111 children between the ages of four and eight: 42 typically developing children (TD), 19 children with specific language impairment (SLI), 25 children with autism spectrum disorders (ASD) plus language impairment (ALI), and 25 children with ASD with normal, non-impaired language (ALN). The results indicate robust differences in echolalia between the TD and ASD groups as a whole (ALN + ALI), and between TD and ALN children. There were no significant differences between ALI and SLI children for echolalia or self-repetitions. The results confirm previous findings that children with ASD repeat the language of others more than other populations of children. On the other hand, self-repetition does not appear to be significantly more frequent in ASD, nor does it matter whether the child's echolalia occurred within one (immediate) or two turns (near-immediate) of the adult's original utterance. Furthermore, non-significant differences between ALN and SLI, between TD and SLI, and between ALI and TD are suggestive that echolalia may not be specific to ALN or to ASD in general. One important innovation of this work is an objective fully automatic technique for assessing the amount of repetition in a transcript of a child's utterances. © 2013 International Society for Autism Research, Wiley Periodicals, Inc.
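A crude version of this kind of repetition measure can be sketched with word-overlap scores. The published technique is more sophisticated; the Jaccard measure, the two-turn window, and the speaker labels below are our own illustrative assumptions.

```python
def repetition_scores(turns, window=2):
    """For each child utterance, return the maximal word overlap (Jaccard)
    with adult utterances in the previous `window` turns (an echolalia
    proxy) and with the child's own earlier utterances (a self-repeat
    proxy). `turns` is a list of (speaker, utterance) pairs with speaker
    in {"ADU", "CHI"}."""
    def jac(a, b):
        a, b = set(a.split()), set(b.split())
        return len(a & b) / len(a | b) if a | b else 0.0
    echo, self_rep, child_history = [], [], []
    for i, (spk, utt) in enumerate(turns):
        if spk != "CHI":
            continue
        prev_adult = [u for s, u in turns[max(0, i - window):i] if s == "ADU"]
        echo.append(max((jac(utt, u) for u in prev_adult), default=0.0))
        self_rep.append(max((jac(utt, u) for u in child_history), default=0.0))
        child_history.append(utt)
    return echo, self_rep
```

On a transcript where the child echoes the adult's "do you want juice" with "want juice", the echolalia score for that utterance is 0.5; a verbatim self-repeat later scores 1.0 on the self-repeat measure.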
Deep bottleneck features for spoken language identification.
Jiang, Bing; Song, Yan; Wei, Si; Liu, Jun-Hua; McLoughlin, Ian Vince; Dai, Li-Rong
2014-01-01
A key problem in spoken language identification (LID) is to design effective representations which are specific to language information. For example, in recent years, representations based on both phonotactic and acoustic features have proven their effectiveness for LID. Although advances in machine learning have led to significant improvements, LID performance is still lacking, especially for short duration speech utterances. With the hypothesis that language information is weak and represented only latently in speech, and is largely dependent on the statistical properties of the speech content, existing representations may be insufficient. Furthermore, they may be susceptible to the variations caused by different speakers, specific content of the speech segments, and background noise. To address this, we propose using Deep Bottleneck Features (DBF) for spoken LID, motivated by the success of Deep Neural Networks (DNN) in speech recognition. We show that DBFs can form a low-dimensional compact representation of the original inputs with a powerful descriptive and discriminative capability. To evaluate the effectiveness of this, we design two acoustic models, termed DBF-TV and parallel DBF-TV (PDBF-TV), using a DBF based i-vector representation for each speech utterance. Results on NIST language recognition evaluation 2009 (LRE09) show significant improvements over state-of-the-art systems. By fusing the output of phonotactic and acoustic approaches, we achieve an EER of 1.08%, 1.89% and 7.01% for 30 s, 10 s and 3 s test utterances respectively. Furthermore, various DBF configurations have been extensively evaluated, and an optimal system is proposed.
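Since the results above are reported as equal error rates, here is a minimal sketch of how an EER can be computed from raw detector scores; the score lists in the usage example are hypothetical.

```python
def equal_error_rate(target_scores, nontarget_scores):
    """Equal error rate (EER): the operating point where the miss rate
    on target trials equals the false-alarm rate on non-target trials.
    Scores are arbitrary detector outputs (higher = more target-like)."""
    thresholds = sorted(set(target_scores) | set(nontarget_scores))
    best = (0.0, 1.0)  # threshold below all scores: no misses, all false alarms
    for t in thresholds:
        miss = sum(s < t for s in target_scores) / len(target_scores)
        fa = sum(s >= t for s in nontarget_scores) / len(nontarget_scores)
        if abs(miss - fa) < abs(best[0] - best[1]):
            best = (miss, fa)
    return (best[0] + best[1]) / 2
```

This sweeps every candidate threshold and reports the midpoint of the closest miss/false-alarm pair, which is adequate for small score sets like the hypothetical one below.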
Teaching Sentential Intonation through Proverbs
ERIC Educational Resources Information Center
Yurtbasi, Metin
2012-01-01
Suprasegmental elements such as "stress," "pitch," "juncture" and "linkers" are language universals that are uttered naturally in the mother tongue without prior training but need to be learned systematically in the target language. Among other techniques of "sentential pronunciation teaching" to…
ERIC Educational Resources Information Center
Cancino, Herlinda; And Others
Three hypotheses are examined in relation to English copula and negative utterances produced by three native Spanish speakers. The hypotheses are interference, interlanguage, and L1=L2, which states that acquisition of a language by second language learners will parallel acquisition of the same language by first language learners. The results of the…
Pitch enhancement facilitates word learning across visual contexts
Filippi, Piera; Gingras, Bruno; Fitch, W. Tecumseh
2014-01-01
This study investigates word-learning using a new experimental paradigm that integrates three processes: (a) extracting a word out of a continuous sound sequence, (b) inferring its referential meanings in context, (c) mapping the segmented word onto its broader intended referent, such as other objects of the same semantic category, and to novel utterances. Previous work has examined the role of statistical learning and/or of prosody in each of these processes separately. Here, we combine these strands of investigation into a single experimental approach, in which participants viewed a photograph belonging to one of three semantic categories while hearing a complex, five-word utterance containing a target word. Six between-subjects conditions were tested with 20 adult participants each. In condition 1, the only cue to word-meaning mapping was the co-occurrence of word and referents. This statistical cue was present in all conditions. In condition 2, the target word was sounded at a higher pitch. In condition 3, random words were sounded at a higher pitch, creating an inconsistent cue. In condition 4, the duration of the target word was lengthened. In conditions 5 and 6, an extraneous acoustic cue and a visual cue were associated with the target word, respectively. Performance in this word-learning task was significantly higher than that observed with simple co-occurrence only when pitch prominence consistently marked the target word. We discuss implications for the pragmatic value of pitch marking as well as the relevance of our findings to language acquisition and language evolution. PMID:25566144
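The statistical co-occurrence cue and the pitch-prominence cue discussed above can be combined in a toy cross-situational learner. The `pitch_boost` weight and the trial format below are illustrative assumptions of our own, not the study's model.

```python
from collections import defaultdict

def learn_lexicon(trials, pitch_boost=2.0):
    """Minimal cross-situational learner: accumulate word-referent
    co-occurrence counts across trials, weighting a pitch-prominent
    word more heavily. Each trial is (words, prominent_word_or_None,
    referent); the learner maps each word to its best-supported referent."""
    counts = defaultdict(lambda: defaultdict(float))
    for words, prominent, referent in trials:
        for w in words:
            counts[w][referent] += pitch_boost if w == prominent else 1.0
    return {w: max(refs, key=refs.get) for w, refs in counts.items()}
```

With consistent prominence marking, the target word accumulates counts for its referent much faster than distractor words do, mirroring the advantage the study found for consistent pitch cues.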
Eviatar, Zohar; Just, Marcel Adam
2006-01-01
Higher levels of discourse processing evoke patterns of cognition and brain activation that extend beyond the literal comprehension of sentences. We used fMRI to examine brain activation patterns while 16 healthy participants read brief three-sentence stories that concluded with either a literal, metaphoric, or ironic sentence. The fMRI images acquired during the reading of the critical sentence revealed a selective response of the brain to the two types of nonliteral utterances. Metaphoric utterances resulted in significantly higher levels of activation in the left inferior frontal gyrus and in bilateral inferior temporal cortex than the literal and ironic utterances. Ironic statements resulted in significantly higher activation levels than literal statements in the right superior and middle temporal gyri, with metaphoric statements resulting in intermediate levels in these regions. The findings show differential hemispheric sensitivity to these aspects of figurative language, and are relevant to models of the functional cortical architecture of language processing in connected discourse. PMID:16806316
Kubota, Yoshie; Seki, Susumu; Takada, Kaori; Sakuma, Mio; Morimoto, Takeshi; Akaike, Akinori; Hiraide, Atsushi
2011-01-01
Objective: To determine the value of using the Roter Interaction Analysis System during objective structured clinical examinations (OSCEs) to assess pharmacy students' communication competence. Methods: As pharmacy students completed a clinical OSCE involving an interview with a simulated patient, 3 experts used a global rating scale to assess students' overall performance in the interview, and both the student's and the patient's language was coded using the Roter Interaction Analysis System (RIAS). The coders recorded the number of utterances (ie, units of spoken language) in each RIAS category. Correlations between the raters' scores and the number and types of utterances were examined. Results: There was a significant correlation between students' global rating scores on the OSCE and the number of utterances in the RIAS socio-emotional category but not the RIAS business category. Conclusions: The RIAS proved to be a useful tool for assessing the socio-emotional aspect of students' interview skills. PMID:21655397
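The correlation analysis described above presumably amounts to something like a plain Pearson correlation between global rating scores and per-category utterance counts; a minimal sketch (the data in the usage example are hypothetical):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient, as one might use to relate
    examiners' global OSCE ratings to per-student RIAS utterance counts."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A positive coefficient between ratings and socio-emotional utterance counts would reproduce the pattern the study reports.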
Self-, other-, and joint monitoring using forward models.
Pickering, Martin J; Garrod, Simon
2014-01-01
In the psychology of language, most accounts of self-monitoring assume that it is based on comprehension. Here we outline and develop the alternative account proposed by Pickering and Garrod (2013), in which speakers construct forward models of their upcoming utterances and compare them with the utterance as they produce them. We propose that speakers compute inverse models derived from the discrepancy (error) between the utterance and the predicted utterance and use that to modify their production command or (occasionally) begin anew. We then propose that comprehenders monitor other people's speech by simulating their utterances using covert imitation and forward models, and then comparing those forward models with what they hear. They use the discrepancy to compute inverse models and modify their representation of the speaker's production command, or realize that their representation is incorrect and may develop a new production command. We then discuss monitoring in dialogue, paying attention to sequential contributions, concurrent feedback, and the relationship between monitoring and alignment.
ERIC Educational Resources Information Center
Tesink, Cathelijne M. J. Y.; Buitelaar, Jan K.; Petersson, Karl Magnus; van der Gaag, Rutger Jan; Teunisse, Jan-Pieter; Hagoort, Peter
2011-01-01
In individuals with ASD, difficulties with language comprehension are most evident when higher-level semantic-pragmatic language processing is required, for instance when context has to be used to interpret the meaning of an utterance. Until now, it is unclear at what level of processing and for what type of context these difficulties in language…
ERIC Educational Resources Information Center
Eisenberg, Sarita L.; Guo, Ling-Yu
2015-01-01
Purpose: The purpose of this study was to investigate whether a shorter language sample elicited with fewer pictures (i.e., 7) would yield a percent grammatical utterances (PGU) score similar to that computed from a longer language sample elicited with 15 pictures for 3-year-old children. Method: Language samples were elicited by asking forty…
ERIC Educational Resources Information Center
Kapantzoglou, Maria; Fergadiotis, Gerasimos; Restrepo, M. Adelaida
2017-01-01
Purpose: This study examined whether the language sample elicitation technique (i.e., storytelling and story-retelling tasks with pictorial support) affects lexical diversity (D), grammaticality (grammatical errors per communication unit [GE/CU]), sentence length (mean length of utterance in words [MLUw]), and sentence complexity (subordination…
Learning to Show You're Listening
ERIC Educational Resources Information Center
Ward, Nigel G.; Escalante, Rafael; Al Bayyari, Yaffa; Solorio, Thamar
2007-01-01
Good listeners generally produce back-channel feedback, that is, short utterances such as "uh-huh" which signal active listening. As the rules governing back-channeling vary from language to language, second-language learners may need help acquiring this skill. This paper is an initial exploration of how to provide this. It presents a training…
The Reliability of Morphological Analyses in Language Samples
ERIC Educational Resources Information Center
Tommerdahl, Jodi; Kilpatrick, Cynthia D
2014-01-01
It is currently unclear to what extent a spontaneous language sample of a given number of utterances is representative of a child's ability in morphology and syntax. This lack of information about the regularity of children's linguistic productions and the reliability of spontaneous language samples has serious implications for language…
Extended Article: Situated Language Understanding as Filtering Perceived Affordances
ERIC Educational Resources Information Center
Gorniak, Peter; Roy, Deb
2007-01-01
We introduce a computational theory of situated language understanding in which the meaning of words and utterances depends on the physical environment and the goals and plans of communication partners. According to the theory, concepts that ground linguistic meaning are neither internal nor external to language users, but instead span the…
Functional Analysis of Language Interactions between Down Syndrome Children and Their Mothers.
ERIC Educational Resources Information Center
Hooshyar, Nahid T.
A 20-minute videotape sample was obtained of the language interactions between 20 Down syndrome children (ages 38 to 107 months) and their mothers during informal playtime. Linguistic utterances of mothers and children were coded according to the following language categories: query, declarative, imperative, performative, feedback, imitation,…
A Computer Assisted Language Analysis System.
ERIC Educational Resources Information Center
Rush, J. E.; And Others
A description is presented of a computer-assisted language analysis system (CALAS) which can serve as a method for isolating and displaying language utterances found in conversation. The purpose of CALAS is stated as being to deal with the question of whether it is possible to detect, isolate, and display information indicative of what is…
ERIC Educational Resources Information Center
Melzer, Dawn K.; Palermo, Cori A.
2016-01-01
The present study investigated the relationship between complexity of pretend play, initiation of pretense activities, and mental state utterances used during play. Children 3 to 4 years of age were videotaped while engaging in pretend play with a parent. The videotapes were coded according to mental state utterances (i.e. desire, emotion,…
ERIC Educational Resources Information Center
Horgan, Dianne
A study was conducted to determine whether the child expresses linguistic knowledge during the single-word period. The order of mention in 65 sets of successive single-word utterances from five children at Stage 1, two to four years old, were analyzed. To elicit speech, the children were shown line drawings representing such situations as animate…
ERIC Educational Resources Information Center
Peter, Christine Atieno; Mukuthuria, Mwenda; Muriung, Peter
2016-01-01
Presupposition, a linguistic element, can be employed in utterances. When this is done, it enhances the comprehension of what is being communicated. This implicit assumption carried by an utterance is a strategy that may be used to express a speaker's socio-political dominance. The truth of what is said is taken for granted and…
Infant Word Segmentation Revisited: Edge Alignment Facilitates Target Extraction
ERIC Educational Resources Information Center
Seidl, Amanda; Johnson, Elizabeth K.
2006-01-01
In a landmark study, Jusczyk and Aslin (1995) demonstrated that English-learning infants are able to segment words from continuous speech at 7.5 months of age. In the current study, we explored the possibility that infants segment words from the edges of utterances more readily than from the middle of utterances. The same procedure was used as in…
Language identification from visual-only speech signals
Ronquest, Rebecca E.; Levi, Susannah V.; Pisoni, David B.
2010-01-01
Our goal in the present study was to examine how observers identify English and Spanish from visual-only displays of speech. First, we replicated the recent findings of Soto-Faraco et al. (2007) with Spanish and English bilingual and monolingual observers using different languages and a different experimental paradigm (identification). We found that prior linguistic experience affected response bias but not sensitivity (Experiment 1). In two additional experiments, we investigated the visual cues that observers use to complete the language-identification task. The results of Experiment 2 indicate that some lexical information is available in the visual signal but that it is limited. Acoustic analyses confirmed that our Spanish and English stimuli differed acoustically with respect to linguistic rhythmic categories. In Experiment 3, we tested whether this rhythmic difference could be used by observers to identify the language when the visual stimuli are temporally reversed, thereby eliminating lexical information but retaining rhythmic differences. The participants performed above chance even in the backward condition, suggesting that the rhythmic differences between the two languages may aid language identification in visual-only speech signals. The results of Experiments 3A and 3B also confirm previous findings that increased stimulus length facilitates language identification. Taken together, the results of these three experiments replicate earlier findings and also show that prior linguistic experience, lexical information, rhythmic structure, and utterance length influence visual-only language identification. PMID:20675804
Zazo, Ruben; Lozano-Diez, Alicia; Gonzalez-Dominguez, Javier; Toledano, Doroteo T; Gonzalez-Rodriguez, Joaquin
2016-01-01
Long Short Term Memory (LSTM) Recurrent Neural Networks (RNNs) have recently outperformed other state-of-the-art approaches, such as i-vector and Deep Neural Networks (DNNs), in automatic Language Identification (LID), particularly when dealing with very short utterances (∼3s). In this contribution we present an open-source, end-to-end, LSTM RNN system running on limited computational resources (a single GPU) that outperforms a reference i-vector system on a subset of the NIST Language Recognition Evaluation (8 target languages, 3s task) by up to 26% relative. This result is in line with previously published research using proprietary LSTM implementations and huge computational resources, which made these former results hardly reproducible. Further, we extend those previous experiments by modeling unseen languages (out of set, OOS, modeling), which is crucial in real applications. Results show that a LSTM RNN with OOS modeling is able to detect these languages and generalizes robustly to unseen OOS languages. Finally, we also analyze the effect of even more limited test data (from 2.25s to 0.1s), showing that with as little as 0.5s an accuracy of over 50% can be achieved.
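A didactic sketch of the end-to-end idea, an LSTM run over acoustic feature frames with a softmax over languages, is given below in plain NumPy. The layer sizes, random initialization, and single-layer structure are illustrative assumptions; the paper's actual trained, GPU-based system is far larger, and no training is shown here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ToyLSTMLID:
    """Minimal forward pass: one LSTM layer consumes a sequence of
    acoustic feature frames; the final hidden state feeds a softmax
    over candidate languages."""
    def __init__(self, n_feat, n_hidden, n_langs, seed=0):
        rng = np.random.default_rng(seed)
        # stacked input/forget/output/candidate gate weights
        self.W = rng.normal(0, 0.1, (4 * n_hidden, n_feat + n_hidden))
        self.b = np.zeros(4 * n_hidden)
        self.Wo = rng.normal(0, 0.1, (n_langs, n_hidden))
        self.n_hidden = n_hidden

    def forward(self, frames):
        h = np.zeros(self.n_hidden)
        c = np.zeros(self.n_hidden)
        for x in frames:  # one LSTM step per frame
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, o, g = np.split(z, 4)
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
        logits = self.Wo @ h
        p = np.exp(logits - logits.max())
        return p / p.sum()  # posterior over languages
```

Because the recurrence summarizes the whole utterance into one state vector, the same model applies to 3 s and 0.5 s inputs alike, which is why very short test durations can still be scored.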
Zazo, Ruben; Lozano-Diez, Alicia; Gonzalez-Dominguez, Javier; T. Toledano, Doroteo; Gonzalez-Rodriguez, Joaquin
2016-01-01
Long Short Term Memory (LSTM) Recurrent Neural Networks (RNNs) have recently outperformed other state-of-the-art approaches, such as i-vector and Deep Neural Networks (DNNs), in automatic Language Identification (LID), particularly when dealing with very short utterances (∼3s). In this contribution we present an open-source, end-to-end, LSTM RNN system running on limited computational resources (a single GPU) that outperforms a reference i-vector system on a subset of the NIST Language Recognition Evaluation (8 target languages, 3s task) by up to a 26%. This result is in line with previously published research using proprietary LSTM implementations and huge computational resources, which made these former results hardly reproducible. Further, we extend those previous experiments modeling unseen languages (out of set, OOS, modeling), which is crucial in real applications. Results show that a LSTM RNN with OOS modeling is able to detect these languages and generalizes robustly to unseen OOS languages. Finally, we also analyze the effect of even more limited test data (from 2.25s to 0.1s) proving that with as little as 0.5s an accuracy of over 50% can be achieved. PMID:26824467
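The pipeline this abstract describes (a recurrent network consuming short sequences of acoustic frames and emitting a posterior over target languages plus an out-of-set class) can be sketched as follows. This is a minimal illustration with invented dimensions and untrained weights, not the authors' open-source system; `lstm_step`, `identify_language`, and all parameter shapes are assumptions for exposition.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. W stacks the four gates' input weights,
    U the recurrent weights, b the biases (each gate block has size H)."""
    H = h.shape[0]
    z = W @ x + U @ h + b                  # stacked pre-activations, shape (4H,)
    i = 1.0 / (1.0 + np.exp(-z[:H]))       # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2*H]))    # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*H:3*H]))  # output gate
    g = np.tanh(z[3*H:])                   # candidate cell update
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def identify_language(frames, W, U, b, W_out):
    """Run the LSTM over a sequence of acoustic feature frames, mean-pool
    the hidden states, and return a softmax posterior over language
    classes (target languages plus one out-of-set class)."""
    H = U.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    states = []
    for x in frames:
        h, c = lstm_step(x, h, c, W, U, b)
        states.append(h)
    pooled = np.mean(states, axis=0)
    logits = W_out @ pooled
    e = np.exp(logits - logits.max())      # numerically stable softmax
    return e / e.sum()
```

With 8 target languages plus one OOS class, the returned vector has 9 entries summing to 1. In the paper's setting the weights come from end-to-end training; here any correctly shaped matrices suffice to exercise the code.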
Do Adults Show an Effect of Delayed First Language Acquisition When Calculating Scalar Implicatures?
ERIC Educational Resources Information Center
Davidson, Kathryn; Mayberry, Rachel I.
2015-01-01
Language acquisition involves learning not only grammatical rules and a lexicon but also what people are intending to convey with their utterances: the semantic/pragmatic component of language. In this article we separate the contributions of linguistic development and cognitive maturity to the acquisition of the semantic/pragmatic component of…
ERIC Educational Resources Information Center
High, Virginia Lacastro
Errors can be considered concrete representations of stages through which one must go in order to acquire one's native language and a second language. It has been discovered that certain errors appear systematically, revealing an approximate system, or "interlanguage," behind the erroneous utterances. Present research in second language…
Do Phonological Constraints on the Spoken Word Affect Visual Lexical Decision?
ERIC Educational Resources Information Center
Lee, Yang; Moreno, Miguel A.; Carello, Claudia; Turvey, M. T.
2013-01-01
Reading a word may involve the spoken language in two ways: in the conversion of letters to phonemes according to the conventions of the language's writing system and the assimilation of phonemes according to the language's constraints on speaking. If so, then words that require assimilation when uttered would require a change in the phonemes…
ERIC Educational Resources Information Center
Marinac, Julie V.; Woodyatt, Gail C.; Ozanne, Anne E.
2008-01-01
This paper reports the design and trial of an original Observational Framework for quantitative investigation of young children's responses to adult language in their typical language learning environments. The Framework permits recording of both the response expectation of the adult utterances, and the degree of compliance in the child's…
ERIC Educational Resources Information Center
Préfontaine, Yvonne; Kormos, Judit
2015-01-01
While there exists a considerable body of literature on task-based difficulty and second language (L2) fluency in English as a second language (ESL), there has been little investigation with French learners. This mixed methods study examines learner appraisals of task difficulty and their relationship to automated utterance fluency measures in…
Utterance-Final Lengthening Is Predictive of Infants' Discrimination of English Accents
ERIC Educational Resources Information Center
White, Laurence; Floccia, Caroline; Goslin, Jeremy; Butler, Joseph
2014-01-01
Infants in their first year manifest selective patterns of discrimination between languages and between accents of the same language. Prosodic differences are held to be important in whether languages can be discriminated, together with the infant's familiarity with one or both of the accents heard. However, the nature of the prosodic cues that…
Gutiérrez-Clellen, Vera F.; Simon-Cereijido, Gabriela
2012-01-01
Current language tests designed to assess Spanish-English-speaking children have limited clinical accuracy and do not provide sufficient information to plan language intervention. In contrast, spontaneous language samples obtained in the two languages can help identify language impairment with higher accuracy. In this article, we describe several diagnostic indicators that can be used in language assessments based on spontaneous language samples. First, based on previous research with monolingual and bilingual English speakers, we show that a verb morphology composite measure in combination with a measure of mean length of utterance (MLU) can provide valuable diagnostic information for English development in bilingual children. Dialectal considerations are discussed. Second, we discuss the available research with bilingual Spanish speakers and show a series of procedures to be used for the analysis of Spanish samples: (a) limited MLU and proportional use of ungrammatical utterances; (b) limited grammatical accuracy on articles, verbs, and clitic pronouns; and (c) limited MLU, omission of theme arguments, and limited use of ditransitive verbs. Third, we illustrate the analysis of verb argument structure using a rubric as an assessment tool. Estimated scores on morphological and syntactic measures are expected to increase the sensitivity of clinical assessments with young bilingual children. Further research using other measures of language will be needed for older school-age children. PMID:19851951
Phrase Frequency Effects in Language Production
Janssen, Niels; Barber, Horacio A.
2012-01-01
A classic debate in the psychology of language concerns the question of the grain-size of the linguistic information that is stored in memory. One view is that only morphologically simple forms are stored (e.g., ‘car’, ‘red’), and that more complex forms of language such as multi-word phrases (e.g., ‘red car’) are generated on-line from the simple forms. In two experiments we tested this view. In Experiment 1, participants produced noun+adjective and noun+noun phrases that were elicited by experimental displays consisting of colored line drawings and two superimposed line drawings. In Experiment 2, participants produced noun+adjective and determiner+noun+adjective utterances elicited by colored line drawings. In both experiments, naming latencies decreased with increasing frequency of the multi-word phrase, and were unaffected by the frequency of the object name in the utterance. These results suggest that the language system is sensitive to the distribution of linguistic information at grain-sizes beyond individual words. PMID:22479370
Conversational Profiles of Children with ADHD, SLI and Typical Development
ERIC Educational Resources Information Center
Redmond, Sean M.
2004-01-01
Conversational indices of language impairment were used to investigate similarities and differences among children with Attention-Deficit/Hyperactivity Disorder (ADHD), children with Specific Language Impairment (SLI) and children with typical development (TD). Utterance formulation measures (per cent words mazed and average number of words per…
ERIC Educational Resources Information Center
O'Connell, Daniel C.; Kowal, Sabine; Ageneau, Carie
2005-01-01
A psycholinguistic hypothesis regarding the use of interjections in spoken utterances, originally formulated by Ameka (1992b, 1994) for the English language, but not confirmed in the German-language research of Kowal and O'Connell (2004 a & c), was tested: The local syntactic isolation of interjections is paralleled by their articulatory isolation…
DeThorne, Laura Segebart; Deater-Deckard, Kirby; Mahurin-Smith, Jamie; Coletto, Mary-Kelsey; Petrill, Stephen A.
2015-01-01
Background Despite support for the use of conversational language measures, concerns remain regarding the extent to which they may be confounded with aspects of child temperament, extraversion in particular. Aims This study of 161 twins from the Western Reserve Reading Project (WRRP) examined the associations between children’s conversational language use and three key aspects of child temperament: Surgency (i.e., introversion/extraversion), Effortful Control (i.e., attention and task persistence), and Negative Affectivity (e.g., fear, anger, sadness). Child biological sex was considered as a possible moderating factor. Methods & Procedures Correlational analyses were conducted between aspects of temperament during early school-age years (i.e., 7 to 8 yrs), as measured by the Children’s Behavior Questionnaire-Short Form (CBQ; Putnam & Rothbart, 2006), and six different measures of children’s conversational language use: total number of complete and intelligible utterances (TCICU), number of total words (NTW), mean length of utterance (MLU), total number of conjunctions (TNC), number of different words (NDW), and measure D (i.e., a measure of lexical diversity). Values for NTW, TNC, and NDW were derived both on the entire sample and on the first 100 C-units. Correlations between language and temperament were compared between girls and boys using the Fisher r-to-z transformation to examine the significance of potential moderating effects. Outcomes & Results Children’s reported variability in Effortful Control did not correlate significantly with any of the child language measures. In contrast, children’s Negative Affectivity and Surgency tended to demonstrate positive, albeit modest, correlations with those conversational language measures that were derived from the sample as a whole, rather than from a standardized number of utterances. 
MLU, as well as measures of NDW and NTW derived from standardized sample lengths of 100 C-units, did not correlate with any measure of child temperament. TNC demonstrated an unexpected negative correlation with child Surgency when it was derived from a standardized number of C-units but not when derived from the entire sample length. Child biological sex did not moderate the significant associations between language and temperament measures. Conclusions & Implications Overall, measures that control for volubility did not correlate significantly with child temperament; however, measures that reflected volubility tended to correlate weakly with some aspects of temperament, particularly Surgency. Results provide a degree of discriminant evidence for the validity of MLU and measures of type (i.e., NDW) and token use (i.e., NTW) when derived from a standardized number of utterances. PMID:22026571
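Several of the measures named in this abstract (NTW, NDW, MLU) and elsewhere in this list (TTR) are simple counts over a transcript. A minimal sketch, assuming utterances arrive as plain word strings and counting MLU in words rather than morphemes (morpheme-level MLU would require morphological segmentation that this example does not attempt); `language_sample_measures` is an illustrative name, not part of any analysis package mentioned here:

```python
import re

def language_sample_measures(utterances):
    """Compute basic language-sample measures from a list of transcribed
    utterances, each given as a plain string of words."""
    tokens_per_utt = [re.findall(r"[a-z']+", u.lower()) for u in utterances]
    all_tokens = [tok for utt in tokens_per_utt for tok in utt]
    ntw = len(all_tokens)                               # NTW: number of total words
    ndw = len(set(all_tokens))                          # NDW: number of different words
    mlu = ntw / len(utterances) if utterances else 0.0  # MLU in words per utterance
    ttr = ndw / ntw if ntw else 0.0                     # TTR: type-token ratio
    return {"NTW": ntw, "NDW": ndw, "MLU": mlu, "TTR": ttr}
```

For the toy sample `["the dog runs", "the dog sees the cat", "cat runs"]` this yields NTW=10, NDW=5, MLU≈3.33, and TTR=0.5.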
ERIC Educational Resources Information Center
Vihman, Marilyn May
The use of formulaic speech is seen as a learning strategy in children's first language (L1) acquisition to a limited extent, and to an even greater extent in their second language (L2) acquisition. While the first utterances of the child learning L1 are mostly one-word constructions, many of them are routine words or phrases that the child learns…
Utterance complexity and stuttering on function words in preschool-age children who stutter.
Richels, Corrin; Buhr, Anthony; Conture, Edward; Ntourou, Katerina
2010-09-01
The purpose of the present investigation was to examine the relation between utterance complexity and utterance position and the tendency to stutter on function words in preschool-age children who stutter (CWS). Two separate studies involving two different groups of participants (Study 1, n=30; Study 2, n=30) were conducted. Participants were preschool-age CWS between the ages of 3;0 and 5;11 who engaged in 15-20 min parent-child conversational interactions. From audio-video recordings of each interaction, every child utterance of each parent-child sample was transcribed. From these transcripts, for each participant, measures of language (e.g., length and complexity) and measures of stuttering (e.g., word type and utterance position) were obtained. Results of Study 1 indicated that children stuttered more frequently on function words, but that this tendency was not greater for complex than for simple utterances. Results of Study 2, involving the assessment of utterance position and MLU quartile, indicated that stuttering was more likely to occur with increasing sentence length, and that stuttering tended to occur at the utterance-initial position, the position where function words were also more likely to occur. Findings were taken to suggest that, although word-level influences cannot be discounted, utterance-level influences contribute to the loci of stuttering in preschool-age children, and may help account for developmental changes in the loci of stuttering. The reader will learn about and be able to: (a) describe the influence of word type (function versus content words), and grammatical complexity, on disfluent speech; (b) compare the effect of stuttering frequency based on the position of the word in the utterance; (c) discuss the contribution of utterance position on the frequency of stuttering on function words; and (d) explain possible reasons why preschoolers stutter more frequently on function words than content words.
Student-patient communication during physical examination.
Cleland, Jennifer; de la Croix, Anne; Cotton, Philip; Coull, Sharon; Skelton, John
2013-04-01
Communication during the physical examination has been understudied. Explicit, evidence-based guidance is not available as to the most effective content or process of communication while performing physical examination, or indeed how to teach this to medical students. The objective of this exploratory study was to explore how medical students communicate with patients when performing a physical examination in the absence of formal teaching on how to communicate in this situation. We recorded 15 senior UK medical students as they performed physical examinations with real patients in general practice situations. The transcriptions were analysed for linguistic functions to identify the use of different categories of utterances. Student utterances fell into four categories: minimising language; using positive evaluative language; repeating the patient; and stating intentions or explanations and requesting consent. Students would often preface an explanation or action by phrases showing 'togetherness', by using 'we' rather than 'you'. They also used linguistic 'hedges' to minimise the impact of an utterance. Senior medical students speak very little during the physical examination. When they do, they use a taxonomy of utterances that reflects those reported in doctor-patient interactions. Identifying how medical students communicate when carrying out the physical examination is the first step in planning how to best teach specific communication skills. Further work is needed to identify how best to explore communication during physical examination, and how this is taught and learned. © Blackwell Publishing Ltd 2013.
Acquisition of locative utterances in Norwegian: structure-building via lexical learning.
Mitrofanova, Natalia; Westergaard, Marit
2018-03-15
This paper focuses on the acquisition of locative prepositional phrases in L1 Norwegian. We report on two production experiments with children acquiring Norwegian as their first language and compare the results to similar experiments conducted with Russian children. The results of the experiments show that Norwegian children at age 2 regularly produce locative utterances lacking overt prepositions, with the rate of preposition omission decreasing significantly by age 3. Furthermore, our results suggest that phonologically strong and semantically unambiguous locative items appear earlier in Norwegian children's utterances than their phonologically weak and semantically ambiguous counterparts. This conclusion is confirmed by a corpus study. We argue that our results are best captured by the Underspecified P Hypothesis (UPH; Mitrofanova, 2017), which assumes that, at early stages of grammatical development, the underlying structure of locative utterances is underspecified, with more complex functional representations emerging gradually based on the input. This approach predicts that the rate of acquisition in the domain of locative PPs should be influenced by the lexical properties of individual language-specific grammatical elements (such as frequency, morphological complexity, phonological salience, or semantic ambiguity). Our data from child Norwegian show that this prediction is borne out. Specifically, the results of our study suggest that phonologically more salient and semantically unambiguous items are mastered earlier than their ambiguous and phonologically less salient counterparts, despite the higher frequency of the latter in the input (Clahsen et al., 1996).
ERIC Educational Resources Information Center
Lado, Robert; Higgs, Theodore
Experimental hypotheses are proposed which assert (1) that "thought" and "language" are distinct but both are part of linguistic performance; (2) that "thought" is central, and "language" is a symbolic system that one uses to refer in various ways to what he thinks; and (3) that immediate memory works with utterances and linguistic texts over a…
ERIC Educational Resources Information Center
Haebig, Eileen; Sterling, Audra; Hoover, Jill
2016-01-01
Purpose: One aspect of morphosyntax, finiteness marking, was compared in children with fragile X syndrome (FXS), specific language impairment (SLI), and typical development matched on mean length of utterance (MLU). Method: Nineteen children with typical development (mean age = 3.3 years), 20 children with SLI (mean age = 4.9 years), and 17 boys…
Verb Schema Use and Input Dependence in 5-Year-Old Children with Specific Language Impairment (SLI)
ERIC Educational Resources Information Center
Riches, N. G.; Faragher, B.; Conti-Ramsden, G.
2006-01-01
It has been argued that children with Specific Language Impairment (SLI) use language in a conservative manner. For example, they are reluctant to produce word-plus-frame combinations that they have not heard in the input. In addition, there is evidence to suggest that their utterances replicate lexical and syntactic material from the immediate…
ERIC Educational Resources Information Center
Tan, Tony Xing; Loker, Troy; Dedrick, Robert F.; Marfo, Kofi
2012-01-01
In this study we investigated adopted Chinese girls' expressive English language outcomes in relation to their age at adoption, chronological age, length of exposure to English and developmental risk status at the time of adoption. Vocabulary and phrase utterance data on 318 girls were collected from the adoptive mothers using the Language…
ERIC Educational Resources Information Center
Yoon, Sumi
2012-01-01
Korean learners of the Japanese language and Japanese learners of the Korean language not only feel that it is easier to learn the respective foreign language, but also acquire Japanese and Korean faster than learners from other countries because of the grammatical similarity between Japanese and Korean. However, the similarity of grammatical…
ERIC Educational Resources Information Center
Qi, Cathy H.; Kaiser, Ann P.; Marley, Scott C.; Milan, Stephanie
2012-01-01
The purposes of the study were to determine (a) the ability of two spontaneous language measures, mean length of utterance in morphemes (MLU-m) and number of different words (NDW), to identify African American preschool children at low and high levels of language ability; (b) whether child chronological age was related to the performance of either…
How Do Utterance Measures Predict Raters' Perceptions of Fluency in French as a Second Language?
ERIC Educational Resources Information Center
Préfontaine, Yvonne; Kormos, Judit; Johnson, Daniel Ezra
2016-01-01
While the research literature on second language (L2) fluency is replete with descriptions of fluency and its influence with regard to English as an additional language, little is known about what fluency features influence judgments of fluency in L2 French. This study reports the results of an investigation that analyzed the relationship between…
Denmark, Tanya; Atkinson, Joanna; Campbell, Ruth; Swettenham, John
2014-10-01
Facial expressions in sign language carry a variety of communicative features. While emotion can modulate a spoken utterance through changes in intonation, duration and intensity, in sign language specific facial expressions presented concurrently with a manual sign perform this function. When deaf adult signers cannot see facial features, their ability to judge emotion in a signed utterance is impaired (Reilly et al. in Sign Lang Stud 75:113-118, 1992). We examined the role of the face in the comprehension of emotion in sign language in a group of typically developing (TD) deaf children and in a group of deaf children with autism spectrum disorder (ASD). We replicated Reilly et al.'s (Sign Lang Stud 75:113-118, 1992) adult results in the TD deaf signing children, confirming the importance of the face in understanding emotion in sign language. The ASD group performed more poorly on the emotion recognition task than the TD children. The deaf children with ASD showed a deficit in emotion recognition during sign language processing analogous to the deficit in vocal emotion recognition that has been observed in hearing children with ASD.
Burgess, Sloane; Audet, Lisa; Harjusola-Webb, Sanna
2013-01-01
The purpose of this research was to begin to characterize and compare the school and home language environments of 10 preschool-aged children with Autism Spectrum Disorders (ASD). Naturalistic language samples were collected from each child, utilizing Language ENvironment Analysis (LENA) digital voice recorder technology, at 3-month intervals over the course of one year. LENA software was used to identify 15-min segments of each sample that represented the highest number of adult words used during interactions with each child for all school and home language samples. Selected segments were transcribed and analyzed using Systematic Analysis of Language Transcripts (SALT). LENA data was utilized to evaluate quantitative characteristics of the school and home language environments, and SALT data was utilized to evaluate quantitative and qualitative characteristics of the language environment. Results revealed many similarities in home and school language environments, including the degree of semantic richness and complexity of adult language, types of utterances, and pragmatic functions of utterances used by adults during interactions with child participants. Study implications and recommendations for future research are discussed. The reader will be able to (1) describe how two language sampling technologies can be utilized together to collect and analyze language samples, (2) describe characteristics of the school and home language environments of young children with ASD, and (3) identify environmental factors that may lead to more positive expressive language outcomes of young children with ASD. Copyright © 2013 Elsevier Inc. All rights reserved.
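The segment-selection step described in this abstract (finding the stretch of a recording with the highest adult word count) amounts to a sliding-window maximum over timestamped word events. A minimal two-pointer sketch under that assumption; `densest_window` and its interface are illustrative and not part of the LENA software:

```python
def densest_window(word_times, window_s=15 * 60):
    """Given sorted timestamps (in seconds) of detected adult words, return
    (start_time, count) for the window of length window_s containing the
    most words. Two-pointer sweep, O(n) in the number of word events."""
    best_start, best_count = 0.0, 0
    j = 0  # index of the earliest word still inside the current window
    for i, t in enumerate(word_times):
        while word_times[j] < t - window_s:
            j += 1                     # drop words that fell out of the window
        count = i - j + 1              # words in (t - window_s, t]
        if count > best_count:
            best_count, best_start = count, word_times[j]
    return best_start, best_count
```

For example, `densest_window([0, 1, 2, 100, 101, 102, 103], window_s=10)` returns `(100, 4)`: the busiest 10-second window starts at the word at t=100 and covers four words.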
Patterns of Adult-Child Linguistic Interaction in Integrated Day Care Groups.
Girolametto, Luigi; Hoaken, Lisa; Weitzman, Elaine; Lieshout, Riet van
2000-04-01
This study investigated the language input of eight childcare providers to children with developmental disabilities, including language delay, who were integrated into community day care centers. Structural and discourse features of the adults' language input were compared across two groups (integrated, typical) and two naturalistic day care contexts (book reading, play dough activity). The eight children with developmental disabilities and language delay were between 33 and 50 months of age; 32 normally developing peers ranged from 32 to 53 months of age. Adult-child interactions were transcribed and coded to yield estimates of structural indices (number of utterances, rate, mean length of utterances, and the ratio of different words to total words used (TTR)) and discourse features (directive, interactive, language-modelling) of their language input. The language input addressed to the children with developmental disabilities was directive and not finely tuned to their expressive language levels. In turn, these children interacted infrequently with the adult or with the other children. Contextual comparisons indicated that the play dough activity promoted adult-child interaction that was less directive and more interaction-promoting than book reading, and that children interacted more frequently in the play-dough activity. Implications for speech-language pathologists include the need for collaborative consultation in integrated settings, modification of adult-child play contexts to promote interaction, and training childcare providers to use language input that promotes communication development.
Referent Salience Affects Second Language Article Use
ERIC Educational Resources Information Center
Trenkic, Danijela; Pongpairoj, Nattama
2013-01-01
The effect of referent salience on second language (L2) article production in real time was explored. Thai (-articles) and French (+articles) learners of English described dynamic events involving two referents, one visually cued to be more salient at the point of utterance formulation. Definiteness marking was made communicatively redundant with…
Effects of Utterance Length on Lip Kinematics in Aphasia
ERIC Educational Resources Information Center
Bose, Arpita; van Lieshout, Pascal
2008-01-01
Most existing models of language production and speech motor control do not explicitly address how language requirements affect speech motor functions, as these domains are usually treated as separate and independent from one another. This investigation compared lip movements during bilabial closure between five individuals with mild aphasia and…
Input Generation by Young Second Language Learners.
ERIC Educational Resources Information Center
Cathcart-Strong, Ruth L.
1986-01-01
Examined spontaneous communicative acts (requests for information, calls for attention, intention statements, etc.) of a group of young second-language learners and their native-speaker interlocutors in three play situations. Results showed that, while the response rate to some types of utterances was predictable, others did not generate the…
On the Margins of Discourse: The Relation of Literature to Language.
ERIC Educational Resources Information Center
Smith, Barbara Herrnstein
This centrally focused collection of articles and lectures examines literary interpretation and the relation of literature to language. The first of the book's three parts introduces the distinction between natural discourse and fictive discourse (verbal structures that function as representatives of natural utterances). It also deals with the…
ERIC Educational Resources Information Center
Koehlinger, Keegan M.; Van Horne, Amanda J. Owen; Moeller, Mary Pat
2013-01-01
Purpose: Spoken language skills of 3- and 6-year-old children who are hard of hearing (HH) were compared with those of children with normal hearing (NH). Method: Language skills were measured via mean length of utterance in words (MLUw) and percent correct use of finite verb morphology in obligatory contexts based on spontaneous conversational…
Development of the Prosodic Features of Infants' Vocalizing.
ERIC Educational Resources Information Center
Lane, Harlan; Sheppard, William
Traditional research methods of recording infant verbal behavior, namely, descriptions by a single observer transcribing the utterances of a single infant in a naturalistic setting, have been inadequate to provide data necessary for modern linguistic analyses. The Center for Research on Language and Language Behavior has undertaken to correct this…
Early Language and Communicative Abilities of Children with Periventricular Leukomalacia.
ERIC Educational Resources Information Center
Feldman, Heidi M.; And Others
1992-01-01
Ten two-year-old children with periventricular leukomalacia (PVL), a brain injury associated with prematurity, were evaluated using language samples. The five children with delayed cognitive ability produced significantly fewer lexical tokens and spontaneous verbal utterances than did chronological age-matched nondelayed PVL children. (Author/DB)
Fast Mapping Word-Learning Abilities of Language-Delayed Preschoolers.
ERIC Educational Resources Information Center
Rice, Mabel L.; And Others
1990-01-01
Twenty language-delayed children (age three to six) viewed a presentation incorporating object, action, attribute, and affective state words into a narrative script. In pre- and postviewing word comprehension measurements, subjects scored lower than children matched for chronological age and children matched for mean length of utterance.…
Yes, You Can Learn Foreign Language Pronunciation by Sight!
ERIC Educational Resources Information Center
Richmond, Edmun B.; And Others
1979-01-01
Describes the Envelope Vowel Approximation System (EVAS), a foreign language pronunciation learning system which allows students to see as well as hear a pedagogical model of a sound, and to compare their own utterances of that sound to the model as they pronounce the same sound. (Author/CMV)
Significance of Social Applications on a Mobile Phone for English Task-Based Language Learning
ERIC Educational Resources Information Center
Ahmad, Anmol; Farrukh, Fizza
2015-01-01
The utter importance of knowing the English language cannot be denied today. Despite the existence of traditional methods for teaching a language in schools, a large number of children are left without the requisite knowledge of English, as a result of which they fail to compete in the modern world. With English being a Lingua Franca, more efforts…
ERIC Educational Resources Information Center
Bottema-Beutel, Kristen; Yoder, Paul J.; Hochman, Julia M.; Watson, Linda R.
2014-01-01
This study examined associations between three parent-child engagement states and social communication, expressive language, and receptive language at 8 month follow-up, in 63 preschool-age children with autism spectrum disorder. We extend the literature on supported joint engagement by dividing this state into higher order (HSJE) and lower order…
Evaluating Dialogue Competence in Naturally Occurring Child-Child Interactions
ERIC Educational Resources Information Center
Naerland, Terje
2011-01-01
The principal aim of this paper is to contribute to the pursuit of evaluating pragmatic language competence in preschool years by observation-based data. Initially, the relations between age and language development measured as mean length of utterance (MLU) and three dialogue skills are described. The occurrences of "focus on the dialogue…
Child and Maternal Contributions to Shared Reading: Effects on Language and Literacy Development
ERIC Educational Resources Information Center
Deckner, Deborah F.; Adamson, Lauren B.; Bakeman, Roger
2006-01-01
Fifty-five children and their mothers were studied longitudinally from 18 to 42 months to determine the effects of home literacy practices, children's interest in reading, and mothers' metalingual utterances during reading on children's expressive and receptive language development, letter knowledge, and knowledge of print concepts. At 27 months,…
Recasts Used with Preschoolers Learning English as Their Second Language
ERIC Educational Resources Information Center
Tsybina, Irina; Girolametto, Luigi E.; Weitzman, Elaine; Greenberg, Janice
2006-01-01
This study examined linguistic recasts provided by 16 early childhood educators to preschool children learning English as a second language (EL2). Recasts are semantic and syntactic revisions of children's utterances. The educator-child interactions were filmed during book reading and play dough activities with small groups of four children, one…
ERIC Educational Resources Information Center
Czerwionka, Lori Ann
2010-01-01
"Mitigation" is the modification of language in response to social or cognitive challenges ("stressors") in contexts of linguistic interaction (Martinovski, Mao, Gratch, & Marsella 2005). Previous mitigation research has been largely from social perspectives, addressing the word or utterance levels of language. This dissertation presents an…
A Chatbot for a Dialogue-Based Second Language Learning System
ERIC Educational Resources Information Center
Huang, Jin-Xia; Lee, Kyung-Soon; Kwon, Oh-Woog; Kim, Young-Kil
2017-01-01
This paper presents a chatbot for a Dialogue-Based Computer-Assisted second Language Learning (DB-CALL) system. A DB-CALL system normally leads dialogues by asking questions according to given scenarios. User utterances outside the scenarios are normally considered as semantically improper and simply rejected. In this paper, we assume that raising…
ERIC Educational Resources Information Center
Holiday, D. Alexander
The language of Black America is rich and diverse in its utterance, whether through music (Jazz, Blues, Soul, Gospel, and Rap), through street corner "shuckin' 'n jivin'," or through writing. This language is used as a means of survival, of getting from one day to the next. Blacks have developed a system of taking the fewest words and…
ERIC Educational Resources Information Center
Zheng, Chun
2017-01-01
Producing a sensible utterance requires speakers to select conceptual content, lexical items, and syntactic structures almost instantaneously during speech planning. Each language offers its speakers flexibility in the selection of lexical and syntactic options to talk about the same scenarios involving movement. Languages also vary typologically…
ERIC Educational Resources Information Center
Sohail, Juwairia; Johnson, Elizabeth K.
2016-01-01
Much of what we know about the development of listeners' word segmentation strategies originates from the artificial language-learning literature. However, many artificial speech streams designed to study word segmentation lack a salient cue found in all natural languages: utterance boundaries. In this study, participants listened to a…
Lexically restricted utterances in Russian, German, and English child-directed speech.
Stoll, Sabine; Abbot-Smith, Kirsten; Lieven, Elena
2009-01-01
This study investigates the child-directed speech (CDS) of four Russian-, six German-, and six English-speaking mothers to their 2-year-old children. Typologically, Russian has considerably less restricted word order than either German or English, with German showing more word-order variants than English. This could lead to the prediction that the lexical restrictiveness previously found in the initial strings of English CDS by Cameron-Faulkner, Lieven, and Tomasello (2003) would not be found in Russian or German CDS. However, despite differences between the three corpora that clearly derive from typological differences between the languages, the most significant finding of this study is a high degree of lexical restrictiveness at the beginnings of CDS utterances in all three languages. Copyright © 2009 Cognitive Science Society, Inc.
NASA Astrophysics Data System (ADS)
Stevenson, Alma R.
2013-12-01
This qualitative, sociolinguistic research study examines how bilingual Latino/a students use their linguistic resources in the classroom and laboratory during science instruction. This study was conducted in a school in the southwestern United States serving an economically depressed, predominantly Latino population. The object of study was a fifth grade science class entirely composed of language minority students transitioning out of bilingual education. Therefore, English was the means of instruction in science, supported by informal peer-to-peer Spanish-language communication. This study is grounded in a social constructivist paradigm. From this standpoint, learning science is a social process where social, cultural, and linguistic factors are all considered crucial to the process of acquiring scientific knowledge. The study was descriptive in nature, examining specific linguistic behaviors with the purpose of identifying and analyzing the linguistic functions of students' utterances while participating in science learning. The results suggest that students purposefully adapt their use of linguistic resources in order to facilitate their participation in science learning. What is underscored in this study is the importance of explicitly acknowledging, supporting, and incorporating bilingual students' linguistic resources both in Spanish and English into the science classroom in order to optimize students' participation and facilitate their understanding.
Effects of Disfluency in Online Interpretation of Deception.
Loy, Jia E; Rohde, Hannah; Corley, Martin
2017-05-01
A speaker's manner of delivery of an utterance can affect a listener's pragmatic interpretation of the message. Disfluencies (such as filled pauses) influence a listener's off-line assessment of whether the speaker is truthful or deceptive. Do listeners also form this assessment during the moment-by-moment processing of the linguistic message? Here we present two experiments that examined listeners' judgments of whether a speaker was indicating the true location of the prize in a game during fluent and disfluent utterances. Participants' eye and mouse movements were biased toward the location named by the speaker during fluent utterances, whereas the opposite bias was observed during disfluent utterances. This difference emerged rapidly after the onset of the critical noun. Participants were similarly sensitive to disfluencies at the start of the utterance (Experiment 1) and in the middle (Experiment 2). Our findings support recent research showing that listeners integrate pragmatic information alongside semantic content during the earliest moments of language processing. Unlike prior work which has focused on pragmatic effects in the interpretation of the literal message, here we highlight disfluency's role in guiding a listener to an alternative non-literal message. Copyright © 2016 Cognitive Science Society, Inc.
Hardan, Antonio Y; Gengoux, Grace W; Berquist, Kari L; Libove, Robin A; Ardel, Christina M; Phillips, Jennifer; Frazier, Thomas W; Minjarez, Mendy B
2015-08-01
With rates of autism diagnosis continuing to rise, there is an urgent need for effective and efficient service delivery models. Pivotal Response Treatment (PRT) is considered an established treatment for autism spectrum disorder (ASD); however, there have been few well-controlled studies with adequate sample size. The aim of this study was to conduct a randomized controlled trial to evaluate PRT parent training group (PRTG) for targeting language deficits in young children with ASD. Fifty-three children with autism and significant language delay between 2 and 6 years old were randomized to PRTG (N = 27) or psychoeducation group (PEG; N = 26) for 12 weeks. The PRTG taught parents behavioral techniques to facilitate language development. The PEG taught general information about ASD (clinical trial NCT01881750; http://www.clinicaltrials.gov). Analysis of child utterances during the structured laboratory observation (primary outcome) indicated that, compared with children in the PEG, children in the PRTG demonstrated greater improvement in frequency of utterances (F(2, 43) = 3.53, p = .038, d = 0.42). Results indicated that parents were able to learn PRT in a group format, as the majority of parents in the PRTG (84%) met fidelity of implementation criteria after 12 weeks. Children also demonstrated greater improvement in adaptive communication skills (Vineland-II) following PRTG and baseline Mullen visual reception scores predicted treatment response to PRTG. This is the first randomized controlled trial of group-delivered PRT and one of the largest experimental investigations of the PRT model to date. The findings suggest that specific instruction in PRT results in greater skill acquisition for both parents and children, especially in functional and adaptive communication skills. Further research in PRT is warranted to replicate the observed results and address other core ASD symptoms. © 2014 Association for Child and Adolescent Mental Health.
Dube, Sithembinkosi; Kung, Carmen; Peter, Varghese; Brock, Jon; Demuth, Katherine
2016-01-01
Previous ERP studies have often reported two ERP components—LAN and P600—in response to subject-verb (S-V) agreement violations (e.g., the boys *runs). However, the latency, amplitude and scalp distribution of these components have been shown to vary depending on various experiment-related factors. One factor that has not received attention is the extent to which the relative perceptual salience related to either the utterance position (verbal inflection in utterance-medial vs. utterance-final contexts) or the type of agreement violation (errors of omission vs. errors of commission) may influence the auditory processing of S-V agreement. The lack of reports on these effects in ERP studies may be due to the fact that most studies have used the visual modality, which does not reveal acoustic information. To address this gap, we used ERPs to measure the brain activity of Australian English-speaking adults while they listened to sentences in which the S-V agreement differed by type of agreement violation and utterance position. We observed early negative and positive clusters (AN/P600 effects) for the overall grammaticality effect. Further analysis revealed that the mean amplitude and distribution of the P600 effect was only significant in contexts where the S-V agreement violation occurred utterance-finally, regardless of type of agreement violation. The mean amplitude and distribution of the negativity did not differ significantly across types of agreement violation and utterance position. These findings suggest that the increased perceptual salience of the violation in utterance final position (due to phrase-final lengthening) influenced how S-V agreement violations were processed during sentence comprehension. Implications for the functional interpretation of language-related ERPs and experimental design are discussed. PMID:27625617
Narratives in Two Languages: Storytelling of Bilingual Cantonese-English Preschoolers.
Rezzonico, Stefano; Goldberg, Ahuva; Mak, Katy Ka-Yan; Yap, Stephanie; Milburn, Trelani; Belletti, Adriana; Girolametto, Luigi
2016-06-01
The aim of this study was to compare narratives generated by 4-year-old and 5-year-old children who were bilingual in English and Cantonese. The sample included 47 children (23 who were 4 years old and 24 who were 5 years old) living in Toronto, Ontario, Canada, who spoke both Cantonese and English. The participants spoke and heard predominantly Cantonese in the home. Participants generated a story in English and Cantonese by using a wordless picture book; language order was counterbalanced. Data were transcribed and coded for story grammar, morphosyntactic quality, mean length of utterance in words, and the number of different words. Repeated measures analysis of variance revealed higher story grammar scores in English than in Cantonese, but no other significant main effects of language were observed. Analyses also revealed that older children had higher story grammar, mean length of utterance in words, and morphosyntactic quality scores than younger children in both languages. Hierarchical regressions indicated that Cantonese story grammar predicted English story grammar and Cantonese microstructure predicted English microstructure. However, no correlation was observed between Cantonese and English morphosyntactic quality. The results of this study have implications for speech-language pathologists who collect narratives in Cantonese and English from bilingual preschoolers. The results suggest that there is a possible transfer in narrative abilities between the two languages.
Gesture and speech during shared book reading with preschoolers with specific language impairment.
Lavelli, Manuela; Barachetti, Chiara; Florit, Elena
2015-11-01
This study examined (a) the relationship between gesture and speech produced by children with specific language impairment (SLI) and typically developing (TD) children, and their mothers, during shared book-reading, and (b) the potential effectiveness of gestures accompanying maternal speech on the conversational responsiveness of children. Fifteen preschoolers with expressive SLI were compared with fifteen age-matched and fifteen language-matched TD children. Child and maternal utterances were coded for modality, gesture type, gesture-speech informational relationship, and communicative function. Relative to TD peers, children with SLI used more bimodal utterances and gestures adding unique information to co-occurring speech. Some differences were mirrored in maternal communication. Sequential analysis revealed that only in the SLI group maternal reading accompanied by gestures was significantly followed by child's initiatives, and when maternal non-informative repairs were accompanied by gestures, they were more likely to elicit adequate answers from children. These findings support the 'gesture advantage' hypothesis in children with SLI, and have implications for educational and clinical practice.
Some empirical observations about early stuttering: a possible link to language development.
Bloodstein, O
2006-01-01
This article suggests a possible link between incipient stuttering and early difficulty in language formulation. The hypothesis offers a unifying explanation of an array of empirical observations. Among these observations are the following: early stuttering occurs only on the first word of a syntactic structure; stuttering does not appear to be influenced by word-related factors; early stuttering seldom occurs on one-word utterances; the earliest age at which stuttering is reported is 18 months, with the beginning of grammatical development; the age at which most onset of stuttering is reported, 2-5 years, coincides with the period during which children acquire syntax; considerable spontaneous recovery takes place at the time most children have mastered syntax; incipient stuttering is influenced by the length and grammatical complexity of utterances; young children who stutter may be somewhat deficient in language skills; boys who stutter outnumber girls. The reader will learn about a number of empirical observations about incipient stuttering and how they may be explained by a syntax-based hypothesis about its etiology.
Selected Topics from LVCSR Research for Asian Languages at Tokyo Tech
NASA Astrophysics Data System (ADS)
Furui, Sadaoki
This paper presents our recent work on building Large Vocabulary Continuous Speech Recognition (LVCSR) systems for the Thai, Indonesian, and Chinese languages. For Thai, since there is no word boundary in the written form, we have proposed a new method for automatically creating word-like units from a text corpus, and applied topic and speaking-style adaptation to the language model to recognize spoken-style utterances. For Indonesian, we have applied proper-noun-specific adaptation to acoustic modeling, and rule-based English-to-Indonesian phoneme mapping to solve the problem of large variation in proper noun and English word pronunciation in a spoken-query information retrieval system. In spoken Chinese, long organization names are frequently abbreviated, and abbreviated utterances cannot be recognized if the abbreviations are not included in the dictionary. We have proposed a new method for automatically generating Chinese abbreviations, and by expanding the vocabulary using the generated abbreviations, we have significantly improved the performance of spoken query-based search.
Concord, Convergence and Accommodation in Bilingual Children
ERIC Educational Resources Information Center
Radford, Andrew; Kupisch, Tanja; Koppe, Regina; Azzaro, Gabriele
2007-01-01
This paper examines the syntax of "GENDER CONCORD" in mixed utterances where bilingual children switch between a modifier in one language and a noun in another. Particular attention is paid to how children deal with potential gender mismatches between modifier and noun, i.e., if one of the languages has grammatical gender but the other does not,…
ERIC Educational Resources Information Center
Kwon, Oh-Woog; Lee, Kiyoung; Kim, Young-Kil; Lee, Yunkeun
2015-01-01
This paper introduces a Dialog-Based Computer-Assisted second-Language Learning (DB-CALL) system using semantic and grammar correctness evaluations and the results of its experiment. While the system dialogues with English learners about a given topic, it automatically evaluates the grammar and content properness of their English utterances, then…
Comprehension of Indirect Requests Is Influenced by Their Degree of Imposition
ERIC Educational Resources Information Center
Stewart, Andrew J.; Le-luan, Elizabeth; Wood, Jeffrey S.; Yao, Bo; Haigh, Matthew
2018-01-01
In everyday conversation much communication is achieved using indirect language. This is particularly true when we utter requests. The decision to use indirect language is influenced by a number of factors, including deniability, politeness, and the degree of imposition on the receiver of a request. In this article we report the results of an…
Word Frequency, Function Words and the Second Gavagai Problem
ERIC Educational Resources Information Center
Hochmann, Jean-Remy
2013-01-01
The classic gavagai problem exemplifies the difficulty to identify the referent of a novel word uttered in a foreign language. Here, we consider the reverse problem: identifying the referential part of a label. Assuming "gavagai" indicates a rabbit in a foreign language, it may very well mean "a rabbit" or "that rabbit". How can a learner know…
ERIC Educational Resources Information Center
Kormos, Judit; Préfontaine, Yvonne
2017-01-01
The present mixed-methods study examined the role of learner appraisals of speech tasks in second language (L2) French fluency. Forty adult learners in a Canadian immersion program participated in the study that compared four sources of data: (1) objectively measured utterance fluency in participants' performances of three narrative tasks…
ERIC Educational Resources Information Center
Rice, Mabel L.; Wexler, Kenneth; Hershberger, Scott
1998-01-01
A longitudinal study of 43 typical children (ages 2 to 8) and 21 children with specific language impairments (SLI) found that a diverse set of morphemes share the property of tense marking, that acquisition shows linear and nonlinear components, and that mean length of utterance predicts rate of acquisition. (Author/CR)
Specific-Language-Impaired Children's Quick Incidental Learning of Words: The Effect of a Pause.
ERIC Educational Resources Information Center
Rice, Mabel L.; And Others
1992-01-01
Comparison of 2 methods of presenting novel words, either preceded by a pause or in normal prosody, on initial word comprehension of 20 5-year-old children with language impairments (and 2 control groups matched for either age or mean length of utterance) found no effect for presentation method. (Author/DB)
Cross-language comparisons of contextual variation in the production and perception of vowels
NASA Astrophysics Data System (ADS)
Strange, Winifred
2005-04-01
In the last two decades, a considerable amount of research has investigated second-language (L2) learners' problems with perception and production of non-native vowels. Most studies have been conducted using stimuli in which the vowels are produced and presented in simple, citation-form (lists) monosyllabic or disyllabic utterances. In my laboratory, we have investigated the spectral (static/dynamic formant patterns) and temporal (syllable duration) variation in vowel productions as a function of speech style (list/sentence utterances), speaking rate (normal/rapid), sentence focus (narrow focus/post-focus) and phonetic context (voicing/place of surrounding consonants). Data will be presented for a set of languages that include large and small vowel inventories, stress-, syllable-, and mora-timed prosody, and that vary in the phonological/phonetic function of vowel length, diphthongization, and palatalization. Results show language-specific patterns of contextual variation that affect the cross-language acoustic similarity of vowels. Research on cross-language patterns of perceived phonetic similarity by naive listeners suggests that listeners' knowledge of native language (L1) patterns of contextual variation influences their L1/L2 similarity judgments and, subsequently, their discrimination of L2 contrasts. Implications of these findings for assessing L2 learners' perception of vowels and for developing laboratory training procedures to improve L2 vowel perception will be discussed. [Work supported by NIDCD.]
Language choice in bimodal bilingual development.
Lillo-Martin, Diane; de Quadros, Ronice M; Chen Pichler, Deborah; Fieldsteel, Zoe
2014-01-01
Bilingual children develop sensitivity to the language used by their interlocutors at an early age, reflected in differential use of each language by the child depending on their interlocutor. Factors such as discourse context and relative language dominance in the community may mediate the degree of language differentiation in preschool age children. Bimodal bilingual children, acquiring both a sign language and a spoken language, have an even more complex situation. Their Deaf parents vary considerably in access to the spoken language. Furthermore, in addition to code-mixing and code-switching, they use code-blending (expressions in both speech and sign simultaneously), an option uniquely available to bimodal bilinguals. Code-blending is analogous to code-switching sociolinguistically, but is also a way to communicate without suppressing one language. For adult bimodal bilinguals, complete suppression of the non-selected language is cognitively demanding. We expect that bimodal bilingual children also find suppression difficult, and use blending rather than suppression in some contexts. We also expect relative community language dominance to be a factor in children's language choices. This study analyzes longitudinal spontaneous production data from four bimodal bilingual children and their Deaf and hearing interlocutors. Even at the earliest observations, the children produced more signed utterances with Deaf interlocutors and more speech with hearing interlocutors. However, while three of the four children produced >75% speech alone in speech target sessions, they produced <25% sign alone in sign target sessions. All four produced bimodal utterances in both, but more frequently in the sign sessions, potentially because they find suppression of the dominant language more difficult. Our results indicate that these children are sensitive to the language used by their interlocutors, while showing considerable influence from the dominant community language.
PMID:25368591
Echolalia and Comprehension in Autistic Children.
ERIC Educational Resources Information Center
Roberts, Jacqueline M. A.
1989-01-01
The study with 10 autistic children (ages 4-17) found that those children with poor receptive language skills produced significantly more echolalic utterances than those children whose receptive skills were more age-appropriate. (Author/DB)
ERIC Educational Resources Information Center
Arndt, Karen Barako; Schuele, C. Melanie
2013-01-01
Complex syntax production emerges shortly after the emergence of two-word combinations in oral language and continues to develop through the school-age years. This article defines a framework for the analysis of complex syntax in the spontaneous language of preschool- and early school-age children. The purpose of this article is to provide…
Intelligibility assessment in developmental phonological disorders: accuracy of caregiver gloss.
Kwiatkowski, J; Shriberg, L D
1992-10-01
Fifteen caregivers each glossed a simultaneously videotaped and audiotaped sample of their child with speech delay engaged in conversation with a clinician. One of the authors generated a reference gloss for each sample, aided by (a) prior knowledge of the child's speech-language status and error patterns, (b) glosses from the child's clinician and the child's caregiver, (c) unlimited replays of the taped sample, and (d) the information gained from completing a narrow phonetic transcription of the sample. Caregivers glossed an average of 78% of the utterances and 81% of the words. A comparison of their glosses to the reference glosses suggested that they accurately understood an average of 58% of the utterances and 73% of the words. Discussion considers the implications of such findings for methodological and theoretical issues underlying children's moment-to-moment intelligibility breakdowns during speech-language processing.
de Boer, J N; Heringa, S M; van Dellen, E; Wijnen, F N K; Sommer, I E C
2016-11-01
Auditory verbal hallucinations (AVH) in psychotic patients are associated with activation of right hemisphere language areas, although this hemisphere is non-dominant in most people. Language generated in the right hemisphere can be observed in aphasia patients with left hemisphere damage. It is called "automatic speech", characterized by low syntactic complexity and negative emotional valence. AVH in nonpsychotic individuals, by contrast, predominantly have a neutral or positive emotional content and may be less dependent on right hemisphere activity. We hypothesize that right hemisphere language characteristics can be observed in the language of AVH, differentiating psychotic from nonpsychotic individuals. 17 patients with a psychotic disorder and 19 nonpsychotic individuals were instructed to repeat their AVH verbatim directly upon hearing them. Responses were recorded, transcribed and analyzed for total words, mean length of utterance, proportion of grammatical utterances, proportion of negations, literal and thematic perseverations, abuses, type-token ratio, embeddings, verb complexity, noun-verb ratio, and open-closed class ratio. Linguistic features of AVH overall differed between groups F(13,24)=3.920, p=0.002; Pillai's Trace 0.680. AVH of psychotic patients compared with AVH of nonpsychotic individuals had a shorter mean length of utterance, lower verb complexity, and more verbal abuses and perseverations (all p<0.05). Other features were similar between groups. AVH of psychotic patients showed lower syntactic complexity and higher levels of repetition and abuses than AVH of nonpsychotic individuals. These differences are in line with a stronger involvement of the right hemisphere in the origination of AVH in patients than in nonpsychotic voice hearers. Copyright © 2016 Elsevier Inc. All rights reserved.
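Several of the records above quantify transcripts with measures such as mean length of utterance (MLU) in words and type-token ratio. As an illustrative sketch only (not any study's actual analysis pipeline; whitespace tokenization is a simplifying assumption, whereas clinical coding conventions such as SALT or CHAT are far more elaborate), these two measures can be computed from a list of transcribed utterances like so:

```python
def mlu_in_words(utterances):
    """Mean length of utterance: total word tokens / number of utterances."""
    tokens = [u.split() for u in utterances]
    return sum(len(t) for t in tokens) / len(tokens)

def type_token_ratio(utterances):
    """Number of distinct word forms divided by total word tokens."""
    words = [w.lower() for u in utterances for w in u.split()]
    return len(set(words)) / len(words)

# Hypothetical three-utterance sample for illustration.
sample = ["the boy runs", "he runs fast", "look"]
print(round(mlu_in_words(sample), 2))      # 7 words / 3 utterances -> 2.33
print(round(type_token_ratio(sample), 2))  # 6 types / 7 tokens -> 0.86
```

Note that MLU is often counted in morphemes rather than words; the word-based variant shown here matches the "mean length of utterance in words" measure used in the Rezzonico et al. and Meinzen-Derr et al. abstracts.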
Meinzen-Derr, Jareen; Wiley, Susan; McAuley, Rose; Smith, Laura; Grether, Sandra
2017-11-01
Pilot study to assess the effect of augmentative and alternative communication technology to enhance language development in children who are deaf or hard-of-hearing. Five children ages 5-10 years with permanent bilateral hearing loss who were identified with language underperformance participated in an individualized 24-week structured program using the application TouchChat WordPower on iPads®. Language samples were analyzed for changes in mean length of utterance, vocabulary words and mean turn length. Repeated measures models assessed change over time. The baseline median mean length of utterance was 2.41 (range 1.09-6.63; mean 2.88) and significantly increased over time (p = 0.002) to a median of 3.68 at final visit (range 1.97-6.81; mean 3.62). At baseline, the median total number of words spoken per language sample was 251 (range 101-458), with 100 (range 36-100) different words spoken. Total words and different words significantly increased over time (β = 26.8 (7.1), p = 0.001 for total words; β = 8.0 (2.7), p = 0.008 for different words). Mean turn length values also slightly increased over time. Using augmentative and alternative communication technology on iPads® shows promise in supporting rapid language growth among elementary school-age children who are deaf or hard-of-hearing with language underperformance.
Language acquisition: hesitations in the question/answer dialogic pair.
Chacon, Lourenço; Villega, Cristyane de Camargo Sampaio
2015-01-01
(1) To verify the existence (or not) of hesitation marks in the beginning of utterances in children's discourse; and (2) to determine to what extent the presence/absence of these marks could be explained by retrievable facts in the production conditions of their discourses. Interview situations with four children aged 5-6 years attending Kindergarten level II in a public preschool at the time of the data collection were analyzed. The interviews were recorded on audio and video, inside a soundproof booth, with high-fidelity equipment. Afterwards, the recordings were transcribed by six transcribers who were specially trained for this task. Transcription rules that prioritized the analyses of hesitations were used. For the analysis of retrievable facts in the production conditions of children's discourse, the dialogic pair question-answer was adopted. A correlation between presence/absence of hesitation in the beginning of utterances in children and the type of question (open/closed) made by the interlocutor was observed. When the question was closed-ended, the utterances were preferably initiated without hesitation marks, and when the question was open-ended, the utterances were preferably initiated with hesitation marks. The presence/absence of hesitation marks in the beginning of utterances in children was found to be dependent on the production conditions of their discourses.
Input and Output in Code Switching: A Case Study of a Japanese-Chinese Bilingual Infant
ERIC Educational Resources Information Center
Meng, Hairong; Miyamoto, Tadao
2012-01-01
Code switching (CS) (or language mixing) generally takes place in bilingual children's utterances, even if their parents adhere to the "one parent-one language" principle. The present case study of a Japanese-Chinese bilingual infant provides both quantitative and qualitative analyses on the impact of input on output, as manifested in CS. The…
ERIC Educational Resources Information Center
Leffert, Beatrice G.
From the perspective of a reading consultant, the processes of thinking and reading apply to efficient learning. Language teachers should know: (1) the difference between surface structure and deep meaning of an utterance, (2) the importance of "affect" on learning: the reader's personal involvement with the material and with its presentation,…
An Extended Optional Infinitive Stage in German-Speaking Children with Specific Language Impairment.
ERIC Educational Resources Information Center
Rice, Mabel L.; Noll, Karen Ruff; Grimm, Hannelore
1997-01-01
Predictions were formulated for extended Optional Infinitives (OIs) stage in German-speaking children with specific language impairment and evaluated in clinical sample of 8 SLI German-speaking children, ages 4; 0 to 4; 8; and control group of 8 younger utterance-equivalent children, ages 2; 1 to 2; 7. Samples reveal that affected group more…
For a new look at 'lexical errors': evidence from semantic approximations with verbs in aphasia.
Duvignau, Karine; Tran, Thi Mai; Manchon, Mélanie
2013-08-01
The ability to understand the similarity between two phenomena is fundamental for humans. Designated by the term analogy in psychology, this ability plays a role in the categorization of phenomena in the world and in the organisation of the linguistic system. The use of analogy in language often results in non-standard utterances, particularly in speakers with aphasia. These non-standard utterances are almost always studied in a nominal context and considered as errors. We propose a study of the verbal lexicon and present findings that measure, by means of an action-video naming task, the importance of verb-based non-standard utterances made by 17 speakers with aphasia ("la dame déshabille l'orange"/the lady undresses the orange, "elle casse la tomate"/she breaks the tomato). The first results we have obtained allow us to consider these types of utterances from a new perspective: we propose to eliminate the label of "error", suggesting that they may be viewed as semantic approximations based upon a relationship of inter-domain synonymy and are ingrained in the heart of the lexical system.
Feeling backwards? How temporal order in speech affects the time course of vocal emotion recognition
Rigoulot, Simon; Wassiliwizky, Eugen; Pell, Marc D.
2013-01-01
Recent studies suggest that the time course for recognizing vocal expressions of basic emotion in speech varies significantly by emotion type, implying that listeners uncover acoustic evidence about emotions at different rates in speech (e.g., fear is recognized most quickly whereas happiness and disgust are recognized relatively slowly; Pell and Kotz, 2011). To investigate whether vocal emotion recognition is largely dictated by the amount of time listeners are exposed to speech or the position of critical emotional cues in the utterance, 40 English participants judged the meaning of emotionally-inflected pseudo-utterances presented in a gating paradigm, where utterances were gated as a function of their syllable structure in segments of increasing duration from the end of the utterance (i.e., gated syllable-by-syllable from the offset rather than the onset of the stimulus). Accuracy for detecting six target emotions in each gate condition and the mean identification point for each emotion in milliseconds were analyzed and compared to results from Pell and Kotz (2011). We again found significant emotion-specific differences in the time needed to accurately recognize emotions from speech prosody, and new evidence that utterance-final syllables tended to facilitate listeners' accuracy in many conditions when compared to utterance-initial syllables. The time needed to recognize fear, anger, sadness, and neutral from speech cues was not influenced by how utterances were gated, although happiness and disgust were recognized significantly faster when listeners heard the end of utterances first. Our data provide new clues about the relative time course for recognizing vocally-expressed emotions within the 400–1200 ms time window, while highlighting that emotion recognition from prosody can be shaped by the temporal properties of speech. PMID:23805115
Rational integration of noisy evidence and prior semantic expectations in sentence interpretation.
Gibson, Edward; Bergen, Leon; Piantadosi, Steven T
2013-05-14
Sentence processing theories typically assume that the input to our language processing mechanisms is an error-free sequence of words. However, this assumption is an oversimplification because noise is present in typical language use (for instance, due to a noisy environment, producer errors, or perceiver errors). A complete theory of human sentence comprehension therefore needs to explain how humans understand language given imperfect input. Indeed, like many cognitive systems, language processing mechanisms may even be "well designed"--in this case for the task of recovering intended meaning from noisy utterances. In particular, comprehension mechanisms may be sensitive to the types of information that an idealized statistical comprehender would be sensitive to. Here, we evaluate four predictions about such a rational (Bayesian) noisy-channel language comprehender in a sentence comprehension task: (i) semantic cues should pull sentence interpretation towards plausible meanings, especially if the wording of the more plausible meaning is close to the observed utterance in terms of the number of edits; (ii) this process should asymmetrically treat insertions and deletions due to the Bayesian "size principle"; such nonliteral interpretation of sentences should (iii) increase with the perceived noise rate of the communicative situation and (iv) decrease if semantically anomalous meanings are more likely to be communicated. These predictions are borne out, strongly suggesting that human language relies on rational statistical inference over a noisy channel.
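The noisy-channel account above can be illustrated with a toy Bayesian scorer in which the likelihood of the observed sentence decays with its edit distance from each candidate intended sentence. The candidate sentences, prior probabilities, and noise rate below are all invented for illustration; the paper's actual model and materials are richer than this sketch.

```python
def edit_distance(a, b):
    """Levenshtein distance over word tokens (insert/delete/substitute)."""
    a, b = a.split(), b.split()
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        curr = [i]
        for j, wb in enumerate(b, 1):
            cost = 0 if wa == wb else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1]

def posterior(observed, candidates, noise_rate=0.1):
    """P(intended | observed) ∝ prior(intended) * noise_rate**edits."""
    scores = {s: p * noise_rate ** edit_distance(observed, s)
              for s, p in candidates.items()}
    z = sum(scores.values())
    return {s: v / z for s, v in scores.items()}

# Illustrative priors: the plausible reading is more probable a priori.
candidates = {"the mother gave the candle to the daughter": 0.2,
              "the mother gave the candy to the daughter": 0.8}
post = posterior("the mother gave the candle to the daughter", candidates)
```

Raising `noise_rate` shifts posterior mass toward the plausible one-edit reading, mirroring prediction (iii) that nonliteral interpretation should increase with the perceived noise rate.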
Beyond the language given: the neural correlates of inferring speaker meaning.
Bašnáková, Jana; Weber, Kirsten; Petersson, Karl Magnus; van Berkum, Jos; Hagoort, Peter
2014-10-01
Even though language allows us to say exactly what we mean, we often use language to say things indirectly, in a way that depends on the specific communicative context. For example, we can use an apparently straightforward sentence like "It is hard to give a good presentation" to convey deeper meanings, like "Your talk was a mess!" One of the big puzzles in language science is how listeners work out what speakers really mean, which is a skill absolutely central to communication. However, most neuroimaging studies of language comprehension have focused on the arguably much simpler, context-independent process of understanding direct utterances. To examine the neural systems involved in getting at contextually constrained indirect meaning, we used functional magnetic resonance imaging as people listened to indirect replies in spoken dialog. Relative to direct control utterances, indirect replies engaged dorsomedial prefrontal cortex, right temporo-parietal junction and insula, as well as bilateral inferior frontal gyrus and right medial temporal gyrus. This suggests that listeners take the speaker's perspective on both cognitive (theory of mind) and affective (empathy-like) levels. In line with classic pragmatic theories, our results also indicate that currently popular "simulationist" accounts of language comprehension fail to explain how listeners understand the speaker's intended message. © The Author 2013. Published by Oxford University Press. All rights reserved.
Validity of a parent-report measure of vocabulary and grammar for Spanish-speaking toddlers.
Thal, D; Jackson-Maldonado, D; Acosta, D
2000-10-01
The validity of the Fundación MacArthur Inventario del Desarrollo de Habilidades Comunicativas: Palabras y Enunciados (IDHC:PE) was examined with twenty 20- and nineteen 28-month-old, typically developing, monolingual, Spanish-speaking children living in Mexico. One measure of vocabulary (number of words) and two measures of grammar (mean of the three longest utterances and grammatical complexity score) from the IDHC:PE were compared to behavioral measures of vocabulary (number of different words from a language sample and number of objects named in a confrontation naming task) and one behavioral measure of grammar (mean length of utterance from a language sample). Only vocabulary measures were assessed in the 20-month-olds because of floor effects on the grammar measures. Results indicated validity for assessing expressive vocabulary in 20-month-olds and expressive vocabulary and grammar in 28-month-olds.
Analyzing the Language of Therapist Empathy in Motivational Interview based Psychotherapy
Xiao, Bo; Can, Dogan; Georgiou, Panayiotis G.; Atkins, David; Narayanan, Shrikanth S.
2016-01-01
Empathy is an important aspect of social communication, especially in medical and psychotherapy applications. Measures of empathy can offer insights into the quality of therapy. We use an N-gram language model based maximum likelihood strategy to classify empathic versus non-empathic utterances and report the precision and recall of classification for various parameters. High recall was obtained with unigram features, while bigram features achieved the highest F1-score. Based on the utterance-level models, a group of lexical features is extracted at the therapy session level. The effectiveness of these features in modeling session-level annotator perceptions of empathy is evaluated through correlation with expert-coded session-level empathy scores. Our combined feature set achieved a correlation of 0.558 between predicted and expert-coded empathy scores. Results also suggest that the longer-term empathy perception process may be more related to isolated empathic salient events. PMID:27602411
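The N-gram maximum-likelihood strategy described above can be sketched as one add-one-smoothed unigram model per class, with each utterance assigned to the class under which it is most probable. The miniature training utterances are invented for illustration and are not from the study's corpus:

```python
import math
from collections import Counter

class UnigramClassifier:
    """Per-class unigram language models; classify by maximum likelihood."""
    def __init__(self):
        self.counts = {}   # class label -> Counter of word frequencies
        self.totals = {}
        self.vocab = set()

    def fit(self, labeled_utterances):
        for text, label in labeled_utterances:
            words = text.lower().split()
            self.counts.setdefault(label, Counter()).update(words)
            self.vocab.update(words)
        self.totals = {c: sum(cnt.values()) for c, cnt in self.counts.items()}

    def log_likelihood(self, text, label):
        # Add-one (Laplace) smoothing over the shared vocabulary plus OOV.
        v = len(self.vocab) + 1
        return sum(math.log((self.counts[label][w] + 1) /
                            (self.totals[label] + v))
                   for w in text.lower().split())

    def classify(self, text):
        return max(self.counts, key=lambda c: self.log_likelihood(text, c))

# Invented miniature training set
clf = UnigramClassifier()
clf.fit([("that sounds really hard for you", "empathic"),
         ("I hear how painful that was", "empathic"),
         ("you need to stop drinking", "non-empathic"),
         ("just follow the plan", "non-empathic")])
print(clf.classify("that was really painful for you"))  # → empathic
```

A bigram variant, as in the paper's best-F1 configuration, would count adjacent word pairs instead of single words; the smoothing and argmax steps stay the same.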
Leonard, Laurence B.; Fey, Marc E.; Deevy, Patricia; Bredin-Oja, Shelley L.
2015-01-01
We tested four predictions based on the assumption that optional infinitives can be attributed to properties of the input whereby children inappropriately extract nonfinite subject-verb sequences (e.g. the girl run) from larger input utterances (e.g. Does the girl run? Let’s watch the girl run). Thirty children with specific language impairment (SLI) and 30 typically developing children heard novel and familiar verbs that appeared exclusively either in utterances containing nonfinite subject-verb sequences or in simple sentences with the verb inflected for third person singular –s. Subsequent testing showed strong input effects, especially for the SLI group. The results provide support for input-based factors as significant contributors not only to the optional infinitive period in typical development, but also to the especially protracted optional infinitive period seen in SLI. PMID:25076070
Targeted Help for Spoken Dialogue Systems: Intelligent Feedback Improves Naive Users' Performance
NASA Technical Reports Server (NTRS)
Hockey, Beth Ann; Lemon, Oliver; Campana, Ellen; Hiatt, Laura; Aist, Gregory; Hieronymous, Jim; Gruenstein, Alexander; Dowding, John
2003-01-01
We present experimental evidence that providing naive users of a spoken dialogue system with immediate help messages related to their out-of-coverage utterances improves their success in using the system. A grammar-based recognizer and a Statistical Language Model (SLM) recognizer are run simultaneously. If the grammar-based recognizer succeeds, the less accurate SLM recognizer hypothesis is not used. When the grammar-based recognizer fails and the SLM recognizer produces a recognition hypothesis, this result is used by the Targeted Help agent to give the user feedback on what was recognized, a diagnosis of what was problematic about the utterance, and a related in-coverage example. The in-coverage example is intended to encourage alignment between user inputs and the language model of the system. We report on controlled experiments on a spoken dialogue system for command and control of a simulated robotic helicopter.
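The two-recognizer fallback logic described above can be sketched as follows. The recognizer and help-message interfaces are hypothetical stand-ins, since the paper does not specify its components at the code level:

```python
from typing import Callable, Optional

def interpret(audio,
              grammar_recognize: Callable[[object], Optional[str]],
              slm_recognize: Callable[[object], Optional[str]],
              targeted_help: Callable[[str], str]) -> str:
    """Prefer the precise grammar-based recognizer; use the SLM
    recognizer only to drive targeted help for out-of-coverage input."""
    in_coverage = grammar_recognize(audio)
    if in_coverage is not None:
        return f"EXECUTE: {in_coverage}"          # in-coverage command
    fallback = slm_recognize(audio)
    if fallback is not None:
        # Out of coverage: report what was heard plus an in-coverage example.
        return targeted_help(fallback)
    return "Sorry, I did not understand. Please rephrase."

# Toy stand-ins for the three components
grammar = lambda a: "fly to the tower" if a == "fly to the tower" else None
slm = lambda a: a  # pretend the SLM always yields some hypothesis
help_msg = lambda hyp: (f'I heard "{hyp}", but that is not a known command. '
                        f'Try, for example: "fly to the tower".')
print(interpret("go over by that tall thing", grammar, slm, help_msg))
```

The design point is that the high-precision grammar recognizer gates execution, while the high-coverage SLM hypothesis is spent only on diagnosis, nudging users back toward in-coverage phrasing.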
Recognizing intentions in infant-directed speech: evidence for universals.
Bryant, Gregory A; Barrett, H Clark
2007-08-01
In all languages studied to date, distinct prosodic contours characterize different intention categories of infant-directed (ID) speech. This vocal behavior likely exists universally as a species-typical trait, but little research has examined whether listeners can accurately recognize intentions in ID speech using only vocal cues, without access to semantic information. We recorded native-English-speaking mothers producing four intention categories of utterances (prohibition, approval, comfort, and attention) as both ID and adult-directed (AD) speech, and we then presented the utterances to Shuar adults (South American hunter-horticulturalists). Shuar subjects were able to reliably distinguish ID from AD speech and were able to reliably recognize the intention categories in both types of speech, although performance was significantly better with ID speech. This is the first demonstration that adult listeners in an indigenous, nonindustrialized, and nonliterate culture can accurately infer intentions from both ID speech and AD speech in a language they do not speak.
D'Mello, Sidney K; Dowell, Nia; Graesser, Arthur
2011-03-01
There is the question of whether learning differs when students speak versus type their responses when interacting with intelligent tutoring systems with natural language dialogues. Theoretical bases exist for three contrasting hypotheses. The speech facilitation hypothesis predicts that spoken input will increase learning, whereas the text facilitation hypothesis predicts typed input will be superior. The modality equivalence hypothesis claims that learning gains will be equivalent. Previous experiments that tested these hypotheses were confounded by automated speech recognition systems with substantial error rates that were detected by learners. We addressed this concern in two experiments via a Wizard of Oz procedure, where a human intercepted the learner's speech and transcribed the utterances before submitting them to the tutor. The overall pattern of the results supported the following conclusions: (1) learning gains associated with spoken and typed input were on par and quantitatively higher than a no-intervention control, (2) participants' evaluations of the session were not influenced by modality, and (3) there were no modality effects associated with differences in prior knowledge and typing proficiency. Although the results generally support the modality equivalence hypothesis, highly motivated learners reported lower cognitive load and demonstrated increased learning when typing compared with speaking. We discuss the implications of our findings for intelligent tutoring systems that can support typed and spoken input.
Le Normand, M T; Moreno-Torres, I; Parisse, C; Dellatolas, G
2013-01-01
In the last 50 years, researchers have debated over the lexical or grammatical nature of children's early multiword utterances. Due to methodological limitations, the issue remains controversial. This corpus study explores the effect of grammatical, lexical, and pragmatic categories on mean length of utterance (MLU). A total of 312 speech samples from high- and low-socioeconomic status (SES) French-speaking children aged 2-4 years were annotated with a part-of-speech tagger. Multiple regression analyses show that grammatical categories, particularly the most frequent subcategories, were the best predictors of MLU both across age and SES groups. These findings support the view that early language learning is guided by grammatical rather than by lexical words. This corpus research design can be used for future cross-linguistic and cross-pathology studies. © 2012 The Authors. Child Development © 2012 Society for Research in Child Development, Inc.
Lexically-based learning and early grammatical development.
Lieven, E V; Pine, J M; Baldwin, G
1997-02-01
Pine & Lieven (1993) suggest that a lexically-based positional analysis can account for the structure of a considerable proportion of children's early multiword corpora. The present study tests this claim on a second, larger sample of eleven children aged between 1;0 and 3;0 from a different social background, and extends the analysis to later in development. Results indicate that the positional analysis can account for a mean of 60% of all the children's multiword utterances and that the great majority of all other utterances are defined as frozen by the analysis. Alternative explanations of the data based on hypothesizing underlying syntactic or semantic relations are investigated through analyses of pronoun case marking and of verbs with prototypical agent-patient roles. Neither supports the view that the children's utterances are being produced on the basis of general underlying rules and categories. The implications of widespread distributional learning in early language development are discussed.
ERIC Educational Resources Information Center
Bonnet, Lauren Kravetz
2012-01-01
This single-subject research study was designed to examine the effects of point-of-view video modeling (POVM) on the symbolic play actions and play-associated language of four preschool students with autism. A multiple baseline design across participants was conducted in order to evaluate the effectiveness of using POVM as an intervention for…
ERIC Educational Resources Information Center
Travis, Julia; Geiger, Martha
2010-01-01
This study investigated the effects of introducing the Picture Exchange Communication System (PECS) on the frequency of requesting and commenting and the length of verbal utterances of two children with autism spectrum disorder (ASD) who presented with some spoken language, but limited use of language in communicative exchanges. A mixed research…
ERIC Educational Resources Information Center
Gámez, Perla B.; Vasilyeva, Marina
2015-01-01
This investigation extended the use of the priming methodology to 5- and 6-year-olds at the beginning stages of learning English as a second language (L2). In Study 1, 14 L2 children described transitive scenes without an experimenter's input. They produced no passives and minimal actives; most of their utterances were incomplete. In Study 2, 56…
Child implant users' imitation of happy- and sad-sounding speech
Wang, David J.; Trehub, Sandra E.; Volkova, Anna; van Lieshout, Pascal
2013-01-01
Cochlear implants have enabled many congenitally or prelingually deaf children to acquire their native language and communicate successfully on the basis of electrical rather than acoustic input. Nevertheless, degraded spectral input provided by the device reduces the ability to perceive emotion in speech. We compared the vocal imitations of 5- to 7-year-old deaf children who were highly successful bilateral implant users with those of a control sample of children who had normal hearing. First, the children imitated several happy and sad sentences produced by a child model. When adults in Experiment 1 rated the similarity of imitated to model utterances, ratings were significantly higher for the hearing children. Both hearing and deaf children produced poorer imitations of happy than sad utterances because of difficulty matching the greater pitch modulation of the happy versions. When adults in Experiment 2 rated electronically filtered versions of the utterances, which obscured the verbal content, ratings of happy and sad utterances were significantly differentiated for deaf as well as hearing children. The ratings of deaf children, however, were significantly less differentiated. Although deaf children's utterances exhibited culturally typical pitch modulation, their pitch modulation was reduced relative to that of hearing children. One practical implication is that therapeutic interventions for deaf children could expand their focus on suprasegmental aspects of speech perception and production, especially intonation patterns. PMID:23801976
Spatial Language and the Embedded Listener Model in Parents’ Input to Children
Ferrara, Katrina; Silva, Malena; Wilson, Colin; Landau, Barbara
2015-01-01
Language is a collaborative act: in order to communicate successfully, speakers must generate utterances that are not only semantically valid, but also sensitive to the knowledge state of the listener. Such sensitivity could reflect use of an “embedded listener model,” where speakers choose utterances on the basis of an internal model of the listeners’ conceptual and linguistic knowledge. In this paper, we ask whether parents’ spatial descriptions incorporate an embedded listener model that reflects their children’s understanding of spatial relations and spatial terms. Adults described the positions of targets in spatial arrays to their children or to the adult experimenter. Arrays were designed so that targets could not be identified unless spatial relationships within the array were encoded and described. Parents of 3–4 year-old children encoded relationships in ways that were well-matched to their children’s level of spatial language. These encodings differed from those of the same relationships in speech to the adult experimenter (Experiment 1). By contrast, parents of individuals with severe spatial impairments (Williams syndrome) did not show clear evidence of sensitivity to their children’s level of spatial language (Experiment 2). The results provide evidence for an embedded listener model in the domain of spatial language, and indicate conditions under which the ability to model listener knowledge may be more challenging. PMID:26717804
Bootstrapping language acquisition.
Abend, Omri; Kwiatkowski, Tom; Smith, Nathaniel J; Goldwater, Sharon; Steedman, Mark
2017-07-01
The semantic bootstrapping hypothesis proposes that children acquire their native language through exposure to sentences of the language paired with structured representations of their meaning, whose component substructures can be associated with words and syntactic structures used to express these concepts. The child's task is then to learn a language-specific grammar and lexicon based on (probably contextually ambiguous, possibly somewhat noisy) pairs of sentences and their meaning representations (logical forms). Starting from these assumptions, we develop a Bayesian probabilistic account of semantically bootstrapped first-language acquisition in the child, based on techniques from computational parsing and interpretation of unrestricted text. Our learner jointly models (a) word learning: the mapping between components of the given sentential meaning and lexical words (or phrases) of the language, and (b) syntax learning: the projection of lexical elements onto sentences by universal construction-free syntactic rules. Using an incremental learning algorithm, we apply the model to a dataset of real syntactically complex child-directed utterances and (pseudo) logical forms, the latter including contextually plausible but irrelevant distractors. Taking the Eve section of the CHILDES corpus as input, the model simulates several well-documented phenomena from the developmental literature. In particular, the model exhibits syntactic bootstrapping effects (in which previously learned constructions facilitate the learning of novel words), sudden jumps in learning without explicit parameter setting, acceleration of word-learning (the "vocabulary spurt"), an initial bias favoring the learning of nouns over verbs, and one-shot learning of words and their meanings. The learner thus demonstrates how statistical learning over structured representations can provide a unified account for these seemingly disparate phenomena. Copyright © 2017 Elsevier B.V. All rights reserved.
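The learning setting assumed above (sentences paired with contextually ambiguous meaning candidates, including distractors) can be illustrated with a drastically simplified cross-situational word learner. This sketch captures only the word-learning half of the paper's joint model; the Bayesian grammar induction, incremental algorithm, and episode data below are omitted or invented:

```python
from collections import defaultdict

def cross_situational_learner(episodes):
    """Count word-meaning co-occurrences across ambiguous episodes and
    map each word to its most frequently co-occurring meaning symbol.
    A drastic simplification of joint word/syntax learning."""
    cooc = defaultdict(lambda: defaultdict(int))
    for utterance, meanings in episodes:
        for word in utterance.split():
            for m in meanings:            # candidate meanings include distractors
                cooc[word][m] += 1
    return {w: max(ms, key=ms.get) for w, ms in cooc.items()}

# Each invented episode: an utterance plus candidate meanings with distractors
episodes = [("doggie runs", {"DOG", "RUN", "BALL"}),
            ("doggie sleeps", {"DOG", "SLEEP", "CUP"}),
            ("kitty runs", {"CAT", "RUN", "CUP"})]
lexicon = cross_situational_learner(episodes)
```

Even with every episode ambiguous, the correct pairings (e.g., "doggie" with DOG) win out once a word recurs across contexts, which is the statistical core that the full model combines with syntactic bootstrapping.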
The Evolution of a Connectionist Model of Situated Human Language Understanding
NASA Astrophysics Data System (ADS)
Mayberry, Marshall R.; Crocker, Matthew W.
The Adaptive Mechanisms in Human Language Processing (ALPHA) project features both experimental and computational tracks designed to complement each other in the investigation of the cognitive mechanisms that underlie situated human utterance processing. The models developed in the computational track replicate results obtained in the experimental track and, in turn, suggest further experiments by virtue of behavior that arises as a by-product of their operation.
Universal Principles in the Repair of Communication Problems
Dingemanse, Mark; Roberts, Seán G.; Baranova, Julija; Blythe, Joe; Drew, Paul; Floyd, Simeon; Gisladottir, Rosa S.; Kendrick, Kobin H.; Levinson, Stephen C.; Manrique, Elizabeth; Rossi, Giovanni; Enfield, N. J.
2015-01-01
There would be little adaptive value in a complex communication system like human language if there were no ways to detect and correct problems. A systematic comparison of conversation in a broad sample of the world's languages reveals a universal system for the real-time resolution of frequent breakdowns in communication. In a sample of 12 languages of 8 language families of varied typological profiles we find a system of ‘other-initiated repair’, where the recipient of an unclear message can signal trouble and the sender can repair the original message. We find that this system is frequently used (on average about once per 1.4 minutes in any language), and that it has detailed common properties, contrary to assumptions of radical cultural variation. Unrelated languages share the same three functionally distinct types of repair initiator for signalling problems and use them in the same kinds of contexts. People prefer to choose the type that is the most specific possible, a principle that minimizes cost both for the sender being asked to fix the problem and for the dyad as a social unit. Disruption to the conversation is kept to a minimum, with the two-utterance repair sequence being on average no longer than the single utterance which is being fixed. The findings, controlled for historical relationships, situation types and other dependencies, reveal the fundamentally cooperative nature of human communication and offer support for the pragmatic universals hypothesis: while languages may vary in the organization of grammar and meaning, key systems of language use may be largely similar across cultural groups. They also provide a fresh perspective on controversies about the core properties of language, by revealing a common infrastructure for social interaction which may be the universal bedrock upon which linguistic diversity rests. PMID:26375483
Majorano, Marinella; Guidotti, Laura; Guerzoni, Letizia; Murri, Alessandra; Morelli, Marika; Cuda, Domenico; Lavelli, Manuela
2018-01-01
In recent years many studies have shown that the use of cochlear implants (CIs) improves children's skills in processing the auditory signal and, consequently, the development of both language comprehension and production. Nevertheless, many authors have also reported that the development of language skills in children with CIs is variable and influenced by individual factors (e.g., age at CI activation) and contextual aspects (e.g., maternal linguistic input). This study aimed to assess the characteristics of the spontaneous language production of Italian children with CIs, their mothers' input, and the relationship between the two during shared book reading and semi-structured play. Twenty preschool children with CIs and 40 typically developing children, 20 matched for chronological age (CATD group) and 20 matched for hearing age (HATD group), were observed during shared book reading and semi-structured play with their mothers. Samples of spontaneous language were transcribed and analysed for each participant. The numbers of types, tokens, mean length of utterance (MLU) and grammatical categories were considered, and the familiarity of each mother's word was calculated. The children with CIs produced shorter utterances than the children in the CATD group. Their mothers produced language with lower levels of lexical variability and grammatical complexity, and higher proportions of verbs with higher familiarity than did the mothers in the other groups during shared book reading. The children's language was more strongly related to that of their mothers in the CI group than in the other groups, and it was associated with the age at CI activation. The findings suggest that the language of children with CIs is related both to their mothers' input and to age at CI activation. They might prompt suggestions for intervention programs focused on shared-book reading. © 2017 Royal College of Speech and Language Therapists.
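The lexical measures named in this abstract (types, tokens, MLU) reduce to simple counts over a transcribed sample. The following is a minimal illustrative sketch, assuming whitespace-tokenized utterance strings and a word-based (rather than morpheme-based) MLU; the study's own transcription and morpheme-counting conventions are not reproduced here.

```python
def lexical_measures(utterances):
    """Compute token count, type count, and mean length of utterance
    (MLU, in words) for a list of transcribed utterance strings.
    A sketch: real transcripts use morpheme-level coding conventions."""
    tokenized = [u.split() for u in utterances]
    tokens = sum(len(t) for t in tokenized)                    # total words
    types = len({w.lower() for t in tokenized for w in t})     # distinct words
    mlu = tokens / len(utterances)                             # words per utterance
    return tokens, types, mlu

print(lexical_measures(["the dog runs", "dog runs fast", "yes"]))
```

A morpheme-based MLU would replace `u.split()` with a morphological segmenter, but the arithmetic is otherwise identical.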
Listeners feel the beat: entrainment to English and French speech rhythms.
Lidji, Pascale; Palmer, Caroline; Peretz, Isabelle; Morningstar, Michele
2011-12-01
Can listeners entrain to speech rhythms? Monolingual speakers of English and French and balanced English-French bilinguals tapped along with the beat they perceived in sentences spoken in a stress-timed language, English, and a syllable-timed language, French. All groups of participants tapped more regularly to English than to French utterances. Tapping performance was also influenced by the participants' native language: English-speaking participants and bilinguals tapped more regularly and at higher metrical levels than did French-speaking participants, suggesting that long-term linguistic experience with a stress-timed language can differentiate speakers' entrainment to speech rhythm.
Howard, Ian S.; Messum, Piers
2014-01-01
Words are made up of speech sounds. Almost all accounts of child speech development assume that children learn the pronunciation of first language (L1) speech sounds by imitation, most claiming that the child performs some kind of auditory matching to the elements of ambient speech. However, there is evidence to support an alternative account and we investigate the non-imitative child behavior and well-attested caregiver behavior that this account posits using Elija, a computational model of an infant. Through unsupervised active learning, Elija began by discovering motor patterns, which produced sounds. In separate interaction experiments, native speakers of English, French and German then played the role of his caregiver. In their first interactions with Elija, they were allowed to respond to his sounds if they felt this was natural. We analyzed the interactions through phonemic transcriptions of the caregivers' utterances and found that they interpreted his output within the framework of their native languages. Their form of response was almost always a reformulation of Elija's utterance into well-formed sounds of L1. Elija retained those motor patterns to which a caregiver responded and formed associations between his motor pattern and the response it provoked. Thus in a second phase of interaction, he was able to parse input utterances in terms of the caregiver responses he had heard previously, and respond using his associated motor patterns. This capacity enabled the caregivers to teach Elija to pronounce some simple words in their native languages, by his serial imitation of the words' component speech sounds. Overall, our results demonstrate that the natural responses and behaviors of human subjects to infant-like vocalizations can take a computational model from a biologically plausible initial state through to word pronunciation. This provides support for an alternative to current auditory matching hypotheses for how children learn to pronounce. 
PMID:25333740
Girolametto, Luigi; Weitzman, Elaine; Lefebvre, Pascal; Greenberg, Janice
2007-01-01
The purpose of this study was to determine the feasibility of a 2-day in-service education program for (a) promoting the use of two emergent literacy strategies by early childhood educators and (b) increasing children's responses to these strategies. Sixteen early childhood educators were randomly assigned to an experimental and a control group. The experimental in-service program sought to increase educators' use of abstract utterances and print references. Educators were videotaped with small groups of preschoolers during storybook reading and a post-story craft activity. Pretest and posttest videotapes were coded to yield rates of abstract language, verbal print references, and children's responses. In comparison to the control group, educators in the experimental program used more abstract utterances that elicited talk about emotions and children's past experiences during storybook reading. They also used significantly more print references during a post-story craft activity. In addition, children in the experimental group responded more often with appropriate responses to abstract utterances and print references in comparison to children in the control group. A 2-day in-service education program resulted in short-term behavioral changes in educators' use of abstract language and print references. Suggestions for improving instruction include providing opportunities for classroom practice with feedback, modeling the use of strategies in classroom routines, and long-term mentoring of educators to promote retention of gains.
Verb bias and verb-specific competition effects on sentence production
Thothathiri, Malathi; Evans, Daniel G.; Poudel, Sonali
2017-01-01
How do speakers choose between structural options for expressing a given meaning? Overall preference for some structures over others as well as prior statistical association between specific verbs and sentence structures (“verb bias”) are known to broadly influence language use. However, the effects of prior statistical experience on the planning and execution of utterances and the mechanisms that facilitate structural choice for verbs with different biases have not been fully explored. In this study, we manipulated verb bias for English double-object (DO) and prepositional-object (PO) dative structures: some verbs appeared solely in the DO structure (DO-only), others solely in PO (PO-only) and yet others equally in both (Equi). Structural choices during subsequent free-choice sentence production revealed the expected dispreference for DO overall but critically also a reliable linear trend in DO production that was consistent with verb bias (DO-only > Equi > PO-only). Going beyond the general verb bias effect, three results suggested that Equi verbs, which were associated equally with the two structures, engendered verb-specific competition and required additional resources for choosing the dispreferred DO structure. First, DO production with Equi verbs but not the other verbs correlated with participants’ inhibition ability. Second, utterance duration prior to the choice of a DO structure showed a quadratic trend (DO-only < Equi > PO-only) with the longest durations for Equi verbs. Third, eye movements consistent with reimagining the event also showed a quadratic trend (DO-only < Equi > PO-only) prior to choosing DO, suggesting that participants used such recall particularly for Equi verbs. 
Together, these analyses of structural choices, utterance durations, eye movements and individual differences in executive functions shed light on the effects of verb bias and verb-specific competition on sentence production and the role of different executive functions in choosing between sentence structures. PMID:28672009
Stahl, Benjamin; Mohr, Bettina; Dreyer, Felix R; Lucchese, Guglielmo; Pulvermüller, Friedemann
2016-12-01
Clinical research highlights the importance of massed practice in the rehabilitation of chronic post-stroke aphasia. However, while necessary, massed practice may not be sufficient for ensuring progress in speech-language therapy. Motivated by recent advances in neuroscience, it has been claimed that using language as a tool for communication and social interaction leads to synergistic effects in left perisylvian eloquent areas. Here, we conducted a crossover randomized controlled trial to determine the influence of communicative language function on the outcome of intensive aphasia therapy. Eighteen individuals with left-hemisphere lesions and chronic non-fluent aphasia each received two types of training in counterbalanced order: (i) Intensive Language-Action Therapy (ILAT, an extended form of Constraint-Induced Aphasia Therapy) embedding verbal utterances in the context of communication and social interaction, and (ii) Naming Therapy focusing on speech production per se. Both types of training were delivered with the same high intensity (3.5 h per session) and duration (six consecutive working days), with therapy materials and number of utterances matched between treatment groups. A standardized aphasia test battery revealed significantly improved language performance with ILAT, independent of when this method was administered. In contrast, Naming Therapy tended to benefit language performance only when given at the onset of the treatment, but not when applied after previous intensive training. The current results challenge the notion that massed practice alone promotes recovery from chronic post-stroke aphasia. Instead, our results demonstrate that using language for communication and social interaction increases the efficacy of intensive aphasia therapy. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Domaneschi, Filippo; Passarelli, Marcello; Chiorri, Carlo
2017-08-01
Language scientists have broadly addressed the problem of explaining how language users recognize the kind of speech act performed by a speaker uttering a sentence in a particular context. They have done so by investigating the role played by the illocutionary force indicating devices (IFIDs), i.e., all linguistic elements that indicate the illocutionary force of an utterance. The present work takes a first step in the direction of an experimental investigation of non-verbal IFIDs because it investigates the role played by facial expressions and, in particular, of upper-face action units (AUs) in the comprehension of three basic types of illocutionary force: assertions, questions, and orders. The results from a pilot experiment on production and two comprehension experiments showed that (1) certain upper-face AUs seem to constitute non-verbal signals that contribute to the understanding of the illocutionary force of questions and orders; (2) assertions are not expected to be marked by any upper-face AU; (3) some upper-face AUs can be associated, with different degrees of compatibility, with both questions and orders.
Utterance selection model of language change
NASA Astrophysics Data System (ADS)
Baxter, G. J.; Blythe, R. A.; Croft, W.; McKane, A. J.
2006-04-01
We present a mathematical formulation of a theory of language change. The theory is evolutionary in nature and has close analogies with theories of population genetics. The mathematical structure we construct similarly has correspondences with the Fisher-Wright model of population genetics, but there are significant differences. The continuous time formulation of the model is expressed in terms of a Fokker-Planck equation. This equation is exactly soluble in the case of a single speaker and can be investigated analytically in the case of multiple speakers who communicate equally with all other speakers and give their utterances equal weight. Whilst the stationary properties of this system have much in common with the single-speaker case, time-dependent properties are richer. In the particular case where linguistic forms can become extinct, we find that the presence of many speakers causes a two-stage relaxation, the first being a common marginal distribution that persists for a long time as a consequence of ultimate extinction being due to rare fluctuations.
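The single-speaker case described above can be illustrated with a simple stochastic simulation. This is an illustrative sketch, not the authors' formulation: the speaker stores a frequency x for one of two competing variants, produces a batch of utterance tokens by sampling from x, and nudges x toward the observed token frequency with a small learning weight. With no selection, x drifts until one variant goes extinct, in analogy with genetic drift; the parameter names and the linear update rule are simplifying assumptions.

```python
import random

def simulate_single_speaker(x0=0.5, tokens_per_step=10, lam=0.01,
                            steps=100_000, seed=1):
    """Neutral utterance-selection dynamics for one speaker (a sketch).

    x0: stored initial frequency of variant A; tokens_per_step: utterance
    tokens produced per interaction; lam: weight given to one's own recent
    productions when updating the store.  Returns (final frequency, step
    at which one variant went extinct, or `steps` if neither did).
    """
    rng = random.Random(seed)
    x = x0
    for t in range(steps):
        # produce tokens by sampling from the stored frequency
        produced = sum(rng.random() < x for _ in range(tokens_per_step))
        # nudge the store toward the observed token frequency
        x = (1 - lam) * x + lam * produced / tokens_per_step
        if x == 0.0 or x == 1.0:
            return x, t  # one variant has gone extinct
    return x, steps

freq, when = simulate_single_speaker()
print(f"final frequency {freq:.3f} after {when} steps")
```

The multi-speaker version of the model would add a store per speaker and let each update be driven partly by interlocutors' productions, which is what produces the two-stage relaxation the abstract describes.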
Effects of contextual relevance on pragmatic inference during conversation: An fMRI study.
Feng, Wangshu; Wu, Yue; Jan, Catherine; Yu, Hongbo; Jiang, Xiaoming; Zhou, Xiaolin
2017-08-01
Contextual relevance, which is vital for understanding conversational implicatures (CI), engages both the frontal-temporal language and theory-of-mind networks. Here we investigate how contextual relevance affects CI processing and regulates the connectivity between CI-processing-related brain regions. Participants listened to dialogues in which the level of contextual relevance to dialogue-final utterance (reply) was manipulated. This utterance was either direct, indirect but relevant, irrelevant with contextual hint, or irrelevant with no contextual hint. Results indicated that compared with direct replies, indirect replies showed increased activations in bilateral IFG, bilateral MTG, bilateral TPJ, dmPFC, and precuneus, and increased connectivity between rTPJ/dmPFC and both IFG and MTG. Moreover, irrelevant replies activated right MTG along an anterior-posterior gradient as a function of the level of irrelevance. Our study provides novel evidence concerning how the language and theory-of-mind networks interact for pragmatic inference and how the processing of CI is modulated by level of contextual relevance. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Desjardins, Elia Nelson
2011-12-01
This dissertation examines the ways children use language to construct scientific knowledge in designed informal learning environments such as museums, aquariums, and zoos, with particular attention to autobiographical storytelling. This study takes as its foundation cultural-historical activity theory, defining learning as increased participation in meaningful, knowledge-based activity. It aims to improve experience design in informal learning environments by facilitating and building upon language interactions that are already in use by learners in these contexts. Fieldwork consists of audio recordings of individual children aged 4-12 as they explored a museum of science and technology with their families. Recordings were transcribed and coded according to the activity (task) and context (artifact/exhibit) in which the child was participating during each sequence of utterances. Additional evidence is provided by supplemental interviews with museum educators. Analysis suggests that short autobiographical stories can provide opportunities for learners to access metacognitive knowledge, for educators to assess learners' prior experience and knowledge, and for designers to engage affective pathways in order to increase participation that is both active and contemplative. Design implications are discussed and a design proposal for a distributed informal learning environment is presented.
Pragmatics in action: indirect requests engage theory of mind areas and the cortical motor network.
van Ackeren, Markus J; Casasanto, Daniel; Bekkering, Harold; Hagoort, Peter; Rueschemeyer, Shirley-Ann
2012-11-01
Research from the past decade has shown that understanding the meaning of words and utterances (i.e., abstracted symbols) engages the same systems we use to perceive and interact with the physical world in a content-specific manner. For example, understanding the word "grasp" elicits activation in the cortical motor network, that is, part of the neural substrate involved in planning and executing a grasping action. In the embodied literature, cortical motor activation during language comprehension is thought to reflect motor simulation underlying conceptual knowledge [note that outside the embodied framework, other explanations for the link between action and language are offered, e.g., Mahon, B. Z., & Caramazza, A. A critical look at the embodied cognition hypothesis and a new proposal for grounding conceptual content. Journal of Physiology, 102, 59-70, 2008; Hagoort, P. On Broca, brain, and binding: A new framework. Trends in Cognitive Sciences, 9, 416-423, 2005]. Previous research has supported the view that the coupling between language and action is flexible, and reading an action-related word form is not sufficient for cortical motor activation [Van Dam, W. O., van Dijk, M., Bekkering, H., & Rueschemeyer, S.-A. Flexibility in embodied lexical-semantic representations. Human Brain Mapping, doi: 10.1002/hbm.21365, 2011]. The current study goes one step further by addressing the necessity of action-related word forms for motor activation during language comprehension. Subjects listened to indirect requests (IRs) for action during an fMRI session. IRs for action are speech acts in which access to an action concept is required, although it is not explicitly encoded in the language. For example, the utterance "It is hot here!" in a room with a window is likely to be interpreted as a request to open the window. However, the same utterance in a desert will be interpreted as a statement.
The results indicate (1) that comprehension of IR sentences activates cortical motor areas reliably more than comprehension of sentences devoid of any implicit motor information. This is true despite the fact that IR sentences contain no lexical reference to action. (2) Comprehension of IR sentences also reliably activates substantial portions of the theory of mind network, known to be involved in making inferences about mental states of others. The implications of these findings for embodied theories of language are discussed.
Language as Description, Indication, and Depiction
Ferrara, Lindsay; Hodge, Gabrielle
2018-01-01
Signers and speakers coordinate a broad range of intentionally expressive actions within the spatiotemporal context of their face-to-face interactions (Parmentier, 1994; Clark, 1996; Johnston, 1996; Kendon, 2004). Varied semiotic repertoires combine in different ways, the details of which are rooted in the interactions occurring in a specific time and place (Goodwin, 2000; Kusters et al., 2017). However, intense focus in linguistics on conventionalized symbolic form/meaning pairings (especially those which are arbitrary) has obscured the importance of other semiotics in face-to-face communication. A consequence is that the communicative practices resulting from diverse ways of being (e.g., deaf, hearing) are not easily united into a global theoretical framework. Here we promote a theory of language that accounts for how diverse humans coordinate their semiotic repertoires in face-to-face communication, bringing together evidence from anthropology, semiotics, gesture studies and linguistics. Our aim is to facilitate direct comparison of different communicative ecologies. We build on Clark’s (1996) theory of language use as ‘actioned’ via three methods of signaling: describing, indicating, and depicting. Each method is fundamentally different to the other, and they can be used alone or in combination with others during the joint creation of multimodal ‘composite utterances’ (Enfield, 2009). We argue that a theory of language must be able to account for all three methods of signaling as they manifest within and across composite utterances. From this perspective, language—and not only language use—can be viewed as intentionally communicative action involving the specific range of semiotic resources available in situated human interactions. PMID:29875712
Inoculating against Jargonitis
ERIC Educational Resources Information Center
Sword, Helen
2012-01-01
Every discipline has its own specialized language, its membership rites, its secret handshake. In its most benign and neutral definition, jargon signifies "the technical terminology or characteristic idiom of a special activity or group." More often, however, the jingly word that Chaucer used to describe "the inarticulate utterance of birds" takes…
Authentic Discourse and the Survival English Curriculum.
ERIC Educational Resources Information Center
Cathcart, Ruth Larimer
1989-01-01
In-depth analysis of topic distribution, utterance functions, and structural and lexical elements in a doctor-patient interaction revealed significant differences between authentic discourse and English-as-a-Second-Language text discourse, suggesting a need for better collection of more authentic data, for a distributional analysis of…
1974-07-01
This document was generated by the Stanford Artificial Intelligence Laboratory's document compiler, "PUB", and reproduced on a...for more sophisticated artificial (programming) languages. The new issues became those of how to represent a grammar as precise syntactic structures...challenge lies in discovering - either by synthesis of an artificial system, or by analysis of a natural one - the underlying logical (as opposed to
Eisenberg, Sarita; Guo, Ling-Yu
2016-05-01
This article reviews the existing literature on the diagnostic accuracy of two grammatical accuracy measures for differentiating children with and without language impairment (LI) at preschool and early school age based on language samples. The first measure, the finite verb morphology composite (FVMC), is a narrow grammatical measure that computes children's overall accuracy of four verb tense morphemes. The second measure, percent grammatical utterances (PGU), is a broader grammatical measure that computes children's accuracy in producing grammatical utterances. The extant studies show that FVMC demonstrates acceptable (i.e., 80 to 89% accurate) to good (i.e., 90% accurate or higher) diagnostic accuracy for children between 4;0 (years;months) and 6;11 in conversational or narrative samples. In contrast, PGU yields acceptable to good diagnostic accuracy for children between 3;0 and 8;11 regardless of sample types. Given the diagnostic accuracy shown in the literature, we suggest that FVMC and PGU can be used as one piece of evidence for identifying children with LI in assessment when appropriate. However, FVMC or PGU should not be used as therapy goals directly. Instead, when children are low in FVMC or PGU, we suggest that follow-up analyses should be conducted to determine the verb tense morphemes or grammatical structures that children have difficulty with.
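The PGU measure described above reduces to a simple proportion over coded utterances: grammatical utterances divided by all scorable utterances. A hedged sketch, in which the 'G'/'U'/'X' coding scheme and the exclusion rule are illustrative assumptions rather than the published protocol:

```python
def percent_grammatical_utterances(codes):
    """Percent grammatical utterances (PGU) from per-utterance codes.

    `codes` is a list of codes assigned by a transcriber: 'G' for a
    grammatical utterance, 'U' for an ungrammatical one, and 'X' for
    utterances excluded from scoring (e.g., unintelligible).  These
    labels are assumptions for illustration, not the study's scheme.
    """
    scored = [c for c in codes if c in ('G', 'U')]   # drop excluded utterances
    if not scored:
        raise ValueError("no scorable utterances in sample")
    return 100.0 * scored.count('G') / len(scored)

print(percent_grammatical_utterances(['G', 'G', 'U', 'X', 'G']))  # → 75.0
```

FVMC is computed analogously, but over the four target verb tense morphemes (correct uses divided by obligatory contexts) rather than over whole utterances.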
Repair Negotiation by English L2 Learners
ERIC Educational Resources Information Center
Choi, Yujeong
2012-01-01
It is widely accepted that L2 learners often face communication problems due to lack of competency in the target language and familiarity with its culture of origin. One way to resolve miscommunication problems is to seek clarification of the utterance; this process is called "repair negotiation" (Nakahama et al. 2001). Repair…
Principles for Pragmatics Teaching: "Apologies" in the EFL Classroom
ERIC Educational Resources Information Center
Limberg, Holger
2015-01-01
Intercultural Communicative Competence is a paramount goal of modern foreign language teaching. It is the ability to communicate in culturally sensitive and contextually appropriate ways with speakers from other cultures. Being able to apologize is one component of this competence. Uttering apologies allows learners to rectify breaches of social…
Prosody Production and Perception with Conversational Speech
ERIC Educational Resources Information Center
Mo, Yoonsook
2010-01-01
Speech utterances are more than the linear concatenation of individual phonemes or words. They are organized by prosodic structures comprising phonological units of different sizes (e.g., syllable, foot, word, and phrase) and the prominence relations among them. As the linguistic structure of spoken languages, prosody serves an important function…
Call Combinations in Monkeys: Compositional or Idiomatic Expressions?
ERIC Educational Resources Information Center
Arnold, Kate; Zuberbuhler, Klaus
2012-01-01
Syntax is widely considered the feature that most decisively sets human language apart from other natural communication systems. Animal vocalisations are generally considered to be holistic with few examples of utterances meaning something other than the sum of their parts. Previously, we have shown that male putty-nosed monkeys produce call…
Spatial and Linguistic Aspects of Visual Imagery in Sentence Comprehension
ERIC Educational Resources Information Center
Bergen, Benjamin K.; Lindsay, Shane; Matlock, Teenie; Narayanan, Srini
2007-01-01
There is mounting evidence that language comprehension involves the activation of mental imagery of the content of utterances (Barsalou, 1999; Bergen, Chang, & Narayan, 2004; Bergen, Narayan, & Feldman, 2003; Narayan, Bergen, & Weinberg, 2004; Richardson, Spivey, McRae, & Barsalou, 2003; Stanfield & Zwaan, 2001; Zwaan, Stanfield, & Yaxley, 2002).…
Knowledge Representation and Natural-Language Semantics.
1986-11-07
involving things (Wittgenstein spoke of facts or states-of-affairs) that represent. Thus, it is not the hydrangea that carries information, it is the...assuming certain conditions on a's utterance - roughly, that it be intentional.) Let's connect this bit of wisdom with that coming from Wittgenstein. In
Creative Criticism: Dialogue and Aesthetics in the English Language Arts Classroom
ERIC Educational Resources Information Center
Blom, Nathan
2017-01-01
The author discusses the theoretical foundations for creative criticism, a term denoting the application of multimodal responses to literature for the construction of meaning. The author develops a theoretical framework from Bakhtin's principles of dialogical interpenetration, internally persuasive discourse, and utterance, as well as Dewey's…
Cognitive Skills Associated with the Onset of Multiword Utterances.
ERIC Educational Resources Information Center
Kelly, Charleen A.; Dale, Philip S.
1989-01-01
The relationship between early language and cognition was studied in 20 children between 1 and 2 years of age. Four cognitive areas were tested: object permanence, means-end, play, and imitation. Results indicated that specific cognitive skills seem temporally associated with some linguistic abilities, although attainment of skills can be…
43 CFR 423.22 - Interference with agency functions and disorderly conduct.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 43 Public Lands: Interior 1 2010-10-01 2010-10-01 false Interference with agency functions and..., AND WATERBODIES Rules of Conduct § 423.22 Interference with agency functions and disorderly conduct... behavior; (2) Language, utterance, gesture, display, or act that is obscene, physically threatening or...
Fourteen-Month-Olds' Decontextualized Understanding of Words for Absent Objects
ERIC Educational Resources Information Center
Hendrickson, Kristi; Sundara, Megha
2017-01-01
The majority of research examining infants' decontextualized word knowledge comes from studies in which words and pictures are presented simultaneously. However, comprehending utterances about unseen objects is a hallmark of language. Do infants demonstrate decontextualized absent object knowledge early in the second year of life? Further, to what…
Voice Modulations in German Ironic Speech
ERIC Educational Resources Information Center
Scharrer, Lisa; Christmann, Ursula; Knoll, Monja
2011-01-01
Previous research has shown that in different languages ironic speech is acoustically modulated compared to literal speech, and these modulations are assumed to aid the listener in the comprehension process by acting as cues that mark utterances as ironic. The present study was conducted to identify paraverbal features of German "ironic…
A Closer Look at Formulaic Language: Prosodic Characteristics of Swedish Proverbs
ERIC Educational Resources Information Center
Hallin, Anna Eva; Van Lancker Sidtis, Diana
2017-01-01
Formulaic expressions (such as idioms, proverbs, and conversational speech formulas) are currently a topic of interest. Examination of prosody in formulaic utterances, a less explored property of formulaic expressions, has yielded controversial views. The present study investigates prosodic characteristics of proverbs, as one type of formulaic…
Phonological Deficits in French Speaking Children with SLI
ERIC Educational Resources Information Center
Maillart, Christelle; Parisse, Christophe
2006-01-01
Background: This study investigated the phonological disorders of French-speaking children with specific language impairment (SLI) in production. Aims: The main goal was to confirm whether children with SLI have limitations in phonological ability as compared with normally developing children matched by mean length of utterance (MLU) and phonemic…
Evolution: Language Use and the Evolution of Languages
NASA Astrophysics Data System (ADS)
Croft, William
Language change can be understood as an evolutionary process. Language change occurs at two different timescales, corresponding to the two steps of the evolutionary process. The first timescale is very short, namely, the production of an utterance: this is where linguistic structures are replicated and language variation is generated. The second timescale is (or can be) very long, namely, the propagation of linguistic variants in the speech community: this is where certain variants are selected over others. At both timescales, the evolutionary process is driven by social interaction and the role language plays in it. An understanding of social interaction at the micro-level—face-to-face interactions—and at the macro-level—the structure of speech communities—gives us the basis for understanding the generation and propagation of language structures, and understanding the nature of language itself.
ERIC Educational Resources Information Center
Bonner, Timothy E.
2013-01-01
The study of language production by adults who are learning a second language (L2) has received a good deal of attention especially when it comes to omission of inflectional morphemes within L2 utterances. Several explanations have been proposed for these inflectional errors. One explanation is that the L2 learner simply does not have the L2…
ERIC Educational Resources Information Center
Black, Catherine
2001-01-01
Looks at the Quebecois TV show, "Un gars, Une fille," used in a university-level French course to teach the socio-cultural reality that underlies all linguistic utterances. Attempts to identify what makes the show more authentic than videos and CD ROMs that accompany most language textbooks in French.…
Genetic and Environmental Links Between Natural Language Use and Cognitive Ability in Toddlers.
Canfield, Caitlin F; Edelson, Lisa R; Saudino, Kimberly J
2017-03-01
Although the phenotypic correlation between language and nonverbal cognitive ability is well-documented, studies examining the etiology of the covariance between these abilities are scant, particularly in very young children. The goal of this study was to address this gap in the literature by examining the genetic and environmental links between language use, assessed through conversational language samples, and nonverbal cognition in a sample of 3-year-old twins (N = 281 pairs). Significant genetic and nonshared environmental influences were found for nonverbal cognitive ability and language measures, including mean length of utterance and number of different words, as well as significant genetic covariance between cognitive ability and both language measures. © 2016 The Authors. Child Development © 2016 Society for Research in Child Development, Inc.
Dilley, Laura C; Wieland, Elizabeth A; Gamache, Jessica L; McAuley, J Devin; Redford, Melissa A
2013-02-01
As children mature, changes in voice spectral characteristics co-vary with changes in speech, language, and behavior. In this study, spectral characteristics were manipulated to alter the perceived ages of talkers' voices while leaving critical acoustic-prosodic correlates intact, to determine whether perceived age differences were associated with differences in judgments of prosodic, segmental, and talker attributes. Speech was modified by lowering formants and fundamental frequency, for 5-year-old children's utterances, or raising them, for adult caregivers' utterances. Next, participants differing in awareness of the manipulation (Experiment 1A) or amount of speech-language training (Experiment 1B) made judgments of prosodic, segmental, and talker attributes. Experiment 2 investigated the effects of spectral modification on intelligibility. Finally, in Experiment 3, trained analysts used formal prosody coding to assess prosodic characteristics of spectrally modified and unmodified speech. Differences in perceived age were associated with differences in ratings of speech rate, fluency, intelligibility, likeability, anxiety, cognitive impairment, and speech-language disorder/delay; effects of training and awareness of the manipulation on ratings were limited. There were no significant effects of the manipulation on intelligibility or formally coded prosody judgments. Age-related voice characteristics can greatly affect judgments of speech and talker characteristics, raising cautionary notes for developmental research and clinical work.
Experimental pragmatics: a Gricean turn in the study of language.
Noveck, Ira A; Reboul, Anne
2008-11-01
Discerning the meaning of an utterance requires not only mastering grammar and knowing the meanings of words but also understanding the communicative (i.e., pragmatic) features of language. Although it has been an ever-present aspect of linguistic analyses and discussions, it is only over the last ten years or so that cognitive scientists have been investigating, in a concerted fashion, the pragmatic features of language experimentally. We begin by highlighting Paul Grice's contributions to ordinary language philosophy and show how it has led to this active area of experimental investigation. We then focus on two exemplary phenomena--'scalar inference' and 'reference resolution'--before considering other topics that fit into the paradigm known as 'experimental pragmatics'.
Language in boys with fragile X syndrome.
Levy, Yonata; Gottesman, Riki; Borochowitz, Zvi; Frydman, Moshe; Sagi, Michal
2006-02-01
The current paper reports on language production in 15 Hebrew-speaking boys, aged 9;0-13;0, with fully methylated, non-mosaic fragile X syndrome and no concomitant diagnosis of autism. Contrary to expectations, seven children were non-verbal. Language production in the verbal children was studied in free conversations and in context-bound speech. Despite extra caution in calculating MLU, participants' language level was not predicted by mean utterance length. Context-bound speech resulted in grammatically more advanced performance than free conversation, and performance in both contexts differed in important ways from performance of typically developing MLU-matched controls. The relevance of MLU as a predictor of productive grammar in disordered populations is briefly discussed.
Multiple levels of bilingual language control: evidence from language intrusions in reading aloud.
Gollan, Tamar H; Schotter, Elizabeth R; Gomez, Joanne; Murillo, Mayra; Rayner, Keith
2014-02-01
Bilinguals rarely produce words in an unintended language. However, we induced such intrusion errors (e.g., saying el instead of he) in 32 Spanish-English bilinguals who read aloud single-language (English or Spanish) and mixed-language (haphazard mix of English and Spanish) paragraphs with English or Spanish word order. These bilinguals produced language intrusions almost exclusively in mixed-language paragraphs, and most often when attempting to produce dominant-language targets (accent-only errors also exhibited reversed language-dominance effects). Most intrusion errors occurred for function words, especially when they were not from the language that determined the word order in the paragraph. Eye movements showed that fixating a word in the nontarget language increased intrusion errors only for function words. Together, these results imply multiple mechanisms of language control, including (a) inhibition of the dominant language at both lexical and sublexical processing levels, (b) special retrieval mechanisms for function words in mixed-language utterances, and (c) attentional monitoring of the target word for its match with the intended language.
Teaching Children with Autism to Detect and Respond to Sarcasm
ERIC Educational Resources Information Center
Persicke, Angela; Tarbox, Jonathan; Ranick, Jennifer; St. Clair, Megan
2013-01-01
Previous research has demonstrated that children with autism often have difficulty using and understanding non-literal language (e.g., irony, sarcasm, deception, humor, and metaphors). Irony and sarcasm may be especially difficult for children with autism because the meaning of an utterance is the opposite of what is stated. The current study…
The Impact of Memory Demands on Audience Design during Language Production
ERIC Educational Resources Information Center
Horton, W.S.; Gerrig, R.J.
2005-01-01
Speakers often tailor their utterances to the needs of particular addressees, a process called audience design. We argue that important aspects of audience design can be understood as emergent features of ordinary memory processes. This perspective contrasts with earlier views that presume special processes or representations. To support our…
Neural Responses to the Production and Comprehension of Syntax in Identical Utterances
ERIC Educational Resources Information Center
Indefrey, Peter; Hellwig, Frauke; Herzog, Hans; Seitz, Rudiger J.; Hagoort, Peter
2004-01-01
Following up on an earlier positron emission tomography (PET) experiment (Indefrey et al., 2001), we used a scene description paradigm to investigate whether a posterior inferior frontal region subserving syntactic encoding for speaking is also involved in syntactic parsing during listening. In the language production part of the experiment,…
The Impact of Teachers' Commenting Strategies on Children's Vocabulary Growth
ERIC Educational Resources Information Center
Barnes, Erica M.; Dickinson, David K.
2017-01-01
We examined the relations between teachers' use of comments during book reading sessions in preschool classrooms and the vocabulary growth of children with low and moderately low language ability. Using data from a larger randomized controlled trial, we analyzed comments defined as utterances that give, explain, expand, or define. Comments were…
Teacher Scaffolding of Oral Language Production
ERIC Educational Resources Information Center
George, May G.
2011-01-01
This research involved two observational studies. It explored the scaffolding processes as part of classroom pedagogy. The research shed light on the way a teacher's instructional methodology took shape in the classroom. The target event for this study was the time in which a novice learner was engaged publicly in uttering a sentence in Arabic in…
Nursery Rhymes: Foundation for Learning
ERIC Educational Resources Information Center
Kenney, Susan
2005-01-01
The article considers nursery rhymes as the foundation for learning. It is said that nursery rhymes carry all the parts of language that lead to speaking and reading. Because rhymes are short, they are easy for children to repeat, and become some of the first sentences children utter. The rhymes expand vocabulary, exposing children to words they…
Integrating Linguistic, Motor, and Perceptual Information in Language Production
ERIC Educational Resources Information Center
Frank, Austin F.
2011-01-01
Speakers show remarkable adaptability in updating and correcting their utterances in response to changes in the environment. When an interlocutor raises an eyebrow or the AC kicks on and introduces ambient noise, it seems that speakers are able to quickly integrate this information into their speech plans and adapt appropriately. This ability to…
Characterizing the Bilingual Disadvantage in Noun Phrase Production
ERIC Educational Resources Information Center
Sadat, Jasmin; Martin, Clara D.; Alario, F. Xavier; Costa, Albert
2012-01-01
Up to now, evidence on bilingual disadvantages in language production comes from tasks requiring single word retrieval. The present study aimed to assess whether there is a bilingual disadvantage in multiword utterances, and to determine the extent to which such effect is present in onset latencies, articulatory durations, or both. To do so, we…
Variability in Phonetics. York Papers in Linguistics, No. 6.
ERIC Educational Resources Information Center
Tatham, M. A. A.
Variability is a term used to cover several types of phenomena in language sound patterns and in phonetic realization of those patterns. Variability refers to the fact that every repetition of an utterance is different, in amplitude, rate of delivery, formant frequencies, fundamental frequency or minor phase relationship changes across the sound…
Sutton, Ann; Trudeau, Natacha; Morford, Jill; Rios, Monica; Poirier, Marie-Andrée
2010-01-01
Children who require augmentative and alternative communication (AAC) systems while they are in the process of acquiring language face unique challenges because they use graphic symbols for communication. In contrast to the situation of typically developing children, they use different modalities for comprehension (auditory) and expression (visual). This study explored the ability of three- and four-year-old children without disabilities to perform tasks involving sequences of graphic symbols. Thirty participants were asked to transpose spoken simple sentences into graphic symbols by selecting individual symbols corresponding to the spoken words, and to interpret graphic symbol utterances by selecting one of four photographs corresponding to a sequence of three graphic symbols. The results showed that these were not simple tasks for the participants, and few of them performed in the expected manner: only one in transposition, and only one-third of participants in interpretation. Individual response strategies in some cases led to contrasting response patterns. Children at this age level have not yet developed the skills required to deal with graphic symbols even though they have mastered the corresponding spoken language structures.
Learning word order at birth: A NIRS study.
Benavides-Varela, Silvia; Gervain, Judit
2017-06-01
In language, the relative order of words in sentences carries important grammatical functions. However, the developmental origins and the neural correlates of the ability to track word order are to date poorly understood. The current study therefore investigates the origins of infants' ability to learn about the sequential order of words, using near-infrared spectroscopy (NIRS) with newborn infants. We have conducted two experiments: one in which a word order change was implemented in 4-word sequences recorded with a list intonation (as if each word was a separate item in a list; list prosody condition, Experiment 1) and one in which the same 4-word sequences were recorded with a well-formed utterance-level prosodic contour (utterance prosody condition, Experiment 2). We found that newborns could detect the violation of the word order in the list prosody condition, but not in the utterance prosody condition. These results suggest that while newborns are already sensitive to word order in linguistic sequences, prosody appears to be a stronger cue than word order for the identification of linguistic units at birth. Copyright © 2017. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Carlsen, William S.
This article describes the effects of science teacher subject-matter knowledge on classroom discourse at the level of individual utterances. It details one of three parallel analyses conducted in a year-long study of language in the classrooms of four new biology teachers. The conceptual framework of the study predicts that when teaching unfamiliar subject matter, teachers use a variety of discourse strategies to constrain student talk to a narrowly circumscribed topic domain. This article includes the results of an utterance-by-utterance analysis of teacher and student talk in a 30-lesson sample of science instruction. Data are broken down by classroom activity (e.g., lecture, laboratory, group work) for several measures, including mean duration of utterances, domination of the speaking floor by the teacher, frequency of teacher questioning, cognitive level of teacher questions, and student verbal participation. When teaching unfamiliar topics, the four teachers in this study tended to talk more often and for longer periods of time, ask questions frequently, and rely heavily on low cognitive level questions. The rate of student questions to the teacher varied with classroom activity. In common classroom communicative settings, student questions were less common when the teacher was teaching unfamiliar subject matter. The implications of these findings include a suggestion that teacher knowledge may be an important unconsidered variable in research on the cognitive level of questions and teacher wait-time.
A computational neural model of goal-directed utterance selection.
Klein, Michael; Kamp, Hans; Palm, Guenther; Doya, Kenji
2010-06-01
It is generally agreed that much of human communication is motivated by extra-linguistic goals: we often make utterances in order to get others to do something, or to make them support our cause, or adopt our point of view, etc. However, thus far a computational foundation for this view on language use has been lacking. In this paper we propose such a foundation using Markov Decision Processes. We borrow computational components from the field of action selection and motor control, where a neurobiological basis of these components has been established. In particular, we make use of internal models (i.e., next-state transition functions defined on current state action pairs). The internal model is coupled with reinforcement learning of a value function that is used to assess the desirability of any state that utterances (as well as certain non-verbal actions) can bring about. This cognitive architecture is tested in a number of multi-agent game simulations. In these computational experiments an agent learns to predict the context-dependent effects of utterances by interacting with other agents that are already competent speakers. We show that the cognitive architecture can account for acquiring the capability of deciding when to speak in order to achieve a certain goal (instead of performing a non-verbal action or simply doing nothing), whom to address and what to say. Copyright 2010 Elsevier Ltd. All rights reserved.
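The architecture described above (an internal next-state transition model defined on state-action pairs, coupled with a learned value function used to assess the desirability of states that utterances and non-verbal actions can bring about) can be illustrated with a minimal, self-contained sketch. The state space, the split between verbal and non-verbal actions, and the reward values below are illustrative assumptions for exposition, not the paper's actual model; value iteration over the internal model stands in for the reinforcement learning used there.

```python
# Minimal sketch of goal-directed utterance selection as a Markov Decision
# Process. States, actions, and rewards here are invented for illustration.

STATES = ["door_closed", "door_open", "goal_reached"]
# Actions mix an utterance ("say_request") with non-verbal options.
ACTIONS = ["say_request", "act_open", "wait"]

# Internal model: next-state transition function on (state, action) pairs.
# Speaking a request gets another agent to open the door; acting opens it
# directly; waiting in an open doorway lets the agent pass through.
TRANSITIONS = {
    ("door_closed", "say_request"): "door_open",
    ("door_closed", "act_open"): "door_open",
    ("door_closed", "wait"): "door_closed",
    ("door_open", "say_request"): "door_open",
    ("door_open", "act_open"): "door_open",
    ("door_open", "wait"): "goal_reached",
    ("goal_reached", "say_request"): "goal_reached",
    ("goal_reached", "act_open"): "goal_reached",
    ("goal_reached", "wait"): "goal_reached",
}

def reward(state, action, next_state):
    """Reaching the goal is rewarded; acting costs more effort than speaking
    (an arbitrary assumption that makes speech worthwhile)."""
    r = 10.0 if next_state == "goal_reached" and state != "goal_reached" else 0.0
    if action == "act_open":
        r -= 2.0
    elif action == "say_request":
        r -= 0.5
    return r

def value_iteration(gamma=0.9, n_iters=100):
    """Compute a state-value function from the internal model alone."""
    V = {s: 0.0 for s in STATES}
    for _ in range(n_iters):
        V = {
            s: max(
                reward(s, a, TRANSITIONS[(s, a)]) + gamma * V[TRANSITIONS[(s, a)]]
                for a in ACTIONS
            )
            for s in STATES
        }
    return V

def best_action(state, V, gamma=0.9):
    """Decide whether to speak, act, or do nothing, by expected value."""
    return max(
        ACTIONS,
        key=lambda a: reward(state, a, TRANSITIONS[(state, a)])
        + gamma * V[TRANSITIONS[(state, a)]],
    )

V = value_iteration()
print(best_action("door_closed", V))  # prints "say_request"
```

With these toy rewards, the agent learns that uttering a request is the cheapest way to bring about a desirable state, capturing the paper's core idea that when to speak, and what to say, falls out of ordinary action selection over a learned value function.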
Co-development of manner and path concepts in language, action, and eye-gaze behavior.
Lohan, Katrin S; Griffiths, Sascha S; Sciutti, Alessandra; Partmann, Tim C; Rohlfing, Katharina J
2014-07-01
In order for artificial intelligent systems to interact naturally with human users, they need to be able to learn from human instructions when actions should be imitated. Human tutoring will typically consist of action demonstrations accompanied by speech. In the following, the characteristics of human tutoring during action demonstration will be examined. A special focus will be put on the distinction between two kinds of motion events: path-oriented actions and manner-oriented actions. Such a distinction is inspired by the literature pertaining to cognitive linguistics, which indicates that the human conceptual system can distinguish these two distinct types of motion. These two kinds of actions are described in language by more path-oriented or more manner-oriented utterances. In path-oriented utterances, the source, trajectory, or goal is emphasized, whereas in manner-oriented utterances the medium, velocity, or means of motion are highlighted. We examined a video corpus of adult-child interactions comprising three age groups of children (pre-lexical, early lexical, and lexical) and two different tasks, one emphasizing manner more strongly and one emphasizing path more strongly. We analyzed the language and motion of the caregiver and the gazing behavior of the child to highlight the differences between the tutoring and the acquisition of the manner and path concepts. The results suggest that age is an important factor in the development of these action categories. The analysis of this corpus has also been exploited to develop an intelligent robotic behavior, the tutoring spotter system, able to emulate children's behaviors in a tutoring situation, with the aim of evoking in human subjects a natural and effective behavior in teaching to a robot.
The findings related to the development of manner and path concepts have been used to implement new effective feedback strategies in the tutoring spotter system, which should provide improvements in human-robot interaction. Copyright © 2014 Cognitive Science Society, Inc.
Cultural analysis of communication behaviors among juveniles in a correctional facility.
Sanger, D D; Creswell, J W; Dworak, J; Schultz, L
2000-01-01
This study addressed communication behaviors of female juvenile delinquents in a correctional facility. Qualitative methodology was used to study 78 participants ranging in age from 13.1 to 18.9 (years; months), over a five-month period. Data collection consisted of observations, participant observation, interviews, and a review of documents. Additionally, participants were tested on the Clinical Evaluation of Language Fundamentals-3. Listening and following rules, utterance types, topics of conversation, politeness, and conversational management emerged as themes. Findings indicated that as many as 22% of participants were potential candidates for language services. Implications for speech-language pathologists (SLPs) providing communication services are discussed.
Wu, Ying Choon; Coulson, Seana
2015-11-01
To understand a speaker's gestures, people may draw on kinesthetic working memory (KWM), a system for temporarily remembering body movements. The present study explored whether sensitivity to gesture meaning was related to differences in KWM capacity. KWM was evaluated through sequences of novel movements that participants viewed and reproduced with their own bodies. Gesture sensitivity was assessed through a priming paradigm. Participants judged whether multimodal utterances containing congruent, incongruent, or no gestures were related to subsequent picture probes depicting the referents of those utterances. Individuals with low KWM were primarily inhibited by incongruent speech-gesture primes, whereas those with high KWM showed facilitation; that is, they were able to identify picture probes more quickly when preceded by congruent speech and gestures than by speech alone. Group differences were most apparent for discourse with weakly congruent speech and gestures. Overall, speech-gesture congruency effects were positively correlated with KWM abilities, which may help listeners match spatial properties of gestures to concepts evoked by speech. © The Author(s) 2015.
Dave, Shruti; Mastergeorge, Ann M; Olswang, Lesley B
2018-07-01
Responsive parental communication during an infant's first year has been positively associated with later language outcomes. This study explores responsivity in mother-infant communication by modeling how change in guiding language between 7 and 11 months influences toddler vocabulary development. In a group of 32 mother-child dyads, change in early maternal guiding language positively predicted child language outcomes measured at 18 and 24 months. In contrast, a number of other linguistic variables, including total utterances and non-guiding language, did not correlate with toddler vocabulary development, suggesting a critical role of responsive change in infant-directed communication. We further assessed whether maternal affect during early communication influenced toddler vocabulary outcomes, finding that dominant affect during early mother-infant communications correlated with lower child language outcomes. These findings provide evidence that responsive parenting should not only be assessed longitudinally, but unique contributions of language and affect should also be concurrently considered in future study.
ERIC Educational Resources Information Center
Horton, William S.
2007-01-01
In typical interactions, speakers frequently produce utterances that appear to reflect beliefs about the common ground shared with particular addressees. Horton and Gerrig (2005a) proposed that one important basis for audience design is the manner in which conversational partners serve as cues for the automatic retrieval of associated information…
Generation and Evaluation of User Tailored Responses in Multimodal Dialogue
ERIC Educational Resources Information Center
Walker, M. A.; Whittaker, S. J.; Stent, A.; Maloor, P.; Moore, J.; Johnston, M.; Vasireddy, G.
2004-01-01
When people engage in conversation, they tailor their utterances to their conversational partners, whether these partners are other humans or computational systems. This tailoring, or adaptation to the partner takes place in all facets of human language use, and is based on a "mental model" or a "user model" of the conversational partner. Such…
Abstract Knowledge of Word Order by 19 Months: An Eye-Tracking Study
ERIC Educational Resources Information Center
Franck, Julie; Millotte, Severine; Posada, Andres; Rizzi, Luigi
2013-01-01
Word order is one of the earliest aspects of grammar that the child acquires, because her early utterances already respect the basic word order of the target language. However, the question of the nature of early syntactic representations is subject to debate. Approaches inspired by formal syntax assume that the head-complement order,…
ERIC Educational Resources Information Center
Cintrón-Valentín, Myrna; Ellis, Nick C.
2015-01-01
Eye-tracking was used to investigate the attentional processes whereby different types of focus on form (FonF) instruction assist learners in overcoming learned attention and blocking effects in their online processing of second language input. English native speakers viewed Latin utterances combining lexical and morphological cues to temporality…
ERIC Educational Resources Information Center
Garcia-Ponce, Edgar Emmanuell; Mora-Pablo, Irasema
2017-01-01
Extensive research literature suggests that corrective feedback (CF), when effective, has a beneficial impact on the development of learners' interlanguage. This is because CF provides learners with language data concerning the correctness of their utterances and thus pushes their oral production towards greater clarity, accuracy and…
Expressive Language during Conversational Speech in Boys with Fragile X Syndrome
ERIC Educational Resources Information Center
Roberts, Joanne E.; Hennon, Elizabeth A.; Price, Johanna R.; Dear, Elizabeth; Anderson, Kathleen; Vandergrift, Nathan A.
2007-01-01
We compared the expressive syntax and vocabulary skills of 35 boys with fragile X syndrome and 27 younger typically developing boys who were at similar nonverbal mental levels. During a conversational speech sample, the boys with fragile X syndrome used shorter, less complex utterances and produced fewer different words than did the typically…
How Do Children Restrict Their Linguistic Generalizations? An (Un- )Grammaticality Judgment Study
ERIC Educational Resources Information Center
Ambridge, Ben
2013-01-01
A paradox at the heart of language acquisition research is that, to achieve adult-like competence, children must acquire the ability to generalize verbs into non-attested structures, while avoiding utterances that are deemed ungrammatical by native speakers. For example, children must learn that, to denote the reversal of an action,…
ERIC Educational Resources Information Center
Gerde, Hope K.; Powell, Douglas R.
2009-01-01
Research Findings: An observational study of 60 Head Start teachers and 341 children (177 boys, 164 girls) enrolled in their classrooms found teachers' book-reading practices to predict growth in children's receptive vocabulary. Multilevel growth analyses indicated that children in classrooms where teachers used more book-focused utterances made…
Schönberger, Eva; Heim, Stefan; Meffert, Elisabeth; Pieperhoff, Peter; da Costa Avelar, Patricia; Huber, Walter; Binkofski, Ferdinand; Grande, Marion
2014-01-01
Functional brain imaging studies have improved our knowledge of the neural localization of language functions and the functional reorganization after a lesion. However, the neural correlates of agrammatic symptoms in aphasia remain largely unknown. The present fMRI study examined the neural correlates of morpho-syntactic encoding and agrammatic errors in continuous language production by combining three approaches. First, the neural mechanisms underlying natural morpho-syntactic processing in a picture description task were analyzed in 15 healthy speakers. Second, agrammatic-like speech behavior was induced in the same group of healthy speakers to study the underlying functional processes by limiting the utterance length. In a third approach, five agrammatic participants performed the picture description task to gain insight into the neural correlates of agrammatism and the functional reorganization of language processing after stroke. In all approaches, utterances were analyzed for syntactic completeness, complexity, and morphology. Event-related data analysis was conducted by defining every clause-like unit (CLU) as an event with its onset-time and duration. Agrammatic and correct CLUs were contrasted. Due to the small sample size as well as heterogeneous lesion sizes and sites with lesion foci in the insula lobe, inferior frontal, superior temporal and inferior parietal areas the activation patterns in the agrammatic speakers were analyzed on a single subject level. In the group of healthy speakers, posterior temporal and inferior parietal areas were associated with greater morpho-syntactic demands in complete and complex CLUs. The intentional manipulation of morpho-syntactic structures and the omission of function words were associated with additional inferior frontal activation. Overall, the results revealed that the investigation of the neural correlates of agrammatic language production can be reasonably conducted with an overt language production paradigm.
PMID:24711802
Language as a multimodal phenomenon: implications for language learning, processing and evolution
Vigliocco, Gabriella; Perniss, Pamela; Vinson, David
2014-01-01
Our understanding of the cognitive and neural underpinnings of language has traditionally been firmly based on spoken Indo-European languages and on language studied as speech or text. However, in face-to-face communication, language is multimodal: speech signals are invariably accompanied by visual information on the face and in manual gestures, and sign languages deploy multiple channels (hands, face and body) in utterance construction. Moreover, the narrow focus on spoken Indo-European languages has entrenched the assumption that language consists wholly of an arbitrary system of symbols and rules. However, iconicity (i.e. resemblance between aspects of communicative form and meaning) is also present: speakers use iconic gestures when they speak; many non-Indo-European spoken languages exhibit a substantial amount of iconicity in word forms and, finally, iconicity is the norm, rather than the exception, in sign languages. This introduction provides the motivation for taking a multimodal approach to the study of language learning, processing and evolution, and discusses the broad implications of shifting our current dominant approaches and assumptions to encompass multimodal expression in both signed and spoken languages. PMID:25092660
Referential communication abilities in children with 22q11.2 deletion syndrome.
Van Den Heuvel, Ellen; Reuterskiöld, Christina; Solot, Cynthia; Manders, Eric; Swillen, Ann; Zink, Inge
2017-10-01
This study describes the performance on a perspective- and role-taking task in 27 children, ages 6-13 years, with 22q11.2 deletion syndrome (22q11.2DS). A cross-cultural design comparing Dutch- and English-speaking children with 22q11.2DS explored the possibility of cultural differences. Chronologically age-matched and younger typically developing (TD) children matched for receptive vocabulary served as control groups to identify challenges in referential communication. The utterances of children with 22q11.2DS were characterised as short and simple in lexical and grammatical terms. However, from a language use perspective, their utterances were verbose, ambiguous and irrelevant given the pictured scenes. They tended to elaborate on visual details and conveyed off-topic, extraneous information when participating in a barrier-game procedure. Both types of aberrant utterances forced a listener to consistently infer the intended message. Moreover, children with 22q11.2DS demonstrated difficulty selecting correct speech acts in accordance with contextual cues during a role-taking task. Both English- and Dutch-speaking children with 22q11.2DS showed impoverished information transfer and an increased number of elaborations, suggesting a cross-cultural syndrome-specific feature.
ERIC Educational Resources Information Center
Scullard, Sue
1986-01-01
The task of the teacher of foreign languages is to enable the students to progress gradually from teacher/coursebook controlled utterances to complete linguistic autonomy. Role play and a progression of information-gap activities are discussed in terms of developing students' personal autonomy at each level of linguistic competence. (Author/LMO)
The Role of the Secondary Stress in Teaching the English Rhythm
ERIC Educational Resources Information Center
Yurtbasi, Metin
2017-01-01
In the phonological literature in English, which is a stress-timed language, the existence of at least three levels of stress is usually taken for granted. Words, phrases, utterances or sentences have a prominent element in one of their syllables, which usually correlates with a partner in the same unit, called the secondary stress. It so happens…
One or More Labels on the Bottles? Notional Concord in Dutch and French.
ERIC Educational Resources Information Center
Vigliocco, Gabriella; And Others
1996-01-01
Investigated the effects of the number of tokens in the conceptual representation of the to-be-uttered subject noun phrase in experiments in Dutch and French, in which subject-verb agreement errors were induced. Findings revealed a distributivity effect in both languages, supporting an account in which neither null nor post-verbal subjects are the…
ERIC Educational Resources Information Center
Huensch, Amanda; Tracy-Ventura, Nicole
2017-01-01
This study investigated second language fluency development over a nearly 2-year period which included an academic year abroad and the year immediately following the participants' return to their home university. Data from 24 L1 English learners of Spanish were collected 6 times: once before, 3 times during, and 2 times after a 9-month stay…
ERIC Educational Resources Information Center
O'Reilly, Naziya
2017-01-01
In recent years restorative practice in schools has been heralded as a new paradigm for thinking about student behaviour. Its premise is to provide solutions to indiscipline, to restore relationships where there has been conflict or harm, and to give pupils a language with which to understand wrongdoing. This article offers a critique of…
The effects of mands and models on the speech of unresponsive language-delayed preschool children.
Warren, S F; McQuarter, R J; Rogers-Warren, A K
1984-02-01
The effects of the systematic use of mands (non-yes/no questions and instructions to verbalize), models (imitative prompts), and specific consequent events on the productive verbal behavior of three unresponsive, socially isolate, language-delayed preschool children were investigated in a multiple-baseline design within a classroom free play period. Following a lengthy intervention condition, experimental procedures were systematically faded out to check for maintenance effects. The treatment resulted in increases in total verbalizations and nonobligatory speech (initiations) by the subjects. Subjects also became more responsive in obligatory speech situations. In a second free play (generalization) setting, increased rates of total child verbalizations and nonobligatory verbalizations were observed for all three subjects, and two of the three subjects were more responsive compared to their baselines in the first free play setting. Rate of total teacher verbalizations and questions were also higher in this setting. Maintenance of the treatment effects was shown during the fading condition in the intervention setting. The subjects' MLUs (mean length of utterance) increased during the intervention condition when the teacher began prompting a minimum of two-word utterances in response to a mand or model.
Soller, R. William; Chan, Philip; Higa, Amy
2012-01-01
Background Language barriers are significant hurdles for chronic disease patients in achieving self-management goals of therapy, particularly in settings where practitioners have limited nonprimary language skills, and in-person translators may not always be available. S-MINDS© (Speaking Multilingual Interactive Natural Dialog System), a concept-based speech translation approach developed by Fluential Inc., can be applied to bridge the technologic gaps that limit the complexity and length of utterances that can be recognized and translated by devices, and it has the potential to broaden access to translation services in clinical settings. Methods The prototype translation system was evaluated prospectively for accuracy and patient satisfaction in underserved Spanish-speaking patients with diabetes and limited English proficiency and was compared with other commercial systems for robustness against degradation of translation due to ambient noise and speech patterns. Results Accuracy related to translating the English–Spanish–English communication string from practitioner to device to patient to device to practitioner was high (97–100%). Patient satisfaction was high (means of 4.7–4.9 over four domains on a 5-point Likert scale). The device outperformed three other commercial speech translation systems in terms of accuracy during fast speech utterances, under quiet and noisy fluent speech conditions, and when challenged with various speech disfluencies (i.e., fillers, false starts, stutters, repairs, and long pauses). Conclusions A concept-based English–Spanish speech translation system has been successfully developed in prototype form that can accept long utterances (up to 20 words) with limited to no degradation in accuracy. The functionality of the system is superior to leading commercial speech translation systems. PMID:22920821
Vitevitch, Michael S.
2008-01-01
A comparison of the lexical characteristics of 88 auditory misperceptions (i.e., slips of the ear) showed no difference in word-frequency, neighborhood density, and neighborhood frequency between the actual and the perceived utterances. Another comparison of slip of the ear tokens (i.e., actual and perceived utterances) and words in general (i.e., randomly selected from the lexicon) showed that slip of the ear tokens had denser neighborhoods and higher neighborhood frequency than words in general, as predicted from laboratory studies. Contrary to prediction, slip of the ear tokens were higher in frequency of occurrence than words in general. Additional laboratory-based investigations examined the possible source of the contradictory word frequency finding, highlighting the importance of using naturalistic and experimental data to develop models of spoken language processing. PMID:12866911
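The neighborhood measures in the abstract above follow a standard definition: a word's phonological neighbors are the words formed by a single phoneme substitution, insertion, or deletion. A minimal sketch of that computation, with an invented toy lexicon of phoneme-string transcriptions:

```python
# Hedged sketch: phonological neighborhood density over a toy lexicon.
# Words are phoneme strings; the lexicon and transcriptions are invented.

def is_neighbor(a, b):
    """True if b differs from a by one substitution, insertion, or deletion."""
    if a == b:
        return False
    la, lb = len(a), len(b)
    if abs(la - lb) > 1:
        return False
    if la == lb:
        # same length: exactly one substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    # lengths differ by one: deleting one phoneme from the longer must yield the shorter
    short, long_ = (a, b) if la < lb else (b, a)
    return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

def neighborhood_density(word, lexicon):
    """Number of lexicon words that are neighbors of `word`."""
    return sum(is_neighbor(word, w) for w in lexicon)

lexicon = ["kat", "bat", "kab", "kt", "skat", "dog"]
print(neighborhood_density("kat", lexicon))  # bat, kab, kt, skat -> 4
```

Neighborhood frequency, the other measure named above, would then be the mean occurrence frequency of those neighbors rather than their count.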
Chon, HeeCheong; Sawyer, Jean; Ambrose, Nicoline G.
2014-01-01
Purpose The purpose of this study was to investigate characteristics of four types of utterances in preschool children who stutter: perceptually fluent, containing normal disfluencies (OD utterance), containing stuttering-like disfluencies (SLD utterance), and containing both normal and stuttering-like disfluencies (SLD+OD utterance). Articulation rate and length of utterance were measured to seek the differences. Because articulation rate may reflect temporal aspects of speech motor control, it was predicted that the articulation rate would be different between perceptually fluent utterances and utterances containing disfluencies. The length of utterance was also expected to show different patterns. Method Participants were 14 preschool children who stutter. Disfluencies were identified from their spontaneous speech samples, and articulation rate in syllables per second and utterance length in syllables were measured for the four types of utterances. Results and discussion There was no significant difference in articulation rate between each type of utterance. Significantly longer utterances were found only in SLD+OD utterances compared to fluent utterances, suggesting that utterance length may be related to efforts in executing motor as well as linguistic planning. The SLD utterance revealed a significant negative correlation in that longer utterances tended to be slower in articulation rates. Longer utterances may place more demand on speech motor control due to more linguistic and/or grammatical features, resulting in stuttering-like disfluencies and a decreased rate. PMID:22995336
Prosodic Temporal Alignment of Co-Speech Gestures to Speech Facilitates Referent Resolution
ERIC Educational Resources Information Center
Jesse, Alexandra; Johnson, Elizabeth K.
2012-01-01
Using a referent detection paradigm, we examined whether listeners can determine the object speakers are referring to by using the temporal alignment between the motion speakers impose on objects and their labeling utterances. Stimuli were created by videotaping speakers labeling a novel creature. Without being explicitly instructed to do so,…
Word frequency cues word order in adults: cross-linguistic evidence
Gervain, Judit; Sebastián-Gallés, Núria; Díaz, Begoña; Laka, Itziar; Mazuka, Reiko; Yamane, Naoto; Nespor, Marina; Mehler, Jacques
2013-01-01
One universal feature of human languages is the division between grammatical functors and content words. From a learnability point of view, functors might provide entry points or anchors into the syntactic structure of utterances due to their high frequency. Despite its potentially universal scope, this hypothesis has not yet been tested on typologically different languages and on populations of different ages. Here we report a corpus study and an artificial grammar learning experiment testing the anchoring hypothesis in Basque, Japanese, French, and Italian adults. We show that adults are sensitive to the distribution of functors in their native language and use them when learning new linguistic material. However, compared to infants' performance on a similar task, adults exhibit a slightly different behavior, matching the frequency distributions of their native language more closely than infants do. This finding bears on the issue of the continuity of language learning mechanisms. PMID:24106483
ERIC Educational Resources Information Center
Patkowski, Mark
2014-01-01
Previously published corpora of two-word utterances by three chimpanzees and three human children were compared to determine whether, as has been claimed, apes possess the same basic syntactic and semantic capacities as 2-year old children. Some similarities were observed in the type of semantic relations expressed by the two groups; however,…
ERIC Educational Resources Information Center
Rad, Shadi Khojasteh; Abdullah, Ain Nadzimah
2012-01-01
Hesitation strategies appear in speech in the form of filled or unfilled pauses, paralinguistic markers like nervous laughter or coughing, or signals which are used to justify units in the coming utterances in which the speaker struggles to produce. The main functions of these forms of hesitation strategies have been associated with speech…
Information transfer and shared mental models for decision making
NASA Technical Reports Server (NTRS)
Orasanu, Judith; Fischer, Ute
1991-01-01
A study to determine how communication influences flight crew performance is presented. This analysis focuses on the content of communication, principally asking what an utterance does from a cognitive, problem solving viewpoint. Two questions are addressed in this study: how is language utilized to manage problems in the cockpit, and are there differences between two- and three-member crews in their communication and problem solving strategies?
Linguistic and pragmatic constraints on utterance interpretation
NASA Astrophysics Data System (ADS)
Hinkelman, Elizabeth A.
1990-05-01
In order to model how people understand language, it is necessary to understand not only grammar and logic but also how people use language to affect their environment. This area of study is known as natural language pragmatics. Speech acts, for instance, are the offers, promises, announcements, etc., that people make by talking. The same expression may be different acts in different contexts, and yet not every expression performs every act. We want to understand how people are able to recognize others' intentions and implications in saying something. Previous plan-based theories of speech act interpretation do not account for the conventional aspect of speech acts. They can, however, be made sensitive to both linguistic and propositional information. This dissertation presents a method of speech act interpretation which uses patterns of linguistic features (e.g., mood, verb form, sentence adverbials, thematic roles) to identify a range of speech act interpretations for the utterance. These are then filtered and elaborated by inferences about agents' goals and plans. In many cases the plan reasoning consists of short, local inference chains (that are in fact conversational implicatures), and extended reasoning is necessary only for the most difficult cases. The method is able to accommodate a wide range of cases, from those which seem very idiomatic to those which must be analyzed using knowledge about the world and human behavior. It explains how "Can you pass the salt" can be a request while "Are you able to pass the salt" is not.
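The two-stage scheme described in this abstract can be sketched in miniature. This is an illustration only, not the dissertation's actual system: the feature names, patterns, candidate acts, and the plan-reasoning filter below are all invented stand-ins.

```python
# Hedged sketch: stage 1 matches linguistic-feature patterns to candidate
# speech acts; stage 2 filters candidates with (here, trivial) plan reasoning.
# All patterns and act labels are invented for illustration.

PATTERNS = [
    # (required features, candidate speech acts)
    ({"mood": "interrogative", "verb": "can", "subject": "you"},
     ["request", "ability-question"]),
    ({"mood": "interrogative"}, ["question"]),
    ({"mood": "imperative"}, ["request", "command"]),
]

def candidates(features):
    """Stage 1: collect acts from every pattern subsumed by the features."""
    acts = []
    for pattern, acts_for_pattern in PATTERNS:
        if all(features.get(k) == v for k, v in pattern.items()):
            acts.extend(a for a in acts_for_pattern if a not in acts)
    return acts

def interpret(features, hearer_can_act):
    """Stage 2: a toy plan-based filter -- keep 'request' only if the
    hearer could plausibly perform the requested action in context."""
    acts = candidates(features)
    if "request" in acts and not hearer_can_act:
        acts = [a for a in acts if a != "request"]
    return acts

# "Can you pass the salt?" at the dinner table: the request reading survives.
features = {"mood": "interrogative", "verb": "can", "subject": "you"}
print(interpret(features, hearer_can_act=True))
```

In the dissertation's terms, stage 2 would involve genuine inference over agents' goals and plans; the boolean flag here merely marks where that reasoning plugs in.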
Nelson, Eve-Lynn; Miller, Edward Alan; Larson, Kiley A
2010-01-01
This study's purpose was to adapt the Roter Interaction Analysis System (RIAS) for telemedicine clinics and to investigate the adapted measure's reliability. The study also sought to better understand the volume of technology-related utterances in established telemedicine clinics and the feasibility of using the measure within the telemedicine setting. This initial evaluation is a first step before broadly using the adapted measure across technologies and raters. An expert panel adapted the RIAS for the telemedicine context. This involved accounting for all consultation participants (patient, provider, presenter, family) and adding technology-specific subcategories. Ten new and 36 follow-up telemedicine encounters were videotaped and double coded using the adapted RIAS. These consisted primarily of follow-up visits (78.0%) involving patients, providers, presenters, and other parties. Reliability was calculated for those categories with 15 or more utterances. Traditional RIAS categories related to socioemotional and task-focused clusters had fair to excellent levels of reliability in the telemedicine setting. Although there were too few utterances to calculate the reliability of the specific technology-related subcategories, the summary technology-related category proved reliable for patients, providers, and presenters. Overall patterns seen in traditional patient-provider interactions were observed, with the number of provider utterances far exceeding patient, presenter, and family utterances, and few technology-specific utterances. The traditional RIAS is reliable when applied across multiple participants in the telemedicine context. Reliability of technology-related subcategories could not be evaluated; however, the aggregate technology-related cluster was found to be reliable and may be especially relevant in understanding communication patterns with patients new to the telemedicine setting.
Use of the RIAS instrument is encouraged to facilitate comparison between traditional, face-to-face clinics and telemedicine; among diverse consultation mediums and technologies; and across different specialties. Future research is necessary to further investigate the reliability and validity of adding technology-related subcategories to the RIAS. The limited number of technology-related utterances, however, implies a certain degree of comfort with two-way interactive video consultation among study participants. Telemedicine continues to increase access to healthcare. The technology-related categories of the adapted RIAS were reliable when aggregated, thereby providing a tool to better understand how telemedicine affects provider-patient communication and outcomes.
Signal dimensionality and the emergence of combinatorial structure.
Little, Hannah; Eryılmaz, Kerem; de Boer, Bart
2017-11-01
In language, a small number of meaningless building blocks can be combined into an unlimited set of meaningful utterances. This is known as combinatorial structure. One hypothesis for the initial emergence of combinatorial structure in language is that recombining elements of signals solves the problem of overcrowding in a signal space. Another hypothesis is that iconicity may impede the emergence of combinatorial structure. However, how these two hypotheses relate to each other is not often discussed. In this paper, we explore how signal space dimensionality relates to both overcrowding in the signal space and iconicity. We use an artificial signalling experiment to test whether a signal space and a meaning space having similar topologies will generate an iconic system and whether, when the topologies differ, the emergence of combinatorially structured signals is facilitated. In our experiments, signals are created from participants' hand movements, which are measured using an infrared sensor. We found that participants take advantage of iconic signal-meaning mappings where possible. Further, we use trajectory predictability, measures of variance, and Hidden Markov Models to measure the use of structure within the signals produced and found that when topologies do not match, then there is more evidence of combinatorial structure. The results from these experiments are interpreted in the context of the differences between the emergence of combinatorial structure in different linguistic modalities (speech and sign). Copyright © 2017 Elsevier B.V. All rights reserved.
Tanana, Michael; Hallgren, Kevin A; Imel, Zac E; Atkins, David C; Srikumar, Vivek
2016-06-01
Motivational interviewing (MI) is an efficacious treatment for substance use disorders and other problem behaviors. Studies on MI fidelity and mechanisms of change typically use human raters to code therapy sessions, which requires considerable time, training, and financial costs. Natural language processing techniques have recently been utilized for coding MI sessions by machine learning, rather than human coders, and preliminary results have suggested these methods hold promise. The current study extends this previous work by introducing two natural language processing models for automatically coding MI sessions via computer. The two models differ in the way they semantically represent session content, utilizing either (1) simple discrete sentence features (the DSF model) or (2) more complex recursive neural networks (the RNN model). Utterance- and session-level predictions from these models were compared to ratings provided by human coders using a large sample of MI sessions (N = 341 sessions; 78,977 clinician and client talk turns) from 6 MI studies. Results show that the DSF model generally had slightly better performance than the RNN model. The DSF model had "good" or higher utterance-level agreement with human coders (Cohen's kappa > 0.60) for open and closed questions, affirm, giving information, and follow/neutral (all therapist codes); considerably higher agreement was obtained for session-level indices, and many estimates were competitive with human-to-human agreement. However, there was poor agreement for client change talk, client sustain talk, and therapist MI-inconsistent behaviors. Natural language processing methods provide accurate representations of human-derived behavioral codes and could offer substantial improvements to the efficiency and scale with which MI mechanisms-of-change research and fidelity monitoring are conducted. Copyright © 2016 Elsevier Inc. All rights reserved.
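The discrete-sentence-feature idea in the abstract above can be sketched with a tiny bag-of-words Naive Bayes classifier over utterances. This is not the authors' DSF model: the code labels, training utterances, and add-one smoothing below are illustrative assumptions.

```python
# Hedged sketch: classify utterances into MI-style codes from discrete word
# features using multinomial Naive Bayes with add-one smoothing.
# Codes and training examples are invented for illustration.
from collections import Counter, defaultdict
import math

def train(examples):
    """examples: list of (utterance, code). Returns (word counts, code counts, vocab)."""
    word_counts = defaultdict(Counter)   # code -> word frequencies
    code_counts = Counter()
    vocab = set()
    for text, code in examples:
        words = text.lower().split()
        word_counts[code].update(words)
        code_counts[code] += 1
        vocab.update(words)
    return word_counts, code_counts, vocab

def classify(text, model):
    """Return the code maximizing log prior + smoothed log likelihood."""
    word_counts, code_counts, vocab = model
    total = sum(code_counts.values())
    best, best_score = None, float("-inf")
    for code in code_counts:
        score = math.log(code_counts[code] / total)
        denom = sum(word_counts[code].values()) + len(vocab)
        for w in text.lower().split():
            score += math.log((word_counts[code][w] + 1) / denom)
        if score > best_score:
            best, best_score = code, score
    return best

examples = [
    ("what brings you here today", "open_question"),
    ("how do you feel about that", "open_question"),
    ("did you drink this week", "closed_question"),
    ("have you tried cutting down", "closed_question"),
    ("you have worked really hard", "affirm"),
    ("that took real courage", "affirm"),
]
model = train(examples)
print(classify("how do you feel", model))  # -> open_question
```

A real system would use richer discrete features (n-grams, part-of-speech tags, speaker role) and far more training data; the point here is only the utterance-to-code mapping.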
Language as a multimodal phenomenon: implications for language learning, processing and evolution.
Vigliocco, Gabriella; Perniss, Pamela; Vinson, David
2014-09-19
Our understanding of the cognitive and neural underpinnings of language has traditionally been firmly based on spoken Indo-European languages and on language studied as speech or text. However, in face-to-face communication, language is multimodal: speech signals are invariably accompanied by visual information on the face and in manual gestures, and sign languages deploy multiple channels (hands, face and body) in utterance construction. Moreover, the narrow focus on spoken Indo-European languages has entrenched the assumption that language is comprised wholly by an arbitrary system of symbols and rules. However, iconicity (i.e. resemblance between aspects of communicative form and meaning) is also present: speakers use iconic gestures when they speak; many non-Indo-European spoken languages exhibit a substantial amount of iconicity in word forms and, finally, iconicity is the norm, rather than the exception in sign languages. This introduction provides the motivation for taking a multimodal approach to the study of language learning, processing and evolution, and discusses the broad implications of shifting our current dominant approaches and assumptions to encompass multimodal expression in both signed and spoken languages. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Hustad, Katherine C; Allison, Kristen M; Sakash, Ashley; McFadd, Emily; Broman, Aimee Teo; Rathouz, Paul J
2017-08-01
To determine whether communication at 2 years predicted communication at 4 years in children with cerebral palsy (CP); and whether the age a child first produces words imitatively predicts change in speech production. 30 children (15 males) with CP participated and were seen 5 times at 6-month intervals between 24 and 53 months (mean age at time 1 = 26.9 months (SD 1.9)). Variables were communication classification at 24 and 53 months, age that children were first able to produce words imitatively, single-word intelligibility, and longest utterance produced. Communication at 24 months was highly predictive of abilities at 53 months. Speaking earlier led to faster gains in intelligibility and length of utterance and better outcomes at 53 months than speaking later. Inability to speak at 24 months indicates greater speech and language difficulty at 53 months and a strong need for early communication intervention.
A characterization of verb use in Turkish agrammatic narrative speech.
Arslan, Seçkin; Bamyacı, Elif; Bastiaanse, Roelien
2016-01-01
This study investigates the characteristics of narrative-speech production and the use of verbs in Turkish agrammatic speakers (n = 10) compared to non-brain-damaged controls (n = 10). To elicit narrative-speech samples, personal interviews and storytelling tasks were conducted. Turkish has a large and regular verb inflection paradigm where verbs are inflected for evidentiality (i.e. direct versus indirect evidence available to the speaker). Particularly, we explored the general characteristics of the speech samples (e.g. utterance length) and the uses of lexical, finite and non-finite verbs and direct and indirect evidentials. The results show that speech rate is slow, verbs per utterance are lower than normal and the verb diversity is reduced in the agrammatic speakers. Verb inflection is relatively intact; however, a trade-off pattern between inflection for direct evidentials and verb diversity is found. The implications of the data are discussed in connection with narrative-speech production studies on other languages.
Children's perception of their synthetically corrected speech production.
Strömbergsson, Sofia; Wengelin, Asa; House, David
2014-06-01
We explore children's perception of their own speech - in its online form, in its recorded form, and in synthetically modified forms. Children with phonological disorder (PD) and children with typical speech and language development (TD) performed tasks of evaluating accuracy of the different types of speech stimuli, either immediately after having produced the utterance or after a delay. In addition, they performed a task designed to assess their ability to detect synthetic modification. Both groups showed high performance in tasks involving evaluation of other children's speech, whereas in tasks of evaluating one's own speech, the children with PD were less accurate than their TD peers. The children with PD were less sensitive to misproductions in immediate conjunction with their production of an utterance, and more accurate after a delay. Within-category modification often passed undetected, indicating a satisfactory quality of the generated speech. Potential clinical benefits of using corrective re-synthesis are discussed.
Stuttering on function words in bilingual children who stutter: A preliminary study.
Gkalitsiou, Zoi; Byrd, Courtney T; Bedore, Lisa M; Taliancich-Klinger, Casey L
2017-01-01
Evidence suggests young monolingual children who stutter (CWS) are more disfluent on function than content words, particularly when produced in the initial utterance position. The purpose of the present preliminary study was to investigate whether young bilingual CWS present with this same pattern. The narrative and conversational samples of four bilingual Spanish- and English-speaking CWS were analysed. All four bilingual participants produced significantly more stuttering on function words compared to content words, irrespective of their position in the utterance, in their Spanish narrative and conversational speech samples. Three of the four participants also demonstrated more stuttering on function compared to content words in their narrative speech samples in English, but only one participant produced more stuttering on function than content words in her English conversational sample. These preliminary findings are discussed relative to linguistic planning and language proficiency and their potential contribution to stuttered speech.
Huber, Jessica E.; Darling, Meghan
2012-01-01
Purpose The purpose of the present study was to examine the effects of cognitive-linguistic deficits and respiratory physiologic changes on respiratory support for speech in PD, using two speech tasks, reading and extemporaneous speech. Methods Five women with PD, 9 men with PD, and 14 age- and sex-matched control participants read a passage and spoke extemporaneously on a topic of their choice at comfortable loudness. Sound pressure level, syllables per breath group, speech rate, and lung volume parameters were measured. Number of formulation errors, disfluencies, and filled pauses were counted. Results Individuals with PD produced shorter utterances as compared to control participants. The relationships between utterance length and lung volume initiation and inspiratory duration were weaker in individuals with PD than for control participants, particularly for the extemporaneous speech task. These results suggest less consistent planning for utterance length by individuals with PD in extemporaneous speech. Individuals with PD produced more formulation errors in both tasks and significantly fewer filled pauses in extemporaneous speech. Conclusions Both respiratory physiologic and cognitive-linguistic issues affected speech production by individuals with PD. Overall, individuals with PD had difficulty planning or coordinating language formulation and respiratory support, in particular during extemporaneous speech. PMID:20844256
Malin, Jenessa L.; Karberg, Elizabeth; Cabrera, Natasha J.; Rowe, Meredith; Cristaforo, Tonia; Tamis-LeMonda, Catherine S.
2014-01-01
Using data from a racially and ethnically diverse sample of low-income fathers and their 2-year-old children who participated in the Early Head Start Research Evaluation Project (n = 80), the current study explored the association among paternal depressive symptoms and level of education, fathers’ language to their children, and children’s language skills. There were three main findings. First, there was large variability in the quality and quantity of language used during linguistic interactions between low-income fathers and their toddlers. Second, fathers with higher levels of education had children who spoke more (i.e. utterances) and had more diverse vocabularies (i.e. word types) than fathers with lower levels of education. However, fathers with more depressive symptoms had children with less grammatically complex language (i.e. smaller MLUs) than fathers with fewer depressive symptoms. Third, direct effects between fathers’ depressive symptoms and level of education and children’s language outcomes were partially mediated by fathers’ quantity and quality of language. PMID:25232446
When language gets emotional: irony and the embodiment of affect in discourse.
Filik, Ruth; Hunter, Christian Mark; Leuthold, Hartmut
2015-03-01
Although there is increasing evidence to suggest that language is grounded in perception and action, the relationship between language and emotion is less well understood. We investigate the grounding of language in emotion using a novel approach that examines the relationship between the comprehension of a written discourse and the performance of affect-related motor actions (hand movements towards and away from the body). Results indicate that positively and negatively valenced words presented in context influence motor responses (Experiment 1), whilst valenced words presented in isolation do not (Experiment 3). Furthermore, whether discourse context indicates that an utterance should be interpreted literally or ironically can influence motor responding, suggesting that the grounding of language in emotional states can be influenced by discourse-level factors (Experiment 2). In addition, the finding of affect-related motor responses to certain forms of ironic language, but not to non-ironic control sentences, suggests that phrasing a message ironically may influence the emotional response that is elicited. Copyright © 2014. Published by Elsevier B.V.
Codeswitching in Bilingual Children with Specific Language Impairment
Gutiérrez-Clellen, Vera F.; Cereijido, Gabriela Simon; Leone, Angela Erickson
2009-01-01
Children with specific language impairment (SLI) exhibit limited grammatical skills compared to their peers with typical language. These difficulties may be revealed when alternating their two languages (i.e., codeswitching) within sentences. Fifty-eight Spanish-English speaking children with and without SLI produced narratives using wordless picture books and conversational samples. The results indicated no significant differences in the proportion of utterances with codeswitching (CS) across age groups or contexts of elicitation. There were significant effects for language dominance, language of testing, and a significant dominance by language of testing interaction. The English-dominant children demonstrated more CS when tested in their nondominant language (Spanish) compared to the Spanish-dominant children tested in their weaker English. The children with SLI did not display more CS or more instances of atypical CS patterns compared to their typical peers. The findings indicate that children with SLI are capable of using grammatical CS, in spite of their language difficulties. In addition, the analyses suggest that CS is sensitive to sociolinguistic variables such as when the home language is not socially supported in the larger sociocultural context. In these cases, children may refrain from switching to the home language, even if that is their dominant language. PMID:22611333
Kanto, Laura; Huttunen, Kerttu; Laakso, Marja-Leena
2013-04-01
We explored variation in the linguistic environments of hearing children of Deaf parents and how it was associated with their early bilingual language development. For that purpose we followed up the children's productive vocabulary (measured with the MCDI; MacArthur Communicative Development Inventory) and syntactic complexity (measured with the MLU10; mean length of the 10 longest utterances the child produced during videorecorded play sessions) in both Finnish Sign Language and spoken Finnish between the ages of 12 and 30 months. Additionally, we developed new methodology for describing the linguistic environments of the children (N = 10). Large variation was uncovered in both the amount and type of language input and language acquisition among the children. Language exposure and increases in productive vocabulary and syntactic complexity were interconnected. Language acquisition was found to be more dependent on the amount of exposure in sign language than in spoken language. This was judged to be related to the status of sign language as a minority language. The results are discussed in terms of parents' language choices, family dynamics in Deaf-parented families and optimal conditions for bilingual development.
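The MLU10 measure used in the study above (mean length of the 10 longest utterances in a sample) is straightforward to sketch. The original presumably counts morphemes or signs from transcripts; word tokens stand in here, and the sample utterances are invented.

```python
# Hedged sketch: MLU10 = mean length of the n (default 10) longest
# utterances, here measured in word tokens over an invented sample.

def mlu10(utterances, n=10):
    lengths = sorted((len(u.split()) for u in utterances), reverse=True)
    top = lengths[:n]          # fewer than n utterances: use all of them
    return sum(top) / len(top)

sample = ["mommy go", "ball", "want more milk", "doggy run fast outside",
          "no", "where daddy go now", "big red car"]
print(round(mlu10(sample), 2))
```

Because only the longest utterances enter the mean, the measure tracks the upper bound of a child's syntactic complexity rather than typical utterance length.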
ERIC Educational Resources Information Center
Luo, Yang; Lin, Yuewu
2017-01-01
Illustrations are commonly used to make written text or utterances clearer. In Winarski's (1997) opinion, one picture equals a thousand words. That is to say, illustrations are capable of expressing the meaning of unfamiliar language, or a great deal of information in the reading material, through vivid pictures, tables,…
ERIC Educational Resources Information Center
Blount, Ben G.; Padgug, Elise J.
Features of parental speech to young children were studied in four English-speaking and four Spanish-speaking families. Children ranged in age from 9 to 12 months for the English speakers and from 8 to 22 months for the Spanish speakers. Examination of the utterances led to the identification of 34 prosodic, paralinguistic, and interactional…
Designing a Constraint Based Parser for Sanskrit
NASA Astrophysics Data System (ADS)
Kulkarni, Amba; Pokar, Sheetal; Shukl, Devanand
Verbal understanding (śābdabodha) of any utterance requires the knowledge of how words in that utterance are related to each other. Such knowledge is usually available in the form of cognition of grammatical relations. Generative grammars describe how a language codes these relations. Thus the knowledge of what information various grammatical relations convey is available from the generation point of view and not the analysis point of view. In order to develop a parser based on any grammar, one should then know precisely the semantic content of the grammatical relations expressed in a language string, the clues for extracting these relations, and finally whether these relations are expressed explicitly or implicitly. Based on the design principles that emerge from this knowledge, we model the parser as finding a directed tree, given a graph with nodes representing the words and edges representing the possible relations between them. Further, we also use the Mīmāṃsā constraint of ākāṅkṣā (expectancy) to rule out non-solutions and sannidhi (proximity) to prioritize the solutions. We have implemented a parser based on these principles, and its performance was found to be satisfactory, giving us confidence to extend its functionality to handle complex sentences.
Wang, J Jessica; Ali, Muna; Frisson, Steven; Apperly, Ian A
2016-09-01
Basic competence in theory of mind is acquired during early childhood. Nonetheless, evidence suggests that the ability to take others' perspectives in communication improves continuously from middle childhood to the late teenage years. This indicates that theory of mind performance undergoes protracted developmental changes after the acquisition of basic competence. Currently, little is known about the factors that constrain children's performance or that contribute to age-related improvement. A sample of 39 8-year-olds and 56 10-year-olds were tested on a communication task in which a speaker's limited perspective needed to be taken into account and the complexity of the speaker's utterance varied. Our findings showed that 10-year-olds were generally less egocentric than 8-year-olds. Children of both ages committed more egocentric errors when a speaker uttered complex sentences compared with simple sentences. Both 8- and 10-year-olds were affected by the demand to integrate complex sentences with the speaker's limited perspective and to a similar degree. These results suggest that long after children's development of simple visual perspective-taking, their use of this ability to assist communication is substantially constrained by the complexity of the language involved. Copyright © 2015 Elsevier Inc. All rights reserved.
Multilingual vocal emotion recognition and classification using back propagation neural network
NASA Astrophysics Data System (ADS)
Kayal, Apoorva J.; Nirmal, Jagannath
2016-03-01
This work implements classification of different emotions in different languages using Artificial Neural Networks (ANN). Mel Frequency Cepstral Coefficients (MFCC) and Short Term Energy (STE) have been considered for creation of feature set. An emotional speech corpus consisting of 30 acted utterances per emotion has been developed. The emotions portrayed in this work are Anger, Joy and Neutral in each of English, Marathi and Hindi languages. Different configurations of Artificial Neural Networks have been employed for classification purposes. The performance of the classifiers has been evaluated by False Negative Rate (FNR), False Positive Rate (FPR), True Positive Rate (TPR) and True Negative Rate (TNR).
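Of the two features in this pipeline, short-term energy (STE) is simple enough to show directly; MFCCs would typically come from a library such as librosa and are not sketched here. A minimal STE sketch in plain NumPy, with illustrative (assumed) frame-length and hop values for 16 kHz audio:

```python
import numpy as np

def short_term_energy(signal, frame_len=400, hop=160):
    """Frame-wise short-term energy of a 1-D signal.

    frame_len=400 and hop=160 correspond to 25 ms windows with a
    10 ms hop at 16 kHz; these are common but assumed values.
    """
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.array([np.sum(f ** 2) for f in frames])

# Synthetic example: a louder signal (e.g., an angry utterance)
# carries far more frame energy than a quieter, neutral one.
rng = np.random.default_rng(0)
loud = rng.normal(0, 1.0, 16000)   # 1 s at 16 kHz
quiet = rng.normal(0, 0.1, 16000)
print(short_term_energy(loud).mean() > short_term_energy(quiet).mean())
```

In the study's setup, per-frame STE values and MFCC vectors would be aggregated into a fixed-length feature set and fed to the neural network classifier.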
Intonation and dialog context as constraints for speech recognition.
Taylor, P; King, S; Isard, S; Wright, H
1998-01-01
This paper describes a way of using intonation and dialog context to improve the performance of an automatic speech recognition (ASR) system. Our experiments were run on the DCIEM Maptask corpus, a corpus of spontaneous task-oriented dialog speech. This corpus has been tagged according to a dialog analysis scheme that assigns each utterance to one of 12 "move types," such as "acknowledge," "query-yes/no" or "instruct." Most ASR systems use a bigram language model to constrain the possible sequences of words that might be recognized. Here we use a separate bigram language model for each move type. We show that when the "correct" move-specific language model is used for each utterance in the test set, the word error rate of the recognizer drops. Of course when the recognizer is run on previously unseen data, it cannot know in advance what move type the speaker has just produced. To determine the move type we use an intonation model combined with a dialog model that puts constraints on possible sequences of move types, as well as the speech recognizer likelihoods for the different move-specific models. In the full recognition system, the combination of automatic move type recognition with the move specific language models reduces the overall word error rate by a small but significant amount when compared with a baseline system that does not take intonation or dialog acts into account. Interestingly, the word error improvement is restricted to "initiating" move types, where word recognition is important. In "response" move types, where the important information is conveyed by the move type itself--for example, positive versus negative response--there is no word error improvement, but recognition of the response types themselves is good. The paper discusses the intonation model, the language models, and the dialog model in detail and describes the architecture in which they are combined.
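The core idea of move-specific language models can be sketched compactly: train one smoothed bigram model per move type, then score an utterance under each and pick the best. This toy sketch omits the paper's intonation and dialog models and uses invented example sentences; add-alpha smoothing stands in for whatever smoothing the original system used.

```python
from collections import defaultdict
import math

def train_bigram(sentences, alpha=1.0):
    """Bigram counts with add-alpha smoothing data, over tokenized sentences."""
    counts = defaultdict(lambda: defaultdict(float))
    vocab = set()
    for s in sentences:
        toks = ["<s>"] + s.split() + ["</s>"]
        vocab.update(toks)
        for a, b in zip(toks, toks[1:]):
            counts[a][b] += 1
    return counts, vocab, alpha

def logprob(model, sentence):
    """Smoothed log-probability of a sentence under a bigram model."""
    counts, vocab, alpha = model
    toks = ["<s>"] + sentence.split() + ["</s>"]
    V = len(vocab) + 1  # +1 slot for unseen tokens
    lp = 0.0
    for a, b in zip(toks, toks[1:]):
        total = sum(counts[a].values())
        lp += math.log((counts[a][b] + alpha) / (total + alpha * V))
    return lp

# One toy model per move type, echoing the Maptask move inventory.
models = {
    "query-yes/no": train_bigram(["do you see it", "is that right"]),
    "instruct": train_bigram(["go left", "go round the lake"]),
}

def best_move_type(utterance):
    return max(models, key=lambda m: logprob(models[m], utterance))

print(best_move_type("go left"))  # "instruct" under these toy models
```

In the full system, this per-move score is combined with the intonation model's move-type posterior and a dialog model over move-type sequences rather than used alone.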
Kao, Chung-Shan; Dietrich, Rainer; Sommer, Werner
2010-01-01
Background: Languages differ in the marking of the sentence mood of a polar interrogative (yes/no question). For instance, the interrogative mood is marked at the beginning of the surface structure in Polish, whereas the marker appears at the end in Chinese. In order to generate the corresponding sentence frame, the syntactic specification of the interrogative mood is early in Polish and late in Chinese. In this respect, German belongs to an interesting intermediate class: the yes/no question is expressed by a shift of the finite verb from its final position in the underlying structure into the utterance-initial position, a move that therefore affects both the sentence-initial and sentence-final constituents. The present study aimed to investigate whether, during generation of the semantic structure of a polar interrogative, i.e., the processing preceding grammatical formulation, the interrogative mood is encoded at distinct time points in Chinese, German, and Polish, according to its position in the syntactic structure. Methodology/Principal Findings: In a two-choice go/nogo experimental design, native speakers of the three languages responded to pictures by pressing buttons and producing utterances in their native language while their brain potentials were recorded. The emergence and latency of lateralized readiness potentials (LRPs) in nogo conditions, in which speakers asked a yes/no question, should indicate the time point of processing the interrogative mood. The results revealed that Chinese, German, and Polish native speakers did not differ from each other in this electrophysiological indicator. Conclusions/Significance: The findings suggest that the semantic encoding of the interrogative mood is temporally consistent across languages despite its disparate syntactic specification.
The consistent encoding may be ascribed to economic processing of interrogative moods at various sentential positions of the syntactic structures in languages or, more generally, to the overarching status of sentence mood in the semantic structure. PMID:20927373
Multiunit Sequences in First Language Acquisition.
Theakston, Anna; Lieven, Elena
2017-07-01
Theoretical and empirical reasons suggest that children build their language not only out of individual words but also out of multiunit strings. These are the basis for the development of schemas containing slots. The slots are putative categories that build in abstraction while the schemas eventually connect to other schemas in terms of both meaning and form. Evidence comes from the nature of the input, the ways in which children construct novel utterances, the systematic errors that children make, and the computational modeling of children's grammars. However, much of this research is on English, which is unusual in its rigid word order and impoverished inflectional morphology. We summarize these results and explore their implications for languages with more flexible word order and/or much richer inflectional morphology. Copyright © 2017 Cognitive Science Society, Inc.
Why do infants begin to talk? Language as an unintended consequence.
Locke, J L
1996-06-01
Scholars have addressed a range of questions about language development, but for some reason have neglected to ask why infants begin to talk. Biologists often prefer 'how' to 'why' questions, but it is possible to ask about the immediate consequences of developing behaviours--an acceptable strategy for attacking causation--and psycholinguists can study the immediate consequences to the infant of behaviours that lead to linguistic competence. This process is demonstrated with a series of illustrative proposals as to the short- and long-term consequences of vocal learning and utterance storage, two developmental phases that lead to talking, as well as the act of talking itself. The goal is to encourage investigation of behavioural dispositions that nudge the child, by degrees, towards proficiency in the use of spoken language.
The Lidcombe Program and child language development: Long-term assessment.
Imeson, Juliet; Lowe, Robyn; Onslow, Mark; Munro, Natalie; Heard, Rob; O'Brian, Sue; Arnott, Simone
2018-03-15
This study was driven by the need to understand the mechanisms underlying Lidcombe Program treatment efficacy. The aim of the present study was to extend existing data exploring whether the stuttering reductions observed when children are successfully treated with the Lidcombe Program are associated with restricted language development. Audio recordings of 10-min parent-child conversations at home were transcribed verbatim for 11 pre-school-age children with various stuttering severities. Language samples from three assessments (pre-treatment, and 9 and 18 months after beginning treatment) were analysed using SALT software for lexical diversity, utterance length and sentence complexity. At 18 months after treatment commencement, the children had attained and maintained statistically significant stuttering reductions. During that period, there was no evidence that Lidcombe Program treatment was associated with restricted language development. The continued search for the mechanisms underlying this successful treatment needs to focus on other domains.
Prigent, Gaïd; Parisse, Christophe; Leclercq, Anne-Lise; Maillart, Christelle
2015-01-01
The usage-based theory considers that the morphosyntactic productions of children with SLI are particularly dependent on input frequency. When producing complex syntax, the language of these children is, therefore, predicted to have a lower variability and to contain fewer infrequent morphosyntactic markers than that of younger children matched on morphosyntactic abilities. Using a spontaneous language task, the current study compared the complexity of the morphological and structural productions of 20 children with SLI and 20 language-matched peers (matched on both morphosyntactic comprehension and mean length of utterance). As expected, results showed that although basic structures were produced in the same way in both groups, several complex forms (i.e. tenses such as Imperfect, Future or Conditional and Conjunctions) were less frequent in the productions of children with SLI. Finally, we attempted to highlight complex linguistic forms that could be good clinical markers for these children.
Prat, Chantel S; Mason, Robert A; Just, Marcel Adam
2012-03-01
This study used fMRI to investigate the neural correlates of analogical mapping during metaphor comprehension, with a focus on dynamic configuration of neural networks with changing processing demands and individual abilities. Participants with varying vocabulary sizes and working memory capacities read 3-sentence passages ending in nominal critical utterances of the form "X is a Y." Processing demands were manipulated by varying preceding contexts. Three figurative conditions manipulated difficulty by varying the extent to which preceding contexts mentioned relevant semantic features for relating the vehicle and topic of the critical utterance to one another. In the easy condition, supporting information was mentioned. In the neutral condition, no relevant information was mentioned. In the most difficult condition, opposite features were mentioned, resulting in an ironic interpretation of the critical utterance. A fourth, literal condition included context that supported a literal interpretation of the critical utterance. Activation in lateral and medial frontal regions increased with increasing contextual difficulty. Lower vocabulary readers also had greater activation across conditions in the right inferior frontal gyrus. In addition, volumetric analyses showed increased right temporo-parietal junction and superior medial frontal activation for all figurative conditions over the literal condition. The results from this experiment imply that the cortical regions are dynamically recruited in language comprehension as a function of the processing demands of a task. Individual differences in cognitive capacities were also associated with differences in recruitment and modulation of working memory and executive function regions, highlighting the overlapping computations in metaphor comprehension and general thinking and reasoning. 2012 APA, all rights reserved
ERIC Educational Resources Information Center
Chon, HeeCheong; Sawyer, Jean; Ambrose, Nicoline G.
2012-01-01
Purpose: The purpose of this study was to investigate characteristics of four types of utterances in preschool children who stutter: perceptually fluent, containing normal disfluencies (OD utterance), containing stuttering-like disfluencies (SLD utterance), and containing both normal and stuttering-like disfluencies (SLD+OD utterance).…
Zinken, Katarzyna M; Cradock, Sue; Skinner, T Chas
2008-08-01
The paper presents the development of a coding tool for self-efficacy orientated interventions in diabetes self-management programmes (Analysis System for Self-Efficacy Training, ASSET) and explores its construct validity and clinical utility. Based on four sources of self-efficacy (i.e., mastery experience, role modelling, verbal persuasion and physiological and affective states), published self-efficacy based interventions for diabetes care were analysed in order to identify specific verbal behavioural techniques. Video-recorded facilitating behaviours were evaluated using ASSET. The reliability between four coders was high (K=0.71). ASSET enabled assessment of both self-efficacy based techniques and participants' response to those techniques. Individual patterns of delivery and shifts over time across facilitators were found. In the presented intervention we observed that self-efficacy utterances were followed by longer patient verbal responses than non-self-efficacy utterances. These detailed analyses with ASSET provide rich data and give the researcher an insight into the underlying mechanism of the intervention process. By providing a detailed description of self-efficacy strategies ASSET can be used by health care professionals to guide reflective practice and support training programmes.
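The inter-coder reliability reported above (K = 0.71) is a kappa statistic. The abstract involves four coders, so the original figure likely reflects a multi-rater variant; the simplest related computation, pairwise Cohen's kappa, can be sketched as:

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters coding the same items.

    Kappa corrects observed agreement (po) for the agreement
    expected by chance (pe) given each rater's category frequencies.
    """
    assert len(r1) == len(r2) and r1, "raters must code the same items"
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n
    cats = set(r1) | set(r2)
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return 1.0 if pe == 1 else (po - pe) / (1 - pe)

coder_a = ["mastery", "modelling", "mastery", "persuasion"]
coder_b = ["mastery", "modelling", "modelling", "persuasion"]
print(cohens_kappa(coder_a, coder_b))
```

Values around 0.7, as reported here, are conventionally read as substantial agreement, though the cutoffs are heuristic.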
Language skills of children during the first 12 months after stuttering onset.
Watts, Amy; Eadie, Patricia; Block, Susan; Mensah, Fiona; Reilly, Sheena
2017-03-01
The aim was to describe the language development of a sample of young children who stutter during the first 12 months after stuttering onset. Language production was analysed in a sample of 66 children who stuttered (aged 2-4 years), identified from a pre-existing prospective, community-based longitudinal cohort. Data were collected at three time points within the first year after stuttering onset. Stuttering severity was measured, and global indicators of expressive language proficiency (length of utterances and grammatical complexity) were derived from the samples and summarised. Language production abilities of the children who stutter were contrasted with normative data. The majority of children's stuttering was rated as mild in severity, with more than 83% of participants demonstrating very mild or mild stuttering at each of the time points studied. The participants demonstrated developmentally appropriate spoken language skills comparable with available normative data. In the first year following the report of stuttering onset, the language skills of the children who were stuttering progressed in a manner consistent with developmental expectations. Copyright © 2016 Elsevier Inc. All rights reserved.
CLUSTER: An Approach to Contextual Language Understanding
1986-04-01
Neural correlates of audiovisual speech processing in a second language.
Barrós-Loscertales, Alfonso; Ventura-Campos, Noelia; Visser, Maya; Alsius, Agnès; Pallier, Christophe; Avila Rivera, César; Soto-Faraco, Salvador
2013-09-01
Neuroimaging studies of audiovisual speech processing have exclusively addressed listeners' native language (L1). Yet, several behavioural studies now show that AV processing plays an important role in non-native (L2) speech perception. The current fMRI study measured brain activity during auditory, visual, audiovisual congruent and audiovisual incongruent utterances in L1 and L2. BOLD responses to congruent AV speech in the pSTS were stronger than in either unimodal condition in both L1 and L2. Yet no differences in AV processing were expressed according to the language background in this area. Instead, the regions in the bilateral occipital lobe had a stronger congruency effect on the BOLD response (congruent higher than incongruent) in L2 as compared to L1. According to these results, language background differences are predominantly expressed in these unimodal regions, whereas the pSTS is similarly involved in AV integration regardless of language dominance. Copyright © 2013 Elsevier Inc. All rights reserved.
Condouris, Karen; Meyer, Echo; Tager-Flusberg, Helen
2005-01-01
This study investigated the relationship between scores on standardized tests (Clinical Evaluation of Language Fundamentals [CELF], Peabody Picture Vocabulary Test–Third Edition [PPVT-III], and Expressive Vocabulary Test) and measures of spontaneous speech (mean length of utterance [MLU], Index of Productive Syntax, and number of different word roots [NDWR]) derived from natural language samples obtained from 44 children with autism between the ages of 4 and 14 years old. The children with autism were impaired across both groups of measures. The two groups of measures were significantly correlated, and specific relationships were found between lexical–semantic measures (NDWR, vocabulary tests, and the CELF lexical–semantic subtests) and grammatical measures (MLU, and CELF grammar subtests), suggesting that both standardized and spontaneous speech measures tap the same underlying linguistic abilities in children with autism. These findings have important implications for clinicians and researchers who depend on these types of language measures for diagnostic purposes, assessment, and investigations of language impairments in autism. PMID:12971823
Arnulf, Isabelle; Uguccioni, Ginevra; Gay, Frederick; Baldayrou, Etienne; Golmard, Jean-Louis; Gayraud, Frederique; Devevey, Alain
2017-11-01
Speech is a complex function in humans, but the linguistic characteristics of sleep talking are unknown. We analyzed sleep-associated speech in adults, mostly (92%) during parasomnias. The utterances recorded during night-time video-polysomnography were analyzed for number of words, propositions and speech episodes, frequency, gaps and pauses (denoting turn-taking in the conversation), lemmatization, verbosity, negative/imperative/interrogative tone, first/second person, politeness, and abuse. Two hundred thirty-two subjects (aged 49.5 ± 20 years old; 41% women; 129 with rapid eye movement [REM] sleep behavior disorder and 87 with sleepwalking/sleep terrors, 15 healthy subjects, and 1 patient with sleep apnea speaking in non-REM sleep) uttered 883 speech episodes, containing 59% nonverbal utterance (mumbles, shouts, whispers, and laughs) and 3349 understandable words. The most frequent word was "No": negations represented 21.4% of clauses (more in non-REM sleep). Interrogations were found in 26% of speech episodes (more in non-REM sleep), and subordinate clauses were found in 12.9% of speech episodes. As many as 9.7% of clauses contained profanities (more in non-REM sleep). Verbal abuse lasted longer in REM sleep and was mostly directed toward insulting or condemning someone, whereas swearing predominated in non-REM sleep. Men sleep-talked more than women and used a higher proportion of profanities. Apparent turn-taking in the conversation respected the usual language gaps. Sleep talking parallels awake talking for syntax, semantics, and turn-taking in conversation, suggesting that the sleeping brain can function at a high level. Language during sleep is mostly a familiar, tensed conversation with inaudible others, suggestive of conflicts. © Sleep Research Society 2017. Published by Oxford University Press [on behalf of the Sleep Research Society]. All rights reserved. For permissions, please email: journals.permissions@oup.com
The Acquisition of English Focus Marking by Non-Native Speakers
NASA Astrophysics Data System (ADS)
Baker, Rachel Elizabeth
This dissertation examines Mandarin and Korean speakers' acquisition of English focus marking, which is realized by accenting particular words within a focused constituent. It is important for non-native speakers to learn how accent placement relates to focus in English because appropriate accent placement and realization makes a learner's English more native-like and easier to understand. Such knowledge may also improve their English comprehension skills. In this study, 20 native English speakers, 20 native Mandarin speakers, and 20 native Korean speakers participated in four experiments: (1) a production experiment, in which they were recorded reading the answers to questions, (2) a perception experiment, in which they were asked to determine which word in a recording was the last prominent word, (3) an understanding experiment, in which they were asked whether the answers in recorded question-answer pairs had context-appropriate prosody, and (4) an accent placement experiment, in which they were asked which word they would make prominent in a particular context. Finally, a new group of native English speakers listened to utterances produced in the production experiment, and determined whether the prosody of each utterance was appropriate for its context. The results of the five experiments support a novel predictive model for second language prosodic focus marking acquisition. This model holds that both transfer of linguistic features from a learner's native language (L1) and features of their second language (L2) affect learners' acquisition of prosodic focus marking. As a result, the model includes two complementary components: the Transfer Component and the L2 Challenge Component. The Transfer Component predicts that prosodic structures in the L2 will be more easily acquired by language learners that have similar structures in their L1 than those who do not, even if there are differences between the L1 and L2 in how the structures are realized. 
The L2 Challenge Component predicts that for difficult tasks, language learners will rely on widely-applied prosodic patterns, making them more successful at prosodically marking broad focus than narrow focus. However, for easy tasks, language learners will more successfully mark information structures that have a more direct relationship between focus and accent placement.
Phonological Planning during Sentence Production: Beyond the Verb.
Schnur, Tatiana T
2011-01-01
The current study addresses the extent of phonological planning during spontaneous sentence production. Previous work shows that at articulation, phonological encoding occurs for entire phrases, but encoding beyond the initial phrase may be due to the syntactic relevance of the verb in planning the utterance. I conducted three experiments to investigate whether phonological planning crosses multiple grammatical phrase boundaries (as defined by the number of lexical heads of phrase) within a single phonological phrase. Using the picture-word interference paradigm, I found in two separate experiments a significant phonological facilitation effect to both the verb and noun of sentences like "He opens the gate." I also altered the frequency of the direct object and found longer utterance initiation times for sentences ending with a low-frequency vs. high-frequency object offering further support that the direct object was phonologically encoded at the time of utterance initiation. That phonological information for post-verbal elements was activated suggests that the grammatical importance of the verb does not restrict the extent of phonological planning. These results suggest that the phonological phrase is unit of planning, where all elements within a phonological phrase are encoded before articulation. Thus, consistent with other action sequencing behavior, there is significant phonological planning ahead in sentence production.
Mobbing calls signal predator category in a kin group-living bird species
Griesser, Michael
2009-01-01
Many prey species gather together to approach and harass their predators despite the associated risks. While mobbing, prey usually utter calls and previous experiments have demonstrated that mobbing calls can convey information about risk to conspecifics. However, the risk posed by predators also differs between predator categories. The ability to communicate predator category would be adaptive because it would allow other mobbers to adjust their risk taking. I tested this idea in Siberian jays Perisoreus infaustus, a group-living bird species, by exposing jay groups to mounts of three hawk and three owl species of varying risks. Groups immediately approached to mob the mount and uttered up to 14 different call types. Jays gave more calls when mobbing a more dangerous predator and when in the presence of kin. Five call types were predator-category-specific and jays uttered two hawk-specific and three owl-specific call types. Thus, this is one of the first studies to demonstrate that mobbing calls can simultaneously encode information about both predator category and the risk posed by a predator. Since antipredator calls of Siberian jays are known to specifically aim at reducing the risk to relatives, kin-based sociality could be an important factor in facilitating the evolution of predator-category-specific mobbing calls. PMID:19474047
Language from police body camera footage shows racial disparities in officer respect.
Voigt, Rob; Camp, Nicholas P; Prabhakaran, Vinodkumar; Hamilton, William L; Hetey, Rebecca C; Griffiths, Camilla M; Jurgens, David; Jurafsky, Dan; Eberhardt, Jennifer L
2017-06-20
Using footage from body-worn cameras, we analyze the respectfulness of police officer language toward white and black community members during routine traffic stops. We develop computational linguistic methods that extract levels of respect automatically from transcripts, informed by a thin-slicing study of participant ratings of officer utterances. We find that officers speak with consistently less respect toward black versus white community members, even after controlling for the race of the officer, the severity of the infraction, the location of the stop, and the outcome of the stop. Such disparities in common, everyday interactions between police and the communities they serve have important implications for procedural justice and the building of police-community trust.
Perceptual chunking and its effect on memory in speech processing: ERP and behavioral evidence
Gilbert, Annie C.; Boucher, Victor J.; Jemel, Boutheina
2014-01-01
We examined how perceptual chunks of varying size in utterances can influence immediate memory of heard items (monosyllabic words). Using behavioral measures and event-related potentials (N400) we evaluated the quality of the memory trace for targets taken from perceived temporal groups (TGs) of three and four items. Variations in the amplitude of the N400 showed a better memory trace for items presented in TGs of three compared to those in groups of four. Analyses of behavioral responses along with P300 components also revealed effects of chunk position in the utterance. This is the first study to measure the online effects of perceptual chunks on the memory trace of spoken items. Taken together, the N400 and P300 responses demonstrate that the perceptual chunking of speech facilitates information buffering and a processing on a chunk-by-chunk basis. PMID:24678304
Reich, Catherine M.; Hack, Samantha M.; Klingaman, Elizabeth A.; Brown, Clayton H.; Fang, Li Juan; Dixon, Lisa B.; Jahn, Danielle R.; Kreyenbuhl, Julie A.
2017-01-01
Objective: The study was designed to explore patterns of prescriber communication behaviors as they relate to consumer satisfaction among a sample of people with serious mental illness (SMI). Methods: Recordings from 175 antipsychotic medication-monitoring appointments between veterans with psychiatric disorders and their prescribers were coded using the Roter Interaction Analysis System (RIAS) for communication behavioral patterns. Results: The frequency of prescriber communication behaviors (i.e., facilitation, rapport, procedural, psychosocial, biomedical, and total utterances) did not reliably predict consumer satisfaction. The ratio of prescriber to consumer utterances did predict consumer satisfaction. Conclusion: Consistent with client-centered care theory, antipsychotic medication consumers were more satisfied with their encounters when their prescriber did not dominate the conversation. Practice Implications: Therefore, one potential recommendation from these findings could be for medication prescribers to spend more of their time listening to, rather than speaking with, their SMI consumers. PMID:28920491
Morphosyntactic annotation of CHILDES transcripts*
SAGAE, KENJI; DAVIS, ERIC; LAVIE, ALON; MACWHINNEY, BRIAN; WINTNER, SHULY
2014-01-01
Corpora of child language are essential for research in child language acquisition and psycholinguistics. Linguistic annotation of the corpora provides researchers with better means for exploring the development of grammatical constructions and their usage. We describe a project whose goal is to annotate the English section of the CHILDES database with grammatical relations in the form of labeled dependency structures. We have produced a corpus of over 18,800 utterances (approximately 65,000 words) with manually curated gold-standard grammatical relation annotations. Using this corpus, we have developed a highly accurate data-driven parser for the English CHILDES data, which we used to automatically annotate the remainder of the English section of CHILDES. We have also extended the parser to Spanish, and are currently working on supporting more languages. The parser and the manually and automatically annotated data are freely available for research purposes. PMID:20334720
Grammar Is a System That Characterizes Talk in Interaction
Ginzburg, Jonathan; Poesio, Massimo
2016-01-01
Much of contemporary mainstream formal grammar theory is unable to provide analyses for language as it occurs in actual spoken interaction. Its analyses are developed for a cleaned-up version of language which omits the disfluencies, non-sentential utterances, gestures, and many other phenomena that are ubiquitous in spoken language. Using evidence from linguistics, conversation analysis, multimodal communication, psychology, language acquisition, and neuroscience, we show that these aspects of language use are rule governed in much the same way as phenomena captured by conventional grammars. Furthermore, we argue that over the past few years some of the tools required to provide precise characterizations of such phenomena have begun to emerge in theoretical and computational linguistics; hence, there is no reason for treating them as “second class citizens” other than pre-theoretical assumptions about what should fall under the purview of grammar. Finally, we suggest that grammar formalisms covering such phenomena would provide a better foundation not just for linguistic analysis of face-to-face interaction, but also for sister disciplines, such as research on spoken dialogue systems and/or psychological work on language acquisition. PMID:28066279
Large-scale evidence of dependency length minimization in 37 languages
Futrell, Richard; Mahowald, Kyle; Gibson, Edward
2015-01-01
Explaining the variation between human languages and the constraints on that variation is a core goal of linguistics. In the last 20 y, it has been claimed that many striking universals of cross-linguistic variation follow from a hypothetical principle that dependency length—the distance between syntactically related words in a sentence—is minimized. Various models of human sentence production and comprehension predict that long dependencies are difficult or inefficient to process; minimizing dependency length thus enables effective communication without incurring processing difficulty. However, despite widespread application of this idea in theoretical, empirical, and practical work, there is not yet large-scale evidence that dependency length is actually minimized in real utterances across many languages; previous work has focused either on a small number of languages or on limited kinds of data about each language. Here, using parsed corpora of 37 diverse languages, we show that overall dependency lengths for all languages are shorter than conservative random baselines. The results strongly suggest that dependency length minimization is a universal quantitative property of human languages and support explanations of linguistic variation in terms of general properties of human information processing. PMID:26240370
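The quantity at the heart of this line of work is easy to make concrete. Assuming a toy head-index representation of a dependency parse (the sentence and head positions below are invented for illustration, not drawn from the 37-language corpora), the total dependency length of a sentence is the sum of distances between each word and its syntactic head:

```python
# Total dependency length: the sum over all words of the distance
# (in word positions) between each word and its syntactic head.
# heads[i] is the 1-based position of word i's head; 0 marks the root.
def dependency_length(heads):
    return sum(abs(i - h) for i, h in enumerate(heads, start=1) if h != 0)

# Toy parse of "the dog chased the cat" (heads are illustrative):
# the -> dog, dog -> chased, chased = root, the -> cat, cat -> chased
heads = [2, 3, 0, 5, 3]
print(dependency_length(heads))  # 1 + 1 + 1 + 2 = 5
```

A minimization study of the kind described would compare this total against the same sentence with randomly permuted word orders, which is what the "conservative random baselines" refer to.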
Letts, C A
1991-08-01
Two pre-school children were recorded at regular intervals over a 9-month period while playing freely together. One child was acquiring English as a second language, whilst the other was a monolingual English speaker. The sociolinguistic domain was such that the children were likely to be motivated to communicate with each other in English. A variety of quantitative measures were taken from the transcribed data, including measures of utterance type, length, type-token ratios, use of auxiliaries and morphology. The child for whom English was a second language was found to be well able to interact on equal terms with his partner, despite being somewhat less advanced in some aspects of English language development by the end of the sampling period. Whilst he appeared to be consolidating his language skills during this time, his monolingual partner appeared to be developing rapidly. It is hoped that normative longitudinal data of this kind will be of use in the accurate assessment of children from dual language backgrounds, who may be referred for speech and language therapy.
Generating Natural Language Under Pragmatic Constraints.
1987-03-01
…central issue, Carter’s loss, concentrating on more pleasant aspects. But what would happen in an extreme case? What if you, a Carter supporter, are… In [Cohen 78], Cohen studied the effect of the hearer’s knowledge on the selection of the appropriate speech act (say, REQUEST vs. INFORM OF WANT)… …utterances is studied in [Clark & Carlson 81] and [Clark & Murphy 82]; [Gibbs 79] and [Gibbs 81] discuss the effects of context on the processing of indirect…
Is a clinical sociolinguistics possible?
Ball, M J
1992-01-01
This paper considers the idea of developing a clinical sociolinguistics. Various areas of the field are examined, and the importance of the 'core' area of the correlation of non-linguistic variables with linguistic variables stressed. Issues concerning language and class, region, sex, age and context of utterance are investigated, together with the implications for clinical linguistics. Finally, the difficulty of integrating such issues into clinical assessment is explored, and a tentative step forward suggested along the lines of a 'clinical sociolinguistic checklist'.
[A language for the psychotic deaf and their families].
Vacola, G
1987-04-01
Deafness and psychosis are two of the processes of withdrawal from the world in which a human being's inability to communicate is at stake. The extreme pathology that combines them makes it necessary to establish links for the first contact with the deaf and psychotic child. Hearing means more than what the ears filter. The denial or foreclosure of uttered words is part of a psychological deafness found among families and doctors. The child, too, plays on the sense of hearing.
You changed your mind! Infants interpret a change in word as signaling a change in an agent's goals.
Jin, Kyong-Sun; Song, Hyun-Joo
2017-10-01
Language provides information about our psychological states. For instance, adults can use language to convey information about their goals or preferences. The current research examined whether 14- and 12-month-old infants could interpret a change in an agent's word as signaling a change in her goals. In two experiments, 14-month-olds (Experiment 1) and 12-month-olds (Experiment 2) were first familiarized to an event in which an agent uttered a novel word and then reached for one of two novel objects. During the test trials, the agent uttered a different novel word (different-word condition) or the same word (same-word condition) and then reached for the same object or the other object. Both 14- and 12-month-olds in the different-word condition expected the agent to change her goal and reach for the other object. In contrast, the infants in the same-word condition expected the agent to maintain her goal. In Experiment 3, 12-month-olds who heard two distinct sounds instead of the agent's novel words expected the agent to maintain her goal regardless of the change in the nonlinguistic sounds. Together, these results indicate that by 12 months of age infants can use an agent's verbal information to detect a change in her goals. Copyright © 2017 Elsevier Inc. All rights reserved.
Prosody and informativity: A cross-linguistic investigation
NASA Astrophysics Data System (ADS)
Ouyang, Iris Chuoying
This dissertation aims to extend our knowledge of prosody -- in particular, what kinds of information may be conveyed through prosody, which prosodic dimensions may be used to convey them, and how individual speakers differ from one another in how they use prosody. Four production studies were conducted to examine how various factors interact with one another in shaping the prosody of an utterance and how prosody fulfills its multi-functional role. Experiment 1 explores the interaction between two types of informativity, namely information structure and information-theoretic properties. The results show that the prosodic consequences of new-information focus are modulated by the focused word's frequency, whereas the prosodic consequences of corrective focus are modulated by the focused word's probability in the context. Furthermore, f0 ranges appear to be more informative than f0 shapes in reflecting informativity across speakers. Specifically, speakers seem to have individual 'preferences' regarding f0 shapes, the f0 ranges they use for an utterance, and the magnitude of differences in f0 ranges by which they mark information-structural distinctions. In contrast, there is more cross-speaker validity in the actual directions of differences in f0 ranges between information-structural types. Experiments 2 and 3 further show that the interaction found between corrective focus and contextual probability depends on the interlocutor's knowledge state. When the interlocutor has no access to the crucial information concerning utterances' contextual probability, speakers prosodically emphasize contextually improbable corrections, but not contextually probable corrections. Furthermore, speakers prosodically emphasize the corrections in response to contextually probable misstatements, but not the corrections in response to contextually improbable misstatements.
In contrast, completely opposite patterns are found when words' contextual probability is shared knowledge between the speaker and the interlocutor: speakers prosodically emphasize contextually probable corrections and the corrections in response to contextually improbable misstatements. Experiment 4 demonstrates the multi-functionality of prosody by investigating its discourse-level functions in Mandarin Chinese, a tone language where a word's prosodic pattern is crucial to its meaning. The results show that, although prosody serves fundamental, lexical-level functions in Mandarin Chinese, it nevertheless provides cues to information structure as well. Similar to what has been found with English, corrective information is prosodically more prominent than non-corrective information, and new information is prosodically more prominent than given information. Taken together, these experiments demonstrate the complex relationship between prosody and the different types of information it encodes in a given language. To better understand prosody, it is important to integrate insights from different traditions of research and to investigate across languages. In addition, the findings of this research suggest that speakers' assumptions about what their interlocutors know -- as well as speakers' ability to update these expectations -- play a key role in shaping the prosody of utterances. I hypothesize that prosodic prominence may reflect the gap between what speakers had expected their interlocutors to say and what their interlocutors have actually said.
Long-term temporal tracking of speech rate affects spoken-word recognition.
Baese-Berk, Melissa M; Heffner, Christopher C; Dilley, Laura C; Pitt, Mark A; Morrill, Tuuli H; McAuley, J Devin
2014-08-01
Humans unconsciously track a wide array of distributional characteristics in their sensory environment. Recent research in spoken-language processing has demonstrated that the speech rate surrounding a target region within an utterance influences which words, and how many words, listeners hear later in that utterance. On the basis of hypotheses that listeners track timing information in speech over long timescales, we investigated the possibility that the perception of words is sensitive to speech rate over such a timescale (e.g., an extended conversation). Results demonstrated that listeners tracked variation in the overall pace of speech over an extended duration (analogous to that of a conversation that listeners might have outside the lab) and that this global speech rate influenced which words listeners reported hearing. The effects of speech rate became stronger over time. Our findings are consistent with the hypothesis that neural entrainment by speech occurs on multiple timescales, some lasting more than an hour. © The Author(s) 2014.
Recognition of speaker-dependent continuous speech with KEAL
NASA Astrophysics Data System (ADS)
Mercier, G.; Bigorgne, D.; Miclet, L.; Le Guennec, L.; Querre, M.
1989-04-01
A description of the speaker-dependent continuous speech recognition system KEAL is given. An unknown utterance is recognized by means of the following procedures: acoustic analysis, phonetic segmentation and identification, word and sentence analysis. The combination of feature-based, speaker-independent coarse phonetic segmentation with speaker-dependent statistical classification techniques is one of the main design features of the acoustic-phonetic decoder. The lexical access component is essentially based on a statistical dynamic programming technique which aims at matching a phonemic lexical entry containing various phonological forms, against a phonetic lattice. Sentence recognition is achieved by use of a context-free grammar and a parsing algorithm derived from Earley's parser. A speaker adaptation module allows some of the system parameters to be adjusted by matching known utterances with their acoustical representation. The task to be performed, described by its vocabulary and its grammar, is given as a parameter of the system. Continuously spoken sentences extracted from a 'pseudo-Logo' language are analyzed and results are presented.
Suzuki, Yumi; Hirayama, Kazumi; Shimomura, Tatsuo; Uchiyama, Makoto; Fujii, Hiromi; Mori, Etsuro; Nishio, Yoshiyuki; Iizuka, Osamu; Inoue, Ryusuke; Otsuki, Mika; Sakai, Shinya
2017-03-01
Pareidolias are visual illusions of meaningful objects, such as faces and animals, that arise from ambiguous forms embedded in visual scenes. Pareidolias and visual hallucinations have been suggested to have a common underlying neural mechanism in patients with dementia with Lewy bodies (DLB). The aim of the present study was to find an externally observable physiological indicator of pareidolias. Using a pareidolia test developed by Uchiyama and colleagues, we evoked pareidolias in patients with DLB and recorded the resultant changes in the diameters of their pupils. The time frequencies of changes in pupil diameters preceding pareidolic utterances and correct utterances by the patients, as well as correct utterances by healthy control participants, were analyzed by a fast Fourier transform program. The power at time frequencies of 0-0.46 Hz was found to be greatest preceding pareidolic utterances in patients with DLB, followed by that preceding correct utterances in control participants, followed by that preceding correct utterances in patients with DLB. When the changes in power preceding the utterance were greater than the median value of correct utterances by the control group, the frequency of pareidolic utterances was significantly greater than that of correct utterances and when the changes were the same as or lower than the median value, the frequency of correct utterances was significantly greater than that of pareidolic utterances. Greater changes in power preceding the utterance at time frequencies of 0-0.46 Hz may thus be an externally observable physiological indicator of the occurrence of pareidolias.
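The band-power analysis described here can be sketched with a standard FFT. The sampling rate and the signals below are invented for illustration; the study's actual preprocessing of the pupil-diameter series is not specified in the abstract:

```python
import numpy as np

def band_power(signal, fs, f_lo=0.0, f_hi=0.46):
    """Summed FFT power of `signal` (sampled at `fs` Hz) in [f_lo, f_hi]."""
    spectrum = np.fft.rfft(signal - np.mean(signal))  # one-sided spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return float(np.sum(np.abs(spectrum[mask]) ** 2))

# Illustrative: a slow 0.2 Hz oscillation carries its power inside the
# 0-0.46 Hz band, while a 5 Hz oscillation does not.
fs = 60.0  # assumed sampling rate (Hz)
t = np.arange(0, 10, 1.0 / fs)
slow = np.sin(2 * np.pi * 0.2 * t)   # inside the band
fast = np.sin(2 * np.pi * 5.0 * t)   # outside the band
print(band_power(slow, fs) > band_power(fast, fs))  # True
```

Comparing such band power in the window preceding each utterance against the control group's median is then a simple thresholding step.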
Glennen, Sharon
2014-07-01
The author followed 56 internationally adopted children during the first 3 years after adoption to determine how and when they reached age-expected language proficiency in Standard American English. The influence of age of adoption was measured, along with the relationship between early and later language and speech outcomes. Children adopted from Eastern Europe at ages 12 months to 4 years, 11 months, were assessed 5 times across 3 years. Norm-referenced measures of receptive and expressive language and articulation were compared over time. In addition, mean length of utterance (MLU) was measured. Across all children, receptive language reached age-expected levels more quickly than expressive language. Children adopted at ages 1 and 2 "caught up" more quickly than children adopted at ages 3 and 4. Three years after adoption, there was no difference in test scores across age of adoption groups, and the percentage of children with language or speech delays matched population estimates. MLU was within the average range 3 years after adoption but significantly lower than other language test scores. Three years after adoption, age of adoption did not influence language or speech outcomes, and most children reached age-expected language levels. Expressive syntax as measured by MLU was an area of relative weakness.
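Mean length of utterance is a simple average over a language sample. The toy version below counts words (MLUw) rather than morphemes, and the sample utterances are invented:

```python
def mlu_words(utterances):
    """Mean length of utterance in words (MLUw) over a language sample."""
    counts = [len(u.split()) for u in utterances]
    return sum(counts) / len(counts)

sample = ["want cookie", "mommy go", "I want the big cookie"]
print(mlu_words(sample))  # (2 + 2 + 5) / 3 = 3.0
```

Morpheme-based MLU, the more common clinical measure, additionally requires segmenting inflections (e.g., "dogs" counts as two morphemes), which needs a morphological analyzer rather than whitespace splitting.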
Kapantzoglou, Maria; Fergadiotis, Gerasimos; Restrepo, M Adelaida
2017-10-17
This study examined whether the language sample elicitation technique (i.e., storytelling and story-retelling tasks with pictorial support) affects lexical diversity (D), grammaticality (grammatical errors per communication unit [GE/CU]), sentence length (mean length of utterance in words [MLUw]), and sentence complexity (subordination index [SI]), which are commonly used indices for diagnosing primary language impairment in Spanish-English-speaking children in the United States. Twenty bilingual Spanish-English-speaking children with typical language development and 20 with primary language impairment participated in the study. Four analyses of variance were conducted to evaluate the effect of language elicitation technique and group on D, GE/CU, MLUw, and SI. Also, 2 discriminant analyses were conducted to assess which indices were more effective for story retelling and storytelling and their classification accuracy across elicitation techniques. D, MLUw, and SI were influenced by the type of elicitation technique, but GE/CU was not. The classification accuracy of language sample analysis was greater in story retelling than in storytelling, with GE/CU and D being useful indicators of language abilities in story retelling and GE/CU and SI in storytelling. Two indices in language sample analysis may be sufficient for diagnosis in 4- to 5-year-old bilingual Spanish-English-speaking children.
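Of the indices named here, lexical diversity is the most direct to sketch. D itself comes from the vocd curve-fitting procedure; the plain type-token ratio below is only a crude stand-in for the same idea, and the sample sentence is invented:

```python
def type_token_ratio(words):
    """Proportion of distinct word types among all tokens: a crude,
    length-sensitive stand-in for lexical diversity measures such as D."""
    return len(set(words)) / len(words)

sample = "the dog saw the other dog and barked".split()
print(type_token_ratio(sample))  # 6 unique types / 8 tokens = 0.75
```

Unlike D, a raw type-token ratio falls as samples grow longer, which is precisely why vocd fits TTR across many random subsamples instead of using a single ratio.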
Phonological complexity in school-aged children who stutter and exhibit a language disorder.
Wolk, Lesley; LaSalle, Lisa R
2015-03-01
The Index of Phonological Complexity and the Word Complexity Measure are two measures of the phonological complexity of a word. Other phonological measures such as phonological neighborhood density have been used to compare stuttered versus fluent words. It appears that in preschoolers who stutter, the length and complexity of the utterance is more influential than the phonetic features of the stuttered word. The present hypothesis was that in school-age children who stutter, stuttered words would be more phonologically complex than fluent words, when the length and complexity of the utterance containing them is comparable. School-age speakers who stutter were hypothesized to differ from those with a concomitant language disorder. Sixteen speakers, six females and ten males (M age=12;3; Range=7;7 to 19;5) available from an online database, were divided into eight who had a concomitant language disorder (S+LD) and eight age- and sex-matched speakers who did not (S-Only). When all stuttered content words were identified, S+LD speakers produced more repetitions, and S-Only speakers produced more inaudible sound prolongations. When stuttered content words were matched to fluent content words and when talker groups were combined, stuttered words were significantly (p≤0.01) higher in both the Index of Phonological Complexity and the Word Complexity Measure and lower in density ("sparser") than fluent words. Results corroborate those of previous researchers. Future research directions are suggested, such as cross-sectional designs to evaluate developmental patterns of phonological complexity and stuttering plus language disordered connections. 
The reader will be able to: (a) Define and describe phonological complexity; (b) Define phonological neighborhood density and summarize the literature on the topic; (c) Describe the Index of Phonological Complexity (IPC) for a given word; (d) Describe the Word Complexity Measure (WCM) for a given word; (e) Summarize two findings from the current study and describe how each relates to studies of phonological complexity and fluency disorders. Copyright © 2014 Elsevier Inc. All rights reserved.
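Phonological neighborhood density, one of the measures discussed above, is typically operationalized as the number of lexicon words differing from a target by one phoneme (a substitution, insertion, or deletion). A minimal sketch over phoneme tuples, with an invented toy lexicon:

```python
def one_edit_apart(a, b):
    """True if phoneme sequences a and b differ by exactly one
    substitution, insertion, or deletion."""
    if abs(len(a) - len(b)) > 1 or a == b:
        return False
    if len(a) == len(b):  # substitution: exactly one mismatched position
        return sum(x != y for x, y in zip(a, b)) == 1
    short, long_ = (a, b) if len(a) < len(b) else (b, a)
    # insertion/deletion: removing one phoneme from the longer
    # sequence must yield the shorter one
    return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

def neighborhood_density(target, lexicon):
    return sum(one_edit_apart(target, w) for w in lexicon)

# Toy lexicon of phoneme tuples: "cat", "bat", "cast", "at", "dog".
lexicon = [("k", "ae", "t"), ("b", "ae", "t"), ("k", "ae", "s", "t"),
           ("ae", "t"), ("d", "o", "g")]
print(neighborhood_density(("k", "ae", "t"), lexicon))  # 3 neighbors
```

A word with few such neighbors is "sparse", which is the sense in which the stuttered words above were lower in density than fluent words.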
To electrify bilingualism: Electrophysiological insights into bilingual metaphor comprehension
Jankowiak, Katarzyna; Rataj, Karolina; Naskręcki, Ryszard
2017-01-01
Though metaphoric language comprehension has previously been investigated with event-related potentials, little attention has been devoted to extending this research from the monolingual to the bilingual context. In the current study, late proficient unbalanced Polish (L1)–English (L2) bilinguals performed a semantic decision task to novel metaphoric, conventional metaphoric, literal, and anomalous word pairs presented in L1 and L2. The results showed more pronounced P200 amplitudes to L2 than L1, which can be accounted for by differences in the subjective frequency of the native and non-native lexical items. Within the early N400 time window (300–400 ms), L2 word dyads evoked delayed and attenuated amplitudes relative to L1 word pairs, possibly indicating extended lexical search during foreign language processing, and weaker semantic interconnectivity for L2 compared to L1 words within the memory system. The effect of utterance type was observed within the late N400 time window (400–500 ms), with smallest amplitudes evoked by literal, followed by conventional metaphoric, novel metaphoric, and anomalous word dyads. Such findings are interpreted as reflecting more resource intensive cognitive mechanisms governing novel compared to conventional metaphor comprehension in both the native and non-native language. Within the late positivity time window (500–800 ms), Polish novel metaphors evoked reduced amplitudes relative to literal utterances. In English, on the other hand, this effect was observed for both novel and conventional metaphoric word dyads. This finding might indicate continued effort in information retrieval or access to the non-literal route during novel metaphor comprehension in L1, and during novel and conventional metaphor comprehension in L2. 
Altogether, the present results point to decreased automaticity of cognitive mechanisms engaged in non-native and non-dominant language processing, and suggest a decreased sensitivity to the levels of conventionality of metaphoric meanings in late proficient unbalanced bilingual speakers. PMID:28414742
Speech disruptions in relation to language growth in children who stutter: an exploratory study.
Wagovich, Stacy A; Hall, Nancy E; Clifford, Betsy A
2009-12-01
Young children with typical fluency demonstrate a range of disfluencies, or speech disruptions. One type of disruption, revision, appears to increase in frequency as syntactic skills develop. To date, this phenomenon has not been studied in children who stutter (CWS). Rispoli, Hadley, and Holt (2008) suggest a schema for categorizing speech disruptions in terms of revisions and stalls. The purpose of this exploratory study was to use this schema to evaluate whether CWS show a pattern over time in their production of stuttering, revisions, and stalls. Nine CWS, ages 2;1 to 4;11, participated in the study, producing language samples each month for 10 months. MLU and vocd analyses were performed for samples across three time periods. Active declarative sentences within these samples were examined for the presence of disruptions. Results indicated that the proportion of sentences containing revisions increased over time, but proportions for stalls and stuttering did not. Visual inspection revealed that more stuttering and stalls occurred on longer utterances than on shorter utterances. Upon examination of individual children's language, it appears two-thirds of the children showed a pattern in which, as MLU increased, revisions increased as well. Findings are similar to studies of children with typical fluency, suggesting that, despite the fact that CWS display more (and different) disfluencies relative to typically fluent peers, revisions appear to increase over time and correspond to increases in MLU, just as is the case with peers. The reader will be able to: (1) describe the three types of speech disruptions assessed in this article; (2) compare present findings of disruptions in children who stutter to findings of previous research with children who are typically fluent; and (3) discuss future directions in this area of research, given the findings and implications of this study.
O'Connell, Daniel C; Kowal, Sabine; Ageneau, Carie
2005-03-01
A psycholinguistic hypothesis regarding the use of interjections in spoken utterances, originally formulated by Ameka (1992b, 1994) for the English language, but not confirmed in the German-language research of Kowal and O'Connell (2004a, 2004c), was tested: The local syntactic isolation of interjections is paralleled by their articulatory isolation in spoken utterances, i.e., by their occurrence between a preceding and a following pause. The corpus consisted of four TV and two radio interviews of Hillary Clinton that had coincided with the publication of her book Living History (2003) and one TV interview of Robin Williams by James Lipton. No evidence was found for articulatory isolation of English-language interjections. In the Hillary Clinton interviews and Robin Williams interviews, respectively, 71% and 73% of all interjections occurred initially, i.e., at the onset of various units of spoken discourse: at the beginning of turns; at the beginning of articulatory phrases within turns, i.e., after a preceding pause; and at the beginning of a citation within a turn (either Direct Reported Speech [DRS] or what we have designated Hypothetical Speaker Formulation [HSF]). One conventional interjection (OH) occurred most frequently. The Robin Williams interview had a much higher occurrence of interjections, especially nonconventional ones, than the Hillary Clinton interviews had. It is suggested that the onset or initializing role of interjections reflects the temporal priority of the affective and the intuitive over the analytic, grammatical, and cognitive in speech production. Both this temporal priority and the spontaneous and emotional use of interjections are consonant with Wundt's (1900) characterization of the primary interjection as psychologically primitive. The interjection is indeed the purest verbal implementation of conceptual orality.
Thothathiri, Malathi; Rattinger, Michelle G.
2016-01-01
Learning to produce sentences involves learning patterns that enable the generation of new utterances. Language contains both verb-specific and verb-general regularities that are relevant to this capacity. Previous research has focused on whether one source is more important than the other. We tested whether the production system can flexibly learn to use either source, depending on the predictive validity of different cues in the input. Participants learned new sentence structures in a miniature language paradigm. In three experiments, we manipulated whether individual verbs or verb-general mappings better predicted the structures heard during learning. Evaluation of participants’ subsequent production revealed that they could use either the structural preferences of individual verbs or abstract meaning-to-form mappings to construct new sentences. Further, this choice varied according to cue validity. These results demonstrate flexibility within the production architecture and the importance of considering how language was learned when discussing how language is used. PMID:27047428
Adaptation of fictional and online conversations to communication media
NASA Astrophysics Data System (ADS)
Alis, C. M.; Lim, M. T.
2012-12-01
Conversations allow the quick transfer of short bits of information, and it is reasonable to expect that changes in communication medium affect how we converse. Using conversations in works of fiction and in an online social networking platform, we show that the utterance length of conversations is slowly shortening with time but adapts more strongly to the constraints of the communication medium. This indicates that the introduction of any new medium of communication can affect the way natural language evolves.
ERIC Educational Resources Information Center
Obrecht, Dean H.
This report contrasts the results of a rigidly specified, pattern-oriented approach to learning Spanish with an approach that emphasizes the origination of sentences by the learner in direct response to stimuli. Pretesting and posttesting statistics are presented and conclusions are discussed. The experimental method, which required the student to…
Naval War College Review. Volume 60, Number 3, Summer 2007
2007-06-01
you a new idea, and in that context it is interesting to think for a moment about the name of this beautiful, small town on the Narragansett Bay. It is...at everyone’s fingertips • Exploding Internet with bloggers, hackers, and chat rooms • Cell-phone cameras and recorders, making everyone a “reporter...simply The Road, and it is notable in every sense, most particularly for the heartbreaking beauty of its poetic language and as well for the utterly
Utterance-final position and pitch marking aid word learning in school-age children
Laaha, Sabine; Fitch, W. Tecumseh
2017-01-01
We investigated the effects of word order and prosody on word learning in school-age children. Third graders viewed photographs belonging to one of three semantic categories while hearing four-word nonsense utterances containing a target word. In the control condition, all words had the same pitch and, across trials, the position of the target word was varied systematically within each utterance. The only cue to word–meaning mapping was the co-occurrence of target words and referents. This cue was present in all conditions. In the Utterance-final condition, the target word always occurred in utterance-final position, and at the same fundamental frequency as all the other words of the utterance. In the Pitch peak condition, the position of the target word was varied systematically within each utterance across trials, and produced with pitch contrasts typical of infant-directed speech (IDS). In the Pitch peak + Utterance-final condition, the target word always occurred in utterance-final position, and was marked with a pitch contrast typical of IDS. Word learning occurred in all conditions except the control condition. Moreover, learning performance was significantly higher than that observed with simple co-occurrence (control condition) only for the Pitch peak + Utterance-final condition. We conclude that, for school-age children, the combination of words' utterance-final alignment and pitch enhancement boosts word learning. PMID:28878961
Howell, Peter; Au-Yeung, James; Pilgrim, Lesley
2007-01-01
Two important determinants of variation in stuttering frequency are utterance rate and the linguistic properties of the words being spoken. Little is known about how these determinants interrelate. It is hypothesized that linguistic factors that change word duration alter utterance rate locally within an utterance, which in turn increases stuttering frequency. According to the hypothesis, utterance rate variation should occur locally within the linguistic segments of an utterance that are known to increase the likelihood of stuttering. The hypothesis is tested using length of tone unit as the linguistic factor. Three predictions are confirmed: utterance rate varies locally within tone units, and this local variation affects stuttering frequency; stuttering frequency is positively related to the length of tone units; and variations in utterance rate are correlated with tone unit length. Alternative theoretical formulations of these findings are considered. PMID:9921672
The phonetic rhythm/syntax headedness connection: Evidence from Tagalog
NASA Astrophysics Data System (ADS)
Bird, Sonya; Fais, Laurel; Werker, Janet
2005-04-01
Ramus, Nespor, and Mehler [Cognition (1999)] show that the rhythm of a language (broadly: stress- versus syllable- versus mora-timing) results from the proportion of vocalic material in an utterance (%V) and the standard deviation of consonantal intervals (delta-C). Based on 14 languages, Shukla, Nespor, and Mehler [submitted] further argue that rhythm is correlated with syntactic headedness: low %V is correlated with head-first languages (e.g., English); high %V is correlated with head-final languages (e.g., Japanese). Together, these proposals have important implications for language acquisition: infants can discriminate across rhythm classes [Nazzi, Bertoncini, and Mehler, J. Exp. Psych: Human Perception and Performance (1998)]. If rhythm, as defined by %V and delta-C, can predict headedness, then infants can potentially use rhythm information to bootstrap into their language's syntactic structure. This paper reports on a study analyzing rhythm in a language not yet considered: Tagalog. Results support the Shukla et al. proposal in an interesting way: based on its %V and delta-C, Tagalog falls between head-first and head-last languages, slightly closer to the head-first group. This placement correlates well with the fact that, although Tagalog is said to be primarily head-first syntactically, head-last phrases are permitted and common in the language.
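The two rhythm metrics named above are simple to compute from a hand-labeled segmentation. The following sketch (with invented interval durations) illustrates the definitions: %V is the proportion of utterance duration that is vocalic, and delta-C is the standard deviation of consonantal interval durations.

```python
# Rhythm metrics of Ramus, Nespor, and Mehler (1999), sketched for
# illustration. Input: a list of (segment_type, duration_in_seconds)
# intervals from one labeled utterance; the values below are invented.

def rhythm_metrics(intervals):
    """Return (%V, delta_C) for one utterance.

    %V      = vocalic duration / total duration, as a percentage
    delta_C = standard deviation of consonantal interval durations
    """
    vocalic = [d for kind, d in intervals if kind == "V"]
    consonantal = [d for kind, d in intervals if kind == "C"]
    total = sum(vocalic) + sum(consonantal)
    pct_v = 100.0 * sum(vocalic) / total
    mean_c = sum(consonantal) / len(consonantal)
    delta_c = (sum((d - mean_c) ** 2 for d in consonantal)
               / len(consonantal)) ** 0.5
    return pct_v, delta_c

# Hypothetical utterance: alternating consonantal and vocalic intervals.
utterance = [("C", 0.08), ("V", 0.12), ("C", 0.15), ("V", 0.10),
             ("C", 0.06), ("V", 0.14)]
pct_v, delta_c = rhythm_metrics(utterance)
```

In practice the metrics are averaged over many utterances per language; this fragment shows only the per-utterance computation.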
Szagun, Gisela; Stumper, Barbara
2012-12-01
The authors investigated the influence of social environmental variables and age at implantation on language development in children with cochlear implants. Participants were 25 children with cochlear implants and their parents. Age at implantation ranged from 6 months to 42 months (M = 20.4 months, SD = 22.0 months). Linguistic progress was assessed at 12, 18, 24, and 30 months after implantation. At each data point, language measures were based on parental questionnaire and 45-min spontaneous speech samples. Children's language and parents' child-directed language were analyzed. On all language measures, children displayed considerable vocabulary and grammatical growth over time. Although there was no overall effect of age at implantation, younger and older children had different growth patterns. Children implanted by age 24 months made the most marked progress earlier on, whereas children implanted thereafter did so later on. Higher levels of maternal education were associated with faster linguistic progress; age at implantation was not. Properties of maternal language input, mean length of utterance, and expansions were associated with children's linguistic progress independently of age at implantation. In children implanted within the sensitive period for language learning, children's home language environment contributes more crucially to their linguistic progress than does age at implantation.
Miller, Carol A; Deevy, Patricia
2003-10-01
Children with specific language impairment (SLI) show inconsistent use of grammatical morphology. Children who are developing language typically also show a period during which they produce grammatical morphemes inconsistently. Various theories claim that both young typically developing children and children with SLI achieve correct production through memorization of some inflected forms (M. Gopnik, 1997; M. Tomasello, 2000a, 2000b). Adapting a method introduced by C. Miller and L. Leonard (1998), the authors investigated the use of present tense third singular -s by 24 typically developing preschoolers and 36 preschoolers with SLI. Each group was divided into 2 mean length of utterance (MLU) levels. Group and individual data provided little evidence that memorization could explain the correct productions of the third singular morpheme for either children with SLI or typically developing children, and there was no difference between children with higher and lower MLUs.
NASA Astrophysics Data System (ADS)
Wang, Hongcui; Kawahara, Tatsuya
CALL (Computer Assisted Language Learning) systems using ASR (Automatic Speech Recognition) for second language learning have received increasing interest recently. However, it still remains a challenge to achieve high speech recognition performance, including accurate detection of erroneous utterances by non-native speakers. Conventionally, possible error patterns, based on linguistic knowledge, are added to the lexicon and language model, or the ASR grammar network. However, this approach easily runs into a trade-off between the coverage of errors and an increase in perplexity. To solve this problem, we propose a method based on a decision tree to learn effective prediction of errors made by non-native speakers. An experimental evaluation with a number of foreign students learning Japanese shows that the proposed method can effectively generate an ASR grammar network, given a target sentence, to achieve both better coverage of errors and smaller perplexity, resulting in significant improvement in ASR accuracy.
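The core idea of the abstract above, learning from annotated learner data which error patterns are worth adding to a grammar network, can be illustrated with a toy decision stump. This is not the authors' implementation; the feature names and annotations are invented.

```python
# Toy illustration: from annotated non-native utterances, find the single
# learner feature that best predicts a given error pattern, so the grammar
# network only includes that error for learners likely to make it.

def best_stump(examples):
    """Pick the binary feature that best predicts the error label.

    examples: list of (features_dict, made_error_bool) pairs.
    Returns (feature_name, training_accuracy).
    """
    features = examples[0][0].keys()
    best = None
    for f in features:
        # Accuracy of the rule "predict error iff feature f is True".
        correct = sum(1 for feats, err in examples if feats[f] == err)
        acc = correct / len(examples)
        if best is None or acc > best[1]:
            best = (f, acc)
    return best

# Hypothetical annotations: does the learner confuse long/short vowels?
data = [
    ({"L1_no_vowel_length": True,  "beginner": True},  True),
    ({"L1_no_vowel_length": True,  "beginner": False}, True),
    ({"L1_no_vowel_length": False, "beginner": True},  False),
    ({"L1_no_vowel_length": False, "beginner": False}, False),
]
feature, accuracy = best_stump(data)
```

A real decision tree would recurse on the remaining features; the stump is enough to show how error prediction can keep the grammar network small while covering likely errors.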
Birth talk in second stage labor.
Bergstrom, Linda; Richards, Lori; Proctor, Adele; Avila, Leticia Bohrer; Morse, Janice M; Roberts, Joyce E
2009-07-01
In this secondary analysis of videotape data, we describe birth talk demonstrated by caregivers to women during the second stage of labor. Birth talk is a distinctive verbal register: a set of linguistic features used with particular behaviors in specific situations and serving a particular communicative purpose. Birth talk is found cross-culturally among speakers of diverse languages. Our findings show that birth talk occurred mainly during contractions and co-occurred with two general styles of caregiving: "directed toward forced bearing down" and "supportive of physiologic bearing down." We also describe talk that occurred during rest periods, which was similar across the two styles. Caregivers' use of language tended to be either procedural (giving directions, instructions) or comfort related (encouraging and supporting). Linguistic features of the talk consisted of utterances of short duration, level pitch patterns with no sudden pitch shifts, and a restricted pitch range.
Language from police body camera footage shows racial disparities in officer respect
Voigt, Rob; Camp, Nicholas P.; Prabhakaran, Vinodkumar; Hamilton, William L.; Hetey, Rebecca C.; Griffiths, Camilla M.; Jurgens, David; Jurafsky, Dan; Eberhardt, Jennifer L.
2017-01-01
Using footage from body-worn cameras, we analyze the respectfulness of police officer language toward white and black community members during routine traffic stops. We develop computational linguistic methods that extract levels of respect automatically from transcripts, informed by a thin-slicing study of participant ratings of officer utterances. We find that officers speak with consistently less respect toward black versus white community members, even after controlling for the race of the officer, the severity of the infraction, the location of the stop, and the outcome of the stop. Such disparities in common, everyday interactions between police and the communities they serve have important implications for procedural justice and the building of police–community trust. PMID:28584085
What does it take to stress a word? Digital manipulation of stress markers in ataxic dysarthria.
Lowit, Anja; Ijitona, Tolulope; Kuschmann, Anja; Corson, Stephen; Soraghan, John
2018-05-18
Stress production is important for effective communication, but this skill is frequently impaired in people with motor speech disorders. The literature reports successful treatment of these deficits in this population, thus highlighting the therapeutic potential of this area. However, no specific guidance is currently available to clinicians about whether any of the stress markers are more effective than others, to what degree they have to be manipulated, and whether strategies need to differ according to the underlying symptoms. In order to provide detailed information on how stress production problems can be addressed, the study investigated (1) the minimum amount of change in a single stress marker necessary to achieve significant improvement in stress target identification; and (2) whether stress can be signalled more effectively with a combination of stress markers. Data were sourced from a sentence stress task performed by 10 speakers with ataxic dysarthria and 10 healthy matched control participants. Fifteen utterances perceived as having incorrect stress patterns (no stress, all words stressed or inappropriate word stressed) were selected and digitally manipulated in a stepwise fashion based on typical speaker performance. Manipulations were performed on F0, intensity and duration, either in isolation or in combination with each other. In addition, pitch contours were modified for some utterances. A total of 50 naïve listeners scored which word they perceived as being stressed. Results showed that increases in duration and intensity at levels smaller than produced by the control participants resulted in significant improvements in listener accuracy. The effectiveness of F0 increases depended on the underlying error pattern. Overall intensity showed the most stable effects. Modifications of the pitch contour also resulted in significant improvements, but not to the same degree as amplification. 
Integration of two or more stress markers did not yield better results than manipulation of individual stress markers, unless they were combined with pitch contour modifications. The results highlight the potential for improvement of stress production in speakers with motor speech disorders. The fact that individual parameter manipulation is as effective as combining parameters will facilitate the therapeutic process considerably, as will the finding that amplification at lower levels than seen in typical speakers is sufficient. The difference in results across utterance sets highlights the need to investigate the underlying error pattern in order to select the most effective compensatory strategy for clients.
Schizophrenia and the structure of language: the linguist's view.
Covington, Michael A; He, Congzhou; Brown, Cati; Naçi, Lorina; McClain, Jonathan T; Fjordbak, Bess Sirmon; Semple, James; Brown, John
2005-09-01
Patients with schizophrenia often display unusual language impairments. This is a wide-ranging critical review of the literature on language in schizophrenia since the 19th century. We survey schizophrenic language level by level, from phonetics through phonology, morphology, syntax, semantics, and pragmatics. There are at least two kinds of impairment (perhaps not fully distinct): thought disorder, or failure to maintain a discourse plan, and schizophasia, comprising various dysphasia-like impairments such as clanging, neologism, and unintelligible utterances. Thought disorder appears to be primarily a disruption of executive function and pragmatics, perhaps with impairment of the syntax-semantics interface; schizophasia involves disruption at other levels. Phonetics is also often abnormal (manifesting as flat intonation or unusual voice quality), but phonological structure, morphology, and syntax are normal or nearly so (some syntactic impairments have been demonstrated). Access to the lexicon is clearly impaired, manifesting as stilted speech, word approximation, and neologism. Clanging (glossomania) is straightforwardly explainable as distraction by self-monitoring. Recent research has begun to relate schizophrenia, which is partly genetic, to the genetic endowment that makes human language possible.
Automatic detection of Parkinson's disease in running speech spoken in three different languages.
Orozco-Arroyave, J R; Hönig, F; Arias-Londoño, J D; Vargas-Bonilla, J F; Daqrouq, K; Skodda, S; Rusz, J; Nöth, E
2016-01-01
The aim of this study is the analysis of continuous speech signals of people with Parkinson's disease (PD) considering recordings in different languages (Spanish, German, and Czech). A method for the characterization of the speech signals, based on the automatic segmentation of utterances into voiced and unvoiced frames, is addressed here. The energy content of the unvoiced sounds is modeled using 12 Mel-frequency cepstral coefficients and 25 bands scaled according to the Bark scale. Four speech tasks comprising isolated words, rapid repetition of the syllables /pa/-/ta/-/ka/, sentences, and read texts are evaluated. The method proves to be more accurate than classical approaches in the automatic classification of speech of people with PD and healthy controls. The accuracies range from 85% to 99% depending on the language and the speech task. Cross-language experiments are also performed confirming the robustness and generalization capability of the method, with accuracies ranging from 60% to 99%. This work comprises a step forward for the development of computer aided tools for the automatic assessment of dysarthric speech signals in multiple languages.
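The segmentation step described above, splitting an utterance into voiced and unvoiced frames before modeling the unvoiced energy, can be sketched with a simple zero-crossing heuristic. This is an illustrative stand-in, not the authors' segmentation method, and the frame sizes and threshold are assumptions.

```python
import numpy as np

# Illustrative sketch: frame a signal, flag likely-unvoiced frames with a
# zero-crossing-rate heuristic, and take the log energy of those frames
# (the kind of per-frame features later summarized with cepstral or
# Bark-band measures). Thresholds and frame sizes are hypothetical.

def frame_signal(x, frame_len=400, hop=200):
    """Split a 1-D signal into overlapping frames (rows)."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def unvoiced_mask(frames, zcr_thresh=0.2):
    """Unvoiced speech (e.g., fricatives) tends to have a high
    zero-crossing rate; voiced speech is dominated by low frequencies."""
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return zcr > zcr_thresh

# Synthetic demo: a low-frequency "voiced" tone followed by "unvoiced" noise.
sr = 8000
t = np.arange(sr) / sr
rng = np.random.default_rng(0)
voiced = np.sin(2 * np.pi * 120 * t)   # 120 Hz tone, low ZCR
unvoiced = rng.standard_normal(sr)     # white noise, high ZCR
x = np.concatenate([voiced, unvoiced])

frames = frame_signal(x)
mask = unvoiced_mask(frames)
energy = np.log((frames[mask] ** 2).sum(axis=1))  # log energy, unvoiced only
```

On real speech, a production system would use a proper voicing detector and then compute the Mel-cepstral or Bark-scaled band energies the study describes; the point here is only the frame-then-select structure of the pipeline.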
Lesion localization of speech comprehension deficits in chronic aphasia
Binder, Jeffrey R.; Humphries, Colin; Gross, William L.; Book, Diane S.
2017-01-01
Objective: Voxel-based lesion-symptom mapping (VLSM) was used to localize impairments specific to multiword (phrase and sentence) spoken language comprehension. Methods: Participants were 51 right-handed patients with chronic left hemisphere stroke. They performed an auditory description naming (ADN) task requiring comprehension of a verbal description, an auditory sentence comprehension (ASC) task, and a picture naming (PN) task. Lesions were mapped using high-resolution MRI. VLSM analyses identified the lesion correlates of ADN and ASC impairment, first with no control measures, then adding PN impairment as a covariate to control for cognitive and language processes not specific to spoken language. Results: ADN and ASC deficits were associated with lesions in a distributed frontal-temporal parietal language network. When PN impairment was included as a covariate, both ADN and ASC deficits were specifically correlated with damage localized to the mid-to-posterior portion of the middle temporal gyrus (MTG). Conclusions: Damage to the mid-to-posterior MTG is associated with an inability to integrate multiword utterances during comprehension of spoken language. Impairment of this integration process likely underlies the speech comprehension deficits characteristic of Wernicke aphasia. PMID:28179469
Masking Release for Igbo and English.
Ebem, Deborah U; Desloge, Joseph G; Reed, Charlotte M; Braida, Louis D; Uguru, Joy O
2013-09-01
In this research, we explored the effect of noise interruption rate on speech intelligibility. Specifically, we used the Hearing In Noise Test (HINT) procedure with the original HINT stimuli (English) and Igbo stimuli to assess speech reception ability in interrupted noise. For a given noise level, the HINT test provides an estimate of the signal-to-noise ratio (SNR) required for 50%-correct speech intelligibility. The SNR for 50%-correct intelligibility changes depending upon the interruption rate of the noise. This phenomenon (called Masking Release) has been studied extensively in English but not for Igbo, which is an African tonal language spoken predominantly in South Eastern Nigeria. This experiment explored and compared the phenomenon of Masking Release for (i) native English speakers listening to English, (ii) native Igbo speakers listening to English, and (iii) native Igbo speakers listening to Igbo. Since Igbo is a tonal language and English is a non-tonal language, this allowed us to compare Masking Release patterns in native speakers of tonal and non-tonal languages. Our results for native English speakers listening to English HINT show that the SNR and the masking release are orderly and consistent with other English HINT data for English speakers. Our results for Igbo speakers listening to English HINT sentences show that there is greater variability in results across the different Igbo listeners than across the English listeners. This result likely reflects different levels of ability in the English language across the Igbo listeners. The masking release values in dB are less than for English listeners. Our results for Igbo speakers listening to Igbo show that, in general, the SNRs for Igbo sentences are lower than for English/English and Igbo/English. This means that the Igbo listeners could understand 50% of the Igbo sentences at SNRs less than those required for English sentences by either native or non-native listeners.
This result can be explained by the fact that the perception of Igbo utterances by Igbo subjects may have been aided by the prediction of tonal and vowel harmony features existent in the Igbo language. In agreement with other studies, our results also show that in a noisy environment listeners are able to perceive their native language better than a second language. This ability may be attributed to two factors: native speakers are more familiar with the sounds of their language than second-language speakers, and language is predictable, so even in noise a native speaker may be able to predict a succeeding word that is scarcely audible. These contextual effects are facilitated by familiarity.
Auditory Cortex Processes Variation in Our Own Speech
Sitek, Kevin R.; Mathalon, Daniel H.; Roach, Brian J.; Houde, John F.; Niziolek, Caroline A.; Ford, Judith M.
2013-01-01
As we talk, we unconsciously adjust our speech to ensure it sounds the way we intend it to sound. However, because speech production involves complex motor planning and execution, no two utterances of the same sound will be exactly the same. Here, we show that auditory cortex is sensitive to natural variations in self-produced speech from utterance to utterance. We recorded event-related potentials (ERPs) from ninety-nine subjects while they uttered “ah” and while they listened to those speech sounds played back. Subjects' utterances were sorted based on their formant deviations from the previous utterance. Typically, the N1 ERP component is suppressed during talking compared to listening. By comparing ERPs to the least and most variable utterances, we found that N1 was less suppressed to utterances that differed greatly from their preceding neighbors. In contrast, an utterance's difference from the median formant values did not affect N1. Trial-to-trial pitch (f0) deviation and pitch difference from the median similarly did not affect N1. We discuss mechanisms that may underlie the change in N1 suppression resulting from trial-to-trial formant change. Deviant utterances require additional auditory cortical processing, suggesting that speaking-induced suppression mechanisms are optimally tuned for a specific production. PMID:24349399
Distributional Language Learning: Mechanisms and Models of Category Formation.
Aslin, Richard N; Newport, Elissa L
2014-09-01
In the past 15 years, a substantial body of evidence has confirmed that a powerful distributional learning mechanism is present in infants, children, adults and (at least to some degree) in nonhuman animals as well. The present article briefly reviews this literature and then examines some of the fundamental questions that must be addressed for any distributional learning mechanism to operate effectively within the linguistic domain. In particular, how does a naive learner determine the number of categories that are present in a corpus of linguistic input and what distributional cues enable the learner to assign individual lexical items to those categories? Contrary to the hypothesis that distributional learning and category (or rule) learning are separate mechanisms, the present article argues that these two seemingly different processes---acquiring specific structure from linguistic input and generalizing beyond that input to novel exemplars---actually represent a single mechanism. Evidence in support of this single-mechanism hypothesis comes from a series of artificial grammar-learning studies that not only demonstrate that adults can learn grammatical categories from distributional information alone, but that the specific patterning of distributional information among attested utterances in the learning corpus enables adults to generalize to novel utterances or to restrict generalization when unattested utterances are consistently absent from the learning corpus. Finally, a computational model of distributional learning that accounts for the presence or absence of generalization is reviewed and the implications of this model for linguistic-category learning are summarized.
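The mechanism described above, assigning lexical items to categories from their distributional contexts and generalizing to novel items, can be illustrated with a miniature corpus. This is a toy sketch of the general idea, not the reviewed computational model; the corpus and nonce words are invented.

```python
from collections import defaultdict

# Toy distributional category learning: words that share left/right
# contexts across attested utterances group together, and a novel word
# is matched to the attested word with the most overlapping contexts.

def context_profiles(corpus):
    """Map each word to its set of (left-word, right-word) contexts."""
    profiles = defaultdict(set)
    for utterance in corpus:
        words = utterance.split()
        for i, w in enumerate(words):
            left = words[i - 1] if i > 0 else "<s>"
            right = words[i + 1] if i < len(words) - 1 else "</s>"
            profiles[w].add((left, right))
    return profiles

def most_similar(word, profiles):
    """Return the word whose context profile overlaps most with `word`'s."""
    target = profiles[word]
    others = [(len(target & ctxs), w)
              for w, ctxs in profiles.items() if w != word]
    return max(others)[1]

corpus = [
    "the bim is here",
    "the lug is here",
    "a bim was there",
    "a dax was there",
]
profiles = context_profiles(corpus)
# 'dax' shares the ("a", "was") frame with 'bim', so they group together.
print(most_similar("dax", profiles))  # prints "bim"
```

Restricting generalization when certain combinations are consistently absent, as the abstract discusses, would correspond to treating systematic gaps in these context sets as evidence against category membership rather than as accidental omissions.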
Liu, Xiao; Sawada, Yoshie; Takizawa, Takako; Sato, Hiroko; Sato, Mahito; Sakamoto, Hironosuke; Utsugi, Toshihiro; Sato, Kunio; Sumino, Hiroyuki; Okamura, Shinichi; Sakamaki, Tetsuo
2007-01-01
The objective of this study was to compare doctor-patient communications in clinical consultations via telemedicine technology to doctor-patient communications in face-to-face clinical consultations. Five doctors who had been practicing internal medicine for 8 to 18 years and twenty patients were enrolled in this study; neither doctors nor patients had previous experience of telemedicine. The patients received both a telemedicine consultation and a face-to-face consultation. Three measures--video observation, medical record volume, and participants' satisfaction--were used for the assessment. It was found that the time spent on the telemedicine consultation was substantially longer than the time spent on the face-to-face consultation. No statistically significant differences were found in the number of either closed or open-ended questions asked by doctors between both types of consultation. Empathy-utterances, praise-utterances, and facilitation-utterances were, however, seen less in the telemedicine consultations than in the face-to-face consultations. The volume of the medical records was statistically smaller in the telemedicine consultations than in the face-to-face consultations. Patients were satisfied with the telemedicine consultation, but doctors were dissatisfied with it and felt hampered by the communication barriers. This study suggests that new training programs are needed for doctors to develop improved communication skills and the ability to express empathy in telemedicine consultations.
Rydell, P J; Mirenda, P
1994-12-01
This study examined the effects of adult antecedent utterances on the occurrence and use of echolalia in children with autism during a free play setting. Adult antecedent utterances were differentiated into two types, high and low constraint, based on the degree of linguistic constraint inherent in the adult utterance and social-communicative control exerted on the child's social and verbal interaction. Results of this study identified a variety of patterns of echolalia usage following adult high and low constraint utterances. Overall, a majority of immediate echoes followed high constraint utterances and were primarily used as responsives, organizational devices, and cognitives. The majority of delayed echoes followed low constraint utterances and were primarily used as requestives, assertives, and cognitives. Delayed echoes were more likely than immediate echoes to be produced with evidence of comprehension, but there were no differences in comprehension within the two categories of echolalia following high and low constraint utterances. Educational implications are discussed.
Sentence durations and accentedness judgments
NASA Astrophysics Data System (ADS)
Bond, Z. S.; Stockmal, Verna; Markus, Dace
2003-04-01
Talkers in a second language can frequently be identified as speaking with a foreign accent. It is not clear to what degree a foreign accent represents specific deviations from a target language versus more general characteristics. We examined the identifications of native and non-native talkers by listeners with various amounts of knowledge of the target language. Native and non-native speakers of Latvian provided materials. All the non-native talkers spoke Russian as their first language and were long-term residents of Latvia. A listening test, containing sentences excerpted from a short recorded passage, was presented to three groups of listeners: native speakers of Latvian, Russians for whom Latvian was a second language, and Americans with no knowledge of either of the two languages. The listeners were asked to judge whether each utterance was produced by a native or non-native talker. The Latvians identified the non-native talkers very accurately, 88%. The Russians were somewhat less accurate, 83%. The American listeners were least accurate, but still identified the non-native talkers at above chance levels, 62%. Sentence durations correlated with the judgments provided by the American listeners but not with the judgments provided by native or L2 listeners.
Wood, Carla; Diehm, Emily A; Callender, Maya F
2016-04-01
The current study was designed to (a) describe average hourly Language Environment Analysis (LENA) data for preschool-age Spanish-English bilinguals (SEBs) and typically developing monolingual peers and (b) compare LENA data with mean length of utterance in words (MLUw) and total number of words (TNW) calculated on a selected sample of consecutive excerpts of audio files (CEAFs). Investigators examined average hourly child vocalizations from daylong LENA samples for 42 SEBs and 39 monolingual English-speaking preschoolers. The relationship between average hourly child vocalizations, conversational turns, and adult words from the daylong samples and MLUw from a 50-utterance CEAF was examined and compared between groups. MLUw, TNW, average hourly child vocalizations, and conversational turns were lower for young SEBs than monolingual English-speaking peers. Average hourly child vocalizations were not strongly related to MLUw performance for monolingual or SEB participants (r = .29, r = .25, respectively). In a similar manner, average hourly conversational turns were not strongly related to MLUw for either group (r = .22, r = .21, respectively). Young SEBs from socioeconomically disadvantaged backgrounds showed lower average performance on LENA measures, MLUw, and TNW than monolingual English-speaking peers. MLUw from monolinguals were also lower than typical expectations when derived from CEAFs. LENA technology may be a promising tool for communication sampling with SEBs; however, more research is needed to establish norms for interpreting MLUw and TNW from selected CEAF samples.
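Both transcript measures reduce to simple counts over whitespace-tokenized utterances. A minimal sketch, with invented sample utterances (the function names are mine, not the study's):

```python
def mlu_w(utterances):
    """Mean length of utterance in words (MLUw): average word count per utterance."""
    counts = [len(u.split()) for u in utterances]
    return sum(counts) / len(counts)

def tnw(utterances):
    """Total number of words (TNW) across the whole sample."""
    return sum(len(u.split()) for u in utterances)

sample = ["I want the ball", "more juice", "doggie go outside now"]
print(mlu_w(sample))  # (4 + 2 + 4) / 3, approximately 3.33
print(tnw(sample))    # 10
```

In the study, MLUw and TNW were derived from a 50-utterance consecutive excerpt (CEAF) rather than from the full daylong recording.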
Context updating during sentence comprehension: the effect of aboutness topic.
Burmester, Juliane; Spalek, Katharina; Wartenburger, Isabell
2014-10-01
To communicate efficiently, speakers typically link their utterances to the discourse environment and adapt their utterances to the listener's discourse representation. Information structure describes how linguistic information is packaged within a discourse to optimize information transfer. The present study investigates the nature and time course of context integration (i.e., aboutness topic vs. neutral context) on the comprehension of German declarative sentences with either subject-before-object (SO) or object-before-subject (OS) word order using offline comprehensibility judgments and online event-related potentials (ERPs). Comprehensibility judgments revealed that the topic context selectively facilitated comprehension of stories containing OS (i.e., non-canonical) sentences. In the ERPs, the topic context effect was reflected in a less pronounced late positivity at the sentence-initial object. In line with the Syntax-Discourse Model, we argue that these context-induced effects are attributable to reduced processing costs for updating the current discourse model. The results support recent approaches of neurocognitive models of discourse processing. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Conceptual recurrence plots: revealing patterns in human discourse.
Angus, Daniel; Smith, Andrew; Wiles, Janet
2012-06-01
Human discourse contains a rich mixture of conceptual information. Visualization of the global and local patterns within this data stream is a complex and challenging problem. Recurrence plots are an information visualization technique that can reveal trends and features in complex time series data. The recurrence plot technique works by measuring the similarity of points in a time series to all other points in the same time series and plotting the results in two dimensions. Previous studies have applied recurrence plotting techniques to textual data; however, these approaches plot recurrence using term-based similarity rather than conceptual similarity of the text. We introduce conceptual recurrence plots, which use a model of language to measure similarity between pairs of text utterances, and the similarity of all utterances is measured and displayed. In this paper, we explore how the descriptive power of the recurrence plotting technique can be used to discover patterns of interaction across a series of conversation transcripts. The results suggest that the conceptual recurrence plotting technique is a useful tool for exploring the structure of human discourse.
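The plot's underlying computation is an all-pairs similarity matrix over the utterances of a transcript. The sketch below uses term-based (bag-of-words cosine) similarity, i.e. the baseline the authors contrast with; their conceptual variant would substitute a model of language for `cosine`. All names and sample utterances are illustrative:

```python
from collections import Counter
import math

def bow(utterance):
    # Bag-of-words vector: term -> count.
    return Counter(utterance.lower().split())

def cosine(a, b):
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recurrence_matrix(utterances, sim=cosine):
    # Similarity of every utterance to every other utterance;
    # rendering this matrix (e.g. as a heat map) gives the recurrence plot.
    vecs = [bow(u) for u in utterances]
    return [[sim(a, b) for b in vecs] for a in vecs]

talk = ["the plot shows recurrence",
        "recurrence reveals structure",
        "the plot shows structure"]
m = recurrence_matrix(talk)
```

The diagonal of the matrix is always 1.0 (every utterance is identical to itself), and off-diagonal cells light up wherever vocabulary, or in the conceptual variant meaning, recurs across the conversation.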
Segment-based acoustic models for continuous speech recognition
NASA Astrophysics Data System (ADS)
Ostendorf, Mari; Rohlicek, J. R.
1993-07-01
This research aims to develop new and more accurate stochastic models for speaker-independent continuous speech recognition, by extending previous work in segment-based modeling and by introducing a new hierarchical approach to representing intra-utterance statistical dependencies. These techniques, which are more costly than traditional approaches because of the large search space associated with higher order models, are made feasible through rescoring a set of HMM-generated N-best sentence hypotheses. We expect these different modeling techniques to result in improved recognition performance over that achieved by current systems, which handle only frame-based observations and assume that these observations are independent given an underlying state sequence. In the fourth quarter of the project, we have completed the following: (1) ported our recognition system to the Wall Street Journal task, a standard task in the ARPA community; (2) developed an initial dependency-tree model of intra-utterance observation correlation; and (3) implemented baseline language model estimation software. Our initial results on the Wall Street Journal task are quite good and represent significantly improved performance over most HMM systems reporting on the Nov. 1992 5k vocabulary test set.
The Role of Utterance Length and Position in 3-Year-Olds' Production of Third Person Singular -s
ERIC Educational Resources Information Center
Mealings, Kiri T.; Demuth, Katherine
2014-01-01
Purpose: Evidence from children's spontaneous speech suggests that utterance length and utterance position may help explain why children omit grammatical morphemes in some contexts but not others. This study investigated whether increased utterance length (hence, increased grammatical complexity) adversely affects children's third person singular…
Walton, Katherine M; Ingersoll, Brooke R
2015-05-01
Adult responsiveness is related to language development both in young typically developing children and in children with autism spectrum disorders, such that parents who use more responsive language with their children have children who develop better language skills over time. This study used a micro-analytic technique to examine how two facets of maternal utterances, relationship to child focus of attention and degree of demandingness, influenced the immediate use of appropriate expressive language of preschool-aged children with autism spectrum disorders (n = 28) and toddlers with typical development (n = 16) within a naturalistic mother-child play session. Mothers' use of follow-in demanding language was most likely to elicit appropriate expressive speech in both children with autism spectrum disorders and children with typical development. For children with autism spectrum disorders, but not children with typical development, mothers' use of orienting cues conferred an additional benefit for expressive speech production. These findings are consistent with the naturalistic behavioral intervention philosophy and suggest that following a child's lead while prompting for language is likely to elicit speech production in children with autism spectrum disorders and children with typical development. Furthermore, using orienting cues may help children with autism spectrum disorders to verbally respond. © The Author(s) 2014.
Iconicity and the Emergence of Combinatorial Structure in Language.
Verhoef, Tessa; Kirby, Simon; de Boer, Bart
2016-11-01
In language, recombination of a discrete set of meaningless building blocks forms an unlimited set of possible utterances. How such combinatorial structure emerged in the evolution of human language is increasingly being studied. It has been shown that it can emerge when languages culturally evolve and adapt to human cognitive biases. How the emergence of combinatorial structure interacts with the existence of holistic iconic form-meaning mappings in a language is still unknown. The experiment presented in this paper studies the role of iconicity and human cognitive learning biases in the emergence of combinatorial structure in artificial whistled languages. Participants learned and reproduced whistled words for novel objects with the use of a slide whistle. Their reproductions were used as input for the next participant, to create transmission chains and simulate cultural transmission. Two conditions were studied: one in which the persistence of iconic form-meaning mappings was possible and one in which this was experimentally made impossible. In both conditions, cultural transmission caused the whistled languages to become more learnable and more structured, but this process was slightly delayed in the first condition. Our findings help to gain insight into when and how words may lose their iconic origins when they become part of an organized linguistic system. Copyright © 2015 Cognitive Science Society, Inc.
Anderson, Melissa L; Wolf Craig, Kelly S; Hall, Wyatte C; Ziedonis, Douglas M
2016-12-01
Conducting semi-structured American Sign Language interviews with 17 Deaf trauma survivors, this pilot study explored Deaf individuals' trauma experiences and whether these experiences generally align with trauma in the hearing population. Most commonly reported traumas were physical assault, sudden unexpected deaths, and "other" very stressful events. Although some "other" events overlap with traumas in the general population, many are unique to Deaf people (e.g., corporal punishment at oral/aural school if caught using sign language, utter lack of communication with hearing parents). These findings suggest that Deaf individuals may experience developmental traumas distinct to being raised in a hearing world. Such traumas are not captured by available trauma assessments, nor are they considered in evidence-based trauma treatments.
Utterance Duration as It Relates to Communicative Variables in Infant Vocal Development
ERIC Educational Resources Information Center
Ramsdell-Hudock, Heather L.; Stuart, Andrew; Parham, Douglas F.
2018-01-01
Purpose: We aimed to provide novel information on utterance duration as it relates to vocal type, facial affect, gaze direction, and age in the prelinguistic/early linguistic infant. Method: Infant utterances were analyzed from longitudinal recordings of 15 infants at 8, 10, 12, 14, and 16 months of age. Utterance durations were measured and coded…
The redeployment of attention to the mouth of a talking face during the second year of life.
Hillairet de Boisferon, Anne; Tift, Amy H; Minar, Nicholas J; Lewkowicz, David J
2018-08-01
Previous studies have found that when monolingual infants are exposed to a talking face speaking in a native language, 8- and 10-month-olds attend more to the talker's mouth, whereas 12-month-olds no longer do so. It has been hypothesized that the attentional focus on the talker's mouth at 8 and 10 months of age reflects reliance on the highly salient audiovisual (AV) speech cues for the acquisition of basic speech forms and that the subsequent decline of attention to the mouth by 12 months of age reflects the emergence of basic native speech expertise. Here, we investigated whether infants may redeploy their attention to the mouth once they fully enter the word-learning phase. To test this possibility, we recorded eye gaze in monolingual English-learning 14- and 18-month-olds while they saw and heard a talker producing an English or Spanish utterance in either an infant-directed (ID) or adult-directed (AD) manner. Results indicated that the 14-month-olds attended more to the talker's mouth than to the eyes when exposed to the ID utterance and that the 18-month-olds attended more to the talker's mouth when exposed to the ID and the AD utterance. These results show that infants redeploy their attention to a talker's mouth when they enter the word acquisition phase and suggest that infants rely on the greater perceptual salience of redundant AV speech cues to acquire their lexicon. Copyright © 2018 Elsevier Inc. All rights reserved.
Phonetic convergence in spontaneous conversations as a function of interlocutor language distance
Kim, Midam; Horton, William S.; Bradlow, Ann R.
2013-01-01
This study explores phonetic convergence during conversations between pairs of talkers at varying language distances. Specifically, we examined conversations between two native English talkers and between two native Korean talkers, who had either the same or different regional dialects, and between native and nonnative talkers of English. To measure phonetic convergence, an independent group of listeners judged the similarity of utterance samples from each talker in an XAB perception test, in which X was a sample of one talker's speech and A and B were samples from the other talker drawn from either early or late portions of the conversation. The results showed greater convergence for same-dialect pairs than for either the different-dialect pairs or the different-L1 pairs. These results generally support the hypothesis that there is a relationship between phonetic convergence and interlocutor language distance. We interpret this pattern as suggesting that phonetic convergence between talker pairs that vary in the degree of their initial language alignment may be dynamically mediated by two parallel mechanisms: the need for intelligibility and the extra demands of nonnative speech production and perception. PMID:23637712
Weisleder, Adriana; Waxman, Sandra R.
2010-01-01
Recent analyses have revealed that child-directed speech contains distributional regularities that could, in principle, support young children's discovery of distinct grammatical categories (noun, verb, adjective). In particular, a distributional unit known as the frequent frame appears to be especially informative (Mintz, 2003). However, analyses have focused almost exclusively on the distributional information available in English. Because languages differ considerably in how the grammatical forms are marked within utterances, the scarcity of cross-linguistic evidence represents an unfortunate gap. We therefore advance the developmental evidence by analyzing the distributional information available in frequent frames across two languages (Spanish and English), across sentence positions (phrase medial and phrase final), and across grammatical forms (noun, verb, adjective). We selected six parent-child corpora from the CHILDES database (3 English; 3 Spanish), and analyzed the input when children were 2;6 years or younger. In each language, frequent frames did indeed offer systematic cues to grammatical category assignment. We also identify differences in the accuracy of these frames across languages, sentence positions, and grammatical classes. PMID:19698207
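The frequent-frame unit itself (Mintz, 2003) is concrete: a pair of words that co-occur with exactly one word intervening, where the bracketed words tend to share a grammatical category. A minimal extraction sketch over a toy corpus (invented for illustration, not the CHILDES data analyzed here):

```python
from collections import Counter, defaultdict

def frequent_frames(utterances, top_n=3):
    """Collect A_x_B frames (pairs of words with exactly one word
    between them) and the set of words each frame brackets, ranked
    by frame frequency."""
    frame_counts = Counter()
    frame_fillers = defaultdict(set)
    for u in utterances:
        words = u.lower().split()
        for a, x, b in zip(words, words[1:], words[2:]):
            frame_counts[(a, b)] += 1
            frame_fillers[(a, b)].add(x)
    return [(f, frame_fillers[f]) for f, _ in frame_counts.most_common(top_n)]

corpus = ["you want it", "you need it", "you see it", "the dog runs"]
top = frequent_frames(corpus, top_n=1)
# The most frequent frame ('you', 'it') brackets {'want', 'need', 'see'},
# all verbs: the categorization cue the analysis evaluates.
```

Assessing how purely each frame's filler set falls into one grammatical class is, in outline, how the accuracy of these cues can be compared across languages and sentence positions.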
ERIC Educational Resources Information Center
Rydell, Patrick J.; Mirenda, Pat
1994-01-01
Examination of the effects of adult antecedent utterances on echolalia in seven male children with autism (ages five and six) during free play found that most immediate echoes followed high constraint utterances and were used as responsives, organizational devices, and cognitives. Most delayed echoes followed low constraint utterances and were…
Saito, Yuri; Fukuhara, Rie; Aoyama, Shiori; Toshima, Tamotsu
2009-07-01
Focusing on the very limited contact that NICU infants have with their mother's voice, both in the womb and after birth, the present study examined whether such infants can discriminate their mothers' utterances from those of female nurses, in terms of the emotional bonding that is facilitated by prosodic utterances. Twenty-six premature infants were included in this study, and their cerebral blood flow was measured by near-infrared spectroscopy while they were exposed to auditory stimuli in the form of utterances made by their mothers and by female nurses. A two (stimulus: mother vs. nurse) x two (recording site: right frontal area vs. left frontal area) analysis of variance (ANOVA) was conducted on the relative oxy-Hb values. The ANOVA showed a significant interaction between stimulus and recording site: the mothers' and nurses' voices elicited similar activation in the left frontal area but different reactions in the right frontal area. We presume that the nurse's voice might have become associated with pain and stress for the premature infants. Our results showed that the premature infants reacted differently to the two voice stimuli. Because both mothers' and nurses' voices activate the frontal brain, we presume that both represent positive stimuli for premature infants. Accordingly, we cannot explain our results solely in terms of a state-dependent marker of infantile individual differences, but must also address the stressful trigger that nurses' voices represent for NICU infants.
Perception of English intonation by English, Spanish, and Chinese listeners.
Grabe, Esther; Rosner, Burton S; García-Albea, José E; Zhou, Xiaolin
2003-01-01
Native language affects the perception of segmental phonetic structure, of stress, and of semantic and pragmatic effects of intonation. Similarly, native language might influence the perception of similarities and differences among intonation contours. To test this hypothesis, a cross-language experiment was conducted. An English utterance was resynthesized with seven falling and four rising intonation contours. English, Iberian Spanish, and Chinese listeners then rated each pair of nonidentical stimuli for degree of difference. Multidimensional scaling of the results supported the hypothesis. The three groups of listeners produced statistically different perceptual configurations for the falling contours. All groups, however, perceptually separated the falling from the rising contours. This result suggested that the perception of intonation begins with the activation of universal auditory mechanisms that process the direction of relatively slow frequency modulations. A second experiment therefore employed frequency-modulated sine waves that duplicated the fundamental frequency contours of the speech stimuli. New groups of English, Spanish, and Chinese subjects yielded no cross-language differences between the perceptual configurations for these nonspeech stimuli. The perception of similarities and differences among intonation contours calls upon universal auditory mechanisms whose output is molded by experience with one's native language.
Language learners privilege structured meaning over surface frequency
Culbertson, Jennifer; Adger, David
2014-01-01
Although it is widely agreed that learning the syntax of natural languages involves acquiring structure-dependent rules, recent work on acquisition has nevertheless attempted to characterize the outcome of learning primarily in terms of statistical generalizations about surface distributional information. In this paper we investigate whether surface statistical knowledge or structural knowledge of English is used to infer properties of a novel language under conditions of impoverished input. We expose learners to artificial-language patterns that are equally consistent with two possible underlying grammars—one more similar to English in terms of the linear ordering of words, the other more similar on abstract structural grounds. We show that learners’ grammatical inferences overwhelmingly favor structural similarity over preservation of superficial order. Importantly, the relevant shared structure can be characterized in terms of a universal preference for isomorphism in the mapping from meanings to utterances. Whereas previous empirical support for this universal has been based entirely on data from cross-linguistic language samples, our results suggest it may reflect a deep property of the human cognitive system—a property that, together with other structure-sensitive principles, constrains the acquisition of linguistic knowledge. PMID:24706789
Evaluation of Language Function under Awake Craniotomy
KANNO, Aya; MIKUNI, Nobuhiro
2015-01-01
Awake craniotomy is the only established way to assess patients’ language functions intraoperatively and to contribute to their preservation, if necessary. Recent guidelines have enabled the approach to be used widely, effectively, and safely. Non-invasive brain functional imaging techniques, including functional magnetic resonance imaging and diffusion tensor imaging, have been used preoperatively to identify brain functional regions corresponding to language, and their accuracy has increased year by year. In addition, the use of neuronavigation that incorporates this preoperative information has made it possible to identify the positional relationships between the lesion and functional regions involved in language, conduct functional brain mapping in the awake state with electrical stimulation, and intraoperatively assess nerve function in real time when resecting the lesion. This article outlines the history of awake craniotomy, the current state of pre- and intraoperative evaluation of language function, and the clinical usefulness of such functional evaluation. When evaluating patients’ language functions during awake craniotomy, given the various intraoperative stresses involved, it is necessary to carefully select the tasks to be undertaken, quickly perform all examinations, and promptly evaluate the results. As language functions involve both input and output, they are strongly affected by patients’ preoperative cognitive function, degree of intraoperative wakefulness and fatigue, the ability to produce verbal articulations and utterances, as well as perform synergic movement. Therefore, it is essential to appropriately assess the reproducibility of language function evaluation using awake craniotomy techniques. PMID:25925758
Theory of mind in utterance interpretation: the case from clinical pragmatics.
Cummings, Louise
2015-01-01
The cognitive basis of utterance interpretation is an area that continues to provoke intense theoretical debate among pragmatists. That utterance interpretation involves some type of mind-reading or theory of mind (ToM) is indisputable. However, theorists are divided on the exact nature of this ToM-based mechanism. In this paper, it is argued that the only type of ToM-based mechanism that can adequately represent the cognitive basis of utterance interpretation is one which reflects the rational, intentional, holistic character of interpretation. Such a ToM-based mechanism is supported on conceptual and empirical grounds. Empirical support for this view derives from the study of children and adults with pragmatic disorders. Specifically, three types of clinical case are considered. In the first case, evidence is advanced which indicates that individuals with pragmatic disorders exhibit deficits in reasoning and the use of inferences. These deficits compromise the ability of children and adults with pragmatic disorders to comply with the rational dimension of utterance interpretation. In the second case, evidence is presented which suggests that subjects with pragmatic disorders struggle with the intentional dimension of utterance interpretation. This dimension extends beyond the recognition of communicative intentions to include the attribution of a range of cognitive and affective mental states that play a role in utterance interpretation. In the third case, evidence is presented that children and adults with pragmatic disorders struggle with the holistic character of utterance interpretation. This serves to distort the contexts in which utterances are processed for their implicated meanings. The paper concludes with some thoughts about the role of theorizing in relation to utterance interpretation.
NASA Astrophysics Data System (ADS)
Work, Richard; Andruski, Jean; Casielles, Eugenia; Kim, Sahyang; Nathan, Geoff
2005-04-01
Traditionally, English is classified as a stress-timed language, while Spanish is classified as syllable-timed. Examining the contrasting development of rhythmic patterns in bilingual first language acquisition should provide information on how this differentiation takes place. As part of a longitudinal study, speech samples were taken of a Spanish/English bilingual child of Argentinean parents living in the Midwestern United States between the ages of 1;8 and 3;2. Spanish is spoken at home, and English input comes primarily from an English day care the child attends 5 days a week. The parents act as interlocutors for the Spanish recordings, with a native speaker interacting with the child for the English recordings. Following the work of Grabe, Post, and Watson (1999) and Grabe and Low (2002), a normalized Pairwise Variability Index (PVI) is used that compares, in utterances of at least four syllables, the durations of vocalic intervals in successive syllables. Comparisons are then made between the rhythmic patterns of the child's productions within each language over time and between languages at comparable MLUs. Comparisons are also made with the rhythmic patterns of the adult productions of each language. Results will be analyzed for signs of native-speaker-like rhythmic production in the child.
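The normalized PVI of Grabe and Low (2002) is the mean absolute difference between successive interval durations, each normalized by the pair's mean and scaled by 100, so perfectly even intervals score 0 and strong long/short alternation scores high. A sketch with invented duration values (in ms):

```python
def npvi(durations):
    """Normalized Pairwise Variability Index (Grabe & Low, 2002):
    100 * mean over successive pairs of |d_k - d_{k+1}| / ((d_k + d_{k+1}) / 2)."""
    pairs = list(zip(durations, durations[1:]))
    terms = [abs(a - b) / ((a + b) / 2) for a, b in pairs]
    return 100 * sum(terms) / len(terms)

# Perfectly even vocalic intervals (syllable-timed extreme):
print(npvi([100, 100, 100, 100]))  # 0.0
# Alternating long/short intervals (stress-timed extreme):
print(npvi([150, 50, 150, 50]))    # 100.0
```

The local normalization is what distinguishes the nPVI from a raw variability measure: it factors out overall speech rate, which matters when comparing a child's productions across ages and languages.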
Neural evidence that utterance-processing entails mentalizing: the case of irony.
Spotorno, Nicola; Koun, Eric; Prado, Jérôme; Van Der Henst, Jean-Baptiste; Noveck, Ira A
2012-10-15
It is now well established that communicators interpret others' mental states through what has been called "Theory of Mind" (ToM). From a linguistic-pragmatics perspective, this mentalizing ability is considered critical because it is assumed that the linguistic code in all utterances underdetermines the speaker's meaning, leaving a vital role for ToM to fill the gap. From a neuroscience perspective, understanding others' intentions has been shown to activate a neural ToM network that includes the right and left temporal parietal junction (rTPJ, lTPJ), the medial prefrontal cortex (MPFC) and the precuneus (PC). Surprisingly, however, there are no studies - to our knowledge - that aim to uncover a direct, on-line link between language processing and ToM through neuroimaging. This is why we focus on verbal irony, an obviously pragmatic phenomenon that compels a listener to detect the speaker's (dissociated, mocking) attitude (Wilson, 2009). In the present fMRI investigation, we compare participants' comprehension of 18 target sentences as contexts make them either ironic or literal. Consider an opera singer who tells her interlocutor: "Tonight we gave a superb performance!" when the performance in question was clearly awful (making the statement ironic) or very good (making the statement literal). We demonstrate that the ToM network becomes active while a participant is understanding verbal irony. Moreover, we demonstrate - through Psychophysiological Interactions (PPI) analyses - that ToM activity is directly linked with language comprehension processes. The paradigm, its predictions, and the reported results contrast dramatically with those from seven prior fMRI studies on irony. Copyright © 2012 Elsevier Inc. All rights reserved.
Nyhout, Angela; O'Neill, Daniela K.
2014-01-01
Parents and children encounter a variety of animals and objects in the early picture books they share, but little is known about how the context in which these entities are presented influences talk about them. The present study investigated how the presence or absence of a visual narrative context influences mothers' tendency to refer to animals as individual characters or as members of a kind when sharing picture books with their toddlers (mean age 21.3 months). Mother-child dyads shared both a narrative and a non-narrative book, each featuring six animals and matched in terms of length and quantity of text. Mothers made more specific (individual-referring) statements about animals in the narrative books, whereas they provided more labels for animals in the non-narrative books. But, of most interest, the frequency and proportion of mothers' use of generic (kind-referring) utterances did not differ across the two different types of books. Further coding of the content of the utterances revealed that mothers provided more story-specific descriptions of states and actions of the animals when sharing narrative books and more physical descriptions of animals when sharing non-narrative books. However, the two books did not differ in terms of their elicitation of natural facts about the animals. Overall, although the two types of books encouraged different types of talk from mothers, they stimulated generic language and talk about natural facts to an equal degree. Implications for learning from picture storybooks and book genre selection in classrooms and home reading are discussed. PMID:24795675
Automatic Parsing of Parental Verbal Input
Sagae, Kenji; MacWhinney, Brian; Lavie, Alon
2006-01-01
To evaluate theoretical proposals regarding the course of child language acquisition, researchers often need to rely on the processing of large numbers of syntactically parsed utterances, both from children and their parents. Because it is so difficult to do this by hand, there are currently no parsed corpora of child language input data. To automate this process, we developed a system that combined the MOR tagger, a rule-based parser, and statistical disambiguation techniques. The resultant system obtained nearly 80% correct parses for the sentences spoken to children. To achieve this level, we had to construct a particular processing sequence that minimizes problems caused by the coverage/ambiguity trade-off in parser design. These procedures are particularly appropriate for use with the CHILDES database, an international corpus of transcripts. The data and programs are now freely available over the Internet. PMID:15190707
Mohammadzaheri, Fereshteh; Koegel, Lynn Kern; Rezaei, Mohammad; Bakhshi, Enayatolah
2015-09-01
Children with autism often demonstrate disruptive behaviors during demanding teaching tasks. Language intervention can be particularly difficult as it involves social and communicative areas, which are challenging for this population. The purpose of this study was to compare two intervention conditions, a naturalistic approach, Pivotal Response Treatment (PRT) with an adult-directed ABA approach on disruptive behavior during language intervention in the public schools. A randomized clinical trial design was used with two groups of children, matched according to age, sex and mean length of utterance. The data showed that the children demonstrated significantly lower levels of disruptive behavior during the PRT condition. The results are discussed with respect to antecedent manipulations that may be helpful in reducing disruptive behavior.
Linear grammar as a possible stepping-stone in the evolution of language.
Jackendoff, Ray; Wittenberg, Eva
2017-02-01
We suggest that one way to approach the evolution of language is through reverse engineering: asking what components of the language faculty could have been useful in the absence of the full complement of components. We explore the possibilities offered by linear grammar, a form of language that lacks syntax and morphology altogether, and that structures its utterances through a direct mapping between semantics and phonology. A language with a linear grammar would have no syntactic categories or syntactic phrases, and therefore no syntactic recursion. It would also have no functional categories such as tense, agreement, and case inflection, and no derivational morphology. Such a language would still be capable of conveying certain semantic relations through word order-for instance by stipulating that agents should precede patients. However, many other semantic relations would have to be based on pragmatics and discourse context. We find evidence of linear grammar in a wide range of linguistic phenomena: pidgins, stages of late second language acquisition, home signs, village sign languages, language comprehension (even in fully syntactic languages), aphasia, and specific language impairment. We also find a full-blown language, Riau Indonesian, whose grammar is arguably close to a pure linear grammar. In addition, when subjects are asked to convey information through nonlinguistic gesture, their gestures make use of semantically based principles of linear ordering. Finally, some pockets of English grammar, notably compounds, can be characterized in terms of linear grammar. We conclude that linear grammar is a plausible evolutionary precursor of modern fully syntactic grammar, one that is still active in the human mind.
Expressive language of two-year-old pre-term and full-term children.
Isotani, Selma Mie; Azevedo, Marisa Frasson de; Chiari, Brasília Maria; Perissinoto, Jacy
2009-01-01
Expressive language of pre-term children: to compare the expressive vocabulary of two-year-old children born prematurely to that of those born at term. The study sample was composed of 118 speech-language assessment protocols, divided into two groups: the pre-term group (PTG), composed of 58 underweight premature children followed by a multi-professional team at the Casa do Prematuro (House of Premature Children) at Unifesp, and the full-term group (FTG), composed of 60 full-term children. To evaluate the expressive language of these children, the Lave - Lista de Avaliação do Vocabulário Expressivo (Assessment List of Expressive Vocabulary) was used. The Lave is an adaptation of the LDS (Language Development Survey) for Brazilian Portuguese; it investigates expressive language and detects delays in oral language. Children born underweight and prematurely presented a greater occurrence of expressive language delay (27.6%). These pre-term children presented significantly lower expressive vocabulary and phrasal extension than children of the same age born at full term in all semantic categories. Family income proved to be positively associated with phrasal extension, as were gestational age and weight at birth, indicating the effect of these adverse conditions still during the third year of life. Audiological status was associated with word utterances in the PTG. Children born prematurely and underweight are at risk in terms of vocabulary development, which determines the need for speech-therapy intervention programs.
Is Language a Factor in the Perception of Foreign Accent Syndrome?
Jose, Linda; Read, Jennifer; Miller, Nick
2016-06-01
Neurogenic foreign accent syndrome (FAS) is diagnosed when listeners perceive speech associated with motor speech impairments as foreign rather than disordered. Speakers with FAS typically have aphasia. It remains unclear how far language changes might contribute to the perception of FAS independent of accent. Judges with and without training in language analysis rated orthographic transcriptions of speech from people with FAS, people with speech-language disorder but no FAS, foreign speakers without neurological impairment, and healthy controls on scales of foreignness, normalness, and disorderedness. Control speakers were judged as significantly more normal, less disordered, and less foreign than the other groups. Transcriptions from FAS speakers consistently profiled most closely to those of foreign speakers and significantly differently from those of speakers with speech-language disorder. On normalness and foreignness ratings there were no significant differences between foreign and FAS speakers. For disorderedness, FAS participants fell midway between foreign speakers and those with speech-language impairment only. Slower rate and more hesitations and pauses within and between utterances influenced judgments, delineating control scripts from the others. Word-level syntactic and morphological deviations and a reduced syntactic and semantic repertoire linked strongly with foreignness perceptions. Greater disordered ratings related to word fragments, poorly intelligible grammatical structures, and inappropriate word selection. Language changes influence foreignness perception. Clinical and theoretical issues are addressed.
Intensive Communicative Therapy Reduces Symptoms of Depression in Chronic Nonfluent Aphasia
Mohr, Bettina; Stahl, Benjamin; Berthier, Marcelo L.; Pulvermüller, Friedemann
2017-01-01
Background. Patients with brain lesions and resultant chronic aphasia frequently suffer from depression. However, no effective interventions are available to target neuropsychiatric symptoms in patients with aphasia who have severe language and communication deficits. Objective. The present study aimed to investigate the efficacy of 2 different methods of speech and language therapy in reducing symptoms of depression in aphasia on the Beck Depression Inventory (BDI) using secondary analysis (BILAT-1 trial). Methods. In a crossover randomized controlled trial, 18 participants with chronic nonfluent aphasia following left-hemispheric brain lesions were assigned to 2 consecutive treatments: (1) intensive language-action therapy (ILAT), emphasizing communicative language use in social interaction, and (2) intensive naming therapy (INT), an utterance-centered standard method. Patients were randomly assigned to 2 groups, receiving both treatments in counterbalanced order. Both interventions were applied for 3.5 hours daily over a period of 6 consecutive working days. Outcome measures included depression scores on the BDI and a clinical language test (Aachen Aphasia Test). Results. Patients showed a significant decrease in symptoms of depression after ILAT but not after INT, which paralleled changes on clinical language tests. Treatment-induced decreases in depression scores persisted when controlling for individual changes in language performance. Conclusions. Intensive training of behaviorally relevant verbal communication in social interaction might help reduce symptoms of depression in patients with chronic nonfluent aphasia. PMID:29192534
Haviland, John B
2015-01-01
Zinacantec Family Homesign (Z) is a new sign language emerging spontaneously over the past three decades in a single family in a remote Mayan Indian village. Three deaf siblings, their Tzotzil-speaking age-mates, and now their children, who have had contact with no other deaf people, represent the first generation of Z signers. I postulate an augmented grammaticalization path, beginning with the adoption of a Tzotzil cospeech holophrastic gesture-meaning "come!"-into Z, and then its apparent stylization as an attention-getting sign, followed by grammatical regimentation and pragmatic generalization as an utterance initial change of speaker or turn marker. Copyright © 2015 Cognitive Science Society, Inc.
Asking or Telling--Real-time Processing of Prosodically Distinguished Questions and Statements.
Heeren, Willemijn F L; Bibyk, Sarah A; Gunlogson, Christine; Tanenhaus, Michael K
2015-12-01
We introduce a targeted language game approach using the visual-world eye-movement paradigm to assess when and how certain intonational contours affect the interpretation of utterances. We created a computer-based card game in which elliptical utterances such as "Got a candy" occurred with a nuclear contour most consistent with a yes-no question (H* H-H%) or a statement (L* L-L%). In Experiment 1 we explored how such contours are integrated online. In Experiment 2 we studied the expectations listeners have for how intonational contours signal intentions: do these reflect linguistic categories or rapid adaptation to the paradigm? Prosody had an immediate effect on interpretation, as indexed by the pattern and timing of fixations. Moreover, the association between different contours and intentions was quite robust in the absence of clear syntactic cues to sentence type, and was not due to rapid adaptation. Prosody had immediate effects on interpretation even though there was a construction-based bias to interpret "got a" as a question. Taken together, we believe this paradigm will provide further insights into how intonational contours and their phonetic realization interact with other cues to sentence type in online comprehension.
Understanding speaker attitudes from prosody by adults with Parkinson's disease.
Monetta, Laura; Cheang, Henry S; Pell, Marc D
2008-09-01
The ability to interpret vocal (prosodic) cues during social interactions can be disrupted by Parkinson's disease, with notable effects on how emotions are understood from speech. This study investigated whether PD patients who have emotional prosody deficits exhibit further difficulties decoding the attitude of a speaker from prosody. Vocally inflected but semantically nonsensical 'pseudo-utterances' were presented to listener groups with and without PD in two separate rating tasks. Task 1 required participants to rate how confident a speaker sounded from their voice, and Task 2 required listeners to rate how polite the speaker sounded for a comparable set of pseudo-utterances. The results showed that PD patients were significantly less able than healthy control participants to use prosodic cues to differentiate intended levels of speaker confidence in speech, although the patients could accurately detect the polite/impolite attitude of the speaker from prosody in most cases. Our data suggest that many PD patients fail to use vocal cues to effectively infer a speaker's emotions as well as certain attitudes in speech such as confidence, consistent with the idea that the basal ganglia play a role in the meaningful processing of prosodic sequences in spoken language (Pell & Leonard, 2003).
Relative Salience of Speech Rhythm and Speech Rate on Perceived Foreign Accent in a Second Language.
Polyanskaya, Leona; Ordin, Mikhail; Busa, Maria Grazia
2017-09-01
We investigated the independent contribution of speech rate and speech rhythm to perceived foreign accent. To address this issue we used a resynthesis technique that allows neutralizing segmental and tonal idiosyncrasies between identical sentences produced by French learners of English at different proficiency levels and maintaining the idiosyncrasies pertaining to prosodic timing patterns. We created stimuli that (1) preserved the idiosyncrasies in speech rhythm while controlling for the differences in speech rate between the utterances; (2) preserved the idiosyncrasies in speech rate while controlling for the differences in speech rhythm between the utterances; and (3) preserved the idiosyncrasies both in speech rate and speech rhythm. All the stimuli were created in intoned (with imposed intonational contour) and flat (with monotonized, constant F0) conditions. The original and the resynthesized sentences were rated by native speakers of English for degree of foreign accent. We found that both speech rate and speech rhythm influence the degree of perceived foreign accent, but the effect of speech rhythm is larger than that of speech rate. We also found that intonation enhances the perception of fine differences in rhythmic patterns but reduces the perceptual salience of fine differences in speech rate.
van Heuven, Walter J. B.; Conklin, Kathy; Coderre, Emily L.; Guo, Taomei; Dijkstra, Ton
2011-01-01
This study investigated effects of cross-language similarity on within- and between-language Stroop interference and facilitation in three groups of trilinguals. Trilinguals were either proficient in three languages that use the same-script (alphabetic in German–English–Dutch trilinguals), two similar scripts and one different script (Chinese and alphabetic scripts in Chinese–English–Malay trilinguals), or three completely different scripts (Arabic, Chinese, and alphabetic in Uyghur–Chinese–English trilinguals). The results revealed a similar magnitude of within-language Stroop interference for the three groups, whereas between-language interference was modulated by cross-language similarity. For the same-script trilinguals, the within- and between-language interference was similar, whereas the between-language Stroop interference was reduced for trilinguals with languages written in different scripts. The magnitude of within-language Stroop facilitation was similar across the three groups of trilinguals, but smaller than within-language Stroop interference. Between-language Stroop facilitation was also modulated by cross-language similarity such that these effects became negative for trilinguals with languages written in different scripts. The overall pattern of Stroop interference and facilitation effects can be explained in terms of diverging and converging color and word information across languages. PMID:22180749
Integrating Best Practices in Language Intervention and Curriculum Design to Facilitate First Words
ERIC Educational Resources Information Center
Lederer, Susan Hendler
2014-01-01
For children developing language typically, exposure to language through the natural, general language stimulation provided by families, siblings, and others is sufficient enough to facilitate language learning (Bloom & Lahey, 1978; Nelson, 1973; Owens, 2008). However, children with language delays (even those who are receptively and…
Infants' Behaviors as Antecedents and Consequents of Mothers' Responsive and Directive Utterances
ERIC Educational Resources Information Center
Masur, Elise Frank; Flynn, Valerie; Lloyd, Carrie A.
2013-01-01
To investigate possible influences on and consequences of mothers' speech, specific infant behaviors preceding and following four pragmatic categories of mothers' utterances--responsive utterances, supportive behavioral directives, intrusive behavioral directives, and intrusive attentional directives--were examined longitudinally during dyadic…
Bourke, Emilie; Magill, Molly; Apodaca, Timothy R.
2016-01-01
Objective: To examine how significant other (SO) language in support of or against client abstinence from alcohol influences clients' in-session speech and drinking behavior over the 9 months post-Motivational Enhancement Therapy (MET). Method: Sequential analyses were used to examine the language of Project MATCH clients who invited an SO to participate in an MET session. Hierarchical regressions investigated the predictive relationship between SO language and clients' post-treatment drinking behavior. A cohort analytic design compared the change language of these SO-involved participants against a matched group who chose client-only therapy. Results: 'SO Support Change' language increased the odds of client Change Talk in the next utterance (p < .01). SO Support Change did not significantly predict reduced post-treatment drinking, whereas 'SO Against Change' significantly predicted an increase in average drinks per drinking day (DDD) across months 7-9 post-MET (p = .04). In the matched comparison, the proportion of change-related client language was comparable across the SO-involved and client-only groups. Conclusions: Motivational interviewing theory was supported by the sequential association between SO and client language, as well as by the predictive link between SO Against Change and client drinking intensity. Given the centrality of pro-sobriety language in the literature, it was surprising that SO Support Change did not predict alcohol use outcomes. Findings are discussed in relation to contemporary treatment process research and clinical practice. PMID:26951920
Pushing up daisies: implicit and explicit language in oncologist-patient communication about death.
Rodriguez, Keri L; Gambino, Frank J; Butow, Phyllis; Hagerty, Rebecca; Arnold, Robert M
2007-02-01
Although there are guidelines regarding how conversations with patients about prognosis in life-limiting illness should occur, there is little data about what doctors actually say. This study was designed to qualitatively analyze the language that oncologists and cancer patients use when talking about death. We recruited 29 adults who had incurable forms of cancer, were scheduled for a first-time visit with one of six oncologists affiliated with a teaching hospital in Australia, and consented to having their visit audiotaped and transcribed. Using content analytic techniques, we coded various features of language usage. Of the 29 visits, 23 (79.3%) included prognostic utterances about treatment-related and disease-related outcomes. In 12 (52.2%) of these 23 visits, explicit language about death ("terminal," variations of "death") was used. It was most commonly used by the oncologist after the physical examination, but it was sometimes used by patients or their kin, usually before the examination and involving emotional questioning about the patient's future. In all 23 (100%) visits, implicit language (euphemistic or indirect talk) was used in discussing death and focused on an anticipated life span (mentioned in 87.0% of visits), estimated time frame (69.6%), or projected survival (47.8%). Instead of using the word "death," most participants used some alternative phrase, including implicit language. Although oncologists are more likely than patients and their kin to use explicit language in discussing death, the oncologists tend to couple it with implicit language, possibly to mitigate the message effects.
Effects of utterance length and vocal loudness on speech breathing in older adults.
Huber, Jessica E
2008-12-31
Age-related reductions in pulmonary elastic recoil and respiratory muscle strength can affect how older adults generate subglottal pressure required for speech production. The present study examined age-related changes in speech breathing by manipulating utterance length and loudness during a connected speech task (monologue). Twenty-three older adults and twenty-eight young adults produced a monologue at comfortable loudness and pitch and with multi-talker babble noise playing in the room to elicit louder speech. Dependent variables included sound pressure level, speech rate, and lung volume initiation, termination, and excursion. Older adults produced shorter utterances than young adults overall. Age-related effects were larger for longer utterances. Older adults demonstrated very different lung volume adjustments for loud speech than young adults. These results suggest that older adults have a more difficult time when the speech system is being taxed by both utterance length and loudness. The data were consistent with the hypothesis that both young and older adults use utterance length in premotor speech planning processes.
Utterance independent bimodal emotion recognition in spontaneous communication
NASA Astrophysics Data System (ADS)
Tao, Jianhua; Pan, Shifeng; Yang, Minghao; Li, Ya; Mu, Kaihui; Che, Jianfeng
2011-12-01
Emotion expressions are sometimes mixed with utterance-related expression in spontaneous face-to-face communication, which creates difficulties for emotion recognition. This article introduces methods for reducing utterance influences in visual parameters for audio-visual-based emotion recognition. The audio and visual channels are first combined under a Multistream Hidden Markov Model (MHMM). Utterance reduction is then performed by computing the residual between the real visual parameters and the outputs of the utterance-related visual parameters. The article introduces a Fused Hidden Markov Model Inversion method, trained on a neutrally expressed audio-visual corpus, to solve this problem. To reduce computational complexity, the inversion model is further simplified to a Gaussian Mixture Model (GMM) mapping. Compared with traditional bimodal emotion recognition methods (e.g., SVM, CART, Boosting), the utterance reduction method yields better emotion recognition results. The experiments also show the effectiveness of our emotion recognition system when used in a live environment.
Do Adults Show an Effect of Delayed First Language Acquisition When Calculating Scalar Implicatures?
Davidson, Kathryn; Mayberry, Rachel I
Language acquisition involves learning not only grammatical rules and a lexicon, but also what someone is intending to convey with their utterance: the semantic/pragmatic component of language. In this paper we separate the contributions of linguistic development and cognitive maturity to the acquisition of the semantic/pragmatic component of language by comparing deaf adults who had either early or late first exposure to their first language (ASL). We focus on the particular type of meaning at the semantic/pragmatic interface called scalar implicature , for which preschool-age children typically differ from adults. Children's behavior has been attributed to either their not knowing appropriate linguistic alternatives to consider or to cognitive developmental differences between children and adults. Unlike children, deaf adults with late language exposure are cognitively mature, although they never fully acquire some complex linguistic structures, and thus serve as a test for the role of language in such interpretations. Our results indicate an overall high performance by late learners, especially when implicatures are not based on conventionalized items. However, compared to early language learners, late language learners compute fewer implicatures when conventionalized linguistic alternatives are involved (e.g.
Early language delay phenotypes and correlation with later linguistic abilities.
Petinou, Kakia; Spanoudis, George
2014-01-01
The present study focused on examining the continuity and directionality of language skills in late talkers (LTs) and identifying factors which might contribute to language outcomes at the age of 3 years. Subjects were 23 Cypriot-Greek-speaking toddlers classified as LTs and 24 age-matched typically developing peers (TDs). Participants were assessed at 28, 32 and 36 months, using various linguistic measures such as size of receptive and expressive vocabulary, mean length of utterance (MLU) of words and number of consonants produced. Data on otitis media familial history were also analyzed. The ANOVA results indicated parallel developmental profiles between the two groups, with a language lag characterizing LTs. Concurrent correlations between measures showed that poor phonetic inventories in the LT group at 28 months predicted poor MLU at the ages of 32 and 36 months. Significant cross-lagged correlations supported the finding that poor phonetic inventories at 28 months served as a good predictor for MLU and expressive vocabulary at the age of 32 and for MLU at 36 months. The results highlight the negative effect of early language delay on language skills up to the age of 3 years and lend support to the current literature regarding the universal linguistic picture of early and persistent language delay. Based on the current results, poor phonetic inventories at the age of intake might serve as a predictive factor for language outcomes at the age of 36 months. Finally, the findings are discussed in view of the need for further research with a focus on more language-sensitive tools in testing later language outcomes. © 2014 S. Karger AG, Basel.
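Mean length of utterance in words, as used in the study above, is the total word count divided by the number of utterances in a sample. A minimal sketch follows (whitespace tokenization is an assumption here; clinical practice often counts morphemes following transcription conventions):

```python
def mlu_words(utterances):
    """Mean length of utterance (MLU) in words: total words
    divided by the number of utterances in the sample."""
    if not utterances:
        raise ValueError("sample contains no utterances")
    total_words = sum(len(u.split()) for u in utterances)
    return total_words / len(utterances)
```

For example, the sample ["want cookie", "no", "mommy go car"] yields an MLU of 2.0.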
The origins of duality of patterning in artificial whistled languages
Verhoef, Tessa
2012-01-01
In human speech, a finite set of basic sounds is combined into a (potentially) unlimited set of well-formed morphemes. Hockett (1960) placed this phenomenon under the term ‘duality of patterning’ and included it as one of the basic design features of human language. Of the thirteen basic design features Hockett proposed, duality of patterning is the least studied and it is still unclear how it evolved in language. Recent work shedding light on this is summarized in this paper and experimental data is presented. This data shows that combinatorial structure can emerge in an artificial whistled language through cultural transmission as an adaptation to human cognitive biases and learning. In this work the method of experimental iterated learning (Kirby et al. 2008) is used, in which a participant is trained on the reproductions of the utterances the previous participant learned. Participants learn and recall a system of sounds that are produced with a slide whistle. Transmission from participant to participant causes the whistle systems to change and become more learnable and more structured. These findings follow from qualitative observations, quantitative measures and a follow-up experiment that tests how well participants can learn the emerged whistled languages by generalizing from a few examples. PMID:23637710
Kover, Sara T.; McDuffie, Andrea; Abbeduto, Leonard; Brown, W. Ted
2012-01-01
Purpose This study examined the impact of sampling context on multiple aspects of expressive language in males with fragile X syndrome in comparison to males with Down syndrome or typical development. Method Participants with fragile X syndrome (n = 27), ages 10 to 17 years, were matched groupwise on nonverbal mental age to adolescents with Down syndrome (n = 15) and typically developing 3- to 6-year-olds (n = 15). Language sampling contexts were an interview-style conversation and narration of a wordless book, with scripted examiner behavior. Language was assessed in terms of amount of talk, MLU of communication unit (MLCU), lexical diversity, fluency, and intelligibility. Results Participants with fragile X syndrome had lower MLCU and lexical diversity than participants with typical development. Participants with Down syndrome produced yet lower MLCU. A differential effect of context among those with fragile X syndrome, Down syndrome, and typical development emerged for the number of attempts per minute, MLCU, and fluency. For participants with fragile X syndrome, autism symptom severity related to the number of utterances produced in conversation. Aspects of examiner behavior related to participant performance. Conclusions Sampling context characteristics should be considered when assessing expressive language in individuals with neurodevelopmental disabilities. PMID:22232386
Integrating mechanisms of visual guidance in naturalistic language production.
Coco, Moreno I; Keller, Frank
2015-05-01
Situated language production requires the integration of visual attention and linguistic processing. Previous work has not conclusively disentangled the role of perceptual scene information and structural sentence information in guiding visual attention. In this paper, we present an eye-tracking study that demonstrates that three types of guidance, perceptual, conceptual, and structural, interact to control visual attention. In a cued language production experiment, we manipulate perceptual (scene clutter) and conceptual guidance (cue animacy) and measure structural guidance (syntactic complexity of the utterance). Analysis of the time course of language production, before and during speech, reveals that all three forms of guidance affect the complexity of visual responses, quantified in terms of the entropy of attentional landscapes and the turbulence of scan patterns, especially during speech. We find that perceptual and conceptual guidance mediate the distribution of attention in the scene, whereas structural guidance closely relates to scan pattern complexity. Furthermore, the eye-voice spans of the cued object and its perceptual competitor are similar, with latency mediated by both perceptual and structural guidance. These results rule out a strict interpretation of structural guidance as the single dominant form of visual guidance in situated language production. Rather, the phase of the task and the associated demands of cross-modal cognitive processing determine the mechanisms that guide attention.
The Extended Language Network: A Meta-Analysis of Neuroimaging Studies on Text Comprehension
Ferstl, Evelyn C.; Neumann, Jane; Bogler, Carsten; von Cramon, D. Yves
2010-01-01
Language processing in context requires more than merely comprehending words and sentences. Important subprocesses are inferences for bridging successive utterances, the use of background knowledge and discourse context, and pragmatic interpretations. The functional neuroanatomy of these text comprehension processes has only recently been investigated. Although there is evidence for right-hemisphere contributions, reviews have implicated the left lateral prefrontal cortex, left temporal regions beyond Wernicke’s area, and the left dorso-medial prefrontal cortex (dmPFC) for text comprehension. To objectively confirm this extended language network and to evaluate the respective contribution of right hemisphere regions, meta-analyses of 23 neuroimaging studies are reported here. The analyses used replicator dynamics based on activation likelihood estimates. Independent of the baseline, the anterior temporal lobes (aTL) were active bilaterally. In addition, processing of coherent compared with incoherent text engaged the dmPFC and the posterior cingulate cortex. Right hemisphere activations were seen most notably in the analysis of contrasts testing specific subprocesses, such as metaphor comprehension. These results suggest task dependent contributions for the lateral PFC and the right hemisphere. Most importantly, they confirm the role of the aTL and the fronto-medial cortex for language processing in context. PMID:17557297
Role of maternal gesture use in speech use by children with fragile X syndrome.
Hahn, Laura J; Zimmer, B Jean; Brady, Nancy C; Swinburne Romine, Rebecca E; Fleming, Kandace K
2014-05-01
The purpose of this study was to investigate how maternal gesture relates to speech production by children with fragile X syndrome (FXS). Participants were 27 young children with FXS (23 boys, 4 girls) and their mothers. Videotaped home observations were conducted between the ages of 25 and 37 months (toddler period) and again between the ages of 60 and 71 months (child period). The videos were later coded for types of maternal utterances and maternal gestures that preceded child speech productions. Children were also assessed with the Mullen Scales of Early Learning at both ages. Maternal gesture use in the toddler period was positively related to expressive language scores at both age periods and was related to receptive language scores in the child period. Maternal proximal pointing, in comparison to other gestures, evoked more speech responses from children during the mother-child interactions, particularly when combined with wh-questions. This study adds to the growing body of research on the importance of contextual variables, such as maternal gestures, in child language development. Parental gesture use may be an easily added ingredient to parent-focused early language intervention programs.
Siller, Michael; Swanson, Meghan R.; Serlin, Gayle; George, Ann
2014-01-01
The current study examines narratives elicited using a wordless picture book, focusing on language used to describe the characters’ thoughts and emotions (i.e., internal state language, ISL). The sample includes 21 children with Autism Spectrum Disorder (ASD) and 24 typically developing controls, matched on children's gender, IQ, as well as receptive and expressive vocabulary. This research had three major findings. First, despite equivalent performance on standardized language assessments, the volume of children's narratives (i.e., the number of utterances and words, the range of unique verbs and adjectives) was lower in children with ASD than in typically developing controls. Second, after controlling for narrative volume, the narratives of children with ASD were less likely to reference the characters’ emotions than was the case for typically developing controls. Finally, our results revealed a specific association between children's use of emotion terms and their performance on a battery of experimental tasks evaluating children's Theory of Mind abilities. Implications for our understanding of narrative deficits in ASD as well as interventions that use narrative as a context for improving social comprehension are discussed. PMID:24748899
Sign Lowering and Phonetic Reduction in American Sign Language.
Tyrone, Martha E; Mauk, Claude E
2010-04-01
This study examines sign lowering as a form of phonetic reduction in American Sign Language. Phonetic reduction occurs in the course of normal language production, when instead of producing a carefully articulated form of a word, the language user produces a less clearly articulated form. When signs are produced in context by native signers, they often differ from the citation forms of signs. In some cases, phonetic reduction is manifested as a sign being produced at a lower location than in the citation form. Sign lowering has been documented previously, but this is the first study to examine it in phonetic detail. The data presented here are tokens of the sign WONDER, as produced by six native signers, in two phonetic contexts and at three signing rates, which were captured by optoelectronic motion capture. The results indicate that sign lowering occurred for all signers, according to the factors we manipulated. Sign production was affected by several phonetic factors that also influence speech production, namely, production rate, phonetic context, and position within an utterance. In addition, we have discovered interesting variations in sign production, which could underlie distinctions in signing style, analogous to accent or voice quality in speech.
Who's Who? Memory updating and character reference in children's narratives.
Whitely, Cristy; Colozzo, Paola
2013-10-01
The capacity to update and monitor the contents of working memory is an executive function presumed to play a critical role in language processing. The current study used an individual differences approach to consider the relationship between memory updating and accurate reference to story characters in the narratives of typically developing children. English-speaking children from kindergarten to grade 2 (N = 63; M age = 7.0 years) completed updating tasks, short-term memory tasks, and narrative productions. The authors used multiple regression to test whether updating accounted for independent variability in referential adequacy. The capacity to update working memory was related to adequate character reference beyond the effects of age and of short-term memory capacity, with the strongest relationship emerging for maintaining reference over multiple utterances. This individual differences study is the first to show a link between updating and performance in a discourse production task for young school-age children. The findings contribute to the growing body of research investigating the role of working memory in shaping language production. This study invites extension to children of different ages and language abilities as well as to other language production tasks.
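The regression logic described in the abstract above (testing whether updating explains variance in referential adequacy beyond age and short-term memory) amounts to comparing the R² of a baseline model against a model that adds the updating predictor. The following is a minimal sketch of that incremental-R² comparison using simulated data; the variable names, effect sizes, and simulated scores are hypothetical illustrations, not the study's actual data or analysis code.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary-least-squares fit with an intercept term."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

rng = np.random.default_rng(0)
n = 63                                # sample size reported in the abstract
age = rng.normal(7.0, 1.0, n)         # age in years (simulated)
stm = rng.normal(0.0, 1.0, n)         # short-term memory score (simulated)
updating = rng.normal(0.0, 1.0, n)    # memory-updating score (simulated)
# Simulated outcome: referential adequacy partly driven by updating.
reference = 0.2 * age + 0.5 * updating + rng.normal(0.0, 1.0, n)

base = r_squared(np.column_stack([age, stm]), reference)
full = r_squared(np.column_stack([age, stm, updating]), reference)
print(f"incremental R^2 for updating: {full - base:.3f}")
```

The gain `full - base` is the variance in the outcome uniquely attributable to the added predictor, which is the quantity the hierarchical approach evaluates.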
Effects of Conversational Pressures on Speech Planning
ERIC Educational Resources Information Center
Swets, Benjamin; Jacovina, Matthew E.; Gerrig, Richard J.
2013-01-01
In ordinary conversation, speakers experience pressures both to produce utterances suited to particular addressees and to do so with minimal delay. To document the impact of these conversational pressures, our experiment asked participants to produce brief utterances to describe visual displays. We complicated utterance planning by including…
Rate and rhythm control strategies for apraxia of speech in nonfluent primary progressive aphasia.
Beber, Bárbara Costa; Berbert, Monalise Costa Batista; Grawer, Ruth Siqueira; Cardoso, Maria Cristina de Almeida Freitas
2018-01-01
The nonfluent/agrammatic variant of primary progressive aphasia is characterized by apraxia of speech and agrammatism. Apraxia of speech limits patients' communication due to slow speaking rate, sound substitutions, articulatory groping, false starts and restarts, segmentation of syllables, and increased difficulty with increasing utterance length. Speech and language therapy is known to benefit individuals with apraxia of speech due to stroke, but little is known about its effects in primary progressive aphasia. This is a case report of a 72-year-old illiterate housewife who was diagnosed with nonfluent primary progressive aphasia and received speech and language therapy for apraxia of speech. Rate and rhythm control strategies for apraxia of speech were trained to improve initiation of speech. We discuss the importance of these strategies to alleviate apraxia of speech in this condition and the future perspectives in the area.
Vocabulary, Grammar, Sex, and Aging.
Moscoso Del Prado Martín, Fermín
2017-05-01
Understanding the changes in our language abilities along the lifespan is a crucial step for understanding the aging process both in normal and in abnormal circumstances. Besides controlled experimental tasks, it is equally crucial to investigate language in unconstrained conversation. I present an information-theoretical analysis of a corpus of dyadic conversations investigating how the richness of the vocabulary, the word-internal structure (inflectional morphology), and the syntax of the utterances evolve as a function of the speaker's age and sex. Although vocabulary diversity increases throughout the lifetime, grammatical diversities follow a different pattern, which also differs between women and men. Women use increasingly diverse syntactic structures at least up to their late fifties, and they do not deteriorate in terms of fluency through their lifespan. However, from age 45 onward, men exhibit a decrease in the diversity of the syntactic structures they use, coupled with an increased number of speech disfluencies. Copyright © 2016 Cognitive Science Society, Inc.
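The abstract above describes an information-theoretical measure of vocabulary richness. As a rough illustration of how such a measure can be computed (this is a generic sketch, not the author's actual analysis pipeline), Shannon entropy over a speaker's unigram frequency distribution is one standard way to quantify vocabulary diversity; the function name and token lists below are hypothetical.

```python
from collections import Counter
from math import log2

def vocab_entropy(tokens):
    """Shannon entropy (in bits) of the unigram distribution.

    Higher entropy means the speaker's word choices are less
    predictable, i.e. the vocabulary is more diverse.
    """
    counts = Counter(tokens)
    n = len(tokens)
    return sum(-(c / n) * log2(c / n) for c in counts.values())

# Four repetitions of a single word: zero diversity.
print(vocab_entropy(["the", "the", "the", "the"]))   # → 0.0
# Four distinct words: maximal diversity for four tokens, log2(4) bits.
print(vocab_entropy(["the", "cat", "sat", "down"]))  # → 2.0
```

Entropy-style measures have the useful property of accounting for how evenly words are used, not merely how many distinct types appear.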
Youngsters do not pay attention to conversational rules: is this so for nonhuman primates?
Lemasson, A; Glas, L; Barbu, S; Lacroix, A; Guilloux, M; Remeuf, K; Koda, H
2011-01-01
The potential to find precursors of human language in nonhuman primates is questioned because of differences related to the genetic determinism of human and nonhuman primate acoustic structures. Limiting the debate to production and acoustic plasticity might have led to underestimating parallels between human and nonhuman primates. Adult-young differences concerning vocal usage have been reported in various primate species. A key feature of language is the ability to converse, respecting turn-taking rules. Turn-taking structures some nonhuman primates' adult vocal exchanges, but the development and the cognitive relevancy of this rule have never been investigated in monkeys. Our observations of Campbell's monkeys' spontaneous vocal utterances revealed that juveniles broke the turn-taking rule more often than did experienced adults. Only adults displayed different levels of interest when hearing playbacks of vocal exchanges that either respected or violated the turn-taking rule. This study strengthens parallels between human conversations and nonhuman primate vocal exchanges.
Grammatical Constructions as Relational Categories.
Goldwater, Micah B
2017-07-01
This paper argues that grammatical constructions, specifically argument structure constructions that determine the "who did what to whom" part of sentence meaning and how this meaning is expressed syntactically, can be considered a kind of relational category. That is, grammatical constructions are represented as the abstraction of the syntactic and semantic relations of the exemplar utterances that are expressed in that construction, and it enables the generation of novel exemplars. To support this argument, I review evidence that there are parallel behavioral patterns between how children learn relational categories generally and how they learn grammatical constructions specifically. Then, I discuss computational simulations of how grammatical constructions are abstracted from exemplar sentences using a domain-general relational cognitive architecture. Last, I review evidence from adult language processing that shows parallel behavioral patterns with expert behavior from other cognitive domains. After reviewing the evidence, I consider how to integrate this account with other theories of language development. Copyright © 2017 Cognitive Science Society, Inc.
Hale, Courtney M; Tager-Flusberg, Helen
2005-05-01
This longitudinal study investigated the developmental trajectory of discourse skills and theory of mind in 57 children with autism. Children were tested at two time points spaced 1 year apart. Each year they provided a natural language sample while interacting with one parent, and were given standardized vocabulary measures and a developmentally sequenced battery of theory of mind tasks. The language samples were coded for conversational skills, specifically the child's use of topic-related contingent utterances. Children with autism made significant gains over 1 year in the ability to maintain a topic of discourse. Hierarchical regression analyses demonstrated that theory of mind skills contributed unique variance to individual differences in contingent discourse ability and vice versa, when measured concurrently; however, they did not predict longitudinal changes. The findings offer some empirical support for the hypothesis that theory of mind is linked to communicative competence in children with autism.
Mikhail Bakhtin and "Expressive Discourse."
ERIC Educational Resources Information Center
Ewald, Helen Rothschild
Mikhail Bakhtin's concept of dialogism has applications to rhetoric and composition instruction. Dialogism, sometimes translated as intertextuality, is the term Bakhtin used to designate the relation of one utterance to other utterances. Dialogism is not dialogue in the usual sense of the word; it is the context which informs utterance, and…
An auditory cue-depreciation effect.
Gibson, J M; Watkins, M J
1991-01-01
An experiment is reported in which subjects first heard a list of words and then tried to identify these same words from degraded utterances. Paralleling previous findings in the visual modality, the probability of identifying a given utterance was reduced when the utterance was immediately preceded by other, more degraded, utterances of the same word. A second experiment replicated this "cue-depreciation effect" and in addition found the effect to be weakened, if not eliminated, when the target word was not included in the initial list or when the test was delayed by two days.
Nonhomogeneous transfer reveals specificity in speech motor learning.
Rochet-Capellan, Amélie; Richer, Lara; Ostry, David J
2012-03-01
Does motor learning generalize to new situations that are not experienced during training, or is motor learning essentially specific to the training situation? In the present experiments, we use speech production as a model to investigate generalization in motor learning. We tested for generalization from training to transfer utterances by varying the acoustical similarity between these two sets of utterances. During the training phase of the experiment, subjects received auditory feedback that was altered in real time as they repeated a single consonant-vowel-consonant utterance. Different groups of subjects were trained with different consonant-vowel-consonant utterances, which differed from a subsequent transfer utterance in terms of the initial consonant or vowel. During the adaptation phase of the experiment, we observed that subjects in all groups progressively changed their speech output to compensate for the perturbation (altered auditory feedback). After learning, we tested for generalization by having all subjects produce the same single transfer utterance while receiving unaltered auditory feedback. We observed limited transfer of learning, which depended on the acoustical similarity between the training and the transfer utterances. The gradients of generalization observed here are comparable to those observed in limb movement. The present findings are consistent with the conclusion that speech learning remains specific to individual instances of learning.
Discourse intonation and second language acquisition: Three genre-based studies
NASA Astrophysics Data System (ADS)
Wennerstrom, Ann Kristin
1997-12-01
This dissertation investigates intonation in the discourse of nonnative speakers of English. It is proposed that intonation functions as a grammar of cohesion, contributing to the coherence of the text. Based on a componential model of intonation adapted from Pierrehumbert and Hirschberg (1990), three empirical studies were conducted in different genres of spoken discourse: academic lectures, conversations, and oral narratives. Using computerized speech technology, excerpts of taped discourse were measured to determine how intonation associated with various constituents of text. All speakers were tested for overall English level on tests adapted from the SPEAK Test (ETS, 1985). Comparisons using native speaker data were also conducted. The first study investigated intonation in lectures given by Chinese teaching assistants. Multivariate analyses showed that intonation was a significant factor contributing to better scores on an exam of overall comprehensibility in English. The second study investigated the role of intonation in the turn-taking system in conversations between native and nonnative speakers of English. The final study considered emotional aspects of intonation in narratives, using the framework of Labov and Waletzky (1967). In sum, adult nonnative speakers can acquire intonation as part of their overall language development, although there is evidence against any specific order of acquisition. Intonation contributes to coherence by indicating the relationship between the current utterance and what is assumed to already be in participants' mental representations of the discourse. It also performs a segmentation function, denoting hierarchical relationships among utterances and/or turns. It is suggested that while pitch can be a resource in cross-cultural communication to show emotion and attitude, the grammatical aspects of intonation must be acquired gradually.
Visual Attention during Spatial Language Comprehension
Burigo, Michele; Knoeferle, Pia
2015-01-01
Spatial terms such as “above”, “in front of”, and “on the left of” are all essential for describing the location of one object relative to another object in everyday communication. Apprehending such spatial relations involves relating linguistic to object representations by means of attention. This requires at least one attentional shift, and models such as the Attentional Vector Sum (AVS) predict the direction of that attention shift, from the sausage to the box for spatial utterances such as “The box is above the sausage”. To the extent that this prediction generalizes to overt gaze shifts, a listener’s visual attention should shift from the sausage to the box. However, listeners tend to rapidly look at referents in their order of mention and even anticipate them based on linguistic cues, a behavior that predicts a converse attentional shift from the box to the sausage. Four eye-tracking experiments assessed the role of overt attention in spatial language comprehension by examining to what extent visual attention is guided by words in the utterance and to what extent it also shifts “against the grain” of the unfolding sentence. The outcome suggests that comprehenders’ visual attention is predominantly guided by their interpretation of the spatial description. Visual shifts against the grain occurred only when comprehenders had some extra time, and their absence did not affect comprehension accuracy. However, the timing of this reverse gaze shift on a trial correlated with that trial’s verification time. Thus, while the timing of these gaze shifts is subtly related to the verification time, their presence is not necessary for successful verification of spatial relations. PMID:25607540
Contextual predictability shapes signal autonomy.
Winters, James; Kirby, Simon; Smith, Kenny
2018-07-01
Aligning on a shared system of communication requires that senders and receivers reach a balance between simplicity, where there is a pressure for compressed representations, and informativeness, where there is a pressure to be communicatively functional. We investigate the extent to which these two pressures are governed by contextual predictability: the amount of contextual information that a sender can estimate, and therefore exploit, in conveying their intended meaning. In particular, we test the claim that contextual predictability is causally related to signal autonomy: the degree to which a signal can be interpreted in isolation, without recourse to contextual information. Using an asymmetric communication game, where senders and receivers are assigned fixed roles, we manipulate two aspects of the referential context: (i) whether or not a sender shares access to the immediate contextual information used by the receiver in interpreting their utterance; (ii) the extent to which the relevant solution in the immediate referential context is generalisable to the aggregate set of contexts. Our results demonstrate that contextual predictability shapes the degree of signal autonomy: when the context is highly predictable (i.e., the sender has access to the context in which their utterances will be interpreted, and the semantic dimension which discriminates between meanings in context is consistent across communicative episodes), languages develop which rely heavily on the context to reduce uncertainty about the intended meaning. When the context is less predictable, senders favour systems composed of autonomous signals, where all potentially relevant semantic dimensions are explicitly encoded. Taken together, these results suggest that our pragmatic faculty, and how it integrates information from the context in reducing uncertainty, plays a central role in shaping language structure. Copyright © 2018 Elsevier B.V. All rights reserved.
Provine, Robert R.; Emmorey, Karen
2008-01-01
The placement of laughter in the speech of hearing individuals is not random but “punctuates” speech, occurring during pauses and at phrase boundaries where punctuation would be placed in a transcript of a conversation. For speakers, language is dominant in the competition for the vocal tract since laughter seldom interrupts spoken phrases. For users of American Sign Language, however, laughter and language do not compete in the same way for a single output channel. This study investigated whether laughter occurs simultaneously with signing, or punctuates signing, as it does speech, in 11 signed conversations (with two to five participants) that had at least one instance of audible, vocal laughter. Laughter occurred 2.7 times more often during pauses and at phrase boundaries than simultaneously with a signed utterance. Thus, the production of laughter involves higher order cognitive or linguistic processes rather than the low-level regulation of motor processes competing for a single vocal channel. In an examination of other variables, the social dynamics of deaf and hearing people were similar, with “speakers” (those signing) laughing more than their audiences and females laughing more than males. PMID:16891353
Tompkins, Virginia; Farrar, M Jeffrey
2011-01-01
This study examined the role that mothers' scaffolding plays in the autobiographical memory (AM) and storybook narratives of children with specific language impairment (SLI). Seven 4-5-year-old children and their mothers co-constructed narratives in both contexts. We also compared children's narratives with mothers to their narratives with an experimenter. Narratives were assessed in terms of narrative style (i.e., elaborativeness) and topic control. Mothers' elaborative and repetitive questions during AM and book narratives were related to children's elaborations, whereas mothers' elaborative and repetitive statements were not. Mothers produced more topic-controlling utterances than children in both contexts; however, both mothers and children provided proportionally more information in the book context. Additionally, children were more elaborative with mothers compared to an experimenter. Readers will be able to: (1) understand the importance of mother-child narratives for both typical and clinical populations; (2) understand how mother-child autobiographical memory and storybook narratives may differ between typical and clinical populations; and (3) consider the implications for designing narrative intervention studies for language impaired children. Copyright © 2010 Elsevier Inc. All rights reserved.
The Coordinated Interplay of Scene, Utterance, and World Knowledge: Evidence from Eye Tracking
ERIC Educational Resources Information Center
Knoeferle, Pia; Crocker, Matthew W.
2006-01-01
Two studies investigated the interaction between utterance and scene processing by monitoring eye movements in agent-action-patient events, while participants listened to related utterances. The aim of Experiment 1 was to determine if and when depicted events are used for thematic role assignment and structural disambiguation of temporarily…
Convergent and Divergent Validity of the Grammaticality and Utterance Length Instrument
ERIC Educational Resources Information Center
Castilla-Earls, Anny; Fulcher-Rood, Katrina
2018-01-01
Purpose: This feasibility study examines the convergent and divergent validity of the Grammaticality and Utterance Length Instrument (GLi), a tool designed to assess the grammaticality and average utterance length of a child's prerecorded story retell. Method: Three raters used the GLi to rate audio-recorded story retells from 100 English-speaking…
Low-income fathers' speech to toddlers during book reading versus toy play.
Salo, Virginia C; Rowe, Meredith L; Leech, Kathryn A; Cabrera, Natasha J
2016-11-01
Fathers' child-directed speech across two contexts was examined. Father-child dyads from sixty-nine low-income families were videotaped interacting during book reading and toy play when children were 2;0. Fathers used more diverse vocabulary and asked more questions during book reading while their mean length of utterance was longer during toy play. Variation in these specific characteristics of fathers' speech that differed across contexts was also positively associated with child vocabulary skill measured on the MacArthur-Bates Communicative Development Inventory. Results are discussed in terms of how different contexts elicit specific qualities of child-directed speech that may promote language use and development.
A model of serial order problems in fluent, stuttered and agrammatic speech.
Howell, Peter
2007-10-01
Many models of speech production have attempted to explain dysfluent speech. Most models assume that the disruptions that occur when speech is dysfluent arise because the speakers make errors while planning an utterance. In this contribution, a model of the serial order of speech is described that does not make this assumption. It involves the coordination or 'interlocking' of linguistic planning and execution stages at the language-speech interface. The model is examined to determine whether it can distinguish two forms of dysfluent speech (stuttered and agrammatic speech) that are characterized by iteration and omission of whole words and parts of words.
Utterance Complexity and Stuttering on Function Words in Preschool-Age Children Who Stutter
ERIC Educational Resources Information Center
Richels, Corrin; Buhr, Anthony; Conture, Edward; Ntourou, Katerina
2010-01-01
The purpose of the present investigation was to examine the relation between utterance complexity and utterance position and the tendency to stutter on function words in preschool-age children who stutter (CWS). Two separate studies involving two different groups of participants (Study 1, n = 30; Study 2, n = 30) were conducted. Participants were…
ERIC Educational Resources Information Center
Theodore, Rachel M.; Demuth, Katherine; Shattuck-Hufnagel, Stefanie
2015-01-01
Purpose: Prosodic and articulatory factors influence children's production of inflectional morphemes. For example, plural -"s" is produced more reliably in utterance-final compared to utterance-medial position (i.e., the positional effect), which has been attributed to the increased planning time in utterance-final position. In previous…
Hekler, Eric B; Dubey, Gaurav; McDonald, David W; Poole, Erika S; Li, Victor; Eikey, Elizabeth
2014-12-08
There is increasing interest in the use of online forums as a component of eHealth weight loss interventions. Although the research is mixed on the utility of online forums in general, results suggest that there is promise to this, particularly if the systems can be designed well to support healthful interactions that foster weight loss and continued engagement. The purpose of this study was to examine the relationship between the styles of utterances individuals make on an online weight loss forum and week-to-week fluctuations in weight. This analysis was conducted to generate hypotheses on possible strategies that could be used to improve the overall design of online support groups to facilitate more healthful interactions. A convenience sample of individuals using an online weight loss forum (N=4132) included data both on online forum use and weight check-in data. All interactions were coded utilizing the Linguistic Inquiry and Word Count (LIWC) system. Mixed model analyses were conducted to examine the relationship between these LIWC variables and weight over time. Results suggested that increased use of past-tense verbs (P=.05) and motion (P=.02) were associated with lower weekly weights whereas increased use of conjunctions (eg, and, but, whereas; P=.001) and exclusion words (eg, but, without, exclude; P=.07) were both associated with higher weight during the weeks when these utterances were used more. These results provide some insights on the styles of interactions that appear to be associated with weight fluctuations. Future work should explore the stability of these findings and also explore possibilities for fostering these types of interactions more explicitly within online weight loss forums.
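The LIWC-style coding described above reduces each forum post to per-category word rates before those rates enter the mixed models. A rough, self-contained sketch of that first step (the category lexicons below are tiny invented stand-ins, not the proprietary LIWC dictionaries):

```python
import re

# Tiny invented stand-ins for LIWC categories (the real dictionaries are proprietary).
CATEGORIES = {
    "past": {"was", "were", "ate", "walked", "lost"},
    "motion": {"walk", "walked", "run", "go", "went"},
    "conjunction": {"and", "but", "whereas", "or"},
    "exclusion": {"but", "without", "except"},
}

def category_rates(text):
    """Return each category's share of the post's words, as a percentage."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return {cat: 0.0 for cat in CATEGORIES}
    return {
        cat: 100.0 * sum(w in lexicon for w in words) / len(words)
        for cat, lexicon in CATEGORIES.items()
    }

rates = category_rates("I walked a lot but ate pizza and lost nothing")
```

Each post's rates would then be paired with that week's weight check-in as one row of the longitudinal data set.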
Effects of topiramate on language functions in newly diagnosed pediatric epileptic patients.
Kim, Sun Jun; Kim, Moon Yeon; Choi, Yoon Mi; Song, Mi Kyoung
2014-09-01
The aim of this study was to characterize the effects of topiramate on language functions in newly diagnosed pediatric epileptic patients. Thirty-eight newly diagnosed epileptic patients were assessed using standard language tests. Data were collected before and after beginning topiramate, during which time a monotherapy treatment regimen was maintained. Language tests included the Test of Language Problem Solving Abilities and a Korean version of the Peabody Picture Vocabulary Test. We used Korean versions of the language tests because all the patients spoke Korean exclusively in their families. All the language parameters of the Test of Language Problem Solving Abilities worsened after initiation of topiramate (determine cause, 13.2 ± 4.8 to 11.2 ± 4.3; problem solving, 14.8 ± 6.0 to 12.8 ± 5.0; predicting, 9.8 ± 3.6 to 8.8 ± 4.6). Patients given topiramate exhibited a shortened mean length of utterance in words during response (determine cause, 4.8 ± 0.9 to 4.3 ± 0.7; making inference, 4.5 ± 0.8 to 4.1 ± 1.1; predicting, 5.2 ± 1.0 to 4.7 ± 0.6; P < 0.05), provided ambiguous answers during testing, exhibited difficulty in selecting appropriate words, took more time to provide answers, and used incorrect grammar. However, there were no statistically significant changes in the receptive language of patients after taking topiramate (95.4 ± 20.4 to 100.8 ± 19.1). Our data suggest that topiramate may have negative effects on problem-solving abilities in children. We recommend that language testing be considered for children being treated with topiramate. Copyright © 2014 Elsevier Inc. All rights reserved.
Lord, Sarah Peregrine; Can, Doğan; Yi, Michael; Marin, Rebeca; Dunn, Christopher W.; Imel, Zac E.; Georgiou, Panayiotis; Narayanan, Shrikanth; Steyvers, Mark; Atkins, David C.
2014-01-01
The current paper presents novel methods for collecting MISC data and accurately assessing reliability of behavior codes at the level of the utterance. The MISC 2.1 was used to rate MI interviews from five randomized trials targeting alcohol and drug use. Sessions were coded at the utterance-level. Utterance-based coding reliability was estimated using three methods and compared to traditional reliability estimates of session tallies. Session-level reliability was generally higher compared to reliability using utterance-based codes, suggesting that typical methods for MISC reliability may be biased. These novel methods in MI fidelity data collection and reliability assessment provided rich data for therapist feedback and further analyses. Beyond implications for fidelity coding, utterance-level coding schemes may elucidate important elements in the counselor-client interaction that could inform theories of change and the practice of MI. PMID:25242192
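The abstract's contrast between session-tally and utterance-level reliability can be made concrete with a small hedged example: two hypothetical raters whose per-session tallies agree perfectly can still show near-chance agreement on individual utterances. Cohen's kappa and the invented codes below stand in for the paper's actual estimators and MISC codes:

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa for two equal-length label sequences."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented utterance codes: the raters assign the same number of each code
# per session (identical tallies) but disagree on individual utterances.
rater1 = ["Q", "R", "Q", "A", "R", "Q"]
rater2 = ["R", "Q", "Q", "A", "Q", "R"]

tallies_match = Counter(rater1) == Counter(rater2)  # session level: perfect
kappa = cohen_kappa(rater1, rater2)                 # utterance level: near chance
```

This is the bias the abstract points to: tally-based reliability can look high even when raters rarely agree on which utterance received which code.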
ERIC Educational Resources Information Center
Son, Seung-Hee Claire; Tineo, Maria F.
2016-01-01
This study examined associations among low-income mothers' use of attention-getting utterances during shared book reading, preschoolers' verbal engagement and visual attention to reading, and their early literacy skills (N = 51). Mother-child shared book reading sessions were videotaped and coded for each utterance, including attention talk,…
Are You Talking to Me? Dialogue Systems Supporting Mixed Teams of Humans and Robots
NASA Technical Reports Server (NTRS)
Dowding, John; Clancey, William J.; Graham, Jeffrey
2006-01-01
This position paper describes an approach to building spoken dialogue systems for environments containing multiple human speakers and hearers, and multiple robotic speakers and hearers. We address the issue, for robotic hearers, of whether the speech they hear is intended for them, or more likely to be intended for some other hearer. We will describe data collected during a series of experiments involving teams of multiple humans and robots (and other software participants), and some preliminary results for distinguishing robot-directed speech from human-directed speech. The domain of these experiments is Mars-analogue planetary exploration. These Mars-analogue field studies involve two subjects in simulated planetary space suits doing geological exploration with the help of 1-2 robots, supporting software agents, a habitat communicator and links to a remote science team. The two subjects are performing a task (geological exploration) which requires them to speak with each other while also speaking with their assistants. The technique used here is to use a probabilistic context-free grammar language model in the speech recognizer that is trained on prior robot-directed speech. Intuitively, the recognizer will give higher confidence to an utterance if it is similar to utterances that have been directed to the robot in the past.
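A minimal sketch of the underlying idea, under simplifying assumptions: score each utterance under a model trained on prior robot-directed speech and under one trained on human-directed speech, and label it robot-directed when the first model assigns higher likelihood. A smoothed unigram model and the toy utterances below stand in for the paper's probabilistic context-free grammar and field-study data:

```python
import math
from collections import Counter

def train_unigram(corpus):
    """Add-one-smoothed unigram model from a list of utterances."""
    counts = Counter(w for utt in corpus for w in utt.split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # reserve one slot for unseen words
    return counts, total, vocab

def log_prob(model, utterance):
    counts, total, vocab = model
    return sum(math.log((counts.get(w, 0) + 1) / (total + vocab))
               for w in utterance.split())

# Invented toy training utterances standing in for the field-study transcripts.
robot_lm = train_unigram(["robot take sample", "robot move forward", "take image"])
human_lm = train_unigram(["how are you", "look at that rock", "are you tired"])

def robot_directed(utterance):
    """Classify an utterance by which model assigns it higher likelihood."""
    return log_prob(robot_lm, utterance) > log_prob(human_lm, utterance)
```

A recognizer-integrated version would compare acoustic-plus-language-model confidences rather than text likelihoods, but the decision rule is the same in spirit.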
Cheung, Candice Chi-Hang; Politzer-Ahles, Stephen; Hwang, Heeju; Chui, Ronald Lung Yat; Leung, Man Tak; Tang, Tempo Po Yi
2017-01-01
While an enormous amount of research has been done on the deficient conversation skills in individuals with autism spectrum disorders (ASD), little is known about their performance on presuppositions, a domain of knowledge that is crucial for successful communication. This study investigated the comprehension of four types of presupposition, namely existential, factive, lexical and structural presuppositions, in school-age Cantonese-speaking children with and without ASD. A group of children with ASD (n = 21), mean age 8.8, was compared with a group of typically developing children (n = 106). Knowledge of presuppositions was evaluated based on children's ability to judge whether a given utterance was a correct presupposition of a preceding utterance. Children with ASD were found to show a deficit in the comprehension of presuppositions, even after controlling for differences in general language ability and non-verbal intelligence. The relative difficulty of the four types of presupposition did not differ between the two groups of children. The present findings provide new empirical evidence that children with ASD have a deficit in the comprehension of presuppositions. Future research should explore whether the deficit in the comprehension of presuppositions is related to the development of theory of mind skills in children with ASD.
The impact of memory demands on audience design during language production.
Horton, William S; Gerrig, Richard J
2005-06-01
Speakers often tailor their utterances to the needs of particular addressees--a process called audience design. We argue that important aspects of audience design can be understood as emergent features of ordinary memory processes. This perspective contrasts with earlier views that presume special processes or representations. To support our account, we present a study in which Directors engaged in a referential communication task with two independent Matchers. Over several rounds, the Directors instructed the Matchers how to arrange a set of picture cards. For half the triads, the Directors' card categories were initially distributed orthogonally by Matcher (e.g. Directors described birds and dogs with one Matcher and fish and frogs with the other). For the other triads, the Directors' card categories initially overlapped across Matchers (e.g. Directors described two members of each category with each Matcher). We predicted that the orthogonal configuration would more readily allow Directors to encode associations between particular cards and particular Matchers--and thus allow those Directors to provide more evidence for audience design. Content analyses of Directors' utterances from two final rounds supported our prediction. We suggest that audience design depends on the memory representations to which speakers have ready access given the time constraints of routine conversation.
Jones, Robin M.; Conture, Edward G.; Walden, Tedra A.
2014-01-01
Purpose The purpose of this study was to assess the relation between emotional reactivity and regulation associated with fluent and stuttered utterances of preschool-age children who stutter (CWS) and those who do not (CWNS). Participants Participants were eight 3- to 6-year-old CWS and eight CWNS of comparable age and gender. Methods Participants were exposed to three emotion-inducing overheard conversations—neutral, angry and happy—and produced a narrative following each overheard conversation. From audio-video recordings of these narratives, coded behavioral analysis of participants' negative and positive affect and emotion regulation associated with stuttered and fluent utterances was conducted. Results Results indicated that CWS were significantly more likely to exhibit emotion regulation attempts prior to and during their fluent utterances following the happy as compared to the negative condition, whereas CWNS displayed the opposite pattern. Within-group assessment indicated that CWS were significantly more likely to display negative emotion prior to and during their stuttered than fluent utterances, particularly following the positive overheard conversation. Conclusions After exposure to emotion-inducing overheard conversations, changes in preschool-age CWS's emotion and emotion regulatory attempts were associated with the fluency of their utterances. PMID:24630144
Towards a Computational Comparative Neuroprimatology: Framing the language-ready brain.
Arbib, Michael A
2016-03-01
We make the case for developing a Computational Comparative Neuroprimatology to inform the analysis of the function and evolution of the human brain. First, we update the mirror system hypothesis on the evolution of the language-ready brain by (i) modeling action and action recognition and opportunistic scheduling of macaque brains to hypothesize the nature of the last common ancestor of macaque and human (LCA-m); and then we (ii) introduce dynamic brain modeling to show how apes could acquire gesture through ontogenetic ritualization, hypothesizing the nature of evolution from LCA-m to the last common ancestor of chimpanzee and human (LCA-c). We then (iii) hypothesize the role of imitation, pantomime, protosign and protospeech in biological and cultural evolution from LCA-c to Homo sapiens with a language-ready brain. Second, we suggest how cultural evolution in Homo sapiens led from protolanguages to full languages with grammar and compositional semantics. Third, we assess the similarities and differences between the dorsal and ventral streams in audition and vision as the basis for presenting and comparing two models of language processing in the human brain: A model of (i) the auditory dorsal and ventral streams in sentence comprehension; and (ii) the visual dorsal and ventral streams in defining "what language is about" in both production and perception of utterances related to visual scenes provide the basis for (iii) a first step towards a synthesis and a look at challenges for further research. Copyright © 2015 Elsevier B.V. All rights reserved.
Tesink, Cathelijne M J Y; Buitelaar, Jan K; Petersson, Karl Magnus; van der Gaag, Rutger Jan; Teunisse, Jan-Pieter; Hagoort, Peter
2011-04-01
In individuals with ASD, difficulties with language comprehension are most evident when higher-level semantic-pragmatic language processing is required, for instance when context has to be used to interpret the meaning of an utterance. It remains unclear, however, at what level of processing and for what type of context these difficulties in language comprehension occur. Therefore, in the current fMRI study, we investigated the neural correlates of the integration of contextual information during auditory language comprehension in 24 adults with ASD and 24 matched control participants. Different levels of context processing were manipulated by using spoken sentences that were correct or contained either a semantic or world knowledge anomaly. Our findings demonstrated significant differences between the groups in inferior frontal cortex that were only present for sentences with a world knowledge anomaly. Relative to the ASD group, the control group showed significantly increased activation in left inferior frontal gyrus (LIFG) for sentences with a world knowledge anomaly compared to correct sentences. This effect possibly indicates reduced integrative capacities of the ASD group. Furthermore, world knowledge anomalies elicited significantly stronger activation in right inferior frontal gyrus (RIFG) in the control group compared to the ASD group. This additional RIFG activation probably reflects revision of the situation model after new, conflicting information. The lack of recruitment of RIFG is possibly related to difficulties with exception handling in the ASD group. Copyright © 2011 Elsevier Ltd. All rights reserved.
When pitch Accents Encode Speaker Commitment: Evidence from French Intonation.
Michelas, Amandine; Portes, Cristel; Champagne-Lavau, Maud
2016-06-01
Recent studies on a variety of languages have shown that a speaker's commitment to the propositional content of his or her utterance can be encoded, among other strategies, by pitch accent types. Since prior research mainly relied on lexical-stress languages, our understanding of how speakers of a non-lexical-stress language encode speaker commitment is limited. This paper explores the contribution of the last pitch accent of an intonation phrase to convey speaker commitment in French, a language that has stress at the phrasal level as well as a restricted set of pitch accents. In a production experiment, participants had to produce sentences in two pragmatic contexts: unbiased questions (the speaker had no particular belief with respect to the expected answer) and negatively biased questions (the speaker believed the proposition to be false). Results revealed that negatively biased questions consistently exhibited an additional unaccented F0 peak in the preaccentual syllable (an H+!H* pitch accent) while unbiased questions were often realized with a rising pattern across the accented syllable (an H* pitch accent). These results provide evidence that pitch accent types in French can signal the speaker's belief about the certainty of the proposition expressed in French. It also has implications for the phonological model of French intonation.
An integrated theory of language production and comprehension.
Pickering, Martin J; Garrod, Simon
2013-08-01
Currently, production and comprehension are regarded as quite distinct in accounts of language processing. In rejecting this dichotomy, we instead assert that producing and understanding are interwoven, and that this interweaving is what enables people to predict themselves and each other. We start by noting that production and comprehension are forms of action and action perception. We then consider the evidence for interweaving in action, action perception, and joint action, and explain such evidence in terms of prediction. Specifically, we assume that actors construct forward models of their actions before they execute those actions, and that perceivers of others' actions covertly imitate those actions, then construct forward models of those actions. We use these accounts of action, action perception, and joint action to develop accounts of production, comprehension, and interactive language. Importantly, they incorporate well-defined levels of linguistic representation (such as semantics, syntax, and phonology). We show (a) how speakers and comprehenders use covert imitation and forward modeling to make predictions at these levels of representation, (b) how they interweave production and comprehension processes, and (c) how they use these predictions to monitor the upcoming utterances. We show how these accounts explain a range of behavioral and neuroscientific data on language processing and discuss some of the implications of our proposal.
The availability and accessibility of basic concept vocabulary in AAC software: a preliminary study.
McCarthy, Jillian H; Schwarz, Ilsa; Ashworth, Morgan
2017-09-01
Core vocabulary lists obtained through the analyses of children's utterances include a variety of basic concept words. Supporting young children who use augmentative and alternative communication (AAC) to develop their understanding and use of basic concepts is an area of practice that has important ramifications for successful communication in a classroom environment. This study examined the availability of basic concept words across eight frequently used, commercially available AAC language systems, iPad© applications, and symbol libraries used to create communication boards. The accessibility of basic concept words was subsequently examined using two AAC language page sets and two iPad applications. Results reveal that the availability of basic concept words represented within the different AAC language programs, iPad applications, and symbol libraries varied but was limited across programs. However, there is no significant difference in the accessibility of basic concept words across the language program page sets or iPad applications, generally because all of them require sophisticated motor and cognitive plans for access. These results suggest that educators who teach or program vocabulary in AAC systems need to be mindful of the importance of basic concept words in classroom settings and, when possible, enhance the availability and accessibility of these words to users of AAC.
Greek perception and production of an English vowel contrast: A preliminary study
NASA Astrophysics Data System (ADS)
Podlipský, Václav J.
2005-04-01
This study focused on language-independent principles functioning in acquisition of second language (L2) contrasts. Specifically, it tested Bohn's Desensitization Hypothesis [in Speech perception and linguistic experience: Issues in Cross Language Research, edited by W. Strange (York Press, Baltimore, 1995)], which predicted that Greek speakers of English as an L2 would base their perceptual identification of English /i/ and /I/ on durational differences. Synthetic vowels differing orthogonally in duration and spectrum between the /i/ and /I/ endpoints served as stimuli for a forced-choice identification test. To assess L2 proficiency and to evaluate the possibility of cross-language category assimilation, productions of English /i/, /I/, and /ɛ/ and of Greek /i/ and /e/ were elicited and analyzed acoustically. The L2 utterances were also rated for the degree of foreign accent. Two native speakers of Modern Greek with low and 2 with intermediate experience in English participated. Six native English (NE) listeners and 6 NE speakers tested in an earlier study constituted the control groups. Heterogeneous perceptual behavior was observed for the L2 subjects. It is concluded that until acquisition in completely naturalistic settings is tested, possible interference of formally induced meta-linguistic differentiation between a "short" and a "long" vowel cannot be eliminated.
ERIC Educational Resources Information Center
Mitchell, Peter; Robinson, Elizabeth J.; Thompson, Doreen E.
1999-01-01
Three experiments examined 3- to 6-year-olds' ability to use a speaker's utterance based on false belief to identify which of several referents was intended. Found that many 4- to 5-year-olds performed correctly only when it was unnecessary to consider the speaker's belief. When the speaker gave an ambiguous utterance, many 3- to 6-year-olds…
ERIC Educational Resources Information Center
Le Normand, M. T.; Moreno-Torres, I.; Parisse, C.; Dellatolas, G.
2013-01-01
In the last 50 years, researchers have debated over the lexical or grammatical nature of children's early multiword utterances. Due to methodological limitations, the issue remains controversial. This corpus study explores the effect of grammatical, lexical, and pragmatic categories on mean length of utterances (MLU). A total of 312 speech samples…
Spencer, Sarah; Clegg, Judy; Stackhouse, Joy; Rush, Robert
2017-03-01
Well-documented associations exist between socio-economic background and language ability in early childhood, and between educational attainment and language ability in children with clinically referred language impairment. However, very little research has looked at the associations between language ability, educational attainment and socio-economic background during adolescence, particularly in populations without language impairment. To investigate: (1) whether adolescents with higher educational outcomes overall had higher language abilities; and (2) associations between adolescent language ability, socio-economic background and educational outcomes, specifically in relation to Mathematics, English Language and English Literature GCSE grade. A total of 151 participants completed five standardized language assessments measuring vocabulary, comprehension of sentences and spoken paragraphs, and narrative skills, and one nonverbal assessment, when they were between 13 and 14 years old. These data were compared with the participants' educational achievement obtained upon leaving secondary education (16 years old). Univariate logistic regressions were employed to identify those language assessments and demographic factors that were associated with achieving a targeted A*-C grade in English Language, English Literature and Mathematics General Certificate of Secondary Education (GCSE) at 16 years. Further logistic regressions were then conducted to examine the contribution of socio-economic background and spoken language skills in the multivariate models. Vocabulary, comprehension of sentences and spoken paragraphs, and mean length of utterance in a narrative task, along with socio-economic background, contributed to whether participants achieved an A*-C grade in GCSE Mathematics, English Language and English Literature. Nonverbal ability contributed to English Language and Mathematics.
The results of multivariate logistic regressions then found that vocabulary skills were particularly relevant to all three GCSE outcomes. Socio-economic background only remained important for English Language, once language assessment scores and demographic information were considered. Language ability, and in particular vocabulary, plays an important role for educational achievement. Results confirm a need for ongoing support for spoken language ability throughout secondary education and a potential role for speech and language therapy provision in the continuing drive to reduce the gap in educational attainment between groups from differing socio-economic backgrounds. © 2016 Royal College of Speech and Language Therapists.
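The univariate analyses described above amount to fitting, for each predictor separately, a logistic model of the odds of an A*-C grade. A minimal sketch of that idea, with illustrative data and learning settings that are assumptions (not taken from the study):

```python
# Hypothetical sketch: univariate logistic regression of pass/fail (A*-C = 1)
# on a single standardized language score. Fitted by plain stochastic gradient
# ascent on the log-likelihood; data below are invented for illustration.
import math

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit P(y=1|x) = 1/(1+exp(-(b0 + b1*x))) and return (b0, b1)."""
    b0 = b1 = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            b0 += lr * (y - p)       # gradient of log-likelihood w.r.t. b0
            b1 += lr * (y - p) * x   # gradient w.r.t. b1
    return b0, b1

# Illustrative: higher vocabulary z-scores co-occur with achieving the grade.
scores = [-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5]
passed = [0, 0, 0, 1, 1, 1, 1]
b0, b1 = fit_logistic(scores, passed)
print(b1 > 0)  # positive slope: higher score, higher odds of an A*-C grade
```

A positive fitted slope corresponds to the paper's finding that stronger language scores raise the odds of the target grade; in practice one would report the odds ratio exp(b1) per unit of the predictor.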
Japanese mothers’ utterances about agents and actions during joint picture-book reading
Murase, Toshiki
2013-01-01
This study extended the research on the scaffolding provided by mothers while reading picture books with their children from a focus on conversational styles related to labeling to a focus on those related to agents and actions to clarify the process by which language develops from the one-word to the syntactic stage. We clarified whether mothers decreased the degree of scaffolding in their initiation of conversations, in the responses to their children’s utterances, and in the choice of referential ranges of their utterances. We also investigated whether maternal conversational styles contributed to the development of their children’s vocabularies. Eighteen pairs of Japanese mothers and their children were longitudinally observed when the children were 20 and 27 months of age. The pairs were given a picture book depicting 24 animals engaged in everyday behavior. The mothers shifted their approach in the initiation of conversation from providing to requesting information as a function of their children’s age. The proportion of maternal elaborative information-seeking responses was positively correlated with the size of their children’s productive vocabulary. In terms of referential choices, mothers broadened the range of their references as their children aged. In terms of the contribution of maternal conversational styles to children’s vocabulary development, the use of a maternal elaborative information-seeking style when the children were 20 months of age predicted the size of the children’s productive vocabulary at 27 months. These results indicate that mothers decrease the degree of scaffolding by introducing more complex information into the conversations and transferring the role of actively producing information to their children by requesting information as their children develop. The results also indicate that these conversational styles promote the development of children’s vocabularies during the transition from the one-word to the syntactic stage. 
PMID:24847288
Uguccioni, Ginevra; Pallanca, Olivier; Golmard, Jean-Louis; Dodet, Pauline; Herlin, Bastien; Leu-Semenescu, Smaranda; Arnulf, Isabelle
2013-01-01
To determine if sleep talkers with REM sleep behavior disorder (RBD) would utter during REM sleep sentences learned before sleep, and to evaluate their verbal memory consolidation during sleep. Eighteen patients with RBD and 10 controls performed two verbal memory tasks (16 words from the Free and Cued Selective Reminding Test and a 220-263 word long modified Story Recall Test) in the evening, followed by nocturnal video-polysomnography and morning recall (night-time consolidation). In 9 patients with RBD, daytime consolidation (morning learning/recall, evening recall) was also evaluated with the modified Story Recall Test in a cross-over order. Two RBD patients with dementia were studied separately. Sleep talking was recorded using video-polysomnography, and the utterances were compared to the studied texts by two external judges. Sleep-related verbal memory consolidation was maintained in patients with RBD (+24±36% words) as in controls (+9±18%, p=0.3). The two demented patients with RBD also exhibited excellent nighttime consolidation. The post-sleep performance was unrelated to the sleep measures (including continuity, stages, fragmentation and apnea-hypopnea index). Daytime consolidation (-9±19%) was worse than night-time consolidation (+29±45%, p=0.03) in the subgroup of 9 patients with RBD. Eleven patients with RBD spoke during REM sleep and pronounced a median of 20 words, which represented 0.0003% of sleep with spoken language. A single patient uttered a sentence that was judged to be semantically (but not literally) related to the text learned before sleep. Verbal declarative memory normally consolidates during sleep in patients with RBD. The incorporation of learned material within REM sleep-associated sleep talking in one patient (unbeknownst to himself) at the semantic level suggests a replay at a highly cognitive creative level.
PMID:24349492
DOE Office of Scientific and Technical Information (OSTI.GOV)
Small, S.; Cottrell, G.; Tanenhaus, M.
1987-01-01
This book collects much of the best research currently available on the problem of lexical ambiguity resolution in the processing of human language. When taken out of context, sentences are usually ambiguous. When actually uttered in a dialogue or written in text, these same sentences often have unique interpretations. The inherent ambiguity of isolated sentences becomes obvious in the attempt to write a computer program to understand them. Different views have emerged on the nature of context and the mechanisms by which it directs unambiguous understanding of words and sentences. These perspectives are represented and discussed. The eighteen original papers form a valuable source book for cognitive scientists in AI, psycholinguistics, neuropsychology, or theoretical linguistics.
Low-income fathers’ speech to toddlers during book reading versus toy play*
Salo, Virginia C.; Rowe, Meredith L.; Leech, Kathryn A.; Cabrera, Natasha J.
2016-01-01
Fathers’ child-directed speech across two contexts was examined. Father–child dyads from sixty-nine low-income families were videotaped interacting during book reading and toy play when children were 2;0. Fathers used more diverse vocabulary and asked more questions during book reading while their mean length of utterance was longer during toy play. Variation in these specific characteristics of fathers’ speech that differed across contexts was also positively associated with child vocabulary skill measured on the MacArthur-Bates Communicative Development Inventory. Results are discussed in terms of how different contexts elicit specific qualities of child-directed speech that may promote language use and development. PMID:26541647
Differences between conduction aphasia and Wernicke's aphasia.
Anzaki, F; Izumi, S
2001-07-01
Conduction aphasia and Wernicke's aphasia have been differentiated by the degree of auditory language comprehension. We quantitatively compared the speech sound errors of two conduction aphasia patients and three Wernicke's aphasia patients on various language modality tests. All of the patients were Japanese. The two conduction aphasia patients showed "conduite d'approche" errors and phonological paraphasia. The patient with mild Wernicke's aphasia made various types of errors; in the patient with severe Wernicke's aphasia, neologism was observed. Phonological paraphasia in the two conduction aphasia patients seemed to occur while the patient searched for the target word. These patients made more errors on vowels than on consonants of target words in the naming and repetition tests, and seemed to search for the target word using the correct consonant phoneme but an incorrect vocalic phoneme in the table of the Japanese syllabary. The Wernicke's aphasia patients, who had severe impairment of auditory comprehension, made more errors on consonants than on vowels of target words. In conclusion, the utterances of conduction aphasia and those of Wernicke's aphasia are qualitatively distinct.
Speech to Text Translation for Malay Language
NASA Astrophysics Data System (ADS)
Al-khulaidi, Rami Ali; Akmeliawati, Rini
2017-11-01
A speech recognition system is a front-end and back-end process that receives an audio signal uttered by a speaker and converts it into a text transcription. Speech systems are used in several fields, including therapeutic technology, education, social robotics and computer entertainment. In control tasks, which are the target application of the proposed system, speed of performance and response are critical, as the system should integrate with other control platforms such as voice-controlled robots. This creates a need for flexible platforms that can easily be adapted to the functionality of their surroundings, unlike software that requires recorded audio and multiple training passes for every entry, such as MATLAB and Phoenix. In this paper, a speech recognition system for the Malay language is implemented using Microsoft Visual Studio C#. Ninety Malay phrases were tested by ten speakers of both genders in different contexts. The results show that the overall accuracy (calculated from the confusion matrix) is a satisfactory 92.69%.
How the Context Matters. Literal and Figurative Meaning in the Embodied Language Paradigm
Cuccio, Valentina; Ambrosecchia, Marianna; Ferri, Francesca; Carapezza, Marco; Lo Piparo, Franco; Fogassi, Leonardo; Gallese, Vittorio
2014-01-01
The involvement of the sensorimotor system in language understanding has been widely demonstrated. However, the role of context in these studies has only recently started to be addressed. Though words are bearers of a semantic potential, meaning is the product of a pragmatic process. It needs to be situated in a context to be disambiguated. The aim of this study was to test the hypothesis that embodied simulation occurring during linguistic processing is contextually modulated to the extent that the same sentence, depending on the context of utterance, leads to the activation of different effector-specific brain motor areas. In order to test this hypothesis, we asked subjects to give a motor response with the hand or the foot to the presentation of ambiguous idioms containing action-related words when these are preceded by context sentences. The results directly support our hypothesis only in relation to the comprehension of hand-related action sentences. PMID:25531530
Limits on negative information in language input.
Morgan, J L; Travis, L L
1989-10-01
Hirsh-Pasek, Treiman & Schneiderman (1984) and Demetras, Post & Snow (1986) have recently suggested that certain types of parental repetitions and clarification questions may provide children with subtle cues to their grammatical errors. We further investigated this possibility by examining parental responses to inflectional over-regularizations and wh-question auxiliary-verb omission errors in the sets of transcripts from Adam, Eve and Sarah (Brown 1973). These errors were chosen because they are exemplars of overgeneralization, the type of mistake for which negative information is, in theory, most critically needed. Expansions and Clarification Questions occurred more often following ill-formed utterances in Adam's and Eve's input, but not in Sarah's. However, these corrective responses formed only a small proportion of all adult responses following Adam's and Eve's grammatical errors. Moreover, corrective responses appear to drop out of children's input while they continue to make overgeneralization errors. Whereas negative feedback may occasionally be available, in the light of these findings the contention that language input generally incorporates negative information appears to be unfounded.
The role of voice input for human-machine communication.
Cohen, P R; Oviatt, S L
1995-01-01
Optimism is growing that the near future will witness rapid growth in human-computer interaction using voice. System prototypes have recently been built that demonstrate speaker-independent, real-time speech recognition and understanding of naturally spoken utterances with vocabularies of 1,000 to 2,000 words and larger. Already, computer manufacturers are building speech recognition subsystems into their new product lines. However, before this technology can be broadly useful, a substantial knowledge base is needed about human spoken language and performance during computer-based spoken interaction. This paper reviews application areas in which spoken interaction can play a significant role, assesses potential benefits of spoken interaction with machines, and compares voice with other modalities of human-computer interaction. It also discusses information that will be needed to build a firm empirical foundation for the design of future spoken and multimodal interfaces. Finally, it argues for a more systematic and scientific approach to investigating spoken input and performance with future language technology. PMID:7479803
Improving Spoken Language Outcomes for Children With Hearing Loss: Data-driven Instruction.
Douglas, Michael
2016-02-01
To assess the effects of data-driven instruction (DDI) on spoken language outcomes of children with cochlear implants and hearing aids. Retrospective, matched-pairs comparison of post-treatment speech/language data of children who did and did not receive DDI. Private, spoken-language preschool for children with hearing loss. Eleven matched pairs of children with cochlear implants who attended the same spoken language preschool. Groups were matched for age of hearing device fitting, time in the program, degree of pre-device-fitting hearing loss, sex, and age at testing. Daily informal language samples were collected and analyzed over a 2-year period, per preschool protocol. Annual informal and formal spoken language assessments in articulation, vocabulary, and omnibus language were administered at the end of three time intervals: baseline, end of year one, and end of year two. The primary outcome measures were total raw score performance of spontaneous utterance sentence types and syntax element use as measured by the Teacher Assessment of Spoken Language (TASL). In addition, standardized assessments (the Clinical Evaluation of Language Fundamentals-Preschool, Version 2 (CELF-P2), the Expressive One-Word Picture Vocabulary Test (EOWPVT), the Receptive One-Word Picture Vocabulary Test (ROWPVT), and the Goldman-Fristoe Test of Articulation 2 (GFTA2)) were also administered and compared with the control group. The DDI group demonstrated significantly higher raw scores on the TASL each year of the study. The DDI group also achieved statistically significantly higher scores for total language on the CELF-P2 and expressive vocabulary on the EOWPVT, but not for articulation or receptive vocabulary. Post-hoc assessment revealed that 78% of the students in the DDI group achieved scores in the average range compared with 59% in the control group. 
The preliminary results of this study support further investigation regarding DDI to investigate whether this method can consistently and significantly improve the achievement of children with hearing loss in spoken language skills.
Xiao, Bo; Huang, Chewei; Imel, Zac E; Atkins, David C; Georgiou, Panayiotis; Narayanan, Shrikanth S
2016-04-01
Scaling up psychotherapy services such as addiction counseling is a critical societal need. One challenge is ensuring quality of therapy, due to the heavy cost of manual observational assessment. This work proposes a speech technology-based system to automate the assessment of therapist empathy, a key index of therapy quality, from audio recordings of the psychotherapy interactions. We designed a speech processing system that includes voice activity detection and diarization modules, and an automatic speech recognizer plus a speaker role matching module to extract the therapist's language cues. We employed Maximum Entropy models, Maximum Likelihood language models, and a Lattice Rescoring method to characterize high- vs. low-empathic language. We estimated therapy-session-level empathy codes using utterance-level evidence obtained from these models. Our experiments showed that the fully automated system achieved a correlation of 0.643 between expert-annotated empathy codes and machine-derived estimations, and an accuracy of 81% in classifying high vs. low empathy, in comparison to a 0.721 correlation and 86% accuracy in the oracle setting using manual transcripts. The results show that the system provides useful information that can contribute to automatic quality assurance and therapist training.
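The utterance-to-session aggregation idea above can be sketched as scoring each therapist utterance under a "high-empathy" and a "low-empathy" language model and averaging the log-likelihood ratios over the session. This is a simplified stand-in for the paper's Maximum Likelihood language models (unigram with add-one smoothing here); all names, data, and the decision rule are illustrative assumptions:

```python
# Hypothetical sketch: session-level empathy evidence from per-utterance
# log-likelihood ratios under two smoothed unigram language models.
import math
from collections import Counter

def unigram_logprob(utterance, counts, vocab_size, total):
    # Add-one smoothed unigram log-probability of an utterance.
    return sum(math.log((counts[w] + 1) / (total + vocab_size))
               for w in utterance.split())

def session_empathy_score(utterances, hi_counts, lo_counts):
    vocab_size = len(set(hi_counts) | set(lo_counts))
    hi_total, lo_total = sum(hi_counts.values()), sum(lo_counts.values())
    # Mean log-likelihood ratio across utterances; > 0 leans "high empathy".
    return sum(unigram_logprob(u, hi_counts, vocab_size, hi_total)
               - unigram_logprob(u, lo_counts, vocab_size, lo_total)
               for u in utterances) / len(utterances)

# Toy "training" counts for the two language models (invented examples):
hi = Counter("that sounds really hard i hear you".split())
lo = Counter("you should just stop doing that".split())
session = ["i hear you", "that sounds hard"]
print(session_empathy_score(session, hi, lo) > 0)  # leans high-empathy
```

In the actual system the per-utterance evidence came from far richer models over recognized (and in the oracle case, manually transcribed) therapist speech, but the aggregation from utterance-level scores to a session-level code follows this general shape.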