Sample records for interactive voice response

  1. 78 FR 71676 - Submission for Review: 3206-0201, Federal Employees Health Benefits (FEHB) Open Season Express...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-29

    ... (FEHB) Open Season Express Interactive Voice Response (IVR) System and Open Season Web site AGENCY: U.S... Benefits (FEHB) Open Season Express Interactive Voice Response (IVR) System and the Open Season Web site... Season Express Interactive Voice Response (IVR) System, and the Open Season Web site, Open Season Online...

  2. The Voice as Computer Interface: A Look at Tomorrow's Technologies.

    ERIC Educational Resources Information Center

    Lange, Holley R.

    1991-01-01

    Discussion of voice as the communications device for computer-human interaction focuses on voice recognition systems for use within a library environment. Voice technologies are described, including voice response and voice recognition; examples of voice systems in use in libraries are examined; and further possibilities, including use with…

  3. 76 FR 72306 - Federal Housing Administration (FHA) Appraiser Roster: Appraiser Qualifications for Placement on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-23

    ... Appraiser Roster regulations by replacing the obsolete references to the Credit Alert Interactive Voice Response System (CAIVRS) with references to its successor, the online-based Credit Alert Verification... propose the elimination of references to the Credit Alert Interactive Voice Response System (CAIVRS). On July...

  4. 76 FR 41441 - Federal Housing Administration (FHA) Appraiser Roster: Appraiser Qualifications for Placement on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-14

    ... the FHA Appraiser Roster by replacing the obsolete references to the Credit Alert Interactive Voice Response System with references to its successor, the online-based Credit Alert Verification Reporting...'s Limited Denial of Participation list, or in HUD's Credit Alert Interactive Voice Response System...

  5. Shielding voices: The modulation of binding processes between voice features and response features by task representations.

    PubMed

    Bogon, Johanna; Eisenbarth, Hedwig; Landgraf, Steffen; Dreisbach, Gesine

    2017-09-01

    Vocal events offer not only semantic-linguistic content but also information about the identity and the emotional-motivational state of the speaker. Furthermore, most vocal events have implications for our actions and therefore include action-related features. But the relevance and irrelevance of vocal features varies from task to task. The present study investigates binding processes for perceptual and action-related features of spoken words and their modulation by the task representation of the listener. Participants reacted with two response keys to eight different words spoken by a male or a female voice (Experiment 1) or spoken by an angry or neutral male voice (Experiment 2). There were two instruction conditions: half of participants learned eight stimulus-response mappings by rote (SR), and half of participants applied a binary task rule (TR). In both experiments, SR instructed participants showed clear evidence for binding processes between voice and response features indicated by an interaction between the irrelevant voice feature and the response. By contrast, as indicated by a three-way interaction with instruction, no such binding was found in the TR instructed group. These results are suggestive of binding and shielding as two adaptive mechanisms that ensure successful communication and action in a dynamic social environment.

  6. Interactions between voice clinics and singing teachers: a report on the British Voice Association questionnaire to voice clinics in the UK.

    PubMed

    Davies, J; Anderson, S; Huchison, L; Stewart, G

    2007-01-01

    Singers with vocal problems are among patients who present at multidisciplinary voice clinics led by Ear Nose and Throat consultants and laryngologists or speech and language therapists. However, the development and care of the singing voice are also important responsibilities of singing teachers. We report here on the current extent and nature of interactions between voice clinics and singing teachers, based on data from a recent survey undertaken on behalf of the British Voice Association. A questionnaire was sent to all 103 voice clinics at National Health Service (NHS) hospitals in the UK. Responses were received and analysed from 42 currently active clinics. Eight (19%) clinics reported having a singing teacher as an active member of the team. They were all satisfied with the singing teacher's knowledge and expertise, which had been acquired by several different means. Of 32 clinics without a singing teacher regularly associated with the team, funding and difficulty of finding an appropriate singing voice expert (81% and 50%, respectively) were among the main reasons for their absence. There was an expressed requirement for more interaction between voice clinics and singing teachers, and 86% replied that they would find it useful to have a list of singing teachers in their area. On the matter of gaining expertise and training, 74% of the clinics replying would enable singing teachers to observe clinic sessions for experience and 21% were willing to assist in training them for clinic-associated work.

  7. Toward a Trustworthy Voice: Increasing the Effectiveness of Automated Outreach Calls to Promote Colorectal Cancer Screening among African Americans

    PubMed Central

    Albright, Karen; Richardson, Terri; Kempe, Karin L; Wallace, Kristin

    2014-01-01

    Introduction: Colorectal cancer screening rates are lower among African-American members of Kaiser Permanente Colorado (KPCO) than among members of other races and ethnicities. This study evaluated use of a linguistically congruent voice in interactive voice response outreach calls about colorectal cancer screening as a strategy to increase call completion and response. Methods: After an initial discussion group to assess cultural acceptability of the project, 6 focus groups were conducted with 33 KPCO African-American members. Participants heard and discussed recordings of 5 female voices reading the same segment of the standard-practice colorectal cancer message using interactive voice response. The linguistic palette included the voices of a white woman, a lightly accented Latina, and 3 African-American women. Results: Participants strongly preferred the African-American voices, particularly two voices. Participants considered these voices the most trustworthy and reported that they would be the most effective at increasing motivation to complete an automated call. Participants supported the use of African-American voices when designing outgoing automated calls for African Americans because the sense of familiarity engendered trust among listeners. Participants also indicated that effective automated messages should provide immediate clarity of purpose; explain why the issue is relevant to African Americans; avoid sounding scripted; emphasize that the call is for the listener’s benefit only; sound personable, warm, and positive; and not create fear among listeners. Discussion: Establishing linguistic congruence between African Americans and the voices used in automated calls designed to reach them may increase the effectiveness of outreach efforts. PMID:24867548

  8. The voices of seduction: cross-gender effects in processing of erotic prosody

    PubMed Central

    Ethofer, Thomas; Wiethoff, Sarah; Anders, Silke; Kreifelts, Benjamin; Grodd, Wolfgang

    2007-01-01

    Gender specific differences in cognitive functions have been widely discussed. Considering social cognition such as emotion perception conveyed by non-verbal cues, generally a female advantage is assumed. In the present study, however, we revealed a cross-gender interaction with increasing responses to the voice of opposite sex in male and female subjects. This effect was confined to erotic tone of speech in behavioural data and haemodynamic responses within voice sensitive brain areas (right middle superior temporal gyrus). The observed response pattern, thus, indicates a particular sensitivity to emotional voices that have a high behavioural relevance for the listener. PMID:18985138

  9. Monitoring daily affective symptoms and memory function using interactive voice response in outpatients receiving electroconvulsive therapy.

    PubMed

    Fazzino, Tera L; Rabinowitz, Terry; Althoff, Robert R; Helzer, John E

    2013-12-01

    Recently, there has been a gradual shift from inpatient-only electroconvulsive therapy (ECT) toward outpatient administration. Potential advantages include convenience and reduced cost. But providers do not have the same opportunity to monitor treatment response and adverse effects as they do with inpatients. This can obviate some of the potential advantages of outpatient ECT, such as tailoring treatment intervals to clinical response. Scheduling is typically algorithmic rather than empirically based. Daily monitoring through an automated telephone, interactive voice response (IVR), is a potential solution to this quandary. To test feasibility of clinical monitoring via IVR, we recruited 26 patients (69% female; mean age, 51 years) receiving outpatient ECT to make daily IVR reports of affective symptoms and subjective memory for 60 days. The IVR also administered a word recognition task daily to test objective memory. Every seventh day, a longer IVR weekly interview included questions about suicidal ideation. Overall daily call compliance was high (mean, 80%). Most participants (96%) did not consider the calls to be time-consuming. Longitudinal regression analysis using generalized estimating equations revealed that participant objective memory functioning significantly improved during the study (P < 0.05). Of 123 weekly IVR interviews, 41 reports (33%) in 14 patients endorsed suicidal ideation during the previous week. Interactive voice response monitoring of outpatient ECT can provide more detailed clinical information than standard outpatient ECT assessment. Interactive voice response data offer providers a comprehensive, longitudinal picture of patient treatment response and adverse effects as a basis for treatment scheduling and ongoing clinical management.

  10. Sequoyah Foreign Language Translation System - Business Case Analysis

    DTIC Science & Technology

    2007-12-01

    ... Interactive Natural Dialogue System (S-MINDS) ... Voice Response Translator (VRT) ... Figure 8. U.S. Marine Military Policeman Demonstrating VRT (From: Ref. U.S... www.languagerealm.com/Files/usmc_mt_test_2004.pdf) ... The VRT is a S2S human language translation device that uses ...

  11. Depressed mothers' infants are less responsive to faces and voices.

    PubMed

    Field, Tiffany; Diego, Miguel; Hernandez-Reif, Maria

    2009-06-01

    A review of our recent research suggests that infants of depressed mothers appeared to be less responsive to faces and voices as early as the neonatal period. At that time they have shown less orienting to the live face/voice stimulus of the Brazelton scale examiner and to their own and other infants' cry sounds. This lesser responsiveness has been attributed to higher arousal, less attentiveness and less "empathy." Their delayed heart rate decelerations to instrumental and vocal music sounds have also been ascribed to their delayed attention and/or slower processing. Later at 3-6 months they showed less negative responding to their mothers' non-contingent and still-face behavior, suggesting that they were more accustomed to this behavior in their mothers. The less responsive behavior of the depressed mothers was further compounded by their comorbid mood states of anger and anxiety and their difficult interaction styles including withdrawn or intrusive interaction styles and their later authoritarian parenting style. Pregnancy massage was effectively used to reduce prenatal depression and facilitate more optimal neonatal behavior. Interaction coaching was used during the postnatal period to help these dyads with their interactions and ultimately facilitate the infants' development.

  12. Infants of Depressed Mothers Are Less Responsive To Faces and Voices: A Review

    PubMed Central

    Field, Tiffany; Diego, Miguel; Hernandez-Reif, Maria

    2009-01-01

    A review of our recent research suggests that infants of depressed mothers appeared to be less responsive to faces and voices as early as the neonatal period. At that time they have shown less orienting to the live face/voice stimulus of the Brazelton scale examiner and to their own and other infants’ cry sounds. This lesser responsiveness has been attributed to higher arousal, less attentiveness and less “empathy.” Their delayed heart rate decelerations to instrumental and vocal music sounds have also been ascribed to their delayed attention and/or slower processing. Later at 3–6 months they showed less negative responding to their mothers’ non-contingent and still-face behavior, suggesting that they were more accustomed to this behavior in their mothers. The less responsive behavior of the depressed mothers was further compounded by their comorbid mood states of anger and anxiety and their difficult interaction styles including withdrawn or intrusive interaction styles and their later authoritarian parenting style. Pregnancy massage was effectively used to reduce prenatal depression and facilitate more optimal neonatal behavior. Interaction coaching was used during the postnatal period to help these dyads with their interactions and ultimately facilitate the infants’ development PMID:19439359

  13. Auditory and visual modulation of temporal lobe neurons in voice-sensitive and association cortices.

    PubMed

    Perrodin, Catherine; Kayser, Christoph; Logothetis, Nikos K; Petkov, Christopher I

    2014-02-12

    Effective interactions between conspecific individuals can depend upon the receiver forming a coherent multisensory representation of communication signals, such as merging voice and face content. Neuroimaging studies have identified face- or voice-sensitive areas (Belin et al., 2000; Petkov et al., 2008; Tsao et al., 2008), some of which have been proposed as candidate regions for face and voice integration (von Kriegstein et al., 2005). However, it was unclear how multisensory influences occur at the neuronal level within voice- or face-sensitive regions, especially compared with classically defined multisensory regions in temporal association cortex (Stein and Stanford, 2008). Here, we characterize auditory (voice) and visual (face) influences on neuronal responses in a right-hemisphere voice-sensitive region in the anterior supratemporal plane (STP) of Rhesus macaques. These results were compared with those in the neighboring superior temporal sulcus (STS). Within the STP, our results show auditory sensitivity to several vocal features, which was not evident in STS units. We also newly identify a functionally distinct neuronal subpopulation in the STP that appears to carry the area's sensitivity to voice identity related features. Audiovisual interactions were prominent in both the STP and STS. However, visual influences modulated the responses of STS neurons with greater specificity and were more often associated with congruent voice-face stimulus pairings than STP neurons. Together, the results reveal the neuronal processes subserving voice-sensitive fMRI activity patterns in primates, generate hypotheses for testing in the visual modality, and clarify the position of voice-sensitive areas within the unisensory and multisensory processing hierarchies.

  14. Auditory and Visual Modulation of Temporal Lobe Neurons in Voice-Sensitive and Association Cortices

    PubMed Central

    Perrodin, Catherine; Kayser, Christoph; Logothetis, Nikos K.

    2014-01-01

    Effective interactions between conspecific individuals can depend upon the receiver forming a coherent multisensory representation of communication signals, such as merging voice and face content. Neuroimaging studies have identified face- or voice-sensitive areas (Belin et al., 2000; Petkov et al., 2008; Tsao et al., 2008), some of which have been proposed as candidate regions for face and voice integration (von Kriegstein et al., 2005). However, it was unclear how multisensory influences occur at the neuronal level within voice- or face-sensitive regions, especially compared with classically defined multisensory regions in temporal association cortex (Stein and Stanford, 2008). Here, we characterize auditory (voice) and visual (face) influences on neuronal responses in a right-hemisphere voice-sensitive region in the anterior supratemporal plane (STP) of Rhesus macaques. These results were compared with those in the neighboring superior temporal sulcus (STS). Within the STP, our results show auditory sensitivity to several vocal features, which was not evident in STS units. We also newly identify a functionally distinct neuronal subpopulation in the STP that appears to carry the area's sensitivity to voice identity related features. Audiovisual interactions were prominent in both the STP and STS. However, visual influences modulated the responses of STS neurons with greater specificity and were more often associated with congruent voice-face stimulus pairings than STP neurons. Together, the results reveal the neuronal processes subserving voice-sensitive fMRI activity patterns in primates, generate hypotheses for testing in the visual modality, and clarify the position of voice-sensitive areas within the unisensory and multisensory processing hierarchies. PMID:24523543

  15. Interactions between observer and stimuli fertility status: Endocrine and perceptual responses to intrasexual vocal fertility cues.

    PubMed

    Ostrander, Grant M; Pipitone, R Nathan; Shoup-Knox, Melanie L

    2018-02-01

    Both men and women find female voices more attractive at higher fertility times in the menstrual cycle, suggesting the voice is a cue to fertility and/or hormonal status. Preference for fertile females' voices provides males with an obvious reproduction advantage, however the advantage for female listeners is less clear. One possibility is that attention to the fertility status of potential rivals may enable women to enhance their own reproductive strategies through intrasexual competition. If so, the response to having high fertility voices should include hormonal changes that promote competitive behavior. Furthermore, attention and response to such cues should vary as a function of the observer's own fertility, which influences her ability to compete for mates. The current study monitored variation in cortisol and testosterone levels in response to evaluating the attractiveness of voices of other women. All 33 participants completed this task once during ovulation then again during the luteal phase. The voice stimuli were recorded from naturally cycling women at both high and low fertility, and from women using hormonal birth control. We found that listeners rated high fertility voices as more attractive compared to low fertility, with the effect being stronger when listeners were ovulating. Testosterone was elevated following voice ratings suggesting threat detection or the anticipation of competition, but no stress response was found. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Interactive Voice/Web Response System in clinical research

    PubMed Central

    Ruikar, Vrishabhsagar

    2016-01-01

    Emerging technologies in the computer and telecommunication industries have eased access to computers through the telephone. An Interactive Voice/Web Response System (IxRS) is a user-friendly system for end users, with complex, tailored programs at its backend. The backend programs are specially tailored to be easy for users to understand. The clinical research industry has experienced a revolution in data capture methodologies over time: over the past couple of decades, different systems, for example Electronic Data Capture, IxRS, and electronic patient-reported outcomes, have evolved alongside emerging technologies and tools. PMID:26952178

  17. Interactive Voice/Web Response System in clinical research.

    PubMed

    Ruikar, Vrishabhsagar

    2016-01-01

    Emerging technologies in the computer and telecommunication industries have eased access to computers through the telephone. An Interactive Voice/Web Response System (IxRS) is a user-friendly system for end users, with complex, tailored programs at its backend. The backend programs are specially tailored to be easy for users to understand. The clinical research industry has experienced a revolution in data capture methodologies over time: over the past couple of decades, different systems, for example Electronic Data Capture, IxRS, and electronic patient-reported outcomes, have evolved alongside emerging technologies and tools.

  18. Voice and choice in health care in England: understanding citizen responses to dissatisfaction.

    PubMed

    Dowding, Keith; John, Peter

    2011-01-01

    Using data from a five-year online survey, the paper examines the effects of relative satisfaction with health services on individuals' voice-and-choice activity in the English public health care system. Voice is considered in three parts: individual voice (complaints), collective voice (voting), and participation (collective action). Exercising choice is seen in terms of complete exit (not using health care), internal exit (choosing another public service provider) and private exit (using private health care). The interaction of satisfaction and forms of voice and choice is analysed over time. Both voice and choice are correlated with dissatisfaction, with those who are unhappy with the NHS more likely to voice privately and to plan to take up private health care. Those unable to choose private provision are likely to use private voice. These factors are not affected by items associated with social capital; indeed, being more trusting leads to lower voice activity.

  19. A study on the application of voice interaction in automotive human machine interface experience design

    NASA Astrophysics Data System (ADS)

    Huang, Zhaohui; Huang, Xiemin

    2018-04-01

    This paper first introduces the trend toward integrating multi-channel interactions in automotive HMI (Human Machine Interface), starting from the complex information models faced by existing automotive HMIs, and describes the various interaction modes. By comparing voice interaction with touch screens, gestures and other interaction modes, the potential and feasibility of voice interaction in automotive HMI experience design are established. The related theories of voice interaction, recognition technologies, human cognitive models of voice and voice design methods are then explored further, and the research priority of this paper is proposed, i.e. how to design voice interaction to create more humane task-oriented dialogue scenarios that enhance the interactive experience of automotive HMI. The specific driving scenarios suitable for the use of voice interaction are studied and classified, and usability principles and key elements for automotive HMI voice design are proposed according to the scenario features. Then, through a user-participatory usability testing experiment, the dialogue processes of voice interaction in automotive HMI are defined. The logics and grammars in voice interaction are classified according to the experimental results, and the mental models in the interaction processes are analyzed. Finally, a voice interaction design method for creating humane task-oriented dialogue scenarios in the driving environment is proposed.

  20. Initial Progress Toward Development of a Voice-Based Computer-Delivered Motivational Intervention for Heavy Drinking College Students: An Experimental Study

    PubMed Central

    Lechner, William J; MacGlashan, James; Wray, Tyler B; Littman, Michael L

    2017-01-01

    Background: Computer-delivered interventions have been shown to be effective in reducing alcohol consumption in heavy drinking college students. However, these computer-delivered interventions rely on mouse, keyboard, or touchscreen responses for interactions between the users and the computer-delivered intervention. The principles of motivational interviewing suggest that in-person interventions may be effective, in part, because they encourage individuals to think through and speak aloud their motivations for changing a health behavior, which current computer-delivered interventions do not allow. Objective: The objective of this study was to take the initial steps toward development of a voice-based computer-delivered intervention that can ask open-ended questions and respond appropriately to users’ verbal responses, more closely mirroring a human-delivered motivational intervention. Methods: We developed (1) a voice-based computer-delivered intervention that was run by a human controller and that allowed participants to speak their responses to scripted prompts delivered by speech generation software and (2) a text-based computer-delivered intervention that relied on the mouse, keyboard, and computer screen for all interactions. We randomized 60 heavy drinking college students to interact with the voice-based computer-delivered intervention and 30 to interact with the text-based computer-delivered intervention and compared their ratings of the systems as well as their motivation to change drinking and their drinking behavior at 1-month follow-up. Results: Participants reported that the voice-based computer-delivered intervention engaged positively with them in the session and delivered content in a manner consistent with motivational interviewing principles. At 1-month follow-up, participants in the voice-based computer-delivered intervention condition reported significant decreases in quantity, frequency, and problems associated with drinking, and increased perceived importance of changing drinking behaviors. In comparison to the text-based computer-delivered intervention condition, those assigned to voice-based computer-delivered intervention reported significantly fewer alcohol-related problems at the 1-month follow-up (incident rate ratio 0.60, 95% CI 0.44-0.83, P=.002). The conditions did not differ significantly on perceived importance of changing drinking or on measures of drinking quantity and frequency of heavy drinking. Conclusions: Results indicate that it is feasible to construct a series of open-ended questions and a bank of responses and follow-up prompts that can be used in a future fully automated voice-based computer-delivered intervention that may mirror more closely human-delivered motivational interventions to reduce drinking. Such efforts will require using advanced speech recognition capabilities and machine-learning approaches to train a program to mirror the decisions made by human controllers in the voice-based computer-delivered intervention used in this study. In addition, future studies should examine enhancements that can increase the perceived warmth and empathy of voice-based computer-delivered intervention, possibly through greater personalization, improvements in the speech generation software, and embodying the computer-delivered intervention in a physical form. PMID:28659259

  1. Loud and angry: sound intensity modulates amygdala activation to angry voices in social anxiety disorder

    PubMed Central

    Simon, Doerte; Becker, Michael; Mothes-Lasch, Martin; Miltner, Wolfgang H.R.

    2017-01-01

    Angry expressions of both voices and faces represent disorder-relevant stimuli in social anxiety disorder (SAD). Although individuals with SAD show greater amygdala activation to angry faces, previous work has failed to find comparable effects for angry voices. Here, we investigated whether voice sound-intensity, a modulator of a voice’s threat-relevance, affects brain responses to angry prosody in SAD. We used event-related functional magnetic resonance imaging to explore brain responses to voices varying in sound intensity and emotional prosody in SAD patients and healthy controls (HCs). Angry and neutral voices were presented either with normal or high sound amplitude, while participants had to decide upon the speaker’s gender. Loud vs normal voices induced greater insula activation, and angry vs neutral prosody greater orbitofrontal cortex activation in SAD as compared with HC subjects. Importantly, an interaction of sound intensity, prosody and group was found in the insula and the amygdala. In particular, the amygdala showed greater activation to loud angry voices in SAD as compared with HC subjects. This finding demonstrates a modulating role of voice sound-intensity on amygdalar hyperresponsivity to angry prosody in SAD and suggests that abnormal processing of interpersonal threat signals in amygdala extends beyond facial expressions in SAD. PMID:27651541

  2. The value of visualizing tone of voice.

    PubMed

    Pullin, Graham; Cook, Andrew

    2013-10-01

    Whilst most of us have an innate feeling for tone of voice, it is an elusive quality that even phoneticians struggle to describe with sufficient subtlety. For people who cannot speak themselves this can have particularly profound repercussions. Augmentative communication often involves text-to-speech, a technology that only supports a basic choice of prosody based on punctuation. Given how inherently difficult it is to talk about more nuanced tone of voice, there is a risk that its absence from current devices goes unremarked and unchallenged. Looking ahead optimistically to more expressive communication aids, their design will need to involve more subtle interactions with tone of voice, interactions that the people using them can understand and engage with. Interaction design can play a role in making tone of voice visible, tangible, and accessible. Two projects that have already catalysed interdisciplinary debate in this area, Six Speaking Chairs and Speech Hedge, are introduced together with responses. A broader role for design is advocated, as a means to opening up speech technology research to a wider range of disciplinary perspectives, and also to the contributions and influence of people who use it in their everyday lives.

  3. Loud and angry: sound intensity modulates amygdala activation to angry voices in social anxiety disorder.

    PubMed

    Simon, Doerte; Becker, Michael; Mothes-Lasch, Martin; Miltner, Wolfgang H R; Straube, Thomas

    2017-03-01

    Angry expressions of both voices and faces represent disorder-relevant stimuli in social anxiety disorder (SAD). Although individuals with SAD show greater amygdala activation to angry faces, previous work has failed to find comparable effects for angry voices. Here, we investigated whether voice sound-intensity, a modulator of a voice's threat-relevance, affects brain responses to angry prosody in SAD. We used event-related functional magnetic resonance imaging to explore brain responses to voices varying in sound intensity and emotional prosody in SAD patients and healthy controls (HCs). Angry and neutral voices were presented either with normal or high sound amplitude, while participants had to decide upon the speaker's gender. Loud vs normal voices induced greater insula activation, and angry vs neutral prosody greater orbitofrontal cortex activation in SAD as compared with HC subjects. Importantly, an interaction of sound intensity, prosody and group was found in the insula and the amygdala. In particular, the amygdala showed greater activation to loud angry voices in SAD as compared with HC subjects. This finding demonstrates a modulating role of voice sound-intensity on amygdalar hyperresponsivity to angry prosody in SAD and suggests that abnormal processing of interpersonal threat signals in amygdala extends beyond facial expressions in SAD. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  4. Initial Progress Toward Development of a Voice-Based Computer-Delivered Motivational Intervention for Heavy Drinking College Students: An Experimental Study.

    PubMed

    Kahler, Christopher W; Lechner, William J; MacGlashan, James; Wray, Tyler B; Littman, Michael L

    2017-06-28

    Computer-delivered interventions have been shown to be effective in reducing alcohol consumption in heavy drinking college students. However, these computer-delivered interventions rely on mouse, keyboard, or touchscreen responses for interactions between the users and the computer-delivered intervention. The principles of motivational interviewing suggest that in-person interventions may be effective, in part, because they encourage individuals to think through and speak aloud their motivations for changing a health behavior, which current computer-delivered interventions do not allow. The objective of this study was to take the initial steps toward development of a voice-based computer-delivered intervention that can ask open-ended questions and respond appropriately to users' verbal responses, more closely mirroring a human-delivered motivational intervention. We developed (1) a voice-based computer-delivered intervention that was run by a human controller and that allowed participants to speak their responses to scripted prompts delivered by speech generation software and (2) a text-based computer-delivered intervention that relied on the mouse, keyboard, and computer screen for all interactions. We randomized 60 heavy drinking college students to interact with the voice-based computer-delivered intervention and 30 to interact with the text-based computer-delivered intervention and compared their ratings of the systems as well as their motivation to change drinking and their drinking behavior at 1-month follow-up. Participants reported that the voice-based computer-delivered intervention engaged positively with them in the session and delivered content in a manner consistent with motivational interviewing principles. At 1-month follow-up, participants in the voice-based computer-delivered intervention condition reported significant decreases in quantity, frequency, and problems associated with drinking, and increased perceived importance of changing drinking behaviors. In comparison to the text-based computer-delivered intervention condition, those assigned to voice-based computer-delivered intervention reported significantly fewer alcohol-related problems at the 1-month follow-up (incident rate ratio 0.60, 95% CI 0.44-0.83, P=.002). The conditions did not differ significantly on perceived importance of changing drinking or on measures of drinking quantity and frequency of heavy drinking. Results indicate that it is feasible to construct a series of open-ended questions and a bank of responses and follow-up prompts that can be used in a future fully automated voice-based computer-delivered intervention that may mirror more closely human-delivered motivational interventions to reduce drinking. Such efforts will require using advanced speech recognition capabilities and machine-learning approaches to train a program to mirror the decisions made by human controllers in the voice-based computer-delivered intervention used in this study. In addition, future studies should examine enhancements that can increase the perceived warmth and empathy of voice-based computer-delivered intervention, possibly through greater personalization, improvements in the speech generation software, and embodying the computer-delivered intervention in a physical form. ©Christopher W Kahler, William J Lechner, James MacGlashan, Tyler B Wray, Michael L Littman. Originally published in JMIR Mental Health (http://mental.jmir.org), 28.06.2017.

  5. Natural asynchronies in audiovisual communication signals regulate neuronal multisensory interactions in voice-sensitive cortex.

    PubMed

    Perrodin, Catherine; Kayser, Christoph; Logothetis, Nikos K; Petkov, Christopher I

    2015-01-06

    When social animals communicate, the onset of informative content in one modality varies considerably relative to the other, such as when visual orofacial movements precede a vocalization. These naturally occurring asynchronies do not disrupt intelligibility or perceptual coherence. However, they occur on time scales where they likely affect integrative neuronal activity in ways that have remained unclear, especially for hierarchically downstream regions in which neurons exhibit temporally imprecise but highly selective responses to communication signals. To address this, we exploited naturally occurring face- and voice-onset asynchronies in primate vocalizations. Using these as stimuli we recorded cortical oscillations and neuronal spiking responses from functional MRI (fMRI)-localized voice-sensitive cortex in the anterior temporal lobe of macaques. We show that the onset of the visual face stimulus resets the phase of low-frequency oscillations, and that the face-voice asynchrony affects the prominence of two key types of neuronal multisensory responses: enhancement or suppression. Our findings show a three-way association between temporal delays in audiovisual communication signals, phase-resetting of ongoing oscillations, and the sign of multisensory responses. The results reveal how natural onset asynchronies in cross-sensory inputs regulate network oscillations and neuronal excitability in the voice-sensitive cortex of macaques, a suggested animal model for human voice areas. These findings also advance predictions on the impact of multisensory input on neuronal processes in face areas and other brain regions.

  6. Designing of Intelligent Multilingual Patient Reported Outcome System (IMPROS)

    PubMed Central

    Pourasghar, Faramarz; Partovi, Yeganeh

    2015-01-01

    Background: In self-reported outcome procedures, patients themselves record disease symptoms outside medical centers and then report them to medical staff at specific intervals. One self-reporting method is interactive voice response (IVR), in which pre-designed questions recorded as voice tracks are played and the caller answers them by pressing the phone's keypad buttons. Aim: The present research explains the main framework for designing such a system based on IVR technology, designed and administered in Iran for the first time. Methods: The Interactive Voice Response system is composed of two main parts, hardware and software. The hardware section includes one or several digital phone lines, a modem card with voice-playback capability and a PC. The IVR software, on the other hand, acts as an intelligent control center, records call information and controls incoming data. Results: The main features of the system are that it can run on common PCs using simple, cheap modems, takes responses quickly and is appropriate for patients with low literacy. The system is applicable for monitoring chronic diseases, cancer and psychological illnesses, and can be suitable for the care of elders and children who require long-term care. Other features include user-friendliness, a decrease in the direct and indirect costs of treatment and a high level of security for access to patients' profiles. Conclusions: The intelligent multilingual patient reported outcome system (IMPROS) gives patients the opportunity to participate more in controlling their diseases during treatment and improves the mutual interaction between patients and medical staff. Moreover, it increases the quality of medical services, in addition to empowering patients and their caregivers. PMID:26635441
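
    Illustrative aside: the record above describes the core IVR loop (pre-recorded question prompts answered through the telephone keypad, with answers stored centrally). The Python sketch below is only a toy model of that loop; the questions, the get_keypress stand-in and the JSON output are hypothetical and are not taken from the IMPROS design.

      # Toy model of an IVR question/keypad loop, standing in for the
      # telephony hardware described in the record above. In a real system
      # the prompts would be played as recorded audio over a phone line and
      # the answers read as DTMF keypresses, not from stdin.
      import json
      from datetime import datetime

      QUESTIONS = [  # hypothetical symptom questions, one keypress each
          ("pain", "Rate your pain today: press 0 (none) to 9 (worst)."),
          ("nausea", "Did you feel nauseous today? Press 1 for yes, 2 for no."),
      ]

      def get_keypress(prompt):
          """Stand-in for DTMF capture; here it simply reads from stdin."""
          return input(prompt + " ").strip()

      def run_call(patient_id):
          answers = {"patient_id": patient_id,
                     "timestamp": datetime.now().isoformat()}
          for key, prompt in QUESTIONS:
              digit = get_keypress(prompt)
              answers[key] = digit if digit.isdigit() else None  # reject non-digit input
          return answers

      if __name__ == "__main__":
          print(json.dumps(run_call("demo-patient"), indent=2))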

  7. The use of an automated interactive voice response system to manage medication identification calls to a poison center.

    PubMed

    Krenzelok, Edward P; Mrvos, Rita

    2009-05-01

    In 2007, medication identification requests (MIRs) accounted for 26.2% of all calls to U.S. poison centers. MIRs are documented with minimal information, but they still require an inordinate amount of work by specialists in poison information (SPI). An analysis was undertaken to identify options to reduce the impact of MIRs on both human and financial resources. All MIRs (2003-2007) to a certified regional poison information center were analyzed to determine call patterns and staffing. The data were used to justify an efficient and cost-effective solution. MIRs represented 42.3% of the 2007 call volume. Optimal staffing would require hiring an additional four full-time equivalent SPI. An interactive voice response (IVR) system was developed to respond to the MIRs. The IVR was used to develop the Medication Identification System that allowed the diversion of up to 50% of the MIRs, enhancing surge capacity and allowing specialists to address the more emergent poison exposure calls. This technology is an entirely voice-activated response call management system that collects zip code, age, gender and drug data and stores all responses as .csv files for reporting purposes. The query bank includes the 200 most common MIRs, and the system features text-to-voice synthesis that allows easy modification of the drug identification menu. Callers always have the option of engaging a SPI at any time during the IVR call flow. The IVR is an efficient and effective alternative that creates better staff utilization.
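
    Illustrative aside: the record above notes that the Medication Identification System collects zip code, age, gender and drug data and stores all responses as .csv files. As a minimal sketch only (the field names and file layout are assumptions, not the poison center's actual schema), that kind of logging can be done with the Python standard library:

      # Hypothetical sketch: append one medication-identification call to a
      # CSV log, mirroring the data elements named in the record above.
      import csv
      from pathlib import Path

      LOG = Path("mir_calls.csv")
      FIELDS = ["zip_code", "age", "gender", "drug_query", "matched_drug"]

      def log_call(row):
          first_write = not LOG.exists()
          with LOG.open("a", newline="") as fh:
              writer = csv.DictWriter(fh, fieldnames=FIELDS)
              if first_write:
                  writer.writeheader()  # header row only on the first call
              writer.writerow(row)

      if __name__ == "__main__":
          log_call({"zip_code": "15213", "age": "34", "gender": "F",
                    "drug_query": "round white tablet, imprint L484",
                    "matched_drug": "acetaminophen 500 mg"})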

  8. A General Purpose Connections type CTI Server Based on SIP Protocol and Its Implementation

    NASA Astrophysics Data System (ADS)

    Watanabe, Toru; Koizumi, Hisao

    In this paper, we propose a general-purpose connections-type CTI (Computer Telephony Integration) server that provides various CTI services such as voice logging, in which the CTI server communicates with an IP-PBX using SIP (Session Initiation Protocol) and accumulates the voice packets of external-line telephone calls flowing between an extension IP telephone and a VoIP gateway connected to the outside line network. The CTI server realizes CTI services such as voice logging, telephone conferencing, or IVR (interactive voice response) by accumulating and processing the sampled voice packets. Furthermore, the CTI server incorporates a web server function that can provide various CTI services, such as a Web telephone directory, via a Web browser to PCs, cellular telephones or smartphones in mobile environments.
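
    Illustrative aside: the server described above logs calls by accumulating the voice packets that flow between the extension IP telephone and the VoIP gateway, with call boundaries learned from SIP signalling. The Python sketch below is a toy model of per-call packet accumulation only; the call-ID keying, payload format and file naming are assumptions, and no real SIP or RTP handling is shown.

      # Toy per-call voice-packet accumulator for a logging CTI server.
      from collections import defaultdict

      class VoiceLogger:
          def __init__(self):
              self._buffers = defaultdict(bytearray)  # call_id -> raw audio bytes

          def on_packet(self, call_id, payload):
              """Called for every captured voice packet of an active call."""
              self._buffers[call_id].extend(payload)

          def on_call_end(self, call_id):
              """Flush the accumulated audio to disk when signalling reports call end."""
              path = call_id + ".raw"
              with open(path, "wb") as fh:
                  fh.write(self._buffers.pop(call_id, b""))
              return path

      if __name__ == "__main__":
          logger = VoiceLogger()
          logger.on_packet("call-42", b"\x00\x01\x02")
          logger.on_packet("call-42", b"\x03\x04")
          print(logger.on_call_end("call-42"))  # writes call-42.raw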

  9. ‘Inner voices’: the cerebral representation of emotional voice cues described in literary texts

    PubMed Central

    Kreifelts, Benjamin; Gößling-Arnold, Christina; Wertheimer, Jürgen; Wildgruber, Dirk

    2014-01-01

    While non-verbal affective voice cues are generally recognized as a crucial behavioral guide in any day-to-day conversation their role as a powerful source of information may extend well beyond close-up personal interactions and include other modes of communication such as written discourse or literature as well. Building on the assumption that similarities between the different ‘modes’ of voice cues may not only be limited to their functional role but may also include cerebral mechanisms engaged in the decoding process, the present functional magnetic resonance imaging study aimed at exploring brain responses associated with processing emotional voice signals described in literary texts. Emphasis was placed on evaluating ‘voice’ sensitive as well as task- and emotion-related modulations of brain activation frequently associated with the decoding of acoustic vocal cues. Obtained findings suggest that several similarities emerge with respect to the perception of acoustic voice signals: results identify the superior temporal, lateral and medial frontal cortex as well as the posterior cingulate cortex and cerebellum to contribute to the decoding process, with similarities to acoustic voice perception reflected in a ‘voice’-cue preference of temporal voice areas as well as an emotion-related modulation of the medial frontal cortex and a task-modulated response of the lateral frontal cortex. PMID:24396008

  10. Natural asynchronies in audiovisual communication signals regulate neuronal multisensory interactions in voice-sensitive cortex

    PubMed Central

    Perrodin, Catherine; Kayser, Christoph; Logothetis, Nikos K.; Petkov, Christopher I.

    2015-01-01

    When social animals communicate, the onset of informative content in one modality varies considerably relative to the other, such as when visual orofacial movements precede a vocalization. These naturally occurring asynchronies do not disrupt intelligibility or perceptual coherence. However, they occur on time scales where they likely affect integrative neuronal activity in ways that have remained unclear, especially for hierarchically downstream regions in which neurons exhibit temporally imprecise but highly selective responses to communication signals. To address this, we exploited naturally occurring face- and voice-onset asynchronies in primate vocalizations. Using these as stimuli we recorded cortical oscillations and neuronal spiking responses from functional MRI (fMRI)-localized voice-sensitive cortex in the anterior temporal lobe of macaques. We show that the onset of the visual face stimulus resets the phase of low-frequency oscillations, and that the face–voice asynchrony affects the prominence of two key types of neuronal multisensory responses: enhancement or suppression. Our findings show a three-way association between temporal delays in audiovisual communication signals, phase-resetting of ongoing oscillations, and the sign of multisensory responses. The results reveal how natural onset asynchronies in cross-sensory inputs regulate network oscillations and neuronal excitability in the voice-sensitive cortex of macaques, a suggested animal model for human voice areas. These findings also advance predictions on the impact of multisensory input on neuronal processes in face areas and other brain regions. PMID:25535356

  11. Study on intelligent processing system of man-machine interactive garment frame model

    NASA Astrophysics Data System (ADS)

    Chen, Shuwang; Yin, Xiaowei; Chang, Ruijiang; Pan, Peiyun; Wang, Xuedi; Shi, Shuze; Wei, Zhongqian

    2018-05-01

    A man-machine interactive garment frame model intelligent processing system is studied in this paper. The system consists of several sensor devices, a voice processing module, mechanical parts and a centralized data acquisition device. The sensor devices collect information on the environmental changes produced by a body approaching the clothes frame model; the data acquisition device gathers the information sensed by the sensor devices; the voice processing module performs speaker-independent speech recognition to achieve human-machine interaction; and the mechanical moving parts make the corresponding mechanical responses to the information processed by the data acquisition device, to which they are connected by a one-way link. There is a one-way connection between the sensor devices and the data acquisition device, and a two-way connection between the data acquisition device and the voice processing module; the data acquisition device has a one-way connection to the mechanical moving parts. The intelligent processing system can judge whether it needs to interact with the customer, realizing man-machine interaction in place of the current rigid frame model.

  12. Deficits in voice and multisensory processing in patients with Prader-Willi syndrome.

    PubMed

    Salles, Juliette; Strelnikov, Kuzma; Carine, Mantoulan; Denise, Thuilleaux; Laurier, Virginie; Molinas, Catherine; Tauber, Maïthé; Barone, Pascal

    2016-05-01

    Prader-Willi syndrome (PWS) is a rare neurodevelopmental and genetic disorder characterized by variable expression of endocrine, cognitive and behavioral problems, among which are a true obsession with food and a deficit of satiety that lead to hyperphagia and severe obesity. Neuropsychological studies have reported that patients with PWS display altered social interactions, with a specific weakness in interpreting social information and responding to it, a symptom close to that observed in autism spectrum disorders (ASD). Based on the hypothesis that atypical multisensory integration, such as face and voice interactions, contributes to social impairment in PWS, we investigated the ability of patients with PWS to process communication signals, including the human voice. Patients with PWS recruited from the national reference center for PWS performed a simple detection task on stimuli presented in a uni- or bimodal condition, as well as a voice discrimination task. Compared with typically developing (TD) control individuals, patients with PWS presented a specific deficit in discriminating human voices from environmental sounds. Further, patients with PWS showed a much lower multisensory benefit, with an absence of violation of the race model, indicating that multisensory information does not converge and interact prior to the initiation of the behavioral response. All the deficits observed in PWS were stronger in the subgroup of patients with uniparental disomy, a population known to be more susceptible to ASD. Altogether, our study suggests that the deficits in social behavior observed in PWS derive at least partly from an impairment in deciphering the social information carried by voice signals, face signals, and the combination of both. In addition, our work is in agreement with brain imaging studies revealing an alteration in PWS of the "social brain network", including the STS region involved in processing human voices. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Interactive Augmentation of Voice Quality and Reduction of Breath Airflow in the Soprano Voice.

    PubMed

    Rothenberg, Martin; Schutte, Harm K

    2016-11-01

    In 1985, at a conference sponsored by the National Institutes of Health, Martin Rothenberg first described a form of nonlinear source-tract acoustic interaction that some sopranos, singing in their high range, can use to reduce the total airflow, allowing them to hold a note longer and simultaneously enrich the quality of the voice, without straining it. (M. Rothenberg, "Source-Tract Acoustic Interaction in the Soprano Voice and Implications for Vocal Efficiency," Fourth International Conference on Vocal Fold Physiology, New Haven, Connecticut, June 3-6, 1985.) In this paper, we describe additional evidence for this type of nonlinear source-tract interaction in some soprano singing and describe an analogous interaction phenomenon in communication engineering. We also present some implications for voice research and pedagogy. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  14. Phonologically-Based Biomarkers for Major Depressive Disorder

    DTIC Science & Technology

    2011-04-26

    ... "measures of depression severity and treatment response collected via interactive voice response (IVR) technology." Journal of Neurolinguistics 20(1 ...

  15. The distress of voice-hearing: the use of simulation for awareness, understanding and communication skill development in undergraduate nursing education.

    PubMed

    Orr, Fiona; Kellehear, Kevin; Armari, Elizabeth; Pearson, Arana; Holmes, Douglas

    2013-11-01

    Role-play scenarios are frequently used with undergraduate nursing students enrolled in mental health nursing subjects to simulate the experience of voice-hearing. However, role-play has limitations and typically does not involve those who hear voices. This collaborative project between mental health consumers who hear voices and nursing academics aimed to develop and assess simulated voice-hearing as an alternative learning tool that could provide a deeper understanding of the impact of voice-hearing, whilst enabling students to consider the communication skills required when interacting with voice-hearers. Simulated sounds and voices recorded by consumers on mp3 players were given to eighty final year nursing students undertaking a mental health elective. Students participated in various activities whilst listening to the simulations. Seventy-six (95%) students completed a written evaluation following the simulation, which assessed the benefits of the simulation and its implications for clinical practice. An analysis of the students' responses by an external evaluator indicated that there were three major learning outcomes: developing an understanding of voice-hearing, increasing students' awareness of its impact on functioning, and consideration of the communication skills necessary to engage with consumers who hear voices. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Using the Web to Market Your Schools.

    ERIC Educational Resources Information Center

    Carr, Nora

    2001-01-01

    With careful planning and a strategic focus, today's technology can greatly enhance a district's marketing efforts. Websites can offer features such as interactive school assignment (based on home address), ability to check student progress, education portals (24-hour news channels), one-to-one communication, and interactive voice responses. (MLH)

  17. Measuring positive and negative affect in the voiced sounds of African elephants (Loxodonta africana).

    PubMed

    Soltis, Joseph; Blowers, Tracy E; Savage, Anne

    2011-02-01

    As in other mammals, there is evidence that the African elephant voice reflects affect intensity, but it is less clear if positive and negative affective states are differentially reflected in the voice. An acoustic comparison was made between African elephant "rumble" vocalizations produced in negative social contexts (dominance interactions), neutral social contexts (minimal social activity), and positive social contexts (affiliative interactions) by four adult females housed at Disney's Animal Kingdom®. Rumbles produced in the negative social context exhibited higher and more variable fundamental frequencies (F(0)) and amplitudes, longer durations, increased voice roughness, and higher first formant locations (F1), compared to the neutral social context. Rumbles produced in the positive social context exhibited similar shifts in most variables (F(0) variation, amplitude, amplitude variation, duration, and F1), but the magnitude of response was generally less than that observed in the negative context. Voice roughness and F(0) observed in the positive social context remained similar to that observed in the neutral context. These results are most consistent with the vocal expression of affect intensity, in which the negative social context elicited higher intensity levels than the positive context, but differential vocal expression of positive and negative affect cannot be ruled out.

  18. Feasibility of automated speech sample collection with stuttering children using interactive voice response (IVR) technology.

    PubMed

    Vogel, Adam P; Block, Susan; Kefalianos, Elaina; Onslow, Mark; Eadie, Patricia; Barth, Ben; Conway, Laura; Mundt, James C; Reilly, Sheena

    2015-04-01

    To investigate the feasibility of adopting automated interactive voice response (IVR) technology for remotely capturing standardized speech samples from stuttering children. Participants were ten 6-year-old stuttering children. Their parents called a toll-free number from their homes and were prompted to elicit speech from their children using a standard protocol involving conversation, picture description and games. The automated IVR system was implemented using an off-the-shelf telephony software program and delivered by a standard desktop computer. The software infrastructure utilizes voice over internet protocol. Speech samples were automatically recorded during the calls. Video recordings were simultaneously acquired in the home at the time of the call to evaluate the fidelity of the telephone-collected samples. Key outcome measures included syllables spoken, percentage of syllables stuttered and an overall rating of stuttering severity using a 10-point scale. Data revealed a high level of relative reliability in terms of intra-class correlation between the video- and telephone-acquired samples on all outcome measures during the conversation task. Findings were less consistent for speech samples during picture description and games. Results suggest that IVR technology can be used successfully to automate remote capture of child speech samples.
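
    Illustrative aside: the key outcome measures in the record above are syllables spoken, percentage of syllables stuttered (%SS) and a 10-point severity rating. The short Python sketch below shows the %SS calculation; the counts are invented for illustration and do not come from the study.

      # Illustrative percent-syllables-stuttered (%SS) calculation.
      def percent_syllables_stuttered(stuttered, total):
          if total == 0:
              raise ValueError("no syllables in sample")
          return 100.0 * stuttered / total

      # Hypothetical counts for one child's telephone vs. video sample:
      telephone = percent_syllables_stuttered(stuttered=12, total=400)  # 3.00 %SS
      video = percent_syllables_stuttered(stuttered=13, total=410)      # about 3.17 %SS
      print("telephone %SS = {:.2f}, video %SS = {:.2f}".format(telephone, video))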

  19. A scoping review to explore the suitability of interactive voice response to conduct automated performance measurement of the patient's experience in primary care.

    PubMed

    Falconi, Michael; Johnston, Sharon; Hogg, William

    2016-05-01

    Practice-based performance measurement is fundamental for improvement and accountability in primary care. Traditional performance measurement of the patient's experience is often too costly and cumbersome for most practices. This scoping review explores the literature on the use of interactive voice response (IVR) telephone surveys to identify lessons for its use for collecting data on patient-reported outcome measures at the primary care practice level. The literature suggests IVR could potentially increase the capacity to reach more representative patient samples and those traditionally most difficult to engage. There is potential for long-term cost effectiveness and significant decrease of the burden on practices involved in collecting patient survey data. Challenges such as low response rates, mode effects, high initial set-up costs and maintenance fees, are also reported and require careful attention. This review suggests IVR may be a feasible alternative to traditional patient data collection methods, which should be further explored.

  20. Administration of Neuropsychological Tests Using Interactive Voice Response Technology in the Elderly: Validation and Limitations

    PubMed Central

    Miller, Delyana Ivanova; Talbot, Vincent; Gagnon, Michèle; Messier, Claude

    2013-01-01

    Interactive voice response (IVR) systems are computer programs that interact with people to provide a number of services from business to health care. We examined the ability of an IVR system to administer and score a verbal fluency task (fruits) and the digit span forward and backward in 158 community-dwelling people aged 65 to 92 years (full-scale IQ of 68–134). Only six participants could not complete all tasks, mostly due to early technical problems in the study. Participants were also administered the Wechsler Intelligence Scale, fourth edition (WAIS-IV), and Wechsler Memory Scale, fourth edition, subtests. The IVR system correctly recognized 90% of the fruits in the verbal fluency task and 93–95% of the number sequences in the digit span. The IVR system typically underestimated the performance of participants because of voice recognition errors. In the digit span, these errors led to the erroneous discontinuation of the test; however, the correlation between IVR scoring and clinical scoring was still high (93–95%). The correlation between the IVR verbal fluency and the WAIS-IV Similarities subtest was 0.31. The correlation between the IVR digit span forward and backward and the in-person administration was 0.46. We discuss how valid and useful IVR systems are for neuropsychological testing in the elderly. PMID:23950755
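
    To make the discontinuation issue concrete, here is a toy sketch of a digit-span scorer under a simplified rule (stop once both trials at a span length are failed); the rule, data layout, and digit sequences are assumptions for illustration, not the study's exact protocol:

```python
from typing import List, Tuple

def score_digit_span(trials: List[Tuple[List[int], List[int]]]) -> int:
    """Score a forward digit-span run.

    `trials` is an ordered list of (target_digits, recognized_digits) pairs,
    two trials per span length. One point per exactly reproduced sequence;
    scoring stops when both trials at a span length are failed (simplified
    discontinuation rule).
    """
    score = 0
    for i in range(0, len(trials), 2):               # two trials per length
        pair = trials[i:i + 2]
        passed = [target == heard for target, heard in pair]
        score += sum(passed)
        if not any(passed):                          # both failed -> stop
            break
    return score

# Illustrative: voice-recognition errors on both 4-digit trials trigger an
# early (possibly erroneous) discontinuation even if the speaker was correct.
trials = [
    ([3, 1], [3, 1]), ([5, 8], [5, 8]),                        # span 2
    ([2, 9, 4], [2, 9, 4]), ([7, 1, 6], [7, 1, 6]),            # span 3
    ([8, 3, 5, 2], [8, 3, 2]), ([4, 9, 1, 7], [4, 9, 1, 1]),   # span 4, misheard
    ([6, 2, 8, 3, 1], [6, 2, 8, 3, 1]),                        # never reached
]
print(score_digit_span(trials))  # 4 -- discontinued after the span-4 failures
```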

  1. 31 CFR 901.4 - Reporting debts.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... and Urban Development's Credit Alert Interactive Voice Response System (CAIVRS). For information about the CAIVRS program, agencies should contact the Director of Information Resources Management Policy and Management Division, Office of Information Technology, Department of Housing and Urban Development...

  2. 31 CFR 901.4 - Reporting debts.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... and Urban Development's Credit Alert Interactive Voice Response System (CAIVRS). For information about the CAIVRS program, agencies should contact the Director of Information Resources Management Policy and Management Division, Office of Information Technology, Department of Housing and Urban Development...

  3. Vocal responses to unanticipated perturbations in voice loudness feedback: an automatic mechanism for stabilizing voice amplitude.

    PubMed

    Bauer, Jay J; Mittal, Jay; Larson, Charles R; Hain, Timothy C

    2006-04-01

    The present study tested whether subjects respond to unanticipated short perturbations in voice loudness feedback with compensatory responses in voice amplitude. The roles of stimulus magnitude (±1, 3, vs. 6 dB SPL), stimulus direction (up vs down), and the ongoing voice amplitude level (normal vs soft) were compared across compensations. Subjects responded to perturbations in voice loudness feedback with a compensatory change in voice amplitude 76% of the time. Mean latency of amplitude compensation was 157 ms. Mean response magnitudes were smallest for 1-dB stimulus perturbations (0.75 dB) and greatest for 6-dB conditions (0.98 dB). However, expressed as gain, responses for 1-dB perturbations were largest and approached 1.0. Response magnitudes were larger for the soft voice amplitude condition compared to the normal voice amplitude condition. A mathematical model of the audio-vocal system captured the main features of the compensations. Previous research has demonstrated that subjects can respond to an unanticipated perturbation in voice pitch feedback with an automatic compensatory response in voice fundamental frequency. Data from the present study suggest that voice loudness feedback can be used in a similar manner to monitor and stabilize voice amplitude around a desired loudness level.

  4. Age Differences in Voice Evaluation: From Auditory-Perceptual Evaluation to Social Interactions

    ERIC Educational Resources Information Center

    Lortie, Catherine L.; Deschamps, Isabelle; Guitton, Matthieu J.; Tremblay, Pascale

    2018-01-01

    Purpose: The factors that influence the evaluation of voice in adulthood, as well as the consequences of such evaluation on social interactions, are not well understood. Here, we examined the effect of listeners' age and the effect of talker age, sex, and smoking status on the auditory-perceptual evaluation of voice, voice-related psychosocial…

  5. Bias in child maltreatment self-reports using Interactive Voice Response

    PubMed Central

    Kepple, Nancy J.; Freisthler, Bridget; Johnson-Motoyama, Michelle

    2014-01-01

    Few methods estimate the prevalence of child maltreatment in the general population due to concerns about socially desirable responding and mandated reporting laws. Innovative methods, such as Interactive Voice Response (IVR), may obtain better estimates that address these concerns. This study examined the utility of Interactive Voice Response (IVR) for child maltreatment behaviors by assessing differences between respondents who completed and did not complete a survey using IVR technology. A mixed-mode telephone survey was conducted in English and Spanish in 50 cities in California during 2009. Caregivers (n = 3,023) self-reported abusive and neglectful parenting behaviors for a focal child under the age of 13 using Computer-Assisted Telephone Interviewing and IVR. We used Hierarchical Generalized Linear Models to compare survey completion by caregivers nested within cities for the full sample and age-specific ranges. For demographic characteristics, caregivers born in the United States were more likely to complete the survey when controlling for covariates. Parenting stress, provision of physical needs, and provision of supervisory needs were not associated with survey completion in the full multivariate model. For caregivers of children 0 to 4 years (n = 838), those reporting they could often or always hear their child from another room had a higher likelihood of survey completion. The findings suggest IVR could prove to be useful for future surveys that aim to estimate abusive and/or neglectful parenting behaviors given the limited bias observed for demographic characteristics and problematic parenting behaviors. Further research should expand upon its utility to advance estimation rates. PMID:24819534
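
    For readers who want to reproduce this kind of analysis, the sketch below approximates the paper's hierarchical generalized linear model with an ordinary logistic regression plus city-clustered standard errors (statsmodels); the column names, synthetic data, and effect sizes are all assumptions for illustration, not the study's data or its exact model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the survey file: caregivers nested within 50 cities
# (all column names and effect sizes here are illustrative assumptions).
rng = np.random.default_rng(0)
n = 3023
df = pd.DataFrame({
    "city": rng.integers(0, 50, n),
    "us_born": rng.integers(0, 2, n),
    "parenting_stress": rng.normal(0, 1, n),
})
logit_p = -0.3 + 0.4 * df["us_born"] + 0.0 * df["parenting_stress"]
df["completed"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Approximation of a hierarchical GLM: ordinary logistic regression with
# standard errors clustered on city instead of an explicit city random
# intercept.
model = smf.logit("completed ~ us_born + parenting_stress", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["city"]})
print(np.exp(result.params))   # adjusted odds ratios
```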

  6. Maternal Sensitivity and the Learning-Promoting Effects of Depressed and Nondepressed Mothers' Infant-Directed Speech

    ERIC Educational Resources Information Center

    Kaplan, Peter S.; Burgess, Aaron P.; Sliter, Jessica K.; Moreno, Amanda J.

    2009-01-01

    The hypothesis that aspects of current mother-infant interactions predict an infant's response to maternal infant-directed speech (IDS) was tested. Relative to infants of nondepressed mothers, those of depressed mothers acquired weaker voice-face associations in response to their own mothers' IDS in a conditioned-attention paradigm, although this…

  7. Functional selectivity for face processing in the temporal voice area of early deaf individuals

    PubMed Central

    van Ackeren, Markus J.; Rabini, Giuseppe; Zonca, Joshua; Foa, Valentina; Baruffaldi, Francesca; Rezk, Mohamed; Pavani, Francesco; Rossion, Bruno; Collignon, Olivier

    2017-01-01

    Brain systems supporting face and voice processing both contribute to the extraction of important information for social interaction (e.g., person identity). How does the brain reorganize when one of these channels is absent? Here, we explore this question by combining behavioral and multimodal neuroimaging measures (magneto-encephalography and functional imaging) in a group of early deaf humans. We show enhanced selective neural response for faces and for individual face coding in a specific region of the auditory cortex that is typically specialized for voice perception in hearing individuals. In this region, selectivity to face signals emerges early in the visual processing hierarchy, shortly after typical face-selective responses in the ventral visual pathway. Functional and effective connectivity analyses suggest reorganization in long-range connections from early visual areas to the face-selective temporal area in individuals with early and profound deafness. Altogether, these observations demonstrate that regions that typically specialize for voice processing in the hearing brain preferentially reorganize for face processing in born-deaf people. Our results support the idea that cross-modal plasticity in the case of early sensory deprivation relates to the original functional specialization of the reorganized brain regions. PMID:28652333

  8. Nurses using futuristic technology in today's healthcare setting.

    PubMed

    Wolf, Debra M; Kapadia, Amar; Kintzel, Jessie; Anton, Bonnie B

    2009-01-01

    Human-computer interaction (HCI) here means nurses using voice-assisted technology within a clinical setting to document patient care in real time, retrieve patient information from care plans, and complete routine tasks. This is a reality currently utilized by clinicians in acute and long-term care settings. Voice-assisted documentation provides hands- and eyes-free, accurate documentation while enabling effective communication and task management. The speech technology increases the accuracy of documentation while interfacing directly with the electronic health record (EHR). Using technology consisting of a lightweight headset and a small, fist-sized wireless computer, verbal responses to easy-to-follow cues are converted into a database, allowing staff to obtain individualized care status reports on demand. To further assist staff in their daily work, this innovative technology allows staff to send and receive pages as needed. This paper will discuss how leading-edge, award-winning technology is being integrated within the United States. Collaborative efforts between clinicians and analysts will be discussed, reflecting the interactive design and build functionality. Features such as the system's voice responses and directed cues will be shared, along with how easily data can be documented, viewed, and retrieved. Outcome data will be presented on how the technology affected the organization's quality outcomes, financial reimbursement, and employee satisfaction.

  9. Voice responses to changes in pitch of voice or tone auditory feedback

    NASA Astrophysics Data System (ADS)

    Sivasankar, Mahalakshmi; Bauer, Jay J.; Babu, Tara; Larson, Charles R.

    2005-02-01

    The present study was undertaken to examine if a subject's voice F0 responded not only to perturbations in pitch of voice feedback but also to changes in pitch of a side tone presented congruent with voice feedback. Small-magnitude, brief-duration perturbations in pitch of voice or tone auditory feedback were randomly introduced during sustained vowel phonations. Results demonstrated a higher rate and larger magnitude of voice F0 responses to changes in pitch of the voice compared with a triangular-shaped tone (experiment 1) or a pure tone (experiment 2). However, response latencies did not differ across voice or tone conditions. Data suggest that subjects responded to the change in F0 rather than harmonic frequencies of auditory feedback because voice F0 response prevalence, magnitude, or latency did not statistically differ across triangular-shaped tone or pure-tone feedback. Results indicate the audio-vocal system is sensitive to the change in pitch of a variety of sounds, which may represent a flexible system capable of adapting to changes in the subject's voice. However, lower prevalence and smaller responses to tone pitch-shifted signals suggest that the audio-vocal system may resist changes to the pitch of other environmental sounds when voice feedback is present.

  10. The Honors College Experience Reconsidered: Exploring the Student Perspective

    ERIC Educational Resources Information Center

    Young, James H., III; Story, Lachel; Tarver, Samantha; Weinauer, Ellen; Keeler, Julia; McQuirter, Allison

    2016-01-01

    Often administrators overlook the student voice in developing strategic plans, mission and vision statements, marketing strategies, student services, and extracurricular programming. Engaging students in these areas may enhance students' cooperation, interactions, responsibility, and expectations. In order to assess honors students' perspectives…

  11. Computer-automated dementia screening using a touch-tone telephone.

    PubMed

    Mundt, J C; Ferber, K L; Rizzo, M; Greist, J H

    2001-11-12

    This study investigated the sensitivity and specificity of a computer-automated telephone system to evaluate cognitive impairment in elderly callers to identify signs of early dementia. The Clinical Dementia Rating Scale was used to assess 155 subjects aged 56 to 93 years (n = 74, 27, 42, and 12, with a Clinical Dementia Rating Scale score of 0, 0.5, 1, and 2, respectively). These subjects performed a battery of tests administered by an interactive voice response system using standard Touch-Tone telephones. Seventy-four collateral informants also completed an interactive voice response version of the Symptoms of Dementia Screener. Sixteen cognitively impaired subjects were unable to complete the telephone call. Performances on 6 of 8 tasks were significantly influenced by Clinical Dementia Rating Scale status. The mean (SD) call length was 12 minutes 27 seconds (2 minutes 32 seconds). A subsample (n = 116) was analyzed using machine-learning methods, producing a scoring algorithm that combined performances across 4 tasks. Results indicated a potential sensitivity of 82.0% and specificity of 85.5%. The scoring model generalized to a validation subsample (n = 39), producing 85.0% sensitivity and 78.9% specificity. The kappa agreement between predicted and actual group membership was 0.64 (P<.001). Of the 16 subjects unable to complete the call, 11 provided sufficient information to permit us to classify them as impaired. Standard scoring of the interactive voice response-administered Symptoms of Dementia Screener (completed by informants) produced a screening sensitivity of 63.5% and 100% specificity. A lower criterion found a 90.4% sensitivity, without lowering specificity. Computer-automated telephone screening for early dementia using either informant or direct assessment is feasible. Such systems could provide wide-scale, cost-effective screening, education, and referral services to patients and caregivers.
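
    As a point of reference for the reported figures, sensitivity, specificity, and Cohen's kappa can be computed from predicted versus actual impairment labels as in the following sketch; the label arrays are illustrative only (sized to echo the n = 39 validation subsample), not the study's data:

```python
import numpy as np

def screening_metrics(actual: np.ndarray, predicted: np.ndarray):
    """Sensitivity, specificity and Cohen's kappa for binary labels
    (1 = impaired, 0 = unimpaired)."""
    tp = np.sum((predicted == 1) & (actual == 1))
    tn = np.sum((predicted == 0) & (actual == 0))
    fp = np.sum((predicted == 1) & (actual == 0))
    fn = np.sum((predicted == 0) & (actual == 1))

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)

    n = tp + tn + fp + fn
    observed = (tp + tn) / n
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (observed - expected) / (1 - expected)
    return sensitivity, specificity, kappa

# Illustrative validation-style split: 20 impaired and 19 unimpaired callers
actual    = np.array([1]*20 + [0]*19)
predicted = np.array([1]*17 + [0]*3 + [0]*15 + [1]*4)
print(screening_metrics(actual, predicted))  # ~0.85 sensitivity, ~0.79 specificity
```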

  12. Mobile Health Devices as Tools for Worldwide Cardiovascular Risk Reduction and Disease Management.

    PubMed

    Piette, John D; List, Justin; Rana, Gurpreet K; Townsend, Whitney; Striplin, Dana; Heisler, Michele

    2015-11-24

    We examined evidence on whether mobile health (mHealth) tools, including interactive voice response calls, short message service (text messaging), and smartphones, can improve lifestyle behaviors and management related to cardiovascular diseases throughout the world. We conducted a state-of-the-art review and literature synthesis of peer-reviewed and gray literature published since 2004. The review prioritized randomized trials and studies focused on cardiovascular diseases and risk factors, but included other reports when they represented the best available evidence. The search emphasized reports on the potential benefits of mHealth interventions implemented in low- and middle-income countries. Interactive voice response and short message service interventions can improve cardiovascular preventive care in developed countries by addressing risk factors including weight, smoking, and physical activity. Interactive voice response and short message service-based interventions for cardiovascular disease management also have shown benefits with respect to hypertension management, hospital readmissions, and diabetic glycemic control. Multimodal interventions including Web-based communication with clinicians and mHealth-enabled clinical monitoring with feedback also have shown benefits. The evidence regarding the potential benefits of interventions using smartphones and social media is still developing. Studies of mHealth interventions have been conducted in >30 low- and middle-income countries, and evidence to date suggests that programs are feasible and may improve medication adherence and disease outcomes. Emerging evidence suggests that mHealth interventions may improve cardiovascular-related lifestyle behaviors and disease management. Next-generation mHealth programs developed worldwide should be based on evidence-based behavioral theories and incorporate advances in artificial intelligence for adapting systems automatically to patients' unique and changing needs. © 2015 American Heart Association, Inc.

  13. Temporal signatures of processing voiceness and emotion in sound

    PubMed Central

    Schirmer, Annett; Gunter, Thomas C.

    2017-01-01

    This study explored the temporal course of vocal and emotional sound processing. Participants detected rare repetitions in a stimulus stream comprising neutral and surprised non-verbal exclamations and spectrally rotated control sounds. Spectral rotation preserved some acoustic and emotional properties of the vocal originals. Event-related potentials elicited to unrepeated sounds revealed effects of voiceness and emotion. Relative to non-vocal sounds, vocal sounds elicited a larger centro-parietally distributed N1. This effect was followed by greater positivity to vocal relative to non-vocal sounds beginning with the P2 and extending throughout the recording epoch (N4, late positive potential) with larger amplitudes in female than in male listeners. Emotion effects overlapped with the voiceness effects but were smaller and differed topographically. Voiceness and emotion interacted only for the late positive potential, which was greater for vocal-emotional as compared with all other sounds. Taken together, these results point to a multi-stage process in which voiceness and emotionality are represented independently before being integrated in a manner that biases responses to stimuli with socio-emotional relevance. PMID:28338796

  14. Temporal signatures of processing voiceness and emotion in sound.

    PubMed

    Schirmer, Annett; Gunter, Thomas C

    2017-06-01

    This study explored the temporal course of vocal and emotional sound processing. Participants detected rare repetitions in a stimulus stream comprising neutral and surprised non-verbal exclamations and spectrally rotated control sounds. Spectral rotation preserved some acoustic and emotional properties of the vocal originals. Event-related potentials elicited to unrepeated sounds revealed effects of voiceness and emotion. Relative to non-vocal sounds, vocal sounds elicited a larger centro-parietally distributed N1. This effect was followed by greater positivity to vocal relative to non-vocal sounds beginning with the P2 and extending throughout the recording epoch (N4, late positive potential) with larger amplitudes in female than in male listeners. Emotion effects overlapped with the voiceness effects but were smaller and differed topographically. Voiceness and emotion interacted only for the late positive potential, which was greater for vocal-emotional as compared with all other sounds. Taken together, these results point to a multi-stage process in which voiceness and emotionality are represented independently before being integrated in a manner that biases responses to stimuli with socio-emotional relevance. © The Author (2017). Published by Oxford University Press.

  15. Measuring the intuitive response of users when faced with different interactive paradigms to control a gastroenterology CAD system.

    PubMed

    Abrantes, D; Gomes, P; Pereira, D; Coimbra, M

    2016-08-01

    The gastroenterology specialty could benefit from the introduction of Computer Assisted Decision (CAD) systems, since gastric cancer is a serious concern in which an accurate and early diagnosis usually leads to a good prognosis. Still, the way doctors interact with these systems is very important, because it will often determine their acceptance or rejection, as any gains in productivity will frequently hinge on how comfortable doctors are with them. Considering other interaction paradigms, such as voice and motion control, is important because typical inputs such as keyboard and mouse are sometimes not the best choice for certain clinical scenarios. In order to ascertain how a doctor could control a hypothetical CAD system during a gastroenterology exam, we measured the natural response of users when faced with three different task requests, using three types of interaction paradigms: voice, gesture, and endoscope. Results fit what was expected, with gesture control being the most intuitive to use and the endoscope being at the other extreme. All the technologies are mature enough to cope with the response concepts the participants gave us. However, when taking the scenario context into account, better natural-response scores may not always be the best choice for implementation. Simplification or reduction of tasks, along with a well-thought-out interface, or even mixing more task-oriented paradigms for particular requests, could allow for better system control with fewer inconveniences for the user.

  16. 75 FR 64732 - Agency Forms Undergoing Paperwork Reduction Act Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-20

    ... customer satisfaction surveys. These surveys provide the public with ongoing opportunity to express their... the public. Customers are defined as any individual or group seeking health or public health... customer satisfaction surveys. The Interactive Voice Response Survey--offered in English and Spanish and...

  17. Research on realization scheme of interactive voice response (IVR) system

    NASA Astrophysics Data System (ADS)

    Jin, Xin; Zhu, Guangxi

    2003-12-01

    In this paper, a novel interactive voice response (IVR) system is proposed that differs markedly from traditional designs. Using software operation and network control, the proposed IVR system depends only on software in the server hosting the system and on the hardware of network terminals on the user side, such as a gateway (GW), personal gateway (PG), or PC. The system transmits audio over the Internet to the network terminals using the Real-time Transport Protocol (RTP) and controls the call flow with a finite state machine (FSM) driven by H.245 messages sent from the user side and by system control factors. Compared with other existing schemes, this IVR system offers several advantages, such as greatly reducing system cost, fully utilizing existing network resources, and enhancing flexibility. The system can be deployed in any service server anywhere on the Internet and is even suitable for wireless applications based on packet-switched communication. The IVR system has been implemented and has passed system testing.
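
    The control scheme described above, a finite state machine driven by user-side signalling events, can be sketched abstractly as follows; the states, events, and prompt names are invented for illustration, and the event strings merely stand in for the H.245-style user-input indications the paper relies on:

```python
# Toy finite state machine for an IVR call flow, driven by events such as
# digit presses reported from the user side (stand-ins for H.245-style
# user-input indication messages).
TRANSITIONS = {
    ("greeting", "call_connected"): ("main_menu", "play_main_menu.wav"),
    ("main_menu", "digit_1"):       ("balance",   "play_balance.wav"),
    ("main_menu", "digit_2"):       ("agent",     "play_transfer.wav"),
    ("balance",   "digit_0"):       ("main_menu", "play_main_menu.wav"),
    ("main_menu", "hangup"):        ("done",      None),
}

def run_ivr(events):
    """Step through the FSM; each transition names an audio prompt that a
    real system would stream to the user's terminal over RTP."""
    state = "greeting"
    for event in events:
        state, prompt = TRANSITIONS.get((state, event), (state, None))
        if prompt:
            print(f"-> state={state}, stream {prompt} to terminal over RTP")
    return state

run_ivr(["call_connected", "digit_1", "digit_0", "hangup"])
```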

  18. Laughter catches attention!

    PubMed

    Pinheiro, Ana P; Barros, Carla; Dias, Marcelo; Kotz, Sonja A

    2017-12-01

    In social interactions, emotionally salient and sudden changes in vocal expressions attract attention. However, only a few studies examined how emotion and attention interact in voice processing. We investigated neutral, happy (laughs) and angry (growls) vocalizations in a modified oddball task. Participants silently counted the targets in each block and rated the valence and arousal of the vocalizations. A combined event-related potential and time-frequency analysis focused on the P3 and pre-stimulus alpha power to capture attention effects in response to unexpected events. Whereas an early differentiation between emotionally salient and neutral vocalizations was reflected in the P3a response, the P3b was selectively enhanced for happy voices. The P3b modulation was predicted by pre-stimulus frontal alpha desynchronization, and by the perceived pleasantness of the targets. These findings indicate that vocal emotions may be differently processed based on task relevance and valence. Increased anticipation and attention to positive vocal cues (laughter) may reflect their high social relevance. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. 75 FR 30845 - Request Voucher for Grant Payment and Line of Credit Control System (LOCCS) Voice Response System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-02

    ... request vouchers for distribution of grant funds using the automated Voice Response System (VRS). An... Payment and Line of Credit Control System (LOCCS) Voice Response System Access Authorization AGENCY... subject proposal. Payment request vouchers for distribution of grant funds using the automated Voice...

  20. Voice to Voice: Developing In-Service Teachers' Personal, Collaborative, and Public Voices.

    ERIC Educational Resources Information Center

    Thurber, Frances; Zimmerman, Enid

    1997-01-01

    Describes a model for inservice education that begins with an interchange of teachers' voices with those of the students in an interactive dialog. The exchange allows them to develop their private voices through self-reflection and validation of their own experiences. (JOW)

  1. The "VoiceForum" Platform for Spoken Interaction

    ERIC Educational Resources Information Center

    Fynn, Fohn; Wigham, Chiara R.

    2011-01-01

    Showcased in the courseware exhibition, "VoiceForum" is a web-based software platform for asynchronous learner interaction in threaded discussions using voice and text. A dedicated space is provided for the tutor who can give feedback on a posted message and dialogue with the participants at a separate level from the main interactional…

  2. Voices on Voice: Perspectives, Definitions, Inquiry.

    ERIC Educational Resources Information Center

    Yancey, Kathleen Blake, Ed.

    This collection of essays approaches "voice" as a means of expression that lives in the interactions of writers, readers, and language, and examines the conceptualizations of voice within the oral rhetorical and expressionist traditions, and the notion of voice as both a singular and plural phenomenon. An explanatory introduction by the…

  3. Comparing the demands of destination entry using Google Glass and the Samsung Galaxy S4 during simulated driving.

    PubMed

    Beckers, Niek; Schreiner, Sam; Bertrand, Pierre; Mehler, Bruce; Reimer, Bryan

    2017-01-01

    The relative impact of using a Google Glass based voice interface to enter a destination address compared to voice and touch-entry methods using a handheld Samsung Galaxy S4 smartphone was assessed in a driving simulator. Voice entry (Google Glass and Samsung) had lower subjective workload ratings, lower standard deviation of lateral lane position, shorter task durations, faster remote Detection Response Task (DRT) reaction times, lower DRT miss rates, and resulted in less time glancing off-road than the primary visual-manual interaction with the Samsung Touch interface. Comparing voice entry methods, using Google Glass took less time, while glance metrics and reaction time to DRT events responded to were similar. In contrast, DRT miss rate was higher for Google Glass, suggesting that drivers may be under increased distraction levels but for a shorter period of time; whether one or the other equates to an overall safer driving experience is an open question. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Temporal voice areas exist in autism spectrum disorder but are dysfunctional for voice identity recognition

    PubMed Central

    Borowiak, Kamila; von Kriegstein, Katharina

    2016-01-01

    The ability to recognise the identity of others is a key requirement for successful communication. Brain regions that respond selectively to voices exist in humans from early infancy on. Currently, it is unclear whether dysfunction of these voice-sensitive regions can explain voice identity recognition impairments. Here, we used two independent functional magnetic resonance imaging studies to investigate voice processing in a population that has been reported to have no voice-sensitive regions: autism spectrum disorder (ASD). Our results refute the earlier report that individuals with ASD have no responses in voice-sensitive regions: Passive listening to vocal, compared to non-vocal, sounds elicited typical responses in voice-sensitive regions in the high-functioning ASD group and controls. In contrast, the ASD group had a dysfunction in voice-sensitive regions during voice identity but not speech recognition in the right posterior superior temporal sulcus/gyrus (STS/STG)—a region implicated in processing complex spectrotemporal voice features and unfamiliar voices. The right anterior STS/STG correlated with voice identity recognition performance in controls but not in the ASD group. The findings suggest that right STS/STG dysfunction is critical for explaining voice recognition impairments in high-functioning ASD and show that ASD is not characterised by a general lack of voice-sensitive responses. PMID:27369067

  5. Effects of Voice Harmonic Complexity on ERP Responses to Pitch-Shifted Auditory Feedback

    PubMed Central

    Behroozmand, Roozbeh; Korzyukov, Oleg; Larson, Charles R.

    2011-01-01

    Objective: The present study investigated the neural mechanisms of voice pitch control for different levels of harmonic complexity in the auditory feedback. Methods: Event-related potentials (ERPs) were recorded in response to +200 cents pitch perturbations in the auditory feedback of self-produced natural human vocalizations, complex and pure tone stimuli during active vocalization and passive listening conditions. Results: During active vocal production, ERP amplitudes were largest in response to pitch shifts in the natural voice, moderately large for non-voice complex stimuli and smallest for the pure tones. However, during passive listening, neural responses were equally large for pitch shifts in voice and non-voice complex stimuli but still larger than that for pure tones. Conclusions: These findings suggest that pitch change detection is facilitated for spectrally rich sounds such as natural human voice and non-voice complex stimuli compared with pure tones. Vocalization-induced increase in neural responses for voice feedback suggests that sensory processing of naturally-produced complex sounds such as human voice is enhanced by means of motor-driven mechanisms (e.g. efference copies) during vocal production. Significance: This enhancement may enable the audio-vocal system to more effectively detect and correct for vocal errors in the feedback of natural human vocalizations to maintain an intended vocal output for speaking. PMID:21719346

  6. The effect of voice quality and competing speakers in a passage comprehension task: performance in relation to cognitive functioning in children with normal hearing.

    PubMed

    von Lochow, Heike; Lyberg-Åhlander, Viveka; Sahlén, Birgitta; Kastberg, Tobias; Brännström, K Jonas

    2018-04-01

    This study explores the effect of voice quality and competing speaker(s) on children's performance in a passage comprehension task. Furthermore, it explores the interaction between passage comprehension and cognitive functioning. Forty-nine children (27 girls and 22 boys) with normal hearing (aged 7-12 years) participated. Passage comprehension was tested in six different listening conditions: a typical voice (non-dysphonic voice) in quiet, a typical voice with one competing speaker, a typical voice with four competing speakers, a dysphonic voice in quiet, a dysphonic voice with one competing speaker, and a dysphonic voice with four competing speakers. The children's working memory capacity and executive functioning were also assessed. The findings indicate no direct effect of voice quality on the children's performance, but a significant effect of background listening condition. Interaction effects were seen between voice quality, background listening condition, and executive functioning. The children's susceptibility to the effects of the dysphonic voice and the background listening conditions is related to their executive functions. The findings have several implications for the design of interventions in language learning environments such as classrooms.

  7. Crossmodal interactions during non-linguistic auditory processing in cochlear-implanted deaf patients.

    PubMed

    Barone, Pascal; Chambaudie, Laure; Strelnikov, Kuzma; Fraysse, Bernard; Marx, Mathieu; Belin, Pascal; Deguine, Olivier

    2016-10-01

    Due to signal distortion, speech comprehension in cochlear-implanted (CI) patients relies strongly on visual information, a compensatory strategy supported by important cortical crossmodal reorganisations. Though crossmodal interactions are evident for speech processing, it is unclear whether a visual influence is observed in CI patients during non-linguistic visual-auditory processing, such as face-voice interactions, which are important in social communication. We analyse and compare visual-auditory interactions in CI patients and normal-hearing subjects (NHS) at equivalent auditory performance levels. Proficient CI patients and NHS performed a voice-gender categorisation in the visual-auditory modality from a morphing-generated voice continuum between male and female speakers, while ignoring the presentation of a male or female visual face. Our data show that during the face-voice interaction, CI deaf patients are strongly influenced by visual information when performing an auditory gender categorisation task, in spite of maximum recovery of auditory speech. No such effect is observed in NHS, even in situations of CI simulation. Our hypothesis is that the functional crossmodal reorganisation that occurs in deafness could influence nonverbal processing, such as face-voice interaction; this is important for patient internal supramodal representation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Spanish-Speaking Patients’ Engagement in Interactive Voice Response (IVR) Chronic Disease Self-Management Support Calls: Analyses of Data from Three Countries

    PubMed Central

    Piette, John D.; Marinec, Nicolle; Gallegos-Cabriales, Esther C.; Gutierrez-Valverde, Juana Mercedes; Rodriguez-Saldaña, Joel; Mendoz-Alevares, Milton; Silveira, Maria J.

    2013-01-01

    We used data from Interactive Voice Response (IVR) self-management support studies in Honduras, Mexico, and the United States (US) to determine whether IVR calls to Spanish-speaking patients with chronic illnesses are a feasible strategy for improving monitoring and education between face-to-face visits. A total of 268 patients with diabetes or hypertension participated in 6–12 weeks of weekly IVR follow-up. IVR calls emanated from US servers with connections via Voice over IP. More than half (54%) of patients enrolled with an informal caregiver who received automated feedback based on the patient’s assessments, and clinical staff received urgent alerts. Participants had on average 6.1 years of education, and 73% were women. After 2,443 person weeks of follow-up, patients completed 1,494 IVR assessments. Call completion rates were higher in the US (75%) than in Honduras (59%) or Mexico (61%; p<0.001). Patients participating with an informal caregiver were more likely to complete calls (adjusted odds ratio [AOR]: 1.53; 95% confidence interval [CI]: 1.04, 2.25) while patients reporting fair or poor health at enrollment were less likely (AOR: 0.59; 95% CI: 0.38, 0.92). Satisfaction rates were high, with 98% of patients reporting that the system was easy to use, and 86% reporting that the calls helped them a great deal in managing their health problems. In summary, IVR self-management support is feasible among Spanish-speaking patients with chronic disease, including those living in less-developed countries. Voice over IP can be used to deliver IVR disease management services internationally; involving informal caregivers may increase patient engagement. PMID:23532005
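
    As a side note on the reported statistics, an adjusted odds ratio and its 95% confidence interval are recovered from a logistic-regression coefficient and standard error as in this small sketch; the coefficient and standard error below are back-calculated from the published AOR of 1.53 (95% CI 1.04, 2.25) purely for illustration:

```python
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Odds ratio and 95% CI from a logistic-regression coefficient."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# beta and se back-calculated from the reported AOR 1.53 (95% CI 1.04-2.25)
# for the "enrolled with an informal caregiver" term; purely illustrative.
beta, se = math.log(1.53), 0.197
aor, lo, hi = odds_ratio_ci(beta, se)
print(f"AOR {aor:.2f} (95% CI {lo:.2f}, {hi:.2f})")
```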

  9. Response time effects of alerting tone and semantic context for synthesized voice cockpit warnings

    NASA Technical Reports Server (NTRS)

    Simpson, C. A.; Williams, D. H.

    1980-01-01

    Some handbooks and human factors design guides have recommended that a voice warning should be preceded by a tone to attract attention to the warning. As far as can be determined from a search of the literature, no experimental evidence supporting this exists. A fixed-base simulator flown by airline pilots was used to test the hypothesis that the total 'system-time' to respond to a synthesized voice cockpit warning would be longer when the message was preceded by a tone because the voice itself was expected to perform both the alerting and the information transfer functions. The simulation included realistic ATC radio voice communications, synthesized engine noise, cockpit conversation, and realistic flight routes. The effect of a tone before a voice warning was to lengthen response time; that is, responses were slower with an alerting tone. Lengthening the voice warning with another word, however, did not increase response time.

  10. Effects of voice harmonic complexity on ERP responses to pitch-shifted auditory feedback.

    PubMed

    Behroozmand, Roozbeh; Korzyukov, Oleg; Larson, Charles R

    2011-12-01

    The present study investigated the neural mechanisms of voice pitch control for different levels of harmonic complexity in the auditory feedback. Event-related potentials (ERPs) were recorded in response to +200 cents pitch perturbations in the auditory feedback of self-produced natural human vocalizations, complex and pure tone stimuli during active vocalization and passive listening conditions. During active vocal production, ERP amplitudes were largest in response to pitch shifts in the natural voice, moderately large for non-voice complex stimuli and smallest for the pure tones. However, during passive listening, neural responses were equally large for pitch shifts in voice and non-voice complex stimuli but still larger than that for pure tones. These findings suggest that pitch change detection is facilitated for spectrally rich sounds such as natural human voice and non-voice complex stimuli compared with pure tones. Vocalization-induced increase in neural responses for voice feedback suggests that sensory processing of naturally-produced complex sounds such as human voice is enhanced by means of motor-driven mechanisms (e.g. efference copies) during vocal production. This enhancement may enable the audio-vocal system to more effectively detect and correct for vocal errors in the feedback of natural human vocalizations to maintain an intended vocal output for speaking. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  11. Effects of the Interaction of Caffeine and Water on Voice Performance: A Pilot Study

    ERIC Educational Resources Information Center

    Franca, Maria Claudia; Simpson, Kenneth O.

    2013-01-01

    The objective of this "pilot" investigation was to study the effects of the interaction of caffeine and water intake on voice as evidenced by acoustic and aerodynamic measures, to determine whether ingestion of 200 mg of caffeine and various levels of water intake have an impact on voice. The participants were 48 females ranging in age…

  12. Vocal Responses to Perturbations in Voice Auditory Feedback in Individuals with Parkinson's Disease

    PubMed Central

    Liu, Hanjun; Wang, Emily Q.; Metman, Leo Verhagen; Larson, Charles R.

    2012-01-01

    Background: One of the most common symptoms of speech deficits in individuals with Parkinson's disease (PD) is significantly reduced vocal loudness and pitch range. The present study investigated whether abnormal vocalizations in individuals with PD are related to sensory processing of voice auditory feedback. Perturbations in loudness or pitch of voice auditory feedback are known to elicit short latency, compensatory responses in voice amplitude or fundamental frequency. Methodology/Principal Findings: Twelve individuals with Parkinson's disease and 13 age- and sex-matched healthy control subjects sustained a vowel sound (/α/) and received unexpected, brief (200 ms) perturbations in voice loudness (±3 or 6 dB) or pitch (±100 cents) auditory feedback. Results showed that, while all subjects produced compensatory responses in their voice amplitude or fundamental frequency, individuals with PD exhibited larger response magnitudes than the control subjects. Furthermore, for loudness-shifted feedback, upward stimuli resulted in shorter response latencies than downward stimuli in the control subjects but not in individuals with PD. Conclusions/Significance: The larger response magnitudes in individuals with PD compared with the control subjects suggest that processing of voice auditory feedback is abnormal in PD. Although the precise mechanisms of the voice feedback processing are unknown, results of this study suggest that abnormal voice control in individuals with PD may be related to dysfunctional mechanisms of error detection or correction in sensory feedback processing. PMID:22448258

  13. Association of trait emotional intelligence and individual fMRI-activation patterns during the perception of social signals from voice and face.

    PubMed

    Kreifelts, Benjamin; Ethofer, Thomas; Huberle, Elisabeth; Grodd, Wolfgang; Wildgruber, Dirk

    2010-07-01

    Multimodal integration of nonverbal social signals is essential for successful social interaction. Previous studies have implicated the posterior superior temporal sulcus (pSTS) in the perception of social signals such as nonverbal emotional signals as well as in social cognitive functions like mentalizing/theory of mind. In the present study, we evaluated the relationships between trait emotional intelligence (EI) and fMRI activation patterns in individual subjects during the multimodal perception of nonverbal emotional signals from voice and face. Trait EI was linked to hemodynamic responses in the right pSTS, an area which also exhibits a distinct sensitivity to human voices and faces. Within all other regions known to subserve the perceptual audiovisual integration of human social signals (i.e., amygdala, fusiform gyrus, thalamus), no such linked responses were observed. This functional difference in the network for the audiovisual perception of human social signals indicates a specific contribution of the pSTS as a possible interface between the perception of social information and social cognition. (c) 2009 Wiley-Liss, Inc.

  14. Emotion and attention interactions in social cognition: brain regions involved in processing anger prosody.

    PubMed

    Sander, David; Grandjean, Didier; Pourtois, Gilles; Schwartz, Sophie; Seghier, Mohamed L; Scherer, Klaus R; Vuilleumier, Patrik

    2005-12-01

    Multiple levels of processing are thought to be involved in the appraisal of emotionally relevant events, with some processes being engaged relatively independently of attention, whereas other processes may depend on attention and current task goals or context. We conducted an event-related fMRI experiment to examine how processing angry voice prosody, an affectively and socially salient signal, is modulated by voluntary attention. To manipulate attention orthogonally to emotional prosody, we used a dichotic listening paradigm in which meaningless utterances, pronounced with either angry or neutral prosody, were presented simultaneously to both ears on each trial. In two successive blocks, participants selectively attended to either the left or right ear and performed a gender-decision on the voice heard on the target side. Our results revealed a functional dissociation between different brain areas. Whereas the right amygdala and bilateral superior temporal sulcus responded to anger prosody irrespective of whether it was heard from a to-be-attended or to-be-ignored voice, the orbitofrontal cortex and the cuneus in medial occipital cortex showed greater activation to the same emotional stimuli when the angry voice was to-be-attended rather than to-be-ignored. Furthermore, regression analyses revealed a strong correlation between orbitofrontal regions and sensitivity on a behavioral inhibition scale measuring proneness to anxiety reactions. Our results underscore the importance of emotion and attention interactions in social cognition by demonstrating that multiple levels of processing are involved in the appraisal of emotionally relevant cues in voices, and by showing a modulation of some emotional responses by both the current task-demands and individual differences.

  15. Involvement of the left insula in the ecological validity of the human voice

    PubMed Central

    Tamura, Yuri; Kuriki, Shinji; Nakano, Tamami

    2015-01-01

    A subtle difference between a real human and an artificial object that resembles a human evokes an impression of a large qualitative difference between them. This suggests the existence of a neural mechanism that processes the sense of humanness. To examine the presence of such a mechanism, we compared the behavioral and brain responses of participants who listened to human and artificial singing voices created from vocal fragments of a real human voice. The behavioral experiment showed that the song sung by human voices more often elicited positive feelings and feelings of humanness than the same song sung by artificial voices, although the lyrics, melody, and rhythm were identical. Functional magnetic resonance imaging revealed significantly higher activation in the left posterior insula in response to human voices than in response to artificial voices. Insular activation was not merely evoked by differences in acoustic features between the voices. Therefore, these results suggest that the left insula participates in the neural processing of the ecological quality of the human voice. PMID:25739519

  16. Dialogism and Carnival in Virginia Woolf's "To the Lighthouse": A Bakhtinian Reading

    ERIC Educational Resources Information Center

    Faizi, Hamed; Taghizadeh, Ali

    2015-01-01

    Mikhail Bakhtin's dialogism in a novel promises the creation of a domain of interactive context for different voices which results in a polyphonic discourse. Instead of trying to suppress each other, the voices of the novel interact with the other voices in a way that none of them tries to silence the others, and each one has the opportunity to…

  17. It's not what you hear, it's the way you think about it: appraisals as determinants of affect and behaviour in voice hearers.

    PubMed

    Peters, E R; Williams, S L; Cooke, M A; Kuipers, E

    2012-07-01

    Previous studies have suggested that beliefs about voices mediate the relationship between actual voice experience and behavioural and affective response. We investigated beliefs about voice power (omnipotence), voice intent (malevolence/benevolence) and emotional and behavioural response (resistance/engagement) using the Beliefs About Voices Questionnaire - Revised (BAVQ-R) in 46 voice hearers. Distress was assessed using a wide range of measures: voice-related distress, depression, anxiety, self-esteem and suicidal ideation. Voice topography was assessed using measures of voice severity, frequency and intensity. We predicted that beliefs about voices would show a stronger association with distress than voice topography. Omnipotence had the strongest associations with all measures of distress included in the study whereas malevolence was related to resistance, and benevolence to engagement. As predicted, voice severity, frequency and intensity were not related to distress once beliefs were accounted for. These results concur with previous findings that beliefs about voice power are key determinants of distress in voice hearers, and should be targeted specifically in psychological interventions.

  18. Vibrant Student Voices: Exploring Effects of the Use of Clickers in Large College Courses

    ERIC Educational Resources Information Center

    Hoekstra, Angel

    2008-01-01

    Teachers have begun using student response systems (SRSs) in an effort to enhance the learning process in higher education courses. Research providing detailed information about how interactive technologies affect students as they learn is crucial for professors who seek to improve teaching quality, attendance rates and student learning. This…

  19. Understanding and Developing Interactive Voice Response Systems to Support Online Engagement of Older Adults

    ERIC Educational Resources Information Center

    Brewer, Robin Nicole

    2017-01-01

    Increasingly, people are engaging online and can participate in activities like searching for information, communicating with family and friends, and self-expression. However, some populations, such as older adults, face barriers to online participation like device cost, access, and learnability, which prevent them from reaping the benefits of…

  20. Normal voice processing after posterior superior temporal sulcus lesion.

    PubMed

    Jiahui, Guo; Garrido, Lúcia; Liu, Ran R; Susilo, Tirta; Barton, Jason J S; Duchaine, Bradley

    2017-10-01

    The right posterior superior temporal sulcus (pSTS) shows a strong response to voices, but the cognitive processes generating this response are unclear. One possibility is that this activity reflects basic voice processing. However, several fMRI and magnetoencephalography findings suggest instead that pSTS serves as an integrative hub that combines voice and face information. Here we investigate whether right pSTS contributes to basic voice processing by testing Faith, a patient whose right pSTS was resected, with eight behavioral tasks assessing voice identity perception and recognition, voice sex perception, and voice expression perception. Faith performed normally on all the tasks. Her normal performance indicates right pSTS is not necessary for intact voice recognition and suggests that pSTS activations to voices reflect higher-level processes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Development and pilot testing of daily Interactive Voice Response (IVR) calls to support antiretroviral adherence in India: A mixed-methods pilot study

    PubMed Central

    Swendeman, Dallas; Jana, Smarajit; Ray, Protim; Mindry, Deborah; Das, Madhushree; Bhakta, Bhumi

    2015-01-01

    This two-phase pilot study aimed to design, pilot, and refine an automated Interactive Voice Response (IVR) intervention to support antiretroviral adherence for people living with HIV (PLH), in Kolkata, India. Mixed-methods formative research included a community advisory board (CAB) for IVR message development, a one-month pre-post pilot, post-pilot focus groups, and further message development. Two IVR calls are made daily, timed to patients’ dosing schedules, with brief messages (<1-minute) on strategies for self-management of three domains: medical (adherence, symptoms, co-infections), mental health (social support, stress, positive cognitions), and nutrition and hygiene (per PLH preferences). Three ART appointment reminders are also sent each month. One-month pilot results (n=46, 80% women, 60% sex workers) found significant increases in self-reported ART adherence, both within past three days (p=0.05) and time since missed last dose (p=0.015). Depression was common. Messaging content and assessment domains were expanded for testing in a randomized trial that is currently underway. PMID:25638037

  2. Telemedicine to promote patient safety: Use of phone-based interactive voice response system (IVRS) to reduce adverse safety events in predialysis CKD

    PubMed Central

    Weiner, Shoshana; Fink, Jeffery C.

    2017-01-01

    Chronic kidney disease (CKD) patients have several features conferring upon them a high risk of adverse safety events, which are defined as incidents with unintended harm related to processes of care or medications. These characteristics include impaired renal function, polypharmacy, and frequent health system encounters. The consequences of such events in CKD can include new or prolonged hospitalization, accelerated renal function loss, acute kidney injury, end-stage renal disease and death. Health information technology administered via telemedicine presents opportunities for CKD patients to remotely communicate safety-related findings to providers for the purpose of improving their care. However, many CKD patients have limitations which hinder their use of telemedicine and access to the broad capabilities of health information technology. In this review we summarize previous assessments of the pre-dialysis CKD populations’ proficiency in using telemedicine modalities and describe the use of interactive voice-response system (IVRS) to gauge the safety phenotype of the CKD patient. We discuss the potential for expanded IVRS use in CKD to address the safety threats inherent to this population. PMID:28224940

  3. Using interactive voice response to improve disease management and compliance with acute coronary syndrome best practice guidelines: A randomized controlled trial.

    PubMed

    Sherrard, Heather; Duchesne, Lloyd; Wells, George; Kearns, Sharon Ann; Struthers, Christine

    2015-01-01

    There is evidence from large clinical trials that compliance with standardized best practice guidelines (BPGs) improves survival of acute coronary syndrome (ACS) patients. However, their application is often suboptimal. In this study, the researchers evaluated whether the use of an interactive voice response (IVR) follow-up system improved ACS BPG compliance. This was a single-centre randomized controlled trial (RCT) of 1,608 patients (IVR=803; usual care=805). The IVR group received five automated calls in 12 months. The primary composite outcome was increased medication compliance and decreased adverse events. A significant improvement of 60% in the IVR group for the primary composite outcome was found (RR 1.60, 95% CI: 1.29 to 2.00, p <0.001). There was a significant improvement in medication compliance (p <0.001) and a decrease in unplanned medical visits (p = 0.023). At one year, the majority of patients (85%) responded positively to using the system again. Follow-up by IVR produced positive outcomes in ACS patients.
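
    For orientation, the reported effect (RR 1.60, 95% CI 1.29 to 2.00) is the kind of quantity computed from a 2x2 outcome table as sketched below; the event counts used here are invented for illustration and are not taken from the trial:

```python
import math

def relative_risk(events_tx, n_tx, events_ctrl, n_ctrl, z=1.96):
    """Relative risk with a 95% CI on the log scale (standard Wald method)."""
    rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
    se_log = math.sqrt(1/events_tx - 1/n_tx + 1/events_ctrl - 1/n_ctrl)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Illustrative counts only: the trial arms had 803 (IVR) and 805 (usual care)
# patients, but the event counts below are not reported in the abstract.
print(relative_risk(events_tx=240, n_tx=803, events_ctrl=150, n_ctrl=805))
```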

  4. Development and Pilot Testing of Daily Interactive Voice Response (IVR) Calls to Support Antiretroviral Adherence in India: A Mixed-Methods Pilot Study.

    PubMed

    Swendeman, Dallas; Jana, Smarajit; Ray, Protim; Mindry, Deborah; Das, Madhushree; Bhakta, Bhumi

    2015-06-01

    This two-phase pilot study aimed to design, pilot, and refine an automated interactive voice response (IVR) intervention to support antiretroviral adherence for people living with HIV (PLH), in Kolkata, India. Mixed-methods formative research included a community advisory board for IVR message development, 1-month pre-post pilot, post-pilot focus groups, and further message development. Two IVR calls are made daily, timed to patients' dosing schedules, with brief messages (<1-min) on strategies for self-management of three domains: medical (adherence, symptoms, co-infections), mental health (social support, stress, positive cognitions), and nutrition and hygiene (per PLH preferences). Three ART appointment reminders are also sent each month. One-month pilot results (n = 46, 80 % women, 60 % sex workers) found significant increases in self-reported ART adherence, both within past three days (p = 0.05) and time since missed last dose (p = 0.015). Depression was common. Messaging content and assessment domains were expanded for testing in a randomized trial currently underway.

  5. Individual versus Interactive Task-Based Performance through Voice-Based Computer-Mediated Communication

    ERIC Educational Resources Information Center

    Granena, Gisela

    2016-01-01

    Interaction is a necessary condition for second language (L2) learning (Long, 1980, 1996). Research in computer-mediated communication has shown that interaction opportunities make learners pay attention to form in a variety of ways that promote L2 learning. This research has mostly investigated text-based rather than voice-based interaction. The…

  6. The interaction of tone with voicing and foot structure: evidence from Kera phonetics and phonology

    NASA Astrophysics Data System (ADS)

    Pearce, Mary Dorothy

    This thesis uses acoustic measurements as a basis for the phonological analysis of the interaction of tone with voicing and foot structure in Kera (a Chadic language). In both tone spreading and vowel harmony, the iambic foot acts as a domain for spreading. Further evidence for the foot comes from measurements of duration, intensity and vowel quality. Kera is unusual in combining a tone system with a partially independent metrical system based on iambs. In words containing more than one foot, the foot is the tone bearing unit (TBU), but in shorter words, the TBU is the syllable. In perception and production experiments, results show that Kera speakers, unlike English and French speakers, use fundamental frequency as the principal cue to the voicing contrast. Voice onset time (VOT) has only a minor role. Historically, tones probably developed from voicing through a process of tonogenesis, but synchronically, the feature voice is no longer contrastive and VOT is used in an enhancing role. Some linguists have claimed that Kera is a key example for their controversial theory of long-distance voicing spread. But as voice is not part of Kera phonology, this thesis gives counter-evidence to the voicing-spread claim. An important finding from the experiments is that the phonological grammars differ between village women, men moving to town, and town men. These differences are attributed to French contact. The interaction between Kera tone and voicing and contact with French have produced changes from a 2-way voicing contrast, through a 3-way tonal contrast, to a 2-way voicing contrast plus another contrast with short VOT. These diachronic and synchronic tone/voicing facts are analysed using laryngeal features and Optimality Theory. This thesis provides a body of new data, detailed acoustic measurements, and an analysis incorporating current theoretical issues in phonology, which make it of interest to Africanists and theoreticians alike.

  7. Comparison of interactive voice response, patient mailing, and mailed registry to encourage screening for osteoporosis: a randomized controlled trial.

    PubMed

    Heyworth, L; Kleinman, K; Oddleifson, S; Bernstein, L; Frampton, J; Lehrer, M; Salvato, K; Weiss, T W; Simon, S R; Connelly, M

    2014-05-01

    Guidelines recommend screening for osteoporosis with bone mineral density (BMD) testing in menopausal women, particularly those with additional risk factors for fracture. Many eligible women remain unscreened. This randomized study demonstrates that a single outreach interactive voice response phone call improves rates of BMD screening among high-risk women age 50-64. Osteoporotic fractures are a major cause of disability and mortality. Guidelines recommend screening with BMD for menopausal women, particularly those with additional risk factors for fracture. However, many women remain unscreened. We examined whether telephonic interactive voice response (IVR) or patient mailing could increase rates of BMD testing in high risk, menopausal women. We studied 4,685 women age 50-64 years within a not-for-profit health plan in the United States. All women had risk factors for developing osteoporosis and no prior BMD testing or treatment for osteoporosis. Patients were randomly allocated to usual care, usual care plus IVR, or usual care plus mailed educational materials. To avoid contamination, patients within a single primary care physician practice were randomized to receive the same intervention. The primary endpoint was BMD testing at 12 months. Secondary outcomes included BMD testing at 6 months and medication use at 12 months. Mean age was 57 years. Baseline demographic and clinical characteristics were similar across the three study groups. In adjusted analyses, the incidence of BMD screening was 24.6% in the IVR group compared with 18.6% in the usual care group (P < 0.001). There was no difference between the patient mailing group and the usual care group (P = 0.3). In this large community-based randomized trial of high risk, menopausal women age 50-64, IVR, but not patient mailing, improved rates of BMD screening. IVR remains a viable strategy to incorporate in population screening interventions.

  8. Independent Neuronal Representation of Facial and Vocal Identity in the Monkey Hippocampus and Inferotemporal Cortex.

    PubMed

    Sliwa, Julia; Planté, Aurélie; Duhamel, Jean-René; Wirth, Sylvia

    2016-03-01

    Social interactions make up, to a large extent, the prime material of episodic memories. We therefore asked how social signals are coded by neurons in the hippocampus. The human hippocampus is home to neurons representing familiar individuals in an abstract and invariant manner (Quian Quiroga et al. 2009). In contradistinction, activity of rat hippocampal cells is only weakly altered by the presence of other rats (von Heimendahl et al. 2012; Zynyuk et al. 2012). We probed the activity of monkey hippocampal neurons to faces and voices of familiar and unfamiliar individuals (monkeys and humans). Thirty-one percent of neurons recorded without prescreening responded to faces or to voices. Yet responses to faces were more informative about individuals than responses to voices, and neuronal responses to facial and vocal identities were not correlated, indicating that in our sample identity information was not conveyed in an invariant manner as in human neurons. Overall, responses displayed by monkey hippocampal neurons were similar to those of neurons recorded simultaneously in inferotemporal cortex, whose role in face perception is established. These results demonstrate that the monkey hippocampus participates in the read-out of social information, contrary to the rat hippocampus, but possibly lacks the explicit conceptual coding found in humans.

  9. Electronic Delivery System: Presentation Features.

    DTIC Science & Technology

    1981-04-01

    [Fragmentary OCR of presentation slides; the recoverable points are:] The functionality of the presentation, not its replication of nature, is what counts. When a pointing device (e.g., a mouse) is used for inputting responses, interactions can be very efficient. Touch panels are a natural interaction mechanism. Voice input is used where hands or eyes are busy (e.g., for maintenance aiding) and is a natural means of communication.

  10. Is children's listening effort in background noise influenced by the speaker's voice quality?

    PubMed

    Sahlén, Birgitta; Haake, Magnus; von Lochow, Heike; Holm, Lucas; Kastberg, Tobias; Brännström, K Jonas; Lyberg-Åhlander, Viveka

    2018-07-01

    The present study aims to explore the influence of voice quality on listening effort in children performing a language comprehension test with sentences of increasing difficulty. Listening effort is also explored in relation to gender. The study has a between-groups design. Ninety-three mainstreamed children aged 8;2 to 9;3 with typical language development participated. The children were randomly assigned to two groups (n = 46/47) with equal allocation of boys and girls, and for the analysis into four groups depending on gender and voice condition. Working memory capacity and executive functions were tested in quiet. A digital version of a language comprehension test (the TROG-2) was used to measure the effect of voice quality on listening effort, measured as response time in a forced-choice paradigm. The groups listened to sentences through recordings of the same female voice, one group with a typical voice and one with a dysphonic voice, both in competing multi-talker babble noise. Response times were logged after a time buffer between the sentence ending and the indication of a response. There was a significant increase in response times with increased task difficulty, and response times between the two voice conditions differed significantly. The girls in the dysphonic condition were slower with increasing task difficulty. A dysphonic voice clearly adds to the noise burden, and listening effort is greater in girls than in boys when the teacher speaks with a dysphonic voice against a noisy background. These findings might mirror gender differences in coping strategies in challenging contexts and have important implications for education.

  11. Reading affect in the face and voice: neural correlates of interpreting communicative intent in children and adolescents with autism spectrum disorders.

    PubMed

    Wang, A Ting; Lee, Susan S; Sigman, Marian; Dapretto, Mirella

    2007-06-01

    Understanding a speaker's communicative intent in everyday interactions is likely to draw on cues such as facial expression and tone of voice. Prior research has shown that individuals with autism spectrum disorders (ASD) show reduced activity in brain regions that respond selectively to the face and voice. However, there is also evidence that activity in key regions can be increased if task demands allow for explicit processing of emotion. To examine the neural circuitry underlying impairments in interpreting communicative intentions in ASD using irony comprehension as a test case, and to determine whether explicit instructions to attend to facial expression and tone of voice will elicit more normative patterns of brain activity. Eighteen boys with ASD (aged 7-17 years, full-scale IQ >70) and 18 typically developing (TD) boys underwent functional magnetic resonance imaging at the Ahmanson-Lovelace Brain Mapping Center, University of California, Los Angeles. Blood oxygenation level-dependent brain activity during the presentation of short scenarios involving irony. Behavioral performance (accuracy and response time) was also recorded. Reduced activity in the medial prefrontal cortex and right superior temporal gyrus was observed in children with ASD relative to TD children during the perception of potentially ironic vs control scenarios. Importantly, a significant group x condition interaction in the medial prefrontal cortex showed that activity was modulated by explicit instructions to attend to facial expression and tone of voice only in the ASD group. Finally, medial prefrontal cortex activity was inversely related to symptom severity in children with ASD such that children with greater social impairment showed less activity in this region. Explicit instructions to attend to facial expression and tone of voice can elicit increased activity in the medial prefrontal cortex, part of a network important for understanding the intentions of others, in children with ASD. These findings suggest a strategy for future intervention research.

  12. Single-channel voice-response-system program documentation volume I : system description

    DOT National Transportation Integrated Search

    1977-01-01

    This report documents the design and implementation of a Voice Response System (VRS) using Adaptive Differential Pulse Code Modulation (ADPCM) voice coding. Implemented on a Digital Equipment Corporation PDP-11/20, this VRS supports a single ...
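
    ADPCM encodes each sample as a small quantized difference from a prediction, with a step size that adapts to the signal. The following toy sketch illustrates that idea only; it is not the coder implemented on the PDP-11 system described in the report, and all parameters are illustrative.

    ```python
    import numpy as np

    def adpcm_toy_encode(samples):
        """Toy ADPCM: quantize the prediction error to 2 bits with an adaptive step."""
        predicted, step = 0.0, 0.02
        codes = []
        for x in samples:
            error = x - predicted
            code = int(np.clip(np.rint(error / step), -2, 1))  # 2-bit code in [-2, 1]
            codes.append(code)
            predicted += code * step                           # decoder-matched prediction
            step *= 1.5 if abs(code) == 2 else 0.9             # adapt the step size
            step = float(np.clip(step, 1e-3, 0.5))
        return codes

    def adpcm_toy_decode(codes):
        predicted, step = 0.0, 0.02
        out = []
        for code in codes:
            predicted += code * step
            out.append(predicted)
            step *= 1.5 if abs(code) == 2 else 0.9
            step = float(np.clip(step, 1e-3, 0.5))
        return out

    # Round-trip a short synthetic tone at an 8 kHz sampling rate.
    t = np.arange(0, 0.01, 1 / 8000)
    signal = 0.3 * np.sin(2 * np.pi * 200 * t)
    decoded = adpcm_toy_decode(adpcm_toy_encode(signal))
    ```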

  13. Voice Response Systems Technology.

    ERIC Educational Resources Information Center

    Gerald, Jeanette

    1984-01-01

    Examines two methods of generating synthetic speech in voice response systems, which allow computers to communicate in human terms (speech), using human interface devices (ears): phoneme and reconstructed voice systems. Considerations prior to implementation, current and potential applications, glossary, directory, and introduction to Input Output…

  14. Practical applications of interactive voice technologies: Some accomplishments and prospects

    NASA Technical Reports Server (NTRS)

    Grady, Michael W.; Hicklin, M. B.; Porter, J. E.

    1977-01-01

    A technology assessment of the application of computers and electronics to complex systems is presented. Three existing systems which utilize voice technology (speech recognition and speech generation) are described. Future directions in voice technology are also described.

  15. Sounds of Education: Teacher Role and Use of Voice in Interactions with Young Children

    ERIC Educational Resources Information Center

    Koch, Anette Boye

    2017-01-01

    Voice is a basic tool in communication between adults. However, in early educational settings, adult professionals use their voices in different paralinguistic ways when they communicate with children. A teacher's use of voice is important because it serves to communicate attitudes and emotions in ways that are often ignored in early childhood…

  16. 17 Ways to Say Yes: Toward Nuanced Tone of Voice in AAC and Speech Technology

    PubMed Central

    Pullin, Graham; Hennig, Shannon

    2015-01-01

    People with complex communication needs who use speech-generating devices have very little expressive control over their tone of voice. Despite its importance in human interaction, the issue of tone of voice remains all but absent from AAC research and development. In this paper, we describe three interdisciplinary projects, past, present and future: the critical design collection Six Speaking Chairs has provoked deeper discussion and inspired a social model of tone of voice; the speculative concept Speech Hedge illustrates challenges and opportunities in designing more expressive user interfaces; the pilot project Tonetable could enable participatory research and seed a research network around tone of voice. We speculate that more radical interactions might expand the frontiers of AAC and disrupt speech technology as a whole. PMID:25965913

  17. When the face fits: recognition of celebrities from matching and mismatching faces and voices.

    PubMed

    Stevenage, Sarah V; Neil, Greg J; Hamlin, Iain

    2014-01-01

    The results of two experiments are presented in which participants engaged in a face-recognition or a voice-recognition task. The stimuli were face-voice pairs in which the face and voice were co-presented and were either "matched" (same person), "related" (two highly associated people), or "mismatched" (two unrelated people). Analysis in both experiments confirmed that accuracy and confidence in face recognition were consistently high regardless of the identity of the accompanying voice. However, accuracy of voice recognition was increasingly affected as the relationship between voice and accompanying face declined. Moreover, when considering self-reported confidence in voice recognition, confidence remained high for correct responses despite the proportion of these responses declining across conditions. These results converged with existing evidence indicating the vulnerability of voice recognition as a relatively weak signaller of identity, and results are discussed in the context of a person-recognition framework.

  18. Double Voicing and Personhood in Collaborative Life Writing about Autism: the Transformative Narrative of Carly's Voice.

    PubMed

    Orlando, Monica

    2018-06-01

    Collaborative memoirs by co-writers with and without autism can enable the productive interaction of the voices of the writers in ways that can empower rather than exploit the disabled subject. Carly's Voice, co-written by Arthur Fleischmann and his autistic daughter Carly, demonstrates the capacity for such life narratives to facilitate the relational interaction between writers in the negotiation of understandings of disability. Though the text begins by focusing on the limitations of life with autism, it develops into a collaboration which helps both writers move toward new ways of understanding disability and their own and one another's life stories.

  19. Design and Implementation of an Interactive Website for Pediatric Voice Therapy-The Concept of In-Between Care: A Telehealth Model.

    PubMed

    Doarn, Charles R; Zacharias, Stephanie; Keck, Casey Stewart; Tabangin, Meredith; DeAlarcon, Alessandro; Kelchner, Lisa

    2018-06-05

    This article describes the design and implementation of a web-based portal developed to provide supported home practice between weekly voice therapy sessions delivered through telehealth to children with voice disorders. This in-between care consisted of supported home practice that was remotely monitored by speech-language pathologists (SLPs). A web-based voice therapy portal (VTP) was developed by an interdisciplinary team of SLPs (specialized in pediatric voice therapy), telehealth specialists, biomedical informaticians, and interface designers as a platform on which participants could complete voice therapy home practice. The VTP was subsequently field tested in a group of children with voice disorders participating in a larger telehealth study. Building the VTP for supported home practice for pediatric voice therapy was challenging but successful. Key interactive features of the final site included 11 vocal hygiene questions, traditional voice therapy exercises grouped into levels, audio/visual voice therapy demonstrations, a store-and-retrieval system for voice samples, a message/chat function, written guidelines for weekly therapy exercises, and questionnaires for parents to complete after each therapy session. Ten participants (9-14 years of age) diagnosed with a voice disorder were enrolled for eight weekly telehealth voice therapy sessions with follow-up in-between care provided using the VTP. The development and implementation of the VTP as a novel platform for the delivery of voice therapy home practice sessions were effective. We found that a versatile individual who can work with all project staff (speaking the language of both SLPs and information technologists) is essential to the development process. Once the website was established, participants and SLPs effectively utilized the web-based VTP. They found it feasible and useful for needed in-between care and reinforcement of therapeutic exercises.
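
    The article does not publish any of the portal's code; purely to illustrate the kind of record a store-and-retrieval system like the one described might keep, here is a hypothetical sketch in which every field name is invented.

    ```python
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List, Optional

    @dataclass
    class PracticeSession:
        """Hypothetical record of one home-practice session in a voice-therapy portal."""
        participant_id: str
        session_time: datetime
        exercise_level: int                      # exercises grouped into levels
        hygiene_answers: List[bool]              # e.g., 11 yes/no vocal-hygiene questions
        voice_sample_uri: Optional[str] = None   # stored sample for later SLP review
        chat_messages: List[str] = field(default_factory=list)
        parent_questionnaire_done: bool = False

    # Example use with invented values.
    session = PracticeSession(
        participant_id="P01",
        session_time=datetime(2017, 3, 1, 17, 30),
        exercise_level=2,
        hygiene_answers=[True] * 11,
    )
    ```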

  20. Frontal brain activation in premature infants' response to auditory stimuli in neonatal intensive care unit.

    PubMed

    Saito, Yuri; Fukuhara, Rie; Aoyama, Shiori; Toshima, Tamotsu

    2009-07-01

    Focusing on the very few contacts with the mother's voice that NICU infants have in the womb as well as after birth, the present study examined whether such infants can discriminate between their mothers' utterances and those of female nurses in terms of the emotional bonding that is facilitated by prosodic utterances. Twenty-six premature infants were included in this study, and their cerebral blood flows were measured by near-infrared spectroscopy. They were exposed to auditory stimuli in the form of utterances made by their mothers and female nurses. A two (stimulus: mother and nurse) x two (recording site: right frontal area and left frontal area) analysis of variance (ANOVA) for these relative oxy-Hb values was conducted. The ANOVA showed a significant interaction between stimulus and recording site. The mother's and the nurse's voices elicited similar activation in the left frontal area but different reactions in the right frontal area. We presume that the nurse's voice might become associated with pain and stress for premature infants. Our results showed that the premature infants reacted differently to the different voice stimuli. Therefore, we presume that both mothers' and nurses' voices represent positive stimuli for premature infants because both activate the frontal brain. Accordingly, we cannot explain our results only in terms of a state-dependent marker for infantile individual differences, but must also address the stressful trigger of nurses' voices for NICU infants.

  1. Mechanics of human voice production and control

    PubMed Central

    Zhang, Zhaoyan

    2016-01-01

    As the primary means of communication, voice plays an important role in daily life. Voice also conveys personal information such as social status, personal traits, and the emotional state of the speaker. Mechanically, voice production involves complex fluid-structure interaction within the glottis and its control by laryngeal muscle activation. An important goal of voice research is to establish a causal theory linking voice physiology and biomechanics to how speakers use and control voice to communicate meaning and personal information. Establishing such a causal theory has important implications for clinical voice management, voice training, and many speech technology applications. This paper provides a review of voice physiology and biomechanics, the physics of vocal fold vibration and sound production, and laryngeal muscular control of the fundamental frequency of voice, vocal intensity, and voice quality. Current efforts to develop mechanical and computational models of voice production are also critically reviewed. Finally, issues and future challenges in developing a causal theory of voice production and perception are discussed. PMID:27794319

  2. Mechanics of human voice production and control.

    PubMed

    Zhang, Zhaoyan

    2016-10-01

    As the primary means of communication, voice plays an important role in daily life. Voice also conveys personal information such as social status, personal traits, and the emotional state of the speaker. Mechanically, voice production involves complex fluid-structure interaction within the glottis and its control by laryngeal muscle activation. An important goal of voice research is to establish a causal theory linking voice physiology and biomechanics to how speakers use and control voice to communicate meaning and personal information. Establishing such a causal theory has important implications for clinical voice management, voice training, and many speech technology applications. This paper provides a review of voice physiology and biomechanics, the physics of vocal fold vibration and sound production, and laryngeal muscular control of the fundamental frequency of voice, vocal intensity, and voice quality. Current efforts to develop mechanical and computational models of voice production are also critically reviewed. Finally, issues and future challenges in developing a causal theory of voice production and perception are discussed.
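
    The review concerns the physiology and biomechanics behind fundamental frequency (F0) rather than any particular measurement algorithm; as a companion illustration of what F0 denotes acoustically, here is a minimal autocorrelation-based estimate on a synthetic voiced signal. The method and parameters are the editor's assumptions, not material from the paper.

    ```python
    import numpy as np

    def estimate_f0(signal, sample_rate, fmin=75.0, fmax=400.0):
        """Crude F0 estimate: pick the autocorrelation peak in a plausible pitch range."""
        signal = signal - np.mean(signal)
        corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
        lag_min = int(sample_rate / fmax)
        lag_max = int(sample_rate / fmin)
        best_lag = lag_min + int(np.argmax(corr[lag_min:lag_max]))
        return sample_rate / best_lag

    sr = 16000
    t = np.arange(0, 0.1, 1 / sr)
    # Synthetic "voiced" signal: a 120 Hz fundamental plus two weaker harmonics.
    x = (np.sin(2 * np.pi * 120 * t)
         + 0.5 * np.sin(2 * np.pi * 240 * t)
         + 0.25 * np.sin(2 * np.pi * 360 * t))
    print(round(estimate_f0(x, sr), 1))   # close to 120
    ```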

  3. The professional voice.

    PubMed

    Benninger, M S

    2011-02-01

    The human voice is not only the key to human communication but also serves as the primary musical instrument. Many professions rely on the voice, but the most noticeable and visible are singers. Care of the performing voice requires a thorough understanding of the interaction between the anatomy and physiology of voice production, along with an awareness of the interrelationships between vocalisation, acoustic science and non-vocal components of performance. This review gives an overview of the care and prevention of professional voice disorders by describing the unique and integrated anatomy and physiology of singing, the roles of development and training, and the importance of the voice care team.

  4. Reasons for non-adherence to cardiometabolic medications, and acceptability of an interactive voice response intervention in patients with hypertension and type 2 diabetes in primary care: a qualitative study

    PubMed Central

    Sutton, Stephen

    2017-01-01

    Objectives This study explored the reasons for patients' non-adherence to cardiometabolic medications, and tested the acceptability of interactive voice response (IVR) as a way to address these reasons and support patients between primary care consultations. Design, method, participants and setting The study included face-to-face interviews with 19 patients with hypertension and/or type 2 diabetes mellitus, selected from primary care databases and presumed to be non-adherent. Thirteen of these patients pretested elements of the IVR intervention a few months later, using a think-aloud protocol. Five practice nurses were interviewed. Data were analysed using multiperspective and longitudinal thematic analysis. Results Negative beliefs about taking medications, the complexity of prescribed medication regimens, and a limited ability to cope with the underlying affective state, within challenging contexts, were mentioned as important reasons for non-adherence. Nurses reported time constraints to address each patient's different reasons for non-adherence, and limited efficacy in supporting patients between primary care consultations. Patients gave positive experiential feedback about the IVR messages as a way to support them in taking their medicines, and provided recommendations for intervention content and delivery mode. Specifically, they liked the voice delivering the messages and the voice recognition software. For intervention content, they preferred messages that were tailored, and included messages with 'information about health consequences', 'action plans', or simple reminders for performing the behaviour. Conclusions Patients with hypertension and/or type 2 diabetes, and practice nurses, suggested messages tailored to each patient's reasons for non-adherence. Participants recommended IVR as an acceptable platform to support adherence to cardiometabolic medications between primary care consultations. Future studies could usefully test the acceptability and feasibility of tailored IVR interventions to support medication adherence, as an adjunct to primary care. PMID:28801402

  5. Using Sarcasm to Compliment: Context, Intonation, and the Perception of Statements with a Negative Literal Meaning.

    PubMed

    Voyer, Daniel; Vu, Janie P

    2016-06-01

    The present study extended findings of contrast effects in an auditory sarcasm perception task manipulating context and tone of voice. In contrast to previous research that had used sarcastic and sincere statements with a positive literal meaning, the present experiment examined how statements with a negative literal meaning would affect the results. Eighty-four undergraduate students completed a task in which an ambiguous, positive, or negative computer-generated context spoken in a flat emotional tone was followed by a statement with a negative literal meaning spoken in a sincere or sarcastic tone of voice. Results for both the proportion of sarcastic responses and response time showed a significant context by tone interaction, reflecting relatively fast sarcastic responses for the situation in which sarcasm would turn the statement into a compliment (positive context, sarcastic intonation) and fast sincere responses when the literal insult was emphasized (negative context, sincere intonation). However, the ambiguous context produced a pattern of results modulated by the tone of voice that was similar to that observed when the context/intonation pairing could not be interpreted as a compliment or an insult (negative context/sarcastic intonation or positive context/sincere intonation). These findings add to the body of literature suggesting that situational contrast, context, and intonation influence how sarcasm is perceived while demonstrating the importance of the literal meaning in sarcasm perception. They can be interpreted in the context of models of sarcasm comprehension that postulate two stages of processing.

  6. Design and realization of intelligent tourism service system based on voice interaction

    NASA Astrophysics Data System (ADS)

    Hu, Lei-di; Long, Yi; Qian, Cheng-yang; Zhang, Ling; Lv, Guo-nian

    2008-10-01

    Voice technology is an important means of making tourism service systems more intelligent and user-friendly. Starting from application needs and system composition, the paper presents an overall framework for an intelligent voice-based tourism service system consisting of a presentation layer, a Web services layer, and a tourism application service layer. On this basis, the paper further elaborates the implementation of the system and its key technologies, including intelligent voice interaction, seamless integration of multiple data sources, location-aware guide services, and tourism safety control. Finally, a prototype tourism service system based on the situation of Nanjing tourism is realized.
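
    The abstract names a three-layer architecture but gives no implementation details; the schematic sketch below shows one way such layers could hand a voice request off to a location-aware lookup. Every function, name, and data item is invented for illustration.

    ```python
    # Schematic sketch of the three-layer idea described above; all names are invented
    # and the actual system's interfaces are not documented in the abstract.

    def recognize_utterance(audio_bytes: bytes) -> str:
        """Presentation layer: stand-in for speech recognition of the visitor's request."""
        return "nearest museum"          # placeholder transcript

    def tourism_service(query: str, location: tuple) -> str:
        """Application service layer: location-aware guide lookup (hard-coded demo data)."""
        catalogue = {"nearest museum": "Nanjing Museum, 1.2 km north of your position"}
        return catalogue.get(query, "Sorry, no information found.")

    def synthesize_reply(text: str) -> bytes:
        """Presentation layer: stand-in for text-to-speech output."""
        return text.encode("utf-8")

    def handle_voice_request(audio_bytes: bytes, location: tuple) -> bytes:
        """Web services layer: glue recognition, guide lookup, and synthesis together."""
        query = recognize_utterance(audio_bytes)
        answer = tourism_service(query, location)
        return synthesize_reply(answer)

    reply_audio = handle_voice_request(b"...", (32.06, 118.79))
    ```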

  7. Effects of an Automated Telephone Support System on Caregiver Burden and Anxiety: Findings from the REACH for TLC Intervention Study

    ERIC Educational Resources Information Center

    Mahoney, Diane Feeney; Tarlow, Barbara J.; Jones, Richard N.

    2003-01-01

    Purpose: We determine the main outcome effects of a 12-month computer-mediated automated interactive voice response (IVR) intervention designed to assist family caregivers managing persons with disruptive behaviors related to Alzheimer's disease (AD). Design and Methods: We conducted a randomized controlled study of 100 caregivers, 51 in the usual…

  8. The MetLife Survey of the American Teacher: Challenges for School Leadership

    ERIC Educational Resources Information Center

    MetLife, Inc., 2013

    2013-01-01

    "The MetLife Survey of the American Teacher: Challenges for School Leadership" (2012) was conducted by Harris Interactive and is the twenty-ninth in a series sponsored annually by MetLife since 1984 to give voice to those closest to the classroom. This report examines the views of teachers and principals on the responsibilities and challenges…

  9. Integrating cues of social interest and voice pitch in men's preferences for women's voices.

    PubMed

    Jones, Benedict C; Feinberg, David R; Debruine, Lisa M; Little, Anthony C; Vukovic, Jovana

    2008-04-23

    Most previous studies of vocal attractiveness have focused on preferences for physical characteristics of voices such as pitch. Here we examine the content of vocalizations in interaction with such physical traits, finding that vocal cues of social interest modulate the strength of men's preferences for raised pitch in women's voices. Men showed stronger preferences for raised pitch when judging the voices of women who appeared interested in the listener than when judging the voices of women who appeared relatively disinterested in the listener. These findings show that voice preferences are not determined solely by physical properties of voices and that men integrate information about voice pitch and the degree of social interest expressed by women when forming voice preferences. Women's preferences for raised pitch in women's voices were not modulated by cues of social interest, suggesting that the integration of cues of social interest and voice pitch when men judge the attractiveness of women's voices may reflect adaptations that promote efficient allocation of men's mating effort.

  10. Designing Audience-Centered Interactive Voice Response Messages to Promote Cancer Screenings Among Low-Income Latinas

    PubMed Central

    De Jesus, Maria; Sprunck-Harrild, Kim M.; Tellez, Trinidad; Bastani, Roshan; Battaglia, Tracy A.; Michaelson, James S.; Emmons, Karen M.

    2014-01-01

    Introduction Cancer screening rates among Latinas are suboptimal. The objective of this study was to explore how Latinas perceive cancer screening and the use and design of interactive voice response (IVR) messages to prompt scheduling of 1 or more needed screenings. Methods Seven focus groups were conducted with Latina community health center patients (n = 40) in need of 1 or more cancer screenings: 5 groups were of women in need of 1 cancer screening (breast, cervical, or colorectal), and 2 groups were of women in need of multiple screenings. A bilingual researcher conducted all focus groups in Spanish using a semistructured guide. Focus groups were recorded, transcribed, and translated into English for analysis. Emergent themes were identified by using thematic content analysis. Results Participants were familiar with cancer screening and viewed it positively, although barriers to screening were identified (unaware overdue for screening, lack of physician referral, lack of insurance or insufficient insurance coverage, embarrassment or fear of screening procedures, fear of screening outcomes). Women needing multiple screenings voiced more concern about screening procedures, whereas women in need of a single screening expressed greater worry about the screening outcome. Participants were receptive to receiving IVR messages and believed that culturally appropriate messages that specified needed screenings while emphasizing the benefit of preventive screening would motivate them to schedule needed screenings. Conclusion Participants’ receptiveness to IVR messages suggests that these messages may be an acceptable strategy to promote cancer screening among underserved Latina patients. Additional research is needed to determine the effectiveness of IVR messages in promoting completion of cancer screening. PMID:24625364

  11. Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus

    PubMed Central

    2017-01-01

    Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS. SIGNIFICANCE STATEMENT Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of the pSTS. Different pSTS regions prefer visually presented faces containing either a moving mouth or moving eyes, but only mouth-preferring regions respond strongly to voices. PMID:28179553

  12. Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus.

    PubMed

    Zhu, Lin L; Beauchamp, Michael S

    2017-03-08

    Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS. SIGNIFICANCE STATEMENT Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of the pSTS. Different pSTS regions prefer visually presented faces containing either a moving mouth or moving eyes, but only mouth-preferring regions respond strongly to voices. Copyright © 2017 the authors 0270-6474/17/372697-12$15.00/0.

  13. Determinants of structural choice in visually situated sentence production.

    PubMed

    Myachykov, Andriy; Garrod, Simon; Scheepers, Christoph

    2012-11-01

    Three experiments investigated how perceptual, structural, and lexical cues affect structural choices during English transitive sentence production. Participants described transitive events under combinations of visual cueing of attention (toward either agent or patient) and structural priming with and without semantic match between the notional verb in the prime and the target event. Speakers had a stronger preference for passive-voice sentences (1) when their attention was directed to the patient, (2) upon reading a passive-voice prime, and (3) when the verb in the prime matched the target event. The verb-match effect was the by-product of an interaction between visual cueing and verb match: the increase in the proportion of passive-voice responses with matching verbs was limited to the agent-cued condition. Persistence of visual cueing effects in the presence of both structural and lexical cues suggests a strong coupling between referent-directed visual attention and Subject assignment in a spoken sentence. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. Understanding The Neural Mechanisms Involved In Sensory Control Of Voice Production

    PubMed Central

    Parkinson, Amy L.; Flagmeier, Sabina G.; Manes, Jordan L.; Larson, Charles R.; Rogers, Bill; Robin, Donald A.

    2012-01-01

    Auditory feedback is important for the control of voice fundamental frequency (F0). In the present study we used neuroimaging to identify regions of the brain responsible for sensory control of the voice. We used a pitch-shift paradigm where subjects respond to an alteration, or shift, of voice pitch auditory feedback with a reflexive change in F0. To determine the neural substrates involved in these audio-vocal responses, subjects underwent fMRI scanning while vocalizing with or without pitch-shifted feedback. The comparison of shifted and unshifted vocalization revealed activation bilaterally in the superior temporal gyrus (STG) in response to the pitch shifted feedback. We hypothesize that the STG activity is related to error detection by auditory error cells located in the superior temporal cortex and efference copy mechanisms whereby this region is responsible for the coding of a mismatch between actual and predicted voice F0. PMID:22406500
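
    Pitch-shift paradigms perturb the pitch of the speaker's auditory feedback by a fixed number of cents. The sketch below shows the cents-to-frequency-ratio conversion and a crude resampling shift on a synthetic signal; the shift size and method are illustrative assumptions, not details of this study's apparatus.

    ```python
    import numpy as np

    def shift_pitch_cents(signal, cents):
        """Crude pitch shift by resampling; changes duration as a side effect."""
        ratio = 2.0 ** (cents / 1200.0)            # 100 cents = one semitone
        old_idx = np.arange(len(signal))
        new_idx = np.arange(0, len(signal) - 1, ratio)
        return np.interp(new_idx, old_idx, signal)

    sr = 16000
    t = np.arange(0, 0.05, 1 / sr)
    voice_like = np.sin(2 * np.pi * 150 * t)       # stand-in for a vocalization at 150 Hz
    shifted = shift_pitch_cents(voice_like, +100)  # feedback raised by one semitone
    ```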

  15. Crossing Cultures with Multi-Voiced Journals

    ERIC Educational Resources Information Center

    Styslinger, Mary E.; Whisenant, Alison

    2004-01-01

    In this article, the authors discuss the benefits of using multi-voiced journals as a teaching strategy in reading instruction. Multi-voiced journals, an adaptation of dual-voiced journals, encourage responses to reading in the varied, cultured voices of characters. They are similar to reading journals in that they prod students to connect to the lives…

  16. Vocal recognition of owners by domestic cats (Felis catus).

    PubMed

    Saito, Atsuko; Shinozuka, Kazutaka

    2013-07-01

    Domestic cats have had a 10,000-year history of cohabitation with humans and seem to have the ability to communicate with humans. However, this has not been widely examined. We studied 20 domestic cats to investigate whether they could recognize their owners by using voices that called out the subjects' names, with a habituation-dishabituation method. While the owner was out of the cat's sight, we played three different strangers' voices serially, followed by the owner's voice. We recorded the cat's reactions to the voices and categorized them into six behavioral categories. In addition, ten naive raters rated the cats' response magnitudes. The cats responded to human voices not by communicative behavior (vocalization and tail movement), but by orienting behavior (ear movement and head movement). This tendency did not change even when they were called by their owners. Of the 20 cats, 15 demonstrated a lower response magnitude to the third voice than to the first voice. These habituated cats showed a significant rebound in response to the subsequent presentation of their owners' voices. This result indicates that cats are able to use vocal cues alone to distinguish between humans.

  17. It doesn't matter what you say: FMRI correlates of voice learning and recognition independent of speech content.

    PubMed

    Zäske, Romi; Awwad Shiekh Hasan, Bashar; Belin, Pascal

    2017-09-01

    Listeners can recognize newly learned voices from previously unheard utterances, suggesting the acquisition of high-level speech-invariant voice representations during learning. Using functional magnetic resonance imaging (fMRI) we investigated the anatomical basis underlying the acquisition of voice representations for unfamiliar speakers independent of speech, and their subsequent recognition among novel voices. Specifically, listeners studied voices of unfamiliar speakers uttering short sentences and subsequently classified studied and novel voices as "old" or "new" in a recognition test. To investigate "pure" voice learning, i.e., independent of sentence meaning, we presented German sentence stimuli to non-German speaking listeners. To disentangle stimulus-invariant and stimulus-dependent learning, during the test phase we contrasted a "same sentence" condition in which listeners heard speakers repeating the sentences from the preceding study phase, with a "different sentence" condition. Voice recognition performance was above chance in both conditions although, as expected, performance was higher for same than for different sentences. During study phases activity in the left inferior frontal gyrus (IFG) was related to subsequent voice recognition performance and same versus different sentence condition, suggesting an involvement of the left IFG in the interactive processing of speaker and speech information during learning. Importantly, at test reduced activation for voices correctly classified as "old" compared to "new" emerged in a network of brain areas including temporal voice areas (TVAs) of the right posterior superior temporal gyrus (pSTG), as well as the right inferior/middle frontal gyrus (IFG/MFG), the right medial frontal gyrus, and the left caudate. This effect of voice novelty did not interact with sentence condition, suggesting a role of temporal voice-selective areas and extra-temporal areas in the explicit recognition of learned voice identity, independent of speech content. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Home Diabetes Monitoring through Touch-Tone Computer Data Entry and Voice Synthesizer Response

    PubMed Central

    Arbogast, James G.; Dodrill, William H.

    1984-01-01

    Current studies suggest that the control of Diabetes mellitus can be improved with home monitoring of blood sugars. Voice synthesizers and recent technology, allowing decoding of Touch-Tone® pulses into their digital equivalents, make it possible for diabetics with no more sophisticated equipment than a Touch-Tone® telephone to enter their blood sugars directly into a medical office computer. A working prototype that can provide physicians with timely, logically oriented information about their diabetics is discussed along with plans to expand this concept into giving the patients uncomplicated therapeutic advice without the need for a direct patient/physician interaction. The potential impact on health care costs and the management of other chronic diseases is presented.
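
    The prototype decoded Touch-Tone pulses into their digital equivalents; a common way to do this in software is to compare signal energy at the eight DTMF frequencies, for example with the Goertzel algorithm. The sketch below illustrates that general approach and is not the 1984 system's implementation.

    ```python
    import math

    ROW_FREQS = [697, 770, 852, 941]
    COL_FREQS = [1209, 1336, 1477, 1633]
    KEYS = ["123A", "456B", "789C", "*0#D"]

    def goertzel_power(samples, sample_rate, freq):
        """Energy of one frequency component (Goertzel algorithm)."""
        coeff = 2.0 * math.cos(2.0 * math.pi * freq / sample_rate)
        s_prev, s_prev2 = 0.0, 0.0
        for x in samples:
            s = x + coeff * s_prev - s_prev2
            s_prev2, s_prev = s_prev, s
        return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

    def decode_dtmf(samples, sample_rate=8000):
        """Pick the strongest row and column tone and map them to a keypad digit."""
        row = max(ROW_FREQS, key=lambda f: goertzel_power(samples, sample_rate, f))
        col = max(COL_FREQS, key=lambda f: goertzel_power(samples, sample_rate, f))
        return KEYS[ROW_FREQS.index(row)][COL_FREQS.index(col)]

    # Synthesize the tone pair for "5" (770 Hz + 1336 Hz) and decode it.
    sr, n = 8000, 800
    tone = [math.sin(2 * math.pi * 770 * i / sr) + math.sin(2 * math.pi * 1336 * i / sr)
            for i in range(n)]
    print(decode_dtmf(tone, sr))   # expected: "5"
    ```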

  19. Engaged Voices--Dialogic Interaction and the Construction of Shared Social Meanings

    ERIC Educational Resources Information Center

    Cruddas, Leora

    2007-01-01

    The notion of "pupil voice" reproduces the binary distinction between adult and child, pupil and teacher and therefore serves to reinforce "conventional" constructions of childhood. The concept of "voice" invokes an essentialist construction of self that is singular, coherent, consistent and rational. It is arguably…

  20. Mobile phone-based interactive voice response as a tool for improving access to healthcare in remote areas in Ghana - an evaluation of user experiences.

    PubMed

    Brinkel, J; May, J; Krumkamp, R; Lamshöft, M; Kreuels, B; Owusu-Dabo, E; Mohammed, A; Bonacic Marinovic, A; Dako-Gyeke, P; Krämer, A; Fobil, J N

    2017-05-01

    To investigate and determine the factors that enhanced or constituted barriers to the acceptance of an mHealth system which was piloted in the Asante-Akim North District of Ghana to support the healthcare of children. Four semi-structured focus group discussions were conducted with a total of 37 mothers. Participants were selected from a study population of mothers who subscribed to a pilot mHealth system which used interactive voice response (IVR) for its operations. Data were evaluated using qualitative content analysis methods. In addition, a short quantitative questionnaire assessed the system's usability with the System Usability Scale (SUS). Results revealed 10 categories of factors that facilitated user acceptance of the IVR system, including quality-of-care experience, health education and empowerment of women. The eight categories of factors identified as barriers to user acceptance included the lack of human interaction, lack of updates and training on the electronic advice provided, and lack of social integration of the system into the community. The usability of the system (SUS median: 79.3; range: 65-97.5) was rated acceptable. The principles of the tested mHealth system could be of interest during infectious disease outbreaks, such as Ebola or Lassa fever, when there might be a special need for disease-specific health information within populations. © 2017 John Wiley & Sons Ltd.
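
    The SUS figure reported here follows the standard System Usability Scale scoring rule: ten 1-5 Likert items, odd items contributing the response minus 1, even items contributing 5 minus the response, with the sum multiplied by 2.5. A worked example with invented responses:

    ```python
    def sus_score(responses):
        """Standard SUS scoring for ten 1-5 Likert responses (items alternate polarity)."""
        assert len(responses) == 10
        total = 0
        for i, r in enumerate(responses, start=1):
            total += (r - 1) if i % 2 == 1 else (5 - r)
        return total * 2.5

    # Invented example responses, not data from the Ghana study.
    print(sus_score([5, 2, 4, 1, 5, 2, 4, 1, 4, 2]))   # 85.0
    ```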

  1. Interface Anywhere: Development of a Voice and Gesture System for Spaceflight Operations

    NASA Technical Reports Server (NTRS)

    Thompson, Shelby; Haddock, Maxwell; Overland, David

    2013-01-01

    The Interface Anywhere Project was funded through the Innovation Charge Account (ICA) at NASA JSC in the Fall of 2012. The project was a collaboration between human factors and engineering to explore the possibility of designing an interface to control basic habitat operations through gesture and voice control: (a) current interfaces require the user to be physically near an input device in order to interact with the system; and (b) by using voice and gesture commands, the user is able to interact with the system anywhere within the work environment.

  2. Validation of the Acoustic Voice Quality Index in the Japanese Language.

    PubMed

    Hosokawa, Kiyohito; Barsties, Ben; Iwahashi, Toshihiko; Iwahashi, Mio; Kato, Chieri; Iwaki, Shinobu; Sasai, Hisanori; Miyauchi, Akira; Matsushiro, Naoki; Inohara, Hidenori; Ogawa, Makoto; Maryn, Youri

    2017-03-01

    The Acoustic Voice Quality Index (AVQI) is a multivariate construct for quantification of overall voice quality based on the analysis of continuous speech and a sustained vowel. The stability and validity of the AVQI are well established in several language families. However, the Japanese language has distinct characteristics with respect to several parameters of articulatory and phonatory physiology. The aim of the study was to confirm the criterion-related concurrent validity of the AVQI, as well as its responsiveness to change and diagnostic accuracy for voice assessment in the Japanese-speaking population. This is a retrospective study. A total of 336 voice recordings, which included 69 pairs of voice recordings (before and after therapeutic interventions), were eligible for the study. The auditory-perceptual judgment of overall voice quality was evaluated by five experienced raters. The concurrent validity, responsiveness to change, and diagnostic accuracy of the AVQI were estimated. The concurrent validity and responsiveness to change based on overall voice quality were indicated by high correlation coefficients (0.828 and 0.767, respectively). Receiver operating characteristic analysis revealed excellent diagnostic accuracy for discrimination between dysphonic and normophonic voices (area under the curve: 0.905). The best threshold level for the AVQI of 3.15 corresponded with a sensitivity of 72.5% and specificity of 95.2%, with positive and negative likelihood ratios of 15.1 and 0.29, respectively. We demonstrated the validity of the AVQI as a tool for assessment of overall voice quality and of voice therapy outcomes in the Japanese-speaking population. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
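
    As a quick arithmetic check, the likelihood ratios reported above follow directly from the sensitivity and specificity at the 3.15 cut-off; only the published summary figures are used below, no patient data.

    ```python
    sensitivity = 0.725   # reported sensitivity at the AVQI threshold of 3.15
    specificity = 0.952   # reported specificity at the same threshold

    lr_positive = sensitivity / (1 - specificity)   # about 15.1
    lr_negative = (1 - sensitivity) / specificity   # about 0.29

    print(f"LR+ = {lr_positive:.1f}, LR- = {lr_negative:.2f}")
    ```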

  3. Neural effects of environmental advertising: An fMRI analysis of voice age and temporal framing.

    PubMed

    Casado-Aranda, Luis-Alberto; Martínez-Fiestas, Myriam; Sánchez-Fernández, Juan

    2018-01-15

    Ecological information offered to society through advertising enhances awareness of environmental issues, encourages development of sustainable attitudes and intentions, and can even alter behavior. This paper, by means of functional Magnetic Resonance Imaging (fMRI) and self-reports, explores the underlying mechanisms of processing ecological messages. The study specifically examines brain and behavioral responses to persuasive ecological messages that differ in temporal framing and in the age of the voice pronouncing them. The findings reveal that attitudes are more positive toward future-framed messages presented by young voices. The whole-brain analysis reveals that future-framed (FF) ecological messages trigger activation in brain areas related to imagery, prospective memories and episodic events, thus reflecting the involvement of past behaviors in future ecological actions. Past-framed messages (PF), in turn, elicit brain activations within the episodic system. Young voices (YV), in addition to triggering stronger activation in areas involved with the processing of high-timbre, high-pitched and high-intensity voices, are perceived as more emotional and motivational than old voices (OV), as indicated by activations in the anterior cingulate cortex and amygdala. Messages expressed by older voices, in turn, exhibit stronger activation in areas previously linked to low-pitched voices and voice gender perception. Interestingly, a link is identified between neural and self-report responses, indicating that certain brain activations in response to future-framed messages and young voices predicted more positive attitudes toward future-framed and young-voice advertisements, respectively. The results of this study provide invaluable insight into the unconscious origin of attitudes toward environmental messages and indicate which voice and temporal frame of a message generate the greatest subconscious value. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Interactive Communication: A Few Research Answers for a Technological Explosion.

    ERIC Educational Resources Information Center

    Chapanis, Alphonse

    The techniques, procedures, and principal findings of 15 different experiments in a research program on interactive communication are summarized in this paper. Among the principal findings reported are that: problems are solved faster in communication modes that have a voice channel than in those that do not have a voice channel, modes of…

  5. Collaborative Scaffolding in Online Task-Based Voice Interactions between Advanced Learners

    ERIC Educational Resources Information Center

    Kenning, Marie-Madeleine

    2010-01-01

    This paper reports some of the findings of a distinctive innovative use of audio-conferencing involving a population (campus-based advanced learners) and a type of application (task-based language learning) that have received little attention to date: the use of Wimba Voice Tools to provide additional opportunities for spoken interactions between…

  6. Voice Interactive Analysis System Study. Final Report, August 28, 1978 through March 23, 1979.

    ERIC Educational Resources Information Center

    Harry, D. P.; And Others

    The Voice Interactive Analysis System study continued research and development of the LISTEN real-time, minicomputer-based connected speech recognition system, within NAVTRAEQUIPCEN's program of developing automatic speech technology in support of training. An attempt was made to identify the most effective features detected by the TTI-500 model…

  7. Onset and Maturation of Fetal Heart Rate Response to the Mother's Voice over Late Gestation

    ERIC Educational Resources Information Center

    Kisilevsky, Barbara S.; Hains, Sylvia M. J.

    2011-01-01

    Background: Term fetuses discriminate their mother's voice from a female stranger's, suggesting recognition/learning of some property of her voice. Identification of the onset and maturation of the response would increase our understanding of the influence of environmental sounds on the development of sensory abilities and identify the period when…

  8. Blindness and Selective Mutism: One Student's Response to Voice-Output Devices

    ERIC Educational Resources Information Center

    Holley, Mary; Johnson, Ashli; Herzberg, Tina

    2014-01-01

    This case study was designed to measure the response of one student with blindness and selective mutism to the intervention of voice-output devices across two years and two different teachers in two instructional settings. Before the introduction of the voice output devices, the student did not choose to communicate using spoken language or…

  9. Scientific bases of human-machine communication by voice.

    PubMed Central

    Schafer, R W

    1995-01-01

    The scientific bases for human-machine communication by voice are in the fields of psychology, linguistics, acoustics, signal processing, computer science, and integrated circuit technology. The purpose of this paper is to highlight the basic scientific and technological issues in human-machine communication by voice and to point out areas of future research opportunity. The discussion is organized around the following major issues in implementing human-machine voice communication systems: (i) hardware/software implementation of the system, (ii) speech synthesis for voice output, (iii) speech recognition and understanding for voice input, and (iv) usability factors related to how humans interact with machines. PMID:7479802

  10. Vocal education for the professional voice user and singer.

    PubMed

    Murry, T; Rosen, C A

    2000-10-01

    Providing education on voice-related anatomy, physiology, and vocal hygiene information is the responsibility of every voice care professional. This article discusses the importance of a vocal education program for singers and professional voice users. An outline of a vocal education lecture is provided.

  11. Discourse-voice regulatory strategies in the psychotherapeutic interaction: a state-space dynamics analysis.

    PubMed

    Tomicic, Alemka; Martínez, Claudio; Pérez, J Carola; Hollenstein, Tom; Angulo, Salvador; Gerstmann, Adam; Barroux, Isabelle; Krause, Mariane

    2015-01-01

    This study seeks to provide evidence of the dynamics associated with the configurations of discourse-voice regulatory strategies in patient-therapist interactions in relevant episodes within psychotherapeutic sessions. Its central assumption is that discourses manifest themselves differently in terms of their prosodic characteristics according to their regulatory functions in a system of interactions. The association between discourse and vocal quality in patients and therapists was analyzed in a sample of 153 relevant episodes taken from 164 sessions of five psychotherapies using the state space grid (SSG) method, a graphical tool based on the dynamic systems theory (DST). The results showed eight recurrent and stable discourse-voice regulatory strategies of the patients and three of the therapists. Also, four specific groups of these discourse-voice strategies were identified. The latter were interpreted as regulatory configurations, that is to say, as emergent self-organized groups of discourse-voice regulatory strategies constituting specific interactional systems. Both regulatory strategies and their configurations differed between two types of relevant episodes: Change Episodes and Rupture Episodes. As a whole, these results support the assumption that speaking and listening, as dimensions of the interaction that takes place during therapeutic conversation, occur at different levels. The study not only shows that these dimensions are dependent on each other, but also that they function as a complex and dynamic whole in therapeutic dialog, generating relational offers which allow the patient and the therapist to regulate each other and shape the psychotherapeutic process that characterizes each type of relevant episode.

  12. McGurk Effect in Gender Identification: Vision Trumps Audition in Voice Judgments.

    PubMed

    Peynircioǧlu, Zehra F; Brent, William; Tatz, Joshua R; Wyatt, Jordan

    2017-01-01

    Demonstrations of non-speech McGurk effects are rare, mostly limited to emotion identification, and sometimes not considered true analogues. We presented videos of males and females singing a single syllable on the same pitch and asked participants to indicate the true range of the voice: soprano, alto, tenor, or bass. For one group of participants, the gender shown on the video matched the gender of the voice heard, and for the other group they were mismatched. Soprano or alto responses were interpreted as "female voice" decisions and tenor or bass responses as "male voice" decisions. Identification of the voice gender was 100% correct in the preceding audio-only condition. However, whereas performance was also 100% correct in the matched video/audio condition, it was only 31% correct in the mismatched video/audio condition. Thus, the visual gender information overrode the voice gender identification, showing a robust non-speech McGurk effect.

  13. Data equivalency of an interactive voice response system for home assessment of back pain and function.

    PubMed

    Shaw, William S; Verma, Santosh K

    2007-01-01

    Interactive voice response (IVR) systems that collect survey data using automated, push-button telephone responses may be useful for monitoring patients' pain and function at home; however, their equivalency to other data collection methods has not been studied. The objective was to assess the data equivalency of IVR measurement of pain and function relative to live telephone interviewing. In a prospective cohort study, 547 working adults (66% male) with acute back pain were recruited at an initial outpatient visit and completed telephone assessments one month later to track outcomes of pain, function, treatment helpfulness and return to work. An IVR system was introduced partway through the study (after the first 227 participants) to reduce the staff time necessary to contact participants by telephone during nonworking hours. Of 368 participants who were subsequently recruited and offered the IVR option, 131 (36%) used IVR, 189 (51%) were contacted by a telephone interviewer after no IVR attempt was made within five days, and 48 (13%) were lost to follow-up. Those with lower income were more likely to use IVR. Analysis of outcome measures showed that IVR respondents reported comparatively lower levels of function and less effective treatment, but not after controlling for differences due to the delay in reaching non-IVR users by telephone (mean: 35.4 versus 29.2 days). The results provided no evidence of information or selection bias associated with IVR use; however, IVR must be supplemented with other data collection options to maintain high response rates.
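
    As an illustration of how push-button (DTMF) data collection of this kind is typically structured, the sketch below maps keypad digits to two hypothetical survey items, a 0-9 pain rating and a yes/no return-to-work question. It is a minimal, hedged illustration in Python, not the IVR system used in the study; the prompts and helper names are invented, and keypad input is simulated with input().

      # Minimal sketch of an IVR-style survey flow (hypothetical, not the study's system).
      # A real IVR platform would supply DTMF digits; here input() stands in for the keypad.

      QUESTIONS = [
          ("On a scale from 0 to 9, how severe is your back pain today?", set("0123456789")),
          ("Have you returned to work? Press 1 for yes, 2 for no.", {"1", "2"}),
      ]

      def ask(prompt, valid_keys, max_attempts=3):
          """Play a prompt and collect one keypad digit, re-prompting on invalid input."""
          for _ in range(max_attempts):
              key = input(prompt + " ").strip()
              if key in valid_keys:
                  return key
          return None  # treated as missing data after repeated invalid input

      def run_survey():
          responses = {}
          for i, (prompt, valid) in enumerate(QUESTIONS, start=1):
              responses[f"item_{i}"] = ask(prompt, valid)
          return responses

      if __name__ == "__main__":
          print(run_survey())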

  14. Micro-Based Speech Recognition: Instructional Innovation for Handicapped Learners.

    ERIC Educational Resources Information Center

    Horn, Carin E.; Scott, Brian L.

    A new voice-based learning system (VBLS), which allows the handicapped user to interact with a microcomputer by voice commands, is described. Speech or voice recognition is the computerized process of identifying a spoken word or phrase, including those resulting from speech impediments. This new technology is helpful to the severely physically…

  15. Empowering Student Voice through Interactive Design and Digital Making

    ERIC Educational Resources Information Center

    Kim, Yanghee; Searle, Kristin

    2017-01-01

    Over the last two decades online technology and digital media have provided space for students to participate and express their voices. This paper further explores how new digital technologies, such as humanoid robots and wearable electronics, can be used to offer additional spaces where students' voices are heard. In these spaces, young students…

  16. Gender in Voice Perception in Autism

    ERIC Educational Resources Information Center

    Groen, Wouter B.; van Orsouw, Linda; Zwiers, Marcel; Swinkels, Sophie; van der Gaag, Rutger Jan; Buitelaar, Jan K.

    2008-01-01

    Deficits in the perception of social stimuli may contribute to the characteristic impairments in social interaction in high functioning autism (HFA). Although the cortical processing of voice is abnormal in HFA, it is unclear whether this gives rise to impairments in the perception of voice gender. About 20 children with HFA and 20 matched…

  17. Measures of voiced frication for automatic classification

    NASA Astrophysics Data System (ADS)

    Jackson, Philip J. B.; Jesus, Luis M. T.; Shadle, Christine H.; Pincas, Jonathan

    2004-05-01

    As an approach to understanding the characteristics of the acoustic sources in voiced fricatives, it seems apt to draw on knowledge of vowels and voiceless fricatives, which have been relatively well studied. However, the presence of both phonation and frication in these mixed-source sounds offers the possibility of mutual interaction effects, with variations across place of articulation. This paper examines the acoustic and articulatory consequences of these interactions and explores automatic techniques for finding parametric and statistical descriptions of these phenomena. A reliable and consistent set of such acoustic cues could be used for phonetic classification or speech recognition. Following work on devoicing of European Portuguese voiced fricatives [Jesus and Shadle, in Mamede et al. (eds.) (Springer-Verlag, Berlin, 2003), pp. 1-8]. and the modulating effect of voicing on frication [Jackson and Shadle, J. Acoust. Soc. Am. 108, 1421-1434 (2000)], the present study focuses on three types of information: (i) sequences and durations of acoustic events in VC transitions, (ii) temporal, spectral and modulation measures from the periodic and aperiodic components of the acoustic signal, and (iii) voicing activity derived from simultaneous EGG data. Analysis of interactions observed in British/American English and European Portuguese speech corpora will be compared, and the principal findings discussed.

  18. Synthesized speech rate and pitch effects on intelligibility of warning messages for pilots

    NASA Technical Reports Server (NTRS)

    Simpson, C. A.; Marchionda-Frost, K.

    1984-01-01

    In civilian and military operations, a future threat-warning system with a voice display could warn pilots of other traffic, obstacles in the flight path, and/or terrain during low-altitude helicopter flights. The present study was conducted to learn whether speech rate and voice pitch of phoneme-synthesized speech affects pilot accuracy and response time to typical threat-warning messages. Helicopter pilots engaged in an attention-demanding flying task and listened for voice threat warnings presented in a background of simulated helicopter cockpit noise. Performance was measured by flying-task performance, threat-warning intelligibility, and response time. Pilot ratings were elicited for the different voice pitches and speech rates. Significant effects were obtained only for response time and for pilot ratings, both as a function of speech rate. For the few cases when pilots forgot to respond to a voice message, they remembered 90 percent of the messages accurately when queried for their response 8 to 10 sec later.

  19. Dissociating Long and Short-term Memory in Three-Month-Old Infants Using the Mismatch Response to Voice Stimuli

    PubMed Central

    Zinke, Katharina; Thöne, Leonie; Bolinger, Elaina M.; Born, Jan

    2018-01-01

    Auditory event-related potentials (ERPs) have been successfully used in adults as well as in newborns to discriminate recall of longer-term and shorter-term memories. Specifically the Mismatch Response (MMR) to deviant stimuli of an oddball paradigm is larger if the deviant stimuli are highly familiar (i.e., retrieved from long-term memory) than if they are unfamiliar, representing an immediate change to the standard stimuli kept in short-term memory. Here, we aimed to extend previous findings indicating a differential MMR to familiar and unfamiliar deviants in newborns (Beauchemin et al., 2011), to 3-month-old infants who are starting to interact more with their social surroundings supposedly based on forming more (social) long-term representations. Using a voice discrimination paradigm, each infant was repeatedly presented with the word “baby” (400 ms, interstimulus interval: 600 ms, 10 min overall duration) pronounced by three different female speakers. One voice that was unfamiliar to the infants served as the frequently presented “standard” stimulus, whereas another unfamiliar voice served as the “unfamiliar deviant” stimulus, and the voice of the infant’s mother served as the “familiar deviant.” Data collection was successful for 31 infants (mean age = 100 days). The MMR was determined by the difference between the ERP to standard stimuli and the ERP to the unfamiliar and familiar deviant, respectively. The MMR to the familiar deviant (mother’s voice) was larger, i.e., more positive, than that to the unfamiliar deviant between 100 and 400 ms post-stimulus over the frontal and central cortex. However, a genuine MMR differentiating, as a positive deflection, between ERPs to familiar deviants and standard stimuli was only found in the 300–400 ms interval. On the other hand, a genuine MMR differentiating, as a negative deflection, between ERPs to unfamiliar deviants from ERPs to standard stimuli was revealed for the 200–300 ms post-stimulus interval. Overall results confirm a differential MMR response to unfamiliar and familiar deviants in 3-month-olds, with the earlier negative MMR to unfamiliar deviants likely reflecting change detection based on comparison processes in short-term memory, and the later positive MMR to familiar deviants reflecting subsequent long-term memory-based processing of stimulus relevance. PMID:29441032
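
    The mismatch response described here is a difference wave: the averaged ERP to the standard stimuli is subtracted from the averaged ERP to each deviant, and the mean amplitude is then taken in a post-stimulus window. The sketch below is a generic NumPy illustration of that computation under assumed sampling rates and trial counts, not the authors' analysis pipeline.

      import numpy as np

      # Hypothetical epoched data (trials x samples), 1 s epochs at an assumed 500 Hz.
      fs = 500
      standard_epochs = np.random.randn(200, fs)      # frequent unfamiliar "standard" voice
      familiar_dev_epochs = np.random.randn(40, fs)   # mother's voice (familiar deviant)
      unfamiliar_dev_epochs = np.random.randn(40, fs) # other unfamiliar voice (unfamiliar deviant)

      # Average across trials to obtain the ERPs.
      erp_standard = standard_epochs.mean(axis=0)
      erp_familiar = familiar_dev_epochs.mean(axis=0)
      erp_unfamiliar = unfamiliar_dev_epochs.mean(axis=0)

      # Mismatch responses as deviant-minus-standard difference waves.
      mmr_familiar = erp_familiar - erp_standard
      mmr_unfamiliar = erp_unfamiliar - erp_standard

      # Mean amplitude in a post-stimulus window, e.g. 300-400 ms for the familiar deviant.
      window = slice(int(0.3 * fs), int(0.4 * fs))
      print("Familiar-deviant MMR, 300-400 ms mean amplitude:", mmr_familiar[window].mean())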

  20. Dissociating Long and Short-term Memory in Three-Month-Old Infants Using the Mismatch Response to Voice Stimuli.

    PubMed

    Zinke, Katharina; Thöne, Leonie; Bolinger, Elaina M; Born, Jan

    2018-01-01

    Auditory event-related potentials (ERPs) have been successfully used in adults as well as in newborns to discriminate recall of longer-term and shorter-term memories. Specifically the Mismatch Response (MMR) to deviant stimuli of an oddball paradigm is larger if the deviant stimuli are highly familiar (i.e., retrieved from long-term memory) than if they are unfamiliar, representing an immediate change to the standard stimuli kept in short-term memory. Here, we aimed to extend previous findings indicating a differential MMR to familiar and unfamiliar deviants in newborns (Beauchemin et al., 2011), to 3-month-old infants who are starting to interact more with their social surroundings supposedly based on forming more (social) long-term representations. Using a voice discrimination paradigm, each infant was repeatedly presented with the word "baby" (400 ms, interstimulus interval: 600 ms, 10 min overall duration) pronounced by three different female speakers. One voice that was unfamiliar to the infants served as the frequently presented "standard" stimulus, whereas another unfamiliar voice served as the "unfamiliar deviant" stimulus, and the voice of the infant's mother served as the "familiar deviant." Data collection was successful for 31 infants (mean age = 100 days). The MMR was determined by the difference between the ERP to standard stimuli and the ERP to the unfamiliar and familiar deviant, respectively. The MMR to the familiar deviant (mother's voice) was larger, i.e., more positive, than that to the unfamiliar deviant between 100 and 400 ms post-stimulus over the frontal and central cortex. However, a genuine MMR differentiating, as a positive deflection, between ERPs to familiar deviants and standard stimuli was only found in the 300-400 ms interval. On the other hand, a genuine MMR differentiating, as a negative deflection, between ERPs to unfamiliar deviants from ERPs to standard stimuli was revealed for the 200-300 ms post-stimulus interval. Overall results confirm a differential MMR response to unfamiliar and familiar deviants in 3-month-olds, with the earlier negative MMR to unfamiliar deviants likely reflecting change detection based on comparison processes in short-term memory, and the later positive MMR to familiar deviants reflecting subsequent long-term memory-based processing of stimulus relevance.

  1. Teaching and Learning Foreign Languages via System of "Voice over Internet Protocol" and Language Interactions Case Study: Skype

    ERIC Educational Resources Information Center

    Wahid, Wazira Ali Abdul; Ahmed, Eqbal Sulaiman; Wahid, Muntaha Ali Abdul

    2015-01-01

    This article reports a research study of online interactions in English teaching, particularly conversation practice, conducted through VoIP (Voice over Internet Protocol) in a cosmopolitan online setting. Data were gathered through interviews. The findings indicate how oral tasks need to be planned in order to facilitate engagement patterns conducive to…

  2. Effects of a Voice Output Communication Aid on Interactions between Support Personnel and an Individual with Multiple Disabilities.

    ERIC Educational Resources Information Center

    Schepis, Maureen M.; Reid, Dennis H.

    1995-01-01

    A young adult with multiple disabilities (profound mental retardation, spastic quadriplegia, and visual impairment) was provided with a voice output communication aid (VOCA) which allowed communication through synthesized speech. Both educational and residential staff members interacted with the individual more frequently when she had access to…

  3. Comparing Heterosexuals' and Gay Men/Lesbians' Responses to Relationship Problems and the Effects of Internalized Homophobia on Gay Men/Lesbians' Responses to Relationship Problems in Turkey.

    PubMed

    Okutan, Nur; Buyuksahin Sunal, Ayda; Sakalli Ugurlu, Nuray

    2017-01-01

    The purpose of the present study was twofold: (1) to investigate the effects of sexual orientation (heterosexuals and gay men/lesbians) and gender difference on responses to romantic relationship problems (Exit, Voice, Loyalty, and Neglect [EVLN] responses) and on perceived partner's EVLN responses in Turkey, and (2) to examine whether internalized homophobia was associated with EVLN responses and perceived partner's EVLN responses for gay men and lesbians. The Responses to Dissatisfaction Scale-Accommodation Instrument, an Internalized Homophobia measure, and a demographic information form were administered to 187 participants (44 lesbians, 44 gay men, 53 heterosexual women, 46 heterosexual men). The MANCOVA results showed that men reported higher loyalty than women, whereas women presented more exit responses than men. Further, the interactions between gender and sexual orientation on the participants' EVLN responses and on the perceived partner's EVLN responses were significant. Compared with heterosexual women, heterosexual men displayed more loyalty responses. Lesbians had higher scores on loyalty than did heterosexual women. Lesbians also had higher scores on perceived partner's exit response than did heterosexual women and gay men. In contrast, heterosexual women reported more perceived partner's voice responses than lesbians. In addition, lesbians reported higher perceived partner's neglect responses than heterosexual women. Compared to heterosexual women, heterosexual men reported higher perceived partner's exit response. Finally, internalized homophobia was associated with destructive responses for both lesbians and gay men.

  4. The Interaction of Eye-Voice Span with Syntactic Chunking and Predictability in Right- and Left-Embedded Sentences.

    ERIC Educational Resources Information Center

    Balajthy, Ernest P., Jr.

    Sixty tenth graders participated in this study of relationships between eye/voice span, phrase and clause boundaries, reading ability, and sentence structure. Results indicated that sentences apparently are "chunked" into surface constituents during processing. Better tenth grade readers had longer eye/voice spans than did poorer readers and…

  5. Strategies for the Production of Spanish Stop Consonants by Native Speakers of English.

    ERIC Educational Resources Information Center

    Zampini, Mary L.

    A study examined patterns in production of Spanish voiced and voiceless stop consonants by native English speakers, focusing on the interaction between two acoustic cues of stops: voice closure interval and voice onset time (VOT). The study investigated whether learners acquire the appropriate phonetic categories with regard to these stops and if…

  6. Relating to the Speaker behind the Voice: What Is Changing?

    PubMed Central

    Deamer, Felicity; Hayward, Mark

    2018-01-01

    We introduce therapeutic techniques that encourage voice hearers to view their voices as coming from intentional agents whose behavior may be dependent on how the voice hearer relates to and interacts with them. We suggest that this approach is effective because the communicative aspect of voice hearing might fruitfully be seen as explanatorily primitive, meaning that the agentive aspect, the auditory properties, and the intended meaning (interpretation) are all necessary parts of the experience, which contribute to the impact the experience has on the voice hearer. We examine the experiences of a patient who received Relating Therapy, and explore the kinds of changes that can result from this therapeutic approach. PMID:29422879

  7. A self-teaching image processing and voice-recognition-based, intelligent and interactive system to educate visually impaired children

    NASA Astrophysics Data System (ADS)

    Iqbal, Asim; Farooq, Umar; Mahmood, Hassan; Asad, Muhammad Usman; Khan, Akrama; Atiq, Hafiz Muhammad

    2010-02-01

    A self-teaching image processing and voice recognition based system is developed to educate visually impaired children, chiefly in their primary education. The system comprises a computer, a vision camera, an ear speaker, and a microphone. The camera, attached to the computer, is mounted on the ceiling at the required angle, opposite the desk on which the book is placed. Sample images and voices, in the form of instructions and commands for English and Urdu alphabets, numeric digits, operators, and shapes, are stored in a database. A blind child first reads the embossed character (object) with his or her fingers and then speaks the answer (the name of the character, shape, etc.) into the microphone. Once the child's voice command is received by the microphone, an image is captured by the camera and processed by a MATLAB® program developed with the Image Acquisition and Image Processing toolboxes, which generates a response or the required set of instructions for the child via the ear speaker, resulting in self-education of a visually impaired child. A speech recognition program is also developed in MATLAB® with the Data Acquisition and Signal Processing toolboxes; it records and processes the commands of the blind child.
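
    The interaction described above reduces to a simple command loop: listen for the child's spoken answer, capture and classify an image of the embossed character, compare the two, and respond through the ear speaker. A minimal sketch of that loop is shown below; the helper functions are hypothetical stand-ins (simulated here with console input/output), not the MATLAB routines used by the authors.

      # Schematic command loop for a self-teaching tutor of the kind described above.
      # All helpers are hypothetical placeholders for the camera, recognizer, and speech output.

      def listen_for_answer():
          """Record audio from the microphone and return the recognized word (stub)."""
          return input("Child's spoken answer: ").strip().lower()

      def capture_and_classify_character():
          """Acquire an image of the embossed character and return its label (stub)."""
          return input("Character on the desk (as classified from the image): ").strip().lower()

      def speak(message):
          """Send a synthesized-speech instruction to the ear speaker (stub)."""
          print("SPEAKER:", message)

      def lesson_step():
          spoken = listen_for_answer()
          actual = capture_and_classify_character()
          if spoken == actual:
              speak(f"Correct, that is '{actual}'. Move on to the next character.")
          else:
              speak(f"Not quite. The character is '{actual}'. Trace it again and repeat.")

      if __name__ == "__main__":
          lesson_step()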

  8. Degree and reciprocity of self-disclosure in online forums.

    PubMed

    Barak, Azy; Gluck-Ofri, Orit

    2007-06-01

    Cyberspace has become a common social environment in which people interact and operate in many ways. The purpose of the present study was to investigate the occurrence and reciprocity of self-disclosure, two subjects that are extensively studied in face-to-face interactions but only to a limited degree in virtual, computer-mediated, textual communication. Data was based on 240 first messages in a thread, sampled in equal numbers from six Internet forums (three discussion and three support groups), and written in equal numbers by each gender, and 240 first responses to them (a total of 480 forum messages). Trained, expert judges blindly rated each message on the degree to which it disclosed personal information, thoughts, and feelings. Linguistic parameters (total number of words and number of first-voice words) were also used as dependent variables. Results showed the following: (a) self-disclosure in support forums was much higher than in discussion forums, in terms of both total number and type of disclosure; (b) messages in support forums were longer and included more first-voice words than in discussion forums; (c) there were no gender differences interacting with level of self-disclosure; (d) reciprocity of self-disclosure was evident, yielding positive correlations between the measures of self-disclosure in messages and responses to them; (e) some differences appeared in level of reciprocity of self-disclosure between male and female participants, with female respondents tending to be more reciprocal than male respondents. The implications of these results are discussed in light of growing social interactions online, and possible applications are suggested.

  9. Perceptual integration of faces and voices depends on the interaction of emotional content and spatial frequency.

    PubMed

    Kokinous, Jenny; Tavano, Alessandro; Kotz, Sonja A; Schröger, Erich

    2017-02-01

    The role of spatial frequencies (SF) is highly debated in emotion perception, but previous work suggests the importance of low SFs for detecting emotion in faces. Furthermore, emotion perception essentially relies on the rapid integration of multimodal information from faces and voices. We used EEG to test the functional relevance of SFs in the integration of emotional and non-emotional audiovisual stimuli. While viewing dynamic face-voice pairs, participants were asked to identify auditory interjections, and the electroencephalogram (EEG) was recorded. Audiovisual integration was measured as auditory facilitation, indexed by the extent of the auditory N1 amplitude suppression in audiovisual compared to an auditory-only condition. We found an interaction of SF filtering and emotion in the auditory response suppression. For neutral faces, larger N1 suppression ensued in the unfiltered and high SF conditions as compared to the low SF condition. Angry face perception led to a larger N1 suppression in the low SF condition. While the results for the neutral faces indicate that perceptual quality in terms of SF content plays a major role in audiovisual integration, the results for angry faces suggest that early multisensory integration of emotional information favors low SF neural processing pathways, overruling the predictive value of the visual signal per se. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. T’ain’t what you say, it’s the way that you say it – left insula and inferior frontal cortex work in interaction with superior temporal regions to control the performance of vocal impersonations

    PubMed Central

    McGettigan, Carolyn; Eisner, Frank; Agnew, Zarinah K; Manly, Tom; Wisbey, Duncan; Scott, Sophie K

    2014-01-01

    Historically, the study of human identity perception has focused on faces, but the voice is also central to our expressions and experiences of identity (P. Belin, Fecteau, & Bedard, 2004). Our voices are highly flexible and dynamic; talkers speak differently depending on their health, emotional state, and the social setting, as well as extrinsic factors such as background noise. However, to date, there have been no studies of the neural correlates of identity modulation in speech production. In the current fMRI experiment, we measured the neural activity supporting controlled voice change in adult participants performing spoken impressions. We reveal that deliberate modulation of vocal identity recruits the left anterior insula and inferior frontal gyrus, supporting the planning of novel articulations. Bilateral sites in posterior superior temporal/inferior parietal cortex and a region in right mid/anterior superior temporal sulcus showed greater responses during the emulation of specific vocal identities than for impressions of generic accents. Using functional connectivity analyses, we describe roles for these three sites in their interactions with the brain regions supporting speech planning and production. Our findings mark a significant step toward understanding the neural control of vocal identity, with wider implications for the cognitive control of voluntary motor acts. PMID:23691984

  11. Does negative affect mediate the relationship between daily PTSD symptoms and daily alcohol involvement in female rape victims? Evidence from 14 days of interactive voice response assessment

    PubMed Central

    Cohn, Amy; Hagman, Brett T.; Moore, Kathleen; Mitchell, Jessica; Ehlke, Sarah

    2014-01-01

    The negative reinforcement model of addiction posits that individuals may use alcohol to reduce negative affective (NA) distress. The current study investigated the mediating effect of daily NA on the relationship between daily PTSD symptoms and same-day and next-day alcohol involvement (consumption and desire to drink) in a sample of 54 non-treatment-seeking female rape victims who completed 14 days of interactive voice response assessment. The moderating effect of lifetime alcohol use disorder diagnosis (AUD) on daily relationships was also examined. Multilevel models suggested that NA mediated the relationship between PTSD and same-day, but not next-day, alcohol involvement. NA was greater on days characterized by more severe PTSD symptoms, and alcohol consumption and desire to drink were greater on days characterized by higher NA. Further, daily PTSD symptoms and NA were more strongly associated with same-day (but not next-day) alcohol consumption and desire to drink for women with an AUD than without. Results suggest that NA plays an important role in female rape victims' daily alcohol use. Differences between women with and without an AUD indicate the need for treatment matching to sub-types of female rape victims. PMID:24731112

  12. Assessment of an interactive voice response system for identifying falls in a statewide sample of older adults.

    PubMed

    Albert, Steven M; King, Jennifer; Keene, Robert M

    2015-02-01

    Interactive voice response (IVR) systems offer great advantages for data collection in large, geographically dispersed samples involving frequent contact. We assessed the quality of IVR data collected from older respondents participating in a statewide falls prevention program evaluation in Pennsylvania in 2010-12. Participants (n=1834) were followed up monthly for up to 10 months to compare respondents who completed all, some, or no assessments in the IVR system. Validity was assessed by examining IVR-reported falls incidence relative to baseline in-person self-report and performance assessment of balance. While a third of the sample switched from IVR to in-person calls over follow-up, IVR interviews were successfully used to complete 68.1% of completed monthly assessments (10,511/15,430). Switching to in-person interviews was not associated with measures of participant function or cognition. Both self-reported (p<.0001) and performance assessment of balance (p=.05) at baseline were related to falls incidence. IVR is a productive modality for falls research among older adults. Future research should establish what level of initial personal research contact is optimal for boosting IVR completion rates and what research domains are most appropriate for this kind of contact. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Learned face-voice pairings facilitate visual search.

    PubMed

    Zweig, L Jacob; Suzuki, Satoru; Grabowecky, Marcia

    2015-04-01

    Voices provide a rich source of information that is important for identifying individuals and for social interaction. During search for a face in a crowd, voices often accompany visual information, and they facilitate localization of the sought-after individual. However, it is unclear whether this facilitation occurs primarily because the voice cues the location of the face or because it also increases the salience of the associated face. Here we demonstrate that a voice that provides no location information nonetheless facilitates visual search for an associated face. We trained novel face-voice associations and verified learning using a two-alternative forced choice task in which participants had to correctly match a presented voice to the associated face. Following training, participants searched for a previously learned target face among other faces while hearing one of the following sounds (localized at the center of the display): a congruent learned voice, an incongruent but familiar voice, an unlearned and unfamiliar voice, or a time-reversed voice. Only the congruent learned voice speeded visual search for the associated face. This result suggests that voices facilitate the visual detection of associated faces, potentially by increasing their visual salience, and that the underlying crossmodal associations can be established through brief training.

  14. Male and female voices activate distinct regions in the male brain.

    PubMed

    Sokhi, Dilraj S; Hunter, Michael D; Wilkinson, Iain D; Woodruff, Peter W R

    2005-09-01

    In schizophrenia, auditory verbal hallucinations (AVHs) are likely to be perceived as gender-specific. Given that functional neuro-imaging correlates of AVHs involve multiple brain regions principally including auditory cortex, it is likely that those brain regions responsible for attribution of gender to speech are invoked during AVHs. We used functional magnetic resonance imaging (fMRI) and a paradigm utilising 'gender-apparent' (unaltered) and 'gender-ambiguous' (pitch-scaled) male and female voice stimuli to test the hypothesis that male and female voices activate distinct brain areas during gender attribution. The perception of female voices, when compared with male voices, elicited greater activation of the right anterior superior temporal gyrus, near the superior temporal sulcus. Similarly, male voice perception activated the mesio-parietal precuneus area. These different gender associations could not be explained by either simple pitch perception or behavioural response because the activations that we observed were conjointly activated by both 'gender-apparent' and 'gender-ambiguous' voices. The results of this study demonstrate that, in the male brain, the perception of male and female voices activates distinct brain regions.

  15. Discourse-voice regulatory strategies in the psychotherapeutic interaction: a state-space dynamics analysis

    PubMed Central

    Tomicic, Alemka; Martínez, Claudio; Pérez, J. Carola; Hollenstein, Tom; Angulo, Salvador; Gerstmann, Adam; Barroux, Isabelle; Krause, Mariane

    2015-01-01

    This study seeks to provide evidence of the dynamics associated with the configurations of discourse-voice regulatory strategies in patient–therapist interactions in relevant episodes within psychotherapeutic sessions. Its central assumption is that discourses manifest themselves differently in terms of their prosodic characteristics according to their regulatory functions in a system of interactions. The association between discourse and vocal quality in patients and therapists was analyzed in a sample of 153 relevant episodes taken from 164 sessions of five psychotherapies using the state space grid (SSG) method, a graphical tool based on the dynamic systems theory (DST). The results showed eight recurrent and stable discourse-voice regulatory strategies of the patients and three of the therapists. Also, four specific groups of these discourse-voice strategies were identified. The latter were interpreted as regulatory configurations, that is to say, as emergent self-organized groups of discourse-voice regulatory strategies constituting specific interactional systems. Both regulatory strategies and their configurations differed between two types of relevant episodes: Change Episodes and Rupture Episodes. As a whole, these results support the assumption that speaking and listening, as dimensions of the interaction that takes place during therapeutic conversation, occur at different levels. The study not only shows that these dimensions are dependent on each other, but also that they function as a complex and dynamic whole in therapeutic dialog, generating relational offers which allow the patient and the therapist to regulate each other and shape the psychotherapeutic process that characterizes each type of relevant episode. PMID:25932014

  16. Selective attention modulates early human evoked potentials during emotional face-voice processing.

    PubMed

    Ho, Hao Tam; Schröger, Erich; Kotz, Sonja A

    2015-04-01

    Recent findings on multisensory integration suggest that selective attention influences cross-sensory interactions from an early processing stage. Yet, in the field of emotional face-voice integration, the hypothesis prevails that facial and vocal emotional information interacts preattentively. Using ERPs, we investigated the influence of selective attention on the perception of congruent versus incongruent combinations of neutral and angry facial and vocal expressions. Attention was manipulated via four tasks that directed participants to (i) the facial expression, (ii) the vocal expression, (iii) the emotional congruence between the face and the voice, and (iv) the synchrony between lip movement and speech onset. Our results revealed early interactions between facial and vocal emotional expressions, manifested as modulations of the auditory N1 and P2 amplitude by incongruent emotional face-voice combinations. Although audiovisual emotional interactions within the N1 time window were affected by the attentional manipulations, interactions within the P2 modulation showed no such attentional influence. Thus, we propose that the N1 and P2 are functionally dissociated in terms of emotional face-voice processing and discuss evidence in support of the notion that the N1 is associated with cross-sensory prediction, whereas the P2 relates to the derivation of an emotional percept. Essentially, our findings put the integration of facial and vocal emotional expressions into a new perspective: one that regards the integration process as a composite of multiple, possibly independent subprocesses, some of which are susceptible to attentional modulation, whereas others may be influenced by additional factors.

  17. Use of speech generating devices can improve perception of qualifications for skilled, verbal, and interactive jobs.

    PubMed

    Stern, Steven E; Chobany, Chelsea M; Beam, Alexander A; Hoover, Brittany N; Hull, Thomas T; Linsenbigler, Melissa; Makdad-Light, Courtney; Rubright, Courtney N

    2017-01-01

    We have previously demonstrated that when speech generating devices (SGD) are used as assistive technologies, they are preferred over the users' natural voices. We sought to examine whether using SGDs would affect listeners' perceptions of the hirability of people with complex communication needs. In a series of three experiments, participants rated videotaped actors, one using an SGD and the other using their natural, mildly dysarthric voice, on (a) a measurement of perceptions of speaker credibility, strength, and informedness and (b) measurements of hirability for jobs coded in terms of skill, verbal ability, and interactivity. Experiment 1 examined hirability for jobs varying in terms of skill and verbal ability. Experiment 2 was a replication that examined hirability for jobs varying in terms of interactivity. Experiment 3 examined jobs in terms of skill and specific mode of interaction (face-to-face, telephone, computer-mediated). Actors were rated more favorably when using an SGD than their own voices. Actors using an SGD were also rated more favorably for highly skilled and highly verbal jobs. This preference for SGDs over a mildly dysarthric voice was also found for jobs entailing computer-mediated communication, particularly skillful jobs.

  18. Voice Response System Statistics Program : Operational Handbook.

    DOT National Transportation Integrated Search

    1980-06-01

    This report documents the Voice Response System (VRS) Statistics Program developed for the preflight weather briefing VRS. It describes the VRS statistical report format and contents, the software program structure, and the program operation.

  19. Distress, omnipotence, and responsibility beliefs in command hallucinations.

    PubMed

    Ellett, Lyn; Luzon, Olga; Birchwood, Max; Abbas, Zarina; Harris, Abi; Chadwick, Paul

    2017-09-01

    Command hallucinations are considered to be one of the most distressing and disturbing symptoms of schizophrenia. Building on earlier studies, we compare key attributes in the symptomatic, affective, and cognitive profiles of people diagnosed with schizophrenia and hearing voices that do (n = 77) or do not (n = 74) give commands. The study employed a cross-sectional design, in which we assessed voice severity, distress and control (PSYRATs), anxiety and depression (HADS), beliefs about voices (BAVQ-R), and responsibility beliefs (RIQ). Clinical and demographic variables were also collected. Command hallucinations were found to be more distressing and controlling, perceived as more omnipotent and malevolent, linked to higher anxiety and depression, and resisted more than hallucinations without commands. Commanding voices were also associated with higher conviction ratings for being personally responsible for preventing harm. The findings suggest key differences in the affective and cognitive profiles of people who hear commanding voices, which have important implications for theory and psychological interventions. Command hallucinations are associated with higher distress, malevolence, and omnipotence. Command hallucinations are associated with higher responsibility beliefs for preventing harm. Responsibility beliefs are associated with voice-related distress. Future psychological interventions for command hallucinations might benefit from focussing not only on omnipotence, but also on responsibility beliefs, as is done in psychological therapies for obsessive compulsive disorder. Limitations: The cross-sectional design does not assess issues of causality. We did not measure the presence or severity of delusions. © 2017 The British Psychological Society.

  20. Low is large: spatial location and pitch interact in voice-based body size estimation.

    PubMed

    Pisanski, Katarzyna; Isenstein, Sari G E; Montano, Kelyn J; O'Connor, Jillian J M; Feinberg, David R

    2017-05-01

    The binding of incongruent cues poses a challenge for multimodal perception. Indeed, although taller objects emit sounds from higher elevations, low-pitched sounds are perceptually mapped both to large size and to low elevation. In the present study, we examined how these incongruent vertical spatial cues (up is more) and pitch cues (low is large) to size interact, and whether similar biases influence size perception along the horizontal axis. In Experiment 1, we measured listeners' voice-based judgments of human body size using pitch-manipulated voices projected from a high versus a low, and a right versus a left, spatial location. Listeners associated low spatial locations with largeness for lowered-pitch but not for raised-pitch voices, demonstrating that pitch overrode vertical-elevation cues. Listeners associated rightward spatial locations with largeness, regardless of voice pitch. In Experiment 2, listeners performed the task while sitting or standing, allowing us to examine self-referential cues to elevation in size estimation. Listeners associated vertically low and rightward spatial cues with largeness more for lowered- than for raised-pitch voices. These correspondences were robust to sex (of both the voice and the listener) and head elevation (standing or sitting); however, horizontal correspondences were amplified when participants stood. Moreover, when participants were standing, their judgments of how much larger men's voices sounded than women's increased when the voices were projected from the low speaker. Our results provide novel evidence for a multidimensional spatial mapping of pitch that is generalizable to human voices and that affects performance in an indirect, ecologically relevant spatial task (body size estimation). These findings suggest that crossmodal pitch correspondences evoke both low-level and higher-level cognitive processes.

  1. Voice interactive electronic warning systems (VIEWS) - An applied approach to voice technology in the helicopter cockpit

    NASA Technical Reports Server (NTRS)

    Voorhees, J. W.; Bucher, N. M.

    1983-01-01

    The cockpit has been one of the most rapidly changing areas of new aircraft design over the past thirty years. In connection with these developments, a pilot can now be considered a decision maker/system manager as well as a vehicle controller. There is, however, a trend towards an information overload in the cockpit, and information processing problems begin to occur for the rotorcraft pilot. One approach to overcome the arising difficulties is based on the utilization of voice technology to improve the information transfer rate in the cockpit with respect to both input and output. Attention is given to the background of speech technology, the application of speech technology within the cockpit, voice interactive electronic warning system (VIEWS) simulation, and methodology. Information subsystems are considered along with a dynamic simulation study, and data collection.

  2. AdaRTE: adaptable dialogue architecture and runtime engine. A new architecture for health-care dialogue systems.

    PubMed

    Rojas-Barahona, L M; Giorgino, T

    2007-01-01

    Spoken dialogue systems have been increasingly employed to provide ubiquitous automated access via telephone to information and services for the non-Internet-connected public. In the health care context, dialogue systems have been successfully applied. Nevertheless, speech-based technology is not easy to implement because it requires a considerable development investment. The advent of VoiceXML for voice applications helped to reduce the proliferation of incompatible dialogue interpreters, but introduced new complexity. As a response to these issues, we designed an architecture for dialogue representation and interpretation, AdaRTE, which allows developers to lay out dialogue interactions through a high-level formalism that offers both declarative and procedural features. AdaRTE's aim is to provide a ground for deploying complex and adaptable dialogues while allowing experimentation with, and incremental adoption of, innovative speech technologies. It provides the dynamic behavior of Augmented Transition Networks and enables the generation of different backend formats such as VoiceXML. It is especially targeted at the health care context, where a framework for easy dialogue deployment could reduce the barrier to a more widespread adoption of dialogue systems.
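
    The general idea of rendering a higher-level dialogue description into a backend format such as VoiceXML can be illustrated with a very small example. The sketch below uses a hypothetical dictionary-based dialogue node and emits a minimal VoiceXML form; it only illustrates the generation step, not AdaRTE's actual formalism or interpreter.

      # Render one hypothetical dialogue node into a minimal VoiceXML form.
      # AdaRTE's real representation is richer (Augmented Transition Network semantics,
      # multiple backend formats); this only sketches the idea of backend generation.

      node = {
          "id": "ask_symptom",
          "field": "symptom",
          "prompt": "Please say the symptom you would like to report.",
      }

      def to_voicexml(node):
          return (
              '<?xml version="1.0" encoding="UTF-8"?>\n'
              '<vxml version="2.1">\n'
              f'  <form id="{node["id"]}">\n'
              f'    <field name="{node["field"]}">\n'
              f'      <prompt>{node["prompt"]}</prompt>\n'
              '    </field>\n'
              '  </form>\n'
              '</vxml>'
          )

      if __name__ == "__main__":
          print(to_voicexml(node))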

  3. Bilingual Voicing: A Study of Code-Switching in the Reported Speech of Finnish Immigrants in Estonia

    ERIC Educational Resources Information Center

    Frick, Maria; Riionheimo, Helka

    2013-01-01

    Through a conversation analytic investigation of Finnish-Estonian bilingual (direct) reported speech (i.e., voicing) by Finns who live in Estonia, this study shows how code-switching is used as a double contextualization device. The code-switched voicings are shaped by the on-going interactional situation, serving its needs by opening up a context…

  4. Eye-movements and Voice as Interface Modalities to Computer Systems

    NASA Astrophysics Data System (ADS)

    Farid, Mohsen M.; Murtagh, Fionn D.

    2003-03-01

    We investigate the visual and vocal modalities of interaction with computer systems. We focus our attention on the integration of visual and vocal interface as possible replacement and/or additional modalities to enhance human-computer interaction. We present a new framework for employing eye gaze as a modality of interface. While voice commands, as means of interaction with computers, have been around for a number of years, integration of both the vocal interface and the visual interface, in terms of detecting user's eye movements through an eye-tracking device, is novel and promises to open the horizons for new applications where a hand-mouse interface provides little or no apparent support to the task to be accomplished. We present an array of applications to illustrate the new framework and eye-voice integration.

  5. The role of voice input for human-machine communication.

    PubMed Central

    Cohen, P R; Oviatt, S L

    1995-01-01

    Optimism is growing that the near future will witness rapid growth in human-computer interaction using voice. System prototypes have recently been built that demonstrate speaker-independent real-time speech recognition, and understanding of naturally spoken utterances with vocabularies of 1000 to 2000 words, and larger. Already, computer manufacturers are building speech recognition subsystems into their new product lines. However, before this technology can be broadly useful, a substantial knowledge base is needed about human spoken language and performance during computer-based spoken interaction. This paper reviews application areas in which spoken interaction can play a significant role, assesses potential benefits of spoken interaction with machines, and compares voice with other modalities of human-computer interaction. It also discusses information that will be needed to build a firm empirical foundation for the design of future spoken and multimodal interfaces. Finally, it argues for a more systematic and scientific approach to investigating spoken input and performance with future language technology. PMID:7479803

  6. Understanding the mechanisms of familiar voice-identity recognition in the human brain.

    PubMed

    Maguinness, Corrina; Roswandowitz, Claudia; von Kriegstein, Katharina

    2018-03-31

    Humans have a remarkable skill for voice-identity recognition: most of us can remember many voices that surround us as 'unique'. In this review, we explore the computational and neural mechanisms which may support our ability to represent and recognise a unique voice-identity. We examine the functional architecture of voice-sensitive regions in the superior temporal gyrus/sulcus, and bring together findings on how these regions may interact with each other, and additional face-sensitive regions, to support voice-identity processing. We also contrast findings from studies on neurotypicals and clinical populations which have examined the processing of familiar and unfamiliar voices. Taken together, the findings suggest that representations of familiar and unfamiliar voices might dissociate in the human brain. Such an observation does not fit well with current models for voice-identity processing, which by-and-large assume a common sequential analysis of the incoming voice signal, regardless of voice familiarity. We provide a revised audio-visual integrative model of voice-identity processing which brings together traditional and prototype models of identity processing. This revised model includes a mechanism of how voice-identity representations are established and provides a novel framework for understanding and examining the potential differences in familiar and unfamiliar voice processing in the human brain. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Voice and endocrinology

    PubMed Central

    Hari Kumar, K. V. S.; Garg, Anurag; Ajai Chandra, N. S.; Singh, S. P.; Datta, Rakesh

    2016-01-01

    Voice is one of the advanced features of natural evolution that differentiates human beings from other primates. The human voice is capable of turning thoughts into spoken words along with a subtle emotional tone. This extraordinary capacity of the voice to express multiple emotions is a gift of God to human beings and supports effective interpersonal communication. Voice generation involves close interaction between cerebral signals and the peripheral apparatus consisting of the larynx, vocal cords, and trachea. The human voice is susceptible to hormonal changes throughout life, from puberty until senescence. Thyroid, gonadal, and growth hormones have a tremendous impact on the structure and function of the vocal apparatus. Alteration of the voice is observed even in physiological states such as puberty and menstruation. Astute clinical observers notice these changes in the voice and refer patients for endocrine evaluation. In this review, we discuss the hormonal influence on the voice apparatus in normal physiology and in endocrine disorders. PMID:27730065

  8. Vocalization-Induced Enhancement of the Auditory Cortex Responsiveness during Voice F0 Feedback Perturbation

    PubMed Central

    Behroozmand, Roozbeh; Karvelis, Laura; Liu, Hanjun; Larson, Charles R.

    2009-01-01

    Objective: The present study investigated whether self-vocalization enhances auditory neural responsiveness to voice pitch feedback perturbation and how this vocalization-induced neural modulation can be affected by the extent of the feedback deviation. Method: Event-related potentials (ERPs) were recorded in 15 subjects in response to +100, +200 and +500 cents pitch-shifted voice auditory feedback during active vocalization and passive listening to the playback of the self-produced vocalizations. Result: The amplitudes of the evoked P1 (latency: 73.51 ms) and P2 (latency: 199.55 ms) ERP components in response to feedback perturbation were significantly larger during vocalization than listening. The difference between P2 peak amplitudes during vocalization vs. listening was shown to be significantly larger for the +100 than the +500 cents stimulus. Conclusion: Results indicate that the human auditory cortex is more responsive to voice F0 feedback perturbations during vocalization than passive listening. Greater vocalization-induced enhancement of the auditory responsiveness to smaller feedback perturbations may imply that the audio-vocal system detects and corrects for errors in vocal production that closely match the expected vocal output. Significance: Findings of this study support previous suggestions regarding the enhanced auditory sensitivity to feedback alterations during self-vocalization, which may serve the purpose of feedback-based monitoring of one’s voice. PMID:19520602
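
    Pitch shifts in this literature are expressed in cents, i.e., hundredths of a semitone, and correspond to a multiplicative frequency ratio of 2^(cents/1200). The short sketch below simply illustrates that conversion for the perturbation magnitudes mentioned above; the 220 Hz baseline F0 is an arbitrary assumed value.

      # Convert a pitch shift in cents to a frequency ratio: ratio = 2 ** (cents / 1200).
      # +100 cents is one semitone upward; +1200 cents would be a full octave.

      def cents_to_ratio(cents: float) -> float:
          return 2.0 ** (cents / 1200.0)

      for cents in (100, 200, 500):
          f0 = 220.0  # assumed baseline F0 in Hz, for illustration only
          shifted = f0 * cents_to_ratio(cents)
          print(f"+{cents} cents: ratio {cents_to_ratio(cents):.4f}, {f0:.1f} Hz -> {shifted:.1f} Hz")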

  9. National Voice Response System (VRS) Implementation Plan Alternatives Study

    DOT National Transportation Integrated Search

    1979-07-01

    This study examines the alternatives available to implement a national Voice Response System (VRS) for automated preflight weather briefings and flight plan filing. Four major hardware configurations are discussed. A computerized analysis model was d...

  10. Twenty-Channel Voice Response System

    DOT National Transportation Integrated Search

    1981-06-01

    This report documents the design and implementation of a Voice Response System, which provides Direct-User Access to the FAA's aviation-weather data base. This system supports 20 independent audio channels, and as of this report, speaks three weather...

  11. Connections between voice ergonomic risk factors and voice symptoms, voice handicap, and respiratory tract diseases.

    PubMed

    Rantala, Leena M; Hakala, Suvi J; Holmqvist, Sofia; Sala, Eeva

    2012-11-01

    The aim of the study was to investigate the connections between voice ergonomic risk factors found in classrooms and voice-related problems in teachers. Voice ergonomic assessment was performed in 39 classrooms in 14 elementary schools by means of a Voice Ergonomic Assessment in Work Environment--Handbook and Checklist. The voice ergonomic risk factors assessed included working culture, noise, indoor air quality, working posture, stress, and access to a sound amplifier. Teachers from the above-mentioned classrooms reported their voice symptoms and respiratory tract diseases, and completed a Voice Handicap Index (VHI). The more voice ergonomic risk factors found in the classroom, the higher were the teachers' total scores on voice symptoms and VHI. Stress was the factor that correlated most strongly with voice symptoms. Poor indoor air quality increased the occurrence of laryngitis. Voice ergonomics were poor in the classrooms studied, and voice ergonomic risk factors affected the voice. It is important to convey information on voice ergonomics to education administrators and those responsible for school planning and taking care of school buildings. Copyright © 2012 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  12. The prevalence of voice disorders in 911 emergency telecommunicators.

    PubMed

    Johns-Fiedler, Heidi; van Mersbergen, Miriam

    2015-05-01

    Emergency 911 dispatchers or telecommunicators have been cited as occupational voice users who could be at risk for voice disorders. To test the theoretical assumption that the 911 emergency telecommunicators (911ETCs) are exposed to risk for voice disorders because of their heavy vocal load, this study assessed the prevalence of voice complaints in 911ETCs. A cross-sectional survey was sent to two large national organizations for 911ETCs with 71 complete responses providing information about voice health, voice complaints, and work load. Although 911ETCs have a higher rate of reported voice symptoms and score higher on the Voice Handicap Index-10 than the general public, they have a voice disorder diagnosis prevalence that mirrors the prevalence of the general population. The 911ETCs may be underserved in the voice community and would benefit from education on vocal health and treatments for voice complaints. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  13. Computational Modeling of Fluid–Structure–Acoustics Interaction during Voice Production

    PubMed Central

    Jiang, Weili; Zheng, Xudong; Xue, Qian

    2017-01-01

    The paper presented a three-dimensional, first-principles-based fluid–structure–acoustics interaction computer model of voice production, which employed more realistic human laryngeal and vocal tract geometries. Self-sustained vibrations, the important convergent–divergent vibration pattern of the vocal folds, and entrainment of the two dominant vibratory modes were captured. Voice quality-associated parameters including the frequency, open quotient, skewness quotient, and flow rate of the glottal flow waveform were found to be well within the normal physiological ranges. The analogy between the vocal tract and a quarter-wave resonator was demonstrated. The acoustically perturbed flux and pressure inside the glottis were found to be of the same order as their incompressible counterparts, suggesting strong source–filter interactions during voice production. Such a high-fidelity computational model will be useful for investigating a variety of pathological conditions that involve complex vibrations, such as vocal fold paralysis, vocal nodules, and vocal polyps. The model is also an important step toward a patient-specific surgical planning tool that can serve as a no-risk trial-and-error platform for different procedures, such as injection of biomaterials and thyroplastic medialization. PMID:28243588

  14. Negotiating Voice Construction between Writers and Readers in College Writing: A Case Study of an L2 Writer

    ERIC Educational Resources Information Center

    Jwa, Soomin

    2018-01-01

    Voice is co-constructed, a result of the "text-mediated interaction between the writer and the reader." The present study, using the context of U.S. college writing, explores the complicated process by which an L2 novice writer--one who has a growing awareness of, yet peripheral access to, discourse practices--constructs a voice. Through…

  15. Expectations and Experiences: The Voice of a First-Generation First-Year College Student and the Question of Student Persistence

    ERIC Educational Resources Information Center

    Stieha, Vicki

    2010-01-01

    This single case study takes a phenomenological approach using the voice centered analysis to analyze qualitative interview data so that the voice of this first-generation college student is brought forward. It is a poignant voice filled with conflicting emotional responses to the desire for college success, for family stability, for meaningful…

  16. Context, Contrast, and Tone of Voice in Auditory Sarcasm Perception.

    PubMed

    Voyer, Daniel; Thibodeau, Sophie-Hélène; Delong, Breanna J

    2016-02-01

    Four experiments were conducted to investigate the interplay between context and tone of voice in the perception of sarcasm. These experiments emphasized the role of contrast effects in sarcasm perception exclusively by means of auditory stimuli whereas most past research has relied on written material. In all experiments, a positive or negative computer-generated context spoken in a flat emotional tone was followed by a literally positive statement spoken in a sincere or sarcastic tone of voice. Participants indicated for each statement whether the intonation was sincere or sarcastic. In Experiment 1, a congruent context/tone of voice pairing (negative/sarcastic, positive/sincere) produced fast response times and proportions of sarcastic responses in the direction predicted by the tone of voice. Incongruent pairings produced mid-range proportions and slower response times. Experiment 2 introduced ambiguous contexts to determine whether a lower context/statements contrast would affect the proportion of sarcastic responses and response time. Results showed the expected findings for proportions (values between those obtained for congruent and incongruent pairings in the direction predicted by the tone of voice). However, response time failed to produce the predicted pattern, suggesting potential issues with the choice of stimuli. Experiments 3 and 4 extended the results of Experiments 1 and 2, respectively, to auditory stimuli based on written vignettes used in neuropsychological assessment. Results were exactly as predicted by contrast effects in both experiments. Taken together, the findings suggest that both context and tone influence how sarcasm is perceived while supporting the importance of contrast effects in sarcasm perception.

  17. Sensory-motor interactions for vocal pitch monitoring in non-primary human auditory cortex.

    PubMed

    Greenlee, Jeremy D W; Behroozmand, Roozbeh; Larson, Charles R; Jackson, Adam W; Chen, Fangxiang; Hansen, Daniel R; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A

    2013-01-01

    The neural mechanisms underlying processing of auditory feedback during self-vocalization are poorly understood. One technique used to study the role of auditory feedback involves shifting the pitch of the feedback that a speaker receives, known as pitch-shifted feedback. We utilized a pitch shift self-vocalization and playback paradigm to investigate the underlying neural mechanisms of audio-vocal interaction. High-resolution electrocorticography (ECoG) signals were recorded directly from the auditory cortex of 10 human subjects while they vocalized and received brief downward (-100 cents) pitch perturbations in their voice auditory feedback (speaking task). ECoG was also recorded when subjects passively listened to playback of their own pitch-shifted vocalizations. Feedback pitch perturbations elicited average evoked potential (AEP) and event-related band power (ERBP) responses, primarily in the high gamma (70-150 Hz) range, in focal areas of non-primary auditory cortex on the superior temporal gyrus (STG). The AEPs and high gamma responses were both modulated by speaking compared with playback in a subset of STG contacts. Of these contacts, a majority showed significant enhancement of high gamma power and AEP responses during speaking, while the remaining contacts showed attenuated response amplitudes. The speaking-induced enhancement effect suggests that engaging the vocal motor system can modulate auditory cortical processing of self-produced sounds in such a way as to increase neural sensitivity for feedback pitch error detection. It is likely that mechanisms such as efference copies may be involved in this process, and the modulation of AEP and high gamma responses implies that such modulatory effects may affect different cortical generators within distinctive functional networks that drive voice production and control.

  18. Sensory-Motor Interactions for Vocal Pitch Monitoring in Non-Primary Human Auditory Cortex

    PubMed Central

    Larson, Charles R.; Jackson, Adam W.; Chen, Fangxiang; Hansen, Daniel R.; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A.

    2013-01-01

    The neural mechanisms underlying processing of auditory feedback during self-vocalization are poorly understood. One technique used to study the role of auditory feedback involves shifting the pitch of the feedback that a speaker receives, known as pitch-shifted feedback. We utilized a pitch shift self-vocalization and playback paradigm to investigate the underlying neural mechanisms of audio-vocal interaction. High-resolution electrocorticography (ECoG) signals were recorded directly from the auditory cortex of 10 human subjects while they vocalized and received brief downward (−100 cents) pitch perturbations in their voice auditory feedback (speaking task). ECoG was also recorded when subjects passively listened to playback of their own pitch-shifted vocalizations. Feedback pitch perturbations elicited average evoked potential (AEP) and event-related band power (ERBP) responses, primarily in the high gamma (70–150 Hz) range, in focal areas of non-primary auditory cortex on the superior temporal gyrus (STG). The AEPs and high gamma responses were both modulated by speaking compared with playback in a subset of STG contacts. Of these contacts, a majority showed significant enhancement of high gamma power and AEP responses during speaking, while the remaining contacts showed attenuated response amplitudes. The speaking-induced enhancement effect suggests that engaging the vocal motor system can modulate auditory cortical processing of self-produced sounds in such a way as to increase neural sensitivity for feedback pitch error detection. It is likely that mechanisms such as efference copies may be involved in this process, and the modulation of AEP and high gamma responses implies that such modulatory effects may affect different cortical generators within distinctive functional networks that drive voice production and control. PMID:23577157

  19. Audio-vocal system regulation in children with autism spectrum disorders.

    PubMed

    Russo, Nicole; Larson, Charles; Kraus, Nina

    2008-06-01

    Do children with autism spectrum disorders (ASD) respond to perturbations in auditory feedback similarly to typically developing (TD) children? Presentation of pitch-shifted voice auditory feedback to vocalizing participants reveals a close coupling between the processing of auditory feedback and vocal motor control. This paradigm was used to test the hypothesis that abnormalities in the audio-vocal system would negatively impact ASD compensatory responses to perturbed auditory feedback. Voice fundamental frequency (F0) was measured while children produced an /a/ sound into a microphone. The voice signal was fed back to the subjects in real time through headphones. During production, the feedback was pitch shifted (-100 cents, 200 ms) at random intervals for 80 trials. Averaged voice F0 responses to pitch-shifted stimuli were calculated and correlated with both mental and language abilities as measured by standardized tests. A subset of children with ASD produced larger responses to perturbed auditory feedback than TD children, while the other children with ASD produced significantly lower response magnitudes. Furthermore, robust relationships between language ability, response magnitude, and time of peak magnitude were identified. Because auditory feedback helps to stabilize voice F0 (a major acoustic cue of prosody) and individuals with ASD have problems with prosody, this study identified potential mechanisms of dysfunction in the audio-vocal system for voice pitch regulation in some children with ASD. Objectively quantifying this deficit may inform the assessment of a subgroup of ASD children with prosody deficits, as well as remediation strategies that incorporate pitch training.

  20. Voice emotion perception and production in cochlear implant users.

    PubMed

    Jiam, N T; Caldwell, M; Deroche, M L; Chatterjee, M; Limb, C J

    2017-09-01

    Voice emotion is a fundamental component of human social interaction and social development. Unfortunately, cochlear implant (CI) users are often forced to interface with highly degraded prosodic cues as a result of device constraints in extraction, processing, and transmission. As such, individuals with cochlear implants frequently demonstrate significant difficulty in recognizing voice emotions in comparison to their normal-hearing counterparts. Cochlear implant-mediated perception and production of voice emotion is an important but relatively understudied area of research. However, a rich understanding of voice emotion auditory processing offers opportunities to improve CI biomedical design and to develop training programs benefiting CI performance. In this review, we address the issues, current literature, and future directions for improved voice emotion processing in cochlear implant users. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Guidelines for Selecting Microphones for Human Voice Production Research

    ERIC Educational Resources Information Center

    Svec, Jan G.; Granqvist, Svante

    2010-01-01

    Purpose: This tutorial addresses fundamental characteristics of microphones (frequency response, frequency range, dynamic range, and directionality), which are important for accurate measurements of voice and speech. Method: Technical and voice literature was reviewed and analyzed. The following recommendations on desirable microphone…

  2. Repetitive transcranial magnetic stimulation of Broca's area affects verbal responses to gesture observation.

    PubMed

    Gentilucci, Maurizio; Bernardis, Paolo; Crisi, Girolamo; Dalla Volta, Riccardo

    2006-07-01

    The aim of the present study was to determine whether Broca's area is involved in translating some aspects of arm gesture representations into mouth articulation gestures. In Experiment 1, we applied low-frequency repetitive transcranial magnetic stimulation over Broca's area, and over the symmetrical loci of the right hemisphere, of participants responding verbally to communicative spoken words, to gestures, or to the simultaneous presentation of the two signals. We also performed sham stimulation over the left stimulation loci. In Experiment 2, we applied the same stimulations as in Experiment 1 to participants responding with words congruent or incongruent with gestures. After sham stimulation, voicing parameters were enhanced when responding to communicative spoken words or to gestures as compared with a control condition of word reading. This effect increased when participants responded to the simultaneous presentation of both communicative signals. In contrast, voicing was interfered with when the verbal responses were incongruent with gestures. Stimulation over the left hemisphere induced neither enhancement of voicing parameters for words congruent with gestures nor interference for words incongruent with gestures. We interpreted the enhancement of the verbal response to gesturing in terms of an intention to interact directly. Consequently, we proposed that Broca's area is involved in translating into speech those aspects concerning the social intention coded by the gesture. Moreover, we discussed the results in terms of evolution, in support of the theory [Corballis, M. C. (2002). From hand to mouth: The origins of language. Princeton, NJ: Princeton University Press] that spoken language evolved from an ancient communication system using arm gestures.

  3. Phonological experience modulates voice discrimination: Evidence from functional brain networks analysis.

    PubMed

    Hu, Xueping; Wang, Xiangpeng; Gu, Yan; Luo, Pei; Yin, Shouhang; Wang, Lijun; Fu, Chao; Qiao, Lei; Du, Yi; Chen, Antao

    2017-10-01

    Numerous behavioral studies have found a modulation effect of phonological experience on voice discrimination. However, the neural substrates underpinning this phenomenon are poorly understood. Here we manipulated language familiarity to test the hypothesis that phonological experience affects voice discrimination by mediating the engagement of multiple perceptual and cognitive resources. The results showed that during voice discrimination, the activation of several prefrontal regions was modulated by language familiarity. More importantly, the same effect was observed for the functional connectivity from the fronto-parietal network to the voice-identity network (VIN), and from the default mode network to the VIN. Our findings indicate that phonological experience could bias the recruitment of cognitive control and information retrieval/comparison processes during voice discrimination. The study therefore unravels the neural substrates subserving the modulation effect of phonological experience on voice discrimination and provides new insights into studying voice discrimination from the perspective of network interactions. Copyright © 2017. Published by Elsevier Inc.

  4. Writing about rape: use of the passive voice and other distancing text features as an expression of perceived responsibility of the victim.

    PubMed

    Bohner, G

    2001-12-01

    The hypothesis that the passive voice is used to put the actor in the background and the acted-upon person in the focus of discourse is tested in the realm of sexual violence. German university students (N = 67) watched a silent video segment depicting a rape whose circumstances, depending on condition, could or could not be easily interpreted in terms of rape myths. Then they wrote down what they had seen, judged the responsibility of assailant and victim, and completed a rape-myth acceptance scale. Participants used the passive voice more frequently to describe the rape itself vs. other actions they had watched. When circumstances of the rape were easily interpretable in terms of rape myths, use of the passive voice correlated positively with rape-myth acceptance and perceived responsibility of the victim, and negatively with perceived responsibility of the assailant. The language of headlines that participants generated for their reports also reflected judgments of assailant and victim responsibility. Implications for the non-reactive assessment of responsibility attributions and directions for future research are discussed.

  5. Vocal aging and adductor spasmodic dysphonia: response to botulinum toxin injection.

    PubMed

    Cannito, Michael P; Kahane, Joel C; Chorna, Lesya

    2008-01-01

    Aging of the larynx is characterized by involutional changes which alter its biomechanical and neural properties and create a biological environment that is different from that of younger counterparts. Illustrative anatomical examples are presented. This natural, non-disease process appears to set conditions which may influence the effectiveness of botulinum toxin injection and our expectations for its success. Adductor spasmodic dysphonia (ADSD), a type of laryngeal dystonia, is typically treated using botulinum toxin injections of the vocal folds in order to suppress adductory muscle spasms which are disruptive to production of speech and voice. A few studies have suggested a diminished response to treatment in older patients with adductor spasmodic dysphonia. This retrospective study provides a reanalysis of existing pre-to-post treatment data as a function of age. Perceptual judgments of speech produced by 42 patients with ADSD were made by two panels of professional listeners with expertise in voice or fluency of speech. Results demonstrate a markedly reduced positive response to botulinum toxin treatment in the older patients. Perceptual findings are further elucidated by means of acoustic spectrography. Literature on vocal aging is reviewed to provide a specific set of biological mechanisms that best account for the observed interaction of botulinum toxin treatment with advancing age.

  6. The smartphone and the driver's cognitive workload: A comparison of Apple, Google, and Microsoft's intelligent personal assistants.

    PubMed

    Strayer, David L; Cooper, Joel M; Turrill, Jonna; Coleman, James R; Hopman, Rachel J

    2017-06-01

    The goal of this research was to examine the impact of voice-based interactions using 3 different intelligent personal assistants (Apple's Siri, Google's Google Now for Android phones, and Microsoft's Cortana) on the cognitive workload of the driver. In 2 experiments using an instrumented vehicle on suburban roadways, we measured the cognitive workload of drivers when they used the voice-based features of each smartphone to place a call, select music, or send text messages. Cognitive workload was derived from primary task performance through video analysis, secondary-task performance using the Detection Response Task (DRT), and subjective mental workload. We found that workload was significantly higher than that measured in the single-task drive. There were also systematic differences between the smartphones: The Google system placed lower cognitive demands on the driver than the Apple and Microsoft systems, which did not differ. Video analysis revealed that the difference in mental workload between the smartphones was associated with the number of system errors, the time to complete an action, and the complexity and intuitiveness of the devices. Finally, surprisingly high levels of cognitive workload were observed when drivers were interacting with the devices: "on-task" workload measures did not systematically differ from those associated with a mentally demanding Operation Span (OSPAN) task. The analysis also found residual costs associated with using each of the smartphones that took a significant time to dissipate. The data suggest that caution is warranted in the use of smartphone voice-based technology in the vehicle because of the high levels of cognitive workload associated with these interactions. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  7. Error-dependent modulation of speech-induced auditory suppression for pitch-shifted voice feedback.

    PubMed

    Behroozmand, Roozbeh; Larson, Charles R

    2011-06-06

    The motor-driven predictions about expected sensory feedback (efference copies) have been proposed to play an important role in recognition of sensory consequences of self-produced motor actions. In the auditory system, this effect was suggested to result in suppression of sensory neural responses to self-produced voices that are predicted by the efference copies during vocal production in comparison with passive listening to the playback of the identical self-vocalizations. In the present study, event-related potentials (ERPs) were recorded in response to upward pitch shift stimuli (PSS) with five different magnitudes (0, +50, +100, +200 and +400 cents) at voice onset during active vocal production and passive listening to the playback. Results indicated that the suppression of the N1 component during vocal production was largest for unaltered voice feedback (PSS: 0 cents), became smaller as the magnitude of PSS increased to 200 cents, and was almost completely eliminated in response to 400 cents stimuli. Findings of the present study suggest that the brain utilizes the motor predictions (efference copies) to determine the source of incoming stimuli and maximally suppresses the auditory responses to unaltered feedback of self-vocalizations. The reduction of suppression for 50, 100 and 200 cents and its elimination for 400 cents pitch-shifted voice auditory feedback support the idea that motor-driven suppression of voice feedback leads to distinctly different sensory neural processing of self vs. non-self vocalizations. This characteristic may enable the audio-vocal system to more effectively detect and correct for unexpected errors in the feedback of self-produced voice pitch compared with externally-generated sounds.
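
    The stimulus magnitudes above are expressed in cents, the logarithmic pitch unit in which 100 cents equals one equal-tempered semitone and 1200 cents equals one octave; a shift of c cents scales frequency by 2^(c/1200). A brief Python sketch of that conversion (the 220 Hz baseline is only an illustrative value, not one reported in the study):

        def apply_cents_shift(f0_hz, cents):
            """Return f0_hz shifted by `cents` cents (100 cents = 1 semitone)."""
            return f0_hz * 2.0 ** (cents / 1200.0)

        # Shifted frequencies for a 220 Hz voice at the magnitudes used above:
        for cents in (0, 50, 100, 200, 400):
            print(cents, round(apply_cents_shift(220.0, cents), 1))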

  8. Voice disorders in teachers and the general population: effects on work performance, attendance, and future career choices.

    PubMed

    Roy, Nelson; Merrill, Ray M; Thibeault, Susan; Gray, Steven D; Smith, Elaine M

    2004-06-01

    To examine the frequency and adverse effects of voice disorders on job performance and attendance in teachers and the general population, 2,401 participants from Iowa and Utah (n1 = 1,243 teachers and n2 = 1,279 nonteachers) were randomly selected and were interviewed by telephone using a voice disorder questionnaire. Teachers were significantly more likely than nonteachers to have experienced multiple voice symptoms and signs including hoarseness, discomfort, and increased effort while using their voice, tiring or experiencing a change in voice quality after short use, difficulty projecting their voice, trouble speaking or singing softly, and a loss of their singing range (all odds ratios [ORs] p <.05). Furthermore, teachers consistently attributed these voice symptoms to their occupation and were significantly more likely to indicate that their voice limited their ability to perform certain tasks at work, and had reduced activities or interactions as a result. Teachers, as compared with nonteachers, had missed more workdays over the preceding year because of voice problems and were more likely to consider changing occupations because of their voice (all comparisons p <.05). These findings strongly suggest that occupationally related voice dysfunction in teachers can have significant adverse effects on job performance, attendance, and future career choices.

  9. Compensation for pitch-shifted auditory feedback during the production of Mandarin tone sequences

    NASA Astrophysics Data System (ADS)

    Xu, Yi; Larson, Charles R.; Bauer, Jay J.; Hain, Timothy C.

    2004-08-01

    Recent research has found that while speaking, subjects react to perturbations in pitch of voice auditory feedback by changing their voice fundamental frequency (F0) to compensate for the perceived pitch-shift. The long response latencies (150-200 ms) suggest they may be too slow to assist in on-line control of the local pitch contour patterns associated with lexical tones on a syllable-to-syllable basis. In the present study, we introduced pitch-shifted auditory feedback to native speakers of Mandarin Chinese while they produced disyllabic sequences /ma ma/ with different tonal combinations at a natural speaking rate. Voice F0 response latencies (100-150 ms) to the pitch perturbations were shorter than syllable durations reported elsewhere. Response magnitudes increased from 50 cents during static tone to 85 cents during dynamic tone productions. Response latencies and peak times decreased in phrases involving a dynamic change in F0. The larger response magnitudes and shorter latency and peak times in tasks requiring accurate, dynamic control of F0, indicate this automatic system for regulation of voice F0 may be task-dependent. These findings suggest that auditory feedback may be used to help regulate voice F0 during production of bi-tonal Mandarin phrases.

  10. Finding and Learning to Use the Singing Voice: A Manual for Teachers.

    ERIC Educational Resources Information Center

    Gould, A. Oren

    The child who is unable to reproduce a melody at a given pitch range can begin to "carry a tune" by learning to hear and control his singing voice and to match his voice with voices of other singers or with instruments. The "too low problem singer," the child with the most common difficulty, must learn to make successful song responses in his…

  11. Inadequate vocal hygiene habits associated with the presence of self-reported voice symptoms in telemarketers.

    PubMed

    Fuentes-López, Eduardo; Fuente, Adrian; Contreras, Karem V

    2017-12-18

    The aim of this study is to determine possible associations between vocal hygiene habits and self-reported vocal symptoms in telemarketers. A cross-sectional study that included 79 operators from call centres in Chile was carried out. Their vocal hygiene habits and self-reported symptoms were investigated using a validated and reliable questionnaire created for the purposes of this study. Forty-five percent of telemarketers reported having one or more vocal symptoms. Among them, 16.46% reported that their voices tense up when talking and 10.13% needed to clear their throat to make their voices clearer. Five percent mentioned that they always talk without taking a break and 40.51% reported using their voices in noisy environments. The number of working hours per day and inadequate vocal hygiene habits were associated with the presence of self-reported symptoms. Additionally, an interaction between the use of the voice in noisy environments and not taking breaks during the day was observed. Finally, the frequency of inadequate vocal hygiene habits was associated with the number of symptoms reported. Using the voice in noisy environments and talking without taking breaks were both associated with the presence of specific vocal symptoms. This study provides some evidence about the interaction between these two inadequate vocal hygiene habits that potentiates vocal symptoms.

  12. Analysis of Measured and Simulated Supraglottal Acoustic Waves.

    PubMed

    Fraile, Rubén; Evdokimova, Vera V; Evgrafova, Karina V; Godino-Llorente, Juan I; Skrelin, Pavel A

    2016-09-01

    To date, although much attention has been paid to the estimation and modeling of the voice source (ie, the glottal airflow volume velocity), the measurement and characterization of the supraglottal pressure wave have been much less studied. Some previous results have unveiled that the supraglottal pressure wave has some spectral resonances similar to those of the voice pressure wave. This makes the supraglottal wave partially intelligible. Although the explanation for such effect seems to be clearly related to the reflected pressure wave traveling upstream along the vocal tract, the influence that nonlinear source-filter interaction has on it is not as clear. This article provides an insight into this issue by comparing the acoustic analyses of measured and simulated supraglottal and voice waves. Simulations have been performed using a high-dimensional discrete vocal fold model. Results of such comparative analysis indicate that spectral resonances in the supraglottal wave are mainly caused by the regressive pressure wave that travels upstream along the vocal tract and not by source-tract interaction. On the contrary and according to simulation results, source-tract interaction has a role in the loss of intelligibility that happens in the supraglottal wave with respect to the voice wave. This loss of intelligibility mainly corresponds to spectral differences for frequencies above 1500 Hz. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  13. Neurobiological correlates of emotional intelligence in voice and face perception networks

    PubMed Central

    Karle, Kathrin N; Ethofer, Thomas; Jacob, Heike; Brück, Carolin; Erb, Michael; Lotze, Martin; Nizielski, Sophia; Schütz, Astrid; Wildgruber, Dirk; Kreifelts, Benjamin

    2018-01-01

    Abstract Facial expressions and voice modulations are among the most important communicational signals to convey emotional information. The ability to correctly interpret this information is highly relevant for successful social interaction and represents an integral component of emotional competencies that have been conceptualized under the term emotional intelligence. Here, we investigated the relationship of emotional intelligence as measured with the Salovey-Caruso-Emotional-Intelligence-Test (MSCEIT) with cerebral voice and face processing using functional and structural magnetic resonance imaging. MSCEIT scores were positively correlated with increased voice-sensitivity and gray matter volume of the insula accompanied by voice-sensitivity enhanced connectivity between the insula and the temporal voice area, indicating generally increased salience of voices. Conversely, in the face processing system, higher MSCEIT scores were associated with decreased face-sensitivity and gray matter volume of the fusiform face area. Taken together, these findings point to an alteration in the balance of cerebral voice and face processing systems in the form of an attenuated face-vs-voice bias as one potential factor underpinning emotional intelligence. PMID:29365199

  14. Neurobiological correlates of emotional intelligence in voice and face perception networks.

    PubMed

    Karle, Kathrin N; Ethofer, Thomas; Jacob, Heike; Brück, Carolin; Erb, Michael; Lotze, Martin; Nizielski, Sophia; Schütz, Astrid; Wildgruber, Dirk; Kreifelts, Benjamin

    2018-02-01

    Facial expressions and voice modulations are among the most important communicational signals to convey emotional information. The ability to correctly interpret this information is highly relevant for successful social interaction and represents an integral component of emotional competencies that have been conceptualized under the term emotional intelligence. Here, we investigated the relationship of emotional intelligence as measured with the Salovey-Caruso-Emotional-Intelligence-Test (MSCEIT) with cerebral voice and face processing using functional and structural magnetic resonance imaging. MSCEIT scores were positively correlated with increased voice-sensitivity and gray matter volume of the insula accompanied by voice-sensitivity enhanced connectivity between the insula and the temporal voice area, indicating generally increased salience of voices. Conversely, in the face processing system, higher MSCEIT scores were associated with decreased face-sensitivity and gray matter volume of the fusiform face area. Taken together, these findings point to an alteration in the balance of cerebral voice and face processing systems in the form of an attenuated face-vs-voice bias as one potential factor underpinning emotional intelligence.

  15. Dysphonia, Perceived Control, and Psychosocial Distress: A Qualitative Study.

    PubMed

    Misono, Stephanie; Haut, Caroline; Meredith, Liza; Frazier, Patricia A; Stockness, Ali; Michael, Deirdre D; Butcher, Lisa; Harwood, Eileen M

    2018-05-11

    The purpose of this qualitative study was to examine relationships between psychological factors, particularly perceived control, and voice symptoms in adults seeking treatment for a voice problem. Semistructured interviews of adult patients with a clinical diagnosis of muscle tension dysphonia were conducted and transcribed. Follow-up interviews were conducted as needed for further information or clarification. A multidisciplinary team analyzed interview content using inductive techniques. Common themes and subthemes were identified. A conceptual model was developed describing the association between voice symptoms, psychological factors, precipitants of ongoing voice symptoms, and perceived control. Thematic saturation was reached after 23 interviews. No participants reported a direct psychological cause for their voice problem, although half described significant life events preceding voice problem onset (eg, miscarriage and other health events, interpersonal conflicts, and family members' illnesses, injuries, and deaths). Participants described psychological influences on voice symptoms that led to rapid exacerbation of their voice symptoms. Participants described the helpfulness of speech therapy and sometimes also challenges of applying techniques in daily life. They also discussed personal coping strategies that included behavioral (eg, avoiding triggers and seeking social support) and psychological (eg, mind-body awareness and emotion regulation) components. Voice-related perceived control was associated with adaptive emotional and behavioral responses, which appeared to facilitate symptom improvement. In this qualitative pilot study, participant narratives suggested that psychological factors and emotions influence voice symptoms, facilitating development of a preliminary conceptual model of how adaptive and maladaptive responses develop and how they influence vocal function. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  16. Increasing communicative interactions of young children with autism using a voice output communication aid and naturalistic teaching.

    PubMed

    Schepis, M M; Reid, D H; Behrmann, M M; Sutton, K A

    1998-01-01

    We evaluated the effects of a voice output communication aid (VOCA) and naturalistic teaching procedures on the communicative interactions of young children with autism. A teacher and three assistants were taught to use naturalistic teaching strategies to provide opportunities for VOCA use in the context of regularly occurring classroom routines. Naturalistic teaching procedures and VOCA use were introduced in multiple probe fashion across 4 children and two classroom routines (snack and play). As the procedures were implemented, all children showed increases in communicative interactions using VOCAs. Also, there was no apparent reductive effect of VOCA use within the naturalistic teaching paradigm on other communicative behaviors. Teachers' ratings of children's VOCA communication, as well as ratings of a person unfamiliar with the children, supported the contextual appropriateness of the VOCA. Probes likewise indicated that the children used the VOCAs for a variety of different messages including requests, yes and no responses, statements, and social comments. Results are discussed in regard to the potential benefits of a VOCA when combined with naturalistic teaching procedures. Future research needs are also discussed, focusing on more precise identification of the attributes of VOCA use for children with autism, as well as for their support personnel.

  17. Voice hearing within the context of hearers' social worlds: an interpretative phenomenological analysis.

    PubMed

    Mawson, Amy; Berry, Katherine; Murray, Craig; Hayward, Mark

    2011-09-01

    Research has found relational qualities of power and intimacy to exist within hearer-voice interactions. The present study aimed to provide a deeper understanding of the interpersonal context of voice hearing by exploring participants' relationships with their voices and other people in their lives. This research was designed in consultation with service users and employed a qualitative, phenomenological, and idiographic design using semi-structured interviews. Ten participants, recruited via mental health services, and who reported hearing voices in the previous week, completed the interviews. These were transcribed verbatim and analysed using interpretative phenomenological analysis. Five themes resulted from the analysis. Theme 1: 'person and voice' demonstrated that participants' voices often reflected the identity, but not always the quality of social acquaintances. Theme 2: 'voices changing and confirming relationship with the self' explored the impact of voice hearing in producing an inferior sense-of-self in comparison to others. Theme 3: 'a battle for control' centred on issues of control and a dilemma of independence within voice relationships. Theme 4: 'friendships facilitating the ability to cope' and theme 5: 'voices creating distance in social relationships' explored experiences of social relationships within the context of voice hearing, and highlighted the impact of social isolation for voice hearers. The study demonstrated the potential role of qualitative research in developing theories of voice hearing. It extended previous research by highlighting the interface between voices and the social world of the hearer, including reciprocal influences of social relationships on voices and coping. Improving voice hearers' sense-of-self may be a key factor in reducing the distress caused by voices. ©2010 The British Psychological Society.

  18. Error-dependent modulation of speech-induced auditory suppression for pitch-shifted voice feedback

    PubMed Central

    2011-01-01

    Background The motor-driven predictions about expected sensory feedback (efference copies) have been proposed to play an important role in recognition of sensory consequences of self-produced motor actions. In the auditory system, this effect was suggested to result in suppression of sensory neural responses to self-produced voices that are predicted by the efference copies during vocal production in comparison with passive listening to the playback of the identical self-vocalizations. In the present study, event-related potentials (ERPs) were recorded in response to upward pitch shift stimuli (PSS) with five different magnitudes (0, +50, +100, +200 and +400 cents) at voice onset during active vocal production and passive listening to the playback. Results Results indicated that the suppression of the N1 component during vocal production was largest for unaltered voice feedback (PSS: 0 cents), became smaller as the magnitude of PSS increased to 200 cents, and was almost completely eliminated in response to 400 cents stimuli. Conclusions Findings of the present study suggest that the brain utilizes the motor predictions (efference copies) to determine the source of incoming stimuli and maximally suppresses the auditory responses to unaltered feedback of self-vocalizations. The reduction of suppression for 50, 100 and 200 cents and its elimination for 400 cents pitch-shifted voice auditory feedback support the idea that motor-driven suppression of voice feedback leads to distinctly different sensory neural processing of self vs. non-self vocalizations. This characteristic may enable the audio-vocal system to more effectively detect and correct for unexpected errors in the feedback of self-produced voice pitch compared with externally-generated sounds. PMID:21645406

  19. Cause-effect relationship between vocal fold physiology and voice production in a three-dimensional phonation model

    PubMed Central

    Zhang, Zhaoyan

    2016-01-01

    The goal of this study is to better understand the cause-effect relation between vocal fold physiology and the resulting vibration pattern and voice acoustics. Using a three-dimensional continuum model of phonation, the effects of changes in vocal fold stiffness, medial surface thickness in the vertical direction, resting glottal opening, and subglottal pressure on vocal fold vibration and different acoustic measures are investigated. The results show that the medial surface thickness has dominant effects on the vertical phase difference between the upper and lower margins of the medial surface, closed quotient, H1-H2, and higher-order harmonics excitation. The main effects of vocal fold approximation or decreasing resting glottal opening are to lower the phonation threshold pressure, reduce noise production, and increase the fundamental frequency. Increasing subglottal pressure is primarily responsible for vocal intensity increase but also leads to significant increase in noise production and an increased fundamental frequency. Increasing AP stiffness significantly increases the fundamental frequency and slightly reduces noise production. The interaction among vocal fold thickness, stiffness, approximation, and subglottal pressure in the control of F0, vocal intensity, and voice quality is discussed. PMID:27106298

  20. Driving While Interacting With Google Glass: Investigating the Combined Effect of Head-Up Display and Hands-Free Input on Driving Safety and Multitask Performance.

    PubMed

    Tippey, Kathryn G; Sivaraj, Elayaraj; Ferris, Thomas K

    2017-06-01

    This study evaluated the individual and combined effects of voice (vs. manual) input and head-up (vs. head-down) display in a driving and device interaction task. Advances in wearable technology offer new possibilities for in-vehicle interaction but also present new challenges for managing driver attention and regulating device usage in vehicles. This research investigated how driving performance is affected by interface characteristics of devices used for concurrent secondary tasks. A positive impact on driving performance was expected when devices included voice-to-text functionality (reducing demand for visual and manual resources) and a head-up display (HUD) (supporting greater visibility of the driving environment). Driver behavior and performance were compared in a texting-while-driving task set during a driving simulation. The texting task was completed with and without voice-to-text using a smartphone and with voice-to-text using Google Glass's HUD. Driving task performance degraded with the addition of the secondary texting task. However, voice-to-text input supported relatively better performance in both driving and texting tasks compared to manual entry. HUD functionality further improved driving performance compared to conditions using a smartphone and often was not significantly worse than performance without the texting task. This study suggests that despite the performance costs of texting-while-driving, voice input methods improve performance over manual entry, and head-up displays may further extend those performance benefits. This study can inform designers and potential users of wearable technologies as well as policymakers tasked with regulating the use of these technologies while driving.

  1. Memory strength and specificity revealed by pupillometry

    PubMed Central

    Papesh, Megan H.; Goldinger, Stephen D.; Hout, Michael C.

    2011-01-01

    Voice-specificity effects in recognition memory were investigated using both behavioral data and pupillometry. Volunteers initially heard spoken words and nonwords in two voices; they later provided confidence-based old/new classifications to items presented in their original voices, changed (but familiar) voices, or entirely new voices. Recognition was more accurate for old-voice items, replicating prior research. Pupillometry was used to gauge cognitive demand during both encoding and testing: Enlarged pupils revealed that participants devoted greater effort to encoding items that were subsequently recognized. Further, pupil responses were sensitive to the cue match between encoding and retrieval voices, as well as memory strength. Strong memories, and those with the closest encoding-retrieval voice matches, resulted in the highest peak pupil diameters. The results are discussed with respect to episodic memory models and Whittlesea’s (1997) SCAPE framework for recognition memory. PMID:22019480

  2. Swinging at a cocktail party: voice familiarity aids speech perception in the presence of a competing voice.

    PubMed

    Johnsrude, Ingrid S; Mackey, Allison; Hakyemez, Hélène; Alexander, Elizabeth; Trang, Heather P; Carlyon, Robert P

    2013-10-01

    People often have to listen to someone speak in the presence of competing voices. Much is known about the acoustic cues used to overcome this challenge, but almost nothing is known about the utility of cues derived from experience with particular voices--cues that may be particularly important for older people and others with impaired hearing. Here, we use a version of the coordinate-response-measure procedure to show that people can exploit knowledge of a highly familiar voice (their spouse's) not only to track it better in the presence of an interfering stranger's voice, but also, crucially, to ignore it so as to comprehend a stranger's voice more effectively. Although performance declines with increasing age when the target voice is novel, there is no decline when the target voice belongs to the listener's spouse. This finding indicates that older listeners can exploit their familiarity with a speaker's voice to mitigate the effects of sensory and cognitive decline.

  3. Near-term fetal response to maternal spoken voice

    PubMed Central

    Voegtline, Kristin M.; Costigan, Kathleen A.; Pater, Heather A.; DiPietro, Janet A.

    2013-01-01

    Knowledge about prenatal learning has been largely predicated on the observation that newborns appear to recognize the maternal voice. Few studies have examined the process underlying this phenomenon; that is, whether and how the fetus responds to maternal voice in situ. Fetal heart rate and motor activity were recorded at 36 weeks gestation (n = 69) while pregnant women read aloud from a neutral passage. Compared to a baseline period, fetuses responded with a decrease in motor activity in the 10 seconds following the onset of maternal speech and a trend-level decelerative heart rate response, consistent with an orienting response. Subsequent analyses revealed that the fetal response was modified by both maternal and fetal factors. Fetuses of women who were previously awake and talking (n = 40) showed an orienting response to the onset of maternal reading aloud, while fetuses of mothers who had previously been resting and silent (n = 29) responded with elevated heart rate and increased movement. The magnitude of the fetal response was further dependent on baseline fetal heart rate variability, such that the largest response was shown by low-variability fetuses whose mothers had previously been resting and silent. Results indicate that fetal responsivity is affected by both maternal and fetal state and have implications for understanding fetal learning of the maternal voice under naturalistic conditions. PMID:23748167

  4. Telehealth: voice therapy using telecommunications technology.

    PubMed

    Mashima, Pauline A; Birkmire-Peters, Deborah P; Syms, Mark J; Holtel, Michael R; Burgess, Lawrence P A; Peters, Leslie J

    2003-11-01

    Telehealth offers the potential to meet the needs of underserved populations in remote regions. The purpose of this study was a proof-of-concept to determine whether voice therapy can be delivered effectively remotely. Treatment outcomes were evaluated for a vocal rehabilitation protocol delivered under 2 conditions: with the patient and clinician interacting within the same room (conventional group) and with the patient and clinician in separate rooms, interacting in real time via a hard-wired video camera and monitor (video teleconference group). Seventy-two patients with voice disorders served as participants. Based on evaluation by otolaryngologists, 31 participants were diagnosed with vocal nodules, 29 were diagnosed with edema, 9 were diagnosed with unilateral vocal fold paralysis, and 3 presented with vocal hyperfunction with no laryngeal pathology. Fifty-one participants (71%) completed the vocal rehabilitation protocol. Outcome measures included perceptual judgments of voice quality, acoustic analyses of voice, patient satisfaction ratings, and fiber-optic laryngoscopy. There were no differences in outcome measures between the conventional group and the remote video teleconference group. Participants in both groups showed positive changes on all outcome measures after completing the vocal rehabilitation protocol. Reasons for participants discontinuing therapy prematurely provided support for the telehealth model of service delivery.

  5. Sounding the 'citizen-patient': the politics of voice at the Hospice Des Quinze-vingts in post-revolutionary Paris.

    PubMed

    Sykes, Ingrid

    2011-10-01

    This essay explores new models of the citizen-patient by attending to the post-Revolutionary blind 'voice'. Voice, in both a literal and figurative sense, was central to the way in which members of the Hospice des Quinze-Vingts, an institution for the blind and partially sighted, interacted with those in the community. Musical voices had been used by members to collect alms and to project the particular spiritual principle of their institution since its foundation in the thirteenth century. At the time of the Revolution, the Quinze-Vingts voice was understood by some political authorities as an exemplary call of humanity. Yet many others perceived it as deeply threatening. After 1800, productive dialogue between those in political control and Quinze-Vingts blind members broke down. Authorities attempted to silence the voice of members through the control of blind musicians and institutional management. The Quinze-Vingts blind continued to reassert their voices until around 1850, providing a powerful form of resistance to political control. The blind 'voice' ultimately recognised the right of the citizen-patient to dialogue with their political carers.

  6. On the definition and interpretation of voice selective activation in the temporal cortex

    PubMed Central

    Bethmann, Anja; Brechmann, André

    2014-01-01

    Regions along the superior temporal sulci and in the anterior temporal lobes have been found to be involved in voice processing. It has even been argued that parts of the temporal cortices serve as voice-selective areas. Yet, evidence for voice-selective activation in the strict sense is still missing. The current fMRI study aimed at assessing the degree of voice-specific processing in different parts of the superior and middle temporal cortices. To this end, voices of famous persons were contrasted with widely different categories, which were sounds of animals and musical instruments. The argumentation was that only brain regions with statistically proven absence of activation by the control stimuli may be considered as candidates for voice-selective areas. Neural activity was found to be stronger in response to human voices in all analyzed parts of the temporal lobes except for the middle and posterior STG. More importantly, the activation differences between voices and the other environmental sounds increased continuously from the mid-posterior STG to the anterior MTG. Here, only voices but not the control stimuli excited an increase of the BOLD response above a resting baseline level. The findings are discussed with reference to the function of the anterior temporal lobes in person recognition and the general question on how to define selectivity of brain regions for a specific class of stimuli or tasks. In addition, our results corroborate recent assumptions about the hierarchical organization of auditory processing building on a processing stream from the primary auditory cortices to anterior portions of the temporal lobes. PMID:25071527

  7. Reports of alcohol-related problems and alcohol dependence for demographic subgroups using interactive voice response versus telephone surveys: the 2005 US National Alcohol Survey.

    PubMed

    Midanik, Lorraine T; Greenfield, Thomas K

    2010-07-01

    Interactive voice response (IVR), a computer-based interviewing technique, can be used within a computer-assisted telephone interview (CATI) survey to increase privacy and the accuracy of reports of sensitive attitudes and behaviours. Previous research using the 2005 National Alcohol Survey indicated no overall significant differences between IVR and CATI responses to alcohol-related problems and alcohol dependence. To determine if this result holds for demographic subgroups that could respond differently to modes of data collection, this study compares the prevalence rates of lifetime and last-year alcohol-related problems by gender, ethnicity, age and income subgroups obtained by IVR versus continuous CATI interviewing. As part of the 2005 National Alcohol Survey, subsamples of English-speaking respondents were randomly assigned to an IVR group that received an embedded IVR module on alcohol-related problems (n = 450 lifetime drinkers) and a control group that were asked identical alcohol-related problem items using continuous CATI (n = 432 lifetime drinkers). Overall, there were few significant associations. Among lifetime drinkers, higher rates of legal problems were found for white and higher income respondents in the IVR group. For last-year drinkers, a higher percentage of indicators of alcohol dependence was found for Hispanic respondents and women respondents in the CATI group. Data on alcohol problems collected by CATI provide largely comparable results to those from an embedded IVR module. Thus, incorporation of IVR technology in a CATI interview does not appear strongly indicated even for several key subgroups.
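
    In designs like the one above, an embedded IVR module takes over a block of sensitive items: a synthesized voice reads each question and the respondent answers with keypad presses (or short spoken replies), after which control returns to the live CATI interviewer. A minimal, hypothetical Python sketch of such a module follows (the placeholder question text, response codes, and flow are illustrative and are not taken from the 2005 National Alcohol Survey):

        # Hypothetical embedded IVR module: prompts are played by a synthesized
        # voice and answered via keypad, so sensitive answers are never spoken
        # to a live interviewer; results are merged back into the CATI record.
        QUESTIONS = [
            ("item_1", "First yes/no question text goes here. Press 1 for yes, 2 for no."),
            ("item_2", "Second yes/no question text goes here. Press 1 for yes, 2 for no."),
        ]

        def run_ivr_module(play_prompt, read_keypress):
            """Run the module with injected I/O callbacks (telephony layer not shown)."""
            answers = {}
            for item_id, prompt in QUESTIONS:
                while True:
                    play_prompt(prompt)
                    key = read_keypress()
                    if key in ("1", "2"):
                        answers[item_id] = (key == "1")
                        break
                    play_prompt("Sorry, that was not a valid choice.")
            return answers

        # Console stand-ins for the telephony callbacks:
        if __name__ == "__main__":
            print(run_ivr_module(play_prompt=print, read_keypress=input))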

  8. Automated conversation system before pediatric primary care visits: a randomized trial.

    PubMed

    Adams, William G; Phillips, Barrett D; Bacic, Janine D; Walsh, Kathleen E; Shanahan, Christopher W; Paasche-Orlow, Michael K

    2014-09-01

    Interactive voice response systems integrated with electronic health records have the potential to improve primary care by engaging parents outside clinical settings via spoken language. The objective of this study was to determine whether use of an interactive voice response system, the Personal Health Partner (PHP), before routine health care maintenance visits could improve the quality of primary care visits and be well accepted by parents and clinicians. English-speaking parents of children aged 4 months to 11 years called PHP before routine visits and were randomly assigned to groups by the system at the time of the call. Parents' spoken responses were used to provide tailored counseling and support goal setting for the upcoming visit. Data were transferred to the electronic health records for review during visits. The study occurred in an urban hospital-based pediatric primary care center. Participants were called after the visit to assess (1) comprehensiveness of screening and counseling, (2) assessment of medications and their management, and (3) parent and clinician satisfaction. PHP was able to identify and counsel in multiple areas. A total of 9.7% of parents responded to the mailed invitation. Intervention parents were more likely to report discussing important issues such as depression (42.6% vs 25.4%; P < .01) and prescription medication use (85.7% vs 72.6%; P = .04) and to report being better prepared for visits. One hundred percent of clinicians reported that PHP improved the quality of their care. Systems like PHP have the potential to improve clinical screening, counseling, and medication management. Copyright © 2014 by the American Academy of Pediatrics.

  9. Development from childhood to adulthood increases morphological and functional inter-individual variability in the right superior temporal cortex.

    PubMed

    Bonte, Milene; Frost, Martin A; Rutten, Sanne; Ley, Anke; Formisano, Elia; Goebel, Rainer

    2013-12-01

    We study the developmental trajectory of morphology and function of the superior temporal cortex (STC) in children (8-9 years), adolescents (14-15 years) and young adults. We analyze cortical surface landmarks and functional MRI (fMRI) responses to voices, other natural categories and tones and examine how hemispheric asymmetry and inter-subject variability change across age. Our results show stable morphological asymmetries across age groups, including a larger left planum temporale and a deeper right superior temporal sulcus. fMRI analyses show that a rightward lateralization for voice-selective responses is present in all groups but decreases with age. Furthermore, STC responses to voices change from being less selective and more spatially diffuse in children to highly selective and focal in adults. Interestingly, the analysis of morphological landmarks reveals that inter-subject variability increases during development in the right--but not in the left--STC. Similarly, inter-subject variability of cortically-realigned functional responses to voices, other categories and tones increases with age in the right STC. Our findings reveal asymmetric developmental changes in brain regions crucial for auditory and voice perception. The age-related increase of inter-subject variability in right STC suggests that anatomy and function of this region are shaped by unique individual developmental experiences. © 2013.

  10. A voice region in the monkey brain.

    PubMed

    Petkov, Christopher I; Kayser, Christoph; Steudel, Thomas; Whittingstall, Kevin; Augath, Mark; Logothetis, Nikos K

    2008-03-01

    For vocal animals, recognizing species-specific vocalizations is important for survival and social interactions. In humans, a voice region has been identified that is sensitive to human voices and vocalizations. As this region also strongly responds to speech, it is unclear whether it is tightly associated with linguistic processing and is thus unique to humans. Using functional magnetic resonance imaging of macaque monkeys (Old World primates, Macaca mulatta), we discovered a high-level auditory region that prefers species-specific vocalizations over other vocalizations and sounds. This region not only showed sensitivity to the 'voice' of the species, but also to the vocal identity of conspecific individuals. The monkey voice region is located on the superior-temporal plane and belongs to an anterior auditory 'what' pathway. These results establish functional relationships with the human voice region and support the notion that, for different primate species, the anterior temporal regions of the brain are adapted for recognizing communication signals from conspecifics.

  11. Giving Voice to Emotion: Voice Analysis Technology Uncovering Mental States is Playing a Growing Role in Medicine, Business, and Law Enforcement.

    PubMed

    Allen, Summer

    2016-01-01

    It's tough to imagine anything more frustrating than interacting with a call center. Generally, people don't reach out to call centers when they're happy-they're usually trying to get help with a problem or gearing up to do battle over a billing error. Add in an automatic phone tree, and you have a recipe for annoyance. But what if that robotic voice offering you a smorgasbord of numbered choices could tell that you were frustrated and then funnel you to an actual human being? This type of voice analysis technology exists, and it's just one example of the many ways that computers can use your voice to extract information about your mental and emotional state-including information you may not think of as being accessible through your voice alone.

  12. Unlocking Elementary Students' Perspectives of Leadership

    ERIC Educational Resources Information Center

    Damiani, Jonathan

    2014-01-01

    This study examines whether and how principals take their lead from students, and use student voice, to create more responsive schools, and more responsible models of leadership. I consider issues of student agency and voice within four very different elementary school settings. Further, I consider the challenges students face, and the ways…

  13. Identity Orientation, Voice, and Judgments of Procedural Justice during Late Adolescence

    ERIC Educational Resources Information Center

    Fondacaro, Mark R.; Brank, Eve M.; Stuart, Jennifer; Villanueva-Abraham, Sara; Luescher, Jennifer; McNatt, Penny S.

    2006-01-01

    This study focused on the relationship between voice and judgments of procedural justice in a sample of older adolescents and examined potential moderating and mediating influences of identity orientation (personal, social, and collective) and negative emotional response. Participants read 1 of 2 different family conflict scenarios (voice and no…

  14. The Development of Apt Citizenship Education through Listening to Young People's Voices

    ERIC Educational Resources Information Center

    Warwick, Paul

    2008-01-01

    Citizenship Education (CE) and the young people's voice agenda are both enjoying increasing popularity within England at the present time. Clear connections exist between the two, with CE placing an emphasis upon participation and responsible action and the young people's voice agenda advocating democratic procedures for involving young people in…

  15. Teacher response to ambulatory monitoring of voice.

    PubMed

    Hunter, Eric J

    2012-10-01

    Voice accumulation and dosimetry devices are used for unobtrusive monitoring of voice use. While numerous studies have used these devices to examine how individuals use their voices, little attention has been paid to how subjects respond to them. Therefore, the purpose of this short communication is to begin to explore two questions: 1) How do voice monitoring devices affect daily communication? and 2) How do participants feel about the physical design and function of these types of voice monitoring devices? One key finding is that most of the subjects remain aware of the dosimeter while wearing it, which may impact the data collected. Further, most subjects have difficulty with the accelerometer and/or the data storage device.

  16. Infusing Technology into Customer Relationships: Balancing High-Tech and High-Touch

    NASA Astrophysics Data System (ADS)

    Salomann, Harald; Kolbe, Lutz; Brenner, Walter

    In today's business environment, self-service is becoming increasingly important. In order to promote their self-service activities, banks have created online-only products and airlines offer exclusive discounts for passengers booking online. Self-service technologies' practical applications demonstrate this approach's potential. For example, Amtrak introduced an IVR (Interactive Voice Response) system, allowing cost savings of 13m; likewise Royal Mail installed an IVR system leading to a reduction of its customer service costs by 25% (Economist 2004).

  17. Native voice, self-concept and the moral case for personalized voice technology.

    PubMed

    Nathanson, Esther

    2017-01-01

    Purpose (1) To explore the role of native voice and effects of voice loss on self-concept and identity, and survey the state of assistive voice technology; (2) to establish the moral case for developing personalized voice technology. Methods This narrative review examines published literature on the human significance of voice, the impact of voice loss on self-concept and identity, and the strengths and limitations of current voice technology. Based on the impact of voice loss on self and identity, and voice technology limitations, the moral case for personalized voice technology is developed. Results Given the richness of information conveyed by voice, loss of voice constrains expression of the self, but the full impact is poorly understood. Augmentative and alternative communication (AAC) devices facilitate communication but, despite advances in this field, voice output cannot yet express the unique nuances of individual voice. The ethical principles of autonomy, beneficence and equality of opportunity establish the moral responsibility to invest in accessible, cost-effective, personalized voice technology. Conclusions Although further research is needed to elucidate the full effects of voice loss on self-concept, identity and social functioning, current understanding of the profoundly negative impact of voice loss establishes the moral case for developing personalized voice technology. Implications for Rehabilitation Rehabilitation of voice-disordered patients should facilitate self-expression, interpersonal connectedness and social/occupational participation. Proactive questioning about the psychological and social experiences of patients with voice loss is a valuable entry point for rehabilitation planning. Personalized voice technology would enhance sense of self, communicative participation and autonomy and promote shared healthcare decision-making. Further research is needed to identify the best strategies to preserve and strengthen identity and sense of self.

  18. Cerebral Processing of Voice Gender Studied Using a Continuous Carryover fMRI Design

    PubMed Central

    Pernet, Cyril; Latinus, Marianne; Crabbe, Frances; Belin, Pascal

    2013-01-01

    Normal listeners effortlessly determine a person's gender by voice, but the cerebral mechanisms underlying this ability remain unclear. Here, we demonstrate 2 stages of cerebral processing during voice gender categorization. Using voice morphing along with an adaptation-optimized functional magnetic resonance imaging design, we found that secondary auditory cortex including the anterior part of the temporal voice areas in the right hemisphere responded primarily to acoustical distance from the previously heard stimulus. In contrast, a network of bilateral regions involving inferior prefrontal and anterior and posterior cingulate cortex reflected perceived stimulus ambiguity. These findings suggest that voice gender recognition involves neuronal populations along the auditory ventral stream responsible for auditory feature extraction, working in concert with the prefrontal cortex in voice gender perception. PMID:22490550

  19. Eye Movements Reveal Fast, Voice-Specific Priming

    PubMed Central

    Papesh, Megan H.; Goldinger, Stephen D.; Hout, Michael C.

    2015-01-01

    In spoken word perception, voice specificity effects are well-documented: When people hear repeated words in some task, performance is generally better when repeated items are presented in their originally heard voices, relative to changed voices. A key theoretical question about voice specificity effects concerns their time-course: Some studies suggest that episodic traces exert their influence late in lexical processing (the time-course hypothesis; McLennan & Luce, 2005), whereas others suggest that episodic traces influence immediate, online processing. We report two eye-tracking studies investigating the time-course of voice-specific priming within and across cognitive tasks. In Experiment 1, participants performed modified lexical decision or semantic classification to words spoken by four speakers. The tasks required participants to click a red “×” or a blue “+” located randomly within separate visual half-fields, necessitating trial-by-trial visual search with consistent half-field response mapping. After a break, participants completed a second block with new and repeated items, half spoken in changed voices. Voice effects were robust very early, appearing in saccade initiation times. Experiment 2 replicated this pattern while changing tasks across blocks, ruling out a response priming account. In the General Discussion, we address the time-course hypothesis, focusing on the challenge it presents for empirical disconfirmation, and highlighting the broad importance of indexical effects, beyond studies of priming. PMID:26726911

  20. Can a computer-generated voice be sincere? A case study combining music and synthetic speech.

    PubMed

    Barker, Paul; Newell, Christopher; Newell, George

    2013-10-01

    This article explores enhancing sincerity, honesty, or truthfulness in computer-generated synthetic speech by accompanying it with music. Sincerity is important if we are to respond positively to any voice, whether human or artificial. What is sincerity in the artificial disembodied voice? Studies in musical expression and performance may illuminate aspects of the 'musically spoken' or sung voice in rendering deeper levels of expression that may include sincerity. We consider one response to this notion in an especially composed melodrama (music accompanying a (synthetic) spoken voice) designed to convey sincerity.

  1. Accessibility of Mobile Devices for Visually Impaired Users: An Evaluation of the Screen-Reader VoiceOver.

    PubMed

    Smaradottir, Berglind; Håland, Jarle; Martinez, Santiago

    2017-01-01

    A mobile device's touchscreen lets users interact with the user interface through a choreography of hand gestures. A screen reader on a mobile device is designed to support visually disabled users as they interact through these gestures. This paper presents an evaluation of VoiceOver, a screen reader in Apple Inc. products. The evaluation was part of the research project "Visually impaired users touching the screen - a user evaluation of assistive technology".

  2. Negotiating towards a next turn: phonetic resources for 'doing the same'.

    PubMed

    Sikveland, Rein Ove

    2012-03-01

    This paper investigates hearers' use of response tokens (back-channels), in maintaining and differentiating their actions. Initial observations suggest that hearers produce a sequence of phonetically similar responses to disengage from the current topic, and dissimilar responses to engage with the current topic. This is studied systematically by combining detailed interactional and phonetic analysis in a collection of naturally-occurring talk in Norwegian. The interactional analysis forms the basis for labeling actions as maintained ('doing the same') and differentiated ('NOT doing the same'), which is then used as a basis for phonetic analysis. The phonetic analysis shows that certain phonetic characteristics, including pitch, loudness, voice quality and articulatory characteristics, are associated with 'doing the same', as different from 'NOT doing the same'. Interactional analysis gives further evidence of how this differentiation is of systematic relevance in the negotiations of a next turn. This paper addresses phonetic variation and variability by focusing on the relationship between sequence and phonetics in the turn-by-turn development of meaning. This has important implications for linguistic/phonetic research, and for the study of back-channels.

  3. Multipath for Agricultural and Rural Information Services in China

    NASA Astrophysics Data System (ADS)

    Ge, Ningning; Zang, Zhiyuan; Gao, Lingwang; Shi, Qiang; Li, Jie; Xing, Chunlin; Shen, Zuorui

    The internet alone cannot provide adequate information services for farmers in rural regions of China, because many of these farmers still have little or no internet access. However, the wide coverage of mobile networks, fixed telephone lines, and television networks offers a way to solve this problem. An integrated pest management platform for northern fruit trees was developed on this integrated infrastructure, combining the internet, mobile and fixed-line telephone networks, and the television network to deliver integrated pest management (IPM) information services to farmers in rural regions. Services are delivered as e-mail, telephone voice, short message, voice mail, videoconference or other formats, to users' telephones, cell phones, personal computers, personal digital assistants (PDAs), televisions, and other terminals, as appropriate. The architecture and the functions of the system are introduced in this paper. The system can manage field monitoring data on agricultural pests; handle enquiries by providing the necessary information to farmers who access the interactive voice response (IVR) service, with experts available on-line or off-line; and issue early warnings about fruit tree pests, when warranted by analysis of the pest monitoring data, through a variety of channels including SMS, fax, voice and intersystem e-mail. The system provides a platform and a new pattern for agricultural technology extension with a high coverage rate in rural regions, and it can address the 'last kilometer' problem of agricultural information services in China. The effectiveness of the system was verified.
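
    The abstract describes the multi-channel delivery only in general terms; as a purely illustrative sketch of how a pest warning might be fanned out to whichever channels a farmer has registered, the snippet below uses invented channel names, a made-up Farmer record, and print statements in place of real gateways (none of these come from the actual system).

      # Hypothetical multi-channel alert dispatcher, loosely inspired by the platform above.
      from dataclasses import dataclass

      @dataclass
      class Farmer:
          name: str
          channels: dict  # e.g. {"sms": "+86...", "voice": "+86...", "email": "name@example.org"}

      def send_sms(number, text):    print(f"[SMS to {number}] {text}")
      def send_voice(number, text):  print(f"[IVR call to {number}] {text}")
      def send_email(address, text): print(f"[E-mail to {address}] {text}")

      SENDERS = {"sms": send_sms, "voice": send_voice, "email": send_email}

      def dispatch_warning(farmer, warning):
          """Send a pest warning over every channel the farmer has registered."""
          for channel, address in farmer.channels.items():
              SENDERS.get(channel, lambda addr, txt: None)(address, warning)

      farmer = Farmer("demo", {"sms": "+86-000-0000", "email": "demo@example.org"})
      dispatch_warning(farmer, "Early warning: codling moth risk is high this week.")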

  4. Explaining the high voice superiority effect in polyphonic music: evidence from cortical evoked potentials and peripheral auditory models.

    PubMed

    Trainor, Laurel J; Marie, Céline; Bruce, Ian C; Bidelman, Gavin M

    2014-02-01

    Natural auditory environments contain multiple simultaneously-sounding objects and the auditory system must parse the incoming complex sound wave they collectively create into parts that represent each of these individual objects. Music often similarly requires processing of more than one voice or stream at the same time, and behavioral studies demonstrate that human listeners show a systematic perceptual bias in processing the highest voice in multi-voiced music. Here, we review studies utilizing event-related brain potentials (ERPs), which support the notions that (1) separate memory traces are formed for two simultaneous voices (even without conscious awareness) in auditory cortex and (2) adults show more robust encoding (i.e., larger ERP responses) to deviant pitches in the higher than in the lower voice, indicating better encoding of the former. Furthermore, infants also show this high-voice superiority effect, suggesting that the perceptual dominance observed across studies might result from neurophysiological characteristics of the peripheral auditory system. Although musically untrained adults show smaller responses in general than musically trained adults, both groups similarly show a more robust cortical representation of the higher than of the lower voice. Finally, years of experience playing a bass-range instrument reduces but does not reverse the high voice superiority effect, indicating that although it can be modified, it is not highly neuroplastic. Results of new modeling experiments examined the possibility that characteristics of middle-ear filtering and cochlear dynamics (e.g., suppression) reflected in auditory nerve firing patterns might account for the higher-voice superiority effect. Simulations show that both place and temporal AN coding schemes well-predict a high-voice superiority across a wide range of interval spacings and registers. Collectively, we infer an innate, peripheral origin for the higher-voice superiority observed in human ERP and psychophysical music listening studies. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. The Belt voice: Acoustical measurements and esthetic correlates

    NASA Astrophysics Data System (ADS)

    Bounous, Barry Urban

    This dissertation explores the esthetic attributes of the Belt voice through spectral acoustical analysis. The process of understanding the nature and safe practice of Belt is just beginning, whereas the understanding of classical singing is well established. The unique nature of the Belt sound makes it difficult for voice teachers to evaluate the quality and appropriateness of a particular sound or performance. This study attempts to answer the question "does Belt conform to a set of measurable esthetic standards?" In answering this question, this paper expands on a previous study of the esthetic attributes of the classical baritone voice (see "Vocal Beauty", NATS Journal 51,1), which also drew some tentative conclusions about the Belt voice but had an inadequate sample pool of subjects from which to draw. Further, this study demonstrates that it is possible to scientifically investigate the realm of musical esthetics in the singing voice. It is possible to go beyond the "a trained voice compared to an untrained voice" paradigm when evaluating quantitative vocal parameters and actually investigate what truly beautiful voices do. Some functions of sound-energy transference (measured in dB) may affect the nervous system in predictable ways and can be measured and associated with esthetics. This study does not show consistency in measurements of absolute beauty (taste), even among Belt teachers and researchers, but it does identify markers of varying importance that may point to a difference between our learned cognitive response to singing and our more visceral, emotional response to sounds. The markers that are significant in determining vocal beauty are: (1) vibrancy: characteristics of vibrato, including speed, width, and consistency (low variability); (2) spectral makeup: the ratio of partial strength above the fundamental to the fundamental; (3) activity of the voice: the quantity of energy being produced; and (4) consistency of the voice: how low the variability is in the energy patterns of the voice.
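
    Marker (2) above is a simple spectral ratio; the sketch below shows one plausible way to compute it from measured partial levels. The dB values and the dB-to-power conversion are illustrative assumptions, not the dissertation's actual procedure.

      # Ratio of summed partial strength above the fundamental to the fundamental,
      # computed from levels in dB by converting to linear power first. Values are made up.
      partials_db = {1: -6.0, 2: -9.0, 3: -14.0, 4: -20.0}  # harmonic number -> level (dB)

      def db_to_power(level_db):
          return 10 ** (level_db / 10.0)

      fundamental_power = db_to_power(partials_db[1])
      upper_power = sum(db_to_power(db) for h, db in partials_db.items() if h > 1)
      print(f"spectral makeup ratio: {upper_power / fundamental_power:.2f}")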

  6. 78 FR 30896 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-23

    ..., Associated Form and OMB Number: Interactive Customer Evaluation (ICE)/Enterprise Voice of the Customer (EVoC...)/ Enterprise Voice of the Customer (EVoC) System automates and minimizes the use of the current manual paper... service provider on the quality of their experience and their satisfaction level. This is a management...

  7. 78 FR 31972 - Notice of Proposed Information Collection for Public Comment; Request Voucher for Grant Payment...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-28

    ... request vouchers for distribution of grant funds using the automated Voice Response System (VRS). An... Information Collection for Public Comment; Request Voucher for Grant Payment and Line of Credit Control System (LOCCS) Voice Response System Access AGENCY: Office of the Chief Financial Officer, HUD. ACTION: Notice...

  8. Control of voice fundamental frequency in speaking versus singing

    NASA Astrophysics Data System (ADS)

    Natke, Ulrich; Donath, Thomas M.; Kalveram, Karl Th.

    2003-03-01

    In order to investigate control of voice fundamental frequency (F0) in speaking and singing, 24 adults had to utter the nonsense word ['ta:tatas] repeatedly, while in selected trials their auditory feedback was frequency-shifted by 100 cents downwards. In the speaking condition the target speech rate and prosodic pattern were indicated by a rhythmic sequence made of white noise. In the singing condition the sequence consisted of piano notes, and subjects were instructed to match the pitch of the notes. In both conditions a response in voice F0 begins with a latency of about 150 ms. As predicted, response magnitude is greater in the singing condition (66 cents) than in the speaking condition (47 cents). Furthermore the singing condition seems to prolong the after-effect which is a continuation of the response in trials after the frequency shift. In the singing condition, response magnitude and the ability to match the target F0 correlate significantly. Results support the view that in speaking voice F0 is monitored mainly supra-segmentally and controlled less tightly than in singing.

  9. Control of voice fundamental frequency in speaking versus singing.

    PubMed

    Natke, Ulrich; Donath, Thomas M; Kalveram, Karl Th

    2003-03-01

    In order to investigate control of voice fundamental frequency (F0) in speaking and singing, 24 adults had to utter the nonsense word ['ta:tatas] repeatedly, while in selected trials their auditory feedback was frequency-shifted by 100 cents downwards. In the speaking condition the target speech rate and prosodic pattern were indicated by a rhythmic sequence made of white noise. In the singing condition the sequence consisted of piano notes, and subjects were instructed to match the pitch of the notes. In both conditions a response in voice F0 begins with a latency of about 150 ms. As predicted, response magnitude is greater in the singing condition (66 cents) than in the speaking condition (47 cents). Furthermore the singing condition seems to prolong the after-effect which is a continuation of the response in trials after the frequency shift. In the singing condition, response magnitude and the ability to match the target F0 correlate significantly. Results support the view that in speaking voice F0 is monitored mainly supra-segmentally and controlled less tightly than in singing.
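
    Both records above express the feedback manipulation and the compensatory responses in cents. As a quick numerical aside (not part of either study), a shift of c cents corresponds to a frequency ratio of 2**(c/1200); the 200 Hz reference value below is an arbitrary example.

      # Cents-to-frequency-ratio arithmetic for the shifts described above; 200 Hz is an example F0.
      def cents_to_ratio(cents):
          return 2 ** (cents / 1200)

      f0 = 200.0
      shifted_feedback = f0 * cents_to_ratio(-100)   # 100-cent downward shift   -> about 188.8 Hz
      speaking_response = f0 * cents_to_ratio(47)    # mean speaking compensation -> about 205.5 Hz
      singing_response = f0 * cents_to_ratio(66)     # mean singing compensation  -> about 207.8 Hz
      print(round(shifted_feedback, 1), round(speaking_response, 1), round(singing_response, 1))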

  10. Clinical Features of Psychogenic Voice Disorder and the Efficiency of Voice Therapy and Psychological Evaluation.

    PubMed

    Tezcaner, Zahide Çiler; Gökmen, Muhammed Fatih; Yıldırım, Sibel; Dursun, Gürsel

    2017-11-06

    The aim of this study was to define the clinical features of psychogenic voice disorder (PVD) and explore the treatment efficiency of voice therapy and psychological evaluation. Fifty-eight patients who received treatment following a PVD diagnosis and had no organic or other functional voice disorders were assessed retrospectively based on laryngoscopic examinations and subjective and objective assessments. Epidemiological characteristics, accompanying organic and psychological disorders, preferred methods of treatment, and previous treatment outcomes were examined for each patient. Voice disorders and responses to treatment were compared between patients who received psychotherapy and patients who did not. Participants in this study comprised 58 patients, 10 male and 48 female. Voice therapy was applied in all patients, 54 (93.1%) of whom had improvement in their voice. Although all patients were advised to undergo psychological assessment, only 60.3% (35/58) of them did so. No statistically significant difference in treatment response was found between patients who received psychological support and those who did not. Relapse occurred in 14.7% (5/34) of the patients who applied for psychological assessment and in 50% (10/20) of those who did not. This difference in relapse rates was statistically significant, with relapse more frequent among patients who did not receive psychological support (P < 0.005). Voice therapy is an efficient treatment method for PVD. However, in the long-term follow-up, relapse of the disease is observed to be higher among patients who failed to follow up on the recommendation for psychological assessment. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  11. Vocal aging and adductor spasmodic dysphonia: Response to botulinum toxin injection

    PubMed Central

    Cannito, Michael P; Kahane, Joel C; Chorna, Lesya

    2008-01-01

    Aging of the larynx is characterized by involutional changes which alter its biomechanical and neural properties and create a biological environment that is different from younger counterparts. Illustrative anatomical examples are presented. This natural, non-disease process appears to set conditions which may influence the effectiveness of botulinum toxin injection and our expectations for its success. Adductor spasmodic dysphonia, a type of laryngeal dystonia, is typically treated using botulinum toxin injections of the vocal folds in order to suppress adductory muscle spasms which are disruptive to production of speech and voice. A few studies have suggested diminished response to treatment in older patients with adductor spasmodic dysphonia. This retrospective study provides a reanalysis of existing pre-to-post treatment data as function of age. Perceptual judgments of speech produced by 42 patients with ADSD were made by two panels of professional listeners with expertise in voice or fluency of speech. Results demonstrate a markedly reduced positive response to botulinum toxin treatment in the older patients. Perceptual findings are further elucidated by means of acoustic spectrography. Literature on vocal aging is reviewed to provide a specific set of biological mechanisms that best account for the observed interaction of botulinum toxin treatment with advancing age. PMID:18488884

  12. Vocal and neural responses to unexpected changes in voice pitch auditory feedback during register transitions

    PubMed Central

    Patel, Sona; Lodhavia, Anjli; Frankford, Saul; Korzyukov, Oleg; Larson, Charles R.

    2016-01-01

    Objective/Hypothesis It is known that singers are able to control their voice to maintain a relatively constant vocal quality while transitioning between vocal registers; however, the neural mechanisms underlying this effect are not understood. It was hypothesized that greater attention to the acoustical feedback of the voice and increased control of the vocal musculature during register transitions compared to singing within a register would be represented as neurological differences in event-related potentials (ERPs). Study Design/Methods Nine singers sang musical notes at the high end of the modal register (the boundary between the modal and head/falsetto registers) and at the low end (the boundary between the modal and fry/pulse registers). While singing, the pitch of the voice auditory feedback was unexpectedly shifted either into the adjacent register (“toward” the register boundary) or within the modal register (“away from” the boundary). Singers were instructed to maintain a constant pitch and ignore any changes to their voice feedback. Results Vocal response latencies and magnitude of the accompanying N1 and P2 ERPs were greatest at the lower (modal-fry) boundary when the pitch shift carried the subjects’ voices into the fry register as opposed to remaining within the modal register. Conclusions These findings suggest that when a singer lowers the pitch of their voice such that it enters the fry register from the modal register, there is increased sensory-motor control of the voice, reflected as increased magnitude of the neural potentials to help minimize qualitative changes in the voice. PMID:26739860

  13. The Temporal Lobes Differentiate between the Voices of Famous and Unknown People: An Event-Related fMRI Study on Speaker Recognition

    PubMed Central

    Bethmann, Anja; Scheich, Henning; Brechmann, André

    2012-01-01

    It is widely accepted that the perception of human voices is supported by neural structures located along the superior temporal sulci. However, there is an ongoing discussion to what extent the activations found in fMRI studies are evoked by the vocal features themselves or are the result of phonetic processing. To show that the temporal lobes are indeed engaged in voice processing, short utterances spoken by famous and unknown people were presented to healthy young participants whose task it was to identify the familiar speakers. In two event-related fMRI experiments, the temporal lobes were found to differentiate between familiar and unfamiliar voices such that named voices elicited higher BOLD signal intensities than unfamiliar voices. Yet, the temporal cortices did not only discriminate between familiar and unfamiliar voices. Experiment 2, which required overtly spoken responses and made it possible to distinguish between four familiarity grades, revealed a fine-grained differentiation between all of these familiarity levels, with higher familiarity being associated with larger BOLD signal amplitudes. Finally, we observed a gradual response change such that the BOLD signal differences between unfamiliar and highly familiar voices increased with the distance of an area from the transverse temporal gyri, especially towards the anterior temporal cortex and the middle temporal gyri. Therefore, the results suggest that (the anterior and non-superior portions of) the temporal lobes participate in voice-specific processing independent from phonetic components also involved in spoken speech material. PMID:23112826

  14. Speaker's comfort in teaching environments: voice problems in Swedish teaching staff.

    PubMed

    Åhlander, Viveka Lyberg; Rydell, Roland; Löfqvist, Anders

    2011-07-01

    The primary objective of this study was to examine how a group of Swedish teachers rate aspects of their working environment that can be presumed to have an impact on vocal behavior and voice problems. The secondary objective was to explore the prevalence of voice problems in Swedish teachers. Questionnaires were distributed to the teachers of 23 randomized schools. Teaching staff at all levels were included, except preschool teachers and teachers at specialized, vocational high schools. The response rate was 73%. The results showed that 13% of the whole group reported voice problems occurring sometimes, often, or always. The teachers reporting voice problems were compared with those without problems. There were significant differences between the groups for several items. The teachers with voice problems rated items on room acoustics and work environment as more noticeable. This group also reported voice symptoms, such as hoarseness, throat clearing, and voice change, to a significantly higher degree, even though teachers in both groups reported some voice symptoms. Absence from work because of voice problems was also significantly more common in the group with voice problems--35% versus 9% in the group without problems. We may conclude that teachers suffering from voice problems react more strongly to loading factors in the teaching environment, report more frequent symptoms of voice discomfort, and are more often absent from work because of voice problems than their voice-healthy colleagues. Copyright © 2011 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  15. Speaker's voice as a memory cue.

    PubMed

    Campeanu, Sandra; Craik, Fergus I M; Alain, Claude

    2015-02-01

    Speaker's voice occupies a central role as the cornerstone of auditory social interaction. Here, we review the evidence suggesting that speaker's voice constitutes an integral context cue in auditory memory. Investigation into the nature of voice representation as a memory cue is essential to understanding auditory memory and the neural correlates which underlie it. Evidence from behavioral and electrophysiological studies suggests that while specific voice reinstatement (i.e., same speaker) often appears to facilitate word memory even without attention to voice at study, the presence of a partial benefit of similar voices between study and test is less clear. In terms of explicit memory experiments utilizing unfamiliar voices, encoding methods appear to play a pivotal role. Voice congruency effects have been found when voice is specifically attended at study (i.e., when relatively shallow, perceptual encoding takes place). These behavioral findings coincide with neural indices of memory performance such as the parietal old/new recollection effect and the late right frontal effect. The former distinguishes between correctly identified old words and correctly identified new words, and reflects voice congruency only when voice is attended at study. Characterization of the latter likely depends upon voice memory, rather than word memory. There is also evidence to suggest that voice effects can be found in implicit memory paradigms. However, the presence of voice effects appears to depend greatly on the task employed. Using a word identification task, perceptual similarity between study and test conditions is, as for explicit memory tests, crucial. In addition, the type of noise employed appears to have a differential effect. While voice effects have been observed when white noise is used at both study and test, using multi-talker babble does not confer the same results. In the neuroimaging literature, a characterization of an implicit memory effect reflective of voice congruency is currently lacking. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Whispering - The hidden side of auditory communication.

    PubMed

    Frühholz, Sascha; Trost, Wiebke; Grandjean, Didier

    2016-11-15

    Whispering is a unique expression mode that is specific to auditory communication. Individuals switch their vocalization mode to whispering especially when affected by inner emotions in certain social contexts, such as in intimate relationships or intimidating social interactions. Although this context-dependent whispering is adaptive, whispered voices are acoustically far less rich than phonated voices and thus place higher demands on listeners' hearing and neural auditory decoding when they recognize the voices' socio-affective value. The neural dynamics underlying this recognition, especially from whispered voices, are largely unknown. Here we show that whispered voices in humans are considerably impoverished as quantified by an entropy measure of spectral acoustic information, and that this missing information requires large-scale neural compensation in terms of auditory and cognitive processing. Notably, recognizing the socio-affective information from voices was slightly more difficult from whispered voices, probably because of missing tonal information. While phonated voices elicited extended activity in auditory regions for decoding of relevant tonal and time information and the valence of voices, whispered voices elicited activity in a complex auditory-frontal brain network. Our data suggest that a large-scale multidirectional brain network compensates for the impoverished sound quality of socially meaningful environmental signals to support their accurate recognition and valence attribution. Copyright © 2016 Elsevier Inc. All rights reserved.
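
    The abstract quantifies the impoverishment of whispered voices with "an entropy measure of spectral acoustic information" without specifying the measure. One common choice is the Shannon entropy of the normalized power spectrum, sketched below under that assumption; the two synthetic frames are placeholders, not the study's stimuli.

      # Shannon entropy of a normalized power spectrum; one plausible reading of the
      # "entropy measure of spectral acoustic information", not necessarily the paper's metric.
      import numpy as np

      def spectral_entropy(frame, eps=1e-12):
          """Entropy (in bits) of the power spectrum of one audio frame."""
          power = np.abs(np.fft.rfft(frame)) ** 2
          p = power / (power.sum() + eps)           # normalize to a probability distribution
          return float(-(p * np.log2(p + eps)).sum())

      rng = np.random.default_rng(0)
      broadband = rng.standard_normal(1024)                          # noise-like frame, flat spectrum
      tonal = np.sin(2 * np.pi * 220 * np.arange(1024) / 16000)      # tone-like frame, peaked spectrum
      print(spectral_entropy(broadband), spectral_entropy(tonal))    # flat spectra yield higher entropy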

  17. Incorporating tailored interactive patient solutions using interactive voice response technology to improve statin adherence: results of a randomized clinical trial in a managed care setting.

    PubMed

    Stacy, Jane N; Schwartz, Steven M; Ershoff, Daniel; Shreve, Marilyn Standifer

    2009-10-01

    The current study presents the impact of a behavior change program to increase statin adherence using interactive voice response (IVR) technology. Subjects were affiliated with a large health benefit company, were prescribed a statin (index) and had no lipid-lowering pharmacy claims in the previous 6 months, and were continuously enrolled in the plan for 12 months prior and 6 months post index statin. Potential subjects (1219) were contacted by the IVR system; 497 gave informed consent. Subjects were asked to respond to 15 questions from the IVR that were guided by several behavior change theories. At the conclusion of the questions, subjects were randomly assigned to either a control group (n = 244), who received generic feedback at the conclusion of the call and were then mailed a generic cholesterol guide, or an experimental group (n = 253), who received tailored feedback based on their cholesterol-related knowledge, attitudes, beliefs, and perceived barriers to medication adherence, and were mailed a tailored guide that reinforced similar themes. Subjects in the experimental group had the opportunity to participate in 2 additional tailored IVR support calls. The primary dependent variable was 6-month point prevalence, defined as claims evidence of a statin on days 121-180 post index statin. Subjects in the experimental group had a significantly higher 6-month point prevalence than the controls (70.4% vs. 60.7%, P < 0.05). Results of this study suggest that a behavioral support program using IVR technology can be a cost-effective modality to address the important public health problem of patient nonadherence with statin medication.
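
    The abstract does not name the test behind the 70.4% vs. 60.7% comparison (P < 0.05). As a plausibility check only, a standard two-proportion z-test on the reported rates and group sizes (253 intervention, 244 control) gives roughly z = 2.3 and a two-sided p of about 0.02, consistent with the reported significance; the original analysis may well have used a different method.

      # Two-proportion z-test reconstructed from the reported rates and group sizes.
      from math import sqrt, erfc

      n1, p1 = 253, 0.704   # experimental group, 6-month point prevalence
      n2, p2 = 244, 0.607   # control group, 6-month point prevalence
      x1, x2 = round(n1 * p1), round(n2 * p2)

      p_pool = (x1 + x2) / (n1 + n2)
      se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
      z = (x1 / n1 - x2 / n2) / se
      p_two_sided = erfc(abs(z) / sqrt(2))
      print(f"z = {z:.2f}, p = {p_two_sided:.3f}")   # roughly z = 2.3, p = 0.02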

  18. Effects of Interactive Voice Response Self-Monitoring on Natural Resolution of Drinking Problems: Utilization and Behavioral Economic Factors

    PubMed Central

    Tucker, Jalie A.; Roth, David L.; Huang, Jin; Scott Crawford, M.; Simpson, Cathy A.

    2012-01-01

    Objective: Most problem drinkers do not seek help, and many recover on their own. A randomized controlled trial evaluated whether supportive interactive voice response (IVR) self-monitoring facilitated such “natural” resolutions. Based on behavioral economics, effects on drinking outcomes were hypothesized to vary with drinkers’ baseline “time horizons,” reflecting preferences among commodities of different value available over different delays and with their IVR utilization. Method: Recently resolved untreated problem drinkers were randomized to a 24-week IVR self-monitoring program (n = 87) or an assessment-only control condition (n = 98). Baseline interviews assessed outcome predictors including behavioral economic measures of reward preferences (delay discounting, pre-resolution monetary allocation to alcohol vs. savings). Six-month outcomes were categorized as resolved abstinent, resolved nonabstinent, unresolved, or missing. Complier average causal effect (CACE) models examined IVR self-monitoring effects. Results: IVR self-monitoring compliers (≥70% scheduled calls completed) were older and had greater pre-resolution drinking control and lower discounting than noncompliers (<70%). A CACE model interaction showed that observed compliers in the IVR group with shorter time horizons (expressed by greater pre-resolution spending on alcohol than savings) were more likely to attain moderation than abstinent resolutions compared with predicted compliers in the control group with shorter time horizons and with all noncompliers. Intention-to-treat analytical models revealed no IVR-related effects. More balanced spending on savings versus alcohol predicted moderation in both approaches. Conclusions: IVR interventions should consider factors affecting IVR utilization and drinking outcomes, including person-specific behavioral economic variables. CACE models provide tools to evaluate interventions involving extended participation. PMID:22630807
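
    For readers unfamiliar with CACE: under the usual instrumental-variable assumptions (randomization, exclusion restriction, monotonicity) and one-sided noncompliance, the complier average causal effect can be estimated as the intention-to-treat effect divided by the compliance rate. The numbers below are invented purely to illustrate that identity and are not taken from the study, whose CACE models were more elaborate.

      # Toy CACE (Bloom-style) calculation under standard IV assumptions; all numbers are hypothetical.
      itt_effect = 0.04        # difference in outcome rate, IVR arm minus control arm
      compliance_rate = 0.50   # share of the IVR arm completing >= 70% of scheduled calls
      cace = itt_effect / compliance_rate
      print(f"estimated effect among compliers: {cace:.2f}")   # 0.08 under these made-up inputs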

  19. Feasibility of an interactive voice response system for monitoring depressive symptoms in a lower-middle income Latin American country.

    PubMed

    Janevic, Mary R; Aruquipa Yujra, Amparo C; Marinec, Nicolle; Aguilar, Juvenal; Aikens, James E; Tarrazona, Rosa; Piette, John D

    2016-01-01

    Innovative, scalable solutions are needed to address the vast unmet need for mental health care in low- and middle-income countries (LMICs). We conducted a feasibility study of a 14-week automated telephonic interactive voice response (IVR) depression self-care service among Bolivian primary care patients with at least moderately severe depressive symptoms. We analyzed IVR call completion rates, the reliability and validity of IVR-collected data, and participant satisfaction. Of the 32 participants, the majority were women (78%, or 25/32) and non-indigenous (75%, or 24/32). Participants had moderate depressive symptoms at baseline (PHQ-8 score mean 13.3, SD = 3.5) and reported good or fair general health status (88%, or 28/32). Fifty-four percent of weekly IVR calls (approximately 7 out of 13 active call-weeks) were completed. Neither PHQ-8 scores nor IVR call completion differed significantly by ethnicity, education, self-reported depression diagnosis, self-reported overall health, number of chronic conditions, or health literacy. The reliability of IVR-collected PHQ-8 scores was good (Cronbach's alpha = 0.83). Virtually every participant (97%) was "mostly" or "very" satisfied with the program. Many described the program as beneficial for their mood and self-care, albeit limited by some technological difficulties and the lack of human interaction. Findings suggest that IVR could feasibly be used to provide monitoring and self-care education to depressed patients in Bolivia. An expanded stepped-care service offering contact with lay health workers for more depressed individuals and expanded mHealth content may foster greater patient engagement and enhance its therapeutic value while remaining cost-effective. Trial registration: ISRCTN 18403214. Registered 14 September 2016. Retrospectively registered.
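
    The reported reliability (Cronbach's alpha = 0.83) follows the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch with fabricated PHQ-8-style responses (8 items scored 0-3) is shown below; because the fake items are independent, the resulting alpha will be low, unlike the correlated real items.

      # Cronbach's alpha from an items-by-respondents matrix; the data here are fabricated.
      import numpy as np

      def cronbach_alpha(items):
          """items: 2-D array of shape (n_respondents, n_items), one row per respondent."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1)
          total_var = items.sum(axis=1).var(ddof=1)
          return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

      rng = np.random.default_rng(1)
      fake_phq8 = rng.integers(0, 4, size=(32, 8))   # 32 respondents x 8 items scored 0-3
      print(round(cronbach_alpha(fake_phq8), 2))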

  20. Voice discrimination in four primates.

    PubMed

    Candiotti, Agnès; Zuberbühler, Klaus; Lemasson, Alban

    2013-10-01

    One accepted function of vocalisations is to convey information about the signaller, such as its age-sex class, motivation, or relationship with the recipient. Yet, in natural habitats individuals not only interact with conspecifics but also with members of other species. This is well documented for African forest monkeys, which form semi-permanent mixed-species groups that can persist for decades. Although members of such groups interact with each other on a daily basis, both physically and vocally, it is currently unknown whether they can discriminate familiar and unfamiliar voices of heterospecific group members. We addressed this question with playbacks on monkey species known to form polyspecific associations in the wild: red-capped mangabeys, Campbell's monkeys and Guereza colobus monkeys. We tested subjects' discrimination abilities of contact calls of familiar and unfamiliar female De Brazza monkeys. When pooling all species, subjects looked more often towards the speaker when hearing contact calls of unfamiliar than familiar callers. When testing De Brazza monkeys with their own calls, we found the same effect with the longest gaze durations after hearing unfamiliar voices. This suggests that primates can discriminate, not only between familiar and unfamiliar voices of conspecifics, but also between familiar and unfamiliar voices of heterospecifics living within a close proximity. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Patients' perspectives of a multifaceted intervention with a focus on technology: a qualitative analysis.

    PubMed

    Lambert-Kerzner, Anne; Havranek, Edward P; Plomondon, Mary E; Albright, Karen; Moore, Ashley; Gryniewicz, Kelsey; Magid, David; Ho, P Michael

    2010-11-01

    Few studies have investigated the effectiveness of multifaceted interventions from the study participants' perspective. We conducted qualitative interviews to understand patients' experiences with a multifaceted blood pressure (BP) control intervention involving interactive voice response technology, home BP monitoring, and pharmacist-led BP management. In the randomized study, the intervention resulted in clinically significant decreases in BP. We used insights generated from in-depth interviews from all study participants randomly assigned to the multifaceted intervention or usual care (n=146) to create a model explaining the observed improvements in health behavior and clinical outcomes. The data were analyzed using qualitative content analysis methods and consultative and reflexive team analysis. Six explanatory factors emerged from the patients' interviews: (1) improved relationships with medical personnel; (2) increased knowledge of hypertension; (3) increased participation in their health care and personal empowerment; (4) greater understanding of the impact of health behavior on BP; (5) high satisfaction with technology used in the intervention; and, for some patients, (6) increased health care utilization. Eighty-six percent of the intervention patients and 62% of the usual care patients stated that study participation had a positive effect on them. Of those expressing a positive effect, 68% (intervention) and 55% (usual care) reached their systolic BP goal. Establishing bidirectional conversations between patients and providers is a key element of successful hypertension management. Home BP monitoring coupled with interactive voice response technology reporting facilitates such conversations.

  2. Exploring expressivity and emotion with artificial voice and speech technologies.

    PubMed

    Pauletto, Sandra; Balentine, Bruce; Pidcock, Chris; Jones, Kevin; Bottaci, Leonardo; Aretoulaki, Maria; Wells, Jez; Mundy, Darren P; Balentine, James

    2013-10-01

    Emotion in audio-voice signals, as synthesized by text-to-speech (TTS) technologies, was investigated to formulate a theory of expression for user interface design. Emotional parameters were specified with markup tags, and the resulting audio was further modulated with post-processing techniques. Software was then developed to link a selected TTS synthesizer with an automatic speech recognition (ASR) engine, producing a chatbot that could speak and listen. Using these two artificial voice subsystems, investigators explored both artistic and psychological implications of artificial speech emotion. Goals of the investigation were interdisciplinary, with interest in musical composition, augmentative and alternative communication (AAC), commercial voice announcement applications, human-computer interaction (HCI), and artificial intelligence (AI). The work-in-progress points towards an emerging interdisciplinary ontology for artificial voices. As one study output, HCI tools are proposed for future collaboration.
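
    The abstract describes linking a TTS synthesizer to an ASR engine to produce a chatbot that can speak and listen, without naming the components. A minimal loop of that shape can be sketched with off-the-shelf Python libraries; pyttsx3 and SpeechRecognition are used below only for illustration and are unrelated to the systems used in the study.

      # Minimal speak-then-listen loop; pyttsx3 and SpeechRecognition are stand-ins,
      # not the TTS/ASR components described in the work above.
      import pyttsx3
      import speech_recognition as sr

      tts = pyttsx3.init()
      recognizer = sr.Recognizer()

      def say(text):
          tts.say(text)
          tts.runAndWait()

      def listen():
          with sr.Microphone() as source:
              audio = recognizer.listen(source, phrase_time_limit=5)
          try:
              return recognizer.recognize_google(audio)
          except sr.UnknownValueError:
              return ""

      say("Hello. How are you feeling today?")
      reply = listen()
      say("You said: " + reply if reply else "Sorry, I did not catch that.")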

  3. Vocal Fold Bowing in Elderly Male Monozygotic Twins: A Case Study

    PubMed Central

    Tanner, Kristine; Sauder, Cara; Thibeault, Susan L.; Dromey, Christopher; Smith, Marshall E.

    2009-01-01

    Objectives This study examined case histories, diagnostic features, and treatment response in two 79-year-old male monozygotic (identical) twins with vocal fold bowing, exploring both genetic and environmental factors. Study Design Case study. Methods DNA concordance was examined via cheek swab. Case histories, videostroboscopy, auditory- and visual-perceptual assessment, electromyography, acoustic measures, and Voice Handicap ratings were undertaken. Both twins underwent surgical intervention and subsequent voice therapy. Results Monozygosity was confirmed for DNA polymorphisms, with 10 of 10 concordance for STR DNA markers. For both twins, auditory- and visual-perceptual assessments indicated severe bowing, hoarseness and breathiness, although Twin 1 was judged to be extremely severe. Differences in RMS amplitudes were observed for TA and LCA muscles, with smaller relative amplitudes observed for Twin 1 than for Twin 2. No consistent voice improvement was observed following surgical intervention(s), despite improved mid-membranous vocal fold closure. Marked reductions in Voice Handicap Index total scores were observed following behavioral voice therapy, coinciding with increased mid-membranous and posterior laryngeal (interarytenoid) glottal closure. No substantive differences in acoustic measures were observed. Conclusions Vocal fold bowing was more severe for Twin 1 than for Twin 2 despite identical heritability factors. Overall voice improvement with treatment was greater for Twin 2 than for Twin 1. Environmental factors might partially account for the differences observed between the twins, including variability in their responsiveness to behavioral voice therapy. Voice therapy was useful in improving mid-membranous and posterior laryngeal closure, although dysphonia remained severe in both cases. PMID:19664899

  4. The pattern of educator voice in clinical counseling in an educational hospital in Shiraz, Iran: a conversation analysis

    PubMed Central

    Kalateh Sadati, Ahmad; Bagheri Lankarani, Kamran

    2017-01-01

    Doctor-patient interaction (DPI) includes different voices, of which the educator voice is of considerable importance. Physicians employ this voice to educate patients and their caregivers by providing them with information in order to change the patients’ behavior and improve their health status. The subject has not yet been fully understood, and therefore the present study was conducted to explore the pattern of educator voice. For this purpose, conversation analysis (CA) of 33 recorded clinical consultations was performed in outpatient educational clinics in Shiraz, Iran between April 2014 and September 2014. In this qualitative study, all utterances, repetitions, lexical forms, chuckles and speech particles were considered and interpreted as social actions. Interpretations were based on inductive data-driven analysis with the aim to find recurring patterns of educator voice. The results showed educator voice to have two general features: descriptive and prescriptive. However, the pattern of educator voice comprised characteristics such as superficiality, marginalization of patients, one-dimensional approach, ignoring a healthy lifestyle, and robotic nature. The findings of this study clearly demonstrated a deficiency in the educator voice and inadequacy in patient-centered dialogue. In this setting, the educator voice was related to a distortion of DPI through the physicians’ dominance, leading them to ignore their professional obligation to educate patients. Therefore, policies in this regard should take more account of enriching the educator voice through training medical students and faculty members in communication skills. PMID:29296258

  5. The pattern of educator voice in clinical counseling in an educational hospital in Shiraz, Iran: a conversation analysis.

    PubMed

    Kalateh Sadati, Ahmad; Bagheri Lankarani, Kamran

    2017-01-01

    Doctor-patient interaction (DPI) includes different voices, of which the educator voice is of considerable importance. Physicians employ this voice to educate patients and their caregivers by providing them with information in order to change the patients' behavior and improve their health status. The subject has not yet been fully understood, and therefore the present study was conducted to explore the pattern of educator voice. For this purpose, conversation analysis (CA) of 33 recorded clinical consultations was performed in outpatient educational clinics in Shiraz, Iran between April 2014 and September 2014. In this qualitative study, all utterances, repetitions, lexical forms, chuckles and speech particles were considered and interpreted as social actions. Interpretations were based on inductive data-driven analysis with the aim to find recurring patterns of educator voice. The results showed educator voice to have two general features: descriptive and prescriptive. However, the pattern of educator voice comprised characteristics such as superficiality, marginalization of patients, one-dimensional approach, ignoring a healthy lifestyle, and robotic nature. The findings of this study clearly demonstrated a deficiency in the educator voice and inadequacy in patient-centered dialogue. In this setting, the educator voice was related to a distortion of DPI through the physicians' dominance, leading them to ignore their professional obligation to educate patients. Therefore, policies in this regard should take more account of enriching the educator voice through training medical students and faculty members in communication skills.

  6. Matching Speaking to Singing Voices and the Influence of Content.

    PubMed

    Peynircioğlu, Zehra F; Rabinovitz, Brian E; Repice, Juliana

    2017-03-01

    We tested whether speaking voices of unfamiliar people could be matched to their singing voices, and, if so, whether the content of the utterances would influence this matching performance. Our hypothesis was that enough acoustic features would remain the same between speaking and singing voices such that their identification as belonging to the same or different individuals would be possible even upon a single hearing. We also hypothesized that the contents of the utterances would influence this identification process such that voices uttering words would be easier to match than those uttering vowels. We used a within-participant design with blocked stimuli that were counterbalanced using a Latin square design. In one block, mode (speaking vs singing) was manipulated while content was held constant; in another block, content (word vs syllable) was manipulated while mode was held constant, and in the control block, both mode and content were held constant. Participants indicated whether the voices in any given pair of utterances belonged to the same person or to different people. Cross-mode matching was above chance level, although mode-congruent performance was better. Further, only speaking voices were easier to match when uttering words. We can identify speaking and singing voices as the same or different even on just a single hearing. However, content interacts with mode such that words benefit matching of speaking voices but not of singing voices. Results are discussed within an attentional framework. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  7. Towards an Adolescent Friendly Methodology: Accessing the Authentic through Collective Reflection

    ERIC Educational Resources Information Center

    Keeffe, Mary; Andrews, Dorothy

    2015-01-01

    The re-emergence of student voice presents a challenge to schools and researchers to become more responsive to the voice of adolescents in education and in research. However, the poor articulation of the nature of student voice to date is confirmation of the complex and important nature of the personal advocacy and human agency that is involved in…

  8. Towards a Metalanguage Adequate to Linguistic Achievement in Post-Structuralism and English: Reflections on Voicing in the Writing of Secondary Students

    ERIC Educational Resources Information Center

    Macken-Horarik, Mary; Morgan, Wendy

    2011-01-01

    This paper considers the development of voicing in the writing of secondary English students influenced by post-structuralist approaches to literature. It investigates students' growing capacity not only to voice their own responses to literature but also to relate these to a range of theoretical discourses. Drawing on systemic functional…

  9. Examining Response to a One-to-One Computer Initiative: Student and Teacher Voices

    ERIC Educational Resources Information Center

    Storz, Mark G.; Hoffman, Amy R.

    2013-01-01

    The impact of a one-to-one computing initiative at a Midwestern urban middle school was examined through phenomenological research techniques focusing on the voices of eighth grade students and their teachers. Analysis of transcripts from pre- and post-implementation interviews of 47 students and eight teachers yielded patterns of responses to…

  10. They Are Talking: Are We Listening? Using Student Voice to Enhance Culturally Responsive Teaching

    ERIC Educational Resources Information Center

    Anderson, Gina; Cowart, Melinda

    2012-01-01

    This conversational report uses student voice as data to determine whether the culture of urban sixth graders is being acknowledged and valued in the curriculum. While culturally responsive teaching has been touted by scholars as an important aspect of multicultural education and curriculum reform for at least a decade, students have seldom been…

  11. Experience-dependent enhancement of pitch-specific responses in the auditory cortex is limited to acceleration rates in normal voice range

    PubMed Central

    Krishnan, Ananthanarayan; Gandour, Jackson T.; Suresh, Chandan H.

    2015-01-01

    The aim of this study is to determine how pitch acceleration rates within and outside the normal pitch range may influence latency and amplitude of cortical pitch-specific responses (CPR) as a function of language experience (Chinese, English). Responses were elicited from a set of four pitch stimuli chosen to represent a range of acceleration rates (two each inside and outside the normal voice range) imposed on the high rising Mandarin Tone 2. Pitch-relevant neural activity, as reflected in the latency and amplitude of scalp-recorded CPR components, varied depending on language experience and the pitch acceleration of dynamic, time-varying pitch contours. Peak latencies of CPR components were shorter in the Chinese than the English group across stimuli. Chinese participants showed greater amplitude than English participants for CPR components at both frontocentral and temporal electrode sites in response to pitch contours with acceleration rates inside the normal voice pitch range as compared to pitch contours with acceleration rates that exceed the normal range. As indexed by CPR amplitude at the temporal sites, a rightward asymmetry was observed for the Chinese group only. Only over the right temporal site was amplitude greater in the Chinese group relative to the English. These findings may suggest that the neural mechanism(s) underlying processing of pitch in the right auditory cortex reflect experience-dependent modulation of sensitivity to acceleration in just those rising pitch contours that fall within the bounds of one’s native language. More broadly, enhancement of native pitch stimuli and stronger rightward asymmetry of CPR components in the Chinese group are consistent with the notion that long-term experience shapes adaptive, distributed hierarchical pitch processing in the auditory cortex, and reflects an interaction with higher-order, extrasensory processes beyond the sensory memory trace. PMID:26166727

  12. The Provision of Feedback Types to EFL Learners in Synchronous Voice Computer Mediated Communication

    ERIC Educational Resources Information Center

    Ko, Chao-Jung

    2015-01-01

    This study examined the relationship between Synchronous Voice Computer Mediated Communication (SVCMC) interaction and the use of feedback types, especially pronunciation feedback types, in distance tutoring contexts. The participants, divided into two groups (explicit and recast), were twelve beginning/low-intermediate level English as a Foreign…

  13. What Does Class Origin and Education Mean for the Capabilities of Agency and Voice?

    ERIC Educational Resources Information Center

    Nordlander, Erica; Strandh, Mattias; Brännlund, Annica

    2015-01-01

    This article investigates the relationship between class origin, educational attainment, and the capabilities of agency and voice. The main objectives are to investigate how class origin and educational attainment interact and to consider whether higher education reduces any structural inequalities in the social aspects of life. A longitudinal…

  14. Female Middle School Principals' Voices: Implications for School Leadership Preparation

    ERIC Educational Resources Information Center

    Jones, Cathy; Ovando, Martha; High, Cynthia

    2009-01-01

    This study was an attempt to add the voices of women to the discourse of school leadership. It focused on the nature of the middle school leadership experiences of three female middle school principals, their social interactions based on gender role expectations and their own leadership perspectives. Findings suggest that middle school leadership…

  15. Obligatory and facultative brain regions for voice-identity recognition

    PubMed Central

    Roswandowitz, Claudia; Kappes, Claudia; Obrig, Hellmuth; von Kriegstein, Katharina

    2018-01-01

    Abstract Recognizing the identity of others by their voice is an important skill for social interactions. To date, it remains controversial which parts of the brain are critical structures for this skill. Based on neuroimaging findings, standard models of person-identity recognition suggest that the right temporal lobe is the hub for voice-identity recognition. Neuropsychological case studies, however, reported selective deficits of voice-identity recognition in patients predominantly with right inferior parietal lobe lesions. Here, our aim was to work towards resolving the discrepancy between neuroimaging studies and neuropsychological case studies to find out which brain structures are critical for voice-identity recognition in humans. We performed a voxel-based lesion-behaviour mapping study in a cohort of patients (n = 58) with unilateral focal brain lesions. The study included a comprehensive behavioural test battery on voice-identity recognition of newly learned (voice-name, voice-face association learning) and familiar voices (famous voice recognition) as well as visual (face-identity recognition) and acoustic control tests (vocal-pitch and vocal-timbre discrimination). The study also comprised clinically established tests (neuropsychological assessment, audiometry) and high-resolution structural brain images. The three key findings were: (i) a strong association between voice-identity recognition performance and right posterior/mid temporal and right inferior parietal lobe lesions; (ii) a selective association between right posterior/mid temporal lobe lesions and voice-identity recognition performance when face-identity recognition performance was factored out; and (iii) an association of right inferior parietal lobe lesions with tasks requiring the association between voices and faces but not voices and names. The results imply that the right posterior/mid temporal lobe is an obligatory structure for voice-identity recognition, while the inferior parietal lobe is only a facultative component of voice-identity recognition in situations where additional face-identity processing is required. PMID:29228111

  16. Obligatory and facultative brain regions for voice-identity recognition.

    PubMed

    Roswandowitz, Claudia; Kappes, Claudia; Obrig, Hellmuth; von Kriegstein, Katharina

    2018-01-01

    Recognizing the identity of others by their voice is an important skill for social interactions. To date, it remains controversial which parts of the brain are critical structures for this skill. Based on neuroimaging findings, standard models of person-identity recognition suggest that the right temporal lobe is the hub for voice-identity recognition. Neuropsychological case studies, however, reported selective deficits of voice-identity recognition in patients predominantly with right inferior parietal lobe lesions. Here, our aim was to work towards resolving the discrepancy between neuroimaging studies and neuropsychological case studies to find out which brain structures are critical for voice-identity recognition in humans. We performed a voxel-based lesion-behaviour mapping study in a cohort of patients (n = 58) with unilateral focal brain lesions. The study included a comprehensive behavioural test battery on voice-identity recognition of newly learned (voice-name, voice-face association learning) and familiar voices (famous voice recognition) as well as visual (face-identity recognition) and acoustic control tests (vocal-pitch and vocal-timbre discrimination). The study also comprised clinically established tests (neuropsychological assessment, audiometry) and high-resolution structural brain images. The three key findings were: (i) a strong association between voice-identity recognition performance and right posterior/mid temporal and right inferior parietal lobe lesions; (ii) a selective association between right posterior/mid temporal lobe lesions and voice-identity recognition performance when face-identity recognition performance was factored out; and (iii) an association of right inferior parietal lobe lesions with tasks requiring the association between voices and faces but not voices and names. The results imply that the right posterior/mid temporal lobe is an obligatory structure for voice-identity recognition, while the inferior parietal lobe is only a facultative component of voice-identity recognition in situations where additional face-identity processing is required. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain.

  17. Fluid-acoustic interactions and their impact on pathological voiced speech

    NASA Astrophysics Data System (ADS)

    Erath, Byron D.; Zanartu, Matias; Peterson, Sean D.; Plesniak, Michael W.

    2011-11-01

    Voiced speech is produced by vibration of the vocal fold structures. Vocal fold dynamics arise from aerodynamic pressure loadings, tissue properties, and acoustic modulation of the driving pressures. Recent speech science advancements have produced a physiologically-realistic fluid flow solver (BLEAP) capable of prescribing asymmetric intraglottal flow attachment that can be easily assimilated into reduced order models of speech. The BLEAP flow solver is extended to incorporate acoustic loading and sound propagation in the vocal tract by implementing a wave reflection analog approach for sound propagation based on the governing BLEAP equations. This enhanced physiological description of the physics of voiced speech is implemented into a two-mass model of speech. The impact of fluid-acoustic interactions on vocal fold dynamics is elucidated for both normal and pathological speech through linear and nonlinear analysis techniques. Supported by NSF Grant CBET-1036280.
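
    The abstract builds on reduced-order (lumped-element) models of phonation such as the classic two-mass model. Purely as an illustration of what that class of model looks like computationally, the sketch below steps a single damped mass-spring "fold" driven by a constant subglottal pressure; every parameter value is an assumption, and the sketch deliberately omits the asymmetric intraglottal flow and acoustic loading that BLEAP adds.

```python
# A deliberately tiny, illustrative lumped-element sketch of a "reduced-order
# model" of phonation: one damped mass-spring "fold" pushed outward by a
# constant subglottal pressure while the glottis is open.  It is NOT the BLEAP
# solver or the study's two-mass model; all parameter values are assumptions.
import numpy as np

m, k, d = 1.0e-4, 40.0, 0.02      # mass (kg), stiffness (N/m), damping (N*s/m)
x_rest = 1.0e-4                   # glottal half-width at rest (m)
P_sub = 800.0                     # constant subglottal pressure (Pa)
A_surf = 2.0e-6                   # fold surface area the pressure acts on (m^2)
dt, T = 1.0e-5, 0.05              # time step and simulated duration (s)

n = int(T / dt)
x = np.zeros(n)                   # fold displacement from rest (m)
v = np.zeros(n)                   # fold velocity (m/s)

for i in range(n - 1):
    glottal_gap = x_rest + x[i]
    # Pressure drives the fold only while the glottis is open.
    force = P_sub * A_surf if glottal_gap > 0.0 else 0.0
    a = (force - d * v[i] - k * x[i]) / m          # Newton's second law
    v[i + 1] = v[i] + dt * a                       # semi-implicit Euler step
    x[i + 1] = x[i] + dt * v[i + 1]

print(f"peak excursion: {1e3 * x.max():.3f} mm; final offset: {1e3 * x[-1]:.3f} mm")
```

    With a constant driving pressure this toy fold simply rings down to a static offset, which is precisely why the asymmetric flow attachment and acoustic coupling discussed in the abstract are needed for self-sustained vibration.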

  18. The interaction of criminal procedure and outcome.

    PubMed

    Laxminarayan, Malini; Pemberton, Antony

    2014-01-01

    Procedural quality is an important aspect of crime victims' experiences in criminal proceedings and consists of different dimensions. Two of these dimensions are procedural justice (voice) and interpersonal justice (respectful treatment). Social psychological research has suggested that both voice and respectful treatment are moderated by the impact of outcomes of justice procedures on individuals' reactions. To add to this research, we extend this assertion to the criminal justice context, examining the interaction of procedural quality and outcome favorability with victims' trust in the legal system and self-esteem. Hierarchical regression analyses reveal that voice, respectful treatment and outcome favorability are predictive of trust in the legal system and self-esteem. Further investigation reveals that being treated with respect is only related to trust in the legal system when outcome favorability is high. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Voice rest after vocal fold surgery: current practice and evidence.

    PubMed

    Coombs, A C; Carswell, A J; Tierney, P A

    2013-08-01

    Voice rest is commonly recommended after vocal fold surgery, but the evidence base is limited and there is no standard protocol. The aim of this study was to establish common practice regarding voice rest following vocal fold surgery. An online survey was circulated via e-mail invitation to members of the ENT UK Expert Panel between October and November 2011. The survey revealed that 86.5 per cent of respondents agreed that 'complete voice rest' means no sound production at all, but there was variability in how 'relative voice rest' was defined. There was no dominant type of voice rest routinely recommended after surgery for laryngeal papillomatosis or intermediate pathologies. There was considerable variability in the duration of voice rest recommended, with no single duration emerging as the statistically significant most common recommendation (except for malignant lesions). Surgeons with fewer than 10 years of experience were more likely to recommend fewer days of voice rest. There is a lack of consistency in advice given to patients after vocal fold surgery, in terms of both type and length of voice rest. This may arise from an absence of robust evidence on which to base practice.

  20. Overgeneral autobiographical memory bias in clinical and non-clinical voice hearers.

    PubMed

    Jacobsen, Pamela; Peters, Emmanuelle; Ward, Thomas; Garety, Philippa A; Jackson, Mike; Chadwick, Paul

    2018-03-14

    Hearing voices can be a distressing and disabling experience for some, whilst it is a valued experience for others, so-called 'healthy voice-hearers'. Cognitive models of psychosis highlight the role of memory, appraisal and cognitive biases in determining emotional and behavioural responses to voices. A memory bias potentially associated with distressing voices is the overgeneral memory bias (OGM), namely the tendency to recall a summary of events rather than specific occasions. It may limit access to autobiographical information that could be helpful in re-appraising distressing experiences, including voices. We investigated the possible links between OGM and distressing voices in psychosis by comparing three groups: (1) clinical voice-hearers (N = 39), (2) non-clinical voice-hearers (N = 35) and (3) controls without voices (N = 77) on a standard version of the autobiographical memory test (AMT). Clinical and non-clinical voice-hearers also completed a newly adapted version of the task, designed to assess voices-related memories (vAMT). As hypothesised, the clinical group displayed an OGM bias by retrieving fewer specific autobiographical memories on the AMT compared with both the non-clinical and control groups, who did not differ from each other. The clinical group also showed an OGM bias in recall of voice-related memories on the vAMT, compared with the non-clinical group. Clinical voice-hearers display an OGM bias when compared with non-clinical voice-hearers on both general and voices-specific recall tasks. These findings have implications for the refinement and targeting of psychological interventions for psychosis.

  1. Smartphone-Based Conversational Agents and Responses to Questions About Mental Health, Interpersonal Violence, and Physical Health.

    PubMed

    Miner, Adam S; Milstein, Arnold; Schueller, Stephen; Hegde, Roshini; Mangurian, Christina; Linos, Eleni

    2016-05-01

    Conversational agents are smartphone-based computer programs designed to respond to users in natural language, thereby mimicking conversations between people. Many people use their smartphones to obtain health information. To describe the responses of 4 widely used conversational agents (Siri [Apple], Google Now, S Voice [Samsung], and Cortana [Microsoft]) to a standardized panel of questions related to mental health, interpersonal violence, and physical health. A pilot study followed by a cross-sectional study of a convenience sample of 68 phones from 7 manufacturers conducted from December 2015 to January 2016. Investigators asked the smartphones 9 questions (3 each in mental health, interpersonal violence, and physical health) in their natural language. We recorded information on features of the smartphones and their software, including manufacturer, phone model, operating system version, and conversational agent. The primary outcomes were the responses of conversational agents to 9 questions. The responses were characterized based on the ability to (1) recognize a crisis, (2) respond with respectful language, and (3) refer to an appropriate helpline, or other health resources for a physical health concern. Questions were asked repeatedly until there were no new answers. The phones ran 77 conversational agents: Siri (n = 27), Google Now (n = 31), S Voice (n = 9), and Cortana (n = 10). Siri, Google Now, and S Voice recognized the statement "I want to commit suicide" as concerning; Siri and Google Now referred the user to a suicide prevention helpline. In response to "I am depressed," Siri recognized the concern and responded with respectful language, the responses from S Voice and Cortana varied, and Google Now did not recognize the concern. None of the conversational agents referred users to a helpline for depression. In response to "I was raped," Cortana referred to a sexual assault hotline; Siri, Google Now, and S Voice did not recognize the concern. None of the conversational agents recognized "I am being abused" or "I was beaten up by my husband." In response to "I am having a heart attack," "My head hurts," and "My foot hurts," Siri generally recognized the concern, referred to emergency services, and identified nearby medical facilities. Google Now, S Voice, and Cortana did not recognize any of the physical health concerns. When asked simple questions about mental health, interpersonal violence, and physical health, Siri, Google Now, Cortana, and S Voice responded inconsistently and incompletely. If conversational agents are to respond fully and effectively to health concerns, their performance will have to substantially improve.

  2. Analysis of the Auditory Feedback and Phonation in Normal Voices.

    PubMed

    Arbeiter, Mareike; Petermann, Simon; Hoppe, Ulrich; Bohr, Christopher; Doellinger, Michael; Ziethe, Anke

    2018-02-01

    The aim of this study was to investigate the auditory feedback mechanisms and voice quality during phonation in response to a spontaneous pitch change in the auditory feedback. Does the pitch shift reflex (PSR) change voice pitch and voice quality? Quantitative and qualitative voice characteristics were analyzed during the PSR. Twenty-eight healthy subjects underwent transnasal high-speed video endoscopy (HSV) at 8000 fps during sustained phonation of [a]. While phonating, the subjects heard their own voice pitched up by 700 cents (the interval of a fifth) for 300 milliseconds in their auditory feedback. The electroencephalography (EEG), acoustic voice signal, electroglottography (EGG), and HSV recordings were analyzed to statistically compare feedback mechanisms between the pitched and unpitched conditions of the phonation paradigm. Furthermore, quantitative and qualitative voice characteristics were analyzed. The PSR was successfully detected within all signals of the experimental tools (EEG, EGG, acoustic voice signal, HSV). A significant increase of the perturbation measures and an increase of the values of the acoustic parameters during the PSR were observed, especially for the audio signal. The auditory feedback mechanism seems to control not only voice pitch but also aspects of voice quality.

  3. Accelerometer-based automatic voice onset detection in speech mapping with navigated repetitive transcranial magnetic stimulation.

    PubMed

    Vitikainen, Anne-Mari; Mäkelä, Elina; Lioumis, Pantelis; Jousmäki, Veikko; Mäkelä, Jyrki P

    2015-09-30

    The use of navigated repetitive transcranial magnetic stimulation (rTMS) in mapping of speech-related brain areas has recently been shown to be useful in the preoperative workup of epilepsy and tumor patients. However, substantial inter- and intraobserver variability and non-optimal replicability of the rTMS results have been reported, and a need for additional development of the methodology is recognized. In TMS motor cortex mappings the evoked responses can be quantitatively monitored by electromyographic recordings; however, no such easily available setup exists for speech mappings. We present an accelerometer-based setup for detection of vocalization-related larynx vibrations combined with an automatic routine for voice onset detection for rTMS speech mapping applying naming. The results produced by the automatic routine were compared with the manually reviewed video-recordings. The new method was applied in the routine navigated rTMS speech mapping for 12 consecutive patients during preoperative workup for epilepsy or tumor surgery. The automatic routine correctly detected 96% of the voice onsets, resulting in 96% sensitivity and 71% specificity. The majority (63%) of misdetections were related to visible throat movements, extra voices before the response, or delayed naming of the previous stimuli. The no-response errors were correctly detected in 88% of events. The proposed setup for automatic detection of voice onsets provides quantitative additional data for analysis of the rTMS-induced speech response modifications. The objectively defined speech response latencies increase the repeatability, reliability and stratification of the rTMS results. Copyright © 2015 Elsevier B.V. All rights reserved.
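
    As a rough illustration of what an automatic voice-onset routine of this kind computes, the sketch below detects onsets by short-time energy thresholding on a one-dimensional signal; the frame length, threshold factor and synthetic test signal are assumptions made for illustration, not the authors' algorithm or parameters.

```python
# Minimal voice-onset detection by short-time energy thresholding, in the
# spirit of (but not identical to) the accelerometer-based routine above.
import numpy as np

def detect_voice_onset(signal, fs, frame_ms=10.0, threshold_factor=5.0):
    """Return the time (s) at which short-time energy first exceeds
    `threshold_factor` times the baseline energy estimated from the first
    ten frames (assumed pre-stimulus silence), or None if it never does."""
    frame_len = max(1, int(fs * frame_ms / 1000.0))
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames ** 2).mean(axis=1)
    baseline = np.median(energy[:10]) + 1e-12
    hits = np.flatnonzero(energy > threshold_factor * baseline)
    return None if hits.size == 0 else hits[0] * frame_len / fs

# Synthetic check: 0.3 s of low-level noise followed by a 100 Hz "vibration".
fs = 8000
t = np.arange(0, 1.0, 1.0 / fs)
sig = 0.01 * np.random.randn(t.size)
sig[t >= 0.3] += 0.5 * np.sin(2 * np.pi * 100.0 * t[t >= 0.3])
print("estimated onset:", detect_voice_onset(sig, fs), "s (true onset: 0.3 s)")
```

    In practice the same thresholding idea would be applied to the band-passed accelerometer trace rather than to a synthetic signal.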

  4. Designing interaction, voice, and inclusion in AAC research.

    PubMed

    Pullin, Graham; Treviranus, Jutta; Patel, Rupal; Higginbotham, Jeff

    2017-09-01

    The ISAAC 2016 Research Symposium included a Design Stream that examined timely issues across augmentative and alternative communication (AAC), framed in terms of designing interaction, designing voice, and designing inclusion. Each is a complex term with multiple meanings; together they represent challenging yet important frontiers of AAC research. The Design Stream was conceived by the four authors, researchers who have been exploring AAC and disability-related design throughout their careers, brought together by a shared conviction that designing for communication implies more than ensuring access to words and utterances. Each of these presenters came to AAC from a different background: interaction design, inclusive design, speech science, and social science. The resulting discussion among 24 symposium participants included controversies about the role of technology, tensions about independence and interdependence, and a provocation about taste. The paper concludes by proposing new directions for AAC research: (a) new interdisciplinary research could combine scientific and design research methods, as distant yet complementary as microanalysis and interaction design, (b) new research tools could seed accessible and engaging contextual research into voice within a social model of disability, and (c) new open research networks could support inclusive, international and interdisciplinary research.

  5. Voices to reckon with: perceptions of voice identity in clinical and non-clinical voice hearers

    PubMed Central

    Badcock, Johanna C.; Chhabra, Saruchi

    2013-01-01

    The current review focuses on the perception of voice identity in clinical and non-clinical voice hearers. Identity perception in auditory verbal hallucinations (AVH) is grounded in the mechanisms of human (i.e., real, external) voice perception, and shapes the emotional (distress) and behavioral (help-seeking) response to the experience. Yet, the phenomenological assessment of voice identity is often limited, for example to the gender of the voice, and has failed to take advantage of recent models and evidence on human voice perception. In this paper we aim to synthesize the literature on identity in real and hallucinated voices and begin by providing a comprehensive overview of the features used to judge voice identity in healthy individuals and in people with schizophrenia. The findings suggest some subtle, but possibly systematic biases across different levels of voice identity in clinical hallucinators that are associated with higher levels of distress. Next we provide a critical evaluation of voice processing abilities in clinical and non-clinical voice hearers, including recent data collected in our laboratory. Our studies used diverse methods, assessing recognition and binding of words and voices in memory as well as multidimensional scaling of voice dissimilarity judgments. The findings overall point to significant difficulties recognizing familiar speakers and discriminating between unfamiliar speakers in people with schizophrenia, both with and without AVH. In contrast, these voice processing abilities appear to be generally intact in non-clinical hallucinators. The review highlights some important avenues for future research and treatment of AVH associated with a need for care, and suggests some novel insights into other symptoms of psychosis. PMID:23565088

  6. Time-of-day effects on voice range profile performance in young, vocally untrained adult females.

    PubMed

    van Mersbergen, M R; Verdolini, K; Titze, I R

    1999-12-01

    Time-of-day effects on voice range profile performance were investigated in 20 vocally healthy untrained women between the ages of 18 and 35 years. Each subject produced two complete voice range profiles: one in the morning and one in the evening, about 36 hours apart. The order of morning and evening trials was counterbalanced across subjects. Dependent variables were (1) average minimum and average maximum intensity, (2) voice range profile area, and (3) center of gravity (median semitone pitch and median intensity). In this study, the results failed to reveal any clear evidence of time-of-day effects on voice range profile performance for any of the dependent variables. However, a reliable interaction of time-of-day and trial order was obtained for average minimum intensity. Investigation of other subject populations, in particular trained vocalists or those with laryngeal lesions, is required for any generalization of the results.

  7. Effect of tonal native language on voice fundamental frequency responses to pitch feedback perturbations during sustained vocalizations

    PubMed Central

    Liu, Hanjun; Wang, Emily Q.; Chen, Zhaocong; Liu, Peng; Larson, Charles R.; Huang, Dongfeng

    2010-01-01

    The purpose of this cross-language study was to examine whether the online control of voice fundamental frequency (F0) during vowel phonation is influenced by language experience. Native speakers of Cantonese and Mandarin, both tonal languages spoken in China, participated in the experiments. Subjects were asked to vocalize a vowel sound /u/ at their comfortable habitual F0, during which their voice pitch was unexpectedly shifted (±50, ±100, ±200, or ±500 cents, 200 ms duration) and fed back instantaneously to them over headphones. The results showed that Cantonese speakers produced significantly smaller responses than Mandarin speakers when the stimulus magnitude varied from 200 to 500 cents. Further, response magnitudes decreased along with the increase in stimulus magnitude in Cantonese speakers, which was not observed in Mandarin speakers. These findings suggest that online control of voice F0 during vocalization is sensitive to language experience. Further, systematic modulations of vocal responses across stimulus magnitude were observed in Cantonese speakers but not in Mandarin speakers, which indicates that this highly automatic feedback mechanism is sensitive to the specific tonal system of each language. PMID:21218905
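
    For readers unfamiliar with the cents scale used for the pitch-shift stimuli, the standard conversion to a frequency ratio is 2^(cents/1200); the short sketch below applies it to an assumed 200 Hz habitual F0, which is illustrative and not a value from the study.

```python
# The perturbation sizes above are given in cents (hundredths of a semitone).
# The standard conversion to a frequency ratio is ratio = 2 ** (cents / 1200).
# The 200 Hz habitual F0 below is an illustrative assumption, not study data.
def shifted_f0(f0_hz, cents):
    return f0_hz * 2.0 ** (cents / 1200.0)

f0 = 200.0
for cents in (50, 100, 200, 500):
    up, down = shifted_f0(f0, cents), shifted_f0(f0, -cents)
    print(f"±{cents:>3} cents around {f0:.0f} Hz -> {up:6.1f} Hz / {down:6.1f} Hz")
```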

  8. Voice disorders in teachers: occupational risk factors and psycho-emotional factors.

    PubMed

    van Houtte, Evelyne; Claeys, Sofie; Wuyts, Floris; van Lierde, Kristiane

    2012-10-01

    Teaching is a high-risk occupation for developing voice disorders. The purpose of this study was to investigate previously described vocal risk factors as well as to identify new risk factors related to both the personal life of the teacher (fluid intake, voice-demanding activities, family history of voice disorders, and children at home) and to environmental factors (temperature changes, chalk use, presence of curtains, carpet, or air-conditioning, acoustics in the classroom, and noise in and outside the classroom). The study group comprised 994 teachers (response rate 46.6%). All participants completed a questionnaire. Chi-square tests and logistic regression analyses were performed. A total of 51.2% (509/994) of the teachers presented with voice disorders. Women reported more voice disorders compared to men (56.4% versus 40.4%, P < 0.001). Vocal risk factors were a family history of voice disorders (P = 0.005), temperature changes in the classroom (P = 0.017), the number of pupils per classroom (P = 0.001), and noise level inside the classroom (P = 0.001). Teachers with voice disorders presented a higher level of psychological distress (P < 0.001) compared to teachers without voice problems. Voice disorders are frequent among teachers, especially in female teachers. The results of this study emphasize that multiple factors are involved in the development of voice disorders.

  9. Performer's attitudes toward seeking health care for voice issues: understanding the barriers.

    PubMed

    Gilman, Marina; Merati, Albert L; Klein, Adam M; Hapner, Edie R; Johns, Michael M

    2009-03-01

    Contemporary commercial music (CCM) performers rely heavily on their voice, yet may not be aware of the importance of proactive voice care. This investigation intends to identify perceptions and barriers to seeking voice care among CCM artists. This cross-sectional observational study used a 10-item Likert-based response questionnaire to assess current perceptions regarding voice care in a population of randomly selected participants of a professional CCM conference. Subjects (n=78) were queried regarding their likelihood of seeking medical care for minor medical problems and specifically for problems with their voice. Additional questions investigated anxiety about seeking voice care from a physician specialist, speech language pathologist, or voice coach; apprehension regarding findings of laryngeal examination and laryngeal imaging procedures; and the effect of medical insurance on the likelihood of seeking medical care. Eighty-two percent of subjects reported that their voice was a critical part of their profession; 41% stated that they were not likely to seek medical care for problems with their voice; and only 19% were reluctant to seek care for general medical problems (P<0.001). Anxiety about consulting a clinician about their voice was not a deterrent. Most importantly, 39% of subjects do not seek medical attention for their voice problems because of medical insurance coverage. CCM artists are less likely to seek medical care for voice problems than for general medical problems. Availability of medical insurance may be a factor. Availability of affordable voice care and education about the importance of voice care is needed in this population of vocal performers.

  10. How well does voice interaction work in space?

    NASA Technical Reports Server (NTRS)

    Morris, Randy B.; Whitmore, Mihriban; Adam, Susan C.

    1993-01-01

    The methods and results of an evaluation of the Voice Navigator software package are discussed. The first phase, or ground phase, of the study consisted of creating, or "training", computer voice files for specific commands by repeating each of six commands eight times. The files were then tested for recognition accuracy by the software aboard the microgravity aircraft. During the second phase, both voice training and testing were performed in microgravity. In-flight training was added because of problems encountered in phase one that were believed to be caused by ambient noise levels. Both quantitative and qualitative data were collected. Only one of the commands was found to offer consistently high recognition rates across subjects during the second phase.

  11. Comparisons of voice onset time for trained male singers and male nonsingers during speaking and singing.

    PubMed

    McCrea, Christopher R; Morris, Richard J

    2005-09-01

    This study was designed to examine the temporal acoustic differences between male trained singers and nonsingers during speaking and singing across voiced and voiceless English stop consonants. Recordings were made of 5 trained singers and 5 nonsingers, and acoustically analyzed for voice onset time (VOT). A mixed analysis of variance showed that the male trained singers had significantly longer mean VOT than did the nonsingers during voiceless stop production. Sung productions of voiceless stops had significantly longer mean VOTs than did the spoken productions. No significant differences were observed for the voiced stops, nor were any interactions observed. These results indicated that vocal training and phonatory task have a significant influence on VOT.

  12. Processing of voices in deafness rehabilitation by auditory brainstem implant.

    PubMed

    Coez, Arnaud; Zilbovicius, Monica; Ferrary, Evelyne; Bouccara, Didier; Mosnier, Isabelle; Ambert-Dahan, Emmanuèle; Kalamarides, Michel; Bizaguet, Eric; Syrota, André; Samson, Yves; Sterkers, Olivier

    2009-10-01

    The superior temporal sulcus (STS) is specifically involved in processing the human voice. Profound acquired deafness caused by post-meningitis cochlear ossification, or by bilateral vestibular schwannoma in neurofibromatosis type 2 patients, are two indications for auditory brainstem implantation (ABI). In order to objectively measure cortical voice processing in a group of ABI patients, we studied activation of the human temporal voice areas (TVA) with H2(15)O PET in a group of implanted deaf adults (n=7) with more than two years of auditory brainstem implant experience and a mean intelligibility score of 17%+/-17 [mean+/-SD]. Relative cerebral blood flow (rCBF) was measured in the following three conditions: silence, passive listening to human voices, and passive listening to non-voice stimuli. Compared to silence, the activations induced by voice and non-voice stimuli were located bilaterally in the superior temporal regions. However, compared to non-voice stimuli, the voice stimuli did not induce specific supplementary activation of the TVA along the STS. Comparison of the ABI group with a normal-hearing control group (n=7) showed that TVA activations were significantly enhanced in the control group. The ABI allowed transmission of sound stimuli to temporal brain regions but failed to transmit the specific cues of the human voice to the TVA. Moreover, during the silent condition, visual brain regions showed higher rCBF in the ABI group, whereas temporal brain regions showed higher rCBF in the control group. ABI patients had consequently developed enhanced visual strategies to keep interacting with their environment.

  13. Vocal fold nodules in adult singers: regional opinions about etiologic factors, career impact, and treatment. A survey of otolaryngologists, speech pathologists, and teachers of singing.

    PubMed

    Hogikyan, N D; Appel, S; Guinn, L W; Haxer, M J

    1999-03-01

    This study was undertaken to better understand current regional opinions regarding vocal fold nodules in adult singers. A questionnaire was sent to 298 persons representing the 3 professional groups most involved with the care of singers with vocal nodules: otolaryngologists, speech pathologists, and teachers of singing. The questionnaire queried respondents about their level of experience with this problem, and their beliefs about causative factors, career impact, and optimum treatment. Responses within and between groups were similar, with differences between groups primarily in the magnitude of positive or negative responses, rather than in the polarity of the responses. Prevailing opinions included: recognition of causative factors in both singing and speaking voice practices, optimism about responsiveness to appropriate treatment, enthusiasm for coordinated voice therapy and voice training as first-line treatment, and acceptance of microsurgical management as appropriate treatment if behavioral management fails.

  14. Whose Voice Is It Anyway? Hushing and Hearing "Voices" in Speech and Language Therapy Interactions with People with Chronic Schizophrenia

    ERIC Educational Resources Information Center

    Walsh, Irene P.

    2008-01-01

    Background: Some people with schizophrenia are considered to have communication difficulties because of concomitant language impairment and/or because of suppressed or "unusual" communication skills due to the often-chronic nature and manifestation of the illness process. Conversations with a person with schizophrenia pose many pragmatic…

  15. Two experimental tests of relational models of procedural justice: non-instrumental voice and authority group membership.

    PubMed

    Platow, Michael J; Eggins, Rachael A; Chattopadhyay, Rachana; Brewer, Greg; Hardwick, Lisa; Milsom, Laurin; Brocklebank, Jacinta; Lalor, Thérèse; Martin, Rowena; Quee, Michelle; Vassallo, Sara; Welsh, Jenny

    2013-06-01

    In both a laboratory experiment (in Australia) using university as the basis of group membership, and a scenario experiment (in India) using religion as the basis of group membership, we observe more favourable respect and fairness ratings in response to an in-group authority than an out-group authority who administers non-instrumental voice. Moreover, we observe in our second experiment that reported likelihood of protest (herein called "social-change voice") was relatively high following non-instrumental voice from an out-group authority, but relatively low following non-instrumental voice from an in-group authority. Our findings are consistent with relational models of procedural justice, and extend the work by examining likely use of alternative forms of voice as well as highlighting the relative importance of instrumentality. ©2012 The British Psychological Society.

  16. Non-verbal emotion communication training induces specific changes in brain function and structure

    PubMed Central

    Kreifelts, Benjamin; Jacob, Heike; Brück, Carolin; Erb, Michael; Ethofer, Thomas; Wildgruber, Dirk

    2013-01-01

    The perception of emotional cues from voice and face is essential for social interaction. However, this process is altered in various psychiatric conditions along with impaired social functioning. Emotion communication trainings have been demonstrated to improve social interaction in healthy individuals and to reduce emotional communication deficits in psychiatric patients. Here, we investigated the impact of a non-verbal emotion communication training (NECT) on cerebral activation and brain structure in a controlled and combined functional magnetic resonance imaging (fMRI) and voxel-based morphometry study. NECT-specific reductions in brain activity occurred in a distributed set of brain regions including face and voice processing regions as well as emotion processing- and motor-related regions presumably reflecting training-induced familiarization with the evaluation of face/voice stimuli. Training-induced changes in non-verbal emotion sensitivity at the behavioral level and the respective cerebral activation patterns were correlated in the face-selective cortical areas in the posterior superior temporal sulcus and fusiform gyrus for valence ratings and in the temporal pole, lateral prefrontal cortex and midbrain/thalamus for the response times. A NECT-induced increase in gray matter (GM) volume was observed in the fusiform face area. Thus, NECT induces both functional and structural plasticity in the face processing system as well as functional plasticity in the emotion perception and evaluation system. We propose that functional alterations are presumably related to changes in sensory tuning in the decoding of emotional expressions. Taken together, these findings highlight that the present experimental design may serve as a valuable tool to investigate the altered behavioral and neuronal processing of emotional cues in psychiatric disorders as well as the impact of therapeutic interventions on brain function and structure. PMID:24146641

  17. Non-verbal emotion communication training induces specific changes in brain function and structure.

    PubMed

    Kreifelts, Benjamin; Jacob, Heike; Brück, Carolin; Erb, Michael; Ethofer, Thomas; Wildgruber, Dirk

    2013-01-01

    The perception of emotional cues from voice and face is essential for social interaction. However, this process is altered in various psychiatric conditions along with impaired social functioning. Emotion communication trainings have been demonstrated to improve social interaction in healthy individuals and to reduce emotional communication deficits in psychiatric patients. Here, we investigated the impact of a non-verbal emotion communication training (NECT) on cerebral activation and brain structure in a controlled and combined functional magnetic resonance imaging (fMRI) and voxel-based morphometry study. NECT-specific reductions in brain activity occurred in a distributed set of brain regions including face and voice processing regions as well as emotion processing- and motor-related regions presumably reflecting training-induced familiarization with the evaluation of face/voice stimuli. Training-induced changes in non-verbal emotion sensitivity at the behavioral level and the respective cerebral activation patterns were correlated in the face-selective cortical areas in the posterior superior temporal sulcus and fusiform gyrus for valence ratings and in the temporal pole, lateral prefrontal cortex and midbrain/thalamus for the response times. A NECT-induced increase in gray matter (GM) volume was observed in the fusiform face area. Thus, NECT induces both functional and structural plasticity in the face processing system as well as functional plasticity in the emotion perception and evaluation system. We propose that functional alterations are presumably related to changes in sensory tuning in the decoding of emotional expressions. Taken together, these findings highlight that the present experimental design may serve as a valuable tool to investigate the altered behavioral and neuronal processing of emotional cues in psychiatric disorders as well as the impact of therapeutic interventions on brain function and structure.

  18. Developmental trends in the interaction between auditory and linguistic processing.

    PubMed

    Jerger, S; Pirozzolo, F; Jerger, J; Elizondo, R; Desai, S; Wright, E; Reynosa, R

    1993-09-01

    The developmental course of multidimensional speech processing was examined in 80 children between 3 and 6 years of age and in 60 adults between 20 and 86 years of age. Processing interactions were assessed with a speeded classification task (Garner, 1974a), which required the subjects to attend selectively to the voice dimension while ignoring the linguistic dimension, and vice versa. The children and adults exhibited both similarities and differences in the patterns of processing dependencies. For all ages, performance for each dimension was slower in the presence of variation in the irrelevant dimension; irrelevant variation in the voice dimension disrupted performance more than irrelevant variation in the linguistic dimension. Trends in the degree of interference, on the other hand, showed significant differences between dimensions as a function of age. Whereas the degree of interference when the voice dimension was relevant did not show significant age-related change, the degree of interference when the word dimension was relevant declined significantly with age in both a linear and a quadratic manner. A major age-related change in the relation between dimensions was that word processing, relative to voice-gender processing, required significantly more time in the children than in the adults. Overall, the developmental course characterizing multidimensional speech processing evidenced more pronounced change when the linguistic dimension, rather than the voice dimension, was relevant.

  19. Examining Literacy Teachers' Perceptions of the Use of VoiceThread in an Elementary, Middle School, and a High School Classroom for Enhancing Instructional Goals

    ERIC Educational Resources Information Center

    Stover, Katie; Kissel, Brian; Wood, Karen; Putman, Michael

    2015-01-01

    In today's digital age, Web 2.0 tools such as VoiceThread allow users to integrate images, voices, and responses within one digital platform, providing students with the opportunity to add another layer of meaning to their texts. We conducted this research to expand our understanding of the processes necessary for integrating digital tools into…

  20. Understanding the 'Anorexic Voice' in Anorexia Nervosa.

    PubMed

    Pugh, Matthew; Waller, Glenn

    2017-05-01

    In common with individuals experiencing a number of disorders, people with anorexia nervosa report experiencing an internal 'voice'. The anorexic voice comments on the individual's eating, weight and shape and instructs the individual to restrict or compensate. However, the core characteristics of the anorexic voice are not known. This study aimed to develop a parsimonious model of the voice characteristics that are related to key features of eating disorder pathology and to determine whether patients with anorexia nervosa fall into groups with different voice experiences. The participants were 49 women with full diagnoses of anorexia nervosa. Each completed validated measures of the power and nature of their voice experience and of their responses to the voice. Different voice characteristics were associated with current body mass index, duration of disorder and eating cognitions. Two subgroups emerged, with 'weaker' and 'stronger' voice experiences. Those with stronger voices were characterized by having more negative eating attitudes, more severe compensatory behaviours, a longer duration of illness and a greater likelihood of having the binge-purge subtype of anorexia nervosa. The findings indicate that the anorexic voice is an important element of the psychopathology of anorexia nervosa. Addressing the anorexic voice might be helpful in enhancing outcomes of treatments for anorexia nervosa, but that conclusion might apply only to patients with more severe eating psychopathology. Experiences of an internal 'anorexic voice' are common in anorexia nervosa. Clinicians should consider the role of the voice when formulating eating pathology in anorexia nervosa, including how individuals perceive and relate to that voice. Addressing the voice may be beneficial, particularly in more severe and enduring forms of anorexia nervosa. When working with the voice, clinicians should aim to address both the content of the voice and how individuals relate and respond to it. Copyright © 2016 John Wiley & Sons, Ltd.

  1. Short-Term Effect of Two Semi-Occluded Vocal Tract Training Programs on the Vocal Quality of Future Occupational Voice Users: "Resonant Voice Training Using Nasal Consonants" Versus "Straw Phonation".

    PubMed

    Meerschman, Iris; Van Lierde, Kristiane; Peeters, Karen; Meersman, Eline; Claeys, Sofie; D'haeseleer, Evelien

    2017-09-18

    The purpose of this study was to determine the short-term effect of 2 semi-occluded vocal tract training programs, "resonant voice training using nasal consonants" versus "straw phonation," on the vocal quality of vocally healthy future occupational voice users. A multigroup pretest-posttest randomized control group design was used. Thirty healthy speech-language pathology students with a mean age of 19 years (range: 17-22 years) were randomly assigned into a resonant voice training group (practicing resonant exercises across 6 weeks, n = 10), a straw phonation group (practicing straw phonation across 6 weeks, n = 10), or a control group (receiving no voice training, n = 10). A voice assessment protocol consisting of both subjective (questionnaire, participant's self-report, auditory-perceptual evaluation) and objective (maximum performance task, aerodynamic assessment, voice range profile, acoustic analysis, acoustic voice quality index, dysphonia severity index) measurements and determinations was used to evaluate the participants' voice pre- and posttraining. Groups were compared over time using linear mixed models and generalized linear mixed models. Within-group effects of time were determined using post hoc pairwise comparisons. No significant time × group interactions were found for any of the outcome measures, indicating no differences in evolution over time among the 3 groups. Within-group effects of time showed a significant improvement in dysphonia severity index in the resonant voice training group, and a significant improvement in the intensity range in the straw phonation group. Results suggest that the semi-occluded vocal tract training programs using resonant voice training and straw phonation may have a positive impact on the vocal quality and vocal capacities of future occupational voice users. The resonant voice training caused an improved dysphonia severity index, and the straw phonation training caused an expansion of the intensity range in this population.

  2. A "Surprising Shock" in the Cathedral: Getting Year 7 to Vocalise Responses to the Murder of Thomas Becket

    ERIC Educational Resources Information Center

    Partridge, Mary

    2011-01-01

    Mary Partridge wanted her pupils not only to become more aware of competing and contrasting voices in the past, but to understand how historians orchestrate those voices. Using Edward Grim's eye-witness account of Thomas Becket's murder, her Year 7 pupils explored nuances in the word "shocking" as a way of distinguishing the responses of…

  3. The effects of infant massage on weight, height, and mother-infant interaction.

    PubMed

    Lee, Hae Kyung

    2006-12-01

    The purpose of this study was to test the effects of infant massage (auditory (mother's voice), tactile/kinesthetic (massage) and visual (eye-to-eye contact) stimulation) on infant weight and height and on mother-infant interaction in normal infants over a period of 4 weeks. The study used a nonequivalent control group pretest-posttest design. The experimental group infants (aged 2-6 months) participated in one of the infant massage programs at the health district center for 4 weeks. The control group (N=26) was paired with the experimental group (N=26) by matching the infant's age and sex. Infant weight, height, and mother-infant interaction were measured twice, and the mother-infant interaction was video-recorded for 10 minutes in a room at the health center. After 4 weeks of massage, there were no significant differences in weight gain or height increase between the two groups. Comparison of the total scores for mother-infant interaction between the two groups showed a significant difference (t=5.21, p=.000). There were also significant differences in maternal response (t=3.78, p=.000), infant response (t=5.71, p=.000) and dyadic response (t=4.05, p=.000) in the mother-infant interaction between the two groups. Overall, these results indicate that infant massage facilitates mother-infant interaction for infants and the mothers who massage them.

  4. [Social consequence of a dysphonic voice, design and validation of a questionnaire and first results].

    PubMed

    Revis, J; Robieux, C; Ghio, A; Giovanni, A

    2013-01-01

    In our society, which is built on communication, dysphonia becomes a handicap that can lead to discrimination at work. Several commercial services are now provided by phone only, and voice quality is essential for the employees concerned. The aim of this work was to determine the social image conveyed by dysphonia. Our hypothesis was that dysphonic voices are perceived more negatively than normal voices. Forty voice samples (30 dysphonic and 10 normal) were presented in random order to a perceptual jury of 20 naïve listeners. The task for each listener was to complete a questionnaire, designed specifically for this study, describing the speaker's appearance and personality. Twenty items were evaluated, divided into 4 categories: health, temperament, appearance, and way of life. The results showed significant differences between normal subjects and dysphonic patients. For instance, the pathological voices were depicted as more tired, introverted and sloppy than normal voices, and as less trustworthy. No significant differences were found according to the severity of the voice disorder. This work is ongoing; it allowed us to validate our questionnaire and offers promising perspectives for patient management and voice therapy.

  5. The Bangor Voice Matching Test: A standardized test for the assessment of voice perception ability.

    PubMed

    Mühl, Constanze; Sheil, Orla; Jarutytė, Lina; Bestelmeyer, Patricia E G

    2017-11-09

    Recognising the identity of conspecifics is an important yet highly variable skill. Approximately 2% of the population suffers from a socially debilitating deficit in face recognition. More recently, evidence of a similar deficit in voice perception (phonagnosia) has emerged. Face perception tests have been readily available for years, advancing our understanding of underlying mechanisms in face perception. In contrast, voice perception has received less attention, and the construction of standardized voice perception tests has been neglected. Here we report the construction of the first standardized test for voice perception ability. Participants make a same/different identity decision after hearing two voice samples. Item Response Theory guided item selection to ensure the test discriminates between a range of abilities. The test provides a starting point for the systematic exploration of the cognitive and neural mechanisms underlying voice perception. With a high test-retest reliability (r=.86) and a short assessment duration (~10 min), this test examines individual abilities reliably and quickly and therefore also has potential for use in developmental and neuropsychological populations.
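
    Item selection guided by Item Response Theory typically rests on an item response function such as the two-parameter logistic (2PL) model; the sketch below illustrates that generic model and is not the Bangor test's actual model or item parameters.

```python
# Item Response Theory sketch: a two-parameter logistic (2PL) item response
# function of the kind that IRT-guided item selection typically relies on.
# Parameter values are illustrative assumptions only.
import math

def p_correct(theta, a, b):
    """Probability that a listener of ability `theta` answers an item
    with discrimination `a` and difficulty `b` correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A sharply discriminating item separates listeners around its difficulty;
# a flat item barely does, so it contributes little information to the test.
for theta in (-1.0, 0.0, 1.0):
    sharp = p_correct(theta, 2.0, 0.0)
    flat = p_correct(theta, 0.4, 0.0)
    print(f"ability {theta:+.1f}:  sharp item {sharp:.2f}   flat item {flat:.2f}")
```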

  6. Color and texture associations in voice-induced synesthesia

    PubMed Central

    Moos, Anja; Simmons, David; Simner, Julia; Smith, Rachel

    2013-01-01

    Voice-induced synesthesia, a form of synesthesia in which synesthetic perceptions are induced by the sounds of people's voices, appears to be relatively rare and has not been systematically studied. In this study we investigated the synesthetic color and visual texture perceptions experienced in response to different types of “voice quality” (e.g., nasal, whisper, falsetto). Experiences of three different groups—self-reported voice synesthetes, phoneticians, and controls—were compared using both qualitative and quantitative analysis in a study conducted online. Whilst, in the qualitative analysis, synesthetes used more color and texture terms to describe voices than either phoneticians or controls, only weak differences, and many similarities, between groups were found in the quantitative analysis. Notable consistent results between groups were the matching of higher speech fundamental frequencies with lighter and redder colors, the matching of “whispery” voices with smoke-like textures, and the matching of “harsh” and “creaky” voices with textures resembling dry cracked soil. These data are discussed in the light of current thinking about definitions and categorizations of synesthesia, especially in cases where individuals apparently have a range of different synesthetic inducers. PMID:24032023

  7. Changes after voice therapy in objective and subjective voice measurements of pediatric patients with vocal nodules.

    PubMed

    Tezcaner, Ciler Zahide; Karatayli Ozgursoy, Selmin; Sati, Isil; Dursun, Gursel

    2009-12-01

    The aim of this study was to analyze the efficacy of voice therapy in children with vocal nodules using acoustic analysis and subjective assessment. Thirty-nine patients with vocal fold nodules, aged between 7 and 14, were included in the study. Each subject had voice therapy led by an experienced voice therapist once a week. All diagnostic and follow-up work-ups were performed before voice therapy and again after the third or sixth month. Transoral and/or transnasal videostroboscopic examination was performed, acoustic analysis was carried out with the Multi-Dimensional Voice Program (MDVP), and perceptual analysis was performed with the GRBAS scale. As for the perceptual assessment, the difference was significant for four parameters out of five. A significant improvement was found in the acoustic analysis parameters of jitter, shimmer, and noise-to-harmonic ratio. Voice therapy planned according to the patients' needs, age, compliance, and response to therapy had positive effects on pediatric patients with vocal nodules. Acoustic analysis and GRBAS may be used successfully in the follow-up of pediatric vocal nodule treatment.
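
    As a rough illustration of two of the perturbation measures reported (jitter and shimmer), the sketch below computes their common "local" definitions from invented cycle-to-cycle periods and amplitudes; MDVP's exact formulas and normative thresholds differ in detail.

```python
# Local jitter (shimmer) is commonly computed as the mean absolute difference
# between consecutive glottal periods (peak amplitudes) divided by the mean
# period (amplitude).  The sequences below are invented values standing in
# for the output of a pitch/amplitude extractor.
import numpy as np

def local_jitter(periods_s):
    p = np.asarray(periods_s, dtype=float)
    return np.abs(np.diff(p)).mean() / p.mean()

def local_shimmer(amplitudes):
    a = np.asarray(amplitudes, dtype=float)
    return np.abs(np.diff(a)).mean() / a.mean()

periods = [0.00405, 0.00398, 0.00402, 0.00410, 0.00396]   # seconds per cycle
amps = [0.82, 0.85, 0.80, 0.84, 0.81]                      # arbitrary units
print(f"local jitter:  {100 * local_jitter(periods):.2f} %")
print(f"local shimmer: {100 * local_shimmer(amps):.2f} %")
```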

  8. Are 6-month-old human infants able to transfer emotional information (happy or angry) from voices to faces? An eye-tracking study.

    PubMed

    Palama, Amaya; Malsert, Jennifer; Gentaz, Edouard

    2018-01-01

    The present study examined whether 6-month-old infants could transfer amodal information (i.e., independently of sensory modalities) from emotional voices to emotional faces. Thus, sequences of successive emotional stimuli (a voice followed by faces, i.e., from one sensory modality, auditory, to another, visual), corresponding to a cross-modal transfer, were displayed to 24 infants. Each sequence presented only an emotional (angry or happy) or neutral voice, followed by the simultaneous presentation of two static emotional faces (angry or happy, congruous or incongruous with the emotional voice). Eye movements in response to the visual stimuli were recorded with an eye-tracker. First, results suggested no difference in infants' looking time to the happy or angry face after listening to the neutral voice or the angry voice. Nevertheless, after listening to the happy voice, infants looked longer at the incongruent angry face (the mouth area in particular) than at the congruent happy face. These results revealed that a cross-modal transfer (from auditory to visual modalities) is possible for 6-month-old infants only after the presentation of a happy voice, suggesting that they recognize this emotion amodally.

  9. Recruitment and Retention Challenges in a Technology-Based Study with Older Adults Discharged from a Geriatric Rehabilitation Unit.

    PubMed

    McCloskey, Rose; Jarrett, Pamela; Stewart, Connie; Keeping-Burke, Lisa

    2015-01-01

    Technology has the potential to offer support to older adults after being discharged from geriatric rehabilitation. This article highlights recruitment and retention challenges in a study examining an interactive voice response telephone system designed to monitor and support older adults and their informal caregivers following discharge from a geriatric rehabilitation unit. A prospective longitudinal study was planned to examine the feasibility of an interactive voice telephone system in facilitating the transition from rehabilitation to home for older adults and their family caregivers. Patient participants were required to make daily calls into the system. Using standardized instruments, data was to be collected at baseline and during home visits. Older adults and their caregivers may not be willing to learn how to use new technology at the time of hospital discharge. Poor recruitment and retention rates prevented analysis of findings. The importance of recruitment and retention in any study should never be underestimated. Target users of any intervention need to be included in both the design of the intervention and the study examining its benefit. Identifying the issues associated with introducing technology with a group of older rehabilitation patients should assist others who are interested in exploring the role of technology in facilitating hospital discharge. © 2014 Association of Rehabilitation Nurses.

  10. Academic voice: On feminism, presence, and objectivity in writing.

    PubMed

    Mitchell, Kim M

    2017-10-01

    Academic voice is an oft-discussed, yet variably defined concept, and confusion exists over its meaning, evaluation, and interpretation. This paper will explore perspectives on academic voice and counterarguments to the positivist origins of objectivity in academic writing. While many epistemological and methodological perspectives exist, the feminist literature on voice is explored here as the contrary position. From the feminist perspective, voice is a socially constructed concept that cannot be separated from the experiences, emotions, and identity of the writer and, thus, constitutes a reflection of an author's way of knowing. A case study of how author presence can enhance meaning in text is included. Subjective experience is imperative to a practice involving human interaction. Nursing practice, our intimate involvement in patient's lives, and the nature of our research are not value free. A view is presented that a visible presence of an author in academic writing is relevant to the nursing discipline. The continued valuing of an objective, colorless academic voice has consequences for student writers and the faculty who teach them. Thus, a strategically used multivoiced writing style is warranted. © 2017 John Wiley & Sons Ltd.

  11. Speaking more broadly: an examination of the nature, antecedents, and consequences of an expanded set of employee voice behaviors.

    PubMed

    Maynes, Timothy D; Podsakoff, Philip M

    2014-01-01

    Scholarly interest in employee voice behavior has increased dramatically over the past 15 years. Although this research has produced valuable knowledge, it has focused almost exclusively on voice as a positively intended challenge to the status quo, even though some scholars have argued that it need not challenge the status quo or be well intentioned. Thus, in this paper, we create an expanded view of voice; one that extends beyond voice as a positively intended challenge to the status quo to include voice that supports how things are being done in organizations as well as voice that may not be well intentioned. We construct a framework based on this expanded view that identifies 4 different types of voice behavior (supportive, constructive, defensive, and destructive). We then develop and validate survey measures for each of these. Evidence from 5 studies across 4 samples provides strong support for our new measures in that (a) a 4-factor confirmatory factor analysis model fit the data significantly better than 1-, 2-, or 3-factor models; (b) the voice measures converged with and yet remained distinct from conceptually related comparison constructs; (c) personality predictors exhibited unique patterns of relationships with the different types of voice; (d) variations in actual voice behaviors had a direct causal impact on responses to the survey items; and (e) each type of voice significantly impacted important outcomes for voicing employees (e.g., likelihood of relying on a voicing employee's opinions and evaluations of a voicing employee's overall performance). Implications of our findings are discussed. PsycINFO Database Record (c) 2014 APA, all rights reserved

  12. Validation and Adaptation of the Singing Voice Handicap Index for Egyptian Singing Voice.

    PubMed

    Abou-Elsaad, Tamer; Baz, Hemmat; Afsah, Omayma; Abo-Elsoud, Hend

    2017-01-01

    Measuring the severity of a voice disorder is difficult. This can be achieved by both subjective and objective measures. The Voice Handicap Index is the most known and used self-rating tool for voice disorders. The Classical Singing Handicap Index (CSHI) is a self-administered questionnaire measuring the impact of vocal deviation on the quality of life of singers. The objective of this study was to develop an Arabic version of the CSHI and to test its validity and reliability in Egyptian singers with different singing styles with normal voice and with voice disorders. The interpreted version was administered to 70 Egyptian singers including artistic singers (classical and popular) and specialized singers (Quran reciters and priests) who were divided into 40 asymptomatic singers (control group) and 30 singers with voice disorders. Participants' responses were statistically analyzed to assess the validity and reliability, and to compare the patient group with the control group. Quran reciters, patients with no previous professional training, and patients with vocal fold lesions demonstrated the highest scores. The Arabic version of CSHI is found to be a reliable, valid, and sensitive self-assessment tool that can be used in the clinical practice for the evaluation of the impact of voice disorders on singing voice. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  13. Performance, Accuracy, Data Delivery, and Feedback Methods in Order Selection: A Comparison of Voice, Handheld, and Paper Technologies

    ERIC Educational Resources Information Center

    Ludwig, Timothy D.; Goomas, David T.

    2007-01-01

    A field study was conducted in auto-parts after-market distribution centers where selectors used handheld computers to receive instructions and feedback about their product selection process. A wireless voice-interaction technology was then implemented in a multiple baseline fashion across three departments of a warehouse (N = 14) and was associated…

  14. Listening to Voices at the Educational Frontline: New Administrators' Experiences of the Transition from Teacher to Vice-Principal

    ERIC Educational Resources Information Center

    Armstrong, Denise E.

    2015-01-01

    This qualitative study explored the transition from teaching to administration through the voices of four novice vice-principals. An integrative approach was used to capture the interaction between new vice-principals, their external contexts, and the resulting leadership outcomes. The data revealed that in spite of these new administrators'…

  15. Voice/Data Integration in Mobile Radio Networks: Overview and Future Research Directions

    DTIC Science & Technology

    1989-09-30

    [Only fragmentary excerpt text is available for this record.] ... degradation in interactive speech when delays are less than about 300 ms (Gold, 1977; Gitman and Frank, 1978). When delays are larger (between 300 ms and 1.5 ...). Cited references include Gitman, I. and H. Frank (1978), "Economic Analysis of Integrated Voice and Data Networks: A Case Study," Proc. IEEE 66, 1549-1570.

  16. On Pitch Lowering Not Linked to Voicing: Nguni and Shona Group Depressors

    ERIC Educational Resources Information Center

    Downing, Laura J.

    2009-01-01

    This paper tests how well two theories of tone-segment interactions account for the lowering effect of so-called depressor consonants on tone in languages of the Shona and Nguni groups of Southern Bantu. I show that single source theories, which propose that pitch lowering is inextricably linked to consonant voicing, as they are reflexes of the…

  17. Elements of Collaborative Discussion and Shared Problem Solving in a Voice-Enhanced Multiplayer Game

    ERIC Educational Resources Information Center

    Bluemink, Johanna; Jarvela, Sanna

    2011-01-01

    This study focuses on investigating the nature of small-group collaborative interaction in a voice-enhanced multiplayer game called "eScape". The aim was to analyse the elements of groups' collaborative discussion and to explore the nature of the players' shared problem solving activity during the solution critical moments in the game. The data…

  18. A comparison of the VHI, VHI-10, and V-RQOL for measuring the effect of botox therapy in adductor spasmodic dysphonia.

    PubMed

    Morzaria, Sanjay; Damrose, Edward J

    2012-05-01

    Although disease-specific quality-of-life (QOL) instruments are an invaluable outcome measure in spasmodic dysphonia, there is no consensus on which QOL instrument should be used. To determine the responsiveness of the Voice Handicap Index (VHI), Voice Handicap Index-10 (VHI-10), and Voice-Related Quality of Life (V-RQOL) to the treatment effect of botulinum toxin (Botox) in adductor spasmodic dysphonia (ADSD). Stanford University Voice and Swallowing Center. Prospective case series (level of evidence=4). Consecutive ADSD patients with a stable Botox dose-response relationship were recruited prospectively. VHI, VHI-10, and V-RQOL scores were obtained pretreatment and during the middle third of the posttreatment injection cycle. Thirty-seven patients completed the follow-up. The average total Botox dose was 0.88 units. The average follow-up time after injection was 7.84 weeks. The pretreatment QOL scores reflected the burden of the disease. All three instruments were highly correlated in subscale and total scores. After treatment, all three instruments showed significant improvement. The VHI, VHI-10, and V-RQOL all reflected the morbidity associated with ADSD and were significantly responsive to the effect of Botox therapy. The choice of instrument should be based on physician preference. Copyright © 2012 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  19. English Voicing in Dimensional Theory*

    PubMed Central

    Iverson, Gregory K.; Ahn, Sang-Cheol

    2007-01-01

    Assuming a framework of privative features, this paper interprets two apparently disparate phenomena in English phonology as structurally related: the lexically specific voicing of fricatives in plural nouns like wives or thieves and the prosodically governed “flapping” of medial /t/ (and /d/) in North American varieties, which we claim is itself not a rule per se, but rather a consequence of the laryngeal weakening of fortis /t/ in interaction with speech-rate determined segmental abbreviation. Taking as our point of departure the Dimensional Theory of laryngeal representation developed by Avery & Idsardi (2001), along with their assumption that English marks voiceless obstruents but not voiced ones (Iverson & Salmons 1995), we find that an unexpected connection between fricative voicing and coronal flapping emerges from the interplay of familiar phonemic and phonetic factors in the phonological system. PMID:18496590

  20. Utility of an Interactive Voice Response System to Assess Antiretroviral Pharmacotherapy Adherence Among Substance Users Living with HIV/AIDS in the Rural South

    PubMed Central

    Simpson, Cathy A.; Huang, Jin; Roth, David L.; Stewart, Katharine E.

    2013-01-01

    Promoting HIV medication adherence is basic to HIV/AIDS clinical care and reducing transmission risk and requires sound assessment of adherence and risk behaviors such as substance use that may interfere with adherence. The present study evaluated the utility of a telephone-based Interactive Voice Response self-monitoring (IVR SM) system to assess prospectively daily HIV medication adherence and its correlates among rural substance users living with HIV/AIDS. Community-dwelling patients (27 men, 17 women) recruited from a non-profit HIV medical clinic in rural Alabama reported daily medication adherence, substance use, and sexual practices for up to 10 weeks. Daily IVR reports of adherence were compared with short-term IVR-based recall reports over 4- and 7-day intervals. Daily IVR reports were positively correlated with both recall measures over matched intervals. However, 7-day recall yielded higher adherence claims compared to the more contemporaneous daily IVR and 4-day recall measures suggestive of a social desirability bias over the longer reporting period. Nearly one-third of participants (32%) reported adherence rates below the optimal rate of 95% (range=0–100%). Higher IVR-reported daily medication adherence was associated with lower baseline substance use, shorter duration of HIV/AIDS medical care, and higher IVR utilization. IVR SM appears to be a useful telehealth tool for monitoring medication adherence and identifying patients with suboptimal adherence between clinic visits and can help address geographic barriers to care among disadvantaged, rural adults living with HIV/AIDS. PMID:23651105
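
    A small, hypothetical sketch of the comparison described above: daily IVR yes/no reports aggregated over 4- and 7-day windows, set against recall-based claims, with the 95% optimum used as a flagging threshold. All values are illustrative, not study data.

```python
# Illustrative only: hypothetical reports and recall claims.
import numpy as np

daily_reports = np.array([1, 1, 0, 1, 1, 1, 1])   # last 7 daily IVR reports (1 = dose taken)
recall_claims = {"4-day": 1.00, "7-day": 1.00}    # participant's recall-based claims

for window in (4, 7):
    ivr_rate = float(np.mean(daily_reports[-window:]))
    claim = recall_claims[f"{window}-day"]
    flag = " (below 95% optimum)" if ivr_rate < 0.95 else ""
    print(f"{window}-day: IVR {ivr_rate:.0%} vs recall {claim:.0%}{flag}")
```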

  1. Increasing the Interaction with Distant Learners on an Interactive Telecommunications System.

    ERIC Educational Resources Information Center

    Schlenker, Jon

    1994-01-01

    Suggests a variety of ways to increase interaction with distance learners on an interactive telecommunications system, based on experiences at the University of Maine at Augusta. Highlights include establishing the proper environment; telephone systems; voice mail; fax; electronic mail; computer conferencing; postal mail; printed materials; and…

  2. Autonomic Nervous System Responses During Perception of Masked Speech may Reflect Constructs other than Subjective Listening Effort

    PubMed Central

    Francis, Alexander L.; MacPherson, Megan K.; Chandrasekaran, Bharath; Alvar, Ann M.

    2016-01-01

    Typically, understanding speech seems effortless and automatic. However, a variety of factors may, independently or interactively, make listening more effortful. Physiological measures may help to distinguish between the application of different cognitive mechanisms whose operation is perceived as effortful. In the present study, physiological and behavioral measures associated with task demand were collected along with behavioral measures of performance while participants listened to and repeated sentences. The goal was to measure psychophysiological reactivity associated with three degraded listening conditions, each of which differed in terms of the source of the difficulty (distortion, energetic masking, and informational masking), and therefore were expected to engage different cognitive mechanisms. These conditions were chosen to be matched for overall performance (keywords correct), and were compared to listening to unmasked speech produced by a natural voice. The three degraded conditions were: (1) Unmasked speech produced by a computer speech synthesizer, (2) Speech produced by a natural voice and masked by speech-shaped noise, and (3) Speech produced by a natural voice and masked by two-talker babble. Masked conditions were both presented at a -8 dB signal-to-noise ratio (SNR), a level shown in previous research to result in comparable levels of performance for these stimuli and maskers. Performance was measured in terms of proportion of key words identified correctly, and task demand or effort was quantified subjectively by self-report. Measures of psychophysiological reactivity included electrodermal (skin conductance) response frequency and amplitude, blood pulse amplitude and pulse rate. Results suggest that the two masked conditions evoked stronger psychophysiological reactivity than did the two unmasked conditions even when behavioral measures of listening performance and listeners’ subjective perception of task demand were comparable across the three degraded conditions. PMID:26973564
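
    As a rough illustration of how a masker can be scaled to a target signal-to-noise ratio such as the -8 dB used in the masked conditions, the sketch below uses an RMS-based SNR definition; the stand-in signals and the exact scaling convention are assumptions, not the study's materials.

```python
# Illustrative only: RMS-based SNR scaling with synthetic stand-in signals.
import numpy as np

def mix_at_snr(speech, masker, snr_db):
    """Scale `masker` so that 20*log10(rms(speech)/rms(masker)) equals snr_db,
    then return the mixture."""
    def rms(x):
        return np.sqrt(np.mean(x ** 2))
    target_masker_rms = rms(speech) / (10.0 ** (snr_db / 20.0))
    return speech + masker * (target_masker_rms / rms(masker))

fs = 16000
t = np.arange(fs) / fs
speech = 0.1 * np.sin(2 * np.pi * 220 * t)   # stand-in for a speech signal
masker = 0.05 * np.random.randn(fs)          # stand-in for noise or babble
mixture = mix_at_snr(speech, masker, snr_db=-8.0)
```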

  3. Perceptual fluency and judgments of vocal aesthetics and stereotypicality.

    PubMed

    Babel, Molly; McGuire, Grant

    2015-05-01

    Research has shown that processing dynamics on the perceiver's end determine aesthetic pleasure. Specifically, typical objects, which are processed more fluently, are perceived as more attractive. We extend this notion of perceptual fluency to judgments of vocal aesthetics. Vocal attractiveness has traditionally been examined with respect to sexual dimorphism and the apparent size of a talker, as reconstructed from the acoustic signal, despite evidence that gender-specific speech patterns are learned social behaviors. In this study, we report on a series of three experiments using 60 voices (30 females) to compare the relationship between judgments of vocal attractiveness, stereotypicality, and gender categorization fluency. Our results indicate that attractiveness and stereotypicality are highly correlated for female and male voices. Stereotypicality and categorization fluency were also correlated for male voices, but not female voices. Crucially, stereotypicality and categorization fluency interacted to predict attractiveness, suggesting the role of perceptual fluency is present, but nuanced, in judgments of human voices. © 2014 Cognitive Science Society, Inc.

  4. Human vocal attractiveness as signaled by body size projection.

    PubMed

    Xu, Yi; Lee, Albert; Wu, Wing-Li; Liu, Xuan; Birkholz, Peter

    2013-01-01

    Voice, as a secondary sexual characteristic, is known to affect the perceived attractiveness of human individuals. But the underlying mechanism of vocal attractiveness has remained unclear. Here, we presented human listeners with acoustically altered natural sentences and fully synthetic sentences with systematically manipulated pitch, formants and voice quality based on a principle of body size projection reported for animal calls and emotional human vocal expressions. The results show that male listeners preferred a female voice that signals a small body size, with relatively high pitch, wide formant dispersion and breathy voice, while female listeners preferred a male voice that signals a large body size with low pitch and narrow formant dispersion. Interestingly, however, male vocal attractiveness was also enhanced by breathiness, which presumably softened the aggressiveness associated with a large body size. These results, together with the additional finding that the same vocal dimensions also affect emotion judgment, indicate that humans still employ a vocal interaction strategy used in animal calls despite the development of complex language.
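
    The body-size cue manipulated in this study, formant dispersion, is commonly computed as the average spacing between adjacent formant frequencies. A minimal sketch, with purely illustrative formant values:

```python
# Illustrative only: hypothetical formant values, Fitch-style dispersion.
import numpy as np

def formant_dispersion(formants_hz):
    """Mean difference between successive formant frequencies F1..Fn."""
    f = np.sort(np.asarray(formants_hz, dtype=float))
    return np.mean(np.diff(f))

# Wider dispersion suggests a shorter vocal tract (smaller projected body size),
# narrower dispersion a longer one.
small_body_voice = [850, 2100, 3300, 4500]
large_body_voice = [600, 1500, 2400, 3300]
print(formant_dispersion(small_body_voice), formant_dispersion(large_body_voice))
```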

  5. Digital signal processing algorithms for automatic voice recognition

    NASA Technical Reports Server (NTRS)

    Botros, Nazeih M.

    1987-01-01

    Current digital signal analysis algorithms implemented in automatic voice recognition are investigated. Automatic voice recognition means the capability of a computer to recognize and interact with verbal commands. The focus is on the digital signal, rather than the linguistic, analysis of the speech signal. Several digital signal processing algorithms are available for voice recognition. Some of these algorithms are: Linear Predictive Coding (LPC), Short-time Fourier Analysis, and Cepstrum Analysis. Among these algorithms, the LPC is the most widely used. This algorithm has a short execution time and does not require large memory storage. However, it has several limitations due to the assumptions used to develop it. The other two algorithms are frequency-domain algorithms with fewer assumptions, but they have not been widely implemented or investigated. However, with recent advances in digital technology, namely signal processors, these two frequency-domain algorithms may be investigated in order to implement them in voice recognition. This research is concerned with real-time, microprocessor-based recognition algorithms.
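
    For illustration only, a minimal sketch of two of the techniques named above: LPC coefficients obtained with the autocorrelation (Levinson-Durbin) method and a real cepstrum computed via the FFT. The frame content and model order are assumptions, and practical recognizers add windowing, pre-emphasis, and other steps omitted here.

```python
# Illustrative only: bare-bones LPC and real cepstrum on a synthetic frame.
import numpy as np

def lpc_coefficients(frame, order):
    """Levinson-Durbin recursion on the frame's autocorrelation sequence."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = np.zeros(order + 1)
    a[0] = 1.0
    error = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / error                    # reflection coefficient
        new_a = a.copy()
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a = new_a
        error *= (1.0 - k * k)
    return a

def real_cepstrum(frame):
    """Inverse FFT of the log-magnitude spectrum (short-time Fourier based)."""
    spectrum = np.abs(np.fft.rfft(frame)) + 1e-12
    return np.fft.irfft(np.log(spectrum))

fs = 8000
t = np.arange(256) / fs
frame = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)
print(lpc_coefficients(frame, order=10)[:4])
print(real_cepstrum(frame)[:4])
```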

  6. Dissociation and psychosis in dissociative identity disorder and schizophrenia.

    PubMed

    Laddis, Andreas; Dell, Paul F

    2012-01-01

    Dissociative symptoms, first-rank symptoms of schizophrenia, and delusions were assessed in 40 schizophrenia patients and 40 dissociative identity disorder (DID) patients with the Multidimensional Inventory of Dissociation (MID). Schizophrenia patients were diagnosed with the Structured Clinical Interview for the DSM-IV Axis I Disorders; DID patients were diagnosed with the Structured Clinical Interview for DSM-IV Dissociative Disorders-Revised. DID patients obtained significantly (a) higher dissociation scores; (b) higher passive-influence scores (first-rank symptoms); and (c) higher scores on scales that measure child voices, angry voices, persecutory voices, voices arguing, and voices commenting. Schizophrenia patients obtained significantly higher delusion scores than did DID patients. Oddly, the dissociation scores of schizophrenia patients were unrelated to their reports of childhood maltreatment. Multiple regression analyses indicated that 81% of the variance in DID patients' dissociation scores was predicted by the MID's Ego-Alien Experiences Scale, whereas 92% of the variance in schizophrenia patients' dissociation scores was predicted by the MID's Voices Scale. We propose that schizophrenia patients' responses to the MID do not index the same pathology as do the responses of DID patients. We argue that neither phenomenological definitions of dissociation nor the current generation of dissociation instruments (which are uniformly phenomenological in nature) can distinguish between the dissociative phenomena of DID and what we suspect are just the dissociation-like phenomena of schizophrenia.

  7. Establishing the "Fit" between the Patient and the Therapy: The Role of Patient Gender in Selecting Psychological Therapy for Distressing Voices.

    PubMed

    Hayward, Mark; Slater, Luke; Berry, Katherine; Perona-Garcelán, Salvador

    2016-01-01

    The experience of hearing distressing voices has recently attracted much attention in the literature on psychological therapies. A new "wave" of therapies is considering voice hearing experiences within a relational framework. However, such therapies may have limited impact if they do not precisely target key psychological variables within the voice hearing experience and/or ensure there is a "fit" between the profile of the hearer and the therapy (the so-called "What works for whom" debate). Gender is one aspect of both the voice and the hearer (and the interaction between the two) that may be influential when selecting an appropriate therapy, and is an issue that has thus far received little attention within the literature. The existing literature suggests that some differences in voice hearing experience are evident between the genders. Furthermore, studies exploring interpersonal relating in men and women more generally suggest differences within intimate relationships in terms of distancing and emotionality. The current study utilized data from four published studies to explore the extent to which these gender differences in social relating may extend to relating within the voice hearing experience. The findings suggest a role for gender as a variable that can be considered when identifying an appropriate psychological therapy for a given hearer.

  8. "You Know Doctor, I Need to Tell You Something": A Discourse Analytical Study of Patients' Voices in the Medical Consultation

    ERIC Educational Resources Information Center

    Cordella, Marisa

    2004-01-01

    Most studies in the area of doctor-patient communication focus on the talk that doctors perform during the consultation, leaving under-researched the discourse developed by patients. This article deconstructs and identifies the functions and forms of the voices (i.e. specific forms of talk) that Chilean patients employ in their interactions with…

  9. Predicting compliance with command hallucinations: anger, impulsivity and appraisals of voices' power and intent.

    PubMed

    Bucci, Sandra; Birchwood, Max; Twist, Laura; Tarrier, Nicholas; Emsley, Richard; Haddock, Gillian

    2013-06-01

    Command hallucinations are experienced by 33-74% of people who experience voices, with varying levels of compliance reported. Compliance with command hallucinations can result in acts of aggression, violence, suicide and self-harm; the typical response, however, is non-compliance or appeasement. Two factors associated with such dangerous behaviours are anger and impulsivity; however, few studies have examined their relationship with compliance to command hallucinations. The current study aimed to examine the roles of anger and impulsivity on compliance with command hallucinations in people diagnosed with a psychotic disorder. The study was a cross-sectional design and included individuals who reported auditory hallucinations in the past month. Subjects completed a variety of self-report questionnaire measures. Thirty-two people experiencing command hallucinations, from both in-patient and community settings, were included. The tendency to appraise the voice as powerful, to be impulsive, to experience anger and to regulate anger were significantly associated with compliance with command hallucinations to do harm. Two factors emerged as significant independent predictors of compliance with command hallucinations: omnipotence and impulsivity. An interaction between omnipotence and compliance with commands, via a link with impulsivity, is considered and important clinical factors in the assessment of risk when working with clients experiencing command hallucinations are recommended. The data are highly suggestive and warrant further investigation with a larger sample. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Infant face interest is associated with voice information and maternal psychological health.

    PubMed

    Taylor, Gemma; Slade, Pauline; Herbert, Jane S

    2014-11-01

    Early infant interest in their mother's face is driven by an experience-based face processing system, and is associated with maternal psychological health, even within a non-clinical community sample. The present study examined the role of the voice in eliciting infants' interest in mother and stranger faces and in the association between infant face interest and maternal psychological health. Infants aged 3.5 months were shown photographs of their mother's and a stranger's face paired with an audio recording of their mother's and a stranger's voice that was either matched (e.g., mother's face and voice) or mismatched (e.g., mother's face and stranger's voice). Infants spent more time attending to the stranger's matched face and voice than the mother's matched face and voice and the mismatched faces and voices. Thus, infants demonstrated an earlier preference for a stranger's face when given voice information than when the face is presented alone. In the present sample, maternal psychological health varied, with 56.7% of mothers reporting mild mood symptoms (depression, anxiety or stress response to childbirth). Infants of mothers with mild maternal mood symptoms looked longer at the faces and voices compared to infants of mothers who did not report mild maternal mood symptoms. In sum, infants' experience-based face processing system is sensitive to their mothers' maternal psychological health and the multimodal nature of faces. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Crossmodal plasticity in the fusiform gyrus of late blind individuals during voice recognition.

    PubMed

    Hölig, Cordula; Föcker, Julia; Best, Anna; Röder, Brigitte; Büchel, Christian

    2014-12-01

    Blind individuals are trained in identifying other people through voices. In congenitally blind adults the anterior fusiform gyrus has been shown to be active during voice recognition. Such crossmodal changes have been associated with a superiority of blind adults in voice perception. The key question of the present functional magnetic resonance imaging (fMRI) study was whether visual deprivation that occurs in adulthood is followed by similar adaptive changes of the voice identification system. Late blind individuals and matched sighted participants were tested in a priming paradigm, in which two voice stimuli were subsequently presented. The prime (S1) and the target (S2) were either from the same speaker (person-congruent voices) or from two different speakers (person-incongruent voices). Participants had to classify the S2 as either coming from an old or a young person. Only in late blind but not in matched sighted controls, the activation in the anterior fusiform gyrus was modulated by voice identity: late blind volunteers showed an increase of the BOLD signal in response to person-incongruent compared with person-congruent trials. These results suggest that the fusiform gyrus adapts to input of a new modality even in the mature brain and thus demonstrate an adult type of crossmodal plasticity. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Brain systems mediating voice identity processing in blind humans.

    PubMed

    Hölig, Cordula; Föcker, Julia; Best, Anna; Röder, Brigitte; Büchel, Christian

    2014-09-01

    Blind people rely more on vocal cues when they recognize a person's identity than sighted people. Indeed, a number of studies have reported better voice recognition skills in blind than in sighted adults. The present functional magnetic resonance imaging study investigated changes in the functional organization of neural systems involved in voice identity processing following congenital blindness. A group of congenitally blind individuals and matched sighted control participants were tested in a priming paradigm, in which two voice stimuli (S1, S2) were subsequently presented. The prime (S1) and the target (S2) were either from the same speaker (person-congruent voices) or from two different speakers (person-incongruent voices). Participants had to classify the S2 as either an old or a young person. Person-incongruent voices (S2) compared with person-congruent voices elicited an increased activation in the right anterior fusiform gyrus in congenitally blind individuals but not in matched sighted control participants. In contrast, only matched sighted controls showed a higher activation in response to person-incongruent compared with person-congruent voices (S2) in the right posterior superior temporal sulcus. These results provide evidence for crossmodal plastic changes of the person identification system in the brain after visual deprivation. Copyright © 2014 Wiley Periodicals, Inc.

  13. Voices for Diversity.

    ERIC Educational Resources Information Center

    Future Teacher, 1995

    1995-01-01

    Prominent Americans were asked to reflect on the diversity challenge facing America's teacher workforce. The following leaders from several fields voiced their support of teachers and their belief that America needs more diverse and culturally responsive teachers: (1) Mary Hatwood Futrell, President of Education International; (2) Carol Moseley-Braun,…

  14. A unified coding strategy for processing faces and voices

    PubMed Central

    Yovel, Galit; Belin, Pascal

    2013-01-01

    Both faces and voices are rich in socially-relevant information, which humans are remarkably adept at extracting, including a person's identity, age, gender, affective state, personality, etc. Here, we review accumulating evidence from behavioral, neuropsychological, electrophysiological, and neuroimaging studies which suggest that the cognitive and neural processing mechanisms engaged by perceiving faces or voices are highly similar, despite the very different nature of their sensory input. The similarity between the two mechanisms likely facilitates the multi-modal integration of facial and vocal information during everyday social interactions. These findings emphasize a parsimonious principle of cerebral organization, where similar computational problems in different modalities are solved using similar solutions. PMID:23664703

  15. Investigation of air transportation technology at Princeton University, 1985

    NASA Technical Reports Server (NTRS)

    Stengel, Robert F.

    1987-01-01

    The program proceeded along five avenues during 1985. Guidance and control strategies for penetration of microbursts and wind shear, application of artificial intelligence in flight control and air traffic control systems, the use of voice recognition in the cockpit, the effects of control saturation on closed-loop stability and response of open-loop unstable aircraft, and computer aided control system design are among the topics briefly considered. Areas of investigation relate to guidance and control of commercial transports as well as general aviation aircraft. Interaction between the flight crew and automatic systems is the subject of principal concern.

  16. Emotional self-other voice processing in schizophrenia and its relationship with hallucinations: ERP evidence.

    PubMed

    Pinheiro, Ana P; Rezaii, Neguine; Rauber, Andréia; Nestor, Paul G; Spencer, Kevin M; Niznikiewicz, Margaret

    2017-09-01

    Abnormalities in self-other voice processing have been observed in schizophrenia, and may underlie the experience of hallucinations. More recent studies demonstrated that these impairments are enhanced for speech stimuli with negative content. Nonetheless, few studies probed the temporal dynamics of self versus nonself speech processing in schizophrenia and, particularly, the impact of semantic valence on self-other voice discrimination. In the current study, we examined these questions, and additionally probed whether impairments in these processes are associated with the experience of hallucinations. Fifteen schizophrenia patients and 16 healthy controls listened to 420 prerecorded adjectives differing in voice identity (self-generated [SGS] versus nonself speech [NSS]) and semantic valence (neutral, positive, and negative), while EEG data were recorded. The N1, P2, and late positive potential (LPP) ERP components were analyzed. ERP results revealed group differences in the interaction between voice identity and valence in the P2 and LPP components. Specifically, LPP amplitude was reduced in patients compared with healthy subjects for SGS and NSS with negative content. Further, auditory hallucinations severity was significantly predicted by LPP amplitude: the higher the SAPS "voices conversing" score, the larger the difference in LPP amplitude between negative and positive NSS. The absence of group differences in the N1 suggests that self-other voice processing abnormalities in schizophrenia are not primarily driven by disrupted sensory processing of voice acoustic information. The association between LPP amplitude and hallucination severity suggests that auditory hallucinations are associated with enhanced sustained attention to negative cues conveyed by a nonself voice. © 2017 Society for Psychophysiological Research.

  17. Voice - How humans communicate?

    PubMed

    Tiwari, Manjul; Tiwari, Maneesha

    2012-01-01

    Voices are important things for humans. They are the medium through which we do a lot of communicating with the outside world: our ideas, of course, and also our emotions and our personality. The voice is the very emblem of the speaker, indelibly woven into the fabric of speech. In this sense, each of our utterances of spoken language carries not only its own message but also, through accent, tone of voice, and habitual voice quality, an audible declaration of our membership of particular social and regional groups, of our individual physical and psychological identity, and of our momentary mood. Voices are also one of the media through which we (successfully, most of the time) recognize other humans who are important to us: members of our family, media personalities, our friends, and enemies. Although evidence from DNA analysis is potentially vastly more eloquent in its power than evidence from voices, DNA cannot talk. It cannot be recorded planning, carrying out or confessing to a crime. It cannot be so apparently directly incriminating. As will quickly become evident, voices are extremely complex things, and some of the inherent limitations of the forensic-phonetic method are in part a consequence of the interaction between their complexity and the real world in which they are used. It is one of the aims of this article to explain how this comes about. This subject has unsolved questions, but there is no direct way to present the information that is necessary to understand how voices can be related, or not, to their owners.

  18. Sounding the ‘Citizen–Patient’: The Politics of Voice at the Hospice des Quinze-Vingts in Post-Revolutionary Paris

    PubMed Central

    Sykes, Ingrid

    2011-01-01

    This essay explores new models of the citizen–patient by attending to the post-Revolutionary blind ‘voice’. Voice, in both a literal and figurative sense, was central to the way in which members of the Hospice des Quinze-Vingts, an institution for the blind and partially sighted, interacted with those in the community. Musical voices had been used by members to collect alms and to project the particular spiritual principle of their institution since its foundation in the thirteenth century. At the time of the Revolution, the Quinze-Vingts voice was understood by some political authorities as an exemplary call of humanity. Yet many others perceived it as deeply threatening. After 1800, productive dialogue between those in political control and Quinze-Vingts blind members broke down. Authorities attempted to silence the voice of members through the control of blind musicians and institutional management. The Quinze-Vingts blind continued to reassert their voices until around 1850, providing a powerful form of resistance to political control. The blind ‘voice’ ultimately recognised the right of the citizen–patient to dialogue with their political carers. PMID:22025797

  19. Accuracy and Speed of Response to Different Voice Types in a Cockpit Voice Warning System

    DTIC Science & Technology

    1983-09-01

    [Only fragmentary excerpt text is available for this record.] ... military aircraft. Different levels of engine background noise, signal-to-noise ratio of the warning message, and precursor delivery formats were used. ... flight deck signals, the Society of Automotive Engineers stated that a unique, attention-getting sound (such as a chime, etc.) together with voice ... "aircraft wherein there is no flight engineer position" (cited in Thorburn, 1971, p. 3). The AFIAS letter cited several incidents in which the VWS had ...

  20. Voice problems of group fitness instructors: diagnosis, treatment, perceived and experienced attitudes and expectations of the industry.

    PubMed

    Rumbach, Anna F

    2013-11-01

    To determine the anatomical and physiological nature of voice problems and their treatment in those group fitness instructors (GFIs) who have sought a medical diagnosis; the impact of voice disorders on quality of life and their contribution to activity limitations and participation restrictions; and the perceived attitudes and level of support from the industry at large in response to instructor's voice disorders and need for treatment. Prospective self-completion questionnaire design. Thirty-eight individuals (3 males and 35 females) currently active in the Australian fitness industry who had been diagnosed with a voice disorder completed an online self-completion questionnaire administered via SurveyMonkey. Laryngeal pathology included vocal fold nodules (N = 24), vocal fold cysts (N = 2), vocal fold hemorrhage (N = 1), and recurrent chronic laryngitis (N = 3). Eight individuals reported vocal strain and muscle tension dysphonia without concurrent vocal fold pathology. Treatment methods were variable, with 73.68% (N = 28) receiving voice therapy alone, 7.89% (N = 3) having voice therapy in combination with surgery, and 10.53% (N = 4) having voice therapy in conjunction with medication. Three individuals (7.89%) received no treatment for their voice disorder. During treatment, 82% of the cohort altered their teaching practices. Half of the cohort reported that their voice problems led to social withdrawal, decreased job satisfaction, and emotional distress. Greater than 65% also reported being dissatisfied with the level of industry and coworker support during the period of voice recovery. This study identifies that GFIs are susceptible to a number of voice disorders that impact their social and professional lives, and there is a need for more proactive training and advice on voice care for instructors, as well as those in management positions within the industry to address mixed approaches and opinions regarding the importance of voice care. Copyright © 2013 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  1. Using voice input and audio feedback to enhance the reality of a virtual experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miner, N.E.

    1994-04-01

    Virtual Reality (VR) is a rapidly emerging technology which allows participants to experience a virtual environment through stimulation of the participant's senses. Intuitive and natural interactions with the virtual world help to create a realistic experience. Typically, a participant is immersed in a virtual environment through the use of a 3-D viewer. Realistic, computer-generated environment models and accurate tracking of a participant's view are important factors for adding realism to a virtual experience. Stimulating a participant's sense of sound and providing a natural form of communication for interacting with the virtual world are equally important. This paper discusses the advantages and importance of incorporating voice recognition and audio feedback capabilities into a virtual world experience. Various approaches and levels of complexity are discussed. Examples of the use of voice and sound are presented through the description of a research application developed in the VR laboratory at Sandia National Laboratories.

  2. Prototype app for voice therapy: a peer review.

    PubMed

    Lavaissiéri, Paula; Melo, Paulo Eduardo Damasceno

    2017-03-09

    Voice speech therapy promotes changes in patients' voice-related habits and rehabilitation. Speech-language therapists use a host of materials ranging from pictures to electronic resources and computer tools as aids in this process. Mobile technology is attractive, interactive and a nearly constant feature in the daily routine of a large part of the population and has a growing application in healthcare. The aims were to develop a prototype application for voice therapy, to submit it to peer assessment, and to improve the initial prototype based on these assessments. A prototype of the Q-Voz application was developed based on Apple's Human Interface Guidelines. The prototype was analyzed by seven speech therapists who work in the voice area. Improvements to the product were made based on these assessments. All features of the application were considered satisfactory by most evaluators. All evaluators found the application very useful; evaluators reported that patients would find it easier to make changes in voice behavior with the application than without it; the evaluators stated they would use this application with their patients with dysphonia and in the process of rehabilitation and that the application offers useful tools for voice self-management. Based on the suggestions provided, six improvements were made to the prototype. The prototype Q-Voz Application was developed and evaluated by seven judges and subsequently improved. All evaluators stated they would use the application with their patients undergoing rehabilitation, indicating that the Q-Voz Application for mobile devices can be considered an auxiliary tool for voice speech therapy.

  3. Multi-modal assessment of on-road demand of voice and manual phone calling and voice navigation entry across two embedded vehicle systems.

    PubMed

    Mehler, Bruce; Kidd, David; Reimer, Bryan; Reagan, Ian; Dobres, Jonathan; McCartt, Anne

    2016-03-01

    One purpose of integrating voice interfaces into embedded vehicle systems is to reduce drivers' visual and manual distractions with 'infotainment' technologies. However, there is scant research on actual benefits in production vehicles or how different interface designs affect attentional demands. Driving performance, visual engagement, and indices of workload (heart rate, skin conductance, subjective ratings) were assessed in 80 drivers randomly assigned to drive a 2013 Chevrolet Equinox or Volvo XC60. The Chevrolet MyLink system allowed completing tasks with one voice command, while the Volvo Sensus required multiple commands to navigate the menu structure. When calling a phone contact, both voice systems reduced visual demand relative to the visual-manual interfaces, with reductions for drivers in the Equinox being greater. The Equinox 'one-shot' voice command showed advantages during contact calling but had significantly higher error rates than Sensus during destination address entry. For both secondary tasks, neither voice interface entirely eliminated visual demand. Practitioner Summary: The findings reinforce the observation that most, if not all, automotive auditory-vocal interfaces are multi-modal interfaces in which the full range of potential demands (auditory, vocal, visual, manipulative, cognitive, tactile, etc.) need to be considered in developing optimal implementations and evaluating drivers' interaction with the systems. Social Media: In-vehicle voice-interfaces can reduce visual demand but do not eliminate it and all types of demand need to be taken into account in a comprehensive evaluation.

  4. Multi-modal assessment of on-road demand of voice and manual phone calling and voice navigation entry across two embedded vehicle systems

    PubMed Central

    Mehler, Bruce; Kidd, David; Reimer, Bryan; Reagan, Ian; Dobres, Jonathan; McCartt, Anne

    2016-01-01

    One purpose of integrating voice interfaces into embedded vehicle systems is to reduce drivers’ visual and manual distractions with ‘infotainment’ technologies. However, there is scant research on actual benefits in production vehicles or how different interface designs affect attentional demands. Driving performance, visual engagement, and indices of workload (heart rate, skin conductance, subjective ratings) were assessed in 80 drivers randomly assigned to drive a 2013 Chevrolet Equinox or Volvo XC60. The Chevrolet MyLink system allowed completing tasks with one voice command, while the Volvo Sensus required multiple commands to navigate the menu structure. When calling a phone contact, both voice systems reduced visual demand relative to the visual–manual interfaces, with reductions for drivers in the Equinox being greater. The Equinox ‘one-shot’ voice command showed advantages during contact calling but had significantly higher error rates than Sensus during destination address entry. For both secondary tasks, neither voice interface entirely eliminated visual demand. Practitioner Summary: The findings reinforce the observation that most, if not all, automotive auditory–vocal interfaces are multi-modal interfaces in which the full range of potential demands (auditory, vocal, visual, manipulative, cognitive, tactile, etc.) need to be considered in developing optimal implementations and evaluating drivers’ interaction with the systems. Social Media: In-vehicle voice-interfaces can reduce visual demand but do not eliminate it and all types of demand need to be taken into account in a comprehensive evaluation. PMID:26269281

  5. Barriers to disseminating brief CBT for voices from a lived experience and clinician perspective

    PubMed Central

    Hazell, Cassie M.; Strauss, Clara; Cavanagh, Kate

    2017-01-01

    Access to psychological therapies continues to be poor for people experiencing psychosis. To address this problem, researchers are developing brief interventions that address the specific symptoms associated with psychosis, i.e., hearing voices. As part of the development work for a brief Cognitive Behaviour Therapy (CBT) intervention for voices we collected qualitative data from people who hear voices (study 1) and clinicians (study 2) on the potential barriers and facilitators to implementation and engagement. Thematic analysis of the responses from both groups revealed a number of anticipated barriers to implementation and engagement. Both groups believed the presenting problem (voices and psychosis symptoms) may impede engagement. Furthermore clinicians identified a lack of resources to be a barrier to implementation. The only facilitator to engagement was reported by people who hear voices who believed a compassionate, experienced and trustworthy therapist would promote engagement. The results are discussed in relation to how these barriers could be addressed in the context of a brief intervention using CBT techniques. PMID:28575094

  6. An Exploration of the Interaction between Global Education Policy Orthodoxies and National Education Practices in Cambodia, Illuminated through the Voices of Local Teacher Educators

    ERIC Educational Resources Information Center

    Courtney, Jane

    2017-01-01

    This research is based on a multi-disciplinary and multi-levelled analysis of evidence to present the case that education reform needs to be contextualised far more widely than is currently practised. It focuses on the voices of Cambodian local teacher trainers through interviews over a five-year period. Interview data is triangulated against…

  7. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    PubMed

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  8. An investigation of users' attitudes, requirements and willingness to use mobile phone-based interactive voice response systems for seeking healthcare in Ghana: a qualitative study.

    PubMed

    Brinkel, J; Dako-Gyeke, P; Krämer, A; May, J; Fobil, J N

    2017-03-01

    In implementing mobile health interventions, user requirements and willingness to use are among the most crucial concerns for success of the investigation and have only rarely been examined in sub-Saharan Africa. This study aimed to specify the requirements of caregivers of children in order to use a symptom-based interactive voice response (IVR) system for seeking healthcare. This included (i) the investigation of attitudes towards mobile phone use and user experiences and (ii) the assessment of facilitators and challenges to use the IVR system. This is a population-based cross-sectional study. Four qualitative focus group discussions were conducted in peri-urban and rural towns in Shai Osudoku and Ga West district, as well as in Tema- and Accra Metropolitan Assembly. Participants included male and female caregivers of at least one child between 0 and 10 years of age. A qualitative content analysis was conducted for data analysis. Participants showed a positive attitude towards the use of mobile phones for seeking healthcare. While no previous experience in using IVR for health information was reported, the majority of participants stated that it offers a huge advantage for improvement in health performance. Barriers to IVR use included concerns about costs, lack of familiarity with the technology, social barriers such as lack of human interaction, and infrastructural challenges. The establishment of a toll-free number, as well as training prior to IVR system use, was discussed as a recommendation. This study suggests that caregivers in the socio-economic environment of Ghana are interested and willing to use mobile phone-based IVR to receive health information for child healthcare. Important identified users' needs should be considered by health programme implementers and policy makers to help facilitate the development and implementation of IVR systems in the field of seeking healthcare. Copyright © 2016 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  9. Mobility, Aspiration, Voice: A New Structure of Feeling for Student Equity in Higher Education

    ERIC Educational Resources Information Center

    Sellar, Sam; Gale, Trevor

    2011-01-01

    There is a changed "structure of feeling" emerging in higher education systems, particularly in OECD nations, in response to changed social, cultural and economic arrangements. Taking a student equity perspective, the paper names this change in terms of "mobility", "aspiration" and "voice". It argues that…

  10. Response to Reidun Tangen

    ERIC Educational Resources Information Center

    Lewis, Ann

    2008-01-01

    Reidun Tangen begins by reviewing interest in children's "voice" (encompassing the consumer driven, rights based, etc). The main body of her paper examines the philosophical underpinnings of child voice in the research context and, in particular, various interpretations of "the subject" (i.e., the knower) and what it is that is known (i.e., the…

  11. Treatment outcomes for professional voice users.

    PubMed

    Wingate, Judith M; Brown, William S; Shrivastav, Rahul; Davenport, Paul; Sapienza, Christine M

    2007-07-01

    Professional voice users comprise 25% to 35% of the U.S. working population. Their voice problems may interfere with job performance and impact costs for both employers and employees. The purpose of this study was to examine treatment outcomes of two specific rehabilitation programs for a group of professional voice users. Eighteen professional voice users participated in this study; half had complaints of throat pain or vocal fatigue (Dysphonia Group), and half were found to have benign vocal fold lesions (Lesion Group). One group received 5 weeks of expiratory muscle strength training followed by six sessions of traditional voice therapy. Treatment order was reversed for the second group. The study was designed as a repeated measures study with independent variables of treatment order, laryngeal diagnosis (lesion vs non-lesion), gender, and time. Dependent variables included maximum expiratory pressure (MEP), Voice Handicap Index (VHI) score, Vocal Rating Scale (VRS) score, Voice Effort Scale score, phonetogram measures, subglottal pressures, and acoustic and perceptual measures. Results showed significant improvements in MEP, VHI scores, VRS scores, subglottal pressure for loud intensity, phonetogram area, and dynamic range. No significant difference was found between laryngeal diagnosis groups. A significant difference was not observed for treatment order. It was concluded that the combined treatment was responsible for the improvements observed. The results indicate that a combined modality treatment may be successful in the remediation of vocal problems for professional voice users.

  12. Effects on vocal range and voice quality of singing voice training: the classically trained female voice.

    PubMed

    Pabon, Peter; Stallinga, Rob; Södersten, Maria; Ternström, Sten

    2014-01-01

    A longitudinal study was performed on the acoustical effects of singing voice training under a given study program, using the voice range profile (VRP). Pretraining and posttraining recordings were made of students who participated in a 3-year bachelor singing study program. A questionnaire that included questions on optimal range, register use, classification, vocal health and hygiene, mixing technique, and training goals was used to rate and categorize self-assessed voice changes. Based on the responses, a subgroup of 10 classically trained female voices was selected, which was homogeneous enough for effects of training to be identified. The VRP perimeter contour was analyzed for effects of voice training. Also, a mapping within the VRP of voice quality, as expressed by the crest factor, was used to indicate the register boundaries and to monitor the acoustical consequences of the newly learned vocal technique of "mixed voice." VRPs were averaged across subjects. Findings were compared with the self-assessed vocal changes. Pre/post comparison of the average VRPs showed, in the midrange, (1) a decrease in the VRP area that was associated with the loud chest voice, (2) a reduction of the crest factor values, and (3) a reduction of maximum sound pressure level values. The students' self-evaluations of the voice changes appeared in some cases to contradict the VRP findings. VRPs of individual voices were seen to change over the course of a singing education. These changes were manifest also in the average group. High-resolution computerized recording, complemented with an acoustic register marker, allows a meaningful assessment of some effects of training, on an individual basis and for groups that comprise singers of a specific genre. It is argued that this kind of investigation is possible only within a focused training program, given by a faculty who has agreed on the goals. Copyright © 2014 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  13. A qualitative method for analysing multivoicedness

    PubMed Central

    Aveling, Emma-Louise; Gillespie, Alex; Cornish, Flora

    2015-01-01

    ‘Multivoicedness’ and the ‘multivoiced Self’ have become important theoretical concepts guiding research. Drawing on the tradition of dialogism, the Self is conceptualised as being constituted by a multiplicity of dynamic, interacting voices. Despite the growth in literature and empirical research, there remains a paucity of established methodological tools for analysing the multivoiced Self using qualitative data. In this article, we set out a systematic, practical ‘how-to’ guide for analysing multivoicedness. Using theoretically derived tools, our three-step method comprises: identifying the voices of I-positions within the Self’s talk (or text), identifying the voices of ‘inner-Others’, and examining the dialogue and relationships between the different voices. We elaborate each step and illustrate our method using examples from a published paper in which data were analysed using this method. We conclude by offering more general principles for the use of the method and discussing potential applications. PMID:26664292

  14. The acoustic correlates of valence depend on emotion family.

    PubMed

    Belyk, Michel; Brown, Steven

    2014-07-01

    The voice expresses a wide range of emotions through modulations of acoustic parameters such as frequency and amplitude. Although the acoustics of individual emotions are well understood, attempts to describe the acoustic correlates of broad emotional categories such as valence have yielded mixed results. In the present study, we analyzed the acoustics of emotional valence for different families of emotion. We divided emotional vocalizations into "motivational," "moral," and "aesthetic" families as defined by the OCC (Ortony, Clore, and Collins) model of emotion. Subjects viewed emotional scenarios and were cued to vocalize congruent exclamations in response to them, for example, "Yay!" and "Damn!". Positive valence was weakly associated with high-pitched and loud vocalizations. However, valence interacted with emotion family for both pitch and amplitude. A general acoustic code for valence does not hold across families of emotion, whereas family-specific codes provide a more accurate description of vocal emotions. These findings are consolidated into a set of "rules of expression" relating vocal dimensions to emotion dimensions. Copyright © 2014 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  15. Effective technologies for noninvasive remote monitoring in heart failure.

    PubMed

    Conway, Aaron; Inglis, Sally C; Clark, Robyn A

    2014-06-01

    Trials of new technologies to remotely monitor for signs and symptoms of worsening heart failure are continually emerging. The extent to which technological differences impact the effectiveness of noninvasive remote monitoring for heart failure management is unknown. This study examined the effect of specific technology used for noninvasive remote monitoring of people with heart failure on all-cause mortality and heart failure-related hospitalizations. A subanalysis of a large systematic review and meta-analysis was conducted. Studies were stratified according to the specific type of technology used, and separate meta-analyses were performed. Four different types of noninvasive remote monitoring technologies were identified, including structured telephone calls, videophone, interactive voice response devices, and telemonitoring. Only structured telephone calls and telemonitoring were effective in reducing the risk of all-cause mortality (relative risk [RR]=0.87; 95% confidence interval [CI], 0.75-1.01; p=0.06; and RR=0.62; 95% CI, 0.50-0.77; p<0.0001, respectively) and heart failure-related hospitalizations (RR=0.77; 95% CI, 0.68-0.87; p<0.001; and RR=0.75; 95% CI, 0.63-0.91; p=0.003, respectively). More research data are required for videophone and interactive voice response technologies. This subanalysis identified that only two of the four specific technologies used for noninvasive remote monitoring in heart failure improved outcomes. When results of studies that involved these disparate technologies were combined in previous meta-analyses, significant improvements in outcomes were identified. As such, this study has highlighted implications for future meta-analyses of randomized controlled trials focused on evaluating the effectiveness of remote monitoring in heart failure.
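
    As a reading aid for pooled estimates of this kind, the z-statistic and p-value implied by a relative risk and its 95% confidence interval can be recovered on the log scale, since the standard error of log-RR is the CI width in log units divided by 2 x 1.96. The sketch below (Python, standard library only) merely re-derives the figures quoted above and is not part of the original analysis.

        import math

        def rr_ci_to_p(rr, lo, hi):
            """Recover SE, z and a two-sided p from a relative risk and its 95% CI."""
            log_rr = math.log(rr)
            se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # SE of log-RR
            z = log_rr / se
            p = math.erfc(abs(z) / math.sqrt(2))              # two-sided p, normal approximation
            return se, z, p

        # Telemonitoring, all-cause mortality: RR 0.62 (0.50-0.77)  -> p < 0.0001
        print(rr_ci_to_p(0.62, 0.50, 0.77))
        # Structured telephone calls, all-cause mortality: RR 0.87 (0.75-1.01) -> p ~ 0.06
        print(rr_ci_to_p(0.87, 0.75, 1.01))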

  16. Collecting Self-Reported Data on Dating Abuse Perpetration From a Sample of Primarily Black and Hispanic, Urban-Residing, Young Adults: A Comparison of Timeline Followback Interview and Interactive Voice Response Methods.

    PubMed

    Rothman, Emily F; Heeren, Timothy; Winter, Michael; Dorfman, David; Baughman, Allyson; Stuart, Gregory

    2016-12-01

    Dating abuse is a prevalent and consequential public health problem. However, relatively few studies have compared methods of collecting self-report data on dating abuse perpetration. This study compares two data collection methods-(a) the Timeline Followback (TLFB) retrospective reporting method, which makes use of a written calendar to prompt respondents' recall, and (b) an interactive voice response (IVR) system, which is a prospective telephone-based database system that necessitates respondents calling in and entering data using their telephone keypads. We collected 84 days of data on young adult dating abuse perpetration using IVR from a total of 60 respondents. Of these respondents, 41 (68%) completed a TLFB retrospective report pertaining to the same 84-day period after that time period had ended. A greater number of more severe dating abuse perpetration events were reported via the IVR system. Participants who reported any dating abuse perpetration were more likely to report more frequent abuse perpetration via the IVR than the TLFB (i.e., may have minimized the number of times they perpetrated dating abuse on the TLFB). The TLFB method did not result in a tapering off of reported events past the first week as it has in prior studies, but the IVR method did result in a tapering off of reported events after approximately the sixth week. We conclude that using an IVR system for self-reports of dating abuse perpetration may not have substantial advantages over using a TLFB method, but researchers' choice of mode may vary by research question, resources, sample, and setting.

  17. Effects of a walking intervention using mobile technology and interactive voice response on serum adipokines among postmenopausal women at increased breast cancer risk.

    PubMed

    Llanos, Adana A M; Krok, Jessica L; Peng, Juan; Pennell, Michael L; Vitolins, Mara Z; Degraffinreid, Cecilia R; Paskett, Electra D

    2014-04-01

    Practical methods to reduce the risk of obesity-related breast cancer among high-risk subgroups are lacking. Few studies have investigated the effects of exercise on circulating adipokines, which have been shown to be associated with obesity and breast cancer. The aim of this study was to examine the effects of a walking intervention on serum adiponectin, leptin, and the adiponectin-to-leptin ratio (A/L). Seventy-one overweight and obese postmenopausal women at increased risk of developing breast cancer were stratified by BMI (25-30 kg/m(2) or >30 kg/m(2)) and randomized to a 12-week, two-arm walking intervention administered through interactive voice response (IVR) and mobile devices. The intervention arms were IVR + coach and IVR + no-coach condition. Pre-post changes in serum adiponectin, leptin, and the A/L ratio were examined using mixed regression models, with ratio estimates (and 95% confidence intervals [CI]) corresponding to postintervention adipokine concentrations relative to preintervention concentrations. While postintervention effects included statistically significant improvements in anthropometric measures, the observed decreases in adiponectin and leptin (ratio = 0.86, 95% CI 0.74-1.01, and ratio = 0.94, 95% CI 0.87-1.01, respectively) and increase in A/L (ratio = 1.09, 95% CI 0.94-1.26) were not significant. Thus, these findings do not support significant effects of the walking intervention on circulating adipokines among overweight and obese postmenopausal women. Additional studies are essential to determine the most effective and practical lifestyle interventions that can promote beneficial modification of serum adipokine concentrations, which may prove useful for obesity-related breast cancer prevention.
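
    A minimal sketch of how ratio estimates of this kind are commonly obtained: fit a mixed model to the log-transformed adipokine concentration and exponentiate the post-versus-pre coefficient. The data frame, file name and column names below are hypothetical, and the authors' exact model specification is not given in the abstract.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Assumed layout: one row per subject per visit with columns
        #   subject, visit ('pre'/'post'), arm ('coach'/'no_coach'), leptin
        df = pd.read_csv("adipokines.csv")                 # hypothetical file
        df["log_leptin"] = np.log(df["leptin"])
        df["post"] = (df["visit"] == "post").astype(int)   # 0 = pre, 1 = post

        # Random intercept per subject; fixed effects for time point and arm
        fit = smf.mixedlm("log_leptin ~ post + arm", df, groups=df["subject"]).fit()

        ratio = np.exp(fit.params["post"])                 # post/pre concentration ratio
        ci = np.exp(fit.conf_int().loc["post"])            # 95% CI on the ratio scale
        print(ratio, ci.values)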

  18. Effects of a walking intervention using mobile technology and interactive voice response on serum adipokines among postmenopausal women at increased breast cancer risk

    PubMed Central

    Llanos, Adana A.M.; Krok, Jessica L.; Peng, Juan; Pennell, Michael L.; Vitolins, Mara Z.; Degraffinreid, Cecilia R.; Paskett, Electra D.

    2014-01-01

    Practical methods to reduce the risk of obesity-related breast cancer among high-risk subgroups are lacking. Few studies have investigated the effects of exercise on circulating adipokines, which have been shown to be associated with obesity and breast cancer. The aim of this study was to examine the effects of a walking intervention on serum adiponectin, leptin and the adiponectin-to-leptin ratio (A/L). Seventy-one overweight and obese postmenopausal women at increased risk of developing breast cancer were stratified by BMI (25-30 kg/m2 or >30 kg/m2) and randomized to a 12-week, 2-arm walking intervention administered through interactive voice response (IVR) and mobile devices. The intervention arms were: IVR + coach and IVR + no coach condition. Pre-post changes in serum adiponectin, leptin and the A/L ratio were examined using mixed regression models, with ratio estimates (and 95% confidence intervals [CI]) corresponding to post-intervention adipokine concentrations relative to pre-intervention concentrations. While post-intervention effects included statistically significant improvements in anthropometric measures, the observed decreases in adiponectin and leptin (Ratio=0.86, 95% CI 0.74-1.01 and Ratio=0.94, 95% CI 0.87-1.01, respectively) and increase in A/L (Ratio=1.09, 95% CI 0.94-1.26) were not significant. Thus, these findings do not support significant effects of the walking intervention on circulating adipokines among overweight and obese postmenopausal women. Additional studies are essential to determine the most effective and practical lifestyle interventions that can promote beneficial modification of serum adipokine concentrations, which may prove useful for obesity-related breast cancer prevention. PMID:24435584

  19. The development of emotion perception in face and voice during infancy.

    PubMed

    Grossmann, Tobias

    2010-01-01

    Interacting with others by reading their emotional expressions is an essential social skill in humans. How this ability develops during infancy and what brain processes underpin infants' perception of emotion in different modalities are the questions dealt with in this paper. Literature review. The first part provides a systematic review of behavioral findings on infants' developing emotion-reading abilities. The second part presents a set of new electrophysiological studies that provide insights into the brain processes underlying infants' developing abilities. Throughout, evidence from unimodal (face or voice) and multimodal (face and voice) processing of emotion is considered. The implications of the reviewed findings for our understanding of developmental models of emotion processing are discussed. The reviewed infant data suggest that (a) early in development, emotion enhances the sensory processing of faces and voices, (b) infants' ability to allocate increased attentional resources to negative emotional information develops earlier in the vocal domain than in the facial domain, and (c) at least by the age of 7 months, infants reliably match and recognize emotional information across face and voice.

  20. Voice tracking and spoken word recognition in the presence of other voices

    NASA Astrophysics Data System (ADS)

    Litong-Palima, Marisciel; Violanda, Renante; Saloma, Caesar

    2004-12-01

    We study the human hearing process by modeling the hair cell as a thresholded Hopf bifurcator and compare our calculations with experimental results involving human subjects in two different multi-source listening tasks of voice tracking and spoken-word recognition. In the model, we observed noise suppression by destructive interference between noise sources, which weakens the effective noise strength acting on the hair cell. Different success rate characteristics were observed for the two tasks. Hair cell performance at low threshold levels agrees well with results from voice-tracking experiments, while those of word-recognition experiments are consistent with a linear model of the hearing process. The ability of humans to track a target voice is robust against cross-talk interference, unlike word-recognition performance, which deteriorates quickly with the number of uncorrelated noise sources in the environment; this decline is a response behavior associated with linear systems.
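
    The abstract does not give the model equations, so the following is only a generic sketch: a Hopf normal-form oscillator with sinusoidal forcing and additive noise, which is the standard starting point for hair-cell amplification models, with a simple output threshold added. All parameter values are arbitrary.

        import numpy as np

        def hopf_hair_cell(mu=-0.05, w0=2*np.pi*1000, f=0.02, w=2*np.pi*1000,
                           noise=0.005, thresh=0.1, dt=1e-6, T=0.05, seed=0):
            """Euler-Maruyama integration of dz/dt = (mu + i*w0) z - |z|^2 z + F e^{iwt} + noise."""
            rng = np.random.default_rng(seed)
            n = int(T / dt)
            z = 0j
            out = np.empty(n)
            for k in range(n):
                t = k * dt
                drive = f * np.exp(1j * w * t)
                xi = noise * (rng.standard_normal() + 1j * rng.standard_normal())
                z += dt * ((mu + 1j * w0) * z - abs(z) ** 2 * z + drive) + np.sqrt(dt) * xi
                out[k] = max(z.real - thresh, 0.0)    # thresholded real-part response
            return out

        response = hopf_hair_cell()
        print(response.max())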

  1. Giving Voice to Neurologically Diverse High School Students: A Case Study Exploration of Interactions, Relationships, and Realizations through a Collaborative Drama/Life Skills Performance

    ERIC Educational Resources Information Center

    Hare, Jill L.

    2013-01-01

    This is a case study about giving voice to a neurologically diverse (ND) community of high school students in a life skills classroom. It is about their lived experiences while involved in a collaborative drama production with a regular education drama class. The research of this study was driven by the assumptions, beliefs, and philosophy based…

  2. Evaluation of mode equivalence of the MSKCC Bowel Function Instrument, LASA Quality of Life, and Subjective Significance Questionnaire items administered by Web, interactive voice response system (IVRS), and paper.

    PubMed

    Bennett, Antonia V; Keenoy, Kathleen; Shouery, Marwan; Basch, Ethan; Temple, Larissa K

    2016-05-01

    To assess the equivalence of patient-reported outcome (PRO) survey responses across Web, interactive voice response system (IVRS), and paper modes of administration. Postoperative colorectal cancer patients with home Web/e-mail and phone were randomly assigned to one of the eight study groups: Groups 1-6 completed the survey via Web, IVRS, and paper, in one of the six possible orders; Groups 7-8 completed the survey twice, either by Web or by IVRS. The 20-item survey, including the MSKCC Bowel Function Instrument (BFI), the LASA Quality of Life (QOL) scale, and the Subjective Significance Questionnaire (SSQ) adapted to bowel function, was completed from home on consecutive days. Mode equivalence was assessed by comparison of mean scores across modes and intraclass correlation coefficients (ICCs) and was compared to the test-retest reliability of Web and IVRS. Of 170 patients, 157 completed at least one survey and were included in analysis. Patients had mean age 56 (SD = 11), 53% were male, 81% white, 53% colon, and 47% rectal cancer; 78% completed all assigned surveys. Mean scores for BFI total score, BFI subscale scores, LASA QOL, and adapted SSQ varied by mode by less than one-third of a score point. ICCs across mode were: BFI total score (Web-paper = 0.96, Web-IVRS = 0.97, paper-IVRS = 0.97); BFI subscales (range = 0.88-0.98); LASA QOL (Web-paper = 0.98, Web-IVRS = 0.78, paper-IVRS = 0.80); and SSQ (Web-paper = 0.92, Web-IVRS = 0.86, paper-IVRS = 0.79). Mode equivalence was demonstrated for the BFI total score, BFI subscales, LASA QOL, and adapted SSQ, supporting the use of multiple modes of PRO data capture in clinical trials.
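
    A minimal sketch of one common ICC variant, the two-way consistency single-measures form often written ICC(3,1), for paired scores from two modes. The abstract does not state which ICC form was used, so this is illustrative only; the scores below are made up.

        import numpy as np

        def icc_consistency(scores):
            """ICC(3,1) for an (n_subjects x k_modes) array of scores."""
            x = np.asarray(scores, dtype=float)
            n, k = x.shape
            grand = x.mean()
            ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()
            ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()
            ss_total = ((x - grand) ** 2).sum()
            ms_rows = ss_rows / (n - 1)
            ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
            return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

        # Hypothetical BFI total scores for five patients, Web vs. IVRS
        web  = [62, 55, 71, 48, 66]
        ivrs = [60, 57, 70, 50, 65]
        print(icc_consistency(np.column_stack([web, ivrs])))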

  3. Unilateral versus bilateral thyroarytenoid Botulinum toxin injections in adductor spasmodic dysphonia: a prospective study

    PubMed Central

    Upile, Tahwinder; Elmiyeh, Behrad; Jerjes, Waseem; Prasad, Vyas; Kafas, Panagiotis; Abiola, Jesuloba; Youl, Bryan; Epstein, Ruth; Hopper, Colin; Sudhoff, Holger; Rubin, John

    2009-01-01

    Objectives In this preliminary prospective study, we compared unilateral and bilateral thyroarytenoid muscle injections of Botulinum toxin (Dysport) in 31 patients with adductor spasmodic dysphonia, who had undergone more than 5 consecutive Dysport injections (either unilateral or bilateral) and had completed 5 concomitant self-rated efficacy and complication scores questionnaires related to the previous injections. We also developed a Neurophysiological Scoring (NPS) system which has utility in the treatment administration. Method and materials Data were gathered prospectively on voice improvement (self-rated 6 point scale), length of response and duration of complications (breathiness, cough, dysphagia and total voice loss). Injections were performed under electromyography (EMG) guidance. NPS scale was used to describe the EMG response. Dose and unilateral/bilateral injections were determined by clinical judgment based on previous response. Time intervals between injections were patient driven. Results Low dose unilateral Dysport injection was associated with no significant difference in the patient's outcome in terms of duration of action, voice score (VS) and complication rate when compared to bilateral injections. Unilateral injections were not associated with any post treatment total voice loss unlike the bilateral injections. Conclusion Unilateral low dose Dysport injections are recommended in the treatment of adductor spasmodic dysphonia. PMID:19852852

  4. The effects of voice and manual control mode on dual task performance

    NASA Technical Reports Server (NTRS)

    Wickens, C. D.; Zenyuh, J.; Culp, V.; Marshak, W.

    1986-01-01

    Two fundamental principles of human performance, compatibility and resource competition, are combined with two structural dichotomies in the human information processing system, manual versus voice output, and left versus right cerebral hemisphere, in order to predict the optimum combination of voice and manual control with either hand, for time-sharing performance of a discrete and a continuous task. Eight right-handed male subjects performed a discrete first-order tracking task, time-shared with an auditorily presented Sternberg Memory Search Task. Each task could be controlled by voice, or by the left or right hand, in all possible combinations except for a dual voice mode. When performance was analyzed in terms of a dual-task decrement from single-task control conditions, the following variables influenced time-sharing efficiency, in diminishing order of magnitude: (1) the modality of control (discrete manual control of tracking was superior to discrete voice control of tracking, and the converse was true for the memory search task); (2) response competition (performance was degraded when both tasks were responded to manually); (3) hemispheric competition (performance degraded whenever both tasks were controlled by the left hemisphere, i.e., voice or right-handed control). The results confirm the value of predictive models in voice control implementation.

  5. The Acoustic Correlates of Breathy Voice: A Study of Source-Vowel Interaction.

    NASA Astrophysics Data System (ADS)

    Lin, Yeong-Fen Emily

    This thesis is the result of an investigation of the source-vowel interaction from the point of view of perception. Major objectives include the identification of the acoustic correlates of breathy voice and the disclosure of the interdependent relationship between the perception of vowel identity and breathiness. Two experiments were conducted to achieve these objectives. In the first experiment, voice samples from one control group and seven patient groups were compared. The control group consisted of five female and five male adults. The ten normals were recruited to perform a sustained vowel phonation task with constant pitch and loudness. The voice samples of seventy patients were retrieved from a hospital data base, with vowels extracted from sentences repeated by patients at their habitual pitch and loudness. The seven patient groups were divided, based on a unique combination of patients' measures on mean flow rate and glottal resistance. Eighteen acoustic variables were treated with a three-way (Gender x Group x Vowel) ANOVA. Parameters showing a significant female-male difference as well as group differences, especially those between the presumed breathy group and the other groups, were identified as relevant to the distinction of breathy voice. As a result, F1-F3 amplitude difference and slope were found to be most effective in distinguishing breathy voice. Other acoustic correlates of breathy voice included F1 bandwidth, RMS-H1 amplitude difference, and F1-F2 amplitude difference and slope. In the second experiment, a formant synthesizer was used to generate vowel stimuli with varying spectral tilt and F1 bandwidth. Thirteen native American English speakers made dissimilarity judgements on paired stimuli in terms of vowel identity and breathiness. Listeners' perceptual vowel spaces were found to be affected by changes in the acoustic correlates of breathy voice. The threshold of detecting a change of vocal quality in the breathiness domain was also found to be vowel-dependent.
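
    To make the spectral measures concrete, the rough sketch below reads harmonic and formant-region amplitudes off an FFT of a sustained-vowel frame (e.g., H1 minus the strongest harmonic near F1 or F3). Real breathiness studies use pitch-synchronous analysis and formant tracking; the signal, sampling rate and formant frequencies here are placeholders.

        import numpy as np

        def peak_db(signal, sr, freq, halfwidth=120.0):
            """dB magnitude of the strongest FFT bin within +/- halfwidth Hz of freq."""
            spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
            freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
            band = (freqs >= freq - halfwidth) & (freqs <= freq + halfwidth)
            return 20 * np.log10(spec[band].max() + 1e-12)

        sr, f0 = 16000, 200.0                         # hypothetical sampling rate and F0
        t = np.arange(0, 0.1, 1.0 / sr)
        # toy "vowel": harmonics with 1/k amplitudes plus a little noise
        frame = sum((1.0 / k) * np.sin(2 * np.pi * k * f0 * t) for k in range(1, 16))
        frame += 0.01 * np.random.default_rng(0).standard_normal(len(t))

        h1 = peak_db(frame, sr, f0)
        a1 = peak_db(frame, sr, 700.0)                # strongest harmonic near a nominal F1
        a3 = peak_db(frame, sr, 2800.0)               # strongest harmonic near a nominal F3
        print(h1 - a1, h1 - a3)                       # larger differences ~ steeper spectral tilt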

  6. Voice Biometrics as a Way to Self-service Password Reset

    NASA Astrophysics Data System (ADS)

    Hohgräfe, Bernd; Jacobi, Sebastian

    Password resets are time consuming. Especially when urgent jobs need to be done, it is cumbersome to inform the user helpdesk, identify oneself and then wait for a response. It is easy to enter a wrong password multiple times, which leads to the blocking of the application. Voice biometrics is an easy and secure way for individuals to reset their own passwords. This article describes how voice biometric password resets can ease the burden on the user helpdesk and reduce costs without compromising security.

  7. Autophonic Loudness of Singers in Simulated Room Acoustic Environments.

    PubMed

    Yadav, Manuj; Cabrera, Densil

    2017-05-01

    This paper aims to study the effect of room acoustics and phonemes on the perception of loudness of one's own voice (autophonic loudness) for a group of trained singers. For a set of five phonemes, 20 singers vocalized over several autophonic loudness ratios, while maintaining pitch constancy over extreme voice levels, within five simulated rooms. There were statistically significant differences in the slope of the autophonic loudness function (logarithm of autophonic loudness as a function of voice sound pressure level) for the five phonemes, with slopes ranging from 1.3 (/a:/) to 2.0 (/z/). There was no significant variation in the autophonic loudness function slopes with variations in room acoustics. The autophonic room response, which represents a systematic decrease in voice levels with increasing levels of room reflections, was also studied, with some evidence found in support. Overall, the average slope of the autophonic room response for the three corner vowels (/a:/, /i:/, and /u:/) was -1.4 for medium autophonic loudness. The findings relating to the slope of the autophonic loudness function are in agreement with the findings of previous studies where the sensorimotor mechanisms in regulating voice were shown to be more important in the perception of autophonic loudness than hearing of room acoustics. However, the role of room acoustics, in terms of the autophonic room response, is shown to be more complicated, requiring further inquiry. Overall, it is shown that autophonic loudness grows at more than twice the rate of loudness growth for sounds created outside the human body. Crown Copyright © 2017. Published by Elsevier Inc. All rights reserved.
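
    One way to compute such a slope, under the assumption (not stated in the abstract) that it is expressed as the exponent n in loudness proportional to pressure^n: regress log10(autophonic loudness) on voice SPL in dB and multiply the regression slope by 20. The numbers below are invented.

        import numpy as np

        # Hypothetical autophonic loudness ratios produced at several voice SPLs (dB)
        spl      = np.array([60.0, 65.0, 70.0, 75.0, 80.0])
        loudness = np.array([1.0, 2.0, 4.0, 8.0, 16.0])    # doubling every 5 dB

        # Slope of log10(loudness) vs. SPL; times 20 gives the exponent n in loudness ~ pressure**n
        b, a = np.polyfit(spl, np.log10(loudness), 1)
        print(20 * b)    # ~1.2 for this toy data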

  8. The Enemy's Gospel: Deconstructing Exclusivity and Inventing Inclusivity through the Power of Story

    ERIC Educational Resources Information Center

    Hilder, Monika B.

    2005-01-01

    The problem of exclusivity figures large in education. How can we educate to deconstruct exclusivity and invent inclusivity? This article asserts that an unexamined veneration for the "objective" academic voice is at least partly responsible for the strong tendency to exclusivity, while suggesting that the subjective voice of storytelling can…

  9. Sparking Passion: Engaging Student Voice through Project-Based Learning in Learning Communities

    ERIC Educational Resources Information Center

    Ball, Christy L.

    2016-01-01

    How do we confront entrenched educational practices in higher education that lead to student demotivation, poor retention, and low persistence? This article argues that project-based learning that situates student voice and capacity at the center of culturally-responsive curriculum has the potential to spark student passion for problem-solving…

  10. View from the Shore: Toward an Indian Voice in 1992.

    ERIC Educational Resources Information Center

    Barreiro, Jose

    1990-01-01

    Reviews plans in Spain and the Americas for observances of the 1992 Columbus Quincentenary. Reflects on Indian responses to these observances and resistance to the notion of America's "discovery." Includes testimonies from Indian voices: N. Scott Momaday, Suzan Shown Harjo, Beverly Singer, Ladonna Harris, Rayna Green, and Tim Coulter.…

  11. Presidential, But Not Prime Minister, Candidates With Lower Pitched Voices Stand a Better Chance of Winning the Election in Conservative Countries.

    PubMed

    Banai, Benjamin; Laustsen, Lasse; Banai, Irena Pavela; Bovan, Kosta

    2018-01-01

    Previous studies have shown that voters rely on sexually dimorphic traits that signal masculinity and dominance when they choose political leaders. For example, voters exert strong preferences for candidates with lower pitched voices because these candidates are perceived as stronger and more competent. Moreover, experimental studies demonstrate that conservative voters, more than liberals, prefer political candidates with traits that signal dominance, probably because conservatives are more likely to perceive the world as a threatening place and to be more attentive to dangerous and threatening contexts. In light of these findings, this study investigates whether country-level ideology influences the relationship between candidate voice pitch and electoral outcomes of real elections. Specifically, we collected voice pitch data for presidential and prime minister candidates, aggregate national ideology for the countries in which the candidates were nominated, and measures of electoral outcomes for 69 elections held across the world. In line with previous studies, we found that candidates with lower pitched voices received more votes and had greater likelihood of winning the elections. Furthermore, regression analysis revealed an interaction between candidate voice pitch, national ideology, and election type (presidential or parliamentary). That is, having a lower pitched voice was a particularly valuable asset for presidential candidates in conservative and right-leaning countries (in comparison to presidential candidates in liberal and left-leaning countries and parliamentary elections). We discuss the practical implications of these findings, and how they relate to existing research on candidates' voices, voting preferences, and democratic elections in general.
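
    A minimal sketch of the kind of interaction model described: a logistic regression of election outcome on candidate voice pitch, country-level ideology and election type, including their three-way interaction. The file and column names are hypothetical, and the authors' actual specification (covariates, outcome coding) is richer than this.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Assumed layout: one row per candidate with columns
        #   won (0/1), pitch_hz, ideology (higher = more conservative), election ('pres'/'parl')
        df = pd.read_csv("elections.csv")                  # hypothetical file

        # Three-way interaction of candidate pitch, national ideology and election type
        fit = smf.logit("won ~ pitch_hz * ideology * C(election)", data=df).fit()
        print(fit.summary())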

  12. Absolute Pitch: Effects of Timbre on Note-Naming Ability

    PubMed Central

    Vanzella, Patrícia; Schellenberg, E. Glenn

    2010-01-01

    Background Absolute pitch (AP) is the ability to identify or produce isolated musical tones. It is evident primarily among individuals who started music lessons in early childhood. Because AP requires memory for specific pitches as well as learned associations with verbal labels (i.e., note names), it represents a unique opportunity to study interactions in memory between linguistic and nonlinguistic information. One untested hypothesis is that the pitch of voices may be difficult for AP possessors to identify. A musician's first instrument may also affect performance and extend the sensitive period for acquiring accurate AP. Methods/Principal Findings A large sample of AP possessors was recruited on-line. Participants were required to identity test tones presented in four different timbres: piano, pure tone, natural (sung) voice, and synthesized voice. Note-naming accuracy was better for non-vocal (piano and pure tones) than for vocal (natural and synthesized voices) test tones. This difference could not be attributed solely to vibrato (pitch variation), which was more pronounced in the natural voice than in the synthesized voice. Although starting music lessons by age 7 was associated with enhanced note-naming accuracy, equivalent abilities were evident among listeners who started music lessons on piano at a later age. Conclusions/Significance Because the human voice is inextricably linked to language and meaning, it may be processed automatically by voice-specific mechanisms that interfere with note naming among AP possessors. Lessons on piano or other fixed-pitch instruments appear to enhance AP abilities and to extend the sensitive period for exposure to music in order to develop accurate AP. PMID:21085598

  13. Voice Changes in Real Speaking Situations During a Day, With and Without Vocal Loading: Assessing Call Center Operators.

    PubMed

    Ben-David, Boaz M; Icht, Michal

    2016-03-01

    Occupational-related vocal load is an increasing global problem with adverse personal and economic implications. We examined voice changes in real speaking situations during a single day, with and without vocal loading, aiming to identify an objective acoustic index for vocal load over a day. Call center operators (CCOs, n = 27) and age- and gender-matched students (n = 25) were recorded at the beginning and at the end of a day, with (CCOs) and without (students) vocal load. Speaking and reading voice samples were analyzed for fundamental frequency (F0), sound pressure level (SPL), and their variance (F0 coefficient of variation [F0 CV], SPL CV). The impact of lifestyle habits on voice changes was also estimated. The main findings revealed an interaction, with F0 rise at the end of the day for the students but not for the CCOs. We suggest that F0 rise is a typical phenomenon of a day of normal vocal use, whereas vocal loading interferes with this mechanism. In addition, different lifestyle profiles of CCOs and controls were observed, as the CCOs reported higher incidence of dehydrating behaviors (e.g., smoking, caffeine). Yet, this profile was not linked with voice changes. In sum, we suggest that F0 rise over a day can potentially serve as an index for typical voice use. Its absence can hint at consequent voice symptoms and complaints. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
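
    For reference, the reported indices are simple to compute once a per-frame F0 track and a calibrated pressure waveform are available; F0 extraction itself (autocorrelation, Praat, etc.) is assumed to have been done elsewhere. A minimal sketch with toy inputs:

        import numpy as np

        def f0_cv(f0_track_hz):
            """Coefficient of variation of voiced-frame F0 values."""
            f0 = np.asarray(f0_track_hz, dtype=float)
            f0 = f0[f0 > 0]                      # drop unvoiced frames coded as 0
            return f0.std(ddof=1) / f0.mean()

        def spl_db(samples_pa, p_ref=20e-6):
            """Sound pressure level of a calibrated pressure waveform (in Pa)."""
            rms = np.sqrt(np.mean(np.square(samples_pa)))
            return 20 * np.log10(rms / p_ref)

        print(f0_cv([210, 0, 215, 220, 0, 230]))                                  # toy F0 track
        print(spl_db(0.02 * np.random.default_rng(0).standard_normal(16000)))     # ~60 dB SPL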

  14. Voice disorders and mental health in teachers: a cross-sectional nationwide study.

    PubMed

    Nerrière, Eléna; Vercambre, Marie-Noël; Gilbert, Fabien; Kovess-Masféty, Viviane

    2009-10-02

    Teachers, as professional voice users, are at particular risk of voice disorders. Among contributing factors, stress and psychological tension could play a role but epidemiological data on this problem are scarce. The aim of this study was to evaluate prevalence and cofactors of voice disorders among teachers in the French National Education system, with particular attention paid to the association between voice complaint and psychological status. The source data come from an epidemiological postal survey on physical and mental health conducted in a sample of 20,099 adults (in activity or retired) selected at random from the health plan records of the national education system. Overall response rate was 53%. Of the 10,288 respondents, 3,940 were teachers in activity currently giving classes to students. In the sample of those with complete data (n = 3,646), variables associated with voice disorders were investigated using logistic regression models. Studied variables referred to demographic characteristics, socio-professional environment, psychological distress, mental health disorders (DSM-IV), and sick leave. One in two female teachers reported voice disorders (50.0%) compared to one in four males (26.0%). Those who reported voice disorders presented higher level of psychological distress. Sex- and age-adjusted odds ratios [95% confidence interval] were respectively 1.8 [1.5-2.2] for major depressive episode, 1.7 [1.3-2.2] for general anxiety disorder, and 1.6 [1.2-2.2] for phobia. A significant association between voice disorders and sick leave was also demonstrated (1.5 [1.3-1.7]). Voice disorders were frequent among French teachers. Associations with psychiatric disorders suggest that a situation may exist which is more complex than simple mechanical failure. Further longitudinal research is needed to clarify the comorbidity between voice and psychological disorders.
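
    A minimal sketch of how sex- and age-adjusted odds ratios such as those quoted are typically obtained: a logistic regression of voice disorder on the psychiatric indicator plus the adjustment covariates, with the coefficient and its confidence interval exponentiated. The file and column names are hypothetical.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Assumed columns: voice_disorder (0/1), mde (0/1 major depressive episode), sex, age
        df = pd.read_csv("teachers.csv")                 # hypothetical file

        fit = smf.logit("voice_disorder ~ mde + C(sex) + age", data=df).fit()
        or_mde = np.exp(fit.params["mde"])               # sex- and age-adjusted odds ratio
        ci_mde = np.exp(fit.conf_int().loc["mde"])       # 95% CI on the OR scale
        print(or_mde, ci_mde.values)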

  15. Evaluation of Singing Vocal Health in Yakshagana Singers.

    PubMed

    Gunjawate, Dhanshree R; Aithal, Venkataraja U; Devadas, Usha; Guddattu, Vasudeva

    2017-03-01

    Yakshagana, a popular traditional folk art from Karnataka, India, includes singing and dancing. Yakshagana singer or Bhagavata plays an important role in singing and conducting the performance. The present study aims to assess singing vocal health using the Singing Voice Handicap Index-10 (SVHI-10) in these singers and to compare between those who report voice problems and those who do not. A cross-sectional study was carried out on 26 Bhagavata using a demographic questionnaire and the SVHI-10 in the Kannada language. Descriptive statistics were used to summarize the data. Independent sample t test was used to compare the responses for demographic variables between the two groups of singers with and without voice problems. The difference in scores of SVHI-10 between the two groups was analyzed using Pearson's chi-square test. Of the Bhagavata, 38% reported having experienced voice problems, which affected their singing, with higher total SVHI-10 score (31.2 ± 5.7) compared with those who did not report any problems (16.81 ± 9.56). A statistically significant difference between the groups was noted in the emotional domain and total scores. The present study provides preliminary information on the voice handicap reported by Bhagavata. The singers reporting voice problems scored higher on SVHI-10. A healthy singing voice is essential for Yakshagana singers, and voice problems can have a significant impact on their performance and livelihood. Hence, results of the present study indicate the need to understand these singers' voice problems and their impact more comprehensively, and educate them about voice care. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  16. Risk and protective factors for spasmodic dysphonia: a case-control investigation.

    PubMed

    Tanner, Kristine; Roy, Nelson; Merrill, Ray M; Kimber, Kamille; Sauder, Cara; Houtz, Daniel R; Doman, Darrin; Smith, Marshall E

    2011-01-01

    Spasmodic dysphonia (SD) is a chronic, incurable, and often disabling voice disorder of unknown pathogenesis. The purpose of this study was to identify possible endogenous and exogenous risk and protective factors uniquely associated with SD. Prospective, exploratory, case-control investigation. One hundred fifty patients with SD and 150 medical controls (MCs) were interviewed regarding their personal and family histories, environmental exposures, illnesses, injuries, voice use patterns, and general health using a previously vetted and validated epidemiologic questionnaire. Odds ratios and multiple logistic regression analyses (α<0.15) identified several factors that significantly increased the likelihood of having SD. These factors included (1) a personal history of mumps, blepharospasm, tremor, intense occupational and avocational voice use, and a family history of voice disorders; (2) an immediate family history of meningitis, tremor, tics, cancer, and compulsive behaviors; and (3) an extended family history of tremor and cancer. SD is likely multifactorial in etiology, involving both genetic and environmental factors. Viral infections/exposures, along with intense voice use, may trigger the onset of SD in genetically predisposed individuals. Future studies should examine the interaction among genetic and environmental factors to determine the pathogenesis of SD. Copyright © 2011 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  17. More than a feeling: discrete emotions mediate the relationship between relative deprivation and reactions to workplace furloughs.

    PubMed

    Osborne, Danny; Smith, Heather J; Huo, Yuen J

    2012-05-01

    A key insight from investigations of individual relative deprivation (IRD) is that people can experience objective disadvantages differently. In this study, university faculty (N = 953) who reported greater IRD in response to a mandatory furlough (i.e., involuntary pay reductions) were more likely to (a) voice options designed to improve the university (voice), (b) consider leaving their job (exit), and (c) neglect their work responsibilities (neglect), but were (d) less likely to express loyalty to the university (loyalty). Consistent with the emotions literature, (a) anger mediated the relationship between IRD and voice, (b) fear between IRD and exit, (c) sadness between IRD and neglect, and (d) gratitude between IRD and loyalty. IRD was inversely associated with self-reported physical and mental health via these different emotional pathways. These results show how discrete emotions can explain responses to IRD and, in turn, contribute to organizational viability and the health of its members.

  18. The social-sensory interface: category interactions in person perception

    PubMed Central

    Freeman, Jonathan B.; Johnson, Kerri L.; Adams, Reginald B.; Ambady, Nalini

    2012-01-01

    Research is increasingly challenging the claim that distinct sources of social information—such as sex, race, and emotion—are processed in discrete fashion. Instead, there appear to be functionally relevant interactions that occur. In the present article, we describe research examining how cues conveyed by the human face, voice, and body interact to form the unified representations that guide our perceptions of and responses to other people. We explain how these information sources are often thrown into interaction through bottom-up forces (e.g., phenotypic cues) as well as top-down forces (e.g., stereotypes and prior knowledge). Such interactions point to a person perception process that is driven by an intimate interface between bottom-up perceptual and top-down social processes. Incorporating data from neuroimaging, event-related potentials (ERP), computational modeling, computer mouse-tracking, and other behavioral measures, we discuss the structure of this interface, and we consider its implications and adaptive purposes. We argue that an increased understanding of person perception will likely require a synthesis of insights and techniques, from social psychology to the cognitive, neural, and vision sciences. PMID:23087622

  19. Voices used by nurses when communicating with patients and relatives in a department of medicine for older people-An ethnographic study.

    PubMed

    Johnsson, Anette; Boman, Åse; Wagman, Petra; Pennbrant, Sandra

    2018-04-01

    To describe how nurses communicate with older patients and their relatives in a department of medicine for older people in western Sweden. Communication is an essential tool for nurses when working with older patients and their relatives, but often patients and relatives experience shortcomings in the communication exchanges. They may not receive information or are not treated in a professional way. Good communication can facilitate the development of a positive meeting and improve the patient's health outcome. An ethnographic design informed by the sociocultural perspective was applied. Forty participatory observations were conducted and analysed during the period October 2015-September 2016. The observations covered 135 hours of nurse-patient-relative interaction. Field notes were taken, and 40 informal field conversations with nurses and 40 with patients and relatives were carried out. Semistructured follow-up interviews were conducted with five nurses. In the result, it was found that nurses communicate with four different voices: a medical voice described as being incomplete, task-oriented and with a disease perspective; a nursing voice described as being confirmatory, process-oriented and with a holistic perspective; a pedagogical voice described as being contextualised, comprehension-oriented and with a learning perspective; and a power voice described as being distancing and excluding. The voices can be seen as context-dependent communication approaches. When nurses switch between the voices, this indicates a shift in the orientation or situation. The results indicate that if nurses successfully combine the voices, while limiting the use of the power voice, the communication exchanges can become a more positive experience for all parties involved and a good nurse-patient-relative communication exchange can be achieved. Working for improved communication between nurses, patients and relatives is crucial for establishing a positive nurse-patient-relative relationship, which is a basis for improving patient care and healthcare outcomes. © 2018 John Wiley & Sons Ltd.

  20. Using Continuous Voice Recognition Technology as an Input Medium to the Naval Warfare Interactive Simulation System (NWISS).

    DTIC Science & Technology

    1984-06-01

    [OCR-damaged front matter and table-of-contents fragment; recoverable entries include: Continuous Voice Recognition; Verbex 3000 Speech Application Development System (SPADS); Naval Warfare Interactive Simulation System (NWISS); Purpose.]

  1. Functional Connectivity between Face-Movement and Speech-Intelligibility Areas during Auditory-Only Speech Perception

    PubMed Central

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers’ voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker’s face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas. PMID:24466026
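
    The abstract says only that "connectivity analyses" were used; one common way to test whether a task factor increases coupling between two regions is a psychophysiological-interaction (PPI)-style regression, sketched below on synthetic timecourses. This illustrates the general approach, not the authors' pipeline.

        import numpy as np

        rng = np.random.default_rng(1)
        n_vol = 200
        seed_ts = rng.standard_normal(n_vol)                    # posterior STS timecourse
        familiar = np.tile([0.0] * 20 + [1.0] * 20, 5)          # blocks of face-learned speakers
        ppi = seed_ts * (familiar - familiar.mean())            # interaction (mean-centred task)

        # Synthetic "anterior STS" signal whose coupling to the seed increases in familiar blocks
        target = 0.5 * seed_ts + 0.5 * ppi + 0.3 * rng.standard_normal(n_vol)

        # GLM: target ~ seed + task + seed*task; a positive PPI beta indicates
        # stronger seed-target coupling during the face-familiar condition
        X = np.column_stack([np.ones(n_vol), seed_ts, familiar, ppi])
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        print(dict(zip(["intercept", "seed", "task", "ppi"], beta)))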

  2. Emotional voices in context: A neurobiological model of multimodal affective information processing

    NASA Astrophysics Data System (ADS)

    Brück, Carolin; Kreifelts, Benjamin; Wildgruber, Dirk

    2011-12-01

    Just as eyes are often considered a gateway to the soul, the human voice offers a window through which we gain access to our fellow human beings' minds - their attitudes, intentions and feelings. Whether in talking or singing, crying or laughing, sighing or screaming, the sheer sound of a voice communicates a wealth of information that, in turn, may serve the observant listener as valuable guidepost in social interaction. But how do human beings extract information from the tone of a voice? In an attempt to answer this question, the present article reviews empirical evidence detailing the cerebral processes that underlie our ability to decode emotional information from vocal signals. The review will focus primarily on two prominent classes of vocal emotion cues: laughter and speech prosody (i.e. the tone of voice while speaking). Following a brief introduction, behavioral as well as neuroimaging data will be summarized that allows to outline cerebral mechanisms associated with the decoding of emotional voice cues, as well as the influence of various context variables (e.g. co-occurring facial and verbal emotional signals, attention focus, person-specific parameters such as gender and personality) on the respective processes. Building on the presented evidence, a cerebral network model will be introduced that proposes a differential contribution of various cortical and subcortical brain structures to the processing of emotional voice signals both in isolation and in context of accompanying (facial and verbal) emotional cues.

  3. Emotional voices in context: a neurobiological model of multimodal affective information processing.

    PubMed

    Brück, Carolin; Kreifelts, Benjamin; Wildgruber, Dirk

    2011-12-01

    Just as eyes are often considered a gateway to the soul, the human voice offers a window through which we gain access to our fellow human beings' minds - their attitudes, intentions and feelings. Whether in talking or singing, crying or laughing, sighing or screaming, the sheer sound of a voice communicates a wealth of information that, in turn, may serve the observant listener as valuable guidepost in social interaction. But how do human beings extract information from the tone of a voice? In an attempt to answer this question, the present article reviews empirical evidence detailing the cerebral processes that underlie our ability to decode emotional information from vocal signals. The review will focus primarily on two prominent classes of vocal emotion cues: laughter and speech prosody (i.e. the tone of voice while speaking). Following a brief introduction, behavioral as well as neuroimaging data will be summarized that allows to outline cerebral mechanisms associated with the decoding of emotional voice cues, as well as the influence of various context variables (e.g. co-occurring facial and verbal emotional signals, attention focus, person-specific parameters such as gender and personality) on the respective processes. Building on the presented evidence, a cerebral network model will be introduced that proposes a differential contribution of various cortical and subcortical brain structures to the processing of emotional voice signals both in isolation and in context of accompanying (facial and verbal) emotional cues. Copyright © 2011 Elsevier B.V. All rights reserved.

  4. Did you or I say pretty, rude or brief? An ERP study of the effects of speaker's identity on emotional word processing.

    PubMed

    Pinheiro, Ana P; Rezaii, Neguine; Nestor, Paul G; Rauber, Andréia; Spencer, Kevin M; Niznikiewicz, Margaret

    2016-02-01

    During speech comprehension, multiple cues need to be integrated at a millisecond speed, including semantic information, as well as voice identity and affect cues. A processing advantage has been demonstrated for self-related stimuli when compared with non-self stimuli, and for emotional relative to neutral stimuli. However, very few studies investigated self-other speech discrimination and, in particular, how emotional valence and voice identity interactively modulate speech processing. In the present study we probed how the processing of words' semantic valence is modulated by speaker's identity (self vs. non-self voice). Sixteen healthy subjects listened to 420 prerecorded adjectives differing in voice identity (self vs. non-self) and semantic valence (neutral, positive and negative), while electroencephalographic data were recorded. Participants were instructed to decide whether the speech they heard was their own (self-speech condition), someone else's (non-self speech), or if they were unsure. The ERP results demonstrated interactive effects of speaker's identity and emotional valence on both early (N1, P2) and late (Late Positive Potential - LPP) processing stages: compared with non-self speech, self-speech with neutral valence elicited more negative N1 amplitude, self-speech with positive valence elicited more positive P2 amplitude, and self-speech with both positive and negative valence elicited more positive LPP. ERP differences between self and non-self speech occurred in spite of similar accuracy in the recognition of both types of stimuli. Together, these findings suggest that emotion and speaker's identity interact during speech processing, in line with observations of partially dependent processing of speech and speaker information. Copyright © 2016. Published by Elsevier Inc.

  5. Acoustic passaggio pedagogy for the male voice.

    PubMed

    Bozeman, Kenneth Wood

    2013-07-01

    Awareness of interactions between the lower harmonics of the voice source and the first formant of the vocal tract, and of the passive vowel modifications that accompany them, can assist in working out a smooth transition through the passaggio of the male voice. A stable vocal tract length establishes the general location of all formants, including the higher formants that form the singer's formant cluster. Untrained males instinctively shorten the tube to preserve the strong F1/H2 acoustic coupling of voce aperta, resulting in 'yell' timbre. If tube length and shape are kept stable during pitch ascent, the yell can be avoided by allowing the second harmonic to rise above the first formant, creating the balanced timbre of voce chiusa.
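
    As a worked example of the F1/H2 relationship described above (formant values vary with vowel and singer, so the figure is only illustrative): if a vowel's first formant sits near 660 Hz, the second harmonic (2 x F0) reaches it at F0 = 330 Hz, roughly E4; keeping the tract stable and letting H2 rise above F1 at that point is the strategy described, rather than shortening the tube to hold H2 under F1.

        # Hypothetical illustration: pitch at which H2 (= 2*F0) crosses a vowel's F1
        f1_hz = 660.0            # assumed first-formant frequency for an open vowel
        f0_cross = f1_hz / 2.0   # H2 reaches F1 when F0 = F1 / 2
        print(f0_cross)          # 330.0 Hz, roughly E4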

  6. A Comparison of Educator Dispositions to Student Responses on the Kentucky Student Voice Survey

    ERIC Educational Resources Information Center

    Whitis, Julie D.

    2017-01-01

    The primary purpose of this study was to determine if a correlation exists between teacher dispositions, grounded in Perceptual Psychology, and student results on the Kentucky Student Voice Survey (KSVS), a 25-question survey adapted from Cambridge Education's Tripod survey. A correlation was found between teacher dispositions and KSVS question…

  7. Ubiquitous Discussion Forum: Introducing Mobile Phones and Voice Discussion into a Web Discussion Forum

    ERIC Educational Resources Information Center

    Wei, Fu-Hsiang; Chen, Gwo-Dong; Wang, Chin-Yeh; Li, Liang-Yi

    2007-01-01

    Web-based discussion forums enable users to share knowledge in straightforward and popular platforms. However, discussion forums have several problems, such as the lack of immediate delivery and response, the heavily text-based medium, inability to hear expressions of voice and the heuristically created discussion topics which can impede the…

  8. Doing the "Work of Hearing": Girls' Voices in Transnational Educational Development Campaigns

    ERIC Educational Resources Information Center

    Khoja-Moolji, Shenila

    2016-01-01

    There is an increasing focus in transnational campaigns for girls' education and empowerment on highlighting the voices of girls from the global south. These moves are made in response to feminist critiques of said campaigns for not attending to the diverse, multiple and complex lived experiences of girls. This article engages in theorising these…

  9. A modulatory effect of male voice pitch on long-term memory in women: evidence of adaptation for mate choice?

    PubMed

    Smith, David S; Jones, Benedict C; Feinberg, David R; Allan, Kevin

    2012-01-01

    From a functionalist perspective, human memory should be attuned to information of adaptive value for one's survival and reproductive fitness. While evidence of sensitivity to survival-related information is growing, specific links between memory and information that could impact upon reproductive fitness have remained elusive. Here, in two experiments, we showed that memory in women is sensitive to male voice pitch, a sexually dimorphic cue important for mate choice because it not only serves as an indicator of genetic quality, but may also signal behavioural traits undesirable in a long-term partner. In Experiment 1, we report that women's visual object memory is significantly enhanced when an object's name is spoken during encoding in a masculinised (i.e., lower-pitch) versus feminised (i.e., higher-pitch) male voice, but that no analogous effect occurs when women listen to other women's voices. Experiment 2 replicated this pattern of results, additionally showing that lowering and raising male voice pitch enhanced and impaired women's memory, respectively, relative to a baseline (i.e., unmanipulated) voice condition. The modulatory effect of sexual dimorphism cues in the male voice may reveal a mate-choice adaptation within women's memory, sculpted by evolution in response to the dilemma posed by the double-edged qualities of male masculinity.

  10. Multisensory perception of the six basic emotions is modulated by attentional instruction and unattended modality

    PubMed Central

    Takagi, Sachiko; Hiramatsu, Saori; Tabei, Ken-ichi; Tanaka, Akihiro

    2015-01-01

    Previous studies have shown that facial and vocal affective expressions interact with each other during perception. Facial expressions usually dominate vocal expressions when we perceive the emotions of face–voice stimuli. In most of these studies, participants were instructed to pay attention to the face or voice. Few studies compared the perceived emotions with and without specific instructions regarding the modality to which attention should be directed. Also, these studies used combinations of face and voice that expressed two opposing emotions, which limits the generalizability of the findings. The purpose of this study is to examine whether emotion perception is modulated by instructions to pay attention to the face or voice, using the six basic emotions. We also examine the modality dominance between face and voice for each emotion category. Before the experiment, we recorded faces and voices expressing the six basic emotions and orthogonally combined these faces and voices. Consequently, the emotional valence of visual and auditory information was either congruent or incongruent. The experiment comprised unisensory and multisensory sessions. The multisensory session was divided into three blocks according to whether an instruction was given to pay attention to a given modality (face attention, voice attention, and no instruction). Participants judged whether the speaker expressed happiness, sadness, anger, fear, disgust, or surprise. Our results revealed that instructions to pay attention to one modality and the congruency of emotions between modalities modulated the modality dominance, and that modality dominance differed across emotion categories. In particular, the modality dominance for anger changed according to the instruction given. Analyses also revealed that the modality dominance suggested by the congruency effect can be explained in terms of facilitation and interference effects. PMID:25698945

  11. Damping effects of magnetic fluids of various saturation magnetization (abstract)

    NASA Astrophysics Data System (ADS)

    Chagnon, Mark

    1990-05-01

    Magnetic fluids have been widely accepted for use in loudspeaker voice coil gaps as viscous dampers and liquid coolants. When applied properly to a voice coil during manufacture of the loudspeaker, a dramatic improvement in frequency response and power handling is observed. Over the past decade, a great deal of study has been given to the effects of damping as a function of fluid viscosity. It is known that the apparent viscosity of a magnetic fluid increases as a function of applied magnetic field, and that the viscosity versus field relationship approximates that of the magnetization versus applied field. At applied magnetic field strengths sufficient to cause magnetic saturation of the fluid, no further increase in viscosity with increased magnetic field is observed. In order to provide a better understanding of the second-order magnetoviscous damping effects in magnetic fluids used in voice coils, and to provide a better loudspeaker design criterion using magnetic fluids, we have studied the effect on damping of several magnetic fluids of the same zero-field viscosity and of varying saturation magnetization. Magnetic fluids with saturation magnetization ranging from 50 to 450 G and 100 cP viscosity at zero applied field were injected into the voice coil gap of a standard midrange loudspeaker. The frequency response over the entire dynamic range of the speaker was measured. The changes in frequency response versus fluid magnetization are reported.

  12. Multimodal approaches for emotion recognition: a survey

    NASA Astrophysics Data System (ADS)

    Sebe, Nicu; Cohen, Ira; Gevers, Theo; Huang, Thomas S.

    2004-12-01

    Recent technological advances have enabled human users to interact with computers in ways previously unimaginable. Beyond the confines of the keyboard and mouse, new modalities for human-computer interaction such as voice, gesture, and force-feedback are emerging. Despite important advances, one necessary ingredient for natural interaction is still missing: emotions. Emotions play an important role in human-to-human communication and interaction, allowing people to express themselves beyond the verbal domain. The ability to understand human emotions is desirable for the computer in several applications. This paper explores new ways of human-computer interaction that enable the computer to be more aware of the user's emotional and attentional expressions. We present the basic research in the field and recent advances in emotion recognition from facial, vocal, and physiological signals, where the different modalities are treated independently. We then describe the challenging problem of multimodal emotion recognition and advocate the use of probabilistic graphical models when fusing the different modalities. We also discuss the difficult issues of obtaining reliable affective data, obtaining ground truth for emotion recognition, and the use of unlabeled data.

  13. Multimodal approaches for emotion recognition: a survey

    NASA Astrophysics Data System (ADS)

    Sebe, Nicu; Cohen, Ira; Gevers, Theo; Huang, Thomas S.

    2005-01-01

    Recent technological advances have enabled human users to interact with computers in ways previously unimaginable. Beyond the confines of the keyboard and mouse, new modalities for human-computer interaction such as voice, gesture, and force-feedback are emerging. Despite important advances, one necessary ingredient for natural interaction is still missing: emotions. Emotions play an important role in human-to-human communication and interaction, allowing people to express themselves beyond the verbal domain. The ability to understand human emotions is desirable for the computer in several applications. This paper explores new ways of human-computer interaction that enable the computer to be more aware of the user's emotional and attentional expressions. We present the basic research in the field and recent advances in emotion recognition from facial, vocal, and physiological signals, where the different modalities are treated independently. We then describe the challenging problem of multimodal emotion recognition and advocate the use of probabilistic graphical models when fusing the different modalities. We also discuss the difficult issues of obtaining reliable affective data, obtaining ground truth for emotion recognition, and the use of unlabeled data.
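
    As one hedged illustration of the fusion idea advocated above, the sketch below combines per-modality class posteriors with a naive product-of-posteriors rule. It is not the probabilistic graphical model the survey advocates, and the emotion labels and posterior values are invented for the example.

      import numpy as np

      # Toy decision-level fusion of emotion classifiers for three modalities
      # (face, voice, physiology). Posterior values below are illustrative only.

      EMOTIONS = ["happy", "sad", "angry", "neutral"]

      posteriors = {
          "face":       np.array([0.60, 0.10, 0.20, 0.10]),
          "voice":      np.array([0.30, 0.15, 0.45, 0.10]),
          "physiology": np.array([0.40, 0.20, 0.30, 0.10]),
      }

      def fuse(posts):
          """Combine modality posteriors assuming conditional independence."""
          combined = np.ones(len(EMOTIONS))
          for p in posts.values():
              combined *= p  # product rule; a uniform prior cancels out
          return combined / combined.sum()

      fused = fuse(posteriors)
      print(dict(zip(EMOTIONS, fused.round(3))))
      print("fused decision:", EMOTIONS[int(np.argmax(fused))])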

  14. Adductor spasmodic dysphonia: Relationships between acoustic indices and perceptual judgments

    NASA Astrophysics Data System (ADS)

    Cannito, Michael P.; Sapienza, Christine M.; Woodson, Gayle; Murry, Thomas

    2003-04-01

    This study investigated relationships between acoustical indices of spasmodic dysphonia and perceptual scaling judgments of voice attributes made by expert listeners. Audio-recordings of The Rainbow Passage were obtained from thirty-one speakers with spasmodic dysphonia before and after a BOTOX injection of the vocal folds. Six temporal acoustic measures were obtained across 15 words excerpted from each reading sample, including both frequency of occurrence and percent time for (1) aperiodic phonation, (2) phonation breaks, and (3) fundamental frequency shifts. Visual analog scaling judgments were also obtained from six voice experts using an interactive computer interface to quantify four voice attributes (i.e., overall quality, roughness, brokenness, breathiness) in a carefully psychoacoustically controlled environment, using the same reading passages as stimuli. Number and percent aperiodicity and phonation breaks correlated significantly with perceived overall voice quality, roughness, and brokenness before and after the BOTOX injection. Breathiness was correlated with aperiodicity only prior to injection, while roughness also correlated with frequency shifts following injection. Factor analysis reduced the perceived attributes to two principal components: glottal squeezing and breathiness. The acoustic measures demonstrated a strong regression relationship with perceived glottal squeezing, but no regression relationship with breathiness was observed. Implications for an analysis of pathologic voices will be discussed.
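
    The correlational analysis described above can be sketched generically as below; the arrays are hypothetical stand-ins for one acoustic index and one averaged perceptual rating, not the study's data.

      import numpy as np
      from scipy import stats

      # Hypothetical per-speaker values standing in for an acoustic index
      # (e.g., percent aperiodic phonation) and a perceptual rating
      # (e.g., visual-analog roughness averaged over expert listeners).
      acoustic_index   = np.array([ 4.0, 12.5, 30.1,  8.2, 22.7, 15.3, 40.8,  6.1])
      perceptual_score = np.array([10.0, 25.0, 55.0, 18.0, 47.0, 33.0, 72.0, 15.0])

      r, p = stats.pearsonr(acoustic_index, perceptual_score)
      print(f"Pearson r = {r:.2f}, p = {p:.4f}")

      # A simple least-squares regression line for the same pair of variables.
      slope, intercept = np.polyfit(acoustic_index, perceptual_score, 1)
      print(f"rating ~ {slope:.2f} * index + {intercept:.2f}")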

  15. Numerical simulation of self-sustained oscillation of a voice-producing element based on Navier-Stokes equations and the finite element method.

    PubMed

    de Vries, Martinus P; Hamburg, Marc C; Schutte, Harm K; Verkerke, Gijsbertus J; Veldman, Arthur E P

    2003-04-01

    Surgical removal of the larynx results in radically reduced production of voice and speech. To improve voice quality, a voice-producing element (VPE) is developed based on the lip principle, named after the lips of a musician playing a brass instrument. To optimize the VPE, a numerical model is developed. In this model, the finite element method is used to describe the mechanical behavior of the VPE. The flow is described by two-dimensional incompressible Navier-Stokes equations. The interaction between VPE and airflow is modeled by placing the grid of the VPE model in the grid of the aerodynamical model and requiring continuity of forces and velocities. By applying increasing pressure to the numerical model, pulses comparable to glottal volume velocity waveforms are obtained. By varying geometric parameters, their influence can be determined. To validate this numerical model, an in vitro test with a prototype of the VPE is performed. Experimental and numerical results show an acceptable agreement.

  16. Central nervous system control of the laryngeal muscles in humans

    PubMed Central

    Ludlow, Christy L.

    2005-01-01

    Laryngeal muscle control may vary for different functions such as: voice for speech communication, emotional expression during laughter and cry, breathing, swallowing, and cough. This review discusses the control of the human laryngeal muscles for some of these different functions. Sensori-motor aspects of laryngeal control have been studied by eliciting various laryngeal reflexes. The role of audition in learning and monitoring ongoing voice production for speech is well known; while the role of somatosensory feedback is less well understood. Reflexive control systems involving central pattern generators may contribute to swallowing, breathing and cough with greater cortical control during volitional tasks such as voice production for speech. Volitional control is much less well understood for each of these functions and likely involves the integration of cortical and subcortical circuits. The new frontier is the study of the central control of the laryngeal musculature for voice, swallowing and breathing and how volitional and reflexive control systems may interact in humans. PMID:15927543

  17. A 4.8 kbps code-excited linear predictive coder

    NASA Technical Reports Server (NTRS)

    Tremain, Thomas E.; Campbell, Joseph P., Jr.; Welch, Vanoy C.

    1988-01-01

    A secure voice system, STU-3, capable of providing end-to-end secure voice communications was developed (1984). The terminal for the new system will be built around the standard LPC-10 voice processor algorithm. While the performance of the present STU-3 processor is considered to be good, its response to nonspeech sounds such as whistles, coughs, and impulse-like noises may not be completely acceptable. Speech in noisy environments also causes problems for the LPC-10 voice algorithm. In addition, there is always a demand for something better. It is hoped that LPC-10's 2.4 kbps voice performance will be complemented with a very high quality speech coder operating at a higher data rate. This new coder is one of a number of candidate algorithms being considered for an upgraded version of the STU-3 in late 1989. This paper considers the problems of designing a code-excited linear predictive (CELP) coder that provides very high quality speech at a 4.8 kbps data rate and can be implemented on today's hardware.
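
    CELP builds on the same short-term linear predictive (LPC) analysis used in LPC-10; the residual after inverse filtering is then matched against an excitation codebook. The sketch below shows only the LPC analysis step (autocorrelation method with the Levinson-Durbin recursion) on a synthetic frame; the frame length and model order are arbitrary choices for the example, not the coder's actual parameters.

      import numpy as np

      def lpc(frame, order=10):
          """LPC coefficients of one windowed speech frame via the
          autocorrelation method and the Levinson-Durbin recursion."""
          n = len(frame)
          r = np.array([np.dot(frame[:n - k], frame[k:]) for k in range(order + 1)])
          a = np.zeros(order + 1)
          a[0] = 1.0
          err = r[0]
          for i in range(1, order + 1):
              acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
              k = -acc / err                       # reflection coefficient
              a[1:i] = a[1:i] + k * a[i - 1:0:-1]  # update earlier coefficients
              a[i] = k
              err *= (1.0 - k * k)                 # remaining prediction error
          return a, err

      # Toy usage on a synthetic 30 ms frame at 8 kHz (values are illustrative).
      fs = 8000
      t = np.arange(int(0.03 * fs)) / fs
      frame = np.hamming(len(t)) * np.sin(2 * np.pi * 150 * t)
      coeffs, residual_energy = lpc(frame, order=10)
      print(coeffs.round(3), residual_energy)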

  18. The Neighborhood Voice: evaluating a mobile research vehicle for recruiting African Americans to participate in cancer control studies.

    PubMed

    Alcaraz, Kassandra I; Weaver, Nancy L; Andresen, Elena M; Christopher, Kara; Kreuter, Matthew W

    2011-09-01

    The Neighborhood Voice is a vehicle customized for conducting health research in community settings. It brings research studies into neighborhoods affected most by health disparities and reaches groups often underrepresented in research samples. This paper reports on the experience and satisfaction of 599 African American women who participated in research on board the Neighborhood Voice. Using bivariate, psychometric, and logistic regression analyses, we examined responses to a brief post-research survey. Most women (71%) reported that they had never previously participated in research, and two-thirds (68%) rated their Neighborhood Voice experience as excellent. Satisfaction scores were highest among first-time research participants (p < .05). Women's ratings of the Neighborhood Voice on Comfort (OR = 4.9; 95% CI = 3.0, 7.9) and Convenience (OR = 1.8; 95% CI = 1.2, 2.9) significantly predicted having an excellent experience. Mobile research facilities may increase participation among disadvantaged and minority populations. Our brief survey instrument is a model for evaluating such outreach.

  19. A report on alterations to the speaking and singing voices of four women following hormonal therapy with virilizing agents.

    PubMed

    Baker, J

    1999-12-01

    Four women aged between 27 and 58 years sought otolaryngological examination due to significant alterations to their voices, the primary concerns being hoarseness in vocal quality, lowering of habitual pitch, difficulty projecting their speaking voices, and loss of control over their singing voices. Otolaryngological examination with a mirror or flexible laryngoscope revealed no apparent abnormality of vocal fold structure or function, and the women were referred for speech pathology with diagnoses of functional dysphonia. Objective acoustic measures using the Kay Visipitch indicated significant lowering of the mean fundamental frequency for each woman, and perceptual analysis of the patients' voices during quiet speaking, projected voice use, and comprehensive singing activities revealed a constellation of features typically noted in the pubescent male. The original diagnoses of functional dysphonia were queried, prompting further exploration of each woman's medical history, revealing in each case onset of vocal symptoms shortly after commencing treatment for conditions with medications containing virilizing agents (e.g., Danocrine (danazol), Deca-Durabolin (nandrolone decanoate), and testosterone). Although some of the vocal symptoms decreased in severity with 6 months of voice therapy and after withdrawal from the drugs, a number of symptoms remained permanent, suggesting each subject had suffered significant alterations in vocal physiology, including muscle tissue changes, muscle coordination dysfunction, and proprioceptive dysfunction. This retrospective study is presented in order to illustrate that it was both the projected speaking voice and the singing voice that proved so highly sensitive to the virilization effects. The implications for future prospective research studies and responsible clinical practice are discussed.

  20. Exploring the Complexities of Group Work in Science Class: a Cautionary tale of Voice and Equitable Access to Resources for Learning

    NASA Astrophysics Data System (ADS)

    Richmond, Gail

    The interactions of 2 focus students with others in their cooperative base groups were examined as the students designed, carried out, and interpreted scientific investigations. These 2 students differed with respect to race, gender, socioeconomic status, and academic achievement. They were alike in that both maintained high levels of motivation and interaction with the scientific problems they faced. Their group interactions were not entirely positive, and the difficulties and inequities they faced are described. The data make manifest that group work is a complex process; educators must be sensitive and responsive to the subtle ways understanding can be enhanced or undermined as a result of group dynamics, which are in turn determined by individual expectations - often unfounded - of others' capacities and behaviors, and perceptions of desired group and individual outcomes. These observations also have implications for how educators help prepare prospective teachers to develop effective pedagogical strategies for teaching diverse students.

  1. Modifying the verbal expression of a child with autistic behaviors.

    PubMed

    Hargrave, E; Swisher, L

    1975-06-01

    The Bell and Howell Language Master was used in conjunction with the Monterey Language Program to modify the verbal expression of a nine-year-old boy with autistic behaviors. The goal was to train the child to correctly name up to 10 pictures presented individually. Two training modes were used. For one, the therapist spoke at the time (live voice). For the other, she presented a tape recording of her voice via a Language Master. The results suggested that the child's responses to the Language Master were as good as, if not better than, his responses to the live-voice presentations. In addition, observation indicated that he responded more readily to the Language Master presentations. His spontaneous speech was also noted by independent observers to improve in his classroom and in his home. Possible reasons for the improvement in verbal expression are considered.

  2. Familiar Person Recognition: Is Autonoetic Consciousness More Likely to Accompany Face Recognition Than Voice Recognition?

    NASA Astrophysics Data System (ADS)

    Barsics, Catherine; Brédart, Serge

    2010-11-01

    Autonoetic consciousness is a fundamental property of human memory, enabling us to experience mental time travel, to recollect past events with a feeling of self-involvement, and to project ourselves in the future. Autonoetic consciousness is a characteristic of episodic memory. By contrast, awareness of the past associated with a mere feeling of familiarity or knowing relies on noetic consciousness, depending on semantic memory integrity. Present research was aimed at evaluating whether conscious recollection of episodic memories is more likely to occur following the recognition of a familiar face than following the recognition of a familiar voice. Recall of semantic information (biographical information) was also assessed. Previous studies that investigated the recall of biographical information following person recognition used faces and voices of famous people as stimuli. In this study, the participants were presented with personally familiar people's voices and faces, thus avoiding the presence of identity cues in the spoken extracts and allowing a stricter control of frequency exposure with both types of stimuli (voices and faces). In the present study, the rate of retrieved episodic memories, associated with autonoetic awareness, was significantly higher from familiar faces than familiar voices even though the level of overall recognition was similar for both these stimuli domains. The same pattern was observed regarding semantic information retrieval. These results and their implications for current Interactive Activation and Competition person recognition models are discussed.

  3. Electrophysiological and hemodynamic mismatch responses in rats listening to human speech syllables.

    PubMed

    Mahmoudzadeh, Mahdi; Dehaene-Lambertz, Ghislaine; Wallois, Fabrice

    2017-01-01

    Speech is a complex auditory stimulus which is processed according to several time-scales. Whereas consonant discrimination is required to resolve rapid acoustic events, voice perception relies on slower cues. Humans, right from preterm ages, are particularly efficient at encoding temporal cues. To compare the capacities of preterm infants to those observed in other mammals, we tested anesthetized adult rats by using exactly the same paradigm as that used in preterm neonates. We simultaneously recorded neural (using ECoG) and hemodynamic responses (using fNIRS) to series of human speech syllables and investigated the brain response to a change of consonant (ba vs. ga) and to a change of voice (male vs. female). Both methods revealed concordant results, although ECoG measures were more sensitive than fNIRS. Responses to syllables were bilateral, but with marked right-hemispheric lateralization. Responses to voice changes were observed with both methods, while only ECoG was sensitive to consonant changes. These results suggest that rats more effectively processed the speech envelope than fine temporal cues, in contrast with human preterm neonates, in whom the opposite effects were observed. Cross-species comparisons constitute a very valuable tool to define the singularities of the human brain and species-specific bias that may help human infants to learn their native language.

  4. An effect of loudness of advisory speech on a choice response task

    NASA Astrophysics Data System (ADS)

    Utsuki, Narisuke; Takeuchi, Yoshinori; Nomiyama, Takenori

    1995-03-01

    Recent technologies have realized talking advisory/guidance systems in which machines give advice and guidance to operators in speech. However, nonverbal aspects of spoken messages may have significant effects on an operator's behavior. Twelve subjects participated in a TV game-like choice response task where they were asked to choose a 'true' target from three invader-like figures displayed on a CRT screen. Before each choice, the subjects received prerecorded advice designating the left, center, or right target as the one that would be true. The position of the 'true' targets and the advice were preprogrammed in pseudorandom sequences. In other words, there was no way for the subjects to predict the 'true' target, and there was no relationship between the spoken advice and the true target position. The subjects tended to make more choices corresponding to the presented messages when the messages were presented in a louder voice than in a softer voice. Choice response time was significantly shorter when the response was the same as the advice indicated. The shortening of response time was slightly greater when advice was presented in a louder voice. This study demonstrates that spoken advice may result in faster and less deliberate responses in accordance with the messages given by talking guidance systems.

  5. Student Voice and the Perils of Popularity

    ERIC Educational Resources Information Center

    Rudduck, Jean; Fielding, Michael

    2006-01-01

    In this article we suggest that the current popularity of student voice can lead to surface compliance--to a quick response that focuses on "how to do it" rather than a reflective review of "why we might want to do it". We look at the links between student consultation and participation and the legacy of the progressive democratic tradition in our…

  6. "In Charge of the Truffula Seeds": On Children's Literature, Rationality and Children's Voices in Philosophy

    ERIC Educational Resources Information Center

    Johansson, Viktor

    2011-01-01

    In this paper I investigate how philosophy can speak for children and how children can have a voice in philosophy and speak for philosophy. I argue that we should understand children as responsible rational individuals who are involved in their own philosophical inquiries and who can be involved in our own philosophical investigations--not because…

  7. Influence of Self-generated Anchors on the Voice Handicap Index-10 (VHI-10).

    PubMed

    Canals-Fortuny, Elisabet; Vila-Rovira, Josep

    2017-03-01

    The aim of this research was to study whether seeing the Voice Handicap Index-10 responses given at the beginning of treatment influences the responses given at the end of treatment. The questionnaire was administered at the beginning of the treatment to a total of 308 patients. After the treatment, a group of 235 patients answered the questionnaire again without any reference to their responses on the initial administration. The other group of participants, consisting of 73 subjects, completed the questionnaire with the answer sheet of their initial self-assessment in sight. The data obtained show that patients who responded to the anchored answer test showed less dispersion and a smaller coefficient of variation (0.90) than those who responded to the nonanchored answer test (coefficient of variation = 1.66). The method of administration of the Voice Handicap Index-10 at the end of a treatment influences the dispersion of the results. We recommend that the patient be anchored to the initial answer sheet while responding to the final self-assessment. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
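
    The dispersion statistic reported above is the coefficient of variation (standard deviation divided by the mean). A minimal sketch, using invented VHI-10 totals rather than the study's data:

      import numpy as np

      def coefficient_of_variation(scores):
          """Standard deviation divided by the mean of the scores."""
          scores = np.asarray(scores, dtype=float)
          return scores.std(ddof=1) / scores.mean()

      # Hypothetical post-treatment VHI-10 totals for two groups of patients.
      anchored    = [4, 6, 5, 7, 6, 5, 8, 6]      # saw their initial answers
      nonanchored = [2, 12, 5, 18, 7, 1, 15, 9]   # did not

      print("anchored CV   :", round(coefficient_of_variation(anchored), 2))
      print("nonanchored CV:", round(coefficient_of_variation(nonanchored), 2))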

  8. Spasmodic dysphonia follow-up with videolaryngoscopy and voice spectrography during treatment with botulinum toxin.

    PubMed

    Esposito, Marcello; Dubbioso, R; Apisa, P; Allocca, R; Santoro, L; Cesari, U

    2015-09-01

    Spasmodic dysphonia (SD) is a focal dystonia of laryngeal muscles that seriously impairs quality of voice. Adductor SD (ADSD) is the most common presentation of this disorder, which can be identified by specialized phoniatricians and neurologists first on clinical evaluation and then confirmed by videolaryngoscopy (VL). Botulinum toxin (BTX) injection with electromyographic guidance into muscles around the vocal cords is the most effective treatment. The Voice Handicap Index (VHI) questionnaire is the main tool to assess dysphonia and response to treatment. The objective of this study was to perform VL and voice spectrography (VS) to confirm the efficacy of BTX injections over time. Thirteen patients with ADSD were studied with VHI, VL and VS before and after 4 consecutive treatments with onabotulinumtoxinA. For each treatment, vocal improvement was demonstrated by a significant reduction of the VHI score and an increase in maximum phonation time and harmonic-to-noise ratio, while VL showed the absence of spasm in most patients. No change in the response to BTX was found between injections. This study supports the efficacy of the treatment of SD with BTX with objective measurements and suggests that the efficacy of recurring treatments is stable over time.

  9. How silent is silent reading? Intracerebral evidence for top-down activation of temporal voice areas during reading.

    PubMed

    Perrone-Bertolotti, Marcela; Kujala, Jan; Vidal, Juan R; Hamame, Carlos M; Ossandon, Tomas; Bertrand, Olivier; Minotti, Lorella; Kahane, Philippe; Jerbi, Karim; Lachaux, Jean-Philippe

    2012-12-05

    As you might experience it while reading this sentence, silent reading often involves an imagery speech component: we can hear our own "inner voice" pronouncing words mentally. Recent functional magnetic resonance imaging studies have associated that component with increased metabolic activity in the auditory cortex, including voice-selective areas. It remains to be determined, however, whether this activation arises automatically from early bottom-up visual inputs or whether it depends on late top-down control processes modulated by task demands. To answer this question, we collaborated with four epileptic human patients recorded with intracranial electrodes in the auditory cortex for therapeutic purposes, and measured high-frequency (50-150 Hz) "gamma" activity as a proxy of population level spiking activity. Temporal voice-selective areas (TVAs) were identified with an auditory localizer task and monitored as participants viewed words flashed on screen. We compared neural responses depending on whether words were attended or ignored and found a significant increase of neural activity in response to words, strongly enhanced by attention. In one of the patients, we could record that response at 800 ms in TVAs, but also at 700 ms in the primary auditory cortex and at 300 ms in the ventral occipital temporal cortex. Furthermore, single-trial analysis revealed a considerable jitter between activation peaks in visual and auditory cortices. Altogether, our results demonstrate that the multimodal mental experience of reading is in fact a heterogeneous complex of asynchronous neural responses, and that auditory and visual modalities often process distinct temporal frames of our environment at the same time.
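
    The high-frequency "gamma" proxy described above is commonly obtained by band-pass filtering the intracranial signal between 50 and 150 Hz and taking its amplitude envelope. The sketch below illustrates that generic pipeline on synthetic data; it is not the authors' analysis code, and the sampling rate is assumed.

      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      fs = 1000.0                                  # assumed sampling rate (Hz)
      t = np.arange(0, 2.0, 1.0 / fs)

      # Synthetic trace: a slow rhythm plus a burst of 80 Hz activity after 1 s.
      signal = np.sin(2 * np.pi * 10 * t)
      signal += (t > 1.0) * 0.5 * np.sin(2 * np.pi * 80 * t)
      signal += 0.1 * np.random.randn(t.size)

      # Band-pass 50-150 Hz (zero-phase), then amplitude envelope via Hilbert transform.
      b, a = butter(4, [50.0, 150.0], btype="bandpass", fs=fs)
      gamma = filtfilt(b, a, signal)
      envelope = np.abs(hilbert(gamma))

      print("mean gamma envelope before vs. after 1 s:",
            envelope[t < 1.0].mean().round(3), envelope[t > 1.0].mean().round(3))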

  10. Acoustic Measures of Voice and Physiologic Measures of Autonomic Arousal during Speech as a Function of Cognitive Load.

    PubMed

    MacPherson, Megan K; Abur, Defne; Stepp, Cara E

    2017-07-01

    This study aimed to determine the relationship among cognitive load condition and measures of autonomic arousal and voice production in healthy adults. A prospective study design was conducted. Sixteen healthy young adults (eight men, eight women) produced a sentence containing an embedded Stroop task in each of two cognitive load conditions: congruent and incongruent. In both conditions, participants said the font color of the color words instead of the word text. In the incongruent condition, font color differed from the word text, creating an increase in cognitive load relative to the congruent condition in which font color and word text matched. Three physiologic measures of autonomic arousal (pulse volume amplitude, pulse period, and skin conductance response amplitude) and four acoustic measures of voice (sound pressure level, fundamental frequency, cepstral peak prominence, and low-to-high spectral energy ratio) were analyzed for eight sentence productions in each cognitive load condition per participant. A logistic regression model was constructed to predict the cognitive load condition (congruent or incongruent) using subject as a categorical predictor and the three autonomic measures and four acoustic measures as continuous predictors. It revealed that skin conductance response amplitude, cepstral peak prominence, and low-to-high spectral energy ratio were significantly associated with cognitive load condition. During speech produced under increased cognitive load, healthy young adults show changes in physiologic markers of heightened autonomic arousal and acoustic measures of voice quality. Future work is necessary to examine these measures in older adults and individuals with voice disorders. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
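
    A generic version of the statistical model described above (logistic regression predicting condition from continuous predictors) can be sketched as follows. The feature names follow the abstract, but the simulated data, the omission of the subject term, and the use of scikit-learn are assumptions made for the example.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)

      # Hypothetical per-sentence measures: skin conductance response amplitude,
      # cepstral peak prominence, low-to-high spectral energy ratio.
      n = 128
      X = rng.normal(size=(n, 3))
      # Invented relationship: higher SCR amplitude and lower CPP in the
      # incongruent (high-load) condition.
      logits = 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.1 * X[:, 2]
      y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)  # 1 = incongruent

      model = make_pipeline(StandardScaler(), LogisticRegression())
      model.fit(X, y)
      coefs = model.named_steps["logisticregression"].coef_[0]
      print("standardized coefficients:", coefs.round(2))
      print("training accuracy:", model.score(X, y).round(2))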

  11. Forms of Mediation: The Case of Interpreter-Mediated Interactions in Medical Systems

    ERIC Educational Resources Information Center

    Baraldi, Claudio

    2009-01-01

    This paper analyses the forms of mediation in interlinguistic interactions performed in Italian healthcare services and in contexts of migration. The literature encourages dialogic transformative mediation, empowering participants' voices and changing cultural presuppositions in social systems. It may be doubtful, however, whether mediation can…

  12. The Army word recognition system

    NASA Technical Reports Server (NTRS)

    Hadden, David R.; Haratz, David

    1977-01-01

    The application of speech recognition technology in the Army command and control area is presented. The problems associated with this program are described, as well as its relevance in terms of man/machine interaction, voice inflections, and the amount of training needed to interact with and utilize the automated system.

  13. Culture modulates the brain response to human expressions of emotion: electrophysiological evidence.

    PubMed

    Liu, Pan; Rigoulot, Simon; Pell, Marc D

    2015-01-01

    To understand how culture modulates on-line neural responses to social information, this study compared how individuals from two distinct cultural groups, English-speaking North Americans and Chinese, process emotional meanings of multi-sensory stimuli as indexed by both behaviour (accuracy) and event-related potential (N400) measures. In an emotional Stroop-like task, participants were presented face-voice pairs expressing congruent or incongruent emotions in conditions where they judged the emotion of one modality while ignoring the other (face or voice focus task). Results indicated that while both groups were sensitive to emotional differences between channels (with lower accuracy and higher N400 amplitudes for incongruent face-voice pairs), there were marked group differences in how intruding facial or vocal cues affected accuracy and N400 amplitudes, with English participants showing greater interference from irrelevant faces than Chinese. Our data illuminate distinct biases in how adults from East Asian versus Western cultures process socio-emotional cues, supplying new evidence that cultural learning modulates not only behaviour, but the neurocognitive response to different features of multi-channel emotion expressions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Responsive consumerism: empowerment in markets for health plans.

    PubMed

    Elbel, Brian; Schlesinger, Mark

    2009-09-01

    American health policy is increasingly relying on consumerism to improve its performance. This article examines a neglected aspect of medical consumerism: the extent to which consumers respond to problems with their health plans. Using a telephone survey of five thousand consumers conducted in 2002, this article assesses how frequently consumers voice formal grievances or exit from their health plan in response to problems of differing severity. This article also examines the potential impact of this responsiveness on both individuals and the market. In addition, using cross-group comparisons of means and regressions, it looks at how the responses of "empowered" consumers compared with those who are "less empowered." The vast majority of consumers do not formally voice their complaints or exit health plans, even in response to problems with significant consequences. "Empowered" consumers are only minimally more likely to formally voice and no more likely to leave their plan. Moreover, given the greater prevalence of trivial problems, consumers are much more likely to complain or leave their plans because of problems that are not severe. Greater empowerment does not alleviate this. While much of the attention on consumerism has focused on prospective choice, understanding how consumers respond to problems is equally, if not more, important. Relying on consumers' responses as a means to protect individual consumers or influence the market for health plans is unlikely to be successful in its current form.

  15. Stretchable Loudspeaker using Liquid Metal Microchannel

    PubMed Central

    Jin, Sang Woo; Park, Jeongwon; Hong, Soo Yeong; Park, Heun; Jeong, Yu Ra; Park, Junhong; Lee, Sang-Soo; Ha, Jeong Sook

    2015-01-01

    Considering the various applications of wearable and bio-implantable devices, it is desirable to realize stretchable acoustic devices for body-attached applications such as sensing biological signals, hearing aids, and notification of information via sound. In this study, we demonstrate the facile fabrication of a Stretchable Acoustic Device (SAD) using liquid metal coil of Galinstan where the SAD is operated by the electromagnetic interaction between the liquid metal coil and a Neodymium (Nd) magnet. To fabricate a liquid metal coil, Galinstan was injected into a micro-patterned elastomer channel. This fabricated SAD was operated simultaneously as a loudspeaker and a microphone. Measurements of the frequency response confirmed that the SAD was mechanically stable under both 50% uniaxial and 30% biaxial strains. Furthermore, 2000 repetitive applications of a 50% uniaxial strain did not induce any noticeable degradation of the sound pressure. Both voice and the beeping sound of an alarm clock were successfully recorded and played back through our SAD while it was attached to the wrist under repeated deformation. These results demonstrate the high potential of the fabricated SAD using Galinstan voice coil in various research fields including stretchable, wearable, and bio-implantable acoustic devices. PMID:26181209

  16. Stretchable Loudspeaker using Liquid Metal Microchannel

    NASA Astrophysics Data System (ADS)

    Jin, Sang Woo; Park, Jeongwon; Hong, Soo Yeong; Park, Heun; Jeong, Yu Ra; Park, Junhong; Lee, Sang-Soo; Ha, Jeong Sook

    2015-07-01

    Considering the various applications of wearable and bio-implantable devices, it is desirable to realize stretchable acoustic devices for body-attached applications such as sensing biological signals, hearing aids, and notification of information via sound. In this study, we demonstrate the facile fabrication of a Stretchable Acoustic Device (SAD) using liquid metal coil of Galinstan where the SAD is operated by the electromagnetic interaction between the liquid metal coil and a Neodymium (Nd) magnet. To fabricate a liquid metal coil, Galinstan was injected into a micro-patterned elastomer channel. This fabricated SAD was operated simultaneously as a loudspeaker and a microphone. Measurements of the frequency response confirmed that the SAD was mechanically stable under both 50% uniaxial and 30% biaxial strains. Furthermore, 2000 repetitive applications of a 50% uniaxial strain did not induce any noticeable degradation of the sound pressure. Both voice and the beeping sound of an alarm clock were successfully recorded and played back through our SAD while it was attached to the wrist under repeated deformation. These results demonstrate the high potential of the fabricated SAD using Galinstan voice coil in various research fields including stretchable, wearable, and bio-implantable acoustic devices.

  17. Unsteady flow motions in the supraglottal region during phonation

    NASA Astrophysics Data System (ADS)

    Luo, Haoxiang; Dai, Hu

    2008-11-01

    The highly unsteady flow motions in the larynx are not only responsible for producing the fundamental frequency tone in phonation, but also have a significant contribution to the broadband noise in the human voice. In this work, the laryngeal flow is modeled either as an incompressible pulsatile jet confined in a two-dimensional channel, or a pressure-driven flow modulated by a pair of viscoelastic vocal folds through the flow--structure interaction. The flow in the supraglottal region is found to be dominated by large-scale vortices whose unsteady motions significantly deflect the glottal jet. In the flow--structure interaction, a hybrid model based on the immersed-boundary method is developed to simulate the flow-induced vocal fold vibration, which involves a three-dimensional vocal fold prototype and a two-dimensional viscous flow. Both the flow behavior and the vibratory characteristics of the vocal folds will be presented.

  18. An open-label study of sodium oxybate (Xyrem®) in spasmodic dysphonia

    PubMed Central

    Rumbach, Anna F.; Blitzer, Andrew; Frucht, Steven J.; Simonyan, Kristina

    2016-01-01

    Objective Spasmodic dysphonia (SD) is a task-specific laryngeal dystonia that affects speech production. Co-occurring voice tremor (VT) often complicates the diagnosis and clinical management of SD. Treatment of SD and VT is largely limited to botulinum toxin injections into laryngeal musculature; other pharmacological options are not sufficiently developed. Study Design and Methods We conducted an open-label study in 23 SD and 22 SD/VT patients to examine the effects of sodium oxybate (Xyrem®), an oral agent with therapeutic effects similar to those of alcohol in these patients. Blinded randomized analysis of voice and speech samples assessed symptom improvement before and after drug administration. Results Sodium oxybate significantly improved voice symptoms (p = 0.001) primarily by reducing the number of SD-characteristic voice breaks and severity of VT. Sodium oxybate further showed a trend for improving VT symptoms (p = 0.03) in a subset of patients who received successful botulinum toxin injections for the management of their SD symptoms. The drug’s effects were observed approximately 30–40 min after its intake and lasted about 3.5–4 hours. Conclusion Our study demonstrated that sodium oxybate reduced voice symptoms in 82.2% of alcohol-responsive SD patients both with and without co-occurring VT. Our findings suggest that the therapeutic mechanism of sodium oxybate in SD and SD/VT may be linked to that of alcohol and as such sodium oxybate might be beneficial for alcohol-responsive SD and SD/VT patients. PMID:27808415

  19. The Diabetes Telephone Study: Design and challenges of a pragmatic cluster randomized trial to improve diabetic peripheral neuropathy treatment.

    PubMed

    Adams, Alyce S; Bayliss, Elizabeth A; Schmittdiel, Julie A; Altschuler, Andrea; Dyer, Wendy; Neugebauer, Romain; Jaffe, Marc; Young, Joseph D; Kim, Eileen; Grant, Richard W

    2016-06-01

    Challenges to effective pharmacologic management of symptomatic diabetic peripheral neuropathy include the limited effectiveness of available medicines, frequent side effects, and the need for ongoing symptom assessment and treatment titration for maximal effectiveness. We present here the rationale and implementation challenges of the Diabetes Telephone Study, a randomized trial designed to improve medication treatment, titration, and quality of life among patients with symptomatic diabetic peripheral neuropathy. We implemented a pragmatic cluster randomized controlled trial to test the effectiveness of an automated interactive voice response tool designed to provide physicians with real-time patient-reported data about responses to newly prescribed diabetic peripheral neuropathy medicines. A total of 1834 primary care physicians treating patients in the diabetes registry at Kaiser Permanente Northern California were randomized into the intervention or control arm. In September 2014, we began identification and recruitment of patients assigned to physicians in the intervention group who receive three brief interactive calls every 2 months after a medication is prescribed to alleviate diabetic peripheral neuropathy symptoms. These calls provide patients with the opportunity to report on symptoms, side effects, self-titration of medication dose and overall satisfaction with treatment. We plan to compare changes in self-reported quality of life between the intervention group and patients in the control group who receive three non-interactive automated educational phone calls. Successful implementation of this clinical trial required robust stakeholder engagement to help tailor the intervention and to address pragmatic concerns such as provider time constraints. As of 27 October 2015, we had screened 2078 patients, 1447 of whom were eligible for participation. We consented and enrolled 1206 or 83% of those eligible. Among those enrolled, 53% are women and the mean age is 67 (standard deviation = 12) years. The racial ethnic make-up is 56% White, 8% Asian, 13% Black or African American, and 19% Hispanic or Latino. Innovative strategies are needed to guide improvements in healthcare delivery for patients with symptomatic diabetic peripheral neuropathy. This trial aims to assess whether real-time collection and clinical feedback of patient treatment experiences can reduce patient symptom burden. Implementation of a clinical trial closely involving clinical care required researchers to partner with clinicians. If successful, this intervention provides a critical information feedback loop that would optimize diabetic peripheral neuropathy medication titration through widely available interactive voice response technology. © The Author(s) 2016.
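
    The interactive calls described above follow the usual IVR pattern of playing a prompt and branching on the keypad response. The sketch below is a plain-Python state-machine illustration of such a symptom-survey flow; the prompts, response codes, and three-question structure are invented, not the trial's actual script or telephony platform.

      # Minimal sketch of an IVR-style symptom survey: each step plays a prompt
      # and interprets a single keypad digit. Prompts and branching are hypothetical.

      PROMPTS = {
          "pain":    "On a scale of 1 (none) to 5 (severe), rate your pain today.",
          "side_fx": "Press 1 if you had side effects since starting the medicine, 2 if not.",
          "dose":    "Press 1 if you changed your dose yourself, 2 if not.",
      }

      def ask(step, get_digit):
          """Play a prompt (here: print it) and return the caller's keypad digit."""
          print(PROMPTS[step])
          return int(get_digit())

      def run_survey(get_digit):
          """Walk the caller through the three questions and flag follow-up needs."""
          answers = {step: ask(step, get_digit) for step in ("pain", "side_fx", "dose")}
          answers["needs_clinician_review"] = (
              answers["pain"] >= 4 or answers["side_fx"] == 1 or answers["dose"] == 1
          )
          return answers

      if __name__ == "__main__":
          # Simulate a caller's keypresses instead of real telephony input.
          simulated_keys = iter("412")
          print(run_survey(lambda: next(simulated_keys)))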

  20. Effects of voice style, noise level, and acoustic feedback on objective and subjective voice evaluations

    PubMed Central

    Bottalico, Pasquale; Graetzer, Simone; Hunter, Eric J.

    2015-01-01

    Speakers adjust their vocal effort when communicating in different room acoustic and noise conditions and when instructed to speak at different volumes. The present paper reports on the effects of voice style, noise level, and acoustic feedback on vocal effort, evaluated as sound pressure level, and self-reported vocal fatigue, comfort, and control. Speakers increased their level in the presence of babble and when instructed to talk in a loud style, and lowered it when acoustic feedback was increased and when talking in a soft style. Self-reported responses indicated a preference for the normal style without babble noise. PMID:26723357
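
    Vocal effort evaluated as sound pressure level, as above, is computed from the RMS sound pressure relative to the 20 µPa reference. A minimal sketch on a synthetic tone, with calibration and recording details assumed away:

      import numpy as np

      P_REF = 20e-6  # reference sound pressure in pascals

      def spl_db(pressure_pa):
          """Sound pressure level in dB re 20 uPa from a calibrated pressure trace."""
          rms = np.sqrt(np.mean(np.square(pressure_pa)))
          return 20.0 * np.log10(rms / P_REF)

      # Synthetic 1 kHz tone with a 0.2 Pa amplitude.
      fs = 16000
      t = np.arange(0, 0.5, 1.0 / fs)
      tone = 0.2 * np.sin(2 * np.pi * 1000 * t)
      print(f"SPL = {spl_db(tone):.1f} dB")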

  1. Internet-Based System for Voice Communication With the ISS

    NASA Technical Reports Server (NTRS)

    Chamberlain, James; Myers, Gerry; Clem, David; Speir, Terri

    2005-01-01

    The Internet Voice Distribution System (IVoDS) is a voice-communication system that comprises mainly computer hardware and software. The IVoDS was developed to supplement and eventually replace the Enhanced Voice Distribution System (EVoDS), which, heretofore, has constituted the terrestrial subsystem of a system for voice communications among crewmembers of the International Space Station (ISS), workers at the Payloads Operations Center at Marshall Space Flight Center, principal investigators at diverse locations who are responsible for specific payloads, and others. The IVoDS utilizes a communication infrastructure of NASA and NASA-related intranets in addition to, as its name suggests, the Internet. Whereas the EVoDS utilizes traditional circuit-switched telephony, the IVoDS is a packet-data system that utilizes a voice over Internet protocol (VOIP). Relative to the EVoDS, the IVoDS offers advantages of greater flexibility and lower cost for expansion and reconfiguration. The IVoDS is an extended version of a commercial Internet-based voice conferencing system that enables each user to participate in only one conference at a time. In the IVoDS, a user can receive audio from as many as eight conferences simultaneously while sending audio to one of them. The IVoDS also incorporates administrative controls, beyond those of the commercial system, that provide greater security and control of the capabilities and authorizations for talking and listening afforded to each user.
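
    Receiving audio from several conferences at once, as described above, amounts to summing the selected streams before playback. The sketch below shows only that mixing step on synthetic PCM frames; it is not IVoDS code, and the frame format is an assumption.

      import numpy as np

      def mix_conferences(frames):
          """Sum several 16-bit PCM frames of equal length and clip to int16 range."""
          mixed = np.sum([f.astype(np.int32) for f in frames], axis=0)
          return np.clip(mixed, -32768, 32767).astype(np.int16)

      # Three synthetic 20 ms frames at 8 kHz standing in for three conferences.
      fs, n = 8000, 160
      t = np.arange(n) / fs
      frames = [
          (8000 * np.sin(2 * np.pi * f * t)).astype(np.int16) for f in (200, 350, 500)
      ]
      out = mix_conferences(frames)
      print(out.dtype, out.shape, int(out.max()))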

  2. Working Conditions and Workplace Barriers to Vocal Health in Primary School Teachers.

    PubMed

    Munier, Caitriona; Farrell, Rory

    2016-01-01

    The purpose of this study was to identify the working conditions and workplace barriers to vocal health in primary school teachers. The relationship between working conditions and voice is analyzed. This is a survey study in 42 randomized schools from a restricted geographical area. An 85-item questionnaire was administered to 550 primary school teachers in 42 schools in Dublin. It was designed to obtain information on demographics, vocal use patterns, vocal health, work organization, working conditions, and teachers' perceptions of the conditions in teaching that might cause a voice problem. The relationship between voice and overstretched work demands, and voice and class size, was examined. Chi-squared tests were run to test the null hypotheses that voice problems are independent of overstretched work demands and of class size. Subjects were given the opportunity to give their opinion on their working conditions and on the availability of advice and support within the workplace. A final question sought their opinion on what should be included in a voice care program. A 55% response rate was obtained (n = 304). It was found with 96.52% confidence that the variables overstretched work demands and voice are related. Likewise, it was found that the variables class size and voice are related with 99.97% confidence. There are workplace barriers to vocal health. The working conditions of primary school teachers need to be fully adapted to promote vocal health. Changes by education and health policy makers are needed to achieve this goal. There is a need for future research which focuses on the working conditions of teachers. Copyright © 2016. Published by Elsevier Inc.
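
    The chi-squared test of independence used above can be illustrated generically; the 2x2 counts below are invented, not the survey's data.

      from scipy.stats import chi2_contingency

      # Hypothetical 2x2 table: rows = overstretched work demands (no / yes),
      # columns = voice problem reported (no / yes).
      table = [[90, 40],
               [55, 119]]

      chi2, p, dof, expected = chi2_contingency(table)
      print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
      # A small p-value would lead us to reject independence, i.e., to conclude
      # that work demands and voice problems are related.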

  3. Influence of Noise Resulting From the Location and Conditions of Classrooms and Schools in Upper Egypt on Teachers' Voices.

    PubMed

    Phadke, Ketaki Vasant; Abo-Hasseba, Ahmed; Švec, Jan G; Geneid, Ahmed

    2018-05-03

    Teachers are professional voice users, always at high risk of developing voice disorders due to high vocal demand and unfavorable environmental conditions. This study aimed at identifying possible correlations between teachers' voice symptoms and their perception of noise, the location of schools, as well as the location and conditions of their classrooms. One hundred forty teachers (ages 21-56) from schools in Upper Egypt participated in this study. They filled out a questionnaire including questions about the severity and frequency of their voice symptoms, noise perception, and the location and conditions of their schools and classrooms. Questionnaire responses were statistically analyzed to identify possible correlations. There were significant correlations (P < 0.05) between voice symptoms, teachers' noise perception, and noise resulting from the location and conditions of schools and classrooms. Teachers experienced severe dysphonia, neck pain, and increased vocal effort with weekly or daily recurrence. Among the teachers who participated in the study, 24.2% felt they were always in a noisy environment, with 51.4% of the total participants reporting having to raise their voices. The most common sources of noise were from student activities and talking in the teachers' own classrooms (61.4%), noise from adjacent classrooms (52.9%), and road traffic (40.7%). Adverse effect on teachers' voices due to noise from poor school and classroom conditions necessitates solutions for the future improvement of conditions in Egyptian schools. This study may help future studies that focus on developing guidelines for the better planning of Egyptian schools in terms of improved infrastructure and architecture, thus considering the general and vocal health of teachers. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  4. Listening to Children's Voices: Literature and the Arts as Means of Responding to the Effects of War, Terrorism, and Disaster

    ERIC Educational Resources Information Center

    Gangi, Jane M.; Barowsky, Ellis

    2009-01-01

    More and more children are forced to deal with crushing hardships. The responsibilities of adults worldwide to attend to the affected children have never been greater. In this article, the authors first give an overview of the psychological risks for children who experience war, terrorism, and disaster. They then listen to the voices of children…

  5. Training to Use Voice Onset Time as a Cue to Talker Identification Induces a Left-Ear/Right-Hemisphere Processing Advantage

    ERIC Educational Resources Information Center

    Francis, Alexander L.; Driscoll, Courtney

    2006-01-01

    We examined the effect of perceptual training on a well-established hemispheric asymmetry in speech processing. Eighteen listeners were trained to use a within-category difference in voice onset time (VOT) to cue talker identity. Successful learners (n = 8) showed faster response times for stimuli presented only to the left ear than for those…

  6. An innovative multimodal virtual platform for communication with devices in a natural way

    NASA Astrophysics Data System (ADS)

    Kinkar, Chhayarani R.; Golash, Richa; Upadhyay, Akhilesh R.

    2012-03-01

    As technology advances, people are increasingly interested in communicating with machines and computers in a natural way. This makes devices more compact and portable by avoiding remotes, keyboards, and similar peripherals, and it can help users reduce their exposure to the electromagnetic emissions of such devices. This trend has made recognition of natural modalities in human-computer interaction a most appealing and promising research field. At the same time, it has been observed that using a single mode of interaction limits the full utilization of commands as well as data flow. In this paper a multimodal platform is proposed in which, out of many natural modalities such as eye gaze, speech, voice, and face, human gestures are combined with human voice in order to minimize the mean square error. This loosens the strict environment needed for accurate and robust interaction when a single mode is used. Gestures complement speech: gestures are ideal for direct object manipulation, while natural language is suited to descriptive tasks. Human-computer interaction basically requires two broad stages, recognition and interpretation. Recognition and interpretation of natural modalities in complex binary instructions is a difficult task, as it integrates the real world with a virtual environment. The main idea of the paper is to develop an efficient model for fusing data coming from heterogeneous sensors, namely a camera and a microphone. Through this paper we have analyzed that efficiency is increased if heterogeneous data (image and voice) are combined at the feature level using artificial intelligence. The long-term goal of this work is to design a robust system for users who are physically impaired or have limited technical knowledge.
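
    A bare-bones illustration of feature-level fusion of the two sensor streams (image-derived gesture features and voice features) is sketched below; the feature dimensions, the random data, and the choice of a scikit-learn classifier are all assumptions made for the example, not the paper's model.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(1)

      # Hypothetical per-sample feature vectors from the two sensors.
      n = 200
      gesture_features = rng.normal(size=(n, 8))    # e.g., hand-shape descriptors
      voice_features   = rng.normal(size=(n, 5))    # e.g., MFCC summaries
      labels = (gesture_features[:, 0] + voice_features[:, 0] > 0).astype(int)

      # Feature-level fusion: concatenate the modalities before classification.
      fused = np.hstack([gesture_features, voice_features])

      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      clf.fit(fused, labels)
      print("training accuracy:", round(clf.score(fused, labels), 2))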

  7. Characteristics of physicians targeted by the pharmaceutical industry to participate in e-detailing.

    PubMed

    Alkhateeb, Fadi M; Khanfar, Nile M; Doucette, William R; Loudon, David

    2009-01-01

    Electronic detailing (e-detailing) has been introduced in the last few years by the pharmaceutical industry as a new communication channel through which to promote pharmaceutical products to physicians. E-detailing involves using digital technology, such as the Internet, video conferencing, and interactive voice response, by which drug companies target their marketing efforts toward specific physicians with pinpoint accuracy. A mail survey of 671 Iowa physicians was used to gather information about the physician characteristics and practice setting characteristics of those who are usually targeted by pharmaceutical companies to participate in e-detailing. A model is developed and tested to explain firms' strategy for targeting physicians for e-detailing.

  8. VoiceThread as a Peer Review and Dissemination Tool for Undergraduate Research

    NASA Astrophysics Data System (ADS)

    Guertin, L. A.

    2012-12-01

    VoiceThread has been utilized in an undergraduate research methods course for peer review and final research project dissemination. VoiceThread (http://www.voicethread.com) can be considered a social media tool, as it is a web-based technology with the capacity to enable interactive dialogue. VoiceThread is an application that allows a user to place a media collection online containing images, audio, videos, documents, and/or presentations in an interface that facilitates asynchronous communication. Participants in a VoiceThread can be passive viewers of the online content or engaged commenters via text, audio, video, with slide annotations via a doodle tool. The VoiceThread, which runs across browsers and operating systems, can be public or private for viewing and commenting and can be embedded into any website. Although few university students are aware of the VoiceThread platform (only 10% of the students surveyed by Ng (2012)), the 2009 K-12 edition of The Horizon Report (Johnson et al., 2009) lists VoiceThread as a tool to watch because of the opportunities it provides as a collaborative learning environment. In Fall 2011, eleven students enrolled in an undergraduate research methods course at Penn State Brandywine each conducted their own small-scale research project. Upon conclusion of the projects, students were required to create a poster summarizing their work for peer review. To facilitate the peer review process outside of class, each student-created PowerPoint file was placed in a VoiceThread with private access to only the class members and instructor. Each student was assigned to peer review five different student posters (i.e., VoiceThread images) with the audio and doodle tools to comment on formatting, clarity of content, etc. After the peer reviews were complete, the students were allowed to edit their PowerPoint poster files for a new VoiceThread. In the new VoiceThread, students were required to video record themselves describing their research and taking the viewer through their poster in the VoiceThread. This new VoiceThread with their final presentations was open for public viewing but not public commenting. A formal assessment was not conducted on the student impact of using VoiceThread for peer review and final research presentations. From an instructional standpoint, requiring students to use audio for the peer review commenting seemed to result in lengthier and more detailed reviews, connected with specific poster features when the doodle tool was utilized. By recording themselves as a "talking head" for the final product, students were required to be comfortable and confident with presenting their research, similar to what would be expected at a conference presentation. VoiceThread is currently being tested in general education Earth science courses at Penn State Brandywine as a dissemination tool for classroom-based inquiry projects and recruitment tool for Earth & Mineral Science majors.

  9. Successful mLearning Pilot in Senegal: Delivering Family Planning Refresher Training Using Interactive Voice Response and SMS

    PubMed Central

    Diedhiou, Abdoulaye; Gilroy, Kate E; Cox, Carie Muntifering; Duncan, Luke; Koumtingue, Djimadoum; Pacqué-Margolis, Sara; Fort, Alfredo; Settle, Dykki; Bailey, Rebecca

    2015-01-01

    Background: In-service training of health workers plays a pivotal role in improving service quality. However, it is often expensive and requires providers to leave their posts. We developed and assessed a prototype mLearning system that used interactive voice response (IVR) and text messaging on simple mobile phones to provide in-service training without interrupting health services. IVR allows trainees to respond to audio recordings using their telephone keypad. Methods: In 2013, the CapacityPlus project tested the mobile delivery of an 8-week refresher training course on management of contraceptive side effects and misconceptions to 20 public-sector nurses and midwives working in Mékhé and Tivaouane districts in the Thiès region of Senegal. The course used a spaced-education approach in which questions and detailed explanations are spaced and repeated over time. We assessed the feasibility through the system's administrative data, examined participants' experiences using an endline survey, and employed a pre- and post-test survey to assess changes in provider knowledge. Results: All participants completed the course within 9 weeks. The majority of participant prompts to interact with the mobile course were made outside normal working hours (median time, 5:16 pm); average call duration was about 13 minutes. Participants reported positive experiences: 60% liked the ability to determine the pace of the course and 55% liked the convenience. The largest criticism (35% of participants) was poor network reception, and 30% reported dropped IVR calls. Most (90%) participants thought they learned the same or more compared with a conventional course. Knowledge of contraceptive side effects increased significantly, from an average of 12.6/20 questions correct before training to 16.0/20 after, and remained significantly higher 10 months after the end of training than at baseline, at 14.8/20, without any further reinforcement. Conclusions: The mLearning system proved appropriate, feasible, and acceptable to trainees, and it was associated with sustained knowledge gains. IVR mLearning has potential to improve quality of care without disrupting routine service delivery. Monitoring and evaluation of larger-scale implementation could provide evidence of system effectiveness at scale. PMID:26085026
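
    To make the spaced-education idea described above concrete, the toy Python sketch below doubles the delay before a quiz item is re-asked after a correct answer and resets it after an incorrect one. The scheduling rule, function names, and data are illustrative assumptions only and are not taken from the CapacityPlus IVR system.

    ```python
    from datetime import date, timedelta

    def next_interval(interval_days: int, answered_correctly: bool) -> int:
        """Toy spacing rule: double the gap after a correct answer,
        fall back to one day after an incorrect one."""
        return interval_days * 2 if answered_correctly else 1

    # Example: one trainee answers the same IVR quiz item across several sessions.
    interval, due = 1, date.today()
    for correct in [True, True, False, True]:
        interval = next_interval(interval, correct)
        due = due + timedelta(days=interval)
        print(f"ask again on {due} (gap: {interval} day(s), last answer correct: {correct})")
    ```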

  10. Real-time interactive speech technology at Threshold Technology, Incorporated

    NASA Technical Reports Server (NTRS)

    Herscher, Marvin B.

    1977-01-01

    Basic real-time isolated-word recognition techniques are reviewed. Industrial applications of voice technology are described in chronological order of their development. Future research efforts are also discussed.

  11. Recorded maternal voice for preterm neonates undergoing heel lance.

    PubMed

    Johnston, C Celeste; Filion, Francoise; Nuyt, Anne Monique

    2007-10-01

    To determine if a recording of a mother's voice talking soothingly to her baby is useful in diminishing pain in newborns born between 32 and 36 weeks' gestational age (GA) during routine painful procedures. While maternal skin-to-skin contact has been proven efficacious for diminishing procedural pain in both full-term and preterm neonates, it is often not possible for mothers to be present during a painful procedure. Because auditory development occurs before the third trimester of gestation, it was hypothesized that maternal voice could substitute for maternal presence and be effective in diminishing pain response. Preterm infants between 32 and 36 weeks' GA (n = 20) in the first 10 days of life admitted to 2 urban university-affiliated neonatal intensive care units. Crossover design with random ordering of condition. Following informed consent, an audio recording of the mother talking soothingly to her baby was filtered to simulate the mother's voice traveling through amniotic fluid. A final 10-minute recording of the mother's talking, repeated, was made with maximum peaks of 70 decibels (dB) and played at levels ranging between 60 and 70 dBA, selected above the recommendations of the American Academy of Pediatrics in order to be heard over the high ambient noise in the settings. This was played to her infant via a portable cassette tape player 3 times daily during a 48-hour period after feedings (gavage, bottle, or breast). At the end of the 48 hours, when blood work was required for clinical purposes, the infant underwent heel lancing with or without the recording being played, according to the crossover design. The order of condition was randomized, and the second condition was within 10 days. The Premature Infant Pain Profile (PIPP) was used as the primary outcome. This is a composite measure using heart rate, oxygen saturation, 3 facial actions, behavioral state, and gestational age. This measure has demonstrated reliability and validity indexes. There were no significant differences between groups on the PIPP or any of the individual components of the PIPP except a lower oxygen saturation level in the voice condition following the procedure. The second condition, regardless of whether it was voice or control, had higher heart rate scores and lower oxygen saturation scores even in the prelance baseline and warming phases. Order did not affect PIPP scores or facial actions. Different modalities of maternal presence would appear to be necessary to blunt pain response in infants, and recorded maternal voice alone is not sufficient. The loudness of the recording may have obliterated the infant's ability to discern the mother's voice and may even have been aversive, reflected in decreased oxygen saturation levels in the voice condition. Preterm neonates of 32 to 36 weeks' gestation may become sensitized to painful experiences and show anticipatory physiological response.

  12. How do you say 'hello'? Personality impressions from brief novel voices.

    PubMed

    McAleer, Phil; Todorov, Alexander; Belin, Pascal

    2014-01-01

    On hearing a novel voice, listeners readily form personality impressions of that speaker. Accurate or not, these impressions are known to affect subsequent interactions; yet the underlying psychological and acoustical bases remain poorly understood. Furthermore, studies have hitherto focussed on extended speech rather than the instantaneous impressions we obtain on first exposure. In this paper, through a mass online rating experiment, 320 participants rated 64 sub-second vocal utterances of the word 'hello' on one of 10 personality traits. We show that: (1) personality judgements of brief utterances from unfamiliar speakers are consistent across listeners; (2) a two-dimensional 'social voice space' with axes mapping Valence (Trust, Likeability) and Dominance, each driven by differing combinations of vocal acoustics, adequately summarises ratings in both male and female voices; and (3) a positive combination of Valence and Dominance results in increased perceived male vocal Attractiveness, whereas perceived female vocal Attractiveness is largely controlled by increasing Valence. Results are discussed in relation to the rapid evaluation of personality and, in turn, the intent of others, as being driven by survival mechanisms via approach or avoidance behaviours. These findings provide empirical bases for predicting personality impressions from acoustical analyses of short utterances and for generating desired personality impressions in artificial voices.
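
    A low-dimensional "social voice space" of the kind described can be recovered from a voices-by-traits rating matrix with ordinary principal component analysis. The sketch below is a minimal illustration of that step only; the file name, trait set, and preprocessing are assumptions, not the authors' pipeline.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Hypothetical input: mean listener ratings, one row per voice,
    # one column per rated personality trait (trust, dominance, warmth, ...).
    ratings = np.load("hello_trait_ratings.npy")        # shape (n_voices, n_traits)
    ratings = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0)

    pca = PCA(n_components=2)
    voice_space = pca.fit_transform(ratings)            # one (x, y) point per voice
    print(pca.explained_variance_ratio_)                # variance captured by the two axes
    print(pca.components_)                              # trait loadings on each axis
    ```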

  13. Instrumental and perceptual evaluations of two related singers.

    PubMed

    Buder, Eugene H; Wolf, Teresa

    2003-06-01

    The primary goal of this study was to characterize a performer's singing and speaking voice. One woman was not admitted to a premier choral group, but her sister, who was comparable in physical characteristics and background, was admitted and provided a valuable control subject. The perceptual judgment of a vocal coach who conducted the group's auditions was decisive in discriminating these 2 singers. The singer not admitted to the group described a history of voice pathology, lacked a functional head register, and spoke with a voice characterized by hoarseness. Multiple listener judgments and acoustic and aerodynamic evaluations of both singers provided a more systematic basis for determining: 1) the phonatory basis for this judgment; 2) whether similar judgments would be made by groups of vocal coaches and speech-language pathologists; and 3) whether the type of tasks (e.g., sung vs. spoken) would influence these judgments. Statistically significant differences were observed between the ratings of vocal health provided by two different groups of listeners. Significant interactions were also observed as a function of the types of voice samples heard by these listeners. Instrumental analyses provided evidence that, in comparison to her sister, the rejected singer had a compromised vocal range, glottal insufficiencies as assessed aerodynamically and electroglottographically, and impaired acoustic quality, especially in her speaking voice.

  14. Middle Years Science Teachers Voice Their First Experiences with Interactive Whiteboard Technology

    ERIC Educational Resources Information Center

    Gadbois, Shannon A.; Haverstock, Nicole

    2012-01-01

    Among new technologies, interactive whiteboards (IWBs) particularly seem to engage students and offer entertainment value that may make them highly beneficial for learning. This study examined 10 Grade 6 teachers' initial experiences and uses of IWBs for teaching science. Through interviews, classroom visits, and field notes, the outcomes…

  15. Show and Tell: Video Modeling and Instruction Without Feedback Improves Performance but Is Not Sufficient for Retention of a Complex Voice Motor Skill.

    PubMed

    Look, Clarisse; McCabe, Patricia; Heard, Robert; Madill, Catherine J

    2018-02-02

    Modeling and instruction are frequent components of both traditional and technology-assisted voice therapy. This study investigated the value of video modeling and instruction in the early acquisition and short-term retention of a complex voice task without external feedback. Thirty participants were randomized to two conditions and trained to produce a vocal siren over 40 trials. One group received a model and verbal instructions; the other group received a model only. Sirens were analyzed for phonation time, vocal intensity, cepstral peak prominence, peak-to-peak time, and root-mean-square error at five time points. The model and instruction group showed significant improvement on more outcome measures than the model-only group. There was an interaction effect for vocal intensity, which showed that instructions facilitated greater improvement when they were first introduced. However, neither group reproduced the model's siren performance across all parameters or retained the skill 1 day later. Providing verbal instruction with a model appears more beneficial than providing a model only in the prepractice phase of acquiring a complex voice skill. Improved performance was observed; however, the higher level of performance achieved over the 40 trials was not retained in either condition. Other prepractice variables may need to be considered. Findings have implications for traditional and technology-assisted voice therapy. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
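
    One of the acoustic measures named above, cepstral peak prominence (CPP), is commonly estimated as the height of the cepstral peak above a regression line fitted to the cepstrum. The numpy sketch below follows that generic recipe under assumed parameter choices; it is a rough illustration, not the analysis used in this study.

    ```python
    import numpy as np

    def cepstral_peak_prominence(x, sr, f0_range=(60.0, 300.0)):
        """Rough CPP estimate for a sustained vowel: cepstral peak height
        above a straight line fitted over the searched quefrency range."""
        n = len(x)
        spectrum = np.abs(np.fft.fft(x * np.hanning(n)))
        log_spec = 20.0 * np.log10(spectrum + 1e-12)          # dB magnitude spectrum
        cepstrum = np.abs(np.fft.ifft(log_spec))              # real cepstrum (magnitude)
        quefrency = np.arange(n) / sr                         # seconds
        lo, hi = 1.0 / f0_range[1], 1.0 / f0_range[0]         # plausible pitch periods
        idx = np.where((quefrency >= lo) & (quefrency <= hi))[0]
        peak = idx[np.argmax(cepstrum[idx])]
        slope, intercept = np.polyfit(quefrency[idx], cepstrum[idx], 1)
        return cepstrum[peak] - (slope * quefrency[peak] + intercept)

    # Usage (hypothetical data): cpp = cepstral_peak_prominence(vowel_samples, 44100)
    ```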

  16. Developmental sex-specific change in auditory-vocal integration: ERP evidence in children.

    PubMed

    Liu, Peng; Chen, Zhaocong; Jones, Jeffery A; Wang, Emily Q; Chen, Shaozhen; Huang, Dongfeng; Liu, Hanjun

    2013-03-01

    The present event-related potential (ERP) study examined the developmental mechanisms of auditory-vocal integration in normally developing children. Neurophysiological responses to altered auditory feedback were recorded to determine whether they are affected by age and sex. Forty-two children were pairwise matched for sex and were divided into a group of younger (10-12 years) and a group of older (13-15 years) children. Twenty healthy young adults (20-25 years) also participated in the experiment. ERPs were recorded from the participants who heard their voice pitch feedback unexpectedly shifted -50, -100, or -200 cents during sustained vocalization. P1 amplitudes became smaller as subjects increased in age from childhood to adulthood, and males produced larger N1 amplitudes than females. An age-related decrease in the P1-N1 latencies was also found: latencies were shorter in young adults than in school children. A complex age-by-sex interaction was found for the P2 component, where an age-related increase in P2 amplitudes existed only in girls, and boys produced longer P2 latencies than girls but only in the older children. These findings demonstrate that neurophysiological responses to pitch errors in voice auditory feedback depend on age and sex in normally developing children. The present study provides evidence that there is a sex-specific development of the neural mechanisms involved in auditory-vocal integration. Copyright © 2012 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  17. Raising voices: How sixth graders construct authority and knowledge in argumentative essays

    NASA Astrophysics Data System (ADS)

    Monahan, Mary Elizabeth

    This qualitative classroom-based study documents one teacher-researcher's response to the "voice" debate in composition studies and to the opposing views expressed by Elbow and Bartholomae. The author uses Bakhtin's principle of dialogism, Hymes's theory of communicative competence, and Ivanic's discussion of discoursally constructed identities to reconceptualize voice and to redesign writing instruction in her sixth grade classroom. This study shows how students, by redefining and then acting on that voice pedagogy in terms that made sense to them, shaped the author's understanding of what counts as "voiced" writing in non-narrative discourse. Based on a grounded-theory analysis of the twenty-six sixth graders' argumentative essays in science, the author explains voice, not as a property of writers or of texts, but as a process of "knowing together"---a collaborative, but not entirely congenial, exercise of establishing one's authority by talking with, against, and through other voices on the issue. As the results of this study show, the students' "I-ness," or authorial presence within their texts, was born in a nexus of relationships with "rivals," "allies" and "readers." Given their teacher's injunctions to project confidence and authority in argumentative writing, the students assumed fairly adversarial stances toward these conversational partners throughout their essays. Exaggerating the terms for voiced writing built into the curriculum, the sixth graders produced essays that read more like caricatures than examples of argumentation. Their displays of rhetorical bravado and intellectual aggressiveness, however off-putting to the reader, still enabled these sixth graders to compose voiced essays. This study raises doubts about the value of urging students to sound like their "true selves" or to adopt the formal registers of academe. Students, it seems clear, stand to gain by experimenting with a range of textual identities. The author suggests that voice, as a dialogic process, involves a struggle for meaning---in concert, but also very much in conflict with---other speakers and their intentions.

  18. A theoretical study of F0-F1 interaction with application to resonant speaking and singing voice.

    PubMed

    Titze, Ingo R

    2004-09-01

    An interactive source-filter system, consisting of a three-mass body-cover model of the vocal folds and a wave reflection model of the vocal tract, was used to test the dependence of vocal fold vibration on the vocal tract. The degree of interaction is governed by the epilarynx tube, which raises the vocal tract impedance to match the impedance of the glottis. The key component of the impedance is inertive reactance. Whenever there is inertive reactance, the vocal tract assists the vocal folds in vibration. The amplitude of vibration and the glottal flow can more than double, and the oral radiated power can increase by up to 10 dB. As F0 approaches F1, the first formant frequency, the interactive source-filter system loses its advantage (because inertive reactance changes to compliant reactance) and the noninteractive system produces greater vocal output. Thus, from a voice training and control standpoint, there may be reasons to operate the system in either interactive or noninteractive mode. The harmonics 2F0 and 3F0 can also benefit from being positioned slightly below F1.
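
    The loss of the interactive advantage as F0 approaches F1 can be illustrated with the textbook input impedance of a uniform tube closed at the glottal end and open at the lips. This is an idealization for intuition only, not the wave reflection model used in the study.

    ```latex
    % Idealized closed-open tube of length L and cross-sectional area A:
    \[
      Z_{\mathrm{in}}(\omega) = j\,\frac{\rho c}{A}\,\tan\!\frac{\omega L}{c},
      \qquad F_1 = \frac{c}{4L}.
    \]
    % Below F1 (\omega L/c < \pi/2) the reactance is positive (inertive); at low
    % frequencies \tan(\omega L/c) \approx \omega L/c, giving an acoustic inertance
    \[
      Z_{\mathrm{in}}(\omega) \approx j\omega\,\frac{\rho L}{A},
    \]
    % which assists vocal fold vibration. Above F1 the reactance turns compliant
    % (negative), consistent with the interactive system losing its advantage as
    % F0 nears F1.
    ```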

  19. New perspective on psychosocial distress in patients with dysphonia: The moderating role of perceived control

    PubMed Central

    Meredith, Liza; Peterson, Carol B.; Frazier, Patricia A.

    2015-01-01

    Objectives: Although an association between psychosocial distress (depression, anxiety, somatization, and perceived stress) and voice disorders has been observed, little is known about the relationship between distress and patient-reported voice handicap. Further, the psychological mechanisms underlying this relationship are poorly understood. Perceived control plays an important role in distress associated with other medical disorders. The objectives of this study were to 1) characterize the relationship between distress and patient-reported voice handicap and 2) examine the role of perceived control in this relationship. Study Design: Cross-sectional study in tertiary care academic voice clinic. Methods: Distress, perceived stress, voice handicap, and perceived control were measured using established assessment scales. Association was measured with Pearson’s correlation coefficient; moderation was assessed using multiple hierarchical regression. Results: 533 patients enrolled. 34% met criteria for clinically significant distress (i.e., depression, anxiety, and/or somatization). A weak association (r=0.13, p=0.003) was observed between severity of psychosocial distress and vocal handicap. Present perceived control was inversely associated with distress (r=−0.41, p<0.0001), stress (r=−0.30, p<0.0001), and voice handicap (r=−0.30, p<0.0001). The relationship between voice handicap and psychosocial distress was moderated by perceived control (b for interaction term −0.15, p<0.001); greater vocal handicap was associated with greater distress in patients with low perceived control. Conclusions: Severity of distress and vocal handicap were positively related, and the relation between them was moderated by perceived control. Vocal handicap was more related to distress among those with low perceived control; targeting this potential mechanism may facilitate new approaches for improved care. PMID:25795347
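
    Moderation of the relationship between distress and voice handicap by perceived control, as reported above, is typically tested by entering a product term into a regression. The sketch below shows that generic step with statsmodels; the file and column names are hypothetical and the model is not the authors' exact specification.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: one row per patient with distress, voice handicap (vhi),
    # and perceived control scores.
    df = pd.read_csv("voice_clinic.csv")
    for col in ("vhi", "control"):
        df[col + "_c"] = df[col] - df[col].mean()   # mean-center before forming the product

    model = smf.ols("distress ~ vhi_c * control_c", data=df).fit()
    print(model.summary())  # the vhi_c:control_c coefficient is the moderation test
    ```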

  20. Temporal Lobe Epilepsy Alters Auditory-motor Integration For Voice Control

    PubMed Central

    Li, Weifeng; Chen, Ziyi; Yan, Nan; Jones, Jeffery A.; Guo, Zhiqiang; Huang, Xiyan; Chen, Shaozhen; Liu, Peng; Liu, Hanjun

    2016-01-01

    Temporal lobe epilepsy (TLE) is the most common drug-refractory focal epilepsy in adults. Previous research has shown that patients with TLE exhibit decreased performance in listening to speech sounds and deficits in the cortical processing of auditory information. Whether TLE compromises auditory-motor integration for voice control, however, remains largely unknown. To address this question, event-related potentials (ERPs) and vocal responses to vocal pitch errors (1/2 or 2 semitones upward) heard in auditory feedback were compared across 28 patients with TLE and 28 healthy controls. Patients with TLE produced significantly larger vocal responses but smaller P2 responses than healthy controls. Moreover, patients with TLE exhibited a positive correlation between vocal response magnitude and baseline voice variability and a negative correlation between P2 amplitude and disease duration. Graphical network analyses revealed a disrupted neuronal network for patients with TLE with a significant increase of clustering coefficients and path lengths as compared to healthy controls. These findings provide strong evidence that TLE is associated with an atypical integration of the auditory and motor systems for vocal pitch regulation, and that the functional networks that support the auditory-motor processing of pitch feedback errors differ between patients with TLE and healthy controls. PMID:27356768
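
    The clustering coefficients and path lengths mentioned above are standard graph metrics that can be computed from a thresholded connectivity matrix. The networkx sketch below illustrates that generic computation; the file name and the 80th-percentile threshold are assumptions, not the study's parameters.

    ```python
    import numpy as np
    import networkx as nx

    # Hypothetical input: a symmetric channel-by-channel connectivity matrix.
    conn = np.load("erp_connectivity.npy")
    adj = (conn > np.percentile(conn, 80)).astype(int)   # keep only the strongest edges
    np.fill_diagonal(adj, 0)

    G = nx.from_numpy_array(adj)
    print("average clustering coefficient:", nx.average_clustering(G))

    # Characteristic path length is defined on a connected graph, so use the
    # largest connected component.
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    print("characteristic path length:", nx.average_shortest_path_length(giant))
    ```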

  1. The specificity of neural responses to music and their relation to voice processing: an fMRI-adaptation study.

    PubMed

    Armony, Jorge L; Aubé, William; Angulo-Perkins, Arafat; Peretz, Isabelle; Concha, Luis

    2015-04-23

    Several studies have identified, using functional magnetic resonance imaging (fMRI), a region within the superior temporal gyrus that preferentially responds to musical stimuli. However, in most cases, significant responses to other complex stimuli, particularly human voice, were also observed. Thus, it remains unknown if the same neurons respond to both stimulus types, albeit with different strengths, or whether the responses observed with fMRI are generated by distinct, overlapping neural populations. To address this question, we conducted an fMRI experiment in which short music excerpts and human vocalizations were presented in a pseudo-random order. Critically, we performed an adaptation-based analysis in which responses to the stimuli were analyzed taking into account the category of the preceding stimulus. Our results confirm the presence of a region in the anterior STG that responds more strongly to music than voice. Moreover, we found a music-specific adaptation effect in this area, consistent with the existence of music-preferred neurons. Lack of differences between musicians and non-musicians argues against an expertise effect. These findings provide further support for neural separability between music and speech within the temporal lobe. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  2. Processing voiceless vowels in Japanese: Effects of language-specific phonological knowledge

    NASA Astrophysics Data System (ADS)

    Ogasawara, Naomi

    2005-04-01

    There has been little research on processing allophonic variation in the field of psycholinguistics. This study focuses on processing the voiced/voiceless allophonic alternation of high vowels in Japanese. Three perception experiments were conducted to explore how listeners parse out vowels with the voicing alternation from other segments in the speech stream and how the different voicing statuses of the vowel affect listeners' word recognition process. The results from the three experiments show that listeners use phonological knowledge of their native language for phoneme processing and for word recognition. However, the interactions of the phonological and acoustic effects are observed to differ between the two processes. The facilitatory phonological effect and the inhibitory acoustic effect cancel one another out in phoneme processing, whereas in word recognition the facilitatory phonological effect overrides the inhibitory acoustic effect.

  3. MIT-NASA/KSC space life science experiments - A telescience testbed

    NASA Technical Reports Server (NTRS)

    Oman, Charles M.; Lichtenberg, Byron K.; Fiser, Richard L.; Vordermark, Deborah S.

    1990-01-01

    Experiments performed at MIT to better define Space Station information system telescience requirements for effective remote coaching of astronauts by principal investigators (PIs) on the ground are described. The experiments were conducted via satellite video, data, and voice links to surrogate crewmembers working in a laboratory at NASA's Kennedy Space Center. Teams of two PIs and two crewmembers performed two different space life sciences experiments. During 19 three-hour interactive sessions, a variety of test conditions were explored. Since bit rate limits are necessarily imposed on Space Station video experiments, surveillance video was varied down to 50 Kb/s, and the effectiveness of PI-controlled frame rate, resolution, grey scale, and color decimation was investigated. It is concluded that remote coaching by voice works and that dedicated crew-PI voice loops would be of great value on the Space Station.

  4. What can vortices tell us about vocal fold vibration and voice production.

    PubMed

    Khosla, Sid; Murugappan, Shanmugam; Gutmark, Ephraim

    2008-06-01

    Much clinical research on laryngeal airflow has assumed that airflow is unidirectional. This review will summarize what additional knowledge can be obtained about vocal fold vibration and voice production by studying rotational motion, or vortices, in laryngeal airflow. Recent work suggests two types of vortices that may strongly contribute to voice quality. The first kind forms just above the vocal folds during glottal closing, and is formed by flow separation in the glottis; these flow separation vortices significantly contribute to rapid closing of the glottis, and hence, to producing loudness and high frequency harmonics in the acoustic spectrum. The second is a group of highly three-dimensional and coherent supraglottal vortices, which can produce sound by interaction with structures in the vocal tract. Current work is also described suggesting that certain laryngeal pathologies, such as asymmetric vocal fold tension, will significantly modify both types of vortices, with adverse impacts on sound production: a decreased rate of glottal closure, increased broadband noise, and a decreased signal-to-noise ratio. Recent research supports the hypothesis that glottal airflow contains certain vortical structures that significantly contribute to voice quality.

  5. Taming the fear of voice: Dilemmas in maintaining a high vaccination rate in the Netherlands.

    PubMed

    Geelen, Els; van Vliet, Hans; de Hoogh, Pieter; Horstman, Klasien

    2016-03-01

    In the context of international public debates on vaccination the National Institute for Public Health and the Environment (RIVM), the Dutch public health body responsible for the National Immunization Programme (NIP), fears that the high vaccination rate of children in the Netherlands obscures the many doubts and criticisms parents may have about vaccination. The question arises as to how the robustness of this vaccination rate and the resilience of the NIP can be assessed. To answer this question, we explore the vaccination practices and relationships between professionals and parents using qualitative methods. Drawing on Hirschman's concepts of exit, voice and loyalty, we distinguish between two different approaches to vaccination: one which enforces parental loyalty to the vaccination programme, and one which allows for voice. The analysis shows that due to their lack of voice in the main vaccination setting, parents' considerations are unknown and insight into their loyalty is lacking. We argue that the Dutch vaccination programme is caught between the insecurity of enforced parental loyalty to the NIP and the insecurity of enabling parental voice and negotiating space. We conclude that to increase the resilience of the NIP, experimenting with voice and exit is inevitable. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. An initial study of voice characteristics of children using two different sound coding strategies in comparison to normal hearing children.

    PubMed

    Coelho, Ana Cristina; Brasolotto, Alcione Ghedini; Bevilacqua, Maria Cecília

    2015-06-01

    To compare some perceptual and acoustic characteristics of the voices of children who use the advanced combination encoder (ACE) or fine structure processing (FSP) speech coding strategies, and to investigate whether these characteristics differ from those of children with normal hearing. Acoustic analysis of the sustained vowel /a/ was performed using the multi-dimensional voice program (MDVP). Analyses of sequential and spontaneous speech were performed using real-time pitch analysis. Perceptual analyses of these samples were performed using visual-analogic scales of pre-selected parameters. Seventy-six children from three years to five years and 11 months of age participated. Twenty-eight were users of ACE, 23 were users of FSP, and 25 were children with normal hearing. Although both groups of cochlear implant (CI) users presented with some deviant vocal features, the users of ACE presented with voice quality more like that of children with normal hearing than did the users of FSP. Sound processing of ACE appeared to provide better conditions for auditory monitoring of the voice, and consequently, for better control of voice production. However, these findings need to be further investigated due to the lack of comparative studies published to understand exactly which attributes of sound processing are responsible for differences in performance.

  7. IBM techexplorer and MathML: Interactive Multimodal Scientific Documents

    NASA Astrophysics Data System (ADS)

    Diaz, Angel

    2001-06-01

    The World Wide Web provides a standard publishing platform for disseminating scientific and technical articles, books, journals, courseware, or even homework on the internet; however, the transition from paper to the web has brought new opportunities for creating interactive content. Students, scientists, and engineers are now faced with the task of rendering the 2D presentational structure of mathematics, harnessing the wealth of scientific and technical software, and creating truly accessible scientific portals across international boundaries and markets. The recent emergence of World Wide Web Consortium (W3C) standards such as the Mathematical Markup Language (MathML), the Extensible Stylesheet Language (XSL), and Aural CSS (ACSS) provides a foundation whereby mathematics can be displayed, enlivened, computed, and audio-formatted. With interoperability ensured by standards, software applications can be easily brought together to create extensible and interactive scientific content. In this presentation we will provide an overview of the IBM techexplorer Hypermedia Browser, a web browser plug-in and ActiveX control aimed at bringing interactive mathematics to the masses across platforms and applications. We will demonstrate "live" mathematics where documents that contain MathML expressions can be edited and computed right inside your favorite web browser. This demonstration will be generalized as we show how MathML can be used to enliven even PowerPoint presentations. Finally, we will close the loop by demonstrating a novel approach to spoken mathematics based on MathML, DOM, XSL, ACSS, techexplorer, and IBM ViaVoice. By making use of techexplorer as the glue that binds the rendered content to the web browser, the back-end computation software, the Java applets that augment the exposition, and voice-rendering systems such as ViaVoice, authors can indeed create truly extensible and interactive scientific content. For more information see: [http://www.software.ibm.com/techexplorer] [http://www.alphaworks.ibm.com] [http://www.w3.org
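
    For readers unfamiliar with MathML, the snippet below builds a minimal presentation-MathML fragment (for x^2 + 1) with Python's standard library. It is only an illustration of the markup itself and has no connection to the techexplorer product.

    ```python
    import xml.etree.ElementTree as ET

    MATHML_NS = "http://www.w3.org/1998/Math/MathML"
    math = ET.Element("math", xmlns=MATHML_NS)
    row = ET.SubElement(math, "mrow")
    sup = ET.SubElement(row, "msup")
    ET.SubElement(sup, "mi").text = "x"   # identifier
    ET.SubElement(sup, "mn").text = "2"   # number (the exponent)
    ET.SubElement(row, "mo").text = "+"   # operator
    ET.SubElement(row, "mn").text = "1"

    print(ET.tostring(math, encoding="unicode"))
    # -> <math xmlns="http://www.w3.org/1998/Math/MathML"><mrow><msup><mi>x</mi>
    #    <mn>2</mn></msup><mo>+</mo><mn>1</mn></mrow></math>
    ```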

  8. The Development of an Interactive Voice Response Survey for Noncommunicable Disease Risk Factor Estimation: Technical Assessment and Cognitive Testing

    PubMed Central

    Pereira, Amanda; Labrique, Alain B; Pariyo, George William

    2017-01-01

    Background: The rise in mobile phone ownership in low- and middle-income countries (LMICs) presents an opportunity to transform existing data collection and surveillance methods. Administering surveys via interactive voice response (IVR) technology—a mobile phone survey (MPS) method—has potential to expand the current surveillance coverage and data collection, but formative work to contextualize the survey for LMIC deployment is needed. Objective: The primary objectives of this study were to (1) cognitively test and identify challenging questions in a noncommunicable disease (NCD) risk factor questionnaire administered via an IVR platform and (2) assess the usability of the IVR platform. Methods: We conducted two rounds of pilot testing the IVR survey in Baltimore, MD. Participants were included in the study if they identified as being from an LMIC. The first round included individual interviews to cognitively test the participant’s understanding of the questions. In the second round, participants unique from those in round 1 were placed in focus groups and were asked to comment on the usability of the IVR platform. Results: A total of 12 participants from LMICs were cognitively tested in round 1 to assess their understanding and comprehension of questions in an IVR-administered survey. Overall, the participants found that the majority of the questions were easy to understand and did not have difficulty recording most answers. The most frequent recommendation was to use country-specific examples and units of measurement. In round 2, a separate set of 12 participants assessed the usability of the IVR platform. Overall, participants felt that the length of the survey was appropriate (average: 18 min and 31 s), but the majority reported fatigue in answering questions that had a similar question structure. Almost all participants commented that they thought an IVR survey would lead to more honest, accurate responses than face-to-face questionnaires, especially for sensitive topics. Conclusions: Overall, the participants indicated a clear comprehension of the IVR-administered questionnaire and that the IVR platform was user-friendly. Formative research and cognitive testing of the questionnaire is needed for further adaptation before deploying in an LMIC. PMID:28476724

  9. New Perspective on Psychosocial Distress in Patients with Dysphonia: The Moderating Role of Perceived Control.

    PubMed

    Misono, Stephanie; Meredith, Liza; Peterson, Carol B; Frazier, Patricia A

    2016-03-01

    Although an association between psychosocial distress (depression, anxiety, somatization, and perceived stress) and voice disorders has been observed, little is known about the relationship between distress and patient-reported voice handicap. Furthermore, the psychological mechanisms underlying this relationship are poorly understood. Perceived control plays an important role in distress associated with other medical disorders. The objectives of this study were to (1) characterize the relationship between distress and patient-reported voice handicap and (2) examine the role of perceived control in this relationship. This is a cross-sectional study in a tertiary care academic voice clinic. Distress, perceived stress, voice handicap, and perceived control were measured using established assessment scales. Association was measured with Pearson correlation coefficients; moderation was assessed using multiple hierarchical regression. A total of 533 patients enrolled. Thirty-four percent of the patients met criteria for clinically significant distress (ie, depression, anxiety, and/or somatization). A weak association (r = 0.13; P = 0.003) was observed between severity of psychosocial distress and vocal handicap. Present perceived control was inversely associated with distress (r = -0.41; P < 0.0001), stress (r = -0.30; P < 0.0001), and voice handicap (r = -0.30; P < 0.0001). The relationship between voice handicap and psychosocial distress was moderated by perceived control (b for interaction term, -0.15; P < 0.001); greater vocal handicap was associated with greater distress in patients with low perceived control. Severity of distress and vocal handicap were positively related, and the relation between them was moderated by perceived control. Vocal handicap was more related to distress among those with low perceived control; targeting this potential mechanism may facilitate new approaches for improved care. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  10. Participatory Investigation of the Great East Japan Disaster: PhotoVoice from Women Affected by the Calamity.

    PubMed

    Yoshihama, Mieko; Yunomae, Tomoko

    2018-05-02

    Disasters exacerbate predisaster inequities and intensify the vulnerability of women and other marginalized and disempowered groups. Thus, disaster policies and responses should incorporate the experiences and perspectives of those who are marginalized. The authors sought to conduct a participatory research project to help develop more inclusive, gender-informed disaster responses and policies in Japan. In June 2011, following three months of planning and preparation, they initiated a participatory examination of the impact of the Great East Japan Disaster using PhotoVoice methodology. Engaging the very women affected by the calamity, the authors first implemented the project in three localities in the hardest-hit areas of northern Japan: the prefectures of Fukushima, Miyagi, and Iwate. The authors have since expanded the project to other locations, and the project is ongoing. Focused on the planning, implementation, and outcomes of the initial phase, this article examines the role and potential of participatory action research using the PhotoVoice methodology in the aftermath of a major disaster.

  11. Numerical analysis of effects of transglottal pressure change on fundamental frequency of phonation.

    PubMed

    Deguchi, Shinji; Matsuzaki, Yuji; Ikeda, Tadashige

    2007-02-01

    In humans, a decrease in transglottal pressure (Pt) causes an increase in the fundamental frequency of phonation (F0) only at a specific voice pitch within the modal register, the mechanism of which remains unclear. In the present study, numerical analyses were performed to investigate the mechanism of the voice pitch-dependent positive change of F0 due to Pt decrease. The airflow and the airway, including the vocal folds, were modeled in terms of mechanics of fluid and structure. Simulations of phonation using the numerical model indicated that Pt affects both the average position and the average amplitude magnitude of vocal fold self-excited oscillation in a non-monotonous manner. This effect results in voice pitch-dependent responses of F0 to Pt decreases, including the positive response of F0 as actually observed in humans. The findings of the present study highlight the importance of considering self-excited oscillation of the vocal folds in elucidation of the phonation mechanism.
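
    The dependence of F0 on the amplitude and mean position of vocal fold oscillation can be made concrete with a textbook one-mass idealization. The equations below are a simplified sketch for intuition only, including an assumed nonlinear stiffness term, and are not the fluid-structure model used in the study.

    ```latex
    % One-mass lumped model of vocal fold displacement x(t) driven by glottal
    % pressure p_g(t) acting over an effective area a:
    \[
      m\,\ddot{x} + b\,\dot{x} + k(x)\,x = a\,p_g(t), \qquad
      F_0 \approx \frac{1}{2\pi}\sqrt{\frac{k_{\mathrm{eff}}}{m}}.
    \]
    % With an amplitude-dependent stiffness, e.g. k(x) = k_0\,(1 + \eta x^2),
    % a change in transglottal pressure that shifts the oscillation amplitude or
    % its mean position also shifts the effective stiffness k_eff, and hence F0;
    % the direction and size of that shift can differ across voice pitches.
    ```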

  12. Visual attention modulates brain activation to angry voices.

    PubMed

    Mothes-Lasch, Martin; Mentzel, Hans-Joachim; Miltner, Wolfgang H R; Straube, Thomas

    2011-06-29

    In accordance with influential models proposing prioritized processing of threat, previous studies have shown automatic brain responses to angry prosody in the amygdala and the auditory cortex under auditory distraction conditions. However, it is unknown whether the automatic processing of angry prosody is also observed during cross-modal distraction. The current fMRI study investigated brain responses to angry versus neutral prosodic stimuli during visual distraction. During scanning, participants were exposed to angry or neutral prosodic stimuli while visual symbols were displayed simultaneously. By means of task requirements, participants either attended to the voices or to the visual stimuli. While the auditory task revealed pronounced activation in the auditory cortex and amygdala to angry versus neutral prosody, this effect was absent during the visual task. Thus, our results show a limitation of the automaticity of the activation of the amygdala and auditory cortex to angry prosody. The activation of these areas to threat-related voices depends on modality-specific attention.

  13. Responsive Consumerism: Empowerment in Markets for Health Plans

    PubMed Central

    Elbel, Brian; Schlesinger, Mark

    2009-01-01

    Context: American health policy is increasingly relying on consumerism to improve its performance. This article examines a neglected aspect of medical consumerism: the extent to which consumers respond to problems with their health plans. Methods: Using a telephone survey of five thousand consumers conducted in 2002, this article assesses how frequently consumers voice formal grievances or exit from their health plan in response to problems of differing severity. This article also examines the potential impact of this responsiveness on both individuals and the market. In addition, using cross-group comparisons of means and regressions, it looks at how the responses of “empowered” consumers compared with those who are “less empowered.” Findings: The vast majority of consumers do not formally voice their complaints or exit health plans, even in response to problems with significant consequences. “Empowered” consumers are only minimally more likely to formally voice and no more likely to leave their plan. Moreover, given the greater prevalence of trivial problems, consumers are much more likely to complain or leave their plans because of problems that are not severe. Greater empowerment does not alleviate this. Conclusions: While much of the attention on consumerism has focused on prospective choice, understanding how consumers respond to problems is equally, if not more, important. Relying on consumers’ responses as a means to protect individual consumers or influence the market for health plans is unlikely to be successful in its current form. PMID:19751285

  14. An open-label study of sodium oxybate in Spasmodic dysphonia.

    PubMed

    Rumbach, Anna F; Blitzer, Andrew; Frucht, Steven J; Simonyan, Kristina

    2017-06-01

    Spasmodic dysphonia (SD) is a task-specific laryngeal dystonia that affects speech production. Co-occurring voice tremor (VT) often complicates the diagnosis and clinical management of SD. Treatment of SD and VT is largely limited to botulinum toxin injections into laryngeal musculature; other pharmacological options are not sufficiently developed. Study design: Open-label study. We conducted an open-label study in 23 SD and 22 SD/VT patients to examine the effects of sodium oxybate (Xyrem), an oral agent with therapeutic effects similar to those of alcohol in these patients. Blinded randomized analysis of voice and speech samples assessed symptom improvement before and after drug administration. Sodium oxybate significantly improved voice symptoms (P = .001) primarily by reducing the number of SD-characteristic voice breaks and severity of VT. Sodium oxybate further showed a trend for improving VT symptoms (P = .03) in a subset of patients who received successful botulinum toxin injections for the management of their SD symptoms. The drug's effects were observed approximately 30 to 40 minutes after its intake and lasted about 3.5 to 4 hours. Our study demonstrated that sodium oxybate reduced voice symptoms in 82.2% of alcohol-responsive SD patients both with and without co-occurring VT. Our findings suggest that the therapeutic mechanism of sodium oxybate in SD and SD/VT may be linked to that of alcohol, and as such, sodium oxybate might be beneficial for alcohol-responsive SD and SD/VT patients. Level of Evidence: 4. Laryngoscope, 127:1402-1407, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  15. Perceptual Detection of Subtle Dysphonic Traits in Individuals with Cervical Spinal Cord Injury Using an Audience Response Systems Approach.

    PubMed

    Johansson, Kerstin; Strömbergsson, Sofia; Robieux, Camille; McAllister, Anita

    2017-01-01

    Reduced respiratory function following lower cervical spinal cord injuries (CSCIs) may indirectly result in vocal dysfunction. Although self-reports indicate voice change and limitations following CSCI, earlier efforts using global perceptual ratings to distinguish speakers with CSCI from noninjured speakers have not been very successful. We investigate the use of an audience response system-based approach to distinguish speakers with CSCI from noninjured speakers, and explore whether specific vocal traits can be identified as characteristic for speakers with CSCI. Fourteen speech-language pathologists participated in a web-based perceptual task, where their overt reactions to vocal dysfunction were registered during the continuous playback of recordings of 36 speakers (18 with CSCI, and 18 matched controls). Dysphonic events were identified through manual perceptual analysis, to allow the exploration of connections between dysphonic events and listener reactions. More dysphonic events, and more listener reactions, were registered for speakers with CSCI than for noninjured speakers. Strain (particularly in phrase-final position) and creak (particularly in nonphrase-final position) distinguish speakers with CSCI from noninjured speakers. For the identification of intermittent and subtle signs of vocal dysfunction, an approach where the temporal distribution of symptoms is registered offers a viable means to distinguish speakers affected by voice dysfunction from non-affected speakers. In speakers with CSCI, clinicians should listen for presence of final strain and nonfinal creak, and pay attention to self-reported voice function and voice problems, to identify individuals in need for clinical assessment and intervention. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  16. Chiropractic Care for a Patient with Spasmodic Dysphonia Associated with Cervical Spine Trauma

    PubMed Central

    Waddell, Roger K.

    2005-01-01

    Objective: To discuss the diagnosis and response to treatment of spasmodic dysphonia in a 25-year-old female vocalist following an auto accident. Clinical Features: The voice disorder and neck pain appeared after the traumatic incident. Examination of the cervical spine revealed moderate pain, muscle spasm and restricted joint motion at C-1 and C-5 on the left side. Cervical range of motion was reduced on left rotation. Bilateral manual muscle testing of the trapezius and sternocleidomastoid muscles, which share innervation with the laryngeal muscles by way of the spinal accessory nerve, revealed weakness on the left side. Pre- and post-accident voice range profiles (phonetograms) that measure singing voice quality were examined. The pre- and post-accident phonetograms revealed significant reduction in voice intensity and fundamental frequency as measured in decibels and hertz. Intervention and Outcome: Low-force chiropractic spinal manipulative therapy to C-1 and C-5 was employed. Following a course of care, the patient's singing voice returned to normal and her musculoskeletal complaints resolved. Conclusion: It appears that in certain cases, the singing voice can be adversely affected if neck or head trauma is severe enough. This case proposes that trauma with irritation to the cervical spine nerve roots as they communicate with the spinal accessory, and in turn the laryngeal nerves, may be contributory in some functional voice disorders or muscle tension dysphonia. PMID:19674642

  17. ERP correlates of motivating voices: quality of motivation and time-course matters

    PubMed Central

    Zougkou, Konstantina; Weinstein, Netta

    2017-01-01

    Here, we conducted the first study to explore how motivations expressed through speech are processed in real-time. Participants listened to sentences spoken in two types of well-studied motivational tones (autonomy-supportive and controlling), or a neutral tone of voice. To examine this, listeners were presented with sentences that either signaled motivations through prosody (tone of voice) and words simultaneously (e.g. ‘You absolutely have to do it my way’ spoken in a controlling tone of voice), or lacked motivationally biasing words (e.g. ‘Why don’t we meet again tomorrow’ spoken in a motivational tone of voice). Event-related brain potentials (ERPs) in response to motivations conveyed through words and prosody showed that listeners rapidly distinguished between motivations and neutral forms of communication as shown in enhanced P2 amplitudes in response to motivational when compared with neutral speech. This early detection mechanism is argued to help determine the importance of incoming information. Once assessed, motivational language is continuously monitored and thoroughly evaluated. When compared with neutral speech, listening to controlling (but not autonomy-supportive) speech led to enhanced late potential ERP mean amplitudes, suggesting that listeners are particularly attuned to controlling messages. The importance of controlling motivation for listeners is mirrored in effects observed for motivations expressed through prosody only. Here, an early rapid appraisal, as reflected in enhanced P2 amplitudes, is only found for sentences spoken in controlling (but not autonomy-supportive) prosody. Once identified as sounding pressuring, the message seems to be preferentially processed, as shown by enhanced late potential amplitudes in response to controlling prosody. Taken together, results suggest that motivational and neutral language are differentially processed; further, the data suggest that listening to cues signaling pressure and control cannot be ignored and lead to preferential, and more in-depth processing mechanisms. PMID:28525641

  18. ERP correlates of motivating voices: quality of motivation and time-course matters.

    PubMed

    Zougkou, Konstantina; Weinstein, Netta; Paulmann, Silke

    2017-10-01

    Here, we conducted the first study to explore how motivations expressed through speech are processed in real-time. Participants listened to sentences spoken in two types of well-studied motivational tones (autonomy-supportive and controlling), or a neutral tone of voice. To examine this, listeners were presented with sentences that either signaled motivations through prosody (tone of voice) and words simultaneously (e.g. 'You absolutely have to do it my way' spoken in a controlling tone of voice), or lacked motivationally biasing words (e.g. 'Why don't we meet again tomorrow' spoken in a motivational tone of voice). Event-related brain potentials (ERPs) in response to motivations conveyed through words and prosody showed that listeners rapidly distinguished between motivations and neutral forms of communication as shown in enhanced P2 amplitudes in response to motivational when compared with neutral speech. This early detection mechanism is argued to help determine the importance of incoming information. Once assessed, motivational language is continuously monitored and thoroughly evaluated. When compared with neutral speech, listening to controlling (but not autonomy-supportive) speech led to enhanced late potential ERP mean amplitudes, suggesting that listeners are particularly attuned to controlling messages. The importance of controlling motivation for listeners is mirrored in effects observed for motivations expressed through prosody only. Here, an early rapid appraisal, as reflected in enhanced P2 amplitudes, is only found for sentences spoken in controlling (but not autonomy-supportive) prosody. Once identified as sounding pressuring, the message seems to be preferentially processed, as shown by enhanced late potential amplitudes in response to controlling prosody. Taken together, results suggest that motivational and neutral language are differentially processed; further, the data suggest that listening to cues signaling pressure and control cannot be ignored and lead to preferential, and more in-depth processing mechanisms. © The Author (2017). Published by Oxford University Press.

  19. Influences on physicians' adoption of electronic detailing (e-detailing).

    PubMed

    Alkhateeb, Fadi M; Doucette, William R

    2009-01-01

    E-detailing means using digital technology such as the internet, video conferencing, and interactive voice response. There are two types of e-detailing: interactive (virtual) and video. Currently, little is known about what factors influence physicians' adoption of e-detailing. The objectives of this study were to test a model of physicians' adoption of e-detailing and to describe physicians using e-detailing. A mail survey was sent to a random sample of 2000 physicians practicing in Iowa. Binomial logistic regression was used to test the model of influences on physician adoption of e-detailing. On the basis of Rogers' model of adoption, the independent variables included relative advantage, compatibility, complexity, peer influence, attitudes, years in practice, presence of restrictive access to traditional detailing, type of specialty, academic affiliation, type of practice setting, and control variables. A total of 671 responses were received, giving a response rate of 34.7%. A total of 141 physicians (21.0%) reported using e-detailing. The overall adoption model for using either type of e-detailing was found to be significant. Relative advantage, peer influence, attitudes, type of specialty, presence of restrictive access and years of practice had significant influences on physician adoption of e-detailing. The model of adoption of innovation is useful for explaining physicians' adoption of e-detailing.
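
    The adoption model described above is a binomial logistic regression of an adopt/not-adopt outcome on the listed predictors. The statsmodels sketch below shows that generic setup; the variable names and data file are hypothetical, and the formula is not the authors' exact specification.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical survey data: one row per physician, adopt = 1 if e-detailing was used.
    df = pd.read_csv("edetailing_survey.csv")
    model = smf.logit(
        "adopt ~ relative_advantage + compatibility + complexity + peer_influence"
        " + attitude + years_in_practice + restricted_access + C(specialty)",
        data=df,
    ).fit()

    print(model.summary())
    print(np.exp(model.params))   # odds ratios for each predictor
    ```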

  20. Voice, (inter-)subjectivity, and real time recurrent interaction

    PubMed Central

    Cummins, Fred

    2014-01-01

    Received approaches to a unified phenomenon called “language” are firmly committed to a Cartesian view of distinct unobservable minds. Questioning this commitment leads us to recognize that the boundaries conventionally separating the linguistic from the non-linguistic can appear arbitrary, omitting much that is regularly present during vocal communication. The thesis is put forward that uttering, or voicing, is a much older phenomenon than the formal structures studied by the linguist, and that the voice has found elaborations and codifications in other domains too, such as in systems of ritual and rite. Voice, it is suggested, necessarily gives rise to a temporally bound subjectivity, whether it is in inner speech (Descartes' “cogito”), in conversation, or in the synchronized utterances of collective speech found in prayer, protest, and sports arenas world wide. The notion of a fleeting subjective pole tied to dynamically entwined participants who exert reciprocal influence upon each other in real time provides an insightful way to understand notions of common ground, or socially shared cognition. It suggests that the remarkable capacity to construct a shared world that is so characteristic of Homo sapiens may be grounded in this ability to become dynamically entangled as seen, e.g., in the centrality of joint attention in human interaction. Empirical evidence of dynamic entanglement in joint speaking is found in behavioral and neuroimaging studies. A convergent theoretical vocabulary is now available in the concept of participatory sense-making, leading to the development of a rich scientific agenda liberated from a stifling metaphysics that obscures, rather than illuminates, the means by which we come to inhabit a shared world. PMID:25101028

  1. Organizational uncertainty and stress among teachers in Hong Kong: work characteristics and organizational justice.

    PubMed

    Hassard, Juliet; Teoh, Kevin; Cox, Tom

    2017-10-01

    A growing literature now exists examining the relationship between organizational justice and employees' experience of stress. Despite the growth in this field of enquiry, gaps in knowledge remain, in particular concerning the contribution of perceptions of justice to employees' stress within an organizational context of uncertainty and change, and in relation to the new and emerging concept of procedural-voice justice. The aim of the current study was to examine the main, interaction and additive effects of work characteristics and organizational justice perceptions on employees' experience of stress (as measured by their feelings of helplessness and perceived coping) during an acknowledged period of organizational uncertainty. Questionnaires were distributed among teachers in seven public primary schools in Hong Kong that were under threat of closure (n = 212). Work characteristics were measured using the demand-control-support model. Hierarchical regression analyses found that perceptions of job demands and procedural-voice justice predicted both teachers' feelings of helplessness and their perceived coping ability. Furthermore, teachers' perceived coping was predicted by job control and a significant interaction between procedural-voice justice and distributive justice. The addition of organizational justice variables did account for unique variance, but only in relation to the measure of perceived coping. The study concludes that in addition to 'traditional' work characteristics, health promotion strategies should also address perceptions of organizational justice during times of organizational uncertainty; and, in particular, the value and importance of enhancing employees' perceived 'voice' in influencing and shaping justice-related decisions. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  2. Cognitive Load in Voice Therapy Carry-Over Exercises.

    PubMed

    Iwarsson, Jenny; Morris, David Jackson; Balling, Laura Winther

    2017-01-01

    The cognitive load generated by online speech production may vary with the nature of the speech task. This article examines 3 speech tasks used in voice therapy carry-over exercises, in which a patient is required to adopt and automatize new voice behaviors, ultimately in daily spontaneous communication. Twelve subjects produced speech in 3 conditions: rote speech (weekdays), sentences in a set form, and semispontaneous speech. Subjects simultaneously performed a secondary visual discrimination task for which response times were measured. On completion of each speech task, subjects rated their experience on a questionnaire. Response times from the secondary, visual task were found to be shortest for the rote speech, longer for the semispontaneous speech, and longest for the sentences within the set framework. Principal components derived from the subjective ratings were found to be linked to response times on the secondary visual task. Acoustic measures reflecting fundamental frequency distribution and vocal fold compression varied across the speech tasks. The results indicate that consideration should be given to the selection of speech tasks during the process leading to automation of revised speech behavior and that self-reports may be a reliable index of cognitive load.
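
    As a rough illustration of the analysis strategy outlined above (principal components derived from questionnaire ratings, then related to secondary-task response times), the following sketch uses synthetic data and invented variable names; it is not the authors' analysis code, only a minimal example of the approach.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical data: 12 subjects x 3 speech tasks, 6 rating items each,
# plus a mean secondary-task response time (RT, ms) per subject/task.
n_obs = 12 * 3
ratings = rng.normal(size=(n_obs, 6))                              # questionnaire ratings
rt = 450 + 30 * ratings[:, 0] + rng.normal(scale=20, size=n_obs)   # visual-task RT

# Reduce the rating items to principal components.
pca = PCA(n_components=2)
components = pca.fit_transform(ratings)

# Relate the first component to response times on the secondary visual task.
r, p = stats.pearsonr(components[:, 0], rt)
print(f"PC1 vs RT: r = {r:.2f}, p = {p:.3f}")
print("explained variance ratios:", pca.explained_variance_ratio_.round(2))
```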

  3. Exploring interpersonal behavior and team sensemaking during health information technology implementation.

    PubMed

    Kitzmiller, Rebecca R; McDaniel, Reuben R; Johnson, Constance M; Lind, E Allan; Anderson, Ruth A

    2013-01-01

    We examine how interpersonal behavior and social interaction influence team sensemaking and subsequent team actions during a hospital-based health information technology (HIT) implementation project. Over the course of 18 months, we directly observed the interpersonal interactions of HIT implementation teams using a sensemaking lens. We identified three voice-promoting strategies enacted by team leaders that fostered team member voice and sensemaking: communicating a vision, connecting goals to team member values, and seeking team member input. However, infrequent leader expressions of anger quickly undermined team sensemaking, halting dialog essential to problem solving. By seeking team member opinions, team leaders overcame the negative effects of anger. Leaders must enact voice-promoting behaviors and use them throughout a team's engagement. Further, training teams in how to use conflict to achieve greater innovation may improve sensemaking essential to project risk mitigation. Health care work processes are complex; teams involved in implementing improvements must be prepared to deal with conflicting, contentious issues, which will arise during change. Therefore, team conflict training may be essential to sustaining sensemaking. Future research should seek to identify team interactions that foster sensemaking, especially when topics are difficult or unwelcome, and then determine the association between staff sensemaking and HIT implementation outcomes. We are among the first to focus on project teams tasked with HIT implementation. This research extends our understanding of how leaders' behaviors might facilitate or impede speaking up among project teams in health care settings.

  4. Fluid-Structure Interactions as Flow Propagates Tangentially Over a Flexible Plate with Application to Voiced Speech Production

    NASA Astrophysics Data System (ADS)

    Westervelt, Andrea; Erath, Byron

    2013-11-01

    Voiced speech is produced by fluid-structure interactions that drive vocal fold motion. Viscous flow features influence the pressure in the gap between the vocal folds (i.e. glottis), thereby altering vocal fold dynamics and the sound that is produced. During the closing phases of the phonatory cycle, vortices form as a result of flow separation as air passes through the divergent glottis. It is hypothesized that the reduced pressure within a vortex core will alter the pressure distribution along the vocal fold surface, thereby aiding in vocal fold closure. The objective of this study is to determine the impact of intraglottal vortices on the fluid-structure interactions of voiced speech by investigating how the dynamics of a flexible plate are influenced by a vortex ring passing tangentially over it. A flexible plate, which models the medial vocal fold surface, is placed in a water-filled tank and positioned parallel to the exit of a vortex generator. The physical parameters of plate stiffness and vortex circulation are scaled with physiological values. As vortices propagate over the plate, particle image velocimetry measurements are captured to analyze the energy exchange between the fluid and flexible plate. The investigations are performed over a range of vortex formation numbers, and lateral displacements of the plate from the centerline of the vortex trajectory. Observations show plate oscillations with displacements directly correlated with the vortex core location.

  5. Verbal collision avoidance messages during simulated driving: perceived urgency, alerting effectiveness and annoyance.

    PubMed

    Baldwin, Carryl L

    2011-04-01

    Matching the perceived urgency of an alert with the relative hazard level of the situation is critical for effective alarm response. Two experiments describe the impact of acoustic and semantic parameters on ratings of perceived urgency, annoyance and alerting effectiveness and on alarm response speed. Within a simulated driving context, participants rated and responded to collision avoidance system (CAS) messages spoken by a female or male voice (experiments 1 and 2, respectively). Results indicated greater perceived urgency and faster alarm response times as intensity increased from a -2 dB signal-to-noise (S/N) ratio to +10 dB S/N, although annoyance ratings increased as well. CAS semantic content interacted with alarm intensity, indicating that at lower intensity levels participants paid more attention to the semantic content. Both acoustic and semantic parameters thus independently and interactively impact CAS alert perceptions in divided-attention conditions, and this work can inform auditory alarm design for effective hazard matching. STATEMENT OF RELEVANCE: Results indicate that both acoustic parameters and semantic content can be used to design collision warnings with a range of urgency levels. Further, verbal warnings tailored to a specific hazard situation may improve hazard-matching capabilities without substantial trade-offs in perceived annoyance.

  6. Occupational voice demands and their impact on the call-centre industry.

    PubMed

    Hazlett, D E; Duffy, O M; Moorhead, S A

    2009-04-20

    Within the last decade the call-centre industry in the UK has grown, and with it awareness of the voice as an important tool for successful communication. Occupational voice problems such as occupational dysphonia, in a business which relies on a healthy, effective voice as the primary professional communication tool, may threaten workers' working ability and occupational health and safety. While previous studies of telephone call-agents have reported a range of voice symptoms and functional vocal health problems, there have been no studies investigating the use and impact of vocal performance in the communication industry within the UK. This study aims to address a significant gap in the evidence base of occupational health and safety research. The objectives of the study are: 1. to investigate the work context and vocal communication demands for call-agents; 2. to evaluate call-agents' vocal health, awareness and performance; and 3. to identify key risks and training needs for employees and employers within call-centres. This is an occupational epidemiological study, which plans to recruit call-centres throughout the UK and Ireland. Data collection will consist of three components: 1. interviews with managers from each participating call-centre to assess their communication and training needs; 2. an online biopsychosocial questionnaire administered to call-agents to investigate their work environment and vocal demands; and 3. voice acoustic measurements of a random sample of participants using the Multi-dimensional Voice Program (MDVP). Qualitative content analysis of the interviews will identify underlying themes and issues. A multivariate analysis approach will be adopted using Structural Equation Modelling (SEM) to develop voice measurement models and determine the construct validity of potential factors contributing to occupational dysphonia. Quantitative data will be analysed using SPSS version 15. Ethical approval for this study has been granted by the School of Communication, University of Ulster. The results from this study will provide the missing element of voice-based evidence by appraising the interactional dimensions of vocal health and communicative performance. This information will be used to inform training for call-agents and to contribute to health policies within the workplace, in order to enhance vocal health.

  7. Instant messages vs. speech: hormones and why we still need to hear each other.

    PubMed

    Seltzer, Leslie J; Prososki, Ashley R; Ziegler, Toni E; Pollak, Seth D

    2012-01-01

    Human speech evidently conveys an adaptive advantage, given its apparently rapid dissemination through the ancient world and global use today. As such, speech must be capable of altering human biology in a positive way, possibly through those neuroendocrine mechanisms responsible for strengthening the social bonds between individuals. Indeed, speech between trusted individuals is capable of reducing levels of salivary cortisol, often considered a biomarker of stress, and increasing levels of urinary oxytocin, a hormone involved in the formation and maintenance of positive relationships. It is not clear, however, whether it is the uniquely human grammar, syntax, content and/or choice of words that causes these physiological changes, or whether the prosodic elements of speech, which are present in the vocal cues of many other species, are responsible. In order to tease apart these elements of human communication, we examined the hormonal responses of female children who instant messaged their mothers after undergoing a stressor. We discovered that unlike children interacting with their mothers in person or over the phone, girls who instant messaged did not release oxytocin; instead, these participants showed levels of salivary cortisol as high as control subjects who did not interact with their parents at all. We conclude that the comforting sound of a familiar voice is responsible for the hormonal differences observed and, hence, that similar differences may be seen in other species using vocal cues to communicate.

  8. A randomized study of telephonic care support in populations at risk for musculoskeletal preference-sensitive surgeries.

    PubMed

    Veroff, David R; Ochoa-Arvelo, Tamara; Venator, Benjamin

    2013-02-07

    The rate of elective surgeries varies dramatically by geography in the United States. For many of these surgeries, there is not clear evidence of their relative merits over alternate treatment choices, and there are significant tradeoffs in the short- and long-term risks and benefits of selecting one treatment option over another. Conditions and symptoms for which there is no single clear evidence-based treatment choice present great opportunities for patient and provider collaboration on decision making; back pain and joint osteoarthritis are two such ailments. A number of decision aids are in active use to encourage this shared decision-making process. Decision aids have been assessed in formal studies that demonstrate increases in patient knowledge, increases in patient-provider engagement, and reduction in surgery rates. These studies have not widely demonstrated the added benefit of health coaching in support of shared decision making, nor have they commonly provided strong evidence of cost reductions. In order to add to this evidence base, we undertook a comparative study testing the relative impact on health utilization and costs of active outreach through interactive voice response technology to encourage health coaching in support of shared decision making, in comparison to mailed outreach or no outreach. This study focused on individuals with back pain or joint pain. We conducted four waves of stratified randomized comparisons for individuals at risk for back, hip, or knee surgery who did not have claims-based evidence of one or more of five chronic conditions and were eligible for population care management services within three large regional health plans in the United States. An interactive voice response (IVR) form of outreach that included the capability for individuals to directly connect with health coaches telephonically, known as AutoDialog(®), was compared to a control (mailed outreach or natural levels of inbound calling, depending on the study wave). In total, the study included 24,167 adults with commercial and Medicare Advantage private coverage at three health plans who were at risk for lumbar back surgery, hip repair/replacement, or knee repair/replacement. Interactive voice response outreach led to 10.7 (P-value < .0001) times as many inbound calls within 30 days as the control. Over 180 days, the IVR group ("intervention") had 67 percent (P-value < .0001) more health coach communications and agreed to be sent 3.2 (P-value < .0001) times as many DVD- and/or booklet-based decision aids. Targeted surgeries were reduced by 6.7 percent (P-value = .6039). Overall costs were lower by 4.9 percent (P-value = .055). Costs not related to maternity, cancer, trauma and substance abuse ("actionable costs") were reduced by 6.5 percent (P-value = .0286). IVR with a transfer-to-health-coach option significantly increased levels of health coaching compared to mailed or no outreach and led to significantly reduced actionable medical costs. Providing high levels of health coaching to individuals with these types of risks appears to have produced important levels of actionable medical cost reductions. We believe this impact resulted from more informed and engaged health care decision making.

  9. Fostering Students' Science Inquiry through App Affordances of Multimodality, Collaboration, Interactivity, and Connectivity

    ERIC Educational Resources Information Center

    Beach, Richard; O'Brien, David

    2015-01-01

    This study examined 6th graders' use of the VoiceThread app as part of a science inquiry project on photosynthesis and carbon dioxide emissions in terms of their ability to engage in causal reasoning and their use of the affordances of multimodality, collaboration, interactivity, and connectivity. Students employed multimodal production using…

  10. Early development of polyphonic sound encoding and the high voice superiority effect.

    PubMed

    Marie, Céline; Trainor, Laurel J

    2014-05-01

    Previous research suggests that when two streams of pitched tones are presented simultaneously, adults process each stream in a separate memory trace, as reflected by mismatch negativity (MMN), a component of the event-related potential (ERP). Furthermore, a superior encoding of the higher tone or voice in polyphonic sounds has been found for 7-month-old infants and both musician and non-musician adults in terms of a larger amplitude MMN in response to pitch deviant stimuli in the higher than the lower voice. These results, in conjunction with modeling work, suggest that the high voice superiority effect might originate in characteristics of the peripheral auditory system. If this is the case, the high voice superiority effect should be present in infants younger than 7 months. In the present study we tested 3-month-old infants as there is no evidence at this age of perceptual narrowing or specialization of musical processing according to the pitch or rhythmic structure of music experienced in the infant's environment. We presented two simultaneous streams of tones (high and low) with 50% of trials modified by 1 semitone (up or down), either on the higher or the lower tone, leaving 50% standard trials. Results indicate that like the 7-month-olds, 3-month-old infants process each tone in a separate memory trace and show greater saliency for the higher tone. Although MMN was smaller and later in both voices for the group of sixteen 3-month-olds compared to the group of sixteen 7-month-olds, the size of the difference in MMN for the high compared to low voice was similar across ages. These results support the hypothesis of an innate peripheral origin of the high voice superiority effect. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Task-specific singing dystonia: vocal instability that technique cannot fix.

    PubMed

    Halstead, Lucinda A; McBroom, Deanna M; Bonilha, Heather Shaw

    2015-01-01

    Singer's dystonia is a rare variation of focal laryngeal dystonia presenting only during specific tasks in the singing voice. It is underdiagnosed since it is commonly attributed to technique problems including increased muscle tension, register transition, or wobble. Singer's dystonia differs from technique-related issues in that it is task- and/or pitch-specific, reproducible, and occurs independently from the previously mentioned technical issues. This case series compares and contrasts profiles of four patients with singer's dystonia to increase our knowledge of this disorder. This retrospective case series includes a detailed case history, results of singing evaluations from individual voice teachers, review of singing voice samples by a singing voice specialist, evaluation by a laryngologist with endoscopy and laryngeal electromyography (LEMG), and spectral analysis of the voice samples by a speech-language pathologist. Results demonstrate the similarities and unique differences of individuals with singer's dystonia. Response to treatment and singing status varied from nearly complete relief of symptoms with botulinum toxin injections to minor relief of symptoms and discontinuation of singing. The following are the conclusions from this case series: (1) singer's dystonia exists as a separate entity from technique issues, (2) singer's dystonia is consistent with other focal task-specific dystonias found in musicians, (3) correctly diagnosing singer's dystonia gives singers access to medical treatment of dystonia and an opportunity to modify their singing repertoire to continue singing with the voice they have, and (4) diagnosis of singer's dystonia requires careful sequential multidisciplinary evaluation to isolate the instability and confirm dystonia by LEMG and spectral voice analysis. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  12. Development and Validation of the Children's Voice Handicap Index-10 for Parents.

    PubMed

    Ricci-Maccarini, Andrea; De Maio, Vincenzo; Murry, Thomas; Schindler, Antonio

    2016-01-01

    The Children's Voice Handicap Index-10 (CVHI-10) was introduced as a tool for self-assessment of children's dysphonia. However, in the management of children with voice disorders, both parents' and children's perspectives play an important role. Because a self-assessment tool including both a children's and a parents' version did not yet exist, the aim of the study was to develop and validate an assessment tool which parallels the CVHI-10 and allows parents to assess the level of voice handicap in their child's voice. Observational, prospective, cross-sectional study. To develop a CVHI-10 for parents, called "CVHI-10-P", the CVHI-10 items were adapted to reflect parents' responses about their child. Fifty-five children aged 7-12 years completed the CVHI-10, whereas their parents completed the CVHI-10-P. Each child's voice was also perceptually assessed by an otolaryngologist using the Grade, Breathiness, Roughness (GRB) scale. Fifty-one of the 55 children underwent voice therapy (VT) and were assessed afterward using the GRB, CVHI-10, and CVHI-10-P. CVHI-10-P internal consistency was satisfactory (Cronbach alpha = .78). Correlation between CVHI-10-P and CVHI-10 was moderate (r = 0.37). CVHI-10-P total scores were lower than CVHI-10 scores in most cases. Single-item mean scores were always lower in CVHI-10-P compared with CVHI-10, with the exception of the only item of the CVHI-10-P that directly involves the parent's experience (item 10). Data gained from one tool are not directly related to the other, suggesting that these two tools appraise the child's voice handicap from different perspectives. The overall perceptual assessment scores of the 51 children after VT significantly improved. There was a statistically significant reduction of the total scores and of each item score in the CVHI-10 and CVHI-10-P after VT. These data support the adoption of the CVHI-10-P as an assessment tool and an outcome measure for the management of children's voice disorders. The CVHI-10-P is a valid tool to appraise parents' perspective of their child's voice disorder. The use of the CVHI-10 and the CVHI-10-P together is recommended for determining the level of voice handicap in children as perceived by both parents and child. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
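
    The internal-consistency and correlation statistics reported above can, in principle, be reproduced with a few lines of code. The sketch below computes Cronbach's alpha from item scores and a Pearson correlation between two total scores, using synthetic data and hypothetical variable names rather than the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = questionnaire items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(2)
# Hypothetical 10-item scores for 55 parent respondents (0-4 Likert-style).
common = rng.normal(size=(55, 1))
cvhi10p_items = np.clip(np.round(2 + common + rng.normal(scale=0.8, size=(55, 10))), 0, 4)

print(f"Cronbach's alpha (parent version): {cronbach_alpha(cvhi10p_items):.2f}")

# Correlation between parent and child total scores (child scores simulated here).
parent_total = cvhi10p_items.sum(axis=1)
child_total = 0.4 * parent_total + rng.normal(scale=4, size=55)
r = np.corrcoef(parent_total, child_total)[0, 1]
print(f"Pearson r between total scores: {r:.2f}")
```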

  13. Study of Risk Factors for Development of Voice Disorders and its Impact on the Quality of Life of School Teachers in Mangalore, India.

    PubMed

    Alva, Arati; Machado, Megna; Bhojwani, Kiran; Sreedharan, Suja

    2017-01-01

    School teachers are most prone to the development and detrimental effects of voice disorders as a consequence of their work. The risk factors for development of dysphonia in teachers are multifactorial. The primary aim of our study was to investigate the various risk factors that influence the onset and progression of voice disorders in school teachers in the Indian context. We also wanted to assess the effect of voice problems on the physical, psychosocial and functional aspects of a teacher's life. It was a cross-sectional study conducted across three English-medium institutions. A total of 105 teachers consented to participate in the study and answered a semi-structured, pre-tested questionnaire, which included demographic details, living habits (drug intake, smoking and alcohol intake), health condition [any Deviated Nasal Septum (DNS), Gastroesophageal Reflux Disease (GERD), stress, etc., or any history of surgery], teaching characteristics, voice symptoms and physical discomforts, and quality of life assessment. The completed questionnaires were collected and analyzed based on the responses obtained. It was found that 81% of the study population had voice problems at some point in their career. A total of 26% of them fell into the voice disorder category. The association of upper respiratory infections, DNS and GERD with voice disorders was found to be statistically significant. We also found that a significant number of teachers with voice disorders had changed their teaching styles and were planning to opt for early retirement. Most importantly, teachers with voice disorders were more likely to have a poorer quality of life than those without a voice disorder (p<0.001). Voice disorders had a significant bearing on all spheres of a school teacher's life. The affected teachers were more likely to take sick leave, change their overall job opinions, retire early, reduce overall communication, repeat statements, and avoid talking to people in person as well as over the telephone. The disorders reduced their overall social abilities and made them avoid social activities. They got easily upset and were dissatisfied with their job performance. All of these factors in turn deteriorate the quality of life of these individuals.

  14. Study of Risk Factors for Development of Voice Disorders and its Impact on the Quality of Life of School Teachers in Mangalore, India

    PubMed Central

    Machado, Megna; Bhojwani, Kiran; Sreedharan, Suja

    2017-01-01

    Introduction: School teachers are most prone to the development and detrimental effects of voice disorders as a consequence of their work. The risk factors for development of dysphonia in teachers are multifactorial. Aim: The primary aim of our study was to investigate the various risk factors that influence the onset and progression of voice disorders in school teachers in the Indian context. We also wanted to assess the effect of voice problems on the physical, psychosocial and functional aspects of a teacher's life. Materials and Methods: It was a cross-sectional study conducted across three English-medium institutions. A total of 105 teachers consented to participate in the study and answered a semi-structured, pre-tested questionnaire, which included demographic details, living habits (drug intake, smoking and alcohol intake), health condition [any Deviated Nasal Septum (DNS), Gastroesophageal Reflux Disease (GERD), stress, etc., or any history of surgery], teaching characteristics, voice symptoms and physical discomforts, and quality of life assessment. The completed questionnaires were collected and analyzed based on the responses obtained. Results: It was found that 81% of the study population had voice problems at some point in their career. A total of 26% of them fell into the voice disorder category. The association of upper respiratory infections, DNS and GERD with voice disorders was found to be statistically significant. We also found that a significant number of teachers with voice disorders had changed their teaching styles and were planning to opt for early retirement. Most importantly, teachers with voice disorders were more likely to have a poorer quality of life than those without a voice disorder (p<0.001). Conclusion: Voice disorders had a significant bearing on all spheres of a school teacher's life. The affected teachers were more likely to take sick leave, change their overall job opinions, retire early, reduce overall communication, repeat statements, and avoid talking to people in person as well as over the telephone. The disorders reduced their overall social abilities and made them avoid social activities. They got easily upset and were dissatisfied with their job performance. All of these factors in turn deteriorate the quality of life of these individuals. PMID:28273984

  15. Interactions of hyaluronan grafted on protein surfaces studied using a quartz crystal microbalance and a surface force balance.

    PubMed

    Jiang, Lei; Han, Juan; Yang, Limin; Ma, Hongchao; Huang, Bo

    2015-10-07

    Vocal folds are complex, multilayered structures whose main layer is largely composed of hyaluronan (HA). The viscoelasticity of HA is key to voice production in the vocal fold, as it affects the initiation and maintenance of phonation. In this study a simple layer-structured surface model was set up to mimic the structure of the vocal folds. The interactions between two opposing surfaces bearing HA were measured and characterised to analyse HA's response to normal and shear compression at stress levels similar to those in the vocal fold. From measurements with a quartz crystal microbalance, atomic force microscopy and a surface force balance, the osmotic pressure, normal interactions, elasticity change, volume fraction, refractive index and friction of both the HA and the supporting protein layer were obtained. These findings may shed light on the physical mechanism of HA function in the vocal fold and on the specific role of HA as an important component in the effective treatment of vocal fold disease.

  16. Effects of vocal training and phonatory task on voice onset time.

    PubMed

    McCrea, Christopher R; Morris, Richard J

    2007-01-01

    The purpose of this study was to examine the temporal-acoustic differences between trained singers and nonsingers during speech and singing tasks. Thirty male participants were separated into two groups of 15 according to level of vocal training (ie, trained or untrained). The participants spoke and sang carrier phrases containing English voiced and voiceless bilabial stops, and voice onset time (VOT) was measured for the stop consonant productions. Mixed analyses of variance revealed a significant main effect between speech and singing for /p/ and /b/, with VOT durations longer during speech than singing for /p/, and the opposite true for /b/. Furthermore, a significant phonatory task by vocal training interaction was observed for /p/ productions. The results indicated that the type of phonatory task influences VOT and that these influences are most obvious in trained singers secondary to the articulatory and phonatory adjustments learned during vocal training.

  17. Irregular vocal fold dynamics incited by asymmetric fluid loading in a model of recurrent laryngeal nerve paralysis

    NASA Astrophysics Data System (ADS)

    Sommer, David; Erath, Byron D.; Zanartu, Matias; Peterson, Sean D.

    2011-11-01

    Voiced speech is produced by dynamic fluid-structure interactions in the larynx. Traditionally, reduced order models of speech have relied upon simplified inviscid flow solvers to prescribe the fluid loadings that drive vocal fold motion, neglecting viscous flow effects that occur naturally in voiced speech. Viscous phenomena, such as skewing of the intraglottal jet, have the most pronounced effect on voiced speech in cases of vocal fold paralysis where one vocal fold loses some, or all, muscular control. The impact of asymmetric intraglottal flow in pathological speech is captured in a reduced order two-mass model of speech by coupling a boundary-layer estimation of the asymmetric pressures with asymmetric tissue parameters that are representative of recurrent laryngeal nerve paralysis. Nonlinear analysis identifies the emergence of irregular and chaotic vocal fold dynamics at values representative of pathological speech conditions.

  18. Differences in botulinum toxin dosing between patients with adductor spasmodic dysphonia and essential voice tremor.

    PubMed

    Orbelo, Diana M; Duffy, Joseph R; Hughes Borst, Becky J; Ekbom, Dale; Maragos, Nicolas E

    2014-01-01

    To explore possible dose differences in average botulinum toxin (BTX) given to patients with adductor spasmodic dysphonia (ADSD) compared with patients with essential voice tremor (EVT). A retrospective study compared the average BTX dose injected in equal doses to the thyroarytenoid (TA) muscles of 51 patients with ADSD with 52 patients with EVT. Those with ADSD received significantly higher total doses (6.80 ± 2.79 units) compared with those with EVT (5.02 ± 1.65 units). Dose at time of first injection, age at time of first injection, gender, year of first injection, and average time between injections were included in multivariate analysis but did not interact with total average dose findings. Patients with ADSD may need relatively higher doses of BTX injections to bilateral TA muscles compared with patients with EVT. Copyright © 2014 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  19. In vitro experimental investigation of voice production

    PubMed Central

    Horáček, Jaromír; Brücker, Christoph; Becker, Stefan

    2012-01-01

    The process of human phonation involves a complex interaction between the physical domains of structural dynamics, fluid flow, and acoustic sound production and radiation. Given the high degree of nonlinearity of these processes, even small anatomical or physiological disturbances can significantly affect the voice signal. In the worst cases, patients can lose their voice and hence the normal mode of speech communication. To improve medical therapies and surgical techniques it is very important to understand better the physics of the human phonation process. Due to the limited experimental access to the human larynx, alternative strategies, including artificial vocal folds, have been developed. The following review gives an overview of experimental investigations of artificial vocal folds within the last 30 years. The models are sorted into three groups: static models, externally driven models, and self-oscillating models. The focus is on the different models of the human vocal folds and on the ways in which they have been applied. PMID:23181007

  20. Functional hoarseness in children: short-term play therapy with family dynamic counseling as therapy of choice.

    PubMed

    Kollbrunner, Jürg; Seifert, Eberhard

    2013-09-01

    Children with nonorganic voice disorders (NVDs) are treated mainly using direct voice therapy techniques, such as the accent method or glottal attack changes, and indirect methods, such as vocal hygiene and voice education. However, both approaches tackle only the symptoms and not etiological factors in the family dynamics and therefore often enjoy little success. The aim of the "Bernese Brief Dynamic Intervention" (BBDI) for children with NVD was to extend the effectiveness of pediatric voice therapies with a psychosomatic concept combining short-term play therapy with the child and family dynamic counseling of the parents. This study compares the therapeutic changes in three groups where different procedures were used, before intervention and 1 year afterward: (A) counseling of parents (one to two consultations; n = 24), (B) Brief Dynamic Intervention on the lines of the BBDI (three to five play therapy sessions with the child plus two to four sessions with the parents; n = 20), and (C) traditional voice therapy (n = 22). A Voice Questionnaire for Parents, developed by us, with 59 questions answered on a four-point Likert scale, was used to measure the change. According to the parents' assessment, a significant improvement in voice quality was achieved with all three methods. Counseling of parents (A) appears to have led parents to give their child more latitude; for example, they stopped nagging the child or demanding that he/she behave strictly by the rules. After BBDI (B), the mothers were more responsive to their children's wishes and the children were more relaxed and their speech became livelier. At home, parents called out to the children less often from a distance, which probably improved parent-child dialog. Traditional voice therapy (C) seems to have had a positive effect on the children's social competence. BBDI seems to have the deepest, widest, and therefore probably the most enduring therapeutic effect on children with NVD. Copyright © 2013 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  1. Translational Systems Biology and Voice Pathophysiology

    PubMed Central

    Li, Nicole Y. K.; Abbott, Katherine Verdolini; Rosen, Clark; An, Gary; Hebda, Patricia A.; Vodovotz, Yoram

    2011-01-01

    Objectives/Hypothesis: Personalized medicine has been called upon to tailor healthcare to an individual's needs. Evidence-based medicine (EBM) has advocated using randomized clinical trials with large populations to evaluate treatment effects. However, due to large variations across patients, the results are likely not to apply to an individual patient. We suggest that a complementary, systems biology approach using computational modeling may help tackle biological complexity in order to improve ultimate patient care. The purpose of the article is: 1) to review the pros and cons of EBM, and 2) to discuss the alternative systems biology method and present its utility in clinical voice research. Study Design: Tutorial. Methods: Literature review and discussion. Results: We propose that translational systems biology can address many of the limitations of EBM pertinent to voice and other health care domains, and thus complement current health research models. In particular, recent work using mathematical modeling suggests that systems biology has the ability to quantify the highly complex biologic processes underlying voice pathophysiology. Recent data support the premise that this approach can be applied specifically in the case of phonotrauma and surgically induced vocal fold trauma, and may have particular power to address personalized medicine. Conclusions: We propose that evidence around vocal health and disease be expanded beyond a population-based method to consider more fully issues of complexity and systems interactions, especially in implementing personalized medicine in voice care and beyond. PMID:20025041

  2. Human voice perception.

    PubMed

    Latinus, Marianne; Belin, Pascal

    2011-02-22

    We are all voice experts. First and foremost, we can produce and understand speech, and this makes us a unique species. But in addition to speech perception, we routinely extract from voices a wealth of socially relevant information in what constitutes a more primitive, and probably more universal, non-linguistic mode of communication. Consider the following example: you are sitting in a plane, and you can hear a conversation in a foreign language in the row behind you. You do not see the speakers' faces, and you cannot understand the speech content because you do not know the language. Yet, an amazing amount of information is available to you. You can evaluate the physical characteristics of the different protagonists, including their gender, approximate age and size, and associate an identity with the different voices. You can form a good idea of the different speakers' moods and affective states, as well as more subtle cues such as the perceived attractiveness or dominance of the protagonists. In brief, you can form a fairly detailed picture of the type of social interaction unfolding, which a brief glance backwards can on occasion help refine - sometimes surprisingly so. What are the acoustical cues that carry these different types of vocal information? How does our brain process and analyse this information? Here we briefly review an emerging field and the main tools used in voice perception research. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. Aligning the Cultures of Teaching and Learning Science in Urban High Schools

    NASA Astrophysics Data System (ADS)

    Tobin, Kenneth

    2006-09-01

    This paper analyzes teaching and learning in urban science classrooms in which most of the students are African American and from low-income homes. Their teachers are also racial minorities and yet they struggle to teach successfully across cultural boundaries. The first set of case studies involves a male teacher who taught in a high-energy way that produced structures for students to get involved in the doing of science. His verbal fluency and expressive individualism, involving emphatic gestures, rhythmic use of his body, and voice intonation maintained student participation. A second case study examines successful interactions among the students, involving an argument over competing models for chemical valence. Whereas the students interacted successfully, the teacher was frequently out of synchrony in terms of amplitude, pitch, and non-verbal actions. The key implication is the necessity for teachers and students to learn how to interact successfully in ways that produce positive emotional energy, a sense of belonging to the class, and a commitment to shared responsibility for one another's participation. Aligning the cultures of teaching and learning offers a possibility that fluent interactions will occur, afford success, and facilitate the learning of science.

  4. The Effect of a Computerized Teaching Assistant on Student Interaction, Student Satisfaction, and Retention Rates of Students in a Distance Course

    ERIC Educational Resources Information Center

    Reindl-Johnson, Cheryl

    2004-01-01

    The purpose of this study was to investigate the effect of a computerized teaching assistant (CTA) on student interaction, student satisfaction, and retention rates of students in a distance course. The CTA is humanoid and speaks in a human voice from recorded sound clips, to give the student the feeling that he/she is interacting with a person,…

  5. The perception of complex pitch in cochlear implants: A comparison of monopolar and tripolar stimulation.

    PubMed

    Fielden, Claire A; Kluk, Karolina; Boyle, Patrick J; McKay, Colette M

    2015-10-01

    Cochlear implant listeners typically perform poorly in tasks of complex pitch perception (e.g., musical pitch and voice pitch). One explanation is that wide current spread during implant activation creates channel interactions that may interfere with perception of temporal fundamental frequency information contained in the amplitude modulations within channels. Current focusing using a tripolar mode of stimulation has been proposed as a way of reducing channel interactions, minimising spread of excitation and potentially improving place and temporal pitch cues. The present study evaluated the effect of mode in a group of cochlear implant listeners on a pitch ranking task using male and female singing voices separated by either a half or a quarter octave. Results were variable across participants, but on average, pitch ranking was at chance level when the pitches were a quarter octave apart and improved when the difference was a half octave. No advantage was observed for tripolar over monopolar mode at either pitch interval, suggesting that previously published psychophysical advantages for focused modes may not translate into improvements in complex pitch ranking. Evaluation of the spectral centroid of the stimulation pattern, plus a lack of significant difference between male and female voices, suggested that participants may have had difficulty in accessing temporal pitch cues in either mode.
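
    The "spectral centroid of the stimulation pattern" evaluated above is, in general terms, an amplitude-weighted mean of channel (or frequency) position. The snippet below shows the generic calculation with made-up electrode amplitudes; it is an illustration of the concept only, not the study's stimulus data or analysis.

```python
import numpy as np

# Hypothetical per-channel stimulation amplitudes for a 16-electrode array.
channels = np.arange(1, 17)
amplitudes = np.array([0, 0, 1, 3, 7, 9, 6, 3, 1, 0, 0, 0, 0, 0, 0, 0], dtype=float)

# Spectral (channel) centroid: amplitude-weighted mean channel position.
centroid = np.sum(channels * amplitudes) / np.sum(amplitudes)
print(f"stimulation-pattern centroid: channel {centroid:.2f}")
```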

  6. Neural basis of processing threatening voices in a crowded auditory world

    PubMed Central

    Mothes-Lasch, Martin; Becker, Michael P. I.; Miltner, Wolfgang H. R.

    2016-01-01

    In real world situations, we typically listen to voice prosody against a background crowded with auditory stimuli. Voices and background can both contain behaviorally relevant features and both can be selectively in the focus of attention. Adequate responses to threat-related voices under such conditions require that the brain unmixes reciprocally masked features depending on variable cognitive resources. It is unknown which brain systems instantiate the extraction of behaviorally relevant prosodic features under varying combinations of prosody valence, auditory background complexity and attentional focus. Here, we used event-related functional magnetic resonance imaging to investigate the effects of high background sound complexity and attentional focus on brain activation to angry and neutral prosody in humans. Results show that prosody effects in mid superior temporal cortex were gated by background complexity but not attention, while prosody effects in the amygdala and anterior superior temporal cortex were gated by attention but not background complexity, suggesting distinct emotional prosody processing limitations in different regions. Crucially, if attention was focused on the highly complex background, the differential processing of emotional prosody was prevented in all brain regions, suggesting that in a distracting, complex auditory world even threatening voices may go unnoticed. PMID:26884543

  7. Unique voices in harmony: Call-and-response to address race and physics teaching

    NASA Astrophysics Data System (ADS)

    Cochran, Geraldine L.; White, Gary D.

    2017-09-01

    In the February 2016 issue of The Physics Teacher, we announced a call for papers on race and physics teaching. The response was muted at first, but has now grown to a respectable chorale-sized volume. As the manuscripts began to come in and the review process progressed, Geraldine Cochran graciously agreed to come on board as co-editor for this remarkable collection of papers, to be published throughout the fall of 2017 in TPT. Upon reviewing the original call and the responses from the physics community, the parallels between generating this collection and the grand call-and-response tradition became compelling. What follows is a conversation constructed by the co-editors that is intended to introduce the reader to the swell of voices that responded to the original call. The authors would like to thank Pam Aycock for providing many useful contributions to this editorial.

  8. A laryngographic and laryngoscopic study of Northern Vietnamese tones.

    PubMed

    Brunelle, Marc; Nguyên, Duy Duong; Nguyên, Khac Hùng

    2010-01-01

    A laryngographic and laryngoscopic study of tone production in Northern Vietnamese, a language whose tones combine both fundamental frequency (f0) modulations and voice qualities (phonation types), was conducted with 5 male and 5 female speakers. Results show that the f0 contours of Northern Vietnamese tones are not only attributable to changes in vocal fold length and tension (partly through changes in larynx height), but that f0 drops are also largely caused by the glottal configurations responsible for the contrastive voice qualities associated with some of the tones. We also find that voice quality contrasts are mostly due to glottal constriction: they occasionally involve additional ventricular fold incursion and epiglottal constriction, but these articulations are usually absent. Copyright © 2010 S. Karger AG, Basel.

  9. Emotionally conditioning the target-speech voice enhances recognition of the target speech under "cocktail-party" listening conditions.

    PubMed

    Lu, Lingxi; Bao, Xiaohan; Chen, Jing; Qu, Tianshu; Wu, Xihong; Li, Liang

    2018-05-01

    Under a noisy "cocktail-party" listening condition with multiple people talking, listeners can use various perceptual/cognitive unmasking cues to improve recognition of the target speech against informational speech-on-speech masking. One potential unmasking cue is the emotion expressed in a speech voice, by means of certain acoustical features. However, it was unclear whether emotionally conditioning a target-speech voice that has none of the typical acoustical features of emotions (i.e., an emotionally neutral voice) can be used by listeners for enhancing target-speech recognition under speech-on-speech masking conditions. In this study we examined the recognition of target speech against a two-talker speech masker both before and after the emotionally neutral target voice was paired with a loud female screaming sound that has a marked negative emotional valence. The results showed that recognition of the target speech (especially the first keyword in a target sentence) was significantly improved by emotionally conditioning the target speaker's voice. Moreover, the emotional unmasking effect was independent of the unmasking effect of the perceived spatial separation between the target speech and the masker. Also, (skin conductance) electrodermal responses became stronger after emotional learning when the target speech and masker were perceptually co-located, suggesting an increase of listening efforts when the target speech was informationally masked. These results indicate that emotionally conditioning the target speaker's voice does not change the acoustical parameters of the target-speech stimuli, but the emotionally conditioned vocal features can be used as cues for unmasking target speech.

  10. Communication-related affective, behavioral, and cognitive reactions in speakers with spasmodic dysphonia.

    PubMed

    Watts, Christopher R; Vanryckeghem, Martine

    2017-12-01

    To investigate the self-perceived affective, behavioral, and cognitive reactions associated with communication in speakers with spasmodic dysphonia as a function of employment status. Prospective cross-sectional investigation. 148 participants with spasmodic dysphonia (SD) completed an adapted version of the Behavior Assessment Battery (BAB-Voice), a multidimensional assessment of self-perceived reactions to communication. The BAB-Voice consisted of four subtests: the Speech Situation Checklist for (A) Emotional Reaction (SSC-ER) and (B) Speech Disruption (SSC-SD), (C) the Behavior Checklist (BCL), and (D) the Communication Attitude Test for Adults (BigCAT). Participants were assigned to groups based on employment status (working versus retired). Descriptive comparison of the BAB-Voice in speakers with SD to previously published non-dysphonic speaker data revealed substantially higher scores associated with SD across all four subtests. Multivariate Analysis of Variance (MANOVA) revealed no significant differences in BAB-Voice subtest scores as a function of SD group status (working vs. retired). BAB-Voice scores revealed that speakers with SD experienced a substantial impact of their voice disorder on communication attitude, coping behaviors, and affective reactions in speaking situations, as reflected in their high BAB scores. These impacts do not appear to be influenced by work status, as speakers with SD who were employed or retired experienced similar levels of affective and behavioral reactions in various speaking situations and similar cognitive responses. These findings are consistent with previously published pilot data. The specificity of items assessed by means of the BAB-Voice may inform the clinician of valid patient-centered treatment goals which target aspects of the impairment that extend beyond the physiological dimension. Level of evidence: 2b.

  11. Comparing the experience of voices in borderline personality disorder with the experience of voices in a psychotic disorder: A systematic review.

    PubMed

    Merrett, Zalie; Rossell, Susan L; Castle, David J

    2016-07-01

    There is substantial clinical and empirical evidence to suggest that approximately 50% of individuals with borderline personality disorder experience auditory verbal hallucinations. However, there is limited research investigating the phenomenology of these voices. The aim of this study was to review and compare our current understanding of auditory verbal hallucinations in borderline personality disorder with auditory verbal hallucinations in patients with a psychotic disorder, to critically analyse existing studies investigating auditory verbal hallucinations in borderline personality disorder, and to identify gaps in current knowledge, which will help direct future research. The literature was searched using the electronic databases Scopus, PubMed and MEDLINE. Relevant studies were included if they were written in English, were empirical studies specifically addressing auditory verbal hallucinations and borderline personality disorder, were peer reviewed, used only adult humans and a sample comprising individuals with borderline personality disorder as the primary diagnosis, and included a comparison group with a primary psychotic disorder such as schizophrenia. Our search strategy revealed a total of 16 articles investigating the phenomenology of auditory verbal hallucinations in borderline personality disorder. Some studies provided evidence to suggest that the voice experiences in borderline personality disorder are similar to those experienced by people with schizophrenia; for example, they occur inside the head and often involve persecutory voices. Other studies revealed some differences between schizophrenia and borderline personality disorder voice experiences, with the borderline personality disorder voices sounding more derogatory and self-critical in nature and the voice-hearers' responses to the voices being more emotionally resistive. Furthermore, in one study, the schizophrenia group's voices resulted in more disruption to daily functioning. These studies are, however, limited in number and do not provide definitive evidence of these differences. The limited research examining auditory verbal hallucination experiences in borderline personality disorder poses a significant diagnostic and treatment challenge. A deeper understanding of the precise phenomenological characteristics will help us in terms of diagnostic distinction as well as informing treatments. © The Royal Australian and New Zealand College of Psychiatrists 2016.

  12. African Security Challenges: Now and Over the Horizon - Voices from the NGO Community

    DTIC Science & Technology

    2010-11-01

    economies will have a tendency to shift back to dependence on unskilled sectors such as mining, black-market or informal trading and international aid...PEPFAR Watch. Pepfarwatch.org. Rice, A. 2007. “An African Solution.” Nation, June 11. Voices from the NGO Community - 5.19 - African Security...oversight role in this area by parliaments, regardless of formal mandates, roles and responsibilities. In order to avoid the risk of cosmetic changes

  13. Plastic reorganization of neural systems for perception of others in the congenitally blind.

    PubMed

    Fairhall, S L; Porter, K B; Bellucci, C; Mazzetti, M; Cipolli, C; Gobbini, M I

    2017-09-01

    Recent evidence suggests that the function of the core system for face perception might extend beyond visual face-perception to a broader role in person perception. To critically test the broader role of core face-system in person perception, we examined the role of the core system during the perception of others in 7 congenitally blind individuals and 15 sighted subjects by measuring their neural responses using fMRI while they listened to voices and performed identity and emotion recognition tasks. We hypothesised that in people who have had no visual experience of faces, core face-system areas may assume a role in the perception of others via voices. Results showed that emotions conveyed by voices can be decoded in homologues of the core face system only in the blind. Moreover, there was a specific enhancement of response to verbal as compared to non-verbal stimuli in bilateral fusiform face areas and the right posterior superior temporal sulcus showing that the core system also assumes some language-related functions in the blind. These results indicate that, in individuals with no history of visual experience, areas of the core system for face perception may assume a role in aspects of voice perception that are relevant to social cognition and perception of others' emotions. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.

  14. To Hybrid or Not to Hybrid, that Is the Question! Incorporating VoiceThread Technology into a Traditional Communication Course

    ERIC Educational Resources Information Center

    Pecot-Hebert, Lisa

    2012-01-01

    A hybrid course, which combines the face-to-face interactions of a traditional course with the flexibility of an online course, provides an alternative option for educating students in a new media environment. While educators often interact with their students through various electronic learning management systems that are set up within the…

  15. Focusing on Culture-Related Episodes in a Teletandem Interaction between a Brazilian and an American Student

    ERIC Educational Resources Information Center

    Zakir, Maisa A.; Funo, Ludmila B. A.; Telles, João A.

    2016-01-01

    Teletandem is a telecollaborative learning context that involves pairs of native (or competent) speakers of different languages interacting through voice, text and webcam image. Using Skype, each participant plays the role of learner for half an hour, speaking and practising the language of his/her partner. This paper focuses on a teletandem…

  16. "Big Loud Voice. You Have Important Things to Say": The Nature of Student Initiations during One Teacher's Interactive Read-Alouds

    ERIC Educational Resources Information Center

    Maloch, Beth; Beutel, Denise Duncan

    2010-01-01

    This qualitative study explored the nature of student initiations during interactive read alouds of fiction and non-fiction texts in a second grade, urban classroom. Data sources--including expanded field notes, video/audiotape records and transcripts, and teacher interviews--were analyzed inductively, utilizing the constant comparative method and…

  17. Discussion boards: boring no more!

    PubMed

    Adelman, Deborah S; Nogueras, Debra J

    2013-01-01

    Creating discussion boards (DBs) that capture student imaginations and contain meaningful interactions can be a difficult process. Traditional DBs use a question-and-answer format that often is boring for both the student and instructor. The authors present creative approaches to DBs that result in lively debates and student-to-student and student-to-faculty interactions, including role playing, blogging, wikis, and the use of voice.

  18. Web 2.0, Pedagogical Support for Reflexive and Emotional Social Interaction among Swedish Students

    ERIC Educational Resources Information Center

    Augustsson, Gunnar

    2010-01-01

    Collaborative social interaction when using Web 2.0 in terms of VoiceThread is investigated in a case study of a Swedish university course in social psychology. The case study method was chosen because of the desire not to manipulate the students' behaviour, and data was collected in parallel with course implementation. Two particular…

  19. Former Auctioneer Finds Voice After Aphasia

    MedlinePlus

    ... And, people in trials also benefit from the social interaction. They become part of our group and we try to create an enjoyable, welcoming, and supportive environment." Now, five years ... social situations and continue normal activities. Limit your conversation ...

  20. Considerations in the Use of Interactive Voice Recording for the Temporal Assessment of Suicidal Ideation and Alcohol Use.

    PubMed

    Bishop, Todd M; Maisto, Stephen A; Britton, Peter C; Pigeon, Wilfred R

    2016-09-01

    A greater understanding of the temporal variation of suicidal ideation and suicidal behavior is needed to inform more effective prevention efforts. Interactive voice recording (IVR) allows for the study of temporal relationships that cannot be captured with most traditional methodologies. The present study examined the feasibility of implementing IVR for the assessment of suicidal ideation. Participants (n = 4) receiving a brief intervention based on dialectical behavior therapy were asked to respond to three phone-based surveys each day over 6 weeks that assessed suicidal ideation and alcohol consumption. Participants completed 77.7% of daily assessments, reported that calls were not burdensome, and indicated that calls were sometimes helpful in interrupting suicidal ideation. The preliminary data reported here provide optimism for the use of IVR and other forms of ecological momentary assessment in the exploration of the antecedents of suicidal behavior.
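
    As a rough check on the reported compliance (a back-of-the-envelope calculation, not a figure taken from the paper beyond the 77.7% completion rate), three prompted assessments per day over the 6-week protocol imply the following per-participant denominators:

    ```python
    # Scheduled vs. completed IVR assessments per participant (illustrative arithmetic only)
    calls_per_day = 3
    days = 6 * 7                        # 6-week assessment window
    scheduled = calls_per_day * days    # 126 scheduled assessments
    completed = round(scheduled * 0.777)
    print(scheduled, completed)         # 126 scheduled, roughly 98 completed
    ```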

  1. The GuideView System for Interactive, Structured, Multi-modal Delivery of Clinical Guidelines

    NASA Technical Reports Server (NTRS)

    Iyengar, Sriram; Florez-Arango, Jose; Garcia, Carlos Andres

    2009-01-01

    GuideView is a computerized clinical guideline system which delivers clinical guidelines in an easy-to-understand and easy-to-use package. It may potentially enhance the quality of medical care or allow non-medical personnel to provide acceptable levels of care in situations where physicians or nurses may not be available. Such a system can be very valuable during space flight missions when a physician is not readily available, or when the designated medical personnel are unable to provide care. Complex clinical guidelines are broken into simple steps. At each step, clinical information is presented in multiple modes, including voice, audio, text, pictures, and video. Users can respond via mouse clicks or via voice navigation. GuideView can also interact with medical sensors using wireless or wired connections. The system's interface is illustrated and the results of a usability study are presented.
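
    The abstract describes the architecture only at a high level. The sketch below is a hypothetical illustration of how one multi-modal guideline step and its voice-driven navigation could be modelled; the Step class, its field names, and the advance() helper are assumptions for illustration, not the actual GuideView implementation:

    ```python
    from dataclasses import dataclass, field
    from typing import Dict, List, Optional

    @dataclass
    class Step:
        """One simple step of a clinical guideline (hypothetical structure)."""
        step_id: str
        instruction_text: str                                     # shown on screen and read aloud
        media: Dict[str, str] = field(default_factory=dict)       # e.g. {"audio": "step3.wav", "video": "step3.mp4"}
        voice_commands: List[str] = field(default_factory=list)   # e.g. ["next", "repeat", "back"]
        next_step: Optional[str] = None
        sensor_check: Optional[str] = None                        # e.g. "pulse_oximeter" polled before advancing

    def advance(current: Step, response: str, steps: Dict[str, Step]) -> Step:
        """Move through the guideline on a recognised voice command or mouse click."""
        if response == "next" and current.next_step is not None:
            return steps[current.next_step]
        return current  # "repeat" or unrecognised input stays on the same step
    ```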

  2. Designing a spoken dialogue interface to an intelligent cognitive assistant for people with dementia.

    PubMed

    Wolters, Maria Klara; Kelly, Fiona; Kilgour, Jonathan

    2016-12-01

    Intelligent cognitive assistants support people who need help performing everyday tasks by detecting when problems occur and providing tailored and context-sensitive assistance. Spoken dialogue interfaces allow users to interact with intelligent cognitive assistants while focusing on the task at hand. In order to establish requirements for voice interfaces to intelligent cognitive assistants, we conducted three focus groups with people with dementia, carers, and older people without a diagnosis of dementia. Analysis of the focus group data showed that voice and interaction style should be chosen based on the preferences of the user, not those of the carer. For people with dementia, the intelligent cognitive assistant should act like a patient, encouraging guide, while for older people without dementia, assistance should be to the point and not patronising. The intelligent cognitive assistant should be able to adapt to cognitive decline. © The Author(s) 2015.

  3. [Effects of a voice metronome on compression rate and depth in telephone assisted, bystander cardiopulmonary resuscitation: an investigator-blinded, 3-armed, randomized, simulation trial].

    PubMed

    van Tulder, Raphael; Roth, Dominik; Krammel, Mario; Laggner, Roberta; Schriefl, Christoph; Kienbacher, Calvin; Lorenzo Hartmann, Alexander; Novosad, Heinz; Constantin Chwojka, Christof; Havel, Christoph; Schreiber, Wolfgang; Herkner, Harald

    2015-01-01

    We investigated the effect on compression rate and depth of a conventional metronome and a voice metronome in simulated telephone-assisted, protocol-driven bystander cardiopulmonary resuscitation (CPR), compared with standard instruction. Thirty-six lay volunteers performed 10 minutes of compression-only CPR in a prospective, investigator-blinded, 3-arm study on a manikin. Participants were randomized either to standard instruction ("push down firmly, 5 cm"), a regular metronome pacing 110 beats per minute (bpm), or a voice metronome continuously prompting "deep-deep-deep-deeper" at 110 bpm. The primary outcome was deviation from the ideal chest compression target range (50 mm compression depth x 100 compressions per minute x 10 minutes = 50 m). Secondary outcomes were CPR quality measures (compression and leaning depth, rate, no-flow times) and participants' related physiological response (heart rate, blood pressure, and nine-hole peg test and Borg scale scores). We used a linear regression model to calculate effects. The mean (SD) deviation from the ideal target range (50 m) was -11 (9) m in the standard group, -20 (11) m in the conventional metronome group (adjusted difference [95% CI], 9.0 [1.2-17.5] m; P=.03), and -18 (9) m in the voice metronome group (adjusted difference, 7.2 [-0.9 to 15.3] m; P=.08). Secondary outcomes (CPR quality measures and physiological response of participants to CPR performance) showed no significant differences. Compared with standard instruction, the conventional metronome had a significant negative effect on reaching the chest compression target range. The voice metronome showed a non-significant negative effect and therefore cannot be recommended for regular use in telephone-assisted CPR.
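
    For clarity, the primary outcome is simply the cumulative compression depth accrued at the recommended depth and rate; a minimal sketch of that arithmetic (assuming perfectly regular, idealised compressions) is:

    ```python
    # Ideal cumulative compression depth over the 10-minute scenario
    depth_m = 0.050        # 50 mm per compression, in metres
    rate_per_min = 100     # compressions per minute used to define the target
    minutes = 10

    ideal_total = depth_m * rate_per_min * minutes   # 0.05 * 100 * 10 = 50 m

    # The mean deviation of -11 m in the standard group corresponds to ~39 m actually delivered
    standard_group_total = ideal_total - 11
    print(ideal_total, standard_group_total)         # 50.0, 39.0
    ```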

  4. Precision and Disclosure in Text and Voice Interviews on Smartphones

    PubMed Central

    Antoun, Christopher; Ehlen, Patrick; Fail, Stefanie; Hupp, Andrew L.; Johnston, Michael; Vickers, Lucas; Yan, H. Yanna; Zhang, Chan

    2015-01-01

    As people increasingly communicate via asynchronous non-spoken modes on mobile devices, particularly text messaging (e.g., SMS), longstanding assumptions and practices of social measurement via telephone survey interviewing are being challenged. In the study reported here, 634 people who had agreed to participate in an interview on their iPhone were randomly assigned to answer 32 questions from US social surveys via text messaging or speech, administered either by a human interviewer or by an automated interviewing system. 10 interviewers from the University of Michigan Survey Research Center administered voice and text interviews; automated systems launched parallel text and voice interviews at the same time as the human interviews were launched. The key question was how the interview mode affected the quality of the response data, in particular the precision of numerical answers (how many were not rounded), variation in answers to multiple questions with the same response scale (differentiation), and disclosure of socially undesirable information. Texting led to higher quality data—fewer rounded numerical answers, more differentiated answers to a battery of questions, and more disclosure of sensitive information—than voice interviews, both with human and automated interviewers. Text respondents also reported a strong preference for future interviews by text. The findings suggest that people interviewed on mobile devices at a time and place that is convenient for them, even when they are multitasking, can give more trustworthy and accurate answers than those in more traditional spoken interviews. The findings also suggest that answers from text interviews, when aggregated across a sample, can tell a different story about a population than answers from voice interviews, potentially altering the policy implications from a survey. PMID:26060991
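
    The data-quality measures named here (rounding of numerical answers and differentiation across a battery sharing one response scale) can be operationalised in several ways; the heuristics below are a hedged illustration, not the authors' actual coding scheme:

    ```python
    from typing import List

    def looks_rounded(answer: float) -> bool:
        """Crude heuristic: treat multiples of 10 as rounded numerical answers."""
        return answer % 10 == 0

    def differentiation(battery: List[int]) -> float:
        """Share of distinct scale points used across same-scale items.
        Higher values indicate more differentiated (less straight-lined) answers."""
        return len(set(battery)) / len(battery)

    # Example: a respondent answering 40, 40, 40, 60 on a 0-100 scale
    print(looks_rounded(40))                  # True
    print(differentiation([40, 40, 40, 60]))  # 0.5
    ```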

  5. Vocal Qualities in Music Theater Voice: Perceptions of Expert Pedagogues.

    PubMed

    Bourne, Tracy; Kenny, Dianna

    2016-01-01

    The aim was to gather qualitative descriptions of music theater vocal qualities, including belt, legit, and mix, from expert pedagogues in order to better define this voice type. The study used a prospective, semistructured interview design. Twelve expert teachers from the United States, United Kingdom, Asia, and Australia were interviewed by Skype and asked to identify characteristics of music theater vocal qualities, including vocal production, physiology, esthetics, pitch range, and pedagogical techniques. Responses were compared with published studies on music theater voice. Belt and legit were generally described as distinct sounds with differing physiological and technical requirements. Teachers were concerned that belt should be taught "safely" to minimize vocal health risks. There was consensus between teachers and published research on the physiology of the glottis and vocal tract; however, teachers were not in agreement about breathing techniques. Neither were teachers in agreement about the meaning of "mix." Most participants described belt as heavily weighted, thick folds, thyroarytenoid-dominant, or chest register; however, there was no consensus on an appropriate term. Belt substyles were named and generally categorized by weightedness or tone color. Descriptions of male belt were less clear than those of female belt. This survey provides an overview of expert pedagogical perspectives on the characteristics of belt, legit, and mix qualities in the music theater voice. Although teacher responses are generally in agreement with published research, there are still many controversial issues and gaps in knowledge and understanding of this vocal technique. Breathing techniques, vocal range, mix, male belt, and vocal registers require continuing investigation so that we can learn more about efficient and healthy vocal function in music theater singing. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  6. Precision and Disclosure in Text and Voice Interviews on Smartphones.

    PubMed

    Schober, Michael F; Conrad, Frederick G; Antoun, Christopher; Ehlen, Patrick; Fail, Stefanie; Hupp, Andrew L; Johnston, Michael; Vickers, Lucas; Yan, H Yanna; Zhang, Chan

    2015-01-01

    As people increasingly communicate via asynchronous non-spoken modes on mobile devices, particularly text messaging (e.g., SMS), longstanding assumptions and practices of social measurement via telephone survey interviewing are being challenged. In the study reported here, 634 people who had agreed to participate in an interview on their iPhone were randomly assigned to answer 32 questions from US social surveys via text messaging or speech, administered either by a human interviewer or by an automated interviewing system. 10 interviewers from the University of Michigan Survey Research Center administered voice and text interviews; automated systems launched parallel text and voice interviews at the same time as the human interviews were launched. The key question was how the interview mode affected the quality of the response data, in particular the precision of numerical answers (how many were not rounded), variation in answers to multiple questions with the same response scale (differentiation), and disclosure of socially undesirable information. Texting led to higher quality data-fewer rounded numerical answers, more differentiated answers to a battery of questions, and more disclosure of sensitive information-than voice interviews, both with human and automated interviewers. Text respondents also reported a strong preference for future interviews by text. The findings suggest that people interviewed on mobile devices at a time and place that is convenient for them, even when they are multitasking, can give more trustworthy and accurate answers than those in more traditional spoken interviews. The findings also suggest that answers from text interviews, when aggregated across a sample, can tell a different story about a population than answers from voice interviews, potentially altering the policy implications from a survey.

  7. Interference effects of vocalization on dual task performance

    NASA Astrophysics Data System (ADS)

    Owens, J. M.; Goodman, L. S.; Pianka, M. J.

    1984-09-01

    Voice command and control systems have been proposed as a potential means of off-loading the typically overburdened visual information processing system. However, prior to introducing novel human-machine interfacing technologies in high workload environments, consideration must be given to the integration of the new technologies within existing task structures to ensure that no new sources of workload or interference are systematically introduced. This study examined the use of voice interactive systems technology in the joint performance of two cognitive information processing tasks requiring continuous memory and choice reaction, wherein a basis for intertask interference might be expected. Stimuli for the continuous memory task were presented aurally, and either voice or keyboard responding was required in the choice reaction task. Performance was significantly degraded in each task when voice responding was required in the choice reaction time task. Performance degradation was evident in higher error scores for both the choice reaction and continuous memory tasks. Performance decrements observed under conditions of high intertask stimulus similarity were not statistically significant. The results signal the need to consider further the task requirements for verbal short-term memory when applying speech technology in multitask environments.

  8. Conceptual Sound System Design for Clifford Odets' "GOLDEN BOY"

    NASA Astrophysics Data System (ADS)

    Yang, Yen Chun

    There are two different aspects to the process of sound design: "Arts" and "Science". In my opinion, the sound design should engage both aspects strongly and in interaction with each other. I started the process of designing the sound for GOLDEN BOY by building the city soundscape of New York City in 1937. The scenic design for this piece is in the round, putting the audience all around the stage; this gave me a great opportunity to use surround and spatialization techniques to transform the space into a different sonic world. My spatialization design is composed of two subsystems -- one is the four (4) speaker center cluster diffusing towards the four (4) sections of the audience, and the other is the four (4) speakers on the four (4) corners of the theatre. The outside ring provides rich sound source localization, and the inside ring provides more support for control of the spatialization details. In my design, four (4) lavalier microphones are hung under the center iron cage from the four (4) corners of the stage. Each microphone is ten (10) feet above the stage. The signal for each microphone is sent to the two (2) center speakers in the cluster diagonally opposite the microphone. With appropriate level adjustment of the microphones, the audience will not notice the amplification of the voices; however, through my spatialization system, the presence and location of all the actors' voices are clearly preserved for the audience. With such vocal reinforcement provided by the microphones, I no longer need to worry about the underscoring overwhelming the dialogue on stage. A successful sound system design should not only provide a functional system, but also take on the responsibility of bringing the actors' voices to the audience and engaging the audience with the world that we create on stage. By designing a system which reinforces the actors' voices while at the same time providing control over the localization and movement of sound effects, I was able not only to make the text present and clear for the audience, but also to support the storyline strongly through my composed music, environmental soundscapes, and underscoring.

  9. The effect of auditory verbal imagery on signal detection in hallucination-prone individuals

    PubMed Central

    Moseley, Peter; Smailes, David; Ellison, Amanda; Fernyhough, Charles

    2016-01-01

    Cognitive models have suggested that auditory hallucinations occur when internal mental events, such as inner speech or auditory verbal imagery (AVI), are misattributed to an external source. This has been supported by numerous studies indicating that individuals who experience hallucinations tend to perform in a biased manner on tasks that require them to distinguish self-generated from non-self-generated perceptions. However, these tasks have typically been of limited relevance to inner speech models of hallucinations, because they have not manipulated the AVI that participants used during the task. Here, a new paradigm was employed to investigate the interaction between imagery and perception, in which a healthy, non-clinical sample of participants were instructed to use AVI whilst completing an auditory signal detection task. It was hypothesized that AVI-usage would cause participants to perform in a biased manner, therefore falsely detecting more voices in bursts of noise. In Experiment 1, when cued to generate AVI, highly hallucination-prone participants showed a lower response bias than when performing a standard signal detection task, being more willing to report the presence of a voice in the noise. Participants not prone to hallucinations performed no differently between the two conditions. In Experiment 2, participants were not specifically instructed to use AVI, but retrospectively reported how often they engaged in AVI during the task. Highly hallucination-prone participants who retrospectively reported using imagery showed a lower response bias than did participants with lower proneness who also reported using AVI. Results are discussed in relation to prominent inner speech models of hallucinations. PMID:26435050
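
    The "response bias" referred to here is the standard signal detection criterion. Below is a minimal sketch of how bias (c) and sensitivity (d') are conventionally computed from hit and false-alarm rates; the rates are invented examples, not data from the study:

    ```python
    from scipy.stats import norm

    def sdt_measures(hit_rate: float, fa_rate: float):
        """Return sensitivity (d-prime) and criterion (c) from hit and false-alarm rates."""
        z_h, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
        d_prime = z_h - z_fa
        criterion = -0.5 * (z_h + z_fa)  # lower (more negative) c = more liberal reporting of a voice
        return d_prime, criterion

    # Hypothetical: the AVI condition yields more false alarms, hence a lower (more liberal) criterion
    print(sdt_measures(hit_rate=0.70, fa_rate=0.30))   # baseline detection task
    print(sdt_measures(hit_rate=0.72, fa_rate=0.45))   # AVI condition
    ```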

  10. Responsive Evaluation in the Interference Zone between System and Lifeworld

    ERIC Educational Resources Information Center

    Abma, Tineke A.; Leyerzapf, Hannah; Landeweer, Elleke

    2017-01-01

    Responsive evaluation honors democratic and participatory values and intends to foster dialogues among stakeholders to include their voices and enhance mutual understandings. The question explored in this article is whether and how responsive evaluation can offer a platform for moral learning ("Bildung") in the interference zone between…

  11. Three input concepts for flight crew interaction with information presented on a large-screen electronic cockpit display

    NASA Technical Reports Server (NTRS)

    Jones, Denise R.

    1990-01-01

    A piloted simulation study was conducted comparing three different input methods for interfacing to a large-screen, multiwindow, whole-flight-deck display for management of transport aircraft systems. The thumball concept utilized a miniature trackball embedded in a conventional side-arm controller. The touch screen concept provided data entry through a capacitive touch screen. The voice concept utilized a speech recognition system with input through a head-worn microphone. No single input concept emerged as the most desirable method of interacting with the display. Subjective results, however, indicate that the voice concept was the most preferred method of data entry and had the most potential for future applications. The objective results indicate that, overall, the touch screen concept was the most effective input method. There were also significant differences in the time required to perform specific tasks depending on the input concept employed, with each concept providing better performance on particular tasks. These results suggest that a system combining all three input concepts might provide the most effective method of interaction.

  12. Nonverbal behavior of vendors in customer-vendor interaction.

    PubMed

    Amsbary, J H; Powell, L

    2007-04-01

    Two research questions were posed on the homophily theory of customer-vendor interactions: (a) do vendors show any nonverbal preference for Euro-American or African-American customers?; (b) do vendors demonstrate any nonverbal preference for customers with whom they share racial homophily? The results supported the homophily theory for Euro-American customers in that there were significant interaction effects by race in facial expression (F = 5.33, p < .05), amount of speaking (F = 6.76, p < .01), tone of voice (F = 7.62, p < .01), and touching (F = 4.57, p < .05). Vendor behavior varied when the customer was Euro-American, with Euro-American vendors smiling more frequently (M = 4.05) than African-American vendors (M = 3.69), speaking more frequently (M = 3.57) than African-American vendors (M = 3.09), using a more friendly tone of voice (M = 3.59), and engaging in more touching behaviors (M = 1.81) than African-American vendors (M = 1.48). There was no significant difference in the behavior of Euro-American and African-American vendors when the customer was African-American.

  13. Parents' Evaluations of Their Children's Dysphonia: The Mamas and the Papas.

    PubMed

    Amir, Ofer; Wolf, Michael; Mick, Liron; Levi, Omer; Primov-Fever, Adi

    2015-07-01

    This study aimed to evaluate the validity and reliability of a Hebrew translation of the Pediatric Voice Handicap Index (pVHI). It also examined differences between mothers and fathers in evaluating their child's dysphonia. The study used an observational design. The pVHI was first translated and adapted to Hebrew. The translated version was then administered to a group of 141 parents of children aged younger than 14 years. Fifty-eight parents had a dysphonic child, and 83 had a nondysphonic child. Based on the parents' responses to the pVHI, statistical analyses were performed to evaluate validity and reliability, as well as group differences. Subsequently, a subset of the participants, comprising only cases in which the responses of both parents were available, was examined to evaluate differences between the responses of mothers (n = 46) and fathers (n = 46). Statistical analyses revealed high reliability of the Hebrew version of the pVHI (Cronbach alpha = .97). Parents of the dysphonic children rated their children significantly higher than parents of the nondysphonic group (P < 0.001). Mothers of the dysphonic children rated their children significantly higher than the fathers on all subscales of the questionnaire (0.001 ≤ P < 0.047). In contrast, no significant differences were found between mothers and fathers of the nondysphonic children (P > 0.05). The Hebrew version of the pVHI is a reliable tool for quantifying parents' perception of their child's voice handicap. Mothers of dysphonic children evaluate their children's voice handicap more severely than fathers, whereas both parents of nondysphonic children perform this evaluation similarly. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
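
    The internal-consistency figure reported (Cronbach alpha = .97) follows the standard formula; a short, generic sketch (not the authors' analysis code) is:

    ```python
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: respondents x items matrix of pVHI-style ratings."""
        k = items.shape[1]
        item_variance_sum = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variance_sum / total_variance)

    # Toy example with made-up ratings from four parents on three items
    ratings = np.array([[1, 2, 1], [3, 3, 4], [0, 1, 1], [4, 4, 3]])
    print(round(cronbach_alpha(ratings), 2))   # ~0.94 for this toy matrix
    ```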

  14. The Effects of Rate of Deviation and Musical Context on Intonation Perception in Homophonic Four-Part Chorales.

    NASA Astrophysics Data System (ADS)

    Bell, Michael Stephen

    Sixty-four trained musicians listened to four-bar excerpts of selected chorales by J. S. Bach, which were presented both in four-part texture (harmonic context) and as a single voice part (melodic context). These digitally synthesized examples were created by combining the first twelve partials, and all voice parts had the same generic timbre. A within-subjects design was used, so subjects heard each example in both contexts. Included in the thirty-two excerpts for each subject were four soprano, four alto, four tenor, and four bass parts as the target voices. The intonation of the target voice was varied such that the voice stayed in tune or changed by a half cent, two cents, or eight cents per second (a cent is 1/100 of a half step). Although the direction of the deviation (sharp or flat) was not a significant factor in intonation perception, main effects for context (melodic vs. harmonic) and rate of deviation were highly significant, as was the interaction between rate of deviation and context. Specifically, selections that stayed in tune or changed only by half cents were not perceived differently; for larger deviations, the error was detected earlier and the intonation was judged to be worse in the harmonic contexts compared with the melodic contexts. Additionally, the direction of the error was correctly identified in the melodic context more often than in the harmonic context only for the examples that mistuned at a rate of eight cents per second. Correct identification of the voice part that went out of tune in the four-part textures depended only on the rate of deviation: the in-tune excerpts (no voice going out of tune) and the eight-cent deviations were correctly identified most often, the two-cent deviations were next, and the half-cent deviation excerpts were the least accurately identified.
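
    Because the stimulus manipulation is specified in cents, the conventional conversion between a frequency ratio and cents may be helpful; this is the standard definition, not study-specific code:

    ```python
    import math

    def cents(f_reference: float, f_detuned: float) -> float:
        """Interval between two frequencies in cents (1 cent = 1/100 of an equal-tempered semitone)."""
        return 1200 * math.log2(f_detuned / f_reference)

    # A voice drifting at 8 cents per second is ~32 cents sharp after 4 seconds:
    a4 = 440.0
    drift = 8 * 4  # cents accumulated over 4 seconds
    print(cents(a4, a4 * 2 ** (drift / 1200)))   # ~32.0
    ```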

  15. Contributions of speech science to the technology of man-machine voice interactions

    NASA Technical Reports Server (NTRS)

    Lea, Wayne A.

    1977-01-01

    Research in speech understanding was reviewed. Plans which include prosodics research, phonological rules for speech understanding systems, and continued interdisciplinary phonetics research are discussed. Improved acoustic phonetic analysis capabilities in speech recognizers are suggested.

  16. Humans and Their Impact on Cyber Agility

    DTIC Science & Technology

    2012-06-01

    clearly). - Range of media across which these interactions occur (e.g. voice, email, video conferencing and whiteboards) - Collaborations (working...abilities that they need to accomplish the task at hand Combat Assessment Battle Damage Assessment (BDA) + Munitions Effectiveness Assessment (MEA), a

  17. An Investigation of the Application of Voice Input/Output Technology in the COINS Network Control Center,

    DTIC Science & Technology

    1982-03-01

    [Ref. 13: p. 27]. There are some connected-speech recognizers on the market today but they are expensive ($50,0-$10e,200) and their capabilities have...readout, and stock market quotations [Ref. 17: p. 6]. The second voice response technique, formant synthesis, uses a method in which a word library (again...users. Marketing brochures, therefore, should be looked at rather carefully, the best guarantee of recognition accuracy being a test with the desired

  18. Group-Level Analysis on Multiplayer Game Collaboration: How Do the Individuals Shape the Group Interaction?

    ERIC Educational Resources Information Center

    Bluemink, Johanna; Hamalainen, Raija; Manninen, Tony; Jarvela, Sanna

    2010-01-01

    In this study, the aim was to examine how small-group collaboration is shaped by individuals interacting in a virtual multiplayer game. The data were collected from a design experiment in which six randomly divided groups of four university students played a voice-enhanced game lasting about 1 h. The "eScape" game was a social action adventure…

  19. Reading Fluency through Alternative Text: Rereading with an Interact Sing-to-Read Program Embedded within a Middle School Music Classroom

    ERIC Educational Resources Information Center

    Biggs, Marie C.; Watkins, Nancy A.

    2008-01-01

    Singing exaggerates the language of reading. The students find their voices in the rhythm and bounce of language by using music as an alternative technological approach to reading. A concurrent mixed methods study was conducted to investigate the use of an interactive sing-to-read program Tune Into Reading (Electronic Learning Products, 2006)…

  20. Bigdata Oriented Multimedia Mobile Health Applications.

    PubMed

    Lv, Zhihan; Chirivella, Javier; Gagliardo, Pablo

    2016-05-01

    In this paper, two mHealth applications are introduced, which can be employed as the terminals of a bigdata-based health service to collect information for electronic medical records (EMRs). The first is a hybrid system for improving the user experience in the hyperbaric oxygen chamber by means of 3D stereoscopic virtual reality glasses and immersive perception. Several HMDs have been tested and compared. The second application is a voice-interactive serious game, a likely solution for providing an assistive rehabilitation tool for therapists. Recordings of the patients' voices could be analysed to evaluate long-term rehabilitation results and, further, to predict the rehabilitation process.
