Arterial chemoradiotherapy for carcinomas of the external auditory canal and middle ear.
Fujiwara, Masayuki; Yamamoto, Satoshi; Doi, Hiroshi; Takada, Yasuhiro; Odawara, Soichi; Niwa, Yasue; Ishikura, Reiichi; Kamikonya, Norihiko; Terada, Tomonori; Uwa, Nobuhiro; Sagawa, Kosuke; Hirota, Shozo
2015-03-01
The purpose of this study was to estimate the efficacy of superselective arterial chemoradiotherapy for locally advanced carcinomas of the external auditory canal and middle ear. This was a retrospective review of clinical data from consecutive patients with locally advanced carcinomas of the external auditory canal and middle ear. Thirteen patients with locally advanced carcinomas of the external auditory canal and middle ear (T3: one patient, T4: 12 patients) were reviewed. The median follow-up duration in the living patients was 33 months. The total dose of radiation therapy was 60 Gy using conventional fractionation. Four, five, or six courses of a superselective arterial infusion (cisplatin 50 mg) were given weekly. The overall survival and progression-free survival rates at 2 years, calculated by the Kaplan-Meier method, were 58.7% and 53.8%, respectively. No late-phase adverse effects due to chemoradiation and no adverse effects due to catheterization were observed. These results suggest that superselective arterial chemoradiation can be a treatment option for locally advanced carcinomas of the external auditory canal and middle ear. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.
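The 2-year survival figures above come from Kaplan-Meier estimation. As a minimal illustration of how such an estimate is built up from censored follow-up times (the durations and event indicators below are hypothetical, not the study's data):

```python
# Minimal Kaplan-Meier estimator (hypothetical follow-up data, in months).
# Each tuple is (time_to_event_or_censoring, event_observed): 1 = event, 0 = censored.
data = [(7, 1), (12, 1), (18, 0), (24, 1), (33, 0), (40, 0), (15, 1), (28, 0)]

def kaplan_meier(observations):
    """Return a list of (time, survival_probability) steps."""
    observations = sorted(observations)
    at_risk = len(observations)
    survival = 1.0
    curve = []
    i = 0
    while i < len(observations):
        t = observations[i][0]
        deaths = sum(1 for (time, event) in observations if time == t and event == 1)
        n_at_t = sum(1 for (time, event) in observations if time == t)
        if deaths > 0:
            survival *= (at_risk - deaths) / at_risk
            curve.append((t, survival))
        at_risk -= n_at_t
        i += n_at_t
    return curve

for t, s in kaplan_meier(data):
    print(f"t = {t:>3} months  S(t) = {s:.3f}")
```

Each event time multiplies the running survival probability by the fraction of at-risk patients who did not have the event; censored patients simply leave the risk set without changing the estimate.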
Auditory Hallucinations as Translational Psychiatry: Evidence from Magnetic Resonance Imaging.
Hugdahl, Kenneth
2017-12-01
In this invited review article, I present a translational perspective and overview of our research on auditory hallucinations in schizophrenia at the University of Bergen, Norway, with a focus on the neuronal mechanisms underlying the phenomenology of experiencing "hearing voices". An auditory verbal hallucination (i.e. hearing a voice) is defined as a sensory experience in the absence of a corresponding external sensory source that could explain the phenomenological experience. I suggest a general frame or scheme for the study of auditory verbal hallucinations, called Levels of Explanation. Using a Levels of Explanation approach, mental phenomena can be described and explained at different levels (cultural, clinical, cognitive, brain-imaging, cellular and molecular). Another way of saying this is that, to advance knowledge in a research field, it is not only necessary to replicate findings, but also to show how evidence obtained with one method, and at one level of explanation, converges with evidence obtained with another method at another level. To achieve breakthroughs in our understanding of auditory verbal hallucinations, we have to advance vertically through the various levels, rather than the more common approach of staying at our favourite level and advancing horizontally (e.g., more advanced techniques and data acquisition analyses). The horizontal expansion will, however, not advance a deeper understanding of how an auditory verbal hallucination spontaneously starts and stops. Finally, I present data from the clinical, cognitive, brain-imaging, and cellular levels, where data from one level validate and support data at another level, called converging of evidence. Using a translational approach, the current status of auditory verbal hallucinations is that they implicate speech perception areas in the left temporal lobe, impairing perception of and attention to external sounds. Preliminary results also show that amygdala is implicated in the emotional «colouring» of the voices and that excitatory neurotransmitters might be involved. What we do not know is why hallucinatory episodes occur spontaneously, why they fluctuate over time, and what makes them spontaneously stop. Moreover, is voice hearing a category or dimension in its own right, independent of diagnosis, and why is the auditory modality predominantly implicated in psychotic disorders, while the visual modality dominates in, for example, neurological diseases?
Auditory Hallucinations as Translational Psychiatry: Evidence from Magnetic Resonance Imaging
Hugdahl, Kenneth
2017-01-01
In this invited review article, I present a translational perspective and overview of our research on auditory hallucinations in schizophrenia at the University of Bergen, Norway, with a focus on the neuronal mechanisms underlying the phenomenology of experiencing “hearing voices”. An auditory verbal hallucination (i.e. hearing a voice) is defined as a sensory experience in the absence of a corresponding external sensory source that could explain the phenomenological experience. I suggest a general frame or scheme for the study of auditory verbal hallucinations, called Levels of Explanation. Using a Levels of Explanation approach, mental phenomena can be described and explained at different levels (cultural, clinical, cognitive, brain-imaging, cellular and molecular). Another way of saying this is that, to advance knowledge in a research field, it is not only necessary to replicate findings, but also to show how evidence obtained with one method, and at one level of explanation, converges with evidence obtained with another method at another level. To achieve breakthroughs in our understanding of auditory verbal hallucinations, we have to advance vertically through the various levels, rather than the more common approach of staying at our favourite level and advancing horizontally (e.g., more advanced techniques and data acquisition analyses). The horizontal expansion will, however, not advance a deeper understanding of how an auditory verbal hallucination spontaneously starts and stops. Finally, I present data from the clinical, cognitive, brain-imaging, and cellular levels, where data from one level validate and support data at another level, called converging of evidence. Using a translational approach, the current status of auditory verbal hallucinations is that they implicate speech perception areas in the left temporal lobe, impairing perception of and attention to external sounds. Preliminary results also show that amygdala is implicated in the emotional «colouring» of the voices and that excitatory neurotransmitters might be involved. What we do not know is why hallucinatory episodes occur spontaneously, why they fluctuate over time, and what makes them spontaneously stop. Moreover, is voice hearing a category or dimension in its own right, independent of diagnosis, and why is the auditory modality predominantly implicated in psychotic disorders, while the visual modality dominates in, for example, neurological diseases? PMID:29019460
External auditory exostoses and hearing loss in the Shanidar 1 Neandertal
2017-01-01
The Late Pleistocene Shanidar 1 older adult male Neandertal is known for the crushing fracture of his left orbit with a probable reduction in vision, the loss of his right forearm and hand, and evidence of an abnormal gait, as well as probable diffuse idiopathic skeletal hyperostosis. He also exhibits advanced external auditory exostoses in his left auditory meatus and larger ones with complete bridging across the porus in the right meatus (both Grade 3). These growths indicate at least unilateral conductive hearing loss (CHL), a serious sensory deprivation for a Pleistocene hunter-gatherer. This condition joins the meatal atresia of the Middle Pleistocene Atapuerca-SH Cr.4 in providing evidence of survival with conductive hearing loss (and hence serious sensory deprivation) among these Pleistocene humans. The presence of CHL in these fossils thereby reinforces the paleobiological and archeological evidence for supporting social matrices among these Pleistocene foraging peoples. PMID:29053746
APEX 3: a multi-purpose test platform for auditory psychophysical experiments.
Francart, Tom; van Wieringen, Astrid; Wouters, Jan
2008-07-30
APEX 3 is a software test platform for auditory behavioral experiments. It provides a generic means of setting up experiments without any programming. The supported output devices include sound cards and cochlear implants from Cochlear Corporation and Advanced Bionics Corporation. Many psychophysical procedures are provided and there is an interface to add custom procedures. Plug-in interfaces are provided for data filters and external controllers. APEX 3 is supported under Linux and Windows and is available free of charge.
Effect of training and level of external auditory feedback on the singing voice: volume and quality
Bottalico, Pasquale; Graetzer, Simone; Hunter, Eric J.
2015-01-01
Background Previous research suggests that classically trained professional singers rely not only on external auditory feedback but also on proprioceptive feedback associated with internal voice sensitivities. Objectives The Lombard Effect in singers and the relationship between Sound Pressure Level (SPL) and external auditory feedback were evaluated for professional and non-professional singers. Additionally, the relationship between voice quality, evaluated in terms of Singing Power Ratio (SPR), and external auditory feedback, level of accompaniment, voice register and singer gender was analyzed. Methods The subjects were 10 amateur or beginner singers, and 10 classically-trained professional or semi-professional singers (10 males and 10 females). Subjects sang an excerpt from the Star-Spangled Banner with three different levels of the accompaniment, 70, 80 and 90 dBA, and with three different levels of external auditory feedback. SPL and the SPR were analyzed. Results The Lombard Effect was stronger for non-professional singers than professional singers. Higher levels of external auditory feedback were associated with a reduction in SPL. As predicted, the mean SPR was higher for professional than non-professional singers. Better voice quality was detected in the presence of higher levels of external auditory feedback. Conclusions With an increase in training, the singer's reliance on external auditory feedback decreases. PMID:26186810
Effect of Training and Level of External Auditory Feedback on the Singing Voice: Volume and Quality.
Bottalico, Pasquale; Graetzer, Simone; Hunter, Eric J
2016-07-01
Previous research suggests that classically trained professional singers rely not only on external auditory feedback but also on proprioceptive feedback associated with internal voice sensitivities. The Lombard effect and the relationship between sound pressure level (SPL) and external auditory feedback were evaluated for professional and nonprofessional singers. Additionally, the relationship between voice quality, evaluated in terms of singing power ratio (SPR), and external auditory feedback, level of accompaniment, voice register, and singer gender was analyzed. The subjects were 10 amateur or beginner singers and 10 classically trained professional or semiprofessional singers (10 men and 10 women). Subjects sang an excerpt from the Star-Spangled Banner with three different levels of the accompaniment, 70, 80, and 90 dBA and with three different levels of external auditory feedback. SPL and SPR were analyzed. The Lombard effect was stronger for nonprofessional singers than professional singers. Higher levels of external auditory feedback were associated with a reduction in SPL. As predicted, the mean SPR was higher for professional singers than nonprofessional singers. Better voice quality was detected in the presence of higher levels of external auditory feedback. With an increase in training, the singer's reliance on external auditory feedback decreases. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Dislocation of the incus into the external auditory canal after mountain-biking accident.
Saito, T; Kono, Y; Fukuoka, Y; Yamamoto, H; Saito, H
2001-01-01
We report a rare case of incus dislocation to the external auditory canal after a mountain-biking accident. Otoscopy showed ossicular protrusion in the upper part of the left external auditory canal. CT indicated the disappearance of the incus, and an incus-like bone was found in the left external auditory canal. There was another bony and board-like structure in the attic. During the surgery, a square-shaped bony plate (1 x 1 cm) was found in the attic. It was determined that the bony plate had fallen from the tegmen of the attic. The fracture line in the posterosuperior auditory canal extending to the fossa incudis was identified. According to these findings, it was considered that the incus was pushed into the external auditory canal by the impact of skull injury through the fractured posterosuperior auditory canal, which opened widely enough for incus dislocation. Copyright 2001 S. Karger AG, Basel
First branchial cleft sinus presenting with cholesteatoma and external auditory canal atresia.
Yalçin, Sinasi; Karlidağ, Turgut; Kaygusuz, Irfan; Demirbağ, Erhan
2003-07-01
First branchial cleft abnormalities are rare. They may involve the external auditory canal and middle ear. We describe a 6-year-old girl with congenital external auditory canal atresia, microtia, and cholesteatoma of the mastoid and middle ear in addition to the first branchial cleft abnormalities. Clinical features of the patient are briefly described and the embryological relationship between first branchial cleft anomaly and external auditory canal atresia is discussed. Surgical management of these lesions may include both complete excision of the sinus and reconstructive otologic surgery.
Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans
Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro
2015-01-01
Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement. PMID:26132703
Ikeda, Ryoukichi; Tateda, Masaru; Okoshi, Akira; Morita, Shinkichi; Suzuki, Hiroyoshi; Hashimoto, Sho
2016-02-01
Leiomyoma usually originates from the uterus and alimentary tract, but in extremely rare cases leiomyoma can appear in the external auditory canal. Here we present a 37-year-old man with right auricular fullness. Preoperative findings suggested benign tumor or cholesteatoma in the right external auditory canal. We performed total resection using an endoauricular approach with transcanal endoscopic ear surgery. Histopathological and immunohistochemistry examination confirmed the diagnosis of leiomyoma of the external auditory canal. Leiomyoma arising from soft tissue, including that in the external auditory canal, is classified into two types: that arising from the arrectores pilorum muscles and that from the muscle coats of blood vessels. Only four cases of leiomyoma of the external auditory canal have been published in the English literature, all of which were vascular leiomyomas. This is the first report of leiomyoma of the EAC arising from the arrectores pilorum muscles. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Abdollahi fakhim, Shahin; Naderpoor, Masoud; Mousaviagdas, Mehrnoosh
2014-01-01
Introduction: First branchial cleft anomalies manifest with duplication of the external auditory canal. Case Report: This report features a rare case of microtia and congenital middle ear and canal cholesteatoma with first branchial fistula. External auditory canal stenosis was complicated by middle ear and external canal cholesteatoma, but branchial fistula, opening in the zygomatic root and a sinus in the helical root, may explain this feature. A canal wall down mastoidectomy with canaloplasty and wide meatoplasty was performed. The branchial cleft was excised through parotidectomy and facial nerve dissection. Conclusion: It should be considered that canal stenosis in such cases can induce cholesteatoma formation in the auditory canal and middle ear. PMID:25320705
Abdollahi Fakhim, Shahin; Naderpoor, Masoud; Mousaviagdas, Mehrnoosh
2014-10-01
First branchial cleft anomalies manifest with duplication of the external auditory canal. This report features a rare case of microtia and congenital middle ear and canal cholesteatoma with first branchial fistula. External auditory canal stenosis was complicated by middle ear and external canal cholesteatoma, but branchial fistula, opening in the zygomatic root and a sinus in the helical root, may explain this feature. A canal wall down mastoidectomy with canaloplasty and wide meatoplasty was performed. The branchial cleft was excised through parotidectomy and facial nerve dissection. It should be considered that canal stenosis in such cases can induce cholesteatoma formation in the auditory canal and middle ear.
External auditory canal atresia of probable congenital origin in a dog.
Schmidt, K; Piaia, T; Bertolini, G; De Lorenzi, D
2007-04-01
A nine-month-old Labrador retriever was referred to the Clinica Veterinaria Privata San Marco because of frequent headshaking and downward turning of the right ear. Clinical examination revealed that there was no external acoustic meatus in the right ear. Computed tomography confirmed that the vertical part of the right auditory canal ended blindly, providing a diagnosis of external auditory canal atresia. Cytological examination and culture of fluid from the canal and the bulla revealed only aseptic cerumen; for this reason, it was assumed that the dog was probably affected by a congenital developmental deformity of the external auditory canal. Reconstructive surgery was performed using a "pull-through" technique. Four months after surgery the cosmetic and functional results were satisfactory.
Finite element modelling of sound transmission from outer to inner ear.
Areias, Bruno; Santos, Carla; Natal Jorge, Renato M; Gentil, Fernanda; Parente, Marco Pl
2016-11-01
The ear is one of the most complex organs in the human body. Sound is a sequence of pressure waves, which propagates through a compressible medium such as air. The pinna concentrates the sound waves into the external auditory meatus. In this canal, the sound is conducted to the tympanic membrane. The tympanic membrane transforms the pressure variations into mechanical displacements, which are then transmitted to the ossicles. The vibration of the stapes footplate creates pressure waves in the fluid inside the cochlea; these pressure waves stimulate the hair cells, generating electrical signals which are sent to the brain through the cochlear nerve, where they are decoded. In this work, a three-dimensional finite element model of the human ear is developed. The model incorporates the tympanic membrane, ossicular bones, part of the temporal bone (external auditory meatus and tympanic cavity), middle ear ligaments and tendons, cochlear fluid, skin, ear cartilage, jaw and the air in the external auditory meatus and tympanic cavity. Using the finite element method, the magnitude and the phase angle of the umbo and stapes footplate displacement are calculated. Two slightly different models are used: one model takes into consideration the presence of air in the external auditory meatus while the other does not. The middle ear sound transfer function is determined for a stimulus of 60 dB SPL, applied to the outer surface of the air in the external auditory meatus. The obtained results are compared with previously published data in the literature. This study highlights the importance of the external auditory meatus in sound transmission. The pressure gain is calculated for the external auditory meatus.
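To make the reported output concrete, the sketch below computes the magnitude and phase of displacement for a single-degree-of-freedom mass-spring-damper driven at 60 dB SPL, as a toy stand-in for the full finite element chain; the mass, stiffness, damping, and eardrum-area values are illustrative assumptions, not parameters from the model.

```python
import numpy as np

# Toy 1-DOF mass-spring-damper stand-in for the middle-ear chain (illustrative values only).
m = 2.5e-6       # effective mass, kg (assumed)
k = 1.0e3        # effective stiffness, N/m (assumed)
c = 5.0e-3       # damping, N*s/m (assumed)
area = 6.0e-5    # assumed effective tympanic-membrane area, m^2

p_ref = 20e-6                       # reference pressure, Pa
p = p_ref * 10 ** (60 / 20)         # 60 dB SPL stimulus -> 0.02 Pa
force = p * area                    # driving force amplitude, N

freqs = np.logspace(2, 4, 200)      # 100 Hz .. 10 kHz
omega = 2 * np.pi * freqs
# Complex displacement amplitude X(w) = F / (k - m*w^2 + j*c*w)
X = force / (k - m * omega**2 + 1j * c * omega)

magnitude_nm = np.abs(X) * 1e9      # displacement in nanometres
phase_deg = np.degrees(np.angle(X))
for f, mag, ph in zip(freqs[::50], magnitude_nm[::50], phase_deg[::50]):
    print(f"{f:8.1f} Hz  |X| = {mag:8.3f} nm  phase = {ph:7.1f} deg")
```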
External auditory canal stenosis due to the use of powdered boric acid.
Dündar, Riza; Soy, Fatih Kemal; Kulduk, Erkan; Muluk, Nuray Bayar; Cingi, Cemal
2014-09-01
Acquired stenosis of the external auditory canal (EAC) may occur because of chronic external otitis, recurrent chronic catarrhal otitis media associated with tympanic membrane perforation, chronic dermatitis, tumors, and trauma. Stenosis generally occurs in the bony third of the external auditory canal. In this article, we present 3 cases of acquired EAC stenosis due to previous application of powdered boric acid. Besides presenting the surgical interventions in these cases, we wish to caution physicians to avoid, or to use with great care, powdered boric acid because of the risk of EAC stenosis as a complication.
Sleifer, Pricila; Didoné, Dayane Domeneghini; Keppeler, Ísis Bicca; Bueno, Claudine Devicari; Riesgo, Rudimar dos Santos
2017-01-01
Introduction The tone-evoked auditory brainstem response (tone-ABR) enables differential diagnosis in the evaluation of children up to 12 months of age, including those with external and/or middle ear malformations. The use of frequency-specific auditory stimuli by air and bone conduction allows characterization of the hearing profile. Objective The objective of our study was to compare the results obtained with tone-ABR by air and bone conduction in children up to 12 months of age with agenesis of the external auditory canal. Method The study was cross-sectional, observational, individual, and contemporary. We conducted tone-ABR by air and bone conduction at the frequencies of 500 Hz and 2000 Hz in 32 children, 23 boys, from one to 12 months old, with agenesis of the external auditory canal. Results The tone-ABR thresholds were significantly elevated for air conduction at the frequencies of 500 Hz and 2000 Hz, while the bone-conduction thresholds had normal values in both ears. We found no statistically significant difference between genders and ears for most of the comparisons. Conclusion Conductive hearing loss did not alter the thresholds obtained by bone conduction, but it elevated all air-conduction thresholds. The tone-ABR by bone conduction is an important tool for assessing cochlear integrity in children under 12 months with agenesis of the external auditory canal. PMID:29018492
Is the Role of External Feedback in Auditory Skill Learning Age Dependent?
ERIC Educational Resources Information Center
Zaltz, Yael; Roth, Daphne Ari-Even; Kishon-Rabin, Liat
2017-01-01
Purpose: The purpose of this study is to investigate the role of external feedback in auditory perceptual learning of school-age children as compared with that of adults. Method: Forty-eight children (7-9 years of age) and 64 adults (20-35 years of age) conducted a training session using an auditory frequency discrimination (difference limen for…
Preventing collapse of external auditory meatus during audiometry.
Pearlman, R C
1975-11-01
Occlusion of the external auditory meatus resulting from earphone pressure can produce a pseudoconductive hearing loss. I describe a method for detecting ear canal collapse by otoscopy and I suggest a method of correcting the problem with a polyethylene tube prosthesis.
Tuning in to the Voices: A Multisite fMRI Study of Auditory Hallucinations
Ford, Judith M.; Roach, Brian J.; Jorgensen, Kasper W.; Turner, Jessica A.; Brown, Gregory G.; Notestine, Randy; Bischoff-Grethe, Amanda; Greve, Douglas; Wible, Cynthia; Lauriello, John; Belger, Aysenil; Mueller, Bryon A.; Calhoun, Vincent; Preda, Adrian; Keator, David; O'Leary, Daniel S.; Lim, Kelvin O.; Glover, Gary; Potkin, Steven G.; Mathalon, Daniel H.
2009-01-01
Introduction: Auditory hallucinations or voices are experienced by 75% of people diagnosed with schizophrenia. We presumed that auditory cortex of schizophrenia patients who experience hallucinations is tonically “tuned” to internal auditory channels, at the cost of processing external sounds, both speech and nonspeech. Accordingly, we predicted that patients who hallucinate would show less auditory cortical activation to external acoustic stimuli than patients who did not. Methods: At 9 Functional Imaging Biomedical Informatics Research Network (FBIRN) sites, whole-brain images from 106 patients and 111 healthy comparison subjects were collected while subjects performed an auditory target detection task. Data were processed with the FBIRN processing stream. A region of interest analysis extracted activation values from primary (BA41) and secondary auditory cortex (BA42), auditory association cortex (BA22), and middle temporal gyrus (BA21). Patients were sorted into hallucinators (n = 66) and nonhallucinators (n = 40) based on symptom ratings done during the previous week. Results: Hallucinators had less activation to probe tones in left primary auditory cortex (BA41) than nonhallucinators. This effect was not seen on the right. Discussion: Although “voices” are the anticipated sensory experience, it appears that even primary auditory cortex is “turned on” and “tuned in” to process internal acoustic information at the cost of processing external sounds. Although this study was not designed to probe cortical competition for auditory resources, we were able to take advantage of the data and find significant effects, perhaps because of the power afforded by such a large sample. PMID:18987102
Nodular Fasciitis of External Auditory Canal
Ahn, Jihyun; Kim, Sunyoung; Park, Youngsil
2016-01-01
Nodular fasciitis is a pseudosarcomatous reactive process composed of fibroblasts and myofibroblasts, and it is most common in the upper extremities. Nodular fasciitis of the external auditory canal is rare. To the best of our knowledge, fewer than 20 cases have been reported to date. We present a case of nodular fasciitis arising in the cartilaginous part of the external auditory canal. A 19-year-old man complained of an auricular mass with pruritus. Computed tomography showed a 1.7 cm sized soft tissue mass in the right external auditory canal, and total excision was performed. Histologic examination revealed spindle or stellate cell proliferation in a fascicular and storiform pattern. Lymphoid cells and erythrocytes were intermixed with the tumor cells. The stroma was myxoid to hyalinized with a few microcysts. The tumor cells were immunoreactive for smooth muscle actin, but not for desmin, caldesmon, CD34, S-100, anaplastic lymphoma kinase, and cytokeratin. The patient has been doing well during the 1 year follow-up period. PMID:27304679
Auditory pathways: anatomy and physiology.
Pickles, James O
2015-01-01
This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external, middle ears, and cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream, and a dorsal mainly pattern recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus in the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described. © 2015 Elsevier B.V. All rights reserved.
Woodruff, P W; Wright, I C; Bullmore, E T; Brammer, M; Howard, R J; Williams, S C; Shapleske, J; Rossell, S; David, A S; McGuire, P K; Murray, R M
1997-12-01
The authors explored whether abnormal functional lateralization of temporal cortical language areas in schizophrenia was associated with a predisposition to auditory hallucinations and whether the auditory hallucinatory state would reduce the temporal cortical response to external speech. Functional magnetic resonance imaging was used to measure the blood-oxygenation-level-dependent signal induced by auditory perception of speech in three groups of male subjects: eight schizophrenic patients with a history of auditory hallucinations (trait-positive), none of whom was currently hallucinating; seven schizophrenic patients without such a history (trait-negative); and eight healthy volunteers. Seven schizophrenic patients were also examined while they were actually experiencing severe auditory verbal hallucinations and again after their hallucinations had diminished. Voxel-by-voxel comparison of the median power of subjects' responses to periodic external speech revealed that this measure was reduced in the left superior temporal gyrus but increased in the right middle temporal gyrus in the combined schizophrenic groups relative to the healthy comparison group. Comparison of the trait-positive and trait-negative patients revealed no clear difference in the power of temporal cortical activation. Comparison of patients when experiencing severe hallucinations and when hallucinations were mild revealed reduced responsivity of the temporal cortex, especially the right middle temporal gyrus, to external speech during the former state. These results suggest that schizophrenia is associated with a reduced left and increased right temporal cortical response to auditory perception of speech, with little distinction between patients who differ in their vulnerability to hallucinations. The auditory hallucinatory state is associated with reduced activity in temporal cortical regions that overlap with those that normally process external speech, possibly because of competition for common neurophysiological resources.
Up-regulation of peroxisome proliferator-activated receptor gamma in cholesteatoma.
Hwang, Soon Jae; Kang, Hee Joon; Song, Jae-Jun; Kang, Jae Seong; Woo, Jeong Soo; Chae, Sung Won; Lee, Heung-Man
2006-01-01
To evaluate the localization and expression of peroxisome proliferator-activated receptor (PPAR) gamma in cholesteatoma epithelium. Experimental study. Reverse-transcription polymerase chain reaction was performed on cholesteatoma tissues from 10 adult patients undergoing tympanomastoid surgery for middle ear cholesteatoma and on 10 samples of normal external auditory canal skin tissue. The expression levels of PPARgamma to glyceraldehyde-3-phosphate dehydrogenase transcripts were semiquantified by densitometry. We also characterized the cellular localization of the PPARgamma protein immunohistochemically. Ki-67 was also localized to compare the proliferative activity of cells in cholesteatoma epithelium and in normal external auditory canal skin. PPARgamma mRNA and protein were detected in normal external auditory canal skin and in cholesteatoma epithelium. The expression level of PPARgamma mRNA in cholesteatoma was significantly increased compared with that in normal external auditory canal skin. PPARgamma protein was expressed in cells mainly in the granular and prickle cell layers. However, the intensity of its expression was generally decreased in the parabasal layer of the cholesteatoma epithelium. Ki-67 was expressed in the nuclei of cells in the basal and parabasal layers, and a greater number of cells were Ki-67 immunopositive in cholesteatoma epithelium. PPARgamma is up-regulated in the cholesteatoma epithelium compared with normal external auditory canal skin. These results suggest that PPARgamma may play an important role in the pathogenesis of cholesteatoma.
Is the Role of External Feedback in Auditory Skill Learning Age Dependent?
Zaltz, Yael; Roth, Daphne Ari-Even; Kishon-Rabin, Liat
2017-12-20
The purpose of this study is to investigate the role of external feedback in auditory perceptual learning of school-age children as compared with that of adults. Forty-eight children (7-9 years of age) and 64 adults (20-35 years of age) conducted a training session using an auditory frequency discrimination (difference limen for frequency) task, with external feedback (EF) provided for half of them. Data supported the following findings: (a) Children learned the difference limen for frequency task only when EF was provided. (b) The ability of the children to benefit from EF was associated with better cognitive skills. (c) Adults showed significant learning whether EF was provided or not. (d) In children, within-session learning following training was dependent on the provision of feedback, whereas between-sessions learning occurred irrespective of feedback. EF was found beneficial for auditory skill learning of 7-9-year-old children but not for young adults. The data support the supervised Hebbian model for auditory skill learning, suggesting combined bottom-up internal neural feedback controlled by top-down monitoring. In the case of immature executive functions, EF enhanced auditory skill learning. This study has implications for the design of training protocols in the auditory modality for different age groups, as well as for special populations.
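The abstract does not state the adaptive rule used for the difference limen for frequency (DLF) task, so the sketch below assumes a conventional 2-down/1-up staircase with a simulated listener, purely to illustrate how such a threshold estimate is obtained:

```python
import random

# Sketch of a 2-down/1-up adaptive staircase for a frequency-discrimination (DLF) task.
# The adaptive rule, step size, and listener model are assumptions, not the study's method.

def simulated_listener(delta_f_hz, true_dlf_hz=8.0):
    """Toy listener: answers correctly more often as delta_f exceeds its 'true' DLF."""
    p_correct = 0.5 + 0.5 * (delta_f_hz / (delta_f_hz + true_dlf_hz))
    return random.random() < p_correct

def run_staircase(start_delta=50.0, step_factor=1.5, n_reversals=8):
    delta = start_delta
    correct_in_row = 0
    direction = -1           # start by decreasing delta_f (making the task harder)
    reversals = []
    while len(reversals) < n_reversals:
        if simulated_listener(delta):
            correct_in_row += 1
            if correct_in_row == 2:          # 2 correct in a row -> smaller delta_f
                correct_in_row = 0
                if direction == +1:
                    reversals.append(delta)  # direction change = reversal
                direction = -1
                delta /= step_factor
        else:
            correct_in_row = 0
            if direction == -1:
                reversals.append(delta)
            direction = +1                   # wrong -> larger delta_f
            delta *= step_factor
    # Threshold estimate: geometric mean of the last few reversal points
    tail = reversals[-4:]
    geo_mean = 1.0
    for r in tail:
        geo_mean *= r
    return geo_mean ** (1.0 / len(tail))

print(f"Estimated DLF: {run_staircase():.1f} Hz")
```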
Thakar, A; Deepak, K K; Kumar, S Shyam
2008-10-01
To describe a previously unreported syndrome of recurrent syncopal attacks provoked by light stimulation of the external auditory canal. A 13-year-old girl had been receiving treatment for presumed absence seizures, with inadequate treatment response. Imaging was normal. Careful history taking indicated that the recurrent syncopal attacks were precipitated by external auditory canal stimulation. Targeted autonomic function tests confirmed a hyperactive vagal response, with documented significant bradycardia and lightheadedness, provoked by mild stimulation of the posterior wall of the left external auditory canal. Abstinence from ear scratching led to complete alleviation of symptoms without any pharmacological treatment. Reflex syncope consequent to stimulation of the auricular branch of the vagus nerve is proposed as the pathophysiological mechanism for this previously undocumented syndrome.
Kanemoto, Mari; Asai, Tomohisa; Sugimori, Eriko; Tanno, Yoshihiko
2013-01-01
Previous studies have suggested that a tendency to externalize internal thought is related to auditory hallucinations or even proneness to auditory hallucinations (AHp) in the general population. However, although auditory hallucinations are related to emotional phenomena, few studies have investigated the effect of emotional valence on the aforementioned relationship. In addition, we do not know what component of psychotic phenomena relate to externalizing bias. The current study replicated our previous research, which suggested that individual differences in auditory hallucination-like experiences are strongly correlated with the external misattribution of internal thoughts, conceptualized in terms of false memory, using the Deese–Roediger–McDermott (DRM) paradigm. We found a significant relationship between experimental performance and total scores on the Launay–Slade Hallucination Scale (LSHS). Among the LSHS factors, only vivid mental image, which is said to be a predictor of auditory hallucinations, was significantly related to experimental performance. We then investigated the potential effect of emotional valence using the DRM paradigm. The results indicate that participants with low scores on the LSHS (the low-AHp group in the current study) showed an increased discriminability index (d′) for positive words and a decreased d′ for negative words. However, no effects of emotional valence were found for participants with high LSHS scores (high-AHp group). This study indicated that external misattribution of internal thoughts predicts AHp, and that the high-AHp group showed a smaller emotional valence effect in the DRM paradigm compared with the low-AHp group. We discuss this outcome from the perspective of the dual-process activation-monitoring framework in the DRM paradigm in regard to emotion-driven automatic thought in false memory. PMID:23847517
An EMG Study of the Lip Muscles during Covert Auditory Verbal Hallucinations in Schizophrenia
ERIC Educational Resources Information Center
Rapin, Lucile; Dohen, Marion; Polosan, Mircea; Perrier, Pascal; Loevenbruck, Hélène
2013-01-01
Purpose: "Auditory verbal hallucinations" (AVHs) are speech perceptions in the absence of external stimulation. According to an influential theoretical account of AVHs in schizophrenia, a deficit in inner-speech monitoring may cause the patients' verbal thoughts to be perceived as external voices. The account is based on a…
Spatial Hearing with Incongruent Visual or Auditory Room Cues
Gil-Carvajal, Juan C.; Cubick, Jens; Santurette, Sébastien; Dau, Torsten
2016-01-01
In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical features of the recording environment and listener’s anatomy to be recorded at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli. PMID:27853290
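The rendering step described above amounts to convolving a dry source signal with binaural room impulse responses measured at the listener's ear canals. A minimal sketch of that step, using placeholder noise-burst impulse responses rather than measured BRIRs:

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
rng = np.random.default_rng(0)

# Dry (anechoic) source: 0.5 s of a 500 Hz tone
t = np.arange(int(0.5 * fs)) / fs
dry = np.sin(2 * np.pi * 500 * t)

# Placeholder binaural room impulse responses (BRIRs); a real system would use
# responses measured at the listener's ear canals in the recording room.
def fake_brir(delay_samples, decay=0.995, length=4000):
    h = rng.standard_normal(length) * decay ** np.arange(length)
    h[:delay_samples] = 0.0        # crude interaural delay
    return h

brir_left = fake_brir(delay_samples=0)
brir_right = fake_brir(delay_samples=20)   # ~0.45 ms later at the far ear

# Binaural rendering: convolve the dry signal with each ear's impulse response
left = fftconvolve(dry, brir_left)
right = fftconvolve(dry, brir_right)
binaural = np.stack([left, right], axis=1)  # shape (samples, 2) for headphone playback
print(binaural.shape)
```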
Choi, Jinhyun; Kim, Se-Heon; Koh, Yoon Woo; Choi, Eun Chang; Lee, Chang Geol; Keum, Ki Chang
2017-01-01
The purpose of this study was to evaluate the clinical outcomes of patients treated with radiotherapy (RT) for a carcinoma of the external auditory canal (EAC) and middle ear. The records of 32 patients who received RT from 1990 to 2013 were reviewed retrospectively. The Pittsburgh classification was used to stage all the cancers (early stage, T1/T2 [n=12]; advanced stage, T3/T4 or N positive [n=20]). Twenty-one patients (65.6%) were treated with postoperative RT and 11 patients (34.4%) were treated with definitive RT. The median radiation doses for postoperative and definitive RT were 60 Gy and 64.8 Gy, respectively. Chemotherapy was administered to seven patients (21.9%). The 5-year overall survival and disease-free survival rates for all patients were 57% and 52%, respectively. The disease control rates for the patients with early stage versus advanced stage carcinoma were 55.6% (5/9) and 50% (6/12) in the postoperative RT group and 66.7% (2/3) and 37.5% (3/8) in the definitive RT group, respectively. Overall, 15 cases (14 patients, 46.7%) experienced treatment failure; these failures were classified as local in four cases, regional in one case, and distant in 10 cases. The median follow-up period after RT was 51 months (range, 7 to 286 months). Patients with early stage carcinoma achieved better outcomes when definitive RT was used. Advanced stage carcinoma patients experienced better outcomes with postoperative RT. The high rate of distant failure after RT, with or without surgery, reflected the lack of a consensus regarding the best therapeutic approach for treating carcinoma of the EAC and middle ear.
Tinnitus: causes and clinical management.
Langguth, Berthold; Kreuzer, Peter M; Kleinjung, Tobias; De Ridder, Dirk
2013-09-01
Tinnitus is the perception of sound in the absence of a corresponding external acoustic stimulus. With prevalence ranging from 10% to 15%, tinnitus is a common disorder. Many people habituate to the phantom sound, but tinnitus severely impairs quality of life of about 1-2% of all people. Tinnitus has traditionally been regarded as an otological disorder, but advances in neuroimaging methods and development of animal models have increasingly shifted the perspective towards its neuronal correlates. Increased neuronal firing rate, enhanced neuronal synchrony, and changes in the tonotopic organisation are recorded in central auditory pathways in reaction to deprived auditory input and represent--together with changes in non-auditory brain areas--the neuronal correlate of tinnitus. Assessment of patients includes a detailed case history, measurement of hearing function, quantification of tinnitus severity, and identification of causal factors, associated symptoms, and comorbidities. Most widely used treatments for tinnitus involve counselling, and best evidence is available for cognitive behavioural therapy. New pathophysiological insights have prompted the development of innovative brain-based treatment approaches to directly target the neuronal correlates of tinnitus. Copyright © 2013 Elsevier Ltd. All rights reserved.
Zhang, Y; Li, D D; Chen, X W
2017-06-20
Objective: To compare, in a case-control design, the speech discrimination of patients with unilateral microtia and external auditory canal atresia with that of normal-hearing subjects in quiet and in noise, in order to characterize speech recognition in unilateral external auditory canal atresia and provide a scientific basis for early clinical intervention. Method: Twenty patients with unilateral congenital microtia and external auditory canal atresia and 20 age-matched normal-hearing subjects (control group) were tested in the sound field with Mandarin speech audiometry material to obtain speech discrimination scores (SDS) in quiet and in noise. Result: There was no significant difference in speech discrimination scores between the two groups in quiet. Scores differed significantly when the speech signal was presented to the affected side and the noise to the normal side (monosyllables, disyllables, and sentences; S/N=0 and S/N=-10) (P<0.05). There was no significant difference when the speech signal was presented to the normal side and the noise to the affected side. When signal and noise were presented to the same side, monosyllabic word recognition differed significantly (S/N=0 and S/N=-5) (P<0.05), whereas disyllabic words and sentences showed no significant difference (P>0.05). Conclusion: In noise, the speech discrimination scores of patients with unilateral congenital microtia and external auditory canal atresia are lower than those of normal-hearing subjects. Copyright© by the Editorial Department of Journal of Clinical Otorhinolaryngology Head and Neck Surgery.
Carcinoma of the middle ear and external auditory canal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hahn, S.S.; Kim, J.A.; Goodchild, N.
1983-07-01
Thirty-one patients with malignant tumors of the middle ear and external auditory canal (EAC) were observed at the University of Virginia Hospital from 1956 through 1980. Of 27 patients with carcinoma, 21 had squamous cell carcinoma, 4 had basal cell carcinoma and 2 had adenoid cystic carcinoma. The 27 patients with carcinoma are reviewed with regard to clinical presentation, treatment modality, results and complications. The majority (67%) of patients had a history of chronic ear drainage, 22% had a previous mastoidectomy or polypectomy and 7% had an associated cholesteatoma. Eighty percent of patients with carcinoma limited to EAC were alive and well at 5 years, compared to 43% of patients with involvement of the middle ear. Fifty-six percent of patients without invasion of the petrous bone were alive at 5 years compared to only 20% of patients with petrous bone involvement. The data strongly suggest that survival depends on the extent of disease. The corrected disease free 5 year survival rates were 14% for patients who had surgery alone and 50% for those who had surgery and radiotherapy. Of the three patients with advanced disease who received radiotherapy alone, none survived five years.
Correlation between the characteristics of resonance and aging of the external ear.
Silva, Aline Papin Roedas da; Blasca, Wanderléia Quinhoneiro; Lauris, José Roberto Pereira; Oliveira, Jerusa Roberta Massola de
2014-01-01
Aging causes changes in the external ear, such as collapse of the external auditory canal and senile tympanic membrane. Knowledge of these changes is relevant to the diagnosis of hearing loss and the selection of hearing aids. For this reason, the study aimed to verify the influence of anatomical changes of the external ear on auditory canal resonance in the elderly. The sample consisted of objective measures of the external ear of elderly subjects with canal collapse (group A), with senile tympanic membrane (group B), and without changes of the external auditory canal or tympanic membrane (group C), and of adults without changes of the external ear (group D). In this retrospective clinical study, measures from individuals with and without alteration of the external ear were compared using the gain and response of the external ear, the resonant frequency, and the primary peak for the right ear. In groups A, B, and C, there was a statistically significant difference for the Real Ear Unaided Response (REUR) and Real Ear Unaided Gain (REUG), but not for the peak frequency. For groups A and B, significant differences were shown in REUR and REUG. Between groups C and D, the differences were significant for REUR and REUG, but not for the frequency of the primary peak. The changes influence the external ear resonance, decreasing its amplitude; however, the frequency of the primary peak is not affected.
A biophysical model for modulation frequency encoding in the cochlear nucleus.
Eguia, Manuel C; Garcia, Guadalupe C; Romano, Sebastian A
2010-01-01
Encoding of amplitude modulated (AM) acoustical signals is one of the most compelling tasks for the mammalian auditory system: environmental sounds, after being filtered and transduced by the cochlea, become narrowband AM signals. Despite much experimental work dedicated to the comprehension of auditory system extraction and encoding of AM information, the neural mechanisms underlying this remarkable feature are far from being understood (Joris et al., 2004). One of the most accepted theories for this processing is the existence of a periodotopic organization (based on temporal information) across the more studied tonotopic axis (Frisina et al., 1990b). In this work, we will review some recent advances in the study of the mechanisms involved in neural processing of AM sounds, and propose an integrated model that runs from the external ear, through the cochlea and the auditory nerve, up to a sub-circuit of the cochlear nucleus (the first processing unit in the central auditory system). We will show that varying the amount of inhibition in our model we can obtain a range of best modulation frequencies (BMF) in some principal cells of the cochlear nucleus. This could be a basis for a synchronicity based, low-level periodotopic organization. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
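Because the model's input is narrowband amplitude-modulated sound, a short worked example of such a stimulus and its envelope may help; the carrier and modulation frequencies are arbitrary illustrative choices:

```python
import numpy as np

fs = 44100                      # sample rate, Hz
t = np.arange(int(0.2 * fs)) / fs
fc = 4000.0                     # carrier frequency, Hz (illustrative)
fm = 100.0                      # modulation frequency, Hz (illustrative)
m = 1.0                         # modulation depth

# Sinusoidally amplitude-modulated tone: y(t) = [1 + m cos(2*pi*fm*t)] * sin(2*pi*fc*t)
am = (1 + m * np.cos(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# The slowly varying envelope is what a modulation-frequency code must represent
envelope = 1 + m * np.cos(2 * np.pi * fm * t)
print(am[:5], envelope[:5])
```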
Effect of Training and Level of External Auditory Feedback on the Singing Voice: Pitch Inaccuracy
Bottalico, Pasquale; Graetzer, Simone; Hunter, Eric J.
2016-01-01
Background One of the aspects of major relevance to singing is the control of fundamental frequency. Objectives The effects on pitch inaccuracy, defined as the distance in cents in equally tempered tuning between the reference note and the sung note, of the following conditions were evaluated: (1) level of external feedback, (2) tempo (slow or fast), (3) articulation (legato or staccato), (4) tessitura (low, medium or high) and (5) semi-phrase direction (ascending or descending). Methods The subjects were 10 non-professional singers, and 10 classically-trained professional or semi-professional singers (10 males and 10 females). Subjects sang one octave and a fifth arpeggi with three different levels of external auditory feedback, two tempi and two articulations (legato or staccato). Results It was observed that inaccuracy was greatest in the descending semi-phrase arpeggi produced at a fast tempo and with a staccato articulation, especially for non-professional singers. The magnitude of inaccuracy was also relatively large in the high tessitura relative to the low and medium tessitura for such singers. Counter to predictions, when external auditory feedback was strongly attenuated by the hearing protectors, non-professional singers showed greater pitch accuracy than in the other external feedback conditions. This finding indicates the importance of internal auditory feedback in pitch control. Conclusions With an increase in training, the singer’s pitch inaccuracy decreases. PMID:26948385
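Pitch inaccuracy as defined here is the distance in cents, in equal temperament, between the sung note and the reference note; a worked example of that conversion (the frequencies are hypothetical):

```python
import math

def cents(f_sung_hz, f_ref_hz):
    """Distance in cents between a sung frequency and the reference note."""
    return 1200.0 * math.log2(f_sung_hz / f_ref_hz)

# Hypothetical example: target A4 = 440 Hz, singer produces 452 Hz (sharp)
print(f"{cents(452.0, 440.0):.1f} cents")   # ~46.6 cents sharp
```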
Ebisumoto, Koji; Okami, Kenji; Hamada, Masashi; Maki, Daisuke; Sakai, Akihiro; Saito, Kosuke; Shimizu, Fukuko; Kaneda, Shoji; Iida, Masahiro
2018-06-01
The prognosis of advanced temporal bone cancer is poor, because complete surgical resection is difficult to achieve. Chemoradiotherapy is one of the available curative treatment options; however, its systemic effects on the patient restrict the use of this treatment. A 69-year-old female (who needed peritoneal dialysis) presented at our clinic with T4 left external auditory canal cancer and was treated with cetuximab plus radiotherapy (RT). The primary lesion showed complete response. The patient is currently alive with no evidence of disease two years after completion of the treatment and does not show any late toxicity. This is the first report of a patient with advanced temporal bone cancer treated with RT plus cetuximab. Cetuximab plus RT might be a treatment alternative for patients with advanced temporal bone cancer. Copyright © 2017 Elsevier B.V. All rights reserved.
Characterization of active hair-bundle motility by a mechanical-load clamp
NASA Astrophysics Data System (ADS)
Salvi, Joshua D.; Maoiléidigh, Dáibhid Ó.; Fabella, Brian A.; Tobin, Mélanie; Hudspeth, A. J.
2015-12-01
Active hair-bundle motility endows hair cells with several traits that augment auditory stimuli. The activity of a hair bundle might be controlled by adjusting its mechanical properties. Indeed, the mechanical properties of bundles vary between different organisms and along the tonotopic axis of a single auditory organ. Motivated by these biological differences and a dynamical model of hair-bundle motility, we explore how adjusting the mass, drag, stiffness, and offset force applied to a bundle controls its dynamics and response to external perturbations. Utilizing a mechanical-load clamp, we systematically mapped the two-dimensional state diagram of a hair bundle. The clamp system used a real-time processor to tightly control each of the virtual mechanical elements. Increasing the stiffness of a hair bundle advances its operating point from a spontaneously oscillating regime into a quiescent regime. As predicted by a dynamical model of hair-bundle mechanics, this boundary constitutes a Hopf bifurcation.
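The stiffness-controlled transition between spontaneous oscillation and quiescence is a Hopf bifurcation. The sketch below uses the generic Hopf normal form, not the authors' hair-bundle model, to illustrate how a control parameter crossing zero switches a limit-cycle oscillation on and off:

```python
import numpy as np

# Generic Hopf normal form (a sketch, not the authors' hair-bundle model):
#   dz/dt = (mu + i*omega0) z - |z|^2 z
# Writing z = r * exp(i*phi), the oscillation amplitude obeys dr/dt = mu*r - r^3,
# so r -> sqrt(mu) for mu > 0 (spontaneous oscillation) and r -> 0 for mu < 0
# (quiescence). In the hair-bundle picture, increasing stiffness acts like decreasing mu.

def steady_amplitude(mu, dt=1e-3, n_steps=50000, r0=0.01):
    r = r0
    for _ in range(n_steps):
        r += dt * (mu * r - r ** 3)     # forward-Euler integration of the radial equation
    return r

for mu in (-1.0, -0.1, 0.1, 1.0):
    print(f"mu = {mu:+.1f}  ->  steady amplitude ~ {steady_amplitude(mu):.3f}  "
          f"(theory: {np.sqrt(mu) if mu > 0 else 0.0:.3f})")
```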
Auditory spatial processing in Alzheimer’s disease
Golden, Hannah L.; Nicholas, Jennifer M.; Yong, Keir X. X.; Downey, Laura E.; Schott, Jonathan M.; Mummery, Catherine J.; Crutch, Sebastian J.
2015-01-01
The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer’s disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer’s disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12) in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in detection of auditory motion, and stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer’s disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer’s disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer’s disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer’s disease syndromic spectrum. PMID:25468732
Where the imaginal appears real: A positron emission tomography study of auditory hallucinations
Szechtman, Henry; Woody, Erik; Bowers, Kenneth S.; Nahmias, Claude
1998-01-01
An auditory hallucination shares with imaginal hearing the property of being self-generated and with real hearing the experience of the stimulus being an external one. To investigate where in the brain an auditory event is “tagged” as originating from the external world, we used positron emission tomography to identify neural sites activated by both real hearing and hallucinations but not by imaginal hearing. Regional cerebral blood flow was measured during hearing, imagining, and hallucinating in eight healthy, highly hypnotizable male subjects prescreened for their ability to hallucinate under hypnosis (hallucinators). Control subjects were six highly hypnotizable male volunteers who lacked the ability to hallucinate under hypnosis (nonhallucinators). A region in the right anterior cingulate (Brodmann area 32) was activated in the group of hallucinators when they heard an auditory stimulus and when they hallucinated hearing it but not when they merely imagined hearing it. The same experimental conditions did not yield this activation in the group of nonhallucinators. Inappropriate activation of the right anterior cingulate may lead self-generated thoughts to be experienced as external, producing spontaneous auditory hallucinations. PMID:9465124
Network and external perturbation induce burst synchronisation in cat cerebral cortex
NASA Astrophysics Data System (ADS)
Lameu, Ewandson L.; Borges, Fernando S.; Borges, Rafael R.; Batista, Antonio M.; Baptista, Murilo S.; Viana, Ricardo L.
2016-05-01
The brains of mammals are divided into different cortical areas that are anatomically connected, forming larger networks which perform cognitive tasks. The cat cerebral cortex is composed of 65 areas organised into the visual, auditory, somatosensory-motor and frontolimbic cognitive regions. We have built a network of networks, in which networks are connected among themselves according to the connections observed in the cat cortical areas, aiming to study how inputs drive the synchronous behaviour in this cat brain-like network. We show that without external perturbations it is possible to observe high levels of bursting synchronisation between neurons within almost all areas, except for the auditory area. Bursting synchronisation appears between neurons in the auditory region when an external perturbation is applied in another cognitive area. This is clear evidence that burst synchronisation and collective behaviour in the brain might be a process mediated by other brain areas under stimulation.
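As a toy version of such a network-of-networks construction, the sketch below couples two small populations of bursting Rulkov map neurons with sparse inter-area links and computes a crude synchronisation index; the neuron model, network sizes, and coupling strengths are illustrative assumptions rather than the paper's parameters:

```python
import numpy as np

# Two "cortical areas", each an all-to-all network of bursting Rulkov map neurons,
# linked by a few random inter-area connections (all choices below are assumptions).
rng = np.random.default_rng(1)

n_per_area, n_areas = 20, 2
N = n_per_area * n_areas
alpha, sigma_p, beta = 4.1, 0.001, 0.001      # Rulkov bursting regime

A = np.zeros((N, N))
for a in range(n_areas):                      # dense coupling within each area
    s = slice(a * n_per_area, (a + 1) * n_per_area)
    A[s, s] = 1.0
inter = rng.integers(0, n_per_area, size=(6, 2))
for i, j in inter:                            # a few symmetric inter-area links
    A[i, n_per_area + j] = A[n_per_area + j, i] = 1.0
np.fill_diagonal(A, 0.0)
A /= A.sum(axis=1, keepdims=True)             # normalise total input per neuron

eps = 0.02                                    # coupling strength (assumed)
x = rng.uniform(-1.5, -0.5, N)                # fast (spiking) variable
y = rng.uniform(-3.0, -2.8, N)                # slow (bursting) variable

xs = []
for step in range(20000):
    coupling = eps * (A @ x - x)
    x_new = alpha / (1.0 + x ** 2) + y + coupling
    y_new = y - sigma_p * x - beta
    x, y = x_new, y_new
    if step >= 10000:                         # discard transient
        xs.append(x.copy())

xs = np.array(xs)                             # shape (time, neurons)
mean_field = xs.mean(axis=1)
# Crude synchronisation index: variance of the mean field relative to the mean
# single-neuron variance (approaches 1 when neurons burst together).
sync_index = mean_field.var() / xs.var(axis=0).mean()
print(f"synchronisation index: {sync_index:.3f}")
```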
Enhanced auditory temporal gap detection in listeners with musical training.
Mishra, Srikanta K; Panda, Manas R; Herbert, Carolyn
2014-08-01
Many features of auditory perception are positively altered in musicians. Traditionally, auditory mechanisms in musicians have been investigated using the Western-classical musician model. The objective of the present study was to adopt an alternative model, Indian-classical music, to further investigate auditory temporal processing in musicians. This study shows that musicians have significantly lower across-channel gap detection thresholds than nonmusicians. Use of the South Indian musician model provides increased external validity for the prediction, from studies on Western-classical musicians, that auditory temporal coding is enhanced in musicians.
Anteverted internal auditory canal as an inner ear anomaly in patients with craniofacial microsomia.
L'Heureux-Lebeau, Bénédicte; Saliba, Issam
2014-09-01
Craniofacial microsomia involves structures of the first and second branchial arches. A wide range of ear anomalies, affecting the external, middle and inner ear, has been described in association with this condition. We report three cases of anteverted internal auditory canal in patients presenting with craniofacial microsomia. This unique internal auditory canal orientation was found on high-resolution computed tomography of the temporal bones. This internal auditory canal anomaly has not previously been reported in craniofacial anomalies. Copyright © 2014. Published by Elsevier Ireland Ltd.
How challenges in auditory fMRI led to general advancements for the field.
Talavage, Thomas M; Hall, Deborah A
2012-08-15
In the early years of fMRI research, the auditory neuroscience community sought to expand its knowledge of the underlying physiology of hearing, while also seeking to come to grips with the inherent acoustic disadvantages of working in the fMRI environment. Early collaborative efforts between prominent auditory research laboratories and prominent fMRI centers led to development of a number of key technical advances that have subsequently been widely used to elucidate principles of auditory neurophysiology. Perhaps the key imaging advance was the simultaneous and parallel development of strategies to use pulse sequences in which the volume acquisitions were "clustered," providing gaps in which stimuli could be presented without direct masking. Such sequences have become widespread in fMRI studies using auditory stimuli and also in a range of translational research domains. This review presents the parallel stories of the people and the auditory neurophysiology research that led to these sequences. Copyright © 2011 Elsevier Inc. All rights reserved.
Primary Synovial Sarcoma of External Auditory Canal: A Case Report.
Devi, Aarani; Jayakumar, Krishnannair L L
2017-07-20
Synovial sarcoma is a rare malignant tumor of mesenchymal origin. Primary synovial sarcoma of the ear is extremely rare and to date only two cases have been published in English medical literature. Though the tumor is reported to have an aggressive nature, early diagnosis and treatment may improve the outcome. Here, we report a rare case of synovial sarcoma of the external auditory canal in an 18-year-old male who was managed by chemotherapy and referred for palliation due to tumor progression.
Watts, Christopher; Murphy, Jessica; Barnes-Burroughs, Kathryn
2003-06-01
At a physiological level, the act of singing involves control and coordination of several systems involved in the production of sound, including respiration, phonation, resonance, and afferent systems used to monitor production. The ability to produce a melodious singing voice (eg, in tune with accurate pitch) is dependent on control over these motor and sensory systems. To test this position, trained singers and untrained subjects with and without expressed singing talent were asked to match pitches of target pure tones. The ability to match pitch reflected the ability to accurately integrate sensory perception with motor planning and execution. Pitch-matching accuracy was measured at the onset of phonation (prephonatory set) before external feedback could be utilized to adjust the voiced source, during phonation when external auditory feedback could be utilized, and during phonation when external auditory feedback was masked. Results revealed trained singers and untrained subjects with singing talent were no different in their pitch-matching abilities when measured before or after external feedback could be utilized. The untrained subjects with singing talent were also significantly more accurate than the trained singers when external auditory feedback was masked. Both groups were significantly more accurate than the untrained subjects without singing talent.
Liu, Yu-Hsi; Chang, Kuo-Ping
2016-04-01
Fibrous dysplasia is a slowly progressive benign fibro-osseous disease, rarely occurring in the temporal bones. In these cases, most bony lesions developed from the bony part of the external auditory canal, causing otalgia, hearing impairment, otorrhea, and blockade of ear hygiene, and probably leading to secondary cholesteatoma. We present the medical history of a 24-year-old woman with temporal monostotic fibrous dysplasia and secondary cholesteatoma. The initial presentation was unilateral conductive hearing loss. A hard external canal tumor contributing to canal stenosis and a near-absent tympanic membrane were found. Canaloplasty and type I tympanoplasty were performed, but the symptoms recurred after 5 years. At the second operation she underwent canal wall down tympanomastoidectomy with ossiculoplasty, and secondary cholesteatoma in the middle ear was diagnosed. Fifteen years later, left otorrhea recurred again and transcanal endoscopic surgery was performed for middle ear clearance. Currently, revision surgeries provide a stable auditory condition, but her monostotic temporal fibrous dysplasia remains in place.
Göpfert, Martin C; Hennig, R Matthias
2016-01-01
Insect hearing has independently evolved multiple times in the context of intraspecific communication and predator detection by transforming proprioceptive organs into ears. Research over the past decade, ranging from the biophysics of sound reception to molecular aspects of auditory transduction to the neuronal mechanisms of auditory signal processing, has greatly advanced our understanding of how insects hear. Apart from evolutionary innovations that seem unique to insect hearing, parallels between insect and vertebrate auditory systems have been uncovered, and the auditory sensory cells of insects and vertebrates turned out to be evolutionarily related. This review summarizes our current understanding of insect hearing. It also discusses recent advances in insect auditory research, which have put forward insect auditory systems for studying biological aspects that extend beyond hearing, such as cilium function, neuronal signal computation, and sensory system evolution.
NASA Astrophysics Data System (ADS)
Nakada, Hirofumi; Horie, Seichi; Kawanami, Shoko; Inoue, Jinro; Iijima, Yoshinori; Sato, Kiyoharu; Abe, Takeshi
2017-09-01
We aimed to develop a practical method to estimate oesophageal temperature by measuring multi-locational auditory canal temperatures. This method can be applied to prevent heatstroke by simultaneously and continuously monitoring the core temperatures of people working under hot environments. We asked 11 healthy male volunteers to exercise, generating 80 W for 45 min in a climatic chamber set at 24, 32 and 40 °C, at 50% relative humidity. We also exposed the participants to radiation at 32 °C. We continuously measured temperatures at the oesophagus, rectum and three different locations along the external auditory canal. We developed equations for estimating oesophageal temperatures from auditory canal temperatures and compared their fitness and errors. The rectal temperature increased or decreased faster than oesophageal temperature at the start or end of exercise in all conditions. Estimated temperature showed good similarity with oesophageal temperature, and the square of the correlation coefficient of the best fitting model reached 0.904. We observed intermediate values between rectal and oesophageal temperatures during the rest phase. Even under the condition with radiation, estimated oesophageal temperature demonstrated concordant movement with oesophageal temperature at around 0.1 °C overestimation. Our method measured temperatures at three different locations along the external auditory canal. We confirmed that the approach can credibly estimate the oesophageal temperature from 24 to 40 °C for people performing exercise in the same place in a windless environment.
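The estimation equations described above map several auditory canal readings onto oesophageal temperature and are evaluated by the squared correlation coefficient. As a minimal sketch, assuming an ordinary least-squares model over the three canal sites (the column layout and synthetic numbers below are illustrative, not study data):

```python
# Fit oesophageal temperature as a linear function of three auditory canal
# temperatures and report R^2; all data here are synthetic for demonstration.
import numpy as np

def fit_oesophageal_model(canal_temps, oesophageal):
    """canal_temps: (n_samples, 3) canal temperatures; returns (coefficients, R^2)."""
    X = np.column_stack([np.ones(len(canal_temps)), canal_temps])  # intercept + 3 sites
    beta, *_ = np.linalg.lstsq(X, oesophageal, rcond=None)
    predicted = X @ beta
    ss_res = np.sum((oesophageal - predicted) ** 2)
    ss_tot = np.sum((oesophageal - oesophageal.mean()) ** 2)
    return beta, 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
canal = 36.0 + rng.normal(0, 0.3, size=(200, 3))
oeso = 0.5 + 0.4 * canal[:, 0] + 0.3 * canal[:, 1] + 0.3 * canal[:, 2] + rng.normal(0, 0.05, 200)
beta, r2 = fit_oesophageal_model(canal, oeso)
print(beta, r2)  # the paper reports R^2 around 0.904 for its best-fitting model
```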
Complete occipitalization of the atlas with bilateral external auditory canal atresia.
Dolenšek, Janez; Cvetko, Erika; Snoj, Žiga; Meznaric, Marija
2017-09-01
Fusion of the atlas with the occipital bone is a rare congenital dysplasia known as occipitalization of the atlas, occipitocervical synostosis, assimilation of the atlas, or atlanto-occipital fusion. It is a component of paraxial mesodermal maldevelopment and is commonly associated with other dysplasias of the craniovertebral junction. External auditory canal atresia, or external aural atresia, is a rare congenital absence of the external auditory canal. It occurs as a consequence of maldevelopment of the first pharyngeal cleft due to defects of cranial neural crest cell migration and/or differentiation. It is commonly associated with dysplasias of the structures derived from the first and second pharyngeal arches, including microtia. We present the coexistence of occipitalization of the atlas and congenital aural atresia, an uncommon combination of paraxial mesodermal maldevelopment and defects of cranial neural crest cells. The association is most probably syndromic, as the minimal diagnostic criteria for the oculoauriculovertebral spectrum are fulfilled. From the clinical point of view, it is important to be aware that patients with microtia should also undergo appropriate diagnostic imaging of the craniovertebral junction because of possible concomitant occipitalization of the atlas and the frequently associated C1-C2 instability.
De Paolis, Annalisa; Bikson, Marom; Nelson, Jeremy T; de Ru, J Alexander; Packer, Mark; Cardoso, Luis
2017-06-01
Hearing is an extremely complex phenomenon, involving a large number of interrelated variables that are difficult to measure in vivo. In order to investigate such process under simplified and well-controlled conditions, models of sound transmission have been developed through many decades of research. The value of modeling the hearing system is not only to explain the normal function of the hearing system and account for experimental and clinical observations, but to simulate a variety of pathological conditions that lead to hearing damage and hearing loss, as well as for development of auditory implants, effective ear protections and auditory hazard countermeasures. In this paper, we provide a review of the strategies used to model the auditory function of the external, middle, inner ear, and the micromechanics of the organ of Corti, along with some of the key results obtained from such modeling efforts. Recent analytical and numerical approaches have incorporated the nonlinear behavior of some parameters and structures into their models. Few models of the integrated hearing system exist; in particular, we describe the evolution of the Auditory Hazard Assessment Algorithm for Human (AHAAH) model, used for prediction of hearing damage due to high intensity sound pressure. Unlike the AHAAH model, 3D finite element models of the entire hearing system are not able yet to predict auditory risk and threshold shifts. It is expected that both AHAAH and FE models will evolve towards a more accurate assessment of threshold shifts and hearing loss under a variety of stimuli conditions and pathologies. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Julia, Sophie; Pedespan, Jean Michel; Boudard, Philippe; Barbier, Richard; Gavilan-Cellie, Isabelle; Chateil, Jean François; Lacombe, Didier
2002-06-15
In 1979, Rasmussen et al. reported six members of a family with congenital, bilateral, symmetrical, and isolated subtotal atresia of the external auditory canal, bilateral foot abnormalities, and increased interocular distance. The family history suggested autosomal dominant inheritance of the syndrome. We report a 3-year-old girl whose symptoms are compatible with this diagnosis. Therefore, we suggest confirmation of the description by Rasmussen et al. as a distinct entity and suggest the term Rasmussen syndrome for this condition. Copyright 2002 Wiley-Liss, Inc.
Linking prenatal experience to the emerging musical mind.
Ullal-Gupta, Sangeeta; Vanden Bosch der Nederlanden, Christina M; Tichko, Parker; Lahav, Amir; Hannon, Erin E
2013-09-03
The musical brain is built over time through experience with a multitude of sounds in the auditory environment. However, learning the melodies, timbres, and rhythms unique to the music and language of one's culture begins already within the mother's womb during the third trimester of human development. We review evidence that the intrauterine auditory environment plays a key role in shaping later auditory development and musical preferences. We describe evidence that externally and internally generated sounds influence the developing fetus, and argue that such prenatal auditory experience may set the trajectory for the development of the musical mind.
The impact of negative affect on reality discrimination.
Smailes, David; Meins, Elizabeth; Fernyhough, Charles
2014-09-01
People who experience auditory hallucinations tend to show weak reality discrimination skills, so that they misattribute internal, self-generated events to an external, non-self source. We examined whether inducing negative affect in healthy young adults would increase their tendency to make external misattributions on a reality discrimination task. Participants (N = 54) received one of three mood inductions (one positive, two negative) and then performed an auditory signal detection task to assess reality discrimination. Participants who received either of the two negative inductions made more false alarms, but not more hits, than participants who received the neutral induction, indicating that negative affect makes participants more likely to misattribute internal, self-generated events to an external, non-self source. These findings are drawn from an analogue sample, and research that examines whether negative affect also impairs reality discrimination in patients who experience auditory hallucinations is required. These findings show that negative affect disrupts reality discrimination and suggest one way in which negative affect may lead to hallucinatory experiences. Copyright © 2014 Elsevier Ltd. All rights reserved.
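The auditory signal detection measures reported here (more false alarms without more hits) are conventionally summarised with signal detection theory indices. A minimal sketch, with invented counts and a standard log-linear correction, is shown below; it is not the authors' analysis code.

```python
# Compute d' (sensitivity) and criterion c from hits and false alarms.
# The trial counts are invented for illustration only.
from scipy.stats import norm

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)            # log-linear correction
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    return d_prime, criterion

# More false alarms with unchanged hits lowers d' and shifts the criterion.
print(sdt_indices(hits=30, misses=10, false_alarms=5, correct_rejections=35))
print(sdt_indices(hits=30, misses=10, false_alarms=12, correct_rejections=28))
```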
Bilateral acquired external auditory canal stenosis with squamous papilloma: a case report.
Demirbaş, Duygu; Dağlı, Muharrem; Göçer, Celil
2011-01-01
Acquired external auditory canal (EAC) stenosis is described as resulting from a number of different causes such as infection, trauma, neoplasia, inflammation and radiotherapy. Human papilloma virus (HPV) type 6, a deoxyribonucleic acid (DNA) virus, is considered to cause squamous papilloma of the EAC. In this article, we report a case of a 56-year-old male with warty lesions in the left external ear and a totally stenotic right external ear which had similar lesions one year before the involvement of his left ear. On computed tomography of the temporal bone, there was soft tissue obstruction of the right EAC, and thickening in the skin of the left EAC. The middle ear structures were normal on both sides. Biopsy was performed from the lesion in the left ear, and revealed squamous papilloma. We presented this case because squamous papilloma related bilateral acquired EAC stenosis is a rare entity.
Automatic control of liquid cooling garment by cutaneous and external auditory meatus temperatures
NASA Technical Reports Server (NTRS)
Fulcher, C. W. G. (Inventor)
1971-01-01
An automatic control apparatus for a liquid cooling garment is described that is responsive to actual physiological needs during work and rest periods of a man clothed in the liquid cooling garment. Four skin temperature readings and a reading taken at the external portion of the auditory meatus are added and used in the control signal for a temperature control valve regulating inlet water temperature for the liquid cooling garment. The control apparatus comprises electronic circuits to which the temperatures are applied as control signals and an electro-pneumatic transducer attached to the control valve.
A corollary discharge maintains auditory sensitivity during sound production
NASA Astrophysics Data System (ADS)
Poulet, James F. A.; Hedwig, Berthold
2002-08-01
Speaking and singing present the auditory system of the caller with two fundamental problems: discriminating between self-generated and external auditory signals and preventing desensitization. In humans and many other vertebrates, auditory neurons in the brain are inhibited during vocalization but little is known about the nature of the inhibition. Here we show, using intracellular recordings of auditory neurons in the singing cricket, that presynaptic inhibition of auditory afferents and postsynaptic inhibition of an identified auditory interneuron occur in phase with the song pattern. Presynaptic and postsynaptic inhibition persist in a fictively singing, isolated cricket central nervous system and are therefore the result of a corollary discharge from the singing motor network. Mimicking inhibition in the interneuron by injecting hyperpolarizing current suppresses its spiking response to a 100-dB sound pressure level (SPL) acoustic stimulus and maintains its response to subsequent, quieter stimuli. Inhibition by the corollary discharge reduces the neural response to self-generated sound and protects the cricket's auditory pathway from self-induced desensitization.
External audio for IBM-compatible computers
NASA Technical Reports Server (NTRS)
Washburn, David A.
1992-01-01
Numerous applications benefit from the presentation of computer-generated auditory stimuli at points discontiguous with the computer itself. Modification of an IBM-compatible computer for use of an external speaker is relatively easy but not intuitive. This modification is briefly described.
Rieger, Kathryn; Rarra, Marie-Helene; Moor, Nicolas; Diaz Hernandez, Laura; Baenninger, Anja; Razavi, Nadja; Dierks, Thomas; Hubl, Daniela; Koenig, Thomas
2018-03-01
Previous studies showed a global reduction of the event-related potential component N100 in patients with schizophrenia, a phenomenon that is even more pronounced during auditory verbal hallucinations. This reduction presumably results from dysfunctional activation of the primary auditory cortex by inner speech, which reduces its responsiveness to external stimuli. With this study, we tested the feasibility of enhancing the responsiveness of the primary auditory cortex to external stimuli through upregulation of the event-related potential component N100 in healthy control subjects. A total of 15 healthy subjects performed 8 double-sessions of EEG-neurofeedback training over 2 weeks. A linear mixed-effects model showed a significant active learning effect within sessions (t = 5.99, P < .001) against an unspecific habituation effect that lowered the N100 amplitude over time. Across sessions, a significant increase in the passive condition (t = 2.42, P = .03), termed the carry-over effect, was observed. Given that the carry-over effect is one of the ultimate aims of neurofeedback, it seems reasonable to apply this neurofeedback training protocol to influence the N100 amplitude in patients with schizophrenia. This intervention could provide an alternative treatment option for auditory verbal hallucinations in these patients.
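The within-session and across-session effects quoted above come from a linear mixed-effects model. The sketch below shows one plausible form of such a model (random intercept per subject, fixed effects for trial and session) using statsmodels; the column names and simulated data are assumptions, not the study's dataset or exact specification.

```python
# Hypothetical mixed-effects model of N100 amplitude over trials and sessions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for subject in range(15):
    subj_offset = rng.normal(0, 1.0)                 # random intercept per subject
    for session in range(8):
        for trial in range(40):
            n100 = -5.0 + subj_offset + 0.02 * trial + 0.1 * session + rng.normal(0, 0.5)
            rows.append({"subject": subject, "session": session, "trial": trial, "n100": n100})
df = pd.DataFrame(rows)

# Fixed effects estimate within-session learning and across-session carry-over.
model = smf.mixedlm("n100 ~ trial + session", df, groups=df["subject"])
result = model.fit()
print(result.summary())
```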
Nir, Yuval; Vyazovskiy, Vladyslav V.; Cirelli, Chiara; Banks, Matthew I.; Tononi, Giulio
2015-01-01
Sleep entails a disconnection from the external environment. By and large, sensory stimuli do not trigger behavioral responses and are not consciously perceived as they usually are in wakefulness. Traditionally, sleep disconnection was ascribed to a thalamic “gate,” which would prevent signal propagation along ascending sensory pathways to primary cortical areas. Here, we compared single-unit and LFP responses in core auditory cortex as freely moving rats spontaneously switched between wakefulness and sleep states. Despite robust differences in baseline neuronal activity, both the selectivity and the magnitude of auditory-evoked responses were comparable across wakefulness, Nonrapid eye movement (NREM) and rapid eye movement (REM) sleep (pairwise differences <8% between states). The processing of deviant tones was also compared in sleep and wakefulness using an oddball paradigm. Robust stimulus-specific adaptation (SSA) was observed following the onset of repetitive tones, and the strength of SSA effects (13–20%) was comparable across vigilance states. Thus, responses in core auditory cortex are preserved across sleep states, suggesting that evoked activity in primary sensory cortices is driven by external physical stimuli with little modulation by vigilance state. We suggest that sensory disconnection during sleep occurs at a stage later than primary sensory areas. PMID:24323498
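The stimulus-specific adaptation (SSA) strengths quoted above are typically expressed as a normalised index contrasting responses to the same tone when it is rare (deviant) versus common (standard). A minimal sketch of the common SSA index is given below; the firing-rate numbers are invented for illustration.

```python
# Common SSA index pooled over the two tone frequencies of an oddball paradigm:
# (D - S) / (D + S), where D and S are summed deviant and standard responses.
def ssa_index(deviant_f1, deviant_f2, standard_f1, standard_f2):
    d = deviant_f1 + deviant_f2
    s = standard_f1 + standard_f2
    return (d - s) / (d + s)

# Example with made-up evoked firing rates (spikes/s); the paper reports
# SSA strengths in roughly the 13-20% range across vigilance states.
print(ssa_index(deviant_f1=12.0, deviant_f2=10.0, standard_f1=9.0, standard_f2=8.0))
```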
Recent advances in exploring the neural underpinnings of auditory scene perception
Snyder, Joel S.; Elhilali, Mounya
2017-01-01
Studies of auditory scene analysis have traditionally relied on paradigms using artificial sounds—and conventional behavioral techniques—to elucidate how we perceptually segregate auditory objects or streams from each other. In the past few decades, however, there has been growing interest in uncovering the neural underpinnings of auditory segregation using human and animal neuroscience techniques, as well as computational modeling. This largely reflects the growth in the fields of cognitive neuroscience and computational neuroscience and has led to new theories of how the auditory system segregates sounds in complex arrays. The current review focuses on neural and computational studies of auditory scene perception published in the past few years. Following the progress that has been made in these studies, we describe (1) theoretical advances in our understanding of the most well-studied aspects of auditory scene perception, namely segregation of sequential patterns of sounds and concurrently presented sounds; (2) the diversification of topics and paradigms that have been investigated; and (3) how new neuroscience techniques (including invasive neurophysiology in awake humans, genotyping, and brain stimulation) have been used in this field. PMID:28199022
The relationship between auditory exostoses and cold water: a latitudinal analysis.
Kennedy, G E
1986-12-01
The frequency of auditory exostoses was examined by latitude. It was found that discrete bony lesions of the external auditory canal were, with very few exceptions, either absent or in very low frequency (less than 3.0%) in 0-30 degrees N and S latitudes and above 45 degrees N. The highest frequencies of auditory exostoses were found in the middle latitudes (30-45 degrees N and S) among populations who exploit either marine or fresh water resources. Clinical and experimental data are discussed, and these data are found to support strongly the hypothesis that there is a causative relationship between the formation of auditory exostoses and exploitation of resources in cold water, particularly through diving. It is therefore suggested that since auditory exostoses are behavioral rather than genetic in etiology, they should not be included in estimates of population distance based on nonmetric variables.
Needle in the external auditory canal: an unusual complication of inferior alveolar nerve block.
Ribeiro, Leandro; Ramalho, Sara; Gerós, Sandra; Ferreira, Edite Coimbra; Faria e Almeida, António; Condé, Artur
2014-06-01
Inferior alveolar nerve block is used to anesthetize the ipsilateral mandible. The most commonly used technique is one in which the anesthetic is injected directly into the pterygomandibular space, by an intraoral approach. The fracture of the needle, although uncommon, can lead to potentially serious complications. The needle is usually found in the pterygomandibular space, although it can migrate and damage adjacent structures, with variable consequences. The authors report an unusual case of a fractured needle, migrating to the external auditory canal, as a result of an inferior alveolar nerve block. Copyright © 2014 Elsevier Inc. All rights reserved.
Management of Acquired Atresia of the External Auditory Canal.
Bajin, Münir Demir; Yılmaz, Taner; Günaydın, Rıza Önder; Kuşçu, Oğuz; Sözen, Tevfik; Jafarov, Shamkal
2015-08-01
The aim was to evaluate surgical techniques and their relationship to postoperative success rate and hearing outcomes in acquired atresia of the external auditory canal. In this article, 24 patients with acquired atresia of the external auditory canal were retrospectively evaluated regarding their canal status, hearing, and postoperative success. Acquired stenosis occurs more commonly in males, with a male:female ratio of 2-3:1; it seems to be a disorder affecting young adults. Previous ear surgery (13 patients, 54.2%) and external ear trauma (11 patients, 45.8%) were the main etiological factors of acquired ear canal stenosis. Mastoidectomy (12/13) and traffic accidents (8/11) comprised the majority of these etiological factors. An endaural incision was used in 79.2% of cases and a postauricular incision in 20.8%. The surgical approaches were transcanal (70.8%), transmastoid (20.8%), and combined (8.4%). The atretic plate was generally located at the bony-cartilaginous junction (37.5%) or in the cartilaginous canal (33.3%); the bony canal was involved in only a few cases. Preserved healthy canal skin, split- or full-thickness skin grafts, or pre- or postauricular skin flaps were used to line the ear canal, with preserved healthy canal skin preferred. The results of surgery are generally satisfactory, and complications are few if surgical principles are followed.
Niederleitner, Bertram; Gutierrez-Ibanez, Cristian; Krabichler, Quirin; Weigel, Stefan; Luksch, Harald
2017-02-15
Processing multimodal sensory information is vital for behaving animals in many contexts. The barn owl, an auditory specialist, is a classic model for studying multisensory integration. In the barn owl, spatial auditory information is conveyed to the optic tectum (TeO) by a direct projection from the external nucleus of the inferior colliculus (ICX). In contrast, evidence of an integration of visual and auditory information in auditory generalist avian species is completely lacking. In particular, it is not known whether in auditory generalist species the ICX projects to the TeO at all. Here we use various retrograde and anterograde tracing techniques both in vivo and in vitro, intracellular fillings of neurons in vitro, and whole-cell patch recordings to characterize the connectivity between ICX and TeO in the chicken. We found that there is a direct projection from ICX to the TeO in the chicken, although this is small and only to the deeper layers (layers 13-15) of the TeO. However, we found a relay area interposed among the IC, the TeO, and the isthmic complex that receives strong synaptic input from the ICX and projects broadly upon the intermediate and deep layers of the TeO. This area is an external portion of the formatio reticularis lateralis (FRLx). In addition to the projection to the TeO, cells in FRLx send, via collaterals, descending projections through tectopontine-tectoreticular pathways. This newly described connection from the inferior colliculus to the TeO provides a solid basis for visual-auditory integration in an auditory generalist bird. J. Comp. Neurol. 525:513-534, 2017. © 2016 Wiley Periodicals, Inc.
Brain state-dependent abnormal LFP activity in the auditory cortex of a schizophrenia mouse model
Nakao, Kazuhito; Nakazawa, Kazu
2014-01-01
In schizophrenia, evoked 40-Hz auditory steady-state responses (ASSRs) are impaired, which reflects the sensory deficits in this disorder, and baseline spontaneous oscillatory activity also appears to be abnormal. It has been debated whether the evoked ASSR impairments are due to the possible increase in baseline power. GABAergic interneuron-specific NMDA receptor (NMDAR) hypofunction mutant mice mimic some behavioral and pathophysiological aspects of schizophrenia. To determine the presence and extent of sensory deficits in these mutant mice, we recorded spontaneous local field potential (LFP) activity and its click-train evoked ASSRs from primary auditory cortex of awake, head-restrained mice. Baseline spontaneous LFP power in the pre-stimulus period before application of the first click trains was augmented at a wide range of frequencies. However, when repetitive ASSR stimuli were presented every 20 s, averaged spontaneous LFP power amplitudes during the inter-ASSR stimulus intervals in the mutant mice became indistinguishable from the levels of control mice. Nonetheless, the evoked 40-Hz ASSR power and their phase locking to click trains were robustly impaired in the mutants, although the evoked 20-Hz ASSRs were also somewhat diminished. These results suggested that NMDAR hypofunction in cortical GABAergic neurons confers two brain state-dependent LFP abnormalities in the auditory cortex: (1) a broadband increase in spontaneous LFP power in the absence of external inputs, and (2) a robust deficit in the evoked ASSR power and its phase-locking despite normal baseline LFP power magnitude during the repetitive auditory stimuli. The "paradoxically" high spontaneous LFP activity of the primary auditory cortex in the absence of external stimuli may contribute to the emergence of schizophrenia-related aberrant auditory perception. PMID:25018691
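Phase locking of the 40-Hz ASSR to the click train is commonly quantified as inter-trial phase coherence (the resultant length of per-trial phases at 40 Hz). The sketch below illustrates that computation under assumed parameters (sampling rate, window, synthetic trials); it is not the authors' pipeline.

```python
# Inter-trial phase coherence (ITC) at a target frequency across LFP trials.
import numpy as np

def phase_locking_at(trials, fs, freq):
    """trials: (n_trials, n_samples) array; returns ITC in [0, 1] at `freq`."""
    n = trials.shape[1]
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    idx = np.argmin(np.abs(freqs - freq))
    spectra = np.fft.rfft(trials, axis=1)[:, idx]
    return np.abs(np.mean(spectra / np.abs(spectra)))   # mean of unit phasors

# Synthetic 40-Hz responses: small versus random trial-to-trial phase jitter.
fs, dur, n_trials = 1000, 0.5, 100
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(2)
tight = np.array([np.sin(2 * np.pi * 40 * t + rng.normal(0, 0.2)) for _ in range(n_trials)])
jittered = np.array([np.sin(2 * np.pi * 40 * t + rng.uniform(0, 2 * np.pi)) for _ in range(n_trials)])
print(phase_locking_at(tight, fs, 40), phase_locking_at(jittered, fs, 40))  # high vs near zero
```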
Internal versus External Auditory Hallucinations in Schizophrenia: Symptom and Course Correlates
Docherty, Nancy M.; Dinzeo, Thomas J.; McCleery, Amanda; Bell, Emily K.; Shakeel, Mohammed K.; Moe, Aubrey
2015-01-01
Introduction The auditory hallucinations associated with schizophrenia are phenomenologically diverse. “External” hallucinations classically have been considered to reflect more severe psychopathology than “internal” hallucinations, but empirical support has been equivocal. Methods We examined associations of “internal” v. “external” hallucinations with (a) other characteristics of the hallucinations, (b) severity of other symptoms, and (c) course of illness variables, in a sample of 97 stable outpatients with schizophrenia or schizoaffective disorder who experienced auditory hallucinations. Results Patients with internal hallucinations did not differ from those with external hallucinations on severity of other symptoms. However, they reported their hallucinations to be more emotionally negative, distressing, and long-lasting, less controllable, and less likely to remit over time. They also were more likely to experience voices commenting, conversing, or commanding. However, they also were more likely to have insight into the self-generated nature of their voices. Patients with internal hallucinations were not older, but had a later age of illness onset. Conclusions Differences in characteristics of auditory hallucinations are associated with differences in other characteristics of the disorder, and hence may be relevant to identifying subgroups of patients that are more homogeneous with respect to their underlying disease processes. PMID:25530157
Ginis, Pieter; Heremans, Elke; Ferrari, Alberto; Dockx, Kim; Canning, Colleen G; Nieuwboer, Alice
2017-01-01
Rhythmic auditory cueing is a well-accepted tool for gait rehabilitation in Parkinson's disease (PD), which can now be applied in a performance-adapted fashion thanks to technological advances. This study investigated the immediate effects on gait during a prolonged, 30-min walk with performance-adapted (intelligent) auditory cueing and verbal feedback provided by a wearable sensor-based system as alternatives to traditional cueing. Additionally, potential effects on self-perceived fatigue were assessed. Twenty-eight people with PD and 13 age-matched healthy elderly (HE) performed four 30-min walks with a wearable cue and feedback system. In randomized order, participants received: (1) continuous auditory cueing; (2) intelligent cueing (10 metronome beats triggered by a deviating walking rhythm); (3) intelligent feedback (verbal instructions triggered by a deviating walking rhythm); and (4) no external input. Fatigue was self-scored at rest and after walking during each session. The results showed that while HE were able to maintain cadence for 30 min during all conditions, cadence in PD significantly declined without input. With continuous cueing and intelligent feedback, people with PD were able to maintain cadence (p = 0.04), although they were more physically fatigued than HE. Furthermore, cadence deviated significantly more in people with PD than in HE without input and particularly with intelligent feedback (both: p = 0.04). In PD, continuous and intelligent cueing induced significantly fewer cadence deviations (p = 0.006). Altogether, this suggests that intelligent cueing is a suitable alternative to the continuous mode during prolonged walking in PD, as it induced similar effects on gait without generating levels of fatigue beyond that of HE.
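The "intelligent" cueing mode described above delivers a short burst of metronome beats only when the walking rhythm deviates. The sketch below is a hypothetical trigger rule of that general kind, not the system used in the study; the tolerance, window size and helper names are assumptions (the study used 10 beats per trigger).

```python
# Hypothetical performance-adapted cue trigger: monitor cadence from recent step
# intervals and return a burst of metronome beats when it drifts from reference.
from collections import deque

def make_cue_trigger(reference_cadence, tolerance=0.05, window=8, beats_per_cue=10):
    """Returns a callback fed with step intervals (s) that yields beats to play."""
    intervals = deque(maxlen=window)

    def on_step(step_interval_s):
        intervals.append(step_interval_s)
        if len(intervals) < window:
            return 0
        cadence = 60.0 / (sum(intervals) / len(intervals))   # steps per minute
        deviation = abs(cadence - reference_cadence) / reference_cadence
        return beats_per_cue if deviation > tolerance else 0

    return on_step

# Example: reference cadence 110 steps/min; slowing steps eventually trigger cues.
trigger = make_cue_trigger(reference_cadence=110.0)
for interval in [0.55] * 8 + [0.65] * 8:
    beats = trigger(interval)
    if beats:
        print(f"cue: play {beats} metronome beats")
```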
White matter microstructural properties correlate with sensorimotor synchronization abilities.
Blecher, Tal; Tal, Idan; Ben-Shachar, Michal
2016-09-01
Sensorimotor synchronization (SMS) to an external auditory rhythm is a developed ability in humans, particularly evident in dancing and singing. This ability is typically measured in the lab via a simple task of finger tapping to an auditory beat. While simplistic, there is some evidence that poor performance on this task could be related to impaired phonological and reading abilities in children. Auditory-motor synchronization is hypothesized to rely on a tight coupling between auditory and motor neural systems, but the specific pathways that mediate this coupling have not been identified yet. In this study, we test this hypothesis and examine the contribution of fronto-temporal and callosal connections to specific measures of rhythmic synchronization. Twenty participants went through SMS and diffusion magnetic resonance imaging (dMRI) measurements. We quantified the mean asynchrony between an auditory beat and participants' finger taps, as well as the time to resynchronize (TTR) with an altered meter, and examined the correlations between these behavioral measures and diffusivity in a small set of predefined pathways. We found significant correlations between asynchrony and fractional anisotropy (FA) in the left (but not right) arcuate fasciculus and in the temporal segment of the corpus callosum. On the other hand, TTR correlated with FA in the precentral segment of the callosum. To our knowledge, this is the first demonstration that relates these particular white matter tracts with performance on an auditory-motor rhythmic synchronization task. We propose that left fronto-temporal and temporal-callosal fibers are involved in prediction and constant comparison between auditory inputs and motor commands, while inter-hemispheric connections between the motor/premotor cortices contribute to successful resynchronization of motor responses with a new external rhythm, perhaps via inhibition of tapping to the previous rhythm. Our results indicate that auditory-motor synchronization skills are associated with anatomical pathways that have been previously related to phonological awareness, thus offering a possible anatomical basis for the behavioral covariance between these abilities. Copyright © 2016 Elsevier Inc. All rights reserved.
PROCRU: A model for analyzing crew procedures in approach to landing
NASA Technical Reports Server (NTRS)
Baron, S.; Muralidharan, R.; Lancraft, R.; Zacharias, G.
1980-01-01
A model for analyzing crew procedures in approach to landing is developed. The model employs the information processing structure used in the optimal control model and in recent models for monitoring and failure detection. Mechanisms are added to this basic structure to model crew decision making in this multitask environment. Decisions are based on probability assessments and potential mission impact (or gain). Submodels for procedural activities are included. The model distinguishes among external visual, instrument visual, and auditory sources of information. The external visual scene perception models incorporate limitations in obtaining information. The auditory information channel contains a buffer to allow for storage in memory until that information can be processed.
Hernia of the tympanic membrane.
Ikeda, Ryoukichi; Miyazaki, Hiromitsu; Kawase, Tetsuaki; Katori, Yukio; Kobayashi, Toshimitsu
2017-02-01
Although tympanic bulging is commonly encountered, tympanic herniation occupying the external auditory canal is extremely rare. A 66-year-old man presented to our hospital with left aural fullness, bilateral hearing loss and otorrhea. Preoperative findings suggested a tympanic membrane (TM) hernia located in the left external auditory canal. We performed total resection of the soft mass by a transcanal approach using endoscopy. Ventilation tubes were inserted into both ears. Histopathological findings confirmed the diagnosis of TM hernia. The passive opening pressure of the Eustachian tube in this patient was higher than normal, and active opening was not observed. Hernia of the TM most likely resulted from a long-term excessive Valsalva maneuver. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Nodular fasciitis of the external auditory canal in six Egyptian children.
Abdel-Aziz, Mosaad; Khattab, Hany; El-bosraty, Hussam; El-hoshy, Hassan; Hesham, Ahmed; Al-taweel, Hayam W
2008-05-01
Nodular fasciitis of the external auditory canal may mimic a malignant tumor because of its progressive course, so the aim of this study was to highlight a new etiology for aural masses and thereby avoid unnecessarily aggressive treatment. This is a retrospective study of six children who presented with aural masses pathologically diagnosed as nodular fasciitis. The cases are presented clinically, radiologically and pathologically. Surgical excision of the lesions was performed through the external canal, with follow-up for 1 year. Recurrence was detected in two cases, one after 2 months and the other after 4 months. Re-excision was carried out without recurrence until the end of the follow-up period. Proper diagnosis of this lesion is mandatory to avoid aggressive treatment (radical surgery and/or radiotherapy), as the disease has a favorable prognosis with local excision.
Global dynamics of selective attention and its lapses in primary auditory cortex.
Lakatos, Peter; Barczak, Annamaria; Neymotin, Samuel A; McGinnis, Tammy; Ross, Deborah; Javitt, Daniel C; O'Connell, Monica Noelle
2016-12-01
Previous research demonstrated that while selectively attending to relevant aspects of the external world, the brain extracts pertinent information by aligning its neuronal oscillations to key time points of stimuli or their sampling by sensory organs. This alignment mechanism is termed oscillatory entrainment. We investigated the global, long-timescale dynamics of this mechanism in the primary auditory cortex of nonhuman primates, and hypothesized that lapses of entrainment would correspond to lapses of attention. By examining electrophysiological and behavioral measures, we observed that besides the lack of entrainment by external stimuli, attentional lapses were also characterized by high-amplitude alpha oscillations, with alpha frequency structuring of neuronal ensemble and single-unit operations. Entrainment and alpha-oscillation-dominated periods were strongly anticorrelated and fluctuated rhythmically at an ultra-slow rate. Our results indicate that these two distinct brain states represent externally versus internally oriented computational resources engaged by large-scale task-positive and task-negative functional networks.
Was Cheselden's One-Century-Long Otological Writings Concordant With His Time?
Corrales, C Eduardo; Mudry, Albert
2015-08-01
William Cheselden's famous anatomical treatise spanned the entire 18th century with its 15 editions. The aim of this study is to analyze the otological knowledge described in all these editions, to identify key 18th century otological advancements, and to study their concordance. In the first edition (1713), Cheselden notably mentioned four middle ear ossicles: malleus, incus, fourth ossicle, and stapes; four auditory muscles: "external tympani," "external oblique," tensor tympani, and stapedial; and a small opening in the tympanic membrane. In subsequent editions, minimal changes appeared, except for nomenclature changes and the proposal of an artificial opening of the tympanic membrane. Virtually no changes were made up to the last edition (1806). All of Cheselden's editions confirm the uncertain presence of a fourth ossicle, the disputable presence of a tympanic membrane opening, and the "usual" accepted presence of three muscles attached to the malleus. Key otologic advancements not found in any of Cheselden's writings were catheterization of the Eustachian tube, the presence of fluid in the inner ear, and the surgical opening of the mastoid. This study demonstrates that Cheselden, and his subsequent editors, were unaware of some important otologic developments that revolutionized the field of otology. Key advancements lacking in his treatise include catheterization of the Eustachian tube, the presence of fluid in the inner ear, and the surgical opening of the mastoid. Nevertheless, Cheselden was the first to propose artificially opening the tympanic membrane in humans.
[Management and classification of first branchial cleft anomalies].
Zhong, Zhen; Zhao, Enmin; Liu, Yuhe; Liu, Ping; Wang, Quangui; Xiao, Shuifang
2013-07-01
We aimed to identify the different courses of first branchial cleft anomalies and to discuss the management and classification of these anomalies. Twenty-four patients with first branchial cleft anomalies were reviewed. The courses of first branchial cleft anomalies and their corresponding managements were analyzed. Each case was classified according to Olsen's criteria and Work's criteria. According to Olsen's criteria, 3 types of first branchial cleft anomalies are identified: cysts (n = 4), sinuses (n = 13), and fistulas (n = 7). The internal opening was in the external auditory meatus in 16 cases. Two fistulas were parallel to the external auditory canal and the Eustachian tube, with the internal openings on the Eustachian tube. Fourteen cases had close relations to the parotid gland and dissection of the facial nerve had to be done in the operation. Temporary weakness of the mandibular branch of the facial nerve occurred in 2 cases. Salivary fistula of the parotid gland occurred in one patient, which was managed by pressure dressing for two weeks. Canal stenosis occurred in one patient, who underwent canalplasty after three months. The presence of squamous epithelium was reported in all cases, adnexal skin structures in 6 cases, and cartilage in 14 cases. The specimens of the fistula which extended to the nasopharynx were reported as tracts lined with squamous epithelium (the external part) and ciliated columnar epithelium (the internal part). According to Work's criteria, 9 cases were classified as Type I lesions, 13 cases were classified as Type II lesions, and two special cases could not be classified. The average follow-up was 83 months (ranging from 12 to 152 months). No recurrence was found. First branchial cleft anomalies show high variability in their courses. If a patient is suspected to have first branchial cleft anomalies, the external auditory canal must be examined for the internal opening. CT should be done to understand the extension of the lesion. For cases without internal openings in the external auditory canal, CT fistulography should be done to demonstrate the courses, followed by corresponding treatment. Two special cases might be classified as a new type of lesion.
Neural dynamics underlying attentional orienting to auditory representations in short-term memory.
Backer, Kristina C; Binns, Malcolm A; Alain, Claude
2015-01-21
Sounds are ephemeral. Thus, coherent auditory perception depends on "hearing" back in time: retrospectively attending that which was lost externally but preserved in short-term memory (STM). Current theories of auditory attention assume that sound features are integrated into a perceptual object, that multiple objects can coexist in STM, and that attention can be deployed to an object in STM. Recording electroencephalography from humans, we tested these assumptions, elucidating feature-general and feature-specific neural correlates of auditory attention to STM. Alpha/beta oscillations and frontal and posterior event-related potentials indexed feature-general top-down attentional control to one of several coexisting auditory representations in STM. Particularly, task performance during attentional orienting was correlated with alpha/low-beta desynchronization (i.e., power suppression). However, attention to one feature could occur without simultaneous processing of the second feature of the representation. Therefore, auditory attention to memory relies on both feature-specific and feature-general neural dynamics. Copyright © 2015 the authors.
Vanneste, Sven; De Ridder, Dirk
2012-01-01
Tinnitus is the perception of a sound in the absence of an external sound source. It is characterized by sensory components such as the perceived loudness, the lateralization, the tinnitus type (pure tone, noise-like) and associated emotional components, such as distress and mood changes. Source localization of quantitative electroencephalography (qEEG) data demonstrate the involvement of auditory brain areas as well as several non-auditory brain areas such as the anterior cingulate cortex (dorsal and subgenual), auditory cortex (primary and secondary), dorsal lateral prefrontal cortex, insula, supplementary motor area, orbitofrontal cortex (including the inferior frontal gyrus), parahippocampus, posterior cingulate cortex and the precuneus, in different aspects of tinnitus. Explaining these non-auditory brain areas as constituents of separable subnetworks, each reflecting a specific aspect of the tinnitus percept increases the explanatory power of the non-auditory brain areas involvement in tinnitus. Thus, the unified percept of tinnitus can be considered an emergent property of multiple parallel dynamically changing and partially overlapping subnetworks, each with a specific spontaneous oscillatory pattern and functional connectivity signature. PMID:22586375
The RetroX auditory implant for high-frequency hearing loss.
Garin, P; Genard, F; Galle, C; Jamart, J
2004-07-01
The objective of this study was to analyze the subjective satisfaction and measure the hearing gain provided by the RetroX (Auric GmbH, Rheine, Germany), an auditory implant of the external ear. We conducted a retrospective case review. We conducted this study at a tertiary referral center at a university hospital. We studied 10 adults with high-frequency sensorineural hearing loss (ski-slope audiogram). The RetroX consists of an electronic unit sited in the postaural sulcus connected to a titanium tube implanted under the auricle between the sulcus and the entrance of the external auditory canal. Implanting requires only minor surgery under local anesthesia. Main outcome measures were a satisfaction questionnaire, pure-tone audiometry in quiet, speech audiometry in quiet, speech audiometry in noise, and azimuth audiometry (hearing threshold as a function of sound source location within the horizontal plane at ear level). Subjectively, all 10 patients are satisfied or even extremely satisfied with the hearing improvement provided by the RetroX. They wear the implant daily, from morning to evening. We observe a statistically significant improvement of pure-tone thresholds at 1, 2, and 4 kHz. In quiet, the speech reception threshold improves by 9 dB. Speech audiometry in noise shows that intelligibility improves by 26% for a signal-to-noise ratio of -5 dB, by 18% for a signal-to-noise ratio of 0 dB, and by 13% for a signal-to-noise ratio of +5 dB. Localization audiometry indicates that the skull masks sound contralateral to the implanted ear. Of the 10 patients, one had acoustic feedback and one presented with a granulomatous reaction to the foreign body that necessitated removing the implant. The RetroX auditory implant is a semi-implantable hearing aid without occlusion of the external auditory canal. It provides a new therapeutic alternative for managing high-frequency hearing loss.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogawa, Kazuhiko; Nakamura, Katsumasa; Hatano, Kazuo
Purpose: To examine the relative roles of surgery, radiotherapy, and chemotherapy in the management of patients with squamous cell carcinomas of the external auditory canal and middle ear. Methods and Materials: The records of 87 patients with histologically confirmed squamous cell carcinoma who were treated between 1984 and 2005 were reviewed. Fifty-three patients (61%) were treated with surgery and radiotherapy (S + RT group) and the remaining 34 patients with radiotherapy alone (RT group). Chemotherapy was administered in 34 patients (39%). Results: The 5-year actuarial overall and disease-free survival (DFS) rates for all patients were 55% and 54%, respectively. On univariate analysis, T stage (Stell's classification), treatment modality, and Karnofsky performance status had significant impact on DFS. On multivariate analysis, T stage and treatment modality were significant prognostic factors. Chemotherapy did not influence DFS. The 5-year DFS rate in T1, T2, and T3 patients was 83%, 45%, and 0 in the RT group (p < 0.0001) and 75%, 75%, and 46% in the S + RT group (p = 0.13), respectively. The 5-year DFS rate in patients with negative surgical margins, those with positive margins, and those with macroscopic residual disease was 83%, 55%, and 38%, respectively (p = 0.007). Conclusions: Radical radiotherapy is the treatment of choice for early-stage (T1) diseases, whereas surgery (negative surgical margins if possible) with radiotherapy is recommended as the standard care for advanced (T2-3) disease. Further clarification on the role of chemotherapy is necessary.
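The actuarial survival rates quoted above are Kaplan-Meier product-limit estimates. A minimal sketch of that estimator is given below; the follow-up times and event flags are invented for illustration and are not patient data from the study.

```python
# Kaplan-Meier product-limit estimator for (disease-free) survival.
import numpy as np

def kaplan_meier(times, events):
    """times: follow-up in months; events: 1 = progression/death, 0 = censored."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    survival, curve = 1.0, []
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)                       # patients still at risk at t
        d = np.sum((times == t) & (events == 1))           # events occurring at t
        survival *= 1.0 - d / at_risk
        curve.append((t, survival))
    return curve

# Synthetic cohort: estimated survival steps down at each observed event time.
curve = kaplan_meier(times=[6, 12, 18, 24, 30, 40, 55, 60, 72, 80],
                     events=[1, 0, 1, 1, 0, 1, 0, 1, 0, 0])
print(curve)
```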
An Evaluative Report on the Current Status of Parapsychology
1986-05-01
mentation" (Stanford, 1979). The ganzfeld procedure eliminates patterned stimulation in the visual h and auditory modes. Visual isolation is provided by...distracting external stimulation . The most popular of such techniques is the ganzfeld, a procedure in which the subject looks through halves of ping...powerful statistical analyses. Ongoing analog or digital feedback can be provided to subjects in innumerable ways in either the visual or auditory mode
A Cognitive Paradigm to Investigate Interference in Working Memory by Distractions and Interruptions
Janowich, Jacki; Mishra, Jyoti; Gazzaley, Adam
2015-01-01
Goal-directed behavior is often impaired by interference from the external environment, either in the form of distraction by irrelevant information that one attempts to ignore, or by interrupting information that demands attention as part of another (secondary) task goal. Both forms of external interference have been shown to detrimentally impact the ability to maintain information in working memory (WM). Emerging evidence suggests that these different types of external interference exert different effects on behavior and may be mediated by distinct neural mechanisms. Better characterizing the distinct neuro-behavioral impact of irrelevant distractions versus attended interruptions is essential for advancing an understanding of top-down attention, resolution of external interference, and how these abilities become degraded in healthy aging and in neuropsychiatric conditions. This manuscript describes a novel cognitive paradigm developed in the Gazzaley lab that has now been modified into several distinct versions used to elucidate behavioral and neural correlates of interference, by to-be-ignored distractors versus to-be-attended interruptors. Details are provided on variants of this paradigm for investigating interference in visual and auditory modalities, at multiple levels of stimulus complexity, and with experimental timing optimized for electroencephalography (EEG) or functional magnetic resonance imaging (fMRI) studies. In addition, data from younger and older adult participants obtained using this paradigm are reviewed and discussed in the context of their relationship with the broader literatures on external interference and age-related neuro-behavioral changes in resolving interference in working memory. PMID:26273742
Rerouting the external auditory canal. A method of correcting congenital stenosis.
Baron, S H
1975-04-01
An hourglass or funnel-shaped, stenosed, external auditory meatus with a normal tympanic membrane, middle and inner ear is one of the congenital anomalies that occasionally occurs. Such an abnormality was present in both ears of a woman and caused chronic otitis externa and deafness. A routine meatoplasty on the right ear failed because of an unusual cephalad position of the drumhead in relation to a "downhill" position of the stenosed outer meatus. Rerouting the ear canal to a horizontal position by removing bone of the canal superiorly, posteriorly, and inferiorly, and grafting the now horizontal canal with skin taken from the postauricular fold produced a good result. This is a satisfactory procedure for a woman, but would be cosmetically unacceptable for a man.
Data of ERPs and spectral alpha power when attention is engaged on visual or verbal/auditory imagery
Villena-González, Mario; López, Vladimir; Rodríguez, Eugenio
2016-01-01
This article provides data from statistical analysis of event-related brain potentials (ERPs) and spectral power from 20 participants during three attentional conditions. Specifically, P1, N1 and P300 amplitude of ERP were compared when participants' attention was oriented to an external task, to a visual imagery and to an inner speech. The spectral power from alpha band was also compared in these three attentional conditions. These data are related to the research article where sensory processing of external information was compared during these three conditions entitled "Orienting attention to visual or verbal/auditory imagery differentially impairs the processing of visual stimuli" (Villena-Gonzalez et al., 2016) [1]. PMID:27077090
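The alpha-band power comparison described here can be illustrated with a Welch periodogram over single-channel epochs. The sketch below uses assumed parameters and synthetic epochs; it is not the dataset's analysis script.

```python
# Alpha-band (8-12 Hz) power per epoch via Welch's method; synthetic data only.
import numpy as np
from scipy.signal import welch

def alpha_power(epoch, fs, band=(8.0, 12.0)):
    """Band power approximated as summed PSD times the frequency bin width."""
    freqs, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), 2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.sum(psd[mask]) * (freqs[1] - freqs[0])

# Synthetic comparison: a stronger 10-Hz rhythm during an imagery-like epoch.
fs = 250
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(4)
external_task = rng.normal(0, 1, t.size) + 0.3 * np.sin(2 * np.pi * 10 * t)
inner_imagery = rng.normal(0, 1, t.size) + 1.0 * np.sin(2 * np.pi * 10 * t)
print(alpha_power(external_task, fs), alpha_power(inner_imagery, fs))
```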
Rasmussen, N; Johnsen, N J; Thomsen, J
1979-01-01
Six out of twenty descendants of a reportedly affected grandfather have congenital bilateral symmetrical and isolated subtotal atresia of the external auditory canal. Four of the six affected descendants have bilateral foot anomalies--two affected cousins having congenital vertical talus. All of the three affected boys in the third generation have increased interocular distance. Short fifth fingers, bilateral single transverse palmar creases, pyloric stenosis and congenital exotropia were found infrequently and are considered coincidental features. Apart from the atresia, oto-rhinolaryngologic examination, mental function, dermatoglyphics, IgA, kidney function and heart function of the affected descendants were all normal. The karyotype of four affected descendants examined was normal. An autosomal dominant inheritance with variable expressivity is suggested.
Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude
2016-06-01
Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli, with only a few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on 1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and 2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison system of incoming and stored information. Also, objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention-to-memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.
The Experience of Soviet Medicine in the Great Patriotic War, 1941-1945.
1980-02-06
mainly during shock/counterblow of the brain against the opposite walls of the skull. Subliminal stimulations cause a system response of the IX-X nerve...the same effect was obtained during stimulation of the external auditory passage and the mucosa of the nose. With sharp pressure to the region of the inguinal...in the zone of stimulation the centers of the temporal region are involved, in combination with vestibular, auditory or gustatory aura. At the same time the
Auditory priming improves neural synchronization in auditory-motor entrainment.
Crasta, Jewel E; Thaut, Michael H; Anderson, Charles W; Davies, Patricia L; Gavin, William J
2018-05-22
Neurophysiological research has shown that auditory and motor systems interact during movement to rhythmic auditory stimuli through a process called entrainment. This study explores the neural oscillations underlying auditory-motor entrainment using electroencephalography. Forty young adults were randomly assigned to one of two control conditions, an auditory-only condition or a motor-only condition, prior to a rhythmic auditory-motor synchronization condition (referred to as combined condition). Participants assigned to the auditory-only condition (auditory-first group) listened to 400 trials of auditory stimuli presented every 800 ms, while those in the motor-only condition (motor-first group) were asked to tap rhythmically every 800 ms without any external stimuli. Following their control condition, all participants completed an auditory-motor combined condition that required tapping along with auditory stimuli every 800 ms. As expected, the neural processes for the combined condition for each group were different compared to their respective control condition. Time-frequency analysis of total power at an electrode site on the left central scalp (C3) indicated that the neural oscillations elicited by auditory stimuli, especially in the beta and gamma range, drove the auditory-motor entrainment. For the combined condition, the auditory-first group had significantly lower evoked power for a region of interest representing sensorimotor processing (4-20 Hz) and less total power in a region associated with anticipation and predictive timing (13-16 Hz) than the motor-first group. Thus, the auditory-only condition served as a priming facilitator of the neural processes in the combined condition, more so than the motor-only condition. Results suggest that even brief periods of rhythmic training of the auditory system lead to neural efficiency, facilitating the motor system during the process of entrainment. These findings have implications for interventions using rhythmic auditory stimulation. Copyright © 2018 Elsevier Ltd. All rights reserved.
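The distinction between evoked and total power reported above can be made concrete: evoked power is the time-frequency transform of the trial-averaged waveform (phase-locked activity only), whereas total power averages the single-trial power and therefore also retains non-phase-locked (induced) activity. The following is a minimal Python sketch of that computation at a single frequency using a complex Morlet wavelet; the synthetic epochs, wavelet parameters, and 15 Hz test frequency are placeholders and do not reproduce the study's actual analysis pipeline.

    # Evoked vs. total power at one frequency via complex Morlet convolution.
    # Synthetic single-channel epochs stand in for EEG data (e.g., at C3).
    import numpy as np

    fs, n_trials, n_samples = 500, 40, 500
    trials = np.random.randn(n_trials, n_samples)  # placeholder epochs

    def morlet(freq, fs, n_cycles=7):
        t = np.arange(-1, 1, 1 / fs)
        sigma = n_cycles / (2 * np.pi * freq)
        return np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma**2))

    w = morlet(15, fs)  # 15 Hz, within the 13-16 Hz band discussed above
    single_trial_power = np.array(
        [np.abs(np.convolve(tr, w, mode="same")) ** 2 for tr in trials])

    total_power = single_trial_power.mean(axis=0)              # phase-locked + induced
    evoked_power = np.abs(
        np.convolve(trials.mean(axis=0), w, mode="same")) ** 2  # phase-locked only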
Huschke's anterior external auditory canal foramen: art before medicine?
Pirsig, Wolfgang; Mudry, Albert
2015-03-01
During the Renaissance, several anatomic details were described with a degree of exactness that would stand the test of time. One example is the foramen in the anteroinferior wall of the external auditory canal, eponymously named after the German anatomist Emil Huschke, who described it in 1844. However, the first clearly medical observation of this foramen was published by the French physician Jean Riolan the Younger in 1648. After a short excursion into some paleopathologic findings of this foramen in skulls of the Early Bronze Age and of pre-Columbian Peruvian populations, this article follows the traces of the early medical descriptions and depictions of the foramen up until the 19th century. They are connected with the names of Duverney (1683), Cassebohm (1734), Lincke (1837), Huschke (1844), Humphry (1858), von Troeltsch (1860), and especially Buerkner (1878). Surprisingly, the earliest exact depiction of the foramen in the auditory canal of a skull was found in the oil painting Saint Jerome in His Study by the Flemish artist Marinus Claeszon van Reymerswaele. He depicted the foramen in the period between 1521 and 1541, a hundred years before Riolan the Younger.
A direct comparison of short-term audiomotor and visuomotor memory.
Ward, Amanda M; Loucks, Torrey M; Ofori, Edward; Sosnoff, Jacob J
2014-04-01
Audiomotor and visuomotor short-term memory are required for an important variety of skilled movements but have not been compared in a direct manner previously. Audiomotor memory capacity might be greater to accommodate auditory goals that are less directly related to movement outcome than for visually guided tasks. Subjects produced continuous isometric force with the right index finger under auditory and visual feedback. During the first 10 s of each trial, subjects received continuous auditory or visual feedback. For the following 15 s, feedback was removed but the force had to be maintained accurately. An internal effort condition was included to test memory capacity in the same manner but without external feedback. Similar decay times of ~5-6 s were found for vision and audition but the decay time for internal effort was ~4 s. External feedback thus provides an advantage in maintaining a force level after feedback removal, but may not exclude some contribution from a sense of effort. Short-term memory capacity appears longer than certain previous reports but there may not be strong distinctions in capacity across different sensory modalities, at least for isometric force.
Paillère-Martinot, M-L; Galinowski, A; Plaze, M; Andoh, J; Bartrés-Faz, D; Bellivier, F; Lefaucheur, J-P; Rivière, D; Gallarda, T; Martinot, J-L; Artiges, E
2017-03-01
Repetitive transcranial magnetic stimulation (rTMS) over the left temporo-parietal region has been proposed as a treatment for resistant auditory verbal hallucinations (AVH), but which patients are more likely to benefit from rTMS is still unclear. This study sought to assess the effects of rTMS on AVH, with a focus on hallucination phenomenology. Twenty-seven patients with schizophrenia and medication-resistant AVH participated in a randomized, double-blind, placebo-controlled, add-on rTMS study. The stimulation targeted a language-perception area individually determined using functional magnetic resonance imaging and a language recognition task. AVH were assessed using the hallucination subscale of the Scale for the Assessment of Positive Symptoms (SAPS). The spatial location of AVH was assessed using the Psychotic Symptom Rating Scales. A significant improvement in SAPS hallucination subscale score was observed in both the actively treated and placebo-treated groups, with no difference between the two modalities. Patients with external AVH were significantly more improved than patients with internal AVH, with both modalities. A marked placebo effect of rTMS was observed in patients with resistant AVH. Patients with prominent external AVH may be more likely to benefit from both active and placebo interventions. Cortical effects related to non-magnetic stimulation of the auditory cortex are suggested. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Shinnabe, Akihiro; Hara, Mariko; Hasegawa, Masayo; Matsuzawa, Shingo; Kanazawa, Hiromi; Yoshida, Naohiro; Iino, Yukiko
2013-01-01
To investigate the different pathways of progression to the middle ear in keratosis obturans (KO) and external auditory canal cholesteatoma (EACC). Retrospective case review. Referral hospital otolaryngology department. Patients with KO or EACC and middle ear disease who underwent surgical management were included. Four ears of 4 patients (mean age, 41.25 yr) constituted the KO group, and 5 ears of 4 patients (mean age, 49.5 yr) constituted the EACC group. Intraoperative findings of the middle ear cavity were investigated in the KO and EACC groups. In the KO group, 3 patients had a perforated tympanic membrane and cholesteatoma in the tympanic cavity. The other patient had preoperative right facial palsy; removal of the keratin plug revealed an adherent tympanic membrane, and intraoperatively the tympanic segment of the fallopian canal was found to be eroded because of inflammation. No KO case initially progressed to the mastoid cavity. In the EACC group, all four patients had external auditory canal cholesteatoma with middle ear disease and showed initial progression to the mastoid cavity. KO tends to progress initially to the tympanic cavity via a diseased tympanic membrane, whereas EACC tends to progress to the mastoid cavity via destruction of the posterior bony canal. This is the first report to investigate differences in the pathway of progression to the middle ear cavity in these 2 diseases.
McCoul, Edward D; Hanson, Matthew B
2011-12-01
We conducted a retrospective study to compare the clinical characteristics of external auditory canal cholesteatoma (EACC) with those of a similar entity, keratosis obturans (KO). We also sought to identify those aspects of each disease that may lead to complications. We identified 6 patients in each group. Imaging studies were reviewed for evidence of bony erosion and the proximity of disease to vital structures. All 6 patients in the EACC group had their diagnosis confirmed by computed tomography (CT), which demonstrated widening of the bony external auditory canal; 4 of these patients had critical erosion of bone adjacent to the facial nerve. Of the 6 patients with KO, only 2 had undergone CT, and neither exhibited any significant bony erosion or expansion; 1 of them developed osteomyelitis of the temporal bone and adjacent temporomandibular joint. Another patient manifested KO as part of a dermatophytid reaction. The essential component of treatment in all cases of EACC was microscopic debridement of the ear canal. We conclude that EACC may produce significant erosion of bone with exposure of vital structures, including the facial nerve. Because of the clinical similarity of EACC to KO, misdiagnosis is possible. Temporal bone imaging should be obtained prior to attempts at debridement of suspected EACC. Increased awareness of these uncommon conditions is warranted to prompt appropriate investigation and prevent iatrogenic complications such as facial nerve injury.
Chhabra, Harleen; Sowmya, Selvaraj; Sreeraj, Vanteemar S; Kalmady, Sunil V; Shivakumar, Venkataram; Amaresha, Anekal C; Narayanaswamy, Janardhanan C; Venkatasubramanian, Ganesan
2016-12-01
Auditory hallucinations constitute an important symptom component in 70-80% of schizophrenia patients. These hallucinations are proposed to occur due to an imbalance between perceptual expectation and external input, resulting in attachment of meaning to abstract noises; signal detection theory has been proposed to explain these phenomena. In this study, we describe the development of an auditory signal detection task using a carefully chosen set of English words that could be tested successfully in schizophrenia patients coming from varying linguistic, cultural and social backgrounds. Schizophrenia patients with significant auditory hallucinations (N=15) and healthy controls (N=15) performed the auditory signal detection task wherein they were instructed to differentiate between a 5-s burst of plain white noise and voiced-noise. The analysis showed that false alarms (p=0.02), discriminability index (p=0.001) and decision bias (p=0.004) were significantly different between the two groups. There was a significant negative correlation between false alarm rate and decision bias. These findings extend further support for impaired perceptual expectation system in schizophrenia patients. Copyright © 2016 Elsevier B.V. All rights reserved.
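For readers unfamiliar with the measures named above, the discriminability index (d') and decision bias (criterion c) are conventionally computed from the hit and false alarm rates via the inverse normal transform. The Python sketch below shows that calculation; the trial counts are invented for illustration and are not data from this study.

    # Signal detection measures from hit/false-alarm counts (illustrative values only).
    from scipy.stats import norm

    def sdt_measures(hits, misses, false_alarms, correct_rejections):
        # Log-linear correction avoids infinite z-scores for rates of 0 or 1.
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
        d_prime = z_hit - z_fa              # discriminability index
        criterion = -0.5 * (z_hit + z_fa)   # decision bias (negative = liberal)
        return d_prime, criterion

    # Hypothetical counts for one participant: 40 voiced-noise and 40 plain-noise trials.
    print(sdt_measures(hits=28, misses=12, false_alarms=15, correct_rejections=25))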
Tinnitus. I: Auditory mechanisms: a model for tinnitus and hearing impairment.
Hazell, J W; Jastreboff, P J
1990-02-01
A model is proposed for tinnitus and sensorineural hearing loss involving cochlear pathology. As tinnitus is defined as a cortical perception of sound in the absence of an appropriate external stimulus, it must result from a generator in the auditory system whose output undergoes extensive auditory processing before it is perceived. The concept of spatial nonlinearity in the cochlea is presented as a cause of tinnitus generation controlled by the efferents. Various clinical presentations of tinnitus and the way in which they respond to changes in the environment are discussed with respect to this control mechanism. The concept of auditory retraining as part of the habituation process, and interaction with the prefrontal cortex and limbic system, is presented as a central model which emphasizes the importance of the emotional significance and meaning of tinnitus.
47 CFR 14.21 - Performance Objectives.
Code of Federal Regulations, 2013 CFR
2013-10-01
... operate and use the product, including but not limited to, text, static or dynamic images, icons, labels.... (2) Connection point for external audio processing devices. Products providing auditory output shall...
47 CFR 14.21 - Performance Objectives.
Code of Federal Regulations, 2014 CFR
2014-10-01
... operate and use the product, including but not limited to, text, static or dynamic images, icons, labels.... (2) Connection point for external audio processing devices. Products providing auditory output shall...
Adhershitha, A. R.; Anilkumar, S.; Rajesh, C.; Mohan, Deepak C.
2016-01-01
Acquired external auditory canal (EAC) atresia is an infrequent entity which can originate from a number of different causes including trauma, infection, neoplasia, inflammation, and radiotherapy. Posttraumatic atresias are exceptionally rare; only about 10% of atresias are attributed to trauma in most series. The management of stenosis of the EAC is challenging as it is associated with residual hearing loss and late recurrence. Traditional stents often occlude the EAC, resulting in a temporary conductive hearing loss. This case report describes the technique of fabrication of a wide-bored acrylic stent which attained additional retention from the folds of the auricle. The customized earmold stent effectively prevented restenosis, while the large bore provided ventilation and improved hearing subjectively during the stenting period. PMID:27746605
Forebrain pathway for auditory space processing in the barn owl.
Cohen, Y E; Miller, G L; Knudsen, E I
1998-02-01
The forebrain plays an important role in many aspects of sound localization behavior. Yet, the forebrain pathway that processes auditory spatial information is not known for any species. Using standard anatomic labeling techniques, we took a "top-down" approach to trace the flow of auditory spatial information from an output area of the forebrain sound localization pathway (the auditory archistriatum, AAr), back through the forebrain, and into the auditory midbrain. Previous work has demonstrated that AAr units are specialized for auditory space processing. The results presented here show that the AAr receives afferent input from Field L both directly and indirectly via the caudolateral neostriatum. Afferent input to Field L originates mainly in the auditory thalamus, nucleus ovoidalis, which, in turn, receives input from the central nucleus of the inferior colliculus. In addition, we confirmed previously reported projections of the AAr to the basal ganglia, the external nucleus of the inferior colliculus (ICX), the deep layers of the optic tectum, and various brain stem nuclei. A series of inactivation experiments demonstrated that the sharp tuning of AAr sites for binaural spatial cues depends on Field L input but not on input from the auditory space map in the midbrain ICX: pharmacological inactivation of Field L completely eliminated auditory responses in the AAr, whereas bilateral ablation of the midbrain ICX had no appreciable effect on AAr responses. We conclude, therefore, that the forebrain sound localization pathway can process auditory spatial information independently of the midbrain localization pathway.
NASA Technical Reports Server (NTRS)
Wenzel, Elizabeth M.
1991-01-01
A 3D auditory display can potentially enhance information transfer by combining directional and iconic information in a quite naturalistic representation of dynamic objects in the interface. Another aspect of auditory spatial cues is that, in conjunction with other modalities, they can act as a potentiator of information in the display. For example, visual and auditory cues together can reinforce the information content of the display and provide a greater sense of presence or realism in a manner not readily achievable by either modality alone. This phenomenon will be particularly useful in telepresence applications, such as advanced teleconferencing environments, shared electronic workspaces, and monitoring telerobotic activities in remote or hazardous situations. Thus, the combination of direct spatial cues with good principles of iconic design could provide an extremely powerful and information-rich display which is also quite easy to use. An alternative approach, recently developed at ARC, generates externalized, 3D sound cues over headphones in real time using digital signal processing. Here, the synthesis technique involves the digital generation of stimuli using Head-Related Transfer Functions (HRTFs) measured in the two ear canals of individual subjects. Other similar approaches include an analog system developed by Loomis et al. (1990) and digital systems which make use of transforms derived from normative manikins and simulations of room acoustics. Such an interface also requires careful psychophysical evaluation of listeners' ability to accurately localize the virtual or synthetic sound sources. From an applied standpoint, measurement of each potential listener's HRTFs may not be possible in practice. For experienced listeners presented with nonindividualized HRTFs, localization performance was only slightly degraded compared to a subject's inherent ability. Alternatively, even inexperienced listeners may be able to adapt to a particular set of HRTFs as long as they provide adequate cues for localization. In general, these data suggest that most listeners can obtain useful directional information from an auditory display without requiring the use of individually tailored HRTFs.
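The synthesis technique described above amounts, in the time domain, to convolving a source signal with the listener's left- and right-ear head-related impulse responses (HRIRs) and presenting the result over headphones. The Python sketch below illustrates only that core operation; the impulse responses, sampling rate, and source signal are placeholders, not the ARC system or measured HRTFs.

    # Binaural synthesis by HRIR convolution (placeholder data, not measured HRTFs).
    import numpy as np
    from scipy.signal import fftconvolve

    fs = 44100
    mono = np.random.randn(fs)                        # 1 s of noise as a stand-in source
    hrir_left = np.zeros(256); hrir_left[0] = 1.0     # toy impulse responses; real HRIRs
    hrir_right = np.zeros(256); hrir_right[30] = 0.8  # would be measured per listener

    left = fftconvolve(mono, hrir_left)               # left-ear signal
    right = fftconvolve(mono, hrir_right)             # right-ear signal (delayed, attenuated)
    binaural = np.stack([left, right], axis=1)        # two-channel output for headphones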
A new method for selecting auricle positions in skull base reconstruction for temporal bone cancer.
Tanaka, Kentaro; Yano, Tomoyuki; Homma, Tsutomu; Tsunoda, Atsunobu; Aoyagi, Masaru; Kishimoto, Seiji; Okazaki, Mutsumi
2018-03-25
In advanced temporal bone carcinoma cases, we attempted to preserve as much of the auricle as possible from a cosmetic and functional perspective. Difficulties are associated with selecting an adequate position for reconstructed auricles intraoperatively. We improved the surgical procedure to achieve a good postoperative auricle position. Nine patients were included in this study. All patients underwent subtotal removal of the temporal bone and resection of the external auditory canal while preserving most of the external ear, and lateral skull base reconstruction was performed with anterolateral thigh flaps. We invented a new device, the auricle localizer, to select the correct position for the replaced external ear. The head skin incision line and two points of three-point pin fixation were used as criteria, and a Kirschner wire was shaped as a basic line to match these criteria. Another Kirschner wire was shaped by wrapping it around the inferior edge of the external ear as the positioning line, and these two lines were then combined. To evaluate the postoperative auricle position, the auricle inclination angle was measured using head frontal cephalogram imaging. The external ear on the affected side clearly drooped postoperatively in nonlocalizer cases, whereas this was not obvious in localizer cases. Auricle inclination angles 1 year after surgery significantly differed between these two groups (P = 0.018). The surgical device, the auricle localizer, is useful for selecting accurate auricle positions intraoperatively. The assessment index, the auricle inclination angle, is useful for quantitatively evaluating postoperative results. Level of Evidence: 4. Laryngoscope, 2018. © 2018 The American Laryngological, Rhinological and Otological Society, Inc.
Psychophysics and Neuronal Bases of Sound Localization in Humans
Ahveninen, Jyrki; Kopco, Norbert; Jääskeläinen, Iiro P.
2013-01-01
Localization of sound sources is a considerable computational challenge for the human brain. Whereas the visual system can process basic spatial information in parallel, the auditory system lacks a straightforward correspondence between external spatial locations and sensory receptive fields. Consequently, the question of how different acoustic features supporting spatial hearing are represented in the central nervous system is still open. Functional neuroimaging studies in humans have provided evidence for a posterior auditory "where" pathway that encompasses non-primary auditory cortex areas, including the planum temporale (PT) and posterior superior temporal gyrus (STG), which are strongly activated by horizontal sound direction changes, distance changes, and movement. However, these areas are also activated by a wide variety of other stimulus features, posing a challenge for the interpretation that the underlying areas are purely spatial. This review discusses behavioral and neuroimaging studies on sound localization, and some of the competing models of representation of auditory space in humans. PMID:23886698
A 3 year update on the influence of noise on performance and behavior.
Clark, Charlotte; Sörqvist, Patrik
2012-01-01
The effect of noise exposure on human performance and behavior continues to be a focus for research. This paper reviews developments in the field over the past 3 years, highlighting recent findings and ongoing work in two main areas: field studies of noise effects on children's cognition, and experimental studies of auditory distraction. Overall, the evidence for the effects of external environmental noise on children's cognition has strengthened in recent years, with the use of larger community samples and better noise characterization. Studies have begun to establish exposure-effect thresholds for noise effects on cognition. However, the evidence remains predominantly cross-sectional, and future research needs to examine whether sound insulation might lessen the effects of external noise on children's learning. Research has also begun to explore the link between internal classroom acoustics and children's learning, aiming to further inform the design of the internal acoustic environment. Experimental studies of the effects of noise on cognitive performance are also reviewed, including functional differences in varieties of auditory distraction, semantic auditory distraction, individual differences in susceptibility to auditory distraction, and the role of cognitive control in the effects of noise on understanding and memory of target speech materials. In general, the results indicate that there are at least two functionally different types of auditory distraction: one due to the interruption of processes (as a result of attention being captured by the sound), the other due to interference between processes. The magnitude of the former type is related to individual differences in cognitive control capacities (e.g., working memory capacity); the magnitude of the latter is not. Few studies address noise effects on behavioral outcomes, emphasizing the need for researchers to explore noise effects on behavior in more detail.
[Otomycosis and topical application of thimerosal: study of 152 cases].
Tisner, J; Millán, J; Rivas, P; Adiego, I; Castellote, A; Valles, H
1995-01-01
To evaluate the effectiveness of the topical application of thimerosal (Merthiolate tincture) in mycosis involving the external auditory canal. The study includes 152 patients with a clinical, otoscopic and microscopic diagnosis of otomycosis. Results were assessed 72 hours and 10 days after the application. Mycological study was performed in 83 patients, finding Aspergillus niger in 54.0% of the cases, Candida albicans in 25.4%, Aspergillus fumigatus in 15.8% and Penicillium in 4.8%. Improvement at 72 hours was found in 66.4% and at 10 days in 93.4% of the patients. Bacteriological contamination was found in 6.6% of the total. In most of the patients, the otomycosis healed after cleaning of the external auditory canal and topical application of thimerosal. This method is easy to apply, fast, effective, low in cost and has few side effects.
Hao, Qiao; Ora, Hiroki; Ogawa, Ken-Ichiro; Ogata, Taiki; Miyake, Yoshihiro
2016-09-13
The simultaneous perception of multimodal sensory information has a crucial role for effective reactions to the external environment. Voluntary movements are known to occasionally affect simultaneous perception of auditory and tactile stimuli presented to the moving body part. However, little is known about spatial limits on the effect of voluntary movements on simultaneous perception, especially when tactile stimuli are presented to a non-moving body part. We examined the effect of voluntary movement on the simultaneous perception of auditory and tactile stimuli presented to the non-moving body part. We considered the possible mechanism using a temporal order judgement task under three experimental conditions: voluntary movement, where participants voluntarily moved their right index finger and judged the temporal order of auditory and tactile stimuli presented to their non-moving left index finger; passive movement; and no movement. During voluntary movement, the auditory stimulus needed to be presented before the tactile stimulus so that they were perceived as occurring simultaneously. This subjective simultaneity differed significantly from the passive movement and no movement conditions. This finding indicates that the effect of voluntary movement on simultaneous perception of auditory and tactile stimuli extends to the non-moving body part.
New HRCT-based measurement of the human outer ear canal as a basis for acoustical methods.
Grewe, Johanna; Thiele, Cornelia; Mojallal, Hamidreza; Raab, Peter; Sankowsky-Rothe, Tobias; Lenarz, Thomas; Blau, Matthias; Teschner, Magnus
2013-06-01
As the form and size of the external auditory canal determine its transmitting function and hence the sound pressure in front of the eardrum, it is important to understand its anatomy in order to develop, optimize, and compare acoustical methods. High-resolution computed tomography (HRCT) data were measured retrospectively for 100 patients who had received a cochlear implant. In order to visualize the anatomy of the auditory canal, its length, radius, and the angle at which it runs were determined for the patients’ right and left ears. The canal’s volume was calculated, and a radius function was created. The determined length of the auditory canal averaged 23.6 mm for the right ear and 23.5 mm for the left ear. The calculated auditory canal volume (Vtotal) was 0.7 ml for the right ear and 0.69 ml for the left ear. The auditory canal was found to be significantly longer in men than in women, and the volume greater. The values obtained can be employed to develop a method that represents the shape of the auditory canal as accurately as possible to allow the best possible outcomes for hearing aid fitting.
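If the canal cross-section is treated as approximately circular at each depth (an assumption made here for illustration, not stated in the abstract), the reported volume calculation from the radius function r(z) along the canal axis corresponds to

    V = \int_{0}^{L} \pi \, r(z)^{2} \, dz,

where L is the measured canal length; in practice the integral is evaluated as a discrete sum over the HRCT slices.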
Listening to Filtered Music as a Treatment Option for Tinnitus: A Review
Wilson, E. Courtenay; Schlaug, Gottfried; Pantev, Christo
2010-01-01
Tinnitus is the perception of a sound in the absence of an external acoustic stimulus, and it affects roughly 10-15% of the population. This review will discuss the different types of tinnitus and the current research on the underlying neural substrates of subjective tinnitus. Particular focus will be placed on the plasticity of the auditory cortex, the inputs from non-auditory centers in the central nervous system, and how these are affected by tinnitus. We will also discuss several therapies that utilize music as a treatment for tinnitus and highlight a novel method that filters out the tinnitus frequency from the music, leveraging the plasticity in the auditory cortex as a means of reducing the impact of tinnitus. PMID:21170296
A periodic network of neurochemical modules in the inferior colliculus.
Chernock, Michelle L; Larue, David T; Winer, Jeffery A
2004-02-01
A new organization has been found in the shell nuclei of the rat inferior colliculus. Chemically specific modules with a periodic distribution fill about half of layer 2 of the external cortex and dorsal cortex. Modules contain clusters of small glutamic acid decarboxylase-positive neurons and large boutons at higher density than in other inferior colliculus subdivisions. The modules are also present in tissue stained for parvalbumin, cytochrome oxidase, nicotinamide adenine dinucleotide phosphate-diaphorase, and acetylcholinesterase. Six to seven bilaterally symmetrical modules extend from the caudal extremity of the external cortex of the inferior colliculus to its rostral pole. Modules are approximately 800 to 2200 μm long and have areas between 5,000 and 40,000 μm². Modules alternate with immunonegative regions. Similar modules are found in inbred and outbred strains of rat, and in both males and females. They are absent in mouse, squirrel, cat, bat, macaque monkey, and barn owl. Modules are immunonegative for glycine, calbindin, serotonin, and choline acetyltransferase. The auditory cortex and the ipsi- and contralateral inferior colliculi project to the external cortex. Somatic sensory influences from the dorsal column nuclei and spinal trigeminal nucleus are the primary ascending sensory input to the external cortex; ascending auditory input to layer 2 is sparse. If the immunopositive modular neurons receive this input, the external cortex could participate in spatial orientation and somatic motor control through its intrinsic and extrinsic projections.
Li, Jianwen; Li, Yan; Zhang, Ming; Ma, Weifang; Ma, Xuezong
2014-01-01
The current use of hearing aids and artificial cochleas for deaf-mute individuals depends on their auditory nerve. Skin-hearing technology, a patented system developed by our group, uses a cutaneous sensory nerve to substitute for the auditory nerve to help deaf-mutes to hear sound. This paper introduces a new solution, multi-channel-array skin-hearing technology, to address the problem of speech discrimination. Based on the filtering principle of hair cells, external voice signals at different frequencies are converted to current signals at corresponding frequencies using electronic multi-channel bandpass filtering technology. Different positions on the skin can be stimulated by the electrode array, allowing the perception and discrimination of external speech signals to be determined by the skin response to the current signals. Through voice frequency analysis, the frequency range of each band-pass filter can also be determined. These findings demonstrate that the sensory nerves in the skin can help to transfer the voice signal and to distinguish the speech signal, suggesting that the skin sensory nerves are good candidates for the replacement of the auditory nerve in addressing deaf-mutes' hearing problems. Scientific hearing experiments can be performed more safely on the skin. Compared with the artificial cochlea, multi-channel-array skin-hearing aids carry lower operational risk in use, are cheaper, and are easier to popularize. PMID:25317171
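The band-splitting step described above can be illustrated with an ordinary digital filter bank that separates a voice signal into frequency channels, one per electrode. In the Python sketch below, the band edges, filter order, and toy input signal are arbitrary illustrative choices, not the parameters of the patented device.

    # Illustrative multi-channel bandpass filter bank for a voice signal.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    fs = 16000
    t = np.arange(fs) / fs
    voice = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 1800 * t)  # toy signal

    bands = [(100, 400), (400, 1000), (1000, 2500), (2500, 6000)]  # Hz, one per channel
    channels = []
    for low, high in bands:
        sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        channels.append(sosfiltfilt(sos, voice))  # band-limited signal driving one electrode

    channels = np.array(channels)  # shape: (n_channels, n_samples)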
Mu, Yan; Huang, Yingyu; Ji, Chao; Gu, Li; Wu, Xiang
2018-05-01
The superiority of the auditory over the visual modality in sensorimotor synchronization, a fundamental ability to coordinate movements with external rhythms, has long been established, whereas recent metronome synchronization work showed that synchronization with a visual bouncing ball was no less stable than synchronization with auditory tones in adults. The present study examined synchronization to isochronous sequences composed of auditory tones, visual flashes, or a bouncing ball in 6- to 7-year-old children, 12- to 15-year-old children, and 19- to 29-year-old adults. Consistent with previous reports, the results showed that synchronization stability increased with age and synchronization was less stable for flashes than for tones and bouncing balls. As for the focus of the present study, the results revealed that synchronization of the bouncing ball was less stable than synchronization of tones for younger children, but not for teenagers and adults. The finding suggests an early-childhood predisposition toward the auditory advantage in sensorimotor synchronization. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Temporal Organization of Sound Information in Auditory Memory.
Song, Kun; Luo, Huan
2017-01-01
Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address the issue, here we designed an auditory memory transfer study, combining a previously developed unsupervised white noise memory paradigm with a reversed sound manipulation method. Specifically, we systematically measured the memory transfer from a random white noise sound to its locally temporally reversed version on various temporal scales in seven experiments. We demonstrate a U-shaped memory-transfer pattern with the minimum value around a temporal scale of 200 ms. Furthermore, neither auditory perceptual similarity nor physical similarity as a function of the manipulated temporal scale can account for the memory-transfer results. Our results suggest that sounds are not stored with all the fine spectrotemporal details but are organized and structured in discrete temporal chunks in long-term auditory memory representation.
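The local-reversal manipulation used in the transfer measurements can be sketched simply: the waveform is cut into segments of the chosen temporal scale and each segment is time-reversed in place, preserving the global order of chunks while destroying fine structure within them. The Python sketch below shows one way to do this; the sampling rate and the 200 ms scale are illustrative values, not a reconstruction of the study's exact stimuli.

    # Locally reverse a white-noise waveform at a given temporal scale.
    import numpy as np

    def local_reverse(signal, fs, scale_ms):
        chunk = max(1, int(round(fs * scale_ms / 1000.0)))
        out = signal.copy()
        for start in range(0, len(signal), chunk):
            out[start:start + chunk] = signal[start:start + chunk][::-1]
        return out

    fs = 16000
    noise = np.random.randn(fs)                      # 1 s of white noise
    reversed_200ms = local_reverse(noise, fs, 200)   # reversal at the 200 ms scale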
The many facets of auditory display
NASA Technical Reports Server (NTRS)
Blattner, Meera M.
1995-01-01
In this presentation we will examine some of the ways sound can be used in a virtual world. We make the case that many different types of audio experience are available to us. The full range of audio experiences includes music, speech, real-world sounds, auditory displays, and auditory cues or messages. The technology of recreating real-world sounds through physical modeling has advanced in the past few years, allowing better simulation of virtual worlds. Three-dimensional audio has further enriched our sensory experiences.
Kim, Soo Ji; Kwak, Eunmi E; Park, Eun Sook; Cho, Sung-Rae
2012-10-01
To investigate the effects of rhythmic auditory stimulation (RAS) on gait patterns in comparison with changes after neurodevelopmental treatment (NDT/Bobath) in adults with cerebral palsy. A repeated-measures analysis between the pretreatment and posttreatment tests and a comparison study between groups. Human gait analysis laboratory. Twenty-eight cerebral palsy patients with bilateral spasticity participated in this study. The subjects were randomly allocated to either neurodevelopmental treatment (n = 13) or rhythmic auditory stimulation (n = 15). Gait training with rhythmic auditory stimulation or neurodevelopmental treatment was performed three sessions per week for three weeks. Temporal and kinematic data were analysed before and after the intervention. Rhythmic auditory stimulation was provided using a combination of a metronome beat set to the individual's cadence and rhythmic cueing from a live keyboard, while neurodevelopmental treatment was implemented following the traditional method. Temporal data, kinematic parameters and gait deviation index as a measure of overall gait pathology were assessed. Temporal gait measures revealed that rhythmic auditory stimulation significantly increased cadence, walking velocity, stride length, and step length (P < 0.05). Kinematic data demonstrated that anterior tilt of the pelvis and hip flexion during a gait cycle was significantly ameliorated after rhythmic auditory stimulation (P < 0.05). Gait deviation index also showed modest improvement in cerebral palsy patients treated with rhythmic auditory stimulation (P < 0.05). However, neurodevelopmental treatment showed that internal and external rotations of hip joints were significantly improved, whereas rhythmic auditory stimulation showed aggravated maximal internal rotation in the transverse plane (P < 0.05). Gait training with rhythmic auditory stimulation or neurodevelopmental treatment elicited differential effects on gait patterns in adults with cerebral palsy.
Relative size of auditory pathways in symmetrically and asymmetrically eared owls.
Gutiérrez-Ibáñez, Cristián; Iwaniuk, Andrew N; Wylie, Douglas R
2011-01-01
Owls are highly efficient predators with a specialized auditory system designed to aid in the localization of prey. One of the most unique anatomical features of the owl auditory system is the evolution of vertically asymmetrical ears in some species, which improves their ability to localize the elevational component of a sound stimulus. In the asymmetrically eared barn owl, interaural time differences (ITD) are used to localize sounds in azimuth, whereas interaural level differences (ILD) are used to localize sounds in elevation. These two features are processed independently in two separate neural pathways that converge in the external nucleus of the inferior colliculus to form an auditory map of space. Here, we present a comparison of the relative volume of 11 auditory nuclei in both the ITD and the ILD pathways of 8 species of symmetrically and asymmetrically eared owls in order to investigate evolutionary changes in the auditory pathways in relation to ear asymmetry. Overall, our results indicate that asymmetrically eared owls have much larger auditory nuclei than owls with symmetrical ears. In asymmetrically eared owls we found that both the ITD and ILD pathways are equally enlarged, and other auditory nuclei, not directly involved in binaural comparisons, are also enlarged. We suggest that the hypertrophy of auditory nuclei in asymmetrically eared owls likely reflects both an improved ability to precisely locate sounds in space and an expansion of the hearing range. Additionally, our results suggest that the hypertrophy of nuclei that compute space may have preceded that of the expansion of the hearing range and evolutionary changes in the size of the auditory system occurred independently of phylogeny. Copyright © 2011 S. Karger AG, Basel.
Epidermoid cyst of the external auditory canal in children: diagnosis and management.
Abdel-Aziz, Mosaad
2011-07-01
Epidermoid cyst of the external auditory canal (EAC) is rarely encountered in the clinical practice, but when it occurs, it may cause obstruction of the meatus that necessitates surgical excision. The aims of this study were to present 9 pediatric patients with epidermoid cysts of the EAC and to evaluate the outcome of the surgical technique that has been used in excision. Surgical removal of the cyst was carried out through a simple transmeatal approach, a medially based rectangular skin flap was elevated and the cyst was completely removed. No complications or recurrence have been reported. Epidermoid cyst should be listed in the differential diagnosis of EAC masses; it appears on computed tomography as a cystic mass in the outer cartilaginous part of EAC that is usually limited to the soft tissue with no bone erosion. It can be removed easily through simple transmeatal approach with high success rate and no morbidity.
Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul
2016-01-01
Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception. PMID:27042360
Visual influences on auditory spatial learning
King, Andrew J.
2008-01-01
The visual and auditory systems frequently work together to facilitate the identification and localization of objects and events in the external world. Experience plays a critical role in establishing and maintaining congruent visual–auditory associations, so that the different sensory cues associated with targets that can be both seen and heard are synthesized appropriately. For stimulus location, visual information is normally more accurate and reliable and provides a reference for calibrating the perception of auditory space. During development, vision plays a key role in aligning neural representations of space in the brain, as revealed by the dramatic changes produced in auditory responses when visual inputs are altered, and is used throughout life to resolve short-term spatial conflicts between these modalities. However, accurate, and even supra-normal, auditory localization abilities can be achieved in the absence of vision, and the capacity of the mature brain to relearn to localize sound in the presence of substantially altered auditory spatial cues does not require visuomotor feedback. Thus, while vision is normally used to coordinate information across the senses, the neural circuits responsible for spatial hearing can be recalibrated in a vision-independent fashion. Nevertheless, early multisensory experience appears to be crucial for the emergence of an ability to match signals from different sensory modalities and therefore for the outcome of audiovisual-based rehabilitation of deaf patients in whom hearing has been restored by cochlear implantation. PMID:18986967
Impey, Danielle; Knott, Verner
2015-08-01
Membrane potentials and brain plasticity are basic modes of cerebral information processing. Both can be externally (non-invasively) modulated by weak transcranial direct current stimulation (tDCS). Polarity-dependent tDCS-induced reversible circumscribed increases and decreases in cortical excitability and functional changes have been observed following stimulation of motor and visual cortices but relatively little research has been conducted with respect to the auditory cortex. The aim of this pilot study was to examine the effects of tDCS on auditory sensory discrimination in healthy participants (N = 12) assessed with the mismatch negativity (MMN) brain event-related potential (ERP). In a randomized, double-blind, sham-controlled design, participants received anodal tDCS over the primary auditory cortex (2 mA for 20 min) in one session and 'sham' stimulation (i.e., no stimulation except initial ramp-up for 30 s) in the other session. MMN elicited by changes in auditory pitch was found to be enhanced after receiving anodal tDCS compared to 'sham' stimulation, with the effects being evidenced in individuals with relatively reduced (vs. increased) baseline amplitudes and with relatively small (vs. large) pitch deviants. Additional studies are needed to further explore relationships between tDCS-related parameters, auditory stimulus features and individual differences prior to assessing the utility of this tool for treating auditory processing deficits in psychiatric and/or neurological disorders.
[Can music therapy help patients with neurological disorders?].
Myskja, Audun
2004-12-16
Recent developments in brain research and in the field of music therapy have led to the development of music-based methods specifically aimed at relieving symptoms of Parkinson's disease and other neurologic disorders. Rhythmic auditory stimulation uses external rhythmic auditory cues from song, music or metronome to help patients improve their walking function, and has been shown to be effective both within sessions and as a result of training over time. Melodic intonation therapy and related vocal techniques can improve expressive dysphasia and aid rehabilitation of neurologic disorders, particularly Parkinson's disease, stroke and developmental disorders.
Surfer's ear: external auditory exostoses are more prevalent in cold water surfers.
Kroon, David F; Lawson, M Louise; Derkay, Craig S; Hoffmann, Karen; McCook, Joe
2002-05-01
The study goal was to demonstrate the prevalence and severity of external auditory exostoses (EAEs) in a population of surfers and to examine the relationship between these lesions and the length of time surfed, as well as the temperature of the water in which the subjects surfed. It was hypothesized that subjects who predominantly surfed in colder waters had more frequent and more severe exostoses. Two hundred two avid surfers (91% male and 9% female, median age 17 years) were included in the study. EAEs were graded based on the extent of external auditory canal patency; grades of normal (100% patency), mild (66% to 99% patency), and moderate-severe (<66% patency) were assigned. Otoscopic findings were correlated with data collected via questionnaires that detailed surfing habits. There was a 38% overall prevalence of EAEs, with 69% of lesions graded as mild and 31% graded as moderate-severe. Professional surfers (odds ratio 3.8) and those subjects who surfed predominantly in colder waters (odds ratio 5.8) were found to be at a significantly increased risk for the development of EAEs. The number of years surfed was also found to be significant, increasing one's risk for developing an exostosis by 12% per year and for developing more severe lesions by 10% per year. Individuals who had moderate-severe EAEs were significantly more likely to be willing to surf in colder waters than were those who had mild EAEs (odds ratio 4.3). EAEs are more prevalent in cold water surfers, and additional years surfing increase one's risk not only for developing an EAE but also for developing more severe lesions.
[Functional anatomy of the cochlear nerve and the central auditory system].
Simon, E; Perrot, X; Mertens, P
2009-04-01
The auditory pathways are a system of afferent fibers (through the cochlear nerve) and efferent fibers (through the vestibular nerve) that is not limited to simple information transmission but achieves a veritable integration of the sound stimulus at different levels, by analyzing its three fundamental elements: frequency (pitch), intensity, and spatial localization of the sound source. From the cochlea to the primary auditory cortex, the auditory fibers are organized anatomically in relation to the characteristic frequency of the sound signal that they transmit (tonotopy). Coding of the intensity of the sound signal is based on temporal recruitment (the number of action potentials) and spatial recruitment (the number of inner hair cells recruited near the cell whose characteristic frequency matches the stimulus). Because of binaural hearing, commissural pathways at each level of the auditory system, and integration of the phase shift and the difference in intensity between signals coming from the two ears, spatial localization of the sound source is possible. Finally, through the efferent fibers in the vestibular nerve, higher centers exercise control over the activity of the cochlea and adjust the peripheral hearing organ to external sound conditions, thus protecting the auditory system or increasing sensitivity through the attention given to the signal.
Elevated audiovisual temporal interaction in patients with migraine without aura
2014-01-01
Background: Photophobia and phonophobia are the most prominent symptoms in patients with migraine without aura. Hypersensitivity to visual stimuli can lead to greater hypersensitivity to auditory stimuli, which suggests that the interaction between visual and auditory stimuli may play an important role in the pathogenesis of migraine. However, audiovisual temporal interactions in migraine have not been well studied. Therefore, our aim was to examine auditory and visual interactions in migraine. Methods: In this study, visual, auditory, and audiovisual stimuli with different temporal intervals between the visual and auditory stimuli were randomly presented to the left or right hemispace. During this time, the participants were asked to respond promptly to target stimuli. We used cumulative distribution functions to analyze the response times as a measure of audiovisual integration. Results: Our results showed that audiovisual integration was significantly elevated in the migraineurs compared with the normal controls (p < 0.05); however, audiovisual suppression was weaker in the migraineurs compared with the normal controls (p < 0.05). Conclusions: Our findings further objectively support the notion that migraineurs without aura are hypersensitive to external visual and auditory stimuli. Our study offers a new quantitative and objective method to evaluate hypersensitivity to audio-visual stimuli in patients with migraine. PMID:24961903
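One common way to use response-time cumulative distribution functions as a measure of audiovisual integration is to compare the audiovisual CDF against the race-model bound formed from the two unisensory CDFs (Miller's inequality); points where the audiovisual CDF exceeds the bound indicate integration beyond what independent channels predict. The Python sketch below shows that comparison on synthetic reaction times; it illustrates the general method and is not the analysis code or data of this study.

    # Empirical CDFs and the race-model bound for redundant-target reaction times.
    import numpy as np

    def ecdf(rts, grid):
        rts = np.sort(np.asarray(rts))
        return np.searchsorted(rts, grid, side="right") / len(rts)

    rt_a = np.random.normal(320, 40, 200)    # auditory-only RTs (ms), synthetic
    rt_v = np.random.normal(350, 45, 200)    # visual-only RTs, synthetic
    rt_av = np.random.normal(300, 35, 200)   # audiovisual RTs, synthetic

    grid = np.arange(150, 600, 5)
    bound = np.minimum(ecdf(rt_a, grid) + ecdf(rt_v, grid), 1.0)
    violation = ecdf(rt_av, grid) - bound    # positive values suggest integration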
A role for descending auditory cortical projections in songbird vocal learning
Mandelblat-Cerf, Yael; Las, Liora; Denisenko, Natalia; Fee, Michale S
2014-01-01
Many learned motor behaviors are acquired by comparing ongoing behavior with an internal representation of correct performance, rather than using an explicit external reward. For example, juvenile songbirds learn to sing by comparing their song with the memory of a tutor song. At present, the brain regions subserving song evaluation are not known. In this study, we report several findings suggesting that song evaluation involves an avian 'cortical' area previously shown to project to the dopaminergic midbrain and other downstream targets. We find that this ventral portion of the intermediate arcopallium (AIV) receives inputs from auditory cortical areas, and that lesions of AIV result in significant deficits in vocal learning. Additionally, AIV neurons exhibit fast responses to disruptive auditory feedback presented during singing, but not during nonsinging periods. Our findings suggest that auditory cortical areas may guide learning by transmitting song evaluation signals to the dopaminergic midbrain and/or other subcortical targets. DOI: http://dx.doi.org/10.7554/eLife.02152.001 PMID:24935934
Unusual extension of the first branchial cleft anomaly.
Ada, Mehmet; Korkut, Nazim; Güvenç, M Güven; Acioğlu, Engin; Yilmaz, Süleyman; Cevikbaş, Uğur
2006-03-01
The first branchial cleft is the only branchial structure that persists, as the external ear canal, while all other clefts are resorbed. Incomplete obliteration and the degree of closure cause the varied types of first branchial cleft anomalies, which have been classified based on anatomical and histological features. We present an unusual type of first branchial cleft anomaly involving the external auditory canal, the middle ear and the nasopharynx through the eustachian tube.
Garrison, Jane R; Bond, Rebecca; Gibbard, Emma; Johnson, Marcia K; Simons, Jon S
2017-02-01
Reality monitoring refers to processes involved in distinguishing internally generated information from information presented in the external world, an activity thought to be based, in part, on assessment of activated features such as the amount and type of cognitive operations and perceptual content. Impairment in reality monitoring has been implicated in symptoms of mental illness and associated more widely with the occurrence of anomalous perceptions as well as false memories and beliefs. In the present experiment, the cognitive mechanisms of reality monitoring were probed in healthy individuals using a task that investigated the effects of stimulus modality (auditory vs visual) and the type of action undertaken during encoding (thought vs speech) on subsequent source memory. There was reduced source accuracy for auditory stimuli compared with visual, and when encoding was accompanied by thought as opposed to speech, and a greater rate of externalization than internalization errors that was stable across factors. Interpreted within the source monitoring framework (Johnson, Hashtroudi, & Lindsay, 1993), the results are consistent with the greater prevalence of clinically observed auditory than visual reality discrimination failures. The significance of these findings is discussed in light of theories of hallucinations, delusions and confabulation. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
Effect of voluntary attention on auditory processing during REM sleep.
Takahara, Madoka; Nittono, Hiroshi; Hori, Tadao
2006-07-01
The study investigates whether there is an effect of voluntary attention to external auditory stimuli during rapid eye movement (REM) sleep in humans by measuring event-related potentials (ERPs). Using a 2-tone auditory-discrimination task, a standard 1000-Hz tone and a deviant 2000-Hz tone were presented to participants when awake and during sleep. In the ATTENTIVE condition, participants were requested to detect the deviant stimuli during their sleep whenever possible. In the PASSIVE sleep condition, participants were only exposed to the tones. ERPs were measured during REM sleep and compared between the 2 conditions. All experiments were conducted at the sleep laboratory of Hiroshima University with twenty healthy university student volunteers. In the tonic period of REM sleep (the period without rapid eye movements), P200 and P400 were elicited by deviant stimuli, with scalp distributions maximal at central and occipital sites, respectively. The P400 in REM sleep showed larger amplitudes in the ATTENTIVE condition, whereas the P200 amplitude did not differ between the 2 conditions. No effects on ERPs due to attention were observed during stage 2 sleep. The instruction to pay attention to external stimuli during REM sleep influenced the late positive potentials. Thus, electrophysiologic evidence of voluntary attention during REM sleep has been demonstrated.
Rieger, Kathryn; Rarra, Marie-Helene; Diaz Hernandez, Laura; Hubl, Daniela; Koenig, Thomas
2018-03-01
Auditory verbal hallucinations depend on a broad neurobiological network ranging from the auditory system to language as well as memory-related processes. As part of this, the auditory N100 event-related potential (ERP) component is attenuated in patients with schizophrenia, with stronger attenuation occurring during auditory verbal hallucinations. Changes in the N100 component presumably reflect disturbed responsiveness of the auditory system toward external stimuli in schizophrenia. With this premise, we investigated the therapeutic utility of neurofeedback training to modulate the auditory-evoked N100 component in patients with schizophrenia and associated auditory verbal hallucinations. Ten patients completed electroencephalography neurofeedback training for modulation of N100 (treatment condition) or another unrelated component, P200 (control condition). On a behavioral level, only the control group showed a tendency for symptom improvement in the Positive and Negative Syndrome Scale total score in a pre-/post-comparison (t(4) = 2.71, P = .054); however, no significant differences were found in specific hallucination-related symptoms (t(7) = -0.53, P = .62). There was no significant overall effect of neurofeedback training on ERP components in our paradigm; however, we were able to identify different learning patterns, and found a correlation between learning and improvement in auditory verbal hallucination symptoms across training sessions (r = 0.664, n = 9, P = .05). This effect results, with cautious interpretation due to the small sample size, primarily from the treatment group (r = 0.97, n = 4, P = .03). In particular, a within-session learning parameter showed utility for predicting symptom improvement with neurofeedback training. In conclusion, patients with schizophrenia and associated auditory verbal hallucinations who exhibit a learning pattern more characterized by within-session aptitude may benefit from electroencephalography neurofeedback. Furthermore, independent of the training group, a significant spatial pre-post difference was found in the event-related component P200 (P = .04).
Pondé, Pedro H; de Sena, Eduardo P; Camprodon, Joan A; de Araújo, Arão Nogueira; Neto, Mário F; DiBiasi, Melany; Baptista, Abrahão Fontes; Moura, Lidia MVR; Cosmo, Camila
2017-01-01
Introduction: Auditory hallucinations are defined as experiences of auditory perceptions in the absence of a provoking external stimulus. They are the most prevalent symptoms of schizophrenia, with a high capacity for chronicity and refractoriness during the course of the disease. Transcranial direct current stimulation (tDCS) – a safe, portable, and inexpensive neuromodulation technique – has emerged as a promising treatment for the management of auditory hallucinations. Objective: The aim of this study is to analyze the level of evidence in the literature available for the use of tDCS as a treatment for auditory hallucinations in schizophrenia. Methods: A systematic review was performed, searching the main electronic databases, including the Cochrane Library and MEDLINE/PubMed. The searches combined descriptors from the Medical Subject Headings (MeSH) and the Health Sciences Descriptors. The PRISMA protocol was used as a guide, and the terms used were the clinical outcomes (“Schizophrenia” OR “Auditory Hallucinations” OR “Auditory Verbal Hallucinations” OR “Psychosis”) searched together (“AND”) with interventions (“transcranial Direct Current Stimulation” OR “tDCS” OR “Brain Polarization”). Results: Six randomized controlled trials that evaluated the effects of tDCS on the severity of auditory hallucinations in schizophrenic patients were selected. Analysis of the clinical results of these studies revealed inconsistent findings regarding the use of tDCS to reduce the severity of auditory hallucinations in schizophrenia. Only three studies revealed a therapeutic benefit, manifested by reductions in severity and frequency of auditory verbal hallucinations in schizophrenic patients. Conclusion: Although tDCS has shown promising results in reducing the severity of auditory hallucinations in schizophrenic patients, this technique cannot yet be recommended as a therapeutic alternative due to the lack of large-sample studies confirming the positive effects that have been described. PMID:28203084
Bouncing Ball with a Uniformly Varying Velocity in a Metronome Synchronization Task.
Huang, Yingyu; Gu, Li; Yang, Junkai; Wu, Xiang
2017-09-21
Sensorimotor synchronization (SMS), a fundamental human ability to coordinate movements with external rhythms, has long been thought to be modality specific. In the canonical metronome synchronization task that requires tapping a finger along with an isochronous sequence, a well-established finding is that synchronization is much more stable to an auditory sequence consisting of auditory tones than to a visual sequence consisting of visual flashes. However, recent studies have shown that periodically moving visual stimuli can substantially improve synchronization compared with visual flashes. In particular, synchronization of a visual bouncing ball that has a uniformly varying velocity was found to be no less stable than synchronization of auditory tones. Here, the current protocol describes the application of the bouncing ball with a uniformly varying velocity in a metronome synchronization task. The usage of the bouncing ball in sequences with different inter-onset intervals (IOI) is included. The representative results illustrate synchronization performance of the bouncing ball, as compared with the performances of auditory tones and visual flashes. Given its comparable synchronization performance to that of auditory tones, the bouncing ball is of particular importance for addressing the current research topic of whether modality-specific mechanisms underlie SMS.
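As a purely illustrative sketch (not the authors' protocol code), the following snippet shows how synchronization stability in such a tapping task is commonly quantified: each tap is paired with the nearest onset of an isochronous sequence with a given IOI, and the standard deviation of the resulting tap-stimulus asynchronies serves as the stability measure. The tap times and function name here are hypothetical.

```python
# Illustrative only: stability of one trial in a metronome synchronization task.
import numpy as np

def synchronization_stability(tap_times_ms, ioi_ms, n_onsets):
    """Return mean asynchrony and asynchrony SD (both in ms)."""
    onsets = np.arange(n_onsets) * ioi_ms            # isochronous stimulus sequence
    taps = np.asarray(tap_times_ms, dtype=float)
    # pair each tap with its nearest stimulus onset
    nearest = onsets[np.argmin(np.abs(taps[:, None] - onsets[None, :]), axis=1)]
    asynchronies = taps - nearest                    # negative = tap leads the beat
    return asynchronies.mean(), asynchronies.std(ddof=1)

# Example with hypothetical taps around a 600-ms IOI sequence
mean_async, sd_async = synchronization_stability([-20, 590, 1180, 1815, 2390],
                                                 ioi_ms=600, n_onsets=5)
print(f"mean asynchrony {mean_async:.1f} ms, SD {sd_async:.1f} ms")
```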
Demodex cati Hirst 1919: a redescription.
Desch, C; Nutting, W B
1979-07-01
All life stages of Demodex cati are described and compared with D. canis. Presence of D. cati is reported for the first time from the external auditory meatus. In the two cases examined mites occurred in large numbers with little pathogenic effect.
Options for Auditory Training for Adults with Hearing Loss.
Olson, Anne D
2015-11-01
Hearing aid devices alone do not adequately compensate for sensory losses despite significant advances in digital technology. Overall use rates of amplification among adults with hearing loss remain low, and overall satisfaction and performance in noise can be improved. Although improved technology may partially address some listening problems, auditory training may be another alternative to improve speech recognition in noise and satisfaction with devices. The literature underlying auditory plasticity following placement of sensory devices suggests that additional auditory training may be needed for reorganization of the brain to occur. Furthermore, training may be required to obtain optimal performance from devices. Several auditory training programs that are readily accessible for adults with hearing loss, hearing aids, or cochlear implants are described. Programs that can be accessed via Web-based formats and smartphone technology are reviewed. A summary table is provided for easy access to programs with descriptions of features that allow hearing health care providers to assist clients in selecting the most appropriate auditory training program to fit their needs.
A Brain System for Auditory Working Memory.
Kumar, Sukhbinder; Joseph, Sabine; Gander, Phillip E; Barascud, Nicolas; Halpern, Andrea R; Griffiths, Timothy D
2016-04-20
The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. Copyright © 2016 Kumar et al.
NASA Technical Reports Server (NTRS)
Begault, Durand R.
1993-01-01
The advantage of a head-up auditory display was evaluated in a preliminary experiment designed to measure and compare the acquisition time for capturing visual targets under two auditory conditions: standard one-earpiece presentation and two-earpiece three-dimensional (3D) audio presentation. Twelve commercial airline crews were tested under full mission simulation conditions at the NASA-Ames Man-Vehicle Systems Research Facility advanced concepts flight simulator. Scenario software generated visual targets corresponding to aircraft that would activate a traffic collision avoidance system (TCAS) aural advisory; the spatial auditory position was linked to the visual position with 3D audio presentation. Results showed that crew members using a 3D auditory display acquired targets approximately 2.2 s faster than did crew members who used one-earpiece headsets, but there was no significant difference in the number of targets acquired.
Prospects for Replacement of Auditory Neurons by Stem Cells
Shi, Fuxin; Edge, Albert S.B.
2013-01-01
Sensorineural hearing loss is caused by degeneration of hair cells or auditory neurons. Spiral ganglion cells, the primary afferent neurons of the auditory system, are patterned during development and send out projections to hair cells and to the brainstem under the control of largely unknown guidance molecules. The neurons do not regenerate after loss and even damage to their projections tends to be permanent. The genesis of spiral ganglion neurons and their synapses forms a basis for regenerative approaches. In this review we critically present the current experimental findings on auditory neuron replacement. We discuss the latest advances with a focus on (a) exogenous stem cell transplantation into the cochlea for neural replacement, (b) expression of local guidance signals in the cochlea after loss of auditory neurons, (c) the possibility of neural replacement from an endogenous cell source, and (d) functional changes from cell engraftment. PMID:23370457
On the accuracy of adults' auditory perception of normophonic and dysphonic children's personality.
Verduyckt, Ingrid; Remacle, Marc; Morsomme, Dominique
2015-10-01
We investigated the accuracy of auditory inferences of personality of Belgian children with vocal fold nodules (VFN). External judges (n = 57) were asked to infer the personality of normophonic (NP) children and children with VFN (n = 10) on the basis of vowels and sentences. The auditory inferred profiles were compared to the actual personality of NP and VFN children. Positive and partly accurate inferences of VFN children's personality were made on the basis of connected speech, while sustained vowels yielded negative and inaccurate inferences of personality traits of children with VFN. Dysphonic voice quality, as defined by the overall severity of vocal abnormality, conveyed inaccurate and low degrees of extraversion. This effect was counterbalanced in connected speech by faster speaking rate that accurately conveyed higher degrees of extraversion, a characteristic trait of VFN children's actual personality.
Randi, Federico; McDonald, Michael; Duffy, Pat; Kelly, Alan K; Lonergan, Patrick
2018-04-01
The aim of this study was to evaluate the relationship of body temperature fluctuations, as measured by external auditory canal temperature, to the onset of estrus and ovulation. Beef heifers (n = 44, mean age 23.5 ± 0.4 months, mean weight 603.3 ± 5.7 kg) were fitted with a Boviminder® ear tag 2 weeks before the start of the estrous synchronization protocol to allow acclimatization. The device recorded the temperature, accurate to 0.01 °F, every 10 min and transmitted the data via a base station over the internet where it could be accessed remotely. The estrous cycles of all heifers were synchronized using an 8-day progesterone-based synchronization program; on day 0 a PRID was inserted in conjunction with an injection of GnRH, and PGF2α was administered the day before PRID removal. Heifers were checked for signs of estrus at 4-h intervals (i.e., 6 times per day) commencing 24 h after PRID withdrawal. Beginning 12 h after the onset of estrus, the ovaries were ultrasound scanned at 4-h intervals to determine the time of ovulation. Body temperature was recorded every 10 min and averaged to hourly means for the following 4 periods relative to the detected estrus onset (= Time 0): Period I: -48 h to -7 h, Period II: -6 h to +6 h, Period III: +7 h to ovulation, and Period IV: ovulation to 48 h post ovulation. Data were analyzed using a Mixed Model ANOVA in SAS in a completely randomized design to observe effects of induced estrus on external auditory canal temperature. The mean (±SD) interval from removal of the PRID to onset of estrus activity was 46.6 ± 14.7 h. The mean duration of estrus was 16.0 ± 5.67 h and the mean interval from estrus onset to ovulation was 27.9 ± 7.68 h. Highest temperatures (100.95 ± 0.03 °F) were observed in Period II around estrus onset, whereas lowest temperatures were observed in the 48 h preceding estrus onset (100.28 ± 0.03 °F; Period I) and around ovulation (100.30 ± 0.2 °F; Period III) (P < .001). Indeed, around the time of estrus onset (Period II) mean temperature was 0.66 °F (P < .001) higher compared with Period I. Diurnal temperature rhythms were similar (P > .10) before (Period I) and after estrus (Period III). In conclusion, a significant elevation in external auditory canal temperature was associated with estrus in beef heifers and was followed by a decline in temperature leading up to ovulation approximately 28 h later. Future studies are required to assess pregnancy rates following AI based on changes in external auditory canal temperature. Copyright © 2018 Elsevier Inc. All rights reserved.
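For illustration only, the sketch below shows one way the described preprocessing could be organized: 10-min temperature readings are averaged to hourly means and labelled with the four periods defined relative to estrus onset. The column names, helper name, and timestamps are assumptions; the study's own analysis was a mixed-model ANOVA in SAS, not this code.

```python
# Illustrative sketch (assumed column names; not the study's SAS analysis).
import pandas as pd

def hourly_periods(df, estrus_onset, ovulation):
    """df: DataFrame with a DatetimeIndex and a 'temp_f' column of 10-min readings."""
    hourly = df["temp_f"].resample("1H").mean().to_frame("temp_f")
    hours_from_onset = (hourly.index - estrus_onset) / pd.Timedelta(hours=1)

    def label(ts, h):
        if -48 <= h <= -7:
            return "I"      # -48 h to -7 h before estrus onset
        if -6 <= h <= 6:
            return "II"     # around estrus onset
        if h >= 7 and ts < ovulation:
            return "III"    # +7 h to ovulation
        if ovulation <= ts <= ovulation + pd.Timedelta(hours=48):
            return "IV"     # ovulation to 48 h post ovulation
        return None

    hourly["period"] = [label(ts, h) for ts, h in zip(hourly.index, hours_from_onset)]
    return hourly.dropna(subset=["period"])

# Example usage with hypothetical timestamps:
# onset = pd.Timestamp("2017-05-01 08:00"); ovul = onset + pd.Timedelta(hours=28)
# summary = hourly_periods(logs, onset, ovul).groupby("period")["temp_f"].agg(["mean", "std"])
```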
Correlations of External Landmarks With Internal Structures of the Temporal Bone.
Piromchai, Patorn; Wijewickrema, Sudanthi; Smeds, Henrik; Kennedy, Gregor; O'Leary, Stephen
2015-09-01
We hypothesized that the internal anatomy of a temporal bone could be inferred from external landmarks. Mastoid surgery is an important skill that ENT surgeons need to acquire. Surgeons commonly use CT scans as a guide to understanding anatomical variations before surgery. However, in cases where CT scans are not available, or in the temporal bone laboratory where residents are usually not provided with CT scans, it would be beneficial if the internal anatomy of a temporal bone could be inferred from external landmarks. We explored correlations between internal anatomical variations and metrics established to quantify the position of external landmarks that are commonly exposed in the operating room, or the temporal bone laboratory, before commencement of drilling. Mathematical models were developed to predict internal anatomy based on external structures. From an operating room view, the distances between the following external landmarks were observed to have statistically significant correlations with the internal anatomy of a temporal bone: temporal line, external auditory canal, mastoid tip, occipitomastoid suture, and Henle's spine. These structures can be used to infer a low lying dura mater (p = 0.002), an anteriorly located sigmoid sinus (p = 0.006), and a more lateral course of the facial nerve (p < 0.001). In the temporal bone laboratory view, the mastoid tegmen and sigmoid sinus were also regarded as external landmarks. The distances between these two landmarks and the operating view external structures were able to further infer the laterality of the facial nerve (p < 0.001) and a sclerotic mastoid (p < 0.001). Two nonlinear models were developed that predicted the distances between the following internal structures with a high level of accuracy: the distance from the sigmoid sinus to the posterior external auditory canal (p < 0.001) and the diameter of the round window niche (p < 0.001). The prospect of encountering some of the more technically challenging anatomical variants in temporal bone dissection can be inferred from the distance between external landmarks found on the temporal bone. These relationships could be used as a guideline to predict challenges during drilling and to choose appropriate temporal bones for dissection.
Reality Monitoring and Feedback Control of Speech Production Are Related Through Self-Agency.
Subramaniam, Karuna; Kothare, Hardik; Mizuiri, Danielle; Nagarajan, Srikantan S; Houde, John F
2018-01-01
Self-agency is the experience of being the agent of one's own thoughts and motor actions. The intact experience of self-agency is necessary for successful interactions with the outside world (i.e., reality monitoring) and for responding to sensory feedback of our motor actions (e.g., speech feedback control). Reality monitoring is the ability to distinguish internally self-generated information from outside reality (externally-derived information). In the present study, we examined the relationship of self-agency between lower-level speech feedback monitoring (i.e., monitoring what we hear ourselves say) and a higher-level cognitive reality monitoring task. In particular, we examined whether speech feedback monitoring and reality monitoring were driven by the capacity to experience self-agency: the ability to make reliable predictions about the outcomes of self-generated actions. During the reality monitoring task, subjects made judgments as to whether information was previously self-generated (self-agency judgments) or externally derived (external-agency judgments). During speech feedback monitoring, we assessed self-agency by altering environmental auditory feedback so that subjects listened to a perturbed version of their own speech. When subjects heard minimal perturbations in their auditory feedback while speaking, they made corrective responses, indicating that they judged the perturbations as errors in their speech output. We found that self-agency judgments in the reality-monitoring task were higher in people who had smaller corrective responses (p = 0.05) and smaller inter-trial variability (p = 0.03) during minimal pitch perturbations of their auditory feedback. These results provide support for a unitary process for the experience of self-agency governing low-level speech control and higher-level reality monitoring.
Speech comprehension training and auditory and cognitive processing in older adults.
Pichora-Fuller, M Kathleen; Levitt, Harry
2012-12-01
To provide a brief history of speech comprehension training systems and an overview of research on auditory and cognitive aging as background to recommendations for future directions for rehabilitation. Two distinct domains were reviewed: one concerning technological and the other concerning psychological aspects of training. Historical trends and advances in these 2 domains were interrelated to highlight converging trends and directions for future practice. Over the last century, technological advances have influenced both the design of hearing aids and training systems. Initially, training focused on children and those with severe loss for whom amplification was insufficient. Now the focus has shifted to older adults with relatively little loss but difficulties listening in noise. Evidence of brain plasticity from auditory and cognitive neuroscience provides new insights into how to facilitate perceptual (re-)learning by older adults. There is a new imperative to complement training to increase bottom-up processing of the signal with more ecologically valid training to boost top-down information processing based on knowledge of language and the world. Advances in digital technologies enable the development of increasingly sophisticated training systems incorporating complex meaningful materials such as music, audiovisual interactive displays, and conversation.
Two-stage removal of an impacted foreign body with an epoxied anchor.
Isaacson, Glenn
2003-09-01
A stone impacted in a child's external auditory canal had defied all conventional means of removal. It was extracted successfully after attachment of a specially formed metal anchor with epoxy glue. The technique of and rationale for this approach are discussed.
Hwang, Euna; Kim, Young Soo; Chung, Seum
2014-06-01
Before visiting a plastic surgeon, some microtia patients may undergo canaloplasty for hearing improvement. In such cases, scarred tissues and the reconstructed external auditory canal in the postauricular area may cause a significant limitation in using the posterior auricular skin flap for ear reconstruction. In this article, we present a new method for auricular reconstruction in microtia patients with previous canaloplasty. By dividing a postauricular skin flap into an upper scalp extended skin flap and a lower mastoid extended skin flap at the level of a reconstructed external auditory canal, the entire anterior surface of the auricular framework can be covered with the two extended postauricular skin flaps. The reconstructed ear shows good color match and texture, with the entire anterior surface of the reconstructed ear being resurfaced with the skin flaps. Clinical question/level of evidence: therapeutic, level IV. Copyright © 2014 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Stimulus induced bursts in severe postanoxic encephalopathy.
Tjepkema-Cloostermans, Marleen C; Wijers, Elisabeth T; van Putten, Michel J A M
2016-11-01
To report on a distinct effect of auditory and sensory stimuli on the EEG in comatose patients with severe postanoxic encephalopathy. In two comatose patients admitted to the Intensive Care Unit (ICU) with severe postanoxic encephalopathy and burst-suppression EEG, we studied the effect of external stimuli (sound and touch) on the occurrence of bursts. In patient A bursts could be induced by either auditory or sensory stimuli. In patient B bursts could only be induced by touching different facial regions (forehead, nose and chin). When stimuli were presented with relatively long intervals, bursts persistently followed the stimuli, while stimuli with short intervals (<1 s) did not induce bursts. In both patients bursts were not accompanied by myoclonia. Both patients died. Bursts in patients with a severe postanoxic encephalopathy can be induced by external stimuli, resulting in stimulus-dependent burst-suppression. Stimulus-induced bursts should not be interpreted as EEG reactivity indicating a favourable prognosis. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Choi, HeeSun; Geden, Michael; Feng, Jing
2017-01-01
Mind wandering has been considered a mental process that is either independent from the concurrent task or regulated like a secondary task. These accounts predict that the form of mind wandering (i.e., images or words) should be either unaffected by or different from the modality form (i.e., visual or auditory) of the concurrent task. Findings from this study challenge these accounts. We measured the rate and the form of mind wandering in three task conditions: fixation, visual 2-back, and auditory 2-back. Contrary to the general expectation, we found that mind wandering was more likely to occur in the same form as the task. This result can be interpreted in light of recent findings on overlapping brain activations during internally- and externally-oriented processes. Our result highlights the importance of considering the unique interplay between internal and external mental processes and of measuring mind wandering as a multifaceted rather than a unitary construct.
Anthropometry of external auditory canal by non-contactable measurement.
Yu, Jen-Fang; Lee, Kun-Che; Wang, Ren-Hung; Chen, Yen-Sheng; Fan, Chun-Chieh; Peng, Ying-Chin; Tu, Tsung-Hsien; Chen, Ching-I; Lin, Kuei-Yi
2015-09-01
Human ear canals cannot be measured directly with existing general measurement tools. Furthermore, general non-contact optical methods can only conduct simple peripheral measurements of the auricle and cannot obtain the internal ear canal shape-related measurement data. Therefore, this study uses computed tomography (CT) to measure the geometric shape of the ear canal non-invasively and to complete the anthropometry of the external auditory canal. The results of the study show that the average height and width of ear canal openings, and the average depth of the first bend, are generally greater for men than for women. In addition, the difference between the height and width of the ear canal opening is about 40% (p < 0.05). Hence, the circular cross-section shape of earplugs should be replaced with an elliptical cross-section shape during manufacturing for better fitting. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
To, Wing Ting; Ost, Jan; Hart, John; De Ridder, Dirk; Vanneste, Sven
2017-01-01
Tinnitus is the perception of a sound in the absence of a corresponding external sound source. Research has suggested that functional abnormalities in tinnitus patients involve auditory as well as non-auditory brain areas. Transcranial electrical stimulation (tES), such as transcranial direct current stimulation (tDCS) to the dorsolateral prefrontal cortex and transcranial random noise stimulation (tRNS) to the auditory cortex, has demonstrated modulation of brain activity to transiently suppress tinnitus symptoms. Targeting two core regions of the tinnitus network by tES might establish a promising strategy to enhance treatment effects. This proof-of-concept study aims to investigate the effect of a multisite tES treatment protocol on tinnitus intensity and distress. A total of 40 tinnitus patients were enrolled in this study and received either bifrontal tDCS or the multisite treatment of bifrontal tDCS before bilateral auditory cortex tRNS. Both groups were treated in eight sessions (twice a week for 4 weeks). Our results show that the multisite treatment protocol resulted in more pronounced effects when compared with the bifrontal tDCS protocol or the waiting list group, suggesting an added value of auditory cortex tRNS to the bifrontal tDCS protocol for tinnitus patients. These findings support the involvement of the auditory as well as non-auditory brain areas in the pathophysiology of tinnitus and support the idea that network stimulation can be effective in the treatment of neurological disorders. This multisite tES treatment protocol proved to be safe and feasible for clinical routine in tinnitus patients.
The Schultz MIDI Benchmarking Toolbox for MIDI interfaces, percussion pads, and sound cards.
Schultz, Benjamin G
2018-04-17
The Musical Instrument Digital Interface (MIDI) was readily adopted for auditory sensorimotor synchronization experiments. These experiments typically use MIDI percussion pads to collect responses, a MIDI-USB converter (or MIDI-PCI interface) to record responses on a PC and manipulate feedback, and an external MIDI sound module to generate auditory feedback. Previous studies have suggested that auditory feedback latencies can be introduced by these devices. The Schultz MIDI Benchmarking Toolbox (SMIDIBT) is an open-source, Arduino-based package designed to measure the point-to-point latencies incurred by several devices used in the generation of response-triggered auditory feedback. Experiment 1 showed that MIDI messages are sent and received within 1 ms (on average) in the absence of any external MIDI device. Latencies decreased when the baud rate increased above the MIDI protocol default (31,250 bps). Experiment 2 benchmarked the latencies introduced by different MIDI-USB and MIDI-PCI interfaces. MIDI-PCI was superior to MIDI-USB, primarily because MIDI-USB is subject to USB polling. Experiment 3 tested three MIDI percussion pads. Both the audio and MIDI message latencies were significantly greater than 1 ms for all devices, and there were significant differences between percussion pads and instrument patches. Experiment 4 benchmarked four MIDI sound modules. Audio latencies were significantly greater than 1 ms, and there were significant differences between sound modules and instrument patches. These experiments suggest that millisecond accuracy might not be achievable with MIDI devices. The SMIDIBT can be used to benchmark a range of MIDI devices, thus allowing researchers to make informed decisions when choosing testing materials and to arrive at an acceptable latency at their discretion.
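A simple back-of-the-envelope calculation, not part of the SMIDIBT itself, shows why message times of roughly a millisecond are expected at the default MIDI rate and why higher baud rates shrink them: MIDI serial framing costs 10 bits per byte (1 start, 8 data, 1 stop), so a 3-byte note-on message at 31,250 bps takes about 0.96 ms, and proportionally less as the baud rate increases.

```python
# Illustrative arithmetic only: serial transmission time of a MIDI message.
def midi_transmission_ms(n_bytes: int, baud: int = 31_250, bits_per_byte: int = 10) -> float:
    """Time in ms to push n_bytes over a serial line at the given baud rate."""
    return n_bytes * bits_per_byte / baud * 1000.0

for baud in (31_250, 115_200, 1_000_000):
    print(f"{baud:>9} bps: 3-byte message = {midi_transmission_ms(3, baud):.3f} ms")
```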
Evolutionary conservation and neuronal mechanisms of auditory perceptual restoration.
Petkov, Christopher I; Sutter, Mitchell L
2011-01-01
Auditory perceptual 'restoration' occurs when the auditory system restores an occluded or masked sound of interest. Behavioral work on auditory restoration in humans began over 50 years ago using it to model a noisy environmental scene with competing sounds. It has become clear that not only humans experience auditory restoration: restoration has been broadly conserved in many species. Behavioral studies in humans and animals provide a necessary foundation to link the insights being obtained from human EEG and fMRI to those from animal neurophysiology. The aggregate of data resulting from multiple approaches across species has begun to clarify the neuronal bases of auditory restoration. Different types of neural responses supporting restoration have been found, supportive of multiple mechanisms working within a species. Yet a general principle has emerged that responses correlated with restoration mimic the response that would have been given to the uninterrupted sound of interest. Using the same technology to study different species will help us to better harness animal models of 'auditory scene analysis' to clarify the conserved neural mechanisms shaping the perceptual organization of sound and to advance strategies to improve hearing in natural environmental settings. © 2010 Elsevier B.V. All rights reserved.
Heine, Lizette; Castro, Maïté; Martial, Charlotte; Tillmann, Barbara; Laureys, Steven; Perrin, Fabien
2015-01-01
Preferred music is a highly emotional and salient stimulus, which has previously been shown to increase the probability of auditory cognitive event-related responses in patients with disorders of consciousness (DOC). To further investigate whether and how music modifies the functional connectivity of the brain in DOC, five patients were assessed with both a classical functional connectivity scan (control condition), and a scan while they were exposed to their preferred music (music condition). Seed-based functional connectivity (left or right primary auditory cortex), and mean network connectivity of three networks linked to conscious sound perception were assessed. The auditory network showed stronger functional connectivity with the left precentral gyrus and the left dorsolateral prefrontal cortex during music as compared to the control condition. Furthermore, functional connectivity of the external network was enhanced during the music condition in the temporo-parietal junction. Although caution should be taken due to the small sample size, these results suggest that preferred music exposure might have effects on patients' auditory networks (implicated in rhythm and music perception) and on cerebral regions linked to autobiographical memory. PMID:26617542
Influence of aging on human sound localization
Dobreva, Marina S.; O'Neill, William E.
2011-01-01
Errors in sound localization, associated with age-related changes in peripheral and central auditory function, can pose threats to self and others in a commonly encountered environment such as a busy traffic intersection. This study aimed to quantify the accuracy and precision (repeatability) of free-field human sound localization as a function of advancing age. Head-fixed young, middle-aged, and elderly listeners localized band-passed targets using visually guided manual laser pointing in a darkened room. Targets were presented in the frontal field by a robotically controlled loudspeaker assembly hidden behind a screen. Broadband targets (0.1–20 kHz) activated all auditory spatial channels, whereas low-pass and high-pass targets selectively isolated interaural time and intensity difference cues (ITDs and IIDs) for azimuth and high-frequency spectral cues for elevation. In addition, to assess the upper frequency limit of ITD utilization across age groups more thoroughly, narrowband targets were presented at 250-Hz intervals from 250 Hz up to ∼2 kHz. Young subjects generally showed horizontal overestimation (overshoot) and vertical underestimation (undershoot) of auditory target location, and this effect varied with frequency band. Accuracy and/or precision worsened in older individuals for broadband, high-pass, and low-pass targets, reflective of peripheral but also central auditory aging. In addition, compared with young adults, middle-aged, and elderly listeners showed pronounced horizontal localization deficiencies (imprecision) for narrowband targets within 1,250–1,575 Hz, congruent with age-related central decline in auditory temporal processing. Findings underscore the distinct neural processing of the auditory spatial cues in sound localization and their selective deterioration with advancing age. PMID:21368004
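As a hedged illustration of the azimuthal ITD cue discussed above (a textbook spherical-head approximation, not the authors' model), the sketch below computes ITD as a function of azimuth; the maximum ITD of roughly 0.65 ms implies that interaural phase becomes ambiguous near 1.5 kHz, which is broadly consistent with the 1,250–1,575 Hz band singled out in the study. The head-radius and speed-of-sound values are assumed defaults.

```python
# Woodworth spherical-head approximation: ITD ~= (a / c) * (theta + sin(theta)).
import math

def woodworth_itd_us(azimuth_deg: float, head_radius_m: float = 0.0875,
                     speed_of_sound_m_s: float = 343.0) -> float:
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound_m_s) * (theta + math.sin(theta)) * 1e6

max_itd_us = woodworth_itd_us(90.0)   # ~655 us for a fully lateral source
ambiguity_hz = 1e6 / max_itd_us       # frequency whose period equals the maximum ITD
print(f"max ITD ~ {max_itd_us:.0f} us; interaural phase becomes ambiguous near {ambiguity_hz:.0f} Hz")
```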
Auditory neuroimaging with fMRI and PET.
Talavage, Thomas M; Gonzalez-Castillo, Javier; Scott, Sophie K
2014-01-01
For much of the past 30 years, investigations of auditory perception and language have been enhanced or even driven by the use of functional neuroimaging techniques that specialize in localization of central responses. Beginning with investigations using positron emission tomography (PET) and gradually shifting primarily to usage of functional magnetic resonance imaging (fMRI), auditory neuroimaging has greatly advanced our understanding of the organization and response properties of brain regions critical to the perception of and communication with the acoustic world in which we live. As the complexity of the questions being addressed has increased, the techniques, experiments and analyses applied have also become more nuanced and specialized. A brief review of the history of these investigations sets the stage for an overview and analysis of how these neuroimaging modalities are becoming ever more effective tools for understanding the auditory brain. We conclude with a brief discussion of open methodological issues as well as potential clinical applications for auditory neuroimaging. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.
Plaze, Marion; Paillère-Martinot, Marie-Laure; Penttilä, Jani; Januel, Dominique; de Beaurepaire, Renaud; Bellivier, Franck; Andoh, Jamila; Galinowski, André; Gallarda, Thierry; Artiges, Eric; Olié, Jean-Pierre; Mangin, Jean-François; Martinot, Jean-Luc; Cachia, Arnaud
2011-01-01
Auditory verbal hallucinations are a cardinal symptom of schizophrenia. Bleuler and Kraepelin distinguished 2 main classes of hallucinations: hallucinations heard outside the head (outer space, or external, hallucinations) and hallucinations heard inside the head (inner space, or internal, hallucinations). This distinction has been confirmed by recent phenomenological studies that identified 3 independent dimensions in auditory hallucinations: language complexity, self-other misattribution, and spatial location. Brain imaging studies in schizophrenia patients with auditory hallucinations have already investigated language complexity and self-other misattribution, but the neural substrate of hallucination spatial location remains unknown. Magnetic resonance images of 45 right-handed patients with schizophrenia and persistent auditory hallucinations and 20 healthy right-handed subjects were acquired. Two homogeneous subgroups of patients were defined based on the hallucination spatial location: patients with only outer space hallucinations (N=12) and patients with only inner space hallucinations (N=15). Between-group differences were then assessed using 2 complementary brain morphometry approaches: voxel-based morphometry and sulcus-based morphometry. Convergent anatomical differences were detected between the patient subgroups in the right temporoparietal junction (rTPJ). In comparison to healthy subjects, opposite deviations in white matter volumes and sulcus displacements were found in patients with inner space hallucination and patients with outer space hallucination. The current results indicate that spatial location of auditory hallucinations is associated with the rTPJ anatomy, a key region of the "where" auditory pathway. The detected tilt in the sulcal junction suggests deviations during early brain maturation, when the superior temporal sulcus and its anterior terminal branch appear and merge.
Ceponiene, R; Westerfield, M; Torki, M; Townsend, J
2008-06-18
Major accounts of aging implicate changes in processing external stimulus information. Little is known about differential effects of auditory and visual sensory aging, and the mechanisms of sensory aging are still poorly understood. Using event-related potentials (ERPs) elicited by unattended stimuli in younger (M=25.5 yrs) and older (M=71.3 yrs) subjects, this study examined mechanisms of sensory aging under minimized attention conditions. Auditory and visual modalities were examined to address modality-specificity vs. generality of sensory aging. Between-modality differences were robust. The earlier-latency responses (P1, N1) were unaffected in the auditory modality but were diminished in the visual modality. The auditory N2 and early visual N2 were diminished. Two similarities between the modalities were age-related enhancements in the late P2 range and positive behavior-early N2 correlation, the latter suggesting that N2 may reflect long-latency inhibition of irrelevant stimuli. Since there is no evidence for salient differences in neuro-biological aging between the two sensory regions, the observed between-modality differences are best explained by the differential reliance of auditory and visual systems on attention. Visual sensory processing relies on facilitation by visuo-spatial attention, withdrawal of which appears to be more disadvantageous in older populations. In contrast, auditory processing is equipped with powerful inhibitory capacities. However, when the whole auditory modality is unattended, thalamo-cortical gating deficits may not manifest in the elderly. In contrast, ERP indices of longer-latency, stimulus-level inhibitory modulation appear to diminish with age.
Arch-Tirado, Emilio; Collado-Corona, Miguel Angel; Morales-Martínez, José de Jesús
2004-01-01
The species studied were: amphibians, Rana catesbeiana (bullfrog, 30 animals); reptiles, Sceloporus torquatus (common small lizard, 22 animals); birds, Columba livia (common dove, 20 animals); and mammals, Cavia porcellus (guinea pig, 20 animals). All animals were housed at the Institute of Human Communication Disorders, fed species-appropriate food, and had water available ad libitum. For recording of brainstem auditory evoked potentials, amphibians, birds, and mammals were anesthetized with injected ketamine (20, 25, and 50 mg/kg, respectively); reptiles were anesthetized by cooling (6 degrees C). Needle electrodes were placed on an imaginary midsagittal line between the ears and eyes, behind the right ear, and behind the left ear. Stimulation was delivered in a quiet room through a loudspeaker in free field. The signal was filtered between 100 and 3,000 Hz and analyzed with an evoked-potential system (Racia APE 78). Amphibians showed evoked responses of greater latency than the other species; in reptiles, latency was reduced in comparison with amphibians, and in birds lower latency values were observed. Guinea pig latencies were greater than those of doves, but guinea pigs responded to stimulation at 10 dB, demonstrating the best auditory threshold of the four species studied. Finally, it was corroborated that the auditory threshold of each species decreases as one advances along the phylogenetic scale. From these recordings, we are able to say that the brainstem evoked response becomes more complex and shows lower absolute latencies as we advance along the phylogenetic scale; thus, the auditory thresholds observed are in better agreement with the phylogenetic position of the species studied. These data indicate that the processing of auditory information is more complex in more evolved species.
Broadened population-level frequency tuning in the auditory cortex of tinnitus patients.
Sekiya, Kenichi; Takahashi, Mariko; Murakami, Shingo; Kakigi, Ryusuke; Okamoto, Hidehiko
2017-03-01
Tinnitus is a phantom auditory perception without an external sound source and is one of the most common public health concerns that impair the quality of life of many individuals. However, its neural mechanisms remain unclear. We herein examined population-level frequency tuning in the auditory cortex of unilateral tinnitus patients with similar hearing levels in both ears using magnetoencephalography. We compared auditory-evoked neural activities elicited by a stimulation to the tinnitus and nontinnitus ears. Objective magnetoencephalographic data suggested that population-level frequency tuning corresponding to the tinnitus ear was significantly broader than that corresponding to the nontinnitus ear in the human auditory cortex. The results obtained support the hypothesis that pathological alterations in inhibitory neural networks play an important role in the perception of subjective tinnitus. NEW & NOTEWORTHY Although subjective tinnitus is one of the most common public health concerns that impair the quality of life of many individuals, no standard treatment or objective diagnostic method currently exists. We herein revealed that population-level frequency tuning was significantly broader in the tinnitus ear than in the nontinnitus ear. The results of the present study provide an insight into the development of an objective diagnostic method for subjective tinnitus. Copyright © 2017 the American Physiological Society.
Ito, Masanori; Kado, Naoki; Suzuki, Toshiaki; Ando, Hiroshi
2013-01-01
[Purpose] The purpose of this study was to investigate the influence of external pacing with periodic auditory stimuli on the control of periodic movement. [Subjects and Methods] Eighteen healthy subjects performed self-paced, synchronization-continuation, and syncopation-continuation tapping. Inter-onset intervals were 1,000, 2,000 and 5,000 ms. The variability of inter-tap intervals was compared between the different pacing conditions and between self-paced tapping and each continuation phase. [Results] There were no significant differences in the mean and standard deviation of the inter-tap interval between pacing conditions. For the 1,000 and 5,000 ms tasks, there were significant differences in the mean inter-tap interval following auditory pacing compared with self-pacing. For the 2,000 ms syncopation condition and 5,000 ms task, there were significant differences from self-pacing in the standard deviation of the inter-tap interval following auditory pacing. [Conclusion] These results suggest that the accuracy of periodic movement with intervals of 1,000 and 5,000 ms can be improved by the use of auditory pacing. However, the consistency of periodic movement is mainly dependent on the inherent skill of the individual; thus, improvement of consistency based on pacing is unlikely. PMID:24259932
Duplication of the External Auditory Canal: Two Cases and a Review of the Literature
Goudakos, John K.; Blioskas, Sarantis; Psillas, George; Vital, Victor; Markou, Konstantinos
2012-01-01
The objective of the present paper is to describe the clinical presentation, diagnostic process, surgical treatment, and outcome of 2 patients with first branchial cleft anomaly. The first case was an 8-year-old girl who presented with an elastic lesion located in the left infra-auricular area, in close relation with the lobule, duplicating the external auditory canal. The magnetic resonance imaging revealed a lesion appearing as a rather well-circumscribed mass within the left parotid gland and duplicating the ear canal. A superficial parotidectomy was subsequently performed, with total excision of the cyst. The second patient was a 15-year-old girl who presented with a congenital fistula of the right lateral neck. At superficial parotidectomy, a total excision of the fistula was performed. During the operation the tract was found to lie between the branches of the facial nerve, extending with a blind-ending canal parallel to the external acoustic meatus. In conclusion, first branchial cleft anomalies are rare malformations with cervical, parotid, or auricular clinical manifestations. Diagnosis of first branchial cleft lesions is achieved mainly through careful physical examination. Complete surgical excision with wide exposure of the lesion is essential in order to achieve permanent cure and avoid recurrence. PMID:23213587
The Potential Role of the cABR in Assessment and Management of Hearing Impairment
Anderson, Samira; Kraus, Nina
2013-01-01
Hearing aid technology has improved dramatically in the last decade, especially in the ability to adaptively respond to dynamic aspects of background noise. Despite these advancements, however, hearing aid users continue to report difficulty hearing in background noise and having trouble adjusting to amplified sound quality. These difficulties may arise in part from current approaches to hearing aid fittings, which largely focus on increased audibility and management of environmental noise. These approaches do not take into account the fact that sound is processed all along the auditory system from the cochlea to the auditory cortex. Older adults represent the largest group of hearing aid wearers; yet older adults are known to have deficits in temporal resolution in the central auditory system. Here we review evidence that supports the use of the auditory brainstem response to complex sounds (cABR) in the assessment of hearing-in-noise difficulties and auditory training efficacy in older adults. PMID:23431313
NASA Astrophysics Data System (ADS)
Pineda, Gustavo; Atehortúa, Angélica; Iregui, Marcela; García-Arteaga, Juan D.; Romero, Eduardo
2017-11-01
External auditory cues stimulate motor-related areas of the brain, activating motor pathways parallel to the basal ganglia circuits and providing a temporal pattern for gait. In effect, patients may re-learn motor skills mediated by compensatory neuroplasticity mechanisms. However, long-term functional gains are dependent on the nature of the pathology, follow-up is usually limited, and reinforcement by healthcare professionals is crucial. Aiming to cope with these challenges, several research efforts and device implementations provide auditory or visual stimulation to improve the Parkinsonian gait pattern, inside and outside clinical scenarios. The current work presents a semiautomated strategy for spatio-temporal feature extraction to study the relations between auditory temporal stimulation and the spatiotemporal gait response. A protocol for auditory stimulation was built to evaluate how well the strategy can be integrated into clinical practice. The method was evaluated in a cross-sectional measurement with an exploratory group of people with Parkinson's disease (n = 12, in stages 1, 2 and 3) and control subjects (n = 6). The results showed a strong linear relation between auditory stimulation and cadence response in control subjects (R = 0.98 ± 0.008) and PD subjects in stage 2 (R = 0.95 ± 0.03) and stage 3 (R = 0.89 ± 0.05). Normalized step length showed a variable response between low and high gait velocity (R between 0.2 and 0.97). The correlation between normalized mean velocity and stimulus was strong in all PD stage 2 (R > 0.96), PD stage 3 (R > 0.84), and control (R > 0.91) subjects for all experimental conditions. Among participants, the largest variation from baseline was found in PD subjects in stage 3 (53.61 ± 39.2 steps/min, 0.12 ± 0.06 in step length and 0.33 ± 0.16 in mean velocity). In this group these values were higher than their own baseline. These variations are related to the direct effect of metronome frequency on cadence and velocity. The variation of step length involves different regulation strategies and may require other, more specific external cues. In conclusion, the current protocol (and its selected parameters: kind of sound, training time, step of variation, and range of variation) provides a suitable gait facilitation method, especially for patients with the greatest gait disturbance (stages 2 and 3). The method should be adjusted for initial stages and evaluated in a rehabilitation program.
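The reported R values appear to be per-group correlations between the stimulation rate and the measured gait response; as an illustration only (hypothetical numbers, not the study's data), such a correlation can be computed as follows.

```python
# Illustrative sketch: Pearson correlation between metronome rate and measured cadence.
import numpy as np

stimulus_rate = np.array([80, 90, 100, 110, 120])      # metronome beats/min (assumed protocol steps)
measured_cadence = np.array([82, 88, 101, 108, 119])   # hypothetical steps/min for one subject

r = np.corrcoef(stimulus_rate, measured_cadence)[0, 1]
print(f"cadence-stimulus correlation R = {r:.2f}")
```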
The role of the primary auditory cortex in the neural mechanism of auditory verbal hallucinations
Kompus, Kristiina; Falkenberg, Liv E.; Bless, Josef J.; Johnsen, Erik; Kroken, Rune A.; Kråkvik, Bodil; Larøi, Frank; Løberg, Else-Marie; Vedul-Kjelsås, Einar; Westerhausen, René; Hugdahl, Kenneth
2013-01-01
Auditory verbal hallucinations (AVHs) are a subjective experience of “hearing voices” in the absence of corresponding physical stimulation in the environment. The most remarkable feature of AVHs is their perceptual quality, that is, the experience is subjectively often as vivid as hearing an actual voice, as opposed to mental imagery or auditory memories. This has led to propositions that dysregulation of the primary auditory cortex (PAC) is a crucial component of the neural mechanism of AVHs. One possible mechanism by which the PAC could give rise to the experience of hallucinations is aberrant patterns of neuronal activity whereby the PAC is overly sensitive to activation arising from internal processing, while being less responsive to external stimulation. In this paper, we review recent research relevant to the role of the PAC in the generation of AVHs. We present new data from a functional magnetic resonance imaging (fMRI) study, examining the responsivity of the left and right PAC to parametrical modulation of the intensity of auditory verbal stimulation, and corresponding attentional top-down control in non-clinical participants with AVHs, and non-clinical participants with no AVHs. Non-clinical hallucinators showed reduced activation to speech sounds but intact attentional modulation in the right PAC. Additionally, we present data from a group of schizophrenia patients with AVHs, who do not show attentional modulation of left or right PAC. The context-appropriate modulation of the PAC may be a protective factor in non-clinical hallucinations. PMID:23630479
Prevention of meatal stenosis in conchal setback otoplasty.
Small, A
1975-10-01
The conchal setback is a useful technique for correcting many prominent ear deformities. A disadvantage of the technique in some cases is meatal stenosis of the external auditory canal. By excising a portion of meatal cartilage, this problem is prevented. The technique is illustrated and post-operative result is shown.
DOT National Transportation Integrated Search
1999-12-01
To achieve the goals for Advanced Traveler Information Systems (ATIS), significant information will necessarily be provided to the driver. A primary ATIS design issue is the display modality (i.e., visual, auditory, or the combination) selected for p...
An anatomical and functional topography of human auditory cortical areas
Moerel, Michelle; De Martino, Federico; Formisano, Elia
2014-01-01
While advances in magnetic resonance imaging (MRI) throughout the last decades have enabled the detailed anatomical and functional inspection of the human brain non-invasively, to date there is no consensus regarding the precise subdivision and topography of the areas forming the human auditory cortex. Here, we propose a topography of the human auditory areas based on insights on the anatomical and functional properties of human auditory areas as revealed by studies of cyto- and myelo-architecture and fMRI investigations at ultra-high magnetic field (7 Tesla). Importantly, we illustrate that—whereas a group-based approach to analyze functional (tonotopic) maps is appropriate to highlight the main tonotopic axis—the examination of tonotopic maps at single subject level is required to detail the topography of primary and non-primary areas that may be more variable across subjects. Furthermore, we show that considering multiple maps indicative of anatomical (i.e., myelination) as well as of functional properties (e.g., broadness of frequency tuning) is helpful in identifying auditory cortical areas in individual human brains. We propose and discuss a topography of areas that is consistent with old and recent anatomical post-mortem characterizations of the human auditory cortex and that may serve as a working model for neuroscience studies of auditory functions. PMID:25120426
Theoretical Tinnitus Framework: A Neurofunctional Model.
Ghodratitoostani, Iman; Zana, Yossi; Delbem, Alexandre C B; Sani, Siamak S; Ekhtiari, Hamed; Sanchez, Tanit G
2016-01-01
Subjective tinnitus is the conscious (attended) awareness perception of sound in the absence of an external source and can be classified as an auditory phantom perception. Earlier literature establishes three distinct states of conscious perception as unattended, attended, and attended awareness conscious perception. The current tinnitus development models depend on the role of external events congruently paired with the causal physical events that precipitate the phantom perception. We propose a novel Neurofunctional Tinnitus Model to indicate that the conscious (attended) awareness perception of phantom sound is essential in activating the cognitive-emotional value. The cognitive-emotional value plays a crucial role in governing attention allocation as well as developing annoyance within tinnitus clinical distress. Structurally, the Neurofunctional Tinnitus Model includes the peripheral auditory system, the thalamus, the limbic system, brainstem, basal ganglia, striatum, and the auditory along with prefrontal cortices. Functionally, we assume the model includes presence of continuous or intermittent abnormal signals at the peripheral auditory system or midbrain auditory paths. Depending on the availability of attentional resources, the signals may or may not be perceived. The cognitive valuation process strengthens the lateral-inhibition and noise canceling mechanisms in the mid-brain, which leads to the cessation of sound perception and renders the signal evaluation irrelevant. However, the "sourceless" sound is eventually perceived and can be cognitively interpreted as suspicious or an indication of a disease in which the cortical top-down processes weaken the noise canceling effects. This results in an increase in cognitive and emotional negative reactions such as depression and anxiety. The negative or positive cognitive-emotional feedbacks within the top-down approach may have no relation to the previous experience of the patients. They can also be associated with aversive stimuli similar to abnormal neural activity in generating the phantom sound. Cognitive and emotional reactions depend on general personality biases toward evaluative conditioning combined with a cognitive-emotional negative appraisal of stimuli such as the case of people with present hypochondria. We acknowledge that the projected Neurofunctional Tinnitus Model does not cover all tinnitus variations and patients. To support our model, we present evidence from several studies using neuroimaging, electrophysiology, brain lesion, and behavioral techniques.
Theoretical Tinnitus Framework: A Neurofunctional Model
Ghodratitoostani, Iman; Zana, Yossi; Delbem, Alexandre C. B.; Sani, Siamak S.; Ekhtiari, Hamed; Sanchez, Tanit G.
2016-01-01
Subjective tinnitus is the conscious (attended) awareness perception of sound in the absence of an external source and can be classified as an auditory phantom perception. Earlier literature establishes three distinct states of conscious perception as unattended, attended, and attended awareness conscious perception. The current tinnitus development models depend on the role of external events congruently paired with the causal physical events that precipitate the phantom perception. We propose a novel Neurofunctional Tinnitus Model to indicate that the conscious (attended) awareness perception of phantom sound is essential in activating the cognitive-emotional value. The cognitive-emotional value plays a crucial role in governing attention allocation as well as developing annoyance within tinnitus clinical distress. Structurally, the Neurofunctional Tinnitus Model includes the peripheral auditory system, the thalamus, the limbic system, brainstem, basal ganglia, striatum, and the auditory along with prefrontal cortices. Functionally, we assume the model includes presence of continuous or intermittent abnormal signals at the peripheral auditory system or midbrain auditory paths. Depending on the availability of attentional resources, the signals may or may not be perceived. The cognitive valuation process strengthens the lateral-inhibition and noise canceling mechanisms in the mid-brain, which leads to the cessation of sound perception and renders the signal evaluation irrelevant. However, the “sourceless” sound is eventually perceived and can be cognitively interpreted as suspicious or an indication of a disease in which the cortical top-down processes weaken the noise canceling effects. This results in an increase in cognitive and emotional negative reactions such as depression and anxiety. The negative or positive cognitive-emotional feedbacks within the top-down approach may have no relation to the previous experience of the patients. They can also be associated with aversive stimuli similar to abnormal neural activity in generating the phantom sound. Cognitive and emotional reactions depend on general personality biases toward evaluative conditioning combined with a cognitive-emotional negative appraisal of stimuli such as the case of people with present hypochondria. We acknowledge that the projected Neurofunctional Tinnitus Model does not cover all tinnitus variations and patients. To support our model, we present evidence from several studies using neuroimaging, electrophysiology, brain lesion, and behavioral techniques. PMID:27594822
Grahn, Jessica A.; Rowe, James B.
2009-01-01
Little is known about the underlying neurobiology of rhythm and beat perception, despite its universal cultural importance. Here we used functional magnetic resonance imaging to study rhythm perception in musicians and non-musicians. Three conditions varied in the degree to which external reinforcement versus internal generation of the beat was required. The ‘Volume’ condition strongly externally marked the beat with volume changes, the ‘Duration’ condition marked the beat with weaker accents arising from duration changes, and the ‘Unaccented’ condition required the beat to be entirely internally generated. In all conditions, beat rhythms compared to nonbeat control rhythms revealed putamen activity. The presence of a beat was also associated with greater connectivity between the putamen and the supplementary motor area (SMA), the premotor cortex (PMC) and auditory cortex. In contrast, the type of accent within the beat conditions modulated the coupling between premotor and auditory cortex, with greater modulation for musicians than non-musicians. Importantly, the putamen's response to beat conditions was not due to differences in temporal complexity between the three rhythm conditions. We propose that a cortico-subcortical network including the putamen, SMA, and PMC is engaged for the analysis of temporal sequences and prediction or generation of putative beats, especially under conditions that may require internal generation of the beat. The importance of this system for auditory-motor interaction and development of precisely timed movement is suggested here by its facilitation in musicians. PMID:19515922
Plastic brain mechanisms for attaining auditory temporal order judgment proficiency.
Bernasconi, Fosco; Grivel, Jeremy; Murray, Micah M; Spierer, Lucas
2010-04-15
Accurate perception of the order of occurrence of sensory information is critical for the building up of coherent representations of the external world from ongoing flows of sensory inputs. While some psychophysical evidence reports that performance on temporal perception can improve, the underlying neural mechanisms remain unresolved. Using electrical neuroimaging analyses of auditory evoked potentials (AEPs), we identified the brain dynamics and mechanism supporting improvements in auditory temporal order judgment (TOJ) during the course of the first vs. latter half of the experiment. Training-induced changes in brain activity were first evident 43-76 ms post stimulus onset and followed from topographic, rather than pure strength, AEP modulations. Improvements in auditory TOJ accuracy thus followed from changes in the configuration of the underlying brain networks during the initial stages of sensory processing. Source estimations revealed an increase in the lateralization of initially bilateral posterior sylvian region (PSR) responses at the beginning of the experiment to left-hemisphere dominance at its end. Further supporting the critical role of left and right PSR in auditory TOJ proficiency, as the experiment progressed, responses in the left and right PSR went from being correlated to un-correlated. These collective findings provide insights on the neurophysiologic mechanism and plasticity of temporal processing of sounds and are consistent with models based on spike timing dependent plasticity. Copyright 2010 Elsevier Inc. All rights reserved.
Cerebellar contribution to the prediction of self-initiated sounds.
Knolle, Franziska; Schröger, Erich; Kotz, Sonja A
2013-10-01
In everyday life we frequently make the fundamental distinction between sensory input resulting from our own actions and sensory input that is externally-produced. It has been speculated that making this distinction involves the use of an internal forward-model, which enables the brain to adjust its response to self-produced sensory input. In the auditory domain, this idea has been supported by event-related potential and evoked-magnetic field studies revealing that self-initiated sounds elicit a suppressed N100/M100 brain response compared to externally-produced sounds. Moreover, a recent study reveals that patients with cerebellar lesions do not show a significant N100-suppression effect. This result supports the theory that the cerebellum is essential for generating internal forward predictions. However, all except one study compared self-initiated and externally-produced auditory stimuli in separate conditions. Such a setup prevents an unambiguous interpretation of the N100-suppression effect when distinguishing self- and externally-produced sensory stimuli: the N100-suppression can also be explained by differences in the allocation of attention in different conditions. In the current electroencephalography (EEG)-study we investigated the N100-suppression effect in an altered design comparing (i) self-initiated sounds to externally-produced sounds that occurred intermixed with these self-initiated sounds (i.e., both sound types occurred in the same condition) or (ii) self-initiated sounds to externally-produced sounds that occurred in separate conditions. Results reveal that the cerebellum generates selective predictions in response to self-initiated sounds independent of condition type: cerebellar patients, in contrast to healthy controls, do not display an N100-suppression effect in response to self-initiated sounds when intermixed with externally-produced sounds. Furthermore, the effect is not influenced by the temporal proximity of externally-produced sounds to self-produced sounds. Controls and patients showed a P200-reduction in response to self-initiated sounds. This suggests the existence of an additional and probably more conscious mechanism for identifying self-generated sounds that does not functionally depend on the cerebellum. Copyright © 2012 Elsevier Srl. All rights reserved.
Surfer's exostosis in a child who does not surf.
Paddock, Michael; Lau, Kimberley; Raghavan, Ashok; Dritsoula, Aikaterini
2018-06-01
Surfer's exostoses are more commonly seen in adults who frequently participate in aquatic activities with repeated exposure to cold water and wind. However, this entity has not been previously reported in the pediatric population. Most patients can be managed conservatively, particularly considering that surgical removal of external auditory canal exostosis can be challenging.
NASA Astrophysics Data System (ADS)
Shimokura, Ryota; Hosoi, Hiroshi; Nishimura, Tadashi; Iwakura, Takashi; Yamanaka, Toshiaki
2015-01-01
When the aural cartilage is made to vibrate, it generates sound directly into the external auditory canal, which can be clearly heard. Although the concept of cartilage conduction can be applied to various speech communication and music industrial devices (e.g. smartphones, music players and hearing aids), the conductive performance of such devices has not yet been defined because the calibration methods are different from those currently used for air and bone conduction. Thus, the aim of this study was to simulate the cartilage conduction sound (CCS) using a head and torso simulator (HATS) and a model of aural cartilage (polyurethane resin pipe) and compare the results with experimental ones. Using the HATS, we found that the simulated CCS at frequencies above 2 kHz corresponded to the average measured CCS from seven subjects. Using a model of skull bone and aural cartilage, we found that the simulated CCS at frequencies lower than 1.5 kHz agreed with the measured CCS. Therefore, a combination of these two methods can be used to estimate the CCS with high accuracy.
Trinkaus, Erik; Wu, Xiu-Jie
2017-01-01
In the context of Middle and Late Pleistocene eastern Eurasian human crania, the external auditory exostoses (EAE) of the late archaic Xuchang 1 and 2 and the Xujiayao 15 early Late Pleistocene human temporal bones are described. Xujiayao 15 has small EAE (Grade 1), Xuchang 1 presents bilateral medium EAE (Grade 2), and Xuchang 2 exhibits bilaterally large EAE (Grade 3), especially on the right side. These cranial remains join the other eastern Eurasian later Pleistocene humans in providing frequencies of 61% (N = 18) and 58% (N = 12) respectively for archaic and early modern human samples. These values are near the upper limits of recent human frequencies, and they imply frequent aquatic exposure among these Pleistocene humans. In addition, the medial extents of the Xuchang 1 and 2 EAE would have impinged on their tympanic membranes, and the large EAE of Xuchang 2 would have resulted in cerumen impaction. Both effects would have produced conductive hearing loss, a serious impairment in a Pleistocene foraging context.
Auditory short-term memory in the primate auditory cortex
Scott, Brian H.; Mishkin, Mortimer
2015-01-01
Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active ‘working memory’ bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a ‘match’ stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. PMID:26541581
Diagnosis and management of somatosensory tinnitus: review article
Sanchez, Tanit Ganz; Rocha, Carina Bezerra
2011-01-01
Tinnitus is the perception of sound in the absence of an external acoustic stimulus. It affects 10–17% of the world's population and is a complex symptom with multiple causes, which is influenced by pathways other than the auditory one. Recently, it has been observed that tinnitus may be provoked or modulated by stimulation arising from the somatosensory system, as well as from the somatomotor and visual–motor systems. This specific subgroup – somatosensory tinnitus – is present in 65% of cases, even though it tends to be underdiagnosed. As a consequence, it is necessary to establish evaluation protocols and specific treatments focusing on both the auditory pathway and the musculoskeletal system. PMID:21808880
Aoyama, Atsushi; Haruyama, Tomohiro; Kuriki, Shinya
2013-09-01
Unconscious monitoring of multimodal stimulus changes enables humans to effectively sense the external environment. Such automatic change detection is thought to be reflected in auditory and visual mismatch negativity (MMN) and mismatch negativity fields (MMFs). These are event-related potentials and magnetic fields, respectively, evoked by deviant stimuli within a sequence of standard stimuli, and both are typically studied during irrelevant visual tasks that cause the stimuli to be ignored. Due to the sensitivity of MMN/MMF to potential effects of explicit attention to vision, however, it is unclear whether multisensory co-occurring changes can purely facilitate early sensory change detection reciprocally across modalities. We adopted a tactile task involving the reading of Braille patterns as a neutral ignore condition, while measuring magnetoencephalographic responses to concurrent audiovisual stimuli that were infrequently deviated either in auditory, visual, or audiovisual dimensions; 1000-Hz standard tones were switched to 1050-Hz deviant tones and/or two-by-two standard check patterns displayed on both sides of visual fields were switched to deviant reversed patterns. The check patterns were set to be faint enough so that the reversals could be easily ignored even during Braille reading. While visual MMFs were virtually undetectable even for visual and audiovisual deviants, significant auditory MMFs were observed for auditory and audiovisual deviants, originating from bilateral supratemporal auditory areas. Notably, auditory MMFs were significantly enhanced for audiovisual deviants from about 100 ms post-stimulus, as compared with the summation responses for auditory and visual deviants or for each of the unisensory deviants recorded in separate sessions. Evidenced by high tactile task performance with unawareness of visual changes, we conclude that Braille reading can successfully suppress explicit attention and that simultaneous multisensory changes can implicitly strengthen automatic change detection from an early stage in a cross-sensory manner, at least in the vision to audition direction.
Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields.
Yildiz, Izzet B; Mesgarani, Nima; Deneve, Sophie
2016-12-07
A primary goal of auditory neuroscience is to identify the sound features extracted and represented by auditory neurons. Linear encoding models, which describe neural responses as a function of the stimulus, have been primarily used for this purpose. Here, we provide theoretical arguments and experimental evidence in support of an alternative approach, based on decoding the stimulus from the neural response. We used a Bayesian normative approach to predict the responses of neurons detecting relevant auditory features, despite ambiguities and noise. We compared the model predictions to recordings from the primary auditory cortex of ferrets and found that: (1) the decoding filters of auditory neurons resemble the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynamics of responses better than a linear encoding model of similar complexity; and (3) the decoding model accounts for the accuracy with which the stimulus is represented in neural activity, whereas the linear encoding model performs very poorly. Most importantly, our model predicts that neuronal responses are fundamentally shaped by "explaining away," a divisive competition between alternative interpretations of the auditory scene. Neural responses in the auditory cortex are dynamic, nonlinear, and hard to predict. Traditionally, encoding models have been used to describe neural responses as a function of the stimulus. However, in addition to external stimulation, neural activity is strongly modulated by the responses of other neurons in the network. We hypothesized that auditory neurons aim to collectively decode their stimulus. In particular, a stimulus feature that is decoded (or explained away) by one neuron is not explained by another. We demonstrated that this novel Bayesian decoding model is better at capturing the dynamic responses of cortical neurons in ferrets. Whereas the linear encoding model poorly reflects selectivity of neurons, the decoding model can account for the strong nonlinearities observed in neural data. Copyright © 2016 Yildiz et al.
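The linear encoding (spectro-temporal receptive field) model that the decoding approach is contrasted with can be made concrete with a minimal sketch. The Python example below is not the authors' implementation; it fits a generic STRF by ridge regression on a time-lagged spectrogram, and the data are random placeholders.

# Minimal sketch (not the authors' code): a linear encoding model (STRF)
# estimated by ridge regression from a time-lagged stimulus spectrogram.
import numpy as np

def build_lagged_design(spectrogram, n_lags):
    """Stack time-lagged copies of a (time x freq) spectrogram into a design matrix."""
    T, F = spectrogram.shape
    X = np.zeros((T, n_lags * F))
    for lag in range(n_lags):
        X[lag:, lag * F:(lag + 1) * F] = spectrogram[:T - lag, :]
    return X

def fit_strf(spectrogram, response, n_lags=20, ridge=1.0):
    """Ridge-regression estimate of the STRF: w = (X'X + aI)^-1 X'y."""
    X = build_lagged_design(spectrogram, n_lags)
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    w = np.linalg.solve(XtX, X.T @ response)
    return w.reshape(n_lags, spectrogram.shape[1])  # lags x frequencies

# Toy usage: random data standing in for a stimulus spectrogram and a firing rate.
rng = np.random.default_rng(0)
spec = rng.random((1000, 32))                         # 1000 time bins x 32 channels
rate = spec[:, 10] * 0.5 + rng.normal(0, 0.1, 1000)   # response driven by one channel
strf = fit_strf(spec, rate)
print(strf.shape)                                     # (20, 32)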
Speech Auditory Alerts Promote Memory for Alerted Events in a Video-Simulated Self-Driving Car Ride.
Nees, Michael A; Helbein, Benji; Porter, Anna
2016-05-01
Auditory displays could be essential to helping drivers maintain situation awareness in autonomous vehicles, but to date, few or no studies have examined the effectiveness of different types of auditory displays for this application scenario. Recent advances in the development of autonomous vehicles (i.e., self-driving cars) have suggested that widespread automation of driving may be tenable in the near future. Drivers may be required to monitor the status of automation programs and vehicle conditions as they engage in secondary leisure or work tasks (entertainment, communication, etc.) in autonomous vehicles. An experiment compared memory for alerted events (a component of Level 1 situation awareness) using speech alerts, auditory icons, and a visual control condition during a video-simulated self-driving car ride with a visual secondary task. The alerts gave information about the vehicle's operating status and the driving scenario. Speech alerts resulted in better memory for alerted events. Both auditory display types resulted in less perceived effort devoted toward the study tasks but also greater perceived annoyance with the alerts. Speech auditory displays promoted Level 1 situation awareness during a simulation of a ride in a self-driving vehicle under routine conditions, but annoyance remains a concern with auditory displays. Speech auditory displays showed promise as a means of increasing Level 1 situation awareness of routine scenarios during an autonomous vehicle ride with an unrelated secondary task. © 2016, Human Factors and Ergonomics Society.
ERP correlates of processing the auditory consequences of own versus observed actions.
Ghio, Marta; Scharmach, Katrin; Bellebaum, Christian
2018-06-01
Research has so far focused on neural mechanisms that allow us to predict the sensory consequences of our own actions, thus also contributing to ascribing them to ourselves as agents. Less attention has been devoted to processing the sensory consequences of observed actions ascribed to another human agent. Focusing on audition, there is consistent evidence of a reduction of the auditory N1 ERP for self- versus externally generated sounds, while ERP correlates of processing sensory consequences of observed actions are mainly unexplored. In a between-groups ERP study, we compared sounds generated by self-performed (self group) or observed (observation group) button presses with externally generated sounds, which were presented either intermixed with action-generated sounds or in a separate condition. Results revealed an overall reduction of the N1 amplitude for processing action- versus externally generated sounds in both the intermixed and the separate condition, with no difference between the groups. Further analyses, however, suggested that an N1 attenuation effect relative to the intermixed condition at frontal electrode sites might exist only for the self but not for the observation group. For both groups, we found a reduction of the P2 amplitude for processing action- versus all externally generated sounds. We discuss whether the N1 and the P2 reduction can be interpreted in terms of predictive mechanisms for both action execution and observation, and to what extent these components might reflect also the feeling of (self) agency and the judgment of agency (i.e., ascribing agency either to the self or to others). © 2017 Society for Psychophysiological Research.
Establishing the Response of Low Frequency Auditory Filters
NASA Technical Reports Server (NTRS)
Rafaelof, Menachem; Christian, Andrew; Shepherd, Kevin; Rizzi, Stephen; Stephenson, James
2017-01-01
The response of auditory filters is central to the frequency selectivity of the human auditory system. This is true especially for realistic complex sounds that are often encountered in many applications such as modeling the audibility of sound, voice recognition, noise cancelation, and the development of advanced hearing aid devices. The purpose of this study was to establish the response of low-frequency (below 100 Hz) auditory filters. Two experiments were designed and executed; the first measured subjects' hearing thresholds for pure tones (at 25, 31.5, 40, 50, 63 and 80 Hz), and the second measured the Psychophysical Tuning Curves (PTCs) at two signal frequencies (Fs = 40 and 63 Hz). Experiment 1 involved 36 subjects, while experiment 2 used 20 subjects selected from experiment 1. Both experiments were based on a 3-down 1-up 3AFC adaptive staircase test procedure using either a variable-level narrow-band noise masker or a tone. A summary of the results includes masked threshold data in the form of PTCs, the response of the auditory filters, their distribution, and a comparison with similar recently published data.
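The 3-down 1-up adaptive staircase named above can be sketched in a few lines. The Python example below is a generic illustration, not the study's procedure or parameters: the level drops after three consecutive correct responses and rises after any error, so the track converges near the 79.4%-correct point, and a simulated listener stands in for a real subject.

# Minimal sketch (assumed parameters, not the study's code) of a 3-down 1-up
# adaptive staircase: decrease the level after three consecutive correct
# responses, increase it after any error.
import random

def run_staircase(respond, start_db=60.0, step_db=2.0, n_reversals=12):
    level, correct_run, direction = start_db, 0, None
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):                 # True if the (simulated) listener is correct
            correct_run += 1
            if correct_run == 3:           # 3 correct in a row -> decrease level
                correct_run = 0
                if direction == "up":
                    reversals.append(level)
                direction = "down"
                level -= step_db
        else:                              # any error -> increase level
            correct_run = 0
            if direction == "down":
                reversals.append(level)
            direction = "up"
            level += step_db
    return sum(reversals[-8:]) / 8.0       # threshold = mean of the last reversals

# Toy listener: a psychometric function with a "true" threshold of 45 dB and
# the 1/3 guessing floor of a 3AFC task.
def listener(level, true_threshold=45.0):
    p = 1.0 / (1.0 + 10 ** ((true_threshold - level) / 5.0))
    p = 1 / 3 + (1 - 1 / 3) * p
    return random.random() < p

print(round(run_staircase(listener), 1))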
Fractal-Based Analysis of the Influence of Music on Human Respiration
NASA Astrophysics Data System (ADS)
Reza Namazi, H.
An important challenge in respiration-related studies is to investigate the influence of external stimuli on human respiration. Auditory stimuli are an important type of stimulus that influences human respiration. However, no trend has previously been reported that relates the characteristics of an auditory stimulus to the characteristics of the respiratory signal. In this paper, we investigate the correlation between auditory stimuli and the respiratory signal from a fractal point of view. We found that the fractal structure of the respiratory signal is correlated with the fractal structure of the applied music: music with a greater fractal dimension results in a respiratory signal with a smaller fractal dimension. To verify this result, we used approximate entropy. The results show that the respiratory signal has smaller approximate entropy when music with smaller approximate entropy is chosen. The method of analysis could be further investigated to analyze the variations of different physiological time series due to various types of stimuli when complexity is the main concern.
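The approximate entropy measure used above to cross-check the fractal result can be computed as follows. This is a generic Python sketch of the standard ApEn algorithm; the template length m, the tolerance r, and the test signals are illustrative assumptions, not values taken from the study.

# Minimal sketch of approximate entropy (ApEn); m and r are conventional choices.
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = r_factor * np.std(x)

    def phi(m):
        # All overlapping templates of length m.
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of templates.
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # Fraction of templates within tolerance r (self-matches included, as in ApEn).
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# Toy usage: a regular respiratory-like signal vs. a more irregular one; the
# irregular signal should yield the larger ApEn value.
t = np.linspace(0, 60, 1500)
regular = np.sin(2 * np.pi * 0.25 * t) + 0.05 * np.random.randn(t.size)
irregular = np.random.randn(t.size)
print(approximate_entropy(regular), approximate_entropy(irregular))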
Hearing rehabilitation in Treacher Collins Syndrome with bone anchored hearing aid
Polanski, José Fernando; Plawiak, Anna Clara; Ribas, Angela
2015-01-01
Objective: To describe a case of hearing rehabilitation with a bone-anchored hearing aid in a patient with Treacher Collins syndrome. Case description: A 3-year-old male patient with Treacher Collins syndrome and severe complications due to the syndrome, mostly related to the upper airway and hearing. He had bilateral atresia of the external auditory canals and malformation of the pinna. The initial hearing rehabilitation used a bone vibration arch, but acceptance was poor owing to the discomfort caused by skull compression. A bone-anchored hearing aid in soft-band format was prescribed. The results were evaluated through behavioral hearing tests and the Meaningful Use of Speech Scale (MUSS) and Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS) questionnaires. Comments: The patient showed higher acceptance of the bone-anchored hearing aid compared with the traditional bone vibration arch. Audiological tests and the speech and auditory skills assessments also showed better communication and hearing outcomes. The bone-anchored hearing aid is a good option for hearing rehabilitation in this syndrome. PMID:26298651
Pichora-Fuller, M. Kathleen; Singh, Gurjit
2006-01-01
Recent advances in research and clinical practice concerning aging and auditory communication have been driven by questions about age-related differences in peripheral hearing, central auditory processing, and cognitive processing. A “site-of-lesion” view based on anatomic levels inspired research to test competing hypotheses about the contributions of changes at these three levels of the nervous system. A “processing” view based on psychologic functions inspired research to test alternative hypotheses about how lower-level sensory processes and higher-level cognitive processes interact. In the present paper, we suggest that these two views can begin to be unified following the example set by the cognitive neuroscience of aging. The early pioneers of audiology anticipated such a unified view, but today, advances in science and technology make it both possible and necessary. Specifically, we argue that a synthesis of new knowledge concerning the functional neuroscience of auditory cognition is necessary to inform the design and fitting of digital signal processing in “intelligent” hearing devices, as well as to inform best practices for resituating hearing aid fitting in a broader context of audiologic rehabilitation. Long-standing approaches to rehabilitative audiology should be revitalized to emphasize the important role that training and therapy play in promoting compensatory brain reorganization as older adults acclimatize to new technologies. The purpose of the present paper is to provide an integrated framework for understanding how auditory and cognitive processing interact when older adults listen, comprehend, and communicate in realistic situations, to review relevant models and findings, and to suggest how new knowledge about age-related changes in audition and cognition may influence future developments in hearing aid fitting and audiologic rehabilitation. PMID:16528429
Magosso, Elisa; Bertini, Caterina; Cuppini, Cristiano; Ursino, Mauro
2016-10-01
Hemianopic patients retain some abilities to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, and preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network is able to reproduce the audiovisual phenomena in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contribution of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; ii) a different role of visual cortices in the two phenomena: auditory enhancement of conscious visual detection being conditional on surviving V1 islands, while visual enhancement of auditory localization persisting even after complete V1 damage. The present study may contribute to advance understanding of the audiovisual dialogue between cortical and subcortical structures in healthy and unisensory deficit conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Techniques and applications for binaural sound manipulation in human-machine interfaces
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Wenzel, Elizabeth M.
1990-01-01
The implementation of binaural sound to speech and auditory sound cues (auditory icons) is addressed from both an applications and technical standpoint. Techniques overviewed include processing by means of filtering with head-related transfer functions. Application to advanced cockpit human interface systems is discussed, although the techniques are extendable to any human-machine interface. Research issues pertaining to three-dimensional sound displays under investigation at the Aerospace Human Factors Division at NASA Ames Research Center are described.
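The HRTF-filtering technique described above amounts to convolving a mono source with a left and a right head-related impulse response for the desired direction. The Python sketch below illustrates only that signal flow; the impulse responses are crude placeholders (an interaural time and level difference), not measured transfer functions, and every parameter is an assumption.

# Minimal sketch of binaural rendering by HRTF filtering: a mono signal is
# convolved with left/right head-related impulse responses (HRIRs). The HRIRs
# here are placeholders, not measured data.
import numpy as np

FS = 44100  # sample rate in Hz

def placeholder_hrir(azimuth_deg, n_taps=64):
    """Toy left/right impulse responses: interaural delay and level difference only."""
    itd_s = 0.0007 * np.sin(np.radians(azimuth_deg))       # up to ~0.7 ms interaural delay
    delay = int(round(abs(itd_s) * FS))
    near, far = np.zeros(n_taps), np.zeros(n_taps)
    near[0], far[delay] = 1.0, 0.6                          # farther ear: later and quieter
    return (far, near) if azimuth_deg >= 0 else (near, far)  # (left, right)

def render_binaural(mono, azimuth_deg):
    hrir_l, hrir_r = placeholder_hrir(azimuth_deg)
    left = np.convolve(mono, hrir_l)
    right = np.convolve(mono, hrir_r)
    return np.stack([left, right], axis=1)                  # (samples, 2) stereo signal

tone = np.sin(2 * np.pi * 500 * np.arange(FS) / FS)         # 1 s, 500 Hz auditory cue
stereo = render_binaural(tone, azimuth_deg=45)              # virtual source to the right
print(stereo.shape)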
Aitkin, L M; Nelson, J E
1989-01-01
Two specialized features are described in the auditory system of Acrobates pygmaeus, a small gliding marsupial. Firstly, the ear canal includes a transverse disk of bone that partly occludes the canal near the eardrum. The resultant narrow-necked chamber above the eardrum appears to attenuate sound across a broad frequency range, except at 27-29 kHz at which a net gain of sound pressure occurs. Secondly, the lateral medulla is hypertrophied at the level of the cochlear nucleus, forming a massive lateral lobe comprised of multipolar cells and granule cells. This lobe has connections with the auditory nerve and the cerebellum. Speculations are advanced about the functions of these structures in gliding behaviour and predator avoidance.
Seibold, Julia C; Nolden, Sophie; Oberem, Josefa; Fels, Janina; Koch, Iring
2018-06-01
In an auditory attention-switching paradigm, participants heard two simultaneously spoken number-words, each presented to one ear, and decided whether the target number was smaller or larger than 5 by pressing a left or right key. An instructional cue in each trial indicated which feature had to be used to identify the target number (e.g., female voice). Auditory attention-switch costs were found when this feature changed compared to when it repeated in two consecutive trials. Earlier studies employing this paradigm showed mixed results when they examined whether such cued auditory attention-switches can be prepared actively during the cue-stimulus interval. This study systematically assessed which preconditions are necessary for the advance preparation of auditory attention-switches. Three experiments were conducted that controlled for cue-repetition benefits, modality switches between cue and stimuli, as well as for predictability of the switch-sequence. Only in the third experiment, in which predictability for an attention-switch was maximal due to a pre-instructed switch-sequence and predictable stimulus onsets, active switch-specific preparation was found. These results suggest that the cognitive system can prepare auditory attention-switches, and this preparation seems to be triggered primarily by the memorised switching-sequence and valid expectations about the time of target onset.
Limb, Charles J; Molloy, Anne T; Jiradejvong, Patpong; Braun, Allen R
2010-03-01
Despite the significant advances in language perception for cochlear implant (CI) recipients, music perception continues to be a major challenge for implant-mediated listening. Our understanding of the neural mechanisms that underlie successful implant listening remains limited. To our knowledge, this study represents the first neuroimaging investigation of music perception in CI users, with the hypothesis that CI subjects would demonstrate greater auditory cortical activation than normal hearing controls. H2(15)O positron emission tomography (PET) was used here to assess auditory cortical activation patterns in ten postlingually deafened CI patients and ten normal hearing control subjects. Subjects were presented with language, melody, and rhythm tasks during scanning. Our results show significant auditory cortical activation in implant subjects in comparison to control subjects for language, melody, and rhythm. The greatest activity in CI users compared to controls was seen for language tasks, which is thought to reflect both implant and neural specializations for language processing. For musical stimuli, PET scanning revealed significantly greater activation during rhythm perception in CI subjects (compared to control subjects), and the least activation during melody perception, which was the most difficult task for CI users. These results may suggest a possible relationship between auditory performance and degree of auditory cortical activation in implant recipients that deserves further study.
ERIC Educational Resources Information Center
de Melo Roiz, Roberta; Azevedo Cacho, Enio Walker; Cliquet, Alberto, Jr.; Barasnevicius Quagliato, Elizabeth Maria Aparecida
2011-01-01
Idiopathic Parkinson's disease (IPD) has been defined as a chronic progressive neurological disorder with characteristics that generate changes in gait pattern. Several studies have reported that appropriate external influences, such as visual or auditory cues may improve the gait pattern of patients with IPD. Therefore, the objective of this…
Impact of Noise and Working Memory on Speech Processing in Adults with and without ADHD
ERIC Educational Resources Information Center
Michalek, Anne M. P.
2012-01-01
Auditory processing of speech is influenced by internal (i.e., attention, working memory) and external factors (i.e., background noise, visual information). This study examined the interplay among these factors in individuals with and without ADHD. All participants completed a listening in noise task, two working memory capacity tasks, and two…
Methodological challenges and solutions in auditory functional magnetic resonance imaging
Peelle, Jonathan E.
2014-01-01
Functional magnetic resonance imaging (fMRI) studies involve substantial acoustic noise. This review covers the difficulties posed by such noise for auditory neuroscience, as well as a number of possible solutions that have emerged. Acoustic noise can affect the processing of auditory stimuli by making them inaudible or unintelligible, and can result in reduced sensitivity to auditory activation in auditory cortex. Equally importantly, acoustic noise may also lead to increased listening effort, meaning that even when auditory stimuli are perceived, neural processing may differ from when the same stimuli are presented in quiet. These and other challenges have motivated a number of approaches for collecting auditory fMRI data. Although using a continuous echoplanar imaging (EPI) sequence provides high quality imaging data, these data may also be contaminated by background acoustic noise. Traditional sparse imaging has the advantage of avoiding acoustic noise during stimulus presentation, but at a cost of reduced temporal resolution. Recently, three classes of techniques have been developed to circumvent these limitations. The first is Interleaved Silent Steady State (ISSS) imaging, a variation of sparse imaging that involves collecting multiple volumes following a silent period while maintaining steady-state longitudinal magnetization. The second involves active noise control to limit the impact of acoustic scanner noise. Finally, novel MRI sequences that reduce the amount of acoustic noise produced during fMRI make the use of continuous scanning a more practical option. Together these advances provide unprecedented opportunities for researchers to collect high-quality data of hemodynamic responses to auditory stimuli using fMRI. PMID:25191218
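The timing logic behind sparse (and ISSS-like) acquisition described above can be illustrated with a simple schedule generator. The Python sketch below is only an illustration of that logic; the durations are placeholders, not recommended protocol values.

# Illustrative sketch of sparse-sampling timing: the stimulus is presented in a
# silent gap and image volumes are collected afterwards, so the scanner is quiet
# during stimulus delivery. All durations are assumed placeholder values.
def sparse_schedule(n_trials, silent_gap=7.0, acquisition=2.0, volumes_per_trial=1):
    """Return (time, event) pairs for a simple sparse (or ISSS-like) design."""
    events, t = [], 0.0
    for trial in range(n_trials):
        events.append((t, f"trial {trial}: stimulus onset (scanner silent)"))
        t += silent_gap
        for v in range(volumes_per_trial):   # >1 volume per trial approximates ISSS
            events.append((t, f"trial {trial}: acquire volume {v}"))
            t += acquisition
    return events

for time, label in sparse_schedule(n_trials=2, volumes_per_trial=3):
    print(f"{time:6.1f} s  {label}")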
Lopez, William Omar Contreras; Higuera, Carlos Andres Escalante; Fonoff, Erich Talamoni; Souza, Carolina de Oliveira; Albicker, Ulrich; Martinez, Jairo Alberto Espinoza
2014-10-01
Evidence supports the use of rhythmic external auditory signals to improve gait in PD patients (Arias & Cudeiro, 2008; Kenyon & Thaut, 2000; McIntosh, Rice & Thaut, 1994; McIntosh et al., 1997; Morris, Iansek, & Matyas, 1994; Thaut, McIntosh, & Rice, 1997; Suteerawattananon, Morris, Etnyre, Jankovic, & Protas, 2004; Willems, Nieuwboer, Chavert, & Desloovere, 2006). However, few prototypes are available for daily use, and to our knowledge, none utilize a smartphone application allowing individualized sounds and cadence. Therefore, we analyzed the effects on gait of Listenmee®, an intelligent glasses system with a portable auditory device, and present its smartphone application, the Listenmee app®, which offers over 100 different sounds and an adjustable metronome to individualize the cueing rate, as well as its smartwatch, whose accelerometer detects the magnitude and direction of acceleration and tracks calorie count, sleep patterns, step count and daily distances. The present study included patients with idiopathic PD who presented gait disturbances, including freezing. Auditory rhythmic cues were delivered through Listenmee®. Performance was analyzed in a motion and gait analysis laboratory. The results revealed significant improvements in gait performance across three major dependent variables: walking speed by 38.1%, cadence by 28.1% and stride length by 44.5%. Our findings suggest that auditory cueing through Listenmee® may significantly enhance gait performance. Further studies are needed to elucidate the potential role and maximize the benefits of these portable devices. Copyright © 2014 Elsevier B.V. All rights reserved.
Lalaki, Panagiota; Hatzopoulos, Stavros; Lorito, Guiscardo; Kochanek, Krzysztof; Sliwa, Lech; Skarzynski, Henryk
2011-07-01
Subjective tinnitus is an auditory perception that is not caused by external stimulation, its source being anywhere in the auditory system. Furthermore, evidence exists that exposure to noise alters cochlear micromechanics, either directly or through complex feed-back mechanisms, involving the medial olivocochlear efferent system. The aim of this study was to assess the role of the efferent auditory system in noise-induced tinnitus generation. Contralateral sound-activated suppression of TEOAEs was performed in a group of 28 subjects with noise-induced tinnitus (NIT) versus a group of 35 subjects with normal hearing and tinnitus, without any history of exposure to intense occupational or recreational noise (idiopathic tinnitus-IT). Thirty healthy, normally hearing volunteers were used as controls for the efferent suppression test. Suppression of the TEOAE amplitude less than 1 dB SPL was considered abnormal, giving a false positive rate of 6.7%. Eighteen out of 28 (64.3%) patients of the NIT group and 9 out of 35 (25.7%) patients of the IT group showed abnormal suppression values, which were significantly different from the controls' (p<0.0001 and p<0.045, respectively). The abnormal activity of the efferent auditory system in NIT cases might indicate that either the activity of the efferent fibers innervating the outer hair cells (OHCs) is impaired or that the damaged OHCs themselves respond abnormally to the efferent stimulation.
Auditory and visual interactions between the superior and inferior colliculi in the ferret.
Stitt, Iain; Galindo-Leon, Edgar; Pieper, Florian; Hollensteiner, Karl J; Engler, Gerhard; Engel, Andreas K
2015-05-01
The integration of visual and auditory spatial information is important for building an accurate perception of the external world, but the fundamental mechanisms governing such audiovisual interaction have only partially been resolved. The earliest interface between auditory and visual processing pathways is in the midbrain, where the superior (SC) and inferior colliculi (IC) are reciprocally connected in an audiovisual loop. Here, we investigate the mechanisms of audiovisual interaction in the midbrain by recording neural signals from the SC and IC simultaneously in anesthetized ferrets. Visual stimuli reliably produced band-limited phase locking of IC local field potentials (LFPs) in two distinct frequency bands: 6-10 and 15-30 Hz. These visual LFP responses co-localized with robust auditory responses that were characteristic of the IC. Imaginary coherence analysis confirmed that visual responses in the IC were not volume-conducted signals from the neighboring SC. Visual responses in the IC occurred later than retinally driven superficial SC layers and earlier than deep SC layers that receive indirect visual inputs, suggesting that retinal inputs do not drive visually evoked responses in the IC. In addition, SC and IC recording sites with overlapping visual spatial receptive fields displayed stronger functional connectivity than sites with separate receptive fields, indicating that visual spatial maps are aligned across both midbrain structures. Reciprocal coupling between the IC and SC therefore probably serves the dynamic integration of visual and auditory representations of space. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
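The imaginary coherence analysis mentioned above exploits the fact that volume-conducted (zero-lag) coupling is purely real, so only the imaginary part of coherency reflects lagged interaction. The Python sketch below is a generic implementation of that idea; the segment length and the toy signals are assumptions, not the recording parameters of the study.

# Sketch of imaginary coherence: zero-lag (volume-conducted) coupling is purely
# real, so the imaginary part of coherency indexes time-lagged interaction.
import numpy as np
from scipy.signal import csd, welch

def imaginary_coherence(x, y, fs, nperseg=256):
    f, sxy = csd(x, y, fs=fs, nperseg=nperseg)
    _, sxx = welch(x, fs=fs, nperseg=nperseg)
    _, syy = welch(y, fs=fs, nperseg=nperseg)
    coherency = sxy / np.sqrt(sxx * syy)
    return f, np.abs(coherency.imag)

# Toy LFP-like signals sharing a 20 Hz rhythm with a 10 ms lag, so the imaginary
# part of coherency should peak near 20 Hz.
fs, n = 1000, 10000
rng = np.random.default_rng(1)
s = np.sin(2 * np.pi * 20 * np.arange(n) / fs)
x = s + rng.normal(0, 1, n)
y = np.roll(s, 10) + rng.normal(0, 1, n)
f, icoh = imaginary_coherence(x, y, fs)
print(f[np.argmax(icoh)])   # frequency of the peak, near 20 Hz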
Hedwig, Berthold
2014-01-01
Crickets carry wind-sensitive mechanoreceptors on their cerci, which, in response to the airflow produced by approaching predators, trigger escape reactions via ascending giant interneurons (GIs). Males also activate their cercal system by air currents generated due to the wing movements underlying sound production. Singing males still respond to external wind stimulation, but are not startled by the self-generated airflow. To investigate how the nervous system discriminates sensory responses to self-generated and external airflow, we intracellularly recorded wind-sensitive afferents and ventral GIs of the cercal escape pathway in fictively singing crickets, a situation lacking any self-stimulation. GI spiking was reduced whenever cercal wind stimulation coincided with singing motor activity. The axonal terminals of cercal afferents showed no indication of presynaptic inhibition during singing. In two ventral GIs, however, a corollary discharge inhibition occurred strictly in phase with the singing motor pattern. Paired intracellular recordings revealed that this inhibition was not mediated by the activity of the previously identified corollary discharge interneuron (CDI) that rhythmically inhibits the auditory pathway during singing. Cercal wind stimulation, however, reduced the spike activity of this CDI by postsynaptic inhibition. Our study reveals how precisely timed corollary discharge inhibition of ventral GIs can prevent self-generated airflow from triggering inadvertent escape responses in singing crickets. The results indicate that the responsiveness of the auditory and wind-sensitive pathway is modulated by distinct CDIs in singing crickets and that the corollary discharge inhibition in the auditory pathway can be attenuated by cercal wind stimulation. PMID:25318763
Brennan, J F; Jastreboff, P J
1991-01-01
Tonal frequency generalization was examined in a total of 114 pigmented male rats, 60 of which were tested under the influence of salicylate-induced phantom auditory perception, introduced before or after lick suppression training. Thirty control subjects received saline injections, and the remaining 24 subjects served as noninjected controls for tonal background effects on generalization. Rats were continuously exposed to background noise alone or with a superimposed tone. Offset of background noise alone (Experiment I), or combined with onset or continuation of the tone (Experiments II and III), served as the conditioned stimulus (CS). In Experiment I, tone presentations were introduced only after suppression training. Depending on the time of salicylate introduction, a strong and differential influence on generalization gradients was observed, which is consistent with subjects' detection of a salicylate-induced, high-pitched sound. Moreover, when either 12- or 3-kHz tones were introduced before or after Pavlovian training to mimic salicylate effects in 24 rats, the distortions in generalization gradients resembled trends obtained from the respective salicylate-injected groups. Experiments II and III were aimed at evaluating the masking effect of salicylate-induced phantom auditory perception on external sounds, with a 5- or a 10-kHz tone imposed continuously on the noise or presented only during the CS. Tests of tonal generalization to frequencies ranging from 4 to 11 kHz showed that in this experimental context salicylate-induced perception did not interfere with the dominant influence of external tones, a result that further strengthens the conclusion of Experiment I.
Articulatory movements modulate auditory responses to speech
Agnew, Z.K.; McGettigan, C.; Banks, B.; Scott, S.K.
2013-01-01
Production of actions is highly dependent on concurrent sensory information. In speech production, for example, movement of the articulators is guided by both auditory and somatosensory input. It has been demonstrated in non-human primates that self-produced vocalizations and those of others are differentially processed in the temporal cortex. The aim of the current study was to investigate how auditory and motor responses differ for self-produced and externally produced speech. Using functional neuroimaging, subjects were asked to produce sentences aloud, to silently mouth while listening to a different speaker producing the same sentence, to passively listen to sentences being read aloud, or to read sentences silently. We show that separate regions of the superior temporal cortex display distinct response profiles to speaking aloud, mouthing while listening, and passive listening. Responses in anterior superior temporal cortices in both hemispheres are greater for passive listening compared with both mouthing while listening and speaking aloud. This is the first demonstration that articulation, whether or not it has auditory consequences, modulates responses of the dorsolateral temporal cortex. In contrast, posterior regions of the superior temporal cortex are recruited during both articulation conditions. In dorsal regions of the posterior superior temporal gyrus, responses to mouthing and reading aloud were equivalent, and in more ventral posterior superior temporal sulcus, responses were greater for reading aloud compared with mouthing while listening. These data demonstrate an anterior–posterior division of superior temporal regions where anterior fields are suppressed during motor output, potentially for the purpose of enhanced detection of the speech of others. We suggest posterior fields are engaged in auditory processing for the guidance of articulation by auditory information. PMID:22982103
Enhanced auditory spatial localization in blind echolocators.
Vercillo, Tiziana; Milne, Jennifer L; Gori, Monica; Goodale, Melvyn A
2015-01-01
Echolocation is the extraordinary ability to represent the external environment by using reflected sound waves from self-generated auditory pulses. Blind human expert echolocators show extremely precise spatial acuity and high accuracy in determining the shape and motion of objects by using echoes. In the current study, we investigated whether or not the use of echolocation would improve the representation of auditory space, which is severely compromised in congenitally blind individuals (Gori et al., 2014). The performance of three blind expert echolocators was compared to that of 6 blind non-echolocators and 11 sighted participants. Two tasks were performed: (1) a space bisection task in which participants judged whether the second of a sequence of three sounds was closer in space to the first or the third sound and (2) a minimum audible angle task in which participants reported which of two sounds presented successively was located more to the right. The blind non-echolocating group showed a severe impairment only in the space bisection task compared to the sighted group. Remarkably, the three blind expert echolocators performed both spatial tasks with similar or even better precision and accuracy than the sighted group. These results suggest that echolocation may improve the general sense of auditory space, most likely through a process of sensory calibration. Copyright © 2014 Elsevier Ltd. All rights reserved.
Giovannelli, Fabio; Innocenti, Iglis; Rossi, Simone; Borgheresi, Alessandra; Ragazzoni, Aldo; Zaccara, Gaetano; Viggiano, Maria Pia; Cincotta, Massimo
2014-04-01
Synchronization of body movements to an external beat is a universal human ability, which has also been recently documented in nonhuman species. The neural substrates of this rhythmic motor entrainment are still under investigation. Correlational neuroimaging data suggest an involvement of the dorsal premotor cortex (dPMC) and the supplementary motor area (SMA). In 14 healthy volunteers, we more specifically investigated the neural network underlying this phenomenon using a causal approach by an established 1-Hz repetitive transcranial magnetic stimulation (rTMS) protocol, which produces a focal suppression of cortical excitability outlasting the stimulation period. Synchronization accuracy between rhythmic cues and right index finger tapping, as measured by the mean time lag (asynchrony) between motor and auditory events, was significantly affected when the right dPMC function was transiently perturbed by "off-line" focal rTMS, whereas the reproduction of the rhythmic sequence per se (inter-tap-interval) was spared. This approach affected metrical rhythms of different complexity, but not non-metrical or isochronous sequences. Conversely, no change in auditory-motor synchronization was observed with rTMS of the SMA, of the left dPMC or over a control site (midline occipital area). Our data strongly support the view that the right dPMC is crucial for rhythmic auditory-motor synchronization in humans.
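The two behavioural measures described above, mean asynchrony between taps and auditory events and the inter-tap interval, can be computed as in the following sketch; the tap and cue times are invented for illustration and the pairing rule (nearest cue per tap) is an assumption.

# Small sketch of mean asynchrony (tap time minus cue onset, negative values
# meaning anticipatory tapping) and mean inter-tap interval. Data are invented.
import numpy as np

def synchronization_measures(tap_times, cue_times):
    taps = np.asarray(tap_times, dtype=float)
    cues = np.asarray(cue_times, dtype=float)
    # Pair each tap with its nearest cue, then average the signed lag.
    nearest = cues[np.abs(taps[:, None] - cues[None, :]).argmin(axis=1)]
    asynchrony = float(np.mean(taps - nearest))
    inter_tap_interval = float(np.mean(np.diff(taps)))
    return asynchrony, inter_tap_interval

cues = np.arange(0.0, 6.0, 0.6)                          # isochronous cues every 600 ms
taps = cues + np.random.normal(-0.03, 0.02, cues.size)   # taps ~30 ms ahead of the beat
asyn, iti = synchronization_measures(taps, cues)
print(round(asyn * 1000), "ms asynchrony;", round(iti * 1000), "ms mean inter-tap interval")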
Musical hallucination associated with hearing loss.
Sanchez, Tanit Ganz; Rocha, Savya Cybelle Milhomem; Knobel, Keila Alessandra Baraldi; Kii, Márcia Akemi; Santos, Rosa Maria Rodrigues dos; Pereira, Cristiana Borges
2011-01-01
In spite of the fact that musical hallucinations have a significant impact on patients' lives, they have received very little attention from experts. Some researchers agree on a combination of peripheral and central dysfunctions as the mechanism that causes hallucination. The most accepted physiopathology of musical hallucination associated with hearing loss (caused by cochlear lesion, cochlear nerve lesion, or by interruption of mesencephalic or pontine auditory information) is the disinhibition of auditory memory circuits due to sensory deprivation. Concerning the cortical area involved in musical hallucination, there is evidence that an excitatory mechanism of the superior temporal gyrus, as in epilepsies, is responsible for musical hallucination. In musical release hallucination there is also activation of the auditory association cortex. Finally, considering laterality, functional studies of musical perception and imagery in normal individuals showed that songs with words cause bilateral temporal activation, whereas melodies activate only the right lobe. The effect of hearing aids on the improvement of musical hallucination as a result of improved hearing is well documented; this occurs because auditory hallucinations may be influenced by the external acoustic environment. Neuroleptics, antidepressants and anticonvulsants have been used in the treatment of musical hallucination. Cases of improvement with the administration of carbamazepine, moclobemide and donepezil have been reported, but the results obtained were not consistent.
Technological, biological, and acoustical constraints to music perception in cochlear implant users.
Limb, Charles J; Roy, Alexis T
2014-02-01
Despite advances in technology, the ability to perceive music remains limited for many cochlear implant users. This paper reviews the technological, biological, and acoustical constraints that make music an especially challenging stimulus for cochlear implant users, while highlighting recent research efforts to overcome these shortcomings. The limitations of cochlear implant devices, which have been optimized for speech comprehension, become evident when applied to music, particularly with regards to inadequate spectral, fine-temporal, and dynamic range representation. Beyond the impoverished information transmitted by the device itself, both peripheral and central auditory nervous system deficits are seen in the presence of sensorineural hearing loss, such as auditory nerve degeneration and abnormal auditory cortex activation. These technological and biological constraints to effective music perception are further compounded by the complexity of the acoustical features of music itself that require the perceptual integration of varying rhythmic, melodic, harmonic, and timbral elements of sound. Cochlear implant users not only have difficulty perceiving spectral components individually (leading to fundamental disruptions in perception of pitch, melody, and harmony) but also display deficits with higher perceptual integration tasks required for music perception, such as auditory stream segregation. Despite these current limitations, focused musical training programs, new assessment methods, and improvements in the representation and transmission of the complex acoustical features of music through technological innovation offer the potential for significant advancements in cochlear implant-mediated music perception. Copyright © 2013 Elsevier B.V. All rights reserved.
The organization and reorganization of audiovisual speech perception in the first year of life.
Danielson, D Kyle; Bruderer, Alison G; Kandhadai, Padmapriya; Vatikiotis-Bateson, Eric; Werker, Janet F
2017-04-01
The period between six and 12 months is a sensitive period for language learning during which infants undergo auditory perceptual attunement, and recent results indicate that this sensitive period may exist across sensory modalities. We tested infants at three stages of perceptual attunement (six, nine, and 11 months) to determine 1) whether they were sensitive to the congruence between heard and seen speech stimuli in an unfamiliar language, and 2) whether familiarization with congruent audiovisual speech could boost subsequent non-native auditory discrimination. Infants at six- and nine-, but not 11-months, detected audiovisual congruence of non-native syllables. Familiarization to incongruent, but not congruent, audiovisual speech changed auditory discrimination at test for six-month-olds but not nine- or 11-month-olds. These results advance the proposal that speech perception is audiovisual from early in ontogeny, and that the sensitive period for audiovisual speech perception may last somewhat longer than that for auditory perception alone.
Meas, Steven J.; Zhang, Chun-Li; Dabdoub, Alain
2018-01-01
Disabling hearing loss affects over 5% of the world’s population and impacts the lives of individuals from all age groups. Within the next three decades, the worldwide incidence of hearing impairment is expected to double. Since a leading cause of hearing loss is the degeneration of primary auditory neurons (PANs), the sensory neurons of the auditory system that receive input from mechanosensory hair cells in the cochlea, it may be possible to restore hearing by regenerating PANs. A direct reprogramming approach can be used to convert the resident spiral ganglion glial cells into induced neurons to restore hearing. This review summarizes recent advances in reprogramming glia in the CNS to suggest future steps for regenerating the peripheral auditory system. In the coming years, direct reprogramming of spiral ganglion glial cells has the potential to become one of the leading biological strategies to treat hearing impairment. PMID:29593497
The role of reverberation-related binaural cues in the externalization of speech.
Catic, Jasmina; Santurette, Sébastien; Dau, Torsten
2015-08-01
The perception of externalization of speech sounds was investigated with respect to the monaural and binaural cues available at the listeners' ears in a reverberant environment. Individualized binaural room impulse responses (BRIRs) were used to simulate externalized sound sources via headphones. The measured BRIRs were subsequently modified such that the proportion of the response containing binaural vs monaural information was varied. Normal-hearing listeners were presented with speech sounds convolved with such modified BRIRs. Monaural reverberation cues were found to be sufficient for the externalization of a lateral sound source. In contrast, for a frontal source, an increased amount of binaural cues from reflections was required in order to obtain well externalized sound images. It was demonstrated that the interaction between the interaural cues of the direct sound and the reverberation strongly affects the perception of externalization. An analysis of the short-term binaural cues showed that the amount of fluctuations of the binaural cues corresponded well to the externalization ratings obtained in the listening tests. The results further suggested that the precedence effect is involved in the auditory processing of the dynamic binaural cues that are utilized for externalization perception.
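Headphone rendering of an externalized source, as described above, amounts to convolving the anechoic speech signal with the left- and right-ear BRIRs. Below is a minimal sketch of that rendering step; the normalisation is an assumption for illustration and not a detail taken from the study.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(speech, brir_left, brir_right):
    """Convolve a mono speech signal with a binaural room impulse response.

    speech: 1-D array (anechoic signal); brir_left/brir_right: 1-D impulse responses.
    Returns an (N, 2) stereo array for headphone presentation.
    """
    left = fftconvolve(speech, brir_left)
    right = fftconvolve(speech, brir_right)
    n = max(left.size, right.size)
    out = np.zeros((n, 2))
    out[:left.size, 0] = left
    out[:right.size, 1] = right
    # Normalise to avoid clipping; the study's actual level calibration is not known here.
    return out / np.max(np.abs(out))
```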
Badakh, Dinesh K; Grover, Amit H
2014-01-01
The purpose of this study was to analyze the impact of intra-cavitary brachytherapy (ICBT) as boost radiation after external beam radiotherapy (EBRT) in carcinoma of the external auditory canal and middle ear (EACMA) in a retrospective analysis. A retrospective study of 18 patients with carcinoma of the EACMA who were treated with curative intent from 1998 to 2010 was carried out. The age of the patients ranged from 25 to 67 years. There were 11 male patients (61.1%) and 7 female patients (38.9%). A total of 15 (88.2%) patients were treated with curative radiation alone after a biopsy, and two patients received post-operative radiation therapy. The patients were initially treated with EBRT on a cobalt-60 machine up to 60-64 Gy. In our department, all patients who were technically suitable for ICBT received an ICBT boost. The overall survival (OS) in these patients ranged from 7 months to 151 months (9 out of 17 patients with no evidence of disease, 53%). The OS in patients treated with a combination of EBRT and ICBT was 72.7% (8 out of 11), a statistically significant difference (P = 0.0024). The multivariate analysis showed a statistically significant difference only for patients who received an ICBT boost (P = 0.020). ICBT as a boost after EBRT has a positive impact on OS. In conclusion, our results demonstrate that radical radiation therapy (EBRT and ICBT) is the treatment of choice for stage T2 carcinoma of the EACMA.
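Overall-survival figures of the kind reported above are conventionally obtained with the Kaplan-Meier product-limit estimator. The sketch below shows that calculation on invented follow-up times; it is not the study's data or analysis code.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier overall-survival estimate.

    times: follow-up in months; events: 1 = death observed, 0 = censored.
    Returns a list of (event time, survival probability) pairs.
    """
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    surv, s = [], 1.0
    for t in np.unique(times[events == 1]):
        n_at_risk = np.sum(times >= t)                 # patients still under observation
        d = np.sum((times == t) & (events == 1))       # deaths at this time point
        s *= 1 - d / n_at_risk
        surv.append((float(t), s))
    return surv

# Hypothetical follow-up (months) for an EBRT + ICBT group; not the study's data.
print(kaplan_meier([151, 120, 96, 80, 60, 44, 30, 24, 18, 12, 9],
                   [0,   0,   0,  0,  0,  1,  0,  0,  1,  1,  1]))
```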
Nakashima, Ann; Farinaccio, Rocco
2015-04-01
Noise-induced hearing loss resulting from weapon noise exposure has been studied for decades. A summary of recent work in weapon noise signal analysis, current knowledge of hearing damage risk criteria, and auditory performance in impulse noise is presented. Most of the currently used damage risk criteria are based on data that cannot be replicated or verified. There is a need to address the effects of combined noise exposures, from similar or different weapons and continuous background noise, in future noise exposure regulations. Advancements in hearing protection technology have expanded the options available to soldiers. Individual selection of hearing protection devices that are best suited to the type of exposure, the auditory task requirements, and hearing status of the user could help to facilitate their use. However, hearing protection devices affect auditory performance, which in turn affects situational awareness in the field. This includes communication capability and the localization and identification of threats. Laboratory training using high-fidelity weapon noise recordings has the potential to improve the auditory performance of soldiers in the field, providing a low-cost tool to enhance readiness for combat. Reprint & Copyright © 2015 Association of Military Surgeons of the U.S.
Gene therapy in the inner ear using adenovirus vectors.
Husseman, Jacob; Raphael, Yehoash
2009-01-01
Therapies for the protection and regeneration of auditory hair cells are of great interest given the significant monetary and lifestyle impact of hearing loss. The past decade has seen tremendous advances in the use of adenoviral vectors to achieve these aims. Preliminary data demonstrated the functional capacity of this technique as adenoviral-induced expression of neurotrophic and growth factors protected hair cells and spiral ganglion neurons from ototoxic insults. Subsequent efforts confirmed the feasibility of adenoviral transfection of cells in the auditory neuroepithelium via cochleostomy into the scala media. Most recently, efforts have focused on regeneration of depleted hair cells. Mammalian hearing loss is generally considered a permanent insult as the auditory epithelium lacks a basal layer capable of producing new hair cells. Recently, the transcription factor Atoh1 has been found to play a critical role in hair cell differentiation. Adenoviral-mediated overexpression of Atoh1 in culture and in vivo have shown the ability to regenerate auditory and vestibular hair cells by causing transdifferentiation of neighboring epithelial-supporting cells. Functional recovery of both the auditory and vestibular systems has been documented following adenoviral induced Atoh1 overexpression. Copyright (c) 2009 S. Karger AG, Basel.
ERIC Educational Resources Information Center
Teng, Santani; Whitney, David
2011-01-01
Echolocation is a specialized application of spatial hearing that uses reflected auditory information to localize objects and represent the external environment. Although it has been documented extensively in nonhuman species, such as bats and dolphins, its use by some persons who are blind as a navigation and object-identification aid has…
Nakao, Yoshio; Tanigawa, Tohru; Murotani, Kenta; Yamashita, Jun-Ichi
2017-11-01
Otolaryngologists occasionally observe foreign bodies (FB) in the external auditory canal (EAC), although relatively few studies have focused on the role of age in this condition. We retrospectively compared the incidences, outcomes and complications of FB in the EAC in different age groups. The patients at our center included 24 children (19%), 46 adults (37%) and 56 older adults (44%). Compared with adults, older adults were significantly more likely to have FB (peak age 75-79 years), be women (18/46 vs 34/56, P = 0.0461) and be unaware of their FB (18/46 vs 34/56, P = 0.0461). We observed that all EAC FB were more common during the summer, and biotic FB were not observed during the winter. Complications were more common in cases of biotic FB, compared with abiotic FB (5/27 vs 6/99, P = 0.0421). Our findings show that older adults are particularly susceptible to FB, are frequently unaware of their FB and can develop complications. These characteristics should be considered before treating FB in the EAC. Geriatr Gerontol Int 2017; 17: 2131-2135. © 2017 Japan Geriatrics Society.
Abdel-Aziz, Mosaad
2013-07-01
Congenital aural atresia is a spectrum of ear deformities present at birth that involves some degree of failure of the development of the external auditory canal. This malformation may be associated with other congenital anomalies; it occurs as a result of abnormal development of the first and second branchial arches and the first branchial cleft and most often occurs sporadically, although the disease may be manifested in different syndromes. Congenital aural atresia is considered one of the most difficult and challenging surgeries for the otologic surgeon. The goals of atresia surgery are to restore functional hearing, preferably without the requirement of a hearing aid, and to reconstruct a patent, infection-free external auditory canal. The repair is usually done at the age of 6 years, so children with bilateral atresia may need hearing amplification in the first few weeks of life until the age at surgery. To optimize the surgical outcome, careful audiological and radiological evaluation of the patient should be performed preoperatively. Also, postoperative frequent packing and regular follow-up are mandatory to avoid restenosis and infection of the newly created canal. With careful intraoperative dissection and regular follow-up, complications of surgery can be avoided.
2017-01-01
Objective To review reports of adenoid cystic carcinomas arising in the head and neck area outside of the major salivary glands, in order to enhance the care of patients with these unusual neoplasms. Methods An international team of head and neck surgeons, pathologists, oncologists and radiation oncologists was assembled to explore the published experience and their own working experience of the diagnosis and treatment of adenoid cystic carcinomas arising in the vicinity of the sinonasal tract, nasopharynx, lacrimal glands and external auditory canal. Results The behaviour of adenoid cystic carcinoma arising in head and neck sites exclusive of the major salivary glands parallels that of tumours with a similar histology arising in the major salivary glands – these are relentless, progressive tumours, associated with high rates of mortality. Of 774 patients reviewed, at least 41 (5.3 per cent) developed documented regional node metastases. Conclusion The relatively low overall incidence of nodal metastases in adenoid cystic carcinomas arising in the head and neck region outside of the major salivary glands suggests that routine elective regional lymph node dissection might not be indicated in most patients with these tumours. PMID:27839526
[Anatomic foundation of the lateral portal for radiotherapy of nasopharyngeal cancer (NPC)].
Wei, B Q; Feng, P B; Li, J Z
1987-05-01
Based on 31 normal skulls, the lateral projections of some points relative to the bony structures near the nasopharynx were located under the simulator and then drawn on a sheet of paper with the aid of geometry and trigonometry. Thus, the relation between external and internal structures is shown on the drawn projection, which can serve as the anatomic basis for designing the routine field and improving radiotherapy technique. In light of the data from this study and the clinical experience of the authors and others, it was found logical, in radiotherapy of NPC, that large opposing lateral pre-auriculo-cervical portals with their posterior margin extending beyond the external auditory meatus posteriorly be used in order to avoid a geographic miss of the uppermost deep cervical lymph nodes usually involved beneath the jugular foramen and the posterior portion of the nasopharynx. In addition, the upper margin of the lateral portal must be parallel but superior to the cantho-auditory line, on which the foramen ovale is projected. The actual location of the upper margin should depend on the extent of intracranial invasion of the tumor as shown by the CT scan.
Beitel, Ralph E.; Schreiner, Christoph E.; Leake, Patricia A.
2016-01-01
In profoundly deaf cats, behavioral training with intracochlear electric stimulation (ICES) can improve temporal processing in the primary auditory cortex (AI). To investigate whether similar effects are manifest in the auditory midbrain, ICES was initiated in neonatally deafened cats either during development after short durations of deafness (8 wk of age) or in adulthood after long durations of deafness (≥3.5 yr). All of these animals received behaviorally meaningless, “passive” ICES. Some animals also received behavioral training with ICES. Two long-deaf cats received no ICES prior to acute electrophysiological recording. After several months of passive ICES and behavioral training, animals were anesthetized, and neuronal responses to pulse trains of increasing rates were recorded in the central (ICC) and external (ICX) nuclei of the inferior colliculus. Neuronal temporal response patterns (repetition rate coding, minimum latencies, response precision) were compared with results from recordings made in the AI of the same animals (Beitel RE, Vollmer M, Raggio MW, Schreiner CE. J Neurophysiol 106: 944–959, 2011; Vollmer M, Beitel RE. J Neurophysiol 106: 2423–2436, 2011). Passive ICES in long-deaf cats remediated severely degraded temporal processing in the ICC and had no effects in the ICX. In contrast to observations in the AI, behaviorally relevant ICES had no effects on temporal processing in the ICC or ICX, with the single exception of shorter latencies in the ICC in short-deaf cats. The results suggest that independent of deafness duration passive stimulation and behavioral training differentially transform temporal processing in auditory midbrain and cortex, and primary auditory cortex emerges as a pivotal site for behaviorally driven neuronal temporal plasticity in the deaf cat. NEW & NOTEWORTHY Behaviorally relevant vs. passive electric stimulation of the auditory nerve differentially affects neuronal temporal processing in the central nucleus of the inferior colliculus (ICC) and the primary auditory cortex (AI) in profoundly short-deaf and long-deaf cats. Temporal plasticity in the ICC depends on a critical amount of electric stimulation, independent of its behavioral relevance. In contrast, the AI emerges as a pivotal site for behaviorally driven neuronal temporal plasticity in the deaf auditory system. PMID:27733594
Petranovich, Christine L; Walz, Nicolay Chertkoff; Staat, Mary Allen; Chiu, Chung-Yiu Peter; Wade, Shari L
2015-01-01
The aim of this study was to investigate the association of neurocognitive functioning with internalizing and externalizing problems and school and social competence in children adopted internationally. Participants included girls between the ages of 6-12 years who were internationally adopted from China (n = 32) or Eastern Europe (n = 25) and a control group of never-adopted girls (n = 25). Children completed the Vocabulary and Matrix Reasoning subtests from the Wechsler Abbreviated Scale of Intelligence and the Score! and Sky Search subtests from the Test of Everyday Attention for Children. Parents completed the Child Behavior Checklist and the Home and Community Social Behavior Scales. Compared to the controls, the Eastern European group evidenced significantly more problems with externalizing behaviors and school and social competence and poorer performance on measures of verbal intelligence, perceptual reasoning, and auditory attention. More internalizing problems were reported in the Chinese group compared to the controls. Using generalized linear regression, interaction terms were examined to determine whether the associations of neurocognitive functioning with behavior varied across groups. Eastern European group status was associated with more externalizing problems and poorer school and social competence, irrespective of neurocognitive test performance. In the Chinese group, poorer auditory attention was associated with more problems with social competence. Neurocognitive functioning may be related to behavior in children adopted internationally. Knowledge about neurocognitive functioning may further our understanding of the impact of early institutionalization on post-adoption behavior.
Dy, Alexander Edward S; Lapeña, José Florencio F
2018-04-01
To investigate associations between age, external auditory canal (EAC) dimensions, and cerumen retention/impaction among persons with Down syndrome (DS). This cross-sectional study evaluated EAC dimensions, cerumen retention/impaction, and middle ear status with pneumatoscopy after extraction in 130 persons with DS. Descriptive and inferential statistics correlated age, presence of impacted/retained cerumen, and EAC diameter. Of 260 ears in 67 males and 63 females with average age of 9.48 years, 72.3% (188) had EAC of ≤4 mm. Those ≤1 year were 4.97 times more likely to have cerumen problems than those >1 year (95% CI, 1.45-17.02, P = .011). The odds of having cerumen problems with an EAC diameter of ≤4 mm were 3.31 times higher than with a diameter of 5 mm (95% CI, 1.46-7.50, P = .004), and odds of having cerumen impaction were as much as 6.19 times higher (95% CI, 2.38-16.08, P < .001). Male gender and low-lying external ear were also associated with increased odds of cerumen problems. There is a high prevalence of cerumen retention/impaction in persons with DS compared to the general Philippine population and a higher prevalence rate for EAC stenosis than elsewhere. A canal diameter of 4 mm and below and age 1 year or less are associated with a significantly higher likelihood of cerumen retention/impaction.
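The odds ratios and 95% confidence intervals quoted above follow the standard 2x2-table calculation (OR = ad/bc with a Wald interval on the log scale). The sketch below only illustrates that arithmetic; the counts are placeholders, not the study's raw data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
        a = exposed with outcome,   b = exposed without outcome,
        c = unexposed with outcome, d = unexposed without outcome.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

# Hypothetical counts: cerumen impaction by EAC diameter (<=4 mm vs >4 mm).
print(odds_ratio_ci(a=60, b=128, c=8, d=64))
```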
Zhang, Kaidi D.; Coate, Thomas M.
2016-01-01
In hearing, mechanically sensitive hair cells (HCs) in the cochlea release glutamate onto spiral ganglion neurons (SGNs) to relay auditory information to the central nervous system (CNS). There are two main SGN subtypes, which differ in morphology, number, synaptic targets, innervation patterns and firing properties. About 90-95% of SGNs are the type I SGNs, which make a single bouton connection with inner hair cells (IHCs) and have been well described in the canonical auditory pathway for sound detection. However, less attention has been given to the type II SGNs, which exclusively innervate outer hair cells (OHCs). In this review, we emphasize recent advances in the molecular mechanisms that control how type II SGNs develop and form connections with OHCs, and exciting new insights into the function of type II SGNs. PMID:27760385
A neural network model of ventriloquism effect and aftereffect.
Magosso, Elisa; Cuppini, Cristiano; Ursino, Mauro
2012-01-01
Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound towards the visual input, the ventriloquism effect. General explanation is that vision tends to dominate over audition because of its higher spatial reliability. The underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. Main results are: i) the less localized stimulus is strongly biased toward the most localized stimulus and not vice versa; ii) amount of the ventriloquism effect changes with visual-auditory spatial disparity; iii) ventriloquism is a robust behavior of the network with respect to parameter value changes. Moreover, the model implements Hebbian rules for potentiation and depression of lateral synapses, to explain ventriloquism aftereffect (that is, the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in-vivo observations. The model demonstrates that two unimodal layers reciprocally interconnected may explain ventriloquism effect and aftereffect, even without the presence of any convergent multimodal area. The proposed study may provide advancement in understanding neural architecture and mechanisms at the basis of visual-auditory integration in the spatial realm.
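The core mechanism invoked above, broadly tuned auditory neurons receiving topographic excitation from sharply tuned visual neurons, can be caricatured in a few lines. The sketch below is a toy illustration of that single idea, not the authors' network: it omits the lateral intra-layer synapses, the recurrent feedback, and the Hebbian plasticity that produces the aftereffect, and every parameter value is invented.

```python
import numpy as np

# Preferred azimuths (deg) of the neurons in each unimodal layer.
pos = np.linspace(-60, 60, 241)

def tuning(center, sigma):
    """Population activity of a layer of Gaussian-tuned neurons to a point stimulus."""
    return np.exp(-(pos - center) ** 2 / (2 * sigma ** 2))

sigma_aud, sigma_vis = 15.0, 3.0      # auditory tuning far broader than visual
aud_loc, vis_loc = 0.0, 10.0          # spatially discrepant audio-visual stimuli
w_cross = 0.8                          # strength of visual -> auditory coupling (assumed)

# Auditory layer response: its own broad input plus topographic excitation
# from the sharply tuned visual layer at the visual stimulus location.
aud_activity = tuning(aud_loc, sigma_aud) + w_cross * tuning(vis_loc, sigma_vis)

# Decode the perceived sound position as the population centre of mass:
# the estimate is pulled toward the visual location (ventriloquism shift).
perceived = np.sum(pos * aud_activity) / np.sum(aud_activity)
print(f"sound at {aud_loc} deg, flash at {vis_loc} deg -> sound heard near {perceived:.1f} deg")
```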
On the Perceptual Subprocess of Absolute Pitch.
Kim, Seung-Goo; Knösche, Thomas R
2017-01-01
Absolute pitch (AP) is the rare ability of musicians to identify the pitch of tonal sound without external reference. While there have been behavioral and neuroimaging studies on the characteristics of AP, how AP is implemented in human brains remains largely unknown. AP can be viewed as comprising two subprocesses: perceptual (processing auditory input to extract a pitch chroma) and associative (linking an auditory representation of pitch chroma with a verbal/non-verbal label). In this review, we focus on the nature of the perceptual subprocess of AP. Two different models of how the perceptual subprocess works have been proposed: either via absolute pitch categorization (APC) or based on absolute pitch memory (APM). A major distinction between the two views is whether AP uses unique auditory processing (i.e., APC) that exists only in musicians with AP or whether it is rooted in a common phenomenon (i.e., APM), only with heightened efficiency. We review relevant behavioral and neuroimaging evidence that supports each notion. Lastly, we list open questions and potential ideas to address them.
Jain, Chandni; Sahoo, Jitesh Prasad
Tinnitus is the perception of a sound without an external source. It can affect auditory perception abilities in individuals with normal hearing sensitivity. The aim of the study was to determine the effect of tinnitus on psychoacoustic abilities in individuals with normal hearing sensitivity. The study was conducted on twenty subjects with tinnitus and twenty subjects without tinnitus. The tinnitus group was further divided into mild and moderate tinnitus based on the tinnitus handicap inventory. Differential limen of intensity, differential limen of frequency, gap detection and modulation detection thresholds were measured using the mlp toolbox in Matlab, and speech-in-noise testing was done with the Quick SIN in Kannada. Results of the study showed that the clinical group performed poorly in all the tests except for differential limen of intensity. Tinnitus affects aspects of auditory perception such as temporal resolution, speech perception in noise and frequency discrimination in individuals with normal hearing. This could be due to subtle changes in the central auditory system which are not reflected in the pure tone audiogram.
Phantom auditory perception (tinnitus): mechanisms of generation and perception.
Jastreboff, P J
1990-08-01
Phantom auditory perception--tinnitus--is a symptom of many pathologies. Although there are a number of theories postulating certain mechanisms of its generation, none have been proven yet. This paper analyses the phenomenon of tinnitus from the point of view of general neurophysiology. Existing theories and their extrapolation are presented, together with some new potential mechanisms of tinnitus generation, encompassing the involvement of calcium and calcium channels in cochlear function, with implications for malfunction and aging of the auditory and vestibular systems. It is hypothesized that most tinnitus results from the perception of abnormal activity, defined as activity which cannot be induced by any combination of external sounds. Moreover, it is hypothesized that signal recognition and classification circuits, working on holographic or neuronal network-like representation, are involved in the perception of tinnitus and are subject to plastic modification. Furthermore, it is proposed that all levels of the nervous system, to varying degrees, are involved in tinnitus manifestation. These concepts are used to unravel the inexplicable, unique features of tinnitus and its masking. Some clinical implications of these theories are suggested.
Attention to sound improves auditory reliability in audio-tactile spatial optimal integration.
Vercillo, Tiziana; Gori, Monica
2015-01-01
The role of attention in multisensory processing is still poorly understood. In particular, it is unclear whether directing attention toward a sensory cue dynamically reweights cue reliability during integration of multiple sensory signals. In this study, we investigated the impact of attention on combining audio-tactile signals in an optimal fashion. We used the Maximum Likelihood Estimation (MLE) model to predict audio-tactile spatial localization on the body surface. We developed a new audio-tactile device composed of several small units, each one consisting of a speaker and a tactile vibrator independently controllable by external software. We tested participants in an attentional and a non-attentional condition. In the attentional experiment, participants performed a dual-task paradigm: they were required to evaluate the duration of a sound while performing an audio-tactile spatial task. Three unisensory or multisensory stimuli (conflicting or non-conflicting sounds and vibrations arranged along the horizontal axis) were presented sequentially. In the primary task, participants had to evaluate, in a space bisection task, the position of the second stimulus (the probe) with respect to the others (the standards). In the secondary task they had to report occasional changes in duration of the second auditory stimulus. In the non-attentional task participants had only to perform the primary task (space bisection). Our results showed enhanced auditory precision (and auditory weights) in the auditory attentional condition with respect to the control non-attentional condition. The results of this study support the idea that modality-specific attention modulates multisensory integration.
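The MLE rule referred to above combines the auditory and tactile position estimates with weights proportional to their inverse variances (reliabilities). A minimal sketch of that prediction is given below; the threshold values are invented and only illustrate how attending to sound, by lowering the auditory threshold, raises the auditory weight.

```python
def mle_combination(x_a, sigma_a, x_t, sigma_t):
    """Optimal (maximum-likelihood) fusion of auditory and tactile position estimates.

    Weights are proportional to the inverse variance (reliability) of each cue.
    Returns the predicted bimodal estimate and its standard deviation.
    """
    w_a = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_t**2)
    w_t = 1 - w_a
    x_hat = w_a * x_a + w_t * x_t
    sigma_hat = (1 / (1 / sigma_a**2 + 1 / sigma_t**2)) ** 0.5
    return x_hat, sigma_hat

# Hypothetical numbers: attention to sound halves the auditory threshold,
# which raises the auditory weight in the fused estimate.
print(mle_combination(x_a=2.0, sigma_a=4.0, x_t=0.0, sigma_t=2.0))   # unattended audition
print(mle_combination(x_a=2.0, sigma_a=2.0, x_t=0.0, sigma_t=2.0))   # attended audition
```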
Mahendra Prashanth, K V; Venugopalachar, Sridhar
2011-01-01
Noise is a common occupational health hazard in most industrial settings. An assessment of noise and its adverse health effects based on noise intensity alone is inadequate; for an efficient evaluation of noise effects, frequency spectrum analysis should also be included. This paper aims to substantiate the importance of studying the contribution of noise frequencies in evaluating health effects and their association with physiological behavior within the human body. Additionally, a review of studies published between 1988 and 2009 that investigate the impact of industrial/occupational noise on auditory and non-auditory effects, and the probable association and contribution of noise frequency components to these effects, is presented. The relevant studies in English were identified in Medknow, Medline, Wiley, Elsevier, and Springer publications. Data were extracted from studies that fulfilled the following criterion: the title and/or abstract involved industrial/occupational noise exposure in relation to auditory and non-auditory (health) effects. Significant data on the study characteristics, including noise frequency characteristics, were considered in the assessment. It is demonstrated that only a few studies have considered frequency contributions in their investigations, and then only for auditory rather than non-auditory effects. The data suggest that significant adverse health effects due to industrial noise include auditory and heart-related problems. The study provides strong evidence for the claim that noise with a dominant frequency component around 4 kHz has auditory effects but, being deficient in data, fails to show any influence of noise frequency components on non-auditory effects. Furthermore, specific noise levels and frequencies predicting the corresponding health impacts have not yet been validated. There is a need for further research to clarify the importance of the dominant noise frequency contribution in evaluating health effects.
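The frequency-spectrum analysis the authors argue for can be illustrated with a simple octave-band breakdown of a noise recording. The sketch below is a schematic FFT-based approximation, not a standards-compliant (IEC 61260) octave filter bank, and the signal is synthetic.

```python
import numpy as np

def octave_band_levels(signal, fs, centers=(125, 250, 500, 1000, 2000, 4000, 8000)):
    """Rough octave-band levels (dB re full scale) from a noise recording.

    Band edges are centre/sqrt(2) .. centre*sqrt(2); this is a simple FFT-based
    approximation rather than a calibrated octave filter bank.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
    levels = {}
    for fc in centers:
        band = (freqs >= fc / np.sqrt(2)) & (freqs < fc * np.sqrt(2))
        power = spectrum[band].sum()
        levels[fc] = 10 * np.log10(power + 1e-12)
    return levels

# Example: white noise with an added 4 kHz tone stands out in the 4 kHz band.
fs = 32000
t = np.arange(fs) / fs
noise = np.random.normal(0, 0.1, fs) + 0.5 * np.sin(2 * np.pi * 4000 * t)
print(octave_band_levels(noise, fs))
```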
Surgical management of 2 different presentations of ear canal atresia in dogs
Béraud, Romain
2012-01-01
A 6-year-old French spaniel and a 14-month-old German shepherd dog were diagnosed with ear canal atresia. Based on presentation, computed tomography, and auditory function evaluation, the first dog underwent excision of the horizontal ear canal and bulla curettage, and the second underwent re-anastomosis of the vertical canal to the external meatus. Both dogs had successful outcomes. PMID:23024390
Isbary, G; Shimizu, T; Zimmermann, J L; Thomas, H M; Morfill, G E; Stolz, W
2013-01-01
Following surgery of cholesteatoma, a patient developed a chronic infection of the external auditory canal, including extended-spectrum β-lactamase producing Escherichia coli, which caused severe pain. The application of cold atmospheric plasma resulted in a significant reduction in pain and clearance of bacterial carriage, allowing antibiotics and analgesics to be ceased. PMID:25356328
[Manifestation of first branchial anomaly: 56 cases report].
Zhang, B; Chen, L S; Huang, S L; Liang, L; Wu, P N; Zhang, S Y; L, Z M; Liang, L
2016-09-05
Objective: To summarize the clinical manifestations of congenital first branchial cleft anomaly (CFBCA). Method: The clinical data of 56 patients treated in our hospital from 2005 to 2015 were retrospectively reviewed. Result: Presentations included a painless mass (26.8%), repeated soreness and discharge (71.4%), and otological symptoms (external auditory canal discharge, hearing loss; 28.6%). Bacterial samples were positive in eleven cases, most commonly showing Pseudomonas aeruginosa and Staphylococcus aureus. Auricular endoscopy typically showed stenosis of the external auditory canal, cholesteatoma-like material accumulated in the ear canal, a fistula at the junction of the bony and cartilaginous canal, and attachment to the tympanic membrane. The typical CT (MRI) finding was a cystic, lobulated or tubular abnormal shadow related to the ear canal in Pochet's triangle, whose cyst or tract wall could be enhanced on contrast-enhanced CT (MRI) scans, and part of the lesions could communicate with the skin. There were statistically significant associations between the Olsen and Work types and the clinical characteristics (P < 0.01), and between the Olsen and Work types themselves (P < 0.01). Most Work type I lesions were of the cyst type; these often had no symptoms of infection and occurred mostly in young patients. Most Work type II lesions were of the sinus or fistula type; these often had symptoms of infection and occurred mostly in teenagers. Some patients with Work type II lesions showed attachment to the tympanic membrane. Conclusion: CFBCA is rare; it is more common in young patients and often occurs on the left side. It usually presents as a painless mass, repeated soreness and discharge, or external auditory canal discharge. Most Work type I lesions are cysts, often without symptoms of infection, mostly in young patients; most Work type II lesions are sinuses or fistulae, often with symptoms of infection, mostly in teenagers. Auricular endoscopy, CT and MRI can help establish the diagnosis. Clinicians need to differentiate CFBCA from related diseases according to its different manifestations. Copyright© by the Editorial Department of Journal of Clinical Otorhinolaryngology Head and Neck Surgery.
NASA Technical Reports Server (NTRS)
Bargar, Robin
1995-01-01
The commercial music industry offers a broad range of plug 'n' play hardware and software scaled to music professionals and scaled to a broad consumer market. The principles of sound synthesis utilized in these products are relevant to application in virtual environments (VE). However, the closed architectures used in commercial music synthesizers are prohibitive to low-level control during real-time rendering, and the algorithms and sounds themselves are not standardized from product to product. To bring sound into VE requires a new generation of open architectures designed for human-controlled performance from interfaces embedded in immersive environments. This presentation addresses the state of the sonic arts in scientific computing and VE, analyzes research challenges facing sound computation, and offers suggestions regarding tools we might expect to become available during the next few years. A list of classes of audio functionality in VE includes sonification -- the use of sound to represent data from numerical models; 3D auditory display (spatialization and localization, also called externalization); navigation cues for positional orientation and for finding items or regions inside large spaces; voice recognition for controlling the computer; external communications between users in different spaces; and feedback to the user concerning his own actions or the state of the application interface. To effectively convey this considerable variety of signals, we apply principles of acoustic design to ensure the messages are neither confusing nor competing. We approach the design of auditory experience through a comprehensive structure for messages, and message interplay we refer to as an Automated Sound Environment. Our research addresses real-time sound synthesis, real-time signal processing and localization, interactive control of high-dimensional systems, and synchronization of sound and graphics.
Wilquin, Hélène; Delevoye-Turrell, Yvonne; Dione, Mariama; Giersch, Anne
2018-01-01
Objective: Basic temporal dysfunctions have been described in patients with schizophrenia, which may impact their ability to connect and synchronize with the outer world. The present study was conducted with the aim to distinguish between interval timing and synchronization difficulties and more generally the spatial-temporal organization disturbances for voluntary actions. A new sensorimotor synchronization task was developed to test these abilities. Method: Twenty-four chronic schizophrenia patients matched with 27 controls performed a spatial-tapping task in which finger taps were to be produced in synchrony with a regular metronome to six visual targets presented around a virtual circle on a tactile screen. Isochronous (time intervals of 500 ms) and non-isochronous auditory sequences (alternated time intervals of 300/600 ms) were presented. The capacity to produce time intervals accurately versus the ability to synchronize own actions (tap) with external events (tone) were measured. Results: Patients with schizophrenia were able to produce the tapping patterns of both isochronous and non-isochronous auditory sequences as accurately as controls producing inter-response intervals close to the expected interval of 500 and 900 ms, respectively. However, the synchronization performances revealed significantly more positive asynchrony means (but similar variances) in the patient group than in the control group for both types of auditory sequences. Conclusion: The patterns of results suggest that patients with schizophrenia are able to perceive and produce both simple and complex sequences of time intervals but are impaired in the ability to synchronize their actions with external events. These findings suggest a specific deficit in predictive timing, which may be at the core of early symptoms previously described in schizophrenia.
Charbonneau, Geneviève; Véronneau, Marie; Boudrias-Fournier, Colin; Lepore, Franco; Collignon, Olivier
2013-10-28
The relative reliability of separate sensory estimates influences the way they are merged into a unified percept. We investigated how eccentricity-related changes in reliability of auditory and visual stimuli influence their integration across the entire frontal space. First, we surprisingly found that despite a strong decrease in auditory and visual unisensory localization abilities in periphery, the redundancy gain resulting from the congruent presentation of audio-visual targets was not affected by stimuli eccentricity. This result therefore contrasts with the common prediction that a reduction in sensory reliability necessarily induces an enhanced integrative gain. Second, we demonstrate that the visual capture of sounds observed with spatially incongruent audio-visual targets (ventriloquist effect) steadily decreases with eccentricity, paralleling a lowering of the relative reliability of unimodal visual over unimodal auditory stimuli in periphery. Moreover, at all eccentricities, the ventriloquist effect positively correlated with a weighted combination of the spatial resolution obtained in unisensory conditions. These findings support and extend the view that the localization of audio-visual stimuli relies on an optimal combination of auditory and visual information according to their respective spatial reliability. All together, these results evidence that the external spatial coordinates of multisensory events relative to an observer's body (e.g., eyes' or head's position) influence how this information is merged, and therefore determine the perceptual outcome.
Ertmer, David J.; Jung, Jongmin
2012-01-01
This investigation examined the time course and sequence of prelinguistic vocal development during the first year of cochlear implant (CI) experience. Thirteen children who were implanted between 8 and 35 months and 11 typically developing (TD) infants participated in this longitudinal study. Adult–child play interactions were video- and audio-recorded at trimonthly intervals for each group, and child utterances were classified into categories representing progressively more mature productions: Precanonical Vocalizations, Basic Canonical Syllables, and Advanced Form vocalizations. Young CI recipients met the 20% criterion for establishment of the Basic Canonical Syllables and Advanced Forms levels with fewer months of robust hearing experience than the TD infants. Most CI recipients followed the sequence of development predicted by the Stark Assessment of Early Vocal Development—Revised. The relatively rapid progress of the CI children suggests that an earlier period of auditory deprivation did not have negative consequences for prelinguistic vocal development. It also supports the notion that young CI recipients' comparatively advanced maturity facilitated expeditious auditory-guided speech development. PMID:21586617
Teramoto, Wataru; Watanabe, Hiroshi; Umemura, Hiroyuki
2008-01-01
The perceived temporal order of external successive events does not always follow their physical temporal order. We examined the contribution of self-motion mechanisms in the perception of temporal order in the auditory modality. We measured perceptual biases in the judgment of the temporal order of two short sounds presented successively, while participants experienced visually induced self-motion (yaw-axis circular vection) elicited by viewing long-lasting large-field visual motion. In experiment 1, a pair of white-noise patterns was presented to participants at various stimulus-onset asynchronies through headphones, while they experienced visually induced self-motion. Perceived temporal order of auditory events was modulated by the direction of the visual motion (or self-motion). Specifically, the sound presented to the ear in the direction opposite to the visual motion (ie heading direction) was perceived prior to the sound presented to the ear in the same direction. Experiments 2A and 2B were designed to reduce the contributions of decisional and/or response processes. In experiment 2A, the directional cueing of the background (left or right) and the response dimension (high pitch or low pitch) were not spatially associated. In experiment 2B, participants were additionally asked to report which of the two sounds was perceived 'second'. Almost the same results as in experiment 1 were observed, suggesting that the change in temporal order of auditory events during large-field visual motion reflects a change in perceptual processing. Experiment 3 showed that the biases in the temporal-order judgments of auditory events were caused by concurrent actual self-motion with a rotatory chair. In experiment 4, using a small display, we showed that 'pure' long exposure to visual motion without the sensation of self-motion was not responsible for this phenomenon. These results are consistent with previous studies reporting a change in the perceived temporal order of visual or tactile events depending on the direction of self-motion. Hence, large-field induced (ie optic flow) self-motion can affect the temporal order of successive external events across various modalities.
Volume Attenuation and High Frequency Loss as Auditory Depth Cues in Stereoscopic 3D Cinema
NASA Astrophysics Data System (ADS)
Manolas, Christos; Pauletto, Sandra
2014-09-01
Assisted by the technological advances of the past decades, stereoscopic 3D (S3D) cinema is currently in the process of being established as a mainstream form of entertainment. The main focus of this collaborative effort is placed on the creation of immersive S3D visuals. However, with few exceptions, little attention has been given so far to the potential effect of the soundtrack on such environments. The potential of sound both as a means to enhance the impact of the S3D visual information and to expand the S3D cinematic world beyond the boundaries of the visuals is large. This article reports on our research into the possibilities of using auditory depth cues within the soundtrack as a means of affecting the perception of depth within cinematic S3D scenes. We study two main distance-related auditory cues: high-end frequency loss and overall volume attenuation. A series of experiments explored the effectiveness of these auditory cues. Results, although not conclusive, indicate that the studied auditory cues can influence the audience judgement of depth in cinematic 3D scenes, sometimes in unexpected ways. We conclude that 3D filmmaking can benefit from further studies on the effectiveness of specific sound design techniques to enhance S3D cinema.
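The two depth cues studied above map onto two simple signal operations: an overall gain that falls with distance and a low-pass filter whose cutoff falls with distance. The sketch below is an illustrative rendering of that idea only; the cutoff-versus-distance mapping is an assumption, not the authors' calibrated design.

```python
import numpy as np
from scipy.signal import butter, lfilter

def apply_depth_cues(signal, fs, distance_m, ref_distance_m=1.0):
    """Apply two distance cues to a sound: overall attenuation and high-frequency loss.

    Attenuation follows the inverse-distance (1/r) law relative to a reference distance.
    The cutoff shrinking with distance is a simple stand-in for air absorption and is
    an assumption made for this sketch, not a calibrated model.
    """
    gain = ref_distance_m / max(distance_m, ref_distance_m)
    cutoff_hz = np.clip(16000 / (1 + 0.5 * distance_m), 500, 0.45 * fs)
    b, a = butter(2, cutoff_hz / (fs / 2), btype="low")
    return gain * lfilter(b, a, signal)

# Example: the same burst of noise rendered "near" (2 m) and "far" (30 m).
fs = 44100
x = np.random.normal(0, 0.1, fs // 2)
near, far = apply_depth_cues(x, fs, 2.0), apply_depth_cues(x, fs, 30.0)
print(near.std(), far.std())   # the far version is quieter and duller
```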
Detecting wrong notes in advance: neuronal correlates of error monitoring in pianists.
Ruiz, María Herrojo; Jabusch, Hans-Christian; Altenmüller, Eckart
2009-11-01
Music performance is an extremely rapid process with low incidence of errors even at the fast rates of production required. This is possible only due to the fast functioning of the self-monitoring system. Surprisingly, no specific data about error monitoring have been published in the music domain. Consequently, the present study investigated the electrophysiological correlates of executive control mechanisms, in particular error detection, during piano performance. Our aim was to extend previous research on the human action-monitoring system by selecting a highly skilled multimodal task. Pianists had to retrieve memorized music pieces at a fast tempo in the presence or absence of auditory feedback. Our main interest was to study the interplay between auditory and sensorimotor information in the processes triggered by an erroneous action, considering only wrong pitches as errors. We found that around 70 ms prior to errors a negative component is elicited in the event-related potentials and is generated by the anterior cingulate cortex. Interestingly, this component was independent of the auditory feedback. However, the auditory information did modulate the processing of the errors after their execution, as reflected in a larger error positivity (Pe). Our data are interpreted within the context of feedforward models and the auditory-motor coupling.
Auditory hallucinations: A review of the ERC “VOICE” project
Hugdahl, Kenneth
2015-01-01
In this invited review I provide a selective overview of recent research on brain mechanisms and cognitive processes involved in auditory hallucinations. The review is focused on research carried out in the “VOICE” ERC Advanced Grant Project, funded by the European Research Council, but I also review and discuss the literature in general. Auditory hallucinations are suggested to be perceptual phenomena, with a neuronal origin in the speech perception areas in the temporal lobe. The phenomenology of auditory hallucinations is conceptualized along three domains, or dimensions; a perceptual dimension, experienced as someone speaking to the patient; a cognitive dimension, experienced as an inability to inhibit, or ignore the voices, and an emotional dimension, experienced as the “voices” having primarily a negative, or sinister, emotional tone. I will review cognitive, imaging, and neurochemistry data related to these dimensions, primarily the first two. The reviewed data are summarized in a model that sees auditory hallucinations as initiated from temporal lobe neuronal hyper-activation that draws attentional focus inward, and which is not inhibited due to frontal lobe hypo-activation. It is further suggested that this is maintained through abnormal glutamate and possibly gamma-amino-butyric-acid transmitter mediation, which could point towards new pathways for pharmacological treatment. A final section discusses new methods of acquiring quantitative data on the phenomenology and subjective experience of auditory hallucination that goes beyond standard interview questionnaires, by suggesting an iPhone/iPod app. PMID:26110121
A framework for testing and comparing binaural models.
Dietz, Mathias; Lestang, Jean-Hugues; Majdak, Piotr; Stern, Richard M; Marquardt, Torsten; Ewert, Stephan D; Hartmann, William M; Goodman, Dan F M
2018-03-01
Auditory research has a rich history of combining experimental evidence with computational simulations of auditory processing in order to deepen our theoretical understanding of how sound is processed in the ears and in the brain. Despite significant progress in the amount of detail and breadth covered by auditory models, for many components of the auditory pathway there are still different model approaches that are often not equivalent but rather in conflict with each other. Similarly, some experimental studies yield conflicting results which has led to controversies. This can be best resolved by a systematic comparison of multiple experimental data sets and model approaches. Binaural processing is a prominent example of how the development of quantitative theories can advance our understanding of the phenomena, but there remain several unresolved questions for which competing model approaches exist. This article discusses a number of current unresolved or disputed issues in binaural modelling, as well as some of the significant challenges in comparing binaural models with each other and with the experimental data. We introduce an auditory model framework, which we believe can become a useful infrastructure for resolving some of the current controversies. It operates models over the same paradigms that are used experimentally. The core of the proposed framework is an interface that connects three components irrespective of their underlying programming language: The experiment software, an auditory pathway model, and task-dependent decision stages called artificial observers that provide the same output format as the test subject. Copyright © 2017 Elsevier B.V. All rights reserved.
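The three-component architecture described above (experiment software, auditory pathway model, task-dependent artificial observer) can be pictured as a small set of interfaces. The sketch below is a schematic reading of that design, not the framework's actual code or API; all class and field names are invented.

```python
from abc import ABC, abstractmethod
import random

class AuditoryModel(ABC):
    """Transforms an acoustic stimulus into an internal (e.g. binaural) representation."""
    @abstractmethod
    def process(self, stimulus): ...

class ArtificialObserver(ABC):
    """Task-dependent decision stage: returns responses in the same format as a listener."""
    @abstractmethod
    def decide(self, representation): ...

class LateralizationModel(AuditoryModel):
    def process(self, stimulus):
        # Toy "internal representation": simply pass the interaural time difference through.
        return {"itd_us": stimulus["itd_us"]}

class LeftRightObserver(ArtificialObserver):
    def decide(self, representation):
        return "right" if representation["itd_us"] > 0 else "left"

def run_experiment(model, observer, trials):
    """Experiment software: presents stimuli and collects observer responses."""
    return [observer.decide(model.process(s)) for s in trials]

trials = [{"itd_us": random.choice([-300, 300])} for _ in range(10)]
print(run_experiment(LateralizationModel(), LeftRightObserver(), trials))
```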
Audiologic Assessment of Infants and Toddlers.
ERIC Educational Resources Information Center
Gravel, Judith S.
This paper provides guidelines for the audiologic assessment of infants and young children, highlighting recent technologic advances in auditory electrophysiology, acoustic immitance measure procedures, and behavioral audiometric techniques. First, audiologic assessment guidelines developed by the American Speech-Language-Hearing Association are…
Zhao, Yan; Nonnekes, Jorik; Storcken, Erik J M; Janssen, Sabine; van Wegen, Erwin E H; Bloem, Bastiaan R; Dorresteijn, Lucille D A; van Vugt, Jeroen P P; Heida, Tjitske; van Wezel, Richard J A
2016-06-01
New mobile technologies like smartglasses can deliver external cues that may improve gait in people with Parkinson's disease in their natural environment. However, the potential of these devices must first be assessed in controlled experiments. Therefore, we evaluated rhythmic visual and auditory cueing in a laboratory setting with a custom-made application for the Google Glass. Twelve participants (mean age = 66.8 years; mean disease duration = 13.6 years) were tested at end-of-dose. We compared several key gait parameters (walking speed, cadence, stride length, and stride length variability) and freezing of gait for three types of external cues (metronome, flashing light, and optic flow) and a control condition (no cue). For all cueing conditions, the subjects completed several walking tasks of varying complexity. Seven inertial sensors attached to the feet, legs, and pelvis captured motion data for gait analysis. Two experienced raters scored the presence and severity of freezing of gait using video recordings. User experience was evaluated through a semi-open interview. During cueing, a more stable gait pattern emerged, particularly on complicated walking courses; however, freezing of gait did not significantly decrease. The metronome was more effective than the rhythmic visual cues and was most preferred by the participants. Participants were overall positive about the usability of the Google Glass and willing to use it at home. Thus, smartglasses like the Google Glass could be used to provide personalized mobile cueing to support gait; however, in the device's current form, auditory cues seemed more effective than rhythmic visual cues.
Developmental vision determines the reference frame for the multisensory control of action.
Röder, Brigitte; Kusmierek, Anna; Spence, Charles; Schicke, Tobias
2007-03-13
Both animal and human studies suggest that action goals are defined in external coordinates regardless of their sensory modality. The present study used an auditory-manual task to test whether the default use of such an external reference frame is innately determined or instead acquired during development because of the increasing dominance of vision over manual control. In Experiment I, congenitally blind, late blind, and age-matched sighted adults had to press a left or right response key depending on the bandwidth of pink noise bursts presented from either the left or right loudspeaker. Although the spatial location of the sounds was entirely task-irrelevant, all groups responded more efficiently with uncrossed hands when the sound was presented from the same side as the responding hand ("Simon effect"). This effect reversed with crossed hands only in the congenitally blind: They responded faster with the hand that was located contralateral to the sound source. In Experiment II, the instruction to the participants was changed: They now had to respond with the hand located next to the sound source. In contrast to Experiment I ("Simon-task"), this task required an explicit matching of the sound's location with the position of the responding hand. In Experiment II, the congenitally blind participants showed a significantly larger crossing deficit than both the sighted and late blind adults. This pattern of results implies that developmental vision induces the default use of an external coordinate frame for multisensory action control; this facilitates not only visual but also auditory-manual control.
Testing the importance of auditory detections in avian point counts
Brewster, J.P.; Simons, T.R.
2009-01-01
Recent advances in the methods used to estimate detection probability during point counts suggest that the detection process is shaped by the types of cues available to observers. For example, models of the detection process based on distance-sampling or time-of-detection methods may yield different results for auditory versus visual cues because of differences in the factors that affect the transmission of these cues from a bird to an observer or differences in an observer's ability to localize cues. Previous studies suggest that auditory detections predominate in forested habitats, but it is not clear how often observers hear birds prior to detecting them visually. We hypothesized that auditory cues might be even more important than previously reported, so we conducted an experiment in a forested habitat in North Carolina that allowed us to better separate auditory and visual detections. Three teams of three observers each performed simultaneous 3-min unlimited-radius point counts at 30 points in a mixed-hardwood forest. One team member could see but not hear birds, one could hear but not see, and the third was nonhandicapped. Of the total number of birds detected, 2.9% were detected by deafened observers, 75.1% by blinded observers, and 78.2% by nonhandicapped observers. Detections by blinded and nonhandicapped observers were the same only 54% of the time. Our results suggest that the detection of birds in forest habitats is almost entirely by auditory cues. Because many factors affect the probability that observers will detect auditory cues, the accuracy and precision of avian point count estimates are likely lower than assumed by most field ornithologists. © 2009 Association of Field Ornithologists.
Thalamocortical Dysrhythmia: A Theoretical Update in Tinnitus
De Ridder, Dirk; Vanneste, Sven; Langguth, Berthold; Llinas, Rodolfo
2015-01-01
Tinnitus is the perception of a sound in the absence of a corresponding external sound source. Pathophysiologically it has been attributed to bottom-up deafferentation and/or a top-down noise-cancelling deficit. Both mechanisms are proposed to alter auditory thalamocortical signal transmission, resulting in thalamocortical dysrhythmia (TCD). In deafferentation, TCD is characterized by a slowing down of resting state alpha to theta activity associated with an increase in surrounding gamma activity, resulting in persisting cross-frequency coupling between theta and gamma activity. Theta burst-firing increases network synchrony and recruitment, a mechanism which might enable long-range synchrony; this in turn could represent a means for finding the missing thalamocortical information and for gaining access to consciousness. Theta oscillations could function as a carrier wave to integrate the tinnitus-related focal auditory gamma activity into a consciousness-enabling network, as envisioned by the global workspace model. This model suggests that focal activity in the brain does not reach consciousness unless the focal activity becomes functionally coupled to a consciousness-enabling network, i.e., the global workspace. In limited deafferentation, the missing information can be retrieved from the auditory cortical neighborhood, decreasing surround inhibition and resulting in TCD. When the deafferentation is too wide in bandwidth, it is hypothesized that the missing information is retrieved from theta-mediated parahippocampal auditory memory. This suggests that, depending on the amount of deafferentation, TCD might change into a persisting, and thus pathological, parahippocampocortical theta–gamma rhythm. From a Bayesian point of view, in which the brain is conceived as a prediction machine that updates its memory-based predictions through sensory updating, tinnitus is the result of a prediction error between the predicted and sensed auditory input. The decrease in sensory updating is reflected by decreased alpha activity, and the prediction error results in theta–gamma and beta–gamma coupling. Thus, TCD can be considered an adaptive mechanism to retrieve missing auditory input in tinnitus. PMID:26106362
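The theta-gamma coupling discussed above is typically quantified with a phase-amplitude coupling measure. The sketch below (not the authors' analysis) computes a Canolty-style mean-vector-length modulation index on a synthetic signal; the band edges, sampling rate, and simulated data are illustrative assumptions.

```python
# Minimal sketch: quantify theta-gamma phase-amplitude coupling with a
# mean-vector-length modulation index on synthetic data.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def modulation_index(x, fs, phase_band=(4, 8), amp_band=(30, 80)):
    """Mean vector length of gamma amplitude over theta phase (Canolty-style);
    larger values indicate stronger phase-amplitude coupling."""
    theta_phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    gamma_amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase))) / np.mean(gamma_amp)

# Synthetic example: gamma bursts locked to the theta trough vs. unlocked gamma.
fs = 1000
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
coupled = theta + 0.4 * (1 - theta) * np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(t.size)
uncoupled = theta + 0.4 * np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(t.size)
print(f"MI coupled:   {modulation_index(coupled, fs):.3f}")
print(f"MI uncoupled: {modulation_index(uncoupled, fs):.3f}")
```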
The effects of alterations in the osseous external auditory canal on perceived sound quality.
van Spronsen, Erik; Brienesse, Patrick; Ebbens, Fenna A; Waterval, Jerome J; Dreschler, Wouter A
2015-10-01
To evaluate the perceptual effect of the altered shape of the osseous external auditory canal (OEAC) on sound quality. Prospective study. Twenty subjects with normal hearing were presented with six simulated sound conditions representing the acoustic properties of six different ear canals (three normal ears and three cavities). The six different real ear unaided responses of these ear canals were used to filter Dutch sentences, resulting in six simulated sound conditions. A seventh unfiltered reference condition was used for comparison. Sound quality was evaluated using paired comparison ratings and a visual analog scale (VAS). Significant differences in sound quality were found between the normal and cavity conditions (all P < .001) using both the seven-point paired comparison rating and the VAS. No significant differences were found between the reference and normal conditions. Sound quality deteriorates when the OEAC is altered into a cavity. This proof of concept study shows that the altered acoustic quality of the OEAC after radical cavity surgery may lead to a clearly perceived deterioration in sound quality. Nevertheless, some questions remain about the extent to which these changes are affected by habituation and by other changes in middle ear anatomy and functionality. Level of evidence: 4. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.
Prognostic Value of Facial Nerve Antidromic Evoked Potentials in Bell Palsy: A Preliminary Study
WenHao, Zhang; Minjie, Chen; Chi, Yang; Weijie, Zhang
2012-01-01
To analyze the value of facial nerve antidromic evoked potentials (FNAEPs) in predicting recovery from Bell palsy. Study Design. Retrospective study using electrodiagnostic data and medical chart review. Methods. A series of 46 patients treated for unilateral Bell palsy were included. According to a taste test, 26 cases were associated with taste disorder (Group 1) and 20 cases were not (Group 2). Facial function was established clinically by the Stennert system after monthly follow-up. The outcome was evaluated with the clinical recovery rate (CRR) and FNAEP. FNAEPs were recorded at the posterior wall of the external auditory meatus of both sides. Results. Mean CRR of Group 1 and Group 2 was 61.63% and 75.50%, respectively. We found a statistically significant difference between the two groups, and also in the amplitude difference (AD) of the FNAEP. Mean ± SD of AD was −6.96% ± 12.66% in patients with an excellent result, −27.67% ± 27.70% with a good result, and −66.05% ± 31.76% with a poor result. Conclusions. FNAEP should be monitored in patients with intratemporal facial palsy at the early stage. FNAEP at the posterior wall of the external auditory meatus was sensitive in detecting signs of taste disorder. There was a close relationship between FNAEPs and facial nerve recovery. PMID:22164176
Gravitoinertial force magnitude and direction influence head-centric auditory localization
NASA Technical Reports Server (NTRS)
DiZio, P.; Held, R.; Lackner, J. R.; Shinn-Cunningham, B.; Durlach, N.
2001-01-01
We measured the influence of gravitoinertial force (GIF) magnitude and direction on head-centric auditory localization to determine whether a true audiogravic illusion exists. In experiment 1, supine subjects adjusted computer-generated dichotic stimuli until they heard a fused sound straight ahead in the midsagittal plane of the head under a variety of GIF conditions generated in a slow-rotation room. The dichotic stimuli were constructed by convolving broadband noise with head-related transfer function pairs that model the acoustic filtering at the listener's ears. These stimuli give rise to the perception of externally localized sounds. When the GIF was increased from 1 to 2 g and rotated 60 degrees rightward relative to the head and body, subjects on average set an acoustic stimulus 7.3 degrees right of their head's median plane to hear it as straight ahead. When the GIF was doubled and rotated 60 degrees leftward, subjects set the sound 6.8 degrees leftward of baseline values to hear it as centered. In experiment 2, increasing the GIF in the median plane of the supine body to 2 g did not influence auditory localization. In experiment 3, tilts up to 75 degrees of the supine body relative to the normal 1 g GIF led to small shifts, 1-2 degrees, of auditory setting toward the up ear to maintain a head-centered sound localization. These results show that head-centric auditory localization is affected by azimuthal rotation and increase in magnitude of the GIF and demonstrate that an audiogravic illusion exists. Sound localization is shifted in the direction opposite GIF rotation by an amount related to the magnitude of the GIF and its angular deviation relative to the median plane.
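As a rough illustration of how such dichotic stimuli can be generated, the sketch below convolves broadband noise with a left/right head-related impulse response (HRIR) pair. The two-tap "HRIRs" are placeholders standing in for measured transfer functions, not the filters used in the study.

```python
# Sketch only: build a dichotic stimulus by convolving broadband noise with a
# left/right head-related impulse response pair (placeholder HRIRs).
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
noise = np.random.randn(fs)                    # 1 s of broadband noise

# Placeholder HRIR pair for a source slightly to the right:
# the right ear receives the sound earlier and louder than the left ear.
itd_samples = int(round(0.0003 * fs))          # ~0.3 ms interaural time difference
hrir_right = np.zeros(256)
hrir_right[0] = 1.0
hrir_left = np.zeros(256)
hrir_left[itd_samples] = 0.7                   # delayed and attenuated at the far ear

left = fftconvolve(noise, hrir_left)[:fs]
right = fftconvolve(noise, hrir_right)[:fs]
stereo = np.stack([left, right], axis=1)       # (samples, 2) array for headphone playback
stereo /= np.max(np.abs(stereo))               # normalize to avoid clipping
```

With measured HRIR pairs in place of these placeholders, the same convolution step yields the externally localized percept described in the abstract.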
Kim, Seung-Goo; Knösche, Thomas R
2017-08-01
Absolute pitch (AP) is the ability to recognize pitch chroma of tonal sound without external references, providing a unique model of the human auditory system (Zatorre: Nat Neurosci 6, 692-695). In a previous study (Kim and Knösche: Hum Brain Mapp, 3486-3501), we identified enhanced intracortical myelination in the right planum polare (PP) in musicians with AP, which could be a potential site for perceptional processing of pitch chroma information. We speculated that this area, which initiates the ventral auditory pathway, might be crucially involved in the perceptual stage of the AP process in the context of the "dual pathway hypothesis" that suggests the role of the ventral pathway in processing nonspatial information related to the identity of an auditory object (Rauschecker: Eur J Neurosci 41, 579-585). To test our conjecture on the ventral pathway, we investigated resting state functional connectivity (RSFC) using functional magnetic resonance imaging (fMRI) from musicians with varying degrees of AP. Should our hypothesis be correct, RSFC via the ventral pathway is expected to be stronger in musicians with AP, whereas such a group effect is not predicted in the RSFC via the dorsal pathway. In the current data, we found greater RSFC between the right PP and bilateral anteroventral auditory cortices in musicians with AP. In contrast, we did not find any group difference in the RSFC of the planum temporale (PT) between musicians with and without AP. We believe that these findings support our conjecture on the critical role of the ventral pathway in AP recognition. Hum Brain Mapp 38:3899-3916, 2017. © 2017 Wiley Periodicals, Inc.
The effect of spatial auditory landmarks on ambulation.
Karim, Adham M; Rumalla, Kavelin; King, Laurie A; Hullar, Timothy E
2018-02-01
The maintenance of balance and posture is a result of the collaborative efforts of vestibular, proprioceptive, and visual sensory inputs, but a fourth neural input, audition, may also improve balance. Here, we tested the hypothesis that auditory inputs function as environmental spatial landmarks whose effectiveness depends on sound localization ability during ambulation. Eight blindfolded normal young subjects performed the Fukuda-Unterberger test in three auditory conditions: silence, white noise played through headphones (head-referenced condition), and white noise played through a loudspeaker placed directly in front, 135 centimeters from the ear, at ear height (earth-referenced condition). For the earth-referenced condition, an additional experiment tested the effect of moving the speaker's azimuthal position to 45, 90, 135, and 180°. Subjects performed significantly better in the earth-referenced condition than in the head-referenced or silent conditions. Performance progressively decreased over the range from 0° to 135°, but all subjects then improved slightly at 180° compared to 135°. These results suggest that the presence of sound dramatically improves the ability to ambulate when vision is limited, but that sound sources must be located in the external environment in order to improve balance. This supports the hypothesis that they act by providing spatial landmarks against which head and body movement and orientation may be compared and corrected. Balance improvement in the azimuthal plane mirrors sensitivity to sound movement at similar positions, indicating that similar auditory mechanisms may underlie both processes. These results may help optimize the use of auditory cues to improve balance in particular patient populations. Copyright © 2017 Elsevier B.V. All rights reserved.
Holmes, Emma; Kitterick, Padraig T; Summerfield, A Quentin
2018-04-25
Endogenous attention is typically studied by presenting instructive cues in advance of a target stimulus array. For endogenous visual attention, task performance improves as the duration of the cue-target interval increases up to 800 ms. Less is known about how endogenous auditory attention unfolds over time or the mechanisms by which an instructive cue presented in advance of an auditory array improves performance. The current experiment used five cue-target intervals (0, 250, 500, 1,000, and 2,000 ms) to compare four hypotheses for how preparatory attention develops over time in a multi-talker listening task. Young adults were cued to attend to a target talker who spoke in a mixture of three talkers. Visual cues indicated the target talker's spatial location or their gender. Participants directed attention to location and gender simultaneously ("objects") at all cue-target intervals. Participants were consistently faster and more accurate at reporting words spoken by the target talker when the cue-target interval was 2,000 ms than 0 ms. In addition, the latency of correct responses progressively shortened as the duration of the cue-target interval increased from 0 to 2,000 ms. These findings suggest that the mechanisms involved in preparatory auditory attention develop gradually over time, taking at least 2,000 ms to reach optimal configuration, yet providing cumulative improvements in speech intelligibility as the duration of the cue-target interval increases from 0 to 2,000 ms. These results demonstrate an improvement in performance for cue-target intervals longer than those that have been reported previously in the visual or auditory modalities.
Music playing and memory trace: evidence from event-related potentials.
Kamiyama, Keiko; Katahira, Kentaro; Abla, Dilshat; Hori, Koji; Okanoya, Kazuo
2010-08-01
We examined the relationship between motor practice and auditory memory for sound sequences to evaluate the hypothesis that practice involving physical performance might enhance auditory memory. Participants learned two unfamiliar sound sequences using different training methods. Under the key-press condition, they learned a melody while pressing a key during auditory input. Under the no-key-press condition, they listened to another melody without any key pressing. The two melodies were presented alternately, and all participants were trained in both methods. Participants were instructed to pay attention under both conditions. After training, they listened to the two melodies again without pressing keys, and ERPs were recorded. During the ERP recordings, 10% of the tones in these melodies deviated from the originals. The grand-average ERPs showed that the amplitude of mismatch negativity (MMN) elicited by deviant stimuli was larger under the key-press condition than under the no-key-press condition. This effect appeared only in the high absolute pitch group, which included those with a pronounced ability to identify a note without external reference. This result suggests that the effect of training with key pressing was mediated by individual musical skills. Copyright 2010 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
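For readers unfamiliar with the underlying ERP computation, the sketch below shows the generic deviant-minus-standard difference-wave step by which an MMN-like amplitude is measured. The sampling rate, analysis window, and simulated data are assumptions for illustration, not the authors' parameters.

```python
# Schematic sketch (not the authors' analysis code): average EEG epochs for
# standard and deviant tones and measure MMN as the deviant-minus-standard
# difference in a post-stimulus window.
import numpy as np

fs = 500                                   # sampling rate in Hz (assumed)
times = np.arange(-0.1, 0.5, 1 / fs)       # epoch from -100 to 500 ms

def grand_average(epochs):
    """epochs: array of shape (n_trials, n_samples), already baseline-corrected."""
    return epochs.mean(axis=0)

def mmn_amplitude(standard_epochs, deviant_epochs, window=(0.1, 0.25)):
    diff = grand_average(deviant_epochs) - grand_average(standard_epochs)
    mask = (times >= window[0]) & (times <= window[1])
    return diff[mask].mean()               # mean difference (µV) in the MMN window

# Toy data: deviants carry an extra negativity around 150-200 ms.
rng = np.random.default_rng(0)
standard = rng.normal(0, 1, (900, times.size))
deviant = rng.normal(0, 1, (100, times.size))
deviant -= 2.0 * np.exp(-((times - 0.175) ** 2) / (2 * 0.03 ** 2))
print(f"MMN amplitude: {mmn_amplitude(standard, deviant):.2f} µV")
```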
Clark, Callie A M; Sacrey, Lori-Ann R; Whishaw, Ian Q
2009-09-15
External cues, including familiar music, can release Parkinson's disease patients from catalepsy but the neural basis of the effect is not well understood. In the present study, posturography, the study of posture and its allied reflexes, was used to develop an animal model that could be used to investigate the underlying neural mechanisms of this sound-induced behavioral activation. In the rat, akinetic catalepsy induced by a dopamine D2 receptor antagonist (haloperidol, 5 mg/kg) can model human catalepsy. Using this model, two experiments examined whether novel versus familiar sound stimuli could interrupt haloperidol-induced catalepsy in the rat. Rats were placed on a variably inclined grid and novel or familiar auditory cues (single key jingle or multiple key jingles) were presented. The dependent variable was movement by the rats to regain equilibrium as assessed with a movement notation score. The sound cues enhanced movements used to regain postural stability and familiar sound stimuli were more effective than unfamiliar sound stimuli. The results are discussed in relation to the idea that nonlemniscal and lemniscal auditory pathways differentially contribute to behavioral activation versus tonotopic processing of sound.
Mode-Locking Behavior of Izhikevich Neuron Under Periodic External Forcing
NASA Astrophysics Data System (ADS)
Farokhniaee, Amirali; Large, Edward
2015-03-01
In this study we obtained the regions of existence of various mode-locked states on the period-strength plane of the forcing, known as Arnold tongues, for Izhikevich neurons. The study is based on the model for neurons introduced by Izhikevich (2003), which is the normal form of the Hodgkin-Huxley neuron. This model is much simpler than other existing models in terms of the dimension of the coupled non-linear differential equations, yet excellent for generating the complex spiking patterns observed in real neurons. Many neurons in the auditory system of the brain must encode amplitude variations of a periodic signal. These neurons under periodic stimulation display rich dynamical states including mode-locking and chaotic responses. Periodic stimuli such as sinusoidal waves and amplitude modulated (AM) sounds can lead to various forms of n : m mode-locked states, similar to the mode-locking phenomenon in a laser resonance cavity. Obtaining Arnold tongues provides useful insight into the organization of mode-locking behavior of neurons under periodic forcing. Hence we can describe the construction of harmonic and sub-harmonic responses in the early processing stages of the auditory system, such as the auditory nerve and cochlear nucleus.
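The Izhikevich (2003) model referred to here is compact enough to sketch directly. The illustrative simulation below drives a regular-spiking Izhikevich neuron with a sinusoidal current and counts spikes per forcing cycle as a crude estimate of the n:m locking ratio; the bias current, drive amplitude, and frequency are arbitrary example values, not parameters from the study.

```python
# Sketch: Izhikevich (2003) neuron under sinusoidal forcing. Spikes per forcing
# cycle give a rough n:m mode-locking estimate; stimulus values are illustrative.
import numpy as np

def izhikevich_forced(a=0.02, b=0.2, c=-65.0, d=8.0,
                      I0=6.0, A=4.0, f=10.0, T=2.0, dt=0.1):
    """Euler integration of v' = 0.04*v^2 + 5*v + 140 - u + I(t), u' = a*(b*v - u),
    with reset v <- c, u <- u + d whenever v >= 30 mV. Returns spike times in ms."""
    n_steps = int(T * 1000 / dt)
    v, u = c, b * c
    spikes = []
    for i in range(n_steps):
        t_ms = i * dt
        I = I0 + A * np.sin(2 * np.pi * f * t_ms / 1000.0)   # f in Hz, t in ms
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:
            spikes.append(t_ms)
            v, u = c, u + d
    return np.array(spikes)

f_drive = 10.0                        # forcing frequency in Hz
spikes = izhikevich_forced(f=f_drive)
n_cycles = 2.0 * f_drive              # forcing cycles in T = 2 s
print(f"{len(spikes)} spikes over {n_cycles:.0f} cycles "
      f"≈ {len(spikes) / n_cycles:.2f} spikes per cycle (n:m estimate)")
```

Sweeping the drive frequency and amplitude and recording the locking ratio at each point is what traces out the Arnold tongues described in the abstract.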
Synchronized tapping facilitates learning sound sequences as indexed by the P300.
Kamiyama, Keiko S; Okanoya, Kazuo
2014-01-01
The purpose of the present study was to determine whether and how single finger tapping in synchrony with sound sequences contributes to their auditory processing. The participants learned two unfamiliar sound sequences via different methods. In the tapping condition, they learned an auditory sequence while they tapped in synchrony with each sound onset. In the no-tapping condition, they learned another sequence while they kept pressing a key until the sequence ended. After these learning sessions, we presented the two melodies again and recorded event-related potentials (ERPs). During the ERP recordings, 10% of the tones within each melody deviated from the original tones. An analysis of the grand average ERPs showed that deviant stimuli elicited a significant P300 in the tapping but not in the no-tapping condition. In addition, the significance of the P300 effect in the tapping condition increased as participants showed more highly synchronized tapping behavior during the learning sessions. These results indicated that single finger tapping promoted the conscious detection and evaluation of deviants within the learned sequences. The effect was related to individuals' musical ability to coordinate their finger movements along with external auditory events.
Seacrist, Thomas; Balasubramanian, Sriram; García-España, J. Felipe; Maltese, Matthew R.; Arbogast, Kristy B.; Lopez-Valdes, Francisco J.; Kent, Richard W.; Tanji, Hiromasa; Higuchi, Kazuo
2010-01-01
The Hybrid III 6-year-old ATD has been benchmarked against adult-scaled component level tests, but the lack of biomechanical data hinders the effectiveness of the procedures used to scale the adult data to the child. Whole body kinematic validation of the pediatric ATD through limited comparison to post mortem human subjects (PMHS) of similar age and size has revealed key differences attributed to the rigidity of the thoracic spine. As restraint systems continue to advance, they may become more effective at limiting peak loads applied to occupants, leading to lower impact environments for which the biofidelity of the ATD is not well established. Consequently, there is a growing need to further enhance the assessment of the pediatric ATD by evaluating its biofidelity at lower crash speeds. To this end, this study compared the kinematic response of the Hybrid III 6-year-old ATD against size-matched male pediatric volunteers (PVs) (6–9 yrs) in low-speed frontal sled tests. A 3-D near-infrared target tracking system quantified the position of markers at seven locations on the ATD and PVs (head top, opisthocranion, nasion, external auditory meatus, C4, T1, and pelvis). Angular velocity of the head, seat belt forces, and reaction forces on the seat pan and foot rest were also measured. The ATD exhibited significantly greater shoulder and lap belt, foot rest, and seat pan normal reaction loads compared to the PVs. Conversely, the PVs exhibited significantly greater seat pan shear. The ATD experienced significantly greater head angular velocity (11.4 ± 1.7 rad/s vs. 8.1 ± 1.4 rad/s), resulting in a quicker time to maximum head rotation (280.4 ± 2.5 ms vs. 334.2 ± 21.7 ms). The ATD exhibited significantly smaller forward excursions of the nasion (171.7 ± 7.8 mm vs. 199.5 ± 12.3 mm), external auditory meatus (194.5 ± 11.8 mm vs. 205.7 ± 10.3 mm), C4 (127.0 ± 5.2 mm vs. 183.3 ± 12.8 mm), and T1 (111.1 ± 6.5 mm vs. 153.8 ± 10.5 mm) compared to the PVs. These analyses provide insight into aspects of ATD biofidelity in low-speed crash environments. PMID:21050595
Cardemil, Felipe; Esquivel, Patricia; Aguayo, Lorena; Barría, Tamara; Fuente, Adrian; Carvajal, Rocío; Fromín, Rose; Villalobos, Iván; Yueh, Bevan
2013-01-01
Reliable and valid questionnaires are becoming increasingly important, especially when evaluating hearing loss. The aim was to adapt and validate the "Effectiveness of Auditory Rehabilitation" (EAR) questionnaire for the Spanish-speaking population. This instrument assesses quality of life and hearing aspects in patients using hearing aids. Cross-sectional validation study. A cultural adaptation through the use of English to Spanish translations and re-translations was carried out. The validity and reliability of the newly adapted instrument were evaluated. A total of 69 individuals (44 older adults and 25 younger adults) were examined. The pure-tone averages (PTA, 500, 1,000 and 2,000 Hz) were 47.3 dB HL and 47.1 dB HL for the left and right ears, respectively. The mean maximum speech discrimination in silence for monosyllables were 83.3% and 82.9% for the left and right ears, respectively. Internal consistency showed Cronbach's alpha values of 0.85 and 0.77 for the internal and external dimensions, respectively. The intraclass correlation coefficients were 0.80 for the internal module and 0.85 for the external module. Construct validity showed a correlation coefficient of 0.71 at baseline and 0.76 at 3 months after the initial assessment for the internal module, and 0.62 at baseline and 0.74 at 3 months after the initial assessment for the external module. The effect sizes were 1.3 and 1.1 for the internal and external modules, respectively. The Spanish version of the EAR questionnaire seems to be a reliable and valid instrument. The evaluation of audiological aspects, as well as aspects relating to aesthetics and comfort, are the main strengths of this instrument. Finally, the EAR scale is more sensitive to change than other scales. Copyright © 2013 Elsevier España, S.L. All rights reserved.
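The internal-consistency statistic reported above, Cronbach's alpha, follows directly from the item variances and the variance of the total score: alpha = k/(k-1) * (1 - sum of item variances / variance of total score). A minimal sketch with simulated ratings (not the study's data):

```python
# Sketch of how Cronbach's alpha is computed from a respondents-by-items
# score matrix; the ratings below are simulated placeholders.
import numpy as np

def cronbach_alpha(scores):
    """scores: array of shape (n_respondents, n_items)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
ability = rng.normal(size=(69, 1))                   # shared trait across items
items = ability + 0.8 * rng.normal(size=(69, 10))    # 10 correlated questionnaire items
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
```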
Neurophysiological Studies of Auditory Verbal Hallucinations
Ford, Judith M.; Dierks, Thomas; Fisher, Derek J.; Herrmann, Christoph S.; Hubl, Daniela; Kindler, Jochen; Koenig, Thomas; Mathalon, Daniel H.; Spencer, Kevin M.; Strik, Werner; van Lutterveld, Remko
2012-01-01
We discuss 3 neurophysiological approaches to study auditory verbal hallucinations (AVH). First, we describe “state” (or symptom capture) studies where periods with and without hallucinations are compared “within” a patient. These studies take 2 forms: passive studies, where brain activity during these states is compared, and probe studies, where brain responses to sounds during these states are compared. EEG (electroencephalography) and MEG (magnetoencephalography) data point to frontal and temporal lobe activity, the latter resulting in competition with external sounds for auditory resources. Second, we discuss “trait” studies where EEG and MEG responses to sounds are recorded from patients who hallucinate and those who do not. They suggest a tendency to hallucinate is associated with competition for auditory processing resources. Third, we discuss studies addressing possible mechanisms of AVH, including spontaneous neural activity, abnormal self-monitoring, and dysfunctional interregional communication. While most studies show differences in EEG and MEG responses between patients and controls, far fewer show symptom relationships. We conclude that efforts to understand the pathophysiology of AVH using EEG and MEG have been hindered by poor anatomical resolution of the EEG and MEG measures, poor assessment of symptoms, poor understanding of the phenomenon, poor models of the phenomenon, decoupling of the symptoms from the neurophysiology due to medications and comorbidities, and the possibility that the schizophrenia diagnosis breeds truer than the symptoms it comprises. These problems are common to studies of other psychiatric symptoms and should be considered when attempting to understand the basic neural mechanisms responsible for them. PMID:22368236
González-García, Nadia; Rendón, Pablo L
2017-05-23
The neural correlates of consonance and dissonance perception have been widely studied, but not the neural correlates of consonance and dissonance production. The most straightforward manner of musical production is singing, but, from an imaging perspective, it still presents more challenges than listening because it involves motor activity. The accurate singing of musical intervals requires integration between auditory feedback processing and vocal motor control in order to correctly produce each note. This protocol presents a method that permits the monitoring of neural activations associated with the vocal production of consonant and dissonant intervals. Four musical intervals, two consonant and two dissonant, are used as stimuli, both for an auditory discrimination test and a task that involves first listening to and then reproducing given intervals. Participants, all female vocal students at the conservatory level, were studied using functional Magnetic Resonance Imaging (fMRI) during the performance of the singing task, with the listening task serving as a control condition. In this manner, the activity of both the motor and auditory systems was observed, and a measure of vocal accuracy during the singing task was also obtained. Thus, the protocol can also be used to track activations associated with singing different types of intervals or with singing the required notes more accurately. The results indicate that singing dissonant intervals requires greater participation of the neural mechanisms responsible for the integration of external feedback from the auditory and sensorimotor systems than does singing consonant intervals.
Efficient transformation of an auditory population code in a small sensory system.
Clemens, Jan; Kutzki, Olaf; Ronacher, Bernhard; Schreiber, Susanne; Wohlgemuth, Sandra
2011-08-16
Optimal coding principles are implemented in many large sensory systems. They include the systematic transformation of external stimuli into a sparse and decorrelated neuronal representation, enabling a flexible readout of stimulus properties. Are these principles also applicable to size-constrained systems, which have to rely on a limited number of neurons and may only have to fulfill specific and restricted tasks? We studied this question in an insect system--the early auditory pathway of grasshoppers. Grasshoppers use genetically fixed songs to recognize mates. The first steps of neural processing of songs take place in a small three-layer feed-forward network comprising only a few dozen neurons. We analyzed the transformation of the neural code within this network. Indeed, grasshoppers create a decorrelated and sparse representation, in accordance with optimal coding theory. Whereas the neuronal input layer is best read out as a summed population, a labeled-line population code for temporal features of the song is established after only two processing steps. At this stage, information about song identity is maximal for a population decoder that preserves neuronal identity. We conclude that optimal coding principles do apply to the early auditory system of the grasshopper, despite its size constraints. The inputs, however, are not encoded in a systematic, map-like fashion as in many larger sensory systems. Already at its periphery, part of the grasshopper auditory system seems to focus on behaviorally relevant features, and is in this property more reminiscent of higher sensory areas in vertebrates.
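The contrast between a summed-population readout and a labeled-line readout can be made concrete with a toy decoding example. The simulated spike counts and logistic-regression decoder below are illustrative assumptions, unrelated to the grasshopper recordings.

```python
# Toy illustration of the two readouts compared in the study: decode a stimulus
# feature from simulated spike counts either after summing across neurons
# ("summed population") or keeping each neuron separate ("labeled line").
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 12
stimulus = rng.integers(0, 2, n_trials)                 # two song-feature classes

# Each neuron has its own (weak, partly opposing) tuning to the feature.
tuning = rng.normal(0, 1, n_neurons)
rates = 5 + tuning * (stimulus[:, None] * 2 - 1) + rng.normal(0, 1, (n_trials, n_neurons))
counts = rng.poisson(np.clip(rates, 0.1, None))          # trial-by-neuron spike counts

labeled_line = cross_val_score(LogisticRegression(max_iter=1000), counts, stimulus, cv=5)
summed = cross_val_score(LogisticRegression(max_iter=1000),
                         counts.sum(axis=1, keepdims=True), stimulus, cv=5)
print(f"labeled-line accuracy:      {labeled_line.mean():.2f}")
print(f"summed-population accuracy: {summed.mean():.2f}")
```

Because the simulated neurons carry partly opposing tuning, summing them discards information while the labeled-line readout preserves it, mirroring the distinction drawn in the abstract.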
Direct recordings from the auditory cortex in a cochlear implant user.
Nourski, Kirill V; Etler, Christine P; Brugge, John F; Oya, Hiroyuki; Kawasaki, Hiroto; Reale, Richard A; Abbas, Paul J; Brown, Carolyn J; Howard, Matthew A
2013-06-01
Electrical stimulation of the auditory nerve with a cochlear implant (CI) is the method of choice for treatment of severe-to-profound hearing loss. Understanding how the human auditory cortex responds to CI stimulation is important for advances in stimulation paradigms and rehabilitation strategies. In this study, auditory cortical responses to CI stimulation were recorded intracranially in a neurosurgical patient to examine directly the functional organization of the auditory cortex and compare the findings with those obtained in normal-hearing subjects. The subject was a bilateral CI user with a 20-year history of deafness and refractory epilepsy. As part of the epilepsy treatment, a subdural grid electrode was implanted over the left temporal lobe. Pure tones, click trains, sinusoidal amplitude-modulated noise, and speech were presented via the auxiliary input of the right CI speech processor. Additional experiments were conducted with bilateral CI stimulation. Auditory event-related changes in cortical activity, characterized by the averaged evoked potential and event-related band power, were localized to posterolateral superior temporal gyrus. Responses were stable across recording sessions and were abolished under general anesthesia. Response latency decreased and magnitude increased with increasing stimulus level. More apical intracochlear stimulation yielded the largest responses. Cortical evoked potentials were phase-locked to the temporal modulations of periodic stimuli and speech utterances. Bilateral electrical stimulation resulted in minimal artifact contamination. This study demonstrates the feasibility of intracranial electrophysiological recordings of responses to CI stimulation in a human subject, shows that cortical response properties may be similar to those obtained in normal-hearing individuals, and provides a basis for future comparisons with extracranial recordings.
Jafari, Zahra; Esmaili, Mahdiye; Delbari, Ahmad; Mehrpour, Masoud; Mohajerani, Majid H
2016-06-01
There have been a few reports about the effects of chronic stroke on auditory temporal processing abilities and no reports regarding the effects of brain damage lateralization on these abilities. Our study was performed on 2 groups of chronic stroke patients to compare the effects of hemispheric lateralization of brain damage and of age on auditory temporal processing. Seventy persons with normal hearing, including 25 normal controls, 25 stroke patients with damage to the right brain, and 20 stroke patients with damage to the left brain, without aphasia and with an age range of 31-71 years were studied. A gap-in-noise (GIN) test and a duration pattern test (DPT) were conducted for each participant. Significant differences were found between the 3 groups for GIN threshold, overall GIN percent score, and DPT percent score in both ears (P ≤ .001). For all stroke patients, performance in both GIN and DPT was poorer in the ear contralateral to the damaged hemisphere, which was significant in DPT and in 2 measures of GIN (P ≤ .046). Advanced age had a negative relationship with temporal processing abilities for all 3 groups. In cases of confirmed left- or right-side stroke involving auditory cerebrum damage, poorer auditory temporal processing is associated with the ear contralateral to the damaged cerebral hemisphere. Replication of our results and the use of GIN and DPT tests for the early diagnosis of auditory processing deficits and for monitoring the effects of aural rehabilitation interventions are recommended. Copyright © 2016 National Stroke Association. Published by Elsevier Inc. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-30
Notice announcing the availability of the EPA external review draft "... Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology [External Review Draft]" (EPA/600/R-13/214A).
Gene therapy restores auditory and vestibular function in a mouse model of Usher syndrome type 1c.
Pan, Bifeng; Askew, Charles; Galvin, Alice; Heman-Ackah, Selena; Asai, Yukako; Indzhykulian, Artur A; Jodelka, Francine M; Hastings, Michelle L; Lentz, Jennifer J; Vandenberghe, Luk H; Holt, Jeffrey R; Géléoc, Gwenaëlle S
2017-03-01
Because there are currently no biological treatments for hearing loss, we sought to advance gene therapy approaches to treat genetic deafness. We focused on Usher syndrome, a devastating genetic disorder that causes blindness, balance disorders and profound deafness, and studied a knock-in mouse model, Ush1c c.216G>A, for Usher syndrome type IC (USH1C). As restoration of complex auditory and balance function is likely to require gene delivery systems that target auditory and vestibular sensory cells with high efficiency, we delivered wild-type Ush1c into the inner ear of Ush1c c.216G>A mice using a synthetic adeno-associated viral vector, Anc80L65, shown to transduce 80-90% of sensory hair cells. We demonstrate recovery of gene and protein expression, restoration of sensory cell function, rescue of complex auditory function and recovery of hearing and balance behavior to near wild-type levels. The data represent unprecedented recovery of inner ear function and suggest that biological therapies to treat deafness may be suitable for translation to humans with genetic inner ear disorders.
The temporal representation of speech in a nonlinear model of the guinea pig cochlea
NASA Astrophysics Data System (ADS)
Holmes, Stephen D.; Sumner, Christian J.; O'Mard, Lowel P.; Meddis, Ray
2004-12-01
The temporal representation of speechlike stimuli in the auditory-nerve output of a guinea pig cochlea model is described. The model consists of a bank of dual resonance nonlinear filters that simulate the vibratory response of the basilar membrane, followed by a model of the inner hair cell/auditory nerve complex. The model is evaluated by comparing its output with published physiological auditory nerve data in response to single and double vowels. The evaluation includes analyses of individual fibers, as well as ensemble responses over a wide range of best frequencies. In all cases the model response closely follows the patterns in the physiological data, particularly the tendency for the temporal firing pattern of each fiber to represent the frequency of a nearby formant of the speech sound. In the model this behavior is largely a consequence of filter shapes; nonlinear filtering has only a small contribution at low frequencies. The guinea pig cochlear model produces a useful simulation of the measured physiological response to simple speech sounds and is therefore suitable for use in more advanced applications, including attempts to generalize these principles to the response of the human auditory system, both normal and impaired.
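The dual resonance nonlinear architecture routes the signal through a linear path and a nonlinear path whose core is a "broken-stick" compression between two filter cascades. Purely as a hedged sketch of that input-output function (the gains and exponent below are placeholders, not fitted guinea pig values):

```python
# Minimal sketch of the "broken-stick" compression used in the nonlinear path of
# dual resonance nonlinear (DRNL) filters: roughly linear gain at low levels,
# compressive growth at high levels. Coefficients are placeholders.
import numpy as np

def broken_stick(x, gain_a=50.0, gain_b=0.05, exponent_c=0.25):
    """y = sign(x) * min(a*|x|, b*|x|**c); the smaller branch dominates at each level."""
    return np.sign(x) * np.minimum(gain_a * np.abs(x), gain_b * np.abs(x) ** exponent_c)

levels = np.logspace(-6, -1, 6)            # input magnitudes (arbitrary units)
for x in levels:
    print(f"input {x:.1e} -> output {broken_stick(x):.3e}")
```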
Jennings, M B; Shaw, L; Hodgins, H; Kuchar, D A; Bataghva, L Poost-Foroosh
2010-01-01
For older workers with acquired hearing loss, the hearing loss, together with the changing nature of work and the workforce, may lead to difficulties and disadvantages in obtaining and maintaining employment. Currently, very few instruments exist that can help workplaces, employers, and workers prepare for older workers with hearing loss, or that can support the evaluation of the auditory perception demands of work, especially demands relevant to communication and to safety-sensitive workplaces that require high levels of communication. This paper introduces key theoretical considerations that informed the development of a new framework, the Audiologic Ergonomic (AE) Framework, to guide audiologists, work rehabilitation professionals, and workers in developing tools to support the identification and evaluation of auditory perception demands in the workplace, the challenges to communication, and the consequences for productivity and safety in the performance of work duties by older workers with hearing loss. The theoretical concepts underpinning this framework are discussed, along with next steps in developing tools such as the Canadian Hearing Demands Tool (C-HearD Tool) to advance approaches for evaluating auditory perception and communication demands in the workplace.
Decoding power-spectral profiles from FMRI brain activities during naturalistic auditory experience.
Hu, Xintao; Guo, Lei; Han, Junwei; Liu, Tianming
2017-02-01
Recent studies have demonstrated a close relationship between computational acoustic features and neural brain activities, and have greatly advanced our understanding of auditory information processing in the human brain. Along this line, we proposed a multidisciplinary study to examine whether power spectral density (PSD) profiles can be decoded from brain activities during naturalistic auditory experience. The study was performed on a high resolution functional magnetic resonance imaging (fMRI) dataset acquired while participants freely listened to the audio-description of the movie "Forrest Gump". Representative PSD profiles existing in the audio-movie were identified by clustering the audio samples according to their PSD descriptors. Support vector machine (SVM) classifiers were trained to differentiate the representative PSD profiles using the corresponding fMRI brain activities. Based on PSD profile decoding, we explored how the neural decodability correlated with power intensity and frequency deviants. Our experimental results demonstrated that PSD profiles can be reliably decoded from brain activities. We also suggested a sigmoidal relationship between the neural decodability and the power intensity deviants of PSD profiles. Our study additionally substantiates the feasibility and advantage of the naturalistic paradigm for studying the neural encoding of complex auditory information.
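A schematic version of the described pipeline, clustering PSD descriptors into representative profiles and decoding the profile labels with an SVM, might look like the sketch below. All data are random placeholders, and the preprocessing choices (Welch PSD, k = 3 clusters, linear SVM) are assumptions rather than the authors' settings.

```python
# Schematic pipeline sketch: PSD descriptors of audio windows -> k-means clusters
# ("representative PSD profiles") -> SVM decoding of the profile label from fMRI
# features. All data below are random placeholders, so accuracy will be near chance.
import numpy as np
from scipy.signal import welch
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
fs, n_windows = 16000, 120
audio_windows = rng.standard_normal((n_windows, 2 * fs))   # 2-s audio excerpts (placeholder)

# 1) PSD descriptor per window, 2) cluster into representative PSD profiles.
psd = np.array([welch(w, fs=fs, nperseg=1024)[1] for w in audio_windows])
profile_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(np.log(psd + 1e-12))

# 3) Decode the profile label from (placeholder) fMRI activity patterns.
fmri_features = rng.standard_normal((n_windows, 300))       # e.g., voxel/ROI responses
decoder = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(decoder, fmri_features, profile_labels, cv=5)
print(f"Decoding accuracy (chance ≈ 0.33 for balanced clusters): {scores.mean():.2f}")
```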
Perspectives on the Pure-Tone Audiogram.
Musiek, Frank E; Shinn, Jennifer; Chermak, Gail D; Bamiou, Doris-Eva
The pure-tone audiogram, though fundamental to audiology, presents limitations, especially in the case of central auditory involvement. Advances in auditory neuroscience underscore the considerably larger role of the central auditory nervous system (CANS) in hearing and related disorders. Given the availability of behavioral audiological tests and electrophysiological procedures that can provide better insights as to the function of the various components of the auditory system, this perspective piece reviews the limitations of the pure-tone audiogram and notes some of the advantages of other tests and procedures used in tandem with the pure-tone threshold measurement. To review and synthesize the literature regarding the utility and limitations of the pure-tone audiogram in determining dysfunction of peripheral sensory and neural systems, as well as the CANS, and to identify other tests and procedures that can supplement pure-tone thresholds and provide enhanced diagnostic insight, especially regarding problems of the central auditory system. A systematic review and synthesis of the literature. The authors independently searched and reviewed literature (journal articles, book chapters) pertaining to the limitations of the pure-tone audiogram. The pure-tone audiogram provides information as to hearing sensitivity across a selected frequency range. Normal or near-normal pure-tone thresholds sometimes are observed despite cochlear damage. There are a surprising number of patients with acoustic neuromas who have essentially normal pure-tone thresholds. In cases of central deafness, depressed pure-tone thresholds may not accurately reflect the status of the peripheral auditory system. Listening difficulties are seen in the presence of normal pure-tone thresholds. Suprathreshold procedures and a variety of other tests can provide information regarding other and often more central functions of the auditory system. The audiogram is a primary tool for determining type, degree, and configuration of hearing loss; however, it provides the clinician with information regarding only hearing sensitivity, and no information about central auditory processing or the auditory processing of real-world signals (i.e., speech, music). The pure-tone audiogram offers limited insight into functional hearing and should be viewed only as a test of hearing sensitivity. Given the limitations of the pure-tone audiogram, a brief overview is provided of available behavioral tests and electrophysiological procedures that are sensitive to the function and integrity of the central auditory system, which provide better diagnostic and rehabilitative information to the clinician and patient. American Academy of Audiology
Auditory verbal hallucinations: Social, but how?
Alderson-Day, Ben; Fernyhough, Charles
2017-01-01
Auditory verbal hallucinations (AVH) are experiences of hearing voices in the absence of an external speaker. Standard explanatory models propose that AVH arise from misattributed verbal cognitions (i.e. inner speech), but provide little account of how heard voices often have a distinct persona and agency. Here we review the argument that AVH have important social and agent-like properties and consider how different neurocognitive approaches to AVH can account for these elements, focusing on inner speech, memory, and predictive processing. We then evaluate the possible role of separate social-cognitive processes in the development of AVH, before outlining three ways in which speech and language processes already involve socially important information, such as cues to interact with others. We propose that when these are taken into account, the social characteristics of AVH can be explained without an appeal to separate social-cognitive systems. PMID:29238264
Biomedical Simulation Models of Human Auditory Processes
NASA Technical Reports Server (NTRS)
Bicak, Mehmet M. A.
2012-01-01
Detailed acoustic engineering models were developed to explore noise propagation mechanisms associated with the noise attenuation and transmission paths created when hearing protectors such as earplugs and headsets are used in high-noise environments. Biomedical finite element (FE) models are developed from volume computed tomography scan data, which provide explicit geometry of the external ear, ear canal, middle ear ossicular bones, and cochlea. Results from these studies have enabled a greater understanding of hearing protector to flesh dynamics as well as prioritizing noise propagation mechanisms. Prioritization of noise mechanisms can form an essential framework for exploration of new design principles and methods in both earplug and earcup applications. These models are currently being used in development of a novel hearing protection evaluation system that can provide experimentally correlated psychoacoustic noise attenuation. Moreover, these FE models can be used to simulate the effects of blast related impulse noise on human auditory mechanisms and brain tissue.
NASA Astrophysics Data System (ADS)
Simonnet, Mathieu; Jacobson, Dan; Vieilledent, Stephane; Tisseau, Jacques
Navigating consists of coordinating egocentric and allocentric spatial frames of reference. Virtual environments have provided researchers in the spatial community with tools to investigate the learning of space. The issue of transfer between virtual and real situations is not trivial. A central question is the role of frames of reference in mediating spatial knowledge transfer to external surroundings, as is the effect of different sensory modalities accessed in simulated and real worlds. This challenges the capacity of blind people to use virtual reality to explore a scene without graphics. The present experiment involves a haptic and auditory maritime virtual environment. In triangulation tasks, we measured systematic errors, and preliminary results show an ability to learn configurational knowledge and to navigate through it without vision. Subjects appeared to take advantage of getting lost in an egocentric “haptic” view in the virtual environment to improve performance in the real environment.
Biasing the content of hippocampal replay during sleep
Bendor, Daniel; Wilson, Matthew A.
2013-01-01
The hippocampus plays an essential role in encoding self-experienced events into memory. During sleep, neural activity in the hippocampus related to a recent experience has been observed to spontaneously reoccur, and this “replay” has been postulated to be important for memory consolidation. Task-related cues can enhance memory consolidation when presented during a post-training sleep session, and if memories are consolidated by hippocampal replay, a specific enhancement for this replay should also be observed. To test this, we have trained rats on an auditory-spatial association task, while recording from neuronal ensembles in the hippocampus. Here we report that during sleep, a task-related auditory cue biases reactivation events towards replaying the spatial memory associated with that cue. These results indicate that sleep replay can be manipulated by external stimulation, and provide further evidence for the role of hippocampal replay in memory consolidation. PMID:22941111
Advanced Mathematics Communication beyond Modality of Sight
ERIC Educational Resources Information Center
Sedaghatjou, Mina
2018-01-01
This study illustrates how mathematical communication and learning are inherently multimodal and embodied; hence, sight-disabled students are also able to conceptualize visuospatial information and mathematical concepts through tactile and auditory activities. Adapting a perceptuomotor integration approach, the study shows that the lack of access…
NASA Technical Reports Server (NTRS)
Black, F. O.; Brackmann, D. E.; Hitselberger, W. E.; Purdy, J.
1995-01-01
The outcome of acoustic neuroma (vestibular schwannoma) surgery continues to improve rapidly. Advances can be attributed to several fields, but the most important contributions have arisen from the identification of the genes responsible for the dominant inheritance of neurofibromatosis types 1 (NF1) and 2 (NF2) and the development of magnetic resonance imaging with gadolinium enhancement for the early anatomic confirmation of the pathognomonic, bilateral vestibular schwannomas in NF2. These advances enable early diagnosis and treatment when the tumors are small in virtually all subjects at risk for NF2. The authors suggest that advising young NF2 patients to wait until complications develop, especially hearing loss, before diagnosing and operating for bilateral eighth nerve schwannomas may not always be in the best interest of the patient. To the authors' knowledge, this is the first reported case of preservation of both auditory and vestibular function in a patient after bilateral vestibular schwannoma excision.
Re-Design and Beta Testing of the Man-Machine Integration Design and Analysis System: MIDAS
NASA Technical Reports Server (NTRS)
Shively, R. Jay; Rutkowski, Michael (Technical Monitor)
1999-01-01
The Man-machine Integration Design and Analysis System (MIDAS) is a human factors design and analysis system that combines human cognitive models with 3D CAD models and rapid prototyping and simulation techniques. MIDAS allows designers to ask 'what if' types of questions early in concept exploration and development, prior to actual hardware development. The system outputs predictions of operator workload, situational awareness, and system performance, as well as graphical visualization of the cockpit designs interacting with models of the human in a mission scenario. Recently, MIDAS was re-designed to enhance functionality and usability. The goals driving the redesign include more efficient processing, a GUI interface, advances in the memory structures, and the implementation of external vision models and audition. These changes were detailed in an earlier paper. Two Beta test sites with diverse applications have been chosen. One Beta test site is investigating the development of a new airframe and its interaction with the air traffic management system. The second Beta test effort will investigate 3D auditory cueing in conjunction with traditional visual cueing strategies including panel-mounted and heads-up displays. The progress and lessons learned on each of these projects will be discussed.
González-García, Nadia; González, Martha A; Rendón, Pablo L
2016-07-15
Relationships between musical pitches are described as either consonant, when associated with a pleasant and harmonious sensation, or dissonant, when associated with an inharmonious feeling. The accurate singing of musical intervals requires communication between auditory feedback processing and vocal motor control (i.e. audio-vocal integration) to ensure that each note is produced correctly. The objective of this study is to investigate the neural mechanisms through which trained musicians produce consonant and dissonant intervals. We utilized 4 musical intervals (specifically, an octave, a major seventh, a fifth, and a tritone) as the main stimuli for auditory discrimination testing, and we used the same interval tasks to assess vocal accuracy in a group of musicians (11 subjects, all female vocal students at conservatory level). The intervals were chosen so as to test for differences in recognition and production of consonant and dissonant intervals, as well as narrow and wide intervals. The subjects were studied using fMRI during performance of the interval tasks; the control condition consisted of passive listening. Singing dissonant intervals as opposed to singing consonant intervals led to an increase in activation in several regions, most notably the primary auditory cortex, the primary somatosensory cortex, the amygdala, the left putamen, and the right insula. Singing wide intervals as opposed to singing narrow intervals resulted in the activation of the right anterior insula. Moreover, we also observed a correlation between singing in tune and brain activity in the premotor cortex, and a positive correlation between training and activation of primary somatosensory cortex, primary motor cortex, and premotor cortex during singing. When singing dissonant intervals, a higher degree of training correlated with the right thalamus and the left putamen. Our results indicate that singing dissonant intervals requires greater involvement of neural mechanisms associated with integrating external feedback from auditory and sensorimotor systems than singing consonant intervals, and it would then seem likely that dissonant intervals are intoned by adjusting the neural mechanisms used for the production of consonant intervals. Singing wide intervals requires a greater degree of control than singing narrow intervals, as it involves neural mechanisms which again involve the integration of internal and external feedback. Copyright © 2016 Elsevier B.V. All rights reserved.
An unusual craniofacial cleft: amniotic band syndrome as a possible cause.
Eichhorn, Mitchell G; Iacobucci, John J; Turfe, Zaahir
2015-04-01
We report the case of a no. 4 Tessier cleft in association with an unknown cleft of the mandible extending to the external auditory meatus. This has not been previously published in the literature and its underlying pathology remains undetermined. The nature of the cleft, possible classifications, and potential embryologic origins will be discussed. Amniotic band syndrome is the most likely cause of the cleft. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Abdel-Aziz, Mosaad
2012-06-25
Congenital cholesteatoma may be expected in an abnormally developed ear; it may cause bony erosion of the middle ear cleft and extend to the infratemporal fossa. We present the first case of congenital cholesteatoma of the infratemporal fossa in a patient with congenital aural atresia complicated by acute mastoiditis. A sixteen-year-old Egyptian male patient presented with congenital cholesteatoma of the infratemporal fossa and congenital aural atresia complicated by acute mastoiditis. Two weeks earlier, the patient had suffered pain necessitating hospital admission, and magnetic resonance imaging revealed a soft tissue mass in the right infratemporal fossa. On presentation to our institute, computerized tomography was performed routinely and confirmed the diagnosis of mastoiditis; pure tone audiometry showed an air-bone gap of 60 dB. Cortical mastoidectomy was performed to treat the mastoiditis, and the congenital cholesteatoma was removed with reconstruction of the external auditory canal. Follow-up of the patient for 2 years and 3 months showed a patent, infection-free external auditory canal, with the air-bone gap reduced to 35 dB. One year after the operation, MRI showed no residual or recurrent cholesteatoma. Congenital cholesteatoma of the infratemporal fossa in cases of congenital aural atresia can be managed safely even when associated with mastoiditis. This is an original case report of interest to the specialty of otolaryngology.
Wright, Rachel L.; Spurgeon, Laura C.; Elliott, Mark T.
2014-01-01
Humans can synchronize movements with auditory beats or rhythms without apparent effort. This ability to entrain to the beat is considered automatic, such that any perturbations are corrected for, even if the perturbation was not consciously noted. Temporal correction of upper limb (e.g., finger tapping) and lower limb (e.g., stepping) movements to a phase perturbed auditory beat usually results in individuals being back in phase after just a few beats. When a metronome is presented in more than one sensory modality, a multisensory advantage is observed, with reduced temporal variability in finger tapping movements compared to unimodal conditions. Here, we investigate synchronization of lower limb movements (stepping in place) to auditory, visual and combined auditory-visual (AV) metronome cues. In addition, we compare movement corrections to phase advance and phase delay perturbations in the metronome for the three sensory modality conditions. We hypothesized that, as with upper limb movements, there would be a multisensory advantage, with stepping variability being lowest in the bimodal condition. As such, we further expected correction to the phase perturbation to be quickest in the bimodal condition. Our results revealed lower variability in the asynchronies between foot strikes and the metronome beats in the bimodal condition, compared to unimodal conditions. However, while participants corrected substantially quicker to perturbations in auditory compared to visual metronomes, there was no multisensory advantage in the phase correction task—correction under the bimodal condition was almost identical to the auditory-only (AO) condition. On the whole, we noted that corrections in the stepping task were smaller than those previously reported for finger tapping studies. We conclude that temporal corrections are not only affected by the reliability of the sensory information, but also the complexity of the movement itself. PMID:25309397
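The phase-correction behaviour described above is often summarized with a first-order linear correction model from the sensorimotor synchronization literature. The sketch below is illustrative only and is not the authors' analysis; the correction gain alpha, the perturbation size, and the noise level are assumed values, with a larger alpha standing in for the faster correction reported for auditory and bimodal metronomes and a smaller alpha for visual-only cues.

    import numpy as np

    def simulate_phase_correction(n_beats=30, alpha=0.6, shift_ms=60.0,
                                  shift_beat=10, noise_sd_ms=8.0, seed=0):
        """First-order linear phase correction of step-beat asynchronies.

        a[n] is the asynchrony (ms) between foot strike n and metronome beat n.
        At beat `shift_beat` the metronome is phase-advanced by `shift_ms`,
        which abruptly enlarges the asynchrony; on every following beat a
        fraction `alpha` of the current asynchrony is corrected.
        """
        rng = np.random.default_rng(seed)
        a = np.zeros(n_beats)
        for n in range(n_beats - 1):
            perturb = shift_ms if n == shift_beat else 0.0
            a[n + 1] = (1.0 - alpha) * (a[n] + perturb) + rng.normal(0.0, noise_sd_ms)
        return a

    # Hypothetical comparison: auditory-like (fast) vs visual-like (slow) correction.
    fast = simulate_phase_correction(alpha=0.7)
    slow = simulate_phase_correction(alpha=0.2)

With a large alpha the asynchrony returns to baseline within a few beats, mirroring the rapid correction seen for auditory and bimodal cues; with a small alpha the perturbation persists for many beats, as with visual-only cues.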
The Essential Complexity of Auditory Receptive Fields
Thorson, Ivar L.; Liénard, Jean; David, Stephen V.
2015-01-01
Encoding properties of sensory neurons are commonly modeled using linear finite impulse response (FIR) filters. For the auditory system, the FIR filter is instantiated in the spectro-temporal receptive field (STRF), often in the framework of the generalized linear model. Despite widespread use of the FIR STRF, numerous formulations for linear filters are possible that require many fewer parameters, potentially permitting more efficient and accurate model estimates. To explore these alternative STRF architectures, we recorded single-unit neural activity from auditory cortex of awake ferrets during presentation of natural sound stimuli. We compared performance of > 1000 linear STRF architectures, evaluating their ability to predict neural responses to a novel natural stimulus. Many were able to outperform the FIR filter. Two basic constraints on the architecture lead to the improved performance: (1) factorization of the STRF matrix into a small number of spectral and temporal filters and (2) low-dimensional parameterization of the factorized filters. The best parameterized model was able to outperform the full FIR filter in both primary and secondary auditory cortex, despite requiring fewer than 30 parameters, about 10% of the number required by the FIR filter. After accounting for noise from finite data sampling, these STRFs were able to explain an average of 40% of A1 response variance. The simpler models permitted more straightforward interpretation of sensory tuning properties. They also showed greater benefit from incorporating nonlinear terms, such as short term plasticity, that provide theoretical advances over the linear model. Architectures that minimize parameter count while maintaining maximum predictive power provide insight into the essential degrees of freedom governing auditory cortical function. They also maximize statistical power available for characterizing additional nonlinear properties that limit current auditory models. PMID:26683490
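To make the contrast between the full FIR STRF and the factorized, low-parameter architectures concrete, the sketch below implements a rank-1 factorization with a parameterized spectral and temporal filter. The Gaussian spectral tuning and damped-oscillator temporal kernel are illustrative choices, not the specific parameterizations tested in the study, and the channel and lag counts are likewise assumed.

    import numpy as np

    def predict_fir_strf(spectrogram, strf):
        """Linear response: r(t) = sum_f sum_tau strf[f, tau] * s[f, t - tau]."""
        n_freq, n_time = spectrogram.shape
        _, n_lags = strf.shape
        r = np.zeros(n_time)
        for tau in range(n_lags):
            r[tau:] += strf[:, tau] @ spectrogram[:, :n_time - tau]
        return r

    def rank1_strf(spectral, temporal):
        """Factorized STRF: outer product of one spectral and one temporal filter."""
        return np.outer(spectral, temporal)

    # Low-dimensional parameterizations (assumed functional forms).
    def gaussian_spectral(n_freq, center, bandwidth):
        f = np.arange(n_freq)
        return np.exp(-0.5 * ((f - center) / bandwidth) ** 2)

    def damped_oscillator_temporal(n_lags, latency, decay, freq, dt=0.01):
        t = np.maximum(np.arange(n_lags) * dt - latency, 0.0)
        return np.exp(-t / decay) * np.sin(2.0 * np.pi * freq * t)

    # A full FIR STRF with 18 channels x 15 lags has 270 free weights; this
    # factorized, parameterized version is controlled by only 5 numbers.
    strf = rank1_strf(gaussian_spectral(18, center=9, bandwidth=2.0),
                      damped_oscillator_temporal(15, latency=0.02, decay=0.05, freq=8.0))
    response = predict_fir_strf(np.random.rand(18, 500), strf)

The parameter counts in the comment are illustrative; the point is simply that factorization plus parameterization reduces the free parameters by roughly an order of magnitude while keeping the model linear.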
Hair cell regeneration in the avian auditory epithelium.
Stone, Jennifer S; Cotanche, Douglas A
2007-01-01
Regeneration of sensory hair cells in the mature avian inner ear was first described just over 20 years ago. Since then, it has been shown that many other non-mammalian species either continually produce new hair cells or regenerate them in response to trauma. However, mammals exhibit limited hair cell regeneration, particularly in the auditory epithelium. In birds and other non-mammals, regenerated hair cells arise from adjacent non-sensory (supporting) cells. Hair cell regeneration was initially described as a proliferative response whereby supporting cells re-enter the mitotic cycle, forming daughter cells that differentiate into either hair cells or supporting cells and thereby restore cytoarchitecture and function in the sensory epithelium. However, further analyses of the avian auditory epithelium (and amphibian vestibular epithelium) revealed a second regenerative mechanism, direct transdifferentiation, during which supporting cells change their gene expression and convert into hair cells without dividing. In the chicken auditory epithelium, these two distinct mechanisms show unique spatial and temporal patterns, suggesting they are differentially regulated. Current efforts are aimed at identifying signals that maintain supporting cells in a quiescent state or direct them to undergo direct transdifferentiation or cell division. Here, we review current knowledge about supporting cell properties and discuss candidate signaling molecules for regulating supporting cell behavior, in quiescence and after damage. While significant advances have been made in understanding regeneration in non-mammals over the last 20 years, we have yet to determine why the mammalian auditory epithelium lacks the ability to regenerate hair cells spontaneously and whether it is even capable of significant regeneration under additional circumstances. The continued study of mechanisms controlling regeneration in the avian auditory epithelium may lead to strategies for inducing significant and functional regeneration in mammals.
Low-Frequency Cortical Oscillations Entrain to Subthreshold Rhythmic Auditory Stimuli
Schroeder, Charles E.; Poeppel, David; van Atteveldt, Nienke
2017-01-01
Many environmental stimuli contain temporal regularities, a feature that can help predict forthcoming input. Phase locking (entrainment) of ongoing low-frequency neuronal oscillations to rhythmic stimuli is proposed as a potential mechanism for enhancing neuronal responses and perceptual sensitivity, by aligning high-excitability phases to events within a stimulus stream. Previous experiments show that rhythmic structure has a behavioral benefit even when the rhythm itself is below perceptual detection thresholds (ten Oever et al., 2014). It is not known whether this “inaudible” rhythmic sound stream also induces entrainment. Here we tested this hypothesis using magnetoencephalography and electrocorticography in humans to record changes in neuronal activity as subthreshold rhythmic stimuli gradually became audible. We found that significant phase locking to the rhythmic sounds preceded participants' detection of them. Moreover, no significant auditory-evoked responses accompanied this prethreshold entrainment. These auditory-evoked responses, distinguished by robust, broad-band increases in intertrial coherence, only appeared after sounds were reported as audible. Taken together with the reduced perceptual thresholds observed for rhythmic sequences, these findings support the proposition that entrainment of low-frequency oscillations serves a mechanistic role in enhancing perceptual sensitivity for temporally predictive sounds. This framework has broad implications for understanding the neural mechanisms involved in generating temporal predictions and their relevance for perception, attention, and awareness. SIGNIFICANCE STATEMENT The environment is full of rhythmically structured signals that the nervous system can exploit for information processing. Thus, it is important to understand how the brain processes such temporally structured, regular features of external stimuli. Here we report the alignment of slowly fluctuating oscillatory brain activity to external rhythmic structure before its behavioral detection. These results indicate that phase alignment is a general mechanism of the brain to process rhythmic structure and can occur without the perceptual detection of this temporal structure. PMID:28411273
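Phase locking to a rhythmic stream of this kind is typically quantified as intertrial coherence: the length of the mean resultant vector of the instantaneous phase across trials, which is near 0 for random phase and near 1 for perfect entrainment. The sketch below is a generic illustration of that measure, not the analysis pipeline used in the study; the band-pass filtering step, frequency band, and sampling rate are assumptions.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def intertrial_coherence(trials, fs, band=(1.0, 3.0)):
        """Intertrial phase coherence of low-frequency activity.

        trials : array (n_trials, n_samples) of single-trial MEG/ECoG signals.
        Returns an array (n_samples,) with values in [0, 1].
        """
        b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, trials, axis=1)            # isolate the delta band
        phases = np.angle(hilbert(filtered, axis=1))         # instantaneous phase per trial
        return np.abs(np.mean(np.exp(1j * phases), axis=0))  # mean resultant length

    # Example with synthetic trials phase-locked to a 1.5 Hz rhythm plus noise.
    fs = 250
    t = np.arange(0, 4, 1 / fs)
    trials = np.sin(2 * np.pi * 1.5 * t) + np.random.randn(40, t.size)
    itc = intertrial_coherence(trials, fs)

In contrast, the broad-band intertrial coherence increases described for auditory-evoked responses would appear across a wide frequency range rather than only at the stimulation rate.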
A Review of Current Neuromorphic Approaches for Vision, Auditory, and Olfactory Sensors.
Vanarse, Anup; Osseiran, Adam; Rassau, Alexander
2016-01-01
Conventional vision, auditory, and olfactory sensors generate large volumes of redundant data and as a result tend to consume excessive power. To address these shortcomings, neuromorphic sensors have been developed. These sensors mimic the neuro-biological architecture of sensory organs using aVLSI (analog Very Large Scale Integration) and generate asynchronous spiking output that represents sensing information in ways that are similar to neural signals. This allows for much lower power consumption due to an ability to extract useful sensory information from sparse captured data. The foundation for research in neuromorphic sensors was laid more than two decades ago, but recent developments in the understanding of biological sensing and in advanced electronics have stimulated research on sophisticated neuromorphic sensors that provide numerous advantages over conventional sensors. In this paper, we review the current state of the art in neuromorphic implementation of vision, auditory, and olfactory sensors and identify key contributions across these fields. Bringing together these key contributions, we suggest a future research direction for further development of the neuromorphic sensing field.
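The asynchronous spiking output of such sensors contrasts with frame-based sampling: events are emitted only when the input changes by more than a threshold, which is why redundant data and power are reduced. The sketch below shows a generic delta-modulation (threshold-crossing) encoder of the kind used conceptually in silicon retinas and cochleas; it is a conceptual illustration, not a model of any specific neuromorphic device, and the threshold value is an assumption.

    import numpy as np

    def delta_modulation_events(signal, threshold=0.1):
        """Emit (sample_index, polarity) events whenever the signal moves more than
        `threshold` away from the last event's reference level (ON = +1, OFF = -1).
        Unchanging input produces no events, so the output stays sparse."""
        events = []
        ref = signal[0]
        for i, x in enumerate(signal):
            while x - ref >= threshold:
                ref += threshold
                events.append((i, +1))
            while ref - x >= threshold:
                ref -= threshold
                events.append((i, -1))
        return events

    # A slowly varying input yields far fewer events than it has samples.
    t = np.linspace(0, 1, 1000)
    spikes = delta_modulation_events(np.sin(2 * np.pi * 2 * t), threshold=0.1)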
Cell-specific gain modulation by synaptically released zinc in cortical circuits of audition.
Anderson, Charles T; Kumar, Manoj; Xiong, Shanshan; Tzounopoulos, Thanos
2017-09-09
In many excitatory synapses, mobile zinc is found within glutamatergic vesicles and is coreleased with glutamate. Ex vivo studies established that synaptically released (synaptic) zinc inhibits excitatory neurotransmission at lower frequencies of synaptic activity but enhances steady state synaptic responses during higher frequencies of activity. However, it remains unknown how synaptic zinc affects neuronal processing in vivo. Here, we imaged the sound-evoked neuronal activity of the primary auditory cortex in awake mice. We discovered that synaptic zinc enhanced the gain of sound-evoked responses in CaMKII-expressing principal neurons, but it reduced the gain of parvalbumin- and somatostatin-expressing interneurons. This modulation was sound intensity-dependent and, in part, NMDA receptor-independent. By establishing a previously unknown link between synaptic zinc and gain control of auditory cortical processing, our findings advance understanding about cortical synaptic mechanisms and create a new framework for approaching and interpreting the role of the auditory cortex in sound processing.
Chang, Young-Soo; Hong, Sung Hwa; Kim, Eun Yeon; Choi, Ji Eun; Chung, Won-Ho; Cho, Yang-Sun; Moon, Il Joon
2018-05-18
Despite recent advances in the prediction of cochlear implant outcomes, the benefit of bilateral procedures compared to bimodal stimulation, and how to predict speech perception outcomes of sequential bilateral cochlear implantation in children based on bimodal auditory performance, remain unclear. This investigation was performed: (1) to determine the benefit of sequential bilateral cochlear implantation and (2) to identify factors associated with its outcome. Observational and retrospective study. We retrospectively analyzed 29 patients who received a sequential cochlear implant following a bimodal-fitting condition. Audiological evaluations comprised the categories of auditory performance scores, speech perception with monosyllabic and disyllabic words, and the Korean version of Ling. Audiological evaluations were performed before sequential cochlear implantation under the bimodal fitting condition (CI1+HA) and one year after sequential implantation under the bilateral cochlear implant condition (CI1+CI2). The good performance group (GP) was defined as follows: 90% or higher in monosyllabic and disyllabic word tests under the auditory-only condition, or 20% or higher improvement of the scores with CI1+CI2. Age at first implantation, inter-implant interval, categories of auditory performance score, and various comorbidities were analyzed by logistic regression analysis. Compared to CI1+HA, CI1+CI2 provided significant benefit in categories of auditory performance, speech perception, and Korean version of Ling results. The preoperative categories of auditory performance score was the only factor associated with being in the GP (odds ratio = 4.38, 95% confidence interval = 1.07-17.93, p = 0.04). Children with limited language development under the bimodal condition should be considered for sequential bilateral cochlear implantation, and the preoperative categories of auditory performance score could be used as a predictor of speech perception after sequential cochlear implantation. Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Audition and exposure to toluene - a contribution to the theme
Augusto, Lívia Sanches Calvi; Kulay, Luiz Alexandre; Franco, Eloisa Sartori
2012-01-01
Summary Introduction: With technological advances and changes in productive processes, workers are exposed to different physical and chemical agents in their work environment. Toluene is an organic solvent present in glues, inks and oils, among other products. Objective: To compare literature findings showing that workers exposed simultaneously to noise and solvents have a greater probability of developing hearing loss of peripheral origin. Method: Review of the literature on occupational hearing loss in workers exposed to noise and toluene. Results: Isolated exposure to toluene can also trigger an alteration of auditory thresholds. These audiometric findings, due to the ototoxicity of toluene exposure, produce audiograms similar to those resulting from noise exposure, which makes it difficult to differentiate an audiometric result of combined exposure (noise and toluene) from one of exposure to noise alone. Conclusion: Most of the studies were designed to generate hypotheses and should be considered preliminary steps toward additional research. To date, agents in the work environment and their effects have been studied in isolation, and the tolerance limits for these agents do not take combined exposures into account. Considering that workers are exposed to multiple agents and that hearing loss is irreversible, the tests implemented must be more complete, and all workers should take part in the hearing conservation program, even those exposed to doses below the recommended exposure limit. PMID:25991943
Walsh, Kyle P.; Pasanen, Edward G.; McFadden, Dennis
2014-01-01
In this study, a nonlinear version of the stimulus-frequency OAE (SFOAE), called the nSFOAE, was used to measure cochlear responses from human subjects while they simultaneously performed behavioral tasks requiring, or not requiring, selective auditory attention. Appended to each stimulus presentation, and included in the calculation of each nSFOAE response, was a 30-ms silent period that was used to estimate the level of the inherent physiological noise in the ear canals of our subjects during each behavioral condition. Physiological-noise magnitudes were higher (noisier) for all subjects in the inattention task, and lower (quieter) in the selective auditory-attention tasks. These noise measures initially were made at the frequency of our nSFOAE probe tone (4.0 kHz), but the same attention effects also were observed across a wide range of frequencies. We attribute the observed differences in physiological-noise magnitudes between the inattention and attention conditions to different levels of efferent activation associated with the differing attentional demands of the behavioral tasks. One hypothesis is that when the attentional demand is relatively great, efferent activation is relatively high, and a decrease in the gain of the cochlear amplifier leads to lower-amplitude cochlear activity, and thus a smaller measure of noise from the ear. PMID:24732069
Mazzoni, A; Zanoletti, E; Faccioli, C; Martini, A
2017-05-01
Intracochlear schwannomas can occur either as an extension of a larger tumor from the internal auditory canal, or as a solitary labyrinthine tumor. They are currently removed via a translabyrinthine approach extended to the basal turn, adding a transotic approach for tumors lying beyond the basal turn. Facial bridge cochleostomy may be associated with the translabyrinthine approach to enable the whole cochlea to be approached without sacrificing the external auditory canal and tympanum. We describe seven cases, five of which underwent cochlear schwannoma resection with facial bridge cochleostomy, one case with the same procedure for a suspect tumor and one, previously subjected to radical tympanomastoidectomy, who underwent schwannoma resection via a transotic approach. Facial bridge cochleostomy involved removing the bone between the labyrinthine and tympanic portions of the fallopian canal, and exposing the cochlea from the basal to the apical turn. Patients' recovery was uneventful, and long-term magnetic resonance imaging showed no residual tumor. Facial bridge cochleostomy can be a flexible extension of the translabyrinthine approach for tumors extending from the internal auditory canal to the cochlea. The transcanal approach is suitable for the primary exclusive intralabyrinthine tumor. The indications for the different approaches are discussed.
The effect of auditory verbal imagery on signal detection in hallucination-prone individuals
Moseley, Peter; Smailes, David; Ellison, Amanda; Fernyhough, Charles
2016-01-01
Cognitive models have suggested that auditory hallucinations occur when internal mental events, such as inner speech or auditory verbal imagery (AVI), are misattributed to an external source. This has been supported by numerous studies indicating that individuals who experience hallucinations tend to perform in a biased manner on tasks that require them to distinguish self-generated from non-self-generated perceptions. However, these tasks have typically been of limited relevance to inner speech models of hallucinations, because they have not manipulated the AVI that participants used during the task. Here, a new paradigm was employed to investigate the interaction between imagery and perception, in which a healthy, non-clinical sample of participants were instructed to use AVI whilst completing an auditory signal detection task. It was hypothesized that AVI-usage would cause participants to perform in a biased manner, therefore falsely detecting more voices in bursts of noise. In Experiment 1, when cued to generate AVI, highly hallucination-prone participants showed a lower response bias than when performing a standard signal detection task, being more willing to report the presence of a voice in the noise. Participants not prone to hallucinations performed no differently between the two conditions. In Experiment 2, participants were not specifically instructed to use AVI, but retrospectively reported how often they engaged in AVI during the task. Highly hallucination-prone participants who retrospectively reported using imagery showed a lower response bias than did participants with lower proneness who also reported using AVI. Results are discussed in relation to prominent inner speech models of hallucinations. PMID:26435050
Auditory and audio-vocal responses of single neurons in the monkey ventral premotor cortex.
Hage, Steffen R
2018-03-20
Monkey vocalization is a complex behavioral pattern, which is flexibly used in audio-vocal communication. A recently proposed dual neural network model suggests that cognitive control might be involved in this behavior, originating from a frontal cortical network in the prefrontal cortex and mediated via projections from the rostral portion of the ventral premotor cortex (PMvr) and motor cortex to the primary vocal motor network in the brainstem. For the rapid adjustment of vocal output to external acoustic events, strong interconnections between vocal motor and auditory sites are needed, which are present at cortical and subcortical levels. However, the role of the PMvr in audio-vocal integration processes remains unclear. In the present study, single neurons in the PMvr were recorded in rhesus monkeys (Macaca mulatta) while volitionally producing vocalizations in a visual detection task or passively listening to monkey vocalizations. Ten percent of randomly selected neurons in the PMvr modulated their discharge rate in response to acoustic stimulation with species-specific calls. More than four-fifths of these auditory neurons showed an additional modulation of their discharge rates either before and/or during the monkeys' motor production of the vocalization. Based on these audio-vocal interactions, the PMvr might be well positioned to mediate higher order auditory processing with cognitive control of the vocal motor output to the primary vocal motor network. Such audio-vocal integration processes in the premotor cortex might constitute a precursor for the evolution of complex learned audio-vocal integration systems, ultimately giving rise to human speech. Copyright © 2018 Elsevier B.V. All rights reserved.
Limbic-Auditory Interactions of Tinnitus: An Evaluation Using Diffusion Tensor Imaging.
Gunbey, H P; Gunbey, E; Aslan, K; Bulut, T; Unal, A; Incesu, L
2017-06-01
Tinnitus is defined as an imaginary subjective perception in the absence of an external sound. Convergent evidence proposes that tinnitus perception includes auditory, attentional and emotional components. The aim of this study was to investigate the thalamic, auditory and limbic interactions associated with tinnitus-related distress by Diffusion Tensor Imaging (DTI). A total of 36 tinnitus patients, 20 healthy controls underwent an audiological examination, as well as a magnetic resonance imaging protocol including structural and DTI sequences. All participants completed the Tinnitus Handicap Inventory (THI) and Visual Analog Scales (VAS) related with tinnitus. The fractional anisotropy (FA) and apparent diffusion coefficient (ADC) values were obtained for the auditory cortex (AC), inferior colliculus (IC), lateral lemniscus (LL), medial geniculate body (MGB), thalamic reticular nucleus (TRN), amygdala (AMG), hippocampus (HIP), parahippocampus (PHIP) and prefrontal cortex (PFC). In tinnitus patients the FA values of IC, MGB, TRN, AMG, HIP decreased and the ADC values of IC, MGB, TRN, AMG, PHIP increased significantly. The contralateral IC-LL and bilateral MGB FA values correlated negatively with hearing loss. A negative relation was found between the AMG-HIP FA values and THI and VAS scores. Bilateral ADC values of PHIP and PFC significantly correlated with the attention deficiency-VAS scores. In conclusion, this is the first DTI study to investigate the grey matter structures related to tinnitus perception and the significant correlation of FA and ADC with clinical parameters suggests that DTI can provide helpful information for tinnitus. Magnifying the microstructures in DTI can help evaluate the three faces of tinnitus nature: hearing, emotion and attention.
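For reference, the two DTI metrics reported here are computed from the eigenvalues \(\lambda_1, \lambda_2, \lambda_3\) of the fitted diffusion tensor; the expressions below are the standard definitions (the ADC reported clinically corresponds to the mean diffusivity).

    \mathrm{ADC} \approx \mathrm{MD} = \frac{\lambda_1 + \lambda_2 + \lambda_3}{3}

    \mathrm{FA} = \sqrt{\tfrac{3}{2}}\,
    \frac{\sqrt{(\lambda_1-\mathrm{MD})^2 + (\lambda_2-\mathrm{MD})^2 + (\lambda_3-\mathrm{MD})^2}}
         {\sqrt{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}}

FA therefore ranges from 0 (isotropic diffusion) to 1 (diffusion along a single axis), so the reduced FA and raised ADC reported for the auditory and limbic structures correspond to less directionally coherent, less restricted diffusion.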
Cognitive effects of rhythmic auditory stimulation in Parkinson's disease: A P300 study.
Lei, Juan; Conradi, Nadine; Abel, Cornelius; Frisch, Stefan; Brodski-Guerniero, Alla; Hildner, Marcel; Kell, Christian A; Kaiser, Jochen; Schmidt-Kassow, Maren
2018-05-16
Rhythmic auditory stimulation (RAS) may compensate for dysfunctions of the basal ganglia (BG), which are involved in the intrinsic evaluation of temporal intervals and in action initiation or continuation. In the cognitive domain, RAS containing periodically presented tones facilitates young healthy participants' attention allocation to anticipated time points, indicated by better performance and larger P300 amplitudes for periodic compared to random stimuli. Additionally, active auditory-motor synchronization (AMS) leads to a more precise temporal encoding of stimuli via embodied timing encoding than stimulus presentation adapted to the participants' actual movements. Here we investigated the effect of RAS and AMS in Parkinson's disease (PD). 23 PD patients and 23 healthy age-matched controls underwent an auditory oddball task. We manipulated the timing (periodic/random/adaptive) and setting (pedaling/sitting still) of stimulation. While patients showed a general timing effect, i.e., larger P300 amplitudes for periodic versus random tones in both the sitting and pedaling conditions, controls showed a timing effect only for the sitting but not for the pedaling condition. However, a correlation between P300 amplitudes and motor variability in the periodic pedaling condition was obtained in control participants only. We conclude that RAS facilitates attentional processing of temporally predictable external events in PD patients as well as healthy controls, but embodied timing encoding via body movement does not affect stimulus processing due to BG impairment in patients. Moreover, even with intact embodied timing encoding, as in healthy elderly participants, the effect of AMS depends on the degree of movement synchronization performance, which was very low in the current study. Copyright © 2018 Elsevier B.V. All rights reserved.
Resting-state brain networks revealed by granger causal connectivity in frogs.
Xue, Fei; Fang, Guangzhan; Yue, Xizi; Zhao, Ermi; Brauth, Steven E; Tang, Yezhong
2016-10-15
Resting-state networks (RSNs) refer to the spontaneous brain activity generated under resting conditions, which maintain the dynamic connectivity of functional brain networks for automatic perception or higher order cognitive functions. Here, Granger causal connectivity analysis (GCCA) was used to explore brain RSNs in the music frog (Babina daunchina) during different behavioral activity phases. The results reveal that a causal network in the frog brain can be identified during the resting state which reflects both brain lateralization and sexual dimorphism. Specifically (1) ascending causal connections from the left mesencephalon to both sides of the telencephalon are significantly higher than those from the right mesencephalon, while the right telencephalon gives rise to the strongest efferent projections among all brain regions; (2) causal connections from the left mesencephalon in females are significantly higher than those in males and (3) these connections are similar during both the high and low behavioral activity phases in this species although almost all electroencephalograph (EEG) spectral bands showed higher power in the high activity phase for all nodes. The functional features of this network match important characteristics of auditory perception in this species. Thus we propose that this causal network maintains auditory perception during the resting state for unexpected auditory inputs as resting-state networks do in other species. These results are also consistent with the idea that females are more sensitive to auditory stimuli than males during the reproductive season. In addition, these results imply that even when not behaviorally active, the frogs remain vigilant for detecting external stimuli. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
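Granger causal connectivity of the kind used here asks whether the past of one EEG channel improves prediction of another channel beyond that channel's own past. The sketch below shows the standard pairwise formulation as a log ratio of residual variances from restricted and full autoregressive models; it is a generic illustration under an assumed model order, not the specific GCCA implementation used in the study.

    import numpy as np

    def pairwise_granger(x, y, order=5):
        """Granger causality x -> y: F = ln(var(restricted) / var(full)),
        where the restricted AR model predicts y from its own past and the
        full model adds the past of x. F > 0 means x helps predict y."""
        n = len(y)
        target = y[order:]
        own_past = np.column_stack([y[order - k: n - k] for k in range(1, order + 1)])
        full_past = np.column_stack(
            [own_past] + [x[order - k: n - k] for k in range(1, order + 1)])
        res_r = target - own_past @ np.linalg.lstsq(own_past, target, rcond=None)[0]
        res_f = target - full_past @ np.linalg.lstsq(full_past, target, rcond=None)[0]
        return np.log(np.var(res_r) / np.var(res_f))

    # Example: y lags x by one sample, so causality x -> y should exceed y -> x.
    rng = np.random.default_rng(1)
    x = rng.standard_normal(2000)
    y = np.roll(x, 1) + 0.5 * rng.standard_normal(2000)
    f_xy, f_yx = pairwise_granger(x, y), pairwise_granger(y, x)

Applied pairwise across electrodes over the telencephalon, mesencephalon and other recording sites, such asymmetries in F are what yield the directed resting-state network described above.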
Error-dependent modulation of speech-induced auditory suppression for pitch-shifted voice feedback.
Behroozmand, Roozbeh; Larson, Charles R
2011-06-06
The motor-driven predictions about expected sensory feedback (efference copies) have been proposed to play an important role in recognition of sensory consequences of self-produced motor actions. In the auditory system, this effect was suggested to result in suppression of sensory neural responses to self-produced voices that are predicted by the efference copies during vocal production in comparison with passive listening to the playback of the identical self-vocalizations. In the present study, event-related potentials (ERPs) were recorded in response to upward pitch shift stimuli (PSS) with five different magnitudes (0, +50, +100, +200 and +400 cents) at voice onset during active vocal production and passive listening to the playback. Results indicated that the suppression of the N1 component during vocal production was largest for unaltered voice feedback (PSS: 0 cents), became smaller as the magnitude of PSS increased to 200 cents, and was almost completely eliminated in response to 400 cents stimuli. Findings of the present study suggest that the brain utilizes the motor predictions (efference copies) to determine the source of incoming stimuli and maximally suppresses the auditory responses to unaltered feedback of self-vocalizations. The reduction of suppression for 50, 100 and 200 cents and its elimination for 400 cents pitch-shifted voice auditory feedback support the idea that motor-driven suppression of voice feedback leads to distinctly different sensory neural processing of self vs. non-self vocalizations. This characteristic may enable the audio-vocal system to more effectively detect and correct for unexpected errors in the feedback of self-produced voice pitch compared with externally-generated sounds.
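For readers unfamiliar with the cent scale used for the pitch-shift stimuli, the relation between a shift in cents and the corresponding frequency ratio is given below; the example values simply restate the stimulus magnitudes used in the study.

    c = 1200 \log_2\!\left(\frac{f_\text{shifted}}{f_\text{original}}\right)
    \quad\Longleftrightarrow\quad
    f_\text{shifted} = f_\text{original}\, 2^{\,c/1200}

Thus +50 cents raises the feedback frequency by a factor of about 1.03 (half a semitone), +100 cents by about 1.06 (one semitone), +200 cents by about 1.12, and +400 cents, where the suppression was essentially abolished, by about 1.26 (an equal-tempered major third).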
Petersen, Christopher L; Hurley, Laura M
2017-10-01
Context is critical to the adaptive value of communication. Sensory systems such as the auditory system represent an important juncture at which information on physiological state or social valence can be added to communicative information. However, the neural pathways that convey context to the auditory system are not well understood. The serotonergic system offers an excellent model to address these types of questions. Serotonin fluctuates in the mouse inferior colliculus (IC), an auditory midbrain region important for species-specific vocalizations, during specific social and non-social contexts. Furthermore, serotonin is an indicator of the valence of event-based changes within individual social interactions. We propose a model in which the brain's social behavior network serves as an afferent effector of the serotonergic dorsal raphe nucleus in order to gate contextual release of serotonin in the IC. Specifically, discrete vasopressinergic nuclei within the hypothalamus and extended amygdala that project to the dorsal raphe are functionally engaged during contexts in which serotonin fluctuates in the IC. Since serotonin strongly influences the responses of IC neurons to social vocalizations, this pathway could serve as a feedback loop whereby integrative social centers modulate their own sources of input. The end result of this feedback would be to produce a process that is geared, from sensory input to motor output, toward responding appropriately to a dynamic external world. © The Author 2017. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.
Process Timing and Its Relation to the Coding of Tonal Harmony
ERIC Educational Resources Information Center
Aksentijevic, Aleksandar; Barber, Paul J.; Elliott, Mark A.
2011-01-01
Advances in auditory research suggest that gamma-band synchronization of frequency-specific cortical loci could be responsible for the integration of pure tones (harmonics) into harmonic complex tones. Thus far, evidence for such a mechanism has been revealed in neurophysiological studies, with little corroborative psychophysical evidence. In six…
Auditory Brainstem Responses and EMFs Generated by Mobile Phones.
Khullar, Shilpa; Sood, Archana; Sood, Sanjay
2013-12-01
There has been a manifold increase in the number of mobile phone users throughout the world, with the current number of users exceeding 2 billion. However, this advancement in technology, like many others, is accompanied by a progressive increase in the frequency and intensity of electromagnetic waves without consideration of the health consequences. The aim of our study was to advance our understanding of the potential adverse effects of GSM mobile phones on auditory brainstem responses (ABRs). Sixty subjects were selected for the study and divided into three groups of 20 each based on their usage of mobile phones. Their ABRs were recorded and analysed for the latency of waves I-V as well as interpeak latencies I-III, I-V and III-V (in ms). Results revealed no significant difference in the ABR parameters between group A (control group) and group B (subjects using mobile phones for a maximum of 30 min/day for 5 years). However, the latency of waves was significantly prolonged in group C (subjects using mobile phones for 10 years for a maximum of 30 min/day) compared to the control group. Based on our findings we conclude that long-term exposure to mobile phones may affect conduction in the peripheral portion of the auditory pathway. However, more research needs to be done to study the long-term effects of mobile phones, particularly of newer technologies like smartphones and 3G.
Differential effects of Cdh23(753A) on auditory and vestibular functional aging in C57BL/6J mice.
Mock, Bruce E; Vijayakumar, Sarath; Pierce, Jessica; Jones, Timothy A; Jones, Sherri M
2016-07-01
The C57BL/6J (B6) mouse strain carries a cadherin 23 mutation (Cdh23(753A), also known as Ahl), which affects inner ear structures and results in age-related hearing loss. The B6.CAST strain harbors the wild type Cdh23 gene, and hence, the influence of Ahl is absent. The purpose of the present study was to characterize the effect of age and gender on gravity receptor function in B6 and B6.CAST strains and to compare functional aging between auditory and vestibular modalities. Auditory sensitivity declined at significantly faster rates than gravity receptor sensitivity for both strains. Indeed, vestibular functional aging was minimal for both strains. The comparatively smaller loss of macular versus cochlear sensitivity in both the B6 and B6.CAST strains suggests that the contribution of Ahl to the aging of the vestibular system is minimal, and thus very different than its influence on aging of the auditory system. Alternatively, there exist unidentified genes or gene modifiers that serve to slow the degeneration of gravity receptor structures and maintain gravity receptor sensitivity into advanced age. Copyright © 2016 Elsevier Inc. All rights reserved.
Guo, Zhiqiang; Wu, Xiuqin; Li, Weifeng; Jones, Jeffery A; Yan, Nan; Sheft, Stanley; Liu, Peng; Liu, Hanjun
2017-10-25
Although working memory (WM) is considered as an emergent property of the speech perception and production systems, the role of WM in sensorimotor integration during speech processing is largely unknown. We conducted two event-related potential experiments with female and male young adults to investigate the contribution of WM to the neurobehavioural processing of altered auditory feedback during vocal production. A delayed match-to-sample task that required participants to indicate whether the pitch feedback perturbations they heard during vocalizations in test and sample sequences matched, elicited significantly larger vocal compensations, larger N1 responses in the left middle and superior temporal gyrus, and smaller P2 responses in the left middle and superior temporal gyrus, inferior parietal lobule, somatosensory cortex, right inferior frontal gyrus, and insula compared with a control task that did not require memory retention of the sequence of pitch perturbations. On the other hand, participants who underwent extensive auditory WM training produced suppressed vocal compensations that were correlated with improved auditory WM capacity, and enhanced P2 responses in the left middle frontal gyrus, inferior parietal lobule, right inferior frontal gyrus, and insula that were predicted by pretraining auditory WM capacity. These findings indicate that WM can enhance the perception of voice auditory feedback errors while inhibiting compensatory vocal behavior to prevent voice control from being excessively influenced by auditory feedback. This study provides the first evidence that auditory-motor integration for voice control can be modulated by top-down influences arising from WM, rather than modulated exclusively by bottom-up and automatic processes. SIGNIFICANCE STATEMENT One outstanding question that remains unsolved in speech motor control is how the mismatch between predicted and actual voice auditory feedback is detected and corrected. The present study provides two lines of converging evidence, for the first time, that working memory cannot only enhance the perception of vocal feedback errors but also exert inhibitory control over vocal motor behavior. These findings represent a major advance in our understanding of the top-down modulatory mechanisms that support the detection and correction of prediction-feedback mismatches during sensorimotor control of speech production driven by working memory. Rather than being an exclusively bottom-up and automatic process, auditory-motor integration for voice control can be modulated by top-down influences arising from working memory. Copyright © 2017 the authors 0270-6474/17/3710324-11$15.00/0.
Hamilton, Caroline; D'Arcy, Shona; Pearlmutter, Barak A; Crispino, Gloria; Lalor, Edmund C; Conlon, Brendan J
2016-12-01
Tinnitus is the perception of sound in the absence of an external auditory stimulus. It is widely believed that tinnitus, in patients with associated hearing loss, is a neurological phenomenon primarily affecting the central auditory structures. However, there is growing evidence for the involvement of the somatosensory system in this form of tinnitus. For this reason it has been suggested that the condition may be amenable to bi-modal stimulation of the auditory and somatosensory systems. We conducted a pilot study to investigate the feasibility and safety of a device that delivers simultaneous auditory and somatosensory stimulation to treat the symptoms of chronic tinnitus. A cohort of 54 patients used the stimulation device for 10 weeks. Auditory stimulation was delivered via headphones and somatosensory stimulation was delivered via electrical stimulation of the tongue. Patient usage, logged by the device, was used to classify patients as compliant or noncompliant. Safety was assessed by reported adverse events and changes in tinnitus outcome measures. Response to treatment was assessed using tinnitus outcome measures: Minimum Masking Level (MML), Tinnitus Loudness Matching (TLM), and Tinnitus Handicap Inventory (THI). The device was well tolerated by patients and no adverse events or serious difficulties using the device were reported. Overall, 68% of patients met the defined compliance threshold. Compliant patients (N = 30) demonstrated statistically significant improvements in mean outcome measures after 10 weeks of treatment: THI (-11.7 pts, p < 0.001), TLM (-7.5dB, p < 0.001), and MML (-9.7dB, p < 0.001). The noncompliant group (N = 14) demonstrated no statistical improvements. This study demonstrates the feasibility and safety of a new bi-modal stimulation device and supports the potential efficacy of this new treatment for tinnitus. © 2016 Neuromod Devices Ltd. Neuromodulation: Technology at the Neural Interface published by Wiley Periodicals, Inc. on behalf of International Neuromodulation Society.
Lailach, S; Zahnert, T
2016-12-01
The present article on the basics of ear surgery provides a short overview of current indications, the required diagnostics and the surgical procedures for common otologic diseases. In addition to plastic and reconstructive surgery of the auricle, principles of surgery of the external auditory canal, the basics of middle ear surgery and tumor surgery of the temporal bone are covered. Additionally, aspects of surgical hearing rehabilitation (excluding implantable hearing systems) are presented in light of current study results. Georg Thieme Verlag KG Stuttgart · New York.
First branchial cleft anomaly.
Al-Fallouji, M. A.; Butler, M. F.
1983-01-01
A 15-year-old girl presented with a cystic swelling since birth behind the ramus of the right mandible and diagnosed clinically as a dermoid cyst. Surgical exploration, however, showed that it was closely related to the external auditory canal, with an extension running medially behind the parotid gland and ending in the bony middle ear. The facial nerve was closely related to the deep part of the cyst. Such an anatomical position indicates that this was a first branchial cleft anomaly. Surgical excision of the cyst was performed. PMID:6622327
Tuberculous otitis media developing as a complication of tympanostomy tube insertion.
Kim, Chang Woo; Jin, Jae Won; Rho, Young-Soo
2007-03-01
Primary tuberculous otitis media of which infection focus cannot be found elsewhere in the body is a rare disease. Route of the infection has been hypothesized as Eustachian tube or external auditory canal with tympanic membrane perforation but it is hard to ascertain in the patient. We present a case of an 8-year-old child who suffered chronic otorrhea after tympanostomy tube insertion. The radiological and histopathological findings revealed tuberculous otitis media, which occurred as a complication of tympanostomy tube insertion.
Deshpande, Aniruddha K; Tan, Lirong; Lu, Long J; Altaye, Mekibib; Holland, Scott K
2018-05-01
The trends in cochlear implantation candidacy and benefit have changed rapidly in the last two decades. It is now widely accepted that early implantation leads to better postimplant outcomes. Although some generalizations can be made about postimplant auditory and language performance, neural mechanisms need to be studied to predict individual prognosis. The aim of this study was to use functional magnetic resonance imaging (fMRI) to identify preimplant neuroimaging biomarkers that predict children's postimplant auditory and language outcomes as measured by parental observation/reports. This is a pre-post correlational measures study. Twelve possible cochlear implant candidates with bilateral severe to profound hearing loss were recruited via referrals for a clinical magnetic resonance imaging to ensure structural integrity of the auditory nerve for implantation. Participants underwent cochlear implantation at a mean age of 19.4 mo. All children used the advanced combination encoder strategy (ACE, Cochlear Corporation™, Nucleus ® Freedom cochlear implants). Three participants received an implant in the right ear; one in the left ear whereas eight participants received bilateral implants. Participants' preimplant neuronal activation in response to two auditory stimuli was studied using an event-related fMRI method. Blood oxygen level dependent contrast maps were calculated for speech and noise stimuli. The general linear model was used to create z-maps. The Auditory Skills Checklist (ASC) and the SKI-HI Language Development Scale (SKI-HI LDS) were administered to the parents 2 yr after implantation. A nonparametric correlation analysis was implemented between preimplant fMRI activation and postimplant auditory and language outcomes based on ASC and SKI-HI LDS. Statistical Parametric Mapping software was used to create regression maps between fMRI activation and scores on the aforementioned tests. Regression maps were overlaid on the Imaging Research Center infant template and visualized in MRIcro. Regression maps revealed two clusters of brain activation for the speech versus silence contrast and five clusters for the noise versus silence contrast that were significantly correlated with the parental reports. These clusters included auditory and extra-auditory regions such as the middle temporal gyrus, supramarginal gyrus, precuneus, cingulate gyrus, middle frontal gyrus, subgyral, and middle occipital gyrus. Both positive and negative correlations were observed. Correlation values for the different clusters ranged from -0.90 to 0.95 and were significant at a corrected p value of <0.05. Correlations suggest that postimplant performance may be predicted by activation in specific brain regions. The results of the present study suggest that (1) fMRI can be used to identify neuroimaging biomarkers of auditory and language performance before implantation and (2) activation in certain brain regions may be predictive of postimplant auditory and language performance as measured by parental observation/reports. American Academy of Audiology.
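The regression step described here, relating preimplant activation in a cluster to a postimplant behavioural score across children, can be illustrated with a rank-based (Spearman) correlation, which is appropriate for a sample of twelve. The sketch below is a schematic of that step with made-up variable names and values; it is not the Statistical Parametric Mapping pipeline used in the study.

    import numpy as np
    from scipy.stats import spearmanr

    # Hypothetical per-child values: mean preimplant fMRI activation (z) in one
    # cluster, and the postimplant Auditory Skills Checklist (ASC) score.
    cluster_activation = np.array([0.8, 1.5, 0.2, 2.1, 1.1, 0.5,
                                   1.8, 0.9, 1.3, 0.4, 1.7, 0.6])
    asc_score = np.array([22, 30, 15, 34, 26, 18, 31, 24, 27, 16, 33, 20])

    rho, p_value = spearmanr(cluster_activation, asc_score)
    # A positive rho of this kind would indicate that stronger preimplant
    # activation in that cluster predicts better postimplant auditory skills;
    # a negative rho would correspond to the negative correlations also reported.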
Todd, Neil P. M.; Lee, Christopher S.
2015-01-01
Some 20 years ago Todd and colleagues proposed that rhythm perception is mediated by the conjunction of a sensory representation of the auditory input and a motor representation of the body (Todd, 1994a, 1995), and that a sense of motion from sound is mediated by the vestibular system (Todd, 1992a, 1993b). These ideas were developed into a sensory-motor theory of rhythm and beat induction (Todd et al., 1999). A neurological substrate was proposed which might form the biological basis of the theory (Todd et al., 2002). The theory was implemented as a computational model and a number of experiments conducted to test it. In the following time there have been several key developments. One is the demonstration that the vestibular system is primal to rhythm perception, and in related work several experiments have provided further evidence that rhythm perception is body dependent. Another is independent advances in imaging, which have revealed the brain areas associated with both vestibular processing and rhythm perception. A third is the finding that vestibular receptors contribute to auditory evoked potentials (Todd et al., 2014a,b). These behavioral and neurobiological developments demand a theoretical overview which could provide a new synthesis over the domain of rhythm perception. In this paper we suggest four propositions as the basis for such a synthesis. (1) Rhythm perception is a form of vestibular perception; (2) Rhythm perception evokes both external and internal guidance of somatotopic representations; (3) A link from the limbic system to the internal guidance pathway mediates the “dance habit”; (4) The vestibular reward mechanism is innate. The new synthesis provides an explanation for a number of phenomena not often considered by rhythm researchers. We discuss these along with possible computational implementations and alternative models and propose a number of new directions for future research. PMID:26379522
SPECT imaging in evaluating extent of malignant external otitis: case report
DOE Office of Scientific and Technical Information (OSTI.GOV)
English, R.J.; Tu'Meh, S.S.; Piwnica-Worms, D.
1987-03-01
Otitis externa, a benign inflammatory process of the external auditory canal, is generally responsive to local therapy. Some patients, however, develop a less controllable disease leading to chondritis and osteomyelitis of the base of the skull. The directly invasive character of the disease has led to the descriptive term malignant external otitis (MEO), more appropriately called necrotizing or invasive external otitis. Malignant external otitis is caused by an aggressive Pseudomonas or Proteus infection that almost exclusively occurs in elderly diabetic patients. The primary imaging modalities previously used in the diagnosis and evaluation of MEO were standard planar scintigraphic techniques with technetium-99m (99mTc) bone agents and gallium-67 (67Ga), and pluridirectional tomography. The advent of high-resolution computed tomography (CT) effectively allowed demonstration of the soft tissue extension and bone destruction associated with MEO, but still suffered from the low sensitivity constraints of all radiographic techniques in determining early inflammatory bone involvement. Recent work suggests that scintigraphic detection of MEO with 99mTc-MDP and 67Ga, combined with the cross-sectional resolution of single photon emission computed tomography (SPECT), may be of value in planning treatment of this inflammatory condition.
Tone Series and the Nature of Working Memory Capacity Development
ERIC Educational Resources Information Center
Clark, Katherine M.; Hardman, Kyle O.; Schachtman, Todd R.; Saults, J. Scott; Glass, Bret A.; Cowan, Nelson
2018-01-01
Recent advances in understanding visual working memory, the limited information held in mind for use in ongoing processing, are extended here to examine auditory working memory development. Research with arrays of visual objects has shown how to distinguish the capacity, in terms of the "number" of objects retained, from the…
Leech, Robert; Aydelott, Jennifer; Symons, Germaine; Carnevale, Julia; Dick, Frederic
2007-11-01
How does the development and consolidation of perceptual, attentional, and higher cognitive abilities interact with language acquisition and processing? We explored children's (ages 5-17) and adults' (ages 18-51) comprehension of morphosyntactically varied sentences under several competing speech conditions that varied in the degree of attentional demands, auditory masking, and semantic interference. We also evaluated the relationship between subjects' syntactic comprehension and their word reading efficiency and general 'speed of processing'. We found that the interactions between perceptual and attentional processes and complex sentence interpretation changed considerably over the course of development. Perceptual masking of the speech signal had an early and lasting impact on comprehension, particularly for more complex sentence structures. In contrast, increased attentional demand in the absence of energetic auditory masking primarily affected younger children's comprehension of difficult sentence types. Finally, the predictability of syntactic comprehension abilities by external measures of development and expertise is contingent upon the perceptual, attentional, and semantic milieu in which language processing takes place.
Magnetic stem cell targeting to the inner ear
NASA Astrophysics Data System (ADS)
Le, T. N.; Straatman, L.; Yanai, A.; Rahmanian, R.; Garnis, C.; Häfeli, U. O.; Poblete, T.; Westerberg, B. D.; Gregory-Evans, K.
2017-12-01
Severe sensorineural deafness is often accompanied by a loss of auditory neurons in addition to injury of the cochlear epithelium and hair cell loss. Cochlear implant function, however, depends on a healthy complement of neurons, and their preservation is vital in achieving optimal results. We have developed a technique to target mesenchymal stem cells (MSCs) to a deafened rat cochlea. We then assessed the neuroprotective effect of systemically delivered MSCs on the survival and function of spiral ganglion neurons (SGNs). MSCs were labeled with superparamagnetic nanoparticles, injected via the systemic circulation, and targeted using a magnetized cochlear implant and an external magnet. Neurotrophic factor concentrations, survival of SGNs, and auditory function were assessed at 1 week and 4 weeks after treatments and compared against multiple control groups. Significant numbers of magnetically targeted MSCs (>30 MSCs/section) were present in the cochlea with accompanying elevation of brain-derived neurotrophic factor and glial cell-derived neurotrophic factor levels (p < 0.001). In addition, we saw improved survival of SGNs (approximately 80% survival at 4 weeks). Hearing threshold levels in magnetically targeted rats were found to be significantly better than those of control rats (p < 0.05). These results indicate that magnetic targeting of MSCs to the cochlea can be accomplished with a magnetized cochlear permalloy implant and an external magnet. The targeted stem cells release neurotrophic factors, which results in improved SGN survival and hearing recovery. Combining magnetic cell-based therapy and cochlear implantation may improve cochlear implant function in treating deafness.
3D fiber deposited polymeric scaffolds for external auditory canal wall.
Mota, Carlos; Milazzo, Mario; Panetta, Daniele; Trombi, Luisa; Gramigna, Vera; Salvadori, Piero A; Giannotti, Stefano; Bruschini, Luca; Stefanini, Cesare; Moroni, Lorenzo; Berrettini, Stefano; Danti, Serena
2018-05-07
The external auditory canal (EAC) is an osseocartilaginous structure extending from the auricle to the eardrum, which can be affected by congenital, inflammatory, and neoplastic diseases; reconstructive materials are therefore needed. Current biomaterial-based approaches for the surgical reconstruction of the EAC posterior wall still suffer from resorption (biological materials) and extrusion (synthetic materials). In this study, 3D fiber deposited scaffolds based on poly(ethylene oxide terephthalate)/poly(butylene terephthalate) were designed and fabricated to replace the EAC wall. Fiber diameter and scaffold porosity were optimized, leading to 200 ± 33 µm and 55% ± 5%, respectively. The mechanical properties were evaluated, resulting in a Young's modulus of 25.1 ± 7.0 MPa. Finally, the EAC scaffolds were tested in vitro with osteo-differentiated human mesenchymal stromal cells (hMSCs) with different seeding methods to produce homogeneously colonized replacements of interest for otologic surgery. This study demonstrated the feasibility of fabricating EAC wall scaffolds via additive manufacturing, aimed at matching several important requirements for biomaterial application to the ear under the Tissue Engineering paradigm, including shape, porosity and pore size, surface area, mechanical properties, and favorable in vitro interaction with osteo-differentiated hMSCs.
Kouni, Sophia N; Giannopoulos, Sotirios; Ziavra, Nausika; Koutsojannis, Constantinos
2013-01-01
Acoustic signals are transmitted through the external and middle ear mechanically to the cochlea where they are transduced into electrical impulse for further transmission via the auditory nerve. The auditory nerve encodes the acoustic sounds that are conveyed to the auditory brainstem. Multiple brainstem nuclei, the cochlea, the midbrain, the thalamus, and the cortex constitute the central auditory system. In clinical practice, auditory brainstem responses (ABRs) to simple stimuli such as click or tones are widely used. Recently, complex stimuli or complex auditory brain responses (cABRs), such as monosyllabic speech stimuli and music, are being used as a tool to study the brainstem processing of speech sounds. We have used the classic 'click' as well as, for the first time, the artificial successive complex stimuli 'ba', which constitutes the Greek word 'baba' corresponding to the English 'daddy'. Twenty young adults institutionally diagnosed as dyslexic (10 subjects) or light dyslexic (10 subjects) comprised the diseased group. Twenty sex-, age-, education-, hearing sensitivity-, and IQ-matched normal subjects comprised the control group. Measurements included the absolute latencies of waves I through V, the interpeak latencies elicited by the classical acoustic click, the negative peak latencies of A and C waves, as well as the interpeak latencies of A-C elicited by the verbal stimulus 'baba' created on a digital speech synthesizer. The absolute peak latencies of waves I, III, and V in response to monoaural rarefaction clicks as well as the interpeak latencies I-III, III-V, and I-V in the dyslexic subjects, although increased in comparison with normal subjects, did not reach the level of a significant difference (p<0.05). However, the absolute peak latencies of the negative wave C and the interpeak latencies of A-C elicited by verbal stimuli were found to be increased in the dyslexic group in comparison with the control group (p=0.0004 and p=0.045, respectively). In the subgroup consisting of 10 patients suffering from 'other learning disabilities' and who were characterized as with 'light' dyslexia according to dyslexia tests, no significant delays were found in peak latencies A and C and interpeak latencies A-C in comparison with the control group. Acoustic representation of a speech sound and, in particular, the disyllabic word 'baba' was found to be abnormal, as low as the auditory brainstem. Because ABRs mature in early life, this can help to identify subjects with acoustically based learning problems and apply early intervention, rehabilitation, and treatment. Further studies and more experience with more patients and pathological conditions such as plasticity of the auditory system, cochlear implants, hearing aids, presbycusis, or acoustic neuropathy are necessary until this type of testing is ready for clinical application. © 2013 Elsevier Inc. All rights reserved.
Bendixen, Alexandra; Scharinger, Mathias; Strauß, Antje; Obleser, Jonas
2014-04-01
Speech signals are often compromised by disruptions originating from external (e.g., masking noise) or internal (e.g., inaccurate articulation) sources. Speech comprehension thus entails detecting and replacing missing information based on predictive and restorative neural mechanisms. The present study targets predictive mechanisms by investigating the influence of a speech segment's predictability on early, modality-specific electrophysiological responses to this segment's omission. Predictability was manipulated in simple physical terms in a single-word framework (Experiment 1) or in more complex semantic terms in a sentence framework (Experiment 2). In both experiments, final consonants of the German words Lachs ([laks], salmon) or Latz ([lats], bib) were occasionally omitted, resulting in the syllable La ([la], no semantic meaning), while brain responses were measured with multi-channel electroencephalography (EEG). In both experiments, the occasional presentation of the fragment La elicited a larger omission response when the final speech segment had been predictable. The omission response occurred ∼125-165 msec after the expected onset of the final segment and showed characteristics of the omission mismatch negativity (MMN), with generators in auditory cortical areas. Suggestive of a general auditory predictive mechanism at work, this main observation was robust against varying source of predictive information or attentional allocation, differing between the two experiments. Source localization further suggested the omission response enhancement by predictability to emerge from left superior temporal gyrus and left angular gyrus in both experiments, with additional experiment-specific contributions. These results are consistent with the existence of predictive coding mechanisms in the central auditory system, and suggestive of the general predictive properties of the auditory system to support spoken word recognition. Copyright © 2014 Elsevier Ltd. All rights reserved.
Pinto, Hyorrana Priscila Pereira; Carvalho, Vinícius Rezende; Medeiros, Daniel de Castro; Almeida, Ana Flávia Santos; Mendes, Eduardo Mazoni Andrade Marçal; Moraes, Márcio Flávio Dutra
2017-04-07
Epilepsy is a neurological disease related to the occurrence of pathological oscillatory activity, but the basic physiological mechanisms of seizure remain to be understood. Our working hypothesis is that specific sensory processing circuits may present abnormally enhanced predisposition for coordinated firing in the dysfunctional brain. Such facilitated entrainment could share a similar mechanistic process as those expediting the propagation of epileptiform activity throughout the brain. To test this hypothesis, we employed the Wistar audiogenic rat (WAR) reflex animal model, which is characterized by having seizures triggered reliably by sound. Sound stimulation was modulated in amplitude to produce an auditory steady-state-evoked response (ASSR; ~53.71 Hz) that covers bottom-up and top-down processing in a time scale compatible with the dynamics of the epileptic condition. Data from inferior colliculus (IC) c-Fos immunohistochemistry and electrographic recordings were gathered for both the control Wistar group and WARs. Under 85-dB SPL auditory stimulation, compared to controls, the WARs presented a higher number of Fos-positive cells (at the IC and auditory temporal lobe) and a significant increase in ASSR-normalized energy. Similarly, the 110-dB SPL sound stimulation also statistically increased ASSR-normalized energy during ictal and post-ictal periods. However, at the transition from the physiological to pathological state (pre-ictal period), the WAR ASSR analysis demonstrated a decline in normalized energy and a significant increase in circular variance values compared to that of controls. These results indicate an enhanced coordinated firing state for WARs, except immediately before seizure onset (suggesting pre-ictal neuronal desynchronization with the external sensory drive). These results suggest a competing myriad of interferences among different networks that after seizure onset converge to a massive oscillatory circuit. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
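The two ASSR measures named above can be illustrated with a short sketch. The code below is illustrative only (not the authors' analysis pipeline); it assumes epoched recordings and the ~53.71 Hz modulation rate reported, and estimates the normalized energy at the modulation frequency together with the across-trial circular variance of its phase.

    import numpy as np

    def assr_metrics(trials, fs, mod_freq=53.71):
        # trials: (n_trials, n_samples) epochs time-locked to the AM sound; fs in Hz
        n = trials.shape[1]
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        k = np.argmin(np.abs(freqs - mod_freq))          # FFT bin nearest the modulation rate
        spectra = np.fft.rfft(trials, axis=1)
        target = spectra[:, k]
        # normalized energy: power at the modulation frequency over total power, per trial
        norm_energy = np.abs(target) ** 2 / np.sum(np.abs(spectra) ** 2, axis=1)
        # circular variance of the phase at the modulation frequency across trials
        circ_var = 1.0 - np.abs(np.mean(np.exp(1j * np.angle(target))))
        return norm_energy.mean(), circ_var

A circular variance near 1 indicates weak phase alignment with the external drive, the pattern reported here for the pre-ictal period.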
Stevenson, Ryan A; Fister, Juliane Krueger; Barnett, Zachary P; Nidiffer, Aaron R; Wallace, Mark T
2012-05-01
In natural environments, human sensory systems work in a coordinated and integrated manner to perceive and respond to external events. Previous research has shown that the spatial and temporal relationships of sensory signals are paramount in determining how information is integrated across sensory modalities, but in ecologically plausible settings, these factors are not independent. In the current study, we provide a novel exploration of the impact on behavioral performance for systematic manipulations of the spatial location and temporal synchrony of a visual-auditory stimulus pair. Simple auditory and visual stimuli were presented across a range of spatial locations and stimulus onset asynchronies (SOAs), and participants performed both a spatial localization and simultaneity judgment task. Response times in localizing paired visual-auditory stimuli were slower in the periphery and at larger SOAs, but most importantly, an interaction was found between the two factors, in which the effect of SOA was greater in peripheral as opposed to central locations. Simultaneity judgments also revealed a novel interaction between space and time: individuals were more likely to judge stimuli as synchronous when occurring in the periphery at large SOAs. The results of this study provide novel insights into (a) how the speed of spatial localization of an audiovisual stimulus is affected by location and temporal coincidence and the interaction between these two factors and (b) how the location of a multisensory stimulus impacts judgments concerning the temporal relationship of the paired stimuli. These findings provide strong evidence for a complex interdependency between spatial location and temporal structure in determining the ultimate behavioral and perceptual outcome associated with a paired multisensory (i.e., visual-auditory) stimulus.
Error-dependent modulation of speech-induced auditory suppression for pitch-shifted voice feedback
2011-01-01
Background The motor-driven predictions about expected sensory feedback (efference copies) have been proposed to play an important role in recognition of sensory consequences of self-produced motor actions. In the auditory system, this effect was suggested to result in suppression of sensory neural responses to self-produced voices that are predicted by the efference copies during vocal production in comparison with passive listening to the playback of the identical self-vocalizations. In the present study, event-related potentials (ERPs) were recorded in response to upward pitch shift stimuli (PSS) with five different magnitudes (0, +50, +100, +200 and +400 cents) at voice onset during active vocal production and passive listening to the playback. Results Results indicated that the suppression of the N1 component during vocal production was largest for unaltered voice feedback (PSS: 0 cents), became smaller as the magnitude of PSS increased to 200 cents, and was almost completely eliminated in response to 400 cents stimuli. Conclusions Findings of the present study suggest that the brain utilizes the motor predictions (efference copies) to determine the source of incoming stimuli and maximally suppresses the auditory responses to unaltered feedback of self-vocalizations. The reduction of suppression for 50, 100 and 200 cents and its elimination for 400 cents pitch-shifted voice auditory feedback support the idea that motor-driven suppression of voice feedback leads to distinctly different sensory neural processing of self vs. non-self vocalizations. This characteristic may enable the audio-vocal system to more effectively detect and correct for unexpected errors in the feedback of self-produced voice pitch compared with externally-generated sounds. PMID:21645406
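For reference, pitch-shift magnitudes expressed in cents map onto feedback frequency ratios by a fixed relation (1200 cents per octave). The minimal sketch below, using the stimulus magnitudes listed above, shows the conversion.

    def cents_to_ratio(cents):
        # ratio between shifted and original fundamental frequency
        return 2.0 ** (cents / 1200.0)

    for shift in (0, 50, 100, 200, 400):   # PSS magnitudes used in the study
        print(shift, round(cents_to_ratio(shift), 4))
    # +400 cents multiplies the voice fundamental by about 1.26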
Immediate source-monitoring, self-focused attention and the positive symptoms of schizophrenia.
Startup, Mike; Startup, Sue; Sedgman, Adele
2008-10-01
Previous research suggests that tendencies to misattribute one's own thoughts to an external source, as assessed by an immediate source-monitoring test, are associated with auditory verbal hallucinations (AVHs). However, recent research suggests that such tendencies are associated instead with symptoms of thought interference. The main aim of the present study was to examine whether such tendencies are differentially associated with different types of thought interference, with AVHs, or with both. It has also been suggested that external misattributions are especially likely to occur with emotionally salient material and if the individual's focus is on the self. These suggestions were also tested. The positive psychotic symptoms of 57 individuals with a diagnosis of schizophrenia were assessed and they then completed the Self-Focus Sentence Completion blank. Immediately after completing each sentence they were asked to indicate to what extent the sentence was their own. The number of sentences that were not rated as completely their own served as their externalization score. Externalization scores correlated significantly with the severity of three symptoms: voices commenting, delusions of being controlled, and thought insertion. In a logistic regression analysis, all three of these symptoms were significantly and independently related to externalization. Externalization was not associated with either a negative or a neutral self-focus. Thus tendencies to misattribute one's own thoughts to an external source are associated with AVHs and some, but not all, symptoms of thought interference. The importance for externalization of self-focused attention and of the emotional salience of the elicited thoughts was not supported.
Cook, Peter; Rouse, Andrew; Wilson, Margaret; Reichmuth, Colleen
2013-11-01
Is the ability to entrain motor activity to a rhythmic auditory stimulus, that is "keep a beat," dependent on neural adaptations supporting vocal mimicry? That is the premise of the vocal learning and synchronization hypothesis, recently advanced to explain the basis of this behavior (A. Patel, 2006, Musical Rhythm, Linguistic Rhythm, and Human Evolution, Music Perception, 24, 99-104). Prior to the current study, only vocal mimics, including humans, cockatoos, and budgerigars, have been shown to be capable of motoric entrainment. Here we demonstrate that a less vocally flexible animal, a California sea lion (Zalophus californianus), can learn to entrain head bobbing to an auditory rhythm meeting three criteria: a behavioral response that does not reproduce the stimulus; performance transfer to a range of novel tempos; and entrainment to complex, musical stimuli. These findings show that the capacity for entrainment of movement to rhythmic sounds does not depend on a capacity for vocal mimicry, and may be more widespread in the animal kingdom than previously hypothesized.
A Review of Current Neuromorphic Approaches for Vision, Auditory, and Olfactory Sensors
Vanarse, Anup; Osseiran, Adam; Rassau, Alexander
2016-01-01
Conventional vision, auditory, and olfactory sensors generate large volumes of redundant data and as a result tend to consume excessive power. To address these shortcomings, neuromorphic sensors have been developed. These sensors mimic the neuro-biological architecture of sensory organs using aVLSI (analog Very Large Scale Integration) and generate asynchronous spiking output that represents sensing information in ways that are similar to neural signals. This allows for much lower power consumption due to an ability to extract useful sensory information from sparse captured data. The foundation for research in neuromorphic sensors was laid more than two decades ago, but recent developments in the understanding of biological sensing and advanced electronics have stimulated research on sophisticated neuromorphic sensors that provide numerous advantages over conventional sensors. In this paper, we review the current state-of-the-art in neuromorphic implementation of vision, auditory, and olfactory sensors and identify key contributions across these fields. Bringing together these key contributions, we suggest a future research direction for further development of the neuromorphic sensing field. PMID:27065784
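As a hedged illustration of the event-driven output such sensors produce (not an implementation from the review), a send-on-delta encoder emits asynchronous ON/OFF events only when the input changes by a set threshold, rather than streaming redundant samples:

    def send_on_delta(samples, threshold=0.1):
        # emit (time_index, polarity) events whenever the signal moves by >= threshold
        events, ref = [], samples[0]
        for t, x in enumerate(samples[1:], start=1):
            while x - ref >= threshold:       # ON events for upward changes
                ref += threshold
                events.append((t, +1))
            while ref - x >= threshold:       # OFF events for downward changes
                ref -= threshold
                events.append((t, -1))
        return events

    print(send_on_delta([0.0, 0.05, 0.25, 0.3, 0.1]))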
Unpredicted Pitch Modulates Beta Oscillatory Power during Rhythmic Entrainment to a Tone Sequence.
Chang, Andrew; Bosnyak, Dan J; Trainor, Laurel J
2016-01-01
Extracting temporal regularities in external stimuli in order to predict upcoming events is an essential aspect of perception. Fluctuations in induced power of beta band (15-25 Hz) oscillations in auditory cortex are involved in predictive timing during rhythmic entrainment, but whether such fluctuations are affected by prediction in the spectral (frequency/pitch) domain remains unclear. We tested whether unpredicted (i.e., unexpected) pitches in a rhythmic tone sequence modulate beta band activity by recording EEG while participants passively listened to isochronous auditory oddball sequences with occasional unpredicted deviant pitches at two different presentation rates. The results showed that the power in low-beta (15-20 Hz) was larger around 200-300 ms following deviant tones compared to standard tones, and this effect was larger when the deviant tones were less predicted. Our results suggest that the induced beta power activities in auditory cortex are consistent with a role in sensory prediction of both "when" (timing) upcoming sounds will occur as well as the prediction precision error of "what" (spectral content in this case). We suggest, further, that both timing and content predictions may co-modulate beta oscillations via attention. These findings extend earlier work on neural oscillations by investigating the functional significance of beta oscillations for sensory prediction. The findings help elucidate the functional significance of beta oscillations in perception.
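A minimal sketch of the kind of analysis described, with assumed filter settings: band-pass the EEG epochs to low-beta (15-20 Hz), take the Hilbert envelope, and average power in a 200-300 ms post-tone window (for strictly induced power, the trial-averaged evoked response is usually subtracted before filtering).

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def low_beta_power(epochs, fs, onset_idx, win=(0.200, 0.300), band=(15.0, 20.0)):
        # epochs: (n_trials, n_samples) time-locked to tone onset at sample onset_idx
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, epochs, axis=1)
        envelope = np.abs(hilbert(filtered, axis=1))       # instantaneous low-beta amplitude
        i0 = onset_idx + int(win[0] * fs)
        i1 = onset_idx + int(win[1] * fs)
        return (envelope[:, i0:i1] ** 2).mean()            # mean power, 200-300 ms post-tone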
The sense of agency is action-effect causality perception based on cross-modal grouping.
Kawabe, Takahiro; Roseboom, Warrick; Nishida, Shin'ya
2013-07-22
Sense of agency, the experience of controlling external events through one's actions, stems from contiguity between action- and effect-related signals. Here we show that human observers link their action- and effect-related signals using a computational principle common to cross-modal sensory grouping. We first report that the detection of a delay between tactile and visual stimuli is enhanced when both stimuli are synchronized with separate auditory stimuli (experiment 1). This occurs because the synchronized auditory stimuli hinder the potential grouping between tactile and visual stimuli. We subsequently demonstrate an analogous effect on observers' key press as an action and a sensory event. This change is associated with a modulation in sense of agency; namely, sense of agency, as evaluated by apparent compressions of action-effect intervals (intentional binding) or subjective causality ratings, is impaired when both participant's action and its putative visual effect events are synchronized with auditory tones (experiments 2 and 3). Moreover, a similar role of action-effect grouping in determining sense of agency is demonstrated when the additional signal is presented in the modality identical to an effect event (experiment 4). These results are consistent with the view that sense of agency is the result of general processes of causal perception and that cross-modal grouping plays a central role in these processes.
Tidoni, Emmanuele; Gergondet, Pierre; Fusco, Gabriele; Kheddar, Abderrahmane; Aglioti, Salvatore M
2017-06-01
The efficient control of our body and successful interaction with the environment are possible through the integration of multisensory information. Brain-computer interface (BCI) may allow people with sensorimotor disorders to actively interact in the world. In this study, visual information was paired with auditory feedback to improve the BCI control of a humanoid surrogate. Healthy and spinal cord injured (SCI) people were asked to embody a humanoid robot and complete a pick-and-place task by means of a visual evoked potentials BCI system. Participants observed the remote environment from the robot's perspective through a head mounted display. Human-footsteps and computer-beep sounds were used as synchronous/asynchronous auditory feedback. Healthy participants achieved better placing accuracy when listening to human footstep sounds relative to a computer-generated sound. SCI people demonstrated more difficulty in steering the robot during asynchronous auditory feedback conditions. Importantly, subjective reports highlighted that the BCI mask overlaying the display did not limit the observation of the scenario and the feeling of being in control of the robot. Overall, the data seem to suggest that sensorimotor-related information may improve the control of external devices. Further studies are required to understand how the contribution of residual sensory channels could improve the reliability of BCI systems.
Computer-Based Auditory Training Programs for Children with Hearing Impairment - A Scoping Review.
Nanjundaswamy, Manohar; Prabhu, Prashanth; Rajanna, Revathi Kittur; Ningegowda, Raghavendra Gulaganji; Sharma, Madhuri
2018-01-01
Introduction Communication breakdown, a consequence of hearing impairment (HI), has been fought by fitting amplification devices and providing auditory training since the inception of audiology. The advances in both audiology and rehabilitation programs have led to the advent of computer-based auditory training programs (CBATPs). Objective To review the existing literature documenting the evidence-based CBATPs for children with HI. Since there was only one such article, we also chose to review the commercially available CBATPs for children with HI. The strengths and weaknesses of the existing literature were reviewed in order to guide further research. Data Synthesis Google Scholar and PubMed databases were searched using various combinations of keywords. The participant, intervention, control, outcome and study design (PICOS) criteria were used for the inclusion of articles. Out of 124 article abstracts reviewed, 5 studies were shortlisted for detailed reading. One among them satisfied all the criteria and was taken for review. The commercially available programs were chosen based on an extensive search in Google. The reviewed article was well-structured, with appropriate outcomes. The commercially available programs cover many aspects of auditory training through a wide range of stimuli and activities. Conclusions There is a dire need for extensive research in the field of CBATPs to establish their efficacy and to establish them as evidence-based practices.
A case of direct intracranial extension of tuberculous otitis media.
Kim, Dong-Kee; Park, Shi-Nae; Park, Kyung-Ho; Yeo, Sang Won
2014-02-01
We describe a very rare case of tuberculous otitis media (TOM) with direct intracranial extension. The patient was a 55-year-old man who presented to our ENT clinic for evaluation of severe headaches and right-sided otorrhea. A biopsy of granulation tissue obtained from the right external auditory canal demonstrated chronic inflammation that was suggestive of mycobacterial infection. Magnetic resonance imaging of the brain indicated intracranial extension of TOM through a destroyed tegmen mastoideum. After 2 months of antituberculous medication, the headaches and otorrhea were controlled, and the swelling in the external ear canal subsided greatly. Rarely does TOM spread intracranially. In most such cases, intracranial extension of tuberculosis occurs as the result of hematogenous or lymphogenous spread. In rare cases, direct spread through destroyed bone can occur, as it did in our patient.
Representation Elements of Spatial Thinking
NASA Astrophysics Data System (ADS)
Fiantika, F. R.
2017-04-01
This paper aims to add a reference for revealing spatial thinking. There are several definitions of spatial thinking, but it is not easy to define; we can start by discussing the concept and its basis in forming representations. Initially, the five senses capture natural phenomena and forward them to memory for processing. Abstraction plays a role in processing information into a concept. There are two types of representation, namely internal representation and external representation. The internal representation is also known as mental representation; this representation exists in the human mind. The external representation may include images, auditory, and kinesthetic forms, which can be used to describe, explain, and communicate the structure, operation, and function of an object as well as its relationships. There are two main elements: representation properties and object relationships. These elements play a role in forming a representation.
Effect of Auditory Constraints on Motor Performance Depends on Stage of Recovery Post-Stroke
Aluru, Viswanath; Lu, Ying; Leung, Alan; Verghese, Joe; Raghavan, Preeti
2014-01-01
In order to develop evidence-based rehabilitation protocols post-stroke, one must first reconcile the vast heterogeneity in the post-stroke population and develop protocols to facilitate motor learning in the various subgroups. The main purpose of this study is to show that auditory constraints interact with the stage of recovery post-stroke to influence motor learning. We characterized the stages of upper limb recovery using task-based kinematic measures in 20 subjects with chronic hemiparesis. We used a bimanual wrist extension task, performed with a custom-made wrist trainer, to facilitate learning of wrist extension in the paretic hand under four auditory conditions: (1) without auditory cueing; (2) to non-musical happy sounds; (3) to self-selected music; and (4) to a metronome beat set at a comfortable tempo. Two bimanual trials (15 s each) were followed by one unimanual trial with the paretic hand over six cycles under each condition. Clinical metrics, wrist and arm kinematics, and electromyographic activity were recorded. Hierarchical cluster analysis with the Mahalanobis metric based on baseline speed and extent of wrist movement stratified subjects into three distinct groups, which reflected their stage of recovery: spastic paresis, spastic co-contraction, and minimal paresis. In spastic paresis, the metronome beat increased wrist extension, but also increased muscle co-activation across the wrist. In contrast, in spastic co-contraction, no auditory stimulation increased wrist extension and reduced co-activation. In minimal paresis, wrist extension did not improve under any condition. The results suggest that auditory task constraints interact with stage of recovery during motor learning after stroke, perhaps due to recruitment of distinct neural substrates over the course of recovery. The findings advance our understanding of the mechanisms of progression of motor recovery and lay the foundation for personalized treatment algorithms post-stroke. PMID:25002859
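The stratification step described above can be sketched as follows, assuming two kinematic features per subject; the data here are placeholders and the call pattern is illustrative, not the study's code.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    baseline_speed = rng.normal(size=20)                  # placeholder data, 20 subjects
    wrist_extent = rng.normal(size=20)                    # placeholder data

    features = np.column_stack([baseline_speed, wrist_extent])
    d = pdist(features, metric="mahalanobis")             # pairwise Mahalanobis distances
    tree = linkage(d, method="average")                   # hierarchical clustering
    groups = fcluster(tree, t=3, criterion="maxclust")    # cut into three recovery subgroups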
Nishikimi, Kyoko; Tate, Shinichi; Matsuoka, Ayumu; Shozu, Makio
2017-08-01
Locally advanced ovarian carcinomas may be fixed to the pelvic sidewall, and although these often involve the internal iliac vessels, they rarely involve the external iliac vessels. Such tumors are mostly considered inoperable. We present a surgical technique for complete resection of locally advanced ovarian carcinoma fixed to the pelvic sidewall and involving external and internal iliac vessels. A 69-year-old woman presented with ovarian carcinoma fixed to the right pelvic sidewall, which involved the right external and internal iliac arteries and veins and the right lower ureter, rectum, and vagina. We cut the external iliac artery and vein at the bifurcation and at the inguinal ligament to resect the external artery and vein. Then, we reconstructed the arterial and venous supplies of the right external artery and vein with grafts. After creating a wide space immediately inside of the sacral plexus to allow the tumor fixed to pelvic sidewall with the internal iliac vessels to move medially, we performed total internal iliac vessel resection. We achieved complete en bloc tumor resection with the right external and internal artery and vein, right ureter, vagina, and rectum adhering to the tumor. There were no intra- or postoperative complications, such as bleeding, graft occlusion, infection, or limb edema. Exfoliation from the sacral plexus and total resection with external and internal iliac vessels enables complete resection of the tumor fixed to the pelvic sidewall. Copyright © 2017 Elsevier Inc. All rights reserved.
Dense Neighborhoods and Mechanisms of Learning: Evidence from Children with Phonological Delay
ERIC Educational Resources Information Center
Gierut, Judith A.; Morrisette, Michele L.
2015-01-01
There is a noted advantage of dense neighborhoods in language acquisition, but the learning mechanism that drives the effect is not well understood. Two hypotheses--long-term auditory word priming and phonological working memory--have been advanced in the literature as viable accounts. These were evaluated in two treatment studies enrolling twelve…
A Temporal Model of Level-Invariant, Tone-in-Noise Detection
ERIC Educational Resources Information Center
Berg, Bruce G.
2004-01-01
Level-invariant detection refers to findings that thresholds in tone-in-noise detection are unaffected by roving-level procedures that degrade energy cues. Such data are inconsistent with ideas that detection is based on the energy passed by an auditory filter. A hypothesis that detection is based on a level-invariant temporal cue is advanced.…
Fungistatic activity of some perfumes against otomycotic pathogens.
Jain, S K; Agrawal, S C
2002-04-01
The sporostatic effect of nine different perfumes on five otomycotic pathogens, i.e. Aspergillus niger, A. flavus, Absidia corymbifera, Penicillium nigricans and Candida albicans, was determined on the basis of spore germination. These organisms were isolated from patients suffering from fungal infection of the external auditory canal. Volatile vapours emanating from musk, phulwari, jasmine, nagchampa and bela caused approximately 100% inhibition of spore germination in all the test fungi. Volatiles emanating from chandan, khas and hina showed no inhibition of the test pathogens, indicating the pathogens' resistance to these perfumes.
The Cellular Basis of a Corollary Discharge
NASA Astrophysics Data System (ADS)
Poulet, James F. A.; Hedwig, Berthold
2006-01-01
How do animals discriminate self-generated from external stimuli during behavior and prevent desensitization of their sensory pathways? A fundamental concept in neuroscience states that neural signals, termed corollary discharges or efference copies, are forwarded from motor to sensory areas. Neurons mediating these signals have proved difficult to identify. We show that a single, multisegmental interneuron is responsible for the pre- and postsynaptic inhibition of auditory neurons in singing crickets (Gryllus bimaculatus). Therefore, this neuron represents a corollary discharge interneuron that provides a neuronal basis for the central control of sensory responses.
[Antimycotic therapy in otomycosis with tympanic membrane perforation].
Dyckhoff, G; Hoppe-Tichy, T; Kappe, R; Dietz, A
2000-01-01
Otomycosis is not rare, especially after prolonged topical antibiotic therapy. Inoculation of fungi into the tympanic cavity, however, may have serious sequelae; eradication of fungi from the external auditory canal is therefore imperative before surgery. In addition to thorough cleaning of the outer ear canal, antimycotic preparations are recommended for treating otomycosis. However, all of the commercially available ear drops contain ototoxic agents, and in the case of tympanic membrane defects, damage to the inner ear may result. As an alternative, we suggest an aqueous solution of miconazole 0.5%.
Effect of Auditory-Perceptual Training With Natural Voice Anchors on Vocal Quality Evaluation.
Dos Santos, Priscila Campos Martins; Vieira, Maurílio Nunes; Sansão, João Pedro Hallack; Gama, Ana Cristina Côrtes
2018-01-10
To analyze the effects of auditory-perceptual training with anchor stimuli of natural voices on inter-rater agreement during the assessment of vocal quality. This is a quantitative study. An auditory-perceptual training site was developed consisting of Programming Interface A, an auditory training activity, and Programming Interface B, a control activity. Each interface had three stages: pre-training/pre-interval evaluation, training/interval, and post-training/post-interval evaluation. Two experienced evaluators classified 381 voices according to the GRBASI scale (G-grade, R-roughness, B-breathiness, A-asthenia, S-strain, I-instability). Voices that received the same evaluation by both evaluators were selected: 57 voices for evaluation and 56 for training, with varying degrees of deviation across parameters. Fifteen inexperienced evaluators were then selected. In the pre-training, post-training, pre-interval, and post-interval stages, evaluators listened to the voices and classified them using the GRBASI scale. In the interval stage, evaluators read a text; in the training stage, each parameter was trained separately. Evaluators analyzed the degrees of deviation of the GRBASI parameters based on anchor stimuli, and could only advance after correctly classifying the voices. To quantify inter-rater agreement and provide statistical analyses, the AC1 coefficient, confidence intervals, and percentage variation of agreement were employed. Except for the asthenia parameter, decreased agreement was observed in the control condition. Improved agreement was observed with auditory training, but this improvement did not achieve statistical significance. Training with natural voice anchors suggests increased inter-rater agreement during perceptual voice analysis, potentially indicating that new internal references were established. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
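The AC1 coefficient mentioned above (Gwet's chance-corrected agreement) can be illustrated in its two-rater form; the study's multi-rater analysis generalizes this, so the sketch below is illustrative only, with made-up ratings.

    import numpy as np

    def gwet_ac1(r1, r2, categories):
        # r1, r2: ratings from two raters over the same voices (e.g., GRBASI degrees 0-3)
        r1, r2 = np.asarray(r1), np.asarray(r2)
        pa = np.mean(r1 == r2)                                              # observed agreement
        pi = np.array([(np.mean(r1 == c) + np.mean(r2 == c)) / 2 for c in categories])
        pe = np.sum(pi * (1 - pi)) / (len(categories) - 1)                  # chance agreement
        return (pa - pe) / (1 - pe)

    print(gwet_ac1([0, 1, 2, 2, 3], [0, 1, 2, 3, 3], categories=[0, 1, 2, 3]))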
Dynamic speech representations in the human temporal lobe.
Leonard, Matthew K; Chang, Edward F
2014-09-01
Speech perception requires rapid integration of acoustic input with context-dependent knowledge. Recent methodological advances have allowed researchers to identify underlying information representations in primary and secondary auditory cortex and to examine how context modulates these representations. We review recent studies that focus on contextual modulations of neural activity in the superior temporal gyrus (STG), a major hub for spectrotemporal encoding. Recent findings suggest a highly interactive flow of information processing through the auditory ventral stream, including influences of higher-level linguistic and metalinguistic knowledge, even within individual areas. Such mechanisms may give rise to more abstract representations, such as those for words. We discuss the importance of characterizing representations of context-dependent and dynamic patterns of neural activity in the approach to speech perception research. Copyright © 2014 Elsevier Ltd. All rights reserved.
Schabus, Manuel; Dang-Vu, Thien Thanh; Heib, Dominik Philip Johannes; Boly, Mélanie; Desseilles, Martin; Vandewalle, Gilles; Schmidt, Christina; Albouy, Geneviève; Darsaud, Annabelle; Gais, Steffen; Degueldre, Christian; Balteau, Evelyne; Phillips, Christophe; Luxen, André; Maquet, Pierre
2012-01-01
The present study aimed at identifying the neurophysiological responses associated with auditory stimulation during non-rapid eye movement (NREM) sleep using simultaneous electroencephalography (EEG)/functional magnetic resonance imaging (fMRI) recordings. It was reported earlier that auditory stimuli produce bilateral activation in auditory cortex, thalamus, and caudate during both wakefulness and NREM sleep. However, due to spontaneous membrane potential fluctuations, cortical responses may be highly variable during NREM. Here we examine the modulation of cerebral responses to tones depending on the presence or absence of sleep spindles and the phase of the slow oscillation. Thirteen healthy young subjects were scanned successfully during stage 2-4 NREM sleep in the first half of the night in a 3 T scanner. Subjects were not sleep-deprived, and sounds were post hoc classified according to (i) the presence of sleep spindles or (ii) the phase of the slow oscillation during (±300 ms) tone delivery. These detected sounds were then entered as regressors of interest in fMRI analyses. Interestingly, wake-like responses - although somewhat altered in size and location - persisted during NREM sleep, except in the presence of spindles (as previously published in Dang-Vu et al., 2011) and during the negative-going phase of the slow oscillation, during which responses became less consistent or even absent. While the phase of the slow oscillation did not alter brain responses in primary sensory cortex, it did modulate responses at higher cortical levels. In addition, EEG analyses show a distinct N550 response to tones during the presence of light sleep spindles and suggest that in deep NREM sleep the brain is more responsive during the positive-going slope of the slow oscillation. The presence of short temporal windows during which the brain is open to external stimuli is consistent with the fact that even during deep sleep meaningful events can be detected. Altogether, our results emphasize the notion that spontaneous fluctuations of brain activity profoundly modify brain responses to external information across all behavioral states, including deep NREM sleep.
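The post hoc classification of tones by slow-oscillation phase could proceed roughly as sketched below; the 0.5-1.25 Hz band and the slope convention are assumptions for illustration, not the study's exact pipeline.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def classify_tones_by_so_phase(eeg, fs, tone_samples, band=(0.5, 1.25)):
        # eeg: single-channel recording; tone_samples: sample indices of tone onsets
        b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        slow = filtfilt(b, a, eeg)                        # slow-oscillation component
        phase = np.angle(hilbert(slow))                   # instantaneous phase
        slope = np.gradient(slow)                         # positive- vs negative-going slope
        labels = np.where(slope[tone_samples] > 0, "up_slope", "down_slope")
        return labels, phase[tone_samples]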
Villena-González, Mario; López, Vladimir; Rodríguez, Eugenio
2016-05-15
When attention is oriented toward inner thoughts, as spontaneously occurs during mind wandering, the processing of external information is attenuated. However, the potential effects of thought content on sensory attenuation are still unknown. The present study aims to assess whether the representational format of thoughts, such as visual imagery or inner speech, might differentially affect the sensory processing of external stimuli. We recorded the brain activity of 20 participants (12 women) while they were exposed to a probe visual stimulus in three different conditions: executing a task on the visual probe (externally oriented attention), and two conditions involving inward-turned attention, i.e. generating inner speech and performing visual imagery. Event-related potential results showed that the P1 amplitude, related to sensory response, was significantly attenuated during both tasks involving inward attention compared with the external task. When both representational formats were compared, the visual imagery condition showed stronger attenuation of sensory processing than the inner speech condition. Alpha power in visual areas was measured as an index of cortical inhibition. Larger alpha amplitude was found when participants engaged in internal thought compared with the external task, with visual imagery showing even more alpha power than the inner speech condition. Our results show, for the first time to our knowledge, that visual attentional processing of external stimuli during self-generated thoughts is differentially affected by the representational format of the ongoing train of thoughts. Copyright © 2016 Elsevier Inc. All rights reserved.
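A minimal sketch of the alpha-power index described above, assuming an 8-12 Hz band and a single posterior channel; the band limits and window length are illustrative choices, not the study's reported settings.

    import numpy as np
    from scipy.signal import welch

    def alpha_power(signal, fs, band=(8.0, 12.0)):
        freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))   # 2-s Welch segments
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return np.trapz(psd[mask], freqs[mask])                  # integrated alpha-band power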
Sample, Camille H.; Martin, Ashley A.; Jones, Sabrina; Hargrave, Sara L.; Davidson, Terry L.
2015-01-01
In western and westernized societies, large portions of the population live in what are considered to be “obesogenic” environments. Among other things, obesogenic environments are characterized by a high prevalence of external cues that are associated with highly palatable, energy-dense foods. One prominent hypothesis suggests that these external cues become such powerful conditioned elicitors of appetitive and eating behavior that they overwhelm the internal, physiological mechanisms that serve to maintain energy balance. The present research investigated a learning mechanism that may underlie this loss of internal relative to external control. In Experiment 1, rats were provided with both auditory cues (external stimuli) and varying levels of food deprivation (internal stimuli) that they could use to solve a simple discrimination task. Despite having access to clearly discriminable external cues, we found that the deprivation cues gained substantial discriminative control over conditioned responding. Experiment 2 found that, compared to standard chow, maintenance on a “western-style” diet high in saturated fat and sugar weakened discriminative control by food deprivation cues, but did not impair learning when external cues were also trained as relevant discriminative signals for sucrose. Thus, eating a western-style diet contributed to a loss of internal control over appetitive behavior relative to external cues. We discuss how this relative loss of control by food deprivation signals may result from interference with hippocampal-dependent learning and memory processes, forming the basis of a vicious-cycle of excessive intake, body weight gain, and progressive cognitive decline that may begin very early in life. PMID:26002280
Malignant otitis externa in a healthy non-diabetic patient.
Liu, Xiao-Long; Peng, Hong; Mo, Ting-Ting; Liang, Yong
2016-08-01
A healthy 60-year-old male was initially treated for external otitis, and subsequently received multiple surgeries including abscess drainage, temporal bone debridement, canaloplasty of the external auditory meatus, and fistula excision and was treated with numerous antibiotics at another hospital over a 1-year period. He was seen at our hospital on February 14, 2014 with a complaint of a non-healing wound behind the left ear and drainage of purulent fluid. He had no history of diabetes mellitus or compromised immune function. Computed tomography (CT) and magnetic resonance imaging (MRI) studies at our hospital showed osteomyelitis involving the left temporal, occipital, and sphenoid bones, the mandible, and an epidural abscess. Routine blood testing and tests of immune function were normal, and no evidence of other infectious processes was found. He was diagnosed with malignant otitis externa (MOE). Bone debridement and incision and drainage of the epidural abscess were performed, and vancomycin was administered because culture results revealed Corynebacterium jeikeium, Corynebacterium xerosis, and Enterococcus faecalis. MOE should be considered in healthy patients with external otitis who fail initial treatment.
Rochester, Lynn; Baker, Katherine; Nieuwboer, Alice; Burn, David
2011-02-15
Independence of certain gait characteristics from dopamine replacement therapy highlights the complex pathophysiology of gait in Parkinson's disease (PD). We explored the effect of two different cue strategies on gait characteristics in relation to their response to dopaminergic medications. Fifty people with PD (age 69.22 ± 6.6 years) were studied. Participants walked with and without cues presented in a randomized order. Cue strategies were: (1) internal cue (attention to increase step length) and (2) external cue (auditory cue with instruction to take a large step to the beat). Testing was carried out twice at home (on and off medication). Gait was measured using a Stride Analyzer (B&L Engineering). Gait outcomes were walking speed, stride length, step frequency, and coefficient of variation (CV) of stride time and double limb support duration (DLS). Walking speed, stride length, and stride time CV improved on dopaminergic medications, whereas step frequency and DLS CV did not. Internal and external cues increased stride time and walking speed (on and off dopaminergic medications). Only the external cue significantly improved stride time CV and DLS CV, whereas the internal cue had no effect (on and off dopaminergic medications). Internal and external cues selectively modify gait characteristics in relation to the type of gait disturbance and its dopa-responsiveness. Although internal (attention) and external cues target dopaminergic gait dysfunction (stride length), only external cues target stride-to-stride fluctuations in gait. Despite an overlap with dopaminergic pathways, external cues may effectively address nondopaminergic gait dysfunction and potentially increase mobility and reduce gait instability and falls. Copyright © 2010 Movement Disorder Society.
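The coefficient of variation used for stride time and double limb support duration is simply the standard deviation expressed as a percentage of the mean, as in this minimal sketch with made-up stride times.

    import numpy as np

    def coefficient_of_variation(values):
        x = np.asarray(values, dtype=float)
        return 100.0 * x.std(ddof=1) / x.mean()

    print(coefficient_of_variation([1.10, 1.08, 1.12, 1.15, 1.07]))   # stride times in s, ~3%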
Durai, Mithila; O'Keeffe, Mary G; Searchfield, Grant D
2017-03-01
Existing evidence suggests a strong relationship between tinnitus and emotion. The objective of this study was to examine the effects of short-term emotional changes along valence and arousal dimensions on tinnitus outcomes. Emotional stimuli were presented in two different modalities: auditory and visual. The authors hypothesized that (1) negative valence (unpleasant) stimuli and/or high arousal stimuli will lead to greater tinnitus loudness and annoyance than positive valence and/or low arousal stimuli, and (2) auditory emotional stimuli, which are in the same modality as the tinnitus, will exhibit a greater effect on tinnitus outcome measures than visual stimuli. Auditory and visual emotive stimuli were administered to 22 participants (12 females and 10 males) with chronic tinnitus, recruited via email invitations sent out to the University of Auckland Tinnitus Research Volunteer Database. Emotional stimuli used were taken from the International Affective Digital Sounds - Version 2 (IADS-2) and the International Affective Picture System (IAPS) (Bradley and Lang, 2007a, 2007b). The Emotion Regulation Questionnaire (Gross and John, 2003) was administered alongside subjective ratings of tinnitus loudness and annoyance, and psychoacoustic sensation level matches to external sounds. Males had significantly different emotional regulation scores than females. Negative valence emotional auditory stimuli led to higher tinnitus loudness ratings in males and females and higher annoyance ratings in males only; loudness matches of tinnitus remained unchanged. The visual stimuli did not have an effect on tinnitus ratings. The results are discussed relative to the Adaptation Level Theory Model of Tinnitus. The results indicate that the negative valence dimension of emotion is associated with increased tinnitus magnitude judgements and that gender effects may also be present, but only when the emotional stimulus is in the auditory modality. Sounds with emotional associations may be used in sound therapy for tinnitus relief; it is of interest to determine whether the emotional component of sound treatments can play a role in reversing the negative responses discussed in this paper. Copyright © 2016 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Kuo, Li-Li
2009-01-01
Declared the year of YouTube, 2007 was hailed as bringing a technological revolution in relation to pedagogy, one that may provide more convenient access to materials for language input, such as auditory, visual, and other types of authentic resources in order to promote advancement in all four language learning skills--listening, speaking,…
Attentional models of multitask pilot performance using advanced display technology.
Wickens, Christopher D; Goh, Juliana; Helleberg, John; Horrey, William J; Talleur, Donald A
2003-01-01
In the first part of the reported research, 12 instrument-rated pilots flew a high-fidelity simulation, in which air traffic control presentation of auditory (voice) information regarding traffic and flight parameters was compared with advanced display technology presentation of equivalent information regarding traffic (cockpit display of traffic information) and flight parameters (data link display). Redundant combinations were also examined while pilots flew the aircraft simulation, monitored for outside traffic, and read back communications messages. The data suggested a modest cost for visual presentation over auditory presentation, a cost mediated by head-down visual scanning, and no benefit for redundant presentation. The effects in Part 1 were modeled by multiple-resource and preemption models of divided attention. In the second part of the research, visual scanning in all conditions was fit by an expected value model of selective attention derived from a previous experiment. This model accounted for 94% of the variance in the scanning data and 90% of the variance in a second validation experiment. Actual or potential applications of this research include guidance on choosing the appropriate modality for presenting in-cockpit information and understanding task strategies induced by introducing new aviation technology.
Comparison of different speech tasks among adults who stutter and adults who do not stutter
Ritto, Ana Paula; Costa, Julia Biancalana; Juste, Fabiola Staróbole; de Andrade, Claudia Regina Furquim
2016-01-01
OBJECTIVES: In this study, we compared the performance of both fluent speakers and people who stutter in three different speaking situations: monologue speech, oral reading and choral reading. This study follows the assumption that the neuromotor control of speech can be influenced by external auditory stimuli in both speakers who stutter and speakers who do not stutter. METHOD: Seventeen adults who stutter and seventeen adults who do not stutter were assessed in three speaking tasks: monologue, oral reading (solo reading aloud) and choral reading (reading in unison with the evaluator). Speech fluency and rate were measured for each task. RESULTS: The participants who stuttered had a lower frequency of stuttering during choral reading than during monologue and oral reading. CONCLUSIONS: According to the dual premotor system model, choral speech enhanced fluency by providing external cues for the timing of each syllable compensating for deficient internal cues. PMID:27074176
2018-01-01
Background First branchial cleft anomalies (FBCA) are rare clinical entities of the head and neck. Typically, the tract of the FBCA begins in the external auditory canal and ends in the postauricular or submandibular region. Case Presentation We present a case of a 23-year-old man who had a first branchial cleft fistula with atypical opening on the root of the helical crus. Complete excision of the tract, including the cuff of surrounding cartilage, was performed. Histopathology revealed a fistular tract lined with squamous epithelium. To our knowledge, this is the first case to be reported of type I FBCA with an opening on the root of the helical crus. The low incidence and varied presentation often result in misdiagnosis and inappropriate treatment. Conclusions In the patients with FBCA, careful recognition of atypical variants is essential for complete excision. PMID:29560006
Nagashino, Hirofumi; Kinouchi, Yohsuke; Danesh, Ali A; Pandya, Abhijit S
2013-01-01
Tinnitus is the perception of sound in the ears or in the head when no external source is present. Sound therapy is one of the most effective techniques that have been proposed for tinnitus treatment. To investigate the mechanisms of tinnitus generation and the clinical effects of sound therapy, we have previously proposed conceptual and computational models with plasticity based on a neural oscillator or a neuronal network. In the present paper, we propose a neuronal network model that incorporates a simplified tonotopic organization of the auditory system as a more detailed structure. The model employs integrate-and-fire neurons and incorporates homeostatic plasticity. Computer simulations show that the model can reproduce the generation of an oscillation and its cessation by external input. This suggests that the present framework is promising as a model of tinnitus generation and of the effects of sound therapy.
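As a rough illustration of this class of model, the sketch below is a minimal, hypothetical example rather than the authors' network: the reduction to a single neuron, the homeostatic rule, and every parameter value are assumptions chosen only to show the mechanism by which firing can emerge without external input and be suppressed when input is restored.

```python
# A minimal, hypothetical sketch (not the authors' network): a single leaky
# integrate-and-fire neuron whose synaptic gain is regulated by a slow
# homeostatic rule. With the external ("sound") input off, the gain creeps up
# until spontaneous, tinnitus-like firing appears; turning the input on drives
# the gain back down, after which spontaneous firing is transiently suppressed.
# All parameter values are assumptions chosen only for illustration.
import numpy as np

def simulate(duration_s=6.0, dt=1e-4, sound_on=(2.0, 4.0)):
    n = int(duration_s / dt)
    v, v_rest, v_thresh, v_reset = -65.0, -65.0, -50.0, -65.0
    tau_m = 0.02             # membrane time constant (s)
    gain = 1.0               # synaptic gain, adapted homeostatically
    target_rate = 20.0       # homeostatic target firing rate (Hz)
    tau_h, eta = 0.5, 0.1    # rate-estimate time constant (s) and learning rate
    rate_est = 0.0
    spikes = np.zeros(n, dtype=bool)
    for i in range(n):
        t = i * dt
        external = 20.0 if sound_on[0] <= t < sound_on[1] else 0.0  # sound-therapy input (mV)
        internal = 5.0                                              # weak intrinsic drive (mV)
        v += dt / tau_m * (v_rest - v + gain * (external + internal))
        if v >= v_thresh:
            v, spikes[i] = v_reset, True
        rate_est += dt / tau_h * (spikes[i] / dt - rate_est)         # low-pass firing-rate estimate
        gain = max(gain + dt * eta * (target_rate - rate_est), 0.0)  # slow homeostatic adjustment
    return spikes

spikes = simulate()
print("spikes per second:", [int(s.sum()) for s in np.array_split(spikes, 6)])
```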
Mulazimoglu, S; Flury, R; Kapila, S; Linder, T
2017-04-01
A distinct nerve innervating the external auditory canal can often be identified in close relation to the facial nerve when gradually thinning the posterior canal wall. This nerve has been implicated in coughing during cerumen removal, neuralgic pain, Hitselberger's sign and the vesicular eruptions described in Ramsay Hunt's syndrome. This study aimed to demonstrate the origin and clinical impact of this nerve. In patients with intractable otalgia or severe coughing whilst inserting a hearing aid, who responded temporarily to local anaesthesia, the symptoms could be resolved by sectioning a sensory branch to the posterior canal. Dissection of a temporal bone specimen revealed that this nerve is predominantly a continuation of Arnold's nerve, also receiving fibres from the glossopharyngeal and facial nerves. Histologically, the communicating branch from the facial nerve was confirmed. Surgeons should be aware of the posterior auricular sensory branch and its clinical implications.
Tympanic plate fractures in temporal bone trauma: prevalence and associated injuries.
Wood, C P; Hunt, C H; Bergen, D C; Carlson, M L; Diehn, F E; Schwartz, K M; McKenzie, G A; Morreale, R F; Lane, J I
2014-01-01
The prevalence of tympanic plate fractures, which are associated with an increased risk of external auditory canal stenosis following temporal bone trauma, is unknown. A review of posttraumatic high-resolution CT temporal bone examinations was performed to determine the prevalence of tympanic plate fractures and to identify any associated temporal bone injuries. A retrospective review was performed to evaluate patients with head trauma who underwent emergent high-resolution CT examinations of the temporal bone from July 2006 to March 2012. Fractures were identified and assessed for orientation; involvement of the tympanic plate, scutum, bony labyrinth, facial nerve canal, and temporomandibular joint; and ossicular chain disruption. Thirty-nine patients (41.3 ± 17.2 years of age) had a total of 46 temporal bone fractures (7 bilateral). Tympanic plate fractures were identified in 27 (58.7%) of these 46 fractures. Ossicular disruption occurred in 17 (37.0%). Fractures involving the scutum occurred in 25 (54.4%). None of the 46 fractured temporal bones had a mandibular condyle dislocation or fracture. Of the 27 cases of tympanic plate fractures, 14 (51.8%) had ossicular disruption (P = .016) and 18 (66.6%) had a fracture of the scutum (P = .044). Temporomandibular joint gas was seen in 15 (33%) but was not statistically associated with tympanic plate fracture (P = .21). Tympanic plate fractures are commonly seen on high-resolution CT performed for evaluation of temporal bone trauma. It is important to recognize these fractures to avoid the preventable complication of external auditory canal stenosis and the potential for conductive hearing loss due to a fracture involving the scutum or ossicular chain.
Neoadjuvant chemotherapy in technically unresectable carcinoma of external auditory canal
Joshi, Amit; Tandon, Nidhi; Noronha, Vanita; Dhumal, Sachin; Patil, Vijay; Arya, Supreeta; Juvekar, Shashikant; Agarwal, Jaiprakash; DCruz, Anil; Pai, Prathmesh; Prabhash, Kumar
2015-01-01
Background: Carcinoma of the external auditory canal (EAC) is a very rare malignancy, with surgical resection as the main modality of treatment. The outcomes with nonsurgical modalities are very dismal. We present a retrospective analysis of 4 patients evaluating the role of neoadjuvant chemotherapy in technically unresectable cancers. Materials and Methods: This is a retrospective analysis of 4 patients with carcinoma of the EAC treated at our institute from 2010 to 2014 who were deemed unfit for surgery due to extensive disease involving the occipital bone with soft tissue infiltration (n = 2), the temporal dura (n = 1), or the left temporal lobe with extensive soft tissue involvement (n = 1). All these patients received neoadjuvant chemotherapy with docetaxel, cisplatin, and 5-fluorouracil (n = 3) or paclitaxel and cisplatin (n = 1). Results: Response evaluation by Response Evaluation Criteria in Solid Tumors showed a partial response (PR) in 3 patients and stable disease (SD) in 1 patient. All 3 patients who received the three-drug chemotherapy had a PR, while the 1 patient who received the two-drug chemotherapy had SD. Two of these patients underwent surgery, and the other 2 underwent definitive chemoradiation. One of the 3 patients who achieved a PR underwent surgical resection; the other 2 remained unresectable in view of persistent intradural extension and infratemporal fossa involvement. The one patient who had SD could undergo surgery in view of clearance of the infratemporal fossa. Recent follow-up shows that 3 of these 4 patients are alive. Conclusion: This indicates that there may be a role for induction chemotherapy in converting potentially unresectable tumors to resectable disease, which could produce better outcomes in carcinoma of the EAC. PMID:26855526
McDonald, Skye; Dalton, Katie I; Rushby, Jacqueline A; Landin-Romero, Ramon
2018-06-14
Adults with severe traumatic brain injury (TBI) often suffer poor social cognition. Social cognition is complex, requiring verbal, non-verbal, auditory, visual and affective input and integration. While damage to focal temporal and frontal areas has been implicated in disorders of social cognition after TBI, the role of white matter pathology has not been examined. In this study, 17 adults with chronic, severe TBI and 17 control participants underwent structural MRI scans and Diffusion Tensor Imaging. The Awareness of Social Inference Test (TASIT) was used to assess their ability to understand emotional states, thoughts, intentions and conversational meaning in everyday exchanges. Tract-based spatial statistics were used to perform voxelwise analysis of Fractional Anisotropy (FA) and Mean Diffusivity (MD) of white matter tracts associated with poor social cognitive performance. FA suggested that a wide range of tracts were implicated in poor TASIT performance, including tracts known to mediate auditory localisation (planum temporale), communication between nonverbal and verbal processes in general (corpus callosum) and memory in particular (fornix), as well as tracts and structures associated with semantics and verbal recall (left temporal lobe and hippocampus), multimodal processing and integration (thalamus, external capsule, cerebellum), and social cognition (orbitofrontal cortex, frontopolar cortex, right temporal lobe). Even when controlling for non-social cognition, the corpus callosum, fornix, bilateral thalamus, right external capsule and right temporal lobe remained significant contributors to social cognitive performance. This study highlights the importance of loss of white matter connectivity in producing complex social information processing deficits after TBI.
Clinical characteristics of keratosis obturans and external auditory canal cholesteatoma.
Park, So Young; Jung, Young Hoon; Oh, Jeong-Hoon
2015-02-01
Keratosis obturans (KO) and external auditory canal cholesteatoma (EACC) have been considered separate entities. While the disorders are distinct, they share many overlapping characteristics, making a correct diagnosis difficult. In the present study, we compared their clinical characteristics and radiological features to clarify the diagnostic criteria. Retrospective case series. Academic medical center. The clinical data of 23 cases of EACC and KO were retrospectively reviewed. The following clinical characteristics were compared between the 2 groups: sex, age, onset of symptoms, follow-up period, audiometric results, and imaging findings on temporal bone computed tomography including bilaterality, location, and the presence of extension to adjacent tissue. The mean age of the EACC group was significantly older than that of the KO group. All of the cases of EACC occurred unilaterally, and bilateral occurrences of KO were observed in 4 of 9 cases. All of the lesions in the KO group were circumferential, and no lesion in the EACC group invaded the superior canal wall. No significant differences in symptoms, such as acute otalgia, otorrhea, and hearing loss, were noted between the 2 groups. The incidence of conductive hearing impairment more than 10 dB was higher in the KO group than in the EACC group. Thus, KO and EACC are 2 distinct disease entities that share common features in clinical characteristics except for predominant age and bilaterality. Conservative treatment with meticulous cleaning of the lesion was successful in most cases with a long-term follow-up. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2014.
Eye closure in darkness animates olfactory and gustatory cortical areas.
Wiesmann, M; Kopietz, R; Albrecht, J; Linn, J; Reime, U; Kara, E; Pollatos, O; Sakar, V; Anzinger, A; Fesl, G; Brückmann, H; Kobal, G; Stephan, T
2006-08-01
In two previous fMRI studies, it was reported that eyes-open and eyes-closed conditions in darkness had differential effects on brain activity, and typical patterns of cortical activity were identified. Without external stimulation, ocular motor and attentional systems were activated when the eyes were open. In contrast, the visual, somatosensory, vestibular, and auditory systems were activated when the eyes were closed. In this study, we investigated whether cortical areas related to the olfactory and gustatory system are also animated by eye closure without any other external stimulation. In a first fMRI experiment (n = 22), we identified cortical areas, including the piriform cortex, activated by olfactory stimulation. In a second experiment (n = 12), subjects lying in darkness in the MRI scanner alternately opened and closed their eyes. In accordance with previous studies, we found activation clusters bilaterally in visual, somatosensory, vestibular and auditory cortical areas for the contrast eyes-closed vs. eyes-open. In addition, we were able to show that cortical areas related to the olfactory and gustatory system were also animated by eye closure. These results support the hypothesis that there are two different states of mental activity: with the eyes closed, an "interoceptive" state characterized by imagination and multisensory activity, and with the eyes open, an "exteroceptive" state characterized by attention and ocular motor activity. Our study also suggests that the chosen baseline condition may have a considerable impact on activation patterns and on the interpretation of brain activation studies. This needs to be considered for studies of the olfactory and gustatory system.
Pane, Gianluca; Cacciola, Gabriele; Giacco, Elisabetta; Mariottini, Gian Luigi; Coppo, Erika
2015-01-01
External otitis is a diffuse inflammation of the external auditory canal and auricle, often caused by microbial infection. The disease is generally treated with antibiotics, but the frequent occurrence of antibiotic resistance requires the development of new antibiotic agents. In this context, unexplored bioactive natural candidates could offer an opportunity to produce targeted drugs with antimicrobial activity. In this paper, microbial pathogens were isolated from patients with external otitis using ear swabs over a period of more than one year, and the antimicrobial activity of two methanol extracts from selected marine (Dunaliella salina) and freshwater (Pseudokirchneriella subcapitata) microalgae was tested on the isolated pathogens. In total, 114 bacterial and 11 fungal strains were isolated, of which Staphylococcus spp. (28.8%) and Pseudomonas aeruginosa (P. aeruginosa) (24.8%) were the major pathogens. Only three Staphylococcus aureus (S. aureus) strains and 11 coagulase-negative staphylococci showed resistance to methicillin. The two algal extracts showed interesting antimicrobial properties, inhibiting the growth of the isolated S. aureus, P. aeruginosa, Escherichia coli, and Klebsiella spp. with MICs ranging from 1.4 × 10⁹ to 2.2 × 10¹⁰ cells/mL. These results suggest that the two algae have potential as resources for the development of antimicrobial agents. PMID:26492256
Emerging technologies with potential for objectively evaluating speech recognition skills.
Rawool, Vishakha Waman
2016-01-01
Work-related exposure to noise and other ototoxins can cause damage to the cochlea, synapses between the inner hair cells, the auditory nerve fibers, and higher auditory pathways, leading to difficulties in recognizing speech. Procedures designed to determine speech recognition scores (SRS) in an objective manner can be helpful in disability compensation cases where the worker claims to have poor speech perception due to exposure to noise or ototoxins. Such measures can also be helpful in determining SRS in individuals who cannot provide reliable responses to speech stimuli, including patients with Alzheimer's disease, traumatic brain injuries, and infants with and without hearing loss. Cost-effective neural monitoring hardware and software is being rapidly refined due to the high demand for neurogaming (games involving the use of brain-computer interfaces), health, and other applications. More specifically, two related advances in neuro-technology include relative ease in recording neural activity and availability of sophisticated analysing techniques. These techniques are reviewed in the current article and their applications for developing objective SRS procedures are proposed. Issues related to neuroaudioethics (ethics related to collection of neural data evoked by auditory stimuli including speech) and neurosecurity (preservation of a person's neural mechanisms and free will) are also discussed.
Planning music-based amelioration and training in infancy and childhood based on neural evidence.
Huotilainen, Minna; Tervaniemi, Mari
2018-05-04
Music-based amelioration and training of the developing auditory system has a long tradition, and recent neuroscientific evidence supports using music in this manner. Here, we present the available evidence showing that various music-related activities result in positive changes in brain structure and function, becoming helpful for auditory cognitive processes in everyday life situations for individuals with typical neural development and especially for individuals with hearing, learning, attention, or other deficits that may compromise auditory processing. We also compare different types of music-based training and show how their effects have been investigated with neural methods. Finally, we take a critical position on the multitude of error sources found in amelioration and training studies and on publication bias in the field. We discuss some future improvements of these issues in the field of music-based training and their potential results at the neural and behavioral levels in infants and children for the advancement of the field and for a more complete understanding of the possibilities and significance of the training. © 2018 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of New York Academy of Sciences.
Putkinen, Vesa; Tervaniemi, Mari; Saarikivi, Katri; Huotilainen, Minna
2015-03-01
Adult musicians show superior neural sound discrimination when compared to nonmusicians. However, it is unclear whether these group differences reflect the effects of experience or preexisting neural enhancement in individuals who seek out musical training. Tracking how brain function matures over time in musically trained and nontrained children can shed light on this issue. Here, we review our recent longitudinal event-related potential (ERP) studies that examine how formal musical training and less formal musical activities influence the maturation of brain responses related to sound discrimination and auditory attention. These studies found that musically trained school-aged children and preschool-aged children attending a musical playschool show more rapid maturation of neural sound discrimination than their control peers. Importantly, we found no evidence for pretraining group differences. In a related cross-sectional study, we found ERP and behavioral evidence for improved executive functions and control over auditory novelty processing in musically trained school-aged children and adolescents. Taken together, these studies provide evidence for the causal role of formal musical training and less formal musical activities in shaping the development of important neural auditory skills and suggest transfer effects with domain-general implications. © 2015 New York Academy of Sciences.
Bottom-up influences of voice continuity in focusing selective auditory attention
Bressler, Scott; Masud, Salwa; Bharadwaj, Hari; Shinn-Cunningham, Barbara
2015-01-01
Selective auditory attention causes a relative enhancement of the neural representation of important information and suppression of the neural representation of distracting sound, which enables a listener to analyze and interpret information of interest. Some studies suggest that in both vision and in audition, the “unit” on which attention operates is an object: an estimate of the information coming from a particular external source out in the world. In this view, which object ends up in the attentional foreground depends on the interplay of top-down, volitional attention and stimulus-driven, involuntary attention. Here, we test the idea that auditory attention is object based by exploring whether continuity of a non-spatial feature (talker identity, a feature that helps acoustic elements bind into one perceptual object) also influences selective attention performance. In Experiment 1, we show that perceptual continuity of target talker voice helps listeners report a sequence of spoken target digits embedded in competing reversed digits spoken by different talkers. In Experiment 2, we provide evidence that this benefit of voice continuity is obligatory and automatic, as if voice continuity biases listeners by making it easier to focus on a subsequent target digit when it is perceptually linked to what was already in the attentional foreground. Our results support the idea that feature continuity enhances streaming automatically, thereby influencing the dynamic processes that allow listeners to successfully attend to objects through time in the cacophony that assails our ears in many everyday settings. PMID:24633644
A Device for Human Ultrasonic Echolocation.
Sohl-Dickstein, Jascha; Teng, Santani; Gaub, Benjamin M; Rodgers, Chris C; Li, Crystal; DeWeese, Michael R; Harper, Nicol S
2015-06-01
We present a device that combines principles of ultrasonic echolocation and spatial hearing to provide human users with environmental cues that are 1) not otherwise available to the human auditory system, and 2) richer in object and spatial information than the more heavily processed sonar cues of other assistive devices. The device consists of a wearable headset with an ultrasonic emitter and stereo microphones with affixed artificial pinnae. The goal of this study is to describe the device and evaluate the utility of the echoic information it provides. The echoes of ultrasonic pulses were recorded and time stretched to lower their frequencies into the human auditory range, then played back to the user. We tested performance among naive and experienced sighted volunteers using a set of localization experiments, in which the locations of echo-reflective surfaces were judged using these time-stretched echoes. Naive subjects were able to make laterality and distance judgments, suggesting that the echoes provide innately useful information without prior training. Naive subjects were generally unable to make elevation judgments from recorded echoes. However, trained subjects demonstrated an ability to judge elevation as well. This suggests that the device can be used effectively to examine the environment and that the human auditory system can rapidly adapt to these artificial echolocation cues. Interpreting and interacting with the external world constitutes a major challenge for persons who are blind or visually impaired. This device has the potential to aid blind people in interacting with their environment.
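The core signal manipulation, lowering ultrasonic echoes into the audible range by time stretching, can be sketched as follows. This is a minimal, hypothetical illustration: the sampling rate, stretch factor, and synthetic echo are assumed values, not the device's actual parameters. Playing samples recorded at a high rate back at a lower rate divides every frequency, and stretches every delay, by the same factor.

```python
# Hypothetical parameters (not the device's actual values): reinterpret samples
# recorded at a high rate at a lower playback rate, so that a 40 kHz echo
# component is heard near 5 kHz and echo delays are stretched by the same factor.
import numpy as np

def time_stretch_to_audible(echo, fs_record=192_000, stretch=8):
    """Return the samples unchanged plus the reduced playback rate.

    Playing `echo` back at fs_record / stretch divides every frequency and
    multiplies every delay by `stretch`.
    """
    return echo.copy(), fs_record // stretch

# Synthetic example: a 40 kHz burst arriving 3 ms after the emitted pulse
# (roughly a reflector 0.5 m away), recorded at 192 kHz for 10 ms.
fs = 192_000
echo = np.zeros(int(0.01 * fs))
delay = int(0.003 * fs)
burst = np.sin(2 * np.pi * 40_000 * np.arange(200) / fs)
echo[delay:delay + burst.size] = burst

audible, rate = time_stretch_to_audible(echo, fs_record=fs, stretch=8)
peak_bin = int(np.argmax(np.abs(np.fft.rfft(audible))))
print(f"dominant component: {peak_bin * fs / audible.size:.0f} Hz as recorded, "
      f"{peak_bin * rate / audible.size:.0f} Hz when played at {rate} Hz; "
      f"the 3 ms echo delay is heard as {3 * fs // rate} ms")
```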
Moseley, Peter; Fernyhough, Charles; Ellison, Amanda
2013-01-01
Auditory verbal hallucinations (AVHs) are the experience of hearing voices in the absence of any speaker, often associated with a schizophrenia diagnosis. Prominent cognitive models of AVHs suggest they may be the result of inner speech being misattributed to an external or non-self source, due to atypical self- or reality monitoring. These arguments are supported by studies showing that people experiencing AVHs often show an externalising bias during monitoring tasks, and by neuroimaging evidence that implicates superior temporal brain regions, both during AVHs and during tasks that measure verbal self-monitoring performance. Recently, the efficacy of noninvasive neurostimulation techniques as a treatment option for AVHs has been tested. Meta-analyses show a moderate effect size in reduction of AVH frequency, but there has been little attempt to explain the therapeutic effect of neurostimulation in relation to existing cognitive models. This article reviews inner speech models of AVHs and argues that a possible explanation for the reduction in frequency following treatment may be modulation of activity in the brain regions involved in the monitoring of inner speech. PMID:24125858
McCarthy-Jones, Simon; Resnick, Phillip J
2014-01-01
The experience of hearing a voice in the absence of an appropriate external stimulus, formally termed an auditory verbal hallucination (AVH), may be malingered for reasons such as personal financial gain, or, in criminal cases, to attempt a plea of not guilty by reason of insanity. An accurate knowledge of the phenomenology of AVHs is central to assessing the veracity of claims to such experiences. We begin by demonstrating that some contemporary criminal cases still employ inaccurate conceptions of the phenomenology of AVHs to assess defendants' claims. The phenomenology of genuine, malingered, and atypical AVHs is then examined. We argue that, due to the heterogeneity of AVHs, the use of typical properties of AVHs as a yardstick against which to evaluate the veracity of a defendant's claims is likely to be less effective than the accumulation of instances of defendants endorsing statements of atypical features of AVHs. We identify steps towards the development of a formal tool for this purpose, and examine other conceptual issues pertinent to criminal cases arising from the phenomenology of AVHs. Copyright © 2013 Elsevier Ltd. All rights reserved.
Investigating attentional processes in depressive-like domestic horses (Equus caballus).
Rochais, C; Henry, S; Fureix, C; Hausberger, M
2016-03-01
Some captive/domestic animals respond to confinement by becoming inactive and unresponsive to external stimuli. Human inactivity is one of the behavioural markers of clinical depression, a mental disorder diagnosed by the co-occurrence of symptoms including a deficit in selective attention. Some riding horses display 'withdrawn' states of inactivity and low responsiveness to stimuli that resemble the reduced engagement with the environment of some depressed patients. We hypothesized that 'withdrawn' horses experience a depressive-like state and evaluated their level of attention by confronting them with auditory stimuli. Five novel auditory stimuli were broadcast to 27 horses, including 12 'withdrawn' horses, over 5 days. The horses' reactions and durations of attention were recorded. Non-withdrawn horses reacted more, and their attention lasted longer, than withdrawn horses on the first day; their durations of attention then decreased over the following days, whereas those of withdrawn horses remained stable. These results suggest that the withdrawn horses' selective attention is altered, adding to the previously evidenced common features between this equine state and human depression. Copyright © 2016. Published by Elsevier B.V.
INTERPOL survey of the use of speaker identification by law enforcement agencies.
Morrison, Geoffrey Stewart; Sahito, Farhan Hyder; Jardine, Gaëlle; Djokic, Djordje; Clavet, Sophie; Berghs, Sabine; Goemans Dorny, Caroline
2016-06-01
A survey was conducted of the use of speaker identification by law enforcement agencies around the world. A questionnaire was circulated to law enforcement agencies in the 190 member countries of INTERPOL. 91 responses were received from 69 countries. 44 respondents reported that they had speaker identification capabilities in house or via external laboratories. Half of these came from Europe. 28 respondents reported that they had databases of audio recordings of speakers. The clearest pattern in the responses was that of diversity. A variety of different approaches to speaker identification were used: The human-supervised-automatic approach was the most popular in North America, the auditory-acoustic-phonetic approach was the most popular in Europe, and the spectrographic/auditory-spectrographic approach was the most popular in Africa, Asia, the Middle East, and South and Central America. Globally, and in Europe, the most popular framework for reporting conclusions was identification/exclusion/inconclusive. In Europe, the second most popular framework was the use of verbal likelihood ratio scales. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Presutti, L.; Bonali, M.; Marchioni, D.; Pavesi, G.; Feletti, A.; Alicandri-Ciufelli, M.
2017-01-01
SUMMARY The aim of this paper is to describe and evaluate the feasibility of an expanded endoscopic transcanal transpromontorial approach (ExpTTA) to the internal auditory canal and the cerebellopontine angle. To this end, we performed a cadaveric dissection study in September 2015. In total, 2 heads (4 sides) were dissected, focusing on anatomical landmarks and surgical feasibility. Data from the dissections were reviewed and analysed for further consideration. The procedure was feasible in all 4 sides. In all cadavers, it was necessary to extensively drill the temporo-mandibular joint and to calibrate the external ear canal to allow adequate room to manoeuvre the instruments and optics and to comfortably access the cerebellopontine angle. In addition, thorough skeletonisation of the carotid artery and the jugular bulb was necessary for the same purpose. In conclusion, ExpTTA proved successful in accessing the internal auditory canal and cerebellopontine angle region. The potential for extensive and routine application of this type of approach in lateral skull base surgery will depend on the development of technology and surgical refinements, and on the diffusion of skull base endoscopic skills among otolaryngologists and the neurosurgical community. PMID:28516966
Rosa, Francisco; Coutinho, Miguel Bebiano; Ferreira, João Pinto; Sousa, Cecilia Almeida
2016-01-01
The aim of this study was to assess the main ear malformations, hearing loss and auditory rehabilitation in children with Treacher Collins syndrome. We performed a retrospective study of 9 children with Treacher Collins syndrome treated in a central hospital between January 2003 and January 2013. This study showed a high incidence of malformations of the outer and middle ear, such as microtia, atresia or stenosis of the external auditory canal, a hypoplastic middle ear cavity, and a dysmorphic or missing ossicular chain. Most patients had bilateral hearing loss of moderate or severe degree. In the individuals studied, there was greater functional improvement with bone-anchored hearing aids than with conventional bone-conduction hearing aids. Treacher Collins syndrome is characterized by bilateral malformations of the outer and middle ear. Hearing rehabilitation in these children is of utmost importance, and bone-anchored hearing aids are the method of choice. Copyright © 2014 Elsevier España, S.L.U. y Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. All rights reserved.
Le Fort III Distraction With Internal vs External Distractors: A Cephalometric Analysis.
Robertson, Kevin J; Mendez, Bernardino M; Bruce, William J; McDonnell, Brendan D; Chiodo, Michael V; Patel, Parit A
2018-05-01
This study compares the change in midface position following Le Fort III advancement using either rigid external distraction (group 1) or internal distraction (group 2). We hypothesized that, with reference to right-facing cephalometry, internal distraction would result in increased clockwise rotation and inferior displacement of the midface. Le Fort III osteotomies and standardized distraction protocols were performed on 10 cadaveric specimens per group. Right-facing lateral cephalograms were traced and compared across time points to determine change in position at points orbitale, anterior nasal spine (ANS), A-point, and angle ANB. Institutional. Twenty cadaveric head specimens. Standard subcranial Le Fort III osteotomies were performed from a coronal approach and adequately mobilized. The specified distraction mechanism was applied and advanced by 15 mm. Changes of position were calculated at various skeletal landmarks: orbitale, ANS, A-point, and ANB. Group 1 demonstrated relatively uniform x-axis advancement with minimal inferior repositioning at the A-point, ANS, and orbitale. Group 2 demonstrated marked variation in x-axis advancement among the 3 points, along with a significant inferior repositioning and clockwise rotation of the midface ( P < .0001). External distraction resulted in more uniform advancement of the midface, whereas internal distraction resulted in greater clockwise rotation and inferior displacement. External distraction appears to provide increased vector control of the midface, which is important in creating a customized distraction plan based on the patient's individual occlusal and skeletal needs.
Flight Dynamic Simulation of Fighter In the Asymmetric External Store Release Process
NASA Astrophysics Data System (ADS)
Safi’i, Imam; Arifianto, Ony; Nurohman, Chandra
2018-04-01
In fighter design, it is important to evaluate and analyze the flight dynamics of the aircraft early in the development process. One such case is the dynamics of the external store release process. A simulation tool can be used to analyze the fighter/external store system's dynamics in the preliminary design stage. This paper reports simulations of the flight dynamics of the Jet Fighter Experiment (JF-1 E) during an asymmetric Advanced Medium-Range Air-to-Air Missile (AMRAAM) release. The JF-1 E and AIM-120 AMRAAM models are built using the Advanced Aircraft Analysis (AAA) and Missile Datcom software. With these tools, the aerodynamic stability and control derivatives can be obtained and used to model the dynamic characteristics of the fighter and the external store. The dynamic system is modeled in MATLAB/Simulink, in which both the fighter/external store integration and the external store release process are simulated so that the dynamics of the system can be analyzed.
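To make the idea concrete, the sketch below is a deliberately reduced, hypothetical single-axis example; all numbers are assumed and it is not the JF-1 E/AMRAAM model described above. Releasing one wing-mounted store is represented as a step change in roll inertia plus a constant asymmetric rolling moment from the store remaining on the other wing, integrated with simple Euler steps. A full analysis would use the complete six-degree-of-freedom equations and the AAA/Missile Datcom derivatives mentioned in the abstract.

```python
# A minimal single-axis sketch (assumed values, not the JF-1 E model): roll
# response when one wing-mounted store is released, modelled as a step change in
# roll inertia plus a constant asymmetric rolling moment from the remaining
# store, with linear roll damping, integrated by simple Euler steps.
import numpy as np

def simulate_release(t_end=5.0, dt=0.001, t_release=1.0):
    ixx_full, ixx_after = 12_000.0, 11_500.0  # roll inertia with 2 vs 1 store (kg*m^2), assumed
    l_asym = 2_500.0                          # rolling moment from remaining store (N*m), assumed
    l_p = -4_000.0                            # roll damping (N*m per rad/s), assumed
    n = int(t_end / dt)
    t = np.arange(n) * dt
    p = np.zeros(n)      # roll rate (rad/s)
    phi = np.zeros(n)    # bank angle (rad)
    for i in range(1, n):
        released = t[i] >= t_release
        ixx = ixx_after if released else ixx_full
        moment = (l_asym if released else 0.0) + l_p * p[i - 1]
        p[i] = p[i - 1] + dt * moment / ixx
        phi[i] = phi[i - 1] + dt * p[i - 1]
    return t, p, phi

t, p, phi = simulate_release()
print(f"roll rate at t=5 s: {np.degrees(p[-1]):.1f} deg/s, "
      f"bank angle at t=5 s: {np.degrees(phi[-1]):.1f} deg")
```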
ERIC Educational Resources Information Center
Santos-Sacchi, Joseph; Allen, Jont B.; Dorman, Michael; Bergeson-Dana, Tonya R.
2012-01-01
These are the proceedings of 2012 AG Bell Research Symposium, presented July 1, 2012, as part of the AG Bell 2012 Convention. The session was moderated by Tamala S. Bradham, Ph.D., CCC-A. The papers presented at the proceedings are the following: (1) The Queens of Audition; (2) Speech Perception and Hearing Loss; (3) The Restoration of Speech…
HEaDS-UP Phase IV Assessment: Headgear Effects on Auditory Perception
2015-02-01
Average attenuation was measured for the CIPHER and INTERCPT helmets as a function of noise level and mandible/eyewear configuration (Fig. 6), using impulsive-noise criteria consistent with the US Occupational Safety and Health Administration (OSHA 1981) and the National Institute for Occupational Safety and Health. Results across helmet, eyewear, and hearing protection device (HPD) configurations (Fig. 5) show that the CIPHER and INTERCPT compared favorably with the currently fielded advanced combat helmet (ACH).
Juvenile psittacine environmental enrichment.
Simone-Freilicher, Elisabeth; Rupley, Agnes E
2015-05-01
Environmental enrichment is of great import to the emotional, intellectual, and physical development of the juvenile psittacine and their success in the human home environment. Five major types of enrichment include social, occupational, physical, sensory, and nutritional. Occupational enrichment includes exercise and psychological enrichment. Physical enrichment includes the cage and accessories and the external home environment. Sensory enrichment may be visual, auditory, tactile, olfactory, or taste oriented. Nutritional enrichment includes variations in appearance, type, and frequency of diet, and treats, novelty, and foraging. Two phases of the preadult period deserve special enrichment considerations: the development of autonomy and puberty. Copyright © 2015 Elsevier Inc. All rights reserved.
Recent Advances in Pharmacotherapies for the Externalizing Disorders
ERIC Educational Resources Information Center
Brown, Ronald T.
2006-01-01
This article provides a review of various psychotropic agents employed for children and adolescents with externalizing disorders. With the exception of the stimulants, clinical use of psychotropic medications for children with externalizing disorders far exceeds the available empirical literature. Further, there are insufficient data pertaining to…
Response profiles of murine spiral ganglion neurons on multi-electrode arrays
NASA Astrophysics Data System (ADS)
Hahnewald, Stefan; Tscherter, Anne; Marconi, Emanuele; Streit, Jürg; Widmer, Hans Rudolf; Garnham, Carolyn; Benav, Heval; Mueller, Marcus; Löwenheim, Hubert; Roccio, Marta; Senn, Pascal
2016-02-01
Objective. Cochlear implants (CIs) have become the gold standard treatment for deafness. These neuroprosthetic devices feature a linear electrode array, surgically inserted into the cochlea, and function by directly stimulating the auditory neurons located within the spiral ganglion, bypassing lost or non-functioning hair cells. Despite their success, some limitations still remain, including poor frequency resolution and high energy consumption. In both cases, the anatomical gap between the electrode array and the spiral ganglion neurons (SGNs) is believed to be an important limiting factor. The final goal of the study is to characterize the response profiles of SGNs growing in intimate contact with an electrode array, with a view to designing novel CI devices and stimulation protocols featuring a gapless interface with auditory neurons. Approach. We have characterized SGN responses to extracellular stimulation using multi-electrode arrays (MEAs). This setup allows us, in our view, to optimize in vitro many of the limiting interface aspects between CIs and SGNs. Main results. Early postnatal mouse SGN explants were analyzed after 6-18 days in culture. Different stimulation protocols were compared with the aim of lowering the stimulation threshold and the energy needed to elicit a response. In the best case, a four-fold reduction in energy was obtained by lengthening the biphasic stimulus from 40 μs to 160 μs. Similarly, quasi-monophasic pulses were more effective than biphasic pulses, and the insertion of an interphase gap moderately improved efficiency. Finally, stimulation with an external electrode mounted on a micromanipulator showed that the energy needed to elicit a response could be reduced by a factor of five by decreasing its distance from the auditory neurons from 40 μm to 0 μm. Significance. This study is the first to show electrical activity of SGNs on MEAs. Our findings may help to improve stimulation by CIs, reduce their energy consumption, and thereby contribute to the development of fully implantable devices with better auditory resolution in the future.
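The reported energy saving from longer pulses can be illustrated with a back-of-the-envelope calculation. The access resistance and threshold currents below are hypothetical values chosen only to show the arithmetic; the paper does not report them. For a constant-current pulse, charge grows linearly with phase duration but energy grows with the square of the current, so a longer phase that permits a sufficiently lower threshold current reduces total energy.

```python
# Back-of-the-envelope only: hypothetical access resistance and threshold
# currents (not values from the paper) showing why a longer phase can cost less
# energy. Charge per phase is Q = I * t; energy per phase is E = I^2 * R * t, so
# energy falls whenever the threshold current drops faster than 1/sqrt(t) as the
# phase is lengthened.
R_ACCESS = 10_000.0  # ohms, assumed electrode access resistance

def biphasic_energy(i_threshold, t_phase, r=R_ACCESS):
    """Energy (J) of a constant-current biphasic pulse with two phases of width t_phase."""
    return 2 * i_threshold**2 * r * t_phase

e_short = biphasic_energy(200e-6, 40e-6)   # assumed 200 uA threshold at 40 us/phase
e_long = biphasic_energy(50e-6, 160e-6)    # assumed  50 uA threshold at 160 us/phase
print(f"40 us/phase: {e_short * 1e9:.0f} nJ, 160 us/phase: {e_long * 1e9:.0f} nJ, "
      f"reduction: {e_short / e_long:.1f}x")
```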
Pyff - a pythonic framework for feedback applications and stimulus presentation in neuroscience.
Venthur, Bastian; Scholler, Simon; Williamson, John; Dähne, Sven; Treder, Matthias S; Kramarek, Maria T; Müller, Klaus-Robert; Blankertz, Benjamin
2010-01-01
This paper introduces Pyff, the Pythonic feedback framework for feedback applications and stimulus presentation. Pyff provides a platform-independent framework that allows users to develop and run neuroscientific experiments in the programming language Python. Existing solutions have mostly been implemented in C++, which makes for a rather tedious programming task for non-computer-scientists, or in Matlab, which is not well suited for more advanced visual or auditory applications. Pyff was designed to make experimental paradigms (i.e., feedback and stimulus applications) easily programmable. It includes base classes for various types of common feedbacks and stimuli as well as useful libraries for external hardware such as eyetrackers. Pyff is also equipped with a steadily growing set of ready-to-use feedbacks and stimuli. It can be used as a standalone application, for instance providing stimulus presentation in psychophysics experiments, or within a closed loop such as in biofeedback or brain-computer interfacing experiments. Pyff communicates with other systems via a standardized communication protocol and is therefore suitable to be used with any system that may be adapted to send its data in the specified format. Having such a general, open-source framework will help foster a fruitful exchange of experimental paradigms between research groups. In particular, it will decrease the need of reprogramming standard paradigms, ease the reproducibility of published results, and naturally entail some standardization of stimulus presentation.
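The closed-loop idea, a signal-processing or classifier process feeding control values to a separate feedback/stimulus process, can be sketched generically as follows. The port number and JSON message format here are illustrative assumptions and are not Pyff's actual communication protocol; the snippet only shows the kind of loosely coupled, message-based interface that such a framework relies on.

```python
# Generic closed-loop illustration (assumed address and JSON message format;
# this is NOT Pyff's actual protocol): a classifier process streams control
# values to a separate feedback/stimulus process over a local UDP socket.
import json
import socket

FEEDBACK_ADDR = ("127.0.0.1", 12345)  # assumed address and port of the feedback process

def send_control_value(sock, value, variable="cl_output"):
    """Send one control value (e.g. a classifier output) to the feedback process."""
    msg = json.dumps({"type": "control", "variable": variable, "value": value})
    sock.sendto(msg.encode("utf-8"), FEEDBACK_ADDR)

if __name__ == "__main__":
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        for v in (0.1, 0.4, 0.8):  # stand-in classifier outputs
            send_control_value(s, v)
```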
NASA Astrophysics Data System (ADS)
Noble, Jack H.; Warren, Frank M.; Labadie, Robert F.; Dawant, Benoit; Fitzpatrick, J. Michael
2007-03-01
In cochlear implant surgery an electrode array is permanently implanted to stimulate the auditory nerve and allow deaf people to hear. Current surgical techniques require wide excavation of the mastoid region of the temporal bone and one to three hours of operating time to avoid damage to vital structures. Recently a far less invasive approach has been proposed: percutaneous cochlear access, in which a single hole is drilled from the skull surface to the cochlea. The drill path is determined by attaching a fiducial system to the patient's skull and then choosing, on a pre-operative CT, an entry point and a target point. The drill is advanced to the target, the electrodes are placed through the hole, and a stimulator is implanted at the surface of the skull. The major challenge is the determination of a safe and effective drill path, one that with high probability avoids specific vital structures (the facial nerve, the ossicles, and the external ear canal) and arrives at the basal turn of the cochlea. These four features lie within a few millimeters of each other, the drill is one millimeter in diameter, and errors in the determination of the target position are on the order of 0.5 mm root-mean-square. Thus, path selection is both difficult and critical to the success of the surgery. This paper presents a method for finding optimally safe and effective paths while accounting for target positioning error.
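One generic way to account for positioning error in path selection, shown here as an illustrative sketch and not the authors' actual algorithm, is to sample the target error distribution and estimate, for a candidate linear path, the probability that the 1 mm drill clears every labelled vital structure by a chosen margin. All geometry, error magnitudes, and the safety margin below are assumed values; the entry point is treated as exact for simplicity.

```python
# Illustrative sketch (not the authors' algorithm): Monte Carlo estimate of the
# probability that a candidate drill path clears all vital structures, given
# isotropic Gaussian error on the target point. Geometry and errors are assumed.
import numpy as np

rng = np.random.default_rng(0)
DRILL_RADIUS = 0.5       # mm (1 mm drill)
TARGET_RMS_ERROR = 0.5   # mm per axis, assumed isotropic target-positioning error

def point_to_segment_distance(pts, a, b):
    """Distance from each point in pts (N, 3) to the segment a-b."""
    ab = b - a
    t = np.clip((pts - a) @ ab / (ab @ ab), 0.0, 1.0)
    closest = a + t[:, None] * ab
    return np.linalg.norm(pts - closest, axis=1)

def safety_probability(entry, target, structures, margin=0.5, n_samples=2000):
    """Fraction of sampled error realisations in which the drill clears every structure."""
    safe = np.ones(n_samples, dtype=bool)
    jitter = rng.normal(0.0, TARGET_RMS_ERROR, size=(n_samples, 3))
    for pts in structures.values():
        for k in range(n_samples):
            d = point_to_segment_distance(pts, entry, target + jitter[k])
            safe[k] &= d.min() >= DRILL_RADIUS + margin
    return safe.mean()

# Toy geometry (mm): hypothetical point clouds standing in for segmented structures.
structures = {
    "facial_nerve": rng.normal([1.8, 0.8, 10.0], 0.3, size=(50, 3)),
    "ossicles":     rng.normal([-4.0, 3.0, 12.0], 0.3, size=(50, 3)),
}
entry, target = np.array([0.0, 0.0, 0.0]), np.array([0.0, 1.0, 20.0])
print(f"estimated safety probability: {safety_probability(entry, target, structures):.2f}")
```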
Attias, Joseph; Greenstein, Tally; Peled, Miriam; Ulanovski, David; Wohlgelernter, Jay; Raveh, Eyal
The aim of the study was to compare auditory and speech outcomes and electrical parameters, on average 8 years after cochlear implantation, between children with isolated auditory neuropathy (AN) and children with sensorineural hearing loss (SNHL). The study was conducted at a tertiary, university-affiliated pediatric medical center. The cohort included 16 patients with isolated AN, with current ages of 5 to 12.2 years, who had been using a cochlear implant for at least 3.4 years, and 16 control patients with SNHL matched for duration of deafness, age at implantation, type of implant, and unilateral/bilateral implant placement. All participants had had extensive auditory rehabilitation before and after implantation, including the use of conventional hearing aids. Most patients received Cochlear Nucleus devices, and the remainder either Med-El or Advanced Bionics devices. Unaided pure-tone audiograms were evaluated before and after implantation. Implantation outcomes were assessed by auditory and speech recognition tests in quiet and in noise. Data were also collected on the educational setting at 1 year after implantation and at school age. The electrical stimulation measures were evaluated only in the Cochlear Nucleus implant recipients in the two groups. Similar mapping and electrical measurement techniques were used in the two groups. Electrical thresholds, comfortable level, dynamic range, and objective neural response telemetry threshold were measured across the 22-electrode array in each patient. Main outcome measures were between-group differences in the following parameters: (1) auditory and speech tests, (2) residual hearing, (3) electrical stimulation parameters, and (4) correlations of residual hearing at low frequencies with electrical thresholds at the basal, middle, and apical electrodes. The children with isolated AN performed as well as the children with SNHL on auditory and speech recognition tests in both quiet and noise. More children in the AN group than in the SNHL group were attending mainstream educational settings at school age, but the difference was not statistically significant. Significant between-group differences were noted in the electrical measurements: the AN group was characterized by a lower current charge to reach subjective electrical thresholds, a lower comfortable level and dynamic range, and a lower telemetric neural response threshold. Based on pure-tone audiograms, the children with AN also had more residual hearing before and after implantation. Highly positive coefficients were found on correlation analysis between T levels across the basal and midcochlear electrodes and low-frequency acoustic thresholds. Prelingual children with isolated AN who fail to show the expected oral and auditory progress after extensive rehabilitation with conventional hearing aids should be considered for cochlear implantation. Children with isolated AN showed a similar pattern to children with SNHL on auditory performance tests after cochlear implantation. The lower current charge required to evoke subjective and objective electrical thresholds in children with AN compared with children with SNHL may be attributed to a contribution of electrophonic hearing from the remaining neurons and hair cells. In addition, it is also possible that mechanical stimulation of the basilar membrane, as in acoustic stimulation, is added to the electrical stimulation of the cochlear implant.
Neural coding of syntactic structure in learned vocalizations in the songbird.
Fujimoto, Hisataka; Hasegawa, Taku; Watanabe, Dai
2011-07-06
Although vocal signals including human languages are composed of a finite number of acoustic elements, complex and diverse vocal patterns can be created from combinations of these elements, linked together by syntactic rules. To enable such syntactic vocal behaviors, neural systems must extract the sequence patterns from auditory information and establish syntactic rules to generate motor commands for vocal organs. However, the neural basis of syntactic processing of learned vocal signals remains largely unknown. Here we report that the basal ganglia projecting premotor neurons (HVC(X) neurons) in Bengalese finches represent syntactic rules that generate variable song sequences. When vocalizing an alternative transition segment between song elements called syllables, sparse burst spikes of HVC(X) neurons code the identity of a specific syllable type or a specific transition direction among the alternative trajectories. When vocalizing a variable repetition sequence of the same syllable, HVC(X) neurons not only signal the initiation and termination of the repetition sequence but also indicate the progress and state-of-completeness of the repetition. These different types of syntactic information are frequently integrated within the activity of single HVC(X) neurons, suggesting that syntactic attributes of the individual neurons are not programmed as a basic cellular subtype in advance but acquired in the course of vocal learning and maturation. Furthermore, some auditory-vocal mirroring type HVC(X) neurons display transition selectivity in the auditory phase, much as they do in the vocal phase, suggesting that these songbirds may extract syntactic rules from auditory experience and apply them to form their own vocal behaviors.
Neurobiology of rhythmic motor entrainment.
Molinari, Marco; Leggio, Maria G; De Martin, Martina; Cerasa, Antonio; Thaut, Michael
2003-11-01
Timing is extremely important for movement, and understanding the neurobiological basis of rhythm perception and reproduction can be helpful in addressing motor recovery after brain lesions. In this quest, the science of music might provide interesting hints for better understanding the brain timing mechanism. The report focuses on the neurobiological substrate of sensorimotor transformation of time data, highlighting the power of auditory rhythmic stimuli in guiding motor acts. The cerebellar role of timing is addressed in subjects with cerebellar damage; subsequently, cerebellar timing processing is highlighted through an fMRI study of professional musicians. The two approaches converge to demonstrate that different levels of time processing exist, one conscious and one not, and to support the idea that timing is a distributed function. The hypothesis that unconscious motor responses to auditory rhythmic stimuli can be relevant in guiding motor recovery and modulating music perception is advanced and discussed.
Variables affecting learning in a simulation experience: a mixed methods study.
Beischel, Kelly P
2013-02-01
The primary purpose of this study was to test a hypothesized model describing the direct effects of learning variables on anxiety and cognitive learning outcomes in a high-fidelity simulation (HFS) experience. The secondary purpose was to explain and explore student perceptions concerning the qualities and context of HFS affecting anxiety and learning. This study used a mixed methods quantitative-dominant explanatory design with concurrent qualitative data collection to examine variables affecting learning in undergraduate, beginning nursing students (N = 124). Being ready to learn, having a strong auditory-verbal learning style, and being prepared for simulation directly affected anxiety, whereas learning outcomes were directly affected by having strong auditory-verbal and hands-on learning styles. Anxiety did not quantitatively mediate cognitive learning outcomes as theorized, although students qualitatively reported debilitating levels of anxiety. This study advances nursing education science by providing evidence concerning variables affecting learning outcomes in HFS.
Display technology - Human factors concepts
NASA Astrophysics Data System (ADS)
Stokes, Alan; Wickens, Christopher; Kite, Kirsten
1990-03-01
Recent advances in the design of aircraft cockpit displays are reviewed, with an emphasis on their applicability to automobiles. The fundamental principles of display technology are introduced, and individual chapters are devoted to selective visual attention, command and status displays, foveal and peripheral displays, navigational displays, auditory displays, color and pictorial displays, head-up displays, automated systems, and dual-task performance and pilot workload. Diagrams, drawings, and photographs of typical displays are provided.
Auditory-motor learning influences auditory memory for music.
Brown, Rachel M; Palmer, Caroline
2012-05-01
In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.
Updating concepts of first branchial cleft defects: a literature review.
D'Souza, Alwyn R; Uppal, Harpreet S; De, Ranit; Zeitoun, Hisham
2002-02-01
Sinuses and fistulae of first branchial cleft origin have been widely reported in the literature, and their variable relationship to the facial nerve has been described. Most published series, however, are too small to allow a detailed analysis of the relative frequency of the various relationships of these lesions to the facial nerve and thus to determine the risks to the nerve at surgery. The aim of this study was to perform a comprehensive review of the literature in an attempt to identify those patients with a deep tract (lying deep to the main trunk of the facial nerve and/or its branches, and/or between the branches) and to establish the incidence of complications of surgical management. Available English, French and German literature published between 1923 and 2000 was reviewed, and variables including patient age, sex, side and type of anomaly, opening of the lesion, and the relationship of the tract to the facial nerve were analyzed. The complications of surgical excision are also reported. Of the total number of cases with fistulae and sinuses identified (n=158), fistulous tracts were more likely to lie deep to the facial nerve than sinus tracts (P=0.01). Lesions with openings in the external auditory meatus were associated with a tract superficial to the facial nerve (P=0.05). Patients presenting at a younger age were more likely to have a deep tract, with a consequent increased risk of facial nerve damage. Identification of the facial nerve trunk at an early stage of dissection is critical. Extra care and caution should be exercised in younger patients (<6 months), those with fistulous tracts, and patients with a tract opening anywhere other than the external auditory canal.
Translation, Adaptation and Cross Language Validation of Tinnitus Handicap Inventory in Urdu.
Aqeel, Muhammad; Ahmed, Ammar
2017-12-01
Tinnitus is characterized as the perception of auditory sounds in the absence of an external stimulus. Tinnitus can have a considerable impact on a person's quality of life and is considered very difficult to quantify. The aim of this study was to investigate the reliability and validity of an Urdu translation of the Tinnitus Handicap Inventory (THI) in Pakistan. The inventory was designed to assess the presence of various auditory sounds in the absence of an external stimulus. The scale consists of 25 items across three subscales: functional, emotional, and catastrophic. The study comprised two stages, a preliminary study and a main study. The results of the preliminary study revealed that the overall scale had high internal consistency (alpha coefficient of the Urdu version of the THI [THI-U]=0.99; alpha coefficient of the English version of the THI=0.98). The overall scale also showed high test-retest correlation over a fifteen-day interval (0.99). The main study was performed on 110 tinnitus patients. In the main study, the internal consistency of the Urdu version was α=0.93, and the THI-U subscales demonstrated good internal consistency reliability (α=0.81 to 0.86). High to moderate correlations were noted between tinnitus symptom ratings. A confirmatory factor analysis was used to validate the three subscales of the THI-U; high inter-correlations were found between the subscales, and the results indicated that a three-factor model for the THI-U was most tenable. The THI-U may provide important information about specific facets of tinnitus distress, alongside diagnostic interviews, in clinical practice.
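The internal-consistency figures reported above follow from the standard Cronbach's alpha computation; the sketch below uses simulated item scores for illustration only and is not the THI-U dataset.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of the total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

# Simulated 25-item responses for 110 respondents, driven by one latent severity
# factor so that the items are correlated (0/4 scoring is a simplification).
rng = np.random.default_rng(1)
latent = rng.normal(size=(110, 1))
items = (latent + 0.5 * rng.normal(size=(110, 25)) > 0).astype(float) * 4
print(f"alpha = {cronbach_alpha(items):.2f}")
```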
A Bird’s Eye View of Human Language Evolution
Berwick, Robert C.; Beckers, Gabriël J. L.; Okanoya, Kazuo; Bolhuis, Johan J.
2012-01-01
Comparative studies of linguistic faculties in animals pose an evolutionary paradox: language involves certain perceptual and motor abilities, but it is not clear that this serves as more than an input–output channel for the externalization of language proper. Strikingly, the capability for auditory–vocal learning is not shared with our closest relatives, the apes, but is present in such remotely related groups as songbirds and marine mammals. There is increasing evidence for behavioral, neural, and genetic similarities between speech acquisition and birdsong learning. At the same time, researchers have applied formal linguistic analysis to the vocalizations of both primates and songbirds. What have all these studies taught us about the evolution of language? Is the comparative study of an apparently species-specific trait like language feasible? We argue that comparative analysis remains an important method for the evolutionary reconstruction and causal analysis of the mechanisms underlying language. On the one hand, common descent has been important in the evolution of the brain, such that avian and mammalian brains may be largely homologous, particularly in the case of brain regions involved in auditory perception, vocalization, and auditory memory. On the other hand, there has been convergent evolution of the capacity for auditory–vocal learning, and possibly for structuring of external vocalizations, such that apes lack the abilities that are shared between songbirds and humans. However, significant limitations to this comparative analysis remain. While all birdsong may be classified in terms of a particularly simple kind of concatenation system, the regular languages, there is no compelling evidence to date that birdsong matches the characteristic syntactic complexity of human language, arising from the composition of smaller forms like words and phrases into larger ones. PMID:22518103
NASA Astrophysics Data System (ADS)
Andoh, Masayoshi; Nakajima, Chihiro; Wada, Hiroshi
2005-09-01
Although the auditory transduction process is dependent on neural excitation of the auditory nerve in relation to motion of the basilar membrane (BM) in the organ of Corti (OC), specifics of this process are unclear. In this study, therefore, an attempt was made to estimate the phase of the neural excitation relative to the BM motion using a finite-element model of the OC at the basal turn of the gerbil, including the fluid-structure interaction with the lymph fluid. It was found that neural excitation occurs when the BM exhibits a maximum velocity toward the scala vestibuli at 10 Hz and shows a phase delay relative to the BM motion with increasing frequency up to 800 Hz. It then shows a phase advance until the frequency reaches 2 kHz. From 2 kHz, neural excitation again shows a phase delay with increasing frequency. From 800 Hz up to 2 kHz, the phase advances because the dominant force exerted on the hair bundle shifts from a velocity-dependent Couette flow-induced force to a displacement-dependent force induced by the pressure difference. The phase delay that occurs from 2 kHz is caused by the resonance process of the hair bundle of the IHC.
Feasibility of and Design Parameters for a Computer-Based Attitudinal Research Information System
1975-08-01
Indexed descriptors include: auditory displays, auditory evoked potentials, auditory feedback, auditory hallucinations, auditory localization, auditory masking, auditory neurons, audiology, audiometers, audiometry, audiotapes, audiovisual communications media, audiovisual instruction, and auditory cortex. Only a text fragment survives: "...surprising to hear these problems expressed once again and in the same old refrain. The Navy attitude surveyors were frustrated when they..."
[Epidemiology of otomycoses at the University Hospital of Yopougon (Abidjan-Ivory Coast)].
Adoubryn, K D; N'Gattia, V K; Kouadio-Yapo, G C; Nigué, L; Zika, D K; Ouhon, J
2014-06-01
Otomycosis is a fungal infection that damages the external auditory meatus. The disease is worldwide in distribution but is said to be more common in tropical countries. Though otomycosis presumably occurs frequently in Africa, reports on its incidence and etiology from Côte d'Ivoire are rare. The objective of this study was to evaluate the prevalence of the disease and to identify the aetiological agents as well as the risk factors. A cross-sectional study was carried out in the Otorhinolaryngology Department of the University Teaching Hospital of Yopougon from September 2007 to February 2008. For laboratory investigation, specimens were collected by means of a sterile swab. Samples were inoculated on Sabouraud's dextrose agar with and without antibiotics and incubated at 30°C for a period of 1 to 2 weeks. Identification was performed by direct microscopic examination of cotton blue mount preparations, and slide culture examination was used to differentiate morphology. Biotyping was performed using carbohydrate fermentation tests, carbohydrate assimilation tests (galerie Api 20 CAux TM - Sanofi Pasteur), the germ tube test, and detection of chlamydospore formation on corn meal agar. A total of 110 patients (sex ratio=1.2) with suspected otomycosis were investigated. Itching, otalgia, and hypoacusis were the symptoms reported by the patients, and the apparent signs were debris in the ear, scabs, and inflammation of the external auditory meatus. Of these, 88 cases (80%) were confirmed as being of mycotic etiology on the basis of positive culture, with 92 isolates consisting of yeasts (65.2%) and moulds (34.8%). The predominant etiological agents were Aspergillus flavus (28.4%), Candida guilliermondii (19.3%) and Candida parapsilosis (18.2%). The predisposing factors included previous otological pathology (P=0.010) and frequent scratching of the external ear canal and use of ear drops (RR=3.47; 95% CI=1.3-9.27). This study revealed the high prevalence of otomycosis in Abidjan, some predisposing factors, and the aetiological agents. Management of otomycosis must include mycological examination for diagnosis and information aimed at changing the behaviour patterns that lead to infection. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
Development of a Bayesian Estimator for Audio-Visual Integration: A Neurocomputational Study
Ursino, Mauro; Crisafulli, Andrea; di Pellegrino, Giuseppe; Magosso, Elisa; Cuppini, Cristiano
2017-01-01
The brain integrates information from different sensory modalities to generate a coherent and accurate percept of external events. Several experimental studies suggest that this integration follows the principle of Bayesian estimate. However, the neural mechanisms responsible for this behavior, and its development in a multisensory environment, are still insufficiently understood. We recently presented a neural network model of audio-visual integration (Neural Computation, 2017) to investigate how a Bayesian estimator can spontaneously develop from the statistics of external stimuli. Model assumes the presence of two unimodal areas (auditory and visual) topologically organized. Neurons in each area receive an input from the external environment, computed as the inner product of the sensory-specific stimulus and the receptive field synapses, and a cross-modal input from neurons of the other modality. Based on sensory experience, synapses were trained via Hebbian potentiation and a decay term. Aim of this work is to improve the previous model, including a more realistic distribution of visual stimuli: visual stimuli have a higher spatial accuracy at the central azimuthal coordinate and a lower accuracy at the periphery. Moreover, their prior probability is higher at the center, and decreases toward the periphery. Simulations show that, after training, the receptive fields of visual and auditory neurons shrink to reproduce the accuracy of the input (both at the center and at the periphery in the visual case), thus realizing the likelihood estimate of unimodal spatial position. Moreover, the preferred positions of visual neurons contract toward the center, thus encoding the prior probability of the visual input. Finally, a prior probability of the co-occurrence of audio-visual stimuli is encoded in the cross-modal synapses. The model is able to simulate the main properties of a Bayesian estimator and to reproduce behavioral data in all conditions examined. In particular, in unisensory conditions the visual estimates exhibit a bias toward the fovea, which increases with the level of noise. In cross modal conditions, the SD of the estimates decreases when using congruent audio-visual stimuli, and a ventriloquism effect becomes evident in case of spatially disparate stimuli. Moreover, the ventriloquism decreases with the eccentricity. PMID:29046631
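The Bayesian-estimate principle that the network is trained to approximate can be written down directly for two Gaussian cues: the optimal position estimate is an inverse-variance-weighted average of the visual and auditory estimates, optionally combined with a spatial prior. The sketch below is a generic illustration of that rule, not the authors' network model; all numbers are hypothetical.

```python
import numpy as np

def bayes_combine(mu_v, var_v, mu_a, var_a, mu_prior=0.0, var_prior=np.inf):
    """Posterior mean/variance for a position given Gaussian visual and
    auditory likelihoods and an optional Gaussian spatial prior."""
    precisions = np.array([1.0 / var_v, 1.0 / var_a,
                           0.0 if np.isinf(var_prior) else 1.0 / var_prior])
    means = np.array([mu_v, mu_a, mu_prior])
    post_var = 1.0 / precisions.sum()
    post_mean = post_var * (precisions * means).sum()
    return post_mean, post_var

# Spatially disparate stimuli: vision at 0 deg (precise), audition at 10 deg (noisy).
mu, var = bayes_combine(mu_v=0.0, var_v=1.0, mu_a=10.0, var_a=16.0)
print(f"combined estimate {mu:.1f} deg (pulled toward vision, i.e. a ventriloquism-like bias)")
```

Making the visual variance grow with eccentricity, as in the abstract, would weaken the visual weight at the periphery and hence shrink the ventriloquism effect there.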
Congenital Auricular Malformations: Description of Anomalies and Syndromes.
Bartel-Friedrich, Sylva
2015-12-01
Half of the malformations in the ear, nose, and throat region affect the ear. Malformations of the external ear (pinna or auricle with external auditory canal [EAC]) are collectively termed microtia. Microtia is a congenital anomaly that ranges in severity from mild structural abnormalities to complete absence of the external ear (anotia). Microtia occurs more frequently in males (∼2 or 3:1), is predominantly unilateral (∼70-90%), and more often involves the right ear (∼60%). The reported prevalence varies geographically from 0.83 to 17.4 per 10,000 births. Microtia may be genetic (with family history, spontaneous mutations) or acquired. Malformations of the external ear can also involve the middle ear and/or inner ear. Microtia may be an isolated birth defect, but associated anomalies or syndromes are described in 20 to 60% of cases, depending on study design. These generally fit within the oculo-auriculo-vertebral spectrum; defects are located most frequently in the facial skeleton, facial soft tissues, heart, and vertebral column, or comprise a syndrome (e.g., Treacher Collins syndrome). Diagnostic investigation of microtia includes clinical examination, audiologic testing, genetic analysis and, especially in higher grade malformations with EAC deformities, computed tomography (CT) or cone-beam CT for the planning of surgery and rehabilitation procedures, including implantation of hearing aids. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
Strain, George M; Fernandes, Asia J
2015-06-01
Otitis externa is frequently accompanied by otitis media, yet it can be difficult to evaluate the tympanum, middle ear and auditory tube without the use of advanced radiographic imaging. The objective was to develop techniques for tympanometry testing in conscious dogs and to present normative data for clinical use of this equipment to enable assessment of the tympanum, middle ear and auditory tube. Sixteen hounds (14 female) from a school teaching colony. Dogs were gently restrained in a standing position. After cleaning of the ear canal, a tympanometer probe tip extension was placed in the vertical canal and automated testing performed using a handheld device. Both ears were tested in all dogs. Acceptable recordings were obtained from both ears of 13 dogs, from one ear in each of two dogs and from neither ear of one dog, resulting in data from 28 of 32 (88%) ears. Otoscopic examination confirmed the absence of inflammation or any other obvious explanation for the noncompliant dogs. No significant differences were seen between ears for any measure. Normative data are reported for peak compliance, peak compliance pressure, gradient and ear canal volume. Tympanograms can be recorded in conscious dogs to assist in the evaluation of the middle ear structures. © 2015 ESVD and ACVD.
Constantinidou, Fofi; Zaganas, Ioannis; Papastefanakis, Emmanouil; Kasselimis, Dimitrios; Nidos, Andreas; Simos, Panagiotis G
2014-09-01
Age-related memory changes are highly varied and heterogeneous. The study examined the rate of decline in verbal episodic memory as a function of education level, auditory attention span and verbal working memory capacity, and diagnosis of amnestic mild cognitive impairment (a-MCI). Data were available on a community sample of 653 adults aged 17-86 years and 70 patients with a-MCI recruited from eight broad geographic areas in Greece and Cyprus. Measures of auditory attention span and working memory capacity (digits forward and backward) and verbal episodic memory (Auditory Verbal Learning Test [AVLT]) were used. Moderated mediation regressions on data from the community sample did not reveal significant effects of education level on the rate of age-related decline in AVLT indices. The presence of a-MCI was a significant moderator of the direct effect of Age on both immediate and delayed episodic memory indices. The rate of age-related decline in verbal episodic memory is normally mediated by working memory capacity. Moreover, in persons who display poor episodic memory capacity (a-MCI group), age-related memory decline is expected to advance more rapidly for those who also display relatively poor verbal working memory capacity.
NASA Astrophysics Data System (ADS)
Neuhoff, John G.
2003-04-01
Increasing acoustic intensity is a primary cue to looming auditory motion. Perceptual overestimation of increasing intensity could provide an evolutionary selective advantage by specifying that an approaching sound source is closer than actual, thus affording advanced warning and more time than expected to prepare for the arrival of the source. Here, multiple lines of converging evidence for this evolutionary hypothesis are presented. First, it is shown that intensity change specifying accelerating source approach changes in loudness more than equivalent intensity change specifying decelerating source approach. Second, consistent with evolutionary hunter-gatherer theories of sex-specific spatial abilities, it is shown that females have a significantly larger bias for rising intensity than males. Third, using functional magnetic resonance imaging in conjunction with approaching and receding auditory motion, it is shown that approaching sources preferentially activate a specific neural network responsible for attention allocation, motor planning, and translating perception into action. Finally, it is shown that rhesus monkeys also exhibit a rising intensity bias by orienting longer to looming tones than to receding tones. Together these results illustrate an adaptive perceptual bias that has evolved because it provides a selective advantage in processing looming acoustic sources. [Work supported by NSF and CDC.]
Prior Knowledge Guides Speech Segregation in Human Auditory Cortex.
Wang, Yuanye; Zhang, Jianfeng; Zou, Jiajie; Luo, Huan; Ding, Nai
2018-05-18
Segregating concurrent sound streams is a computationally challenging task that requires integrating bottom-up acoustic cues (e.g. pitch) and top-down prior knowledge about sound streams. In a multi-talker environment, the brain can segregate different speakers in about 100 ms in auditory cortex. Here, we used magnetoencephalographic (MEG) recordings to investigate the temporal and spatial signature of how the brain utilizes prior knowledge to segregate 2 speech streams from the same speaker, which can hardly be separated based on bottom-up acoustic cues. In a primed condition, the participants know the target speech stream in advance while in an unprimed condition no such prior knowledge is available. Neural encoding of each speech stream is characterized by the MEG responses tracking the speech envelope. We demonstrate that an effect in bilateral superior temporal gyrus and superior temporal sulcus is much stronger in the primed condition than in the unprimed condition. Priming effects are observed at about 100 ms latency and last more than 600 ms. Interestingly, prior knowledge about the target stream facilitates speech segregation by mainly suppressing the neural tracking of the non-target speech stream. In sum, prior knowledge leads to reliable speech segregation in auditory cortex, even in the absence of reliable bottom-up speech segregation cue.
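Neural tracking of a speech envelope, as used in this study, is commonly quantified by cross-correlating the recorded neural signal with the stimulus envelope (or by fitting a temporal response function). The sketch below shows the simpler cross-correlation variant on simulated signals; it is not the authors' MEG pipeline, and the sampling rate and delay are assumed values.

```python
import numpy as np

fs = 200                                          # Hz, assumed sampling rate
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(2)
envelope = np.abs(np.sin(2 * np.pi * 0.7 * t)) + 0.1 * rng.normal(size=t.size)

# Simulated neural response: the envelope delayed by ~100 ms, plus noise
delay = int(0.1 * fs)
neural = np.roll(envelope, delay) + 0.5 * rng.normal(size=t.size)

def lagged_corr(x, y, max_lag):
    """Pearson correlation of y against x for response lags 0..max_lag samples."""
    return [np.corrcoef(x[:len(x) - lag], y[lag:])[0, 1] for lag in range(max_lag + 1)]

corrs = lagged_corr(envelope, neural, max_lag=int(0.3 * fs))
best = int(np.argmax(corrs))
print(f"peak envelope tracking at ~{1000 * best / fs:.0f} ms lag, r = {corrs[best]:.2f}")
```

Comparing such tracking strength between target and non-target streams, and between primed and unprimed conditions, is the kind of contrast the abstract describes.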
Age-Related Changes in Binaural Interaction at Brainstem Level.
Van Yper, Lindsey N; Vermeire, Katrien; De Vel, Eddy F J; Beynon, Andy J; Dhooge, Ingeborg J M
2016-01-01
Age-related hearing loss hampers the ability to understand speech in adverse listening conditions. This is attributed to a complex interaction of changes in the peripheral and central auditory system. One aspect that may deteriorate across the lifespan is binaural interaction. The present study investigates binaural interaction at the level of the auditory brainstem. It is hypothesized that brainstem binaural interaction deteriorates with advancing age. Forty-two subjects of various age participated in the study. Auditory brainstem responses (ABRs) were recorded using clicks and 500 Hz tone-bursts. ABRs were elicited by monaural right, monaural left, and binaural stimulation. Binaural interaction was investigated in two ways. First, grand averages of the binaural interaction component were computed for each age group. Second, wave V characteristics of the binaural ABR were compared with those of the summed left and right ABRs. Binaural interaction in the click ABR was demonstrated by shorter latencies and smaller amplitudes in the binaural compared with the summed monaural responses. For 500 Hz tone-burst ABR, no latency differences were found. However, amplitudes were significantly smaller in the binaural than summed monaural condition. An age-effect was found for 500 Hz tone-burst, but not for click ABR. Brainstem binaural interaction seems to decline with age. Interestingly, these changes seem to be stimulus-dependent.
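The binaural interaction component used here is conventionally defined as the difference between the binaurally evoked ABR and the sum of the two monaural ABRs. A minimal sample-wise computation over averaged waveforms might look like the following; the waveform and amplitude values are hypothetical.

```python
import numpy as np

def binaural_interaction_component(abr_binaural, abr_left, abr_right):
    """BIC = binaural ABR - (left ABR + right ABR), computed sample-wise.

    Inputs are 1-D averaged waveforms on the same time base (microvolts).
    A non-zero BIC means the binaural response is not simply the sum of the
    monaural responses, i.e. binaural interaction is present.
    """
    abr_binaural, abr_left, abr_right = map(np.asarray, (abr_binaural, abr_left, abr_right))
    return abr_binaural - (abr_left + abr_right)

# Hypothetical wave-V-like peaks: the binaural response is smaller than the
# monaural sum, as reported above for the 500 Hz tone-burst condition.
t = np.linspace(0, 10e-3, 200)                  # 10 ms epoch
wave = np.exp(-((t - 6e-3) / 0.5e-3) ** 2)      # crude wave-V-shaped peak
bic = binaural_interaction_component(0.8 * 2 * wave, wave, wave)
print(f"BIC trough amplitude: {bic.min():.2f} (arbitrary units)")
```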
Stropahl, Maren; Chen, Ling-Chia; Debener, Stefan
2017-01-01
With the advances of cochlear implant (CI) technology, many deaf individuals can partially regain their hearing ability. However, there is a large variation in the level of recovery. Cortical changes induced by hearing deprivation and restoration with CIs have been thought to contribute to this variation. The current review aims to identify these cortical changes in postlingually deaf CI users and discusses their maladaptive or adaptive relationship to the CI outcome. Overall, intra-modal and cross-modal reorganization patterns have been identified in postlingually deaf CI users in visual and in auditory cortex. Even though cross-modal activation in auditory cortex is considered as maladaptive for speech recovery in CI users, a similar activation relates positively to lip reading skills. Furthermore, cross-modal activation of the visual cortex seems to be adaptive for speech recognition. Currently available evidence points to an involvement of further brain areas and suggests that a focus on the reversal of visual take-over of the auditory cortex may be too limited. Future investigations should consider expanded cortical as well as multi-sensory processing and capture different hierarchical processing steps. Furthermore, prospective longitudinal designs are needed to track the dynamics of cortical plasticity that takes place before and after implantation. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Hanson, Jessica L.; Hurley, Laura M.
2014-01-01
In the face of changing behavioral situations, plasticity of sensory systems can be a valuable mechanism to facilitate appropriate behavioral responses. In the auditory system, the neurotransmitter serotonin is an important messenger for context-dependent regulation because it is sensitive to both external events and internal state, and it modulates neural activity. In male mice, serotonin increases in the auditory midbrain region, the inferior colliculus (IC), in response to changes in behavioral context such as restriction stress and social contact. Female mice have not been measured in similar contexts, although the serotonergic system is sexually dimorphic in many ways. In the present study, we investigated the effects of sex, experience and estrous state on the fluctuation of serotonin in the IC across contexts, as well as potential relationships between behavior and serotonin. Contrary to our expectation, there were no sex differences in increases of serotonin in response to a restriction stimulus. Both sexes had larger increases in second exposures, suggesting experience plays a role in serotonergic release in the IC. In females, serotonin increased during both restriction and interactions with males; however, the increase was more rapid during restriction. There was no effect of female estrous phase on the serotonergic change for either context, but serotonin was related to behavioral activity in females interacting with males. These results show that changes in behavioral context induce increases in serotonin in the IC by a mechanism that appears to be uninfluenced by sex or estrous state, but may depend on experience and behavioral activity. PMID:24198252
Effective connectivity associated with auditory error detection in musicians with absolute pitch
Parkinson, Amy L.; Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Larson, Charles R.; Robin, Donald A.
2014-01-01
It is advantageous to study a wide range of vocal abilities in order to fully understand how vocal control measures vary across the full spectrum. Individuals with absolute pitch (AP) are able to assign a verbal label to musical notes and have enhanced abilities in pitch identification without reliance on an external referent. In this study we used dynamic causal modeling (DCM) to model effective connectivity of ERP responses to pitch perturbation in voice auditory feedback in musicians with relative pitch (RP), AP, and non-musician controls. We identified a network comprising left and right hemisphere superior temporal gyrus (STG), primary motor cortex (M1), and premotor cortex (PM). We specified nine models and compared two main factors examining various combinations of STG involvement in the feedback pitch error detection/correction process. Our results suggest that modulation of left to right STG connections is important in the identification of self-voice error and in sensory-motor integration in AP musicians. We also identify reduced connectivity of left hemisphere PM to STG connections in the AP and RP groups during the error detection and correction process relative to non-musicians. We suggest that this suppression may allow for enhanced connectivity relating to pitch identification in the right hemisphere in those with more precise pitch-matching abilities. Musicians with enhanced pitch identification abilities likely have an improved auditory error detection and correction system involving connectivity of STG regions. Our findings here also suggest that individuals with AP are more adept at using feedback related to pitch from the right hemisphere. PMID:24634644
Meyer, Georg F.; Shao, Fei; White, Mark D.; Hopkins, Carl; Robotham, Antony J.
2013-01-01
Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR. PMID:23840760
Williamson, Ross S.; Hancock, Kenneth E.; Shinn-Cunningham, Barbara G.; Polley, Daniel B.
2015-01-01
Active search is a ubiquitous goal-driven behavior wherein organisms purposefully investigate the sensory environment to locate a target object. During active search, brain circuits analyze a stream of sensory information from the external environment, adjusting for internal signals related to self-generated movement or “top-down” weighting of anticipated target and distractor properties. Sensory responses in the cortex can be modulated by internal state [1–9], though the extent and form of modulation arising in the cortex de novo versus an inheritance from subcortical stations is not clear [4, 8–12]. We addressed this question by simultaneously recording from auditory and visual regions of the thalamus (MG and LG, respectively) while mice used dynamic auditory or visual feedback to search for a hidden target within an annular track. Locomotion was associated with strongly suppressed responses and reduced decoding accuracy in MG but a subtle increase in LG spiking. Because stimuli in one modality provided critical information about target location while the other served as a distractor, we could also estimate the importance of task relevance in both thalamic subdivisions. In contrast to the effects of locomotion, we found that LG responses were reduced overall yet decoded stimuli more accurately when vision was behaviorally relevant, whereas task relevance had little effect on MG responses. This double dissociation between the influences of task relevance and movement in MG and LG highlights a role for extrasensory modulation in the thalamus but also suggests key differences in the organization of modulatory circuitry between the auditory and visual pathways. PMID:26119749
Corti's organ physiology-based cochlear model: a microelectronic prosthetic implant
NASA Astrophysics Data System (ADS)
Rios, Francisco; Fernandez-Ramos, Raquel; Romero-Sanchez, Jorge; Martin, Jose Francisco
2003-04-01
Corti"s Organ is an Electro-Mechanical transducer that allows the energy coupling between acoustical stimuli and auditory nerve. Although the structure and funtionality of this organ are complex, state of the art models have been currently developed and tested. Cochlea model presented in this paper is based on the theories of Bekesy and others and concerns on the behaviour of auditory system on frequency-place domain and mechanisms of lateral inhibition. At the same time, present state of technology will permit us developing a microsystem that reproduce this phenomena applied to hearing aid prosthesis. Corti"s Organ is composed of more than 20.000 cilia excited by mean of travelling waves. These waves produce relative pressures distributed along the cochlea, exciting an specific number of cilia in a local way. Nonlinear mechanisms of local adaptation to the intensity (external cilia cells) and lateral inhibition (internal cilia cells) allow the selection of very few elements excited. These transmit a very precise intensity and frequency information. These signals are the only ones coupled to the auditory nerve. Distribution of pressure waves matches a quasilogaritmic law due to Cochlea morphology. Microsystem presented in this paper takes Bark"s law as an approximation to this behaviour consisting on grouped arbitrary elements composed of a set of selective coupled exciters (bank of filters according to Patterson"s model).These sets apply the intensity adaptation principles and lateral inhibition. Elements excited during the process generate a bioelectric signal in the same way than cilia cell. A microelectronic solution is presented for the development of an implantable prosthesis device.
Action planning and predictive coding when speaking
Wang, Jun; Mathalon, Daniel H.; Roach, Brian J.; Reilly, James; Keedy, Sarah; Sweeney, John A.; Ford, Judith M.
2014-01-01
Across the animal kingdom, sensations resulting from an animal's own actions are processed differently from sensations resulting from external sources, with self-generated sensations being suppressed. A forward model has been proposed to explain this process across sensorimotor domains. During vocalization, reduced processing of one's own speech is believed to result from a comparison of speech sounds to corollary discharges of intended speech production generated from efference copies of commands to speak. Until now, anatomical and functional evidence validating this model in humans has been indirect. Using EEG with anatomical MRI to facilitate source localization, we demonstrate that inferior frontal gyrus activity during the 300ms before speaking was associated with suppressed processing of speech sounds in auditory cortex around 100ms after speech onset (N1). These findings indicate that an efference copy from speech areas in prefrontal cortex is transmitted to auditory cortex, where it is used to suppress processing of anticipated speech sounds. About 100ms after N1, a subsequent auditory cortical component (P2) was not suppressed during talking. The combined N1 and P2 effects suggest that although sensory processing is suppressed as reflected in N1, perceptual gaps are filled as reflected in the lack of P2 suppression, explaining the discrepancy between sensory suppression and preserved sensory experiences. These findings, coupled with the coherence between relevant brain regions before and during speech, provide new mechanistic understanding of the complex interactions between action planning and sensory processing that provide for differentiated tagging and monitoring of one's own speech, processes disrupted in neuropsychiatric disorders. PMID:24423729
Cognitive, sensory, and psychosocial characteristics in patients with Bardet-Biedl syndrome.
Brinckman, Danielle D; Keppler-Noreuil, Kim M; Blumhorst, Catherine; Biesecker, Leslie G; Sapp, Julie C; Johnston, Jennifer J; Wiggs, Edythe A
2013-12-01
Forty-two patients with a clinical diagnosis of Bardet-Biedl syndrome ages 2-61 years were given a neuropsychological test battery to evaluate cognitive, sensory, and behavioral functioning. These tests included the Wechsler scales of intelligence, Rey Auditory Verbal Learning Test, Boston Naming Test, D-KEFS Verbal Fluency Test, D-KEFS Color-Word Interference Test, D-KEFS Sorting Test, Wide Range Achievement Test: Math and Reading Subtests, Purdue Pegboard, The University of Pennsylvania Smell Identification Test, Social Communication Questionnaire, Social Responsiveness Scale, and Behavior Assessment System for Children, Second Edition, Parent Rating Scale. On the age appropriate Wechsler scale, the mean Verbal Comprehension was 81 (n = 36), Working Memory was 81 (n = 36), Perceptual Reasoning was 78 (n = 24) and Full Scale IQ was 75 (n = 26). Memory for a word list (Rey Auditory Verbal Learning Test) was in the average range with a mean of 89 (n = 19). Fine motor speed was slow on the Purdue with mean scores 3-4 standard deviations below norms. All subjects were microsmic on the University of Pennsylvania Smell Identification Test. Of these 42 patients, only 6 were able to complete all auditory and visual tests; 52% were unable to complete the visual tests due to impaired vision. A wide range of behavioral issues were endorsed on questionnaires given to parents. Most had social skill deficits but no pattern of either externalizing or internalizing problems. We identify a characteristic neuro-behavioral profile in our cohort comprised of reduced IQ, impaired fine-motor function, and decreased olfaction. © 2013 Wiley Periodicals, Inc.
Populin, Luis C; Tollin, Daniel J; Yin, Tom C T
2004-10-01
We examined the motor error hypothesis of visual and auditory interaction in the superior colliculus (SC), first tested by Jay and Sparks in the monkey. We trained cats to direct their eyes to the location of acoustic sources and studied the effects of eye position on both the ability of cats to localize sounds and the auditory responses of SC neurons with the head restrained. Sound localization accuracy was generally not affected by initial eye position, i.e., accuracy was not proportionally affected by the deviation of the eyes from the primary position at the time of stimulus presentation, showing that eye position is taken into account when orienting to acoustic targets. The responses of most single SC neurons to acoustic stimuli in the intact cat were modulated by eye position in the direction consistent with the predictions of the "motor error" hypothesis, but the shift accounted for only two-thirds of the initial deviation of the eyes. However, when the average horizontal sound localization error, which was approximately 35% of the target amplitude, was taken into account, the magnitude of the horizontal shifts in the SC auditory receptive fields matched the observed behavior. The modulation by eye position was not due to concomitant movements of the external ears, as confirmed by recordings carried out after immobilizing the pinnae of one cat. However, the pattern of modulation after pinnae immobilization was inconsistent with the observations in the intact cat, suggesting that, in the intact animal, information about the position of the pinnae may be taken into account.
The Medial Paralemniscal Nucleus and Its Afferent Neuronal Connections in Rat
VARGA, TAMÁS; PALKOVITS, MIKLÓS; USDIN, TED BJÖRN; DOBOLYI, ARPÁD
2009-01-01
Previously, we described a cell group expressing tuberoinfundibular peptide of 39 residues (TIP39) in the lateral pontomesencephalic tegmentum, and referred to it as the medial paralemniscal nucleus (MPL). To identify this nucleus further in rat, we have now characterized the MPL cytoarchitectonically on coronal, sagittal, and horizontal serial sections. Neurons in the MPL have a columnar arrangement distinct from adjacent areas. The MPL is bordered by the intermediate nucleus of the lateral lemniscus nucleus laterally, the oral pontine reticular formation medially, and the rubrospinal tract ventrally, whereas the A7 noradrenergic cell group is located immediately mediocaudal to the MPL. TIP39-immunoreactive neurons are distributed throughout the cytoarchitectonically defined MPL and constitute 75% of its neurons as assessed by double labeling of TIP39 with a fluorescent Nissl dye or NeuN. Furthermore, we investigated the neuronal inputs to the MPL by using the retrograde tracer cholera toxin B subunit. The MPL has afferent neuronal connections distinct from adjacent brain regions including major inputs from the auditory cortex, medial part of the medial geniculate body, superior colliculus, external and dorsal cortices of the inferior colliculus, periolivary area, lateral preoptic area, hypothalamic ventromedial nucleus, lateral and dorsal hypothalamic areas, subparafascicular and posterior intralaminar thalamic nuclei, periaqueductal gray, and cuneiform nucleus. In addition, injection of the anterograde tracer biotinylated dextran amine into the auditory cortex and the hypothalamic ventromedial nucleus confirmed projections from these areas to the distinct MPL. The afferent neuronal connections of the MPL suggest its involvement in auditory and reproductive functions. PMID:18770870
Advanced traveler information service (ATIS) : "Who are ATIS customers?
DOT National Transportation Integrated Search
2000-01-01
This paper offers answers to "Who are ATIS Customers?" using different, complementary research and evaluation approaches. The first section, entitled External Factors Influencing Customer Demand, offers an empirical assessment of external conditions ...
ERIC Educational Resources Information Center
Richards, Michael D.; Sherratt, Gerald R.
The historical role of institutional advancement and the specific activities and trends currently affecting it are reviewed, and four strategies for advancement programs are suggested. Institutional advancement includes alumni relations, fund-raising, public relations, internal and external communications, and government relations, and its…
ERIC Educational Resources Information Center
Mokhemar, Mary Ann
This kit for assessing central auditory processing disorders (CAPD) in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with auditory skill areas most commonly addressed in…
H2 optimal control techniques for resistive wall mode feedback in tokamaks
NASA Astrophysics Data System (ADS)
Clement, Mitchell; Hanson, Jeremy; Bialek, Jim; Navratil, Gerald
2018-04-01
DIII-D experiments show that a new, advanced algorithm enables resistive wall mode (RWM) stability control in high performance discharges using external coils. DIII-D can excite strong, locked or nearly locked external kink modes whose rotation frequencies and growth rates are on the order of the magnetic flux diffusion time of the vacuum vessel wall. Experiments have shown that modern control techniques like linear quadratic Gaussian (LQG) control require less current than the proportional controller in use at DIII-D when using control coils external to DIII-D’s vacuum vessel. Experiments were conducted to develop control of a rotating n = 1 perturbation using an LQG controller derived from VALEN and external coils. Feedback using this LQG algorithm outperformed a proportional gain only controller in these perturbation experiments over a range of frequencies. Results from high βN experiments also show that advanced feedback techniques using external control coils may be as effective as internal control coil feedback using classical control techniques.
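As a generic illustration of the LQG structure referred to here (a steady-state Kalman filter feeding an LQR state-feedback gain), the sketch below solves the two discrete Riccati equations for a toy two-state plant. The plant matrices are hypothetical stand-ins, not the VALEN-derived RWM model.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy discrete-time plant x[k+1] = A x[k] + B u[k] + w,  y[k] = C x[k] + v
A = np.array([[1.01, 0.1], [0.0, 0.98]])      # mildly unstable mode (assumed values)
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), np.array([[0.1]])           # LQR state / control weights
W, V = 0.01 * np.eye(2), np.array([[0.05]])   # process / measurement noise covariances

# LQR gain: u = -K x_hat
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Steady-state Kalman gain for the state estimator (dual Riccati equation)
S = solve_discrete_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(C @ S @ C.T + V)

print("LQR gain K:", K, "\nKalman gain L:", L.ravel())
```

In closed loop, the estimator propagates x_hat from measurements using L, and the controller applies u = -K x_hat; penalizing control effort through R is what keeps the demanded coil current modest, the practical advantage reported above.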
First Branchial Cleft Malformation with Duplication of External Auditory Canal
Parida, Pradipta Kumar; Raja, Kalairasi; Surianarayanan, Gopalakrishnan; Ganeshan, Sivaraman
2013-01-01
First branchial cleft anomalies are uncommon, accounting for less than 10% of all branchial abnormalities. Their rare occurrence and varied presentation have frequently led to misdiagnosis and to inadequate and inappropriate treatment, with repeated recurrences and secondary infection. In this paper, the case of an 11-year-old girl with a type 2 first branchial cleft defect is described. She first presented with a nonhealing ulcer of the upper neck dating from childhood. The diagnosis had previously been missed and the lesion treated as a tubercular ulcer. We confirmed the correct diagnosis by history and computerized tomography fistulogram. The lesion was completely excised with no further recurrence. PMID:24312740
Oscillatory flow in the cochlea visualized by a magnetic resonance imaging technique.
Denk, W; Keolian, R M; Ogawa, S; Jelinski, L W
1993-02-15
We report a magnetic resonance imaging technique that directly measures motion of cochlear fluids. It uses oscillating magnetic field gradients phase-locked to an external stimulus to selectively visualize and quantify oscillatory fluid motion. It is not invasive, and it does not require optical line-of-sight access to the inner ear. It permits the detection of displacements far smaller than the spatial resolution. The method is demonstrated on a phantom and on living rats. It is projected to have applications for auditory research, for the visualization of vocal tract dynamics during speech and singing, and for determination of the spatial distribution of mechanical relaxations in materials.
Debener, Stefan; Emkes, Reiner; Volkening, Nils; Fudickar, Sebastian; Bleichner, Martin G.
2017-01-01
Objective: Our aim was the development and validation of a modular signal processing and classification application enabling online electroencephalography (EEG) signal processing on off-the-shelf mobile Android devices. The software application SCALA (Signal ProCessing and CLassification on Android) supports a standardized communication interface to exchange information with external software and hardware. Approach: In order to implement a closed-loop brain-computer interface (BCI) on the smartphone, we used a multiapp framework, which integrates applications for stimulus presentation, data acquisition, data processing, classification, and delivery of feedback to the user. Main Results: We have implemented the open source signal processing application SCALA. We present timing test results supporting sufficient temporal precision of audio events. We also validate SCALA with a well-established auditory selective attention paradigm and report above chance level classification results for all participants. Regarding the 24-channel EEG signal quality, evaluation results confirm typical sound onset auditory evoked potentials as well as cognitive event-related potentials that differentiate between correct and incorrect task performance feedback. Significance: We present a fully smartphone-operated, modular closed-loop BCI system that can be combined with different EEG amplifiers and can easily implement other paradigms. PMID:29349070
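Single-trial classification in an auditory selective-attention paradigm of the kind validated here typically reduces to fitting a regularized linear classifier to epoch features. The sketch below uses scikit-learn LDA on simulated 24-channel epochs; it is not the SCALA implementation, and the epoch dimensions and "attention effect" are assumed for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_channels, n_samples = 200, 24, 100     # simulated 24-channel EEG epochs
labels = rng.integers(0, 2, n_trials)              # attended vs. unattended stream

# Simulated data: "attended" trials carry a small added deflection on a few channels
epochs = rng.normal(size=(n_trials, n_channels, n_samples))
epochs[labels == 1, :4, 40:60] += 0.3              # hypothetical attention effect

features = epochs.reshape(n_trials, -1)            # naive flattening as the feature vector
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, features, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} (chance = 0.50)")
```

Shrinkage regularization is the usual choice when, as here, the flattened feature dimension far exceeds the number of trials.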
Walsh, Kyle P.; Pasanen, Edward G.; McFadden, Dennis
2014-01-01
Human subjects performed in several behavioral conditions requiring, or not requiring, selective attention to visual stimuli. Specifically, the attentional task was to recognize strings of digits that had been presented visually. A nonlinear version of the stimulus-frequency otoacoustic emission (SFOAE), called the nSFOAE, was collected during the visual presentation of the digits. The segment of the physiological response discussed here occurred during brief silent periods immediately following the SFOAE-evoking stimuli. For all subjects tested, the physiological-noise magnitudes were substantially weaker (less noisy) during the tasks requiring the most visual attention. Effect sizes for the differences were >2.0. Our interpretation is that cortico-olivo influences adjusted the magnitude of efferent activation during the SFOAE-evoking stimulation depending upon the attention task in effect, and then that magnitude of efferent activation persisted throughout the silent period where it also modulated the physiological noise present. Because the results were highly similar to those obtained when the behavioral conditions involved auditory attention, similar mechanisms appear to operate both across modalities and within modalities. Supplementary measurements revealed that the efferent activation was spectrally global, as it was for auditory attention. PMID:24732070
Dynamic plasticity in coupled avian midbrain maps
NASA Astrophysics Data System (ADS)
Atwal, Gurinder Singh
2004-12-01
Internal mapping of the external environment is carried out using the receptive fields of topographic neurons in the brain, and in a normal barn owl the aural and visual subcortical maps are aligned from early experiences. However, instantaneous misalignment of the aural and visual stimuli has been observed to result in adaptive behavior, manifested by functional and anatomical changes of the auditory processing system. Using methods of information theory and statistical mechanics a model of the adaptive dynamics of the aural receptive field is presented and analyzed. The dynamics is determined by maximizing the mutual information between the neural output and the weighted sensory neural inputs, admixed with noise, subject to biophysical constraints. The reduced costs of neural rewiring, as in the case of young barn owls, reveal two qualitatively different types of receptive field adaptation depending on the magnitude of the audiovisual misalignment. By letting the misalignment increase with time, it is shown that the ability to adapt can be increased even when neural rewiring costs are high, in agreement with recent experimental reports of the increased plasticity of the auditory space map in adult barn owls due to incremental learning. Finally, a critical speed of misalignment is identified, demarcating the crossover from adaptive to nonadaptive behavior.
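The optimization principle invoked here, maximizing the mutual information between a noisy weighted sum of sensory inputs and the neural output subject to a rewiring-cost constraint, can be sketched for a linear-Gaussian channel, where I = ½ log2(1 + wᵀCw/σ²). The quadratic penalty standing in for the rewiring cost below is a hypothetical choice, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(5)
C = np.cov(rng.normal(size=(3, 500)) + 0.5 * rng.normal(size=(1, 500)))  # correlated input covariance
sigma2 = 1.0                       # output noise variance
w_old = np.array([1.0, 0.0, 0.0])  # current (pre-adaptation) weights
rewiring_cost = 0.5                # hypothetical cost of changing the weights

def objective(w):
    info = 0.5 * np.log2(1.0 + w @ C @ w / sigma2)          # Gaussian-channel MI (bits)
    return info - rewiring_cost * np.sum((w - w_old) ** 2)  # penalized objective

# Simple projected gradient ascent with a unit-norm constraint on the weights
w = w_old.copy()
for _ in range(500):
    grad_info = (C @ w) / (np.log(2) * (sigma2 + w @ C @ w))
    grad = grad_info - 2 * rewiring_cost * (w - w_old)
    w = w + 0.05 * grad
    w /= np.linalg.norm(w)

print("adapted weights:", np.round(w, 2),
      " MI:", round(0.5 * np.log2(1 + w @ C @ w / sigma2), 2), "bits")
```

Raising the rewiring cost pins the solution near the old weights, which is the intuition behind the adult/juvenile difference and the benefit of incremental misalignment described above.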
Levine, Robert A; Oron, Yahav
2015-01-01
Tinnitus, the perception of sound in the absence of an external sound, usually results from a disorder of: (1) the auditory system (usually peripheral, rarely central); (2) the somatosensory system (head and neck); or (3) a combination of the two. Its cause can be determined through its characteristics. The history must characterize the tinnitus in terms of: (1) quality (including whether it can ever be pulsatile or have a clicking component); (2) location; (3) variability; (4) predominant pitch (low or high); and (5) whether the patient can do something to modulate the percept. In addition to the standard neuro-otologic examination, the exam should include inspection of the teeth for evidence of wear, listening around the ear and neck for sounds similar to the tinnitus, palpation of the craniocervical musculature for trigger points, and probing whether the tinnitus percept can be modulated with "somatic testing." All subjects should have a recent audiogram. Presently the most compelling tinnitus theory is the dorsal cochlear nucleus (DCN) hypothesis: both the auditory and somatosensory systems converge upon and interact within the DCN. If the activity of the DCN's somatosensory-interacting fusiform cells exceeds an individual's tinnitus threshold, then tinnitus results. © 2015 Elsevier B.V. All rights reserved.
Blum, Sarah; Debener, Stefan; Emkes, Reiner; Volkening, Nils; Fudickar, Sebastian; Bleichner, Martin G
2017-01-01
Our aim was the development and validation of a modular signal processing and classification application enabling online electroencephalography (EEG) signal processing on off-the-shelf mobile Android devices. The software application SCALA (Signal ProCessing and CLassification on Android) supports a standardized communication interface to exchange information with external software and hardware. In order to implement a closed-loop brain-computer interface (BCI) on the smartphone, we used a multiapp framework, which integrates applications for stimulus presentation, data acquisition, data processing, classification, and delivery of feedback to the user. We have implemented the open source signal processing application SCALA. We present timing test results supporting sufficient temporal precision of audio events. We also validate SCALA with a well-established auditory selective attention paradigm and report above chance level classification results for all participants. Regarding the 24-channel EEG signal quality, evaluation results confirm typical sound onset auditory evoked potentials as well as cognitive event-related potentials that differentiate between correct and incorrect task performance feedback. We present a fully smartphone-operated, modular closed-loop BCI system that can be combined with different EEG amplifiers and can easily implement other paradigms.
Rhythmic engagement with music in infancy
Zentner, Marcel; Eerola, Tuomas
2010-01-01
Humans have a unique ability to coordinate their motor movements to an external auditory stimulus, as in music-induced foot tapping or dancing. This behavior currently engages the attention of scholars across a number of disciplines. However, very little is known about its earliest manifestations. The aim of the current research was to examine whether preverbal infants engage in rhythmic behavior to music. To this end, we carried out two experiments in which we tested 120 infants (aged 5–24 months). Infants were exposed to various excerpts of musical and rhythmic stimuli, including isochronous drumbeats. Control stimuli consisted of adult- and infant-directed speech. Infants’ rhythmic movements were assessed by multiple methods involving manual coding from video excerpts and innovative 3D motion-capture technology. The results show that (i) infants engage in significantly more rhythmic movement to music and other rhythmically regular sounds than to speech; (ii) infants exhibit tempo flexibility to some extent (e.g., faster auditory tempo is associated with faster movement tempo); and (iii) the degree of rhythmic coordination with music is positively related to displays of positive affect. The findings are suggestive of a predisposition for rhythmic movement in response to music and other metrically regular sounds. PMID:20231438
Targeted neural network interventions for auditory hallucinations: Can TMS inform DBS?
Taylor, Joseph J; Krystal, John H; D'Souza, Deepak C; Gerrard, Jason Lee; Corlett, Philip R
2018-05-01
The debilitating and refractory nature of auditory hallucinations (AH) in schizophrenia and other psychiatric disorders has stimulated investigations into neuromodulatory interventions that target the aberrant neural networks associated with them. Internal or invasive forms of brain stimulation such as deep brain stimulation (DBS) are currently being explored for treatment-refractory schizophrenia. The process of developing and implementing DBS is limited by symptom clustering within psychiatric constructs as well as a scarcity of causal tools with which to predict response, refine targeting or guide clinical decisions. Transcranial magnetic stimulation (TMS), an external or non-invasive form of brain stimulation, has shown some promise as a therapeutic intervention for AH but remains relatively underutilized as an investigational probe of clinically relevant neural networks. In this editorial, we propose that TMS has the potential to inform DBS by adding individualized causal evidence to an evaluation process otherwise devoid of it in patients. Although there are significant limitations and safety concerns regarding DBS, the combination of TMS with computational modeling of neuroimaging and neurophysiological data could provide critical insights into more robust and adaptable network modulation. Copyright © 2017 Elsevier B.V. All rights reserved.
Advanced Video Activity Analytics (AVAA): Human Factors Evaluation
2015-05-01
video, and 3) creating and saving annotations (Fig. 11). (The logging program was updated after the pilot to also capture search clicks.) Playing and... visual search task and the auditory task together and thus automatically focused on the visual task. Alternatively, the operator may have intentionally...affect performance on the primary task; however, in the current test there was no apparent effect on the operator’s performance in the visual search task
Research Themes and Technological Base Program in Behavioral and Social Sciences for the U.S. Army
1976-01-01
appears to produce differential human information processing strategies. Concrete stimuli exert unifying or organizing effects that function as memory ...Technology for Tactical Information Processing and Presentation Scope: a. Objectives: To provide technological advances for enhancing user performance in...auditory, and black and white/color situation portrayal.
Daliri, Ayoub; Max, Ludo
2018-02-01
Auditory modulation during speech movement planning is limited in adults who stutter (AWS), but the functional relevance of the phenomenon itself remains unknown. We investigated for AWS and adults who do not stutter (AWNS) (a) a potential relationship between pre-speech auditory modulation and auditory feedback contributions to speech motor learning and (b) the effect on pre-speech auditory modulation of real-time versus delayed auditory feedback. Experiment I used a sensorimotor adaptation paradigm to estimate auditory-motor speech learning. Using acoustic speech recordings, we quantified subjects' formant frequency adjustments across trials when continually exposed to formant-shifted auditory feedback. In Experiment II, we used electroencephalography to determine the same subjects' extent of pre-speech auditory modulation (reductions in auditory evoked potential N1 amplitude) when probe tones were delivered prior to speaking versus not speaking. To manipulate subjects' ability to monitor real-time feedback, we included speaking conditions with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF). Experiment I showed that auditory-motor learning was limited for AWS versus AWNS, and the extent of learning was negatively correlated with stuttering frequency. Experiment II yielded several key findings: (a) our prior finding of limited pre-speech auditory modulation in AWS was replicated; (b) DAF caused a decrease in auditory modulation for most AWNS but an increase for most AWS; and (c) for AWS, the amount of auditory modulation when speaking with DAF was positively correlated with stuttering frequency. Lastly, AWNS showed no correlation between pre-speech auditory modulation (Experiment II) and extent of auditory-motor learning (Experiment I) whereas AWS showed a negative correlation between these measures. Thus, findings suggest that AWS show deficits in both pre-speech auditory modulation and auditory-motor learning; however, limited pre-speech modulation is not directly related to limited auditory-motor adaptation; and in AWS, DAF paradoxically tends to normalize their otherwise limited pre-speech auditory modulation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Noise and communication: a three-year update.
Brammer, Anthony J; Laroche, Chantal
2012-01-01
Noise is omnipresent and impacts us all in many aspects of daily living. Noise can interfere with communication not only in industrial workplaces, but also in other work settings (e.g. open-plan offices, construction, and mining) and within buildings (e.g. residences, arenas, and schools). The interference of noise with communication can have significant social consequences, especially for persons with hearing loss, and may compromise safety (e.g. failure to perceive auditory warning signals), influence worker productivity and learning in children, affect health (e.g. vocal pathology, noise-induced hearing loss), compromise speech privacy, and impact social participation by the elderly. For workers, attempts have been made to: 1) better define the auditory performance needed to function effectively and to directly measure these abilities when assessing Auditory Fitness for Duty, 2) design hearing protection devices that can improve speech understanding while offering adequate protection against loud noises, and 3) improve speech privacy in open-plan offices. As the elderly are particularly vulnerable to the effects of noise, an understanding of the interplay between auditory, cognitive, and social factors and its effect on speech communication and social participation is also critical. Classroom acoustics and speech intelligibility in children have also gained renewed interest because of the importance of effective speech comprehension in noise on learning. Finally, substantial progress has been made in developing models aimed at better predicting speech intelligibility. Despite progress in various fields, the design of alarm signals continues to lag behind advancements in knowledge. This summary of the last three years' research highlights some of the most recent issues for the workplace, for older adults, and for children, as well as the effectiveness of warning sounds and models for predicting speech intelligibility. Suggestions for future work are also discussed.
Jenison, Rick L.; Reale, Richard A.; Armstrong, Amanda L.; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A.
2015-01-01
Spectro-Temporal Receptive Fields (STRFs) were estimated from both multi-unit sorted clusters and high-gamma power responses in human auditory cortex. Intracranial electrophysiological recordings were used to measure responses to a random chord sequence of Gammatone stimuli. Traditional methods for estimating STRFs from single-unit recordings, such as spike-triggered-averages, tend to be noisy and are less robust to other response signals such as local field potentials. We present an extension to recently advanced methods for estimating STRFs from generalized linear models (GLM). A new variant of regression using regularization that penalizes non-zero coefficients is described, which results in a sparse solution. The frequency-time structure of the STRF tends toward grouping in different areas of frequency-time, and we demonstrate that group sparsity-inducing penalties applied to GLM estimates of STRFs reduce the background noise while preserving the complex internal structure. The contribution of local spiking activity to the high-gamma power signal was factored out of the STRF using the GLM method, and this contribution was significant in 85 percent of the cases. Although the GLM methods have been used to estimate STRFs in animals, this study examines the detailed structure directly from auditory cortex in the awake human brain. We used this approach to identify an abrupt change in the best frequency of estimated STRFs along posteromedial-to-anterolateral recording locations along the long axis of Heschl’s gyrus. This change correlates well with a proposed transition from core to non-core auditory fields previously identified using the temporal response properties of Heschl’s gyrus recordings elicited by click-train stimuli. PMID:26367010
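As background for the group-sparsity idea named in this abstract, the sketch below shows a minimal, hypothetical version of the approach: a lagged stimulus design matrix, a Gaussian-noise (least-squares) stand-in for the GLM likelihood, and a group-lasso penalty whose groups are the frequency channels of the STRF, fit by proximal gradient descent. The stimulus, true STRF, penalty weight, and iteration count are illustrative assumptions, not values from the study.

```python
# Hypothetical sketch, not the authors' code: group-sparse STRF estimation on
# simulated random-chord data. A Gaussian noise model replaces the paper's GLM
# likelihood to keep the example short; the group-lasso proximal step illustrates
# the "group sparsity-inducing penalty" described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
n_freq, n_lag, n_time = 16, 20, 5000

# Random chord stimulus: each frequency channel is on in ~10% of time bins.
stim = (rng.random((n_freq, n_time)) < 0.1).astype(float)

# Ground-truth STRF confined to a few frequency channels (rows).
true_strf = np.zeros((n_freq, n_lag))
true_strf[6:9, 3:8] = 1.0
true_strf[6:9, 9:13] = -0.5

def design_matrix(stim, n_lag):
    """Row t holds the lagged stimulus patch stim[f, t - lag] for all (f, lag)."""
    n_freq, n_time = stim.shape
    X = np.zeros((n_time, n_freq, n_lag))
    for lag in range(n_lag):
        X[lag:, :, lag] = stim[:, : n_time - lag].T
    return X.reshape(n_time, n_freq * n_lag)

X = design_matrix(stim, n_lag)
y = X @ true_strf.reshape(-1) + 0.5 * rng.standard_normal(n_time)

def group_soft_threshold(w, thresh):
    """Proximal step of the group lasso: shrink each frequency row toward zero."""
    W = w.reshape(n_freq, n_lag)
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - thresh / np.maximum(norms, 1e-12), 0.0)
    return (W * scale).reshape(-1)

# Proximal gradient descent (ISTA) for 0.5*||y - Xw||^2 + lam * sum_f ||w_f||.
lam = 40.0                                # group-sparsity strength (illustrative)
step = 1.0 / np.linalg.norm(X, 2) ** 2    # 1 / Lipschitz constant of the gradient
w = np.zeros(n_freq * n_lag)
for _ in range(300):
    grad = X.T @ (X @ w - y)
    w = group_soft_threshold(w - step * grad, lam * step)

strf_hat = w.reshape(n_freq, n_lag)
print("active frequency channels:", np.flatnonzero(np.abs(strf_hat).sum(axis=1) > 1e-8))
```

With a Poisson likelihood in place of the least-squares term, only the gradient of the smooth part changes; the group-shrinkage step stays the same.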
Gillespie, Lisa N; Zanin, Mark P; Shepherd, Robert K
2015-01-28
The cochlear implant provides auditory cues to profoundly deaf patients by electrically stimulating the primary auditory neurons (ANs) of the cochlea. However, ANs degenerate in deafness; the preservation of a robust AN target population, in combination with advances in cochlear implant technology, may provide improved hearing outcomes for cochlear implant patients. The exogenous delivery of neurotrophins such as brain-derived neurotrophic factor (BDNF) and neurotrophin-3 is well known to support AN survival in deafness, and cell-based therapies provide a potential clinically viable option for delivering neurotrophins into the deaf cochlea. This study utilized cells that were genetically modified to express BDNF and encapsulated in alginate microspheres, and investigated AN survival in the deaf guinea pig following (a) cell-based neurotrophin treatment in conjunction with chronic electrical stimulation from a cochlear implant, and (b) long-term cell-based neurotrophin delivery. In comparison to deafened controls, there was significantly greater AN survival following the cell-based neurotrophin treatment, and there were ongoing survival effects for at least six months. In addition, functional benefits were observed following cell-based neurotrophin treatment and chronic electrical stimulation, with a statistically significant decrease in electrically evoked auditory brainstem response thresholds observed during the experimental period. This study demonstrates that cell-based therapies, in conjunction with a cochlear implant, show potential as a clinically transferable means of providing neurotrophin treatment to support AN survival in deafness. This technology also has the potential to deliver other therapeutic agents, and to be used in conjunction with other biomedical devices for the treatment of a variety of neurodegenerative conditions. Copyright © 2014 Elsevier B.V. All rights reserved.
Sports Management Faculty External Grant-Writing Activities in the United States
ERIC Educational Resources Information Center
DeVinney, Timothy P.
2012-01-01
This study was conducted to fill a void in information, provide relevant, current data for faculty members on external grant-writing activities in the academic field of sport management, and serve as a tool that may aid in the advancement of external grant-writing efforts within the field of sport management. All data is specific to…
Overview of the Advanced High Frequency Branch
NASA Technical Reports Server (NTRS)
Miranda, Felix A.
2015-01-01
This presentation provides an overview of the competencies, selected areas of research and technology development activities, and current external collaborative efforts of the NASA Glenn Research Center's Advanced High Frequency Branch.
Elquza, Emad; Babiker, Hani M; Howell, Krisha J; Kovoor, Andrew I; Brown, Thomas David; Patel, Hitendra; Malangone, Steven A; Borad, Mitesh J; Dragovich, Tomislav
2016-01-01
To establish the maximum tolerated dose (MTD) and safety profile of bi-weekly Pemetrexed (PEM) when combined with weekly cisplatin (CDDP) and standard dose external beam radiation (EBRT) in patients with locally advanced or metastatic esophageal and gastroesophageal junction (GEJ) carcinomas. We conducted an open label, single institution, phase I dose escalation study designed to evaluate up to 15-35 patients with advanced or metastatic esophageal and GEJ carcinomas. Ten patients were treated with bi-weekly PEM, weekly CDDP, and EBRT. The MTD of bi-weekly PEM was determined to be 500 mg/m(2).
Procedures for central auditory processing screening in schoolchildren.
Carvalho, Nádia Giulian de; Ubiali, Thalita; Amaral, Maria Isabel Ramos do; Santos, Maria Francisca Colella
2018-03-22
Central auditory processing screening in schoolchildren has led to debates in literature, both regarding the protocol to be used and the importance of actions aimed at prevention and promotion of auditory health. Defining effective screening procedures for central auditory processing is a challenge in Audiology. This study aimed to analyze the scientific research on central auditory processing screening and discuss the effectiveness of the procedures utilized. A search was performed in the SciELO and PUBMed databases by two researchers. The descriptors used in Portuguese and English were: auditory processing, screening, hearing, auditory perception, children, auditory tests and their respective terms in Portuguese. Inclusion criteria were original articles involving schoolchildren, auditory screening of central auditory skills, and articles in Portuguese or English. Exclusion criteria were studies with adult and/or neonatal populations, peripheral auditory screening only, and duplicate articles. After applying the described criteria, 11 articles were included. At the international level, the central auditory processing screening methods used were: the screening test for auditory processing disorder and its revised version, the screening test for auditory processing, the scale of auditory behaviors, the children's auditory performance scale, and Feather Squadron. In the Brazilian scenario, the procedures used were the simplified auditory processing assessment and Zaidan's battery of tests. At the international level, the screening test for auditory processing and Feather Squadron batteries stand out as the most comprehensive evaluation of hearing skills. At the national level, there is a paucity of studies that use methods evaluating more than four skills and that are normalized by age group. The use of the simplified auditory processing assessment and questionnaires can be complementary in the search for an easy-access and low-cost alternative in the auditory screening of Brazilian schoolchildren. Interactive tools should be proposed that allow the selection of as many hearing skills as possible, validated by comparison with the battery of tests used in diagnosis. Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
ERIC Educational Resources Information Center
Bornstein, Joan L.
The booklet outlines ways to help children with learning disabilities in specific subject areas. Characteristic behavior and remedial exercises are listed for seven areas of auditory problems: auditory reception, auditory association, auditory discrimination, auditory figure ground, auditory closure and sound blending, auditory memory, and grammar…
Experience and information loss in auditory and visual memory.
Gloede, Michele E; Paulauskas, Emily E; Gregg, Melissa K
2017-07-01
Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1-3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.
The Contribution of Head Movement to the Externalization and Internalization of Sounds
Brimijoin, W. Owen; Boyd, Alan W.; Akeroyd, Michael A.
2013-01-01
Background: When stimuli are presented over headphones, they are typically perceived as internalized; i.e., they appear to emanate from inside the head. Sounds presented in the free-field tend to be externalized, i.e., perceived to be emanating from a source in the world. This phenomenon is frequently attributed to reverberation and to the spectral characteristics of the sounds: those sounds whose spectrum and reverberation matches that of free-field signals arriving at the ear canal tend to be more frequently externalized. Another factor, however, is that the virtual location of signals presented over headphones moves in perfect concert with any movements of the head, whereas the location of free-field signals moves in opposition to head movements. The effects of head movement have not been systematically disentangled from reverberation and/or spectral cues, so we measured the degree to which movements contribute to externalization. Methodology/Principal Findings: We performed two experiments: 1) Using motion tracking and free-field loudspeaker presentation, we presented signals that moved in their spatial location to match listeners’ head movements. 2) Using motion tracking and binaural room impulse responses, we presented filtered signals over headphones that appeared to remain static relative to the world. The results from experiment 1 showed that free-field signals from the front that move with the head are less likely to be externalized (23%) than those that remain fixed (63%). Experiment 2 showed that virtual signals whose position was fixed relative to the world are more likely to be externalized (65%) than those fixed relative to the head (20%), regardless of the fidelity of the individual impulse responses. Conclusions/Significance: Head movements play a significant role in the externalization of sound sources. These findings imply tight integration between binaural cues and self motion cues and underscore the importance of self motion for spatial auditory perception. PMID:24312677
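A minimal geometric sketch (assumed for illustration; not the study's rendering software) of the key manipulation: a world-fixed virtual source must counter-rotate against the tracked head yaw, whereas a head-fixed source keeps a constant rendered azimuth.

```python
# Illustrative geometry sketch (assumed, not the study's rendering code): convert
# a desired source direction into the azimuth a binaural renderer should use
# relative to the listener's head, given the head yaw reported by a motion tracker.
# A world-fixed virtual source (as in Experiment 2) counter-rotates against head
# movement; a head-fixed source keeps a constant rendered azimuth, like ordinary
# headphone playback.
def rendered_azimuth(world_azimuth_deg, head_yaw_deg, world_fixed=True):
    """Azimuth (degrees, wrapped to [-180, 180), positive = right) to render."""
    az = world_azimuth_deg - head_yaw_deg if world_fixed else world_azimuth_deg
    return (az + 180.0) % 360.0 - 180.0

# Example: a source straight ahead in the world; the listener turns the head 30° right.
print(rendered_azimuth(0.0, 30.0, world_fixed=True))   # -30.0: source now lies to the left
print(rendered_azimuth(0.0, 30.0, world_fixed=False))  #   0.0: source follows the head
```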
Auditory Learning. Dimensions in Early Learning Series.
ERIC Educational Resources Information Center
Zigmond, Naomi K.; Cicci, Regina
The monograph discusses the psycho-physiological operations for processing of auditory information, the structure and function of the ear, the development of auditory processes from fetal responses through discrimination, language comprehension, auditory memory, and auditory processes related to written language. Disorders of auditory learning…
Auditory and motor imagery modulate learning in music performance
Brown, Rachel M.; Palmer, Caroline
2013-01-01
Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of auditory interference. Motor imagery aided pitch accuracy overall when interference conditions were manipulated at encoding (Experiment 1) but not at retrieval (Experiment 2). Thus, skilled performers' imagery abilities had distinct influences on encoding and retrieval of musical sequences. PMID:23847495
$\mathscr{H}_2$ optimal control techniques for resistive wall mode feedback in tokamaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clement, Mitchell; Hanson, Jeremy; Bialek, Jim
DIII-D experiments show that a new, advanced algorithm improves resistive wall mode (RWM) stability control in high performance discharges using external coils. DIII-D can excite strong, locked or nearly locked external kink modes whose rotation frequencies and growth rates are on the order of the magnetic flux diffusion time of the vacuum vessel wall. The VALEN RWM model has been used to gauge the effectiveness of RWM control algorithms in tokamaks. Simulations and experiments have shown that modern control techniques like Linear Quadratic Gaussian (LQG) control will perform better, using 77% less current, than classical techniques when using control coils external to DIII-D's vacuum vessel. Experiments were conducted to develop control of a rotating n = 1 perturbation using an LQG controller derived from VALEN and external coils. Feedback using this LQG algorithm outperformed a proportional gain only controller in these perturbation experiments over a range of frequencies. Results from high N experiments also show that advanced feedback techniques using external control coils may be as effective as internal control coil feedback using classical control techniques.
$\mathscr{H}_2$ optimal control techniques for resistive wall mode feedback in tokamaks
Clement, Mitchell; Hanson, Jeremy; Bialek, Jim; ...
2018-02-28
DIII-D experiments show that a new, advanced algorithm improves resistive wall mode (RWM) stability control in high performance discharges using external coils. DIII-D can excite strong, locked or nearly locked external kink modes whose rotation frequencies and growth rates are on the order of the magnetic flux diffusion time of the vacuum vessel wall. The VALEN RWM model has been used to gauge the effectiveness of RWM control algorithms in tokamaks. Simulations and experiments have shown that modern control techniques like Linear Quadratic Gaussian (LQG) control will perform better, using 77% less current, than classical techniques when using control coils external to DIII-D's vacuum vessel. Experiments were conducted to develop control of a rotating n = 1 perturbation using an LQG controller derived from VALEN and external coils. Feedback using this LQG algorithm outperformed a proportional gain only controller in these perturbation experiments over a range of frequencies. Results from high N experiments also show that advanced feedback techniques using external control coils may be as effective as internal control coil feedback using classical control techniques.
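The LQG structure named in both records (a steady-state Kalman filter whose state estimate drives an LQR gain) can be sketched on a generic plant. The two-state system, weights, and noise levels below are illustrative assumptions and are not the VALEN-derived RWM model.

```python
# Generic LQG sketch under illustrative assumptions: the 2-state plant is a made-up
# stand-in for the VALEN-derived RWM model, and all matrices, weights and noise
# levels are invented. It shows the structure only: a steady-state Kalman filter
# estimates the state, and an LQR gain from a Riccati equation feeds that estimate
# back as the coil command.
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy unstable discrete-time plant: x[k+1] = A x[k] + B u[k] + w,  y[k] = C x[k] + v
A = np.array([[1.05, 0.10],
              [0.00, 0.95]])
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), np.array([[0.1]])            # LQR state / control weights
W, V = 1e-3 * np.eye(2), np.array([[1e-2]])    # process / measurement noise covariances

# LQR gain from the control Riccati equation.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Steady-state Kalman gain from the dual (estimation) Riccati equation.
S = solve_discrete_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(C @ S @ C.T + V)

# Closed-loop simulation of the certainty-equivalent controller u = -K x_hat.
rng = np.random.default_rng(1)
x, x_hat = np.array([1.0, 0.0]), np.zeros(2)
for _ in range(50):
    y = C @ x + rng.normal(0.0, 0.1)                              # noisy measurement
    x_hat = x_hat + L.flatten() * (y - C @ x_hat)                 # measurement update
    u = -K @ x_hat                                                # LQR feedback on the estimate
    x = A @ x + B.flatten() * u + rng.normal(0.0, 0.03, size=2)   # true plant
    x_hat = A @ x_hat + B.flatten() * u                           # time update (prediction)
print("final state norm:", np.linalg.norm(x))
```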
Barsties, Ben; Maryn, Youri
2016-07-01
The Acoustic Voice Quality Index (AVQI) is an objective method to quantify the severity of overall voice quality in concatenated continuous speech and sustained phonation segments. Recently, AVQI was successfully modified to be more representative and ecologically valid because the internal consistency of AVQI was balanced out through equal proportion of the 2 speech types. The present investigation aims to explore its external validation in a large data set. An expert panel of 12 speech-language therapists rated the voice quality of 1058 concatenated voice samples varying from normophonia to severe dysphonia. The Spearman rank-order correlation coefficients (r) were used to measure concurrent validity. The AVQI's diagnostic accuracy was evaluated with several estimates of its receiver operating characteristics (ROC). Finally, 8 of the 12 experts were chosen because of reliability criteria. A strong correlation was identified between AVQI and auditory-perceptual rating (r = 0.815, P = .000). It indicated that 66.4% of the auditory-perceptual rating's variation was explained by AVQI. Additionally, the ROC results showed again the best diagnostic outcome at a threshold of AVQI = 2.43. This study highlights external validation and diagnostic precision of the AVQI version 03.01 as a robust and ecologically valid measurement to objectify voice quality. © The Author(s) 2016.
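The variance-explained figure quoted in this abstract follows directly from the reported correlation; as a quick arithmetic check:

```latex
r = 0.815 \quad\Rightarrow\quad r^{2} = 0.815^{2} \approx 0.664,
```

i.e., about 66.4% of the variation in the auditory-perceptual ratings is accounted for by the AVQI, as stated.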
Neural representation of the self-heard biosonar click in bottlenose dolphins (Tursiops truncatus).
Finneran, James J; Mulsow, Jason; Houser, Dorian S; Schlundt, Carolyn E
2017-05-01
The neural representation of the dolphin broadband biosonar click was investigated by measuring auditory brainstem responses (ABRs) to "self-heard" clicks masked with noise bursts having various high-pass cutoff frequencies. Narrowband ABRs were obtained by sequentially subtracting responses obtained with noise having lower high-pass cutoff frequencies from those obtained with noise having higher cutoff frequencies. For comparison to the biosonar data, ABRs were also measured in a passive listening experiment, where external clicks and masking noise were presented to the dolphins and narrowband ABRs were again derived using the subtractive high-pass noise technique. The results showed little change in the peak latencies of the ABR to the self-heard click from 28 to 113 kHz; i.e., the high-frequency neural responses to the self-heard click were delayed relative to those of an external, spectrally "pink" click. The neural representation of the self-heard click is thus highly synchronous across the echolocation frequencies and does not strongly resemble that of a frequency modulated downsweep (i.e., decreasing-frequency chirp). Longer ABR latencies at higher frequencies are hypothesized to arise from spectral differences between self-heard clicks and external clicks, forward masking from previously emitted biosonar clicks, or neural inhibition accompanying the emission of clicks.
Neural representation of the self-heard biosonar click in bottlenose dolphins (Tursiops truncatus)
Finneran, James J.; Mulsow, Jason; Houser, Dorian S.; Schlundt, Carolyn E.
2017-01-01
The neural representation of the dolphin broadband biosonar click was investigated by measuring auditory brainstem responses (ABRs) to “self-heard” clicks masked with noise bursts having various high-pass cutoff frequencies. Narrowband ABRs were obtained by sequentially subtracting responses obtained with noise having lower high-pass cutoff frequencies from those obtained with noise having higher cutoff frequencies. For comparison to the biosonar data, ABRs were also measured in a passive listening experiment, where external clicks and masking noise were presented to the dolphins and narrowband ABRs were again derived using the subtractive high-pass noise technique. The results showed little change in the peak latencies of the ABR to the self-heard click from 28 to 113 kHz; i.e., the high-frequency neural responses to the self-heard click were delayed relative to those of an external, spectrally “pink” click. The neural representation of the self-heard click is thus highly synchronous across the echolocation frequencies and does not strongly resemble that of a frequency modulated downsweep (i.e., decreasing-frequency chirp). Longer ABR latencies at higher frequencies are hypothesized to arise from spectral differences between self-heard clicks and external clicks, forward masking from previously emitted biosonar clicks, or neural inhibition accompanying the emission of clicks. PMID:28599518
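A toy illustration (simulated waveforms, not the study's data or analysis code) of the subtractive high-pass-noise derivation described in both records above: the response recorded with a masker high-passed at cutoff f reflects cochlear regions below f, so differencing adjacent cutoff conditions isolates the band between them. Cutoff values echo the abstract; latencies and waveform shapes are made up.

```python
# Toy illustration with simulated waveforms (not the study's data or code) of the
# subtractive high-pass-noise technique for deriving narrowband ABRs.
import numpy as np

fs = 50_000                                    # sampling rate of the averaged ABR (Hz)
t = np.arange(0, 0.010, 1 / fs)                # 10-ms analysis window
cutoffs_khz = [28, 40, 56, 80, 113]            # high-pass cutoffs of the masking noise

def fake_abr(latency_ms, amp=1.0):
    """Toy ABR wave: a Gaussian-windowed tone burst centred at the given latency."""
    return amp * np.exp(-((t * 1e3 - latency_ms) ** 2) / 0.05) \
               * np.cos(2 * np.pi * 1000 * (t - latency_ms / 1e3))

# Simulated grand-average ABRs: each higher cutoff admits one more (higher-frequency)
# band into the response; the higher bands are given earlier latencies here, as is
# typical for external clicks.
band_latencies_ms = [4.0, 3.6, 3.2, 2.8]
responses, cumulative = {}, np.zeros_like(t)
for cutoff, lat in zip(cutoffs_khz, [None] + band_latencies_ms):
    if lat is not None:
        cumulative = cumulative + fake_abr(lat)
    responses[cutoff] = cumulative.copy()

# Derived narrowband ABRs: pairwise differences between adjacent cutoff conditions.
for lo, hi in zip(cutoffs_khz[:-1], cutoffs_khz[1:]):
    wave = responses[hi] - responses[lo]
    peak_latency_ms = t[np.argmax(np.abs(wave))] * 1e3
    print(f"{lo}-{hi} kHz band: peak latency {peak_latency_ms:.2f} ms")
```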
Effect of phase advance on the brushless dc motor torque speed respond
NASA Astrophysics Data System (ADS)
Mohd, M. S.; Karsiti, M. N.; Mohd, M. S.
2015-12-01
Brushless direct current (BLDC) motors are widely used in small and medium sized electric vehicles as they exhibit the highest specific power and thermal efficiency compared to induction motors. The permanent-magnet BLDC rotor creates a constant magnetic flux, which limits the motor's top speed. As the back electromotive force (EMF) voltage increases proportionally with motor rotational speed and approaches the amplitude of the input voltage, the phase current amplitude will reach zero. By advancing the phase current, it is possible to extend the maximum speed of the BLDC motor beyond the rated top speed. This will allow smaller BLDC motors to be used in small electric vehicles (EV) and, in larger applications, will allow the use of BLDC motors without a multispeed transmission unit for high speed operation. However, increasing the speed of the BLDC motor will affect the torque-speed response: the torque output will decrease as speed increases. Adjusting the phase angle affects the speed of the motor because each coil is energized earlier than the corresponding rise in the back EMF of the coil. This paper discusses a phase advance strategy for the brushless DC motor based on phase angle manipulation using external hall sensors. Tests have been performed at different phase advance angles, in advance and retard positions, for different applied voltage levels. The objectives are to create the external hall sensor system to commutate the BLDC motor, to establish the phase advance of the BLDC motor by varying the phase angle through external hall sensor manipulation, and to observe the response of the motor while the phase advance is applied by hall sensor adjustment.
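One common way to realize the phase advance described here, sketched under illustrative assumptions (this is not the paper's implementation): six-step commutation normally switches at each hall edge, i.e. every 60 electrical degrees, so advancing by a given angle means scheduling the next switching instant that many electrical degrees earlier, using the electrical period estimated from previous hall edges.

```python
# Hedged sketch of one way to realize phase advance from external hall sensors
# (not the paper's implementation). Names and values are illustrative.
def next_commutation_time(last_edge_time_s, electrical_period_s, advance_deg):
    """Time at which to apply the next commutation step.

    last_edge_time_s     -- timestamp of the most recent hall-state change
    electrical_period_s  -- one full electrical revolution, estimated from hall edges
    advance_deg          -- phase advance in electrical degrees (negative = retard)
    """
    degrees_until_next_step = 60.0 - advance_deg          # nominal step is 60° away
    return last_edge_time_s + (degrees_until_next_step / 360.0) * electrical_period_s

# Example: 3000 electrical rpm -> 20-ms period; a 15° advance fires about 0.83 ms early.
period = 60.0 / 3000.0
print(next_commutation_time(0.0, period, 0.0))    # 0.003333... s (no advance)
print(next_commutation_time(0.0, period, 15.0))   # 0.0025 s (15° advance)
```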
Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale
2017-04-01
There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times on auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.
Lateralization of Frequency-Specific Networks for Covert Spatial Attention to Auditory Stimuli
Thorpe, Samuel; D'Zmura, Michael
2011-01-01
We conducted a cued spatial attention experiment to investigate the time–frequency structure of human EEG induced by attentional orientation of an observer in external auditory space. Seven subjects participated in a task in which attention was cued to one of two spatial locations at left and right. Subjects were instructed to report the speech stimulus at the cued location and to ignore a simultaneous speech stream originating from the uncued location. EEG was recorded from the onset of the directional cue through the offset of the inter-stimulus interval (ISI), during which attention was directed toward the cued location. Using a wavelet spectrum, each frequency band was then normalized by the mean level of power observed in the early part of the cue interval to obtain a measure of induced power related to the deployment of attention. Topographies of band specific induced power during the cue and inter-stimulus intervals showed peaks over symmetric bilateral scalp areas. We used a bootstrap analysis of a lateralization measure defined for symmetric groups of channels in each band to identify specific lateralization events throughout the ISI. Our results suggest that the deployment and maintenance of spatially oriented attention throughout a period of 1,100 ms is marked by distinct episodes of reliable hemispheric lateralization ipsilateral to the direction in which attention is oriented. An early theta lateralization was evident over posterior parietal electrodes and was sustained throughout the ISI. In the alpha and mu bands punctuated episodes of parietal power lateralization were observed roughly 500 ms after attentional deployment, consistent with previous studies of visual attention. In the beta band these episodes show similar patterns of lateralization over frontal motor areas. These results indicate that spatial attention involves similar mechanisms in the auditory and visual modalities. PMID:21630112
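An assumed analysis sketch (not the authors' code) of the band-specific lateralization measure with a trial-resampling bootstrap, applied here to simulated baseline-normalized induced power for two symmetric channel groups:

```python
# Illustrative sketch (assumed analysis, not the authors' code): a lateralization
# index for baseline-normalized induced power in one frequency band, computed over
# symmetric left/right channel groups, with a trial-resampling bootstrap to flag
# time points whose confidence interval excludes zero. Input arrays are simulated.
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_times = 80, 300
# Simulated induced power (already normalized by the cue-interval baseline) for
# symmetric channel groups; the right group is boosted in the second half of the ISI.
left_power = rng.gamma(2.0, 1.0, size=(n_trials, n_times))
right_power = rng.gamma(2.0, 1.0, size=(n_trials, n_times))
right_power[:, 150:] *= 1.3

def lateralization_index(left, right):
    """Per-time-point index in [-1, 1]; positive = more induced power on the right."""
    l, r = left.mean(axis=0), right.mean(axis=0)
    return (r - l) / (r + l)

observed = lateralization_index(left_power, right_power)

# Bootstrap over trials: resample trials with replacement and recompute the index.
n_boot = 1000
boot = np.empty((n_boot, n_times))
for b in range(n_boot):
    idx = rng.integers(0, n_trials, n_trials)
    boot[b] = lateralization_index(left_power[idx], right_power[idx])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
significant = (lo > 0) | (hi < 0)
print("time points with reliable lateralization:", significant.sum())
```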
Söderlund, Göran B. W.; Jobs, Elisabeth Nilsson
2016-01-01
The most common neuropsychiatric condition in children is attention deficit hyperactivity disorder (ADHD), affecting ∼6–9% of the population. ADHD is distinguished by inattention and hyperactive, impulsive behaviors, as well as poor performance in various cognitive tasks, often leading to failures at school. Sensory and perceptual dysfunctions have also been noticed. Prior research has mainly focused on limitations in executive functioning, where differences are often explained by deficits in pre-frontal cortex activation. Less notice has been given to sensory perception and subcortical functioning in ADHD. Recent research has shown that children with an ADHD diagnosis have a deviant auditory brainstem response compared to healthy controls. The aim of the present study was to investigate whether the speech recognition threshold differs between attentive children and children with ADHD symptoms in two environmental sound conditions, with and without external noise. Previous research has shown that children with attention deficits can benefit from white noise exposure during cognitive tasks, and here we investigate whether a noise benefit is present during an auditory perceptual task. For this purpose we used a modified Hagerman’s speech recognition test in which children with and without attention deficits performed a binaural speech recognition task to assess the speech recognition threshold in no-noise and noise conditions (65 dB). Results showed that the inattentive group displayed a higher speech recognition threshold than typically developed children and that the difference in speech recognition threshold disappeared when exposed to noise at a suprathreshold level. From this we conclude that inattention can partly be explained by sensory perceptual limitations that can possibly be ameliorated through noise exposure. PMID:26858679
Sclerosteosis involving the temporal bone: histopathologic aspects.
Nager, G T; Hamersma, H
1986-01-01
Sclerosteosis is a rare, potentially lethal, autosomal recessive, progressive craniotubular sclerosing bone dysplasia with characteristic facial and skeletal features. The temporal bone changes include a marked increase in overall size, extensive sclerosis, narrowing of the external auditory canal, and severe constriction of the internal auditory meatus, fallopian canal, eustachian tube, and middle ear cleft. Attenuation of the bony canals of the 9th, 10th, and 11th cranial nerves, reduction in size of the internal carotid artery, and severe obliteration of the sigmoid sinus and jugular bulb also occur. Loss of hearing, generally bilateral, is a frequent symptom. It often manifests in early childhood and initially is expressed as sound conduction impairment. Later, a sensorineural hearing loss and loss of vestibular nerve function often develop. Impairment of facial nerve function is another feature occasionally present at birth. In the beginning, a unilateral intermittent facial weakness may occur which eventually progresses to a bilateral permanent facial paresis. The histologic examination of the temporal bones from a patient with sclerosteosis explains the mechanisms involved in the progressive impairment of sound conduction and loss of cochlear, vestibular, and facial nerve function. There is a decrease of the arterial blood supply to the brain and an obstruction of the venous drainage from it. The histopathology reveals the obstacles to decompression of the middle ear cleft, ossicular chain, internal auditory and facial canals, and the risks, and in many instances the contraindications, to such procedures. On the other hand, decompression of the sigmoid sinus and jugular bulb should be considered as an additional life-saving procedure in conjunction with the prophylactic craniotomy recommended in all adult patients.
Why Cannot We have an Etiological Classification for the Patients with Granular Myringitis?
Bansal, Mohan
2017-09-01
Though granular myringitis (GM) is not a very rare disease, it does not have any classification. Its exact etiology is not known. Granulations on the tympanic membrane also occur in association with other lesions of the external auditory canal (EAC) and middle ear. The aims of this study were to identify the etiological factors of GM and to classify the disease according to its etiological factors and associated disorders of the EAC and middle ear. Data were retrieved from a search of four electronic databases: PubMed, EMBASE, Cochrane Library, and Google Scholar. Relevant articles were also sought by a hand search of reference books. The databases were searched using the key words otitis externa, external otitis, granular myringitis, granular otitis externa and myringitis. Data were extracted using a pre-defined data-extraction form. The following data were recorded: (1) etiological and predisposing conditions; (2) pathological features; (3) associated disorders of the external and middle ear. The study proposes an etiological classification of GM. It suggests two major groups: primary and secondary. Primary GM is essentially idiopathic, and these patients do not have evidence of any other type of otitis media or otitis externa. In secondary GM the cause is obvious, and the patients usually have associated otitis media and/or lesions of the external ear canal. The author speculates that the habit of self ear cleaning/scratching is a specific etiological factor in cases of primary GM, but more studies are required to confirm this theory.
Short GSM mobile phone exposure does not alter human auditory brainstem response.
Stefanics, Gábor; Kellényi, Lóránd; Molnár, Ferenc; Kubinyi, Györgyi; Thuróczy, György; Hernádi, István
2007-11-12
There are about 1.6 billion GSM cellular phones in use throughout the world today. Numerous papers have reported various biological effects in humans exposed to electromagnetic fields emitted by mobile phones. The aim of the present study was to advance our understanding of potential adverse effects of GSM mobile phones on the human hearing system. Auditory Brainstem Response (ABR) was recorded with three non-polarizing Ag-AgCl scalp electrodes in thirty young and healthy volunteers (age 18-26 years) with normal hearing. ABR data were collected before, and immediately after, a 10 minute exposure to 900 MHz pulsed electromagnetic field (EMF) emitted by a commercial Nokia 6310 mobile phone. Fifteen subjects were exposed to genuine EMF and fifteen to sham EMF in a double blind and counterbalanced order. Possible effects of irradiation were analyzed by comparing the latency of ABR waves I, III and V before and after genuine/sham EMF exposure. A paired sample t-test was conducted for statistical analysis. Results revealed no significant differences in the latency of ABR waves I, III and V before and after 10 minutes of genuine/sham EMF exposure. The present results suggest that, in our experimental conditions, a single 10 minute exposure to 900 MHz EMF emitted by a commercial mobile phone does not produce measurable immediate effects in the latency of auditory brainstem waves I, III and V.
Acoustic and Auditory Perception Effects of the Voice Therapy Technique Finger Kazoo in Adult Women.
Christmann, Mara Keli; Cielo, Carla Aparecida
2017-05-01
This study aimed to verify and to correlate acoustic and auditory-perceptual measures of the glottic source after performance of the finger kazoo (FK) technique. This is an experimental, cross-sectional, and qualitative study. We analyzed the vowel [a:] in 46 adult women with neither vocal complaints nor laryngeal alterations, using the Multi-Dimensional Voice Program Advanced and the RASATI scale, before and immediately after performing three series of FK and after a subsequent 5-minute period of silence. Kappa, Friedman, Wilcoxon, and Spearman tests were used. We found a significant increase in fundamental frequency and reductions in amplitude variation and degree of sub-harmonics immediately after performing FK. Positive correlations with aspects of RASATI were found for measures of frequency and its perturbation, measures of amplitude, soft phonation index, and degree and number of unvoiced segments. Negative correlations with aspects of RASATI were found for voice turbulence index, measures of frequency and its perturbation, and soft phonation index. There was an increase in fundamental frequency, within normal limits, and a reduction in acoustic measures related to the presence of noise and instability. In general, acoustic measures suggestive of noise and instability were reduced in accordance with the decrease in auditory-perceptual aspects of vocal alteration. This shows that both instruments are complementary and that the acoustic vocal effect was positive. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Hu, Xiaochen; Ackermann, Hermann; Martin, Jason A; Erb, Michael; Winkler, Susanne; Reiterer, Susanne M
2013-12-01
Individual differences in second language (L2) aptitude have been assumed to depend upon a variety of cognitive and personality factors. In particular, the cognitive factor of phonological working memory has been conceptualised as a language learning device. However, strong associations between phonological working memory and L2 aptitude have previously been found in early-stage learners only, not in advanced learners. The current study aimed at investigating the behavioural and neurobiological predictors of advanced L2 learning. Our behavioural results showed that phonetic coding ability and empathy, but not phonological working memory, predict L2 pronunciation aptitude in advanced learners. Second, functional neuroimaging revealed this behavioural trait to be correlated with hemodynamic responses of the cerebral network of speech motor control and auditory-perceptual areas. We suggest that the acquisition of L2 pronunciation aptitude is a dynamic process, requiring a variety of neural resources at different processing stages over time. Copyright © 2012 Elsevier Inc. All rights reserved.
Ravicz, M E; Rosowski, J J; Voigt, H F
1992-07-01
This is the first paper of a series dealing with sound-power collection by the auditory periphery of the gerbil. The purpose of the series is to quantify the physiological action of the gerbil's relatively large tympanic membrane and middle-ear air cavities. To this end the middle-ear input impedance ZT was measured at frequencies between 10 Hz and 18 kHz before and after manipulations of the middle-ear cavity. The frequency dependence of ZT is consistent with that of the middle-ear transfer function computed from extant data. Comparison of the impedance and transfer function suggests a middle-ear transformer ratio of 50 at frequencies below 1 kHz, substantially smaller than the anatomical value of 90 [Lay, J. Morph. 138, 41-120 (1972)]. Below 1 kHz the data suggest a low-frequency acoustic stiffness KT for the middle ear of 970 Pa/mm3 and a stiffness of the middle-ear cavity of 720 Pa/mm3 (middle-ear volume V MEC of 195 mm3); thus the middle-ear air spaces contribute about 70% of the acoustic stiffness of the auditory periphery. Manipulations of a middle-ear model suggest that decreases in V MEC lead to proportionate increases in KT but that further increases in middle-ear cavity volume produce only limited decreases in middle-ear stiffness. The data and the model point out that the real part of the middle-ear impedance at frequencies below 100 Hz is determined primarily by losses within the middle-ear cavity. The measured impedance is comparable in magnitude and frequency dependence to the impedance in several larger mammalian species commonly used in auditory research. A comparison of low-frequency stiffness and anatomical dimensions among several species suggests that the large middle-ear cavities in gerbil act to reduce the middle-ear stiffness at low frequencies. A description of sound-power collection by the gerbil ear requires a description of the function of the external ear.
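As a rough cross-check of the reported cavity stiffness, using the standard lumped-element expression for a closed air volume (the air density and sound speed are assumed values, not taken from the paper):

```latex
K_{\mathrm{cav}} \approx \frac{\rho_0 c^2}{V_{\mathrm{MEC}}}
  = \frac{(1.2\ \mathrm{kg\,m^{-3}})\,(343\ \mathrm{m\,s^{-1}})^{2}}{195\ \mathrm{mm^{3}}}
  \approx 7.2\times 10^{2}\ \mathrm{Pa/mm^{3}},
\qquad
\frac{K_{\mathrm{cav}}}{K_T} \approx \frac{720}{970} \approx 0.74,
```

consistent with the reported cavity stiffness of 720 Pa/mm3 and with the statement that the middle-ear air spaces contribute about 70% of the total acoustic stiffness.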
Picoloto, Luana Altran; Cardoso, Ana Cláudia Vieira; Cerqueira, Amanda Venuti; Oliveira, Cristiane Moço Canhetti de
2017-12-07
To verify the effect of delayed auditory feedback on the speech fluency of individuals who stutter with and without central auditory processing disorders. Participants were twenty individuals who stutter, aged 7 to 17 years, divided into two groups: a Stuttering Group with Auditory Processing Disorders (SGAPD), comprising 10 individuals with central auditory processing disorders, and a Stuttering Group (SG), comprising 10 individuals without central auditory processing disorders. Procedures were: fluency assessment with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF), and assessment of stuttering severity and central auditory processing (CAP). Phono Tools software was used to cause a delay of 100 milliseconds in the auditory feedback. The Wilcoxon signed-rank test was used in the intragroup analysis and the Mann-Whitney test in the intergroup analysis. The DAF caused a statistically significant reduction in the SG: in the frequency score of stuttering-like disfluencies in the analysis of the Stuttering Severity Instrument, in the number of blocks and repetitions of monosyllabic words, and in the frequency of stuttering-like disfluencies of duration. Delayed auditory feedback did not cause statistically significant effects on the fluency of the SGAPD, the individuals who stutter with auditory processing disorders. The effect of delayed auditory feedback on the speech fluency of individuals who stutter differed between the two groups, because there was an improvement in fluency only in the individuals without auditory processing disorder.
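For readers unfamiliar with how a fixed feedback delay is produced, the sketch below shows a generic 100-ms circular-buffer delay line (illustrative only; the internals of the Phono Tools software are not described in the abstract, and the audio I/O is replaced by an array so the example is self-contained).

```python
# Minimal sketch of how a 100-ms delayed auditory feedback (DAF) path can be
# realized with a circular delay buffer (illustrative; not the Phono Tools code).
import numpy as np

fs = 44_100                                  # audio sampling rate (Hz)
delay_samples = int(0.100 * fs)              # 100-ms delay
buffer = np.zeros(delay_samples)             # circular buffer holding delayed audio
write_pos = 0

def process_block(mic_block):
    """Return the microphone signal delayed by exactly `delay_samples` samples."""
    global write_pos
    out = np.empty_like(mic_block)
    for i, sample in enumerate(mic_block):
        out[i] = buffer[write_pos]           # read the sample written 100 ms ago
        buffer[write_pos] = sample           # overwrite it with the current input
        write_pos = (write_pos + 1) % delay_samples
    return out

# Check: an impulse re-emerges one buffer length (100 ms) later.
impulse = np.zeros(2 * delay_samples); impulse[0] = 1.0
delayed = process_block(impulse)
print(np.argmax(delayed), delay_samples)     # both equal 4410
```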
Electromagnetic semi-implantable hearing device: phase I. Clinical trials.
McGee, T M; Kartush, J M; Heide, J C; Bojrab, D I; Clemis, J D; Kulick, K C
1991-04-01
Conventional hearing aids have improved significantly in recent years; however, amplification of sound within the external auditory canal creates a number of intrinsic problems, including acoustic feedback and the need for a tight ear mold to increase usable gain. Nonacoustic alternatives which could obviate these encumbrances have not become practical due to inefficient coupling (piezoelectric techniques) or unfeasible power requirements (electromagnetic techniques). Recent technical advances, however, prompted a major clinical investigation of a new electromagnetic, semi-implantable hearing device. This study presents the details of clinical phase I, in which an electromagnetic driver was coupled with a target magnet temporarily affixed onto the lateral surface of the malleus of six hearing aid users with sensorineural losses. The results indicate that the electromagnetic hearing device provides sufficient gain and output characteristics to benefit individuals with sensorineural hearing loss. Significant improvements compared to conventional hearing aids were noted in pure-tone testing and, to a lesser degree, in speech discrimination. Subjective responses were quite favorable, indicating that the electromagnetic hearing device 1. produces no acoustic feedback; 2. works well in noisy environments; and 3. provides a more quiet, natural sound than patients' conventional hearing aids. These favorable results led to phase II of the project, in which patients with surgically amenable mixed hearing losses were implanted with the target magnet incorporated within a hydroxyapatite ossicular prosthesis. The results of this second-stage investigation were also encouraging and will be reported separately.
Law, Lily N. C.; Zentner, Marcel
2012-01-01
A common approach for determining musical competence is to rely on information about individuals’ extent of musical training, but relying on musicianship status fails to identify musically untrained individuals with musical skill, as well as those who, despite extensive musical training, may not be as skilled. To counteract this limitation, we developed a new test battery (Profile of Music Perception Skills; PROMS) that measures perceptual musical skills across multiple domains: tonal (melody, pitch), qualitative (timbre, tuning), temporal (rhythm, rhythm-to-melody, accent, tempo), and dynamic (loudness). The PROMS has satisfactory psychometric properties for the composite score (internal consistency and test-retest r > .85) and fair to good coefficients for the individual subtests (.56 to .85). Convergent validity was established with the relevant dimensions of Gordon’s Advanced Measures of Music Audiation and Musical Aptitude Profile (melody, rhythm, tempo), the Musical Ear Test (rhythm), and sample instrumental sounds (timbre). Criterion validity was evidenced by consistently sizeable and significant relationships between test performance and external musical proficiency indicators in all three studies (.38 to .62, p < .05 to p < .01). An absence of correlations between test scores and a nonmusical auditory discrimination task supports the battery’s discriminant validity (−.05, ns). The interrelationships among the various subtests could be accounted for by two higher order factors, sequential and sensory music processing. A brief version of the full PROMS is introduced as a time-efficient approximation of the full version of the battery. PMID:23285071
AULA-Advanced Virtual Reality Tool for the Assessment of Attention: Normative Study in Spain.
Iriarte, Yahaira; Diaz-Orueta, Unai; Cueto, Eduardo; Irazustabarrena, Paula; Banterla, Flavio; Climent, Gema
2016-06-01
The present study describes the collection of normative data for the AULA test, a virtual reality tool designed to evaluate attention problems, especially in children and adolescents. The normative sample comprised 1,272 participants (48.2% female) with an age range from 6 to 16 years (M = 10.25, SD = 2.83). The AULA test administered to them presents both visual and auditory stimuli, while randomized distractors of an ecological nature appear progressively. Variables provided by AULA were clustered in different categories for their posterior analysis. Differences by age and gender were analyzed, resulting in 14 groups, 7 per sex group. Differences between visual and auditory attention were also obtained. The obtained normative data are relevant for the use of AULA for evaluating attention in Spanish children and adolescents in a more ecological way. Further studies will be needed to determine the sensitivity and specificity of AULA to measure attention in different clinical populations. (J. of Att. Dis. 2016; 20(6) 542-568). © The Author(s) 2012.
[Advances in genetics of congenital malformation of external and middle ear].
Wang, Dayong; Wang, Qiuju
2013-05-01
Congenital malformation of the external and middle ear is a common disease in the ENT department, and its incidence is second only to that of cleft lip and palate among congenital malformations of the head and face. External and middle ear malformations may occur in isolation or as an important otologic feature of a systemic syndrome. We systematically review and analyze progress in genetic research on congenital malformation of the external and middle ear, which should help clarify the mechanisms of external and middle ear development and provide clues for the further discovery of new causative genes.
Bigger Brains or Bigger Nuclei? Regulating the Size of Auditory Structures in Birds
Kubke, M. Fabiana; Massoglia, Dino P.; Carr, Catherine E.
2012-01-01
Increases in the size of the neuronal structures that mediate specific behaviors are believed to be related to enhanced computational performance. It is not clear, however, what developmental and evolutionary mechanisms mediate these changes, nor whether an increase in the size of a given neuronal population is a general mechanism to achieve enhanced computational ability. We addressed the issue of size by analyzing the variation in the relative number of cells of auditory structures in auditory specialists and generalists. We show that bird species with different auditory specializations exhibit variation in the relative size of their hindbrain auditory nuclei. In the barn owl, an auditory specialist, the hindbrain auditory nuclei involved in the computation of sound location show hyperplasia. This hyperplasia was also found in songbirds, but not in non-auditory specialists. The hyperplasia of auditory nuclei was also not seen in birds with large body weight, suggesting that the total number of cells is selected for in auditory specialists. In barn owls, differences observed in the relative size of the auditory nuclei might be attributed to modifications in neurogenesis and cell death. Thus, hyperplasia of circuits used for auditory computation accompanies auditory specialization in different orders of birds. PMID:14726625
Multiyear Subcontractor Selection Criteria Analysis.
1983-09-01
advancement are program instability, higher costs, and increased lead-times. Compounding the instability created by advancing technology are changes in...drive smaller firms out of business (17:46). Technology is advancing at an ever increasing pace, demanding higher performance and larger amounts of engi...Process Adding to the external factors mentioned above, the weapon systems acquisition process tends to retard productivity advancements by its very
Analysis of Modification Mechanism of Gait with Rhythmic Cueing Training Paradigm
NASA Astrophysics Data System (ADS)
Muto, Takeshi; Kanai, Tetsuya; Sakuta, Hiroshi; Miyake, Yoshihiro
In this research, we applied a gait training method that uses rhythmic auditory stimulation as a pacemaker to assist gait motion, and analyzed the dynamical stability of the period and trajectory of the lower limbs' motions. As a result, it was clarified that, in the training style that presents a constant rhythm, the trajectory of the ankles was modified toward a stable state with a history-dependent property, but the period of the footsteps was not modified and remained susceptible to the external environment. This result suggests that a hierarchical modification mechanism of the motor schema of gait is realized by the connection between the immediate and the history-dependent modification systems.
Noise levels from toys and recreational articles for children and teenagers.
Hellstrom, P A; Dengerink, H A; Axelsson, A
1992-10-01
This study examined the noise level emitted by toys and recreational articles used by children and teenagers. The results indicate that many of the items tested emit sufficiently intense noise to be a source of noise induced hearing loss in school-age children. While the baby toys provided noise exposure within the limits of national regulations, they are most intense in a frequency range that corresponds to the resonance frequency of the external auditory canal of very young children. Hobby motors emit noise that may require protection depending upon the length of use. Fire-crackers and cap guns emit impulse noises that exceed even conservative standards for noise exposure.
Seeing the Song: Left Auditory Structures May Track Auditory-Visual Dynamic Alignment
Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru
2013-01-01
Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873
The what, where and how of auditory-object perception.
Bizley, Jennifer K; Cohen, Yale E
2013-10-01
The fundamental perceptual unit in hearing is the 'auditory object'. Similar to visual objects, auditory objects are the computational result of the auditory system's capacity to detect, extract, segregate and group spectrotemporal regularities in the acoustic environment; the multitude of acoustic stimuli around us together form the auditory scene. However, unlike the visual scene, resolving the component objects within the auditory scene crucially depends on their temporal structure. Neural correlates of auditory objects are found throughout the auditory system. However, neural responses do not become correlated with a listener's perceptual reports until the level of the cortex. The roles of different neural structures and the contribution of different cognitive states to the perception of auditory objects are not yet fully understood.
External Magnetic Field Reduction Techniques for the Advanced Stirling Radioisotope Generator
NASA Technical Reports Server (NTRS)
Niedra, Janis M.; Geng, Steven M.
2013-01-01
Linear alternators coupled to high efficiency Stirling engines are strong candidates for thermal-to-electric power conversion in space. However, the magnetic field emissions, both AC and DC, of these permanent magnet excited alternators can interfere with sensitive instrumentation onboard a spacecraft. Effective methods to mitigate the AC and DC electromagnetic interference (EMI) from solenoidal type linear alternators (like that used in the Advanced Stirling Convertor) have been developed for potential use in the Advanced Stirling Radioisotope Generator. The methods developed avoid the complexity and extra mass inherent in data extraction from multiple sensors or the use of shielding. This paper discusses these methods, and also provides experimental data obtained during breadboard testing of both AC and DC external magnetic field devices.
Federal Guidance Report No. 15: External Exposure to Radionuclides in Air, Water and Soil
FGR 15 updates the 1993 Federal Guidance Report No. 12 (FGR 12), External Exposure to Radionuclides in Air, Water, and Soil. FGR 15 incorporates advances in radiation protection science regarding how organ/tissue doses change with age and sex.
Greenwood, Pamela M; Blumberg, Eric J; Scheldrup, Melissa R
2018-03-01
A comprehensive explanation is lacking for the broad array of cognitive effects modulated by transcranial direct current stimulation (tDCS). We advanced the testable hypothesis that tDCS to the default mode network (DMN) increases processing of goals and stored information at the expense of external events. We further hypothesized that tDCS to the dorsal attention network (DAN) increases processing of external events at the expense of goals and stored information. A literature search (PsycINFO) identified 42 empirical studies and 3 meta-analyses examining effects of prefrontal and/or parietal tDCS on tasks that selectively required external and/or internal processing. Most, though not all, of the studies that met our search criteria supported our hypothesis, as did all three meta-analyses. The hypothesis we advanced provides a framework for the design and interpretation of results in light of the role of large-scale intrinsic networks that govern attention. Copyright © 2017 Elsevier Ltd. All rights reserved.
1989-06-01
Continuously stimulating advances in the aerospace sciences relevant to strengthening the common defence posture; improving the co-operation among member...very stimulating symposium. PREDICTION OF PERSONALITY. Harald T. Andersen, M.D., Ph.D., D.Sc., D.Av.Med., Director, RNoAF Institute of Aviation...audio tape recorder which was connected to the aircraft communication system. This recorder provided a continuous auditory record of each mission so that
Single-unit analysis of somatosensory processing in the core auditory cortex of hearing ferrets.
Meredith, M Alex; Allman, Brian L
2015-03-01
The recent findings in several species that the primary auditory cortex processes non-auditory information have largely overlooked the possibility of somatosensory effects. Therefore, the present investigation examined the core auditory cortices (anterior auditory field and primary auditory cortex) for tactile responsivity. Multiple single-unit recordings from anesthetised ferret cortex yielded histologically verified neurons (n = 311) tested with electronically controlled auditory, visual and tactile stimuli, and their combinations. Of the auditory neurons tested, a small proportion (17%) was influenced by visual cues, but a somewhat larger number (23%) was affected by tactile stimulation. Tactile effects rarely occurred alone and spiking responses were observed in bimodal auditory-tactile neurons. However, the broadest tactile effect that was observed, which occurred in all neuron types, was that of suppression of the response to a concurrent auditory cue. The presence of tactile effects in the core auditory cortices was supported by a substantial anatomical projection from the rostral suprasylvian sulcal somatosensory area. Collectively, these results demonstrate that crossmodal effects in the auditory cortex are not exclusively visual and that somatosensation plays a significant role in modulation of acoustic processing, and indicate that crossmodal plasticity following deafness may unmask these existing non-auditory functions. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Matsuzaki, Junko; Kagitani-Shimono, Kuriko; Goto, Tetsu; Sanefuji, Wakako; Yamamoto, Tomoka; Sakai, Saeko; Uchida, Hiroyuki; Hirata, Masayuki; Mohri, Ikuko; Yorifuji, Shiro; Taniike, Masako
2012-01-25
The aim of this study was to investigate the differential responses of the primary auditory cortex to auditory stimuli in autistic spectrum disorder with or without auditory hypersensitivity. Auditory-evoked field values were obtained from 18 boys (nine with and nine without auditory hypersensitivity) with autistic spectrum disorder and 12 age-matched controls. Autistic disorder with hypersensitivity showed significantly more delayed M50/M100 peak latencies than autistic disorder without hypersensitivity or the control. M50 dipole moments in the hypersensitivity group were larger than those in the other two groups [corrected]. M50/M100 peak latencies were correlated with the severity of auditory hypersensitivity; furthermore, severe hypersensitivity induced more behavioral problems. This study indicates auditory hypersensitivity in autistic spectrum disorder as a characteristic response of the primary auditory cortex, possibly resulting from neurological immaturity or functional abnormalities in it. © 2012 Wolters Kluwer Health | Lippincott Williams & Wilkins.
Selective impairment of auditory selective attention under concurrent cognitive load.
Dittrich, Kerstin; Stahl, Christoph
2012-06-01
Load theory predicts that concurrent cognitive load impairs selective attention. For visual stimuli, it has been shown that this impairment can be selective: Distraction was specifically increased when the stimulus material used in the cognitive load task matches that of the selective attention task. Here, we report four experiments that demonstrate such selective load effects for auditory selective attention. The effect of two different cognitive load tasks on two different auditory Stroop tasks was examined, and selective load effects were observed: Interference in a nonverbal-auditory Stroop task was increased under concurrent nonverbal-auditory cognitive load (compared with a no-load condition), but not under concurrent verbal-auditory cognitive load. By contrast, interference in a verbal-auditory Stroop task was increased under concurrent verbal-auditory cognitive load but not under nonverbal-auditory cognitive load. This double-dissociation pattern suggests the existence of different and separable verbal and nonverbal processing resources in the auditory domain.
Auditory hallucinations induced by trazodone
Shiotsuki, Ippei; Terao, Takeshi; Ishii, Nobuyoshi; Hatano, Koji
2014-01-01
A 26-year-old female outpatient presenting with a depressive state suffered from auditory hallucinations at night. Her auditory hallucinations did not respond to blonanserin or paliperidone, but partially responded to risperidone. In view of the possibility that her auditory hallucinations began after starting trazodone, trazodone was discontinued, leading to a complete resolution of her auditory hallucinations. Furthermore, even after risperidone was decreased and discontinued, her auditory hallucinations did not recur. These findings suggest that trazodone may induce auditory hallucinations in some susceptible patients. PMID:24700048
An evaluation of unisensory and multisensory adaptive flight-path navigation displays
NASA Astrophysics Data System (ADS)
Moroney, Brian W.
1999-11-01
The present study assessed the use of unimodal (auditory or visual) and multimodal (audio-visual) adaptive interfaces to aid military pilots in the performance of a precision-navigation flight task when they were confronted with additional information-processing loads. A standard navigation interface was supplemented by adaptive interfaces consisting of either a head-up-display-based flight director, a 3D virtual audio interface, or a combination of the two. The adaptive interfaces provided information about how to return to the pathway when off course. Using an advanced flight simulator, pilots attempted two navigation scenarios: (A) maintain proper course under normal flight conditions and (B) return to course after their aircraft's position had been perturbed. Pilots flew in the presence or absence of an additional information-processing task presented in either the visual or auditory modality. The additional information-processing tasks were equated in terms of perceived mental workload as indexed by the NASA-TLX. Twelve experienced military pilots (11 men and 1 woman), naive to the purpose of the experiment, participated in the study. They were recruited from Wright-Patterson Air Force Base and had a mean of 2812 hrs. of flight experience. Four navigational interface configurations (the standard visual navigation interface alone (SV), SV plus adaptive visual, SV plus adaptive auditory, and SV plus adaptive visual-auditory composite) were combined factorially with three concurrent-task (CT) conditions (no CT, visual CT, and auditory CT) in a completely repeated measures design. The adaptive navigation displays were activated whenever the aircraft was more than 450 ft off course. In the normal flight scenario, the adaptive interfaces did not bolster navigation performance in comparison to the standard interface. It is conceivable that the pilots performed quite adequately using the familiar generic interface under normal flight conditions and hence showed no added benefit of the adaptive interfaces. In the return-to-course scenario, the relative advantages of the three adaptive interfaces were dependent upon the nature of the CT in a complex way. In the absence of a CT, recovery heading performance was superior with the adaptive visual and adaptive composite interfaces compared to the adaptive auditory interface. In the context of a visual CT, recovery when using the adaptive composite interface was superior to that when using the adaptive visual interface. Post-experimental inquiry indicated that when faced with a visual CT, the pilots used the auditory component of the multimodal guidance display to detect gross heading errors and the visual component to make more fine-grained heading adjustments. In the context of the auditory CT, navigation performance using the adaptive visual interface tended to be superior to that when using the adaptive auditory interface. Neither CT performance nor NASA-TLX workload level was influenced differentially by the interface configurations. Thus, the potential benefits associated with the proposed interfaces appear to be unaccompanied by negative side effects involving CT interference and workload. The adaptive interface configurations were altered without any direct input from the pilot. Thus, it was feared that pilots might reject the activation of interfaces independent of their control. However, pilots' debriefing comments about the efficacy of the adaptive interface approach were very positive. (Abstract shortened by UMI.)
Transient human auditory cortex activation during volitional attention shifting
Uhlig, Christian Harm; Gutschalk, Alexander
2017-01-01
While strong activation of auditory cortex is generally found for exogenous orienting of attention, endogenous, intra-modal shifting of auditory attention has not yet been demonstrated to evoke transient activation of the auditory cortex. Here, we used fMRI to test if endogenous shifting of attention is also associated with transient activation of the auditory cortex. In contrast to previous studies, attention shifts were completely self-initiated and not cued by transient auditory or visual stimuli. Stimuli were two dichotic, continuous streams of tones, whose perceptual grouping was not ambiguous. Participants were instructed to continuously focus on one of the streams and switch between the two after a while, indicating the time and direction of each attentional shift by pressing one of two response buttons. The BOLD response around the time of the button presses revealed robust activation of the auditory cortex, along with activation of a distributed task network. To test if the transient auditory cortex activation was specifically related to auditory orienting, a self-paced motor task was added, where participants were instructed to ignore the auditory stimulation while they pressed the response buttons in alternation and at a similar pace. Results showed that attentional orienting produced stronger activity in auditory cortex, but auditory cortex activation was also observed for button presses without focused attention to the auditory stimulus. The response related to attention shifting was stronger contralateral to the side where attention was shifted to. Contralateral-dominant activation was also observed in dorsal parietal cortex areas, confirming previous observations for auditory attention shifting in studies that used auditory cues. PMID:28273110
Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D
2015-09-01
To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.
A new data architecture for advancing life cycle assessment
Introduction: Life cycle assessment (LCA) has a technical architecture that limits data interoperability, transparency, and automated integration of external data. More advanced information technologies offer promise for increasing the ease with which information can be synthesized...
McGurk illusion recalibrates subsequent auditory perception
Lüttke, Claudia S.; Ekman, Matthias; van Gerven, Marcel A. J.; de Lange, Floris P.
2016-01-01
Visual information can alter auditory perception. This is clearly illustrated by the well-known McGurk illusion, in which an auditory /aba/ and a visual /aga/ are merged into the percept ‘ada’. It is less clear, however, whether such a change in perception may recalibrate subsequent perception. Here we asked whether the altered auditory perception due to the McGurk illusion affects subsequent auditory perception, i.e. whether this process of fusion may cause a recalibration of the auditory boundaries between phonemes. Participants categorized auditory and audiovisual speech stimuli as /aba/, /ada/ or /aga/ while activity patterns in their auditory cortices were recorded using fMRI. Interestingly, following a McGurk illusion, an auditory /aba/ was more often misperceived as ‘ada’. Furthermore, we observed a neural counterpart of this recalibration in the early auditory cortex. When the auditory input /aba/ was perceived as ‘ada’, activity patterns bore stronger resemblance to activity patterns elicited by /ada/ sounds than when they were correctly perceived as /aba/. Our results suggest that upon experiencing the McGurk illusion, the brain shifts the neural representation of an /aba/ sound towards /ada/, culminating in a recalibration in perception of subsequent auditory input. PMID:27611960
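As a rough illustration of the pattern-resemblance analysis described above, the following Python sketch correlates a single trial's voxel pattern with category templates and returns the best-matching label. The data, template names and function are invented for illustration and are not the authors' pipeline.

    import numpy as np

    def nearest_template(trial_pattern, templates):
        # correlate the trial pattern with each category template and
        # return the label whose template it resembles most
        corr = {label: np.corrcoef(trial_pattern, tpl)[0, 1]
                for label, tpl in templates.items()}
        return max(corr, key=corr.get), corr

    rng = np.random.default_rng(3)
    templates = {"aba": rng.normal(size=100), "ada": rng.normal(size=100)}
    trial = templates["ada"] + 0.5 * rng.normal(size=100)   # a trial resembling /ada/
    print(nearest_template(trial, templates)[0])            # expected to print "ada"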
Auditory Spatial Attention Representations in the Human Cerebral Cortex
Kong, Lingqiang; Michalka, Samantha W.; Rosen, Maya L.; Sheremata, Summer L.; Swisher, Jascha D.; Shinn-Cunningham, Barbara G.; Somers, David C.
2014-01-01
Auditory spatial attention serves important functions in auditory source separation and selection. Although auditory spatial attention mechanisms have been generally investigated, the neural substrates encoding spatial information acted on by attention have not been identified in the human neocortex. We performed functional magnetic resonance imaging experiments to identify cortical regions that support auditory spatial attention and to test 2 hypotheses regarding the coding of auditory spatial attention: 1) auditory spatial attention might recruit the visuospatial maps of the intraparietal sulcus (IPS) to create multimodal spatial attention maps; 2) auditory spatial information might be encoded without explicit cortical maps. We mapped visuotopic IPS regions in individual subjects and measured auditory spatial attention effects within these regions of interest. Contrary to the multimodal map hypothesis, we observed that auditory spatial attentional modulations spared the visuotopic maps of IPS; the parietal regions activated by auditory attention lacked map structure. However, multivoxel pattern analysis revealed that the superior temporal gyrus and the supramarginal gyrus contained significant information about the direction of spatial attention. These findings support the hypothesis that auditory spatial information is coded without a cortical map representation. Our findings suggest that audiospatial and visuospatial attention utilize distinctly different spatial coding schemes. PMID:23180753
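The multivoxel pattern analysis mentioned above can be illustrated with a minimal decoding sketch. The Python example below uses simulated voxel patterns and a linear classifier (scikit-learn assumed available); it is a generic decoding recipe, not the authors' data or exact method.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    # hypothetical data: 40 trials x 200 voxels from one region of interest,
    # labels 0/1 for attend-left vs attend-right
    rng = np.random.default_rng(2)
    patterns = rng.normal(size=(40, 200))
    labels = np.repeat([0, 1], 20)

    # cross-validated linear decoding; above-chance accuracy would indicate
    # that the region carries information about the attended direction
    accuracy = cross_val_score(SVC(kernel="linear"), patterns, labels, cv=5).mean()
    print(f"decoding accuracy: {accuracy:.2f} (chance = 0.50)")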
Floresco, Stan B; Montes, David R; Tse, Maric M T; van Holstein, Mieke
2018-02-21
The nucleus accumbens (NAc) is a key node within corticolimbic circuitry for guiding action selection and cost/benefit decision making in situations involving reward uncertainty. Preclinical studies have typically assessed risk/reward decision making using assays where decisions are guided by internally generated representations of choice-outcome contingencies. Yet, real-life decisions are often influenced by external stimuli that inform about likelihoods of obtaining rewards. How different subregions of the NAc mediate decision making in such situations is unclear. Here, we used a novel assay colloquially termed the "Blackjack" task that models these types of situations. Male Long-Evans rats were trained to choose between one lever that always delivered a one-pellet reward and another that delivered four pellets with different probabilities [either 50% (good-odds) or 12.5% (poor-odds)], which were signaled by one of two auditory cues. Under control conditions, rats selected the large/risky option more often on good-odds versus poor-odds trials. Inactivation of the NAc core caused indiscriminate choice patterns. In contrast, NAc shell inactivation increased risky choice, more prominently on poor-odds trials. Additional experiments revealed that both subregions contribute to auditory conditional discrimination. NAc core or shell inactivation reduced Pavlovian approach elicited by an auditory CS+, yet shell inactivation also increased responding during presentation of a CS-. These data highlight distinct contributions for NAc subregions in decision making and reward seeking guided by discriminative stimuli. The core is crucial for implementation of conditional rules, whereas the shell refines reward seeking by mitigating the allure of larger, unlikely rewards and reducing expression of inappropriate or non-rewarded actions. SIGNIFICANCE STATEMENT Using external cues to guide decision making is crucial for adaptive behavior. Deficits in cue-guided behavior have been associated with neuropsychiatric disorders, such as attention deficit hyperactivity disorder and schizophrenia, which in turn has been linked to aberrant processing in the nucleus accumbens. However, many preclinical studies have often assessed risk/reward decision making in the absence of explicit cues. The current study fills that gap by using a novel task that allows for the assessment of cue-guided risk/reward decision making in rodents. Our findings identified distinct yet complementary roles for the medial versus lateral portions of this nucleus that provide a broader understanding of the differential contributions it makes to decision making and reward seeking guided by discriminative stimuli. Copyright © 2018 the authors 0270-6474/18/381901-14$15.00/0.
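For reference, the expected payoffs of the two levers in the task described above follow directly from the stated reward sizes and probabilities; the short sketch below simply multiplies them out.

    # Expected pellet payoffs in the Blackjack task described above
    small_certain   = 1 * 1.0     # safe lever: always one pellet
    large_good_odds = 4 * 0.50    # risky lever on good-odds trials -> 2.0 pellets
    large_poor_odds = 4 * 0.125   # risky lever on poor-odds trials -> 0.5 pellets

    for label, value in [("safe", small_certain),
                         ("risky, good odds", large_good_odds),
                         ("risky, poor odds", large_poor_odds)]:
        print(f"{label}: expected {value} pellets per choice")

The risky lever is therefore worth choosing on good-odds trials (2 expected pellets versus 1) but not on poor-odds trials (0.5 expected pellets), which matches the choice pattern shown by intact rats under control conditions.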
Pillai, Roshni; Yathiraj, Asha
2017-09-01
The study evaluated whether there exists a difference/relation in the way four different memory skills (memory score, sequencing score, memory span, & sequencing span) are processed through the auditory modality, visual modality and combined modalities. Four memory skills were evaluated on 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality compared to the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores as well as for the memory and the sequencing span. A good agreement was seen between the different modality conditions that were studied (auditory, visual, & auditory-visual) for the different memory skills measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores, measured using Bland-Altman plots. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual as well as combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality compared to the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.
Lim, Lynne H Y
2008-12-01
The objective is to describe the multidisciplinary management programme at the National University Hospital (NUH) in Singapore for children with hearing impairment (HI). Over 99.95% of babies born at NUH have their hearing tested with both otoacoustic emission and automated auditory brainstem response tests by 6 weeks of age. The referral rate to Otolaryngology is 0.5%. Acquired causes of congenital HI are decreasing. Thirty percent of patients at NUH with idiopathic congenital sensorineural HI have DFNB1/GJB6 Connexin 26 HI. CT or MRI imaging has a higher diagnostic yield when there is unilateral, fluctuating or non-Connexin 26 related HI. Routine electrocardiogram and Ophthalmology evaluations exclude the associated findings of fatal cardiac rhythm anomaly and retinopathy. Other investigations are directed by history and clinical examination. There is now a very wide range of increasingly sophisticated medication, neuro-otologic external, middle and inner ear surgery, hearing aids, middle ear implants and cochlear implants available to improve hearing. A multidisciplinary team from neonatology, paediatrics, otolaryngology, audiology, auditory verbal and speech therapy, ophthalmology, radiology, and psychology working closely with the child, family and schools is needed to develop a cost-effective and comprehensive management programme for paediatric HI.
Wang, Xuelin; Wang, Liling; Zhou, Jianjun; Hu, Yujin
2014-08-01
A three-dimensional finite element model is developed for the simulation of the sound transmission through the human auditory periphery consisting of the external ear canal, middle ear and cochlea. The cochlea is modelled as a straight duct divided into two fluid-filled scalae by the basilar membrane (BM) having an orthotropic material property with dimensional variation along its length. In particular, an active feed-forward mechanism is added into the passive cochlear model to represent the activity of the outer hair cells (OHCs). An iterative procedure is proposed for calculating the nonlinear response resulting from the active cochlea in the frequency domain. Results on the middle-ear transfer function, BM steady-state frequency response and intracochlear pressure are derived. A good match of the model predictions with experimental data from the literatures demonstrates the validity of the ear model for simulating sound pressure gain of middle ear, frequency to place map, cochlear sensitivity and compressive output for large intensity input. The current model featuring an active cochlea is able to correlate directly the sound stimulus in the ear canal with the vibration of BM and provides a tool to explore the mechanisms by which sound pressure in the ear canal is converted to a stimulus for the OHCs.
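The iterative procedure mentioned above for the active, nonlinear cochlear response can be sketched as a simple fixed-point loop in the frequency domain. The Python example below uses invented passive and feedback functions purely for illustration; it is a sketch of the general idea, not the authors' model.

    import numpy as np

    def solve_active_response(passive_gain, ohc_feedback, tol=1e-6, max_iter=100):
        # fixed-point iteration: the active contribution depends on the current
        # response, so iterate until successive estimates agree
        response = passive_gain.copy()           # start from the passive solution
        for _ in range(max_iter):
            new_response = passive_gain + ohc_feedback(response)
            if np.max(np.abs(new_response - response)) < tol:
                return new_response
            response = new_response
        return response

    # toy usage: arbitrary passive roll-off plus a saturating (compressive) active gain
    freqs = np.linspace(100, 10e3, 256)
    passive = 1.0 / (1.0 + (freqs / 2e3) ** 2)
    feedback = lambda r: 0.5 * r / (1.0 + np.abs(r))
    active = solve_active_response(passive, feedback)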
Non-invasive biophysical measurement of travelling waves in the insect inner ear
2017-01-01
Frequency analysis in the mammalian cochlea depends on the propagation of frequency information in the form of a travelling wave (TW) across tonotopically arranged auditory sensilla. TWs have been directly observed in the basilar papilla of birds and the ears of bush-crickets (Insecta: Orthoptera) and have also been indirectly inferred in the hearing organs of some reptiles and frogs. Existing experimental approaches to measure TW function in tetrapods and bush-crickets are inherently invasive, compromising the fine-scale mechanics of each system. Located in the forelegs, the bush-cricket ear comprises outer, middle and inner components; the inner ear contains tonotopically arranged auditory sensilla within a fluid-filled cavity and is externally protected by the leg cuticle. Here, we report bush-crickets with transparent ear cuticles as potential model species for direct, non-invasive measurement of TWs and tonotopy. Using laser Doppler vibrometry and spectroscopy, we show that increased transmittance of light through the ear cuticle allows for effective non-invasive measurements of TWs and frequency mapping. More transparent cuticles allow several properties of TWs to be precisely recovered and measured in vivo from intact specimens. Our approach provides an innovative, non-invasive alternative to measure the natural motion of the sensilla-bearing surface embedded in the intact inner ear fluid. PMID:28573026
Size and shape variations of the bony components of sperm whale cochleae.
Schnitzler, Joseph G; Frédérich, Bruno; Früchtnicht, Sven; Schaffeld, Tobias; Baltzer, Johannes; Ruser, Andreas; Siebert, Ursula
2017-04-25
Several mass strandings of sperm whales occurred in the North Sea during January and February 2016. Twelve animals were necropsied and sampled around 48 h after their discovery on German coasts of Schleswig Holstein. The present study aims to explore the morphological variation of the primary sensory organ of sperm whales, the left and right auditory system, using high-resolution computerised tomography imaging. We performed a quantitative analysis of size and shape of cochleae using landmark-based geometric morphometrics to reveal inter-individual anatomical variations. A hierarchical cluster analysis based on thirty-one external morphometric characters classified these 12 individuals in two stranding clusters. A relative amount of shape variation could be attributable to geographical differences among stranding locations and clusters. Our geometric data allowed the discrimination of distinct bachelor schools among sperm whales that stranded on German coasts. We argue that the cochleae are individually shaped, varying greatly in dimensions and that the intra-specific variation observed in the morphology of the cochleae may partially reflect their affiliation to their bachelor school. There are increasing concerns about the impact of noise on cetaceans and describing the auditory periphery of odontocetes is a key conservation issue to further assess the effect of noise pollution.
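A hierarchical cluster analysis of the kind described above can be reproduced in outline as follows. The Python sketch uses randomly generated stand-in measurements (SciPy assumed available), not the study's morphometric data, and simply shows one common workflow: standardise the characters, build a linkage tree, and cut it into two clusters.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    # hypothetical matrix: 12 individuals x 31 external morphometric characters
    rng = np.random.default_rng(0)
    measurements = rng.normal(size=(12, 31))

    # standardise characters so no single measurement dominates the distances
    z = (measurements - measurements.mean(axis=0)) / measurements.std(axis=0)

    # Ward linkage on Euclidean distances, cut into two clusters
    tree = linkage(pdist(z, metric="euclidean"), method="ward")
    clusters = fcluster(tree, t=2, criterion="maxclust")
    print(clusters)  # cluster label (1 or 2) per stranded individual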
[Severe Bleeding from the Middle Ear Cavity after Myringotomy: Review Based on a Case Report].
Hofmann, Veit M; Niehues, Stefan M; Albers, Andreas E; Pudszuhn, Annett
2017-03-01
Report of a rare case of severe bleeding from the middle ear cavity after myringotomy. On the basis of the case report, the procedure for managing such bleeding is discussed in the context of the literature. A 6-year-old boy underwent a revision myringotomy in an outpatient setting. During the procedure severe bleeding occurred. The external auditory canal was adequately packed. The patient was extubated and transferred to the clinic as an emergency. Computed tomography of the temporal bone showed the anatomical variant of a dehiscent high jugular bulb, which had been injured. Because no rebleeding occurred, the packing of the ear canal was removed and an explorative tympanoscopy was performed on the third postoperative day. When the tympanomeatal flap was lifted, the defect in the jugular bulb was found. The lesion was covered with Tutopatch® pads and fibrin glue and the auditory canal was packed again. After removal of the packing three weeks postoperatively, a properly healed situs was found. No further measures were taken. Injury of a dehiscent jugular bulb in the course of ear surgery leads to massive hemorrhage. The case describes the diagnostic and therapeutic procedure for this relatively rare but severe complication. © Georg Thieme Verlag KG Stuttgart · New York.
Kajikawa, Yoshinao; Frey, Stephen; Ross, Deborah; Falchier, Arnaud; Hackett, Troy A; Schroeder, Charles E
2015-03-11
The superior temporal gyrus (STG) is on the inferior-lateral brain surface near the external ear. In macaques, 2/3 of the STG is occupied by an auditory cortical region, the "parabelt," which is part of a network of inferior temporal areas subserving communication and social cognition as well as object recognition and other functions. However, due to its location beneath the squamous temporal bone and temporalis muscle, the STG, like other inferior temporal regions, has been a challenging target for physiological studies in awake-behaving macaques. We designed a new procedure for implanting recording chambers to provide direct access to the STG, allowing us to evaluate neuronal properties and their topography across the full extent of the STG in awake-behaving macaques. Initial surveys of the STG have yielded several new findings. Unexpectedly, STG sites in monkeys that were listening passively responded to tones with magnitudes comparable to those of responses to 1/3 octave band-pass noise. Mapping results showed longer response latencies in more rostral sites and possible tonotopic patterns parallel to core and belt areas, suggesting the reversal of gradients between caudal and rostral parabelt areas. These results will help further exploration of parabelt areas. Copyright © 2015 the authors 0270-6474/15/354140-11$15.00/0.
The human otitis media with effusion: a numerical-based study.
Areias, B; Parente, M P L; Santos, C; Gentil, F; Natal Jorge, R M
2017-07-01
Otitis media is a group of inflammatory diseases of the middle ear. Acute otitis media and otitis media with effusion (OME) are its two main types of manifestation. Otitis media is common in children and can result in structural alterations in the middle ear which will lead to hearing losses. This work studies the effects of an OME on the sound transmission from the external auditory meatus to the inner ear. The finite element method was applied on the present biomechanical study. The numerical model used in this work was built based on the geometrical information obtained from The visible ear project. The present work explains the mechanisms by which the presence of fluid in the middle ear affects hearing by calculating the magnitude, phase and reduction of the normalized umbo velocity and also the magnitude and phase of the normalized stapes velocity. A sound pressure level of 90 dB SPL was applied at the tympanic membrane. The harmonic analysis was performed with the auditory frequency varying from 100 Hz to 10 kHz. A decrease in the response of the normalized umbo and stapes velocity as the tympanic cavity was filled with fluid was obtained. The decrease was more accentuated at the umbo.
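For readers unfamiliar with the normalization used above, the short Python sketch below converts the 90 dB SPL stimulus to pascals and divides a hypothetical umbo velocity by that input pressure; the velocity value is invented purely for illustration.

    P_REF = 20e-6  # reference pressure in Pa (0 dB SPL)

    def spl_to_pascal(spl_db):
        # convert a sound pressure level in dB SPL to RMS pressure in pascals
        return P_REF * 10 ** (spl_db / 20.0)

    p_in = spl_to_pascal(90.0)          # ~0.632 Pa for the 90 dB SPL stimulus
    umbo_velocity = 1.9e-4              # hypothetical umbo velocity (m/s) at one frequency
    normalized = umbo_velocity / p_in   # velocity per unit input pressure, (m/s)/Pa
    print(f"{p_in:.3f} Pa, {normalized:.2e} (m/s)/Pa")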
Perception of Animacy from the Motion of a Single Sound Object.
Nielsen, Rasmus Høll; Vuust, Peter; Wallentin, Mikkel
2015-02-01
Research in the visual modality has shown that the presence of certain dynamics in the motion of an object has a strong effect on whether or not the entity is perceived as animate. Cues for animacy are, among others, self-propelled motion and direction changes that are seemingly not caused by entities external to, or in direct contact with, the moving object. The present study aimed to extend this research into the auditory domain by determining if similar dynamics could influence the perceived animacy of a sound source. In two experiments, participants were presented with single, synthetically generated 'mosquito' sounds moving along trajectories in space, and asked to rate how certain they were that each sound-emitting entity was alive. At a random point on a linear motion trajectory, the sound source would deviate from its initial path and speed. Results confirm findings from the visual domain that a change in the velocity of motion is positively correlated with perceived animacy, and changes in direction were found to influence animacy judgment as well. This suggests that an ability to facilitate and sustain self-movement is perceived as a living quality not only in the visual domain, but in the auditory domain as well. © 2015 SAGE Publications.
NASA Astrophysics Data System (ADS)
Becker, Meike; Kirschner, Matthias; Sakas, Georgios
2014-03-01
Our research project investigates a multi-port approach for minimally-invasive otologic surgery. For planning such a surgery, an accurate segmentation of the risk structures is crucial. However, the segmentation of these risk structures is a challenging task: The anatomical structures are very small and some have a complex shape, low contrast and vary both in shape and appearance. Therefore, prior knowledge is needed which is why we apply model-based approaches. In the present work, we use the Probabilistic Active Shape Model (PASM), which is a more flexible and specific variant of the Active Shape Model (ASM), to segment the following risk structures: cochlea, semicircular canals, facial nerve, chorda tympani, ossicles, internal auditory canal, external auditory canal and internal carotid artery. For the evaluation we trained and tested the algorithm on 42 computed tomography data sets using leave-one-out tests. Visual assessment of the results shows in general a good agreement of manual and algorithmic segmentations. Further, we achieve a good Average Symmetric Surface Distance while the maximum error is comparatively large due to low contrast at start and end points. Last, we compare the PASM to the standard ASM and show that the PASM leads to a higher accuracy.
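The Average Symmetric Surface Distance used for evaluation above can be computed, for surfaces sampled as point sets, roughly as in the Python sketch below (SciPy assumed available). The point clouds are synthetic stand-ins for a segmentation and its ground truth, not the study's data.

    import numpy as np
    from scipy.spatial import cKDTree

    def average_symmetric_surface_distance(pts_a, pts_b):
        # mean of nearest-neighbour distances from A to B and from B to A
        d_ab = cKDTree(pts_b).query(pts_a)[0]
        d_ba = cKDTree(pts_a).query(pts_b)[0]
        return (d_ab.sum() + d_ba.sum()) / (len(d_ab) + len(d_ba))

    # toy usage with two noisy samplings of the same unit sphere
    rng = np.random.default_rng(1)
    sphere = rng.normal(size=(500, 3))
    sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)
    noisy = sphere + 0.01 * rng.normal(size=sphere.shape)
    print(average_symmetric_surface_distance(sphere, noisy))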
Adjamian, Peyman
2016-01-01
Tinnitus is defined as the perception of sound in the absence of an external source. It is often associated with hearing loss and is thought to result from abnormal neural activity at some point or points in the auditory pathway, which is incorrectly interpreted by the brain as an actual sound. Neurostimulation therapies therefore, which interfere on some level with that abnormal activity, are a logical approach to treatment. For tinnitus, where the pathological neuronal activity might be associated with auditory and other areas of the brain, interventions using electromagnetic, electrical, or acoustic stimuli separately, or paired electrical and acoustic stimuli, have been proposed as treatments. Neurostimulation therapies should modulate neural activity to deliver a permanent reduction in tinnitus percept by driving the neuroplastic changes necessary to interrupt abnormal levels of oscillatory cortical activity and restore typical levels of activity. This change in activity should alter or interrupt the tinnitus percept (reduction or extinction) making it less bothersome. Here we review developments in therapies involving electrical stimulation of the ear, head, cranial nerve, or cortex in the treatment of tinnitus which demonstrably, or are hypothesised to, interrupt pathological neuronal activity in the cortex associated with tinnitus. PMID:27403346
Engineer, C.T.; Centanni, T.M.; Im, K.W.; Borland, M.S.; Moreno, N.A.; Carraway, R.S.; Wilson, L.G.; Kilgard, M.P.
2014-01-01
Although individuals with autism are known to have significant communication problems, the cellular mechanisms responsible for impaired communication are poorly understood. Valproic acid (VPA) is an anticonvulsant that is a known risk factor for autism in prenatally exposed children. Prenatal VPA exposure in rats causes numerous neural and behavioral abnormalities that mimic autism. We predicted that VPA exposure may lead to auditory processing impairments which may contribute to the deficits in communication observed in individuals with autism. In this study, we document auditory cortex responses in rats prenatally exposed to VPA. We recorded local field potentials and multiunit responses to speech sounds in primary auditory cortex, anterior auditory field, ventral auditory field, and posterior auditory field in VPA exposed and control rats. Prenatal VPA exposure severely degrades the precise spatiotemporal patterns evoked by speech sounds in secondary, but not primary auditory cortex. This result parallels findings in humans and suggests that secondary auditory fields may be more sensitive to environmental disturbances and may provide insight into possible mechanisms related to auditory deficits in individuals with autism. PMID:24639033
Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu
2015-01-01
Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828
Fostering Visions for the Future: A Review of the NASA Institute for Advanced Concepts
NASA Technical Reports Server (NTRS)
2009-01-01
The NASA Institute for Advanced Concepts (NIAC) was formed in 1998 to provide an independent source of advanced aeronautical and space concepts that could dramatically impact how NASA develops and conducts its missions. Until the program's termination in August 2007, NIAC provided an independent open forum, a high-level point of entry to NASA for an external community of innovators, and an external capability for analysis and definition of advanced aeronautics and space concepts to complement the advanced concept activities conducted within NASA. Throughout its 9-year existence, NIAC inspired an atmosphere for innovation that stretched the imagination and encouraged creativity. As requested by Congress, this volume reviews the effectiveness of NIAC and makes recommendations concerning the importance of such a program to NASA and to the nation as a whole, including the proper role of NASA and the federal government in fostering scientific innovation and creativity and in developing advanced concepts for future systems. Key findings and recommendations include that in order to achieve its mission, NASA must have, and is currently lacking, a mechanism to investigate visionary, far-reaching advanced concepts. Therefore, a NIAC-like entity should be reestablished to fill this gap.
Bravo-Torres, Sofía; Der-Mussa, Carolina; Fuentes-López, Eduardo
2018-01-01
To describe, in terms of functional gain and word recognition, the audiological results of patients under 18 years of age implanted with the active bone conduction implant, Bonebridge™. Retrospective case studies conducted by reviewing the medical records of patients receiving implants between 2014 and 2016 in the public health sector in Chile. All patients implanted with the Bonebridge were included (N = 15). Individuals who had bilateral conductive hearing loss, secondary to external ear malformations, were considered as candidates. The average hearing threshold one month after switch on was 25.2 dB (95%CI 23.5-26.9). Hearing thresholds between 0.5 and 4 kHz were better when compared with bone conduction hearing aids. Best performance was observed at 4 kHz, where improvements to hearing were observed throughout the adaptation process. There was evidence of a significant increase in the recognition of monosyllables. The Bonebridge implant showed improvements to hearing thresholds and word recognition in paediatric patients with congenital conductive hearing loss.
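The confidence interval reported above can be reproduced in form (though not in value, since the per-patient thresholds are not given) with a standard t-based interval; the Python sketch below uses invented aided thresholds for 15 children (SciPy assumed available).

    import numpy as np
    from scipy import stats

    # hypothetical aided thresholds (dB) for 15 implanted children, one month after switch-on
    thresholds = np.array([24, 26, 25, 23, 27, 25, 26, 24, 25, 26, 24, 27, 25, 23, 26], dtype=float)

    mean = thresholds.mean()
    sem = stats.sem(thresholds)   # standard error of the mean
    ci_low, ci_high = stats.t.interval(0.95, df=len(thresholds) - 1, loc=mean, scale=sem)
    print(f"mean {mean:.1f} dB, 95% CI [{ci_low:.1f}, {ci_high:.1f}]")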
Challenges and solutions for realistic room simulation
NASA Astrophysics Data System (ADS)
Begault, Durand R.
2002-05-01
Virtual room acoustic simulation (auralization) techniques have traditionally focused on answering questions related to speech intelligibility or musical quality, typically in large volumetric spaces. More recently, auralization techniques have been found to be important for the externalization of headphone-reproduced virtual acoustic images. Although externalization can be accomplished using a minimal simulation, data indicate that realistic auralizations need to be responsive to head motion cues for accurate localization. Computational demands increase when providing for the simulation of coupled spaces, small rooms lacking meaningful reverberant decays, or reflective surfaces in outdoor environments. Auditory threshold data for both early reflections and late reverberant energy levels indicate that much of the information captured in acoustical measurements is inaudible, minimizing the intensive computational requirements of real-time auralization systems. Results are presented for early reflection thresholds as a function of azimuth angle, arrival time, and sound-source type, and reverberation thresholds as a function of reverberation time and level within 250-Hz-2-kHz octave bands. Good agreement is found between data obtained in virtual room simulations and those obtained in real rooms, allowing a strategy for minimizing computational requirements of real-time auralization systems.
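As a small aside on the late-reverberation levels discussed above, an exponential decay envelope follows directly from a specified RT60 (the time for the level to fall by 60 dB). The Python sketch below is a minimal illustration of that relationship, not Begault's auralization system.

    import numpy as np

    def reverb_envelope(rt60_s, duration_s, sample_rate=48000):
        # amplitude envelope that decays by 60 dB over rt60_s seconds
        t = np.arange(int(duration_s * sample_rate)) / sample_rate
        return 10 ** (-3.0 * t / rt60_s)

    # e.g. a 0.6 s RT60 tail evaluated over one second
    env = reverb_envelope(0.6, 1.0)
    print(20 * np.log10(env[-1]))  # level in dB after 1 s, roughly -100 dB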
[Auditory training in workshops: group therapy option].
Santos, Juliana Nunes; do Couto, Isabel Cristina Plais; Amorim, Raquel Martins da Costa
2006-01-01
BACKGROUND: auditory training in groups. AIM: to verify, in a group of individuals with mental retardation, the efficacy of auditory training in a workshop environment. METHOD: a longitudinal prospective study with 13 mentally retarded individuals from the Associação de Pais e Amigos do Excepcional (APAE) of Congonhas, divided into two groups, case (n=5) and control (n=8), who were submitted to ten auditory training sessions after verifying the integrity of the peripheral auditory system through evoked otoacoustic emissions. Participants were evaluated using a specific protocol concerning the auditory abilities (sound localization, auditory identification, memory, sequencing, auditory discrimination and auditory comprehension) at the beginning and at the end of the project. Data (entering, processing and analyses) were handled with the Epi Info 6.04 software. RESULTS: the groups did not differ regarding age (mean = 23.6 years) and gender (40% male). In the first evaluation both groups presented similar performances. In the final evaluation an improvement in the auditory abilities was observed for the individuals in the case group. When comparing the mean number of correct answers obtained by both groups in the first and final evaluations, a statistically significant result was obtained for sound localization (p=0.02), auditory sequencing (p=0.006) and auditory discrimination (p=0.03). CONCLUSION: group auditory training demonstrated to be effective in individuals with mental retardation, with an improvement in the auditory abilities being observed. More studies, with a larger number of participants, are necessary in order to confirm the findings of the present research. These results will help public health professionals to reanalyze the therapy models they use, so that they can apply specific methods according to individual needs, such as auditory training workshops.
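A between-group comparison of the kind reported above can be sketched with a non-parametric test, which is a reasonable choice for groups this small. The Python example below uses invented final-evaluation scores for the case and control groups (SciPy assumed available) and is not a re-analysis of the study's data.

    from scipy.stats import mannwhitneyu

    # hypothetical numbers of correct sound-localization answers per participant
    case_final    = [9, 8, 10, 9, 8]           # n = 5, after the training workshops
    control_final = [6, 7, 5, 6, 7, 6, 5, 7]   # n = 8, no training

    stat, p_value = mannwhitneyu(case_final, control_final, alternative="two-sided")
    print(f"U = {stat}, p = {p_value:.3f}")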
Short-term plasticity in auditory cognition.
Jääskeläinen, Iiro P; Ahveninen, Jyrki; Belliveau, John W; Raij, Tommi; Sams, Mikko
2007-12-01
Converging lines of evidence suggest that auditory system short-term plasticity can enable several perceptual and cognitive functions that have been previously considered as relatively distinct phenomena. Here we review recent findings suggesting that auditory stimulation, auditory selective attention and cross-modal effects of visual stimulation each cause transient excitatory and (surround) inhibitory modulations in the auditory cortex. These modulations might adaptively tune hierarchically organized sound feature maps of the auditory cortex (e.g. tonotopy), thus filtering relevant sounds during rapidly changing environmental and task demands. This could support auditory sensory memory, pre-attentive detection of sound novelty, enhanced perception during selective attention, influence of visual processing on auditory perception and longer-term plastic changes associated with perceptual learning.