Demodulation processes in auditory perception
NASA Astrophysics Data System (ADS)
Feth, Lawrence L.
1994-08-01
The long-range goal of this project is the understanding of human auditory processing of information conveyed by complex, time-varying signals such as speech, music, or important environmental sounds. Our work is guided by the assumption that human auditory communication is a 'modulation-demodulation' process. That is, we assume that sound sources produce a complex stream of sound pressure waves with information encoded as variations (modulations) of the signal amplitude and frequency. The listener's task is then one of demodulation. Much of past psychoacoustics work has been based on what we characterize as 'spectrum picture processing.' Complex sounds are Fourier analyzed to produce an amplitude-by-frequency 'picture', and the perception process is modeled as if the listener were analyzing the spectral picture. This approach leads to studies such as 'profile analysis' and the power-spectrum model of masking. Our approach leads us to investigate time-varying, complex sounds. We refer to them as dynamic signals, and we have developed auditory signal processing models to help guide our experimental work.
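As an aside for readers unfamiliar with the 'demodulation' framing, amplitude and frequency modulations of a signal can be recovered from its analytic signal; the following is a minimal Python sketch under assumed signal parameters, not a method taken from the project itself.

```python
# Hedged sketch: recovering amplitude and frequency modulations (the
# "demodulation" step) from a synthetic AM/FM tone via the analytic signal.
# All signal parameters here are illustrative, not from the study.
import numpy as np
from scipy.signal import hilbert

fs = 16000                                   # sampling rate (Hz)
t = np.arange(0, 0.5, 1.0 / fs)              # 500 ms of signal
carrier_hz, am_hz, fm_dev = 1000.0, 4.0, 50.0
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * am_hz * t)               # amplitude modulation
phase = 2 * np.pi * carrier_hz * t + (fm_dev / am_hz) * np.sin(2 * np.pi * am_hz * t)
x = envelope * np.cos(phase)                                        # AM/FM "complex" signal

analytic = hilbert(x)                        # analytic signal x + j*H{x}
recovered_env = np.abs(analytic)             # demodulated amplitude envelope
inst_phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)                  # demodulated instantaneous frequency (Hz)

print(recovered_env[:5], inst_freq[:5])
```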
NASA Astrophysics Data System (ADS)
Moore, Brian C. J.
Psychoacoustics
Tervaniemi, M; Schröger, E; Saher, M; Näätänen, R
2000-08-18
The pitch of a spectrally rich sound is known to be more easily perceived than that of a sinusoidal tone. The present study compared the importance of spectral complexity and sound duration in facilitated pitch discrimination. The mismatch negativity (MMN), which reflects automatic neural discrimination, was recorded to a 2.5% pitch change in pure tones with only one sinusoidal frequency component (500 Hz) and in spectrally rich tones with three (500-1500 Hz) and five (500-2500 Hz) harmonic partials. During the recordings, subjects concentrated on watching a silent movie. In separate blocks, stimuli were 100 and 250 ms in duration. The MMN amplitude was enhanced with both spectrally rich sounds when compared with pure tones. The prolonged sound duration did not significantly enhance the MMN. This suggests that increased spectral rather than temporal information facilitates pitch processing of spectrally rich sounds.
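For illustration of the stimulus classes (a pure tone versus spectrally rich tones built on a 500 Hz fundamental), a harmonic complex can be synthesized as a sum of sinusoids; the sketch below assumes equal partial amplitudes and durations and is not a reconstruction of the exact stimuli.

```python
# Minimal sketch of the stimulus classes: a 500 Hz pure tone vs. harmonic
# complexes containing 3 (500-1500 Hz) or 5 (500-2500 Hz) partials.
# Equal partial amplitudes and a 100 ms duration are assumptions.
import numpy as np

def harmonic_complex(f0=500.0, n_partials=1, dur=0.1, fs=44100):
    t = np.arange(int(dur * fs)) / fs
    partials = [np.sin(2 * np.pi * f0 * (k + 1) * t) for k in range(n_partials)]
    x = np.sum(partials, axis=0)
    return x / np.max(np.abs(x))              # normalize peak amplitude

pure_tone = harmonic_complex(n_partials=1)    # single 500 Hz component
rich_3 = harmonic_complex(n_partials=3)       # partials at 500, 1000, 1500 Hz
rich_5 = harmonic_complex(n_partials=5)       # partials at 500 ... 2500 Hz
```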
Tervaniemi, M; Schröger, E; Näätänen, R
1997-05-23
Neuronal mechanisms involved in the processing of complex sounds with asynchronous onsets were studied in reading subjects. The sound onset asynchrony (SOA) between the leading partial and the remaining complex tone was varied between 0 and 360 ms. Infrequently occurring deviant sounds (in which one out of 10 harmonics was different in pitch relative to the frequently occurring standard sound) elicited the mismatch negativity (MMN), a change-specific cortical event-related potential (ERP) component. This indicates that the pitch of the standard stimuli had been pre-attentively coded by sensory-memory traces. Moreover, when the complex-tone onset fell within the temporal integration window initiated by the leading-partial onset, the deviants elicited the N2b component. This indicates that an involuntary attention switch towards the sound change occurred. In summary, the present results support the existence of a pre-perceptual integration mechanism of 100-200 ms duration and emphasize its importance in switching attention towards the stimulus change.
Auditory Processing of Complex Sounds Across Frequency Channels.
1992-06-26
towards gaining an understanding of how the auditory system processes complex sounds. "The results of binaural psychophysical experiments in human subjects...suggest (1) that spectrally synthetic binaural processing is the rule when the number of components in the tone complex is relatively few (less than...10) and there are no dynamic binaural cues to aid segregation of the target from the background, and (2) that waveforms having large effective
Felix, Richard A; Portfors, Christine V
2007-06-01
Individuals with age-related hearing loss often have difficulty understanding complex sounds such as basic speech. The C57BL/6 mouse suffers from progressive sensorineural hearing loss and thus is an effective tool for dissecting the neural mechanisms underlying changes in complex sound processing observed in humans. Neural mechanisms important for processing complex sounds include multiple frequency tuning and combination sensitivity, and these responses are common in the inferior colliculus (IC) of normal hearing mice. We examined neural responses in the IC of C57BL/6 mice to single tones and combinations of tones to examine the extent of spectral integration in the IC after age-related high frequency hearing loss. Ten percent of the neurons were tuned to multiple frequency bands and an additional 10% displayed non-linear facilitation to the combination of two different tones (combination sensitivity). No combination-sensitive inhibition was observed. By comparing these findings to spectral integration properties in the IC of normal hearing CBA/CaJ mice, we suggest that high frequency hearing loss affects some of the neural mechanisms in the IC that underlie the processing of complex sounds. The loss of spectral integration properties in the IC during aging likely impairs the central auditory system's ability to process complex sounds such as speech.
ERIC Educational Resources Information Center
Leech, Robert; Saygin, Ayse Pinar
2011-01-01
Using functional MRI, we investigated whether auditory processing of both speech and meaningful non-linguistic environmental sounds in superior and middle temporal cortex relies on a complex and spatially distributed neural system. We found that evidence for spatially distributed processing of speech and environmental sounds in a substantial…
Mechanisms Mediating the Perception of Complex Acoustic Patterns
1990-11-09
units stimulated by the louder sound include the units stimulated by the fainter sound. Thus, auditory induction corresponds to a rather sophisticated... Five studies were... show how auditory mechanisms employed for the processing of complex nonverbal patterns have been modified for the perception of speech.
2013-01-01
Background Previous studies have demonstrated functional and structural temporal lobe abnormalities located close to the auditory cortical regions in schizophrenia. The goal of this study was to determine whether functional abnormalities exist in the cortical processing of musical sound in schizophrenia. Methods Twelve schizophrenic patients and twelve age- and sex-matched healthy controls were recruited, and participants listened to a random sequence of two kinds of sonic entities, intervals (tritones and perfect fifths) and chords (atonal chords, diminished chords, and major triads), of varying degrees of complexity and consonance. The perception of musical sound was investigated by the auditory evoked potentials technique. Results Our results showed that schizophrenic patients exhibited significant reductions in the amplitudes of the N1 and P2 components elicited by musical stimuli, to which consonant sounds contributed more significantly than dissonant sounds. Schizophrenic patients could not perceive the dissimilarity between interval and chord stimuli based on the evoked potentials responses as compared with the healthy controls. Conclusion This study provided electrophysiological evidence of functional abnormalities in the cortical processing of sound complexity and music consonance in schizophrenia. The preliminary findings warrant further investigations for the underlying mechanisms. PMID:23721126
Felix II, Richard A.; Gourévitch, Boris; Gómez-Álvarez, Marcelo; Leijon, Sara C. M.; Saldaña, Enrique; Magnusson, Anna K.
2017-01-01
Auditory streaming enables perception and interpretation of complex acoustic environments that contain competing sound sources. At early stages of central processing, sounds are segregated into separate streams representing attributes that later merge into acoustic objects. Streaming of temporal cues is critical for perceiving vocal communication, such as human speech, but our understanding of circuits that underlie this process is lacking, particularly at subcortical levels. The superior paraolivary nucleus (SPON), a prominent group of inhibitory neurons in the mammalian brainstem, has been implicated in processing temporal information needed for the segmentation of ongoing complex sounds into discrete events. The SPON requires temporally precise and robust excitatory input(s) to convey information about the steep rise in sound amplitude that marks the onset of voiced sound elements. Unfortunately, the sources of excitation to the SPON and the impact of these inputs on the behavior of SPON neurons have yet to be resolved. Using anatomical tract tracing and immunohistochemistry, we identified octopus cells in the contralateral cochlear nucleus (CN) as the primary source of excitatory input to the SPON. Cluster analysis of miniature excitatory events also indicated that the majority of SPON neurons receive one type of excitatory input. Precise octopus cell-driven onset spiking coupled with transient offset spiking make SPON responses well-suited to signal transitions in sound energy contained in vocalizations. Targets of octopus cell projections, including the SPON, are strongly implicated in the processing of temporal sound features, which suggests a common pathway that conveys information critical for perception of complex natural sounds. PMID:28620283
Harmonic template neurons in primate auditory cortex underlying complex sound processing
Feng, Lei
2017-01-01
Harmonicity is a fundamental element of music, speech, and animal vocalizations. How the auditory system extracts harmonic structures embedded in complex sounds and uses them to form a coherent unitary entity is not fully understood. Despite the prevalence of sounds rich in harmonic structures in our everyday hearing environment, it has remained largely unknown what neural mechanisms are used by the primate auditory cortex to extract these biologically important acoustic structures. In this study, we discovered a unique class of harmonic template neurons in the core region of auditory cortex of a highly vocal New World primate, the common marmoset (Callithrix jacchus), across the entire hearing frequency range. Marmosets have a rich vocal repertoire and a similar hearing range to that of humans. Responses of these neurons show nonlinear facilitation to harmonic complex sounds over inharmonic sounds, selectivity for particular harmonic structures beyond two-tone combinations, and sensitivity to harmonic number and spectral regularity. Our findings suggest that the harmonic template neurons in auditory cortex may play an important role in processing sounds with harmonic structures, such as animal vocalizations, human speech, and music. PMID:28096341
De Angelis, Vittoria; De Martino, Federico; Moerel, Michelle; Santoro, Roberta; Hausfeld, Lars; Formisano, Elia
2017-11-13
Pitch is a perceptual attribute related to the fundamental frequency (or periodicity) of a sound. So far, the cortical processing of pitch has been investigated mostly using synthetic sounds. However, the complex harmonic structure of natural sounds may require different mechanisms for the extraction and analysis of pitch. This study investigated the neural representation of pitch in human auditory cortex using model-based encoding and decoding analyses of high field (7 T) functional magnetic resonance imaging (fMRI) data collected while participants listened to a wide range of real-life sounds. Specifically, we modeled the fMRI responses as a function of the sounds' perceived pitch height and salience (related to the fundamental frequency and the harmonic structure respectively), which we estimated with a computational algorithm of pitch extraction (de Cheveigné and Kawahara, 2002). First, using single-voxel fMRI encoding, we identified a pitch-coding region in the antero-lateral Heschl's gyrus (HG) and adjacent superior temporal gyrus (STG). In these regions, the pitch representation model combining height and salience predicted the fMRI responses comparatively better than other models of acoustic processing and, in the right hemisphere, better than pitch representations based on height/salience alone. Second, we assessed with model-based decoding that multi-voxel response patterns of the identified regions are more informative of perceived pitch than the remainder of the auditory cortex. Further multivariate analyses showed that complementing a multi-resolution spectro-temporal sound representation with pitch produces a small but significant improvement to the decoding of complex sounds from fMRI response patterns. In sum, this work extends model-based fMRI encoding and decoding methods - previously employed to examine the representation and processing of acoustic sound features in the human auditory system - to the representation and processing of a relevant perceptual attribute such as pitch. Taken together, the results of our model-based encoding and decoding analyses indicated that the pitch of complex real life sounds is extracted and processed in lateral HG/STG regions, at locations consistent with those indicated in several previous fMRI studies using synthetic sounds. Within these regions, pitch-related sound representations reflect the modulatory combination of height and the salience of the pitch percept. Copyright © 2017 Elsevier Inc. All rights reserved.
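The cited pitch extractor (de Cheveigné and Kawahara, 2002) belongs to the family of difference-function (autocorrelation-type) methods; the sketch below is a heavily simplified illustration of that idea, with an assumed detection threshold, and is not the authors' implementation.

```python
# Simplified YIN-style F0 estimate: cumulative-mean-normalized difference
# function over candidate lags, taking the first lag below a threshold.
# Threshold and search range are assumptions for illustration.
import numpy as np

def estimate_f0(x, fs, fmin=60.0, fmax=1000.0, threshold=0.15):
    tau_min, tau_max = int(fs / fmax), int(fs / fmin)
    d = np.zeros(tau_max + 1)
    for tau in range(1, tau_max + 1):
        diff = x[:-tau] - x[tau:]
        d[tau] = np.sum(diff * diff)              # difference function d(tau)
    cmnd = np.ones_like(d)
    running_sum = 0.0
    for tau in range(1, tau_max + 1):
        running_sum += d[tau]
        cmnd[tau] = d[tau] * tau / running_sum    # cumulative-mean normalization
    for tau in range(tau_min, tau_max + 1):
        if cmnd[tau] < threshold:                 # first dip below threshold -> period
            return fs / tau
    return fs / (tau_min + np.argmin(cmnd[tau_min:tau_max + 1]))  # fallback: global minimum
```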
Auditory brainstem response to complex sounds: a tutorial
Skoe, Erika; Kraus, Nina
2010-01-01
This tutorial provides a comprehensive overview of the methodological approach to collecting and analyzing auditory brainstem responses to complex sounds (cABRs). cABRs provide a window into how behaviorally relevant sounds such as speech and music are processed in the brain. Because temporal and spectral characteristics of sounds are preserved in this subcortical response, cABRs can be used to assess specific impairments and enhancements in auditory processing. Notably, subcortical function is neither passive nor hardwired but dynamically interacts with higher-level cognitive processes to refine how sounds are transcribed into neural code. This experience-dependent plasticity, which can occur on a number of time scales (e.g., life-long experience with speech or music, short-term auditory training, online auditory processing), helps shape sensory perception. Thus, by being an objective and non-invasive means for examining cognitive function and experience-dependent processes in sensory activity, cABRs have considerable utility in the study of populations where auditory function is of interest (e.g., auditory experts such as musicians, persons with hearing loss, auditory processing and language disorders). This tutorial is intended for clinicians and researchers seeking to integrate cABRs into their clinical and/or research programs. PMID:20084007
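One analysis commonly applied to such responses, quantifying how faithfully the brainstem response preserves the stimulus, is a stimulus-to-response cross-correlation; the snippet below is a generic sketch under an assumed neural lag window and equal-length input arrays, and should not be read as the tutorial's prescribed procedure.

```python
# Generic sketch: quantify how closely the brainstem response tracks the
# stimulus by cross-correlating the two over a plausible neural lag range.
# The 6-12 ms lag window, sampling rate, and equal array lengths are assumptions.
import numpy as np

def stimulus_response_correlation(stimulus, response, fs, lag_ms=(6.0, 12.0)):
    lags = np.arange(int(lag_ms[0] * fs / 1000), int(lag_ms[1] * fs / 1000) + 1)
    best_r, best_lag = 0.0, lags[0]
    for lag in lags:
        r = np.corrcoef(stimulus[:len(stimulus) - lag], response[lag:len(stimulus)])[0, 1]
        if abs(r) > abs(best_r):
            best_r, best_lag = r, lag
    return best_r, best_lag * 1000.0 / fs        # peak correlation and its latency (ms)
```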
Processing of speech and non-speech stimuli in children with specific language impairment
NASA Astrophysics Data System (ADS)
Basu, Madhavi L.; Surprenant, Aimee M.
2003-10-01
Specific Language Impairment (SLI) is a developmental language disorder in which children demonstrate varying degrees of difficulty in acquiring a spoken language. One possible underlying cause is that children with SLI have deficits in processing sounds that are of short duration or that are presented rapidly. Studies so far have compared their performance on speech and nonspeech sounds of unequal complexity. Hence, it is still unclear whether the deficit is specific to the perception of speech sounds or whether it more generally affects auditory function. The current study aims to answer this question by comparing the performance of children with SLI on speech and nonspeech sounds synthesized from sine-wave stimuli. The children will be tested using the classic categorical perception paradigm that includes both the identification and discrimination of stimuli along a continuum. If there is a deficit in performance on both speech and nonspeech tasks, it will show that these children have a deficit in processing complex sounds. Poor performance on only the speech sounds will indicate that the deficit is more related to language. The findings will offer insights into the exact nature of the speech perception deficits in children with SLI. [Work supported by ASHF.]
Lewis, James W.; Frum, Chris; Brefczynski-Lewis, Julie A.; Talkington, William J.; Walker, Nathan A.; Rapuano, Kristina M.; Kovach, Amanda L.
2012-01-01
Both sighted and blind individuals can readily interpret meaning behind everyday real-world sounds. In sighted listeners, we previously reported that regions along the bilateral posterior superior temporal sulci (pSTS) and middle temporal gyri (pMTG) are preferentially activated when presented with recognizable action sounds. These regions have generally been hypothesized to represent primary loci for complex motion processing, including visual biological motion processing and audio-visual integration. However, it remained unclear whether, or to what degree, life-long visual experience might impact functions related to hearing perception or memory of sound-source actions. Using functional magnetic resonance imaging (fMRI), we compared brain regions activated in congenitally blind versus sighted listeners in response to hearing a wide range of recognizable human-produced action sounds (excluding vocalizations) versus unrecognized, backward-played versions of those sounds. Here we show that recognized human action sounds commonly evoked activity in both groups along most of the left pSTS/pMTG complex, though with relatively greater activity in the right pSTS/pMTG by the blind group. These results indicate that portions of the postero-lateral temporal cortices contain domain-specific hubs for biological and/or complex motion processing independent of sensory-modality experience. Contrasting the two groups, the sighted listeners preferentially activated bilateral parietal plus medial and lateral frontal networks, while the blind listeners preferentially activated left anterior insula plus bilateral anterior calcarine and medial occipital regions, including what would otherwise have been visual-related cortex. These global-level network differences suggest that blind and sighted listeners may preferentially use different memory retrieval strategies when attempting to recognize action sounds. PMID:21305666
Learning-Related Shifts in Generalization Gradients for Complex Sounds
Wisniewski, Matthew G.; Church, Barbara A.; Mercado, Eduardo
2010-01-01
Learning to discriminate stimuli can alter how one distinguishes related stimuli. For instance, training an individual to differentiate between two stimuli along a single dimension can alter how that individual generalizes learned responses. In this study, we examined the persistence of shifts in generalization gradients after training with sounds. University students were trained to differentiate two sounds that varied along a complex acoustic dimension. Students subsequently were tested on their ability to recognize a sound they experienced during training when it was presented among several novel sounds varying along this same dimension. Peak shift was observed in Experiment 1 when generalization tests immediately followed training, and in Experiment 2 when tests were delayed by 24 hours. These findings further support the universality of generalization processes across species, modalities, and levels of stimulus complexity. They also raise new questions about the mechanisms underlying learning-related shifts in generalization gradients. PMID:19815929
Williamson, Ross S; Ahrens, Misha B; Linden, Jennifer F; Sahani, Maneesh
2016-07-20
Sensory neurons are customarily characterized by one or more linearly weighted receptive fields describing sensitivity in sensory space and time. We show that in auditory cortical and thalamic neurons, the weight of each receptive field element depends on the pattern of sound falling within a local neighborhood surrounding it in time and frequency. Accounting for this change in effective receptive field with spectrotemporal context improves predictions of both cortical and thalamic responses to stationary complex sounds. Although context dependence varies among neurons and across brain areas, there are strong shared qualitative characteristics. In a spectrotemporally rich soundscape, sound elements modulate neuronal responsiveness more effectively when they coincide with sounds at other frequencies, and less effectively when they are preceded by sounds at similar frequencies. This local-context-driven lability in the representation of complex sounds, a modulation of "input-specific gain" rather than "output gain", may be a widespread motif in sensory processing. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
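To make the contrast between a fixed linear receptive field and a context-modulated one concrete, the sketch below compares a standard linear spectrotemporal receptive field (STRF) prediction with a variant in which the recent time-frequency neighborhood scales the prediction multiplicatively; the array shapes and the specific gain function are illustrative assumptions, not the authors' model.

```python
# Hedged sketch: a standard linear STRF prediction vs. a variant in which the
# recent time-frequency context multiplicatively scales the prediction.
import numpy as np

def linear_strf_prediction(spectrogram, strf):
    # spectrogram: (n_freqs, n_times); strf: (n_freqs, n_history) weights
    n_freqs, n_history = strf.shape
    n_times = spectrogram.shape[1]
    pred = np.zeros(n_times)
    for t in range(n_history, n_times):
        window = spectrogram[:, t - n_history:t]   # recent stimulus history
        pred[t] = np.sum(strf * window)            # weighted sum = linear prediction
    return pred

def context_gain(window, strength=0.5):
    # Toy "input-specific" gain: dense concurrent energy across frequencies
    # raises the effective gain; this particular form is purely illustrative.
    return 1.0 + strength * (window.mean() - window.std())

def contextual_strf_prediction(spectrogram, strf, strength=0.5):
    n_freqs, n_history = strf.shape
    n_times = spectrogram.shape[1]
    pred = np.zeros(n_times)
    for t in range(n_history, n_times):
        window = spectrogram[:, t - n_history:t]
        pred[t] = context_gain(window, strength) * np.sum(strf * window)
    return pred
```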
Infrasound from Wind Turbines Could Affect Humans
ERIC Educational Resources Information Center
Salt, Alec N.; Kaltenbach, James A.
2011-01-01
Wind turbines generate low-frequency sounds that affect the ear. The ear is superficially similar to a microphone, converting mechanical sound waves into electrical signals, but does this by complex physiologic processes. Serious misconceptions about low-frequency sound and the ear have resulted from a failure to consider in detail how the ear…
Modeling complex tone perception: grouping harmonics with combination-sensitive neurons.
Medvedev, Andrei V; Chiao, Faye; Kanwal, Jagmeet S
2002-06-01
Perception of complex communication sounds is a major function of the auditory system. To create a coherent percept of these sounds, the auditory system may instantaneously group or bind multiple harmonics within complex sounds. This perception strategy simplifies further processing of complex sounds and facilitates their meaningful integration with other sensory inputs. Based on experimental data and a realistic model, we propose that associative learning of combinations of harmonic frequencies and nonlinear facilitation of responses to those combinations, also referred to as "combination-sensitivity," are important for spectral grouping. For our model, we simulated combination sensitivity using Hebbian and associative types of synaptic plasticity in auditory neurons. We also provided a parallel tonotopic input that converges and diverges within the network. Neurons in higher-order layers of the network exhibited an emergent property of multifrequency tuning that is consistent with experimental findings. Furthermore, this network had the capacity to "recognize" the pitch or fundamental frequency of a harmonic tone complex even when the fundamental frequency itself was missing.
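A toy numerical illustration of how Hebbian strengthening of co-occurring harmonic inputs can yield combination-sensitive (facilitated) responses is sketched below; the network size, learning rule, and facilitation term are assumptions for illustration and do not reproduce the published model.

```python
# Toy sketch: Hebbian strengthening of co-active harmonic inputs plus a
# multiplicative term that yields facilitation to the *combination* of tones.
# Parameter values and the facilitation form are illustrative assumptions.
import numpy as np

n_inputs = 10                                   # tonotopic input channels (harmonics 1-10)
w = np.full(n_inputs, 0.1)                      # single-channel feedforward weights
w_pair = np.zeros((n_inputs, n_inputs))         # pairwise (combination) weights
eta = 0.01                                      # learning rate

def present(pattern):
    """Hebbian update for a binary pattern of co-active harmonics."""
    global w, w_pair
    w = w + eta * pattern                                # strengthen active channels
    w_pair = w_pair + eta * np.outer(pattern, pattern)   # strengthen co-occurring pairs

def response(pattern):
    linear = w @ pattern                        # ordinary summation of inputs
    facilitation = pattern @ w_pair @ pattern   # nonlinear combination-sensitive term
    return linear + facilitation

# Training: harmonics 1-3 (indices 0-2) repeatedly co-occur, as in a harmonic complex.
complex_tone = np.zeros(n_inputs); complex_tone[:3] = 1.0
for _ in range(200):
    present(complex_tone)

single_tone = np.zeros(n_inputs); single_tone[0] = 1.0
# After learning, the combination response exceeds the sum of single-tone responses.
print(response(complex_tone), 3 * response(single_tone))
```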
How learning to abstract shapes neural sound representations
Ley, Anke; Vroomen, Jean; Formisano, Elia
2014-01-01
The transformation of acoustic signals into abstract perceptual representations is the essence of the efficient and goal-directed neural processing of sounds in complex natural environments. While the human and animal auditory system is perfectly equipped to process spectrotemporal sound features, adequate sound identification and categorization require neural sound representations that are invariant to irrelevant stimulus parameters. Crucially, what is relevant and irrelevant is not necessarily intrinsic to the physical stimulus structure but needs to be learned over time, often through the integration of information from other senses. This review discusses the main principles underlying categorical sound perception with a special focus on the role of learning and neural plasticity. We examine the role of different neural structures along the auditory processing pathway in the formation of abstract sound representations with respect to hierarchical as well as dynamic and distributed processing models. Whereas most fMRI studies on categorical sound processing employed speech sounds, the emphasis of the current review lies on the contribution of empirical studies using natural or artificial sounds that enable separating acoustic and perceptual processing levels and avoid interference with existing category representations. Finally, we discuss the opportunities of modern analysis techniques, such as multivariate pattern analysis (MVPA), in studying categorical sound representations. With their increased sensitivity to distributed activation changes, even in the absence of changes in overall signal level, these analysis techniques provide a promising tool to reveal the neural underpinnings of perceptually invariant sound representations. PMID:24917783
SPAIDE: A Real-time Research Platform for the Clarion CII/90K Cochlear Implant
NASA Astrophysics Data System (ADS)
Van Immerseel, L.; Peeters, S.; Dykmans, P.; Vanpoucke, F.; Bracke, P.
2005-12-01
SPAIDE (sound-processing algorithm integrated development environment) is a real-time platform of Advanced Bionics Corporation (Sylmar, Calif, USA) to facilitate advanced research on sound-processing and electrical-stimulation strategies with the Clarion CII and 90K implants. The platform is meant for testing in the laboratory. SPAIDE is conceptually based on a clear separation of the sound-processing and stimulation strategies and, specifically, on the distinction between sound-processing and stimulation channels and electrode contacts. The development environment has a user-friendly interface for specifying sound-processing and stimulation strategies, and includes the possibility to simulate the electrical stimulation. SPAIDE allows for real-time sound capturing from file or audio input on a PC, sound processing and application of the stimulation strategy, and streaming of the results to the implant. The platform is able to cover a broad range of research applications, from noise reduction and mimicking of normal hearing, through complex (simultaneous) stimulation strategies, to psychophysics. The hardware setup consists of a personal computer, an interface board, and a speech processor. The software is both expandable and to a great extent reusable in other applications.
Soskey, Laura N; Allen, Paul D; Bennetto, Loisa
2017-08-01
One of the earliest observable impairments in autism spectrum disorder (ASD) is a failure to orient to speech and other social stimuli. Auditory spatial attention, a key component of orienting to sounds in the environment, has been shown to be impaired in adults with ASD. Additionally, specific deficits in orienting to social sounds could be related to increased acoustic complexity of speech. We aimed to characterize auditory spatial attention in children with ASD and neurotypical controls, and to determine the effect of auditory stimulus complexity on spatial attention. In a spatial attention task, target and distractor sounds were played randomly in rapid succession from speakers in a free-field array. Participants attended to a central or peripheral location, and were instructed to respond to target sounds at the attended location while ignoring nearby sounds. Stimulus-specific blocks evaluated spatial attention for simple non-speech tones, speech sounds (vowels), and complex non-speech sounds matched to vowels on key acoustic properties. Children with ASD had significantly more diffuse auditory spatial attention than neurotypical children when attending front, indicated by increased responding to sounds at adjacent non-target locations. No significant differences in spatial attention emerged based on stimulus complexity. Additionally, in the ASD group, more diffuse spatial attention was associated with more severe ASD symptoms but not with general inattention symptoms. Spatial attention deficits have important implications for understanding social orienting deficits and atypical attentional processes that contribute to core deficits of ASD. Autism Res 2017, 10: 1405-1416. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.
Low complexity lossless compression of underwater sound recordings.
Johnson, Mark; Partan, Jim; Hurst, Tom
2013-03-01
Autonomous listening devices are increasingly used to study vocal aquatic animals, and there is a constant need to record longer or with greater bandwidth, requiring efficient use of memory and battery power. Real-time compression of sound has the potential to extend recording durations and bandwidths at the expense of increased processing operations and therefore power consumption. Whereas lossy methods such as MP3 introduce undesirable artifacts, lossless compression algorithms (e.g., flac) guarantee exact data recovery. But these algorithms are relatively complex due to the wide variety of signals they are designed to compress. A simpler lossless algorithm is shown here to provide compression factors of three or more for underwater sound recordings over a range of noise environments. The compressor was evaluated using samples from drifting and animal-borne sound recorders with sampling rates of 16-240 kHz. It achieves >87% of the compression of more-complex methods but requires about 1/10 of the processing operations resulting in less than 1 mW power consumption at a sampling rate of 192 kHz on a low-power microprocessor. The potential to triple recording duration with a minor increase in power consumption and no loss in sound quality may be especially valuable for battery-limited tags and robotic vehicles.
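The abstract does not spell out the compressor, but a representative low-complexity lossless pipeline of the kind described pairs a short linear predictor with Rice coding of the residuals; the following sketch is such a generic pipeline under assumed parameters, not the published codec.

```python
# Hedged sketch of a low-complexity lossless audio coder: first-order
# prediction (delta coding) followed by Rice coding of the residuals.
# The Rice parameter heuristic and block handling are assumptions.
import numpy as np

def rice_encode(value, k):
    """Rice code one non-negative integer with parameter k, as a bit string."""
    q, r = value >> k, value & ((1 << k) - 1)
    unary = "1" * q + "0"                        # quotient in unary, terminated by 0
    return unary + format(r, "0{}b".format(k)) if k > 0 else unary

def zigzag(x):
    """Map signed residuals to non-negative integers (0, -1, 1, -2, ...)."""
    return (x << 1) ^ (x >> 31) if x < 0 else (x << 1)

def compress_block(samples):
    residuals = np.diff(np.asarray(samples, dtype=np.int64), prepend=samples[0])
    mapped = [int(zigzag(int(r))) for r in residuals]
    mean = max(1, int(np.mean(mapped)))
    k = max(0, int(np.log2(mean)))               # heuristic Rice parameter
    bits = "".join(rice_encode(v, k) for v in mapped)
    return k, bits

samples = [100, 102, 103, 101, 99, 98, 100, 104]   # toy 16-bit PCM samples
k, bitstream = compress_block(samples)
print(k, len(bitstream), "coded bits vs", 16 * len(samples), "raw bits")
```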
Neural responses to sounds presented on and off the beat of ecologically valid music
Tierney, Adam; Kraus, Nina
2013-01-01
The tracking of rhythmic structure is a vital component of speech and music perception. It is known that sequences of identical sounds can give rise to the percept of alternating strong and weak sounds, and that this percept is linked to enhanced cortical and oscillatory responses. The neural correlates of the perception of rhythm elicited by ecologically valid, complex stimuli, however, remain unexplored. Here we report the effects of a stimulus' alignment with the beat on the brain's processing of sound. Human subjects listened to short popular music pieces while simultaneously hearing a target sound. Cortical and brainstem electrophysiological onset responses to the sound were enhanced when it was presented on the beat of the music, as opposed to shifted away from it. Moreover, the size of the effect of alignment with the beat on the cortical response correlated strongly with the ability to tap to a beat, suggesting that the ability to synchronize to the beat of simple isochronous stimuli and the ability to track the beat of complex, ecologically valid stimuli may rely on overlapping neural resources. These results suggest that the perception of musical rhythm may have robust effects on processing throughout the auditory system. PMID:23717268
Transfer of knowledge from sound quality measurement to noise impact evaluation
NASA Astrophysics Data System (ADS)
Genuit, Klaus
2004-05-01
It is well known that the measurement and analysis of sound quality requires a complex procedure with consideration of the physical, psychoacoustical, and psychological aspects of sound. Sound quality cannot be described only by a simple value based on A-weighted sound pressure level measurements. The A-weighted sound pressure level is sufficient to predict the probability that the human ear could be damaged by sound, but it is not the correct descriptor for the annoyance of a complex sound situation made up of several different sound events at different, and especially moving, positions (soundscape). On the one hand, the spectral distribution and the temporal pattern of the sound (psychoacoustics) must be considered; on the other hand, the subjective attitude towards the sound situation and the expectations and experience of the people (psychology) have to be included in the context of the complete noise impact evaluation. This paper describes applications of the newest methods of sound quality measurement, well established among car manufacturers, based on artificial head recordings and signal processing comparable to human hearing, to noisy environments such as community/traffic noise.
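For reference, the A-weighting mentioned here is the standard frequency weighting of IEC 61672 (a textbook definition, not something derived in this paper):

```latex
% A-weighting magnitude response (IEC 61672), f in Hz
R_A(f) = \frac{12194^2\, f^4}
              {\left(f^2 + 20.6^2\right)\sqrt{\left(f^2 + 107.7^2\right)\left(f^2 + 737.9^2\right)}\,\left(f^2 + 12194^2\right)},
\qquad
A(f) = 20 \log_{10} R_A(f) + 2.00\ \mathrm{dB}.
```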
Attentional Capacity Limits Gap Detection during Concurrent Sound Segregation.
Leung, Ada W S; Jolicoeur, Pierre; Alain, Claude
2015-11-01
Detecting a brief silent interval (i.e., a gap) is more difficult when listeners perceive two concurrent sounds rather than one, as occurs in a sound containing a mistuned harmonic among otherwise in-tune harmonics. This impairment in gap detection may reflect the interaction of low-level encoding or the division of attention between two sound objects, both of which could interfere with signal detection. To distinguish between these two alternatives, we compared ERPs during active and passive listening with complex harmonic tones that could include a gap, a mistuned harmonic, both features, or neither. During active listening, participants indicated whether they heard a gap irrespective of mistuning. During passive listening, participants watched a subtitled muted movie of their choice while the same sounds were presented. Gap detection was impaired when the complex sounds included a mistuned harmonic that popped out as a separate object. The ERP analysis revealed an early gap-related activity that was little affected by mistuning during the active or passive listening condition. However, during active listening, there was a marked decrease in the late positive wave that is thought to index attention and response-related processes. These results suggest that the limitation in detecting the gap is related to attentional processing, possibly divided attention induced by the concurrent sound objects, rather than to deficits in pre-attentive sensory encoding.
Webster, Paula J.; Skipper-Kallal, Laura M.; Frum, Chris A.; Still, Hayley N.; Ward, B. Douglas; Lewis, James W.
2017-01-01
A major gap in our understanding of natural sound processing is knowledge of where or how in a cortical hierarchy differential processing leads to categorical perception at a semantic level. Here, using functional magnetic resonance imaging (fMRI) we sought to determine if and where cortical pathways in humans might diverge for processing action sounds vs. vocalizations as distinct acoustic-semantic categories of real-world sound when matched for duration and intensity. This was tested by using relatively less semantically complex natural sounds produced by non-conspecific animals rather than humans. Our results revealed a striking double-dissociation of activated networks bilaterally. This included a previously well described pathway preferential for processing vocalization signals directed laterally from functionally defined primary auditory cortices to the anterior superior temporal gyri, and a less well-described pathway preferential for processing animal action sounds directed medially to the posterior insulae. We additionally found that some of these regions and associated cortical networks showed parametric sensitivity to high-order quantifiable acoustic signal attributes and/or to perceptual features of the natural stimuli, such as the degree of perceived recognition or intentional understanding. Overall, these results supported a neurobiological theoretical framework for how the mammalian brain may be fundamentally organized to process acoustically and acoustic-semantically distinct categories of ethologically valid, real-world sounds. PMID:28111538
Behroozmand, Roozbeh; Korzyukov, Oleg; Larson, Charles R.
2012-01-01
Previous studies have shown that the pitch of a sound is perceived in the absence of its fundamental frequency (F0), suggesting that a distinct mechanism may resolve pitch based on a pattern that exists between harmonic frequencies. The present study investigated whether such a mechanism is active during voice pitch control. ERPs were recorded in response to +200 cents pitch shifts in the auditory feedback of self-vocalizations and complex tones with and without the F0. The absence of the fundamental induced no difference in ERP latencies. However, a right-hemisphere difference was found in the N1 amplitudes with larger responses to complex tones that included the fundamental compared to when it was missing. The P1 and N1 latencies were shorter in the left hemisphere, and the N1 and P2 amplitudes were larger bilaterally for pitch shifts in voice and complex tones compared with pure tones. These findings suggest hemispheric differences in neural encoding of pitch in sounds with missing fundamental. Data from the present study suggest that the right cortical auditory areas, thought to be specialized for spectral processing, may utilize different mechanisms to resolve pitch in sounds with missing fundamental. The left hemisphere seems to perform faster processing to resolve pitch based on the rate of temporal variations in complex sounds compared with pure tones. These effects indicate that the differential neural processing of pitch in the left and right hemispheres may enable the audio-vocal system to detect temporal and spectral variations in the auditory feedback for vocal pitch control. PMID:22386045
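For readers unfamiliar with the cents scale, the +200 cents shift used in this paradigm corresponds to a fixed frequency ratio given by the standard definition below (a reference note, not part of the study's methods):

```latex
% Cents-to-ratio conversion; a +200 cent shift raises frequency by about 12.2%
\frac{f_{\text{shifted}}}{f_0} = 2^{\,c/1200},
\qquad c = 200 \;\Rightarrow\; \frac{f_{\text{shifted}}}{f_0} = 2^{1/6} \approx 1.122.
```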
Human brain regions involved in recognizing environmental sounds.
Lewis, James W; Wightman, Frederic L; Brefczynski, Julie A; Phinney, Raymond E; Binder, Jeffrey R; DeYoe, Edgar A
2004-09-01
To identify the brain regions preferentially involved in environmental sound recognition (comprising portions of a putative auditory 'what' pathway), we collected functional imaging data while listeners attended to a wide range of sounds, including those produced by tools, animals, liquids and dropped objects. These recognizable sounds, in contrast to unrecognizable, temporally reversed control sounds, evoked activity in a distributed network of brain regions previously associated with semantic processing, located predominantly in the left hemisphere, but also included strong bilateral activity in posterior portions of the middle temporal gyri (pMTG). Comparisons with earlier studies suggest that these bilateral pMTG foci partially overlap cortex implicated in high-level visual processing of complex biological motion and recognition of tools and other artifacts. We propose that the pMTG foci process multimodal (or supramodal) information about objects and object-associated motion, and that this may represent 'action' knowledge that can be recruited for purposes of recognition of familiar environmental sound-sources. These data also provide a functional and anatomical explanation for the symptoms of pure auditory agnosia for environmental sounds reported in human lesion studies.
ERIC Educational Resources Information Center
McKeown, Denis; Wellsted, David
2009-01-01
Psychophysical studies are reported examining how the context of recent auditory stimulation may modulate the processing of new sounds. The question posed is how recent tone stimulation may affect ongoing performance in a discrimination task. In the task, two complex sounds occurred in successive intervals. A single target component of one complex…
Sonification Design for Complex Work Domains: Dimensions and Distractors
ERIC Educational Resources Information Center
Anderson, Janet E.; Sanderson, Penelope
2009-01-01
Sonification--representing data in sound--is a potential method for supporting human operators who have to monitor dynamic processes. Previous research has investigated a limited number of sound dimensions and has not systematically investigated the impact of dimensional interactions on sonification effectiveness. In three experiments the authors…
Auditory scene analysis in school-aged children with developmental language disorders
Sussman, E.; Steinschneider, M.; Lee, W.; Lawson, K.
2014-01-01
Natural sound environments are dynamic, with overlapping acoustic input originating from simultaneously active sources. A key function of the auditory system is to integrate sensory inputs that belong together and segregate those that come from different sources. We hypothesized that this skill is impaired in individuals with phonological processing difficulties. There is considerable disagreement about whether phonological impairments observed in children with developmental language disorders can be attributed to specific linguistic deficits or to more general acoustic processing deficits. However, most tests of general auditory abilities have been conducted with a single set of sounds. We assessed the ability of school-aged children (7–15 years) to parse complex auditory non-speech input, and determined whether the presence of phonological processing impairments was associated with stream perception performance. A key finding was that children with language impairments did not show the same developmental trajectory for stream perception as typically developing children. In addition, children with language impairments required larger frequency separations between sounds to hear distinct streams compared to age-matched peers. Furthermore, phonological processing ability was a significant predictor of stream perception measures, but only in the older age groups. No such association was found in the youngest children. These results indicate that children with language impairments have difficulty parsing speech streams, or identifying individual sound events when there are competing sound sources. We conclude that language group differences may in part reflect fundamental maturational disparities in the analysis of complex auditory scenes. PMID:24548430
Emotional sounds modulate early neural processing of emotional pictures
Gerdes, Antje B. M.; Wieser, Matthias J.; Bublatzky, Florian; Kusay, Anita; Plichta, Michael M.; Alpers, Georg W.
2013-01-01
In our natural environment, emotional information is conveyed by converging visual and auditory information; multimodal integration is of utmost importance. In the laboratory, however, emotion researchers have mostly focused on the examination of unimodal stimuli. Few existing studies on multimodal emotion processing have focused on human communication such as the integration of facial and vocal expressions. Extending the concept of multimodality, the current study examines how the neural processing of emotional pictures is influenced by simultaneously presented sounds. Twenty pleasant, unpleasant, and neutral pictures of complex scenes were presented to 22 healthy participants. On the critical trials these pictures were paired with pleasant, unpleasant, and neutral sounds. Sound presentation started 500 ms before picture onset and each stimulus presentation lasted for 2 s. EEG was recorded from 64 channels and ERP analyses focused on the picture onset. In addition, valence and arousal ratings were obtained. Previous findings for the neural processing of emotional pictures were replicated. Specifically, unpleasant compared to neutral pictures were associated with an increased parietal P200 and a more pronounced centroparietal late positive potential (LPP), independent of the accompanying sound valence. For audiovisual stimulation, increased parietal P100 and P200 were found in response to all pictures which were accompanied by unpleasant or pleasant sounds compared to pictures with neutral sounds. Most importantly, incongruent audiovisual pairs of unpleasant pictures and pleasant sounds enhanced parietal P100 and P200 compared to pairings with congruent sounds. Taken together, the present findings indicate that emotional sounds modulate early stages of visual processing and, therefore, provide an avenue by which multimodal experience may enhance perception. PMID:24151476
McKeown, Denis; Wellsted, David
2009-06-01
Psychophysical studies are reported examining how the context of recent auditory stimulation may modulate the processing of new sounds. The question posed is how recent tone stimulation may affect ongoing performance in a discrimination task. In the task, two complex sounds occurred in successive intervals. A single target component of one complex was decreased (Experiments 1 and 2) or increased (Experiments 3, 4, and 5) in intensity on half of trials: The task was simply to identify those trials. Prior to each trial, a pure tone inducer was introduced either at the same frequency as the target component or at the frequency of a different component of the complex. Consistent with a frequency-specific form of disruption, discrimination performance was impaired when the inducing tone matched the frequency of the following decrement or increment. A timbre memory model (TMM) is proposed incorporating channel-specific interference allied to inhibition of attending in the coding of sounds in the context of memory traces of recent sounds. (c) 2009 APA, all rights reserved.
Complex auditory behaviour emerges from simple reactive steering
NASA Astrophysics Data System (ADS)
Hedwig, Berthold; Poulet, James F. A.
2004-08-01
The recognition and localization of sound signals is fundamental to acoustic communication. Complex neural mechanisms are thought to underlie the processing of species-specific sound patterns even in animals with simple auditory pathways. In female crickets, which orient towards the male's calling song, current models propose pattern recognition mechanisms based on the temporal structure of the song. Furthermore, it is thought that localization is achieved by comparing the output of the left and right recognition networks, which then directs the female to the pattern that most closely resembles the species-specific song. Here we show, using a highly sensitive method for measuring the movements of female crickets, that when walking and flying each sound pulse of the communication signal releases a rapid steering response. Thus auditory orientation emerges from reactive motor responses to individual sound pulses. Although the reactive motor responses are not based on the song structure, a pattern recognition process may modulate the gain of the responses on a longer timescale. These findings are relevant to concepts of insect auditory behaviour and to the development of biologically inspired robots performing cricket-like auditory orientation.
Sound stream segregation: a neuromorphic approach to solve the “cocktail party problem” in real-time
Thakur, Chetan Singh; Wang, Runchun M.; Afshar, Saeed; Hamilton, Tara J.; Tapson, Jonathan C.; Shamma, Shihab A.; van Schaik, André
2015-01-01
The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the “cocktail party effect.” It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77, and 55 dB for simple tone, complex tone, and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for sound segregation and speech recognition. PMID:26388721
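A drastically simplified software analog of the temporal-coherence grouping described above, not the FPGA implementation, can be sketched by correlating channel envelopes with the attended channel and masking the cochleagram accordingly; the correlation threshold and toy signals are assumptions.

```python
# Hedged sketch of temporal-coherence grouping: channels whose envelopes are
# strongly positively correlated with the attended channel form one stream and
# are kept; uncorrelated or anti-correlated channels are masked out.
import numpy as np

def coherence_mask(envelopes, attended_channel, threshold=0.5):
    # envelopes: (channels, time) array of band-limited envelope signals
    target = envelopes[attended_channel]
    corr = np.array([np.corrcoef(target, env)[0, 1] for env in envelopes])
    return (corr >= threshold).astype(float)     # 1 = same stream, 0 = background

def segregate(cochleagram, envelopes, attended_channel, threshold=0.5):
    mask = coherence_mask(envelopes, attended_channel, threshold)
    return cochleagram * mask[:, None]           # apply binary mask per channel

# Toy example: 4 channels, two coherently modulated (the "target" stream).
t = np.linspace(0, 1, 1000)
env = np.vstack([np.abs(np.sin(2 * np.pi * 4 * t)),    # target stream
                 np.abs(np.sin(2 * np.pi * 4 * t)),    # coherent with target
                 np.abs(np.sin(2 * np.pi * 7 * t)),    # different modulation rate
                 np.random.rand(1000)])                # background noise
cochleagram = np.random.rand(4, 1000) + env
print(coherence_mask(env, attended_channel=0))
```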
Psychophysiological Correlates of Developmental Changes in Healthy and Autistic Boys
ERIC Educational Resources Information Center
Weismüller, Benjamin; Thienel, Renate; Youlden, Anne-Marie; Fulham, Ross; Koch, Michael; Schall, Ulrich
2015-01-01
This study investigated neurodevelopmental changes in sound processing by recording mismatch negativity (MMN) in response to various degrees of sound complexity in 18 mildly to moderately autistic versus 15 healthy boys aged between 6 and 15 years. Autistic boys presented with lower IQ and poor performance on a range of executive and social…
Kempe, Vera; Bublitz, Dennis; Brooks, Patricia J
2015-05-01
Is the observed link between musical ability and non-native speech-sound processing due to enhanced sensitivity to acoustic features underlying both musical and linguistic processing? To address this question, native English speakers (N = 118) discriminated Norwegian tonal contrasts and Norwegian vowels. Short tones differing in temporal, pitch, and spectral characteristics were used to measure sensitivity to the various acoustic features implicated in musical and speech processing. Musical ability was measured using Gordon's Advanced Measures of Musical Audiation. Results showed that sensitivity to specific acoustic features played a role in non-native speech-sound processing: Controlling for non-verbal intelligence, prior foreign language-learning experience, and sex, sensitivity to pitch and spectral information partially mediated the link between musical ability and discrimination of non-native vowels and lexical tones. The findings suggest that while sensitivity to certain acoustic features partially mediates the relationship between musical ability and non-native speech-sound processing, complex tests of musical ability also tap into other shared mechanisms. © 2014 The British Psychological Society.
Neural Correlates of Sound Localization in Complex Acoustic Environments
Zündorf, Ida C.; Lewald, Jörg; Karnath, Hans-Otto
2013-01-01
Listening to and understanding people in a “cocktail-party situation” is a remarkable feature of the human auditory system. Here we investigated the neural correlates of the ability to localize a particular sound among others in an acoustically cluttered environment with healthy subjects. In a sound localization task, five different natural sounds were presented from five virtual spatial locations during functional magnetic resonance imaging (fMRI). Activity related to auditory stream segregation was revealed in posterior superior temporal gyrus bilaterally, anterior insula, supplementary motor area, and a frontoparietal network. Moreover, the results indicated critical roles of the left planum temporale in extracting the sound of interest among acoustical distracters and of the precuneus in orienting spatial attention to the target sound. We hypothesized that the left-sided lateralization of the planum temporale activation is related to the higher specialization of the left hemisphere for the analysis of spectrotemporal sound features. Furthermore, the precuneus, a brain area known to be involved in the computation of spatial coordinates across diverse frames of reference for reaching to objects, also seems to be a crucial area for accurately determining the locations of auditory targets in an acoustically complex scene of multiple sound sources. The precuneus thus may not only be involved in visuo-motor processes, but may also subserve related functions in the auditory modality. PMID:23691185
Auditory Power-Law Activation Avalanches Exhibit a Fundamental Computational Ground State
NASA Astrophysics Data System (ADS)
Stoop, Ruedi; Gomez, Florian
2016-07-01
The cochlea provides a biological information-processing paradigm that we are only beginning to understand in its full complexity. Our work reveals an interacting network of strongly nonlinear dynamical nodes, on which even a simple sound input triggers subnetworks of activated elements that follow power-law size statistics ("avalanches"). From dynamical systems theory, power-law size distributions relate to a fundamental ground state of biological information processing. Learning destroys these power laws. These results strongly modify the models of mammalian sound processing and provide a novel methodological perspective for understanding how the brain processes information.
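As background on how power-law avalanche-size statistics of this kind are typically quantified (a standard maximum-likelihood estimator, not the authors' specific analysis):

```latex
% Maximum-likelihood estimate of the power-law exponent for P(s) ~ s^{-\alpha}
\hat{\alpha} = 1 + n \left[ \sum_{i=1}^{n} \ln \frac{s_i}{s_{\min}} \right]^{-1},
\qquad s_i \ge s_{\min},
```
where the $s_i$ are the observed avalanche sizes above a chosen lower cutoff $s_{\min}$.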
Cortical representations of communication sounds.
Heiser, Marc A; Cheung, Steven W
2008-10-01
This review summarizes recent research into cortical processing of vocalizations in animals and humans. There has been a resurgent interest in this topic accompanied by an increased number of studies using animal models with complex vocalizations and new methods in human brain imaging. Recent results from such studies are discussed. Experiments have begun to reveal the bilateral cortical fields involved in communication sound processing and the transformations of neural representations that occur among those fields. Advances have also been made in understanding the neuronal basis of interaction between developmental exposures and behavioral experiences with vocalization perception. Exposure to sounds during the developmental period produces large effects on brain responses, as do a variety of specific trained tasks in adults. Studies have also uncovered a neural link between the motor production of vocalizations and the representation of vocalizations in cortex. Parallel experiments in humans and animals are answering important questions about vocalization processing in the central nervous system. This dual approach promises to reveal microscopic, mesoscopic, and macroscopic principles of large-scale dynamic interactions between brain regions that underlie the complex phenomenon of vocalization perception. Such advances will yield a greater understanding of the causes, consequences, and treatment of disorders related to speech processing.
Amin, Noopur; Gastpar, Michael; Theunissen, Frédéric E.
2013-01-01
Previous research has shown that postnatal exposure to simple, synthetic sounds can affect the sound representation in the auditory cortex as reflected by changes in the tonotopic map or other relatively simple tuning properties, such as AM tuning. However, their functional implications for neural processing in the generation of ethologically-based perception remain unexplored. Here we examined the effects of noise-rearing and social isolation on the neural processing of communication sounds such as species-specific song, in the primary auditory cortex analog of adult zebra finches. Our electrophysiological recordings reveal that neural tuning to simple frequency-based synthetic sounds is initially established in all the laminae independent of patterned acoustic experience; however, we provide the first evidence that early exposure to patterned sound statistics, such as those found in native sounds, is required for the subsequent emergence of neural selectivity for complex vocalizations and for shaping neural spiking precision in superficial and deep cortical laminae, and for creating efficient neural representations of song and a less redundant ensemble code in all the laminae. Our study also provides the first causal evidence for ‘sparse coding’, such that when the statistics of the stimuli were changed during rearing, as in noise-rearing, the sparse or optimal representation for species-specific vocalizations disappeared. Taken together, these results imply that a layer-specific differential development of the auditory cortex requires patterned acoustic input, and a specialized and robust sensory representation of complex communication sounds in the auditory cortex requires a rich acoustic and social environment. PMID:23630587
Tarasenko, Melissa A; Swerdlow, Neal R; Makeig, Scott; Braff, David L; Light, Gregory A
2014-01-01
Cognitive deficits limit psychosocial functioning in schizophrenia. For many patients, cognitive remediation approaches have yielded encouraging results. Nevertheless, therapeutic response is variable, and outcome studies consistently identify individuals who respond minimally to these interventions. Biomarkers that can assist in identifying patients likely to benefit from particular forms of cognitive remediation are needed. Here, we describe an event-related potential (ERP) biomarker - the auditory brain-stem response (ABR) to complex sounds (cABR) - that appears to be particularly well-suited for predicting response to at least one form of cognitive remediation that targets auditory information processing. Uniquely, the cABR quantifies the fidelity of sound encoded at the level of the brainstem and midbrain. This ERP biomarker has revealed auditory processing abnormalities in various neurodevelopmental disorders, correlates with functioning across several cognitive domains, and appears to be responsive to targeted auditory training. We present preliminary cABR data from 18 schizophrenia patients and propose further investigation of this biomarker for predicting and tracking response to cognitive interventions.
Slevc, L Robert; Shell, Alison R
2015-01-01
Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.
Auditory biological marker of concussion in children
Kraus, Nina; Thompson, Elaine C.; Krizman, Jennifer; Cook, Katherine; White-Schwoch, Travis; LaBella, Cynthia R.
2016-01-01
Concussions carry devastating potential for cognitive, neurologic, and socio-emotional disease, but no objective test reliably identifies a concussion and its severity. A variety of neurological insults compromise sound processing, particularly in complex listening environments that place high demands on brain processing. The frequency-following response captures the high computational demands of sound processing with extreme granularity and reliably reveals individual differences. We hypothesize that concussions disrupt these auditory processes, and that the frequency-following response indicates concussion occurrence and severity. Specifically, we hypothesize that concussions disrupt the processing of the fundamental frequency, a key acoustic cue for identifying and tracking sounds and talkers, and, consequently, understanding speech in noise. Here we show that children who sustained a concussion exhibit a signature neural profile. They have worse representation of the fundamental frequency, and smaller and more sluggish neural responses. Neurophysiological responses to the fundamental frequency partially recover to control levels as concussion symptoms abate, suggesting a gain in biological processing following partial recovery. Neural processing of sound correctly identifies 90% of concussion cases and clears 95% of control cases, suggesting this approach has practical potential as a scalable biological marker for sports-related concussion and other types of mild traumatic brain injuries. PMID:28005070
The Design, Development and Testing of a Multi-process Real-time Software System
2007-03-01
programming large systems stems from the complexity of dealing with many different details at one time. A sound engineering approach is to break...controls and 3) is portable to other OS platforms such as Microsoft Windows. Next, to reduce the complexity of the programming tasks, the system...processes depending on how often the process has to check to see if common data was modified. A good method for one process to quickly notify another
Developmental Changes in Locating Voice and Sound in Space
Kezuka, Emiko; Amano, Sachiko; Reddy, Vasudevi
2017-01-01
We know little about how infants locate voice and sound in a complex multi-modal space. Using a naturalistic laboratory experiment the present study tested 35 infants at 3 ages: 4 months (15 infants), 5 months (12 infants), and 7 months (8 infants). While they were engaged frontally with one experimenter, infants were presented with (a) a second experimenter’s voice and (b) castanet sounds from three different locations (left, right, and behind). There were clear increases with age in the successful localization of sounds from all directions, and a decrease in the number of repetitions required for success. Nonetheless even at 4 months two-thirds of the infants attempted to search for the voice or sound. At all ages localizing sounds from behind was more difficult and was clearly present only at 7 months. Perseverative errors (looking at the last location) were present at all ages and appeared to be task specific (only present in the 7 month-olds for the behind location). Spontaneous attention shifts by the infants between the two experimenters, evident at 7 months, suggest early evidence for infant initiation of triadic attentional engagements. There was no advantage found for voice over castanet sounds in this study. Auditory localization is a complex and contextual process emerging gradually in the first half of the first year. PMID:28979220
Evolutionary trends in directional hearing
Carr, Catherine E.; Christensen-Dalsgaard, Jakob
2016-01-01
Tympanic hearing is a true evolutionary novelty that arose in parallel within early tetrapods. We propose that in these tetrapods, selection for sound localization in air acted upon pre-existing directionally sensitive brainstem circuits, similar to those in fishes. Auditory circuits in birds and lizards resemble this ancestral, directionally sensitive framework. Despite this anatomical similarity, coding of sound source location differs between birds and lizards. In birds, brainstem circuits compute sound location from interaural cues. Lizards, however, have coupled ears, and do not need to compute source location in the brain. Thus, their neural processing of sound direction differs, although all show mechanisms for enhancing sound source directionality. Comparisons with mammals reveal similarly complex interactions between coding strategies and evolutionary history. PMID:27448850
Musical Sophistication and the Effect of Complexity on Auditory Discrimination in Finnish Speakers.
Dawson, Caitlin; Aalto, Daniel; Šimko, Juraj; Vainio, Martti; Tervaniemi, Mari
2017-01-01
Musical experiences and native language are both known to affect auditory processing. The present work aims to disentangle the influences of native language phonology and musicality on behavioral and subcortical sound feature processing in a population of musically diverse Finnish speakers, as well as to investigate the specificity of enhancement from musical training. Finnish speakers are highly sensitive to duration cues since in Finnish, vowel and consonant duration determine word meaning. Using a correlational approach with a set of behavioral sound feature discrimination tasks, brainstem recordings, and a musical sophistication questionnaire, we find no evidence for an association between musical sophistication and more precise duration processing in Finnish speakers, either in the auditory brainstem response or in behavioral tasks; more musically sophisticated speakers do, however, show enhanced pitch discrimination compared with Finnish speakers with less musical experience, as well as greater duration modulation in a complex task. These results are consistent with a ceiling effect for certain sound features that corresponds to the phonology of the native language, leaving an opportunity for music experience-based enhancement of sound features not explicitly encoded in the language (such as pitch, which is not explicitly encoded in Finnish). Finally, the pattern of duration modulation in more musically sophisticated Finnish speakers suggests integrated feature processing for greater efficiency in a real world musical situation. These results have implications for research into the specificity of plasticity in the auditory system, as well as for the effects of the interaction of specific language features with musical experience.
Evoked-potential changes following discrimination learning involving complex sounds
Orduña, Itzel; Liu, Estella H.; Church, Barbara A.; Eddins, Ann C.; Mercado, Eduardo
2011-01-01
Objective Perceptual sensitivities are malleable via learning, even in adults. We trained adults to discriminate complex sounds (periodic, frequency-modulated sweep trains) using two different training procedures, and used psychoacoustic tests and evoked potential measures (the N1-P2 complex) to assess changes in both perceptual and neural sensitivities. Methods Training took place either on a single day, or daily across eight days, and involved discrimination of pairs of stimuli using a single-interval, forced-choice task. In some participants, training started with dissimilar pairs that became progressively more similar across sessions, whereas in others training was constant, involving only one, highly similar, stimulus pair. Results Participants were better able to discriminate the complex sounds after training, particularly after progressive training, and the evoked potentials elicited by some of the sounds increased in amplitude following training. Significant amplitude changes were restricted to the P2 peak. Conclusion Our findings indicate that changes in perceptual sensitivities parallel enhanced neural processing. Significance These results are consistent with the proposal that changes in perceptual abilities arise from the brain’s capacity to adaptively modify cortical representations of sensory stimuli, and that different training regimens can lead to differences in cortical sensitivities, even after relatively short periods of training. PMID:21958655
Age Differences in the Neuroelectric Adaptation to Meaningful Sounds
Leung, Ada W. S.; He, Yu; Grady, Cheryl L.; Alain, Claude
2013-01-01
Much of what we know regarding the effect of stimulus repetition on neuroelectric adaptation comes from studies using artificially produced pure tones or harmonic complex sounds. Little is known about the neural processes associated with the representation of everyday sounds and how these may be affected by aging. In this study, we used real-life, meaningful sounds presented at various azimuth positions and found that auditory evoked responses peaking at about 100 and 180 ms after sound onset decreased in amplitude with stimulus repetition. This neural adaptation was greater in young than in older adults and was more pronounced when the same sound was repeated at the same location. Moreover, the P2 waves showed differential patterns of domain-specific adaptation when location and identity were repeated among young adults. Background noise decreased ERP amplitudes and modulated the magnitude of repetition effects on both the N1 and P2 amplitude, and the effects were comparable in young and older adults. These findings reveal an age-related difference in the neural processes associated with adaptation to meaningful sounds, which may relate to older adults’ difficulty in ignoring task-irrelevant stimuli. PMID:23935900
The sound symbolism bootstrapping hypothesis for language acquisition and language evolution
Imai, Mutsumi; Kita, Sotaro
2014-01-01
Sound symbolism is a non-arbitrary relationship between speech sounds and meaning. We review evidence that, contrary to the traditional view in linguistics, sound symbolism is an important design feature of language, which affects online processing of language, and most importantly, language acquisition. We propose the sound symbolism bootstrapping hypothesis, claiming that (i) pre-verbal infants are sensitive to sound symbolism, due to a biologically endowed ability to map and integrate multi-modal input, (ii) sound symbolism helps infants gain referential insight for speech sounds, (iii) sound symbolism helps infants and toddlers associate speech sounds with their referents to establish a lexical representation and (iv) sound symbolism helps toddlers learn words by allowing them to focus on referents embedded in a complex scene, alleviating Quine's problem. We further explore the possibility that sound symbolism is deeply related to language evolution, drawing the parallel between historical development of language across generations and ontogenetic development within individuals. Finally, we suggest that sound symbolism bootstrapping is a part of a more general phenomenon of bootstrapping by means of iconic representations, drawing on similarities and close behavioural links between sound symbolism and speech-accompanying iconic gesture. PMID:25092666
Neurobiology of Everyday Communication: What Have We Learned From Music?
Kraus, Nina; White-Schwoch, Travis
2016-06-09
Sound is an invisible but powerful force that is central to everyday life. Studies in the neurobiology of everyday communication seek to elucidate the neural mechanisms underlying sound processing, their stability, their plasticity, and their links to language abilities and disabilities. This sound processing lies at the nexus of cognitive, sensorimotor, and reward networks. Music provides a powerful experimental model to understand these biological foundations of communication, especially with regard to auditory learning. We review studies of music training that employ a biological approach to reveal the integrity of sound processing in the brain, the bearing these mechanisms have on everyday communication, and how these processes are shaped by experience. Together, these experiments illustrate that music works in synergistic partnerships with language skills and the ability to make sense of speech in complex, everyday listening environments. The active, repeated engagement with sound demanded by music making augments the neural processing of speech, eventually cascading to listening and language. This generalization from music to everyday communication illustrates both that these auditory brain mechanisms have a profound potential for plasticity and that sound processing is biologically intertwined with listening and language skills. A new wave of studies has pushed neuroscience beyond the traditional laboratory by revealing the effects of community music training in underserved populations. These community-based studies reinforce laboratory work and highlight how the auditory system achieves a remarkable balance between stability and flexibility in processing speech. Moreover, these community studies have the potential to inform health care, education, and social policy by lending a neurobiological perspective to their efficacy. © The Author(s) 2016.
Ogawa, Akitoshi; Bordier, Cecile; Macaluso, Emiliano
2013-01-01
The use of naturalistic stimuli to probe sensory functions in the human brain is gaining increasing interest. Previous imaging studies examined brain activity associated with the processing of cinematographic material using both standard “condition-based” designs, as well as “computational” methods based on the extraction of time-varying features of the stimuli (e.g. motion). Here, we exploited both approaches to investigate the neural correlates of complex visual and auditory spatial signals in cinematography. In the first experiment, the participants watched a piece of a commercial movie presented in four blocked conditions: 3D vision with surround sounds (3D-Surround), 3D with monaural sound (3D-Mono), 2D-Surround, and 2D-Mono. In the second experiment, they watched two different segments of the movie both presented continuously in 3D-Surround. The blocked presentation served for standard condition-based analyses, while all datasets were submitted to computation-based analyses. The latter assessed where activity co-varied with visual disparity signals and the complexity of auditory multi-sources signals. The blocked analyses associated 3D viewing with the activation of the dorsal and lateral occipital cortex and superior parietal lobule, while the surround sounds activated the superior and middle temporal gyri (S/MTG). The computation-based analyses revealed the effects of absolute disparity in dorsal occipital and posterior parietal cortices and of disparity gradients in the posterior middle temporal gyrus plus the inferior frontal gyrus. The complexity of the surround sounds was associated with activity in specific sub-regions of S/MTG, even after accounting for changes of sound intensity. These results demonstrate that the processing of naturalistic audio-visual signals entails an extensive set of visual and auditory areas, and that computation-based analyses can track the contribution of complex spatial aspects characterizing such life-like stimuli. PMID:24194828
Sound Richness of Music Might Be Mediated by Color Perception: A PET Study.
Satoh, Masayuki; Nagata, Ken; Tomimoto, Hidekazu
2015-01-01
We investigated the role of the fusiform cortex in music processing with the use of PET, focusing on the perception of sound richness. Musically naïve subjects listened to familiar melodies with three kinds of accompaniments: (i) an accompaniment composed of only three basic chords (chord condition), (ii) a simple accompaniment typically used in traditional music text books in elementary school (simple condition), and (iii) an accompaniment with rich and flowery sounds composed by a professional composer (complex condition). Using a PET subtraction technique, we studied changes in regional cerebral blood flow (rCBF) in simple minus chord, complex minus simple, and complex minus chord conditions. The simple minus chord, complex minus simple, and complex minus chord conditions regularly showed increases in rCBF at the posterior portion of the inferior temporal gyrus, including the LOC and fusiform gyrus. We may conclude that certain association cortices such as the LOC and the fusiform cortex may represent centers of multisensory integration, with foreground and background segregation occurring at the LOC level and the recognition of richness and floweriness of stimuli occurring in the fusiform cortex, both in terms of vision and audition.
Sahu, Atanu; Bhattacharya, Partha; Niyogi, Arup Guha; Rose, Michael
2017-06-01
Double-wall panels are known for their superior sound insulation compared with single-wall panels when used as sound barriers. Sound transmission through a double-wall structure is a complex process involving vibroacoustic interaction between the structural panels, the air cushion in between, and the secondary acoustic domain. In this context, a versatile, fully coupled technique based on a finite element-boundary element model is developed that enables estimation of sound transfer through a double-wall panel into an adjacent enclosure while satisfying displacement compatibility across the interface. The contribution of individual components to the transmitted energy is identified through numerical simulations.
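The coupled finite element-boundary element formulation itself is not reproduced here; as a rough point of reference only, the classical mass-air-mass idealization of a double wall can be sketched in a few lines of Python. The panel surface densities and gap depth below are assumed example values, not parameters taken from the study.

# Back-of-envelope sketch (not the paper's coupled FE-BE model): mass-air-mass
# resonance and normal-incidence mass-law transmission loss for an idealized double wall.
import numpy as np

rho, c = 1.21, 343.0          # air density [kg/m^3], speed of sound [m/s]
m1, m2 = 10.0, 10.0           # panel surface densities [kg/m^2] (assumed)
d = 0.05                      # air-gap depth [m] (assumed)

# Mass-air-mass resonance of the double-wall system
f0 = (1.0 / (2.0 * np.pi)) * np.sqrt(rho * c**2 / d * (m1 + m2) / (m1 * m2))

# Normal-incidence mass law for a panel of surface density m
def mass_law_tl(f, m):
    return 20.0 * np.log10(np.pi * f * m / (rho * c))

print(f"mass-air-mass resonance ~ {f0:.0f} Hz")
for f in (125, 500, 2000):
    print(f"{f:4d} Hz: combined mass law ~ {mass_law_tl(f, m1 + m2):.1f} dB")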
Soundscapes and the sense of hearing of fishes.
Fay, Richard
2009-03-01
Underwater soundscapes have probably played an important role in the adaptation of ears and auditory systems of fishes throughout evolutionary time, and for all species. These sounds probably contain important information about the environment and about most objects and events that confront the receiving fish so that appropriate behavior is possible. For example, the sounds from reefs appear to be used by at least some fishes for their orientation and migration. These sorts of environmental sounds should be considered much like "acoustic daylight" that continuously bathes all environments and contains information that all organisms can potentially use to form a sort of image of the environment. At present, however, we are generally ignorant of the nature of ambient sound fields impinging on fishes, and the adaptive value of processing these fields to resolve the multiple sources of sound. Our field has focused almost exclusively on the adaptive value of processing species-specific communication sounds, and has not considered the informational value of ambient "noise." Since all fishes can detect and process acoustic particle motion, including the directional characteristics of this motion, underwater sound fields are potentially more complex and information-rich than terrestrial acoustic environments. The capacities of one fish species (goldfish) to receive and make use of such sound source information have been demonstrated (sound source segregation and auditory scene analysis), and it is suggested that all vertebrate species have this capacity. A call is made to better understand underwater soundscapes, and the associated behaviors they determine in fishes. © 2009 ISZS, Blackwell Publishing and IOZ/CAS.
Bao, Shaowen; Chang, Edward F.; Teng, Ching-Ling; Heiser, Marc A.; Merzenich, Michael M.
2013-01-01
Cortical sensory representations can be reorganized by sensory exposure in an epoch of early development. The adaptive role of this type of plasticity for natural sounds in sensory development is, however, unclear. We have reared rats in a naturalistic, complex acoustic environment and examined their auditory representations. We found that cortical neurons became more selective to spectrotemporal features in the experienced sounds. At the neuronal population level, more neurons were involved in representing the whole set of complex sounds, but fewer neurons actually responded to each individual sound, albeit with greater magnitudes. A comparison of population-temporal responses to the experienced complex sounds revealed that cortical responses to different renderings of the same song motif were more similar, indicating that the cortical neurons became less sensitive to natural acoustic variations associated with stimulus context and sound renderings. By contrast, cortical responses to sounds of different motifs became more distinctive, suggesting that cortical neurons were tuned to the defining features of the experienced sounds. These effects lead to emergent “categorical” representations of the experienced sounds, which presumably facilitate their recognition. PMID:23747304
Perception of Long-Period Complex Sounds
1989-11-27
Richard M. Warren AFOSR Grant No. 88-0320 Guttman, N. & Julesz, B. (1963). Lower limits of auditory periodicity analysis. Journal of the Acoustical...order within auditory sequences. Perception & Psychophysics, 12, 86-90. Watson, C.S., (1987). Uncertainty, informational masking, and the capacity of...immediate memory. In W.A. Yost and C.S. Watson (eds.), Auditory Processing of Complex Sounds. New Jersey: Lawrence Erlbaum Associates, pp. 267-277
Sayles, Mark; Stasiak, Arkadiusz; Winter, Ian M.
2015-01-01
The auditory system typically processes information from concurrently active sound sources (e.g., two voices speaking at once), in the presence of multiple delayed, attenuated and distorted sound-wave reflections (reverberation). Brainstem circuits help segregate these complex acoustic mixtures into “auditory objects.” Psychophysical studies demonstrate a strong interaction between reverberation and fundamental-frequency (F0) modulation, leading to impaired segregation of competing vowels when segregation is on the basis of F0 differences. Neurophysiological studies of complex-sound segregation have concentrated on sounds with steady F0s, in anechoic environments. However, F0 modulation and reverberation are quasi-ubiquitous. We examine the ability of 129 single units in the ventral cochlear nucleus (VCN) of the anesthetized guinea pig to segregate the concurrent synthetic vowel sounds /a/ and /i/, based on temporal discharge patterns under closed-field conditions. We address the effects of added real-room reverberation, F0 modulation, and the interaction of these two factors, on brainstem neural segregation of voiced speech sounds. A firing-rate representation of single-vowels' spectral envelopes is robust to the combination of F0 modulation and reverberation: local firing-rate maxima and minima across the tonotopic array code vowel-formant structure. However, single-vowel F0-related periodicity information in shuffled inter-spike interval distributions is significantly degraded in the combined presence of reverberation and F0 modulation. Hence, segregation of double-vowels' spectral energy into two streams (corresponding to the two vowels), on the basis of temporal discharge patterns, is impaired by reverberation; specifically when F0 is modulated. All unit types (primary-like, chopper, onset) are similarly affected. These results offer neurophysiological insights to perceptual organization of complex acoustic scenes under realistically challenging listening conditions. PMID:25628545
Miyazaki, Takahiro; Thompson, Jessica; Fujioka, Takako; Ross, Bernhard
2013-04-19
Amplitude fluctuations of natural sounds carry multiple types of information represented at different time scales, such as syllables and voice pitch in speech. However, it is not well understood how such amplitude fluctuations at different time scales are processed in the brain. In the present study we investigated the effect of the stimulus rate on the cortical evoked responses using magnetoencephalography (MEG). We used a two-tone complex sound, whose envelope fluctuated at the difference frequency and induced an acoustic beat sensation. When the beat rate was continuously swept between 3 Hz and 60 Hz, the auditory evoked response showed distinct transient waves at slow rates, while at fast rates continuous sinusoidal oscillations similar to the auditory steady-state response (ASSR) were observed. We further derived temporal modulation transfer functions (TMTF) from amplitudes of the transient responses and from the ASSR. The results identified two critical rates of 12.5 Hz and 25 Hz, at which consecutive transient responses overlapped with each other. These stimulus rates roughly corresponded to the rates at which the perceptual quality of the sound envelope is known to change. Low rates (below about 10 Hz) are perceived as loudness fluctuation, medium rates as acoustical flutter, and rates above 25 Hz as roughness. We conclude that these results reflect cortical processes that integrate successive acoustic events at different time scales for extracting complex features of natural sound. Copyright © 2013 Elsevier B.V. All rights reserved.
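A stimulus of the general class described, a two-tone complex whose difference (beat) frequency is swept from 3 Hz to 60 Hz, could be generated roughly as follows; the carrier frequency, sweep law, and duration here are assumed illustrative values, not those used in the study.

import numpy as np

fs, dur = 44100, 10.0                    # sample rate [Hz], duration [s] (assumed)
t = np.arange(int(fs * dur)) / fs
f_carrier = 500.0                        # fixed tone [Hz] (assumed)
beat = 3.0 + (60.0 - 3.0) * t / dur      # linearly swept beat rate [Hz]

# Second tone at f_carrier + beat(t): integrate instantaneous frequency for its phase
phase2 = 2 * np.pi * np.cumsum(f_carrier + beat) / fs
x = 0.5 * np.sin(2 * np.pi * f_carrier * t) + 0.5 * np.sin(phase2)

# The envelope of x fluctuates at the instantaneous difference frequency
# (e.g., inspect np.abs(scipy.signal.hilbert(x)) to verify).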
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khangaonkar, Tarang; Yang, Zhaoqing
2011-01-01
Estuarine and coastal hydrodynamic processes are sometimes neglected in the design and planning of nearshore restoration actions. Despite best intentions, efforts to restore nearshore habitats can result in poor outcomes if circulation and transport, which also affect freshwater-saltwater interactions, are not properly addressed. Limitations due to current land use can lead to selection of sub-optimal restoration alternatives that may result in undesirable consequences, such as flooding, deterioration of water quality, and erosion, requiring immediate remedies and costly repairs. Uncertainty with achieving restoration goals, such as recovery of tidal exchange, supply of sediment and nutrients, and establishment of fish migration pathways, may be minimized by using numerical models designed for application to the nearshore environment. A high resolution circulation and transport model of the Puget Sound, in the state of Washington, was developed to assist with nearshore habitat restoration design and analysis, and to answer the question “can we achieve beneficial restoration outcomes at small local scale, as well as at a large estuary-wide scale?” The Puget Sound model is based on an unstructured grid framework to define the complex Puget Sound shoreline using a finite volume coastal ocean model (FVCOM). The capability of the model for simulating the important nearshore processes, such as circulation in complex multiple tidal channels, wetting and drying of tide flats, and water quality and sediment transport as part of restoration feasibility, is illustrated through examples of restoration projects in Puget Sound.
Processing Complex Sounds Passing through the Rostral Brainstem: The New Early Filter Model
Marsh, John E.; Campbell, Tom A.
2016-01-01
The rostral brainstem receives both “bottom-up” input from the ascending auditory system and “top-down” descending corticofugal connections. Speech information passing through the inferior colliculus of elderly listeners reflects the periodicity envelope of a speech syllable. This information arguably also reflects a composite of temporal-fine-structure (TFS) information from the higher frequency vowel harmonics of that repeated syllable. The amplitude of those higher frequency harmonics, bearing even higher frequency TFS information, correlates positively with the word recognition ability of elderly listeners under reverberatory conditions. Also relevant is that working memory capacity (WMC), which is subject to age-related decline, constrains the processing of sounds at the level of the brainstem. Turning to the effects of a visually presented sensory or memory load on auditory processes, there is a load-dependent reduction of that processing, as manifest in the auditory brainstem responses (ABR) evoked by to-be-ignored clicks. Wave V decreases in amplitude with increases in the visually presented memory load. A visually presented sensory load also produces a load-dependent reduction of a slightly different sort: The sensory load of visually presented information limits the disruptive effects of background sound upon working memory performance. A new early filter model is thus advanced whereby systems within the frontal lobe (affected by sensory or memory load) cholinergically influence top-down corticofugal connections. Those corticofugal connections constrain the processing of complex sounds such as speech at the level of the brainstem. Selective attention thereby limits the distracting effects of background sound entering the higher auditory system via the inferior colliculus. Processing TFS in the brainstem relates to perception of speech under adverse conditions. Attentional selectivity is crucial when the signal heard is degraded or masked: e.g., speech in noise, speech in reverberatory environments. The assumptions of a new early filter model are consistent with these findings: A subcortical early filter, with a predictive selectivity based on acoustical (linguistic) context and foreknowledge, is under cholinergic top-down control. A prefrontal capacity limitation constrains this top-down control, which is guided by the cholinergic processing of contextual information in working memory. PMID:27242396
NASA Astrophysics Data System (ADS)
Chen, Huaiyu; Cao, Li
2017-06-01
To address multiple sound source localization under room reverberation and background noise, we analyze the shortcomings of the traditional broadband MUSIC method and of an ordinary auditory-filtering-based broadband MUSIC method, and propose a new broadband MUSIC algorithm that combines gammatone auditory filtering with controlled selection of frequency components and detection of the ascending segment of the direct-sound component. The proposed algorithm restricts processing to frequency components within the frequency band of interest at the multichannel bandpass filtering stage. Detection of the direct-sound component of each source is also proposed to suppress room-reverberation interference; its merits are fast computation and avoidance of more complex de-reverberation processing. In addition, the pseudo-spectra of the different frequency channels are weighted by their maximum amplitudes in every speech frame. Simulations and experiments in a real reverberant room show that the proposed method performs well. Results for dynamic multiple sound source localization indicate that the proposed algorithm yields a smaller average absolute azimuth error and a histogram with higher angular resolution.
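A minimal sketch of the amplitude-weighted fusion of narrowband MUSIC pseudo-spectra across frequency channels, which is the general idea behind the weighting step described above, is given below; the uniform linear array geometry and plain covariance estimate are assumptions, and the gammatone filterbank and direct-sound ascending-segment detection of the proposed algorithm are not reproduced.

import numpy as np

def music_spectrum(X, f, mic_pos, n_src=1, c=343.0, angles=np.linspace(-90, 90, 181)):
    """Narrowband MUSIC pseudo-spectrum from complex snapshots X (mics x frames) at frequency f [Hz]."""
    R = X @ X.conj().T / X.shape[1]                      # spatial covariance estimate
    w, V = np.linalg.eigh(R)                             # eigenvalues in ascending order
    En = V[:, : X.shape[0] - n_src]                      # noise subspace (smallest eigenvalues)
    p = np.empty(len(angles))
    for i, th in enumerate(np.deg2rad(angles)):
        a = np.exp(-2j * np.pi * f * mic_pos * np.sin(th) / c)   # ULA steering vector
        p[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return angles, p

def fuse_channels(channel_spectra, channel_peak_amps):
    """Weight each channel's pseudo-spectrum by its per-frame maximum amplitude and sum."""
    w = np.asarray(channel_peak_amps, dtype=float)
    return np.sum([wi * p for wi, p in zip(w / w.sum(), channel_spectra)], axis=0)

In practice, channel_spectra would hold one MUSIC pseudo-spectrum per selected frequency channel for the current frame, and channel_peak_amps the corresponding per-channel maximum amplitudes; the fused spectrum's peaks then indicate the estimated azimuths.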
Maturation of the auditory t-complex brain response across adolescence.
Mahajan, Yatin; McArthur, Genevieve
2013-02-01
Adolescence is a time of great change in the brain in terms of structure and function. It is possible to track the development of neural function across adolescence using auditory event-related potentials (ERPs). This study tested if the brain's functional processing of sound changed across adolescence. We measured passive auditory t-complex peaks to pure tones and consonant-vowel (CV) syllables in 90 children and adolescents aged 10-18 years, as well as 10 adults. Across adolescence, Na amplitude increased to tones and speech at the right, but not left, temporal site. Ta amplitude decreased at the right temporal site for tones, and at both sites for speech. The Tb remained constant at both sites. The Na and Ta appeared to mature later in the right than left hemisphere. The t-complex peaks Na and Tb exhibited left lateralization and Ta showed right lateralization. Thus, the functional processing of sound continued to develop across adolescence and into adulthood. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
Teaching MBA Statistics Online: A Pedagogically Sound Process Approach
ERIC Educational Resources Information Center
Grandzol, John R.
2004-01-01
Delivering MBA statistics in the online environment presents significant challenges to educators and students alike because of varying student preparedness levels, complexity of content, difficulty in assessing learning outcomes, and faculty availability and technological expertise. In this article, the author suggests a process model that…
ERIC Educational Resources Information Center
Kyburz-Graber, Regula
2004-01-01
There is a tendency to use case-study research methodology for research issues aiming at simply describing a complex situation, and to draw conclusions with insufficient rigour. Sound case-study research, however, follows discriminate rules which can be described in all the dimensions of a full case-study research process. This paper examines…
Human Time-Frequency Acuity Beats the Fourier Uncertainty Principle
NASA Astrophysics Data System (ADS)
Oppenheim, Jacob N.; Magnasco, Marcelo O.
2013-01-01
The time-frequency uncertainty principle states that the product of the temporal and frequency extents of a signal cannot be smaller than 1/(4π). We study human ability to simultaneously judge the frequency and the timing of a sound. Our subjects often exceeded the uncertainty limit, sometimes by more than tenfold, mostly through remarkable timing acuity. Our results establish a lower bound for the nonlinearity and complexity of the algorithms employed by our brains in parsing transient sounds, rule out simple “linear filter” models of early auditory processing, and highlight timing acuity as a central feature in auditory object processing.
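The 1/(4π) bound quoted above can be checked numerically. The short Python sketch below computes the product of the temporal and spectral standard deviations (of the normalized energy densities) for a Gaussian pulse, which attains the minimum; the sample rate and pulse width are arbitrary illustrative choices.

import numpy as np

fs, dur, sigma = 8000.0, 4.0, 0.05          # sample rate [Hz], time span [s], pulse width [s]
t = np.arange(-dur / 2, dur / 2, 1 / fs)
x = np.exp(-t**2 / (2 * sigma**2))          # Gaussian pulse (minimum-uncertainty signal)

def std_of_density(axis, density):
    density = density / np.trapz(density, axis)
    mean = np.trapz(axis * density, axis)
    return np.sqrt(np.trapz((axis - mean) ** 2 * density, axis))

dt = std_of_density(t, np.abs(x) ** 2)      # temporal spread of |x(t)|^2

X = np.fft.fftshift(np.fft.fft(x))
f = np.fft.fftshift(np.fft.fftfreq(len(x), 1 / fs))
df = std_of_density(f, np.abs(X) ** 2)      # spectral spread of |X(f)|^2

print(dt * df, 1 / (4 * np.pi))             # both are approximately 0.0796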
Expertise with artificial non-speech sounds recruits speech-sensitive cortical regions
Leech, Robert; Holt, Lori L.; Devlin, Joseph T.; Dick, Frederic
2009-01-01
Regions of the human temporal lobe show greater activation for speech than for other sounds. These differences may reflect intrinsically specialized domain-specific adaptations for processing speech, or they may be driven by the significant expertise we have in listening to the speech signal. To test the expertise hypothesis, we used a video-game-based paradigm that tacitly trained listeners to categorize acoustically complex, artificial non-linguistic sounds. Before and after training, we used functional MRI to measure how expertise with these sounds modulated temporal lobe activation. Participants’ ability to explicitly categorize the non-speech sounds predicted the change in pre- to post-training activation in speech-sensitive regions of the left posterior superior temporal sulcus, suggesting that emergent auditory expertise may help drive this functional regionalization. Thus, seemingly domain-specific patterns of neural activation in higher cortical regions may be driven in part by experience-based restructuring of high-dimensional perceptual space. PMID:19386919
Application of Intervention Mapping to the Development of a Complex Physical Therapist Intervention.
Jones, Taryn M; Dear, Blake F; Hush, Julia M; Titov, Nickolai; Dean, Catherine M
2016-12-01
Physical therapist interventions, such as those designed to change physical activity behavior, are often complex and multifaceted. In order to facilitate rigorous evaluation and implementation of these complex interventions into clinical practice, the development process must be comprehensive, systematic, and transparent, with a sound theoretical basis. Intervention Mapping is designed to guide an iterative and problem-focused approach to the development of complex interventions. The purpose of this case report is to demonstrate the application of an Intervention Mapping approach to the development of a complex physical therapist intervention, a remote self-management program aimed at increasing physical activity after acquired brain injury. Intervention Mapping consists of 6 steps to guide the development of complex interventions: (1) needs assessment; (2) identification of outcomes, performance objectives, and change objectives; (3) selection of theory-based intervention methods and practical applications; (4) organization of methods and applications into an intervention program; (5) creation of an implementation plan; and (6) generation of an evaluation plan. The rationale and detailed description of this process are presented using an example of the development of a novel and complex physical therapist intervention, myMoves-a program designed to help individuals with an acquired brain injury to change their physical activity behavior. The Intervention Mapping framework may be useful in the development of complex physical therapist interventions, ensuring the development is comprehensive, systematic, and thorough, with a sound theoretical basis. This process facilitates translation into clinical practice and allows for greater confidence and transparency when the program efficacy is investigated. © 2016 American Physical Therapy Association.
Techniques and instrumentation for the measurement of transient sound energy flux
NASA Astrophysics Data System (ADS)
Watkinson, P. S.; Fahy, F. J.
1983-12-01
The evaluation of sound intensity distributions, and sound powers, of essentially continuous sources such as automotive engines, electric motors, production line machinery, furnaces, earth moving machinery and various types of process plants was studied. Although such systems are important sources of community disturbance and, to a lesser extent, of industrial health hazard, the most serious sources of hearing hazard in industry are machines operating on an impact principle, such as drop forges, hammers and punches. Controlled experiments to identify major noise source regions and mechanisms are difficult because it is normally impossible to install such machines in quiet, anechoic environments. Sound intensity measurement offers a potential means of overcoming these difficulties and has given promising results, indicating the possibility of separating directly radiated and reverberant sound fields. However, because of the complexity of transient sound fields, a fundamental investigation is necessary to establish the practicability of intensity field decomposition, which is basic to source characterization techniques.
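One standard way to estimate sound intensity, which underlies measurement work of this kind, is the two-microphone (p-p) finite-difference method; the following Python sketch illustrates the generic estimate on a synthetic plane wave. The probe spacing and signals are assumptions for illustration, not details taken from the report.

import numpy as np

fs, rho, dx = 48000, 1.21, 0.012            # sample rate [Hz], air density [kg/m^3], mic spacing [m]
t = np.arange(0, 0.05, 1 / fs)

# Synthetic plane wave travelling from mic 1 toward mic 2 (c = 343 m/s)
c, f = 343.0, 250.0
p1 = np.sin(2 * np.pi * f * t)
p2 = np.sin(2 * np.pi * f * (t - dx / c))

p = 0.5 * (p1 + p2)                              # pressure at the probe centre
u = -np.cumsum(p2 - p1) / (rho * dx) / fs        # particle velocity via Euler's equation
intensity = p * u                                # instantaneous intensity [W/m^2]
print(f"time-averaged intensity ~ {intensity.mean():.4f} W/m^2 "
      f"(plane-wave prediction p_rms^2/(rho*c) = {0.5 / (rho * c):.4f})")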
Andreeva, I G; Vartanian, I A
2012-01-01
The ability to evaluate the direction of amplitude changes in sound stimuli was studied in adults and in teenagers aged 11-12 and 15-16 years. The stimuli, sequences of fragments of a 1-kHz tone whose amplitude changed over time, were used as models of approaching and withdrawing sound sources. The 11-12-year-old teenagers made significantly more errors when judging the direction of amplitude change than the other two groups, including in repeated experiments. The structure of the errors, that is, the ratio of errors for stimuli increasing versus decreasing in amplitude, also differed between teenagers and adults. The possible effect of nonspecific activation of the cerebral cortex in teenagers on decision making about complex sound stimuli, including the estimation of approach and withdrawal of a sound source, is discussed.
Auditory connections and functions of prefrontal cortex
Plakke, Bethany; Romanski, Lizabeth M.
2014-01-01
The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931
Perception of environmental sounds by experienced cochlear implant patients.
Shafiro, Valeriy; Gygi, Brian; Cheng, Min-Yu; Vachhani, Jay; Mulvey, Megan
2011-01-01
Environmental sound perception serves an important ecological function by providing listeners with information about objects and events in their immediate environment. Environmental sounds such as car horns, baby cries, or chirping birds can alert listeners to imminent dangers as well as contribute to one's sense of awareness and well being. Perception of environmental sounds as acoustically and semantically complex stimuli may also involve some factors common to the processing of speech. However, very limited research has investigated the abilities of cochlear implant (CI) patients to identify common environmental sounds, despite patients' general enthusiasm about them. This project (1) investigated the ability of patients with modern-day CIs to perceive environmental sounds, (2) explored associations among speech, environmental sounds, and basic auditory abilities, and (3) examined acoustic factors that might be involved in environmental sound perception. Seventeen experienced postlingually deafened CI patients participated in the study. Environmental sound perception was assessed with a large-item test composed of 40 sound sources, each represented by four different tokens. The relationship between speech and environmental sound perception and the role of working memory and some basic auditory abilities were examined based on patient performance on a battery of speech tests (HINT, CNC, and individual consonant and vowel tests), tests of basic auditory abilities (audiometric thresholds, gap detection, temporal pattern, and temporal order for tones tests), and a backward digit recall test. The results indicated substantially reduced ability to identify common environmental sounds in CI patients (45.3%). Except for vowels, all speech test scores significantly correlated with the environmental sound test scores: r = 0.73 for HINT in quiet, r = 0.69 for HINT in noise, r = 0.70 for CNC, r = 0.64 for consonants, and r = 0.48 for vowels. HINT and CNC scores in quiet moderately correlated with the temporal order for tones. However, the correlation between speech and environmental sounds changed little after partialling out the variance due to other variables. Present findings indicate that environmental sound identification is difficult for CI patients. They further suggest that speech and environmental sounds may overlap considerably in their perceptual processing. Certain spectrotemporal processing abilities are separately associated with speech and environmental sound performance. However, they do not appear to mediate the relationship between speech and environmental sounds in CI patients. Environmental sound rehabilitation may be beneficial to some patients. Environmental sound testing may have potential diagnostic applications, especially with difficult-to-test populations and might also be predictive of speech performance for prelingually deafened patients with cochlear implants.
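The partialling-out step referred to above can be illustrated with a generic partial-correlation sketch in Python; the random placeholder scores below stand in for speech, environmental sound, and basic auditory measures and are not the study's data.

import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after removing the linear effect of covariate z."""
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # residuals of x regressed on z
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]   # residuals of y regressed on z
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(2)
z = rng.normal(size=100)                        # e.g., a basic auditory ability score
x = 0.6 * z + rng.normal(size=100)              # e.g., a speech test score
y = 0.6 * z + 0.5 * x + rng.normal(size=100)    # e.g., an environmental sound score
print(np.corrcoef(x, y)[0, 1], partial_corr(x, y, z))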
Jones, S J; Longe, O; Vaz Pato, M
1998-03-01
Examination of the cortical auditory evoked potentials to complex tones changing in pitch and timbre suggests a useful new method for investigating higher auditory processes, in particular those concerned with 'streaming' and auditory object formation. The main conclusions were: (i) the N1 evoked by a sudden change in pitch or timbre was more posteriorly distributed than the N1 at the onset of the tone, indicating at least partial segregation of the neuronal populations responsive to sound onset and spectral change; (ii) the T-complex was consistently larger over the right hemisphere, consistent with clinical and PET evidence for particular involvement of the right temporal lobe in the processing of timbral and musical material; (iii) responses to timbral change were relatively unaffected by increasing the rate of interspersed changes in pitch, suggesting a mechanism for detecting the onset of a new voice in a constantly modulated sound stream; (iv) responses to onset, offset and pitch change of complex tones were relatively unaffected by interfering tones when the latter were of a different timbre, suggesting these responses must be generated subsequent to auditory stream segregation.
Wave field synthesis of moving virtual sound sources with complex radiation properties.
Ahrens, Jens; Spors, Sascha
2011-11-01
An approach to the synthesis of moving virtual sound sources with complex radiation properties in wave field synthesis is presented. The approach exploits the fact that any stationary sound source of finite spatial extent radiates spherical waves at sufficient distance. The angular dependency of the radiation properties of the source under consideration is reflected by the amplitude and phase distribution on the spherical wave fronts. The sound field emitted by a uniformly moving monopole source is derived and the far-field radiation properties of the complex virtual source under consideration are incorporated in order to derive a closed-form expression for the loudspeaker driving signal. The results are illustrated via numerical simulations of the synthesis of the sound field of a sample moving complex virtual source.
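The closed-form driving function itself is not given in the abstract, but the physical ingredient it builds on, the field of a uniformly moving monopole evaluated at the retarded emission time with a convective (Doppler) factor, can be sketched numerically. The Python snippet below is a minimal free-field illustration under assumed values (source speed, frequency, geometry); it is not the authors' wave field synthesis driving signal.

```python
import numpy as np

c = 343.0                     # speed of sound in air (m/s)
f0 = 500.0                    # assumed source frequency (Hz)
v = np.array([20.0, 0.0])     # assumed constant source velocity (m/s), subsonic
x0 = np.array([-10.0, 2.0])   # assumed source position at t = 0 (m)
receiver = np.array([0.0, 0.0])

def retarded_time(t, n_iter=60):
    """Solve |receiver - x_s(tau)| = c * (t - tau) for the emission time tau."""
    speed = np.linalg.norm(v)
    r_now = np.linalg.norm(receiver - (x0 + v * t))
    lo, hi = t - r_now / (c - speed), t   # bracket valid for subsonic motion
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        r_mid = np.linalg.norm(receiver - (x0 + v * mid))
        if r_mid > c * (t - mid):
            hi = mid   # sound emitted at mid would arrive after t: emitted earlier
        else:
            lo = mid
    return 0.5 * (lo + hi)

def pressure(t):
    """Leading-order field: source waveform at the retarded time, scaled by
    geometric spreading and the convective (Doppler) amplification factor."""
    tau = retarded_time(t)
    r_vec = receiver - (x0 + v * tau)
    dist = np.linalg.norm(r_vec)
    mach_r = np.dot(v, r_vec) / (dist * c)   # Mach number toward the receiver
    return np.sin(2 * np.pi * f0 * tau) / (4 * np.pi * dist * (1.0 - mach_r))

fs = 48000
signal = np.array([pressure(t) for t in np.arange(0.0, 0.5, 1.0 / fs)])
```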
ERIC Educational Resources Information Center
Lunetta, Vincent N.; And Others
1984-01-01
Advocates including environmental issues balanced with basic science concepts/processes to provide a sound science foundation. Suggests case studies of regional environmental issues to sensitize/motivate students while reflecting complex nature of science/society issues. Issues considered include: fresh water quality, earthquake prediction,…
Know thy sound: perceiving self and others in musical contexts.
Sevdalis, Vassilis; Keller, Peter E
2014-10-01
This review article provides a summary of the findings from empirical studies that investigated recognition of an action's agent by using music and/or other auditory information. Embodied cognition accounts ground higher cognitive functions in lower level sensorimotor functioning. Action simulation, the recruitment of an observer's motor system and its neural substrates when observing actions, has been proposed to be particularly potent for actions that are self-produced. This review examines evidence for such claims from the music domain. It covers studies in which trained or untrained individuals generated and/or perceived (musical) sounds, and were subsequently asked to identify who was the author of the sounds (e.g., the self or another individual) in immediate (online) or delayed (offline) research designs. The review is structured according to the complexity of auditory-motor information available and includes sections on: 1) simple auditory information (e.g., clapping, piano, drum sounds), 2) complex instrumental sound sequences (e.g., piano/organ performances), and 3) musical information embedded within audiovisual performance contexts, when action sequences are both viewed as movements and/or listened to in synchrony with sounds (e.g., conductors' gestures, dance). This work has proven to be informative in unraveling the links between perceptual-motor processes, supporting embodied accounts of human cognition that address action observation. The reported findings are examined in relation to cues that contribute to agency judgments, and their implications for research concerning action understanding and applied musical practice. Copyright © 2014 Elsevier B.V. All rights reserved.
Music and language perception: expectations, structural integration, and cognitive sequencing.
Tillmann, Barbara
2012-10-01
Music can be described as sequences of events that are structured in pitch and time. Studying music processing provides insight into how complex event sequences are learned, perceived, and represented by the brain. Given the temporal nature of sound, expectations, structural integration, and cognitive sequencing are central in music perception (i.e., which sounds are most likely to come next and at what moment should they occur?). This paper focuses on similarities in music and language cognition research, showing that music cognition research provides insight into the understanding of not only music processing but also language processing and the processing of other structured stimuli. The hypothesis of shared resources between music and language processing and of domain-general dynamic attention has motivated the development of research to test music as a means to stimulate sensory, cognitive, and motor processes. Copyright © 2012 Cognitive Science Society, Inc.
Kello, Christopher T; Bella, Simone Dalla; Médé, Butovens; Balasubramaniam, Ramesh
2017-10-01
Humans talk, sing and play music. Some species of birds and whales sing long and complex songs. All these behaviours and sounds exhibit hierarchical structure-syllables and notes are positioned within words and musical phrases, words and motives in sentences and musical phrases, and so on. We developed a new method to measure and compare hierarchical temporal structures in speech, song and music. The method identifies temporal events as peaks in the sound amplitude envelope, and quantifies event clustering across a range of timescales using Allan factor (AF) variance. AF variances were analysed and compared for over 200 different recordings from more than 16 different categories of signals, including recordings of speech in different contexts and languages, musical compositions and performances from different genres. Non-human vocalizations from two bird species and two types of marine mammals were also analysed for comparison. The resulting patterns of AF variance across timescales were distinct to each of four natural categories of complex sound: speech, popular music, classical music and complex animal vocalizations. Comparisons within and across categories indicated that nested clustering in longer timescales was more prominent when prosodic variation was greater, and when sounds came from interactions among individuals, including interactions between speakers, musicians, and even killer whales. Nested clustering also was more prominent for music compared with speech, and reflected beat structure for popular music and self-similarity across timescales for classical music. In summary, hierarchical temporal structures reflect the behavioural and social processes underlying complex vocalizations and musical performances. © 2017 The Author(s).
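The Allan factor statistic described above is simple to reproduce. The following Python sketch, a rough illustration rather than the authors' analysis code, detects events as peaks in the Hilbert amplitude envelope and computes AF variance over a range of counting-window sizes; the peak threshold, minimum peak separation, and window sizes are placeholder choices.

```python
import numpy as np
from scipy.signal import hilbert, find_peaks

def detect_events(signal, fs, min_separation=0.05):
    """Events = prominent peaks in the Hilbert amplitude envelope (times in s)."""
    envelope = np.abs(hilbert(signal))
    peaks, _ = find_peaks(envelope,
                          height=envelope.mean() + envelope.std(),   # placeholder threshold
                          distance=int(min_separation * fs))
    return peaks / fs

def allan_factor(event_times, window_sizes):
    """AF(T) = E[(N_{i+1} - N_i)^2] / (2 E[N]) for event counts N_i in windows of size T."""
    event_times = np.asarray(event_times)
    duration = event_times.max()
    values = []
    for T in window_sizes:
        edges = np.arange(0.0, duration + T, T)
        counts, _ = np.histogram(event_times, bins=edges)
        values.append(np.mean(np.diff(counts) ** 2) / (2.0 * np.mean(counts)))
    return np.array(values)

# e.g. AF across logarithmically spaced timescales from ~30 ms to ~10 s:
# af = allan_factor(detect_events(x, fs), np.logspace(np.log10(0.03), 1.0, 20))
```

Plotting AF against window size on log-log axes gives the kind of timescale profile that the study compares across signal categories.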
An alternative respiratory sounds classification system utilizing artificial neural networks.
Oweis, Rami J; Abdulhay, Enas W; Khayal, Amer; Awad, Areen
2015-01-01
Computerized lung sound analysis involves recording lung sounds via an electronic device, followed by computer analysis and classification based on specific signal characteristics such as non-linearity and non-stationarity caused by air turbulence. An automatic analysis is necessary to avoid dependence on expert skills. This work exploits autocorrelation in the feature extraction stage. All processing stages were implemented in MATLAB. Classification was performed comparatively using both the artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) toolboxes. The methods were applied to 10 different respiratory sounds. The ANN was superior to the ANFIS system, with accuracy, specificity, and sensitivity of 98.6%, 100%, and 97.8%, respectively, outperforming many recent approaches. The proposed method is thus a promising, efficient, and fast tool for the intended purpose, as shown by its accuracy, specificity, and sensitivity. Furthermore, using the autocorrelation function for feature extraction in such applications enhances performance and avoids unnecessary computational complexity compared with other techniques.
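The abstract reports a MATLAB implementation without algorithmic detail, so the following Python sketch only illustrates the general pipeline it describes, autocorrelation-derived features feeding a small neural network classifier; the number of lags, network size, and cross-validation setup are assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def autocorr_features(signal, n_lags=50):
    """Normalized autocorrelation at the first n_lags lags as a feature vector."""
    x = signal - signal.mean()
    full = np.correlate(x, x, mode="full")
    acf = full[full.size // 2:]          # keep non-negative lags only
    acf = acf / acf[0]                   # normalize so lag 0 equals 1
    return acf[1:n_lags + 1]

def classify(recordings, labels):
    """recordings: list of 1-D arrays; labels: class names (e.g. normal vs. wheeze)."""
    X = np.vstack([autocorr_features(r) for r in recordings])
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
    return cross_val_score(clf, X, labels, cv=5)   # accuracy per fold
```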
Tool-use-associated sound in the evolution of language.
Larsson, Matz
2015-09-01
Proponents of the motor theory of language evolution have primarily focused on the visual domain and communication through observation of movements. In the present paper, it is hypothesized that the production and perception of sound, particularly of incidental sound of locomotion (ISOL) and tool-use sound (TUS), also contributed. Human bipedalism resulted in rhythmic and more predictable ISOL. It has been proposed that this stimulated the evolution of musical abilities, auditory working memory, and abilities to produce complex vocalizations and to mimic natural sounds. Since the human brain proficiently extracts information about objects and events from the sounds they produce, TUS, and mimicry of TUS, might have achieved an iconic function. The prevalence of sound symbolism in many extant languages supports this idea. Self-produced TUS activates multimodal brain processing (motor neurons, hearing, proprioception, touch, vision), and TUS stimulates primate audiovisual mirror neurons, which is likely to stimulate the development of association chains. Tool use and auditory gestures involve motor processing of the forelimbs, which is associated with the evolution of vertebrate vocal communication. The production, perception, and mimicry of TUS may have resulted in a limited number of vocalizations or protowords that were associated with tool use. A new way to communicate about tools, especially when out of sight, would have had selective advantage. A gradual change in acoustic properties and/or meaning could have resulted in arbitrariness and an expanded repertoire of words. Humans have been increasingly exposed to TUS over millions of years, coinciding with the period during which spoken language evolved. ISOL and tool-use-related sound are worth further exploration.
DOT National Transportation Integrated Search
2010-01-01
Transportation Asset Management (TAM) has long been recognized as a sound, long-term approach to managing infrastructure. It provides decision makers with a rational, long-term systematic process for making difficult and complex decisions about how t...
Johnson, Julene K; Chow, Maggie L
2016-01-01
Music is a complex acoustic signal that relies on a number of different brain and cognitive processes to create the sensation of hearing. Changes in hearing function are generally not a major focus of concern for persons with a majority of neurodegenerative diseases associated with dementia, such as Alzheimer disease (AD). However, changes in the processing of sounds may be an early, and possibly preclinical, feature of AD and other neurodegenerative diseases. The aim of this chapter is to review the current state of knowledge concerning hearing and music perception in persons who have a dementia as a result of a neurodegenerative disease. The review focuses on both peripheral and central auditory processing in common neurodegenerative diseases, with a particular focus on the processing of music and other non-verbal sounds. The chapter also reviews music interventions used for persons with neurodegenerative diseases. PMID:25726296
Scanning silence: mental imagery of complex sounds.
Bunzeck, Nico; Wuestenberg, Torsten; Lutz, Kai; Heinze, Hans-Jochen; Jancke, Lutz
2005-07-15
In this functional magnetic resonance imaging (fMRI) study, we investigated the neural basis of mental auditory imagery of familiar complex sounds that did not contain language or music. In the first condition (perception), the subjects watched familiar scenes and listened to the corresponding sounds that were presented simultaneously. In the second condition (imagery), the same scenes were presented silently and the subjects had to mentally imagine the appropriate sounds. During the third condition (control), the participants watched a scrambled version of the scenes without sound. To overcome the disadvantages of stray acoustic scanner noise in auditory fMRI experiments, we applied a sparse temporal sampling technique with five functional clusters that were acquired at the end of each movie presentation. Compared to the control condition, we found bilateral activations in the primary and secondary auditory cortices (including Heschl's gyrus and planum temporale) during perception of complex sounds. In contrast, the imagery condition elicited bilateral hemodynamic responses only in the secondary auditory cortex (including the planum temporale). No significant activity was observed in the primary auditory cortex. The results show that imagery and perception of complex sounds that do not contain language or music rely on overlapping neural correlates of the secondary but not primary auditory cortex.
Assessment and improvement of sound quality in cochlear implant users
Caldwell, Meredith T.; Jiam, Nicole T.
2017-01-01
Objectives Cochlear implants (CIs) have successfully provided speech perception to individuals with sensorineural hearing loss. Recent research has focused on more challenging acoustic stimuli such as music and voice emotion. The purpose of this review is to evaluate and describe sound quality in CI users with the purposes of summarizing novel findings and crucial information about how CI users experience complex sounds. Data Sources Here we review the existing literature on PubMed and Scopus to present what is known about perceptual sound quality in CI users, discuss existing measures of sound quality, explore how sound quality may be effectively studied, and examine potential strategies of improving sound quality in the CI population. Results Sound quality, defined here as the perceived richness of an auditory stimulus, is an attribute of implant‐mediated listening that remains poorly studied. Sound quality is distinct from appraisal, which is generally defined as the subjective likability or pleasantness of a sound. Existing studies suggest that sound quality perception in the CI population is limited by a range of factors, most notably pitch distortion and dynamic range compression. Although there are currently very few objective measures of sound quality, the CI‐MUSHRA has been used as a means of evaluating sound quality. There exist a number of promising strategies to improve sound quality perception in the CI population including apical cochlear stimulation, pitch tuning, and noise reduction processing strategies. Conclusions In the published literature, sound quality perception is severely limited among CI users. Future research should focus on developing systematic, objective, and quantitative sound quality metrics and designing therapies to mitigate poor sound quality perception in CI users. Level of Evidence NA PMID:28894831
NASA Astrophysics Data System (ADS)
Shinn-Cunningham, Barbara
2003-04-01
One of the key functions of hearing is to help us monitor and orient to events in our environment (including those outside the line of sight). The ability to compute the spatial location of a sound source is also important for detecting, identifying, and understanding the content of a sound source, especially in the presence of competing sources from other positions. Determining the spatial location of a sound source poses difficult computational challenges; however, we perform this complex task with proficiency, even in the presence of noise and reverberation. This tutorial will review the acoustic, psychoacoustic, and physiological processes underlying spatial auditory perception. First, the tutorial will examine how the many different features of the acoustic signals reaching a listener's ears provide cues for source direction and distance, both in anechoic and reverberant space. Then we will discuss psychophysical studies of three-dimensional sound localization in different environments and the basic neural mechanisms by which spatial auditory cues are extracted. Finally, ``virtual reality'' approaches for simulating sounds at different directions and distances under headphones will be reviewed. The tutorial will be structured to appeal to a diverse audience with interests in all fields of acoustics and will incorporate concepts from many areas, such as psychological and physiological acoustics, architectural acoustics, and signal processing.
Is Statistical Learning Constrained by Lower Level Perceptual Organization?
Emberson, Lauren L.; Liu, Ran; Zevin, Jason D.
2013-01-01
In order for statistical information to aid in complex developmental processes such as language acquisition, learning from higher-order statistics (e.g. across successive syllables in a speech stream to support segmentation) must be possible while perceptual abilities (e.g. speech categorization) are still developing. The current study examines how perceptual organization interacts with statistical learning. Adult participants were presented with multiple exemplars from novel, complex sound categories designed to reflect some of the spectral complexity and variability of speech. These categories were organized into sequential pairs and presented such that higher-order statistics, defined based on sound categories, could support stream segmentation. Perceptual similarity judgments and multi-dimensional scaling revealed that participants only perceived three perceptual clusters of sounds and thus did not distinguish the four experimenter-defined categories, creating a tension between lower level perceptual organization and higher-order statistical information. We examined whether the resulting pattern of learning is more consistent with statistical learning being “bottom-up,” constrained by the lower levels of organization, or “top-down,” such that higher-order statistical information of the stimulus stream takes priority over the perceptual organization, and perhaps influences perceptual organization. We consistently find evidence that learning is constrained by perceptual organization. Moreover, participants generalize their learning to novel sounds that occupy a similar perceptual space, suggesting that statistical learning occurs based on regions of or clusters in perceptual space. Overall, these results reveal a constraint on learning of sound sequences, such that statistical information is determined based on lower level organization. These findings have important implications for the role of statistical learning in language acquisition. PMID:23618755
A sound advantage: Increased auditory capacity in autism.
Remington, Anna; Fairnie, Jake
2017-09-01
Autism Spectrum Disorder (ASD) has an intriguing auditory processing profile. Individuals show enhanced pitch discrimination, yet often find seemingly innocuous sounds distressing. This study used two behavioural experiments to examine whether an increased capacity for processing sounds in ASD could underlie both the difficulties and enhanced abilities found in the auditory domain. Autistic and non-autistic young adults performed a set of auditory detection and identification tasks designed to tax processing capacity and establish the extent of perceptual capacity in each population. Tasks were constructed to highlight both the benefits and disadvantages of increased capacity. Autistic people were better at detecting additional unexpected and expected sounds (increased distraction and superior performance respectively). This suggests that they have increased auditory perceptual capacity relative to non-autistic people. This increased capacity may offer an explanation for the auditory superiorities seen in autism (e.g. heightened pitch detection). Somewhat counter-intuitively, this same 'skill' could result in the sensory overload that is often reported - which subsequently can interfere with social communication. Reframing autistic perceptual processing in terms of increased capacity, rather than a filtering deficit or inability to maintain focus, increases our understanding of this complex condition, and has important practical implications that could be used to develop intervention programs to minimise the distress that is often seen in response to sensory stimuli. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Dynamic sound localization in cats
Ruhland, Janet L.; Jones, Amy E.
2015-01-01
Sound localization in cats and humans relies on head-centered acoustic cues. Studies have shown that humans are able to localize sounds during rapid head movements that are directed toward the target or other objects of interest. We studied whether cats are able to utilize similar dynamic acoustic cues to localize acoustic targets delivered during rapid eye-head gaze shifts. We trained cats with visual-auditory two-step tasks in which we presented a brief sound burst during saccadic eye-head gaze shifts toward a prior visual target. No consistent or significant differences in accuracy or precision were found between this dynamic task (2-step saccade) and the comparable static task (single saccade when the head is stable) in either horizontal or vertical direction. Cats appear to be able to process dynamic auditory cues and execute complex motor adjustments to accurately localize auditory targets during rapid eye-head gaze shifts. PMID:26063772
A Generalized Mechanism for Perception of Pitch Patterns
Loui, Psyche; Wu, Elaine H.; Wessel, David L.; Knight, Robert T.
2009-01-01
Surviving in a complex and changeable environment relies upon the ability to extract probable recurring patterns. Here we report a neurophysiological mechanism for rapid probabilistic learning of a new system of music. Participants listened to different combinations of tones from a previously-unheard system of pitches based on the Bohlen-Pierce scale, with chord progressions that form 3:1 ratios in frequency, notably different from 2:1 frequency ratios in existing musical systems. Event-related brain potentials elicited by improbable sounds in the new music system showed emergence over a one-hour period of physiological signatures known to index sound expectation in standard Western music. These indices of expectation learning were eliminated when sound patterns were played equiprobably, and co-varied with individual behavioral differences in learning. These results demonstrate that humans utilize a generalized probability-based perceptual learning mechanism to process novel sound patterns in music. PMID:19144845
Research and Studies Directory for Manpower, Personnel, and Training
1988-01-01
Directory excerpt (garbled in extraction); recoverable entries include: 'Psychophysiological Mapping of Cognitive Processes'; 'Control of Biosonar Behavior by the Auditory Cortex' (Suga, N., Washington Univ., St. Louis, MO); 'Dichotic Listening to Complex Sounds: Effects of Stimulus Characteristics and…'.
Sound Fields in Complex Listening Environments
2011-01-01
The conditions of sound fields used in research, especially testing and fitting of hearing aids, are usually simplified or reduced to fundamental physical fields, such as the free or the diffuse sound field. The concepts of such ideal conditions are easily introduced in theoretical and experimental investigations and in models for directional microphones, for example. When it comes to real-world application of hearing aids, however, the field conditions are more complex with regard to specific stationary and transient properties in room transfer functions and the corresponding impulse responses and binaural parameters. Sound fields can be categorized into outdoor (rural and urban) and indoor environments. Furthermore, sound fields in closed spaces of various sizes and shapes and in situations of transport in vehicles, trains, and aircraft are compared with regard to the binaural signals. In laboratory tests, sources of uncertainty include individual differences in binaural cues and insufficiently controlled sound field conditions. Furthermore, laboratory sound fields do not cover the variety of complex sound environments. Spatial audio formats such as higher-order ambisonics are candidates for sound field references not only in room acoustics and audio engineering but also in audiology. PMID:21676999
Ideophones in Japanese modulate the P2 and late positive complex responses
Lockwood, Gwilym; Tuomainen, Jyrki
2015-01-01
Sound-symbolism, or the direct link between sound and meaning, is typologically and behaviorally attested across languages. However, neuroimaging research has mostly focused on artificial non-words or individual segments, which do not represent sound-symbolism in natural language. We used EEG to compare Japanese ideophones, which are phonologically distinctive sound-symbolic lexical words, and arbitrary adverbs during a sentence reading task. Ideophones elicit a larger visual P2 response than arbitrary adverbs, as well as a sustained late positive complex. Our results and previous literature suggest that the larger P2 may indicate the integration of sound and sensory information by association in response to the distinctive phonology of ideophones. The late positive complex may reflect the facilitated lexical retrieval of arbitrary words in comparison to ideophones. This account provides new evidence that ideophones exhibit similar cross-modal correspondences to those which have been proposed for non-words and individual sounds. PMID:26191031
Acoustic simulation in architecture with parallel algorithm
NASA Astrophysics Data System (ADS)
Li, Xiaohong; Zhang, Xinrong; Li, Dan
2004-03-01
To address the complexity of architectural environments and the need for real-time simulation of architectural acoustics, a parallel radiosity algorithm was developed. The distribution of sound energy in the scene is solved with this method. The impulse responses between sources and receivers for each frequency segment, calculated in separate processes, are then combined into a whole-frequency response. Numerical experiments show that the parallel algorithm can improve the efficiency of acoustic simulation for complex scenes.
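The abstract gives only the parallel structure (per-frequency-segment responses computed in separate processes, then combined into a whole-frequency response). The snippet below mirrors that outline in Python; the octave-band split and the `band_impulse_response` stub are purely illustrative stand-ins for the radiosity solver itself.

```python
import numpy as np
from multiprocessing import Pool

def band_impulse_response(band):
    """Placeholder for the per-band radiosity solution.
    A real implementation would exchange sound energy between surface patches here;
    this stub only returns a decaying dummy response of the requested length."""
    f_lo, f_hi, n_taps = band
    rng = np.random.default_rng(int(f_lo + f_hi))
    return rng.standard_normal(n_taps) * np.exp(-np.arange(n_taps) / n_taps)

def full_response(bands):
    """Compute each frequency band in a separate process and sum into a broadband response."""
    with Pool() as pool:
        partials = pool.map(band_impulse_response, bands)
    return np.sum(partials, axis=0)

if __name__ == "__main__":
    bands = [(125 * 2 ** k, 250 * 2 ** k, 48000) for k in range(6)]  # assumed octave bands
    ir = full_response(bands)
```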
Da Costa, Sandra; Bourquin, Nathalie M.-P.; Knebel, Jean-François; Saenz, Melissa; van der Zwaag, Wietske; Clarke, Stephanie
2015-01-01
Environmental sounds are highly complex stimuli whose recognition depends on the interaction of top-down and bottom-up processes in the brain. Their semantic representations were shown to yield repetition suppression effects, i. e. a decrease in activity during exposure to a sound that is perceived as belonging to the same source as a preceding sound. Making use of the high spatial resolution of 7T fMRI we have investigated the representations of sound objects within early-stage auditory areas on the supratemporal plane. The primary auditory cortex was identified by means of tonotopic mapping and the non-primary areas by comparison with previous histological studies. Repeated presentations of different exemplars of the same sound source, as compared to the presentation of different sound sources, yielded significant repetition suppression effects within a subset of early-stage areas. This effect was found within the right hemisphere in primary areas A1 and R as well as two non-primary areas on the antero-medial part of the planum temporale, and within the left hemisphere in A1 and a non-primary area on the medial part of Heschl’s gyrus. Thus, several, but not all early-stage auditory areas encode the meaning of environmental sounds. PMID:25938430
Garland, Ellen C; Rendell, Luke; Lilley, Matthew S; Poole, M Michael; Allen, Jenny; Noad, Michael J
2017-07-01
Identifying and quantifying variation in vocalizations is fundamental to advancing our understanding of processes such as speciation, sexual selection, and cultural evolution. The song of the humpback whale (Megaptera novaeangliae) presents an extreme example of complexity and cultural evolution. It is a long, hierarchically structured vocal display that undergoes constant evolutionary change. Obtaining robust metrics to quantify song variation at multiple scales (from a sound through to population variation across the seascape) is a substantial challenge. Here, the authors present a method to quantify song similarity at multiple levels within the hierarchy. To incorporate the complexity of these multiple levels, the calculation of similarity is weighted by measurements of sound units (lower levels within the display) to bridge the gap in information between upper and lower levels. Results demonstrate that the inclusion of weighting provides a more realistic and robust representation of song similarity at multiple levels within the display. This method permits robust quantification of cultural patterns and processes that will also contribute to the conservation management of endangered humpback whale populations, and is applicable to any hierarchically structured signal sequence.
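The abstract does not spell out the weighting scheme, so the sketch below is only a generic illustration of the idea of weighting sequence similarity by sound-unit measurements: a Levenshtein-style alignment whose substitution costs depend on the acoustic distance between units. The feature choices and normalization are assumptions, not the published method.

```python
import numpy as np

def unit_distance(a, b):
    """Acoustic distance between two sound units described by feature vectors
    (e.g. duration, peak frequency); the features are illustrative only."""
    return np.linalg.norm(np.asarray(a, dtype=float) - np.asarray(b, dtype=float))

def weighted_similarity(seq1, seq2):
    """Alignment where substitution cost is the scaled distance between units."""
    n, m = len(seq1), len(seq2)
    d = np.zeros((n + 1, m + 1))
    d[:, 0] = np.arange(n + 1)
    d[0, :] = np.arange(m + 1)
    scale = max(unit_distance(a, b) for a in seq1 for b in seq2) or 1.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = unit_distance(seq1[i - 1], seq2[j - 1]) / scale
            d[i, j] = min(d[i - 1, j] + 1,        # deletion
                          d[i, j - 1] + 1,        # insertion
                          d[i - 1, j - 1] + sub)  # measurement-weighted substitution
    return 1.0 - d[n, m] / max(n, m)              # 1 = identical, 0 = maximally different

# Example: phrases as lists of (duration_s, peak_frequency_Hz) unit measurements
# sim = weighted_similarity([(0.8, 400), (1.2, 150)], [(0.9, 420), (1.0, 160)])
```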
Newborn infants detect cues of concurrent sound segregation.
Bendixen, Alexandra; Háden, Gábor P; Németh, Renáta; Farkas, Dávid; Török, Miklós; Winkler, István
2015-01-01
Separating concurrent sounds is fundamental for a veridical perception of one's auditory surroundings. Sound components that are harmonically related and start at the same time are usually grouped into a common perceptual object, whereas components that are not in harmonic relation or have different onset times are more likely to be perceived in terms of separate objects. Here we tested whether neonates are able to pick up the cues supporting this sound organization principle. We presented newborn infants with a series of complex tones with their harmonics in tune (creating the percept of a unitary sound object) and with manipulated variants, which gave the impression of two concurrently active sound sources. The manipulated variant had either one mistuned partial (single-cue condition) or the onset of this mistuned partial was also delayed (double-cue condition). Tuned and manipulated sounds were presented in random order with equal probabilities. Recording the neonates' electroencephalographic responses allowed us to evaluate their processing of the sounds. Results show that, in both conditions, mistuned sounds elicited a negative displacement of the event-related potential (ERP) relative to tuned sounds from 360 to 400 ms after sound onset. The mistuning-related ERP component resembles the object-related negativity (ORN) component in adults, which is associated with concurrent sound segregation. Delayed onset additionally led to a negative displacement from 160 to 200 ms, which was probably more related to the physical parameters of the sounds than to their perceptual segregation. The elicitation of an ORN-like response in newborn infants suggests that neonates possess the basic capabilities of segregating concurrent sounds by detecting inharmonic relations between the co-occurring sounds. © 2015 S. Karger AG, Basel.
Processing of Communication Sounds: Contributions of Learning, Memory, and Experience
Bigelow, James; Rossi, Breein
2013-01-01
Abundant evidence from both field and lab studies has established that conspecific vocalizations (CVs) are of critical ecological significance for a wide variety of species, including humans, nonhuman primates, rodents, and other mammals and birds. Correspondingly, a number of experiments have demonstrated behavioral processing advantages for CVs, such as in discrimination and memory tasks. Further, a wide range of experiments have described brain regions in many species that appear to be specialized for processing CVs. For example, several neural regions have been described in both mammals and birds wherein greater neural responses are elicited by CVs than by comparison stimuli such as heterospecific vocalizations, nonvocal complex sounds, and artificial stimuli. These observations raise the question of whether these regions reflect domain-specific neural mechanisms dedicated to processing CVs, or alternatively, if these regions reflect domain-general neural mechanisms for representing complex sounds of learned significance. Inasmuch as CVs can be viewed as complex combinations of basic spectrotemporal features, the plausibility of the latter position is supported by a large body of literature describing modulated cortical and subcortical representation of a variety of acoustic features that have been experimentally associated with stimuli of natural behavioral significance (such as food rewards). Herein, we review a relatively small body of existing literature describing the roles of experience, learning, and memory in the emergence of species-typical neural representations of CVs and auditory system plasticity. In both songbirds and mammals, manipulations of auditory experience as well as specific learning paradigms are shown to modulate neural responses evoked by CVs, either in terms of overall firing rate or temporal firing patterns. In some cases, CV-sensitive neural regions gradually acquire representation of non-CV stimuli with which subjects have training and experience. These results parallel literature in humans describing modulation of responses in face-sensitive neural regions through learning and experience. Thus, although many questions remain, the available evidence is consistent with the notion that CVs may acquire distinct neural representation through domain-general mechanisms for representing complex auditory objects that are of learned importance to the animal. PMID:23792078
Auditory reafferences: the influence of real-time feedback on movement control.
Kennel, Christian; Streese, Lukas; Pizzera, Alexandra; Justen, Christoph; Hohmann, Tanja; Raab, Markus
2015-01-01
Auditory reafferences are real-time auditory products created by a person's own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined if step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with non-artificial auditory cues. Our results support the existing theoretical understanding of action-perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in the motor learning processes.
The Influence of Sound Cues on the Maintenance of Temporal Organization in the Sprague-Dawley Rat
NASA Technical Reports Server (NTRS)
Winget, C. M.; Moeller, K. A.; Holley, D. C.; Souza, Kenneth A. (Technical Monitor)
1994-01-01
Temporal organization is a fundamental property of living matter. From single cells to complex animals including man, most physiological systems undergo daily periodic changes in concert with environmental cues (e.g., light, temperature). It is known that pulsed environmental synchronizers (zeitgebers, e.g. light) can modify rhythm parameters. Rhythm stability is a necessary requirement for most animal experiments. The extent to which sound can influence the circadian system of laboratory rats is poorly understood, which has implications for animal habitats in the novel environments of the Space Laboratory or Space Station. A series of three white noise (88 +/- 0.82 dB) zeitgeber experiments was conducted (n = 6/experiment). The sound cue was introduced during the circadian free-running phase (DD-NQ), and in one additional case sound was added to the usual photoperiod (12L:12D) to determine masking effects. Circadian rhythm parameters of drinking frequency, feeding frequency, and gross locomotor activity were continuously monitored. Data analysis for these studies included macroscopic and microscopic methods. Raster plots, used to visually detect entrainment versus a free-running period, were plotted for each animal, for all three parameters, during all sound perturbations tested. These data were processed through a series of detrending (robust locally weighted regression) and complex demodulation analyses. In summary, these findings show that periodic "white" noise "influences" the rat's circadian system but does not "entrain" the feeding, drinking, or locomotor activity rhythms.
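Complex demodulation, one of the analysis steps mentioned, extracts a slowly varying amplitude and phase around an assumed rhythm frequency. The Python sketch below is a generic textbook version (24-h demodulation frequency, low-order Butterworth smoothing), not the specific pipeline used in this study.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def complex_demodulate(x, fs_per_hour, period_hours=24.0, cutoff_per_hour=1.0 / 72.0):
    """Return instantaneous amplitude and phase of the ~24-h component of x.

    x           : evenly sampled activity/drinking/feeding counts
    fs_per_hour : samples per hour; cutoff_per_hour is a placeholder smoothing choice
    """
    t = np.arange(len(x)) / fs_per_hour          # time in hours
    f0 = 1.0 / period_hours                      # demodulation frequency (cycles/hour)
    shifted = (x - np.mean(x)) * np.exp(-2j * np.pi * f0 * t)
    # Low-pass filter to keep only the slowly varying modulation envelope
    b, a = butter(2, cutoff_per_hour / (fs_per_hour / 2.0))
    smooth = filtfilt(b, a, shifted.real) + 1j * filtfilt(b, a, shifted.imag)
    amplitude = 2.0 * np.abs(smooth)             # recovers the rhythm's peak-to-mean amplitude
    phase = np.angle(smooth)                     # drifting phase indicates free-running
    return amplitude, phase
```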
[Sound improves distinction of low intensities of light in the visual cortex of a rabbit].
Polianskiĭ, V B; Alymkulov, D E; Evtikhin, D V; Chernyshev, B V
2011-01-01
Electrodes were implanted into the cranium above the primary visual cortex of four rabbits (Oryctolagus cuniculus). In the first stage, visual evoked potentials (VEPs) were recorded in response to substitution of threshold visual stimuli (0.28 and 0.31 cd/m2). Then a sound (2000 Hz, 84 dB, duration 40 ms) was added simultaneously to every visual stimulus. Single sounds (without visual stimuli) did not produce a VEP response. The amplitude of the VEP component N1 (85-110 ms) in response to complex (visual plus sound) stimuli increased 1.6 times as compared with "simple" visual stimulation. In the second stage, paired substitutions of 8 different visual stimuli (range 0.38-20.2 cd/m2) by each other were performed. Sensory spaces of intensity were reconstructed on the basis of factor analysis, and sensory spaces of the complexes were reconstructed in a similar way for simultaneous visual and sound stimulation. Comparison of the vectors representing the stimuli in these spaces showed that the addition of a sound led to a 1.4-fold expansion of the space occupied by the smaller intensities (0.28; 1.02; 3.05; 6.35 cd/m2) and arranged the intensities in ascending order. At the same time, the sound narrowed the space of the larger intensities (8.48; 13.7; 16.8; 20.2 cd/m2) by a factor of 1.33. It is suggested that the addition of a sound improves the distinction of smaller intensities and impairs the distinction of larger intensities. The sensory spaces revealed by the complex stimuli were two-dimensional, which may be a consequence of the integration of sound and light into a unified complex during simultaneous stimulation.
Auditory Discrimination of Frequency Ratios: The Octave Singularity
ERIC Educational Resources Information Center
Bonnard, Damien; Micheyl, Christophe; Semal, Catherine; Dauman, Rene; Demany, Laurent
2013-01-01
Sensitivity to frequency ratios is essential for the perceptual processing of complex sounds and the appreciation of music. This study assessed the effect of ratio simplicity on ratio discrimination for pure tones presented either simultaneously or sequentially. Each stimulus consisted of four 100-ms pure tones, equally spaced in terms of…
Effects of Voice Harmonic Complexity on ERP Responses to Pitch-Shifted Auditory Feedback
Behroozmand, Roozbeh; Korzyukov, Oleg; Larson, Charles R.
2011-01-01
Objective The present study investigated the neural mechanisms of voice pitch control for different levels of harmonic complexity in the auditory feedback. Methods Event-related potentials (ERPs) were recorded in response to +200 cents pitch perturbations in the auditory feedback of self-produced natural human vocalizations, complex and pure tone stimuli during active vocalization and passive listening conditions. Results During active vocal production, ERP amplitudes were largest in response to pitch shifts in the natural voice, moderately large for non-voice complex stimuli and smallest for the pure tones. However, during passive listening, neural responses were equally large for pitch shifts in voice and non-voice complex stimuli but still larger than that for pure tones. Conclusions These findings suggest that pitch change detection is facilitated for spectrally rich sounds such as natural human voice and non-voice complex stimuli compared with pure tones. Vocalization-induced increase in neural responses for voice feedback suggests that sensory processing of naturally-produced complex sounds such as human voice is enhanced by means of motor-driven mechanisms (e.g. efference copies) during vocal production. Significance This enhancement may enable the audio-vocal system to more effectively detect and correct for vocal errors in the feedback of natural human vocalizations to maintain an intended vocal output for speaking. PMID:21719346
Testing the dual-pathway model for auditory processing in human cortex.
Zündorf, Ida C; Lewald, Jörg; Karnath, Hans-Otto
2016-01-01
Analogous to the visual system, auditory information has been proposed to be processed in two largely segregated streams: an anteroventral ("what") pathway mainly subserving sound identification and a posterodorsal ("where") stream mainly subserving sound localization. Despite the popularity of this assumption, the degree of separation of spatial and non-spatial auditory information processing in cortex is still under discussion. In the present study, a statistical approach was implemented to investigate potential behavioral dissociations for spatial and non-spatial auditory processing in stroke patients, and voxel-wise lesion analyses were used to uncover their neural correlates. The results generally provided support for anatomically and functionally segregated auditory networks. However, some degree of anatomo-functional overlap between "what" and "where" aspects of processing was found in the superior pars opercularis of right inferior frontal gyrus (Brodmann area 44), suggesting the potential existence of a shared target area of both auditory streams in this region. Moreover, beyond the typically defined posterodorsal stream (i.e., posterior superior temporal gyrus, inferior parietal lobule, and superior frontal sulcus), occipital lesions were found to be associated with sound localization deficits. These results, indicating anatomically and functionally complex cortical networks for spatial and non-spatial auditory processing, are roughly consistent with the dual-pathway model of auditory processing in its original form, but argue for the need to refine and extend this widely accepted hypothesis. Copyright © 2015 Elsevier Inc. All rights reserved.
Shaping reverberating sound fields with an actively tunable metasurface.
Ma, Guancong; Fan, Xiying; Sheng, Ping; Fink, Mathias
2018-06-26
A reverberating environment is a common complex medium for airborne sound, with familiar examples such as music halls and lecture theaters. The complexity of reverberating sound fields has hindered their meaningful control. Here, by combining acoustic metasurface and adaptive wavefield shaping, we demonstrate the versatile control of reverberating sound fields in a room. This is achieved through the design and the realization of a binary phase-modulating spatial sound modulator that is based on an actively reconfigurable acoustic metasurface. We demonstrate useful functionalities including the creation of quiet zones and hotspots in a typical reverberating environment. Copyright © 2018 the Author(s). Published by PNAS.
A complex baleen whale call recorded in the Mariana Trench Marine National Monument.
Nieukirk, Sharon L; Fregosi, Selene; Mellinger, David K; Klinck, Holger
2016-09-01
In fall 2014 and spring 2015, passive acoustic data were collected via autonomous gliders east of Guam in an area that included the Mariana Trench Marine National Monument. A short (2-4 s), complex sound was recorded that features a ∼38 Hz moan with both harmonics and amplitude modulation, followed by broad-frequency metallic-sounding sweeps up to 7.5 kHz. This sound was recorded regularly during both fall and spring surveys. Aurally, the sound is quite unusual and most resembles the minke whale "Star Wars" call. It is likely this sound is biological and produced by a baleen whale.
Auditory spatial processing in Alzheimer’s disease
Golden, Hannah L.; Nicholas, Jennifer M.; Yong, Keir X. X.; Downey, Laura E.; Schott, Jonathan M.; Mummery, Catherine J.; Crutch, Sebastian J.
2015-01-01
The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer’s disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer’s disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12) in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in detection of auditory motion, and stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer’s disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer’s disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer’s disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer’s disease syndromic spectrum. PMID:25468732
A Corticothalamic Circuit Model for Sound Identification in Complex Scenes
Otazu, Gonzalo H.; Leibold, Christian
2011-01-01
The identification of the sound sources present in the environment is essential for the survival of many animals. However, these sounds are not presented in isolation, as natural scenes consist of a superposition of sounds originating from multiple sources. The identification of a source under these circumstances is a complex computational problem that is readily solved by most animals. We present a model of the thalamocortical circuit that performs level-invariant recognition of auditory objects in complex auditory scenes. The circuit identifies the objects present from a large dictionary of possible elements and operates reliably for real sound signals with multiple concurrently active sources. The key model assumption is that the activities of some cortical neurons encode the difference between the observed signal and an internal estimate. Reanalysis of awake auditory cortex recordings revealed neurons with patterns of activity corresponding to such an error signal. PMID:21931668
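The model's key assumption, cortical activity encoding the difference between the observed signal and an internal estimate assembled from a dictionary of known sources, is closely related to error-driven (predictive or sparse coding) schemes. The sketch below is a generic projected-gradient version of that idea with an assumed step size and non-negativity constraint; it is not the published thalamocortical circuit.

```python
import numpy as np

def identify_sources(observation, dictionary, n_steps=200, eta=0.05):
    """Estimate non-negative activations a such that dictionary @ a approximates the observation.

    observation : observed spectrum, shape (n_freq,)
    dictionary  : known source spectra as columns, shape (n_freq, n_sources)
    eta must be small relative to the dictionary column norms for the updates to converge.
    Returns (activations, final error signal).
    """
    a = np.zeros(dictionary.shape[1])
    for _ in range(n_steps):
        error = observation - dictionary @ a                   # "error-coding" population
        a = np.maximum(0.0, a + eta * dictionary.T @ error)    # error-driven update, clipped at 0
    return a, observation - dictionary @ a
```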
Efficient Geometric Sound Propagation Using Visibility Culling
NASA Astrophysics Data System (ADS)
Chandak, Anish
2011-07-01
Simulating propagation of sound can improve the sense of realism in interactive applications such as video games and can lead to better designs in engineering applications such as architectural acoustics. In this thesis, we present geometric sound propagation techniques which are faster than prior methods and map well to upcoming parallel multi-core CPUs. We model specular reflections by using the image-source method and model finite-edge diffraction by using the well-known Biot-Tolstoy-Medwin (BTM) model. We accelerate the computation of specular reflections by applying novel visibility algorithms, FastV and AD-Frustum, which compute visibility from a point. We accelerate finite-edge diffraction modeling by applying a novel visibility algorithm which computes visibility from a region. Our visibility algorithms are based on frustum tracing and exploit recent advances in fast ray-hierarchy intersections, data-parallel computations, and scalable, multi-core algorithms. The AD-Frustum algorithm adapts its computation to the scene complexity and allows small errors in computing specular reflection paths for higher computational efficiency. FastV and our visibility algorithm from a region are general, object-space, conservative visibility algorithms that together significantly reduce the number of image sources compared to other techniques while preserving the same accuracy. Our geometric propagation algorithms are an order of magnitude faster than prior approaches for modeling specular reflections and two to ten times faster for modeling finite-edge diffraction. Our algorithms are interactive, scale almost linearly on multi-core CPUs, and can handle large, complex, and dynamic scenes. We also compare the accuracy of our sound propagation algorithms with other methods. Once sound propagation is performed, it is desirable to listen to the propagated sound in interactive and engineering applications. We can generate smooth, artifact-free output audio signals by applying efficient audio-processing algorithms. We also present the first efficient audio-processing algorithm for scenarios with simultaneously moving source and moving receiver (MS-MR) which incurs less than 25% overhead compared to static source and moving receiver (SS-MR) or moving source and static receiver (MS-SR) scenario.
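The image-source method used for specular reflections has a well-known closed form for a rectangular (shoebox) room. The sketch below is that standard textbook version in Python, assuming a single broadband reflection coefficient for all walls; it includes none of the visibility culling, diffraction modeling, or audio-processing contributions described in the thesis.

```python
import numpy as np
from itertools import product

c = 343.0      # speed of sound (m/s)
beta = 0.8     # assumed broadband wall reflection coefficient (all walls)
fs = 16000     # sample rate of the impulse response

def shoebox_impulse_response(room, src, rec, max_order=3, length_s=0.3):
    """Image-source impulse response of a rectangular room (specular reflections only)."""
    room, src, rec = map(np.asarray, (room, src, rec))
    h = np.zeros(int(length_s * fs))
    axis_images = []
    for L, s in zip(room, src):
        images = []
        for q, m in product((0, 1), range(-max_order, max_order + 1)):
            pos = (1 - 2 * q) * s + 2 * m * L        # mirror position along this axis
            n_refl = abs(m) + abs(m - q)             # wall hits along this axis
            images.append((pos, n_refl))
        axis_images.append(images)
    for (x, nx), (y, ny), (z, nz) in product(*axis_images):
        order = nx + ny + nz
        if order > max_order:
            continue
        d = np.linalg.norm(np.array([x, y, z]) - rec)
        sample = int(round(d / c * fs))
        if sample < len(h):
            # each image adds a delayed, attenuated copy of the source signal
            h[sample] += beta ** order / (4 * np.pi * d)
    return h

# Example (assumed geometry): 6 x 4 x 3 m room
# h = shoebox_impulse_response([6, 4, 3], src=[2, 1, 1.5], rec=[4, 3, 1.2])
```

Even this brute-force version shows why visibility culling matters: the number of candidate image sources grows combinatorially with reflection order, and in complex, non-rectangular scenes most of them are occluded.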
Mercury in Sediment, Water, and Biota of Sinclair Inlet, Puget Sound, Washington, 1989-2007
Paulson, Anthony J.; Keys, Morgan E.; Scholting, Kelly L.
2010-01-01
Historical records of mercury contamination in dated sediment cores from Sinclair Inlet are coincidental with activities at the U.S. Navy Puget Sound Naval Shipyard; peak total mercury concentrations occurred around World War II. After World War II, better metallurgical management practices and environmental regulations reduced mercury contamination, but total mercury concentrations in surface sediment of Sinclair Inlet have decreased slowly because of the low rate of sedimentation relative to the vertical mixing within sediment. The slopes of linear regressions between the total mercury and total organic carbon concentrations of sediment offshore of Puget Sound urban areas was the best indicator of general mercury contamination above pre-industrial levels. Prior to the 2000-01 remediation, this indicator placed Sinclair Inlet in the tier of estuaries with the highest level of mercury contamination, along with Bellingham Bay in northern Puget Sound and Elliott Bay near Seattle. This indicator also suggests that the 2000/2001 remediation dredging had significant positive effect on Sinclair Inlet as a whole. In 2007, about 80 percent of the area of the Bremerton naval complex had sediment total mercury concentrations within about 0.5 milligrams per kilogram of the Sinclair Inlet regression. Three areas adjacent to the waterfront of the Bremerton naval complex have total mercury concentrations above this range and indicate a possible terrestrial source from waterfront areas of Bremerton naval complex. Total mercury concentrations in unfiltered Sinclair Inlet marine waters are about three times higher than those of central Puget Sound, but the small numbers of samples and complex physical and geochemical processes make it difficult to interpret the geographical distribution of mercury in marine waters from Sinclair Inlet. Total mercury concentrations in various biota species were compared among geographical locations and included data of composite samples, individual specimens, and caged mussels. Total mercury concentrations in muscle and liver of English sole from Sinclair Inlet ranked in the upper quarter and third, respectively, of Puget Sound locations. For other species, concentrations from Sinclair Inlet were within the mid-range of locations (for example, Chinook salmon). Total mercury concentrations of the long-lived and higher trophic rockfish in composites and individual specimens from Sinclair Inlet tended to be the highest in Puget Sound. For a given size, sand sole, graceful crab, staghorn sculpin, surf perch, and sea cucumber individuals collected from Sinclair Inlet had higher total mercury concentrations than individuals collected from non-urban estuaries. Total mercury concentrations in individual English sole and ratfish were not significantly different than in individuals of various sizes collected from either urban or non-urban estuaries in Puget Sound. Total mercury concentrations in English sole collected from Sinclair Inlet after the 2000-2001 dredging appear to have lower total mercury concentrations than those collected before (1996) the dredging project. The highest total mercury concentrations of mussels caged in 2002 were not within the Bremerton naval complex, but within the Port Orchard Marina and inner Sinclair Inlet.
Ouimet, Tia; Foster, Nicholas E V; Tryfon, Ana; Hyde, Krista L
2012-04-01
Autism spectrum disorder (ASD) is a complex neurodevelopmental condition characterized by atypical social and communication skills, repetitive behaviors, and atypical visual and auditory perception. Studies in vision have reported enhanced detailed ("local") processing but diminished holistic ("global") processing of visual features in ASD. Individuals with ASD also show enhanced processing of simple visual stimuli but diminished processing of complex visual stimuli. Relative to the visual domain, auditory global-local distinctions, and the effects of stimulus complexity on auditory processing in ASD, are less clear. However, one remarkable finding is that many individuals with ASD have enhanced musical abilities, such as superior pitch processing. This review provides a critical evaluation of behavioral and brain imaging studies of auditory processing with respect to current theories in ASD. We have focused on auditory-musical processing in terms of global versus local processing and simple versus complex sound processing. This review contributes to a better understanding of auditory processing differences in ASD. A deeper comprehension of sensory perception in ASD is key to better defining ASD phenotypes and, in turn, may lead to better interventions. © 2012 New York Academy of Sciences.
Psychophysical evidence for auditory motion parallax.
Genzel, Daria; Schutte, Michael; Brimijoin, W Owen; MacNeilage, Paul R; Wiegrebe, Lutz
2018-04-17
Distance is important: From an ecological perspective, knowledge about the distance to either prey or predator is vital. However, the distance of an unknown sound source is particularly difficult to assess, especially in anechoic environments. In vision, changes in perspective resulting from observer motion produce a reliable, consistent, and unambiguous impression of depth known as motion parallax. Here we demonstrate with formal psychophysics that humans can exploit auditory motion parallax, i.e., the change in the dynamic binaural cues elicited by self-motion, to assess the relative depths of two sound sources. Our data show that sensitivity to relative depth is best when subjects move actively; performance deteriorates when subjects are moved by a motion platform or when the sound sources themselves move. This is true even though the dynamic binaural cues elicited by these three types of motion are identical. Our data demonstrate a perceptual strategy to segregate intermittent sound sources in depth and highlight the tight interaction between self-motion and binaural processing that allows assessment of the spatial layout of complex acoustic scenes.
Effects of irrelevant sounds on phonological coding in reading comprehension and short-term memory.
Boyle, R; Coltheart, V
1996-05-01
The effects of irrelevant sounds on reading comprehension and short-term memory were studied in two experiments. In Experiment 1, adults judged the acceptability of written sentences during irrelevant speech, accompanied and unaccompanied singing, and instrumental music, and in silence. Sentences varied in syntactic complexity: Simple sentences contained a right-branching relative clause (The applause pleased the woman that gave the speech) and syntactically complex sentences included a centre-embedded relative clause (The hay that the farmer stored fed the hungry animals). Unacceptable sentences either sounded acceptable (The dog chased the cat that eight up all his food) or did not (The man praised the child that sight up his spinach). Decision accuracy was impaired by syntactic complexity but not by irrelevant sounds. Phonological coding was indicated by increased errors on unacceptable sentences that sounded correct. These error rates were unaffected by irrelevant sounds. Experiment 2 examined effects of irrelevant sounds on ordered recall of phonologically similar and dissimilar word lists. Phonological similarity impaired recall. Irrelevant speech reduced recall but did not interact with phonological similarity. The results of these experiments question assumptions about the relationship between speech input and phonological coding in reading and the short-term store.
ERIC Educational Resources Information Center
Campo, Ana E.; Williams, Virginia; Williams, Redford B.; Segundo, Marisol A.; Lydston, David; Weiss, Stephen M.
2008-01-01
Objective: Sound clinical judgment is the cornerstone of medical practice and begins early during medical education. The authors consider the effect of personality characteristics (hostility, anger, cynicism) on clinical judgment and whether a brief intervention can affect this process. Methods: Two sophomore medical classes (experimental,…
Helping Students Manage Emotions: REBT as a Mental Health Educational Curriculum
ERIC Educational Resources Information Center
Banks, Tachelle
2011-01-01
In preparing children to deal with life in an increasingly complex society, it is important that schools devote attention to well-organised and theoretically sound programmes employing a preventive approach to mental health. Rational Emotive Behaviour Therapy (REBT), as indicated in its name, incorporates changes to thought processes and…
ERIC Educational Resources Information Center
Visto, Jane C.; And Others
1996-01-01
Ten children (ages 12-16) with specific language impairments (SLI) and controls matched for chronological or language age were tested with measures of complex sound localization involving the precedence effect phenomenon. SLI children exhibited tracking skills similar to language-age matched controls, indicating impairment in their ability to use…
Phylogenetic review of tonal sound production in whales in relation to sociality
May-Collado, Laura J; Agnarsson, Ingi; Wartzok, Douglas
2007-01-01
Background: It is widely held that in toothed whales, high frequency tonal sounds called 'whistles' evolved in association with 'sociality' because in delphinids they are used in a social context. Recently, whistles were hypothesized to be an evolutionary innovation of social dolphins (the 'dolphin hypothesis'). However, both 'whistles' and 'sociality' are broad concepts, each representing a conglomerate of characters. Many non-delphinids, whether solitary or social, produce tonal sounds that share most of the acoustic characteristics of delphinid whistles. Furthermore, hypotheses of character correlation are best tested in a phylogenetic context, which has hitherto not been done. Here we summarize data from over 300 studies on cetacean tonal sounds and social structure and phylogenetically test existing hypotheses on their co-evolution. Results: Whistles are 'complex' tonal sounds of toothed whales that demark a more inclusive clade than the social dolphins. Whistles are also used by some riverine species that live in simple societies, and have been lost twice within the social delphinoids, all observations that are inconsistent with the dolphin hypothesis as stated. However, cetacean tonal sounds and sociality are intertwined: (1) increased tonal sound modulation significantly correlates with group size and social structure; (2) changes in tonal sound complexity are significantly concentrated on social branches. Also, duration and minimum frequency correlate, as do group size and mean minimum frequency. Conclusion: Studying the evolutionary correlation of broad concepts, rather than that of their component characters, is fraught with difficulty, while limits of available data restrict the detail in which component character correlations can be analyzed in this case. Our results support the hypothesis that sociality influences the evolution of tonal sound complexity. The levels of social and whistle complexity are correlated, suggesting that complex tonal sounds play an important role in social communication. Minimum frequency is higher in species with large groups, and correlates negatively with duration, which may reflect the increased distances over which non-social species communicate. Our findings are generally stable across a range of alternative phylogenies. Our study points to key species where future studies would be particularly valuable for enriching our understanding of the interplay of acoustic communication and sociality. PMID:17692128
Acoustical deterrence of Silver Carp (Hypophthalmichthys molitrix)
Brooke J. Vetter,; Cupp, Aaron R.; Fredricks, Kim T.; Gaikowski, Mark P.; Allen F. Mensinger,
2015-01-01
The invasive Silver Carp (Hypophthalmichthys molitrix) dominate large regions of the Mississippi River drainage and continue to expand their range northward, threatening the Laurentian Great Lakes. This study found that complex broadband sound (0–10 kHz) is effective in altering the behavior of Silver Carp, with implications for deterrent barriers or potential control measures (e.g., herding fish into nets). The phonotaxic response of Silver Carp was investigated using controlled experiments in outdoor concrete ponds (10 × 4.9 × 1.2 m). Pure tones (500–2000 Hz) and complex sound (underwater field recordings of outboard motors) were broadcast using underwater speakers. Silver Carp always reacted to the complex sounds by exhibiting negative phonotaxis to the sound source, and by alternating the speaker location, the fish could be directed consistently, up to 37 consecutive times, to opposite ends of the large outdoor pond. However, fish habituated quickly to pure tones, reacting to only approximately 5% of these presentations and never showing more than two consecutive responses. Previous studies have demonstrated the success of sound barriers in preventing Silver Carp movement using pure tones, and this research suggests that a complex sound stimulus would be an even more effective deterrent.
Detecting regular sound changes in linguistics as events of concerted evolution.
Hruschka, Daniel J; Branford, Simon; Smith, Eric D; Wilkins, Jon; Meade, Andrew; Pagel, Mark; Bhattacharya, Tanmoy
2015-01-05
Concerted evolution is normally used to describe parallel changes at different sites in a genome, but it is also observed in languages where a specific phoneme changes to the same other phoneme in many words in the lexicon—a phenomenon known as regular sound change. We develop a general statistical model that can detect concerted changes in aligned sequence data and apply it to study regular sound changes in the Turkic language family. Linguistic evolution, unlike the genetic substitutional process, is dominated by events of concerted evolutionary change. Our model identified more than 70 historical events of regular sound change that occurred throughout the evolution of the Turkic language family, while simultaneously inferring a dated phylogenetic tree. Including regular sound changes yielded an approximately 4-fold improvement in the characterization of linguistic change over a simpler model of sporadic change, improved phylogenetic inference, and returned more reliable and plausible dates for events on the phylogenies. The historical timings of the concerted changes closely follow a Poisson process model, and the sound transition networks derived from our model mirror linguistic expectations. We demonstrate that a model with no prior knowledge of complex concerted or regular changes can nevertheless infer the historical timings and genealogical placements of events of concerted change from the signals left in contemporary data. Our model can be applied wherever discrete elements—such as genes, words, cultural trends, technologies, or morphological traits—can change in parallel within an organism or other evolving group. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
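The abstract notes that the historical timings of the concerted changes closely follow a Poisson process. As a hedged illustration of that statistical idea only (not the authors' model or code), the sketch below simulates regular sound-change events along a single lineage; the rate and duration are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed, illustrative rate of concerted (regular) sound-change events:
# if events follow a Poisson process, inter-event waiting times are exponential.
lam = 1.5          # events per millennium (hypothetical value, not from the paper)
duration = 5.0     # simulate 5 millennia of lineage history

waits = rng.exponential(1.0 / lam, size=50)   # exponential waiting times
times = np.cumsum(waits)
event_times = times[times < duration]

print(f"{len(event_times)} concerted change events in {duration * 1000:.0f} years")
print("event times (kyr):", np.round(event_times, 2))
```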
NASA Astrophysics Data System (ADS)
Young, Duncan; Blankenship, Donald; Beem, Lucas; Cavitte, Marie; Quartini, Enrica; Lindzey, Laura; Jackson, Charles; Roberts, Jason; Ritz, Catherine; Siegert, Martin; Greenbaum, Jamin; Frederick, Bruce
2017-04-01
The roughness of subglacial interfaces (as measured by airborne radar echo sounding) at length scales between the profile line spacing and the footprint of the instrument is a key, but complex, signature of glacial and geomorphic processes, material lithology, and the integrated history at the bed of ice sheets. Subglacial roughness is also intertwined with assessments of ice thickness uncertainty from radar echo sounding and with the utility of interpolation methodologies, and it is a key aspect of subglacial access strategies. Here we present an assessment of subglacial roughness estimation in both West and East Antarctica, and compare this to exposed subglacial terrains. We will use recent high-resolution aerogeophysical surveys to examine what variations in roughness are a fingerprint for, to assess the limits of ice thickness uncertainty quantification, and to compare strategies for roughness assessment and utilization.
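For readers unfamiliar with how such roughness values are commonly derived, the sketch below computes one widely used statistic: the RMS deviation of a detrended along-track bed-elevation profile within sliding windows. The window length, the synthetic profile, and the choice of metric are illustrative assumptions, not necessarily those used in this survey analysis.

```python
import numpy as np

def rms_roughness(elevation, window):
    """RMS deviation of a detrended bed-elevation profile in sliding windows.

    elevation : 1-D array of along-track bed elevations (m), evenly sampled.
    window    : number of samples per roughness window.
    """
    vals = []
    for i in range(len(elevation) // window):
        seg = elevation[i * window:(i + 1) * window]
        x = np.arange(len(seg))
        # remove the linear trend so roughness reflects small-scale relief only
        trend = np.polyval(np.polyfit(x, seg, 1), x)
        vals.append(np.sqrt(np.mean((seg - trend) ** 2)))
    return np.array(vals)

# synthetic example: a smooth bed plus a few metres of random small-scale relief
rng = np.random.default_rng(1)
bed = np.cumsum(rng.normal(0, 1, 2000)) + rng.normal(0, 5, 2000)
print(rms_roughness(bed, window=200))
```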
Memory for product sounds: the effect of sound and label type.
Ozcan, Elif; van Egmond, René
2007-11-01
The (mnemonic) interactions between the auditory, visual, and semantic systems have been investigated using structurally complex auditory stimuli (i.e., product sounds). Six types of product sounds (air, alarm, cyclic, impact, liquid, mechanical) that vary in spectral-temporal structure were presented in four label type conditions: self-generated text, text, image, and pictogram. A memory paradigm that incorporated free recall, recognition, and matching tasks was employed. The results for sound type suggest that the amount of spectral-temporal structure in a sound can be indicative of memory performance. Findings related to label type suggest that 'self' creates a strong bias for the retrieval and recognition of sounds that were self-labeled; that the density and complexity of the visual information (i.e., pictograms) hinder memory performance (a 'visual overshadowing' effect); and that image labeling has an additive effect on the recall and matching tasks (dual coding). Thus, the findings suggest that memory performance for product sounds is task-dependent.
Cranford, Ted W.; Krysl, Petr; Amundin, Mats
2010-01-01
Global concern over the possible deleterious effects of noise on marine organisms was catalyzed when toothed whales stranded and died in the presence of high intensity sound. The lack of knowledge about mechanisms of hearing in toothed whales prompted our group to study the anatomy and build a finite element model to simulate sound reception in odontocetes. The primary auditory pathway in toothed whales is an evolutionary novelty, compensating for the impedance mismatch experienced by whale ancestors as they moved from hearing in air to hearing in water. The mechanism by which high-frequency vibrations pass from the low density fats of the lower jaw into the dense bones of the auditory apparatus is a key to understanding odontocete hearing. Here we identify a new acoustic portal into the ear complex, the tympanoperiotic complex (TPC) and a plausible mechanism by which sound is transduced into the bony components. We reveal the intact anatomic geometry using CT scanning, and test functional preconceptions using finite element modeling and vibrational analysis. We show that the mandibular fat bodies bifurcate posteriorly, attaching to the TPC in two distinct locations. The smaller branch is an inconspicuous, previously undescribed channel, a cone-shaped fat body that fits into a thin-walled bony funnel just anterior to the sigmoid process of the TPC. The TPC also contains regions of thin translucent bone that define zones of differential flexibility, enabling the TPC to bend in response to sound pressure, thus providing a mechanism for vibrations to pass through the ossicular chain. The techniques used to discover the new acoustic portal in toothed whales, provide a means to decipher auditory filtering, beam formation, impedance matching, and transduction. These tools can also be used to address concerns about the potential deleterious effects of high-intensity sound in a broad spectrum of marine organisms, from whales to fish. PMID:20694149
NASA Astrophysics Data System (ADS)
Balaji, P. A.
1999-07-01
A cricket's ear is a directional acoustic sensor. It has a remarkable level of sensitivity to the direction of sound propagation in a narrow frequency bandwidth of 4-5 kHz. Because of its complexity, this directional sensitivity has long intrigued researchers. The cricket's ear is a four-acoustic-input/two-vibration-output system. In this dissertation, this system is examined in depth, both experimentally and theoretically, with the primary goal of understanding the mechanics involved in directional hearing. Experimental identification of the system is done using random signal processing techniques. Theoretical identification of the system is accomplished by analyzing sound transmission through the complex trachea of the ear. Finally, a description of how the cricket achieves directional hearing sensitivity is proposed. The fundamental principle involved in directional hearing of the cricket has been utilized to design a device to obtain a directional signal from non-directional inputs.
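A minimal sketch of the kind of random-signal system identification described here: estimating a single input-to-vibration transfer function with the standard H1 estimator (cross-spectrum divided by input auto-spectrum). The synthetic 4.5 kHz resonance and all parameters are assumptions for illustration, not measurements from the cricket ear.

```python
import numpy as np
from scipy import signal

fs = 44100                       # sample rate (Hz)
rng = np.random.default_rng(0)
x = rng.standard_normal(fs * 2)  # broadband random input (e.g., acoustic stimulus)

# Stand-in for one acoustic-input -> tympanum-vibration path: a resonance near
# 4.5 kHz, roughly the band where the cricket ear is most directional.
b, a = signal.iirpeak(4500, Q=10, fs=fs)
y = signal.lfilter(b, a, x) + 0.05 * rng.standard_normal(x.size)

f, Pxy = signal.csd(x, y, fs=fs, nperseg=4096)   # cross-spectral density
_, Pxx = signal.welch(x, fs=fs, nperseg=4096)    # input auto-spectral density
H1 = Pxy / Pxx                                   # H1 transfer-function estimate

peak = f[np.argmax(np.abs(H1))]
print(f"estimated resonance near {peak:.0f} Hz")
```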
Time-Scale Modification of Complex Acoustic Signals in Noise
1994-02-04
[List-of-figures fragment: response from a closing stapler; short-time processing of long waveforms; time-scale expansion (x2) of a sequence of transients using filter bank/overlap-add; time-scale expansion (x2) of a closing stapler using filter bank/overlap-add; composite subband time-scale ...] INTRODUCTION: Short-duration complex sounds, as from the closing of a stapler or the tapping of a drum stick, often consist of a series of brief...
Developmental changes in distinguishing concurrent auditory objects.
Alain, Claude; Theunissen, Eef L; Chevalier, Hélène; Batty, Magali; Taylor, Margot J
2003-04-01
Children have considerable difficulties in identifying speech in noise. In the present study, we examined age-related differences in central auditory functions that are crucial for parsing co-occurring auditory events using behavioral and event-related brain potential measures. Seventeen pre-adolescent children and 17 adults were presented with complex sounds containing multiple harmonics, one of which could be 'mistuned' so that it was no longer an integer multiple of the fundamental. Both children and adults were more likely to report hearing the mistuned harmonic as a separate sound with an increase in mistuning. However, children were less sensitive in detecting mistuning across all levels as revealed by lower d' scores than adults. The perception of two concurrent auditory events was accompanied by a negative wave that peaked at about 160 ms after sound onset. In both age groups, the negative wave, referred to as the 'object-related negativity' (ORN), increased in amplitude with mistuning. The ORN was larger in children than in adults despite a lower d' score. Together, the behavioral and electrophysiological results suggest that concurrent sound segregation is probably adult-like in pre-adolescent children, but that children are inefficient in processing the information following the detection of mistuning. These findings also suggest that processes involved in distinguishing concurrent auditory objects continue to mature during adolescence.
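To make the stimulus manipulation concrete, here is a hedged sketch of a complex tone with one mistuned harmonic, in the spirit of the stimuli described above; the fundamental frequency, number of harmonics, and mistuning percentage are illustrative assumptions, not the study's exact values.

```python
import numpy as np

fs = 44100
dur = 1.0
t = np.arange(int(fs * dur)) / fs

f0 = 200.0            # fundamental (Hz); assumed for illustration
n_harmonics = 10
mistuned = 3          # which harmonic to mistune
mistuning = 0.08      # 8% upward shift of that harmonic

tone = np.zeros_like(t)
for k in range(1, n_harmonics + 1):
    f = k * f0
    if k == mistuned:
        f *= 1.0 + mistuning   # no longer an integer multiple of the fundamental
    tone += np.sin(2 * np.pi * f * t)

tone /= np.max(np.abs(tone))   # normalize to avoid clipping on playback
```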
Auditory Cortex Processes Variation in Our Own Speech
Sitek, Kevin R.; Mathalon, Daniel H.; Roach, Brian J.; Houde, John F.; Niziolek, Caroline A.; Ford, Judith M.
2013-01-01
As we talk, we unconsciously adjust our speech to ensure it sounds the way we intend it to sound. However, because speech production involves complex motor planning and execution, no two utterances of the same sound will be exactly the same. Here, we show that auditory cortex is sensitive to natural variations in self-produced speech from utterance to utterance. We recorded event-related potentials (ERPs) from ninety-nine subjects while they uttered “ah” and while they listened to those speech sounds played back. Subjects' utterances were sorted based on their formant deviations from the previous utterance. Typically, the N1 ERP component is suppressed during talking compared to listening. By comparing ERPs to the least and most variable utterances, we found that N1 was less suppressed to utterances that differed greatly from their preceding neighbors. In contrast, an utterance's difference from the median formant values did not affect N1. Trial-to-trial pitch (f0) deviation and pitch difference from the median similarly did not affect N1. We discuss mechanisms that may underlie the change in N1 suppression resulting from trial-to-trial formant change. Deviant utterances require additional auditory cortical processing, suggesting that speaking-induced suppression mechanisms are optimally tuned for a specific production. PMID:24349399
Paavilainen, P; Simola, J; Jaramillo, M; Näätänen, R; Winkler, I
2001-03-01
Brain mechanisms extracting invariant information from varying auditory inputs were studied using the mismatch-negativity (MMN) brain response. We wished to determine whether the preattentive sound-analysis mechanisms, reflected by MMN, are capable of extracting invariant relationships based on abstract conjunctions between two sound features. The standard stimuli varied over a large range in frequency and intensity dimensions following the rule that the higher the frequency, the louder the intensity. The occasional deviant stimuli violated this frequency-intensity relationship and elicited an MMN. The results demonstrate that preattentive processing of auditory stimuli extends to unexpectedly complex relationships between the stimulus features.
Effect of Blast Injury on Auditory Localization in Military Service Members.
Kubli, Lina R; Brungart, Douglas; Northern, Jerry
Among the many advantages of binaural hearing are the abilities to localize sounds in space and to attend to one sound in the presence of many sounds. Binaural hearing provides benefits for all listeners, but it may be especially critical for military personnel who must maintain situational awareness in complex tactical environments with multiple speech and noise sources. There is concern that Military Service Members who have been exposed to one or more high-intensity blasts during their tour of duty may have difficulty with binaural and spatial ability due to degradation in auditory and cognitive processes. The primary objective of this study was to assess the ability of blast-exposed Military Service Members to localize speech sounds in quiet and in multisource environments with one or two competing talkers. Participants were presented with one, two, or three topic-related (e.g., sports, food, travel) sentences under headphones and required to attend to, and then locate the source of, the sentence pertaining to a prespecified target topic within a virtual space. The listener's head position was monitored by a head-mounted tracking device that continuously updated the apparent spatial location of the target and competing speech sounds as the subject turned within the virtual space. Measurements of auditory localization ability included mean absolute error in locating the source of the target sentence, the time it took to locate the target sentence within 30 degrees, target/competitor confusion errors, response time, and cumulative head motion. Twenty-one blast-exposed Active-Duty or Veteran Military Service Members (blast-exposed group) and 33 non-blast-exposed Service Members and beneficiaries (control group) were evaluated. In general, the blast-exposed group performed as well as the control group if the task involved localizing the source of a single speech target. However, if the task involved two or three simultaneous talkers, localization ability was compromised for some participants in the blast-exposed group. Blast-exposed participants were less accurate in their localization responses and required more exploratory head movements to find the location of the target talker. Results suggest that blast-exposed participants have more difficulty than non-blast-exposed participants in localizing sounds in complex acoustic environments. This apparent deficit in spatial hearing ability highlights the need to develop new diagnostic tests using complex listening tasks that involve multiple sound sources that require speech segregation and comprehension.
Hearing Scenes: A Neuromagnetic Signature of Auditory Source and Reverberant Space Separation
Oliva, Aude
2017-01-01
Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals. PMID:28451630
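As a hedged illustration of the stimulus-construction step (sounds convolved with monaural room impulse responses), the sketch below convolves a brief dry sound with a toy exponentially decaying RIR; the RT60 value and the noise-tail RIR model are assumptions for illustration, not the RIRs used in the study.

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
rng = np.random.default_rng(0)

dry = rng.standard_normal(int(0.2 * fs))   # stand-in for a brief dry source sound

# Toy monaural room impulse response: an exponentially decaying noise tail.
# A real RIR would be measured or simulated for a specific room geometry.
t = np.arange(int(0.5 * fs)) / fs
rt60 = 0.6                                 # assumed reverberation time (s)
rir = rng.standard_normal(t.size) * np.exp(-6.91 * t / rt60)
rir[0] = 1.0                               # direct path

reverberant = fftconvolve(dry, rir)        # reverberant version of the dry sound
reverberant /= np.max(np.abs(reverberant))
```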
Effects of voice harmonic complexity on ERP responses to pitch-shifted auditory feedback.
Behroozmand, Roozbeh; Korzyukov, Oleg; Larson, Charles R
2011-12-01
The present study investigated the neural mechanisms of voice pitch control for different levels of harmonic complexity in the auditory feedback. Event-related potentials (ERPs) were recorded in response to +200 cents pitch perturbations in the auditory feedback of self-produced natural human vocalizations, complex and pure tone stimuli during active vocalization and passive listening conditions. During active vocal production, ERP amplitudes were largest in response to pitch shifts in the natural voice, moderately large for non-voice complex stimuli and smallest for the pure tones. However, during passive listening, neural responses were equally large for pitch shifts in voice and non-voice complex stimuli but still larger than that for pure tones. These findings suggest that pitch change detection is facilitated for spectrally rich sounds such as natural human voice and non-voice complex stimuli compared with pure tones. Vocalization-induced increase in neural responses for voice feedback suggests that sensory processing of naturally-produced complex sounds such as human voice is enhanced by means of motor-driven mechanisms (e.g. efference copies) during vocal production. This enhancement may enable the audio-vocal system to more effectively detect and correct for vocal errors in the feedback of natural human vocalizations to maintain an intended vocal output for speaking. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Acoustic signatures of sound source-tract coupling.
Arneodo, Ezequiel M; Perl, Yonatan Sanz; Mindlin, Gabriel B
2011-04-01
Birdsong is a complex behavior, which results from the interaction between a nervous system and a biomechanical peripheral device. While much has been learned about how complex sounds are generated in the vocal organ, little has been learned about the signature on the vocalizations of the nonlinear effects introduced by the acoustic interactions between a sound source and the vocal tract. The variety of morphologies among bird species makes birdsong a most suitable model to study phenomena associated to the production of complex vocalizations. Inspired by the sound production mechanisms of songbirds, in this work we study a mathematical model of a vocal organ, in which a simple sound source interacts with a tract, leading to a delay differential equation. We explore the system numerically, and by taking it to the weakly nonlinear limit, we are able to examine its periodic solutions analytically. By these means we are able to explore the dynamics of oscillatory solutions of a sound source-tract coupled system, which are qualitatively different from those of a sound source-filter model of a vocal organ. Nonlinear features of the solutions are proposed as the underlying mechanisms of observed phenomena in birdsong, such as unilaterally produced "frequency jumps," enhancement of resonances, and the shift of the fundamental frequency observed in heliox experiments. ©2011 American Physical Society
Bednar, Adam; Boland, Francis M; Lalor, Edmund C
2017-03-01
The human ability to localize sound is essential for monitoring our environment and helps us to analyse complex auditory scenes. Although the acoustic cues mediating sound localization have been established, it remains unknown how these cues are represented in human cortex. In particular, it is still a point of contention whether binaural and monaural cues are processed by the same or distinct cortical networks. In this study, participants listened to a sequence of auditory stimuli from different spatial locations while we recorded their neural activity using electroencephalography (EEG). The stimuli were presented over a loudspeaker array, which allowed us to deliver realistic, free-field stimuli in both the horizontal and vertical planes. Using a multivariate classification approach, we showed that it is possible to decode sound source location from scalp-recorded EEG. Robust and consistent decoding was shown for stimuli that provide binaural cues (i.e. Left vs. Right stimuli). Decoding location when only monaural cues were available (i.e. Front vs. Rear and elevational stimuli) was successful for a subset of subjects and showed less consistency. Notably, the spatio-temporal pattern of EEG features that facilitated decoding differed based on the availability of binaural and monaural cues. In particular, we identified neural processing of binaural cues at around 120 ms post-stimulus and found that monaural cues are processed later between 150 and 200 ms. Furthermore, different spatial activation patterns emerged for binaural and monaural cue processing. These spatio-temporal dissimilarities suggest the involvement of separate cortical mechanisms in monaural and binaural acoustic cue processing. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
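The multivariate classification approach described here can be sketched as follows, assuming hypothetical epoched EEG arrays and a binary Left/Right label per trial; the time window, classifier, and data shapes are illustrative assumptions rather than the study's actual pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical data: 200 trials x 64 channels x 50 time samples of epoched EEG,
# with a Left (0) vs. Right (1) stimulus label per trial.
X = rng.standard_normal((200, 64, 50))
y = rng.integers(0, 2, 200)

# Decode from the spatial pattern in one post-stimulus time window
# (e.g., around 120 ms, where the abstract reports binaural-cue processing).
window = slice(20, 30)
features = X[:, :, window].mean(axis=2)      # average within the window

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, features, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```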
Pitch discrimination by ferrets for simple and complex sounds.
Walker, Kerry M M; Schnupp, Jan W H; Hart-Schnupp, Sheelah M B; King, Andrew J; Bizley, Jennifer K
2009-09-01
Although many studies have examined the performance of animals in detecting a frequency change in a sequence of tones, few have measured animals' discrimination of the fundamental frequency (F0) of complex, naturalistic stimuli. Additionally, it is not yet clear if animals perceive the pitch of complex sounds along a continuous, low-to-high scale. Here, four ferrets (Mustela putorius) were trained on a two-alternative forced choice task to discriminate sounds that were higher or lower in F0 than a reference sound using pure tones and artificial vowels as stimuli. Average Weber fractions for ferrets on this task varied from approximately 20% to 80% across references (200-1200 Hz), and these fractions were similar for pure tones and vowels. These thresholds are approximately ten times higher than those typically reported for other mammals on frequency change detection tasks that use go/no-go designs. Naive human listeners outperformed ferrets on the present task, but they showed similar effects of stimulus type and reference F0. These results suggest that while non-human animals can be trained to label complex sounds as high or low in pitch, this task may be much more difficult for animals than simply detecting a frequency change.
ERIC Educational Resources Information Center
Davidson, Meghan M.
2016-01-01
Reading comprehension is a complex interactional process whereby the accumulated meaning of sounds, words, and sentences is integrated to form a meaningful representation of text. It is well established that many individuals with autism spectrum disorder (ASD) have reading comprehension difficulties, but less is understood about the underlying…
NASA Astrophysics Data System (ADS)
Leek, Marjorie R.; Neff, Donna L.
2004-05-01
Charles Watson's studies of informational masking and the effects of stimulus uncertainty on auditory perception have had a profound impact on auditory research. His series of seminal studies in the mid-1970s on the detection and discrimination of target sounds in sequences of brief tones with uncertain properties addresses the fundamental problem of extracting target signals from background sounds. As conceptualized by Chuck and others, informational masking results from more central (even 'cognitive') processes as a consequence of stimulus uncertainty, and can be distinguished from 'energetic' masking, which primarily arises from the auditory periphery. Informational masking techniques are now in common use to study the detection, discrimination, and recognition of complex sounds, the capacity of auditory memory and aspects of auditory selective attention, the often large effects of training to reduce detrimental effects of uncertainty, and the perceptual segregation of target sounds from irrelevant context sounds. This paper will present an overview of past and current research on informational masking, and show how Chuck's work has been expanded in several directions by other scientists to include the effects of informational masking on speech perception and on perception by listeners with hearing impairment. [Work supported by NIDCD.]
Human neuromagnetic steady-state responses to amplitude-modulated tones, speech, and music.
Lamminmäki, Satu; Parkkonen, Lauri; Hari, Riitta
2014-01-01
Auditory steady-state responses that can be elicited by various periodic sounds inform about subcortical and early cortical auditory processing. Steady-state responses to amplitude-modulated pure tones have been used to scrutinize binaural interaction by frequency-tagging the two ears' inputs at different frequencies. Unlike pure tones, speech and music are physically very complex, as they include many frequency components, pauses, and large temporal variations. To examine the utility of magnetoencephalographic (MEG) steady-state fields (SSFs) in the study of early cortical processing of complex natural sounds, the authors tested the extent to which amplitude-modulated speech and music can elicit reliable SSFs. MEG responses were recorded to 90-s-long binaural tones, speech, and music, amplitude-modulated at 41.1 Hz at four different depths (25, 50, 75, and 100%). The subjects were 11 healthy, normal-hearing adults. MEG signals were averaged in phase with the modulation frequency, and the sources of the resulting SSFs were modeled by current dipoles. After the MEG recording, intelligibility of the speech, musical quality of the music stimuli, naturalness of music and speech stimuli, and the perceived deterioration caused by the modulation were evaluated on visual analog scales. The perceived quality of the stimuli decreased as a function of increasing modulation depth, more strongly for music than speech; yet, all subjects considered the speech intelligible even at the 100% modulation. SSFs were the strongest to tones and the weakest to speech stimuli; the amplitudes increased with increasing modulation depth for all stimuli. SSFs to tones were reliably detectable at all modulation depths (in all subjects in the right hemisphere, in 9 subjects in the left hemisphere) and to music stimuli at 50 to 100% depths, whereas speech usually elicited clear SSFs only at 100% depth. The hemispheric balance of SSFs was toward the right hemisphere for tones and speech, whereas SSFs to music showed no lateralization. In addition, the right lateralization of SSFs to the speech stimuli decreased with decreasing modulation depth. The results showed that SSFs can be reliably measured to amplitude-modulated natural sounds, with slightly different hemispheric lateralization for different carrier sounds. With speech stimuli, modulation at 100% depth is required, whereas for music the 75% or even 50% modulation depths provide a reasonable compromise between the signal-to-noise ratio of SSFs and sound quality or perceptual requirements. SSF recordings thus seem feasible for assessing the early cortical processing of natural sounds.
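For concreteness, a minimal sketch of sinusoidal amplitude modulation at a given depth, using the 41.1 Hz modulation rate mentioned above; the pure-tone carrier and other values are assumptions for illustration (the study modulated speech and music in the same way, which would simply replace the carrier array).

```python
import numpy as np

fs = 44100
t = np.arange(int(2.0 * fs)) / fs

fm = 41.1        # modulation frequency used in the study (Hz)
depth = 0.75     # modulation depth m: 0.25, 0.5, 0.75, or 1.0 in the study

carrier = np.sin(2 * np.pi * 500.0 * t)   # illustrative pure-tone carrier

# Sinusoidal amplitude modulation at depth m
am = (1.0 + depth * np.sin(2 * np.pi * fm * t)) * carrier
am /= np.max(np.abs(am))                  # normalize for playback
```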
Cortical contributions to the auditory frequency-following response revealed by MEG
Coffey, Emily B. J.; Herholz, Sibylle C.; Chepesiuk, Alexander M. P.; Baillet, Sylvain; Zatorre, Robert J.
2016-01-01
The auditory frequency-following response (FFR) to complex periodic sounds is used to study the subcortical auditory system, and has been proposed as a biomarker for disorders that feature abnormal sound processing. Despite its value in fundamental and clinical research, the neural origins of the FFR are unclear. Using magnetoencephalography, we observe a strong, right-asymmetric contribution to the FFR from the human auditory cortex at the fundamental frequency of the stimulus, in addition to signal from cochlear nucleus, inferior colliculus and medial geniculate. This finding is highly relevant for our understanding of plasticity and pathology in the auditory system, as well as higher-level cognition such as speech and music processing. It suggests that previous interpretations of the FFR may need re-examination using methods that allow for source separation. PMID:27009409
Auditory sensitivity of seals and sea lions in complex listening scenarios.
Cunningham, Kane A; Southall, Brandon L; Reichmuth, Colleen
2014-12-01
Standard audiometric data, such as audiograms and critical ratios, are often used to inform marine mammal noise-exposure criteria. However, these measurements are obtained using simple, artificial stimuli (i.e., pure tones and flat-spectrum noise), while natural sounds typically have more complex structure. In this study, detection thresholds for complex signals were measured in (I) quiet and (II) masked conditions for one California sea lion (Zalophus californianus) and one harbor seal (Phoca vitulina). In Experiment I, detection thresholds in quiet conditions were obtained for complex signals designed to isolate three common features of natural sounds: frequency modulation, amplitude modulation, and harmonic structure. In Experiment II, detection thresholds were obtained for the same complex signals embedded in two types of masking noise: synthetic flat-spectrum noise and recorded shipping noise. To evaluate how accurately standard hearing data predict detection of complex sounds, the results of Experiments I and II were compared to predictions based on subject audiograms and critical ratios combined with a basic hearing model. Both subjects exhibited greater-than-predicted sensitivity to harmonic signals in quiet and masked conditions, as well as to frequency-modulated signals in masked conditions. These differences indicate that the complex features of naturally occurring sounds enhance detectability relative to simple stimuli.
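The standard prediction the authors compare against can be written in one line: the masked detection threshold is approximately the masker spectrum level at the signal frequency plus the subject's critical ratio. The values below are illustrative assumptions, not measurements from the study.

```python
# Hedged sketch of the standard critical-ratio prediction for a masked threshold:
# threshold (dB re 1 uPa) ~= masker spectrum level (dB re 1 uPa^2/Hz) + critical ratio (dB).
# All numbers are hypothetical, chosen only to show the arithmetic.

noise_spectrum_level = 85.0   # dB re 1 uPa^2/Hz, e.g., shipping noise near 1 kHz
critical_ratio = 20.0         # dB, taken from the subject's audiometric data

predicted_masked_threshold = noise_spectrum_level + critical_ratio
print(f"predicted masked threshold: {predicted_masked_threshold:.1f} dB re 1 uPa")
```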
Coupled Modeling of Hydrodynamics and Sound in Coastal Ocean for Renewable Ocean Energy Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Wen; Jung, Ki Won; Yang, Zhaoqing
An underwater sound model was developed to simulate sound propagation from marine and hydrokinetic energy (MHK) devices or offshore wind (OSW) energy platforms. Finite difference methods were developed to solve the 3D Helmholtz equation for sound propagation in the coastal environment. A 3D sparse matrix solver with complex coefficients was formed for solving the resulting acoustic pressure field. The Complex Shifted Laplacian Preconditioner (CSLP) method was applied to solve the matrix system iteratively with MPI parallelization using a high performance cluster. The sound model was then coupled with the Finite Volume Community Ocean Model (FVCOM) for simulating sound propagation generated by human activities, such as construction of OSW turbines or tidal stream turbine operations, in a range-dependent setting. As a proof of concept, initial validation of the solver is presented for two coastal wedge problems. This sound model can be useful for evaluating impacts on marine mammals due to deployment of MHK devices and OSW energy platforms.
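As a much-reduced illustration of the numerical core (a Helmholtz equation discretized with finite differences and assembled as a sparse complex matrix), the sketch below sets up a small 2D problem with pressure-release boundaries and solves it directly; the study itself solves the 3D system iteratively with the CSLP preconditioner and MPI, and the grid size, frequency, and sound speed here are illustrative assumptions only.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# Minimal 2D Helmholtz sketch: (Laplacian + k^2) p = -s with p = 0 on the boundary.
n = 60                      # interior grid points per side
L = 100.0                   # domain size (m)
h = L / (n + 1)             # grid spacing
f = 100.0                   # frequency (Hz)
c = 1500.0                  # nominal sound speed in seawater (m/s)
k = 2 * np.pi * f / c       # wavenumber

I = sp.identity(n)
T = sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n))
laplacian = (sp.kron(I, T) + sp.kron(T, I)) / h**2      # 5-point stencil
A = (laplacian + k**2 * sp.identity(n * n)).tocsc().astype(complex)

# point source near the middle of the domain
s = np.zeros(n * n, dtype=complex)
s[(n // 2) * n + n // 2] = 1.0 / h**2

p = spsolve(A, -s)          # complex acoustic pressure field
print("max |p| =", np.abs(p).max())
```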
Language Experience Affects Grouping of Musical Instrument Sounds
ERIC Educational Resources Information Center
Bhatara, Anjali; Boll-Avetisyan, Natalie; Agus, Trevor; Höhle, Barbara; Nazzi, Thierry
2016-01-01
Language experience clearly affects the perception of speech, but little is known about whether these differences in perception extend to non-speech sounds. In this study, we investigated rhythmic perception of non-linguistic sounds in speakers of French and German using a grouping task, in which complexity (variability in sounds, presence of…
33 CFR 150.720 - What are the requirements for sound signals?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false What are the requirements for sound signals? 150.720 Section 150.720 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF... are the requirements for sound signals? The sound signal on each pumping platform complex must be...
33 CFR 149.585 - What are the requirements for sound signals?
Code of Federal Regulations, 2010 CFR
2010-07-01
... sound signals? 149.585 Section 149.585 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF... Navigation Miscellaneous § 149.585 What are the requirements for sound signals? (a) Each pumping platform complex must have a sound signal, approved under subpart 67.10 of this chapter, that has a 2-mile (3...
33 CFR 150.720 - What are the requirements for sound signals?
Code of Federal Regulations, 2013 CFR
2013-07-01
... 33 Navigation and Navigable Waters 2 2013-07-01 2013-07-01 false What are the requirements for sound signals? 150.720 Section 150.720 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF... are the requirements for sound signals? The sound signal on each pumping platform complex must be...
33 CFR 150.720 - What are the requirements for sound signals?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false What are the requirements for sound signals? 150.720 Section 150.720 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF... are the requirements for sound signals? The sound signal on each pumping platform complex must be...
33 CFR 150.720 - What are the requirements for sound signals?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 33 Navigation and Navigable Waters 2 2012-07-01 2012-07-01 false What are the requirements for sound signals? 150.720 Section 150.720 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF... are the requirements for sound signals? The sound signal on each pumping platform complex must be...
33 CFR 150.720 - What are the requirements for sound signals?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 2 2011-07-01 2011-07-01 false What are the requirements for sound signals? 150.720 Section 150.720 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF... are the requirements for sound signals? The sound signal on each pumping platform complex must be...
Lateralization as a symmetry breaking process in birdsong
NASA Astrophysics Data System (ADS)
Trevisan, M. A.; Cooper, B.; Goller, F.; Mindlin, G. B.
2007-03-01
The singing by songbirds is a most convincing example in the animal kingdom of functional lateralization of the brain, a feature usually associated with human language. Lateralization is expressed as one or both of the bird's sound sources being active during the vocalization. Normal songs require high coordination between the vocal organ and respiratory activity, which is bilaterally symmetric. Moreover, the physical and neural substrate used to produce the song lack obvious asymmetries. In this work we show that complex spatiotemporal patterns of motor activity controlling airflow through the sound sources can be explained in terms of spontaneous symmetry breaking bifurcations. This analysis also provides a framework from which to study the effects of imperfections in the system's symmetries. A physical model of the avian vocal organ is used to generate synthetic sounds, which allows us to predict acoustical signatures of the song and compare the predictions of the model with experimental data.
Constructing Noise-Invariant Representations of Sound in the Auditory Pathway
Rabinowitz, Neil C.; Willmore, Ben D. B.; King, Andrew J.; Schnupp, Jan W. H.
2013-01-01
Identifying behaviorally relevant sounds in the presence of background noise is one of the most important and poorly understood challenges faced by the auditory system. An elegant solution to this problem would be for the auditory system to represent sounds in a noise-invariant fashion. Since a major effect of background noise is to alter the statistics of the sounds reaching the ear, noise-invariant representations could be promoted by neurons adapting to stimulus statistics. Here we investigated the extent of neuronal adaptation to the mean and contrast of auditory stimulation as one ascends the auditory pathway. We measured these forms of adaptation by presenting complex synthetic and natural sounds, recording neuronal responses in the inferior colliculus and primary fields of the auditory cortex of anaesthetized ferrets, and comparing these responses with a sophisticated model of the auditory nerve. We find that the strength of both forms of adaptation increases as one ascends the auditory pathway. To investigate whether this adaptation to stimulus statistics contributes to the construction of noise-invariant sound representations, we also presented complex, natural sounds embedded in stationary noise, and used a decoding approach to assess the noise tolerance of the neuronal population code. We find that the code for complex sounds in the periphery is affected more by the addition of noise than the cortical code. We also find that noise tolerance is correlated with adaptation to stimulus statistics, so that populations that show the strongest adaptation to stimulus statistics are also the most noise-tolerant. This suggests that the increase in adaptation to sound statistics from auditory nerve to midbrain to cortex is an important stage in the construction of noise-invariant sound representations in the higher auditory brain. PMID:24265596
Bader, Maria; Schröger, Erich; Grimm, Sabine
2017-01-01
The recognition of sound patterns in speech or music (e.g., a melody that is played in different keys) requires knowledge about pitch relations between successive sounds. We investigated the formation of regularity representations for sound patterns in an event-related potential (ERP) study. A pattern, which consisted of six concatenated 50 ms tone segments differing in fundamental frequency, was presented 1, 2, 3, 6, or 12 times and then replaced by another pattern by randomly changing the pitch of the tonal segments (roving standard paradigm). In an absolute repetition condition, patterns were repeated identically, whereas in a transposed condition, only the pitch relations of the tonal segments of the patterns were repeated, while the entire patterns were shifted up or down in pitch. During ERP measurement participants were not informed about the pattern repetition rule, but were instructed to discriminate rarely occurring targets of lower or higher sound intensity. ERPs for pattern changes (mismatch negativity, MMN; and P3a) and for pattern repetitions (repetition positivity, RP) revealed that the auditory system is able to rapidly extract regularities from unfamiliar complex sound patterns even when absolute pitch varies. Yet, enhanced RP and P3a amplitudes, and improved behavioral performance measured in a post-hoc test, in the absolute as compared with the transposed condition suggest that it is more difficult to encode patterns without absolute pitch information. This is explained by dissociable processing of standards and deviants as well as a back propagation mechanism to early sensory processing stages, which is effective after fewer repetitions of a standard stimulus for absolute pitch.
NASA Astrophysics Data System (ADS)
Itoh, Kosuke; Nakada, Tsutomu
2013-04-01
Deterministic nonlinear dynamical processes are ubiquitous in nature. Chaotic sounds generated by such processes may appear irregular and random in waveform, but these sounds are mathematically distinguished from random stochastic sounds in that they contain deterministic short-time predictability in their temporal fine structure. We show that the human brain distinguishes deterministic chaotic sounds from spectrally matched stochastic sounds in neural processing and perception. Deterministic chaotic sounds, even without being attended to, elicited greater cerebral cortical responses than the surrogate control sounds beginning at a latency of about 150 ms after sound onset. Listeners also clearly discriminated these sounds in perception. The results support the hypothesis that the human auditory system is sensitive to the subtle short-time predictability embedded in the temporal fine structure of sounds.
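To illustrate the contrast between a deterministic chaotic sound and a spectrally matched stochastic control, the hedged sketch below generates a logistic-map waveform and a phase-randomized surrogate with the same magnitude spectrum; the logistic map and its parameters are illustrative stand-ins, not the study's stimuli.

```python
import numpy as np

fs = 8000
n = fs   # one second of samples

# Deterministic chaotic waveform from the logistic map (an illustrative chaotic
# generator; the study's stimuli were constructed differently).
x = np.empty(n)
x[0] = 0.4
for i in range(1, n):
    x[i] = 3.9 * x[i - 1] * (1.0 - x[i - 1])
chaotic = 2.0 * (x - x.mean())

# Spectrally matched stochastic surrogate: keep the Fourier magnitudes,
# randomize the phases, so the long-term spectrum is preserved but the
# deterministic short-time structure is destroyed.
spec = np.fft.rfft(chaotic)
phases = np.random.default_rng(0).uniform(0, 2 * np.pi, spec.size)
phases[0] = 0.0                                  # keep the DC component real
surrogate = np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=n)
```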
Late quaternary geologic framework, north-central Gulf of Mexico
Kindinger, Jack G.; Penland, Shea; Williams, S. Jeffress; Brooks, Gregg R.; Suter, John R.; McBride, Randolph A.
1991-01-01
The geologic framework of the north-central Gulf of Mexico shelf is composed of multiple, stacked, delta systems. Shelf and nearshore sedimentary facies were deposited by deltaic progradation, followed by shoreface erosion and submergence. A variety of sedimentary facies has been identified, including prodelta, delta fringe, distributary, lagoonal, barrier island, and shelf sand sheet. This study is based on the interpretation and the synthesis of > 6,700 km of high-resolution seismic profiles, 75 grab samples, and 77 vibracores. The nearshore morphology, shallow stratigraphy, and sediment distribution of the eastern Louisiana shelf are the products of transgressive sedimentary processes reworking the abandoned St. Bernard delta complex. This relatively recent Mississippi delta lobe consists primarily of fine sand, silt, and clay. In the southern portion of the St. Bernard delta complex, asymmetrical sand ridges (>5 m relief) have formed as the result of marine reworking of distributary mouth-bar sands. Silty sediments from the modern Mississippi Birdsfoot delta onlap the St. Bernard delta complex along the southern edge. The distal margin of the St. Bernard complex is distinct and has a sharp contact on the north near the Mississippi Sound barrier island coastline and a late Wisconsinan delta to the south. The Chandeleur Islands and the barrier islands of Mississippi Sound have been formed by a combination of Holocene and Pleistocene fluvial processes, shoreface erosion, and ravinement of the exposed shelf. Sediments underlying the relatively thin Holocene sediment cover are relict fluvial sands, deposited during the late Wisconsinan lowstand. Subsequent relative sea-level rise allowed marine processes to rework and redistribute sediments that formed the nearshore fine-grained facies and the shelf sand sheet.
Action planning and predictive coding when speaking
Wang, Jun; Mathalon, Daniel H.; Roach, Brian J.; Reilly, James; Keedy, Sarah; Sweeney, John A.; Ford, Judith M.
2014-01-01
Across the animal kingdom, sensations resulting from an animal's own actions are processed differently from sensations resulting from external sources, with self-generated sensations being suppressed. A forward model has been proposed to explain this process across sensorimotor domains. During vocalization, reduced processing of one's own speech is believed to result from a comparison of speech sounds to corollary discharges of intended speech production generated from efference copies of commands to speak. Until now, anatomical and functional evidence validating this model in humans has been indirect. Using EEG with anatomical MRI to facilitate source localization, we demonstrate that inferior frontal gyrus activity during the 300 ms before speaking was associated with suppressed processing of speech sounds in auditory cortex around 100 ms after speech onset (N1). These findings indicate that an efference copy from speech areas in prefrontal cortex is transmitted to auditory cortex, where it is used to suppress processing of anticipated speech sounds. About 100 ms after N1, a subsequent auditory cortical component (P2) was not suppressed during talking. The combined N1 and P2 effects suggest that although sensory processing is suppressed as reflected in N1, perceptual gaps are filled as reflected in the lack of P2 suppression, explaining the discrepancy between sensory suppression and preserved sensory experiences. These findings, coupled with the coherence between relevant brain regions before and during speech, provide new mechanistic understanding of the complex interactions between action planning and sensory processing that provide for differentiated tagging and monitoring of one's own speech, processes disrupted in neuropsychiatric disorders. PMID:24423729
Chen, Xuhai; Yang, Jianfeng; Gan, Shuzhen; Yang, Yufang
2012-01-01
Although its role is frequently stressed in the acoustic profile of vocal emotion, sound intensity is frequently regarded as a control parameter in neurocognitive studies of vocal emotion, leaving its role and neural underpinnings unclear. To investigate these issues, we asked participants to rate the angry level of neutral and angry prosodies before and after sound intensity modification in Experiment 1, and recorded electroencephalogram (EEG) for mismatching emotional prosodies with and without sound intensity modification and for matching emotional prosodies while participants performed emotional feature or sound intensity congruity judgment in Experiment 2. It was found that sound intensity modification had a significant effect on the rating of angry level for angry prosodies, but not for neutral ones. Moreover, mismatching emotional prosodies, relative to matching ones, induced an enhanced N2/P3 complex and theta band synchronization irrespective of sound intensity modification and task demands. However, mismatching emotional prosodies with reduced sound intensity showed prolonged peak latency and decreased amplitude in the N2/P3 complex and smaller theta band synchronization. These findings suggest that though it cannot categorically affect the emotionality conveyed in emotional prosodies, sound intensity contributes to emotional significance quantitatively, implying that sound intensity should not simply be taken as a control parameter and its unique role needs to be specified in vocal emotion studies.
Chen, Xuhai; Yang, Jianfeng; Gan, Shuzhen; Yang, Yufang
2012-01-01
Although its role is frequently stressed in the acoustic profile of vocal emotion, sound intensity is frequently regarded as a control parameter in neurocognitive studies of vocal emotion, leaving its role and neural underpinnings unclear. To investigate these issues, we asked participants to rate the angry level of neutral and angry prosodies before and after sound intensity modification in Experiment 1, and recorded the electroencephalogram (EEG) for mismatching emotional prosodies with and without sound intensity modification and for matching emotional prosodies while participants performed emotional feature or sound intensity congruity judgments in Experiment 2. It was found that sound intensity modification had a significant effect on the rating of angry level for angry prosodies, but not for neutral ones. Moreover, mismatching emotional prosodies, relative to matching ones, induced an enhanced N2/P3 complex and theta band synchronization irrespective of sound intensity modification and task demands. However, mismatching emotional prosodies with reduced sound intensity showed prolonged peak latency and decreased amplitude in the N2/P3 complex and smaller theta band synchronization. These findings suggest that though it cannot categorically affect the emotionality conveyed in emotional prosodies, sound intensity contributes to emotional significance quantitatively, implying that sound intensity should not simply be taken as a control parameter and its unique role needs to be specified in vocal emotion studies. PMID:22291928
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-28
... Securitization, 3064-0137, and The Interagency Statement on Sound Practices Concerning Complex Structured Finance... guard station at the rear of the 17th Street Building (located on F Street), on business days between 7... Sound Practices Concerning Complex Structured Finance Transactions OMB Number: 3064-0148. Form Number...
Learning Midlevel Auditory Codes from Natural Sound Statistics.
Młynarski, Wiktor; McDermott, Josh H
2018-03-01
Interaction with the world requires an organism to transform sensory signals into representations in which behaviorally meaningful properties of the environment are made explicit. These representations are derived through cascades of neuronal processing stages in which neurons at each stage recode the output of preceding stages. Explanations of sensory coding may thus involve understanding how low-level patterns are combined into more complex structures. To gain insight into such midlevel representations for sound, we designed a hierarchical generative model of natural sounds that learns combinations of spectrotemporal features from natural stimulus statistics. In the first layer, the model forms a sparse convolutional code of spectrograms using a dictionary of learned spectrotemporal kernels. To generalize from specific kernel activation patterns, the second layer encodes patterns of time-varying magnitude of multiple first-layer coefficients. When trained on corpora of speech and environmental sounds, some second-layer units learned to group similar spectrotemporal features. Others instantiate opponency between distinct sets of features. Such groupings might be instantiated by neurons in the auditory cortex, providing a hypothesis for midlevel neuronal computation.
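The first-layer computation described above can be illustrated with a small numerical sketch: a fixed dictionary of spectrotemporal kernels is correlated with a spectrogram and the activations are soft-thresholded so that only strong coefficients survive. This illustrates the general idea only, not the authors' code; the toy spectrogram, the random (rather than learned) kernels, and the 1.5-standard-deviation threshold are assumptions, and the second layer, which encodes slowly varying magnitudes of these coefficients, is omitted here.

```python
# Minimal sketch (not the authors' implementation): a first-layer-style sparse
# convolutional encoding of a spectrogram with a fixed kernel dictionary.
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)

n_freq, n_time = 64, 200                     # toy log-frequency x time spectrogram
spec = rng.random((n_freq, n_time))

# Dictionary of spectrotemporal kernels (random here; learned in the actual model).
n_kernels, k_freq, k_time = 8, 64, 12
kernels = rng.standard_normal((n_kernels, k_freq, k_time))

def sparse_code(spectrogram, kernels, sparsity=1.5):
    """Correlate each kernel over time, then soft-threshold at `sparsity` standard
    deviations so that only strong activations survive (a sparse code)."""
    codes = []
    for k in kernels:
        act = correlate2d(spectrogram, k, mode="valid")[0]   # 1-D activation over time
        cut = sparsity * act.std()
        codes.append(np.sign(act) * np.maximum(np.abs(act) - cut, 0.0))
    return np.array(codes)                    # shape: (n_kernels, n_time - k_time + 1)

codes = sparse_code(spec, kernels)
print(codes.shape, f"{np.mean(codes != 0):.2%} nonzero coefficients")
```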
Rossi, Tullio; Nagelkerken, Ivan; Connell, Sean D.
2016-01-01
The dispersal of larvae and their settlement to suitable habitat is fundamental to the replenishment of marine populations and the communities in which they live. Sound plays an important role in this process because, for larvae of various species, it acts as an orientational cue towards suitable settlement habitat. Because marine sounds are largely of biological origin, they not only carry information about the location of potential habitat, but also information about the quality of habitat. While ocean acidification is known to affect a wide range of marine organisms and processes, its effect on marine soundscapes and their reception by navigating oceanic larvae remains unknown. Here, we show that ocean acidification causes a switch in the role of present-day soundscapes from attractor to repellent in the auditory preferences of a temperate larval fish. Using natural CO2 vents as analogues of future ocean conditions, we further reveal that ocean acidification can impact marine soundscapes by profoundly diminishing their biological sound production. An altered soundscape poorer in biological cues indirectly penalizes oceanic larvae at the settlement stage because both control and CO2-treated fish larvae showed a lack of any response to such future soundscapes. These indirect and direct effects of ocean acidification put at risk the complex processes of larval dispersal and settlement. PMID:26763221
The Problems with "Noise Numbers" for Wind Farm Noise Assessment
ERIC Educational Resources Information Center
Thorne, Bob
2011-01-01
Human perception responds primarily to sound character rather than sound level. Wind farms are unique sound sources and exhibit special audible and inaudible characteristics that can be described as modulating sound or as a tonal complex. Wind farm compliance measures based on a specified noise number alone will fail to address problems with noise…
Data sonification and sound visualization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaper, H. G.; Tipei, S.; Wiebel, E.
1999-07-01
Sound can help us explore and analyze complex data sets in scientific computing. The authors describe a digital instrument for additive sound synthesis (Diass) and a program to visualize sounds in a virtual reality environment (M4Cave). Both are part of a comprehensive music composition environment that includes additional software for computer-assisted composition and automatic music notation.
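As an illustration of the additive-synthesis principle on which an instrument like Diass is based, the sketch below builds a tone as a sum of sinusoidal partials sharing an amplitude envelope. It is a minimal example, not Diass itself; the envelope shape, partial amplitudes, and sample rate are arbitrary choices.

```python
# Illustrative additive synthesis (not Diass): a sound is built as a sum of
# sinusoidal partials, each with its own frequency and amplitude, under a
# shared attack/decay envelope.
import numpy as np

def additive_tone(partials, duration=1.0, sr=44100):
    """partials: list of (frequency_hz, amplitude) pairs."""
    t = np.linspace(0.0, duration, int(sr * duration), endpoint=False)
    envelope = np.minimum(t / 0.02, 1.0) * np.exp(-2.0 * t)   # fast attack, exponential decay
    signal = sum(a * np.sin(2 * np.pi * f * t) for f, a in partials)
    signal = signal * envelope
    return signal / np.max(np.abs(signal))                    # normalize to [-1, 1]

# Example: a 220 Hz fundamental with eight harmonics of decaying amplitude.
tone = additive_tone([(220.0 * k, 1.0 / k) for k in range(1, 9)])
```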
NASA Astrophysics Data System (ADS)
Zhukhovitskii, D. I.; Fortov, V. E.; Molotkov, V. I.; Lipaev, A. M.; Naumkin, V. N.; Thomas, H. M.; Ivlev, A. V.; Schwabe, M.; Morfill, G. E.
2015-02-01
We report the first observation of the Mach cones excited by a larger microparticle (projectile) moving through a cloud of smaller microparticles (dust) in a complex plasma with neon as a buffer gas under microgravity conditions. A collective motion of the dust particles occurs as propagation of the contact discontinuity. The corresponding speed of sound was measured by a special method of the Mach cone visualization. The measurement results are incompatible with the theory of ion acoustic waves. The estimate for the pressure in a strongly coupled Coulomb system and a scaling law for the complex plasma make it possible to derive an evaluation for the speed of sound, which is in a reasonable agreement with the experiments in complex plasmas.
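For readers unfamiliar with the geometry, the textbook Mach relation links the cone's half-angle to the ratio of the sound speed and the projectile speed, sin(theta) = c_s / v. The sketch below simply evaluates that relation with made-up numbers; it is not the paper's visualization method, and the values are not from the experiment.

```python
# Textbook Mach relation sin(theta) = c_s / v_projectile; the numbers below are
# illustrative placeholders, not measured values from the dusty-plasma experiment.
import math

def speed_of_sound_from_cone(half_angle_deg, projectile_speed):
    """Speed of sound inferred from the Mach cone half-angle and projectile speed."""
    return projectile_speed * math.sin(math.radians(half_angle_deg))

print(speed_of_sound_from_cone(30.0, 50.0))  # -> 25.0 (same units as the projectile speed)
```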
Harwell, Mark A.; Gentile, John H.; Cummins, Kenneth W.; Highsmith, Raymond C.; Hilborn, Ray; McRoy, C. Peter; Parrish, Julia; Weingartner, Thomas
2010-01-01
Prince William Sound (PWS) is a semi-enclosed fjord estuary on the coast of Alaska adjoining the northern Gulf of Alaska (GOA). PWS is highly productive and diverse, with primary productivity strongly coupled to nutrient dynamics driven by variability in the climate and oceanography of the GOA and North Pacific Ocean. The pelagic and nearshore primary productivity supports a complex and diverse trophic structure, including large populations of forage fish and large fish that support many species of marine birds and mammals. High intra-annual, inter-annual, and interdecadal variability in climatic and oceanographic processes drives high variability in the biological populations. A risk-based conceptual ecosystem model (CEM) is presented describing the natural processes, anthropogenic drivers, and resultant stressors that affect PWS, including stressors caused by the Great Alaska Earthquake of 1964 and the Exxon Valdez oil spill of 1989. A trophodynamic model incorporating PWS valued ecosystem components is integrated into the CEM. By representing the relative strengths of drivers/stressors/effects, the CEM graphically demonstrates the fundamental dynamics of the PWS ecosystem, the natural forces that control the ecological condition of the Sound, and the relative contribution of natural processes and human activities to the health of the ecosystem. The CEM illustrates the dominance of natural processes in shaping the structure and functioning of the GOA and PWS ecosystems. PMID:20862192
The auditory scene: an fMRI study on melody and accompaniment in professional pianists.
Spada, Danilo; Verga, Laura; Iadanza, Antonella; Tettamanti, Marco; Perani, Daniela
2014-11-15
The auditory scene is a mental representation of individual sounds extracted from the summed sound waveform reaching the ears of the listeners. Musical contexts represent particularly complex cases of auditory scenes. In such a scenario, melody may be seen as the main object moving on a background represented by the accompaniment. Both melody and accompaniment vary in time according to harmonic rules, forming a typical texture with melody in the most prominent, salient voice. In the present sparse acquisition functional magnetic resonance imaging study, we investigated the interplay between melody and accompaniment in trained pianists, by observing the activation responses elicited by processing: (1) melody placed in the upper and lower texture voices, leading to, respectively, a higher and lower auditory salience; (2) harmonic violations occurring in either the melody, the accompaniment, or both. The results indicated that the neural activation elicited by the processing of polyphonic compositions in expert musicians depends upon the upper versus lower position of the melodic line in the texture, and showed an overall greater activation for the harmonic processing of melody over accompaniment. Both these two predominant effects were characterized by the involvement of the posterior cingulate cortex and precuneus, among other associative brain regions. We discuss the prominent role of the posterior medial cortex in the processing of melodic and harmonic information in the auditory stream, and propose to frame this processing in relation to the cognitive construction of complex multimodal sensory imagery scenes. Copyright © 2014 Elsevier Inc. All rights reserved.
Perceptual Literacy and the Construction of Significant Meanings within Art Education
ERIC Educational Resources Information Center
Cerkez, Beatriz Tomsic
2014-01-01
In order to verify how important the ability to process visual images and sounds in a holistic way can be, we developed an experiment based on the production and reception of an art work that was conceived as a multi-sensorial experience and implied a complex understanding of visual and auditory information. We departed from the idea that to…
Lelo-de-Larrea-Mancera, E Sebastian; Rodríguez-Agudelo, Yaneth; Solís-Vivanco, Rodolfo
2017-06-01
Music represents a complex form of human cognition. To what extent our auditory system is attuned to music is yet to be clearly understood. Our principal aim was to determine whether the neurophysiological operations underlying pre-attentive auditory change detection (N1 enhancement (N1e)/Mismatch Negativity (MMN)) and the subsequent involuntary attentional reallocation (P3a) towards infrequent sound omissions are influenced by differences in musical content. Specifically, we intended to explore any interaction effects that rhythmic and pitch dimensions of musical organization may have on these processes. Results showed that both the N1e and MMN amplitudes were differentially influenced by rhythm and pitch dimensions. MMN latencies were shorter for musical structures containing both features. This suggests some neurocognitive independence between pitch and rhythm domains, but also calls for further investigation of possible interactions between the two at the level of early, automatic auditory detection. Furthermore, results demonstrate that the N1e reflects basic sensory memory processes. Lastly, we show that the involuntary switch of attention associated with the P3a reflects a general-purpose mechanism not modulated by musical features. Altogether, the N1e/MMN/P3a complex elicited by infrequent sound omissions revealed evidence of musical influence over early stages of auditory perception. Copyright © 2017 Elsevier Ltd. All rights reserved.
EEG signatures accompanying auditory figure-ground segregation
Tóth, Brigitta; Kocsis, Zsuzsanna; Háden, Gábor P.; Szerafin, Ágnes; Shinn-Cunningham, Barbara; Winkler, István
2017-01-01
In everyday acoustic scenes, figure-ground segregation typically requires one to group together sound elements over both time and frequency. Electroencephalogram was recorded while listeners detected repeating tonal complexes composed of a random set of pure tones within stimuli consisting of randomly varying tonal elements. The repeating pattern was perceived as a figure over the randomly changing background. It was found that detection performance improved both as the number of pure tones making up each repeated complex (figure coherence) increased, and as the number of repeated complexes (duration) increased – i.e., detection was easier when either the spectral or temporal structure of the figure was enhanced. Figure detection was accompanied by the elicitation of the object related negativity (ORN) and the P400 event-related potentials (ERPs), which have been previously shown to be evoked by the presence of two concurrent sounds. Both ERP components had generators within and outside of auditory cortex. The amplitudes of the ORN and the P400 increased with both figure coherence and figure duration. However, only the P400 amplitude correlated with detection performance. These results suggest that 1) the ORN and P400 reflect processes involved in detecting the emergence of a new auditory object in the presence of other concurrent auditory objects; 2) the ORN corresponds to the likelihood of the presence of two or more concurrent sound objects, whereas the P400 reflects the perceptual recognition of the presence of multiple auditory objects and/or preparation for reporting the detection of a target object. PMID:27421185
By the sound of it. An ERP investigation of human action sound processing in 7-month-old infants
Geangu, Elena; Quadrelli, Ermanno; Lewis, James W.; Macchi Cassia, Viola; Turati, Chiara
2015-01-01
Recent evidence suggests that human adults perceive human action sounds as a distinct category from human vocalizations, environmental, and mechanical sounds, activating different neural networks (Engel et al., 2009; Lewis et al., 2011). Yet, little is known about the development of such specialization. Using event-related potentials (ERP), this study investigated neural correlates of 7-month-olds’ processing of human action (HA) sounds in comparison to human vocalizations (HV), environmental (ENV), and mechanical (MEC) sounds. Relative to the other categories, HA sounds led to increased positive amplitudes between 470 and 570 ms post-stimulus onset at left anterior temporal locations, while HV led to increased negative amplitudes at the more posterior temporal locations in both hemispheres. Collectively, human produced sounds (HA + HV) led to significantly different response profiles compared to non-living sound sources (ENV + MEC) at parietal and frontal locations in both hemispheres. Overall, by 7 months of age human action sounds are being differentially processed in the brain, consistent with a dichotomy for processing living versus non-living things. This provides novel evidence regarding the typical categorical processing of socially relevant sounds. PMID:25732377
Kanwal, Jagmeet S
2012-01-01
In the Doppler-shifted constant frequency processing area in the primary auditory cortex of mustached bats, Pteronotus parnellii, neurons respond to both social calls and to echolocation signals. This multifunctional nature of cortical neurons creates a paradox for simultaneous processing of two behaviorally distinct categories of sound. To test the possibility of a stimulus-specific hemispheric bias, single-unit responses were obtained to both types of sounds, calls and pulse-echo tone pairs, from the right and left auditory cortex. Neurons on the left exhibited only slightly higher peak response magnitudes for their respective best calls, but they showed a significantly higher sensitivity (lower response thresholds) to calls than neurons on the right. On average, call-to-tone response ratios were significantly higher for neurons on the left than for those on the right. Neurons on the right responded significantly more strongly to pulse-echo tone pairs than those on the left. Overall, neurons in males responded to pulse-echo tone pairs with a much higher spike count compared to females, but this difference was less pronounced for calls. Multidimensional scaling of call responses yielded a segregated representation of call types only on the left. These data establish for the first time, a behaviorally directed right-left asymmetry at the level of single cortical neurons. It is proposed that a lateralized cortex emerges from multiparametric integration (e.g. combination-sensitivity) within a neuron and inhibitory interactions between neurons that come into play during the processing of complex sounds. © 2011 The Author. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Temporal dynamics of contingency extraction from tonal and verbal auditory sequences.
Bendixen, Alexandra; Schwartze, Michael; Kotz, Sonja A
2015-09-01
Consecutive sound events are often to some degree predictive of each other. Here we investigated the brain's capacity to detect contingencies between consecutive sounds by means of electroencephalography (EEG) during passive listening. Contingencies were embedded within either tonal or verbal stimuli. Contingency extraction was measured indirectly via the elicitation of the mismatch negativity (MMN) component of the event-related potential (ERP) by contingency violations. MMN results indicate that structurally identical forms of predictability can be extracted from both tonal and verbal stimuli. We also found similar generators to underlie the processing of contingency violations across stimulus types, as well as similar performance in an active-listening follow-up test. However, the process of passive contingency extraction was considerably slower (twice as many rule exemplars were needed) for verbal than for tonal stimuli. These results suggest caution in transferring findings on complex predictive regularity processing obtained with tonal stimuli directly to the speech domain. Copyright © 2014 Elsevier Inc. All rights reserved.
Delgutte, Bertrand
2015-01-01
At lower levels of sensory processing, the representation of a stimulus feature in the response of a neural population can vary in complex ways across different stimulus intensities, potentially changing the amount of feature-relevant information in the response. How higher-level neural circuits could implement feature decoding computations that compensate for these intensity-dependent variations remains unclear. Here we focused on neurons in the inferior colliculus (IC) of unanesthetized rabbits, whose firing rates are sensitive to both the azimuthal position of a sound source and its sound level. We found that the azimuth tuning curves of an IC neuron at different sound levels tend to be linear transformations of each other. These transformations could either increase or decrease the mutual information between source azimuth and spike count with increasing level for individual neurons, yet population azimuthal information remained constant across the absolute sound levels tested (35, 50, and 65 dB SPL), as inferred from the performance of a maximum-likelihood neural population decoder. We harnessed evidence of level-dependent linear transformations to reduce the number of free parameters in the creation of an accurate cross-level population decoder of azimuth. Interestingly, this decoder predicts monotonic azimuth tuning curves, broadly sensitive to contralateral azimuths, in neurons at higher levels in the auditory pathway. PMID:26490292
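A maximum-likelihood population decoder of the kind mentioned above can be sketched under the simplifying assumptions of independent Poisson spike counts and known rate tuning curves. The tuning curves and spike counts below are synthetic, and this is not necessarily the decoder implementation used in the study.

```python
# Sketch of a maximum-likelihood population decoder of azimuth, assuming
# independent Poisson spike counts and known (here synthetic) tuning curves.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)
azimuths = np.linspace(-90, 90, 13)                     # candidate source azimuths (deg)

# Toy tuning curves: each neuron's mean spike count as a function of azimuth.
n_neurons = 40
centers = rng.uniform(-90, 90, n_neurons)
rates = 2.0 + 18.0 / (1.0 + np.exp(-(azimuths[None, :] - centers[:, None]) / 20.0))

def decode_azimuth(spike_counts, rates, azimuths):
    """Return the azimuth that maximizes the summed Poisson log-likelihood."""
    loglik = poisson.logpmf(spike_counts[:, None], rates).sum(axis=0)
    return azimuths[np.argmax(loglik)]

true_idx = 9
counts = rng.poisson(rates[:, true_idx])                # one simulated population response
print(decode_azimuth(counts, rates, azimuths), "vs true", azimuths[true_idx])
```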
Wolfe, Jace; Schafer, Erin; Parkinson, Aaron; John, Andrew; Hudson, Mary; Wheeler, Julie; Mucci, Angie
2013-01-01
The objective of this study was to compare speech recognition in quiet and in noise for cochlear implant recipients using two different types of personal frequency modulation (FM) systems (directly coupled [direct auditory input] versus induction neckloop) with each of two sound processors (Cochlear Nucleus Freedom versus Cochlear Nucleus 5). Two different experiments were conducted within this study. In both these experiments, mixing of the FM signal within the Freedom processor was implemented via the same scheme used clinically for the Freedom sound processor. In Experiment 1, the aforementioned comparisons were conducted with the Nucleus 5 programmed so that the microphone and FM signals were mixed and then the mixed signals were subjected to autosensitivity control (ASC). In Experiment 2, comparisons between the two FM systems and processors were conducted again with the Nucleus 5 programmed to provide a more complex multistage implementation of ASC during the preprocessing stage. This study was a within-subject, repeated-measures design. Subjects were recruited from the patient population at the Hearts for Hearing Foundation in Oklahoma City, OK. Fifteen subjects participated in Experiment 1, and 16 subjects participated in Experiment 2. Subjects were adults who had used either unilateral or bilateral cochlear implants for at least 1 year. In this experiment, no differences were found in speech recognition in quiet obtained with the two different FM systems or the various sound-processor conditions. With each sound processor, speech recognition in noise was better with the directly coupled direct auditory input system relative to the neckloop system. The multistage ASC processing of the Nucleus 5 sound processor provided better performance than the single-stage approach for the Nucleus 5 and the Nucleus Freedom sound processor. Speech recognition in noise is substantially affected by the type of sound processor, FM system, and implementation of ASC used by a Cochlear implant recipient.
33 CFR 149.585 - What are the requirements for sound signals?
Code of Federal Regulations, 2013 CFR
2013-07-01
... complex must have a sound signal, approved under subpart 67.10 of this chapter, that has a 2-mile (3...) Each sound signal must be: (1) Located at least 10 feet, but not more than 150 feet, above mean high...
33 CFR 149.585 - What are the requirements for sound signals?
Code of Federal Regulations, 2014 CFR
2014-07-01
... complex must have a sound signal, approved under subpart 67.10 of this chapter, that has a 2-mile (3...) Each sound signal must be: (1) Located at least 10 feet, but not more than 150 feet, above mean high...
33 CFR 149.585 - What are the requirements for sound signals?
Code of Federal Regulations, 2012 CFR
2012-07-01
... complex must have a sound signal, approved under subpart 67.10 of this chapter, that has a 2-mile (3...) Each sound signal must be: (1) Located at least 10 feet, but not more than 150 feet, above mean high...
33 CFR 149.585 - What are the requirements for sound signals?
Code of Federal Regulations, 2011 CFR
2011-07-01
... complex must have a sound signal, approved under subpart 67.10 of this chapter, that has a 2-mile (3...) Each sound signal must be: (1) Located at least 10 feet, but not more than 150 feet, above mean high...
Detecting regular sound changes in linguistics as events of concerted evolution
Hruschka, Daniel J.; Branford, Simon; Smith, Eric D.; ...
2014-12-18
Background: Concerted evolution is normally used to describe parallel changes at different sites in a genome, but it is also observed in languages where a specific phoneme changes to the same other phoneme in many words in the lexicon—a phenomenon known as regular sound change. We develop a general statistical model that can detect concerted changes in aligned sequence data and apply it to study regular sound changes in the Turkic language family. Results: Linguistic evolution, unlike the genetic substitutional process, is dominated by events of concerted evolutionary change. Our model identified more than 70 historical events of regular sound change that occurred throughout the evolution of the Turkic language family, while simultaneously inferring a dated phylogenetic tree. Including regular sound changes yielded an approximately 4-fold improvement in the characterization of linguistic change over a simpler model of sporadic change, improved phylogenetic inference, and returned more reliable and plausible dates for events on the phylogenies. The historical timings of the concerted changes closely follow a Poisson process model, and the sound transition networks derived from our model mirror linguistic expectations. Conclusions: We demonstrate that a model with no prior knowledge of complex concerted or regular changes can nevertheless infer the historical timings and genealogical placements of events of concerted change from the signals left in contemporary data. Our model can be applied wherever discrete elements—such as genes, words, cultural trends, technologies, or morphological traits—can change in parallel within an organism or other evolving group.
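The claim that the timings of concerted changes follow a Poisson process can be illustrated with a simple check: under a homogeneous Poisson process, inter-event intervals are exponentially distributed, so a Kolmogorov-Smirnov test against an exponential fit gives a quick, if rough, diagnostic. The event times below are invented for illustration; this is not the paper's phylogenetic inference.

```python
# Rough consistency check of event timings against a homogeneous Poisson process
# (toy event times, not the inferred Turkic sound-change dates).
import numpy as np
from scipy.stats import kstest, expon

rng = np.random.default_rng(2)
event_times = np.sort(rng.uniform(0, 5000, 70))    # hypothetical change times (years)

intervals = np.diff(event_times)
rate = 1.0 / intervals.mean()                      # maximum-likelihood Poisson rate
stat, p = kstest(intervals, expon(scale=1.0 / rate).cdf)
print(f"rate = {rate:.4f} events/year, KS statistic = {stat:.3f}, p = {p:.3f}")
```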
Zeitler, Daniel M; Dorman, Michael F; Natale, Sarah J; Loiselle, Louise; Yost, William A; Gifford, Rene H
2015-09-01
To assess improvements in sound source localization and speech understanding in complex listening environments after unilateral cochlear implantation for single-sided deafness (SSD). Nonrandomized, open, prospective case series. Tertiary referral center. Nine subjects with a unilateral cochlear implant (CI) for SSD (SSD-CI) were tested. Reference groups for the task of sound source localization included young (n = 45) and older (n = 12) normal-hearing (NH) subjects and 27 bilateral CI (BCI) subjects. Unilateral cochlear implantation. Sound source localization was tested with 13 loudspeakers in a 180-degree arc in front of the subject. Speech understanding was tested with the subject seated in an 8-loudspeaker sound system arrayed in a 360-degree pattern. Directionally appropriate noise, originally recorded in a restaurant, was played from each loudspeaker. Speech understanding in noise was tested using the AzBio sentence test, and sound source localization was quantified using root-mean-square error. All CI subjects showed poorer-than-normal sound source localization. SSD-CI subjects showed a bimodal distribution of scores: six subjects had scores near the mean of those obtained by BCI subjects, whereas three had scores just outside the 95th percentile of NH listeners. Speech understanding improved significantly in the restaurant environment when the signal was presented to the side of the CI. Cochlear implantation for SSD can offer improved speech understanding in complex listening environments and improved sound source localization in both children and adults. On tasks of sound source localization, SSD-CI patients typically perform as well as BCI patients and, in some cases, achieve scores at the upper boundary of normal performance.
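The localization score referred to above is a root-mean-square error over presented and reported loudspeaker azimuths; the sketch below shows the computation on made-up numbers, not study data.

```python
# Root-mean-square localization error over presented vs. reported azimuths
# (the azimuths and responses below are invented, not data from the study).
import numpy as np

target_az = np.array([-60, -30, 0, 30, 60, 90])      # presented azimuths (deg)
response_az = np.array([-45, -30, 15, 30, 75, 60])   # listener responses (deg)
rms_error = np.sqrt(np.mean((response_az - target_az) ** 2))
print(f"RMS error = {rms_error:.1f} deg")
```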
Parallel Processing of Large Scale Microphone Arrays for Sound Capture
NASA Astrophysics Data System (ADS)
Jan, Ea-Ee.
1995-01-01
Performance of microphone sound pick up is degraded by deleterious properties of the acoustic environment, such as multipath distortion (reverberation) and ambient noise. The degradation becomes more prominent in a teleconferencing environment in which the microphone is positioned far away from the speaker. Moreover, the ideal teleconference should feel as easy and natural as face-to-face communication with another person. This suggests hands-free sound capture with no tether or encumbrance by hand-held or body-worn sound equipment. Microphone arrays for this application represent an appropriate approach. This research develops new microphone array and signal processing techniques for high quality hands-free sound capture in noisy, reverberant enclosures. The new techniques combine matched-filtering of individual sensors and parallel processing to provide acute spatial volume selectivity which is capable of mitigating the deleterious effects of noise interference and multipath distortion. The new method outperforms traditional delay-and-sum beamformers which provide only directional spatial selectivity. The research additionally explores truncated matched-filtering and random distribution of transducers to reduce complexity and improve sound capture quality. All designs are first established by computer simulation of array performance in reverberant enclosures. The simulation is achieved by a room model which can efficiently calculate the acoustic multipath in a rectangular enclosure up to a prescribed order of images. It also calculates the incident angle of the arriving signal. Experimental arrays were constructed and their performance was measured in real rooms. Real room data were collected in a hard-walled laboratory and a controllable variable acoustics enclosure of similar size, approximately 6 x 6 x 3 m. An extensive speech database was also collected in these two enclosures for future research on microphone arrays. The simulation results are shown to be consistent with the real room data. Localization of sound sources has been explored using cross-power spectrum time delay estimation and has been evaluated using real room data under slightly, moderately and highly reverberant conditions. To improve the accuracy and reliability of the source localization, an outlier detector that removes incorrect time delay estimation has been invented. To provide speaker selectivity for microphone array systems, a hands-free speaker identification system has been studied. A recently invented feature using selected spectrum information outperforms traditional recognition methods. Measured results demonstrate the capabilities of speaker selectivity from a matched-filtered array. In addition, simulation utilities, including matched-filtering processing of the array and hands-free speaker identification, have been implemented on the massively parallel nCube supercomputer. This parallel computation highlights the requirements for real-time processing of array signals.
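As a point of reference for the comparison above, a basic delay-and-sum beamformer simply time-aligns each microphone signal to a chosen focal point and averages. The sketch below illustrates that baseline on a toy linear array; the geometry, sample rate, and source are invented values, and it is not the matched-filter array developed in the thesis.

```python
# Sketch of a basic delay-and-sum beamformer (the baseline referred to above,
# not the matched-filter array itself); geometry and signals are toy values.
import numpy as np

C = 343.0  # speed of sound in air, m/s

def delay_and_sum(signals, mic_positions, focus_point, sr):
    """Align each microphone signal to a focal point by integer-sample delays and average."""
    dists = np.linalg.norm(mic_positions - focus_point, axis=1)
    delays = (dists - dists.min()) / C                  # relative propagation delays (s)
    shifts = np.round(delays * sr).astype(int)
    n = signals.shape[1] - shifts.max()
    aligned = np.stack([s[d:d + n] for s, d in zip(signals, shifts)])
    return aligned.mean(axis=0)

# Toy example: 8 microphones on a 1 m line, a 500 Hz source 2 m away at broadside.
sr = 16000
mics = np.column_stack([np.linspace(-0.5, 0.5, 8), np.zeros(8), np.zeros(8)])
src = np.array([0.0, 2.0, 0.0])
t = np.arange(sr) / sr
dists = np.linalg.norm(mics - src, axis=1)
signals = np.stack([np.sin(2 * np.pi * 500 * (t - d / C)) for d in dists])
output = delay_and_sum(signals, mics, src, sr)
```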
The informativity of sound modulates crossmodal facilitation of visual discrimination: a fMRI study.
Li, Qi; Yu, Hongtao; Li, Xiujun; Sun, Hongzan; Yang, Jingjing; Li, Chunlin
2017-01-18
Many studies have investigated behavioral crossmodal facilitation when a visual stimulus is accompanied by a concurrent task-irrelevant sound. Lippert and colleagues reported that a concurrent task-irrelevant sound reduced the uncertainty of the timing of the visual display and improved perceptional responses (informative sound). However, the neural mechanism by which the informativity of sound affected crossmodal facilitation of visual discrimination remained unclear. In this study, we used event-related functional MRI to investigate the neural mechanisms underlying the role of informativity of sound in crossmodal facilitation of visual discrimination. Significantly faster reaction times were observed when there was an informative relationship between auditory and visual stimuli. The functional MRI results showed sound informativity-induced activation enhancement including the left fusiform gyrus and the right lateral occipital complex. Further correlation analysis showed that the right lateral occipital complex was significantly correlated with the behavioral benefit in reaction times. This suggests that this region was modulated by the informative relationship within audiovisual stimuli that was learnt during the experiment, resulting in late-stage multisensory integration and enhanced behavioral responses.
Sound source localization and segregation with internally coupled ears: the treefrog model
Christensen-Dalsgaard, Jakob
2016-01-01
Acoustic signaling plays key roles in mediating many of the reproductive and social behaviors of anurans (frogs and toads). Moreover, acoustic signaling often occurs at night, in structurally complex habitats, such as densely vegetated ponds, and in dense breeding choruses characterized by high levels of background noise and acoustic clutter. Fundamental to anuran behavior is the ability of the auditory system to determine accurately the location from where sounds originate in space (sound source localization) and to assign specific sounds in the complex acoustic milieu of a chorus to their correct sources (sound source segregation). Here, we review anatomical, biophysical, neurophysiological, and behavioral studies aimed at identifying how the internally coupled ears of frogs contribute to sound source localization and segregation. Our review focuses on treefrogs in the genus Hyla, as they are the most thoroughly studied frogs in terms of sound source localization and segregation. They also represent promising model systems for future work aimed at understanding better how internally coupled ears contribute to sound source localization and segregation. We conclude our review by enumerating directions for future research on these animals that will require the collaborative efforts of biologists, physicists, and roboticists. PMID:27730384
Complex coevolution of wing, tail, and vocal sounds of courting male bee hummingbirds.
Clark, Christopher J; McGuire, Jimmy A; Bonaccorso, Elisa; Berv, Jacob S; Prum, Richard O
2018-03-01
Phenotypic characters with a complex physical basis may have a correspondingly complex evolutionary history. Males in the "bee" hummingbird clade court females with sound from tail-feathers, which flutter during display dives. On a phylogeny of 35 species, flutter sound frequency evolves as a gradual, continuous character on most branches. But on at least six internal branches fall two types of major, saltational changes: mode of flutter changes, or the feather that is the sound source changes, causing frequency to jump from one discrete value to another. In addition to their tail "instruments," males also court females with sound from their syrinx and wing feathers, and may transfer or switch instruments over evolutionary time. In support of this, we found a negative phylogenetic correlation between presence of wing trills and singing. We hypothesize this transference occurs because wing trills and vocal songs serve similar functions and are thus redundant. There are also three independent origins of self-convergence of multiple signals, in which the same species produces both a vocal (sung) frequency sweep, and a highly similar nonvocal sound. Moreover, production of vocal, learned song has been lost repeatedly. Male bee hummingbirds court females with a diverse, coevolving array of acoustic traits. © 2018 The Author(s). Evolution © 2018 The Society for the Study of Evolution.
Dyslexia risk gene relates to representation of sound in the auditory brainstem.
Neef, Nicole E; Müller, Bent; Liebig, Johanna; Schaadt, Gesa; Grigutsch, Maren; Gunter, Thomas C; Wilcke, Arndt; Kirsten, Holger; Skeide, Michael A; Kraft, Indra; Kraus, Nina; Emmrich, Frank; Brauer, Jens; Boltze, Johannes; Friederici, Angela D
2017-04-01
Dyslexia is a reading disorder with strong associations with KIAA0319 and DCDC2. Both genes play a functional role in spike time precision of neurons. Strikingly, poor readers show an imprecise encoding of fast transients of speech in the auditory brainstem. Whether dyslexia risk genes are related to the quality of sound encoding in the auditory brainstem remains to be investigated. Here, we quantified the response consistency of speech-evoked brainstem responses to the acoustically presented syllable [da] in 159 genotyped, literate and preliterate children. When controlling for age, sex, familial risk and intelligence, partial correlation analyses associated a higher dyslexia risk loading with KIAA0319 with noisier responses. In contrast, a higher risk loading with DCDC2 was associated with a trend towards more stable responses. These results suggest that unstable representation of sound, and thus, reduced neural discrimination ability of stop consonants, occurred in genotypes carrying a higher amount of KIAA0319 risk alleles. Current data provide the first evidence that the dyslexia-associated gene KIAA0319 can alter brainstem responses and impair phoneme processing in the auditory brainstem. This brain-gene relationship provides insight into the complex relationships between phenotype and genotype thereby improving the understanding of the dyslexia-inherent complex multifactorial condition. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
Silver, bighead, and common carp orient to acoustic particle motion when avoiding a complex sound.
Zielinski, Daniel P; Sorensen, Peter W
2017-01-01
Behavioral responses of silver carp (Hypophthalmichthys molitrix), bighead carp (H. nobilis), and common carp (Cyprinus carpio) to a complex, broadband sound were tested in the absence of visual cues to determine whether these species are negatively phonotaxic and the roles that sound pressure and particle motion might play in mediating this response. In a dark featureless square enclosure, groups of 3 fish were tracked, and the distance of each fish from speakers and their swimming trajectories relative to sound pressure and particle acceleration were analyzed before, and then while, an outboard motor sound was played. All three species exhibited negative phonotaxis during the first two exposures, after which they ceased responding. The median percent time fish spent near the active speaker for the first two trials decreased from 7.0% to 1.3% for silver carp, 7.9% to 1.1% for bighead carp, and 9.5% to 3% for common carp. Notably, when close to the active speaker, fish swam away from the source and maintained a nearly perfect 0° orientation to the axes of particle acceleration. Fish did not enter sound fields greater than 140 dB (ref. 1 μPa). These results demonstrate that carp avoid complex sounds in darkness and that while initial responses may be informed by sound pressure, sustained oriented avoidance behavior is likely mediated by particle motion. This understanding of how invasive carp use particle motion to guide avoidance could be used to design new acoustic deterrents to divert them in dark, turbid river waters.
Early auditory processing in musicians and dancers during a contemporary dance piece
Poikonen, Hanna; Toiviainen, Petri; Tervaniemi, Mari
2016-01-01
The neural responses to simple tones and short sound sequences have been studied extensively. However, in reality the sounds surrounding us are spectrally and temporally complex, dynamic and overlapping. Thus, research using natural sounds is crucial in understanding the operation of the brain in its natural environment. Music is an excellent example of natural stimulation which, in addition to sensory responses, elicits vast cognitive and emotional processes in the brain. Here we show that the preattentive P50 response evoked by rapid increases in timbral brightness during continuous music is enhanced in dancers when compared to musicians and laymen. In dance, fast changes in brightness are often emphasized with a significant change in movement. In addition, the auditory N100 and P200 responses are suppressed and sped up in dancers, musicians and laymen when music is accompanied with a dance choreography. These results were obtained with a novel event-related potential (ERP) method for natural music. They suggest that we can begin studying the brain with long pieces of natural music using the ERP method of electroencephalography (EEG) as has already been done with functional magnetic resonance (fMRI), these two brain imaging methods complementing each other. PMID:27611929
Underwater Sound Propagation from Marine Pile Driving.
Reyff, James A
2016-01-01
Pile driving occurs in a variety of nearshore environments that typically have very shallow-water depths. The propagation of pile-driving sound in water is complex, where sound is directly radiated from the pile as well as through the ground substrate. Piles driven in the ground near water bodies can produce considerable underwater sound energy. This paper presents examples of sound propagation through shallow-water environments. Some of these examples illustrate the substantial variation in sound amplitude over time that can be critical to understand when computing an acoustic-based safety zone for aquatic species.
NASA Astrophysics Data System (ADS)
Matoza, Robin S.; Fee, David; Garcés, Milton A.
2010-12-01
Long-lived effusive volcanism at the Pu`u `Ō`ō crater complex, Kilauea Volcano, Hawaii produces persistent infrasonic tremor that has been recorded almost continuously for months to years. Previous studies showed that this infrasonic tremor wavefield can be recorded at a range of >10 km. However, the low signal power of this tremor relative to ambient noise levels results in significant propagation effects on signal detectability at this range. In April 2007, we supplemented a broadband infrasound array at ˜12.5 km from Pu`u `Ō`ō (MENE) with a similar array at ˜2.4 km from the source (KIPU). The additional closer-range data enable further evaluation of tropospheric propagation effects and provide higher signal-to-noise ratios for studying volcanic source processes. The infrasonic tremor source appears to consist of at least two separate physical processes. We suggest that bubble cloud oscillation in a roiling magma conduit beneath the crater complex may produce a broadband component of the tremor. Low-frequency sound sourced in a shallow magma conduit may radiate infrasound efficiently into the atmosphere due to the anomalous transparency of the magma-air interface. We further propose that more sharply peaked tones with complex temporal evolution may result from oscillatory interactions of a low-velocity gas jet with solid vent boundaries in a process analogous to the hole tone or whistler nozzle. The infrasonic tremor arrives with a median azimuth of ˜67° at KIPU. Additional infrasonic signals and audible sounds originating from the extended lava tube system to the south of the crater complex (median azimuth ˜77°) coincided with turbulent degassing activity at a new lava tube skylight. Our observations indicate that acoustic studies may aid in understanding persistent continuous degassing and unsteady flow dynamics at Kilauea Volcano.
Neural basis of processing threatening voices in a crowded auditory world
Mothes-Lasch, Martin; Becker, Michael P. I.; Miltner, Wolfgang H. R.
2016-01-01
In real world situations, we typically listen to voice prosody against a background crowded with auditory stimuli. Voices and background can both contain behaviorally relevant features and both can be selectively in the focus of attention. Adequate responses to threat-related voices under such conditions require that the brain unmixes reciprocally masked features depending on variable cognitive resources. It is unknown which brain systems instantiate the extraction of behaviorally relevant prosodic features under varying combinations of prosody valence, auditory background complexity and attentional focus. Here, we used event-related functional magnetic resonance imaging to investigate the effects of high background sound complexity and attentional focus on brain activation to angry and neutral prosody in humans. Results show that prosody effects in mid superior temporal cortex were gated by background complexity but not attention, while prosody effects in the amygdala and anterior superior temporal cortex were gated by attention but not background complexity, suggesting distinct emotional prosody processing limitations in different regions. Crucially, if attention was focused on the highly complex background, the differential processing of emotional prosody was prevented in all brain regions, suggesting that in a distracting, complex auditory world even threatening voices may go unnoticed. PMID:26884543
Yuskaitis, Christopher J.; Parviz, Mahsa; Loui, Psyche; Wan, Catherine Y.; Pearl, Phillip L.
2017-01-01
Music production and perception invoke a complex set of cognitive functions that rely on the integration of sensory-motor, cognitive, and emotional pathways. Pitch is a fundamental perceptual attribute of sound and a building block for both music and speech. Although the cerebral processing of pitch is not completely understood, recent advances in imaging and electrophysiology have provided insight into the functional and anatomical pathways of pitch processing. This review examines the current understanding of pitch processing, behavioral and neural variations that give rise to difficulties in pitch processing, and potential applications of music education for language processing disorders such as dyslexia. PMID:26092314
How we hear what is not there: A neural mechanism for the missing fundamental illusion
NASA Astrophysics Data System (ADS)
Chialvo, Dante R.
2003-12-01
How the brain estimates the pitch of a complex sound remains unsolved. Complex sounds are composed of more than one tone. When two tones occur together, a third, lower-pitched tone is often heard. This is referred to as the "missing fundamental illusion" because the perceived pitch is a frequency (fundamental) for which there is no actual source vibration. This phenomenon exemplifies a larger variety of problems related to how pitch is extracted from complex tones, music and speech, and thus has been extensively used to test theories of pitch perception. A noisy nonlinear process is presented here as a candidate neural mechanism to explain the majority of reported phenomenology and provide specific quantitative predictions. The two basic premises of this model are as follows: (I) the individual tones composing the complex tone add linearly, producing peaks of constructive interference whose amplitude is always insufficient to fire the neuron; (II) the spike threshold is reached only with noise, which naturally selects the maximum constructive interferences. The spacing of these maxima, and consequently the spikes, occurs at a rate identical to the perceived pitch for the complex tone. Comparison with psychophysical and physiological data reveals a remarkable quantitative agreement not dependent on adjustable parameters. In addition, results from numerical simulations across different models are consistent, suggesting relevance to other sensory modalities.
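The two premises can be reproduced in a few lines of simulation: two partials at 400 and 600 Hz are summed, a threshold is set just above their tallest interference peak, and additive noise produces threshold crossings that cluster at the 5 ms period of the missing 200 Hz fundamental. The noise level, threshold, and refractory period below are illustrative choices, not values from the paper.

```python
# Toy reproduction of the two premises: subthreshold interference peaks plus
# noise yield "spikes" spaced near the period of the missing fundamental.
import numpy as np

rng = np.random.default_rng(3)
sr = 20000
t = np.arange(0, 1.0, 1.0 / sr)

f0 = 200.0                                         # the "missing" fundamental (Hz)
signal = np.sin(2 * np.pi * 2 * f0 * t) + np.sin(2 * np.pi * 3 * f0 * t)  # 400 + 600 Hz only

threshold = 2.0                                    # just above the tallest interference peak (~1.9)
noisy = signal + 0.25 * rng.standard_normal(t.size)

# "Spikes" are upward threshold crossings, with a 2 ms refractory period.
above = noisy > threshold
crossings = np.flatnonzero(above[1:] & ~above[:-1]) + 1
spikes, refractory = [], int(0.002 * sr)
for c in crossings:
    if not spikes or c - spikes[-1] >= refractory:
        spikes.append(c)

intervals = np.diff(np.array(spikes)) / sr
print(f"median inter-spike interval: {np.median(intervals) * 1e3:.1f} ms "
      f"(period of the missing fundamental: {1e3 / f0:.1f} ms)")
```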
ERIC Educational Resources Information Center
Landi, Nicole; Frost, Stephen J.; Mencl, W. Einar; Sandak, Rebecca; Pugh, Kenneth R.
2013-01-01
For accurate reading comprehension, readers must first learn to map letters to their corresponding speech sounds and meaning, and then they must string the meanings of many words together to form a representation of the text. Furthermore, readers must master the complexities involved in parsing the relevant syntactic and pragmatic information…
Auditory Brainstem Response to Complex Sounds Predicts Self-Reported Speech-in-Noise Performance
ERIC Educational Resources Information Center
Anderson, Samira; Parbery-Clark, Alexandra; White-Schwoch, Travis; Kraus, Nina
2013-01-01
Purpose: To compare the ability of the auditory brainstem response to complex sounds (cABR) to predict subjective ratings of speech understanding in noise on the Speech, Spatial, and Qualities of Hearing Scale (SSQ; Gatehouse & Noble, 2004) relative to the predictive ability of the Quick Speech-in-Noise test (QuickSIN; Killion, Niquette,…
Evidence for distinct human auditory cortex regions for sound location versus identity processing
Ahveninen, Jyrki; Huang, Samantha; Nummenmaa, Aapo; Belliveau, John W.; Hung, An-Yi; Jääskeläinen, Iiro P.; Rauschecker, Josef P.; Rossi, Stephanie; Tiitinen, Hannu; Raij, Tommi
2014-01-01
Neurophysiological animal models suggest that anterior auditory cortex (AC) areas process sound-identity information, whereas posterior ACs specialize in sound location processing. In humans, inconsistent neuroimaging results and insufficient causal evidence have challenged the existence of such parallel AC organization. Here we transiently inhibit bilateral anterior or posterior AC areas using MRI-guided paired-pulse transcranial magnetic stimulation (TMS) while subjects listen to Reference/Probe sound pairs and perform either sound location or identity discrimination tasks. The targeting of TMS pulses, delivered 55–145 ms after Probes, is confirmed with individual-level cortical electric-field estimates. Our data show that TMS to posterior AC regions delays reaction times (RT) significantly more during sound location than identity discrimination, whereas TMS to anterior AC regions delays RTs significantly more during sound identity than location discrimination. This double dissociation provides direct causal support for parallel processing of sound identity features in anterior AC and sound location in posterior AC. PMID:24121634
Agnosia for accents in primary progressive aphasia
Fletcher, Phillip D.; Downey, Laura E.; Agustus, Jennifer L.; Hailstone, Julia C.; Tyndall, Marina H.; Cifelli, Alberto; Schott, Jonathan M.; Warrington, Elizabeth K.; Warren, Jason D.
2013-01-01
As an example of complex auditory signal processing, the analysis of accented speech is potentially vulnerable in the progressive aphasias. However, the brain basis of accent processing and the effects of neurodegenerative disease on this processing are not well understood. Here we undertook a detailed neuropsychological study of a patient, AA with progressive nonfluent aphasia, in whom agnosia for accents was a prominent clinical feature. We designed a battery to assess AA's ability to process accents in relation to other complex auditory signals. AA's performance was compared with a cohort of 12 healthy age and gender matched control participants and with a second patient, PA, who had semantic dementia with phonagnosia and prosopagnosia but no reported difficulties with accent processing. Relative to healthy controls, the patients showed distinct profiles of accent agnosia. AA showed markedly impaired ability to distinguish change in an individual's accent despite being able to discriminate phonemes and voices (apperceptive accent agnosia); and in addition, a severe deficit of accent identification. In contrast, PA was able to perceive changes in accents, phonemes and voices normally, but showed a relatively mild deficit of accent identification (associative accent agnosia). Both patients showed deficits of voice and environmental sound identification, however PA showed an additional deficit of face identification whereas AA was able to identify (though not name) faces normally. These profiles suggest that AA has conjoint (or interacting) deficits involving both apperceptive and semantic processing of accents, while PA has a primary semantic (associative) deficit affecting accents along with other kinds of auditory objects and extending beyond the auditory modality. Brain MRI revealed left peri-Sylvian atrophy in case AA and relatively focal asymmetric (predominantly right sided) temporal lobe atrophy in case PA. These cases provide further evidence for the fractionation of brain mechanisms for complex sound analysis, and for the stratification of progressive aphasia syndromes according to the signature of nonverbal auditory deficits they produce. PMID:23721780
Methods of sound simulation and applications in flight simulators
NASA Technical Reports Server (NTRS)
Gaertner, K. P.
1980-01-01
An overview of methods for electronically synthesizing sounds is presented. A given amount of hardware and computer capacity places an upper limit on the degree and fidelity of realism of sound simulation which is attainable. Good sound realism for aircraft simulators can be especially expensive because of the complexity of flight sounds and their changing patterns through time. Nevertheless, the flight simulator developed at the Research Institute for Human Engineering, West Germany, shows that it is possible to design an inexpensive sound simulator with the required acoustic properties using analog computer elements. The characteristics of the sub-sound elements produced by this sound simulator for take-off, cruise and approach are discussed.
Visual Presentation Effects on Identification of Multiple Environmental Sounds
Masakura, Yuko; Ichikawa, Makoto; Shimono, Koichi; Nakatsuka, Reio
2016-01-01
This study examined how the contents and timing of a visual stimulus affect the identification of mixed sounds recorded in a daily life environment. For experiments, we presented four environment sounds as auditory stimuli for 5 s along with a picture or a written word as a visual stimulus that might or might not denote the source of one of the four sounds. Three conditions of temporal relations between the visual stimuli and sounds were used. The visual stimulus was presented either: (a) for 5 s simultaneously with the sound; (b) for 5 s, 1 s before the sound (SOA between the audio and visual stimuli was 6 s); or (c) for 33 ms, 1 s before the sound (SOA was 1033 ms). Participants reported all identifiable sounds for those audio–visual stimuli. To characterize the effects of visual stimuli on sound identification, the following were used: the identification rates of sounds for which the visual stimulus denoted its sound source, the rates of other sounds for which the visual stimulus did not denote the sound source, and the frequency of false hearing of a sound that was not presented for each sound set. Results of the four experiments demonstrated that a picture or a written word promoted identification of the sound when it was related to the sound, particularly when the visual stimulus was presented for 5 s simultaneously with the sounds. However, a visual stimulus preceding the sounds had a benefit only for the picture, not for the written word. Furthermore, presentation with a picture denoting a sound simultaneously with the sound reduced the frequency of false hearing. These results suggest three ways that presenting a visual stimulus affects identification of the auditory stimulus. First, activation of the visual representation extracted directly from the picture promotes identification of the denoted sound and suppresses the processing of sounds for which the visual stimulus did not denote the sound source. Second, effects based on processing of the conceptual information promote identification of the denoted sound and suppress the processing of sounds for which the visual stimulus did not denote the sound source. Third, processing of the concurrent visual representation suppresses false hearing. PMID:26973478
NASA Astrophysics Data System (ADS)
Rimland, Jeff; Ballora, Mark
2014-05-01
The field of sonification, which uses auditory presentation of data to replace or augment visualization techniques, is gaining popularity and acceptance for analysis of "big data" and for assisting analysts who are unable to utilize traditional visual approaches due to either: 1) visual overload caused by existing displays; 2) concurrent need to perform critical visually intensive tasks (e.g. operating a vehicle or performing a medical procedure); or 3) visual impairment due to either temporary environmental factors (e.g. dense smoke) or biological causes. Sonification tools typically map data values to sound attributes such as pitch, volume, and localization to enable them to be interpreted via human listening. In more complex problems, the challenge is in creating multi-dimensional sonifications that are both compelling and listenable, and that have enough discrete features that can be modulated in ways that allow meaningful discrimination by a listener. We propose a solution to this problem that incorporates Complex Event Processing (CEP) with speech synthesis. Some of the more promising sonifications to date use speech synthesis, which is an "instrument" that is amenable to extended listening, and can also provide a great deal of subtle nuance. These vocal nuances, which can represent a nearly limitless number of expressive meanings (via a combination of pitch, inflection, volume, and other acoustic factors), are the basis of our daily communications, and thus have the potential to engage the innate human understanding of these sounds. Additionally, recent advances in CEP have facilitated the extraction of multi-level hierarchies of information, which is necessary to bridge the gap between raw data and this type of vocal synthesis. We therefore propose that CEP-enabled sonifications based on the sound of human utterances could be considered the next logical step in human-centric "big data" compression and transmission.
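To make the mapping idea above concrete, here is a minimal parameter-mapping sonification sketch: each data value is rendered as a short tone whose pitch and loudness track the value. This is an illustrative toy, not the CEP- or speech-synthesis-based system the paper proposes; the function name, frequency range, and note duration are assumptions.

```python
# Illustrative sketch only: minimal parameter-mapping sonification
# (data values -> pitch and volume). Names and parameters are hypothetical.
import numpy as np

def sonify(values, sr=44100, note_dur=0.25, f_lo=220.0, f_hi=880.0):
    """Map each data value to a short tone whose pitch and loudness track the value."""
    values = np.asarray(values, dtype=float)
    lo, hi = values.min(), values.max()
    norm = (values - lo) / (hi - lo) if hi > lo else np.zeros_like(values)
    t = np.linspace(0.0, note_dur, int(sr * note_dur), endpoint=False)
    out = []
    for v in norm:
        freq = f_lo * (f_hi / f_lo) ** v          # log-spaced pitch mapping
        amp = 0.2 + 0.8 * v                       # louder for larger values
        env = np.hanning(t.size)                  # taper to avoid clicks between notes
        out.append(amp * env * np.sin(2.0 * np.pi * freq * t))
    return np.concatenate(out)

if __name__ == "__main__":
    audio = sonify(np.random.randn(20).cumsum())  # a random-walk "data stream"
    print(audio.shape, audio.dtype)
```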
Fava, Eswen; Hull, Rachel; Bortfeld, Heather
2014-01-01
Initially, infants are capable of discriminating phonetic contrasts across the world’s languages. Starting between seven and ten months of age, they gradually lose this ability through a process of perceptual narrowing. Although traditionally investigated with isolated speech sounds, such narrowing occurs in a variety of perceptual domains (e.g., faces, visual speech). Thus far, tracking the developmental trajectory of this tuning process has been focused primarily on auditory speech alone, and generally using isolated sounds. But infants learn from speech produced by people talking to them, meaning they learn from a complex audiovisual signal. Here, we use near-infrared spectroscopy to measure blood concentration changes in the bilateral temporal cortices of infants in three different age groups: 3-to-6 months, 7-to-10 months, and 11-to-14-months. Critically, all three groups of infants were tested with continuous audiovisual speech in both their native and another, unfamiliar language. We found that at each age range, infants showed different patterns of cortical activity in response to the native and non-native stimuli. Infants in the youngest group showed bilateral cortical activity that was greater overall in response to non-native relative to native speech; the oldest group showed left lateralized activity in response to native relative to non-native speech. These results highlight perceptual tuning as a dynamic process that happens across modalities and at different levels of stimulus complexity. PMID:25116572
Phonics Pathways. 7th Edition.
ERIC Educational Resources Information Center
Hiskes, Dolores G.
This book uses phonics to teach reading in a complete manual for beginning and remedial readers of all ages. The book builds sounds and spelling patterns slowly and systematically into syllables, phrases, and sentences of gradually increasing complexity, presenting only one sound per lesson--each new sound builds upon previously learned skills for…
Demonstrations of simple and complex auditory psychophysics for multiple platforms and environments
NASA Astrophysics Data System (ADS)
Horowitz, Seth S.; Simmons, Andrea M.; Blue, China
2005-09-01
Sound is arguably the most widely perceived and pervasive form of energy in our world, and among the least understood, in part due to the complexity of its underlying principles. A series of interactive displays has been developed which demonstrates that the nature of sound involves the propagation of energy through space, and illustrates the definition of psychoacoustics, which is how listeners map the physical aspects of sound and vibration onto their brains. These displays use auditory illusions and commonly experienced music and sound in novel presentations (using interactive computer algorithms) to show that what you hear is not always what you get. The areas covered in these demonstrations range from simple and complex auditory localization, which illustrate why humans are bad at echolocation but excellent at determining the contents of auditory space, to auditory illusions that manipulate fine phase information and make the listener think their head is changing size. Another demonstration shows how auditory and visual localization coincide and sound can be used to change visual tracking. These demonstrations are designed to run on a wide variety of student accessible platforms including web pages, stand-alone presentations, or even hardware-based systems for museum displays.
Systems and processes that ensure high quality care.
Bassett, Sally; Westmore, Kathryn
2012-10-01
This is the second in a series of articles examining the components of good corporate governance. It considers how the structures and processes for quality governance can affect an organisation's ability to be assured about the quality of care. Complex information systems and procedures can lead to poor quality care, but sound structures and processes alone are insufficient to ensure good governance, and behavioural factors play a significant part in making sure that staff are enabled to provide good quality care. The next article in this series looks at how the information reporting of an organisation can affect its governance.
Sound propagation from a ridge wind turbine across a valley.
Van Renterghem, Timothy
2017-04-13
Sound propagation outdoors can be strongly affected by ground topography. The existence of hills and valleys between a source and receiver can lead to the shielding or focusing of sound waves. Such effects can result in significant variations in received sound levels. In addition, wind speed and air temperature gradients in the atmospheric boundary layer also play an important role. All of the foregoing factors can become especially important for the case of wind turbines located on a ridge overlooking a valley. Ridges are often selected for wind turbines in order to increase their energy capture potential through the wind speed-up effects often experienced in such locations. In this paper, a hybrid calculation method is presented to model such a case, relying on an analytical solution for sound diffraction around an impedance cylinder and the conformal mapping (CM) Green's function parabolic equation (GFPE) technique. The various aspects of the model have been successfully validated against alternative prediction methods. Example calculations with this hybrid analytical-CM-GFPE model show the complex sound pressure level distribution across the valley and the effect of valley ground type. The proposed method has the potential to include the effect of refraction through the inclusion of complex wind and temperature fields, although this aspect has been highly simplified in the current simulations. This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Author(s).
Functional MRI of the vocalization-processing network in the macaque brain
Ortiz-Rios, Michael; Kuśmierek, Paweł; DeWitt, Iain; Archakov, Denis; Azevedo, Frederico A. C.; Sams, Mikko; Jääskeläinen, Iiro P.; Keliris, Georgios A.; Rauschecker, Josef P.
2015-01-01
Using functional magnetic resonance imaging in awake behaving monkeys we investigated how species-specific vocalizations are represented in auditory and auditory-related regions of the macaque brain. We found clusters of active voxels along the ascending auditory pathway that responded to various types of complex sounds: inferior colliculus (IC), medial geniculate nucleus (MGN), auditory core, belt, and parabelt cortex, and other parts of the superior temporal gyrus (STG) and sulcus (STS). Regions sensitive to monkey calls were most prevalent in the anterior STG, but some clusters were also found in frontal and parietal cortex on the basis of comparisons between responses to calls and environmental sounds. Surprisingly, we found that spectrotemporal control sounds derived from the monkey calls (“scrambled calls”) also activated the parietal and frontal regions. Taken together, our results demonstrate that species-specific vocalizations in rhesus monkeys activate preferentially the auditory ventral stream, and in particular areas of the antero-lateral belt and parabelt. PMID:25883546
The Potential Role of the cABR in Assessment and Management of Hearing Impairment
Anderson, Samira; Kraus, Nina
2013-01-01
Hearing aid technology has improved dramatically in the last decade, especially in the ability to adaptively respond to dynamic aspects of background noise. Despite these advancements, however, hearing aid users continue to report difficulty hearing in background noise and having trouble adjusting to amplified sound quality. These difficulties may arise in part from current approaches to hearing aid fittings, which largely focus on increased audibility and management of environmental noise. These approaches do not take into account the fact that sound is processed all along the auditory system from the cochlea to the auditory cortex. Older adults represent the largest group of hearing aid wearers; yet older adults are known to have deficits in temporal resolution in the central auditory system. Here we review evidence that supports the use of the auditory brainstem response to complex sounds (cABR) in the assessment of hearing-in-noise difficulties and auditory training efficacy in older adults. PMID:23431313
Physiological models of the lateral superior olive
2017-01-01
In computational biology, modeling is a fundamental tool for formulating, analyzing and predicting complex phenomena. Most neuron models, however, are designed to reproduce certain small sets of empirical data. Hence their outcome is usually not compatible or comparable with other models or datasets, making it unclear how widely applicable such models are. In this study, we investigate these aspects of modeling, namely credibility and generalizability, with a specific focus on auditory neurons involved in the localization of sound sources. The primary cues for binaural sound localization are comprised of interaural time and level differences (ITD/ILD), which are the timing and intensity differences of the sound waves arriving at the two ears. The lateral superior olive (LSO) in the auditory brainstem is one of the locations where such acoustic information is first computed. An LSO neuron receives temporally structured excitatory and inhibitory synaptic inputs that are driven by ipsi- and contralateral sound stimuli, respectively, and changes its spike rate according to binaural acoustic differences. Here we examine seven contemporary models of LSO neurons with different levels of biophysical complexity, from predominantly functional ones (‘shot-noise’ models) to those with more detailed physiological components (variations of integrate-and-fire and Hodgkin-Huxley-type). These models, calibrated to reproduce known monaural and binaural characteristics of LSO, generate largely similar results to each other in simulating ITD and ILD coding. Our comparisons of physiological detail, computational efficiency, predictive performances, and further expandability of the models demonstrate (1) that the simplistic, functional LSO models are suitable for applications where low computational costs and mathematical transparency are needed, (2) that more complex models with detailed membrane potential dynamics are necessary for simulation studies where sub-neuronal nonlinear processes play important roles, and (3) that, for general purposes, intermediate models might be a reasonable compromise between simplicity and biological plausibility. PMID:29281618
Rossi, Tullio; Nagelkerken, Ivan; Pistevos, Jennifer C A; Connell, Sean D
2016-01-01
The dispersal of larvae and their settlement to suitable habitat is fundamental to the replenishment of marine populations and the communities in which they live. Sound plays an important role in this process because for larvae of various species, it acts as an orientational cue towards suitable settlement habitat. Because marine sounds are largely of biological origin, they not only carry information about the location of potential habitat, but also information about the quality of habitat. While ocean acidification is known to affect a wide range of marine organisms and processes, its effect on marine soundscapes and its reception by navigating oceanic larvae remains unknown. Here, we show that ocean acidification causes a switch in role of present-day soundscapes from attractor to repellent in the auditory preferences in a temperate larval fish. Using natural CO2 vents as analogues of future ocean conditions, we further reveal that ocean acidification can impact marine soundscapes by profoundly diminishing their biological sound production. An altered soundscape poorer in biological cues indirectly penalizes oceanic larvae at settlement stage because both control and CO2-treated fish larvae showed lack of any response to such future soundscapes. These indirect and direct effects of ocean acidification put at risk the complex processes of larval dispersal and settlement. © 2016 The Author(s).
The Perception of Concurrent Sound Objects in Harmonic Complexes Impairs Gap Detection
ERIC Educational Resources Information Center
Leung, Ada W. S.; Jolicoeur, Pierre; Vachon, Francois; Alain, Claude
2011-01-01
Since the introduction of the concept of auditory scene analysis, there has been a paucity of work focusing on the theoretical explanation of how attention is allocated within a complex auditory scene. Here we examined signal detection in situations that promote either the fusion of tonal elements into a single sound object or the segregation of a…
EEG signatures accompanying auditory figure-ground segregation.
Tóth, Brigitta; Kocsis, Zsuzsanna; Háden, Gábor P; Szerafin, Ágnes; Shinn-Cunningham, Barbara G; Winkler, István
2016-11-01
In everyday acoustic scenes, figure-ground segregation typically requires one to group together sound elements over both time and frequency. Electroencephalogram was recorded while listeners detected repeating tonal complexes composed of a random set of pure tones within stimuli consisting of randomly varying tonal elements. The repeating pattern was perceived as a figure over the randomly changing background. It was found that detection performance improved both as the number of pure tones making up each repeated complex (figure coherence) increased, and as the number of repeated complexes (duration) increased - i.e., detection was easier when either the spectral or temporal structure of the figure was enhanced. Figure detection was accompanied by the elicitation of the object related negativity (ORN) and the P400 event-related potentials (ERPs), which have been previously shown to be evoked by the presence of two concurrent sounds. Both ERP components had generators within and outside of auditory cortex. The amplitudes of the ORN and the P400 increased with both figure coherence and figure duration. However, only the P400 amplitude correlated with detection performance. These results suggest that 1) the ORN and P400 reflect processes involved in detecting the emergence of a new auditory object in the presence of other concurrent auditory objects; 2) the ORN corresponds to the likelihood of the presence of two or more concurrent sound objects, whereas the P400 reflects the perceptual recognition of the presence of multiple auditory objects and/or preparation for reporting the detection of a target object. Copyright © 2016. Published by Elsevier Inc.
Neuromimetic Sound Representation for Percept Detection and Manipulation
NASA Astrophysics Data System (ADS)
Zotkin, Dmitry N.; Chi, Taishih; Shamma, Shihab A.; Duraiswami, Ramani
2005-12-01
The acoustic wave received at the ears is processed by the human auditory system to separate different sounds along the intensity, pitch, and timbre dimensions. Conventional Fourier-based signal processing, while endowed with fast algorithms, is unable to easily represent a signal along these attributes. In this paper, we discuss the creation of maximally separable sounds in auditory user interfaces and use a recently proposed cortical sound representation, which performs a biomimetic decomposition of an acoustic signal, to represent and manipulate sound for this purpose. We briefly overview algorithms for obtaining, manipulating, and inverting a cortical representation of a sound and describe algorithms for manipulating signal pitch and timbre separately. The algorithms are also used to create the sound of an instrument between a "guitar" and a "trumpet." Excellent sound quality can be achieved if processing time is not a concern, and intelligible signals can be reconstructed in reasonable processing time (about ten seconds of computational time for a one-second signal). Work on bringing the algorithms into the real-time processing domain is ongoing.
"Hiss, clicks and pops" - The enigmatic sounds of meteors
NASA Astrophysics Data System (ADS)
Finnegan, J. A.
2015-04-01
The improbability of sounds heard simultaneously with meteors allows the phenomenon to remain on the margins of scientific interest and research. This is unjustified, since these audibly perceived electric field effects indicate complex, inconsistent and still unresolved electric-magnetic coupling and charge dynamics interacting between the meteor, the ionosphere and mesosphere, the stratosphere, the troposphere, and the surface of the Earth. This paper reviews meteor acoustic effects, presents illustrative reports and hypotheses, and includes a summary of similar and additional phenomena observed during the 2013 February 15 asteroid fragment disintegration above the Russian district of Chelyabinsk. An augmenting theory, involving near-ground, non-uniform electric field production of ozone as a stimulated geophysical phenomenon, is suggested in section 2.2 to explain some hissing 'meteor sounds'. Unlike previous theories, electric-magnetic field fluctuation rates are not required to occur in the audio frequency range for this process to acoustically emit hissing and intermittent impulsive sounds, removing the requirements of direct conversion, passive human transduction, or excited, localised acoustic 'emitters'. Links to the Armagh Observatory All-sky meteor cameras, electrophonic meteor research and full construction plans for an extremely low frequency (ELF) receiver are also included.
Low-power wearable respiratory sound sensing.
Oletic, Dinko; Arsenali, Bruno; Bilas, Vedran
2014-04-09
Building upon the findings from the field of automated recognition of respiratory sound patterns, we propose a wearable wireless sensor implementing on-board respiratory sound acquisition and classification, to enable continuous monitoring of symptoms, such as asthmatic wheezing. Low-power consumption of such a sensor is required in order to achieve long autonomy. Considering that the power consumption of its radio is kept minimal if transmitting only upon (rare) occurrences of wheezing, we focus on optimizing the power consumption of the digital signal processor (DSP). Based on a comprehensive review of asthmatic wheeze detection algorithms, we analyze the computational complexity of common features drawn from short-time Fourier transform (STFT) and decision tree classification. Four algorithms were implemented on a low-power TMS320C5505 DSP. Their classification accuracies were evaluated on a dataset of prerecorded respiratory sounds in two operating scenarios of different detection fidelities. The execution times of all algorithms were measured. The best classification accuracy of over 92%, while occupying only 2.6% of the DSP's processing time, is obtained for the algorithm featuring the time-frequency tracking of shapes of crests originating from wheezing, with spectral features modeled using energy.
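As a rough illustration of the general pipeline described above (short-time spectral features feeding a decision tree classifier), the sketch below extracts per-frame band energies with an STFT and fits a tree on toy synthetic "wheeze" versus "normal" segments. It is not the authors' optimized DSP implementation; the sampling rate, band edges, frame length, and toy signals are assumptions.

```python
# Minimal sketch: STFT band-energy features + decision tree, on toy data only.
import numpy as np
from scipy.signal import stft
from sklearn.tree import DecisionTreeClassifier

def band_features(x, fs=4000, nperseg=256):
    """Per-frame log energy in a few frequency bands (crude wheeze-relevant features)."""
    f, _, Z = stft(x, fs=fs, nperseg=nperseg)
    power = np.abs(Z) ** 2
    bands = [(100, 400), (400, 800), (800, 1600)]          # assumed band edges (Hz)
    feats = [power[(f >= lo) & (f < hi)].sum(axis=0) for lo, hi in bands]
    return np.log10(np.vstack(feats).T + 1e-12)             # shape: (frames, n_bands)

# Toy training data: tonal "wheeze" vs. noise-like "normal" breathing segment.
rng = np.random.default_rng(0)
fs, dur = 4000, 1.0
t = np.arange(int(fs * dur)) / fs
wheeze = np.sin(2 * np.pi * 600 * t) + 0.3 * rng.standard_normal(t.size)
normal = rng.standard_normal(t.size)

Xw, Xn = band_features(wheeze, fs), band_features(normal, fs)
X = np.vstack([Xw, Xn])
y = np.r_[np.ones(len(Xw), dtype=int), np.zeros(len(Xn), dtype=int)]

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print("frame-level accuracy on toy data:", clf.score(X, y))
```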
Neural correlates of successful semantic processing during propofol sedation.
Adapa, Ram M; Davis, Matthew H; Stamatakis, Emmanuel A; Absalom, Anthony R; Menon, David K
2014-07-01
Sedation has a graded effect on brain responses to auditory stimuli: perceptual processing persists at sedation levels that attenuate more complex processing. We used fMRI in healthy volunteers sedated with propofol to assess changes in neural responses to spoken stimuli. Volunteers were scanned awake, sedated, and during recovery, while making perceptual or semantic decisions about nonspeech sounds or spoken words respectively. Sedation caused increased error rates and response times, and differentially affected responses to words in the left inferior frontal gyrus (LIFG) and the left inferior temporal gyrus (LITG). Activity in LIFG regions putatively associated with semantic processing, was significantly reduced by sedation despite sedated volunteers continuing to make accurate semantic decisions. Instead, LITG activity was preserved for words greater than nonspeech sounds and may therefore be associated with persistent semantic processing during the deepest levels of sedation. These results suggest functionally distinct contributions of frontal and temporal regions to semantic decision making. These results have implications for functional imaging studies of language, for understanding mechanisms of impaired speech comprehension in postoperative patients with residual levels of anesthetic, and may contribute to the development of frameworks against which EEG based monitors could be calibrated to detect awareness under anesthesia. Copyright © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
West, Eva; Wallin, Anita
2013-04-01
Learning abstract concepts such as sound often involves an ontological shift because to conceptualize sound transmission as a process of motion demands abandoning sound transmission as a transfer of matter. Thus, for students to be able to grasp and use a generalized model of sound transmission poses great challenges for them. This study involved 199 students aged 10-14. Their views about sound transmission were investigated before and after teaching by comparing their written answers about sound transfer in different media. The teaching was built on a research-based teaching-learning sequence (TLS), which was developed within a framework of design research. The analysis involved interpreting students' underlying theories of sound transmission, including the different conceptual categories that were found in their answers. The results indicated a shift in students' understandings from the use of a theory of matter before the intervention to embracing a theory of process afterwards. The described pattern was found in all groups of students irrespective of age. Thus, teaching about sound and sound transmission is fruitful already at the ages of 10-11. However, the older the students, the more advanced is their understanding of the process of motion. In conclusion, the use of a TLS about sound, hearing and auditory health promotes students' conceptualization of sound transmission as a process in all grades. The results also imply some crucial points in teaching and learning about the scientific content of sound.
When the Sound Becomes the Goal. 4E Cognition and Teleomusicality in Early Infancy
Schiavio, Andrea; van der Schyff, Dylan; Kruse-Weber, Silke; Timmers, Renee
2017-01-01
In this paper we explore early musical behaviors through the lenses of the recently emerged “4E” approach to mind, which sees cognitive processes as Embodied, Embedded, Enacted, and Extended. In doing so, we draw from a range of interdisciplinary research, engaging in critical and constructive discussions with both new findings and existing positions. In particular, we refer to observational research by French pedagogue and psychologist François Delalande, who examined infants' first “sound discoveries” and individuated three different musical “conducts” inspired by the “phases of the game” originally postulated by Piaget. Elaborating on such ideas we introduce the notion of “teleomusicality,” which describes the goal-directed behaviors infants adopt to explore and play with sounds. This is distinguished from the developmentally earlier “protomusicality,” which is based on music-like utterances, movements, and emotionally relevant interactions (e.g., with primary caregivers) that do not entail a primary focus on sound itself. The development from protomusicality to teleomusicality is discussed in terms of an “attentive shift” that occurs between 6 and 10 months of age. This forms the basis of a conceptual framework for early musical development that emphasizes the emergence of exploratory, goal-directed (i.e., sound-oriented), and self-organized musical actions in infancy. In line with this, we provide a preliminary taxonomy of teleomusical processes discussing “Original Teleomusical Acts” (OTAs) and “Constituted Teleomusical Acts” (CTAs). We argue that while OTAs can be easily witnessed in infants' exploratory behaviors, CTAs involve the mastery of more specific and complex goal-directed chains of actions central to musical activity. PMID:28993745
Kaganovich, Natalya; Kim, Jihyun; Herring, Caryn; Schumaker, Jennifer; Macpherson, Megan; Weber-Fox, Christine
2013-04-01
Using electrophysiology, we have examined two questions in relation to musical training - namely, whether it enhances sensory encoding of the human voice and whether it improves the ability to ignore irrelevant auditory change. Participants performed an auditory distraction task, in which they identified each sound as either short (350 ms) or long (550 ms) and ignored a change in timbre of the sounds. Sounds consisted of a male and a female voice saying a neutral sound [a], and of a cello and a French Horn playing an F3 note. In some blocks, musical sounds occurred on 80% of trials, while voice sounds on 20% of trials. In other blocks, the reverse was true. Participants heard naturally recorded sounds in half of experimental blocks and their spectrally-rotated versions in the other half. Regarding voice perception, we found that musicians had a larger N1 event-related potential component not only to vocal sounds but also to their never before heard spectrally-rotated versions. We therefore conclude that musical training is associated with a general improvement in the early neural encoding of complex sounds. Regarding the ability to ignore irrelevant auditory change, musicians' accuracy tended to suffer less from the change in timbre of the sounds, especially when deviants were musical notes. This behavioral finding was accompanied by a marginally larger re-orienting negativity in musicians, suggesting that their advantage may lie in a more efficient disengagement of attention from the distracting auditory dimension. © 2013 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Investigation of pulmonary acoustic simulation: comparing airway model generation techniques
NASA Astrophysics Data System (ADS)
Henry, Brian; Dai, Zoujun; Peng, Ying; Mansy, Hansen A.; Sandler, Richard H.; Royston, Thomas
2014-03-01
Alterations in the structure and function of the pulmonary system that occur in disease or injury often give rise to measurable spectral, spatial and/or temporal changes in lung sound production and transmission. These changes, if properly quantified, might provide additional information about the etiology, severity and location of trauma, injury, or pathology. With this in mind, the authors are developing a comprehensive computer simulation model of pulmonary acoustics, known as The Audible Human Project™. Its purpose is to improve our understanding of pulmonary acoustics and to aid in interpreting measurements of sound and vibration in the lungs generated by airway insonification, natural breath sounds, and external stimuli on the chest surface, such as that used in elastography. As a part of this development process, finite element (FE) models were constructed of an excised pig lung that also underwent experimental studies. Within these models, the complex airway structure was created via two methods: x-ray CT image segmentation and through an algorithmic means called Constrained Constructive Optimization (CCO). CCO was implemented to expedite the segmentation process, as airway segments can be grown digitally. These two approaches were used in FE simulations of the surface motion on the lung as a result of sound input into the trachea. Simulation results were compared to experimental measurements. By testing how close these models are to experimental measurements, we are evaluating whether CCO can be used as a means to efficiently construct physiologically relevant airway trees.
Acoustic interference and recognition space within a complex assemblage of dendrobatid frogs
Amézquita, Adolfo; Flechas, Sandra Victoria; Lima, Albertina Pimentel; Gasser, Herbert; Hödl, Walter
2011-01-01
In species-rich assemblages of acoustically communicating animals, heterospecific sounds may constrain not only the evolution of signal traits but also the much less-studied signal-processing mechanisms that define the recognition space of a signal. To test the hypothesis that the recognition space is optimally designed, i.e., that it is narrower toward the species that represent the higher potential for acoustic interference, we studied an acoustic assemblage of 10 diurnally active frog species. We characterized their calls, estimated pairwise correlations in calling activity, and, to model the recognition spaces of five species, conducted playback experiments with 577 synthetic signals on 531 males. Acoustic co-occurrence was not related to multivariate distance in call parameters, suggesting a minor role for spectral or temporal segregation among species uttering similar calls. In most cases, the recognition space overlapped but was greater than the signal space, indicating that signal-processing traits do not act as strictly matched filters against sounds other than homospecific calls. Indeed, the range of the recognition space was strongly predicted by the acoustic distance to neighboring species in the signal space. Thus, our data provide compelling evidence of a role of heterospecific calls in evolutionarily shaping the frogs' recognition space within a complex acoustic assemblage without obvious concomitant effects on the signal. PMID:21969562
Schoppe, Oliver; King, Andrew J.; Schnupp, Jan W.H.; Harper, Nicol S.
2016-01-01
Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear–nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. This improves their predictive power across a wide range of stimuli but keeps model complexity low as it introduces no new free parameters. Incorporating the adaptive coding properties of neurons will likely improve receptive field models in other sensory modalities too. PMID:26758822
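A minimal sketch of the adaptation stage described above: each frequency channel of a spectrogram is high-pass filtered by subtracting a leaky running mean with a channel-specific time constant and then half-wave rectified, before any linear-nonlinear stage would be applied. The time constants, frame rate, and toy input are assumptions, not the published model's fitted values.

```python
# Sketch of frequency-dependent high-pass adaptation + half-wave rectification.
import numpy as np

def ic_adaptation(spec, frame_rate=100.0, tau_s=None):
    """spec: (n_freq, n_time) spectrogram; returns adapted, rectified output."""
    n_freq, n_time = spec.shape
    if tau_s is None:
        # Illustrative choice: slower adaptation in low-frequency channels.
        tau_s = np.linspace(0.4, 0.1, n_freq)
    alpha = np.exp(-1.0 / (tau_s * frame_rate))      # per-channel leak per frame
    mean_est = spec[:, 0].copy()
    out = np.zeros_like(spec)
    for t in range(n_time):
        mean_est = alpha * mean_est + (1.0 - alpha) * spec[:, t]
        out[:, t] = np.maximum(spec[:, t] - mean_est, 0.0)   # high-pass + rectify
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    toy_spec = rng.standard_normal((32, 500)).cumsum(axis=1) * 0.01 + 1.0
    print(ic_adaptation(toy_spec).shape)
```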
Listen up! Processing of intensity change differs for vocal and nonvocal sounds.
Schirmer, Annett; Simpson, Elizabeth; Escoffier, Nicolas
2007-10-24
Changes in the intensity of both vocal and nonvocal sounds can be emotionally relevant. However, as only vocal sounds directly reflect communicative intent, intensity change of vocal but not nonvocal sounds is socially relevant. Here we investigated whether a change in sound intensity is processed differently depending on its social relevance. To this end, participants listened passively to a sequence of vocal or nonvocal sounds that contained rare deviants which differed from standards in sound intensity. Concurrently recorded event-related potentials (ERPs) revealed a mismatch negativity (MMN) and P300 effect for intensity change. Direction of intensity change was of little importance for vocal stimulus sequences, which recruited enhanced sensory and attentional resources for both loud and soft deviants. In contrast, intensity change in nonvocal sequences recruited more sensory and attentional resources for loud as compared to soft deviants. This was reflected in markedly larger MMN/P300 amplitudes and shorter P300 latencies for the loud as compared to soft nonvocal deviants. Furthermore, while the processing pattern observed for nonvocal sounds was largely comparable between men and women, sex differences for vocal sounds suggest that women were more sensitive to their social relevance. These findings extend previous evidence of sex differences in vocal processing and add to reports of voice specific processing mechanisms by demonstrating that simple acoustic change recruits more processing resources if it is socially relevant.
Seafloor environments in the Long Island Sound estuarine system
Knebel, H.J.; Signell, R.P.; Rendigs, R. R.; Poppe, L.J.; List, J.H.
1999-01-01
Four categories of modern seafloor sedimentary environments have been identified and mapped across the large, glaciated, topographically complex Long Island Sound estuary by means of an extensive regional set of sidescan sonographs, bottom samples, and video-camera observations and supplemental marine-geologic and modeled physical-oceanographic data. (1) Environments of erosion or nondeposition contain sediments which range from boulder fields to gravelly coarse-to-medium sands and appear on the sonographs either as patterns with isolated reflections (caused by outcrops of glacial drift and bedrock) or as patterns of strong backscatter (caused by coarse lag deposits). Areas of erosion or nondeposition were found across the rugged seafloor at the eastern entrance of the Sound and atop bathymetric highs and within constricted depressions in other parts of the basin. (2) Environments of bedload transport contain mostly coarse-to-fine sand with only small amounts of mud and are depicted by sonograph patterns of sand ribbons and sand waves. Areas of bedload transport were found primarily in the eastern Sound where bottom currents have sculptured the surface of a Holocene marine delta and are moving these sediments toward the WSW into the estuary. (3) Environments of sediment sorting and reworking comprise variable amounts of fine sand and mud and are characterized either by patterns of moderate backscatter or by patterns with patches of moderate-to-weak backscatter that reflect a combination of erosion and deposition. Areas of sediment sorting and reworking were found around the periphery of the zone of bedload transport in the eastern Sound and along the southern nearshore margin. They also are located atop low knolls, on the flanks of shoal complexes, and within segments of the axial depression in the western Sound. (4) Environments of deposition are blanketed by muds and muddy fine sands that produce patterns of uniformly weak backscatter. Depositional areas occupy broad areas of the basin floor in the western part of the Sound. The regional distribution of seafloor environments reflects fundamental differences in marine-geologic conditions between the eastern and western parts of the Sound. In the funnel-shaped eastern part, a gradient of strong tidal currents coupled with the net nontidal (estuarine) bottom drift produce a westward progression of environments ranging from erosion or nondeposition at the narrow entrance to the Sound, through an extensive area of bedload transport, to a peripheral zone of sediment sorting. In the generally broader western part of the Sound, a weak tidal-current regime combined with the production of particle aggregates by biologic or chemical processes, cause large areas of deposition that are locally interrupted by a patchy distribution of various other environments where the bottom currents are enhanced by and interact with the seafloor topography.
Keeping Timbre in Mind: Working Memory for Complex Sounds that Can't Be Verbalized
ERIC Educational Resources Information Center
Golubock, Jason L.; Janata, Petr
2013-01-01
Properties of auditory working memory for sounds that lack strong semantic associations and are not readily verbalized or sung are poorly understood. We investigated auditory working memory capacity for lists containing 2-6 easily discriminable abstract sounds synthesized within a constrained timbral space, at delays of 1-6 s (Experiment 1), and…
Yamato, Maya; Ketten, Darlene R; Arruda, Julie; Cramer, Scott; Moore, Kathleen
2012-01-01
Cetaceans possess highly derived auditory systems adapted for underwater hearing. Odontoceti (toothed whales) are thought to receive sound through specialized fat bodies that contact the tympanoperiotic complex, the bones housing the middle and inner ears. However, sound reception pathways remain unknown in Mysticeti (baleen whales), which have very different cranial anatomies compared to odontocetes. Here, we report a potential fatty sound reception pathway in the minke whale (Balaenoptera acutorostrata), a mysticete of the balaenopterid family. The cephalic anatomy of seven minke whales was investigated using computerized tomography and magnetic resonance imaging, verified through dissections. Findings include a large, well-formed fat body lateral, dorsal, and posterior to the mandibular ramus and lateral to the tympanoperiotic complex. This fat body inserts into the tympanoperiotic complex at the lateral aperture between the tympanic and periotic bones and is in contact with the ossicles. There is also a second, smaller body of fat found within the tympanic bone, which contacts the ossicles as well. This is the first analysis of these fatty tissues' association with the auditory structures in a mysticete, providing anatomical evidence that fatty sound reception pathways may not be a unique feature of odontocete cetaceans. Anat Rec, 2012. © 2012 Wiley Periodicals, Inc. PMID:22488847
Linguistics: evolution and language change.
Bowern, Claire
2015-01-05
Linguists have long identified sound changes that occur in parallel. Novel research now shows how Bayesian modeling can capture complex concerted changes, revealing how the evolution of sounds proceeds. Copyright © 2015 Elsevier Ltd. All rights reserved.
Long Island Sound Tropospheric Ozone Study (LISTOS) Fact Sheet
EPA scientists are collaborating on a multi-agency field study to investigate the complex interaction of emissions, chemistry and meteorological factors contributing to elevated ozone levels along the Long Island Sound shoreline.
Lundbeck, Micha; Grimm, Giso; Hohmann, Volker; Laugesen, Søren; Neher, Tobias
2017-01-01
In contrast to static sounds, spatially dynamic sounds have received little attention in psychoacoustic research so far. This holds true especially for acoustically complex (reverberant, multisource) conditions and impaired hearing. The current study therefore investigated the influence of reverberation and the number of concurrent sound sources on source movement detection in young normal-hearing (YNH) and elderly hearing-impaired (EHI) listeners. A listening environment based on natural environmental sounds was simulated using virtual acoustics and rendered over headphones. Both near-far (‘radial’) and left-right (‘angular’) movements of a frontal target source were considered. The acoustic complexity was varied by adding static lateral distractor sound sources as well as reverberation. Acoustic analyses confirmed the expected changes in stimulus features that are thought to underlie radial and angular source movements under anechoic conditions and suggested a special role of monaural spectral changes under reverberant conditions. Analyses of the detection thresholds showed that, with the exception of the single-source scenarios, the EHI group was less sensitive to source movements than the YNH group, despite adequate stimulus audibility. Adding static sound sources clearly impaired the detectability of angular source movements for the EHI (but not the YNH) group. Reverberation, on the other hand, clearly impaired radial source movement detection for the EHI (but not the YNH) listeners. These results illustrate the feasibility of studying factors related to auditory movement perception with the help of the developed test setup. PMID:28675088
Chrostowski, Michael; Salvi, Richard J.; Allman, Brian L.
2012-01-01
A high dose of sodium salicylate temporarily induces tinnitus, mild hearing loss, and possibly hyperacusis in humans and other animals. Salicylate has well-established effects on cochlear function, primarily resulting in the moderate reduction of auditory input to the brain. Despite decreased peripheral sensitivity and output, salicylate induces a paradoxical enhancement of the sound-evoked field potential at the level of the primary auditory cortex (A1). Previous electrophysiologic studies have begun to characterize changes in thalamorecipient layers of A1; however, A1 is a complex neural circuit with recurrent intracortical connections. To describe the effects of acute systemic salicylate treatment on both thalamic and intracortical sound-driven activity across layers of A1, we applied current-source density (CSD) analysis to field potentials sampled across cortical layers in the anesthetized rat. CSD maps were normally characterized by a large, short-latency, monosynaptic, thalamically driven sink in granular layers followed by a lower amplitude, longer latency, polysynaptic, intracortically driven sink in supragranular layers. Following systemic administration of salicylate, there was a near doubling of both granular and supragranular sink amplitudes at higher sound levels. The supragranular sink amplitude input/output function changed from becoming asymptotic at approximately 50 dB to sharply nonasymptotic, often dominating the granular sink amplitude at higher sound levels. The supragranular sink also exhibited a significant decrease in peak latency, reflecting an acceleration of intracortical processing of the sound-evoked response. Additionally, multiunit (MU) activity was altered by salicylate; the normally onset/sustained MU response type was transformed into a primarily onset response type in granular and infragranular layers. The results from CSD analysis indicate that salicylate significantly enhances sound-driven response via intracortical circuits. PMID:22496535
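For readers unfamiliar with the technique, a minimal sketch of standard current-source density estimation is given below: the CSD is approximated by the negative second spatial derivative of the laminar field potentials across recording depths. The electrode spacing, conductivity value, and toy data are illustrative assumptions, not parameters of this study.

```python
# Minimal CSD sketch: negative second spatial difference of laminar field potentials.
import numpy as np

def csd(lfp, spacing_mm=0.1, sigma=0.3):
    """lfp: (n_channels, n_time) field potentials ordered by depth.
    Returns CSD (n_channels-2, n_time) via the second spatial difference."""
    d2_phi = lfp[:-2, :] - 2.0 * lfp[1:-1, :] + lfp[2:, :]
    return -sigma * d2_phi / (spacing_mm ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    toy_lfp = rng.standard_normal((16, 1000)).cumsum(axis=1) * 1e-3
    print(csd(toy_lfp).shape)   # (14, 1000)
```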
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khangaonkar, Tarang; Yang, Zhaoqing; Kim, Tae Yun
2011-07-20
Through extensive field data collection and analysis efforts conducted since the 1950s, researchers have established an understanding of the characteristic features of circulation in Puget Sound. The pattern ranges from the classic fjordal behavior in some basins, with shallow brackish outflow and compensating inflow immediately below, to the typical two-layer flow observed in many partially mixed estuaries with saline inflow at depth. An attempt at reproducing this behavior by fitting an analytical formulation to past data is presented, followed by the application of a three-dimensional circulation and transport numerical model. The analytical treatment helped identify key physical processes and parameters, but quickly reconfirmed that response is complex and would require site-specific parameterization to include effects of sills and interconnected basins. The numerical model of Puget Sound, developed using unstructured-grid finite volume method, allowed resolution of the sub-basin geometric features, including presence of major islands, and site-specific strong advective vertical mixing created by bathymetry and multiple sills. The model was calibrated using available recent short-term oceanographic time series data sets from different parts of the Puget Sound basin. The results are compared against (1) recent velocity and salinity data collected in Puget Sound from 2006 and (2) a composite data set from previously analyzed historical records, mostly from the 1970s. The results highlight the ability of the model to reproduce velocity and salinity profile characteristics, their variations among Puget Sound subbasins, and tidally averaged circulation. Sensitivity of residual circulation to variations in freshwater inflow and resulting salinity gradient in fjordal sub-basins of Puget Sound is examined.
Bubbles in an acoustic field: an overview.
Ashokkumar, Muthupandian; Lee, Judy; Kentish, Sandra; Grieser, Franz
2007-04-01
Acoustic cavitation is the fundamental process responsible for the initiation of most of the sonochemical reactions in liquids. Acoustic cavitation originates from the interaction between sound waves and bubbles. In an acoustic field, bubbles can undergo growth by rectified diffusion, bubble-bubble coalescence, bubble dissolution or bubble collapse leading to the generation of primary radicals and other secondary chemical reactions. Surface active solutes have been used in association with a number of experimental techniques in order to isolate and understand these activities. A strobe technique has been used for monitoring the growth of a single bubble by rectified diffusion. Multibubble sonoluminescence has been used for monitoring the growth of the bubbles as well as coalescence between bubbles. The extent of bubble coalescence has also been monitored using a newly developed capillary technique. An overview of the various experimental results has been presented in order to highlight the complexities involved in acoustic cavitation processes, which on the other hand arise from a simple, mechanical interaction between sound waves and bubbles.
NASA Astrophysics Data System (ADS)
Afrillia, Yesy; Mawengkang, Herman; Ramli, Marwan; Fadlisyah; Putra Fhonna, Rizky
2017-12-01
Most prior research has applied signal and speech processing to recognize makhraj patterns and tajwid reading in the Al-Quran using the mel-frequency cepstral coefficient (MFCC). However, to our knowledge no research has yet used MFCC to recognize the chanting of Al-Quran verses, also known as nagham Al-Quran. The nagham Al-Quran pattern is considerably more complex than the makhraj and tajwid patterns: its sound wave shows more variation, which implies a higher noise level, and its durations are longer. The test data in this research were obtained by real-time recording. System performance in recognizing the nagham Al-Quran pattern was evaluated with a true/false detection parameter, reaching an accuracy of 80%. Improving this accuracy would require modifying the MFCC or supplying the learning process with more, and more varied, data.
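The abstract does not give implementation details, but MFCC features of the kind it describes are commonly extracted with an off-the-shelf library. A hedged sketch follows; the file name, sampling rate, and frame parameters are placeholders rather than the study's settings.

```python
import numpy as np
import librosa

# Load a recitation recording (path and sampling rate are placeholders).
y, sr = librosa.load("recitation.wav", sr=16000)

# 13 mel-frequency cepstral coefficients per frame; frame and hop sizes are
# illustrative choices, not the values used in the study.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_fft=512, hop_length=160)

# A simple fixed-length summary (per-coefficient mean and standard deviation)
# that could feed a conventional classifier for true/false pattern detection.
features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
print(features.shape)  # (26,)
```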
Auditory memory can be object based.
Dyson, Benjamin J; Ishfaq, Feraz
2008-04-01
Identifying how memories are organized remains a fundamental issue in psychology. Previous work has shown that visual short-term memory is organized according to the object of origin, with participants being better at retrieving multiple pieces of information from the same object than from different objects. However, it is not yet clear whether similar memory structures are employed for other modalities, such as audition. Under analogous conditions in the auditory domain, we found that short-term memories for sound can also be organized according to object, with a same-object advantage being demonstrated for the retrieval of information in an auditory scene defined by two complex sounds overlapping in both space and time. Our results provide support for the notion of an auditory object, in addition to the continued identification of similar processing constraints across visual and auditory domains. The identification of modality-independent organizational principles of memory, such as object-based coding, suggests possible mechanisms by which the human processing system remembers multimodal experiences.
Responses of auditory-cortex neurons to structural features of natural sounds.
Nelken, I; Rotman, Y; Bar Yosef, O
1999-01-14
Sound-processing strategies that use the highly non-random structure of natural sounds may confer evolutionary advantage to many species. Auditory processing of natural sounds has been studied almost exclusively in the context of species-specific vocalizations, although these form only a small part of the acoustic biotope. To study the relationships between properties of natural soundscapes and neuronal processing mechanisms in the auditory system, we analysed sound from a range of different environments. Here we show that for many non-animal sounds and background mixtures of animal sounds, energy in different frequency bands is coherently modulated. Co-modulation of different frequency bands in background noise facilitates the detection of tones in noise by humans, a phenomenon known as co-modulation masking release (CMR). We show that co-modulation also improves the ability of auditory-cortex neurons to detect tones in noise, and we propose that this property of auditory neurons may underlie behavioural CMR. This correspondence may represent an adaptation of the auditory system for the use of an attribute of natural sounds to facilitate real-world processing tasks.
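Co-modulation of the sort described can be illustrated by imposing one slow envelope on several independent narrowband noise carriers. The sketch below builds such a comodulated masker; the band centres, bandwidth, and modulation rate are arbitrary illustrative choices, not the stimuli of the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs, dur = 44100, 1.0
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)

def narrowband_noise(fc, bw):
    """Band-limited Gaussian noise centred on fc with bandwidth bw (Hz)."""
    sos = butter(4, [fc - bw / 2, fc + bw / 2], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, rng.standard_normal(t.size))

# A single low-rate envelope shared by all bands makes them co-modulated.
envelope = 1.0 + 0.9 * np.sin(2 * np.pi * 10 * t)                  # 10-Hz modulation
bands = [narrowband_noise(fc, 100) for fc in (500, 1000, 2000, 4000)]
comodulated_masker = sum(envelope * b for b in bands)

# For an uncomodulated control, each band would get its own independent envelope,
# which is the comparison condition used in CMR experiments.
```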
Gerdes, Antje B. M.; Wieser, Matthias J.; Alpers, Georg W.
2014-01-01
In everyday life, multiple sensory channels jointly trigger emotional experiences and one channel may alter processing in another channel. For example, seeing an emotional facial expression and hearing the voice’s emotional tone will jointly create the emotional experience. This example, where auditory and visual input is related to social communication, has gained considerable attention by researchers. However, interactions of visual and auditory emotional information are not limited to social communication but can extend to much broader contexts including human, animal, and environmental cues. In this article, we review current research on audiovisual emotion processing beyond face-voice stimuli to develop a broader perspective on multimodal interactions in emotion processing. We argue that current concepts of multimodality should be extended in considering an ecologically valid variety of stimuli in audiovisual emotion processing. Therefore, we provide an overview of studies in which emotional sounds and interactions with complex pictures of scenes were investigated. In addition to behavioral studies, we focus on neuroimaging, electro- and peripher-physiological findings. Furthermore, we integrate these findings and identify similarities or differences. We conclude with suggestions for future research. PMID:25520679
Increased Activation in Superior Temporal Gyri as a Function of Increment in Phonetic Features
ERIC Educational Resources Information Center
Osnes, Berge; Hugdahl, Kenneth; Hjelmervik, Helene; Specht, Karsten
2011-01-01
A common assumption is that phonetic sounds initiate unique processing in the superior temporal gyri and sulci (STG/STS). The anatomical areas subserving these processes are also implicated in the processing of non-phonetic stimuli such as music instrument sounds. The differential processing of phonetic and non-phonetic sounds was investigated in…
Neighing, barking, and drumming horses-object related sounds help and hinder picture naming.
Mädebach, Andreas; Wöhner, Stefan; Kieseler, Marie-Luise; Jescheniak, Jörg D
2017-09-01
The study presented here investigated how environmental sounds influence picture naming. In a series of four experiments participants named pictures (e.g., the picture of a horse) while hearing task-irrelevant sounds (e.g., neighing, barking, or drumming). Experiments 1 and 2 established two findings, facilitation from congruent sounds (e.g., picture: horse, sound: neighing) and interference from semantically related sounds (e.g., sound: barking), both relative to unrelated sounds (e.g., sound: drumming). Experiment 3 replicated the effects in a situation in which participants were not familiarized with the sounds prior to the experiment. Experiment 4 replicated the congruency facilitation effect, but showed that semantic interference was not obtained with distractor sounds which were not associated with target pictures (i.e., were not part of the response set). The general pattern of facilitation from congruent sound distractors and interference from semantically related sound distractors resembles the pattern commonly observed with distractor words. This parallelism suggests that the underlying processes are not specific to either distractor words or distractor sounds but instead reflect general aspects of semantic-lexical selection in language production. The results indicate that language production theories need to include a competitive selection mechanism at either the lexical processing stage, or the prelexical processing stage, or both. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Seo, Jung Hee; Mittal, Rajat
2010-01-01
A new sharp-interface immersed boundary method-based approach for the computation of low-Mach number flow-induced sound around complex geometries is described. The underlying approach is based on a hydrodynamic/acoustic splitting technique where the incompressible flow is first computed using a second-order accurate immersed boundary solver. This is followed by the computation of sound using the linearized perturbed compressible equations (LPCE). The primary contribution of the current work is the development of a versatile, high-order accurate immersed boundary method for solving the LPCE in complex domains. This new method applies the boundary condition on the immersed boundary to high order by combining the ghost-cell approach with a weighted least-squares error method based on a high-order approximating polynomial. The method is validated for canonical acoustic wave scattering and flow-induced noise problems. Applications of this technique to relatively complex cases of practical interest are also presented. PMID:21318129
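The scheme described combines a ghost-cell treatment with a weighted least-squares polynomial fit. The sketch below shows only the generic building block of that idea, fitting a local polynomial to nearby fluid-cell values and evaluating it at a ghost node; it is not the authors' full high-order formulation, and the neighbour data and weights are made up.

```python
import numpy as np

def ghost_cell_value(neighbor_xy, neighbor_vals, ghost_xy, weights=None):
    """Fit p(x, y) = c0 + c1*x + c2*y + c3*x*y + c4*x^2 + c5*y^2 by (weighted)
    least squares to nearby fluid-cell values, then evaluate at the ghost node.
    A full implementation would also add boundary-condition rows (e.g. the
    acoustic condition on the immersed surface) to the same system."""
    def basis(x, y):
        return np.array([1.0, x, y, x * y, x * x, y * y])

    A = np.array([basis(x, y) for x, y in neighbor_xy])
    b = np.asarray(neighbor_vals, dtype=float)
    if weights is not None:
        w = np.sqrt(np.asarray(weights, dtype=float))
        A, b = A * w[:, None], b * w
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return basis(*ghost_xy) @ coeffs

# Made-up neighbour data; inverse-distance weights favour the closest cells.
pts = [(0.1, 0.0), (0.0, 0.1), (0.2, 0.1), (0.1, 0.2), (0.2, 0.2), (0.0, 0.2), (0.3, 0.0)]
vals = [np.sin(x + 2 * y) for x, y in pts]
w = [1.0 / (x ** 2 + y ** 2 + 1e-6) for x, y in pts]
print(ghost_cell_value(pts, vals, ghost_xy=(-0.05, 0.05), weights=w))
```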
Speech processing using maximum likelihood continuity mapping
Hogden, John E.
2000-01-01
Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.
Vieira, Manuel; Fonseca, Paulo J; Amorim, M Clara P; Teixeira, Carlos J C
2015-12-01
The study of acoustic communication in animals often requires not only the recognition of species specific acoustic signals but also the identification of individual subjects, all in a complex acoustic background. Moreover, when very long recordings are to be analyzed, automatic recognition and identification processes are invaluable tools to extract the relevant biological information. A pattern recognition methodology based on hidden Markov models is presented inspired by successful results obtained in the most widely known and complex acoustical communication signal: human speech. This methodology was applied here for the first time to the detection and recognition of fish acoustic signals, specifically in a stream of round-the-clock recordings of Lusitanian toadfish (Halobatrachus didactylus) in their natural estuarine habitat. The results show that this methodology is able not only to detect the mating sounds (boatwhistles) but also to identify individual male toadfish, reaching an identification rate of ca. 95%. Moreover this method also proved to be a powerful tool to assess signal durations in large data sets. However, the system failed in recognizing other sound types.
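The paper's recognizer is described only at a high level here; as a hedged sketch of the same general idea, a Gaussian hidden Markov model can be trained on MFCC frames and compared against a background model to flag candidate boatwhistles. File names, model sizes, and thresholds below are hypothetical placeholders.

```python
import numpy as np
import librosa
from hmmlearn import hmm

def mfcc_frames(path, sr=8000):
    """Return an (n_frames, 12) matrix of MFCC feature vectors for one clip."""
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=12).T

# Train one HMM per class on labelled clips (paths are placeholders).
bw_clips = [mfcc_frames(p) for p in ["boatwhistle_1.wav", "boatwhistle_2.wav"]]
boatwhistle_model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=50)
boatwhistle_model.fit(np.vstack(bw_clips), lengths=[len(c) for c in bw_clips])

background_model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
background_model.fit(mfcc_frames("background.wav"))

# Detection: label a segment by whichever model assigns the higher log-likelihood.
segment = mfcc_frames("unknown_segment.wav")
is_boatwhistle = boatwhistle_model.score(segment) > background_model.score(segment)
print(is_boatwhistle)
```

Individual identification could extend the same scheme with one model per recorded male, scored in the same way.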
CDP++.Italian: Modelling Sublexical and Supralexical Inconsistency in a Shallow Orthography
Perry, Conrad; Ziegler, Johannes C.; Zorzi, Marco
2014-01-01
Most models of reading aloud have been constructed to explain data in relatively complex orthographies like English and French. Here, we created an Italian version of the Connectionist Dual Process Model of Reading Aloud (CDP++) to examine the extent to which the model could predict data in a language which has relatively simple orthography-phonology relationships but is relatively complex at a suprasegmental (word stress) level. We show that the model exhibits good quantitative performance and accounts for key phenomena observed in naming studies, including some apparently contradictory findings. These effects include stress regularity and stress consistency, both of which have been especially important in studies of word recognition and reading aloud in Italian. Overall, the results of the model compare favourably to an alternative connectionist model that can learn non-linear spelling-to-sound mappings. This suggests that CDP++ is currently the leading computational model of reading aloud in Italian, and that its simple linear learning mechanism adequately captures the statistical regularities of the spelling-to-sound mapping both at the segmental and supra-segmental levels. PMID:24740261
Striem-Amit, Ella; Cohen, Laurent; Dehaene, Stanislas; Amedi, Amir
2012-11-08
Using a visual-to-auditory sensory-substitution algorithm, congenitally fully blind adults were taught to read and recognize complex images using "soundscapes"--sounds topographically representing images. fMRI was used to examine key questions regarding the visual word form area (VWFA): its selectivity for letters over other visual categories without visual experience, its feature tolerance for reading in a novel sensory modality, and its plasticity for scripts learned in adulthood. The blind activated the VWFA specifically and selectively during the processing of letter soundscapes relative to both textures and visually complex object categories and relative to mental imagery and semantic-content controls. Further, VWFA recruitment for reading soundscapes emerged after 2 hr of training in a blind adult on a novel script. Therefore, the VWFA shows category selectivity regardless of input sensory modality, visual experience, and long-term familiarity or expertise with the script. The VWFA may perform a flexible task-specific rather than sensory-specific computation, possibly linking letter shapes to phonology. Copyright © 2012 Elsevier Inc. All rights reserved.
Dykstra, Andrew R; Halgren, Eric; Gutschalk, Alexander; Eskandar, Emad N; Cash, Sydney S
2016-01-01
In complex acoustic environments, even salient supra-threshold sounds sometimes go unperceived, a phenomenon known as informational masking. The neural basis of informational masking (and its release) has not been well-characterized, particularly outside auditory cortex. We combined electrocorticography in a neurosurgical patient undergoing invasive epilepsy monitoring with trial-by-trial perceptual reports of isochronous target-tone streams embedded in random multi-tone maskers. Awareness of such masker-embedded target streams was associated with a focal negativity between 100 and 200 ms and high-gamma activity (HGA) between 50 and 250 ms (both in auditory cortex on the posterolateral superior temporal gyrus) as well as a broad P3b-like potential (between ~300 and 600 ms) with generators in ventrolateral frontal and lateral temporal cortex. Unperceived target tones elicited drastically reduced versions of such responses, if at all. While it remains unclear whether these responses reflect conscious perception, itself, as opposed to pre- or post-perceptual processing, the results suggest that conscious perception of target sounds in complex listening environments may engage diverse neural mechanisms in distributed brain areas.
Zorębski, Michał; Zorębski, Edward; Dzida, Marzena; Skowronek, Justyna; Jężak, Sylwia; Goodrich, Peter; Jacquemin, Johan
2016-04-14
Ultrasound absorption spectra of four 1-alkyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imides were determined as a function of the alkyl chain length on the cation from 1-propyl to 1-hexyl from 293.15 to 323.15 K at ambient pressure. Herein, the ultrasound absorption measurements were carried out using a standard pulse technique within a frequency range from 10 to 300 MHz. Additionally, the speed of sound, density, and viscosity have been measured. The presence of strong dissipative processes during the ultrasound wave propagation was found experimentally, i.e., relaxation processes in the megahertz range were observed for all compounds over the whole temperature range. The relaxation spectra (both relaxation amplitude and relaxation frequency) were shown to be dependent on the alkyl side chain length of the 1-alkyl-3-methylimidazolium ring. In most cases, a single-Debye model described the absorption spectra very well. However, a comparison of the determined spectra with the spectra of a few other imidazolium-based ionic liquids reported in the literature (in part recalculated in this work) shows that the complexity of the spectra increases rapidly with the elongation of the alkyl chain length on the cation. This complexity indicates that both the volume viscosity and the shear viscosity are involved in relaxation processes even in relatively low frequency ranges. As a consequence, the sound velocity dispersion is present at relatively low megahertz frequencies.
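A single-Debye relaxation expresses the absorption spectrum as alpha/f^2 = A / (1 + (f/f_r)^2) + B, with relaxation amplitude A, relaxation frequency f_r, and a frequency-independent background B. The curve-fitting sketch below uses synthetic data over the 10-300 MHz range; the parameter values are illustrative, not those measured for these ionic liquids.

```python
import numpy as np
from scipy.optimize import curve_fit

def single_debye(f, A, f_r, B):
    """alpha/f^2 as a function of frequency for one Debye relaxation process."""
    return A / (1.0 + (f / f_r) ** 2) + B

# Synthetic "measured" spectrum mimicking the pulse-technique frequency range.
f = np.linspace(10e6, 300e6, 40)
rng = np.random.default_rng(1)
data = single_debye(f, A=80e-15, f_r=60e6, B=25e-15) * (1 + 0.03 * rng.standard_normal(f.size))

popt, _ = curve_fit(single_debye, f, data, p0=[50e-15, 50e6, 20e-15])
print("A = %.2e, f_r = %.1f MHz, B = %.2e" % (popt[0], popt[1] / 1e6, popt[2]))
```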
Letter-sound processing deficits in children with developmental dyslexia: An ERP study.
Moll, Kristina; Hasko, Sandra; Groth, Katharina; Bartling, Jürgen; Schulte-Körne, Gerd
2016-04-01
The time course during letter-sound processing was investigated in children with developmental dyslexia (DD) and typically developing (TD) children using electroencephalography. Thirty-eight children with DD and 25 TD children participated in a visual-auditory oddball paradigm. Event-related potentials (ERPs) elicited by standard and deviant stimuli in an early (100-190 ms) and late (560-750 ms) time window were analysed. In the early time window, ERPs elicited by the deviant stimulus were delayed and less left lateralized over fronto-temporal electrodes for children with DD compared to TD children. In the late time window, children with DD showed higher amplitudes extending more over right frontal electrodes. Longer latencies in the early time window and stronger right hemispheric activation in the late time window were associated with slower reading and naming speed. Additionally, stronger right hemispheric activation in the late time window correlated with poorer phonological awareness skills. Deficits in early stages of letter-sound processing influence later more explicit cognitive processes during letter-sound processing. Identifying the neurophysiological correlates of letter-sound processing and their relation to reading related skills provides insight into the degree of automaticity during letter-sound processing beyond behavioural measures of letter-sound-knowledge. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Direct CFD Predictions of Low Frequency Sounds Generated by a Helicopter Main Rotor
2010-05-01
modeling and grid constraints. NOTATION: α = shaft tilt (corrected) or tip-path-plane angle; BPF = blade passing frequency; CT/σ = thrust coefficient to rotor ...; cyclic pitch angle, deg.; LFSPL = low frequency sound metric (1st-6th BPF), dB; MFSPL = mid frequency sound metric (>6th BPF), dB; OASPL = overall sound metric ... Tunnel of the National Full-Scale Aerodynamic Complex (NFAC) at NASA Ames Research Center in 2008 (Fig. 2a), as a guide for prediction validation. ...
Interior sound field control using generalized singular value decomposition in the frequency domain.
Pasco, Yann; Gauthier, Philippe-Aubert; Berry, Alain; Moreau, Stéphane
2017-01-01
The problem of controlling a sound field inside a region surrounded by acoustic control sources is considered. Inspired by the Kirchhoff-Helmholtz integral, the use of double-layer source arrays allows such a control and avoids the modification of the external sound field by the control sources by the approximation of the sources as monopole and radial dipole transducers. However, the practical implementation of the Kirchhoff-Helmholtz integral in physical space leads to large numbers of control sources and error sensors along with excessive controller complexity in three dimensions. The present study investigates the potential of the Generalized Singular Value Decomposition (GSVD) to reduce the controller complexity and separate the effect of control sources on the interior and exterior sound fields, respectively. A proper truncation of the singular basis provided by the GSVD factorization is shown to lead to effective cancellation of the interior sound field at frequencies below the spatial Nyquist frequency of the control sources array while leaving the exterior sound field almost unchanged. Proofs of concept are provided through simulations achieved for interior problems by simulations in a free field scenario with circular arrays and in a reflective environment with square arrays.
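SciPy does not expose a GSVD routine directly, so the sketch below illustrates the underlying idea rather than the authors' implementation: the shared right basis of the interior and exterior transfer matrices is obtained from the generalized Hermitian eigenproblem Gi^H Gi v = lambda Ge^H Ge v, and only the source combinations that couple most strongly to the interior are driven. All matrices, the ridge term, and the truncation order are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n_src, n_int, n_ext = 24, 40, 40

# Hypothetical transfer matrices: sources -> interior sensors (Gi), exterior sensors (Ge).
Gi = rng.standard_normal((n_int, n_src)) + 1j * rng.standard_normal((n_int, n_src))
Ge = rng.standard_normal((n_ext, n_src)) + 1j * rng.standard_normal((n_ext, n_src))

# Generalized eigenproblem on the Gram matrices: large eigenvalues mark source
# combinations with a high interior-to-exterior coupling ratio.
A = Gi.conj().T @ Gi
B = Ge.conj().T @ Ge + 1e-9 * np.eye(n_src)   # small ridge keeps B positive definite
evals, V = eigh(A, B)                         # eigenvalues returned in ascending order

k = 8                                         # truncation order (a design choice)
V_int = V[:, -k:]                             # keep the k most interior-dominant directions

# Cancel a primary interior field in this reduced basis only, leaving the
# exterior-dominant directions undriven.
p_int = rng.standard_normal(n_int) + 1j * rng.standard_normal(n_int)
q, *_ = np.linalg.lstsq(Gi @ V_int, -p_int, rcond=None)
source_strengths = V_int @ q
print(np.linalg.norm(p_int + Gi @ source_strengths) / np.linalg.norm(p_int))
```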
Mind the Gap: Two Dissociable Mechanisms of Temporal Processing in the Auditory System
Anderson, Lucy A.
2016-01-01
High temporal acuity of auditory processing underlies perception of speech and other rapidly varying sounds. A common measure of auditory temporal acuity in humans is the threshold for detection of brief gaps in noise. Gap-detection deficits, observed in developmental disorders, are considered evidence for “sluggish” auditory processing. Here we show, in a mouse model of gap-detection deficits, that auditory brain sensitivity to brief gaps in noise can be impaired even without a general loss of central auditory temporal acuity. Extracellular recordings in three different subdivisions of the auditory thalamus in anesthetized mice revealed a stimulus-specific, subdivision-specific deficit in thalamic sensitivity to brief gaps in noise in experimental animals relative to controls. Neural responses to brief gaps in noise were reduced, but responses to other rapidly changing stimuli unaffected, in lemniscal and nonlemniscal (but not polysensory) subdivisions of the medial geniculate body. Through experiments and modeling, we demonstrate that the observed deficits in thalamic sensitivity to brief gaps in noise arise from reduced neural population activity following noise offsets, but not onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive channels underlying auditory temporal processing, and suggest that gap-detection deficits can arise from specific impairment of the sound-offset-sensitive channel. SIGNIFICANCE STATEMENT The experimental and modeling results reported here suggest a new hypothesis regarding the mechanisms of temporal processing in the auditory system. Using a mouse model of auditory temporal processing deficits, we demonstrate the existence of specific abnormalities in auditory thalamic activity following sound offsets, but not sound onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive mechanisms underlying auditory processing of temporally varying sounds. Furthermore, the findings suggest that auditory temporal processing deficits, such as impairments in gap-in-noise detection, could arise from reduced brain sensitivity to sound offsets alone. PMID:26865621
Moore, Brian C J
2003-03-01
To review how the properties of sounds are "coded" in the normal auditory system and to discuss the extent to which cochlear implants can and do represent these codes. Data are taken from published studies of the response of the cochlea and auditory nerve to simple and complex stimuli, in both the normal and the electrically stimulated ear. REVIEW CONTENT: The review describes: 1) the coding in the normal auditory system of overall level (which partly determines perceived loudness), spectral shape (which partly determines perceived timbre and the identity of speech sounds), periodicity (which partly determines pitch), and sound location; 2) the role of the active mechanism in the cochlea, and particularly the fast-acting compression associated with that mechanism; 3) the neural response patterns evoked by cochlear implants; and 4) how the response patterns evoked by implants differ from those observed in the normal auditory system in response to sound. A series of specific issues is then discussed, including: 1) how to compensate for the loss of cochlear compression; 2) the effective number of independent channels in a normal ear and in cochlear implantees; 3) the importance of independence of responses across neurons; 4) the stochastic nature of normal neural responses; 5) the possible role of across-channel coincidence detection; and 6) potential benefits of binaural implantation. Current cochlear implants do not adequately reproduce several aspects of the neural coding of sound in the normal auditory system. Improved electrode arrays and coding systems may lead to improved coding and, it is hoped, to better performance.
ERIC Educational Resources Information Center
Giordano, Bruno L.; McDonnell, John; McAdams, Stephen
2010-01-01
The neurocognitive processing of environmental sounds and linguistic stimuli shares common semantic resources and can lead to the activation of motor programs for the generation of the passively heard sound or speech. We investigated the extent to which the cognition of environmental sounds, like that of language, relies on symbolic mental…
Spherical loudspeaker array for local active control of sound.
Rafaely, Boaz
2009-05-01
Active control of sound has been employed to reduce noise levels around listeners' head using destructive interference from noise-canceling sound sources. Recently, spherical loudspeaker arrays have been studied as multiple-channel sound sources, capable of generating sound fields with high complexity. In this paper, the potential use of a spherical loudspeaker array for local active control of sound is investigated. A theoretical analysis of the primary and secondary sound fields around a spherical sound source reveals that the natural quiet zones for the spherical source have a shell-shape. Using numerical optimization, quiet zones with other shapes are designed, showing potential for quiet zones with extents that are significantly larger than the well-known limit of a tenth of a wavelength for monopole sources. The paper presents several simulation examples showing quiet zones in various configurations.
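In its simplest form, designing a quiet zone with a multichannel secondary source reduces to a least-squares choice of driving weights that cancels the primary pressure at a set of control points. The sketch below uses free-field monopole transfer functions and made-up geometry, not the spherical-harmonic formulation of the paper.

```python
import numpy as np

c, freq = 343.0, 500.0
k = 2 * np.pi * freq / c

def green(src, rcv):
    """Free-field monopole Green's function between two 3-D points."""
    r = np.linalg.norm(np.asarray(rcv) - np.asarray(src))
    return np.exp(-1j * k * r) / (4 * np.pi * r)

# Hypothetical geometry: 12 secondary sources on a 10-cm ring around the listener
# position, one distant primary source, and control points sampling the quiet zone.
rng = np.random.default_rng(0)
angles = np.linspace(0, 2 * np.pi, 12, endpoint=False)
secondaries = [(0.1 * np.cos(a), 0.1 * np.sin(a), 0.0) for a in angles]
primary = (3.0, 0.0, 0.0)
controls = [tuple(0.3 * rng.standard_normal(3)) for _ in range(32)]

Z = np.array([[green(s, m) for s in secondaries] for m in controls])  # secondary paths
p = np.array([green(primary, m) for m in controls])                    # primary field

q, *_ = np.linalg.lstsq(Z, -p, rcond=None)   # driving weights for cancellation
residual = np.linalg.norm(p + Z @ q) / np.linalg.norm(p)
print(f"relative residual pressure over the control points: {residual:.3f}")
```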
Tajadura-Jiménez, Ana; Cohen, Helen; Bianchi-Berthouze, Nadia
2017-01-01
Neuroscientific studies have shown that humans' mental body representations are not fixed but are constantly updated through sensory feedback, including sound feedback. This suggests potential new therapeutic sensory approaches for patients experiencing body-perception disturbances (BPD). BPD can occur in association with chronic pain, for example in Complex Regional Pain Syndrome (CRPS). BPD often impacts on emotional, social, and motor functioning. Here we present the results from a proof-of-principle pilot study investigating the potential value of using sound feedback for altering BPD and its related emotional state and motor behavior in those with CRPS. We build on previous findings that real-time alteration of the sounds produced by walking can alter healthy people's perception of their own body size, while also resulting in more active gait patterns and a more positive emotional state. In the present study we quantified the emotional state, BPD, pain levels and gait of twelve people with CRPS Type 1, who were exposed to real-time alteration of their walking sounds. Results confirm previous reports of the complexity of the BPD linked to CRPS, as participants could be classified into four BPD subgroups according to how they mentally visualize their body. Further, results suggest that sound feedback may affect the perceived size of the CRPS-affected limb and the pain experienced, but that the effects may differ according to the type of BPD. Sound feedback affected CRPS descriptors and other bodily feelings and emotions including feelings of emotional dominance, limb detachment, position awareness, attention and negative feelings toward the limb. Gait also varied with sound feedback, affecting the foot contact time with the ground in a way consistent with experienced changes in body weight. Although findings from this small pilot study should be interpreted with caution, they suggest potential applications of sound feedback for altering BPD and its related bodily feelings in a clinical setting for patients with chronic pain and BPD. PMID:28798671
Halliday, Lorna F; Tuomainen, Outi; Rosen, Stuart
2017-09-01
There is a general consensus that many children and adults with dyslexia and/or specific language impairment display deficits in auditory processing. However, how these deficits are related to developmental disorders of language is uncertain, and at least four categories of model have been proposed: single distal cause models, risk factor models, association models, and consequence models. This study used children with mild to moderate sensorineural hearing loss (MMHL) to investigate the link between auditory processing deficits and language disorders. We examined the auditory processing and language skills of 46 8- to 16-year-old children with MMHL and 44 age-matched typically developing controls. Auditory processing abilities were assessed using child-friendly psychophysical techniques in order to obtain discrimination thresholds. Stimuli incorporated three different timescales (µs, ms, s) and three different levels of complexity (simple nonspeech tones, complex nonspeech sounds, speech sounds), and tasks required discrimination of frequency or amplitude cues. Language abilities were assessed using a battery of standardised assessments of phonological processing, reading, vocabulary, and grammar. We found evidence that three different auditory processing abilities showed different relationships with language: deficits in a general auditory processing component were necessary but not sufficient for language difficulties, and were consistent with a risk factor model; deficits in slow-rate amplitude modulation (envelope) detection were sufficient but not necessary for language difficulties, and were consistent with either a single distal cause or a consequence model; and deficits in the discrimination of a single speech contrast (/bɑ/ vs /dɑ/) were neither necessary nor sufficient for language difficulties, and were consistent with an association model. Our findings suggest that different auditory processing deficits may constitute distinct and independent routes to the development of language difficulties in children. Copyright © 2017 Elsevier B.V. All rights reserved.
Speech-sound duration processing in a second language is specific to phonetic categories.
Nenonen, Sari; Shestakova, Anna; Huotilainen, Minna; Näätänen, Risto
2005-01-01
The mismatch negativity (MMN) component of the auditory event-related potential was used to determine the effect of native language, Russian, on the processing of speech-sound duration in a second language, Finnish, that uses duration as a cue for phonological distinction. The native-language effect was compared with Finnish vowels that either can or cannot be categorized using the Russian phonological system. The results showed that the duration-change MMN for the Finnish sounds that could be categorized through Russian was reduced in comparison with that for the Finnish sounds having no Russian equivalent. In the Finnish sounds that can be mapped through the Russian phonological system, the facilitation of the duration processing may be inhibited by the native Russian language. However, for the sounds that have no Russian equivalent, new vowel categories independent of the native Russian language have apparently been established, enabling a native-like duration processing of Finnish.
Inquiry Science for Liberal Arts Students: A Topical Course on Sound
NASA Astrophysics Data System (ADS)
Pine, Jerry; Hinckley, Joy; Mims, Sandra; Smith, Joel
1997-04-01
We have developed a topical general studies physics course for liberal arts students, and particularly for preservice elementary teachers. The course is taught entirely in a lab, and is based on a mix of student inquiries and ''sense-making'' in discussion. There are no lectures. A physics professor and a master elementary teacher co-lead. The students begin by conceptualizing the nature of sound by examining everyday phenomena, and then progress through a study of topics such as waves, interference, synthesis of complex sounds from pure tones, analysis of complex sounds into spectra, and independent projects. They use the computer program Soundedit Pro and the Macintosh interface as a powerful tool for analysis and synthesis. The student response has been extremely enthusiastic, though most have come to the course with very strong physics anxiety. The course has so far been trial-taught at five California campuses, and incorporation into some of the regular curricula seems promising.
Riede, Tobias; Goller, Franz
2010-10-01
Song production in songbirds is a model system for studying learned vocal behavior. As in humans, bird phonation involves three main motor systems (respiration, vocal organ and vocal tract). The avian respiratory mechanism uses pressure regulation in air sacs to ventilate a rigid lung. In songbirds sound is generated with two independently controlled sound sources, which reside in a uniquely avian vocal organ, the syrinx. However, the physical sound generation mechanism in the syrinx shows strong analogies to that in the human larynx, such that both can be characterized as myoelastic-aerodynamic sound sources. Similarities include active adduction and abduction, oscillating tissue masses which modulate flow rate through the organ and a layered structure of the oscillating tissue masses giving rise to complex viscoelastic properties. Differences in the functional morphology of the sound producing system between birds and humans require specific motor control patterns. The songbird vocal apparatus is adapted for high speed, suggesting that temporal patterns and fast modulation of sound features are important in acoustic communication. Rapid respiratory patterns determine the coarse temporal structure of song and maintain gas exchange even during very long songs. The respiratory system also contributes to the fine control of airflow. Muscular control of the vocal organ regulates airflow and acoustic features. The upper vocal tract of birds filters the sounds generated in the syrinx, and filter properties are actively adjusted. Nonlinear source-filter interactions may also play a role. The unique morphology and biomechanical system for sound production in birds presents an interesting model for exploring parallels in control mechanisms that give rise to highly convergent physical patterns of sound generation. More comparative work should provide a rich source for our understanding of the evolution of complex sound producing systems. Copyright © 2009 Elsevier Inc. All rights reserved.
Auditory-Motor Processing of Speech Sounds
Möttönen, Riikka; Dutton, Rebekah; Watkins, Kate E.
2013-01-01
The motor regions that control movements of the articulators activate during listening to speech and contribute to performance in demanding speech recognition and discrimination tasks. Whether the articulatory motor cortex modulates auditory processing of speech sounds is unknown. Here, we aimed to determine whether the articulatory motor cortex affects the auditory mechanisms underlying discrimination of speech sounds in the absence of demanding speech tasks. Using electroencephalography, we recorded responses to changes in sound sequences, while participants watched a silent video. We also disrupted the lip or the hand representation in left motor cortex using transcranial magnetic stimulation. Disruption of the lip representation suppressed responses to changes in speech sounds, but not piano tones. In contrast, disruption of the hand representation had no effect on responses to changes in speech sounds. These findings show that disruptions within, but not outside, the articulatory motor cortex impair automatic auditory discrimination of speech sounds. The findings provide evidence for the importance of auditory-motor processes in efficient neural analysis of speech sounds. PMID:22581846
Classification of Complex Nonspeech Sounds. Panel on Classification of Complex Nonspeech Sounds
1989-04-14
learning of the discrimination task. Since reports on many of these studies have not yet been published, brief summaries of the studies are included below... tonal signal with a noise-producing auditory induction and introduced an intensity ramp that increased the intensity of the tone just before the onset... recorded hand clap signals. The physical properties of the hand claps can be altered (along the lines suggested by the multidimensional analysis
Auditory hedonic phenotypes in dementia: A behavioural and neuroanatomical analysis
Fletcher, Phillip D.; Downey, Laura E.; Golden, Hannah L.; Clark, Camilla N.; Slattery, Catherine F.; Paterson, Ross W.; Schott, Jonathan M.; Rohrer, Jonathan D.; Rossor, Martin N.; Warren, Jason D.
2015-01-01
Patients with dementia may exhibit abnormally altered liking for environmental sounds and music but such altered auditory hedonic responses have not been studied systematically. Here we addressed this issue in a cohort of 73 patients representing major canonical dementia syndromes (behavioural variant frontotemporal dementia (bvFTD), semantic dementia (SD), progressive nonfluent aphasia (PNFA), and amnestic Alzheimer's disease (AD)) using a semi-structured caregiver behavioural questionnaire and voxel-based morphometry (VBM) of patients' brain MR images. Behavioural responses signalling abnormal aversion to environmental sounds, aversion to music or heightened pleasure in music (‘musicophilia’) occurred in around half of the cohort but showed clear syndromic and genetic segregation, occurring in most patients with bvFTD but infrequently in PNFA and more commonly in association with MAPT than C9orf72 mutations. Aversion to sounds was the exclusive auditory phenotype in AD whereas more complex phenotypes including musicophilia were common in bvFTD and SD. Auditory hedonic alterations correlated with grey matter loss in a common, distributed, right-lateralised network including antero-mesial temporal lobe, insula, anterior cingulate and nucleus accumbens. Our findings suggest that abnormalities of auditory hedonic processing are a significant issue in common dementias. Sounds may constitute a novel probe of brain mechanisms for emotional salience coding that are targeted by neurodegenerative disease. PMID:25929717
NASA Astrophysics Data System (ADS)
Stoelinga, Christophe; Heo, Inseok; Long, Glenis; Lee, Jungmee; Lutfi, Robert; Chang, An-Chieh
2015-12-01
The human auditory system has a remarkable ability to "hear out" a wanted sound (target) in the background of unwanted sounds. One important property of sound which helps us hear out the target is inharmonicity. When a single harmonic component of a harmonic complex is slightly mistuned, that component is heard to separate from the rest. At high harmonic numbers, where components are unresolved, the harmonic segregation effect is thought to result from detection of modulation of the time envelope (roughness cue) resulting from the mistuning. Neurophysiological research provides evidence that such envelope modulations are represented early in the auditory system, at the level of the auditory nerve. When the mistuned harmonic is a low harmonic, where components are resolved, the harmonic segregation is attributed to more centrally located auditory processes, leading harmonic components to form a perceptual group heard separately from the mistuned component. Here we consider an alternative explanation that attributes the harmonic segregation to detection of modulation when both high and low harmonic numbers are mistuned. Specifically, we evaluate the possibility that distortion products in the cochlea generated by the mistuned component introduce detectable beating patterns for both high and low harmonic numbers. Distortion product otoacoustic emissions (DPOAEs) were measured using 3-, 7-, or 12-tone harmonic complexes with a fundamental frequency (F0) of 200 or 400 Hz. One of two harmonic components was mistuned at each F0: one where harmonics are expected to be resolved and the other where they are unresolved. Many non-harmonic DPOAEs are present whenever a harmonic component is mistuned. These non-harmonic DPOAEs are often separated by the amount of the mistuning (ΔF). This small frequency difference will generate a slow beating pattern at ΔF; because this beating is only present when a harmonic component is mistuned, it could provide a cue for behavioral detection of harmonic-complex mistuning and may also be associated with modulation of auditory nerve responses.
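The ΔF beating invoked above can be demonstrated directly: two components separated by ΔF produce an envelope that fluctuates at ΔF. A minimal synthetic sketch follows; the frequencies and amplitudes are chosen for illustration, not taken from the study's stimuli or emission levels.

```python
import numpy as np
from scipy.signal import hilbert

fs = 48000
t = np.arange(int(fs * 0.5)) / fs
f0, delta_f = 200.0, 6.0          # illustrative F0 and mistuning

# Harmonic at 3*F0 plus a nearby weaker component offset by delta_f,
# standing in for a distortion product generated by the mistuned harmonic.
x = np.sin(2 * np.pi * 3 * f0 * t) + 0.3 * np.sin(2 * np.pi * (3 * f0 + delta_f) * t)

envelope = np.abs(hilbert(x))
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
print("dominant envelope fluctuation: %.1f Hz" % freqs[spectrum.argmax()])
# ~6 Hz: the envelope beats at the component separation delta_f.
```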
Possibilities of psychoacoustics to determine sound quality
NASA Astrophysics Data System (ADS)
Genuit, Klaus
For some years, acoustic engineers have increasingly become aware of the importance of analyzing and minimizing noise problems not only with regard to the A-weighted sound pressure level, but to design sound quality. It is relatively easy to determine the A-weighted SPL according to international standards. However, the objective evaluation to describe subjectively perceived sound quality - taking into account psychoacoustic parameters such as loudness, roughness, fluctuation strength, sharpness and so forth - is more difficult. On the one hand, the psychoacoustic measurement procedures which are known so far have not yet been standardized. On the other hand, they have only been tested in laboratories by means of listening tests in the free field with a single sound source and simple signals. Therefore, the results achieved cannot be transferred to complex sound situations with several spatially distributed sound sources without difficulty. Due to the directional hearing and selectivity of human hearing, individual sound events can be selected among many. Already in the late seventies a new binaural Artificial Head Measurement System was developed which met the requirements of the automobile industry in terms of measurement technology. The first industrial application of the Artificial Head Measurement System was in 1981. Since that time the system has been further developed, particularly through the cooperation between HEAD acoustics and Mercedes-Benz. In addition to a calibratable Artificial Head Measurement System which is compatible with standard measurement technologies and has transfer characteristics comparable to human hearing, a Binaural Analysis System is now also available. This system permits the analysis of binaural signals with regard to physical and psychoacoustic procedures. Moreover, the signals to be analyzed can be simultaneously monitored through headphones and manipulated in the time and frequency domain so that the signal components responsible for noise annoyance can be found. Especially in complex sound situations with several spatially distributed sound sources, standard one-channel measurement methods cannot adequately determine the sound quality, acoustic comfort, or annoyance of sound events.
Fourth Computational Aeroacoustics (CAA) Workshop on Benchmark Problems
NASA Technical Reports Server (NTRS)
Dahl, Milo D. (Editor)
2004-01-01
This publication contains the proceedings of the Fourth Computational Aeroacoustics (CAA) Workshop on Benchmark Problems. In this workshop, as in previous workshops, the problems were devised to gauge the technological advancement of computational techniques to calculate all aspects of sound generation and propagation in air directly from the fundamental governing equations. A variety of benchmark problems have been previously solved ranging from simple geometries with idealized acoustic conditions to test the accuracy and effectiveness of computational algorithms and numerical boundary conditions; to sound radiation from a duct; to gust interaction with a cascade of airfoils; to the sound generated by a separating, turbulent viscous flow. By solving these and similar problems, workshop participants have shown the technical progress from the basic challenges to accurate CAA calculations to the solution of CAA problems of increasing complexity and difficulty. The fourth CAA workshop emphasized the application of CAA methods to the solution of realistic problems. The workshop was held at the Ohio Aerospace Institute in Cleveland, Ohio, on October 20 to 22, 2003. At that time, workshop participants presented their solutions to problems in one or more of five categories. Their solutions are presented in this proceedings along with the comparisons of their solutions to the benchmark solutions or experimental data. The five categories for the benchmark problems were as follows: Category 1: Basic Methods. The numerical computation of sound is affected by, among other issues, the choice of grid used and by the boundary conditions. Category 2: Complex Geometry. The ability to compute the sound in the presence of complex geometric surfaces is important in practical applications of CAA. Category 3: Sound Generation by Interacting With a Gust. The practical application of CAA for computing noise generated by turbomachinery involves the modeling of the noise source mechanism as a vortical gust interacting with an airfoil. Category 4: Sound Transmission and Radiation. Category 5: Sound Generation in Viscous Problems. Sound is generated under certain conditions by a viscous flow as the flow passes an object or a cavity.
Multisensory brand search: How the meaning of sounds guides consumers' visual attention.
Knoeferle, Klemens M; Knoeferle, Pia; Velasco, Carlos; Spence, Charles
2016-06-01
Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Kogan, Pablo; Arenas, Jorge P; Bermejo, Fernando; Hinalaf, María; Turra, Bruno
2018-06-13
Urban soundscapes are dynamic and complex multivariable environmental systems. Soundscapes can be organized into three main entities containing the multiple variables: Experienced Environment (EE), Acoustic Environment (AE), and Extra-Acoustic Environment (XE). This work applies a multidimensional and synchronic data-collecting methodology at eight urban environments in the city of Córdoba, Argentina. The EE was assessed by means of surveys, the AE by acoustic measurements and audio recordings, and the XE by photos, video, and complementary sources. In total, 39 measurement locations were considered, where data corresponding to 61 AE and 203 EE were collected. Multivariate analysis and GIS techniques were used for data processing. The types of sound sources perceived and their extents make up part of the collected variables belonging to the EE, i.e., traffic, people, natural sounds, and others. The sources explaining most of the variance were traffic noise and natural sounds. Thus, a Green Soundscape Index (GSI) is defined here as the ratio of the perceived extents of natural sounds to traffic noise. Collected data were divided into three ranges according to GSI value: 1) perceptual predominance of traffic noise, 2) balanced perception, and 3) perceptual predominance of natural sounds. For each group, three additional variables from the EE and three from the AE were analysed, which showed significant differences, especially between ranges 1 and 2 versus range 3. These results confirm the key role of perceiving natural sounds in a town environment and also support the proposal of the GSI as a valuable indicator to classify urban soundscapes. In addition, the collected GSI-related data significantly help to assess the overall soundscape. It is noted that this proposed simple perceptual index not only allows one to assess and classify urban soundscapes but also contributes greatly toward a technique for separating environmental sound sources. Copyright © 2018 Elsevier B.V. All rights reserved.
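Since the GSI reduces to a ratio of two perceptual ratings, it is straightforward to compute; the sketch below also classifies a location into the three ranges, with cut-off values that are placeholders because the abstract does not state them.

```python
def green_soundscape_index(natural_extent, traffic_extent, eps=1e-6):
    """GSI = perceived extent of natural sounds / perceived extent of traffic noise."""
    return natural_extent / (traffic_extent + eps)

def gsi_range(gsi, low=0.8, high=1.25):
    """Assign one of the three perceptual ranges; the cut-offs are illustrative only."""
    if gsi < low:
        return "1: traffic noise predominates"
    if gsi <= high:
        return "2: balanced perception"
    return "3: natural sounds predominate"

# Example: natural sounds rated 4 and traffic noise rated 2 on the same perceptual scale.
gsi = green_soundscape_index(4, 2)
print(round(gsi, 2), gsi_range(gsi))
```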
Yost, William A; Zhong, Xuan; Najam, Anbar
2015-11-01
In four experiments listeners were rotated or were stationary. Sounds came from a stationary loudspeaker or rotated from loudspeaker to loudspeaker around an azimuth array. When either sounds or listeners rotate the auditory cues used for sound source localization change, but in the everyday world listeners perceive sound rotation only when sounds rotate not when listeners rotate. In the everyday world sound source locations are referenced to positions in the environment (a world-centric reference system). The auditory cues for sound source location indicate locations relative to the head (a head-centric reference system), not locations relative to the world. This paper deals with a general hypothesis that the world-centric location of sound sources requires the auditory system to have information about auditory cues used for sound source location and cues about head position. The use of visual and vestibular information in determining rotating head position in sound rotation perception was investigated. The experiments show that sound rotation perception when sources and listeners rotate was based on acoustic, visual, and, perhaps, vestibular information. The findings are consistent with the general hypotheses and suggest that sound source localization is not based just on acoustics. It is a multisystem process.
NASA Astrophysics Data System (ADS)
Eshach, Haim
2014-06-01
This article describes the development and field test of the Sound Concept Inventory Instrument (SCII), designed to measure middle school students' concepts of sound. The instrument was designed based on known students' difficulties in understanding sound and the history of science related to sound and focuses on two main aspects of sound: sound has material properties, and sound has process properties. The final SCII consists of 71 statements that respondents rate as either true or false and also indicate their confidence on a five-point scale. Administration to 355 middle school students resulted in a Cronbach alpha of 0.906, suggesting a high reliability. In addition, the average percentage of students' answers to statements that associate sound with material properties is significantly higher than the average percentage of statements associating sound with process properties (p <0.001). The SCII is a valid and reliable tool that can be used to determine students' conceptions of sound.
Sučević, Jelena; Savić, Andrej M; Popović, Mirjana B; Styles, Suzy J; Ković, Vanja
2015-01-01
There is something about the sound of a pseudoword like takete that goes better with a spiky, than a curvy shape (Köhler, 1929:1947). Yet despite decades of research into sound symbolism, the role of this effect on real words in the lexicons of natural languages remains controversial. We report one behavioural and one ERP study investigating whether sound symbolism is active during normal language processing for real words in a speaker's native language, in the same way as for novel word forms. The results indicate that sound-symbolic congruence has a number of influences on natural language processing: Written forms presented in a congruent visual context generate more errors during lexical access, as well as a chain of differences in the ERP. These effects have a very early onset (40-80 ms, 100-160 ms, 280-320 ms) and are later overshadowed by familiar types of semantic processing, indicating that sound symbolism represents an early sensory-co-activation effect. Copyright © 2015 Elsevier Inc. All rights reserved.
Hearing in cetaceans: from natural history to experimental biology.
Mooney, T Aran; Yamato, Maya; Branstetter, Brian K
2012-01-01
Sound is a primary sensory cue for most marine mammals, and this is especially true for cetaceans. To passively and actively acquire information about their environment, cetaceans have some of the most derived ears of all mammals, capable of sophisticated, sensitive hearing and auditory processing. These capabilities have developed for survival in an underwater world where sound travels five times faster than in air, and where light is quickly attenuated and often limited at depth, at night, and in murky waters. Cetacean auditory evolution has capitalized on the ubiquity of sound cues and the efficiency of underwater acoustic communication. The sense of hearing is central to cetacean sensory ecology, enabling vital behaviours such as locating prey, detecting predators, identifying conspecifics, and navigating. Increasing levels of anthropogenic ocean noise appears to influence many of these activities. Here, we describe the historical progress of investigations on cetacean hearing, with a particular focus on odontocetes and recent advancements. While this broad topic has been studied for several centuries, new technologies in the past two decades have been leveraged to improve our understanding of a wide range of taxa, including some of the most elusive species. This chapter addresses topics including how sounds are received, what sounds are detected, hearing mechanisms for complex acoustic scenes, recent anatomical and physiological studies, the potential impacts of noise, and mysticete hearing. We conclude by identifying emerging research topics and areas which require greater focus. Copyright © 2012 Elsevier Ltd. All rights reserved.
2014-09-30
repeating pulse-like signals were investigated. Software prototypes were developed and integrated into distinct streams of research; projects... to study complex sound archives spanning large spatial and temporal scales. A new post-processing method for detection and classification was also... false positive rates. HK-ANN was successfully tested for a large minke whale dataset, but could easily be used on other signal types. Various
Time course of the influence of musical expertise on the processing of vocal and musical sounds.
Rigoulot, S; Pell, M D; Armony, J L
2015-04-02
Previous functional magnetic resonance imaging (fMRI) studies have suggested that different cerebral regions preferentially process human voice and music. Yet, little is known on the temporal course of the brain processes that decode the category of sounds and how the expertise in one sound category can impact these processes. To address this question, we recorded the electroencephalogram (EEG) of 15 musicians and 18 non-musicians while they were listening to short musical excerpts (piano and violin) and vocal stimuli (speech and non-linguistic vocalizations). The task of the participants was to detect noise targets embedded within the stream of sounds. Event-related potentials revealed an early differentiation of sound category, within the first 100 ms after the onset of the sound, with mostly increased responses to musical sounds. Importantly, this effect was modulated by the musical background of participants, as musicians were more responsive to music sounds than non-musicians, consistent with the notion that musical training increases sensitivity to music. In late temporal windows, brain responses were enhanced in response to vocal stimuli, but musicians were still more responsive to music. These results shed new light on the temporal course of neural dynamics of auditory processing and reveal how it is impacted by the stimulus category and the expertise of participants. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
The influence of (central) auditory processing disorder in speech sound disorders.
Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Vilela, Nadia; Carvallo, Renata Mota Mamede; Wertzner, Haydée Fiszbein
2016-01-01
Considering the importance of auditory information for the acquisition and organization of phonological rules, the assessment of (central) auditory processing contributes to both the diagnosis and targeting of speech therapy in children with speech sound disorders. To study phonological measures and (central) auditory processing of children with speech sound disorder. Clinical and experimental study, with 21 subjects with speech sound disorder aged between 7.0 and 9.11 years, divided into two groups according to their (central) auditory processing disorder. The assessment comprised tests of phonology, speech inconsistency, and metalinguistic abilities. The group with (central) auditory processing disorder demonstrated greater severity of speech sound disorder. The cutoff value obtained for the process density index was the one that best characterized the occurrence of phonological processes for children above 7 years of age. The comparison among the tests evaluated between the two groups showed differences in some phonological and metalinguistic abilities. Children with an index value above 0.54 demonstrated strong tendencies towards presenting a (central) auditory processing disorder, and this measure was effective to indicate the need for evaluation in children with speech sound disorder. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Auditory psychophysics and perception.
Hirsh, I J; Watson, C S
1996-01-01
In this review of auditory psychophysics and perception, we cite some important books, research monographs, and research summaries from the past decade. Within auditory psychophysics, we have singled out some topics of current importance: Cross-Spectral Processing, Timbre and Pitch, and Methodological Developments. Complex sounds and complex listening tasks have been the subject of new studies in auditory perception. We review especially work that concerns auditory pattern perception, with emphasis on temporal aspects of the patterns and on patterns that do not depend on the cognitive structures often involved in the perception of speech and music. Finally, we comment on some aspects of individual differences that are sufficiently important to question the goal of characterizing the auditory properties of the typical, average, adult listener. Among the important factors that give rise to these individual differences are those involved in selective processing and attention.
Tutorial on the Psychophysics and Technology of Virtual Acoustic Displays
NASA Technical Reports Server (NTRS)
Wenzel, Elizabeth M.; Null, Cynthia (Technical Monitor)
1998-01-01
Virtual acoustics, also known as 3-D sound and auralization, is the simulation of the complex acoustic field experienced by a listener within an environment. Going beyond the simple intensity panning of normal stereo techniques, the goal is to process sounds so that they appear to come from particular locations in three-dimensional space. Although loudspeaker systems are being developed, most of the recent work focuses on using headphones for playback and is the outgrowth of earlier analog techniques. For example, in binaural recording, the sound of an orchestra playing classical music is recorded through small mics in the two "ear canals" of an anthropomorphic artificial or "dummy" head placed in the audience of a concert hall. When the recorded piece is played back over headphones, the listener passively experiences the illusion of hearing the violins on the left and the cellos on the right, along with all the associated echoes, resonances, and ambience of the original environment. Current techniques use digital signal processing to synthesize the acoustical properties that people use to localize a sound source in space. Thus, they provide the flexibility of a kind of digital dummy head, allowing a more active experience in which a listener can both design and move around or interact with a simulated acoustic environment in real time. Such simulations are being developed for a variety of application areas including architectural acoustics, advanced human-computer interfaces, telepresence and virtual reality, navigation aids for the visually-impaired, and as a test bed for psychoacoustical investigations of complex spatial cues. The tutorial will review the basic psychoacoustical cues that determine human sound localization and the techniques used to measure these cues as Head-Related Transfer Functions (HRTFs) for the purpose of synthesizing virtual acoustic environments. The only conclusive test of the adequacy of such simulations is an operational one in which the localization of real and synthesized stimuli are directly compared in psychophysical studies. To this end, the results of psychophysical experiments examining the perceptual validity of the synthesis technique will be reviewed and factors that can enhance perceptual accuracy and realism will be discussed. Of particular interest is the relationship between individual differences in HRTFs and in behavior, the role of reverberant cues in reducing the perceptual errors observed with virtual sound sources, and the importance of developing perceptually valid methods of simplifying the synthesis technique. Recent attempts to implement the synthesis technique in real time systems will also be discussed and an attempt made to interpret their quoted system specifications in terms of perceptual performance. Finally, some critical research and technology development issues for the future will be outlined.
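As an illustration of the digital dummy-head idea described above, the sketch below convolves a monophonic source with a left/right pair of head-related impulse responses (HRIRs) to place it at a virtual location; the HRIR file names, direction, and sampling rate are illustrative assumptions rather than part of the original tutorial.

```python
# Minimal binaural-rendering sketch: place a mono sound at a virtual location
# by convolving it with the left/right head-related impulse responses (HRIRs)
# measured for that direction. File names, direction, and sample rate are placeholders.
import numpy as np
from scipy.signal import fftconvolve

fs = 44100                           # assumed sampling rate (Hz)
mono = np.random.randn(fs)           # 1 s of test signal (stand-in for a real source)
hrir_left = np.loadtxt("hrir_az30_el0_L.txt")    # hypothetical HRIR pair for 30 deg azimuth
hrir_right = np.loadtxt("hrir_az30_el0_R.txt")

left = fftconvolve(mono, hrir_left)   # each ear hears the source filtered by its HRIR
right = fftconvolve(mono, hrir_right)
binaural = np.stack([left, right], axis=1)       # two-channel signal for headphone playback
```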
How Sound Symbolism Is Processed in the Brain: A Study on Japanese Mimetic Words
Okuda, Jiro; Okada, Hiroyuki; Matsuda, Tetsuya
2014-01-01
Sound symbolism is the systematic and non-arbitrary link between word and meaning. Although a number of behavioral studies demonstrate that both children and adults are universally sensitive to sound symbolism in mimetic words, the neural mechanisms underlying this phenomenon have not yet been extensively investigated. The present study used functional magnetic resonance imaging to investigate how Japanese mimetic words are processed in the brain. In Experiment 1, we compared processing for motion mimetic words with that for non-sound symbolic motion verbs and adverbs. Mimetic words uniquely activated the right posterior superior temporal sulcus (STS). In Experiment 2, we further examined the generalizability of the findings from Experiment 1 by testing another domain: shape mimetics. Our results show that the right posterior STS was active when subjects processed both motion and shape mimetic words, thus suggesting that this area may be the primary structure for processing sound symbolism. Increased activity in the right posterior STS may also reflect how sound symbolic words function as both linguistic and non-linguistic iconic symbols. PMID:24840874
Synthesis of Systemic Functional Theory & Dynamical Systems Theory for Socio-Cultural Modeling
2011-01-26
is, language and other resources (e.g. images and sound resources) are conceptualised as inter-locking systems of meaning which realise four...hierarchical ranks and strata (e.g. sounds, word groups, clauses, and complex discourse structures in language, and elements, figures and episodes in images ...integrating platform for describing how language and other resources (e.g. images and sound) work together to fulfil particular objectives. While
The Development of a Finite Volume Method for Modeling Sound in Coastal Ocean Environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Wen; Yang, Zhaoqing; Copping, Andrea E.
With the rapid growth of marine renewable energy and offshore wind energy, there have been concerns that the noise generated from construction and operation of the devices may interfere with marine animals' communication. In this research, an underwater sound model is developed to simulate sound propagation generated by marine-hydrokinetic energy (MHK) devices or offshore wind (OSW) energy platforms. Finite volume and finite difference methods are developed to solve the 3D Helmholtz equation of sound propagation in the coastal environment. For the finite volume method, the grid system consists of triangular grids in the horizontal plane and sigma-layers in the vertical dimension. A 3D sparse matrix solver with complex coefficients is formed for solving the resulting acoustic pressure field. The Complex Shifted Laplacian Preconditioner (CSLP) method is applied to efficiently solve the matrix system iteratively with MPI parallelization using a high performance cluster. The sound model is then coupled with the Finite Volume Community Ocean Model (FVCOM) for simulating sound propagation generated by human activities in a range-dependent setting, such as offshore wind energy platform constructions and tidal stream turbines. As a proof of concept, initial validation of the finite difference solver is presented for two coastal wedge problems. Validation of the finite volume method will be reported separately.
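For illustration only, the sketch below assembles and solves a sparse, complex-valued Helmholtz system in one dimension; it is a minimal stand-in for the 3D finite-volume, CSLP-preconditioned solver described in the abstract, with grid size, frequency, and sound speed chosen arbitrarily.

```python
# Minimal 1D finite-difference Helmholtz sketch, d^2p/dx^2 + k^2 p = -s,
# illustrating the sparse complex-valued linear system mentioned above. The
# actual model is 3D, finite-volume, unstructured, and CSLP-preconditioned;
# grid size, frequency, and sound speed here are illustrative assumptions.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, L = 2000, 1000.0                  # grid points and domain length (m)
dx = L / (n - 1)
f, c = 100.0, 1500.0                 # source frequency (Hz) and sound speed (m/s)
k = 2 * np.pi * f / c                # acoustic wavenumber

main = (-2.0 / dx**2 + k**2) * np.ones(n, dtype=complex)
off = np.ones(n - 1, dtype=complex) / dx**2
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")   # sparse Helmholtz operator

s = np.zeros(n, dtype=complex)
s[n // 2] = 1.0 / dx                 # point source at the centre of the domain

p = spla.spsolve(A, -s)              # complex acoustic pressure amplitudes
```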
Vocal repertoire of the social giant otter.
Leuchtenberger, Caroline; Sousa-Lima, Renata; Duplaix, Nicole; Magnusson, William E; Mourão, Guilherme
2014-11-01
According to the "social intelligence hypothesis," species with complex social interactions have more sophisticated communication systems. Giant otters (Pteronura brasiliensis) live in groups with complex social interactions. It is likely that the vocal communication of giant otters is more sophisticated than previous studies suggest. The objectives of the current study were to describe the airborne vocal repertoire of giant otters in the Pantanal area of Brazil, to analyze call types within different behavioral contexts, and to correlate vocal complexity with the level of sociability of mustelids to verify whether or not the result supports the social intelligence hypothesis. The behavior of nine giant otter groups was observed. Vocalizations recorded were acoustically and statistically analyzed to describe the species' repertoire. The repertoire comprised 15 sound types emitted in different behavioral contexts. The main behavioral contexts of each sound type were significantly associated with the acoustic variable ordination of different sound types. A strong correlation between vocal complexity and sociability was found for different species, suggesting that the communication systems observed in the family Mustelidae support the social intelligence hypothesis.
Kuwada, Shigeyuki; Bishop, Brian; Kim, Duck O.
2012-01-01
The major functions of the auditory system are recognition (what is the sound) and localization (where is the sound). Although each of these has received considerable attention, rarely are they studied in combination. Furthermore, the stimuli used in the bulk of studies did not represent sound location in real environments and ignored the effects of reverberation. Another ignored dimension is the distance of a sound source. Finally, there is a scarcity of studies conducted in unanesthetized animals. We illustrate a set of efficient methods that overcome these shortcomings. We use the virtual auditory space method (VAS) to efficiently present sounds at different azimuths, different distances and in different environments. Additionally, this method allows for efficient switching between binaural and monaural stimulation and alteration of acoustic cues singly or in combination to elucidate neural mechanisms underlying localization and recognition. Such procedures cannot be performed with real sound field stimulation. Our research is designed to address the following questions: Are IC neurons specialized to process what and where auditory information? How do reverberation and distance of the sound source affect this processing? How do IC neurons represent sound source distance? Are neural mechanisms underlying envelope processing binaural or monaural? PMID:22754505
ERIC Educational Resources Information Center
Todd, Juanita; Finch, Brayden; Smith, Ellen; Budd, Timothy W.; Schall, Ulrich
2011-01-01
Temporal and spectral sound information is processed asymmetrically in the brain with the left-hemisphere showing an advantage for processing the former and the right-hemisphere for the latter. Using monaural sound presentation we demonstrate a context and ability dependent ear-asymmetry in brain measures of temporal change detection. Our measure…
Temporal signatures of processing voiceness and emotion in sound.
Schirmer, Annett; Gunter, Thomas C
2017-06-01
This study explored the temporal course of vocal and emotional sound processing. Participants detected rare repetitions in a stimulus stream comprising neutral and surprised non-verbal exclamations and spectrally rotated control sounds. Spectral rotation preserved some acoustic and emotional properties of the vocal originals. Event-related potentials elicited to unrepeated sounds revealed effects of voiceness and emotion. Relative to non-vocal sounds, vocal sounds elicited a larger centro-parietally distributed N1. This effect was followed by greater positivity to vocal relative to non-vocal sounds beginning with the P2 and extending throughout the recording epoch (N4, late positive potential) with larger amplitudes in female than in male listeners. Emotion effects overlapped with the voiceness effects but were smaller and differed topographically. Voiceness and emotion interacted only for the late positive potential, which was greater for vocal-emotional as compared with all other sounds. Taken together, these results point to a multi-stage process in which voiceness and emotionality are represented independently before being integrated in a manner that biases responses to stimuli with socio-emotional relevance. © The Author (2017). Published by Oxford University Press.
Integration and segregation in auditory scene analysis
NASA Astrophysics Data System (ADS)
Sussman, Elyse S.
2005-03-01
Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and the integration of sequential elements into perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.
Emotional processing modulates attentional capture of irrelevant sound input in adolescents.
Gulotta, B; Sadia, G; Sussman, E
2013-04-01
The main goal of this study was to investigate how emotional processing modulates the allocation of attention to irrelevant background sound events in adolescence. We examined the effect of viewing positively and negatively valenced video clips on components of event-related brain potentials (ERPs), while irrelevant sounds were presented to the ears. All sounds evoked the P1, N1, P2, and N2 components. The infrequent, randomly occurring novel environmental sounds evoked the P3a component in all trial types. The main finding was that the P3a component was larger in amplitude when evoked by salient, distracting background sound events while participants were watching negatively charged video clips, compared to when they were viewing the positive or neutral video clips. The results suggest that the threshold for involuntary attention to the novel sounds was lowered during viewing of the negative movie contexts. This indicates a survival mechanism, which would be needed for more automatic processing of irrelevant sounds to monitor the unattended environment in situations perceived as more threatening. Copyright © 2012 Elsevier B.V. All rights reserved.
Research and Implementation of Heart Sound Denoising
NASA Astrophysics Data System (ADS)
Liu, Feng; Wang, Yutai; Wang, Yanxiang
The heart sound is one of the most important physiological signals. However, the acquisition of heart sound signals can be disturbed by many external factors. The heart sound is a weak signal, and even weak external noise may lead to misjudgment of the pathological and physiological information it carries, and thus to misdiagnosis. It is therefore essential to remove the noise mixed with the heart sound. In this paper, a systematic study and analysis of heart sound denoising based on MATLAB is presented. The noisy heart sound signals are first transformed into the wavelet domain and decomposed at multiple levels using MATLAB. Soft thresholding is then applied to the detail coefficients to eliminate noise, which significantly improves the denoising result. The denoised signals are reconstructed stepwise from the processed detail coefficients. Lastly, the 50 Hz power-line interference and the 35 Hz mechanical and electrical interference signals are removed using a notch filter.
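A minimal sketch of the wavelet soft-thresholding chain described above is given below in Python with PyWavelets rather than MATLAB; the wavelet family, decomposition depth, threshold rule, and toy signal are illustrative assumptions, not the paper's exact settings.

```python
# Minimal wavelet-denoising sketch for a heart-sound-like signal: multi-level
# wavelet decomposition, soft thresholding of the detail coefficients, then
# reconstruction. The wavelet, level, threshold rule, and toy signal are
# illustrative choices, not the paper's exact MATLAB settings.
import numpy as np
import pywt

fs = 2000                                        # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
clean = np.sin(2 * np.pi * 40 * t) * np.exp(-((t % 0.8) * 10) ** 2)  # toy "heart sound" bursts
noisy = clean + 0.2 * np.random.randn(t.size)

coeffs = pywt.wavedec(noisy, "db6", level=5)         # multi-level wavelet decomposition
sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise estimate from the finest details
thr = sigma * np.sqrt(2 * np.log(noisy.size))        # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db6")               # reconstruction from processed coefficients
```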
Monitoring Sea Surface Processes Using the High Frequency Ambient Sound Field
2006-09-30
Pacific (ITCZ 10ºN, 95ºW), 3) Bering Sea coastal shelf, 4) Ionian Sea, 5) Carr Inlet, Puget Sound, Washington, and 6) Haro Strait, Washington/BC...Southern Resident Killer Whale (Puget Sound). In coastal and inland waterways, anthropogenic noise is often present. These signals are usually...
Effects of musical expertise on oscillatory brain activity in response to emotional sounds.
Nolden, Sophie; Rigoulot, Simon; Jolicoeur, Pierre; Armony, Jorge L
2017-08-01
Emotions can be conveyed through a variety of channels in the auditory domain, be it via music, non-linguistic vocalizations, or speech prosody. Moreover, recent studies suggest that expertise in one sound category can impact the processing of emotional sounds in other sound categories, as they found that musicians process emotional musical and vocal sounds more efficiently than non-musicians. However, the neural correlates of these modulations, especially their time course, are not very well understood. Consequently, we focused here on how the neural processing of emotional information varies as a function of sound category and expertise of participants. Electroencephalogram (EEG) of 20 non-musicians and 17 musicians was recorded while they listened to vocal (speech and vocalizations) and musical sounds. The amplitude of EEG-oscillatory activity in the theta, alpha, beta, and gamma band was quantified and Independent Component Analysis (ICA) was used to identify underlying components of brain activity in each band. Category differences were found in theta and alpha bands, due to larger responses to music and speech than to vocalizations, and in posterior beta, mainly due to differential processing of speech. In addition, we observed greater activation in frontal theta and alpha for musicians than for non-musicians, as well as an interaction between expertise and emotional content of sounds in frontal alpha. The results reflect musicians' expertise in recognition of emotion-conveying music, which seems to also generalize to emotional expressions conveyed by the human voice, in line with previous accounts of effects of expertise on musical and vocal sound processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Zhaoqing; Khangaonkar, Tarang
2010-11-19
Water circulation in Puget Sound, a large complex estuary system in the Pacific Northwest coastal ocean of the United States, is governed by multiple spatially and temporally varying forcings from tides, atmosphere (wind, heating/cooling, precipitation/evaporation, pressure), and river inflows. In addition, the hydrodynamic response is affected strongly by geomorphic features, such as fjord-like bathymetry and complex shoreline features, resulting in many distinguishing characteristics in its main and sub-basins. To better understand the details of circulation features in Puget Sound and to assist with proposed nearshore restoration actions for improving water quality and the ecological health of Puget Sound, a high-resolution (around 50 m in estuaries and tide flats) hydrodynamic model for the entire Puget Sound was needed. Here, a three-dimensional circulation model of Puget Sound using an unstructured-grid finite volume coastal ocean model is presented. The model was constructed with sufficient resolution in the nearshore region to address the complex coastline, multi-tidal channels, and tide flats. Model open boundaries were extended to the entrance of the Strait of Juan de Fuca and the northern end of the Strait of Georgia to account for the influences of ocean water intrusion from the Strait of Juan de Fuca and the Fraser River plume from the Strait of Georgia, respectively. Comparisons of model results, observed data, and associated error statistics for tidal elevation, velocity, temperature, and salinity indicate that the model is capable of simulating the general circulation patterns on the scale of a large estuarine system as well as detailed hydrodynamics in the nearshore tide flats. Tidal characteristics, temperature/salinity stratification, mean circulation, and river plumes in estuaries with tide flats are discussed.
Ding, Nai; Pan, Xunyi; Luo, Cheng; Su, Naifei; Zhang, Wen; Zhang, Jianfeng
2018-01-31
How the brain groups sequential sensory events into chunks is a fundamental question in cognitive neuroscience. This study investigates whether top-down attention or specific tasks are required for the brain to apply lexical knowledge to group syllables into words. Neural responses tracking the syllabic and word rhythms of a rhythmic speech sequence were concurrently monitored using electroencephalography (EEG). The participants performed different tasks, attending to either the rhythmic speech sequence or a distractor, which was another speech stream or a nonlinguistic auditory/visual stimulus. Attention to speech, but not a lexical-meaning-related task, was required for reliable neural tracking of words, even when the distractor was a nonlinguistic stimulus presented cross-modally. Neural tracking of syllables, however, was reliably observed in all tested conditions. These results strongly suggest that neural encoding of individual auditory events (i.e., syllables) is automatic, while knowledge-based construction of temporal chunks (i.e., words) crucially relies on top-down attention. SIGNIFICANCE STATEMENT Why we cannot understand speech when not paying attention is an old question in psychology and cognitive neuroscience. Speech processing is a complex process that involves multiple stages, e.g., hearing and analyzing the speech sound, recognizing words, and combining words into phrases and sentences. The current study investigates which speech-processing stage is blocked when we do not listen carefully. We show that the brain can reliably encode syllables, basic units of speech sounds, even when we do not pay attention. Nevertheless, when distracted, the brain cannot group syllables into multisyllabic words, which are basic units for speech meaning. Therefore, the process of converting speech sound into meaning crucially relies on attention. Copyright © 2018 the authors 0270-6474/18/381178-11$15.00/0.
Alards-Tomalin, Doug; Walker, Alexander C; Nepon, Hillary; Leboe-McGowan, Launa C
2017-09-01
In the current study, cross-task interactions between number order and sound intensity judgments were assessed using a dual-task paradigm. Participants first categorized numerical sequences composed of Arabic digits as either ordered (ascending, descending) or non-ordered. Following each number sequence, participants then had to judge the intensity level of a target sound. Experiment 1 emphasized processing the two tasks independently (serial processing), while Experiments 2 and 3 emphasized processing the two tasks simultaneously (parallel processing). Cross-task interference occurred only when the task required parallel processing and was specific to ascending numerical sequences, which led to a higher proportion of louder sound intensity judgments. In Experiment 4 we examined whether this unidirectional interaction was the result of participants misattributing enhanced processing fluency experienced on ascending sequences as indicating a louder target sound. The unidirectional finding could not be entirely attributed to misattributed processing fluency, and may also be connected to experientially derived conceptual associations between ascending number sequences and greater magnitude, consistent with conceptual mapping theory.
Left Lateralized Enhancement of Orofacial Somatosensory Processing Due to Speech Sounds
ERIC Educational Resources Information Center
Ito, Takayuki; Johns, Alexis R.; Ostry, David J.
2013-01-01
Purpose: Somatosensory information associated with speech articulatory movements affects the perception of speech sounds and vice versa, suggesting an intimate linkage between speech production and perception systems. However, it is unclear which cortical processes are involved in the interaction between speech sounds and orofacial somatosensory…
NASA Technical Reports Server (NTRS)
Huerre, P.; Karamcheti, K.
1976-01-01
The theory of sound propagation is examined in a viscous, heat-conducting fluid, initially at rest and in a uniform state, and contained in a rigid, impermeable duct with isothermal walls. Topics covered include: (1) theoretical formulation of the small amplitude fluctuating motions of a viscous, heat-conducting and compressible fluid; (2) sound propagation in a two-dimensional duct; and (3) perturbation study of the in-plane modes.
2015-09-30
soundscapes, and unit of analysis methodology. The study has culminated in a complex analysis of all environmental factors that could be predictors of...regional soundscapes. To build the correlation matrices from ambient sound recordings, the raw data was first converted into a series of sound...sounds. To compare two different soundscape time periods, the correlation matrices for the two periods were then subtracted from each other
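As a rough illustration of the correlation-matrix comparison described in this snippet, the sketch below builds a frequency-band correlation matrix for each of two recording periods from spectrogram levels and subtracts them; the band layout and the use of a plain spectrogram are assumptions, not the study's actual processing chain.

```python
# Rough sketch of the correlation-matrix comparison described above: build a
# frequency-band correlation matrix for each time period from spectrogram
# levels, then subtract the two matrices. The band layout and use of a plain
# spectrogram are illustrative assumptions, not the study's processing chain.
import numpy as np
from scipy.signal import spectrogram

def band_correlation(x, fs):
    f, t, S = spectrogram(x, fs=fs, nperseg=1024)
    levels = 10 * np.log10(S + 1e-12)       # band levels over time (dB)
    return np.corrcoef(levels)              # correlation between frequency bands

fs = 8000
period_a = np.random.randn(fs * 60)         # stand-ins for two ambient sound recordings
period_b = np.random.randn(fs * 60)
diff = band_correlation(period_a, fs) - band_correlation(period_b, fs)   # change between periods
```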
Effects of capacity limits, memory loss, and sound type in change deafness.
Gregg, Melissa K; Irsik, Vanessa C; Snyder, Joel S
2017-11-01
Change deafness, the inability to notice changes to auditory scenes, has the potential to provide insights about sound perception in busy situations typical of everyday life. We determined the extent to which change deafness to sounds is due to the capacity of processing multiple sounds and the loss of memory for sounds over time. We also determined whether these processing limitations work differently for varying types of sounds within a scene. Auditory scenes composed of naturalistic sounds, spectrally dynamic unrecognizable sounds, tones, and noise rhythms were presented in a change-detection task. On each trial, two scenes were presented that were the same or different. We manipulated the number of sounds within each scene to measure memory capacity and the silent interval between scenes to measure memory loss. For all sounds, change detection was worse as scene size increased, demonstrating the importance of capacity limits. Change detection to the natural sounds did not deteriorate much as the interval between scenes increased up to 2,000 ms, but it did deteriorate substantially with longer intervals. For artificial sounds, in contrast, change-detection performance suffered even for very short intervals. The results suggest that change detection is generally limited by capacity, regardless of sound type, but that auditory memory is more enduring for sounds with naturalistic acoustic structures.
Statistics of natural binaural sounds.
Młynarski, Wiktor; Jost, Jürgen
2014-01-01
Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore the statistics of binaural cues depend on the acoustic properties and the spatial configuration of the environment. Distributions of cues encountered naturally and their dependence on physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We found that certain properties such as the spread of IPD distributions as well as the overall shape of ILD distributions do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary across frequency channels much more weakly, and IPDs often attain much higher values, than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.
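The ICA step described above can be illustrated with a minimal sketch: short two-channel windows are extracted from a (here synthetic) binaural recording and independent components are learned with FastICA; the window length, component count, and toy mixture are assumptions, not the authors' exact procedure.

```python
# Minimal sketch of the ICA step described above: learn basis functions from
# short two-channel (binaural) windows with FastICA. The synthetic stereo
# mixture stands in for real binaural recordings; the window length and the
# number of components are illustrative assumptions.
import numpy as np
from sklearn.decomposition import FastICA

fs = 16000
n = fs * 10
src = np.vstack([np.random.laplace(size=n), np.random.laplace(size=n)])  # toy sources
mix = np.array([[0.9, 0.4], [0.3, 0.8]])       # toy left/right mixing matrix
stereo = mix @ src                             # shape (2, n): left- and right-ear signals

win = 256                                      # samples per analysis window
frames = stereo[:, : (n // win) * win].reshape(2, -1, win)
X = frames.transpose(1, 0, 2).reshape(-1, 2 * win)   # each row is one binaural window

ica = FastICA(n_components=20, max_iter=1000, random_state=0)
ica.fit(X)
basis = ica.mixing_                            # columns approximate learned binaural basis functions
```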
The Sound Pattern of Japanese Surnames
ERIC Educational Resources Information Center
Tanaka, Yu
2017-01-01
Compound surnames in Japanese show complex phonological patterns, which pose challenges to current theories of phonology. This dissertation proposes an account of the segmental and prosodic issues in Japanese surnames and discusses their theoretical implications. Like regular compound words, compound surnames may undergo a sound alternation known…
Poppe, L.J.; Knebel, H.J.; Lewis, R.S.; DiGiacomo-Cohen, M. L.
2002-01-01
Sidescan sonar, bathymetric, subbottom, and bottom-photographic surveys and sediment sampling have improved our understanding of the processes that control the complex distribution of bottom sediments and benthic habitats in Long Island Sound. Although the deeper (>20 m) waters of the central Sound are long-term depositional areas characterized by relatively weak bottom-current regimes, our data reveal the localized presence of sedimentary furrows. These erosional bedforms occur in fine-grained cohesive sediments (silts and clayey silts), trend east-northeast, are irregularly spaced, and have indistinct troughs with gently sloping walls. The average width and relief of the furrows is 9.2 m and 0.4 m, respectively. The furrows average about 206 m long, but range in length from 30 m to over 1,300 m. Longitudinal ripples, bioturbation, and nutclam shell debris are common within the furrows. Although many of the furrows appear to end by gradually narrowing, some furrows show a "tuning fork" joining pattern. Most of these junctions open toward the east, indicating net westward sediment transport. However, a few junctions open toward the west suggesting that oscillating tidal currents are the dominant mechanism controlling furrow formation. Sedimentary furrows and longitudinal ripples typically form in environments which have recurring, directionally stable, and occasionally strong currents. The elongate geometry and regional bathymetry of Long Island Sound combine to constrain the dominant tidal and storm currents to east-west flow directions and permit the development of these bedforms. Through resuspension due to biological activity and the subsequent development of erosional bedforms, fine-grained cohesive sediment can be remobilized and made available for transport farther westward into the estuary.
Neural Correlates of Early Sound Encoding and their Relationship to Speech-in-Noise Perception
Coffey, Emily B. J.; Chepesiuk, Alexander M. P.; Herholz, Sibylle C.; Baillet, Sylvain; Zatorre, Robert J.
2017-01-01
Speech-in-noise (SIN) perception is a complex cognitive skill that affects social, vocational, and educational activities. Poor SIN ability particularly affects young and elderly populations, yet varies considerably even among healthy young adults with normal hearing. Although SIN skills are known to be influenced by top-down processes that can selectively enhance lower-level sound representations, the complementary role of feed-forward mechanisms and their relationship to musical training is poorly understood. Using a paradigm that minimizes the main top-down factors that have been implicated in SIN performance such as working memory, we aimed to better understand how robust encoding of periodicity in the auditory system (as measured by the frequency-following response, FFR) contributes to SIN perception. Using magnetoencephalography, we found that the strength of encoding at the fundamental frequency in the brainstem, thalamus, and cortex is correlated with SIN accuracy. The amplitude of the slower cortical P2 wave was previously also shown to be related to SIN accuracy and FFR strength; we use MEG source localization to show that the P2 wave originates in a temporal region anterior to that of the cortical FFR. We also confirm that the observed enhancements were related to the extent and timing of musicianship. These results are consistent with the hypothesis that basic feed-forward sound encoding affects SIN perception by providing better information to later processing stages, and that modifying this process may be one mechanism through which musical training might enhance the auditory networks that subserve both musical and language functions. PMID:28890684
NASA Astrophysics Data System (ADS)
Bichisao, Marta; Stallone, Angela
2017-04-01
Making science visual plays a crucial role in the process of building knowledge. In this view, art can considerably facilitate the representation of the scientific content, by offering a different perspective on how a specific problem could be approached. Here we explore the possibility of presenting the earthquake process through visual dance. From a choreographer's point of view, the focus is always on the dynamic relationships between moving objects. The observed spatial patterns (coincidences, repetitions, double and rhythmic configurations) suggest how objects organize themselves in the environment and what are the principles underlying that organization. The identified set of rules is then implemented as a basis for the creation of a complex rhythmic and visual dance system. Recently, scientists have turned seismic waves into sound and animations, introducing the possibility of "feeling" the earthquakes. We try to implement these results into a choreographic model with the aim to convert earthquake sound to a visual dance system, which could return a transmedia representation of the earthquake process. In particular, we focus on a possible method to translate and transfer the metric language of seismic sound and animations into body language. The objective is to involve the audience into a multisensory exploration of the earthquake phenomenon, through the stimulation of the hearing, eyesight and perception of the movements (neuromotor system). In essence, the main goal of this work is to develop a method for a simultaneous visual and auditory representation of a seismic event by means of a structured choreographic model. This artistic representation could provide an original entryway into the physics of earthquakes.
Directional Acoustic Wave Manipulation by a Porpoise via Multiphase Forehead Structure
NASA Astrophysics Data System (ADS)
Zhang, Yu; Song, Zhongchang; Wang, Xianyan; Cao, Wenwu; Au, Whitlow W. L.
2017-12-01
Porpoises are small toothed whales, and they can produce directional acoustic waves to detect and track prey with high resolution and a wide field of view. Their sound-source sizes are rather small in comparison with the wavelength so that beam control should be difficult according to textbook sonar theories. Here, we demonstrate that the multiphase material structure in a porpoise's forehead is the key to manipulating the directional acoustic field. Computed tomography (CT) derives the multiphase (bone-air-tissue) complex, tissue experiments obtain the density and sound-velocity multiphase gradient distributions, and acoustic fields and beam formation are numerically simulated. The results suggest the control of wave propagations and sound-beam formations is realized by cooperation of the whole forehead's tissues and structures. The melon size significantly impacts the side lobes of the beam and slightly influences the main beams, while the orientation of the vestibular sac mainly adjusts the main beams. By compressing the forehead complex, the sound beam can be expanded for near view. The porpoise's biosonar allows effective wave manipulations for its omnidirectional sound source, which can help the future development of miniaturized biomimetic projectors in underwater sonar, medical ultrasonography, and other ultrasonic imaging applications.
The PAC-MAN model: Benchmark case for linear acoustics in computational physics
NASA Astrophysics Data System (ADS)
Ziegelwanger, Harald; Reiter, Paul
2017-10-01
Benchmark cases in the field of computational physics, on the one hand, have to contain a certain complexity to test numerical edge cases and, on the other hand, require the existence of an analytical solution, because an analytical solution allows the exact quantification of the accuracy of a numerical simulation method. This dilemma causes a need for analytical sound field formulations of complex acoustic problems. A well known example for such a benchmark case for harmonic linear acoustics is the "Cat's Eye model", which describes the three-dimensional sound field radiated from a sphere with a missing octant analytically. In this paper, a benchmark case for two-dimensional (2D) harmonic linear acoustic problems, viz., the "PAC-MAN model", is proposed. The PAC-MAN model describes the radiated and scattered sound field around an infinitely long cylinder with a cut out sector of variable angular width. While the analytical calculation of the 2D sound field allows different angular cut-out widths and arbitrarily positioned line sources, the computational cost associated with the solution of this problem is similar to a 1D problem because of a modal formulation of the sound field in the PAC-MAN model.
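The modal formulation mentioned above amounts to expanding the 2D pressure field in cylindrical harmonics. As a hedged illustration, a generic exterior expansion of this kind (not the paper's exact PAC-MAN solution, which must additionally satisfy the boundary conditions on the faces of the cut-out sector) reads:

```latex
% Generic 2D modal expansion of the kind the PAC-MAN model builds on
% (illustrative only; the paper's formulation also enforces the boundary
% conditions on the faces of the cut-out sector). Exterior pressure field
% around a cylinder of radius a, with wavenumber k = \omega / c and time
% convention e^{-i\omega t}:
\[
  p(r,\theta) \;=\; \sum_{n=0}^{\infty}
      \bigl[\, A_n\, J_n(kr) + B_n\, H_n^{(1)}(kr) \,\bigr]
      \cos\!\bigl(n(\theta - \theta_0)\bigr), \qquad r \ge a,
\]
% where J_n and H_n^{(1)} are the Bessel and Hankel functions of order n, and
% the coefficients A_n, B_n follow from the line-source position and the
% boundary conditions.
```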
Ultrasonic Time Reversal Mirrors
NASA Astrophysics Data System (ADS)
Fink, Mathias; Montaldo, Gabriel; Tanter, Mickael
2004-11-01
For more than ten years, time reversal techniques have been developed in many different fields of application, including detection of defects in solids, underwater acoustics, room acoustics, and ultrasound medical imaging and therapy. The essential property that makes time reversed acoustics possible is that the underlying physical process of wave propagation would be unchanged if time were reversed. In a non-dissipative medium, the equations governing the waves guarantee that for every burst of sound that diverges from a source there exists in theory a set of waves that would precisely retrace the path of the sound back to the source. If the source is pointlike, this allows focusing back on the source whatever the complexity of the medium. For this reason, time reversal represents a very powerful adaptive focusing technique for complex media. The generation of this reconverging wave can be achieved by using Time Reversal Mirrors (TRM). A TRM is made of arrays of reversible ultrasonic piezoelectric transducers that can record the wavefield coming from the sources and send back its time-reversed version into the medium. It relies on the use of fully programmable multi-channel electronics. In this paper we present some applications of iterative time reversal mirrors to target detection in medical settings.
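A minimal free-space sketch of the time-reversal focusing principle described above is given below: an array records a pulse from a point source, each element re-emits its recording reversed in time, and the coherent sum of the re-emissions peaks back at the source position; the geometry, pulse, and element count are illustrative assumptions.

```python
# Minimal free-space time-reversal sketch: an array records a pulse radiated by
# a point source, each element re-emits its recording reversed in time, and the
# coherent sum of the re-emissions peaks back at the source position. Geometry,
# pulse shape, and element count are illustrative assumptions.
import numpy as np

c, fs = 1500.0, 200_000                        # sound speed (m/s), sampling rate (Hz)
t = np.arange(0, 2e-3, 1 / fs)
pulse = np.sin(2 * np.pi * 50e3 * t) * np.exp(-((t - 2e-4) * 2e4) ** 2)   # short burst

src = np.array([0.0, 0.0])                     # true source position (m)
elems = np.stack([np.full(32, 0.5), np.linspace(-0.2, 0.2, 32)], axis=1)  # 32-element mirror

tau = np.linalg.norm(elems - src, axis=1) / c  # one-way travel times, source -> elements

def refocused_peak(p):
    """Peak of the summed time-reversed re-emissions observed at point p."""
    tau_p = np.linalg.norm(elems - p, axis=1) / c
    total = np.zeros_like(t)
    for ti, tpi in zip(tau, tau_p):
        shift = int(round((tpi - ti) * fs))    # net delay of this element's contribution at p
        total += np.roll(pulse[::-1], shift)   # recording reversed in time, delayed to p
    return np.abs(total).max()

ys = np.linspace(-0.1, 0.1, 41)
focus = [refocused_peak(np.array([0.0, y])) for y in ys]   # maximal near y = 0, the source
```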
Light-induced vibration in the hearing organ
Ren, Tianying; He, Wenxuan; Li, Yizeng; Grosh, Karl; Fridberger, Anders
2014-01-01
The exceptional sensitivity of mammalian hearing organs is attributed to an active process, where force produced by sensory cells boosts sound-induced vibrations, making soft sounds audible. This process is thought to be local, with each section of the hearing organ capable of amplifying sound-evoked movement, and nearly instantaneous, since amplification can work for sounds at frequencies up to 100 kHz in some species. To test these fundamental precepts, we developed a method for focally stimulating the living hearing organ with light. Light pulses caused intense and highly damped mechanical responses followed by traveling waves that developed with considerable delay. The delayed response was identical to movements evoked by click-like sounds. This shows that the active process is neither local nor instantaneous, but requires mechanical waves traveling from the cochlear base toward its apex. A physiologically-based mathematical model shows that such waves engage the active process, enhancing hearing sensitivity. PMID:25087606
NASA Astrophysics Data System (ADS)
Su, Guoshao; Shi, Yanjiong; Feng, Xiating; Jiang, Jianqing; Zhang, Jie; Jiang, Quan
2018-02-01
Rockbursts are markedly characterized by the ejection of rock fragments from host rocks at certain speeds. The rockburst process is always accompanied by acoustic signals that include acoustic emissions (AE) and sounds. A deep insight into the evolutionary features of AE and sound signals is important to improve the accuracy of rockburst prediction. To investigate the evolutionary features of AE and sound signals, rockburst tests on granite rock specimens under true-triaxial loading conditions were performed using an improved rockburst testing system, and the AE and sounds during rockburst development were recorded and analyzed. The results show that the evolutionary features of the AE and sound signals were obvious and similar. On the eve of a rockburst, a `quiescent period' could be observed in both the evolutionary process of the AE hits and the sound waveform. Furthermore, the time-dependent fractal dimensions of the AE hits and sound amplitude both showed a tendency to continuously decrease on the eve of the rockbursts. In addition, on the eve of the rockbursts, the main frequency of the AE and sound signals both showed decreasing trends, and the frequency spectrum distributions were both characterized by low amplitudes, wide frequency bands and multiple peak shapes. Thus, the evolutionary features of sound signals on the eve of rockbursts, as well as that of AE signals, can be used as beneficial information for rockburst prediction.
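The time-dependent fractal dimension used above can be illustrated, under assumptions, by a sliding-window correlation-dimension estimate over a sequence of AE hit amplitudes; the Grassberger-Procaccia estimator, embedding settings, window length, and stand-in data below are not the authors' exact procedure.

```python
# Sliding-window correlation (fractal) dimension sketch of the kind of
# time-dependent measure described above for AE hit sequences. The
# Grassberger-Procaccia estimator, embedding settings, window length, and
# stand-in data are illustrative assumptions, not the authors' exact procedure.
import numpy as np

def correlation_dimension(x, emb_dim=4, lag=1):
    """Estimate the correlation dimension of a 1D series (Grassberger-Procaccia)."""
    n = len(x) - (emb_dim - 1) * lag
    emb = np.stack([x[i * lag : i * lag + n] for i in range(emb_dim)], axis=1)
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]                       # pairwise distances between delay vectors
    radii = np.logspace(np.log10(np.percentile(d, 5)), np.log10(np.percentile(d, 60)), 10)
    corr = np.array([np.mean(d < r) for r in radii])     # correlation integral C(r)
    slope, _ = np.polyfit(np.log(radii), np.log(corr + 1e-12), 1)
    return slope                                         # scaling exponent ~ fractal dimension

amps = np.abs(np.random.randn(3000))     # stand-in for a sequence of AE hit amplitudes
window, step = 500, 100
dims = [correlation_dimension(amps[i : i + window])
        for i in range(0, len(amps) - window, step)]     # time-dependent fractal dimension
```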
Interactive physically-based sound simulation
NASA Astrophysics Data System (ADS)
Raghuvanshi, Nikunj
The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically-based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further by using a combination of algorithmic improvements that exploit the physical, as well as perceptual properties of audible sounds. My thesis is a step in this direction. My dissertation concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes ten times less memory than a high-accuracy finite-difference technique, allowing acoustic simulations on previously-intractable spaces, such as a cathedral, on a desktop computer. Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation behind obstructions, reverberation, scattering from complex geometry and sound focusing. This is enabled by a novel compact representation that takes a thousand times less memory than a direct scheme, thus reducing memory footprints to fit within available main memory. To the best of my knowledge, this is the only technique and system in existence to demonstrate auralization of physical wave-based effects in real-time on large, complex 3D scenes.
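As a minimal illustration of the kind of physically-based sound synthesis described above, the sketch below generates an impact sound as a bank of damped sinusoidal modes; the modal frequencies, dampings, and gains are invented for illustration and are not taken from the dissertation.

```python
# Minimal modal-synthesis sketch of impact sound generation: the struck
# object's vibration is modelled as a bank of damped sinusoidal modes excited
# by a collision impulse. Mode frequencies, dampings, and gains are invented
# for illustration and are not taken from the dissertation.
import numpy as np

fs = 44100
t = np.arange(0, 1.0, 1 / fs)

freqs = np.array([520.0, 1310.0, 2280.0, 3550.0])   # modal frequencies (Hz)
decays = np.array([6.0, 9.0, 14.0, 20.0])           # damping rates (1/s)
gains = np.array([1.0, 0.6, 0.35, 0.2])             # coupling of the impact into each mode

impact_strength = 0.8
sound = impact_strength * np.sum(
    gains[:, None] * np.exp(-decays[:, None] * t) * np.sin(2 * np.pi * freqs[:, None] * t),
    axis=0)
sound /= np.abs(sound).max()                         # normalise for playback
```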
NASA Astrophysics Data System (ADS)
Martellotta, Francesco; Álvarez-Morales, Lidia; Girón, Sara; Zamarreño, Teófilo
2018-05-01
Multi-rate sound decays are often found and studied in complex systems of coupled volumes where diffuse field conditions generally apply, although the openings connecting different sub-spaces are by themselves potential causes of non-diffuse behaviour. However, in the presence of spaces in which curved surfaces clearly prevent diffuse field behaviour from being established, things become more complex and require more sophisticated tools (or, better, combinations of them) to be fully understood. As an example of such complexity, the crypt of the Cathedral of Cadiz is a relatively small space characterised by a central vaulted rotunda, with five radial galleries with flat and low ceilings. In addition, the crypt is connected to the main cathedral volume by means of several small openings. Acoustic measurements carried out in the crypt pointed out the existence of at least two decay processes combined, in some points, with flutter echoes. Application of conventional methods of analysis pointed out the existence of significant differences between early decay time and reverberation time, but was inconclusive in explaining the origin of the observed phenomena. The use of more robust Bayesian analysis permitted the conclusion that the late decay appearing in the crypt had a different rate than that observed in the cathedral, thus excluding the explanation based on acoustic coupling of different volumes. Finally, processing impulse responses collected by means of a B-format microphone to obtain directional intensity maps demonstrated that the late decay originated from the rotunda, where a repetitive reflection pattern appeared between the floor and the dome, causing both flutter echoes and a longer reverberation time.
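A minimal sketch of multi-rate decay analysis is given below: an impulse response is backward-integrated into a Schroeder decay curve and a two-exponential model is fitted to expose two decay rates. A plain least-squares fit stands in for the Bayesian analysis used in the paper, and the synthetic impulse response is an assumption.

```python
# Minimal multi-rate decay-analysis sketch: backward-integrate an impulse
# response into a Schroeder decay curve, then fit a two-exponential model to
# expose two decay rates. A plain least-squares fit stands in for the Bayesian
# analysis used in the paper; the synthetic impulse response is illustrative.
import numpy as np
from scipy.optimize import curve_fit

fs = 8000
t = np.arange(0, 3.0, 1 / fs)
# synthetic double-slope response: strong early decay (~1.0 s) plus a weak late tail (~2.5 s)
ir = (np.exp(-6.91 * t / 1.0) + 0.05 * np.exp(-6.91 * t / 2.5)) * np.random.randn(t.size)

edc = np.cumsum(ir[::-1] ** 2)[::-1]           # Schroeder backward integration
edc_db = 10 * np.log10(edc / edc[0])           # energy decay curve in dB

def two_slope(t, a1, T1, a2, T2):
    # energy of two exponential decays with reverberation times T1 and T2
    return 10 * np.log10(a1 * np.exp(-13.82 * t / T1) + a2 * np.exp(-13.82 * t / T2))

popt, _ = curve_fit(two_slope, t, edc_db, p0=[1.0, 0.8, 0.01, 2.0], bounds=(1e-6, np.inf))
a1, T1, a2, T2 = popt                          # fitted weights and the two decay times (s)
```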
Steerable sound transport in a 3D acoustic network
NASA Astrophysics Data System (ADS)
Xia, Bai-Zhan; Jiao, Jun-Rui; Dai, Hong-Qing; Yin, Sheng-Wen; Zheng, Sheng-Jie; Liu, Ting-Ting; Chen, Ning; Yu, De-Jie
2017-10-01
Quasi-lossless and asymmetric sound transports, which are exceedingly desirable in various modern physical systems, are almost always based on nonlinear or angular momentum biasing effects with extremely high power levels and complex modulation schemes. A practical route for steerable sound transport along any arbitrary acoustic pathway, especially in a three-dimensional (3D) acoustic network, could revolutionize sound power propagation and sound communication. Here, we design an acoustic device containing a regular-tetrahedral cavity with four cylindrical waveguides. A smaller regular-tetrahedral solid in this cavity is eccentrically placed to break the spatial symmetry of the acoustic device. The numerical and experimental results show that the sound power flow can be transported unimpeded between two waveguides away from the eccentric solid within a wide frequency range. Based on the quasi-lossless and asymmetric transport characteristic of the single acoustic device, we construct a 3D acoustic network, in which the sound power flow can flexibly propagate along arbitrary sound pathways defined by our acoustic devices with eccentrically placed regular-tetrahedral solids.
Prediction of far-field wind turbine noise propagation with parabolic equation.
Lee, Seongkyu; Lee, Dongjai; Honhoff, Saskia
2016-08-01
Sound propagation of wind farms is typically simulated with engineering tools that neglect some atmospheric conditions and terrain effects. Wind and temperature profiles, however, can affect the propagation of sound and thus the perceived sound in the far field. A better understanding and application of those effects would allow more optimized farm operation towards meeting noise regulations and optimizing energy yield. This paper presents the parabolic equation (PE) model development for accurate wind turbine noise propagation. The model is validated against analytic solutions for a uniform sound speed profile, benchmark problems for nonuniform sound speed profiles, and field sound test data for real environmental acoustics. It is shown that the PE model provides good agreement with the measured data, except for upwind propagation cases in which turbulence scattering is important. Finally, the PE model uses computational fluid dynamics results as input to accurately predict sound propagation for complex flows such as wake flows. It is demonstrated that wake flows significantly modify the sound propagation characteristics.
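For illustration, the sketch below marches a 2D acoustic field in range with a textbook split-step Fourier parabolic-equation step under the narrow-angle approximation; the sound-speed profile, grid, and Gaussian starter field are assumptions, and this is not the authors' PE implementation.

```python
# Minimal split-step Fourier parabolic-equation (PE) sketch: march a 2D
# acoustic field in range under the narrow-angle approximation for a given
# sound-speed profile. This is a textbook PE step, not the authors' model;
# the profile, grid, and Gaussian starter field are illustrative assumptions.
import numpy as np

f, c0 = 100.0, 340.0                          # frequency (Hz), reference sound speed (m/s)
k0 = 2 * np.pi * f / c0
nz, dz, dr, nr = 512, 1.0, 5.0, 400           # vertical grid, range step (m), number of steps

z = np.arange(nz) * dz
c = c0 + 0.1 * z                              # illustrative sound-speed profile c(z)
n2 = (c0 / c) ** 2                            # squared index of refraction

kz = 2 * np.pi * np.fft.fftfreq(nz, d=dz)
diffraction = np.exp(-1j * kz**2 * dr / (2 * k0))            # free-space operator over one step
refraction_half = np.exp(1j * (k0 / 2) * (n2 - 1) * dr / 2)  # half-step medium operator

psi = np.exp(-(((z - 50.0) / 10.0) ** 2)).astype(complex)    # Gaussian starter centred at 50 m

field = []
for _ in range(nr):                           # symmetric split-step marching in range
    psi = refraction_half * np.fft.ifft(diffraction * np.fft.fft(refraction_half * psi))
    field.append(np.abs(psi))
field = np.array(field)                       # |psi| as a function of (range step, height)
```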
Encoding of sound envelope transients in the auditory cortex of juvenile rats and adult rats.
Lu, Qi; Jiang, Cuiping; Zhang, Jiping
2016-02-01
Accurate neural processing of time-varying sound amplitude and spectral information is vital for species-specific communication. During postnatal development, cortical processing of sound frequency undergoes progressive refinement; however, it is not clear whether cortical processing of sound envelope transients also undergoes age-related changes. We determined the dependence of neural response strength and first-spike latency on sound rise-fall time across sound levels in the primary auditory cortex (A1) of juvenile (P20-P30) rats and adult (8-10 weeks) rats. A1 neurons were categorized as "all-pass", "short-pass", or "mixed" ("all-pass" at high sound levels to "short-pass" at lower sound levels) based on the normalized response strength vs. rise-fall time functions across sound levels. The proportions of A1 neurons within each of the three categories in juvenile rats were similar to those in adult rats. In general, with increasing rise-fall time, the average response strength decreased and the average first-spike latency increased in A1 neurons of both groups. At a given sound level and rise-fall time, the average normalized neural response strength did not differ significantly between the two age groups. However, the A1 neurons in juvenile rats showed greater absolute response strength and longer first-spike latency compared to those in adult rats. In addition, at a constant sound level, the average first-spike latency of juvenile A1 neurons was more sensitive to changes in rise-fall time. Our results demonstrate the dependence of the responses of rat A1 neurons on sound rise-fall time, and suggest that the response latency exhibits some age-related changes in the cortical representation of sound envelope rise time. Copyright © 2015 Elsevier Ltd. All rights reserved.
Inversion of 2-D DC resistivity data using rapid optimization and minimal complexity neural network
NASA Astrophysics Data System (ADS)
Singh, U. K.; Tiwari, R. K.; Singh, S. B.
2010-02-01
The backpropagation (BP) artificial neural network (ANN) optimization technique, based on the steepest descent algorithm, is known for its poor performance and does not ensure global convergence. Nonlinear and complex DC resistivity data require an efficient ANN model and more intensive optimization procedures for better results and interpretations. Improvements in the computational ANN modeling process are described with the goals of enhancing the optimization process and reducing ANN model complexity. Well-established optimization methods, such as the Radial basis algorithm (RBA) and the Levenberg-Marquardt algorithm (LMA), have frequently been used to deal with complexity and nonlinearity in such complex geophysical records. We examined here the efficiency of trained LMA and RB networks by using 2-D synthetic resistivity data and then applied them to actual field vertical electrical resistivity sounding (VES) data collected from the Puga Valley, Jammu and Kashmir, India. The resulting ANN resistivity reconstructions are compared with the results of existing inversion approaches, and are in good agreement. The depths and resistivity structures obtained by the ANN methods also correlate well with the known drilling results and geologic boundaries. The application of the above ANN algorithms proves to be robust and could be used for fast estimation of resistive structures for other complex earth models as well.
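The neural-network inversion idea can be sketched, under assumptions, by training a network on synthetic apparent-resistivity sounding curves and predicting layer parameters from them; scikit-learn's MLPRegressor stands in for the RBA/LMA-trained networks of the paper, and the crude two-layer forward model below is only a placeholder for a proper VES forward computation.

```python
# Minimal sketch of neural-network inversion of resistivity soundings: train a
# network on synthetic apparent-resistivity curves and use it to predict layer
# parameters. MLPRegressor stands in for the paper's RBA/LMA-trained networks,
# and the crude two-layer forward model is only a placeholder for a VES kernel.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
spacings = np.logspace(0, 3, 20)                        # AB/2 electrode spacings (m)

def crude_forward(rho1, rho2, h):
    """Placeholder two-layer apparent-resistivity curve (not a true VES kernel)."""
    w = spacings**2 / (spacings**2 + h**2)
    return rho1 + (rho2 - rho1) * w

params = np.column_stack([rng.uniform(10, 200, 2000),   # rho1 (ohm-m)
                          rng.uniform(10, 200, 2000),   # rho2 (ohm-m)
                          rng.uniform(5, 100, 2000)])   # thickness h (m)
curves = np.array([crude_forward(*p) for p in params])

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(np.log10(curves), params)                     # learn curve -> layer parameters

estimate = model.predict(np.log10(crude_forward(50.0, 150.0, 30.0)[None, :]))  # test inversion
```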
Donkers, Franc C.L.; Schipul, Sarah E.; Baranek, Grace T.; Cleary, Katherine M.; Willoughby, Michael T.; Evans, Anna M.; Bulluck, John C.; Lovmo, Jeanne E.; Belger, Aysenil
2015-01-01
Neurobiological underpinnings of unusual sensory features in individuals with autism are unknown. Event-related potentials (ERPs) elicited by task-irrelevant sounds were used to elucidate neural correlates of auditory processing and associations with three common sensory response patterns (hyperresponsiveness; hyporesponsiveness; sensory seeking). Twenty-eight children with autism and 39 typically developing children (4–12 year-olds) completed an auditory oddball paradigm. Results revealed marginally attenuated P1 and N2 to standard tones and attenuated P3a to novel sounds in autism versus controls. Exploratory analyses suggested that within the autism group, attenuated N2 and P3a amplitudes were associated with greater sensory seeking behaviors for specific ranges of P1 responses. Findings suggest that attenuated early sensory as well as later attention-orienting neural responses to stimuli may underlie selective sensory features via complex mechanisms. PMID:24072639
From electromyographic activity to frequency modulation in zebra finch song.
Döppler, Juan F; Bush, Alan; Goller, Franz; Mindlin, Gabriel B
2018-02-01
Behavior emerges from the interaction between the nervous system and peripheral devices. In the case of birdsong production, a delicate and fast control of several muscles is required to control the configuration of the syrinx (the avian vocal organ) and the respiratory system. In particular, the syringealis ventralis muscle is involved in the control of the tension of the vibrating labia and thus affects the frequency modulation of the sound. Nevertheless, the translation of the instructions (which are electrical in nature) into acoustical features is complex and involves nonlinear, dynamical processes. In this work, we present a model of the dynamics of the syringealis ventralis muscle and the labia, which allows calculating the frequency of the generated sound, using as input the electrical activity recorded in the muscle. In addition, the model provides a framework to interpret inter-syllabic activity and hints at the importance of the biomechanical dynamics in determining behavior.
Multi-sensory learning and learning to read.
Blomert, Leo; Froyen, Dries
2010-09-01
The basis of literacy acquisition in alphabetic orthographies is the learning of the associations between the letters and the corresponding speech sounds. In spite of this primacy in learning to read, there is only scarce knowledge on how this audiovisual integration process works and which mechanisms are involved. Recent electrophysiological studies of letter-speech sound processing have revealed that normally developing readers take years to automate these associations and dyslexic readers hardly exhibit automation of these associations. It is argued that the reason for this effortful learning may reside in the nature of the audiovisual process that is recruited for the integration of in principle arbitrarily linked elements. It is shown that letter-speech sound integration does not resemble the processes involved in the integration of natural audiovisual objects such as audiovisual speech. The automatic symmetrical recruitment of the assumedly uni-sensory visual and auditory cortices in audiovisual speech integration does not occur for letter and speech sound integration. It is also argued that letter-speech sound integration only partly resembles the integration of arbitrarily linked unfamiliar audiovisual objects. Letter-sound integration and artificial audiovisual objects share the necessity of a narrow time window for integration to occur. However, they differ from these artificial objects, because they constitute an integration of partly familiar elements which acquire meaning through the learning of an orthography. Although letter-speech sound pairs share similarities with audiovisual speech processing as well as with unfamiliar, arbitrary objects, it seems that letter-speech sound pairs develop into unique audiovisual objects that furthermore have to be processed in a unique way in order to enable fluent reading and thus very likely recruit other neurobiological learning mechanisms than the ones involved in learning natural or arbitrary unfamiliar audiovisual associations. Copyright 2010 Elsevier B.V. All rights reserved.
Letter-Sound Reading: Teaching Preschool Children Print-to-Sound Processing
2015-01-01
This intervention study investigated the growth of letter sound reading and growth of consonant–vowel–consonant (CVC) word decoding abilities for a representative sample of 41 US children in preschool settings. Specifically, the study evaluated the effectiveness of a 3-step letter-sound teaching intervention in teaching pre-school children to decode, or read, single letters. The study compared a control group, which received the preschool’s standard letter-sound instruction, to an intervention group which received a 3-step letter-sound instruction intervention. The children’s growth in letter-sound reading and CVC word decoding abilities were assessed at baseline and 2, 4, 6 and 8 weeks. When compared to the control group, the growth of letter-sound reading ability was slightly higher for the intervention group. The rate of increase in letter-sound reading was significantly faster for the intervention group. In both groups, too few children learned to decode any CVC words to allow for analysis. Results of this study support the use of the intervention strategy in preschools for teaching children print-to-sound processing. PMID:26839494
Evidence for simultaneous sound production in the bowhead whale (Balaena mysticetus).
Tervo, Outi M; Christoffersen, Mads Fage; Parks, Susan E; Kristensen, Reinhardt Møbjerg; Madsen, Peter Teglberg
2011-10-01
Simultaneous production of two harmonically independent sounds, the two-voice phenomenon, is a well-known feature in bird song. Some toothed whales can click and whistle simultaneously, and a few studies have also reported simultaneous sound production by baleen whales. The mechanism for sound production in toothed whales has been largely uncovered within the last three decades, whereas the mechanism for sound production in baleen whales remains poorly understood. This study provides three lines of evidence from recordings made in 2008 and 2009 in Disko Bay, Western Greenland, strongly indicating that bowhead whales are capable of simultaneous dual-frequency sound production. This capability may function to enable more complex singing in an acoustically mediated reproductive advertisement display, as has been suggested for songbirds, and/or have significance in individual recognition. © 2011 Acoustical Society of America
An Aquatic Acoustic Metrics Interface Utility for Underwater Sound Monitoring and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Huiying; Halvorsen, Michele B.; Deng, Zhiqun
Fishes and marine mammals suffer a range of potential effects from intense sound sources generated by anthropogenic underwater processes such as pile driving, shipping, sonars, and underwater blasting. Several underwater sound recording devices (USRs) have been built to monitor the acoustic pressure waves generated by these anthropogenic underwater activities, so relevant processing software is indispensable for analyzing the audio files recorded by these USRs. However, existing software packages did not meet performance and flexibility requirements. In this paper, we provide a detailed description of a new software package, named Aquatic Acoustic Metrics Interface (AAMI), which is a Graphical User Interface (GUI) designed for underwater sound monitoring and analysis. In addition to the general functions, such as loading and editing audio files recorded by USRs, the software can compute a series of acoustic metrics in physical units, monitor the sound's influence on fish hearing according to audiograms from different species of fishes and marine mammals, and batch process the sound files. The detailed applications of the software AAMI will be discussed along with several test case scenarios to illustrate its functionality.
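For orientation, the kind of calibrated metric such software reports can be sketched as follows. This is not the AAMI code; the calibration sensitivity is a made-up example.

```python
import numpy as np

def underwater_metrics(samples_volts, sensitivity_v_per_upa, fs):
    """Return RMS sound pressure level (SPL) and sound exposure level (SEL)
    in dB re 1 uPa for a calibrated recording."""
    pressure_upa = samples_volts / sensitivity_v_per_upa       # volts -> micropascals
    rms = np.sqrt(np.mean(pressure_upa ** 2))
    spl_db = 20 * np.log10(rms)                                 # dB re 1 uPa (rms)
    sel_db = 10 * np.log10(np.sum(pressure_upa ** 2) / fs)      # dB re 1 uPa^2 * s
    return spl_db, sel_db

# Example with an assumed hydrophone sensitivity of 1e-6 V per uPa at fs = 48 kHz
spl, sel = underwater_metrics(np.random.randn(48000) * 0.01, 1e-6, 48000)
```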
[A method of synthesizing cicada sound for treatment of tinnitus].
Wang, Yangjing; He, Peiyu; Pan, Fan; Cui, Tao; Wang, Haiyan
2013-06-01
Masking therapy can help patients become accustomed to tinnitus. This therapy is safe and easy to implement, so it has become a widely used treatment for tinnitus. According to surveys of tinnitus sounds, cicada sound is one of the most common forms of tinnitus. However, we have not found published papers concerning how to synthesize cicada sound and use it to ameliorate tinnitus. Inspired by human acoustics theory, we proposed a method to synthesize medical masking sound and to realize its diversity by illustrating the process of synthesizing various cicada sounds. In addition, the energy attenuation problem in the spectrum-shifting process has been successfully solved. Simulation results indicated that the proposed method achieved decent results and would have practical value for future applications.
Auscultation of the lung: past lessons, future possibilities.
Murphy, R L
1981-01-01
Review of the history of auscultation of the lung reveals few scientific investigations. The majority of these have led to inconclusive results. The mechanism of production of normal breath sounds remains uncertain. Hypotheses for the generation of adventitious sounds are unproven. Advances in instrumentation for lung sound recording and analysis have provided little of clinical value. There has been a recent resurgence of interest in lung sounds. Space-age technology has improved methodology for sonic analysis significantly. Lung sounds are complex signals that probably reflect regional lung pathophysiology. If they were understood more clearly, important non-invasive diagnostic tools could be devised and the value of clinical auscultation could be improved. A multidisciplinary effort will be required to achieve this. PMID:7268687
ERIC Educational Resources Information Center
Ruscello, Dennis M.; Douglas, Cara; Tyson, Tabitha; Durkee, Mark
2005-01-01
A young child with macroglossia of unknown cause was seen for treatment to modify resting tongue posture and improve speech sound production. Evaluation of the treatments indicated positive change in resting tongue posture and a modest change in speech sound production. Treatment for such patients can be complex and must consider orthodontic…
Handbook of Super 8 Production.
ERIC Educational Resources Information Center
Telzer, Ronnie, Ed.
This handbook is designed for anyone interested in producing super 8 films at any level of complexity and cost. Separate chapters present detailed discussions of the following topics: super 8 production systems and super 8 shooting and editing systems; budgeting; cinematography and sound recording; preparing to edit; editing; mixing sound tracks;…
NASA Astrophysics Data System (ADS)
Pec, Michał; Bujacz, Michał; Strumiłło, Paweł
2008-01-01
The use of Head Related Transfer Functions (HRTFs) in audio processing is a popular method of obtaining spatialized sound. HRTFs describe disturbances caused in the sound wave by the human body, especially by the head and the ear pinnae. Since these shapes are unique, HRTFs differ greatly from person to person. For this reason, measurement of personalized HRTFs is justified. Measured HRTFs also need further processing to be utilized in a system producing spatialized sound. This paper describes a system designed for efficient collection of Head Related Transfer Functions, as well as the measurement, interpolation and verification procedures.
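Once personalized HRTFs are measured and interpolated, their end use is typically a convolution of a mono source with the left- and right-ear head-related impulse responses (HRIRs). A hedged sketch of that final step, assuming HRIR data are already available as arrays:

```python
import numpy as np
from scipy.signal import fftconvolve

def spatialize(mono, hrir_left, hrir_right):
    """Convolve a mono signal with a measured HRIR pair to obtain a binaural signal."""
    left = fftconvolve(mono, hrir_left, mode="full")
    right = fftconvolve(mono, hrir_right, mode="full")
    return np.stack([left, right], axis=1)   # (samples, 2) binaural output
```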
Determining the locus of a processing zone in an in situ oil shale retort by sound monitoring
Elkington, W. Brice
1978-01-01
The locus of a processing zone advancing through a fragmented permeable mass of particles in an in situ oil shale retort in a subterranean formation containing oil shale is determined by monitoring for sound produced in the retort, preferably by monitoring for sound at at least two locations in a plane substantially normal to the direction of advancement of the processing zone. Monitoring can be effected by placing a sound transducer in a well extending through the formation adjacent the retort and/or in the fragmented mass such as in a well extending into the fragmented mass.
Navigating the auditory scene: an expert role for the hippocampus.
Teki, Sundeep; Kumar, Sukhbinder; von Kriegstein, Katharina; Stewart, Lauren; Lyness, C Rebecca; Moore, Brian C J; Capleton, Brian; Griffiths, Timothy D
2012-08-29
Over a typical career piano tuners spend tens of thousands of hours exploring a specialized acoustic environment. Tuning requires accurate perception and adjustment of beats in two-note chords that serve as a navigational device to move between points in previously learned acoustic scenes. It is a two-stage process that depends on the following: first, selective listening to beats within frequency windows, and, second, the subsequent use of those beats to navigate through a complex soundscape. The neuroanatomical substrates underlying brain specialization for such fundamental organization of sound scenes are unknown. Here, we demonstrate that professional piano tuners are significantly better than controls matched for age and musical ability on a psychophysical task simulating active listening to beats within frequency windows that is based on amplitude modulation rate discrimination. Tuners show a categorical increase in gray matter volume in the right frontal operculum and right superior temporal lobe. Tuners also show a striking enhancement of gray matter volume in the anterior hippocampus, parahippocampal gyrus, and superior temporal gyrus, and an increase in white matter volume in the posterior hippocampus as a function of years of tuning experience. The relationship with gray matter volume is sensitive to years of tuning experience and starting age but not actual age or level of musicality. Our findings support a role for a core set of regions in the hippocampus and superior temporal cortex in skilled exploration of complex sound scenes in which precise sound "templates" are encoded and consolidated into memory over time in an experience-dependent manner.
Harper, Nicol S; Schoppe, Oliver; Willmore, Ben D B; Cui, Zhanfeng; Schnupp, Jan W H; King, Andrew J
2016-11-01
Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1-7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context.
Plasticity in the adult human auditory brainstem following short-term linguistic training
Song, Judy H.; Skoe, Erika; Wong, Patrick C. M.; Kraus, Nina
2009-01-01
Peripheral and central structures along the auditory pathway contribute to speech processing and learning. However, because speech requires the use of functionally and acoustically complex sounds, which necessitates high sensory and cognitive demands, long-term exposure to and experience with these sounds is often attributed to the neocortex, with little emphasis placed on subcortical structures. The present study examines changes in the auditory brainstem, specifically the frequency following response (FFR), as native English-speaking adults learn to incorporate foreign speech sounds (lexical pitch patterns) in word identification. The FFR presumably originates from the auditory midbrain and can be elicited pre-attentively. We measured FFRs to the trained pitch patterns before and after training. Measures of pitch-tracking were then derived from the FFR signals. We found increased accuracy in pitch-tracking after training, including a decrease in the number of pitch-tracking errors and a refinement in the energy devoted to encoding pitch. Most interestingly, this change in pitch-tracking accuracy occurred only for the most acoustically complex pitch contour (dipping contour), which is also the least familiar to our English-speaking subjects. These results not only demonstrate the contribution of the brainstem in language learning and its plasticity in adulthood, but they also demonstrate the specificity of this contribution (i.e., changes in encoding occur only for specific, least familiar stimuli, not all stimuli). Our findings complement existing data showing cortical changes after second language learning, and are consistent with models suggesting that brainstem changes resulting from perceptual learning are most apparent when acuity in encoding is most needed. PMID:18370594
NASA Astrophysics Data System (ADS)
Tinoco, R. O.; Goldstein, E. B.; Coco, G.
2016-12-01
We use a machine learning approach to seek accurate, physically sound predictors to estimate two relevant flow parameters for open-channel vegetated flows: mean velocities and drag coefficients. A genetic programming algorithm is used to find a robust relationship between properties of the vegetation and flow parameters. We use data published from several laboratory experiments covering a broad range of conditions to obtain: (a) in the case of mean flow, an equation that matches the accuracy of other predictors from recent literature while showing a less complex structure, and (b) for drag coefficients, a predictor that relies on both single-element and array parameters. We investigate different criteria for dataset size and data selection to evaluate their impact on the resulting predictor, as well as simple strategies to obtain only dimensionally consistent equations and avoid the need for dimensional coefficients. The results show that a proper methodology can deliver physically sound models representative of the processes involved, such that genetic programming and machine learning techniques can be used as powerful tools to study complicated phenomena and develop not only purely empirical but "hybrid" models, coupling results from machine learning methodologies into physics-based models.
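A rough sense of the symbolic-regression-by-genetic-programming workflow can be conveyed with the third-party gplearn package. This is an illustrative sketch on synthetic data, not the authors' code; the feature set and "true" relationship are assumptions.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(1)
# Stand-in features: stem Reynolds number, solid volume fraction, spacing ratio
X = rng.uniform(0.1, 1.0, size=(300, 3))
Cd = 1.0 + 10.0 / (X[:, 0] * 500) + 0.5 * X[:, 1]      # synthetic drag-coefficient "truth"

gp = SymbolicRegressor(population_size=2000, generations=20,
                       function_set=("add", "sub", "mul", "div"),
                       parsimony_coefficient=0.01, random_state=0)
gp.fit(X, Cd)
print(gp._program)   # best symbolic expression found by the evolutionary search
```

The parsimony penalty plays the role of the "minimal complexity" criterion: candidate expressions are rewarded for fitting the data with as few terms as possible.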
Gagnon, Bernadine; Miozzo, Michele
2017-01-01
Purpose: This study aimed to test whether an approach to distinguishing errors arising in phonological processing from those arising in motor planning also predicts the extent to which repetition-based training can lead to improved production of difficult sound sequences. Method: Four individuals with acquired speech production impairment who produced consonant cluster errors involving deletion were examined using a repetition task. We compared the acoustic details of productions with deletion errors in target consonant clusters to singleton consonants. Changes in accuracy over the course of the study were also compared. Results: Two individuals produced deletion errors consistent with a phonological locus of the errors, and 2 individuals produced errors consistent with a motoric locus of the errors. The 2 individuals who made phonologically driven errors showed no change in performance on a repetition training task, whereas the 2 individuals with motoric errors improved in their production of both trained and untrained items. Conclusions: The results extend previous findings about a metric for identifying the source of sound production errors in individuals with both apraxia of speech and aphasia. In particular, this work may provide a tool for identifying predominant error types in individuals with complex deficits. PMID:28655044
Oikkonen, J.; Huang, Y.; Onkamo, P.; Ukkola-Vuoti, L.; Raijas, P.; Karma, K.; Vieland, V. J.; Järvelä, I.
2014-01-01
Humans have developed the perception, production and processing of sounds into the art of music. A genetic contribution to these skills of musical aptitude has long been suggested. We performed a genome-wide scan in 76 pedigrees (767 individuals) characterized for the ability to discriminate pitch (SP), duration (ST) and sound patterns (KMT), which are primary capacities for music perception. Using the Bayesian linkage and association approach implemented in program package KELVIN, especially designed for complex pedigrees, several SNPs near genes affecting the functions of the auditory pathway and neurocognitive processes were identified. The strongest association was found at 3q21.3 (rs9854612) with combined SP, ST and KMT test scores (COMB). This region is located a few dozen kilobases upstream of the GATA binding protein 2 (GATA2) gene. GATA2 regulates the development of cochlear hair cells and the inferior colliculus (IC), which are important in tonotopic mapping. The highest probability of linkage was obtained for phenotype SP at 4p14, located next to the region harboring the protocadherin 7 gene, PCDH7. Two SNPs rs13146789 and rs13109270 of PCDH7 showed strong association. PCDH7 has been suggested to play a role in cochlear and amygdaloid complexes. Functional class analysis showed that inner ear and schizophrenia related genes were enriched inside the linked regions. This study is the first to show the importance of auditory pathway genes in musical aptitude. PMID:24614497
Lung and Heart Sounds Analysis: State-of-the-Art and Future Trends.
Padilla-Ortiz, Ana L; Ibarra, David
2018-01-01
Lung sounds, which include all sounds produced during the mechanism of respiration, may be classified into normal breath sounds and adventitious sounds. Normal breath sounds occur when no respiratory problems exist, whereas adventitious lung sounds (wheeze, rhonchi, crackle, etc.) are usually associated with certain pulmonary pathologies. Heart and lung sounds heard using a stethoscope are the result of mechanical interactions that indicate operation of the cardiac and respiratory systems, respectively. In this article, we review the research conducted during the last six years on lung and heart sounds, instrumentation and data sources (sensors and databases), technological advances, and perspectives in processing and data analysis. Our review suggests that chronic obstructive pulmonary disease (COPD) and asthma are the most common respiratory diseases reported on in the literature; related diseases that are less analyzed include chronic bronchitis, idiopathic pulmonary fibrosis, congestive heart failure, and parenchymal pathology. Some new findings regarding the methodologies associated with advances in the electronic stethoscope are presented for the auscultatory heart sound signaling process, including analysis and clarification of the resulting sounds to create a diagnosis based on a quantifiable medical assessment. The availability of high-precision automatic interpretation of heart and lung sounds opens interesting possibilities for cardiovascular diagnosis as well as potential for intelligent diagnosis of heart and lung diseases.
Tervaniemi, M; Kruck, S; De Baene, W; Schröger, E; Alter, K; Friederici, A D
2009-10-01
By recording auditory electrical brain potentials, we investigated whether the basic sound parameters (frequency, duration and intensity) are differentially encoded among speech vs. music sounds by musicians and non-musicians during different attentional demands. To this end, a pseudoword and an instrumental sound of comparable frequency and duration were presented. The accuracy of neural discrimination was tested by manipulations of frequency, duration and intensity. Additionally, the subjects' attentional focus was manipulated by instructions to ignore the sounds while watching a silent movie or to attentively discriminate the different sounds. In both musicians and non-musicians, the pre-attentively evoked mismatch negativity (MMN) component was larger to slight changes in music than in speech sounds. The MMN was also larger to intensity changes in music sounds and to duration changes in speech sounds. During attentional listening, all subjects more readily discriminated changes among speech sounds than among music sounds as indexed by the N2b response strength. Furthermore, during attentional listening, musicians displayed larger MMN and N2b than non-musicians for both music and speech sounds. Taken together, the data indicate that the discriminative abilities in human audition differ between music and speech sounds as a function of the sound-change context and the subjective familiarity of the sound parameters. These findings provide clear evidence for top-down modulatory effects in audition. In other words, the processing of sounds is realized by a dynamically adapting network considering type of sound, expertise and attentional demands, rather than by a strictly modularly organized stimulus-driven system.
Source and listener directivity for interactive wave-based sound propagation.
Mehra, Ravish; Antani, Lakulish; Kim, Sujeong; Manocha, Dinesh
2014-04-01
We present an approach to model dynamic, data-driven source and listener directivity for interactive wave-based sound propagation in virtual environments and computer games. Our directional source representation is expressed as a linear combination of elementary spherical harmonic (SH) sources. In the preprocessing stage, we precompute and encode the propagated sound fields due to each SH source. At runtime, we perform the SH decomposition of the varying source directivity interactively and compute the total sound field at the listener position as a weighted sum of precomputed SH sound fields. We propose a novel plane-wave decomposition approach based on higher-order derivatives of the sound field that enables dynamic HRTF-based listener directivity at runtime. We provide a generic framework to incorporate our source and listener directivity in any offline or online frequency-domain wave-based sound propagation algorithm. We have integrated our sound propagation system in Valve's Source game engine and use it to demonstrate realistic acoustic effects such as sound amplification, diffraction low-passing, scattering, localization, externalization, and spatial sound, generated by wave-based propagation of directional sources and listener in complex scenarios. We also present results from our preliminary user study.
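The runtime step described above reduces to a weighted sum of precomputed fields. A conceptual sketch follows; the array shapes and data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def field_at_listener(sh_weights, precomputed_fields):
    """
    sh_weights: (n_sh,) SH coefficients of the source directivity for this frame
    precomputed_fields: (n_sh, n_freq) precomputed pressure at the listener,
                        one row per elementary spherical-harmonic source
    returns: (n_freq,) total frequency-domain pressure at the listener
    """
    return sh_weights @ precomputed_fields

n_sh, n_freq = 16, 256            # e.g. 3rd-order SH expansion, 256 frequency bins
fields = np.random.randn(n_sh, n_freq) + 1j * np.random.randn(n_sh, n_freq)
weights = np.random.randn(n_sh)   # would come from SH-projecting the current directivity
p_total = field_at_listener(weights, fields)
```

The expensive wave simulation is thus confined to preprocessing; at runtime only the lightweight projection and weighted sum are recomputed as the source rotates or changes directivity.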
Brain responses to sound intensity changes dissociate depressed participants and healthy controls.
Ruohonen, Elisa M; Astikainen, Piia
2017-07-01
Depression is associated with bias in emotional information processing, but less is known about the processing of neutral sensory stimuli. Of particular interest is the processing of sound intensity, which is suggested to indicate central serotonergic function. We tested whether event-related brain potentials (ERPs) to occasional changes in sound intensity can dissociate first-episode depressed, recurrent depressed and healthy control participants. The first-episode depressed group showed larger N1 amplitude to deviant sounds compared to the recurrent depression group and control participants. In addition, both depression groups, but not the control group, showed larger N1 amplitude to deviant than standard sounds. Whether these manifestations of sensory over-excitability in depression are directly related to serotonergic neurotransmission requires further research. The method, based on ERPs to sound intensity change, is a fast and low-cost way to objectively measure brain activation and holds promise as a future diagnostic tool. Copyright © 2017 Elsevier B.V. All rights reserved.
A prediction of templates in the auditory cortex system
NASA Astrophysics Data System (ADS)
Ghanbeigi, Kimia
In this study, the variation of human auditory evoked mismatch field amplitudes in response to complex tones, as a function of the removal of single partials in the onset period, was investigated. It was determined that: (1) elimination of a single frequency in a sound stimulus plays a significant role in human brain sound recognition; (2) by comparing the mismatches of the brain response due to a single frequency elimination in the "Starting Transient" and the "Sustain Part" of the sound stimulus, the brain is found to be more sensitive to frequency elimination in the Starting Transient. This study involved 4 healthy subjects with normal hearing. Neural activity was recorded with whole-head MEG. Verification of the spatial location in the auditory cortex was obtained by comparison with MRI images. In the first set of stimuli, rare ('deviant') tones were randomly embedded in a string of repetitive ('standard') tones with five selected onset frequencies, with randomly varying inter-stimulus intervals. In the deviant tones, one of the frequency components was omitted, relative to the standard tones, during the onset period. The frequency of the test partial of the complex tone was intentionally selected to preclude its reinsertion by generation of harmonics or combination tones due to either the nonlinearity of the ear, the electronic equipment or the brain processing. In the second set of stimuli, time-structured as above, rare ('deviant') tones, for which one of five selected sustained frequency components was omitted in the sustained tone, were embedded in a string of repetitive ('standard') tones containing all of these components. In both measurements, the careful frequency selection precluded reinsertion by generation of harmonics or combination tones due to the nonlinearity of the ear, the electronic equipment and brain processing. The same considerations for selecting the test frequency partial were applied. Results: By comparing the MMN of the two data sets, the relative contribution to sound recognition of the omitted partial frequency components in the onset and sustained regions was determined. Conclusion: The presence of significant mismatch negativity, due to neural activity of the auditory cortex, emphasizes that the brain recognizes the elimination of a single frequency among carefully chosen anharmonic frequencies. This mismatch was shown to be more significant when the single frequency elimination occurred in the onset period.
Parmentier, Fabrice B R; Pacheco-Unguetti, Antonia P; Valero, Sara
2018-01-01
Rare changes in a stream of otherwise repeated task-irrelevant sounds break through selective attention and disrupt performance in an unrelated visual task by triggering shifts of attention to and from the deviant sound (deviance distraction). Evidence indicates that the involuntary orientation of attention to unexpected sounds is followed by their semantic processing. However, past demonstrations relied on tasks in which the meaning of the deviant sounds overlapped with features of the primary task. Here we examine whether such processing is observed when no such overlap is present but sounds carry some relevance to the participants' biological need to eat when hungry. We report the results of an experiment in which hungry and satiated participants partook in a cross-modal oddball task in which they categorized visual digits (odd/even) while ignoring task-irrelevant sounds. On most trials the irrelevant sound was a sinewave tone (standard sound). On the remaining trials, deviant sounds consisted of spoken words related to food (food deviants) or control words (control deviants). Questionnaire data confirmed state (but not trait) differences between the two groups with respect to food craving, as well as a greater desire to eat the food corresponding to the food-related words in the hungry relative to the satiated participants. The results of the oddball task revealed that food deviants produced greater distraction (longer response times) than control deviants in hungry participants while the reverse effect was observed in satiated participants. This effect was observed in the first block of trials but disappeared thereafter, reflecting semantic saturation. Our results suggest that (1) the semantic content of deviant sounds is involuntarily processed even when sharing no feature with the primary task; and that (2) distraction by deviant sounds can be modulated by the participants' biological needs.
Benacchio, Simon; Mamou-Mani, Adrien; Chomette, Baptiste; Finel, Victor
2016-03-01
The vibrational behavior of musical instruments is usually studied using physical modeling and simulations. Recently, active control has proven its efficiency to experimentally modify the dynamical behavior of musical instruments. This approach could also be used as an experimental tool to systematically study fine physical phenomena. This paper proposes to use modal active control as an alternative to sound simulation to study the complex case of the coupling between classical guitar strings and soundboard. A comparison between modal active control and sound simulation investigates the advantages, the drawbacks, and the limits of these two approaches.
A Low Cost GPS System for Real-Time Tracking of Sounding Rockets
NASA Technical Reports Server (NTRS)
Markgraf, M.; Montenbruck, O.; Hassenpflug, F.; Turner, P.; Bull, B.; Bauer, Frank (Technical Monitor)
2001-01-01
In an effort to minimize the need for costly, complex, tracking radars, the German Space Operations Center has set up a research project for GPS based tracking of sounding rockets. As part of this project, a GPS receiver based on commercial technology for terrestrial applications has been modified to allow its use under the highly dynamical conditions of a sounding rocket flight. In addition, new antenna concepts are studied as an alternative to proven but costly wrap-around antennas.
Aquatic Acoustic Metrics Interface
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-12-18
Fishes and marine mammals may suffer a range of potential effects from exposure to intense underwater sound generated by anthropogenic activities such as pile driving, shipping, sonars, and underwater blasting. Several underwater sound recording (USR) devices have been built to acquire samples of the underwater sound generated by anthropogenic activities. Software becomes indispensable for processing and analyzing the audio files recorded by these USRs. The new Aquatic Acoustic Metrics Interface Utility Software (AAMI) is specifically designed for analysis of underwater sound recordings to provide data in metrics that facilitate evaluation of the potential impacts of the sound on aquatic animals. In addition to the basic functions, such as loading and editing audio files recorded by USRs and batch processing of sound files, the software utilizes recording system calibration data to compute important parameters in physical units. The software also facilitates comparison of the noise sound sample metrics with biological measures such as audiograms of the sensitivity of aquatic animals to the sound, integrating various components into a single analytical frame.
A SOUND SOURCE LOCALIZATION TECHNIQUE TO SUPPORT SEARCH AND RESCUE IN LOUD NOISE ENVIRONMENTS
NASA Astrophysics Data System (ADS)
Yoshinaga, Hiroshi; Mizutani, Koichi; Wakatsuki, Naoto
At some sites of earthquakes and other disasters, rescuers search for people buried under rubble by listening for the sounds they make. Thus, developing a technique to localize sound sources amidst loud noise will support such search and rescue operations. In this paper, we discuss an experiment performed to test an array signal processing technique that searches for imperceptible sounds in loud noise environments. Two speakers simultaneously played the noise of a generator and a voice attenuated by 20 dB (= 1/100 of the power) relative to the generator noise, in an outdoor space where cicadas were making noise. The sound signal was received by a horizontally set linear microphone array 1.05 m in length and consisting of 15 microphones. The direction and distance of the voice were computed, and the sound of the voice was extracted and played back as an audible sound by array signal processing.
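The core array-processing idea can be sketched as simple delay-and-sum beamforming on a linear microphone array. The geometry, sample rate, and sign conventions below are assumptions for illustration, not the authors' exact algorithm.

```python
import numpy as np

def delay_and_sum(signals, mic_x, angle_deg, fs, c=343.0):
    """
    signals:   (n_mics, n_samples) time-aligned microphone recordings
    mic_x:     (n_mics,) microphone positions along the array axis in metres
    angle_deg: steering angle relative to broadside
    """
    delays = mic_x * np.sin(np.radians(angle_deg)) / c        # per-mic delay in seconds
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    out = np.zeros(n)
    for sig, tau in zip(signals, delays):
        # fractional-sample delay applied as a phase shift in the frequency domain
        spec = np.fft.rfft(sig) * np.exp(-2j * np.pi * freqs * tau)
        out += np.fft.irfft(spec, n)
    return out / len(signals)
```

Scanning a range of steering angles and picking the one with the largest output power gives a crude estimate of the direction of the buried voice, which the summed output then reinforces relative to the surrounding noise.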
How does experience modulate auditory spatial processing in individuals with blindness?
Tao, Qian; Chan, Chetwyn C H; Luo, Yue-jia; Li, Jian-jun; Ting, Kin-hung; Wang, Jun; Lee, Tatia M C
2015-05-01
Comparing early- and late-onset blindness in individuals offers a unique model for studying the influence of visual experience on neural processing. This study investigated how prior visual experience would modulate auditory spatial processing among blind individuals. BOLD responses of early- and late-onset blind participants were captured while performing a sound localization task. The task required participants to listen to novel "Bat-ears" sounds, analyze the spatial information embedded in the sounds, and specify out of 15 locations where the sound would have been emitted. In addition to sound localization, participants were assessed on visuospatial working memory and general intellectual abilities. The results revealed common increases in BOLD responses in the middle occipital gyrus, superior frontal gyrus, precuneus, and precentral gyrus during sound localization for both groups. Between-group dissociations, however, were found in the right middle occipital gyrus and left superior frontal gyrus. The BOLD responses in the left superior frontal gyrus were significantly correlated with accuracy on sound localization and visuospatial working memory abilities among the late-onset blind participants. In contrast, the accuracy on sound localization only correlated with BOLD responses in the right middle occipital gyrus among the early-onset counterpart. The findings support the notion that early-onset blind individuals rely more on the occipital areas as a result of cross-modal plasticity for auditory spatial processing, while late-onset blind individuals rely more on the prefrontal areas which subserve visuospatial working memory.
Pitch enhancement facilitates word learning across visual contexts
Filippi, Piera; Gingras, Bruno; Fitch, W. Tecumseh
2014-01-01
This study investigates word-learning using a new experimental paradigm that integrates three processes: (a) extracting a word out of a continuous sound sequence, (b) inferring its referential meanings in context, (c) mapping the segmented word onto its broader intended referent, such as other objects of the same semantic category, and to novel utterances. Previous work has examined the role of statistical learning and/or of prosody in each of these processes separately. Here, we combine these strands of investigation into a single experimental approach, in which participants viewed a photograph belonging to one of three semantic categories while hearing a complex, five-word utterance containing a target word. Six between-subjects conditions were tested with 20 adult participants each. In condition 1, the only cue to word-meaning mapping was the co-occurrence of word and referents. This statistical cue was present in all conditions. In condition 2, the target word was sounded at a higher pitch. In condition 3, random words were sounded at a higher pitch, creating an inconsistent cue. In condition 4, the duration of the target word was lengthened. In conditions 5 and 6, an extraneous acoustic cue and a visual cue were associated with the target word, respectively. Performance in this word-learning task was significantly higher than that observed with simple co-occurrence only when pitch prominence consistently marked the target word. We discuss implications for the pragmatic value of pitch marking as well as the relevance of our findings to language acquisition and language evolution. PMID:25566144
Looi, Valerie; Winter, Philip; Anderson, Ilona; Sucher, Catherine
2011-08-01
The purpose of this study was to develop a music quality rating test battery (MQRTB) and pilot test it by comparing appraisal ratings from cochlear implant (CI) recipients using the fine-structure processing (FSP) and high-definition continuous interleaved sampling (HDCIS) speech processing strategies. The development of the MQRTB involved three stages: (1) Selection of test items for the MQRTB; (2) Verification of its length and complexity with normally-hearing individuals; and (3) Pilot testing with CI recipients. Part 1 involved 65 adult listeners, Part 2 involved 10 normally-hearing adults, and Part 3 involved five adult MED-EL CI recipients. The MQRTB consisted of ten songs, with ratings made on scales assessing pleasantness, naturalness, richness, fullness, sharpness, and roughness. Results of the pilot study, which compared FSP and HDCIS for music, indicated that acclimatization to a strategy had a significant effect on ratings (p < 0.05). When acclimatized to FSP, the group rated FSP as closer to 'exactly as I want it to sound' than HDCIS (p < 0.05), and that HDCIS sounded significantly sharper and rougher than FSP. However when acclimatized to HDCIS, there were no significant differences between ratings. There was no effect of song familiarity or genre on ratings. Overall the results suggest that the use of FSP as the default strategy for MED-EL recipients would have a positive effect on music appreciation, and that the MQRTB is an effective tool for assessing music sound quality.
Cross-modal orienting of visual attention.
Hillyard, Steven A; Störmer, Viola S; Feng, Wenfeng; Martinez, Antigona; McDonald, John J
2016-03-01
This article reviews a series of experiments that combined behavioral and electrophysiological recording techniques to explore the hypothesis that salient sounds attract attention automatically and facilitate the processing of visual stimuli at the sound's location. This cross-modal capture of visual attention was found to occur even when the attracting sound was irrelevant to the ongoing task and was non-predictive of subsequent events. A slow positive component in the event-related potential (ERP) that was localized to the visual cortex was found to be closely coupled with the orienting of visual attention to a sound's location. This neural sign of visual cortex activation was predictive of enhanced perceptual processing and was paralleled by a desynchronization (blocking) of the ongoing occipital alpha rhythm. Further research is needed to determine the nature of the relationship between the slow positive ERP evoked by the sound and the alpha desynchronization and to understand how these electrophysiological processes contribute to improved visual-perceptual processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
Temporal processing deficit leads to impaired multisensory binding in schizophrenia.
Zvyagintsev, Mikhail; Parisi, Carmen; Mathiak, Klaus
2017-09-01
Schizophrenia has been characterised by neurodevelopmental dysconnectivity resulting in cognitive and perceptual dysmetria. Hence, patients with schizophrenia may be impaired in detecting the temporal relationship between stimuli in different sensory modalities. However, only a few studies have described a deficit in the perception of temporally asynchronous multisensory stimuli in schizophrenia. We examined the perceptual bias and the processing time of synchronous and delayed sounds in the streaming-bouncing illusion in 16 patients with schizophrenia and a matched control group of 18 participants. In both patients and controls, the synchronous sound biased the percept of two moving squares towards bouncing, as opposed to the more frequent streaming percept in the condition without sound. In healthy controls, a delay of the sound presentation significantly reduced the bias and led to prolonged processing time, whereas patients with schizophrenia did not differentiate between this condition and the condition with the synchronous sound. Schizophrenia thus leads to a prolonged window of simultaneity for audiovisual stimuli. Therefore, a temporal processing deficit in schizophrenia can lead to hyperintegration of temporally unmatched multisensory stimuli.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gary Mecham; Don Konoyer
2009-11-01
The Materials & Fuel Complex (MFC) facilities 799 Sodium Processing Facility (a single building consisting of two areas: the Sodium Process Area (SPA) and the Carbonate Process Area (CPA)), 799A Caustic Storage Area, and 770C Nuclear Calibration Laboratory have been declared excess to future Department of Energy mission requirements. Transfer of these facilities from Nuclear Energy to Environmental Management, and an associated schedule for doing so, have been agreed upon by the two offices. The prerequisites for this transfer to occur are the removal of nonexcess materials and chemical inventory, deinventory of the calibration source in MFC-770C, and the rerouting and/or isolation of utility and service systems. This report provides a description of the current physical condition and any hazards (material, chemical, nuclear or occupational) that may be associated with past operations of these facilities. This information will document conditions at the time of transfer of the facilities from Nuclear Energy to Environmental Management and serve as the basis for disposition planning. The process used in obtaining this information included document searches, interviews and facility walk-downs. A copy of the facility walk-down checklist is included in this report as Appendix A. MFC-799/799A/770C are all structurally sound, and associated hazardous or potentially hazardous conditions are well defined and well understood. All installed equipment items (tanks, filters, etc.) used to process hazardous materials remain in place and appear to have maintained their integrity. There is no evidence of leakage, and all openings are properly sealed or closed off and connections are sound. The pits appear clean with no evidence of cracking or deterioration that could lead to migration of contamination. Based upon the available information/documentation reviewed and the overall conditions observed during the facilities walk-down, it is concluded that these facilities may be disposed of at minimal risk to human health, safety or the environment.
Chen, Yi-Chuan; Spence, Charles
2013-01-01
The time-course of cross-modal semantic interactions between pictures and either naturalistic sounds or spoken words was compared. Participants performed a speeded picture categorization task while hearing a task-irrelevant auditory stimulus presented at various stimulus onset asynchronies (SOAs) with respect to the visual picture. Both naturalistic sounds and spoken words gave rise to cross-modal semantic congruency effects (i.e., facilitation by semantically congruent sounds and inhibition by semantically incongruent sounds, as compared to a baseline noise condition) when the onset of the sound led that of the picture by 240 ms or more. Both naturalistic sounds and spoken words also gave rise to inhibition irrespective of their semantic congruency when presented within 106 ms of the onset of the picture. The peak of this cross-modal inhibitory effect occurred earlier for spoken words than for naturalistic sounds. These results therefore demonstrate that the semantic priming of visual picture categorization by auditory stimuli only occurs when the onset of the sound precedes that of the visual stimulus. The different time-courses observed for naturalistic sounds and spoken words likely reflect the different processing pathways to access the relevant semantic representations.
The effect of brain lesions on sound localization in complex acoustic environments.
Zündorf, Ida C; Karnath, Hans-Otto; Lewald, Jörg
2014-05-01
Localizing sound sources of interest in cluttered acoustic environments--as in the 'cocktail-party' situation--is one of the most demanding challenges to the human auditory system in everyday life. In this study, stroke patients' ability to localize acoustic targets in a single-source and in a multi-source setup in the free sound field were directly compared. Subsequent voxel-based lesion-behaviour mapping analyses were computed to uncover the brain areas associated with a deficit in localization in the presence of multiple distracter sound sources rather than localization of individually presented sound sources. Analyses revealed a fundamental role of the right planum temporale in this task. The results from the left hemisphere were less straightforward, but suggested an involvement of inferior frontal and pre- and postcentral areas. These areas appear to be particularly involved in the spectrotemporal analyses crucial for effective segregation of multiple sound streams from various locations, beyond the currently known network for localization of isolated sound sources in otherwise silent surroundings.
Milovanov, Riia; Huotilainen, Minna; Esquef, Paulo A A; Alku, Paavo; Välimäki, Vesa; Tervaniemi, Mari
2009-08-28
We examined 10- to 12-year-old elementary school children's ability to preattentively process sound durations in music and speech stimuli. In total, 40 children had either advanced foreign language production skills and higher musical aptitude, or less advanced results in both musicality and linguistic tests. Event-related potential (ERP) recordings of the mismatch negativity (MMN) show that duration changes in musical sounds are more prominently and accurately processed than changes in speech sounds. Moreover, children with advanced pronunciation and musicality skills displayed enhanced MMNs to duration changes in both speech and musical sounds. Thus, our study provides further evidence for the claim that musical aptitude and linguistic skills are interconnected and that the musical features of the stimuli could have a preponderant role in preattentive duration processing.
Sheft, Stanley; Gygi, Brian; Ho, Kim Thien N.
2012-01-01
Perceptual training with spectrally degraded environmental sounds results in improved environmental sound identification, with benefits shown to extend to untrained speech perception as well. The present study extended those findings to examine longer-term training effects as well as effects of mere repeated exposure to sounds over time. Participants received two pretests (1 week apart) prior to a week-long environmental sound training regimen, which was followed by two posttest sessions, separated by another week without training. Spectrally degraded stimuli, processed with a four-channel vocoder, consisted of a 160-item environmental sound test, word and sentence tests, and a battery of basic auditory abilities and cognitive tests. Results indicated significant improvements in all speech and environmental sound scores between the initial pretest and the last posttest with performance increments following both exposure and training. For environmental sounds (the stimulus class that was trained), the magnitude of positive change that accompanied training was much greater than that due to exposure alone, with improvement for untrained sounds roughly comparable to the speech benefit from exposure. Additional tests of auditory and cognitive abilities showed that speech and environmental sound performance were differentially correlated with tests of spectral and temporal-fine-structure processing, whereas working memory and executive function were correlated with speech, but not environmental sound perception. These findings indicate generalizability of environmental sound training and provide a basis for implementing environmental sound training programs for cochlear implant (CI) patients. PMID:22891070
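The four-channel vocoder degradation referred to here can be approximated with a standard noise-vocoder recipe. The band edges and filter order below are generic illustrative choices, not the study's exact parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, edges=(100, 542, 1515, 3755, 8000)):
    """Four-channel noise vocoder: band-pass, extract envelope, modulate noise."""
    rng = np.random.default_rng(0)
    out = np.zeros(len(x), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))                     # temporal envelope of the band
        carrier = sosfiltfilt(sos, rng.standard_normal(len(x)))
        out += env * carrier                            # envelope-modulated noise band
    return out / np.max(np.abs(out))
```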
Temporal Fine Structure and Applications to Cochlear Implants
ERIC Educational Resources Information Center
Li, Xing
2013-01-01
Complex broadband sounds are decomposed by the auditory filters into a series of relatively narrowband signals, each of which conveys information about the sound by time-varying features. The slow changes in the overall amplitude constitute the envelope, while the more rapid events, such as zero crossings, constitute the temporal fine structure (TFS).…
ERIC Educational Resources Information Center
Stickney, Jeff Alan
2009-01-01
Comparing the early, analytic attempt to define "sound" teaching with the current use of criteria-based rating schemes, Jeff Stickney turns to Wittgenstein's holistic, contextualist approach to judging teaching against its complex "background" within our "form of life." To exemplify this approach, Stickney presents cases of classroom practice…
Phase Shifting and the Beating of Complex Waves
ERIC Educational Resources Information Center
Keeports, David
2011-01-01
At the introductory level, the demonstration and analysis of sound beating is usually limited to the superposition of two purely sinusoidal waves with equal amplitudes and very similar frequencies. Under such conditions, an observer hears the periodic variation of the loudness of a sound with an unchanging timbre. On the other hand, when complex…
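The two-sinusoid case described in this abstract can be reproduced numerically; the sketch below, with assumed frequencies of 440 and 443 Hz, shows that the sum of two equal-amplitude sinusoids carries a loudness envelope at the difference frequency.

```python
# Beating of two equal-amplitude sinusoids: the sum has a slow envelope
# at the difference frequency |f1 - f2| (illustrative frequencies assumed).
import numpy as np

fs = 44100
t = np.arange(0, 2.0, 1 / fs)
f1, f2 = 440.0, 443.0                      # two similar frequencies
y = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Trigonometric identity: y = 2*cos(pi*(f1-f2)*t) * sin(pi*(f1+f2)*t),
# so the loudness waxes and wanes |f1 - f2| = 3 times per second.
envelope = 2 * np.abs(np.cos(np.pi * (f1 - f2) * t))
print(f"beat frequency: {abs(f1 - f2):.1f} Hz")
```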
Perception of Spectral Contrast by Hearing-Impaired Listeners
ERIC Educational Resources Information Center
Dreisbach, Laura E.; Leek, Marjorie R.; Lentz, Jennifer J.
2005-01-01
The ability to discriminate the spectral shapes of complex sounds is critical to accurate speech perception. Part of the difficulty experienced by listeners with hearing loss in understanding speech sounds in noise may be related to a smearing of the internal representation of the spectral peaks and valleys because of the loss of sensitivity and…
A Literature Survey of Noise Pollution.
ERIC Educational Resources Information Center
Shih, H. H.
Physically, noise is a complex sound that has little or no periodicity. However, the essential characteristic of noise is its undesirability. Thus, noise can be defined as any annoying or unwanted sound. In recent years, the rapid increase of noise level in our environment has become a national public health hazard. Noise affects man's state of…
Ultrasonic tomography for in-process measurements of temperature in a multi-phase medium
Beller, Laurence S.
1993-01-01
A method and apparatus for the in-process measurement of internal particulate temperature utilizing ultrasonic tomography techniques to determine the speed of sound through a specimen material. Ultrasonic pulses are transmitted through a material, which can be a multi-phase material, over known flight paths and the ultrasonic pulse transit times through all sectors of the specimen are measured to determine the speed of sound. The speed of sound being a function of temperature, it is possible to establish the correlation between speed of sound and temperature, throughout a cross-section of the material, which correlation is programmed into a computer to provide for a continuous in-process measurement of temperature throughout the specimen.
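A minimal sketch of the measurement principle follows: speed of sound from a known flight path and measured transit time, then temperature through a calibration curve. The linear calibration coefficients used here are hypothetical placeholders, not values from the patent.

```python
# Speed of sound from a known flight path and measured transit time, then
# temperature via an assumed linear calibration. Coefficients are hypothetical.
def speed_of_sound(path_length_m, transit_time_s):
    return path_length_m / transit_time_s

def temperature_from_speed(c_m_per_s, c_ref=1480.0, t_ref=20.0, dc_dt=2.5):
    """Invert an assumed linear calibration c(T) = c_ref + dc_dt * (T - t_ref)."""
    return t_ref + (c_m_per_s - c_ref) / dc_dt

c = speed_of_sound(path_length_m=0.10, transit_time_s=6.5e-5)   # ~1538 m/s
print(f"c = {c:.0f} m/s  ->  T ~ {temperature_from_speed(c):.1f} degC")
```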
[Analysis of the heart sound with arrhythmia based on nonlinear chaos theory].
Ding, Xiaorong; Guo, Xingming; Zhong, Lisha; Xiao, Shouzhong
2012-10-01
In this paper, a new method based on nonlinear chaos theory, combining the correlation dimension and the largest Lyapunov exponent, was proposed to study arrhythmia; the two parameters were computed and analyzed for 30 normal heart sound recordings and 30 recordings with arrhythmia. The results showed that both parameters were higher for heart sounds with arrhythmia than for normal heart sounds, and that the difference between the two kinds of heart sounds was significant. This is probably due to the irregularity of the arrhythmia, which reduces predictability and makes the signal more complex than the normal heart sound. Therefore, the correlation dimension and the largest Lyapunov exponent can be used to analyze arrhythmia and for its feature extraction.
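For readers unfamiliar with the correlation dimension, the sketch below gives a rough Grassberger-Procaccia-style estimate for a one-dimensional signal; the embedding dimension, delay, and radius range are illustrative choices, not the settings used in the paper.

```python
# Rough Grassberger-Procaccia correlation-dimension estimate for a 1-D signal.
import numpy as np

def correlation_dimension(x, dim=5, delay=4, radii=None):
    # Delay embedding of the scalar time series
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])
    # Pairwise distances (acceptable for a short demo signal)
    d = np.sqrt(((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1))
    dists = d[np.triu_indices(n, k=1)]
    if radii is None:
        radii = np.logspace(np.log10(np.percentile(dists, 5)),
                            np.log10(np.percentile(dists, 50)), 10)
    c_r = np.array([(dists < r).mean() for r in radii])
    # Slope of log C(r) vs log r approximates the correlation dimension
    slope, _ = np.polyfit(np.log(radii), np.log(c_r), 1)
    return slope

t = np.linspace(0, 20 * np.pi, 1000)
print(correlation_dimension(np.sin(t)))   # close to 1 for a simple periodic signal
```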
Sound-induced Interfacial Dynamics in a Microfluidic Two-phase Flow
NASA Astrophysics Data System (ADS)
Mak, Sze Yi; Shum, Ho Cheung
2014-11-01
Retrieving sound waves by fluidic means is challenging due to the difficulty of visualizing the very minute sound-induced fluid motion. This work studies the interfacial response of multiphase systems to fluctuations in the flow. We demonstrate a direct visualization of music in the form of ripples at a microfluidic aqueous-aqueous interface with an ultra-low interfacial tension. The interface shows a passive response to sound of different frequencies with sufficiently precise time resolution, enabling the recording of musical notes and even their subsequent reconstruction with high fidelity. This suggests that sensing and transmitting vibrations as tiny as those induced by sound could be realized in low interfacial tension systems. The robust control of the interfacial dynamics could be adopted for droplet and complex-fiber generation.
Oki, T; Fukuda, N; Tabata, T; Yamada, H; Manabe, K; Fukuda, K; Abe, M; Iuchi, A; Ito, S
1997-03-01
We describe a patient with Ebstein's anomaly in whom Doppler echocardiography was used to clarify the mechanism responsible for 'sail sound' and tricuspid regurgitation associated with this condition. Phonocardiography revealed an additional early systolic heart sound, consisting of a first low-amplitude component (T1) and a second high-amplitude component (T2, 'sail sound'). In simultaneous recordings of the tricuspid valve motion using M mode echocardiography and phonocardiography, the closing of the tricuspid valve occurred with T1 which originated at the tip of the tricuspid leaflets, while T2 originated from the body of the tricuspid leaflets. Using color Doppler imaging, the tricuspid regurgitant signal was detected during pansystole, indicating a blue signal during the phase corresponding to T1 and a mosaic signal during the phase corresponding to T2 at end-systole. Thus, 'sail sound' in patients with Ebstein's anomaly is not simply a closing sound of the tricuspid valve, but a complex closing sound which includes a sudden stopping sound after the anterior and/or other tricuspid leaflets balloon out at systole.
Green symphonies: a call for studies on acoustic communication in plants
2013-01-01
Sound and its use in communication have significantly contributed to shaping the ecology, evolution, behavior, and ultimately the success of many animal species. Yet, the ability to use sound is not a prerogative of animals. Plants may also use sound, but we have been unable to effectively research what the ecological and evolutionary implications might be in a plant’s life. Why should plants emit and receive sound and is there information contained in those sounds? I hypothesize that it would be particularly advantageous for plants to learn about the surrounding environment using sound, as acoustic signals propagate rapidly and with minimal energetic or fitness costs. In fact, both emission and detection of sound may have adaptive value in plants by affecting responses in other organisms, plants, and animals alike. The systematic exploration of the functional, ecological, and evolutionary significance of sound in the life of plants is expected to prompt a reinterpretation of our understanding of these organisms and galvanize the emergence of novel concepts and perspectives on their communicative complexity. PMID:23754865
What is a melody? On the relationship between pitch and brightness of timbre.
Cousineau, Marion; Carcagno, Samuele; Demany, Laurent; Pressnitzer, Daniel
2013-01-01
Previous studies showed that the perceptual processing of sound sequences is more efficient when the sounds vary in pitch than when they vary in loudness. We show here that sequences of sounds varying in brightness of timbre are processed with the same efficiency as pitch sequences. The sounds used consisted of two simultaneous pure tones one octave apart, and the listeners' task was to make same/different judgments on pairs of sequences varying in length (one, two, or four sounds). In one condition, brightness of timbre was varied within the sequences by changing the relative level of the two pure tones. In other conditions, pitch was varied by changing fundamental frequency, or loudness was varied by changing the overall level. In all conditions, only two possible sounds could be used in a given sequence, and these two sounds were equally discriminable. When sequence length increased from one to four, discrimination performance decreased substantially for loudness sequences, but to a smaller extent for brightness sequences and pitch sequences. In the latter two conditions, sequence length had a similar effect on performance. These results suggest that the processes dedicated to pitch and brightness analysis, when probed with a sequence-discrimination task, share unexpected similarities.
Humpback whale bioacoustics: From form to function
NASA Astrophysics Data System (ADS)
Mercado, Eduardo, III
This thesis investigates how humpback whales produce, perceive, and use sounds from a comparative and computational perspective. Biomimetic models are developed within a systems-theoretic framework and then used to analyze the properties of humpback whale sounds. First, sound transmission is considered in terms of possible production mechanisms and the propagation characteristics of shallow water environments frequented by humpback whales. A standard source-filter model (used to describe human sound production) is shown to be well suited for characterizing sound production by humpback whales. Simulations of sound propagation based on normal mode theory reveal that optimal frequencies for long range propagation are higher than the frequencies used most often by humpbacks, and that sounds may contain spectral information indicating how far they have propagated. Next, sound reception is discussed. A model of human auditory processing is modified to emulate humpback whale auditory processing as suggested by cochlear anatomical dimensions. This auditory model is used to generate visual representations of humpback whale sounds that more clearly reveal what features are likely to be salient to listening whales. Additionally, the possibility that an unusual sensory organ (the tubercle) plays a role in acoustic processing is assessed. Spatial distributions of tubercles are described that suggest tubercles may be useful for localizing sound sources. Finally, these models are integrated with self-organizing feature maps to create a biomimetic sound classification system, and a detailed analysis of individual sounds and sound patterns in humpback whale 'songs' is performed. This analysis provides evidence that song sounds and sound patterns vary substantially in terms of detectability and propagation potential, suggesting that they do not all serve the same function. New quantitative techniques are also presented that allow for more objective characterizations of the long term acoustic features of songs. The quantitative framework developed in this thesis provides a basis for theoretical consideration of how humpback whales (and other cetaceans) might use sound. Evidence is presented suggesting that vocalizing humpbacks could use sounds not only to convey information to other whales, but also to collect information about other whales. In particular, it is suggested that some sounds currently believed to be primarily used as communicative signals, might be primarily used as sonar signals. This theoretical framework is shown to be generalizable to other baleen whales and to toothed whales.
Psychophysiological acoustics of indoor sound due to traffic noise during sleep
NASA Astrophysics Data System (ADS)
Tulen, J. H. M.; Kumar, A.; Jurriëns, A. A.
1986-10-01
The relation between the physical characteristics of sound and an individual's perception of it as annoyance is complex and unclear. Sleep disturbance by sound is manifested in the physiological responses to the sound stimuli and in the quality of sleep perceived in the morning. Both may result in deterioration of functioning during wakefulness. Therefore, psychophysiological responses to noise during sleep should be studied for the evaluation of the efficacy of sound insulation. Nocturnal sleep and indoor sound level were recorded in the homes of 12 subjects living along a highway with high traffic density. Double glazing sound insulation was used to create two experimental conditions: low insulation and high insulation. Twenty recordings were made per subject, ten in each condition. During the nights with low insulation the quality of sleep was so low that both performance and mood were negatively affected. The enhancement of sound insulation was not effective enough to increase the restorative effects of sleep. The transient and peaky characteristics of traffic sound were also found to result in non-adaptive physiological responses during sleep. Sound insulation did have an effect on noise peak characteristics such as peak level, peak duration and slope. However, the number of sound peaks was found to be the same in both conditions. The relation of the sound peaks detected in the indoor recorded sound level signal to characteristics of passing vehicles was established, indicating that the sound peaks causing the psychophysiological disturbances during sleep were generated by the passing vehicles. Evidence is presented to show that the reduction in sound level is not a good measure of the efficacy of sound insulation. The parameters of the sound peaks, as described in this paper, are a better representation of the psychophysiological efficacy of sound insulation.
Bertucci, Frédéric; Parmentier, Eric; Lecellier, Gaël; Hawkins, Anthony D.; Lecchini, David
2016-01-01
Different marine habitats are characterised by different soundscapes. How or which differences may be representative of the habitat characteristics and/or community structure remains, however, to be explored. A growing aim in passive acoustics is to use soundscapes to obtain information about a habitat and its changes. In this study we successfully tested the potential of two acoustic indices, i.e. the average sound pressure level and the acoustic complexity index based on the frequency spectrum. Inside and outside marine protected areas of Moorea Island (French Polynesia), sound pressure level was positively correlated with the characteristics of the substratum, and acoustic complexity was positively correlated with fish diversity. This clearly shows that soundscapes can be used to evaluate the acoustic features of marine protected areas, which presented a significantly higher ambient sound pressure level and were more acoustically complex than non-protected areas. This study further emphasizes the importance of acoustics as a tool in the monitoring of marine environments and in the elaboration and management of future conservation plans. PMID:27629650
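The acoustic complexity index can be sketched as follows, using the commonly cited formulation (per-frequency-bin sums of adjacent-frame intensity differences, normalised by total intensity); the window length and test signals are assumptions for illustration only.

```python
# Minimal acoustic-complexity-index (ACI) sketch on a magnitude spectrogram.
import numpy as np
from scipy.signal import spectrogram

def acoustic_complexity_index(x, fs, nperseg=512):
    f, t, sxx = spectrogram(x, fs=fs, nperseg=nperseg)
    diffs = np.abs(np.diff(sxx, axis=1)).sum(axis=1)   # per frequency bin
    totals = sxx.sum(axis=1) + 1e-12
    return float((diffs / totals).sum())

fs = 22050
t = np.arange(fs * 2) / fs
steady = np.sin(2 * np.pi * 1000 * t)                   # nearly constant spectrum: low ACI
chirpy = np.sin(2 * np.pi * (500 + 400 * np.sin(2 * np.pi * 2 * t)) * t)  # varying: higher ACI
print(acoustic_complexity_index(steady, fs), acoustic_complexity_index(chirpy, fs))
```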
Endo, Hiroshi; Ino, Shuichi; Fujisaki, Waka
2017-09-01
Because chewing sounds influence perceived food textures, the unpleasant textures of texture-modified diets might be improved by chewing sound modulation. Additionally, since inhomogeneous food properties increase perceived sensory intensity, the effects of chewing sound modulation might depend on inhomogeneity. This study examined the influence of texture inhomogeneity on the effects of chewing sound modulation. Three kinds of nursing care foods in two food process types (minced-like and puréed-like foods for inhomogeneous and homogeneous textures, respectively) were used as sample foods. A pseudo-chewing sound presentation system, using electromyogram signals, was used to modulate chewing sounds. Thirty healthy elderly adults took part in the experiment. In two conditions, with and without the pseudo-chewing sound, participants rated the taste, texture, and evoked feelings in response to the sample foods. The results showed that inhomogeneity strongly influenced the perception of food texture. Regarding the effects of the pseudo-chewing sound, taste was less influenced, the perceived food texture tended to change in the minced-like foods, and evoked feelings changed in both food process types. Though there were some food-dependent differences in the effects of the pseudo-chewing sound, its presentation was more effective for foods with an inhomogeneous texture. In addition, the pseudo-chewing sound may have positively influenced feelings.
Kauser, H; Roy, S; Pal, A; Sreenivas, V; Mathur, R; Wadhwa, S; Jain, S
2011-01-01
Early experience has a profound influence on brain development, and the modulation of prenatal perceptual learning by external environmental stimuli has been shown in birds, rodents and mammals. In the present study, the effect of prenatal stimulation with complex rhythmic music on postnatal spatial learning, memory and isolation stress was observed. Auditory stimulation with either music or species-specific sounds, or no stimulation (control), was provided to separate sets of fertilized eggs from day 10 of incubation. Following hatching, the chicks at ages 24, 72 and 120 h were tested on a T-maze for spatial learning, and memory of the learnt task was assessed 24 h after training. In the posthatch chicks at all ages, plasma corticosterone levels were estimated following 10 min of isolation. The chicks of all ages in the three groups took less time (p < 0.001) to navigate the maze over the three trials, thereby showing an improvement with training. In both sound-stimulated groups, the total time taken to reach the target decreased significantly (p < 0.01) in comparison with the unstimulated control group, indicating facilitation of spatial learning. This decline was, however, greater at 24 h than at later posthatch ages. When tested for memory 24 h after training, only the music-stimulated chicks at posthatch age 24 h took a significantly longer (p < 0.001) time to traverse the maze, suggesting a temporary impairment in their retention of the learnt task. In both sound-stimulated groups at 24 h, plasma corticosterone levels were significantly decreased (p < 0.001) and then increased at 72 h (p < 0.001) and 120 h, which may contribute to the differential response in spatial learning. Thus, prenatal auditory stimulation with either species-specific sounds or complex rhythmic music facilitates spatial learning, though the music stimulation transiently impairs postnatal memory.
Lugli, Marco; Fine, Michael L
2007-11-01
The most sensitive hearing and peak frequencies of courtship calls of the stream goby, Padogobius martensii, fall within a quiet window at around 100 Hz in the ambient noise spectrum. Acoustic pressure was previously measured although Padogobius likely responds to particle motion. In this study a combination pressure (p) and particle velocity (u) detector was utilized to describe ambient noise of the habitat, the characteristics of the goby's sounds and their attenuation with distance. The ambient noise (AN) spectrum is generally similar for p and u (including the quiet window at noisy locations), although the energy distribution of the u spectrum is shifted up by 50-100 Hz. The energy distribution of the goby's sounds is similar for the p and u spectra of the Tonal sound, whereas the pulse-train sound exhibits larger p-u differences. Transmission loss was high for both sound p and u: energy decays 6-10 dB per 10 cm, and the sound p/u ratio does not change with distance from the source in the nearfield. The measurement of particle velocity of stream AN and P. martensii sounds indicates that this species is well adapted to communicate acoustically in a complex noisy shallow-water environment.
External Acoustic Liners for Multi-Functional Aircraft Noise Reduction
NASA Technical Reports Server (NTRS)
Jones, Michael G. (Inventor); Czech, Michael J. (Inventor); Howerton, Brian M. (Inventor); Thomas, Russell H. (Inventor); Nark, Douglas M. (Inventor)
2017-01-01
Acoustic liners for aircraft noise reduction include one or more chambers that are configured to provide a pressure-release surface such that the engine noise generation process is inhibited and/or absorb sound by converting the sound into heat energy. The size and shape of the chambers can be selected to inhibit the noise generation process and/or absorb sound at selected frequencies.
Language, music, syntax and the brain.
Patel, Aniruddh D
2003-07-01
The comparative study of music and language is drawing an increasing amount of research interest. Like language, music is a human universal involving perceptually discrete elements organized into hierarchically structured sequences. Music and language can thus serve as foils for each other in the study of brain mechanisms underlying complex sound processing, and comparative research can provide novel insights into the functional and neural architecture of both domains. This review focuses on syntax, using recent neuroimaging data and cognitive theory to propose a specific point of convergence between syntactic processing in language and music. This leads to testable predictions, including the prediction that syntactic comprehension problems in Broca's aphasia are not selective to language but influence music perception as well.
Spacecraft Internal Acoustic Environment Modeling
NASA Technical Reports Server (NTRS)
Chu, Shao-Sheng R.; Allen, Christopher S.
2010-01-01
Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and to predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources, later with experimental data. This paper describes the implementation of acoustic modeling for design purposes by incrementally increasing model fidelity and validating the accuracy of the model while predicting the noise of sources under various conditions. During FY 07, a simple-geometry Statistical Energy Analysis (SEA) model was developed and validated using a physical mockup and acoustic measurements. A process for modeling the effects of absorptive wall treatments and the resulting reverberation environment were developed. During FY 08, a model with more complex and representative geometry of the Orion Crew Module (CM) interior was built, and noise predictions based on input noise sources were made. A corresponding physical mockup was also built. Measurements were made inside this mockup, and comparisons were made with the model and showed excellent agreement. During FY 09, the fidelity of the mockup and corresponding model were increased incrementally by including a simple ventilation system. The airborne noise contribution of the fans was measured using a sound intensity technique, since the sound power levels were not known beforehand. This is opposed to earlier studies where Reference Sound Sources (RSS) with known sound power level were used. Comparisons of the modeling result with the measurements in the mockup showed excellent results. During FY 10, the fidelity of the mockup and the model were further increased by including an ECLSS (Environmental Control and Life Support System) wall, associated closeout panels, and the gap between ECLSS wall and mockup wall. The effect of sealing the gap and adding sound absorptive treatment to ECLSS wall were also modeled and validated.
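The SEA model itself is not reproduced here, but the role of absorptive treatment can be illustrated with a much simpler diffuse-field estimate: the reverberant sound pressure level follows from the source sound power level and the room constant. The surface area, source level, and absorption coefficients below are hypothetical.

```python
# Simple diffuse-field estimate (not the SEA model from the paper) showing how
# added absorption lowers the reverberant level: Lp = Lw + 10*log10(4 / R),
# with room constant R = S*a / (1 - a).
import math

def reverberant_spl(sound_power_level_db, surface_area_m2, avg_absorption):
    room_constant = surface_area_m2 * avg_absorption / (1.0 - avg_absorption)
    return sound_power_level_db + 10.0 * math.log10(4.0 / room_constant)

# Hypothetical mockup: 60 m^2 of interior surface, fan source of Lw = 70 dB.
for alpha in (0.05, 0.20, 0.40):   # bare walls vs. increasing treatment
    print(alpha, round(reverberant_spl(70.0, 60.0, alpha), 1), "dB")
```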
NASA Astrophysics Data System (ADS)
Wright, Christopher G.
2011-12-01
This research examines the intellectual and linguistic resources that a group of African American boys brought to the study of the science of sound and the practice of representation. By taking a resource-rich view of the boys' linguistic and representational practices, my objective is to investigate children's abilities in producing, using, critiquing, and modifying representations. Specifically, this research explores and identifies the varieties of resources that African American boys utilize in developing scientific understanding. Using transcripts from group sessions, as well as the drawings produced during these sessions, I used discourse analysis to explore the boys' linguistic interactions during the critique of drawings, with a focus on their manipulation of line segments, in order to explore their representational competencies. Analysis of the transcripts and the boys' drawings revealed several important findings. First, elements of Signifying were instrumental in the group's collective exploration of each other's drawings and of the ideas of sound transmission represented in those drawings. Thus, I found that the boys' use of Signifying was key to their engagement in the practice of critique. Second, the boys' ideas regarding sound transmission were not fixed, stable misconceptions that could be "fixed" through instruction. Instead, I believe that their explanations and drawings were generated from a web of ideas regarding sound transmission. Lastly, the boys exhibited a form of meta-representational competency that included the production, modification, and manipulation of notations used to represent sound transmission. Despite this competency, the negotiation process necessary for constructing the meaning of a drawing highlighted the complexities of developing a conventional understanding or meaning for representations. Additional research is necessary to explore the intellectual and linguistic resources that children from communities of color bring to the science classroom. The objective of this research was not to highlight a single intellectual and linguistic resource that educators and educational researchers could expect to witness when working with African American boys. Instead, the objective was to highlight an approach to teaching and learning that investigates the resources that children from communities of color have developed within their communities and from their varied life experiences, resources that may be conducive to scientific exploration and language. The recognition that all children bring a variety of resources that can be utilized and further developed to expand their understanding of scientific concepts and representational practices must be continually pursued if we are to begin addressing inequitable access to science opportunities.
Earthquake forecasting during the complex Amatrice-Norcia seismic sequence
Marzocchi, Warner; Taroni, Matteo; Falcone, Giuseppe
2017-01-01
Earthquake forecasting is the ultimate challenge for seismologists, because it condenses the scientific knowledge about the earthquake occurrence process, and it is an essential component of any sound risk mitigation planning. It is commonly assumed that, in the short term, trustworthy earthquake forecasts are possible only for typical aftershock sequences, where the largest shock is followed by many smaller earthquakes that decay with time according to the Omori power law. We show that the current Italian operational earthquake forecasting system issued statistically reliable and skillful space-time-magnitude forecasts of the largest earthquakes during the complex 2016–2017 Amatrice-Norcia sequence, which is characterized by several bursts of seismicity and a significant deviation from the Omori law. This capability to deliver statistically reliable forecasts is an essential component of any program to assist public decision-makers and citizens in the challenging risk management of complex seismic sequences. PMID:28924610
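For reference, the decay law that the sequence is said to deviate from is the modified Omori (Omori-Utsu) relation; a small sketch with illustrative parameter values follows.

```python
# Modified Omori (Omori-Utsu) aftershock rate: n(t) = K / (c + t)^p.
# Parameter values below are illustrative, not fitted to the sequence.
def omori_rate(t_days, K=100.0, c=0.05, p=1.1):
    return K / (c + t_days) ** p

for d in (0.1, 1, 10, 100):
    print(f"t = {d:>5} days  ->  {omori_rate(d):8.2f} events/day")
```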
Pinaud, Raphael; Terleph, Thomas A.; Tremere, Liisa A.; Phan, Mimi L.; Dagostin, André A.; Leão, Ricardo M.; Mello, Claudio V.; Vicario, David S.
2008-01-01
The role of GABA in the central processing of complex auditory signals is not fully understood. We have studied the involvement of GABAA-mediated inhibition in the processing of birdsong, a learned vocal communication signal requiring intact hearing for its development and maintenance. We focused on caudomedial nidopallium (NCM), an area analogous to parts of the mammalian auditory cortex with selective responses to birdsong. We present evidence that GABAA-mediated inhibition plays a pronounced role in NCM's auditory processing of birdsong. Using immunocytochemistry, we show that approximately half of NCM's neurons are GABAergic. Whole cell patch-clamp recordings in a slice preparation demonstrate that, at rest, spontaneously active GABAergic synapses inhibit excitatory inputs onto NCM neurons via GABAA receptors. Multi-electrode electrophysiological recordings in awake birds show that local blockade of GABAA-mediated inhibition in NCM markedly affects the temporal pattern of song-evoked responses in NCM without modifications in frequency tuning. Surprisingly, this blockade increases the phasic and largely suppresses the tonic response component, reflecting dynamic relationships of inhibitory networks that could include disinhibition. Thus processing of learned natural communication sounds in songbirds, and possibly other vocal learners, may depend on complex interactions of inhibitory networks. PMID:18480371
Selective attention in normal and impaired hearing.
Shinn-Cunningham, Barbara G; Best, Virginia
2008-12-01
A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention.
The neural basis of involuntary episodic memories.
Hall, Shana A; Rubin, David C; Miles, Amanda; Davis, Simon W; Wing, Erik A; Cabeza, Roberto; Berntsen, Dorthe
2014-10-01
Voluntary episodic memories require an intentional memory search, whereas involuntary episodic memories come to mind spontaneously without conscious effort. Cognitive neuroscience has largely focused on voluntary memory, leaving the neural mechanisms of involuntary memory largely unknown. We hypothesized that, because the main difference between voluntary and involuntary memory is the controlled retrieval processes required by the former, there would be greater frontal activity for voluntary than involuntary memories. Conversely, we predicted that other components of the episodic retrieval network would be similarly engaged in the two types of memory. During encoding, all participants heard sounds, half paired with pictures of complex scenes and half presented alone. During retrieval, paired and unpaired sounds were presented, panned to the left or to the right. Participants in the involuntary group were instructed to indicate the spatial location of the sound, whereas participants in the voluntary group were asked to additionally recall the pictures that had been paired with the sounds. All participants reported the incidence of their memories in a postscan session. Consistent with our predictions, voluntary memories elicited greater activity in dorsal frontal regions than involuntary memories, whereas other components of the retrieval network, including medial-temporal, ventral occipitotemporal, and ventral parietal regions were similarly engaged by both types of memories. These results clarify the distinct role of dorsal frontal and ventral occipitotemporal regions in predicting strategic retrieval and recalled information, respectively, and suggest that, although there are neural differences in retrieval, involuntary memories share neural components with established voluntary memory systems.
Determination of heart rate variability with an electronic stethoscope.
Kamran, Haroon; Naggar, Isaac; Oniyuke, Francisca; Palomeque, Mercy; Chokshi, Priya; Salciccioli, Louis; Stewart, Mark; Lazar, Jason M
2013-02-01
Heart rate variability (HRV) is widely used to characterize cardiac autonomic function by measuring beat-to-beat alterations in heart rate. Decreased HRV has been found predictive of worse cardiovascular (CV) outcomes. HRV is determined from time intervals between QRS complexes recorded by electrocardiography (ECG) for several minutes to 24 h. Although cardiac auscultation with a stethoscope is performed routinely on patients, the human ear cannot detect heart sound time intervals. The electronic stethoscope digitally processes heart sounds, from which cardiac time intervals can be obtained. Accordingly, the objective of this study was to determine the feasibility of obtaining HRV from electronically recorded heart sounds. We prospectively studied 50 subjects with and without CV risk factors/disease and simultaneously recorded single lead ECG and heart sounds for 2 min. Time and frequency measures of HRV were calculated from R-R and S1-S1 intervals and were compared using intra-class correlation coefficients (ICC). The majority of the indices were strongly correlated (ICC 0.73-1.0), while the remaining indices were moderately correlated (ICC 0.56-0.63). In conclusion, we found HRV measures determined from S1-S1 are in agreement with those determined by single lead ECG, and we demonstrate and discuss differences in the measures in detail. In addition to characterizing cardiac murmurs and time intervals, the electronic stethoscope holds promise as a convenient low-cost tool to determine HRV in the hospital and outpatient settings as a practical extension of the physical examination.
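The time-domain HRV measures mentioned above apply identically to S1-S1 and R-R series; a brief sketch with a hypothetical interval series follows.

```python
# Time-domain HRV measures from beat-to-beat intervals; the same code applies
# whether the intervals are ECG R-R or stethoscope S1-S1 (values in seconds).
import numpy as np

def hrv_time_domain(intervals_s):
    ibi = np.asarray(intervals_s, dtype=float)
    sdnn = ibi.std(ddof=1)                          # overall variability
    rmssd = np.sqrt(np.mean(np.diff(ibi) ** 2))     # beat-to-beat variability
    return {"mean_hr_bpm": 60.0 / ibi.mean(),
            "sdnn_ms": sdnn * 1e3,
            "rmssd_ms": rmssd * 1e3}

# Hypothetical short series of S1-S1 intervals
s1_s1 = [0.82, 0.84, 0.80, 0.86, 0.83, 0.85, 0.81, 0.84]
print(hrv_time_domain(s1_s1))
```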
Tiitinen, Hannu; Salminen, Nelli H; Palomäki, Kalle J; Mäkinen, Ville T; Alku, Paavo; May, Patrick J C
2006-03-20
In an attempt to delineate the assumed 'what' and 'where' processing streams, we studied the processing of spatial sound in the human cortex by using magnetoencephalography in the passive and active recording conditions and two kinds of spatial stimuli: individually constructed, highly realistic spatial (3D) stimuli and stimuli containing interaural time difference (ITD) cues only. The auditory P1m, N1m, and P2m responses of the event-related field were found to be sensitive to the direction of sound source in the azimuthal plane. In general, the right-hemispheric responses to spatial sounds were more prominent than the left-hemispheric ones. The right-hemispheric P1m and N1m responses peaked earlier for sound sources in the contralateral than for sources in the ipsilateral hemifield and the peak amplitudes of all responses reached their maxima for contralateral sound sources. The amplitude of the right-hemispheric P2m response reflected the degree of spatiality of sound, being twice as large for the 3D than ITD stimuli. The results indicate that the right hemisphere is specialized in the processing of spatial cues in the passive recording condition. Minimum current estimate (MCE) localization revealed that temporal areas were activated both in the active and passive condition. This initial activation, taking place at around 100 ms, was followed by parietal and frontal activity at 180 and 200 ms, respectively. The latter activations, however, were specific to attentional engagement and motor responding. This suggests that parietal activation reflects active responding to a spatial sound rather than auditory spatial processing as such.
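The ITD cue referred to above can be estimated from binaural signals by cross-correlation; the sketch below uses a toy noise stimulus with an assumed 300-microsecond delay and ignores the level and spectral cues present in the full 3D stimuli.

```python
# Estimating the interaural time difference (ITD) by cross-correlating
# left- and right-ear signals (toy broadband noise stimulus).
import numpy as np

fs = 44100
true_itd_s = 300e-6                              # assumed ITD of 300 microseconds
delay = int(round(true_itd_s * fs))
noise = np.random.randn(fs // 10)
left = noise
right = np.concatenate([np.zeros(delay), noise[:len(noise) - delay]])

lags = np.arange(-len(noise) + 1, len(noise))
xcorr = np.correlate(right, left, mode="full")   # peak lag gives the delay
est_itd_s = lags[np.argmax(xcorr)] / fs
print(f"estimated ITD: {est_itd_s * 1e6:.0f} us")
```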
Gap prepulse inhibition of the auditory late response in healthy subjects.
Ku, Yunseo; Ahn, Joong Woo; Kwon, Chiheon; Suh, Myung-Whan; Lee, Jun Ho; Oh, Seung Ha; Kim, Hee Chan
2015-11-01
The gap-startle paradigm has been used as a behavioral method for tinnitus screening in animal studies. This study aimed to investigate gap prepulse inhibition (GPI) of the auditory late response (ALR) as the objective response of the gap-intense sound paradigm in humans. ALRs were recorded in response to gap-intense and no-gap-intense sound stimuli in 27 healthy subjects. The amplitudes of the baseline-to-peak (N1, P2, and N2) and the peak-to-peak (N1P2 and P2N2) were compared between the two averaged ALRs. The variations in the inhibition ratios of N1P2 and P2N2 during the experiment were analyzed with increasing stimulus repetitions. The effect of stimulus parameter adjustments on GPI ratios was evaluated. No-gap-intense sound stimuli elicited greater peak amplitudes than gap-intense sound stimuli, and significant differences were found across all peaks. The overall mean inhibition ratios were significantly lower than 1.0, where the value 1.0 indicates that there were no differences between gap-intense and no-gap-intense sound responses. The initial decline in GPI ratios was shown in the N1P2 and P2N2 complexes, and this reduction was nearly complete after 100 stimulus repetitions. Significant effects of gap length and interstimulus interval on GPI ratios were observed. We found significant inhibition of ALR peak amplitudes when performing the gap-intense sound paradigm in healthy subjects. The N1P2 complex represented GPI well in terms of suppression degree and test-retest reliability. Our findings offer practical information for the comparative study of healthy subjects and tinnitus patients using the gap-intense sound paradigm with the ALR.
Vilela, Nadia; Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Sanches, Seisse Gabriela Gandolfi; Wertzner, Haydée Fiszbein; Carvallo, Renata Mota Mamede
2016-02-01
To identify a cutoff value based on the Percentage of Consonants Correct-Revised index that could indicate the likelihood of a child with a speech-sound disorder also having a (central) auditory processing disorder. Language, audiological and (central) auditory processing evaluations were administered. The participants were 27 subjects with speech-sound disorders aged 7 years to 10 years and 11 months, who were divided into two groups according to their (central) auditory processing evaluation results. When a (central) auditory processing disorder was present in association with a speech disorder, the children tended to have lower scores on phonological assessments. A greater severity of speech disorder was related to a greater probability of the child having a (central) auditory processing disorder. The use of a cutoff value for the Percentage of Consonants Correct-Revised index successfully distinguished between children with and without a (central) auditory processing disorder. The severity of speech-sound disorder in children was influenced by the presence of (central) auditory processing disorder. The attempt to identify a cutoff value based on a severity index was successful.
Pervasive Sound Sensing: A Weakly Supervised Training Approach.
Kelly, Daniel; Caulfield, Brian
2016-01-01
Modern smartphones present an ideal device for pervasive sensing of human behavior. Microphones have the potential to reveal key information about a person's behavior. However, they have been utilized to a significantly lesser extent than other smartphone sensors in the context of human behavior sensing. We postulate that, in order for microphones to be useful in behavior sensing applications, the analysis techniques must be flexible and allow easy modification of the types of sounds to be sensed. A simplification of the training data collection process could allow a more flexible sound classification framework. We hypothesize that detailed training, a prerequisite for the majority of sound sensing techniques, is not necessary and that a significantly less detailed and time consuming data collection process can be carried out, allowing even a nonexpert to conduct the collection, labeling, and training process. To test this hypothesis, we implement a diverse density-based multiple instance learning framework, to identify a target sound, and a bag trimming algorithm, which, using the target sound, automatically segments weakly labeled sound clips to construct an accurate training set. Experiments reveal that our hypothesis is a valid one and results show that classifiers, trained using the automatically segmented training sets, were able to accurately classify unseen sound samples with accuracies comparable to supervised classifiers, achieving an average F-measure of 0.969 and 0.87 for two weakly supervised datasets.
Letter-Sound Reading: Teaching Preschool Children Print-to-Sound Processing
ERIC Educational Resources Information Center
Wolf, Gail Marie
2016-01-01
This intervention study investigated the growth of letter sound reading and growth of consonant-vowel-consonant (CVC) word decoding abilities for a representative sample of 41 US children in preschool settings. Specifically, the study evaluated the effectiveness of a 3-step letter-sound teaching intervention in teaching preschool children to…
What the Toadfish Ear Tells the Toadfish Brain About Sound.
Edds-Walton, Peggy L
2016-01-01
Of the three paired otolithic endorgans in the ear of teleost fishes, the saccule is the one most often demonstrated to have a major role in encoding frequencies of biologically relevant sounds. The toadfish saccule also encodes sound level and sound source direction in the phase-locked activity conveyed via auditory afferents to nuclei of the ipsilateral octaval column in the medulla. Although paired auditory receptors are present in teleost fishes, binaural processes were believed to be unimportant due to the speed of sound in water and the acoustic transparency of the tissues in water. In contrast, there are behavioral and anatomical data that support binaural processing in fishes. Studies in the toadfish combined anatomical tract-tracing and physiological recordings from identified sites along the ascending auditory pathway to document response characteristics at each level. Binaural computations in the medulla and midbrain sharpen the directional information provided by the saccule. Furthermore, physiological studies in the central nervous system indicated that encoding frequency, sound level, temporal pattern, and sound source direction are important components of what the toadfish ear tells the toadfish brain about sound.
Aquatic Acoustic Metrics Interface Utility for Underwater Sound Monitoring and Analysis
Ren, Huiying; Halvorsen, Michele B.; Deng, Zhiqun Daniel; Carlson, Thomas J.
2012-01-01
Fishes and marine mammals may suffer a range of potential effects from exposure to intense underwater sound generated by anthropogenic activities such as pile driving, shipping, sonars, and underwater blasting. Several underwater sound recording (USR) devices have been built to acquire samples of the underwater sound generated by anthropogenic activities. Software becomes indispensable for processing and analyzing the audio files recorded by these USRs. In this paper, we provide a detailed description of a new software package, the Aquatic Acoustic Metrics Interface (AAMI), specifically designed for analysis of underwater sound recordings to provide data in metrics that facilitate evaluation of the potential impacts of the sound on aquatic animals. In addition to the basic functions, such as loading and editing audio files recorded by USRs and batch processing of sound files, the software utilizes recording system calibration data to compute important parameters in physical units. The software also facilitates comparison of the noise sound sample metrics with biological measures such as audiograms of the sensitivity of aquatic animals to the sound, integrating various components into a single analytical frame. The features of the AAMI software are discussed, and several case studies are presented to illustrate its functionality. PMID:22969353
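As an example of the physical-unit conversion the AAMI performs from calibration data, the sketch below converts a recorded waveform to sound pressure level in dB re 1 µPa; the hydrophone sensitivity and amplifier gain are hypothetical, and the function is not part of the AAMI itself.

```python
# Converting a raw recorded waveform to sound pressure and SPL using the
# recording system's calibration. Sensitivity and gain values are hypothetical.
import numpy as np

def spl_db_re_1upa(samples_v, sensitivity_db_re_1v_per_upa=-180.0, gain_db=20.0):
    """Return RMS sound pressure level in dB re 1 uPa."""
    volts = np.asarray(samples_v, dtype=float) / (10 ** (gain_db / 20.0))
    pressure_upa = volts / (10 ** (sensitivity_db_re_1v_per_upa / 20.0))
    rms = np.sqrt(np.mean(pressure_upa ** 2))
    return 20.0 * np.log10(rms)

fs = 48000
t = np.arange(fs) / fs
fake_recording = 0.01 * np.sin(2 * np.pi * 200 * t)   # volts at the recorder input
print(round(spl_db_re_1upa(fake_recording), 1), "dB re 1 uPa")
```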
NASA Astrophysics Data System (ADS)
Perez de Arce, Jose
2002-11-01
Studies of ritual celebrations in central Chile conducted over the past 15 years show that the spatial component of sound is a crucial component of the whole. The sonic compositions of these rituals generate complex musical structures that the author has termed 'multi-orchestral polyphonies.' Their origins have been documented from archaeological remains in a vast region of the southern Andes (southern Peru, Bolivia, northern Argentina, north-central Chile). The practice consists of a combination of dance, space walk-through, spatial extension, multiple movements between listener and orchestra, and multiple relations between ritual and ambient sounds. The characteristics of these observables reveal a complex schematic relation between space and sound. This schema can be used as a valid hypothesis for the study of pre-Hispanic uses of acoustic ritual space. The acoustic features observed in this study are common in Andean ritual and, to some extent, are seen in Mesoamerica as well.
Residual Error Based Anomaly Detection Using Auto-Encoder in SMD Machine Sound.
Oh, Dong Yul; Yun, Il Dong
2018-04-24
Detecting an anomaly or an abnormal situation from given noise is highly useful in an environment where constant verification and monitoring of a machine is required. As deep learning algorithms are further developed, current studies have focused on this problem. However, there are too many variables to define anomalies, and human annotation of a large collection of abnormal data labeled at the class level is very labor-intensive. In this paper, we propose to detect abnormal operation sounds or outliers in a very complex machine while reducing the data-driven annotation cost. The architecture of the proposed model is based on an auto-encoder, and it uses the residual error, which reflects reconstruction quality, to identify the anomaly. We assess our model using Surface-Mounted Device (SMD) machine sound, which is very complex, as experimental data, and state-of-the-art performance is successfully achieved for anomaly detection.
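A compact illustration of residual-error anomaly scoring follows. It is not the paper's network: scikit-learn's MLPRegressor stands in for the auto-encoder, the feature frames are random stand-ins, and the threshold is simply the 99th percentile of the reconstruction error on normal data.

```python
# Residual-error anomaly scoring with a small auto-encoder stand-in.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 32))           # stand-in feature frames
anomalous = normal[:50] + rng.normal(0.0, 3.0, size=(50, 32))

ae = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
ae.fit(normal, normal)                                    # train to reconstruct the input

def residual_error(model, X):
    return np.mean((model.predict(X) - X) ** 2, axis=1)   # per-frame reconstruction error

threshold = np.percentile(residual_error(ae, normal), 99)
scores = residual_error(ae, anomalous)
print(f"flagged {np.mean(scores > threshold):.0%} of anomalous frames")
```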
NASA Astrophysics Data System (ADS)
Guo, Wenjie; Li, Tianyun; Zhu, Xiang; Miao, Yuyue
2018-05-01
The sound-structure coupling problem of a cylindrical shell submerged in a quarter water domain is studied. A semi-analytical method based on the double wave reflection method and the Graf's addition theorem is proposed to solve the vibration and acoustic radiation of an infinite cylindrical shell excited by an axially uniform harmonic line force, in which the acoustic boundary conditions consist of a free surface and a vertical rigid surface. The influences of the complex acoustic boundary conditions on the vibration and acoustic radiation of the cylindrical shell are discussed. It is found that the complex acoustic boundary has crucial influence on the vibration of the cylindrical shell when the cylindrical shell approaches the boundary, and the influence tends to vanish when the distances between the cylindrical shell and the boundaries exceed certain values. However, the influence of the complex acoustic boundary on the far-field sound pressure of the cylindrical shell cannot be ignored. The far-field acoustic directivity of the cylindrical shell varies with the distances between the cylindrical shell and the boundaries, besides the driving frequency. The work provides more understanding on the vibration and acoustic radiation behaviors of cylindrical shells with complex acoustic boundary conditions.
Sound Explorations from the Ages of 10 to 37 Months: The Ontogenesis of Musical Conducts
ERIC Educational Resources Information Center
Delalande, Francois; Cornara, Silvia
2010-01-01
One of the forms of first musical conduct is the exploration of sound sources. When young children produce sounds with any object, these sounds may surprise them and so they make the sounds again--not exactly the same, but introducing some variation. A process of repetition with slight changes is set in motion which can be analysed, as did Piaget,…
Pectoral sound generation in the blue catfish Ictalurus furcatus.
Mohajer, Yasha; Ghahramani, Zachary; Fine, Michael L
2015-03-01
Catfishes produce pectoral stridulatory sounds by "jerk" movements that rub ridges on the dorsal process against the cleithrum. We recorded sound synchronized with high-speed video to investigate the hypothesis that blue catfish Ictalurus furcatus produce sounds by a slip-stick mechanism, previously described only in invertebrates. Blue catfish produce a variably paced series of sound pulses during abduction sweeps (pulsers) although some individuals (sliders) form longer duration sound units (slides) interspersed with pulses. Typical pulser sounds are evoked by short 1-2 ms movements with a rotation of 2°-3°. Jerks excite sounds that increase in amplitude after motion stops, suggesting constructive interference, which decays before the next jerk. Longer contact of the ridges produces a more steady-state sound in slides. Pulse pattern during stridulation is determined by pauses without movement: the spine moves during about 14 % of the abduction sweep in pulsers (~45 % in sliders) although movement appears continuous to the human eye. Spine rotation parameters do not predict pulse amplitude, but amplitude correlates with pause duration suggesting that force between the dorsal process and cleithrum increases with longer pauses. Sound production, stimulated by a series of rapid movements that set the pectoral girdle into resonance, is caused by a slip-stick mechanism.
MythBusters, Musicians, and MP3 Players: A Middle School Sound Study
ERIC Educational Resources Information Center
Putney, Ann
2011-01-01
Create your own speakers for an MP3 player while exploring the science of sound. Review of science notebooks, students' intriguing cabinet designs, and listening to students talk with a musician about the physics of an instrument show that complex concepts are being absorbed and extended with each new iteration. Science that matters to students…
"Sounds of Intent", Phase 2: Gauging the Music Development of Children with Complex Needs
ERIC Educational Resources Information Center
Ockelford, A.; Welch, G.; Jewell-Gore, L.; Cheng, E.; Vogiatzoglou, A.; Himonides, E.
2011-01-01
This article reports the latest phase of research in the "Sounds of intent" project, which is seeking, as a long-term goal, to map musical development in children and young people with severe, or profound and multiple learning difficulties (SLD or PMLD). Previous exploratory work had resulted in a framework of six putative…
The Importance of "What": Infants Use Featural Information to Index Events
ERIC Educational Resources Information Center
Kirkham, Natasha Z.; Richardson, Daniel C.; Wu, Rachel; Johnson, Scott P.
2012-01-01
Dynamic spatial indexing is the ability to encode, remember, and track the location of complex events. For example, in a previous study, 6-month-old infants were familiarized to a toy making a particular sound in a particular location, and later they fixated that empty location when they heard the sound presented alone ("Journal of Experimental…
Neural Correlates of Auditory Figure-Ground Segregation Based on Temporal Coherence
Teki, Sundeep; Barascud, Nicolas; Picard, Samuel; Payne, Christopher; Griffiths, Timothy D.; Chait, Maria
2016-01-01
To make sense of natural acoustic environments, listeners must parse complex mixtures of sounds that vary in frequency, space, and time. Emerging work suggests that, in addition to the well-studied spectral cues for segregation, sensitivity to temporal coherence (the coincidence of sound elements in and across time) is also critical for the perceptual organization of acoustic scenes. Here, we examine pre-attentive, stimulus-driven neural processes underlying auditory figure-ground segregation using stimuli that capture the challenges of listening in complex scenes where segregation cannot be achieved based on spectral cues alone. Signals (“stochastic figure-ground”: SFG) comprised a sequence of brief broadband chords containing random pure tone components that vary from 1 chord to another. Occasional tone repetitions across chords are perceived as “figures” popping out of a stochastic “ground.” Magnetoencephalography (MEG) measurement in naïve, distracted human subjects revealed robust evoked responses, commencing about 150 ms after figure onset, that reflect the emergence of the “figure” from the randomly varying “ground.” Neural sources underlying this bottom-up driven figure-ground segregation were localized to the planum temporale and the intraparietal sulcus, demonstrating that the latter area, outside the “classic” auditory system, is also involved in the early stages of auditory scene analysis. PMID:27325682
Thayer, Erin K.; Rathkey, Daniel; Miller, Marissa Fuqua; Palmer, Ryan; Mejicano, George C.; Pusic, Martin; Kalet, Adina; Gillespie, Colleen; Carney, Patricia A.
2016-01-01
Issue: Medical educators and educational researchers continue to improve their processes for managing medical student and program evaluation data using sound ethical principles. This is becoming even more important as curricular innovations are occurring across undergraduate and graduate medical education. Dissemination of findings from this work is critical, and peer-reviewed journals often require an institutional review board (IRB) determination. Approach: IRB data repositories, originally designed for the longitudinal study of biological specimens, can be applied to medical education research. The benefits of such an approach include obtaining expedited review for multiple related studies within a single IRB application and allowing for more flexibility when conducting complex longitudinal studies involving large datasets from multiple data sources and/or institutions. In this paper, we inform educators and educational researchers on our analysis of the use of the IRB data repository approach to manage ethical considerations as part of best practices for amassing, pooling, and sharing data for educational research, evaluation, and improvement purposes. Implications: Fostering multi-institutional studies while following sound ethical principles in the study of medical education is needed, and the IRB data repository approach has many benefits, especially for longitudinal assessment of complex multi-site data. PMID:27443407
Neural Correlates of Auditory Figure-Ground Segregation Based on Temporal Coherence.
Teki, Sundeep; Barascud, Nicolas; Picard, Samuel; Payne, Christopher; Griffiths, Timothy D; Chait, Maria
2016-09-01
To make sense of natural acoustic environments, listeners must parse complex mixtures of sounds that vary in frequency, space, and time. Emerging work suggests that, in addition to the well-studied spectral cues for segregation, sensitivity to temporal coherence (the coincidence of sound elements in and across time) is also critical for the perceptual organization of acoustic scenes. Here, we examine pre-attentive, stimulus-driven neural processes underlying auditory figure-ground segregation using stimuli that capture the challenges of listening in complex scenes where segregation cannot be achieved based on spectral cues alone. Signals ("stochastic figure-ground": SFG) comprised a sequence of brief broadband chords containing random pure tone components that vary from 1 chord to another. Occasional tone repetitions across chords are perceived as "figures" popping out of a stochastic "ground." Magnetoencephalography (MEG) measurement in naïve, distracted human subjects revealed robust evoked responses, commencing about 150 ms after figure onset, that reflect the emergence of the "figure" from the randomly varying "ground." Neural sources underlying this bottom-up driven figure-ground segregation were localized to the planum temporale and the intraparietal sulcus, demonstrating that the latter area, outside the "classic" auditory system, is also involved in the early stages of auditory scene analysis. © The Author 2016. Published by Oxford University Press.
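The stochastic figure-ground construction lends itself to a compact numerical sketch. The Python fragment below builds a sequence of random-tone chords and adds a fixed set of repeated "figure" components partway through; the sampling rate, chord duration, tone counts, and figure frequencies are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

fs = 16000            # sampling rate in Hz (assumed)
chord_dur = 0.05      # 50 ms chords (assumed)
n_chords = 40
n_background = 10     # random pure tones per chord (assumed)
figure_freqs = np.array([440.0, 950.0, 1780.0, 3100.0])  # repeated "figure" components (assumed)
figure_onset = 20     # chord index at which the figure appears

t = np.arange(int(fs * chord_dur)) / fs
rng = np.random.default_rng(0)
chords = []
for i in range(n_chords):
    freqs = rng.uniform(200.0, 7000.0, n_background)   # fresh random tones for every chord
    if i >= figure_onset:
        freqs = np.concatenate([freqs, figure_freqs])   # coherent repetition across chords = "figure"
    chord = np.sum([np.sin(2 * np.pi * f * t) for f in freqs], axis=0)
    chords.append(chord / len(freqs))                   # rough level normalisation
stimulus = np.concatenate(chords)                       # the full SFG-like signal
```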
Evaluative Conditioning Induces Changes in Sound Valence
Bolders, Anna C.; Band, Guido P. H.; Stallen, Pieter Jan
2012-01-01
Through evaluative conditioning (EC) a stimulus can acquire an affective value by pairing it with another affective stimulus. While many sounds we encounter daily have acquired an affective value over the course of life, EC has hardly been tested in the auditory domain. To gain a more complete understanding of affective processing in the auditory domain, we examined EC of sound. In Experiment 1 we investigated whether the affective evaluation of short environmental sounds can be changed using affective words as unconditioned stimuli (US). Congruency effects on an affective priming task for conditioned sounds demonstrated successful EC. Subjective ratings for sounds paired with negative words changed accordingly. In Experiment 2 we investigated whether extinction occurs, i.e., whether the acquired valence remains stable after repeated presentation of the conditioned sound without the US. The acquired affective value remained present, albeit weaker, even after 40 extinction trials. These results provide clear evidence for EC effects in the auditory domain. We argue that both associative and propositional processes are likely to underlie these effects. PMID:22514545
Salient sounds activate human visual cortex automatically.
McDonald, John J; Störmer, Viola S; Martinez, Antigona; Feng, Wenfeng; Hillyard, Steven A
2013-05-22
Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, this study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2-4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of colocalized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task.
Salient sounds activate human visual cortex automatically
McDonald, John J.; Störmer, Viola S.; Martinez, Antigona; Feng, Wenfeng; Hillyard, Steven A.
2013-01-01
Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, the present study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2, 3, and 4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of co-localized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task. PMID:23699530
Sound absorption of metallic sound absorbers fabricated via the selective laser melting process
NASA Astrophysics Data System (ADS)
Cheng, Li-Wei; Cheng, Chung-Wei; Chung, Kuo-Chun; Kam, Tai-Yan
2017-01-01
The sound absorption capability of metallic sound absorbers fabricated using the additive manufacturing (selective laser melting) method is investigated via both experimental and theoretical approaches. The metallic sound absorption structures, composed of periodic cubic cells, were made of laser-melted Ti6Al4V powder. Acoustic impedance equations with different frequency-independent and frequency-dependent end correction factors are employed to calculate the theoretical sound absorption coefficients of the metallic sound absorption structures. The calculated sound absorption coefficients are in close agreement with the experimental results for frequencies ranging from 2 to 13 kHz.
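To relate an impedance model to an absorption coefficient, a minimal sketch follows. It applies only the standard normal-incidence relation between surface impedance and absorption; the paper's end-corrected impedance equations are not reproduced here, and the impedance values shown are placeholders.

```python
import numpy as np

rho_c = 1.21 * 343.0   # characteristic impedance of air (rayl); room conditions assumed

def absorption_coefficient(surface_impedance):
    """Normal-incidence absorption coefficient from a complex surface impedance."""
    r = (surface_impedance - rho_c) / (surface_impedance + rho_c)  # pressure reflection coefficient
    return 1.0 - np.abs(r) ** 2

# Placeholder impedance values (rayl): matched, purely resistive, and complex.
for Z in [rho_c, 2.0 * rho_c, (1.5 + 1.0j) * rho_c]:
    print(Z, absorption_coefficient(Z))
```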
Cheng, Liang; Wang, Shao-Hui; Peng, Kang; Liao, Xiao-Mei
2017-01-01
Most people are exposed daily to environmental noise at moderate levels for short durations. The aim of the present study was to determine the effects of daily short-term exposure to moderate noise on sound level processing in the auditory midbrain. Sound processing properties of auditory midbrain neurons were recorded in anesthetized mice exposed to moderate noise (80 dB SPL, 2 h/d for 6 weeks) and were compared with those from age-matched controls. Neurons in exposed mice had a higher minimum threshold and maximum response intensity, a longer first spike latency, and a higher slope and narrower dynamic range of the rate-level function. However, these observed changes were greater in neurons with the best frequency within the noise exposure frequency range compared with those outside the frequency range. These sound processing properties also remained abnormal after a 12-week period of recovery in a quiet laboratory environment after completion of noise exposure. In conclusion, even daily short-term exposure to moderate noise can cause long-term impairment of sound level processing in a frequency-specific manner in auditory midbrain neurons.
Cheng, Liang; Wang, Shao-Hui; Peng, Kang
2017-01-01
Most people are exposed daily to environmental noise at moderate levels for short durations. The aim of the present study was to determine the effects of daily short-term exposure to moderate noise on sound level processing in the auditory midbrain. Sound processing properties of auditory midbrain neurons were recorded in anesthetized mice exposed to moderate noise (80 dB SPL, 2 h/d for 6 weeks) and were compared with those from age-matched controls. Neurons in exposed mice had a higher minimum threshold and maximum response intensity, a longer first spike latency, and a higher slope and narrower dynamic range of the rate-level function. However, these observed changes were greater in neurons with the best frequency within the noise exposure frequency range compared with those outside the frequency range. These sound processing properties also remained abnormal after a 12-week period of recovery in a quiet laboratory environment after completion of noise exposure. In conclusion, even daily short-term exposure to moderate noise can cause long-term impairment of sound level processing in a frequency-specific manner in auditory midbrain neurons. PMID:28589040
Sound waves in hadronic matter
NASA Astrophysics Data System (ADS)
Wilk, Grzegorz; Włodarczyk, Zbigniew
2018-01-01
We argue that recent high-energy CERN LHC experiments on transverse momentum distributions of produced particles provide new, so far unnoticed and not fully appreciated, information on the underlying production processes. To this end we concentrate on the small (but persistent) log-periodic oscillations decorating the observed pT spectra and visible in the measured ratios R = σ_data(pT)/σ_fit(pT). Because such spectra are described by quasi-power-like formulas characterised by two parameters, the power index n and the scale parameter T (usually identified with temperature), the observed log-periodic behaviour of the ratios R can originate either from suitable modifications of n or of T (or of both, but such a possibility is not discussed here). In the first case n becomes a complex number, which can be related to scale invariance in the system; in the second, the scale parameter T itself exhibits log-periodic oscillations, which can be interpreted as the presence of some kind of sound waves forming in the collision system during the collision process, the wave number of which has a so-called self-similar solution of the second kind. Because the first case has already been widely discussed, we concentrate on the second one and on its possible experimental consequences.
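A minimal way to illustrate the second scenario is to fit the data/fit ratio with a log-periodic modulation of the form R(pT) = a + b cos(c ln pT + d). The sketch below does this on synthetic data with scipy; the functional form and every number in it are assumptions made for demonstration, not values from the LHC analyses.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_periodic(pt, a, b, c, d):
    """Log-periodic modulation of the data/fit ratio R(pT)."""
    return a + b * np.cos(c * np.log(pt) + d)

# Synthetic illustration: a ratio with a weak log-periodic wiggle plus noise.
pt = np.linspace(1.0, 100.0, 200)
rng = np.random.default_rng(1)
r_obs = 1.0 + 0.05 * np.cos(2.0 * np.log(pt) + 0.3) + 0.005 * rng.normal(size=pt.size)

params, _ = curve_fit(log_periodic, pt, r_obs, p0=[1.0, 0.05, 2.0, 0.0])
print(params)   # should recover roughly (1.0, 0.05, 2.0, 0.3)
```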
I’m Positive, But I’m Negative
Lindegger, Graham; Slack, Catherine; Wallace, Melissa; Newman, Peter
2015-01-01
HIV vaccine trials (HVTs) are ethically complex, and sound informed consent processes should facilitate optimal decision-making for participants. This study aimed to explore representations of critical HVT-related concepts to enhance the consent process. Four focus group discussions were conducted with participants from key constituencies at a South African HVT site. Thematic analysis was employed to identify representations of key HVT-related concepts. The findings suggest that (potential) participants may negotiate multiple, competing versions of HVT-related concepts in a somewhat unrecognized process, which may have significant implications for the consent process. Stakeholders involved in consent and engagement activities at sites should be assisted to elicit, engage, and resolve competing representations of HVT-related concepts. More empirical research is needed to explore how such stakeholders address competing representations in their interactions with potential participants. PMID:25819758
Geographical variation in sound production in the anemonefish Amphiprion akallopisos.
Parmentier, E; Lagardère, J P; Vandewalle, P; Fine, M L
2005-08-22
Because of pelagic-larval dispersal, coral-reef fishes are distributed widely with minimal genetic differentiation between populations. Amphiprion akallopisos, a clownfish that uses sound production to defend its anemone territory, has a wide but disjunct distribution in the Indian Ocean. We compared sounds produced by these fishes from populations in Madagascar and Indonesia, a distance of 6500 km. Differentiation of agonistic calls into distinct types indicates a complexity not previously recorded in fishes' acoustic communication. Moreover, various acoustic parameters, including peak frequency, pulse duration, and number of peaks per pulse, differed between the two populations. The geographic comparison is the first to demonstrate 'dialects' in a marine fish species, and these differences in sound parameters suggest genetic divergence between the two populations. These results highlight a possible approach for investigating the role of sound in fish behaviour, reproductive divergence, and speciation.
Maggu, Akshay R; Liu, Fang; Antoniou, Mark; Wong, Patrick C M
2016-01-01
Across time, languages undergo changes in phonetic, syntactic, and semantic dimensions. Social, cognitive, and cultural factors contribute to sound change, a phenomenon in which the phonetics of a language undergo changes over time. Individuals who misperceive and produce speech in a slightly divergent manner (called innovators) contribute to variability in the society, eventually leading to sound change. However, the cause of variability in these individuals is still unknown. In this study, we examined whether such misperceptions are represented in neural processes of the auditory system. We investigated behavioral, subcortical (via FFR), and cortical (via P300) manifestations of sound change processing in Cantonese, a Chinese language in which several lexical tones are merging. Across the merging categories, we observed a similar gradation of speech perception abilities in both behavior and the brain (subcortical and cortical processes). Further, we also found that behavioral evidence of tone merging correlated with subjects' encoding at the subcortical and cortical levels. These findings indicate that tone-merger categories, which are indicators of sound change in Cantonese, are represented neurophysiologically with high fidelity. Using our results, we speculate that innovators encode speech in a slightly deviant neurophysiological manner, and thus produce speech divergently that eventually spreads across the community and contributes to sound change.
Maggu, Akshay R.; Liu, Fang; Antoniou, Mark; Wong, Patrick C. M.
2016-01-01
Across time, languages undergo changes in phonetic, syntactic, and semantic dimensions. Social, cognitive, and cultural factors contribute to sound change, a phenomenon in which the phonetics of a language undergo changes over time. Individuals who misperceive and produce speech in a slightly divergent manner (called innovators) contribute to variability in the society, eventually leading to sound change. However, the cause of variability in these individuals is still unknown. In this study, we examined whether such misperceptions are represented in neural processes of the auditory system. We investigated behavioral, subcortical (via FFR), and cortical (via P300) manifestations of sound change processing in Cantonese, a Chinese language in which several lexical tones are merging. Across the merging categories, we observed a similar gradation of speech perception abilities in both behavior and the brain (subcortical and cortical processes). Further, we also found that behavioral evidence of tone merging correlated with subjects' encoding at the subcortical and cortical levels. These findings indicate that tone-merger categories, that are indicators of sound change in Cantonese, are represented neurophysiologically with high fidelity. Using our results, we speculate that innovators encode speech in a slightly deviant neurophysiological manner, and thus produce speech divergently that eventually spreads across the community and contributes to sound change. PMID:28066218
Kocsis, Zsuzsanna; Winkler, István; Bendixen, Alexandra; Alain, Claude
2016-09-01
The auditory environment typically comprises several simultaneously active sound sources. In contrast to the perceptual segregation of two concurrent sounds, the perception of three simultaneous sound objects has not yet been studied systematically. We conducted two experiments in which participants were presented with complex sounds containing sound segregation cues (mistuning, onset asynchrony, differences in frequency or amplitude modulation or in sound location), which were set up to promote the perceptual organization of the tonal elements into one, two, or three concurrent sounds. In Experiment 1, listeners indicated whether they heard one, two, or three concurrent sounds. In Experiment 2, participants watched a silent subtitled movie while EEG was recorded to extract the object-related negativity (ORN) component of the event-related potential. Listeners predominantly reported hearing two sounds when the segregation-promoting manipulations were applied to the same tonal element. When two different tonal elements received manipulations promoting them to be heard as separate auditory objects, participants reported hearing two and three concurrent sound objects with equal probability. The ORN was elicited in most conditions; sounds that included the amplitude- or the frequency-modulation cue generated the smallest ORN amplitudes. Manipulating two different tonal elements yielded numerically and often significantly smaller ORNs than the sum of the ORNs elicited when the same cues were applied to a single tonal element. These results suggest that ORN reflects the presence of multiple concurrent sounds, but not their number. The ORN results are compatible with the horse-race principle of combining different cues of concurrent sound segregation. Copyright © 2016 Elsevier B.V. All rights reserved.
van Atteveldt, Nienke; Musacchia, Gabriella; Zion-Golumbic, Elana; Sehatpour, Pejman; Javitt, Daniel C.; Schroeder, Charles
2015-01-01
The brain’s fascinating ability to adapt its internal neural dynamics to the temporal structure of the sensory environment is becoming increasingly clear. It is thought to be metabolically beneficial to align ongoing oscillatory activity to the relevant inputs in a predictable stream, so that they will enter at optimal processing phases of the spontaneously occurring rhythmic excitability fluctuations. However, some contexts have a more predictable temporal structure than others. Here, we tested the hypothesis that the processing of rhythmic sounds is more efficient than the processing of irregularly timed sounds. To do this, we simultaneously measured functional magnetic resonance imaging (fMRI) and electro-encephalograms (EEG) while participants detected oddball target sounds in alternating blocks of rhythmic (i.e., with equal inter-stimulus intervals) or random (i.e., with randomly varied inter-stimulus intervals) tone sequences. Behaviorally, participants detected target sounds faster and more accurately when embedded in rhythmic streams. The fMRI response in the auditory cortex was stronger during random compared to rhythmic tone sequence processing. Simultaneously recorded N1 responses showed larger peak amplitudes and longer latencies for tones in the random (vs. the rhythmic) streams. These results reveal complementary evidence for more efficient neural and perceptual processing during temporally predictable sensory contexts. PMID:26579044
Sound transmission in archaic and modern whales: anatomical adaptations for underwater hearing.
Nummela, Sirpa; Thewissen, J G M; Bajpai, Sunil; Hussain, Taseer; Kumar, Kishor
2007-06-01
The whale ear, initially designed for hearing in air, became adapted for hearing underwater in less than ten million years of evolution. This study describes the evolution of underwater hearing in cetaceans, focusing on changes in sound transmission mechanisms. Measurements were made on 60 fossils of whole or partial skulls, isolated tympanics, middle ear ossicles, and mandibles from all six archaeocete families. Fossil data were compared with data on two families of modern mysticete whales and nine families of modern odontocete cetaceans, as well as five families of noncetacean mammals. Results show that the outer ear pinna and external auditory meatus were functionally replaced by the mandible and the mandibular fat pad, which posteriorly contacts the tympanic plate, the lateral wall of the bulla. Changes in the ear include thickening of the tympanic bulla medially, isolation of the tympanoperiotic complex by means of air sinuses, functional replacement of the tympanic membrane by a bony plate, and changes in ossicle shapes and orientation. Pakicetids, the earliest archaeocetes, had a land mammal ear for hearing in air, and used bone conduction underwater, aided by the heavy tympanic bulla. Remingtonocetids and protocetids were the first to display a genuine underwater ear where sound reached the inner ear through the mandibular fat pad, the tympanic plate, and the middle ear ossicles. Basilosaurids and dorudontids showed further aquatic adaptations of the ossicular chain and the acoustic isolation of the ear complex from the skull. The land mammal ear and the generalized modern whale ear are evolutionarily stable configurations, two ends of a process where the cetacean mandible might have been a keystone character. 2007 Wiley-Liss, Inc.
Modulation of EEG Theta Band Signal Complexity by Music Therapy
NASA Astrophysics Data System (ADS)
Bhattacharya, Joydeep; Lee, Eun-Jeong
The primary goal of this study was to investigate the impact of monochord (MC) sounds, a type of archaic sound used in music therapy, on the neural complexity of EEG signals obtained from patients undergoing chemotherapy. The secondary goal was to compare the EEG signal complexity values for monochords with those for progressive muscle relaxation (PMR), an alternative therapy for relaxation. Forty cancer patients were randomly allocated to one of the two relaxation groups, MC and PMR, over a period of six months; continuous EEG signals were recorded during the first and last sessions. EEG signals were analyzed by applying signal mode complexity, a measure of the complexity of neuronal oscillations. Across sessions, both groups showed a modulation of the complexity of the beta-2 band (20-29 Hz) at midfrontal regions, but only the MC group showed a modulation of the complexity of the theta band (3.5-7.5 Hz) at posterior regions. The two interventions thus produced different patterns of frequency-band-specific changes in EEG complexity. Moreover, different neural responses to listening to monochords and to PMR were observed after regular relaxation interventions over a short time span.
What is a melody? On the relationship between pitch and brightness of timbre
Cousineau, Marion; Carcagno, Samuele; Demany, Laurent; Pressnitzer, Daniel
2014-01-01
Previous studies showed that the perceptual processing of sound sequences is more efficient when the sounds vary in pitch than when they vary in loudness. We show here that sequences of sounds varying in brightness of timbre are processed with the same efficiency as pitch sequences. The sounds used consisted of two simultaneous pure tones one octave apart, and the listeners’ task was to make same/different judgments on pairs of sequences varying in length (one, two, or four sounds). In one condition, brightness of timbre was varied within the sequences by changing the relative level of the two pure tones. In other conditions, pitch was varied by changing fundamental frequency, or loudness was varied by changing the overall level. In all conditions, only two possible sounds could be used in a given sequence, and these two sounds were equally discriminable. When sequence length increased from one to four, discrimination performance decreased substantially for loudness sequences, but to a smaller extent for brightness sequences and pitch sequences. In the latter two conditions, sequence length had a similar effect on performance. These results suggest that the processes dedicated to pitch and brightness analysis, when probed with a sequence-discrimination task, share unexpected similarities. PMID:24478638
Abnormal auditory pattern perception in schizophrenia.
Haigh, Sarah M; Coffman, Brian A; Murphy, Timothy K; Butera, Christiana D; Salisbury, Dean F
2016-10-01
Mismatch negativity (MMN) in response to deviation from physical sound parameters (e.g., pitch, duration) is reduced in individuals with long-term schizophrenia (Sz), suggesting deficits in deviance detection. However, MMN can appear at several time intervals as part of deviance detection. Understanding which part of the processing stream is abnormal in Sz is crucial for understanding MMN pathophysiology. We measured MMN to complex pattern deviants, which have been shown to produce multiple MMNs in healthy controls (HC). Both simple and complex MMNs were recorded from 27 Sz and 27 matched HC. For simple MMN, pitch- and duration-deviants were presented among frequent standard tones. For complex MMN, patterns of five single tones were repeatedly presented, with the occasional deviant group of tones containing an extra sixth tone. Sz showed smaller pitch MMN (p=0.009, ~110ms) and duration MMN (p=0.030, ~170ms) than healthy controls. For complex MMN, there were two deviance-related negativities. The first (~150ms) was not significantly different between HC and SZ. The second was significantly reduced in Sz (p=0.011, ~400ms). The topography of the late complex MMN was consistent with generators in anterior temporal cortex. Worse late MMN in Sz was associated with increased emotional withdrawal, poor attention, lack of spontaneity/conversation, and increased preoccupation. Late MMN blunting in schizophrenia suggests a deficit in later stages of deviance processing. Correlations with negative symptoms measures are preliminary, but suggest that abnormal complex auditory perceptual processes may compound higher-order cognitive and social deficits in the disorder. Copyright © 2016 Elsevier B.V. All rights reserved.
Woods, J
2001-01-01
The third generation cardiac institute will build on the successes of the past in structuring the service line, re-organizing to assimilate specialist interests, and re-positioning to expand cardiac services into cardiovascular services. To meet the challenges of an increasingly competitive marketplace and complex delivery system, the focus for this new model will shift away from improved structures, and toward improved processes. This shift will require a sound methodology for statistically measuring and sustaining process changes related to the optimization of cardiovascular care. In recent years, GE Medical Systems has successfully applied Six Sigma methodologies to enable cardiac centers to control key clinical and market development processes through its DMADV, DMAIC and Change Acceleration processes. Data indicates Six Sigma is having a positive impact within organizations across the United States, and when appropriately implemented, this approach can serve as a solid foundation for building the next generation cardiac institute.
Modelling and Order of Acoustic Transfer Functions Due to Reflections from Augmented Objects
NASA Astrophysics Data System (ADS)
Kuster, Martin; de Vries, Diemer
2006-12-01
It is commonly accepted that the sound reflections from real physical objects are much more complicated than what usually is, or can be, modelled by room acoustics modelling software. The main reason for this limitation is the level of detail inherent in the physical object in terms of its geometrical and acoustic properties. In the present paper, the complexity of the sound reflections from a corridor wall is investigated by modelling the corresponding acoustic transfer functions at several receiver positions in front of the wall. The complexity of different wall configurations has been examined, with the changes achieved by altering the wall's acoustic image. The results show that for a homogeneous flat wall the complexity is significant, and for a wall including various smaller objects the complexity is highly dependent on the position of the receiver with respect to the objects.
A dynamic auditory-cognitive system supports speech-in-noise perception in older adults
Anderson, Samira; White-Schwoch, Travis; Parbery-Clark, Alexandra; Kraus, Nina
2013-01-01
Understanding speech in noise is one of the most complex activities encountered in everyday life, relying on peripheral hearing, central auditory processing, and cognition. These abilities decline with age, and so older adults are often frustrated by a reduced ability to communicate effectively in noisy environments. Many studies have examined these factors independently; in the last decade, however, the idea of the auditory-cognitive system has emerged, recognizing the need to consider the processing of complex sounds in the context of dynamic neural circuits. Here, we use structural equation modeling to evaluate interacting contributions of peripheral hearing, central processing, cognitive ability, and life experiences to understanding speech in noise. We recruited 120 older adults (ages 55 to 79) and evaluated their peripheral hearing status, cognitive skills, and central processing. We also collected demographic measures of life experiences, such as physical activity, intellectual engagement, and musical training. In our model, central processing and cognitive function predicted a significant proportion of variance in the ability to understand speech in noise. To a lesser extent, life experience predicted hearing-in-noise ability through modulation of brainstem function. Peripheral hearing levels did not significantly contribute to the model. Previous musical experience modulated the relative contributions of cognitive ability and lifestyle factors to hearing in noise. Our models demonstrate the complex interactions required to hear in noise and the importance of targeting cognitive function, lifestyle, and central auditory processing in the management of individuals who are having difficulty hearing in noise. PMID:23541911
ERIC Educational Resources Information Center
Froyen, Dries; Willems, Gonny; Blomert, Leo
2011-01-01
The phonological deficit theory of dyslexia assumes that degraded speech sound representations might hamper the acquisition of stable letter-speech sound associations necessary for learning to read. However, there is only scarce and mainly indirect evidence for this assumed letter-speech sound association problem. The present study aimed at…
Poganiatz, I; Wagner, H
2001-04-01
Interaural level differences play an important role for elevational sound localization in barn owls. The changes of this cue with sound location are complex and frequency dependent. We exploited the opportunities offered by the virtual space technique to investigate the behavioral relevance of the overall interaural level difference by fixing this parameter in virtual stimuli to a constant value or introducing additional broadband level differences to normal virtual stimuli. Frequency-specific monaural cues in the stimuli were not manipulated. We observed an influence of the broadband interaural level differences on elevational, but not on azimuthal sound localization. Since results obtained with our manipulations explained only part of the variance in elevational turning angle, we conclude that frequency-specific cues are also important. The behavioral consequences of changes of the overall interaural level difference in a virtual sound depended on the combined interaural time difference contained in the stimulus, indicating an indirect influence of temporal cues on elevational sound localization as well. Thus, elevational sound localization is influenced by a combination of many spatial cues including frequency-dependent and temporal features.
EEG oscillations entrain their phase to high-level features of speech sound.
Zoefel, Benedikt; VanRullen, Rufin
2016-01-01
Phase entrainment of neural oscillations, the brain's adjustment to rhythmic stimulation, is a central component in recent theories of speech comprehension: the alignment between brain oscillations and speech sound improves speech intelligibility. However, phase entrainment to everyday speech sound could also be explained by oscillations passively following the low-level periodicities (e.g., in sound amplitude and spectral content) of auditory stimulation, and not by an adjustment to the speech rhythm per se. Recently, using novel speech/noise mixture stimuli, we have shown that behavioral performance can entrain to speech sound even when high-level features (including phonetic information) are not accompanied by fluctuations in sound amplitude and spectral content. In the present study, we report that neural phase entrainment might underlie our behavioral findings. We observed phase-locking between the electroencephalogram (EEG) and speech sound in response not only to original (unprocessed) speech but also to our constructed "high-level" speech/noise mixture stimuli. Phase entrainment to original speech and speech/noise sound did not differ in the degree of entrainment, but rather in the actual phase difference between the EEG signal and the sound. Phase entrainment was not abolished when speech/noise stimuli were presented in reverse (which disrupts semantic processing), indicating that acoustic (rather than linguistic) high-level features play a major role in the observed neural entrainment. Our results provide further evidence for phase entrainment as a potential mechanism underlying speech processing and segmentation, and for the involvement of high-level processes in the adjustment to the rhythm of speech. Copyright © 2015 Elsevier Inc. All rights reserved.
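A generic way to quantify this kind of entrainment is the phase-locking value between a band-passed EEG channel and the speech envelope, both converted to instantaneous phase with the Hilbert transform. The sketch below is not the authors' analysis pipeline; the filter band, filter order, and surrogate signals are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking(eeg, envelope, fs, band=(2.0, 8.0)):
    """Phase-locking value between a band-passed EEG trace and a speech envelope."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_eeg = np.angle(hilbert(filtfilt(b, a, eeg)))
    phase_env = np.angle(hilbert(filtfilt(b, a, envelope)))
    return np.abs(np.mean(np.exp(1j * (phase_eeg - phase_env))))

# Illustration with surrogate data: a 4 Hz rhythm common to both signals.
fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)
env = np.sin(2 * np.pi * 4 * t)
eeg = np.sin(2 * np.pi * 4 * t + 0.8) + 0.5 * rng.normal(size=t.size)
print(phase_locking(eeg, env, fs))   # approaches 1 for strongly locked signals
```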
The Brain Basis for Misophonia.
Kumar, Sukhbinder; Tansley-Hancock, Olana; Sedley, William; Winston, Joel S; Callaghan, Martina F; Allen, Micah; Cope, Thomas E; Gander, Phillip E; Bamiou, Doris-Eva; Griffiths, Timothy D
2017-02-20
Misophonia is an affective sound-processing disorder characterized by the experience of strong negative emotions (anger and anxiety) in response to everyday sounds, such as those generated by other people eating, drinking, chewing, and breathing [1-8]. The commonplace nature of these sounds (often referred to as "trigger sounds") makes misophonia a devastating disorder for sufferers and their families, and yet nothing is known about the underlying mechanism. Using functional and structural MRI coupled with physiological measurements, we demonstrate that misophonic subjects show specific trigger-sound-related responses in brain and body. Specifically, fMRI showed that in misophonic subjects, trigger sounds elicit greatly exaggerated blood-oxygen-level-dependent (BOLD) responses in the anterior insular cortex (AIC), a core hub of the "salience network" that is critical for perception of interoceptive signals and emotion processing. Trigger sounds in misophonics were associated with abnormal functional connectivity between AIC and a network of regions responsible for the processing and regulation of emotions, including ventromedial prefrontal cortex (vmPFC), posteromedial cortex (PMC), hippocampus, and amygdala. Trigger sounds elicited heightened heart rate (HR) and galvanic skin response (GSR) in misophonic subjects, which were mediated by AIC activity. Questionnaire analysis showed that misophonic subjects perceived their bodies differently: they scored higher on interoceptive sensibility than controls, consistent with abnormal functioning of AIC. Finally, brain structural measurements implied greater myelination within vmPFC in misophonic individuals. Overall, our results show that misophonia is a disorder in which abnormal salience is attributed to particular sounds based on the abnormal activation and functional connectivity of AIC. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
Property-driven functional verification technique for high-speed vision system-on-chip processor
NASA Astrophysics Data System (ADS)
Nshunguyimfura, Victor; Yang, Jie; Liu, Liyuan; Wu, Nanjian
2017-04-01
The implementation of functional verification in a fast, reliable, and effective manner is a challenging task in a vision chip verification process. The main reason for this challenge is the stepwise nature of existing functional verification techniques. This vision chip verification complexity is also related to the fact that in most vision chip design cycles, extensive efforts are focused on how to optimize chip metrics such as performance, power, and area. Design functional verification is not explicitly considered at the earlier stages, where the soundest decisions are made. In this paper, we propose a semi-automatic property-driven verification technique. The implementation of all verification components is based on design properties. We introduce a low-dimension property space between the specification space and the implementation space. The aim of this technique is to speed up the verification process for high-performance parallel processing vision chips. Our experimental results show that the proposed technique can reduce the verification effort by up to 20% for a complex vision chip design while also reducing simulation and debugging overheads.
Aubauer, R; Au, W W; Nachtigall, P E; Pawloski, D A; DeLong, C M
2000-05-01
Animal behavior experiments require not only stimulus control of the animal's behavior, but also precise control of the stimulus itself. In discrimination experiments with real target presentation, the complex interdependence between the physical dimensions and the backscattering process of an object make it difficult to extract and control relevant echo parameters separately. In other phantom-echo experiments, the echoes were relatively simple and could only simulate certain properties of targets. The echo-simulation method utilized in this paper can be used to transform any animal echolocation sound into phantom echoes of high fidelity and complexity. The developed phantom-echo system is implemented on a digital signal-processing board and gives an experimenter fully programmable control over the echo-generating process and the echo structure itself. In this experiment, the capability of a dolphin to discriminate between acoustically simulated phantom replicas of targets and their real equivalents was tested. Phantom replicas were presented in a probe technique during a materials discrimination experiment. The animal accepted the phantom echoes and classified them in the same manner as it classified real targets.
Loiselle, Louise H; Dorman, Michael F; Yost, William A; Cook, Sarah J; Gifford, Rene H
2016-08-01
To assess the role of interaural time differences and interaural level differences in (a) sound-source localization and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Eleven bilateral listeners with MED-EL (Durham, NC) CIs and 8 listeners with hearing-preservation CIs with symmetrical low-frequency acoustic hearing using the MED-EL or Cochlear device were evaluated using 2 tests designed to assess binaural hearing: localization and a simulated cocktail party. Access to interaural cues for localization was constrained by the use of low-pass, high-pass, and wideband noise stimuli. Sound-source localization accuracy for listeners with bilateral CIs in response to the high-pass noise stimulus and sound-source localization accuracy for the listeners with hearing-preservation CIs in response to the low-pass noise stimulus did not differ significantly. Speech understanding in a cocktail party listening environment improved for all listeners when interaural cues, either interaural time difference or interaural level difference, were available. The findings of the current study indicate that similar degrees of benefit to sound-source localization and speech understanding in complex listening environments are possible with 2 very different rehabilitation strategies: the provision of bilateral CIs and the preservation of hearing.
The influence of formant levels on the perception of synthetic vowel sounds
NASA Astrophysics Data System (ADS)
Kubzdela, Henryk; Owsianny, Mariuz
A computer model of a generator of periodic complex sounds simulating consonants was developed. The system allows independent regulation of the level of each formant and instant generation of the sound. A trapezoid approximates the spectral envelope within the range of each formant. Using this model, each person in a group of six listeners experimentally selected synthesis parameters for six sounds that seemed to that listener to be optimal approximations of Polish consonants. From these, another six sounds were selected that were identified by a majority of the six listeners and several additional listeners as best qualified to serve as prototypes of Polish consonants. These prototypes were then used to randomly create sounds with various combinations of the levels of the second and third formants, and these were presented to seven listeners for identification. The results of the identifications are presented in table form in three variants and are described from the point of view of the requirements of automatic recognition of consonants in continuous speech.
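A minimal sketch of such a generator might sum the harmonics of a fundamental and weight each harmonic by trapezoidal formant envelopes whose levels can be scaled independently. The corner frequencies, fundamental, and sampling rate below are hypothetical and are not the parameters of the original system.

```python
import numpy as np

def trapezoid_gain(freq, lo, plateau_lo, plateau_hi, hi):
    """Trapezoidal approximation of one formant's spectral envelope (linear gain)."""
    return np.interp(freq, [lo, plateau_lo, plateau_hi, hi], [0.0, 1.0, 1.0, 0.0])

def synthesize(f0, formants, dur=0.5, fs=16000):
    """Sum of harmonics of f0, each weighted by the sum of the formant trapezoids."""
    t = np.arange(int(dur * fs)) / fs
    out = np.zeros_like(t)
    for k in range(1, int((fs / 2) // f0) + 1):
        fk = k * f0
        gain = sum(level * trapezoid_gain(fk, *corners) for level, corners in formants)
        out += gain * np.sin(2 * np.pi * fk * t)
    return out / np.max(np.abs(out))

# Hypothetical formants: (level, (rise-start, plateau-start, plateau-end, fall-end)) in Hz.
formants = [(1.0, (300, 500, 700, 900)),
            (0.6, (1300, 1500, 1700, 1900)),
            (0.4, (2300, 2500, 2700, 2900))]
signal = synthesize(120.0, formants)   # second/third formant levels can be varied independently
```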
Intelligent Systems Approaches to Product Sound Quality Analysis
NASA Astrophysics Data System (ADS)
Pietila, Glenn M.
As a product market becomes more competitive, consumers become more discriminating in the way in which they differentiate between engineered products. The consumer often makes a purchasing decision based on the sound emitted from the product during operation by using the sound to judge quality or annoyance. Therefore, in recent years, many sound quality analysis tools have been developed to evaluate the consumer preference as it relates to a product sound and to quantify this preference based on objective measurements. This understanding can be used to direct a product design process in order to help differentiate the product from competitive products or to establish an impression on consumers regarding a product's quality or robustness. The sound quality process is typically a statistical tool that is used to model subjective preference, or merit score, based on objective measurements, or metrics. In this way, new product developments can be evaluated in an objective manner without the laborious process of gathering a sample population of consumers for subjective studies each time. The most common model used today is the Multiple Linear Regression (MLR), although recently non-linear Artificial Neural Network (ANN) approaches are gaining popularity. This dissertation will review the publicly available literature and present additional intelligent systems approaches that can be used to improve on the current sound quality process. The focus of this work is to address shortcomings in the current paired comparison approach to sound quality analysis. This research will propose a framework for an adaptive jury analysis approach as an alternative to the current Bradley-Terry model. The adaptive jury framework uses statistical hypothesis testing to focus on sound pairings that are most interesting and is expected to address some of the restrictions imposed by the Bradley-Terry model. It will also provide a more amenable framework for an intelligent systems approach. Next, an unsupervised jury clustering algorithm is used to identify and classify subgroups within a jury who have conflicting preferences. In addition, a nested Artificial Neural Network (ANN) architecture is developed to predict subjective preference based on objective sound quality metrics, in the presence of non-linear preferences. Finally, statistical decomposition and correlation algorithms are reviewed that can help an analyst establish a clear understanding of the variability of the product sounds used as inputs into the jury study and to identify correlations between preference scores and sound quality metrics in the presence of non-linearities.
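As a toy comparison of the two modelling approaches mentioned above, the sketch below fits a multiple linear regression and a small neural network to hypothetical metric/merit-score data using scikit-learn; the data, features, and network size are invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical data: rows are product sounds, columns are sound quality metrics
# (e.g., loudness, sharpness, roughness); y is a mean jury merit score.
rng = np.random.default_rng(3)
X = rng.normal(size=(60, 3))
y = 5.0 - 1.2 * X[:, 0] - 0.4 * X[:, 1] ** 2 + 0.2 * rng.normal(size=60)  # mildly non-linear

mlr = LinearRegression()
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)

print("MLR R^2:", cross_val_score(mlr, X, y, cv=5).mean())
print("ANN R^2:", cross_val_score(ann, X, y, cv=5).mean())
```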
Audiovisual emotional processing and neurocognitive functioning in patients with depression
Doose-Grünefeld, Sophie; Eickhoff, Simon B.; Müller, Veronika I.
2015-01-01
Alterations in the processing of emotional stimuli (e.g., facial expressions, prosody, music) have repeatedly been reported in patients with major depression. Such impairments may result from the likewise prevalent executive deficits in these patients. However, studies investigating this relationship are rare. Moreover, most studies to date have only assessed impairments in unimodal emotional processing, whereas in real life, emotions are primarily conveyed through more than just one sensory channel. The current study therefore aimed to investigate multi-modal emotional processing in patients with depression and to assess the relationship between emotional and neurocognitive impairments. Forty-one patients suffering from major depression and 41 never-depressed healthy controls participated in an audiovisual (faces-sounds) emotional integration paradigm as well as a neurocognitive test battery. Our results showed that depressed patients were specifically impaired in the processing of positive auditory stimuli as they rated faces significantly more fearful when presented with happy than with neutral sounds. Such an effect was absent in controls. Findings in emotional processing in patients did not correlate with Beck’s depression inventory score. Furthermore, neurocognitive findings revealed significant group differences for two of the tests. The effects found in audiovisual emotional processing, however, did not correlate with performance in the neurocognitive tests. In summary, our results underline the diversity of impairments going along with depression and indicate that deficits found for unimodal emotional processing cannot trivially be generalized to deficits in a multi-modal setting. The mechanisms of impairments therefore might be far more complex than previously thought. Our findings furthermore contradict the assumption that emotional processing deficits in major depression are associated with impaired attention or inhibitory functioning. PMID:25688188
Large-scale Cortical Network Properties Predict Future Sound-to-Word Learning Success
Sheppard, John Patrick; Wang, Ji-Ping; Wong, Patrick C. M.
2013-01-01
The human brain possesses a remarkable capacity to interpret and recall novel sounds as spoken language. These linguistic abilities arise from complex processing spanning a widely distributed cortical network and are characterized by marked individual variation. Recently, graph theoretical analysis has facilitated the exploration of how such aspects of large-scale brain functional organization may underlie cognitive performance. Brain functional networks are known to possess small-world topologies characterized by efficient global and local information transfer, but whether these properties relate to language learning abilities remains unknown. Here we applied graph theory to construct large-scale cortical functional networks from cerebral hemodynamic (fMRI) responses acquired during an auditory pitch discrimination task and found that such network properties were associated with participants’ future success in learning words of an artificial spoken language. Successful learners possessed networks with reduced local efficiency but increased global efficiency relative to less successful learners and had a more cost-efficient network organization. Regionally, successful and less successful learners exhibited differences in these network properties spanning bilateral prefrontal, parietal, and right temporal cortex, overlapping a core network of auditory language areas. These results suggest that efficient cortical network organization is associated with sound-to-word learning abilities among healthy, younger adults. PMID:22360625
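The efficiency measures referred to above can be illustrated with networkx: build an undirected graph from a thresholded correlation matrix and compute its global and local efficiency. The surrogate time series, threshold, and graph construction below are placeholder assumptions, not the study's fMRI preprocessing.

```python
import numpy as np
import networkx as nx

# Surrogate functional connectivity: correlations between regional fMRI time series.
rng = np.random.default_rng(4)
ts = rng.normal(size=(200, 30))                  # 200 time points x 30 regions (assumed)
corr = np.corrcoef(ts, rowvar=False)

# Threshold the correlations to obtain an undirected, unweighted graph.
adj = (np.abs(corr) > 0.2) & ~np.eye(30, dtype=bool)
G = nx.from_numpy_array(adj.astype(int))

print("global efficiency:", nx.global_efficiency(G))
print("local efficiency:", nx.local_efficiency(G))
```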
Methodology for fault detection in induction motors via sound and vibration signals
NASA Astrophysics Data System (ADS)
Delgado-Arredondo, Paulo Antonio; Morinigo-Sotelo, Daniel; Osornio-Rios, Roque Alfredo; Avina-Cervantes, Juan Gabriel; Rostro-Gonzalez, Horacio; Romero-Troncoso, Rene de Jesus
2017-01-01
Nowadays, timely maintenance of electric motors is vital to keep up the complex processes of industrial production. There are currently a variety of methodologies for fault diagnosis. Usually, the diagnosis is performed by analyzing current signals at a steady-state motor operation or during a start-up transient. This method is known as motor current signature analysis, which identifies frequencies associated with faults in the frequency domain or by the time-frequency decomposition of the current signals. Fault identification may also be possible by analyzing acoustic sound and vibration signals, which is useful because sometimes this is the only information available. The contribution of this work is a methodology for detecting faults in induction motors in steady-state operation based on the analysis of acoustic sound and vibration signals. The proposed approach uses the Complete Ensemble Empirical Mode Decomposition to decompose the signal into several intrinsic mode functions (IMFs). Subsequently, the frequency marginal of the Gabor representation is calculated to obtain the spectral content of the IMFs in the frequency domain. This proposal provides good fault detectability compared to other published works, in addition to identifying more frequencies associated with the faults. The faults diagnosed in this work are two broken rotor bars, mechanical unbalance, and bearing defects.
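A rough sketch of this processing chain is shown below, assuming the PyEMD package for the CEEMDAN decomposition and substituting a scipy spectrogram for the Gabor representation; the surrogate signal and its fault-related component are invented for illustration.

```python
import numpy as np
from scipy.signal import spectrogram
from PyEMD import CEEMDAN   # assumes the PyEMD ("EMD-signal") package is installed

fs = 2000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(5)
# Surrogate vibration signal: a 50 Hz supply-related component, a fault-like sideband, and noise.
x = (np.sin(2 * np.pi * 50 * t)
     + 0.3 * np.sin(2 * np.pi * 95 * t)
     + 0.1 * rng.normal(size=t.size))

imfs = CEEMDAN()(x)            # intrinsic mode functions, one per row

# Frequency marginal of a time-frequency representation of the first IMF
# (a spectrogram stands in for the Gabor representation used in the paper).
f, _, Sxx = spectrogram(imfs[0], fs=fs, nperseg=256)
marginal = Sxx.sum(axis=1)     # energy per frequency bin, summed over time
print(f[np.argmax(marginal)])  # dominant frequency of the first IMF
```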
ERIC Educational Resources Information Center
Faronii-Butler, Kishasha O.
2013-01-01
This auto-ethnographical inquiry used vignettes and interviews to examine the therapeutic use of music and other forms of organized sound in the learning environment of individuals with Central Auditory Processing Disorders. It is an investigation of the traditions of healing with sound vibrations, from its earliest cultural roots in shamanism and…
Litovsky, Ruth Y.; Godar, Shelly P.
2010-01-01
The precedence effect refers to the fact that humans are able to localize sound in reverberant environments, because the auditory system assigns greater weight to the direct sound (lead) than the later-arriving sound (lag). In this study, absolute sound localization was studied for single source stimuli and for dual source lead-lag stimuli in 4–5 year old children and adults. Lead-lag delays ranged from 5–100 ms. Testing was conducted in free field, with pink noise bursts emitted from loudspeakers positioned on a horizontal arc in the frontal field. Listeners indicated how many sounds were heard and the perceived location of the first- and second-heard sounds. Results suggest that at short delays (up to 10 ms), the lead dominates sound localization strongly at both ages, and localization errors are similar to those with single-source stimuli. At longer delays errors can be large, stemming from over-integration of the lead and lag, interchanging of perceived locations of the first-heard and second-heard sounds due to temporal order confusion, and dominance of the lead over the lag. The errors are greater for children than adults. Results are discussed in the context of maturation of auditory and non-auditory factors. PMID:20968369
The Incongruency Advantage for Environmental Sounds Presented in Natural Auditory Scenes
Gygi, Brian; Shafiro, Valeriy
2011-01-01
The effect of context on the identification of common environmental sounds (e.g., dogs barking or cars honking) was tested by embedding them in familiar auditory background scenes (street ambience, restaurants). Initial results with subjects trained on both the scenes and the sounds to be identified showed a significant advantage of about 5 percentage points better accuracy for sounds that were contextually incongruous with the background scene (e.g., a rooster crowing in a hospital). Further studies with naïve (untrained) listeners showed that this Incongruency Advantage (IA) is level-dependent: there is no advantage for incongruent sounds lower than a Sound/Scene ratio (So/Sc) of −7.5 dB, but there is about 5 percentage points better accuracy for sounds with greater So/Sc. Testing a new group of trained listeners on a larger corpus of sounds and scenes showed that the effect is robust and not confined to specific stimulus set. Modeling using spectral-temporal measures showed that neither analyses based on acoustic features, nor semantic assessments of sound-scene congruency can account for this difference, indicating the Incongruency Advantage is a complex effect, possibly arising from the sensitivity of the auditory system to new and unexpected events, under particular listening conditions. PMID:21355664
Why Do People Like Loud Sound? A Qualitative Study.
Welch, David; Fremaux, Guy
2017-08-11
Many people choose to expose themselves to potentially dangerous sounds such as loud music, either via speakers, personal audio systems, or at clubs. The Conditioning, Adaptation and Acculturation to Loud Music (CAALM) Model has proposed a theoretical basis for this behaviour. To compare the model to data, we interviewed a group of people who were either regular nightclub-goers or who controlled the sound levels in nightclubs (bar managers, musicians, DJs, and sound engineers) about loud sound. Results showed four main themes relating to the enjoyment of loud sound: arousal/excitement, facilitation of socialisation, masking of both external sound and unwanted thoughts, and an emphasis and enhancement of personal identity. Furthermore, an interesting incidental finding was that sound levels appeared to increase gradually over the course of the evening until they plateaued at approximately 97 dBA Leq around midnight. Consideration of the data generated by the analysis revealed a complex of influential factors that support people in wanting exposure to loud sound. Findings were considered in terms of the CAALM Model and could be explained in terms of its principles. From a health promotion perspective, the Social Ecological Model was applied to consider how the themes identified might influence behaviour. They were shown to influence people on multiple levels, providing a powerful system which health promotion approaches struggle to address.
Tang, Jia; Fu, Zi-Ying; Wei, Chen-Xue; Chen, Qi-Cai
2015-08-01
In constant frequency-frequency modulation (CF-FM) bats, the CF-FM echolocation signals include both CF and FM components, yet the role of such complex acoustic signals in frequency resolution by bats remains unknown. Using CF and CF-FM echolocation signals as acoustic stimuli, the responses of inferior collicular (IC) neurons of Hipposideros armiger were obtained by extracellular recordings. We tested the effect of preceding CF or CF-FM sounds on the shape of the frequency tuning curves (FTCs) of IC neurons. Results showed that both CF-FM and CF sounds reduced the number of IC neurons whose FTCs had a tailed lower-frequency side. However, more IC neurons underwent this conversion after adding CF-FM sound than after CF sound. We also found that the Q20 value of the FTCs of IC neurons showed the largest increase with the addition of CF-FM sound. Moreover, only CF-FM sound could cause an increase in the slope of the neurons' FTCs, and this increase occurred mainly at the lower-frequency edge. These results suggest that CF-FM sound, more than CF sound, could increase the accuracy of frequency analysis of echoes and cut off low-frequency elements from the bats' habitat.
2010-01-01
Background We investigated the processing of task-irrelevant and unexpected novel sounds and its modulation by working-memory load in children aged 9-10 and in adults. Environmental sounds (novels) were embedded amongst frequently presented standard sounds in an auditory-visual distraction paradigm. Each sound was followed by a visual target. In two conditions, participants evaluated the position of a visual stimulus (0-back, low load) or compared the position of the current stimulus with the one two trials before (2-back, high load). Processing of novel sounds was measured with reaction times, hit rates and the auditory event-related brain potentials (ERPs) Mismatch Negativity (MMN), P3a, Reorienting Negativity (RON) and visual P3b. Results In both memory-load conditions, novels impaired task performance in adults whereas they improved performance in children. Auditory ERPs reflect age-related differences in the time window of the MMN, as children showed a positive ERP deflection to novels whereas adults lacked an MMN. The attention switch towards the task-irrelevant novel (reflected by P3a) was comparable between the age groups. Adults showed more efficient reallocation of attention (reflected by RON) under the load condition than children. Finally, the P3b elicited by the visual target stimuli was reduced in both age groups when the preceding sound was a novel. Conclusion Our results provide new insights into the development of novelty processing as they (1) reveal that task-irrelevant novel sounds can result in contrary effects on performance in a visual primary task in children and adults, (2) show a positive ERP deflection to novels rather than an MMN in children, and (3) reveal effects of auditory novels on visual target processing. PMID:20929535
NASA Astrophysics Data System (ADS)
Anagnostopoulos, Christos Nikolaos; Vovoli, Eftichia
An emotion recognition framework based on sound processing could improve services in human-computer interaction. Various quantitative speech features obtained from sound processing of acted speech were tested as to whether they are sufficient to discriminate between seven emotions. Multilayered perceptrons were trained to classify gender and emotions on the basis of a 24-input vector, which provides information about the prosody of the speaker over the entire sentence using statistics of sound features. Several experiments were performed and the results are presented analytically. Emotion recognition was successful when speakers and utterances were “known” to the classifier. However, severe misclassifications occurred in the utterance-independent framework. Nevertheless, the proposed feature vector achieved promising results for utterance-independent recognition of high- and low-arousal emotions.
NASA Astrophysics Data System (ADS)
Hartmann, Timo; Tanner, Gregor; Xie, Gang; Chappell, David; Bajars, Janis
2016-09-01
Dynamical Energy Analysis (DEA) combined with the Discrete Flow Mapping technique (DFM) has recently been introduced as a mesh-based high frequency method modelling structure borne sound for complex built-up structures. This has proven to enhance vibro-acoustic simulations considerably by making it possible to work directly on existing finite element meshes circumventing time-consuming and costly re-modelling strategies. In addition, DFM provides detailed spatial information about the vibrational energy distribution within a complex structure in the mid-to-high frequency range. We will present here progress in the development of the DEA method towards handling complex FEM-meshes including Rigid Body Elements. In addition, structure borne transmission paths due to spot welds are considered. We will present applications for a car floor structure.
Loui, Psyche; Kroog, Kenneth; Zuk, Jennifer; Winner, Ellen; Schlaug, Gottfried
2011-01-01
Language and music are complex cognitive and neural functions that rely on awareness of one's own sound productions. Information on the awareness of vocal pitch, and its relation to phonemic awareness which is crucial for learning to read, will be important for understanding the relationship between tone-deafness and developmental language disorders such as dyslexia. Here we show that phonemic awareness skills are positively correlated with pitch perception–production skills in children. Children between the ages of seven and nine were tested on pitch perception and production, phonemic awareness, and IQ. Results showed a significant positive correlation between pitch perception–production and phonemic awareness, suggesting that the relationship between musical and linguistic sound processing is intimately linked to awareness at the level of pitch and phonemes. Since tone-deafness is a pitch-related impairment and dyslexia is a deficit of phonemic awareness, we suggest that dyslexia and tone-deafness may have a shared and/or common neural basis. PMID:21687467
Ways of the Lushootseed People: Ceremonies & Traditions of Northern Puget Sound Indians.
ERIC Educational Resources Information Center
United Indians of All Tribes Foundation, Seattle, WA.
The book is an attempt to create an appreciation of the complex Lushootseed language, spoken by American Indians in the area between Puget Sound and the Cascade Mountains northward to the Skagit River Valley. The book is divided into two parts: readings about Lushootseed life and a brief description of the Lushootseed language. The readings, taken…
Children Use Object-Level Category Knowledge to Detect Changes in Complex Auditory Scenes
ERIC Educational Resources Information Center
Vanden Bosch der Nederlanden, Christina M.; Snyder, Joel S.; Hannon, Erin E.
2016-01-01
Children interact with and learn about all types of sound sources, including dogs, bells, trains, and human beings. Although it is clear that knowledge of semantic categories for everyday sights and sounds develops during childhood, there are very few studies examining how children use this knowledge to make sense of auditory scenes. We used a…
Light-weight low-frequency loudspeaker
NASA Astrophysics Data System (ADS)
Corsaro, Robert; Tressler, James
2002-05-01
In an aerospace application, we require a very low-mass sound generator with good performance at low audio frequencies (i.e., 30-400 Hz). A number of device configurations have been explored using various actuation technologies. Two particularly interesting devices have been developed, both using "Thunder" transducers (Face Intl. Corp.) as the actuation component. One of these devices has the advantage of high sound output but a complex phase spectrum, while the other has somewhat lower output but a highly uniform phase. The former is particularly novel in that the actuator is coupled to a flat, compliant diaphragm supported on the edges by an inflatable tube. This results in a radiating surface with very high modal complexity. Sound pressure levels measured in the far field (25 cm) using only 200-V peak drive (one-third of its rating) were nominally 74 ± 6 dB over the band from 38 to 330 Hz. The second device essentially operates as a stiff low-mass piston, and is more suitable for our particular application, which is exploring the use of actively controlled surface covers for reducing sound levels in payload fairing regions. [Work supported by NRL/ONR Smart Blanket program.]
DNS study of speed of sound in two-phase flows with phase change
NASA Astrophysics Data System (ADS)
Fu, Kai; Deng, Xiaolong
2017-11-01
Heat transfer through pipe flow is important for the safety of thermal power plants. Normally the flow is considered incompressible. However, in some conditions compressibility effects can deteriorate the heat transfer efficiency and even result in pipe rupture, especially when there is obvious phase change, owing to the much lower sound speed in liquid-gas mixture flows. Based on the stratified multiphase flow model (Chang and Liou, JCP 2007), we present a new approach to simulate the sound speed in 3-D compressible two-phase dispersed flows, in which each face is divided into gas-gas, gas-liquid, and liquid-liquid parts via reconstruction by volume fraction, and fluxes are calculated correspondingly. Applying it to well-distributed air-water bubbly flows and comparing with the experimental measurements in an air-water mixture (Karplus, JASA 1957), the effects of adiabaticity, viscosity, and isothermality are examined. Under viscous and isothermal conditions, the simulation results match the experimental ones very well, showing that DNS with the current method is an effective way to study the sound speed of complex two-phase dispersed flows. Including the two-phase Riemann solver with phase change (Fechter et al., JCP 2017), more complex problems can be numerically studied.
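As a point of reference for the low mixture sound speeds mentioned above, the sketch below evaluates the classical homogeneous-mixture (Wood) relation for an air-water mixture. It is only an illustrative textbook formula under assumed fluid properties, not the DNS approach of the study.

import numpy as np

def wood_sound_speed(alpha, rho_g=1.2, c_g=340.0, rho_l=1000.0, c_l=1480.0):
    """Homogeneous-mixture (Wood) sound speed for gas volume fraction alpha."""
    rho_m = alpha * rho_g + (1.0 - alpha) * rho_l                            # mixture density
    beta_m = alpha / (rho_g * c_g**2) + (1.0 - alpha) / (rho_l * c_l**2)     # mixture compressibility
    return 1.0 / np.sqrt(rho_m * beta_m)

print(wood_sound_speed(np.array([0.001, 0.01, 0.1, 0.5, 0.9])))   # dips far below both pure-phase values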
Sea-Floor geology and character of Eastern Rhode Island Sound West of Gay Head, Massachusetts
Poppe, L.J.; McMullen, K.Y.; Ackerman, S.D.; Blackwood, D.S.; Irwin, B.J.; Schaer, J.D.; Forrest, M.R.
2011-01-01
Gridded multibeam bathymetry covers approximately 102 square kilometers of sea floor in eastern Rhode Island Sound west of Gay Head, Massachusetts. Although originally collected for charting purposes during National Oceanic and Atmospheric Administration hydrographic survey H11922, these acoustic data and the sea-floor stations subsequently occupied to verify them (1) show the composition and terrain of the seabed, (2) provide information on sediment transport and benthic habitat, and (3) are part of an expanding series of studies that provide a fundamental framework for research and management activities (for example, windfarms and fisheries) along the Massachusetts inner continental shelf. Most of the sea floor in the study area has an undulating to faintly rippled appearance and is composed of bioturbated muddy sand, reflecting processes associated with sediment sorting and reworking. Shallower areas are composed of rippled sand and, where small fields of megaripples are present, indicate sedimentary environments characterized by processes associated with coarse bedload transport. Boulders and gravel were found on the floors of scour depressions and on top of an isolated bathymetric high where erosion has removed the Holocene marine sediments and exposed the underlying relict lag deposits of Pleistocene drift. The numerous scour depressions, which formed during storm-driven events, result in the juxtaposition of sea-floor areas with contrasting sedimentary environments and distinct gravel, sand, and muddy sand textures. This textural heterogeneity in turn creates a complex patchwork of habitats. Our observations of local variations in community structure suggest that this small-scale textural heterogeneity adds dramatically to the sound-wide benthic biological diversity.
Speech endpoint detection with non-language speech sounds for generic speech processing applications
NASA Astrophysics Data System (ADS)
McClain, Matthew; Romanowski, Brian
2009-05-01
Non-language speech sounds (NLSS) are sounds produced by humans that do not carry linguistic information. Examples of these sounds are coughs, clicks, breaths, and filled pauses such as "uh" and "um" in English. NLSS are prominent in conversational speech, but can be a significant source of errors in speech processing applications. Traditionally, these sounds are ignored by speech endpoint detection algorithms, where speech regions are identified in the audio signal prior to processing. The ability to filter NLSS as a pre-processing step can significantly enhance the performance of many speech processing applications, such as speaker identification, language identification, and automatic speech recognition. In order to be used in all such applications, NLSS detection must be performed without the use of language models that provide knowledge of the phonology and lexical structure of speech. This is especially relevant to situations where the languages used in the audio are not known a priori. We present the results of preliminary experiments using data from American and British English speakers, in which segments of audio are classified as language speech sounds (LSS) or NLSS using a set of acoustic features designed for language-agnostic NLSS detection and a hidden Markov model (HMM) to model speech generation. The results of these experiments indicate that the features and model used are capable of detecting certain types of NLSS, such as breaths and clicks, while detection of other types of NLSS such as filled pauses will require future research.
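A minimal sketch of the kind of likelihood-based classification described above is given below: one Gaussian HMM is trained per class (language vs. non-language speech sounds) on frame-level acoustic feature sequences, and a new segment is labelled by comparing log-likelihoods. The feature extraction step, the hmmlearn dependency, and the number of states are assumptions for illustration, not details of the authors' system.

import numpy as np
from hmmlearn.hmm import GaussianHMM   # assumed dependency

def train_class_model(feature_segments, n_states=3):
    """feature_segments: list of (n_frames, n_features) arrays belonging to one class."""
    X = np.vstack(feature_segments)
    lengths = [len(seg) for seg in feature_segments]
    model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    return model

def classify_segment(segment, lss_model, nlss_model):
    """Label a feature segment as language (LSS) or non-language (NLSS) speech sound."""
    return "LSS" if lss_model.score(segment) > nlss_model.score(segment) else "NLSS"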
Nonlinear frequency compression: effects on sound quality ratings of speech and music.
Parsa, Vijay; Scollie, Susan; Glista, Danielle; Seelisch, Andreas
2013-03-01
Frequency lowering technologies offer an alternative amplification solution for severe to profound high frequency hearing losses. While frequency lowering technologies may improve audibility of high frequency sounds, the very nature of this processing can affect the perceived sound quality. This article reports the results from two studies that investigated the impact of a nonlinear frequency compression (NFC) algorithm on perceived sound quality. In the first study, the cutoff frequency and compression ratio parameters of the NFC algorithm were varied, and their effect on the speech quality was measured subjectively with 12 normal hearing adults, 12 normal hearing children, 13 hearing impaired adults, and 9 hearing impaired children. In the second study, 12 normal hearing and 8 hearing impaired adult listeners rated the quality of speech in quiet, speech in noise, and music after processing with a different set of NFC parameters. Results showed that the cutoff frequency parameter had more impact on sound quality ratings than the compression ratio, and that the hearing impaired adults were more tolerant to increased frequency compression than normal hearing adults. No statistically significant differences were found in the sound quality ratings of speech-in-noise and music stimuli processed through various NFC settings by hearing impaired listeners. These findings suggest that there may be an acceptable range of NFC settings for hearing impaired individuals where sound quality is not adversely affected. These results may assist an Audiologist in clinical NFC hearing aid fittings for achieving a balance between high frequency audibility and sound quality.
Sun, Xiuwen; Li, Xiaoling; Ji, Lingyu; Han, Feng; Wang, Huifen; Liu, Yang; Chen, Yao; Lou, Zhiyuan; Li, Zhuoyun
2018-01-01
Based on the existing research on sound symbolism and crossmodal correspondence, this study proposed an extended investigation of cross-modal correspondence between various sound attributes and color properties in a group of non-synesthetes. In Experiment 1, we assessed the associations between each property of sounds and colors. Twenty sounds with five auditory properties (pitch, roughness, sharpness, tempo and discontinuity), each varied in four levels, were used as the sound stimuli. Forty-nine colors with different hues, saturation and brightness were used to match to those sounds. Results revealed that, besides pitch and tempo, roughness and sharpness also played roles in sound-color correspondence. Reaction times for sound-hue were a little longer than the reaction times for sound-lightness. In Experiment 2, a speeded target discrimination task was used to assess whether the associations between sound attributes and color properties could invoke natural cross-modal correspondence and improve participants' cognitive efficiency in cognitive tasks. Several typical sound-color pairings were selected according to the results of Experiment 1. Participants were divided into two groups (congruent and incongruent). In each trial participants had to judge whether the presented color could appropriately be associated with the sound stimuli. Results revealed that participants responded more quickly and accurately in the congruent group than in the incongruent group. It was also found that there was no significant difference in reaction times and error rates between sound-hue and sound-lightness. The results of Experiments 1 and 2 indicate the existence of a robust crossmodal correspondence between multiple attributes of sound and color, which also has a strong influence on cognitive tasks. The inconsistency of the reaction times between sound-hue and sound-lightness in Experiments 1 and 2 is probably due to the difference in experimental protocol, which indicates that the complexity of the experimental design may be an important factor in crossmodal correspondence phenomena. PMID:29507834
Sound Classification in Hearing Aids Inspired by Auditory Scene Analysis
NASA Astrophysics Data System (ADS)
Büchler, Michael; Allegro, Silvia; Launer, Stefan; Dillier, Norbert
2005-12-01
A sound classification system for the automatic recognition of the acoustic environment in a hearing aid is discussed. The system distinguishes the four sound classes "clean speech," "speech in noise," "noise," and "music." A number of features that are inspired by auditory scene analysis are extracted from the sound signal. These features describe amplitude modulations, spectral profile, harmonicity, amplitude onsets, and rhythm. They are evaluated together with different pattern classifiers. Simple classifiers, such as rule-based and minimum-distance classifiers, are compared with more complex approaches, such as Bayes classifier, neural network, and hidden Markov model. Sounds from a large database are employed for both training and testing of the system. The achieved recognition rates are very high except for the class "speech in noise." Problems arise in the classification of compressed pop music, strongly reverberated speech, and tonal or fluctuating noises.
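The sketch below illustrates the simplest of the classifiers mentioned above, a minimum-distance classifier operating on two auditory-scene-inspired features (amplitude-modulation depth and spectral centroid). The feature set and normalisation are illustrative assumptions rather than the published system.

import numpy as np

def extract_features(x, fs):
    """Two illustrative features: depth of amplitude modulation and normalised spectral centroid."""
    env = np.abs(x)                                       # crude amplitude envelope
    mod_depth = env.std() / (env.mean() + 1e-12)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    centroid = (freqs * spectrum).sum() / (spectrum.sum() + 1e-12)
    return np.array([mod_depth, centroid / (fs / 2.0)])

def minimum_distance_classify(x, fs, templates):
    """templates: dict mapping class name ('clean speech', 'music', ...) -> mean feature vector."""
    feat = extract_features(x, fs)
    return min(templates, key=lambda c: np.linalg.norm(feat - templates[c]))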
Sonar sound groups and increased terminal buzz duration reflect task complexity in hunting bats.
Hulgard, Katrine; Ratcliffe, John M
2016-02-09
More difficult tasks are generally regarded as such because they demand greater attention. Echolocators provide rare insight into this relationship because biosonar signals can be monitored. Here we show that bats produce longer terminal buzzes and more sonar sound groups during their approach to prey under presumably more difficult conditions. Specifically, we found that Daubenton's bats, Myotis daubentonii, produced longer buzzes when aerial-hawking versus water-trawling prey, but that bats taking revolving air- and water-borne prey produced more sonar sound groups than they did when taking stationary prey. Buzz duration and sonar sound groups have been suggested to be independent means by which bats attend to would-be targets and other objects of interest. We suggest that for attacking bats both should be considered indicators of task difficulty and that the buzz is, essentially, an extended sonar sound group.
Geographical variation in sound production in the anemonefish Amphiprion akallopisos
Parmentier, E; Lagardère, J.P; Vandewalle, P; Fine, M.L
2005-01-01
Because of pelagic-larval dispersal, coral-reef fishes are distributed widely with minimal genetic differentiation between populations. Amphiprion akallopisos, a clownfish that uses sound production to defend its anemone territory, has a wide but disjunct distribution in the Indian Ocean. We compared sounds produced by these fishes from populations in Madagascar and Indonesia, a distance of 6500 km. Differentiation of agonistic calls into distinct types indicates a complexity not previously recorded in fishes' acoustic communication. Moreover, various acoustic parameters, including peak frequency, pulse duration, and number of peaks per pulse, differed between the two populations. The geographic comparison is the first to demonstrate ‘dialects’ in a marine fish species, and these differences in sound parameters suggest genetic divergence between these two populations. These results highlight a possible approach for investigating the role of sounds in fish behaviour, reproductive divergence, and speciation. PMID:16087425
Tsunoda, Koichi; Sekimoto, Sotaro; Itoh, Kenji
2016-06-01
Conclusions The results suggested that mother tongue Japanese (MJ) and non-mother tongue Japanese (non-MJ) listeners differ in their pattern of brain dominance when listening to sounds from the natural world, in particular insect sounds. These results provide significant support for previous findings by Tsunoda (1970). Objectives This study concentrates on listeners who show clear evidence of a 'speech' brain vs a 'music' brain and determines which side is most active in the processing of insect sounds, using near-infrared spectroscopy. Methods The present study uses 2-channel near-infrared spectroscopy (NIRS) to provide a more direct measure of left- and right-brain activity while participants listen to each of three types of sounds: Japanese speech, Western violin music, or insect sounds. Data were obtained from 33 participants who showed laterality on opposite sides for Japanese speech and Western music. Results Results showed that a majority (80%) of the MJ participants exhibited dominance for insect sounds on the side that was dominant for language, while a majority (62%) of the non-MJ participants exhibited dominance for insect sounds on the side that was dominant for music.
Phonological Processing and Reading in Children with Speech Sound Disorders
ERIC Educational Resources Information Center
Rvachew, Susan
2007-01-01
Purpose: To examine the relationship between phonological processing skills prior to kindergarten entry and reading skills at the end of 1st grade, in children with speech sound disorders (SSD). Method: The participants were 17 children with SSD and poor phonological processing skills (SSD-low PP), 16 children with SSD and good phonological…
Graphemic Cohesion Effect in Reading and Writing Complex Graphemes
ERIC Educational Resources Information Center
Spinelli, Elsa; Kandel, Sonia; Guerassimovitch, Helena; Ferrand, Ludovic
2012-01-01
"AU" /o/ and "AN" /a/ in French are both complex graphemes, but they vary in their strength of association to their respective sounds. The letter sequence "AU" is systematically associated to the phoneme /o/, and as such is always parsed as a complex grapheme. However, "AN" can be associated with either one…
A Structural Theory of Pitch
Laudanski, Jonathan; Zheng, Yi
2014-01-01
Musical notes can be ordered from low to high along a perceptual dimension called “pitch”. A characteristic property of these sounds is their periodic waveform, and periodicity generally correlates with pitch. Thus, pitch is often described as the perceptual correlate of the periodicity of the sound’s waveform. However, the existence and salience of pitch also depend in a complex way on other factors, in particular harmonic content. For example, periodic sounds made of high-order harmonics tend to have a weaker pitch than those made of low-order harmonics. Here we examine the theoretical proposition that pitch is the perceptual correlate of the regularity structure of the vibration pattern of the basilar membrane, across place and time, a generalization of the traditional view on pitch. While this proposition also attributes pitch to periodic sounds, we show that it predicts differences between resolved and unresolved harmonic complexes and a complex domain of existence of pitch, in agreement with psychophysical experiments. We also present a possible neural mechanism for pitch estimation based on coincidence detection, which does not require long delays, in contrast with standard temporal models of pitch. PMID:26464959
DETECTION AND IDENTIFICATION OF SPEECH SOUNDS USING CORTICAL ACTIVITY PATTERNS
Centanni, T.M.; Sloan, A.M.; Reed, A.C.; Engineer, C.T.; Rennaker, R.; Kilgard, M.P.
2014-01-01
We have developed a classifier capable of locating and identifying speech sounds using activity from rat auditory cortex with an accuracy equivalent to behavioral performance without the need to specify the onset time of the speech sounds. This classifier can identify speech sounds from a large speech set within 40 ms of stimulus presentation. To compare the temporal limits of the classifier to behavior, we developed a novel task that requires rats to identify individual consonant sounds from a stream of distracter consonants. The classifier successfully predicted the ability of rats to accurately identify speech sounds for syllable presentation rates up to 10 syllables per second (up to 17.9 ± 1.5 bits/sec), which is comparable to human performance. Our results demonstrate that the spatiotemporal patterns generated in primary auditory cortex can be used to quickly and accurately identify consonant sounds from a continuous speech stream without prior knowledge of the stimulus onset times. Improved understanding of the neural mechanisms that support robust speech processing in difficult listening conditions could improve the identification and treatment of a variety of speech processing disorders. PMID:24286757
Sounds Activate Visual Cortex and Improve Visual Discrimination
Störmer, Viola S.; Martinez, Antigona; McDonald, John J.; Hillyard, Steven A.
2014-01-01
A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. PMID:25031419
Lewis, James W.; Talkington, William J.; Walker, Nathan A.; Spirou, George A.; Jajosky, Audrey; Frum, Chris
2009-01-01
The ability to detect and rapidly process harmonic sounds, which in nature are typical of animal vocalizations and speech, can be critical for communication among conspecifics and for survival. Single-unit studies have reported neurons in auditory cortex sensitive to specific combinations of frequencies (e.g. harmonics), theorized to rapidly abstract or filter for specific structures of incoming sounds, where large ensembles of such neurons may constitute spectral templates. We studied the contribution of harmonic structure to activation of putative spectral templates in human auditory cortex by using a wide variety of animal vocalizations, as well as artificially constructed iterated rippled noises (IRNs). Both the IRNs and vocalization sounds were quantitatively characterized by calculating a global harmonics-to-noise ratio (HNR). Using fMRI we identified HNR-sensitive regions when presenting either artificial IRNs and/or recordings of natural animal vocalizations. This activation included regions situated between functionally defined primary auditory cortices and regions preferential for processing human non-verbal vocalizations or speech sounds. These results demonstrate that the HNR of sound reflects an important second-order acoustic signal attribute that parametrically activates distinct pathways of human auditory cortex. Thus, these results provide novel support for putative spectral templates, which may subserve a major role in the hierarchical processing of vocalizations as a distinct category of behaviorally relevant sound. PMID:19228981
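For orientation, the sketch below shows one common way to estimate a global harmonics-to-noise ratio from the peak of the normalised autocorrelation. It is an illustration of the quantity, not necessarily the exact computation used in the study, and the F0 search range is an assumption.

import numpy as np

def global_hnr(x, fs, f0_min=80.0, f0_max=1500.0):
    """Global harmonics-to-noise ratio (dB) from the normalised autocorrelation peak."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]    # one-sided autocorrelation
    ac = ac / ac[0]                                      # normalise so that lag 0 equals 1
    lo, hi = int(fs / f0_max), int(fs / f0_min)          # lag range for plausible periodicities
    r = np.clip(ac[lo:hi].max(), 1e-6, 1.0 - 1e-6)       # strongest periodic component
    return 10.0 * np.log10(r / (1.0 - r))                # ratio of harmonic to noise energy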
Justen, Christoph; Herbert, Cornelia
2016-01-01
So far, neurophysiological studies have investigated implicit and explicit self-related processing particularly for self-related stimuli such as one's own face or name. The present study extends previous research to the implicit processing of self-related movement sounds and explores their spatio-temporal dynamics. Event-related potentials (ERPs) were assessed while participants (N = 12 healthy subjects) listened passively to previously recorded self- and other-related finger snapping sounds, presented either as deviants or standards during an oddball paradigm. Passive listening to low (500 Hz) and high (1000 Hz) pure tones served as an additional control. For self- vs. other-related finger snapping sounds, analysis of ERPs revealed significant differences in the time windows of the N2a/MMN and P3. A subsequent source localization analysis with standardized low-resolution brain electromagnetic tomography (sLORETA) revealed increased cortical activation in distinct motor areas such as the supplementary motor area (SMA) in the N2a/mismatch negativity (MMN) as well as the P3 time window during processing of self- and other-related finger snapping sounds. In contrast, brain regions associated with self-related processing [e.g., right anterior/posterior cingulate cortex (ACC/PCC)] as well as the right inferior parietal lobule (IPL) showed increased activation particularly during processing of self- vs. other-related finger snapping sounds in the time windows of the N2a/MMN (ACC/PCC) or the P3 (IPL). None of these brain regions showed enhanced activation while listening passively to low (500 Hz) and high (1000 Hz) pure tones. Taken together, the current results indicate (1) a specific role of motor regions such as the SMA during auditory processing of movement-related information, regardless of whether this information is self- or other-related, (2) activation of neural sources such as the ACC/PCC and the IPL during implicit processing of self-related movement stimuli, and (3) their differential temporal activation during deviance (N2a/MMN – ACC/PCC) and target detection (P3 – IPL) of self- vs. other-related movement sounds. PMID:27777557
Spectral characteristics of wake vortex sound during roll-up
DOT National Transportation Integrated Search
2003-12-01
This report presents an analysis of the sound spectra generated by a trailing aircraft vortex during its rolling-up process. The : study demonstrates that a rolling-up vortex could produce low frequency (less than 100 Hz) sound with very high intensi...
Encoding of natural and artificial stimuli in the auditory midbrain
NASA Astrophysics Data System (ADS)
Lyzwa, Dominika
How complex acoustic stimuli are encoded in the main center of convergence in the auditory midbrain is not clear. Here, the representation of neural spiking responses to natural and artificial sounds across this subcortical structure is investigated based on neurophysiological recordings from the mammalian midbrain. Neural and stimulus correlations of neuronal pairs are analyzed with respect to the neurons' distance, and responses to different natural communication sounds are discriminated. A model which includes linear and nonlinear neural response properties of this nucleus is presented and employed to predict temporal spiking responses to new sounds. Supported by BMBF Grant 01GQ0811.
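A minimal sketch of the pairwise correlation analysis mentioned above is given below: the stimulus (signal) correlation compares two neurons' mean responses across stimuli, and the neural (noise) correlation compares their trial-by-trial residuals. The data layout is an assumption for illustration.

import numpy as np

def signal_and_noise_correlation(r1, r2):
    """r1, r2: spike-count arrays of shape (n_stimuli, n_trials) for two simultaneously recorded neurons."""
    m1, m2 = r1.mean(axis=1), r2.mean(axis=1)            # mean response per stimulus (tuning)
    signal_corr = np.corrcoef(m1, m2)[0, 1]              # stimulus (signal) correlation
    res1 = (r1 - m1[:, None]).ravel()                    # trial-by-trial fluctuations
    res2 = (r2 - m2[:, None]).ravel()
    noise_corr = np.corrcoef(res1, res2)[0, 1]           # neural (noise) correlation
    return signal_corr, noise_corr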
Radar soundings of the ionosphere of Mars.
Gurnett, D A; Kirchner, D L; Huff, R L; Morgan, D D; Persoon, A M; Averkamp, T F; Duru, F; Nielsen, E; Safaeinili, A; Plaut, J J; Picardi, G
2005-12-23
We report the first radar soundings of the ionosphere of Mars with the MARSIS (Mars Advanced Radar for Subsurface and Ionosphere Sounding) instrument on board the orbiting Mars Express spacecraft. Several types of ionospheric echoes are observed, ranging from vertical echoes caused by specular reflection from the horizontally stratified ionosphere to a wide variety of oblique and diffuse echoes. The oblique echoes are believed to arise mainly from ionospheric structures associated with the complex crustal magnetic fields of Mars. Echoes at the electron plasma frequency and the cyclotron period also provide measurements of the local electron density and magnetic field strength.
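As background to the last sentence, the sketch below applies the standard relations that convert an echo at the local electron plasma frequency into an electron density, and an electron cyclotron period into a magnetic field strength. It illustrates the underlying physics only, not the MARSIS processing chain; the example input values are assumptions.

import numpy as np

def electron_density_from_fp(fp_hz):
    """Electron density [cm^-3] from plasma frequency f_p [Hz], using f_p ~ 8980 * sqrt(n_e)."""
    return (fp_hz / 8980.0) ** 2

def magnetic_field_from_tc(tc_s):
    """Magnetic field strength [T] from electron cyclotron period T_c [s]: B = 2*pi*m_e / (e * T_c)."""
    m_e, e = 9.109e-31, 1.602e-19
    return 2.0 * np.pi * m_e / (e * tc_s)

print(electron_density_from_fp(2.0e6))   # a 2 MHz plasma line corresponds to ~5e4 electrons/cm^3
print(magnetic_field_from_tc(1.2e-3))    # a 1.2 ms cyclotron period corresponds to ~30 nT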
Liu, Baolin; Wu, Guangning; Wang, Zhongning; Ji, Xiang
2011-07-01
In the real world, some of the auditory and visual information received by the human brain is temporally asynchronous. How is such information integrated in cognitive processing in the brain? In this paper, we aimed to study the semantic integration of differently asynchronous audio-visual information in cognitive processing using the event-related potential (ERP) method. Subjects were presented with videos of real-world events in which the auditory and visual information are temporally asynchronous. When the critical action was prior to the sound, sounds incongruous with the preceding critical actions elicited an N400 effect when compared to the congruous condition. This result demonstrates that semantic contextual integration indexed by N400 also applies to cognitive processing of multisensory information. In addition, the N400 effect is early in latency when contrasted with other visually induced N400 studies. It is shown that cross-modal information is facilitated in time when contrasted with visual information in isolation. When the sound was prior to the critical action, a larger late positive wave was observed under the incongruous condition compared to the congruous condition. P600 might represent a reanalysis process, in which the mismatch between the critical action and the preceding sound was evaluated. It is shown that environmental sound may affect the cognitive processing of a visual event. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Attention modifies sound level detection in young children.
Sussman, Elyse S; Steinschneider, Mitchell
2011-07-01
Have you ever shouted your child's name from the kitchen while they were watching television in the living room to no avail, so you shout their name again, only louder? Yet, still no response. The current study provides evidence that young children process loudness changes differently than pitch changes when they are engaged in another task such as watching a video. Intensity level changes were physiologically detected only when they were behaviorally relevant, but frequency level changes were physiologically detected without task relevance in younger children. This suggests that changes in pitch rather than changes in volume may be more effective in evoking a response when sounds are unexpected. Further, even though behavioral ability may appear to be similar in younger and older children, attention-based physiologic responses differ from automatic physiologic processes in children. Results indicate that 1) the automatic auditory processes leading to more efficient higher-level skills continue to become refined through childhood; and 2) there are different time courses for the maturation of physiological processes encoding the distinct acoustic attributes of sound pitch and sound intensity. The relevance of these findings to sound perception in real-world environments is discussed.
Jansson-Verkasalo, Eira; Eggers, Kurt; Järvenpää, Anu; Suominen, Kalervo; Van den Bergh, Bea; De Nil, Luc; Kujala, Teija
2014-09-01
Recent theoretical conceptualizations suggest that disfluencies in stuttering may arise from several factors, one of them being atypical auditory processing. The main purpose of the present study was to investigate whether speech sound encoding and central auditory discrimination are affected in children who stutter (CWS). Participants were 10 CWS and 12 typically developing children with fluent speech (TDC). Event-related potentials (ERPs) for syllables and syllable changes [consonant, vowel, vowel-duration, frequency (F0), and intensity changes], critical in speech perception and language development of CWS, were compared to those of TDC. There were no significant group differences in the amplitudes or latencies of the P1 or N2 responses elicited by the standard stimuli. However, the Mismatch Negativity (MMN) amplitude was significantly smaller in CWS than in TDC. For TDC, all deviants of the linguistic multifeature paradigm elicited significant MMN amplitudes, comparable with the results found earlier with the same paradigm in 6-year-old children. In contrast, only the duration change elicited a significant MMN in CWS. The results showed that central auditory speech-sound processing was typical at the level of sound encoding in CWS. In contrast, central speech-sound discrimination, as indexed by the MMN for multiple sound features (both phonetic and prosodic), was atypical in the group of CWS. Findings were linked to existing conceptualizations of stuttering etiology. The reader will be able (a) to describe recent findings on central auditory speech-sound processing in individuals who stutter, (b) to describe the measurement of auditory reception and central auditory speech-sound discrimination, and (c) to describe the findings on central auditory speech-sound discrimination, as indexed by the mismatch negativity (MMN), in children who stutter. Copyright © 2014 Elsevier Inc. All rights reserved.
Liu, B; Wang, Z; Wu, G; Meng, X
2011-04-28
In this paper, we aim to study the cognitive integration of asynchronous natural or non-natural auditory and visual information in videos of real-world events. Videos with asynchronous semantically consistent or inconsistent natural sound or speech were used as stimuli in order to compare the difference and similarity between multisensory integrations of videos with asynchronous natural sound and speech. The event-related potential (ERP) results showed that N1 and P250 components were elicited irrespective of whether natural sounds were consistent or inconsistent with critical actions in videos. Videos with inconsistent natural sound could elicit N400-P600 effects compared to videos with consistent natural sound, which was similar to the results from unisensory visual studies. Videos with semantically consistent or inconsistent speech could both elicit N1 components. Meanwhile, videos with inconsistent speech would elicit N400-LPN effects in comparison with videos with consistent speech, which showed that this semantic processing was probably related to recognition memory. Moreover, the N400 effect elicited by videos with semantically inconsistent speech was larger and later than that elicited by videos with semantically inconsistent natural sound. Overall, multisensory integration of videos with natural sound or speech could be roughly divided into two stages. For the videos with natural sound, the first stage might reflect the connection between the received information and the stored information in memory; and the second one might stand for the evaluation process of inconsistent semantic information. For the videos with speech, the first stage was similar to the first stage of videos with natural sound; while the second one might be related to recognition memory process. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
Monitoring Sea Surface Processes Using the High Frequency Ambient Sound Field
2005-09-30
[Fragmentary excerpts] A call at time 2.2 sec has been identified as a Southern Resident Killer Whale (Puget Sound). In coastal and inland waterways, anthropogenic noise ... Monitoring sites include the ITCZ (10ºN, 95ºW), the Bering Sea coastal shelf, the Ionian Sea, Carr Inlet (Puget Sound, Washington), and Haro Strait (Washington/BC). Figure 8 compares cumulative distribution functions (CDFs) for rain, drizzle, and shipping in Carr Inlet, Puget Sound.
Melting and vibrational properties of planetary materials under deep Earth conditions
NASA Astrophysics Data System (ADS)
Jackson, Jennifer
2013-06-01
The large chemical, density, and dynamical contrasts associated with the juxtaposition of a liquid iron-dominant alloy and silicates at Earth's core-mantle boundary (CMB) are associated with a rich range of complex seismological features. For example, seismic heterogeneity at this boundary includes small patches of anomalously low sound velocities, called ultralow-velocity zones. Their small size (5 to 40 km thick) and depth (about 2800 km) present unique challenges for seismic characterization and geochemical interpretation. In this contribution, we will present recent nuclear resonant inelastic x-ray scattering measurements on iron-bearing silicates, oxides, and metals, and their application towards our understanding of Earth's interior. Specifically, we will present measurements on silicates and oxide minerals that are important in Earth's upper and lower mantles, as well as iron to over 1 megabar in pressure. The nuclear resonant inelastic x-ray scattering method provides specific vibrational information, e.g., the phonon density of states, and in combination with compression data permits the determination of sound velocities and other vibrational information under high pressure and high temperature. For example, accurate determination of the sound velocities and density of chemically complex Earth materials is essential for understanding the distribution and behavior of minerals and iron-alloys with depth. The high statistical quality of the data in combination with high energy resolution and a small x-ray focus size permit accurate evaluation of the vibrational-related quantities of iron-bearing Earth materials as a function of pressure, such as the Grüneisen parameter, thermal pressure, sound velocities, and iron isotope fractionation quantities. Finally, we will present a novel method detecting the solid-liquid phase boundary of compressed iron at high temperatures using synchrotron Mössbauer spectroscopy. Our approach is unique because the dynamics of the iron atoms are monitored. This process is described by the Lamb-Mössbauer factor, which is related to the mean-square displacement of the iron atoms. We will discuss the implications of our results as they relate to Earth's core and core-mantle boundary regions.
AVE-SESAME IV: 25 mb sounding data
NASA Technical Reports Server (NTRS)
Sienkiewicz, M. E.; Gilchrist, L. P.; Turner, R. E.
1980-01-01
The rawinsonde sounding program for the AVE-SESAME 4 experiment is described and tabulated data at 25 mb for the 23 National Weather Service and 20 special stations participating in the experiment are presented. Soundings were taken at 3 hr intervals beginning at 1200 GMT on May 9, 1979, and ending at 1200 GMT on May 10, 1979 (nine sounding times). The method of processing is discussed, estimates of the rms errors in the data are presented, and an example of contact data is given. Reasons are given for the termination of soundings below 100 mb, and soundings are listed which exhibit abnormal characteristics.
NASA Astrophysics Data System (ADS)
Miner, Nadine Elizabeth
1998-09-01
This dissertation presents a new wavelet-based method for synthesizing perceptually convincing, dynamic sounds using parameterized sound models. The sound synthesis method is applicable to a variety of applications including Virtual Reality (VR), multimedia, entertainment, and the World Wide Web (WWW). A unique contribution of this research is the modeling of the stochastic, or non-pitched, sound components. This stochastic-based modeling approach leads to perceptually compelling sound synthesis. Two preliminary studies provided data on multi-sensory interaction and audio-visual synchronization timing; these results contributed to the design of the new sound synthesis method. The method uses a four-phase development process, including analysis, parameterization, synthesis and validation, to create the wavelet-based sound models. A patent is pending for this dynamic sound synthesis method, which provides perceptually realistic, real-time sound generation. This dissertation also presents a battery of perceptual experiments developed to verify the sound synthesis results. These experiments are applicable for validation of any sound synthesis technique.
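The sketch below is a rough illustration, not the patented method, of the analysis-parameterization-synthesis loop described above: a recorded sound is decomposed with a discrete wavelet transform, the finest (noise-like, stochastic) detail bands are re-randomised while preserving their per-band energy, and a new variant is reconstructed. PyWavelets is an assumed dependency, and the wavelet, depth, and number of stochastic bands are placeholder choices.

import numpy as np
import pywt   # assumed dependency (PyWavelets)

def resynthesize(x, wavelet="db4", level=6, stochastic_levels=3, seed=0):
    """Reconstruct a perceptually similar variant of x by re-randomising fine-scale detail bands."""
    rng = np.random.default_rng(seed)
    coeffs = pywt.wavedec(x, wavelet, level=level)         # [approx, detail_level, ..., detail_1]
    for i in range(1, stochastic_levels + 1):              # finest bands carry the noise-like component
        d = coeffs[-i]
        coeffs[-i] = rng.standard_normal(d.shape) * d.std()   # new texture, same band energy
    return pywt.waverec(coeffs, wavelet)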
Quarto, Tiziana; Blasi, Giuseppe; Pallesen, Karen Johanne; Bertolino, Alessandro; Brattico, Elvira
2014-01-01
The ability to recognize emotions contained in facial expressions is affected by both affective traits and states and varies widely between individuals. While affective traits are stable in time, affective states can be regulated more rapidly by environmental stimuli, such as music, that indirectly modulate the brain state. Here, we tested whether a relaxing or irritating sound environment affects implicit processing of facial expressions. Moreover, we investigated whether and how individual traits of anxiety and emotional control interact with this process. 32 healthy subjects performed an implicit emotion processing task (presented to subjects as a gender discrimination task) while the sound environment was defined either by a) a therapeutic music sequence (MusiCure), b) a noise sequence or c) silence. Individual changes in mood were sampled before and after the task by a computerized questionnaire. Additionally, emotional control and trait anxiety were assessed in a separate session by paper and pencil questionnaires. Results showed a better mood after the MusiCure condition compared with the other experimental conditions and faster responses to happy faces during MusiCure compared with angry faces during Noise. Moreover, individuals with higher trait anxiety were faster in performing the implicit emotion processing task during MusiCure compared with Silence. These findings suggest that sound-induced affective states are associated with differential responses to angry and happy emotional faces at an implicit stage of processing, and that a relaxing sound environment facilitates implicit emotional processing in anxious individuals. PMID:25072162
Linking the Shapes of Alphabet Letters to Their Sounds: The Case of Hebrew
ERIC Educational Resources Information Center
Treiman, Rebecca; Levin, Iris; Kessler, Brett
2012-01-01
Learning the sounds of letters is an important part of learning a writing system. Most previous studies of this process have examined English, focusing on variations in the phonetic iconicity of letter names as a reason why some letter sounds (such as that of b, where the sound is at the beginning of the letter's name) are easier to learn than…
Electrophysiological models of neural processing.
Nelson, Mark E
2011-01-01
The brain is an amazing information processing system that allows organisms to adaptively monitor and control complex dynamic interactions with their environment across multiple spatial and temporal scales. Mathematical modeling and computer simulation techniques have become essential tools in understanding diverse aspects of neural processing, ranging from sub-millisecond temporal coding in the sound localization circuitry of barn owls to long-term memory storage and retrieval in humans that can span decades. The processing capabilities of individual neurons lie at the core of these models, with the emphasis shifting upward and downward across different levels of biological organization depending on the nature of the questions being addressed. This review provides an introduction to the techniques for constructing biophysically based models of individual neurons and local networks. Topics include Hodgkin-Huxley-type models of macroscopic membrane currents, Markov models of individual ion-channel currents, compartmental models of neuronal morphology, and network models involving synaptic interactions among multiple neurons.
A Sparsity-Based Approach to 3D Binaural Sound Synthesis Using Time-Frequency Array Processing
NASA Astrophysics Data System (ADS)
Cobos, Maximo; Lopez, Jose J.; Spors, Sascha
2010-12-01
Localization of sounds in physical space plays a very important role in multiple audio-related disciplines, such as music, telecommunications, and audiovisual productions. Binaural recording is the most commonly used method to provide an immersive sound experience by means of headphone reproduction. However, it requires a very specific recording setup using high-fidelity microphones mounted in a dummy head. In this paper, we present a novel processing framework for binaural sound recording and reproduction that avoids the use of dummy heads, which is specially suitable for immersive teleconferencing applications. The method is based on a time-frequency analysis of the spatial properties of the sound picked up by a simple tetrahedral microphone array, assuming source sparseness. The experiments carried out using simulations and a real-time prototype confirm the validity of the proposed approach.
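Under the source-sparseness assumption stated above, the core analysis step can be illustrated by estimating a direction of arrival (DOA) for every time-frequency bin from the phase difference of a single microphone pair, as in the sketch below. The spacing, STFT settings, and two-microphone simplification are assumptions; the published method uses a tetrahedral array and a full binaural resynthesis stage.

import numpy as np
from scipy.signal import stft

def tf_doa(mic1, mic2, fs, d=0.02, c=343.0):
    """Per-bin azimuth estimate (radians) from a two-microphone pair separated by d metres."""
    f, t, X1 = stft(mic1, fs=fs, nperseg=1024)
    _, _, X2 = stft(mic2, fs=fs, nperseg=1024)
    phase_diff = np.angle(X1 * np.conj(X2))               # inter-channel phase per time-frequency bin
    with np.errstate(divide="ignore", invalid="ignore"):
        tau = phase_diff / (2.0 * np.pi * f[:, None])     # time difference of arrival (f = 0 row is undefined)
    sin_theta = np.clip(c * tau / d, -1.0, 1.0)           # valid below the spatial aliasing frequency c / (2 d)
    return f, t, np.arcsin(sin_theta)                     # one dominant-source DOA per bin (sparseness assumption)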
Design of forging process variables under uncertainties
NASA Astrophysics Data System (ADS)
Repalle, Jalaja; Grandhi, Ramana V.
2005-02-01
Forging is a complex nonlinear process that is vulnerable to various manufacturing anomalies, such as variations in billet geometry, billet/die temperatures, material properties, and workpiece and forging equipment positional errors. A combination of these uncertainties could induce heavy manufacturing losses through premature die failure, final part geometric distortion, and reduced productivity. Identifying, quantifying, and controlling the uncertainties will reduce variability risk in a manufacturing environment, which will minimize the overall production cost. In this article, various uncertainties that affect the forging process are identified, and their cumulative effect on the forging tool life is evaluated. Because the forging process simulation is time-consuming, a response surface model is used to reduce computation time by establishing a relationship between the process performance and the critical process variables. A robust design methodology is developed by incorporating reliability-based optimization techniques to obtain sound forging components. A case study of an automotive-component forging-process design is presented to demonstrate the applicability of the method.
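The sketch below illustrates the surrogate-modelling idea described above: a quadratic response surface is fitted to a handful of (assumed) forging-simulation results and then used for a cheap Monte Carlo estimate of how often die life falls below a requirement when the process variables scatter. The variable names, design points, scatter, and limit value are illustrative assumptions only.

import numpy as np

# Assumed design-of-experiments results: (billet temperature [C], friction factor) -> normalised die life
X = np.array([[950, 0.10], [950, 0.20], [950, 0.30],
              [1050, 0.10], [1050, 0.20], [1050, 0.30], [1000, 0.20]])
y = np.array([1.00, 0.88, 0.72, 1.25, 1.10, 0.95, 1.05])

def quad_basis(X):
    t, m = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), t, m, t * m, t**2, m**2])

coef, *_ = np.linalg.lstsq(quad_basis(X), y, rcond=None)   # least-squares quadratic response surface

rng = np.random.default_rng(0)
samples = np.column_stack([rng.normal(1000.0, 15.0, 100_000),   # uncertain billet temperature
                           rng.normal(0.20, 0.03, 100_000)])    # uncertain friction factor
life = quad_basis(samples) @ coef
print("Estimated probability of die life below 0.9:", np.mean(life < 0.9))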
Marquardt, Torsten; Stange, Annette; Pecka, Michael; Grothe, Benedikt; McAlpine, David
2014-01-01
Recently, with the use of an amplitude-modulated binaural beat (AMBB), in which sound amplitude and interaural-phase difference (IPD) were modulated with a fixed mutual relationship (Dietz et al. 2013b), we demonstrated that the human auditory system uses interaural timing differences in the temporal fine structure of modulated sounds only during the rising portion of each modulation cycle. However, the degree to which peripheral or central mechanisms contribute to the observed strong dominance of the rising slope remains to be determined. Here, by recording responses of single neurons in the medial superior olive (MSO) of anesthetized gerbils and in the inferior colliculus (IC) of anesthetized guinea pigs to AMBBs, we report a correlation between the position within the amplitude-modulation (AM) cycle generating the maximum response rate and the position at which the instantaneous IPD dominates the total neural response. The IPD during the rising segment dominates the total response in 78% of MSO neurons and 69% of IC neurons, with responses of the remaining neurons predominantly coding the IPD around the modulation maximum. The observed diversity of dominance regions within the AM cycle, especially in the IC, and its comparison with the human behavioral data suggest that only the subpopulation of neurons with rising slope dominance codes the sound-source location in complex listening conditions. A comparison of two models to account for the data suggests that emphasis on IPDs during the rising slope of the AM cycle depends on adaptation processes occurring before binaural interaction. PMID:24554782
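For readers unfamiliar with the stimulus, the sketch below generates a simple amplitude-modulated binaural beat in the spirit of the description above: the two ears receive carriers offset by the modulation rate, so the interaural phase difference sweeps through a full cycle within every AM cycle. Carrier frequency, modulation rate, duration, and sample rate are illustrative assumptions, not the values used in the experiments.

import numpy as np

def ambb(fc=500.0, fm=8.0, dur=1.0, fs=44100):
    """Amplitude-modulated binaural beat: IPD and envelope keep a fixed mutual relationship."""
    t = np.arange(int(dur * fs)) / fs
    am = 0.5 * (1.0 - np.cos(2.0 * np.pi * fm * t))        # raised-cosine amplitude modulation
    left = am * np.sin(2.0 * np.pi * fc * t)
    right = am * np.sin(2.0 * np.pi * (fc + fm) * t)       # carrier offset equals the AM rate
    return np.column_stack([left, right])                  # (n_samples, 2) stereo array

stimulus = ambb()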
Burgess, Matthew K.; Bedrosian, Paul A.; Buesch, David C.
2014-01-01
Between 2010 and 2012, a total of 79 time-domain electromagnetic (TEM) soundings were collected in 12 groundwater basins in the U.S. Army Fort Irwin National Training Center (NTC) study area to help improve understanding of the hydrogeology of the NTC. The TEM data are discussed in this chapter in the context of geologic observations of the study area, the details of which are provided in the other chapters of this volume. Selection of locations for TEM soundings in unexplored basins was guided by gravity data that estimated the depth to the pre-Tertiary basement complex of crystalline rock and the alluvial thickness. Some TEM data were collected near boreholes with geophysical logs. The TEM response at locations near boreholes was used to evaluate sounding data for areas without boreholes. TEM models also were used to guide site selection of subsequent boreholes drilled as part of this study. Following borehole completion, geophysical logs were used to ground-truth and reinterpret previously collected TEM data. This iterative process was used to site subsequent TEM soundings and borehole locations as the study progressed. Although each groundwater subbasin within the NTC boundaries was explored using the TEM method, collection of TEM data was focused on those basins identified as best suited for development of water resources. At the NTC, TEM estimates of some lithologic thicknesses and electrical properties in the unsaturated zone are in good agreement with borehole data; however, water-table elevations were not easily identifiable from TEM data.
Virtual environment display for a 3D audio room simulation
NASA Astrophysics Data System (ADS)
Chapin, William L.; Foster, Scott
1992-06-01
Recent developments in virtual 3D audio and synthetic aural environments have produced a complex acoustical room simulation. The acoustical simulation models a room with walls, ceiling, and floor of selected sound-reflecting/absorbing characteristics and unlimited independent localizable sound sources. This non-visual acoustic simulation, implemented with four audio Convolvotrons™ by Crystal River Engineering, coupled to the listener with a Polhemus Isotrak™ tracking the listener's head position and orientation, and returning binaural sound over stereo headphones, is quite compelling to most listeners with eyes closed. This immersive effect should be reinforced when properly integrated into a full, multi-sensory virtual environment presentation. This paper discusses the design of an interactive, visual virtual environment that complements the acoustic model and is specified to: 1) allow the listener to move freely about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; 2) reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; 3) enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and 4) serve as a research testbed and technology-transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, is discussed through the development of four iterative configurations. The installed system implements a head-coupled, wide-angle, stereo-optic tracker/viewer and multi-computer simulation control. The portable demonstration system implements a head-mounted, wide-angle, stereo-optic display; separate head and pointer electromagnetic position trackers; a heterogeneous parallel graphics processing system; and object-oriented C++ program code.
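The original system performed this rendering in hardware with Convolvotron convolution engines and measured head-related filters; as a purely illustrative software analogue (the impulse responses below are fabricated, not measured), a localizable source can be rendered binaurally in Python by convolving it with left/right impulse responses that combine a direct path, an interaural time difference, and a few attenuated reflections:

import numpy as np
from scipy.signal import fftconvolve

fs = 44100

def toy_binaural_ir(direct_s, itd_s, reflections):
    # Fabricated left/right impulse responses: a direct path offset between the
    # ears by an interaural time difference, plus attenuated 'wall' reflections.
    n = int(fs * 0.1)
    left, right = np.zeros(n), np.zeros(n)
    left[int(fs * direct_s)] = 1.0
    right[int(fs * (direct_s + itd_s))] = 1.0
    for delay_s, gain in reflections:
        left[int(fs * delay_s)] += gain
        right[int(fs * delay_s)] += gain
    return left, right

source = np.random.default_rng(1).standard_normal(fs)          # 1 s of noise
ir_l, ir_r = toy_binaural_ir(0.003, 0.0005, [(0.012, 0.4), (0.021, 0.25)])
binaural = np.stack([fftconvolve(source, ir_l),
                     fftconvolve(source, ir_r)], axis=1)        # headphone feed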
Different Timescales for the Neural Coding of Consonant and Vowel Sounds
Perez, Claudia A.; Engineer, Crystal T.; Jakkamsetti, Vikram; Carraway, Ryan S.; Perry, Matthew S.
2013-01-01
Psychophysical, clinical, and imaging evidence suggests that consonant and vowel sounds have distinct neural representations. This study tests the hypothesis that consonant and vowel sounds are represented on different timescales within the same population of neurons by comparing behavioral discrimination with neural discrimination based on activity recorded in rat inferior colliculus and primary auditory cortex. Performance on 9 vowel discrimination tasks was highly correlated with neural discrimination based on spike count and was not correlated when spike timing was preserved. In contrast, performance on 11 consonant discrimination tasks was highly correlated with neural discrimination when spike timing was preserved and not when spike timing was eliminated. These results suggest that in the early stages of auditory processing, spike count encodes vowel sounds and spike timing encodes consonant sounds. These distinct coding strategies likely contribute to the robust nature of speech sound representations and may help explain some aspects of developmental and acquired speech processing disorders. PMID:22426334
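The contrast between count-based and timing-based coding can be illustrated with simulated spike trains (this is not the authors' recordings or analysis pipeline): two artificial responses with identical mean spike counts but different timing are indistinguishable from counts alone yet easily separated once 1-ms bins are retained.

import numpy as np

rng = np.random.default_rng(0)

def simulate_trials(rate_hz, n_trials):
    # Poisson spike counts in 1-ms bins for a given firing-rate profile (Hz).
    return rng.poisson(rate_hz / 1000.0, size=(n_trials, rate_hz.size))

def accuracy(train_a, train_b, test_a, reduce_fn):
    # Nearest-mean classification of class-A test trials after reducing each
    # trial either to its total spike count or to its full 1-ms time course.
    fa, fb = reduce_fn(train_a).mean(0), reduce_fn(train_b).mean(0)
    ft = reduce_fn(test_a)
    correct = np.linalg.norm(ft - fa, axis=-1) < np.linalg.norm(ft - fb, axis=-1)
    return correct.mean()

# Two made-up responses with the same number of spikes but different timing.
t = np.arange(200)                                    # 200-ms window, 1-ms bins
profile_a = 20 + 80 * np.exp(-0.5 * ((t - 50) / 10.0) ** 2)
profile_b = 20 + 80 * np.exp(-0.5 * ((t - 120) / 10.0) ** 2)

a_train, b_train = simulate_trials(profile_a, 100), simulate_trials(profile_b, 100)
a_test = simulate_trials(profile_a, 100)

count_only = lambda x: x.sum(axis=-1, keepdims=True)  # discard spike timing
with_timing = lambda x: x                             # keep 1-ms spike timing

print("accuracy from spike count only:", accuracy(a_train, b_train, a_test, count_only))
print("accuracy with spike timing   :", accuracy(a_train, b_train, a_test, with_timing))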
Computer-aided auscultation learning system for nursing technique instruction.
Hou, Chun-Ju; Chen, Yen-Ting; Hu, Ling-Chen; Chuang, Chih-Chieh; Chiu, Yu-Hsien; Tsai, Ming-Shih
2008-01-01
Pulmonary auscultation is a physical assessment skill learned by nursing students for examining the respiratory system. Generally, a mannequin equipped with a sound simulator is used to teach auscultation techniques to groups via classroom demonstration. However, nursing students cannot readily duplicate this learning environment for self-study. The advancement of electronic and digital signal processing technologies makes it feasible to simulate this learning environment. This study aims to develop a computer-aided auscultation learning system for assisting teachers and nursing students in auscultation teaching and learning. The system provides teachers with signal recording and processing of lung sounds and gives students immediate playback of the recorded lung sounds. A graphical user interface allows teachers to control the measuring device, draw lung sound waveforms, highlight lung sound segments of interest, and include descriptive text. Effects on learning lung sound auscultation were evaluated to verify the feasibility of the system. Fifteen nursing students voluntarily participated in the repeated experiment. The results of a paired t test showed that the auscultative abilities of the students were significantly improved by using the computer-aided auscultation learning system.
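The reported evaluation rests on a paired t-test of each student's performance before and after practising with the system; a minimal Python sketch with made-up scores for the fifteen participants:

import numpy as np
from scipy import stats

# Hypothetical auscultation test scores (percent correct) for 15 students,
# measured before and after using the learning system; values are illustrative.
before = np.array([62, 55, 70, 58, 64, 51, 73, 60, 66, 57, 69, 54, 61, 59, 65], float)
after  = np.array([74, 66, 78, 70, 72, 63, 80, 71, 75, 68, 77, 65, 70, 69, 76], float)

t_stat, p_value = stats.ttest_rel(after, before)
print(f"paired t({before.size - 1}) = {t_stat:.2f}, p = {p_value:.4f}")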
Effects of musical training on sound pattern processing in high-school students.
Wang, Wenjung; Staffaroni, Laura; Reid, Errold; Steinschneider, Mitchell; Sussman, Elyse
2009-05-01
Recognizing melody in music involves detection of both the pitch intervals and the silences between sequentially presented sounds. This study tested the hypothesis that active musical training in adolescents facilitates the ability to passively detect sequential sound patterns compared to musically non-trained, age-matched peers. Twenty adolescents, aged 15-18 years, were divided into groups according to their musical training and current experience. A fixed-order tone pattern was presented at various stimulus rates while the electroencephalogram was recorded. The influence of musical training on passive auditory processing of the sound patterns was assessed using components of event-related brain potentials (ERPs). The mismatch negativity (MMN) ERP component was elicited in different stimulus onset asynchrony (SOA) conditions in non-musicians than in musicians, indicating that the musically active adolescents were able to detect the sound patterns across longer time intervals than their age-matched peers. Musical training thus facilitates detection of auditory patterns, enabling automatic recognition of sequential sound patterns over longer time periods than in non-musical counterparts.
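For readers unfamiliar with the MMN measure used here and in several of the other abstracts, a minimal Python sketch (with synthetic averaged ERPs, not the study's data) of how an MMN amplitude is conventionally derived as the deviant-minus-standard difference wave:

import numpy as np

fs = 500                                    # EEG sampling rate in Hz (illustrative)
t = np.arange(-0.1, 0.5, 1.0 / fs)          # epoch from -100 to +500 ms

# Synthetic averaged ERPs (microvolts) at a fronto-central electrode; the
# deviant response carries an artificial negativity peaking near 150 ms.
rng = np.random.default_rng(0)
standard_erp = rng.normal(0.0, 0.2, t.size)
deviant_erp = standard_erp - 2.0 * np.exp(-0.5 * ((t - 0.15) / 0.03) ** 2)

# MMN: deviant-minus-standard difference wave, quantified as the most
# negative deflection in roughly the 100-250 ms post-stimulus window.
difference = deviant_erp - standard_erp
window = (t >= 0.10) & (t <= 0.25)
mmn_amplitude = difference[window].min()
mmn_latency_ms = 1000.0 * t[window][np.argmin(difference[window])]
print(f"MMN amplitude {mmn_amplitude:.2f} uV at {mmn_latency_ms:.0f} ms")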
USSR and Eastern Europe Scientific Abstracts, Geophysics, Astronomy and Space, Number 398
1977-05-25
Abstracted topics include: determining a ship's speed; compensation of the cross-coupling effect in marine gravimetry; the Korteweg-de Vries equation for internal waves; increased precipitation providing favorable conditions for vegetation; radioacoustic sounding of the atmosphere (Moscow, IZVESTIYA AKADEMII ...); and a survey employing a complex of geophysical and geological methods: seismic profiling, gravimetry, magnetometry, depth sounding, and dredging.
The sensorimotor and social sides of the architecture of speech.
Pezzulo, Giovanni; Barca, Laura; D'Ausilio, Alessandro
2014-12-01
Speech is a complex skill to master. In addition to sophisticated phono-articulatory abilities, speech acquisition requires neuronal systems configured for vocal learning, with adaptable sensorimotor maps that couple heard speech sounds with motor programs for speech production; imitation and self-imitation mechanisms that can train the sensorimotor maps to reproduce heard speech sounds; and a "pedagogical" learning environment that supports tutor learning.
The Relationship between Auditory Temporal Processing, Phonemic Awareness, and Reading Disability.
ERIC Educational Resources Information Center
Bretherton, Lesley; Holmes, V. M.
2003-01-01
Investigated the relationship between auditory temporal processing of nonspeech sounds and phonological awareness ability in 8- to 12-year-olds with a reading disability, placed in groups based on performance on Tallal's tone-order judgment task. Found that a tone-order deficit did not relate to performance on order processing of speech sounds, to…