Sample records for sound coding strategy

  1. Efficient coding of spectrotemporal binaural sounds leads to emergence of the auditory space representation

    PubMed Central

    Młynarski, Wiktor

    2014-01-01

    To date a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains the formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. Firstly, it is demonstrated that a linear efficient coding transform, Independent Component Analysis (ICA), trained on spectrograms of naturalistic simulated binaural sounds extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing the coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment. PMID:24639644
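
    As a concrete illustration of the first stage described above, the sketch below trains a linear ICA on log-magnitude spectrogram frames of a toy binaural signal. It is a minimal approximation under stated assumptions (scikit-learn's FastICA, an STFT front end, white-noise input with a fixed interaural delay), not the author's implementation.

        # Hedged sketch: ICA on binaural spectrogram features (illustrative, not the paper's code).
        import numpy as np
        from scipy.signal import stft
        from sklearn.decomposition import FastICA

        def binaural_spectrogram_features(left, right, fs=16000, nperseg=256):
            """Stack log-magnitude spectrograms of the two ears into one feature vector per frame."""
            _, _, L = stft(left, fs=fs, nperseg=nperseg)
            _, _, R = stft(right, fs=fs, nperseg=nperseg)
            feats = np.vstack([np.log1p(np.abs(L)), np.log1p(np.abs(R))])  # (2*freq, time)
            return feats.T  # one row per time frame

        # Toy binaural signal: the right channel is a delayed, attenuated copy of the left.
        rng = np.random.default_rng(0)
        fs = 16000
        left = rng.standard_normal(fs * 5)
        right = 0.7 * np.roll(left, 8)

        X = binaural_spectrogram_features(left, right, fs=fs)
        ica = FastICA(n_components=20, random_state=0, max_iter=1000)
        S = ica.fit_transform(X)      # independent components per frame
        basis = ica.mixing_           # learned spectrotemporal features
        print(S.shape, basis.shape)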

  2. Evolutionary trends in directional hearing

    PubMed Central

    Carr, Catherine E.; Christensen-Dalsgaard, Jakob

    2016-01-01

    Tympanic hearing is a true evolutionary novelty that arose in parallel within early tetrapods. We propose that in these tetrapods, selection for sound localization in air acted upon pre-existing directionally sensitive brainstem circuits, similar to those in fishes. Auditory circuits in birds and lizards resemble this ancestral, directionally sensitive framework. Despite this anatomical similarity, coding of sound source location differs between birds and lizards. In birds, brainstem circuits compute sound location from interaural cues. Lizards, however, have coupled ears, and do not need to compute source location in the brain. Thus their neural processing of sound direction differs, although all show mechanisms for enhancing sound source directionality. Comparisons with mammals reveal similarly complex interactions between coding strategies and evolutionary history. PMID:27448850

  3. Proportional spike-timing precision and firing reliability underlie efficient temporal processing of periodicity and envelope shape cues

    PubMed Central

    Zheng, Y.

    2013-01-01

    Temporal sound cues are essential for sound recognition, pitch, rhythm, and timbre perception, yet how auditory neurons encode such cues is the subject of ongoing debate. Rate coding theories propose that temporal sound features are represented by rate-tuned modulation filters. However, overwhelming evidence also suggests that precise spike timing is an essential attribute of the neural code. Here we demonstrate that single neurons in the auditory midbrain employ a proportional code in which spike-timing precision and firing reliability covary with the sound envelope cues to provide an efficient representation of the stimulus. Spike-timing precision varied systematically with the timescale and shape of the sound envelope and yet was largely independent of the sound modulation frequency, a prominent cue for pitch. In contrast, spike-count reliability was strongly affected by the modulation frequency. Spike-timing precision extended from sub-millisecond values for brief transient sounds up to tens of milliseconds for sounds with slowly varying envelopes. Information theoretic analysis further confirms that spike-timing precision depends strongly on the sound envelope shape, while firing reliability was strongly affected by the sound modulation frequency. Both the information efficiency and total information were limited by the firing reliability and spike-timing precision in a manner that reflected the sound structure. This result supports a temporal coding strategy in the auditory midbrain where proportional changes in spike-timing precision and firing reliability can efficiently signal shape and periodicity temporal cues. PMID:23636724
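
    The two quantities contrasted in this abstract can be summarized with very simple trial statistics. The sketch below assumes illustrative definitions (first-spike jitter for timing precision, inverse coefficient of variation of the spike count for reliability); the paper's actual information-theoretic analysis is more involved.

        # Hedged sketch (assumed definitions, not the authors' analysis): trial-to-trial
        # spike-timing precision and firing reliability from spike-time lists.
        import numpy as np

        def timing_precision(trial_spike_times):
            """SD of the first spike time across trials, in ms (lower = more precise)."""
            firsts = [t[0] for t in trial_spike_times if len(t) > 0]
            return float(np.std(firsts)) if len(firsts) > 1 else np.nan

        def count_reliability(trial_spike_times):
            """Inverse coefficient of variation of the spike count across trials."""
            counts = np.array([len(t) for t in trial_spike_times], dtype=float)
            return counts.mean() / counts.std() if counts.std() > 0 else np.inf

        # Toy data: 20 trials, spike times in ms.
        rng = np.random.default_rng(1)
        trials = [np.sort(10.0 + rng.normal(0, 0.5, size=rng.poisson(5))) for _ in range(20)]
        print(timing_precision(trials), count_reliability(trials))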

  4. The opponent channel population code of sound location is an efficient representation of natural binaural sounds.

    PubMed

    Młynarski, Wiktor

    2015-05-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for the maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.
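
    For intuition about the opponent read-out mentioned above, the sketch below decodes azimuth from the difference of two broadly tuned hemifield channels. The sigmoidal tuning and the slope parameter are illustrative assumptions, not the paper's learned second-layer units.

        # Hedged sketch of an opponent-channel read-out: two hemifield-tuned channels,
        # azimuth decoded from their difference (illustrative tuning, not the model itself).
        import numpy as np

        def channel_response(azimuth_deg, preferred_side, slope=0.05):
            """Sigmoidal hemifield tuning, steepest at the interaural midline (0 deg)."""
            s = 1.0 if preferred_side == "right" else -1.0
            return 1.0 / (1.0 + np.exp(-slope * s * azimuth_deg))

        def decode_azimuth(right_act, left_act, slope=0.05):
            """Invert the channel difference back to azimuth (sigmoid(x) - sigmoid(-x) = tanh(x/2))."""
            d = np.clip(right_act - left_act, -0.999, 0.999)
            return np.arctanh(d) * 2.0 / slope

        az = np.linspace(-90, 90, 7)
        r = channel_response(az, "right")
        l = channel_response(az, "left")
        print(np.round(decode_azimuth(r, l), 1))  # approximately recovers az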

  5. The Opponent Channel Population Code of Sound Location Is an Efficient Representation of Natural Binaural Sounds

    PubMed Central

    Młynarski, Wiktor

    2015-01-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for the maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373

  6. Different Timescales for the Neural Coding of Consonant and Vowel Sounds

    PubMed Central

    Perez, Claudia A.; Engineer, Crystal T.; Jakkamsetti, Vikram; Carraway, Ryan S.; Perry, Matthew S.

    2013-01-01

    Psychophysical, clinical, and imaging evidence suggests that consonant and vowel sounds have distinct neural representations. This study tests the hypothesis that consonant and vowel sounds are represented on different timescales within the same population of neurons by comparing behavioral discrimination with neural discrimination based on activity recorded in rat inferior colliculus and primary auditory cortex. Performance on 9 vowel discrimination tasks was highly correlated with neural discrimination based on spike count and was not correlated when spike timing was preserved. In contrast, performance on 11 consonant discrimination tasks was highly correlated with neural discrimination when spike timing was preserved and not when spike timing was eliminated. These results suggest that in the early stages of auditory processing, spike count encodes vowel sounds and spike timing encodes consonant sounds. These distinct coding strategies likely contribute to the robust nature of speech sound representations and may help explain some aspects of developmental and acquired speech processing disorders. PMID:22426334
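
    A minimal way to contrast the two codes discussed above is to classify single-trial responses with and without timing information. The sketch below uses a leave-one-out nearest-neighbour classifier on toy spike trains; the features and data are illustrative assumptions, not the study's recordings or analysis.

        # Hedged sketch: compare discrimination from spike count alone versus binned spike timing.
        import numpy as np

        def timing_vector(spikes_ms, window_ms=40, bin_ms=1):
            """PSTH-style vector that preserves spike timing at bin_ms resolution."""
            edges = np.arange(0, window_ms + bin_ms, bin_ms)
            hist, _ = np.histogram(spikes_ms, bins=edges)
            return hist.astype(float)

        def nn_accuracy(features, labels):
            """Leave-one-out nearest-neighbour classification accuracy."""
            F, y = np.asarray(features, dtype=float), np.asarray(labels)
            correct = 0
            for i in range(len(F)):
                d = np.linalg.norm(F - F[i], axis=1)
                d[i] = np.inf
                correct += y[np.argmin(d)] == y[i]
            return correct / len(F)

        # Toy data: two "consonant-like" responses differing only in spike timing, equal counts.
        rng = np.random.default_rng(2)
        trials, labels = [], []
        for label, latency in [(0, 5.0), (1, 15.0)]:
            for _ in range(30):
                trials.append(np.sort(latency + rng.normal(0, 1.0, size=5)))
                labels.append(label)

        count_feats = [[len(t)] for t in trials]           # spike-count code (uninformative here)
        timing_feats = [timing_vector(t) for t in trials]  # spike-timing code
        print("count:", nn_accuracy(count_feats, labels), "timing:", nn_accuracy(timing_feats, labels))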

  7. The natural history of sound localization in mammals--a story of neuronal inhibition.

    PubMed

    Grothe, Benedikt; Pecka, Michael

    2014-01-01

    Our concepts of sound localization in the vertebrate brain are widely based on the general assumption that both the ability to detect air-borne sounds and the neuronal processing are homologous in archosaurs (present day crocodiles and birds) and mammals. Yet studies repeatedly report conflicting results on the neuronal circuits and mechanisms, in particular the role of inhibition, as well as the coding strategies between avian and mammalian model systems. Here we argue that mammalian and avian phylogeny of spatial hearing is characterized by a convergent evolution of hearing air-borne sounds rather than by homology. In particular, the different evolutionary origins of tympanic ears and the different availability of binaural cues in early mammals and archosaurs imposed distinct constraints on the respective binaural processing mechanisms. The role of synaptic inhibition in generating binaural spatial sensitivity in mammals is highlighted, as it reveals a unifying principle of mammalian circuit design for encoding sound position. Together, we combine evolutionary, anatomical and physiological arguments for making a clear distinction between mammalian processing mechanisms and coding strategies and those of archosaurs. We emphasize that a consideration of the convergent nature of neuronal mechanisms will significantly increase the explanatory power of studies of spatial processing in both mammals and birds.

  8. The natural history of sound localization in mammals – a story of neuronal inhibition

    PubMed Central

    Grothe, Benedikt; Pecka, Michael

    2014-01-01

    Our concepts of sound localization in the vertebrate brain are widely based on the general assumption that both the ability to detect air-borne sounds and the neuronal processing are homologous in archosaurs (present day crocodiles and birds) and mammals. Yet studies repeatedly report conflicting results on the neuronal circuits and mechanisms, in particular the role of inhibition, as well as the coding strategies between avian and mammalian model systems. Here we argue that mammalian and avian phylogeny of spatial hearing is characterized by a convergent evolution of hearing air-borne sounds rather than by homology. In particular, the different evolutionary origins of tympanic ears and the different availability of binaural cues in early mammals and archosaurs imposed distinct constraints on the respective binaural processing mechanisms. The role of synaptic inhibition in generating binaural spatial sensitivity in mammals is highlighted, as it reveals a unifying principle of mammalian circuit design for encoding sound position. Together, we combine evolutionary, anatomical and physiological arguments for making a clear distinction between mammalian processing mechanisms and coding strategies and those of archosaurs. We emphasize that a consideration of the convergent nature of neuronal mechanisms will significantly increase the explanatory power of studies of spatial processing in both mammals and birds. PMID:25324726

  9. Coding Strategies and Implementations of Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Han

    This dissertation studies the coding strategies of computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degrading temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining higher temporal resolution. The experimental results show that appropriate coding strategies can improve sensing capacity by hundreds of times. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information in a noisy environment. Using engineering efforts to accomplish the same task usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate the abilities of sound localization and selective attention. This research investigates and optimizes the sensing capacity and the spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows localizing multiple speakers in both stationary and dynamic auditory scenes and distinguishing mixed conversations from independent sources with a high audio recognition rate.
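
    The compressive-sensing recipe underlying this work, random coded measurements followed by sparse reconstruction, can be demonstrated generically. The sketch below uses a Gaussian measurement matrix and a basic ISTA solver; all sizes and the sparsity level are illustrative assumptions, not the dissertation's hardware systems.

        # Hedged sketch of generic compressive sensing: coded measurements + sparse recovery.
        import numpy as np

        def ista(A, y, lam=0.05, n_iter=500):
            """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
            L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                g = x - (A.T @ (A @ x - y)) / L      # gradient step
                x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
            return x

        rng = np.random.default_rng(3)
        n, m, k = 256, 64, 8                          # signal length, measurements, sparsity
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        A = rng.standard_normal((m, n)) / np.sqrt(m)  # random "coding" / measurement matrix
        y = A @ x_true                                # compressive measurements
        x_hat = ista(A, y)
        print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))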

  10. How do auditory cortex neurons represent communication sounds?

    PubMed

    Gaucher, Quentin; Huetz, Chloé; Gourévitch, Boris; Laudanski, Jonathan; Occelli, Florian; Edeline, Jean-Marc

    2013-11-01

    A major goal in auditory neuroscience is to characterize how communication sounds are represented at the cortical level. The present review aims at investigating the role of auditory cortex in the processing of speech, bird songs and other vocalizations, which all are spectrally and temporally highly structured sounds. Whereas earlier studies have simply looked for neurons exhibiting higher firing rates to particular conspecific vocalizations over their modified, artificially synthesized versions, more recent studies determined the coding capacity of temporal spike patterns, which are prominent in primary and non-primary areas (and also in non-auditory cortical areas). In several cases, this information seems to be correlated with the behavioral performance of human or animal subjects, suggesting that spike-timing based coding strategies might set the foundations of our perceptive abilities. Also, it is now clear that the responses of auditory cortex neurons are highly nonlinear and that their responses to natural stimuli cannot be predicted from their responses to artificial stimuli such as moving ripples and broadband noises. Since auditory cortex neurons cannot follow rapid fluctuations of the vocalization envelope, they only respond at specific time points during communication sounds, which can serve as temporal markers for integrating the temporal and spectral processing taking place at subcortical relays. Thus, the temporal sparse code of auditory cortex neurons can be considered a first step toward generating high-level representations of communication sounds independent of the acoustic characteristics of these sounds. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Intensity-invariant coding in the auditory system.

    PubMed

    Barbour, Dennis L

    2011-11-01

    The auditory system faithfully represents sufficient details from sound sources such that downstream cognitive processes are capable of acting upon this information effectively even in the face of signal uncertainty, degradation or interference. This robust sound source representation leads to an invariance in perception vital for animals to interact effectively with their environment. Due to unique nonlinearities in the cochlea, sound representations early in the auditory system exhibit a large amount of variability as a function of stimulus intensity. In other words, changes in stimulus intensity, such as for sound sources at differing distances, create a unique challenge for the auditory system to encode sounds invariantly across the intensity dimension. This challenge and some strategies available to sensory systems to eliminate intensity as an encoding variable are discussed, with a special emphasis upon sound encoding. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Auditory spatial processing in the human cortex.

    PubMed

    Salminen, Nelli H; Tiitinen, Hannu; May, Patrick J C

    2012-12-01

    The auditory system codes spatial locations in a way that deviates from the spatial representations found in other modalities. This difference is especially striking in the cortex, where neurons form topographical maps of visual and tactile space but where auditory space is represented through a population rate code. In this hemifield code, sound source location is represented in the activity of two widely tuned opponent populations, one tuned to the right and the other to the left side of auditory space. Scientists are only beginning to uncover how this coding strategy adapts to various spatial processing demands. This review presents the current understanding of auditory spatial processing in the cortex. To this end, the authors consider how various implementations of the hemifield code may exist within the auditory cortex and how these may be modulated by the stimulation and task context. As a result, a coherent set of neural strategies for auditory spatial processing emerges.

  13. Auditory Speech Perception Tests in Relation to the Coding Strategy in Cochlear Implant.

    PubMed

    Bazon, Aline Cristine; Mantello, Erika Barioni; Gonçales, Alina Sanches; Isaac, Myriam de Lima; Hyppolito, Miguel Angelo; Reis, Ana Cláudia Mirândola Barbosa

    2016-07-01

    The objective of the evaluation of auditory perception of cochlear implant users is to determine how the acoustic signal is processed, leading to the recognition and understanding of sound. This study aimed to investigate the differences in the process of auditory speech perception in individuals with postlingual hearing loss wearing a cochlear implant, using two different speech coding strategies, and to analyze speech perception and handicap perception in relation to the strategy used. This is a prospective, cross-sectional, descriptive cohort study. We selected ten cochlear implant users, who were characterized by hearing threshold, by the application of speech perception tests, and by the Hearing Handicap Inventory for Adults. There was no significant difference when comparing the variables subject age, age at acquisition of hearing loss, etiology, time of hearing deprivation, time of cochlear implant use, and mean hearing threshold with the cochlear implant with the shift in speech coding strategy. There was no relationship between lack of handicap perception and improvement in speech perception with either speech coding strategy used. There was no significant difference between the strategies evaluated, and no relation was observed between them and the variables studied.

  14. An initial study of voice characteristics of children using two different sound coding strategies in comparison to normal hearing children.

    PubMed

    Coelho, Ana Cristina; Brasolotto, Alcione Ghedini; Bevilacqua, Maria Cecília

    2015-06-01

    To compare some perceptual and acoustic characteristics of the voices of children who use the advanced combination encoder (ACE) or fine structure processing (FSP) speech coding strategies, and to investigate whether these characteristics differ from those of children with normal hearing. Acoustic analysis of the sustained vowel /a/ was performed using the Multi-Dimensional Voice Program (MDVP). Analyses of sequential and spontaneous speech were performed using the Real Time Pitch software. Perceptual analyses of these samples were performed using visual-analogue scales of pre-selected parameters. Seventy-six children from three years to five years and 11 months of age participated. Twenty-eight were users of ACE, 23 were users of FSP, and 25 were children with normal hearing. Although both groups with cochlear implants presented with some deviant vocal features, the users of ACE presented with voice quality more like that of children with normal hearing than did the users of FSP. The sound processing of ACE appeared to provide better conditions for auditory monitoring of the voice and, consequently, for better control of voice production. However, these findings need to be further investigated due to the lack of published comparative studies clarifying exactly which attributes of sound processing are responsible for the differences in performance.

  15. The development of the Nucleus Freedom Cochlear implant system.

    PubMed

    Patrick, James F; Busby, Peter A; Gibson, Peter J

    2006-12-01

    Cochlear Limited (Cochlear) released the fourth-generation cochlear implant system, Nucleus Freedom, in 2005. Freedom is based on 25 years of experience in cochlear implant research and development and incorporates advances in medicine, implantable materials, electronic technology, and sound coding. This article presents the development of Cochlear's implant systems, with an overview of the first 3 generations, and details of the Freedom system: the CI24RE receiver-stimulator, the Contour Advance electrode, the modular Freedom processor, the available speech coding strategies, the input processing options of Smart Sound to improve the signal before coding as electrical signals, and the programming software. Preliminary results from multicenter studies with the Freedom system are reported, demonstrating better levels of performance compared with the previous systems. The final section presents the most recent implant reliability data, with the early findings at 18 months showing improved reliability of the Freedom implant compared with the earlier Nucleus 3 System. Also reported are some of the findings of Cochlear's collaborative research programs to improve recipient outcomes. Included are studies showing the benefits from bilateral implants, electroacoustic stimulation using an ipsilateral and/or contralateral hearing aid, advanced speech coding, and streamlined speech processor programming.

  16. Striving for Optimum Noise-Decreasing Strategies in Critical Care: Initial Measurements and Observations.

    PubMed

    Disher, Timothy C; Benoit, Britney; Inglis, Darlene; Burgess, Stacy A; Ellsmere, Barbara; Hewitt, Brenda E; Bishop, Tanya M; Sheppard, Christopher L; Jangaard, Krista A; Morrison, Gavin C; Campbell-Yeo, Marsha L

    To identify baseline sound levels, patterns of sound levels, and potential barriers and facilitators to sound level reduction. The study setting was the neonatal and pediatric intensive care units of a tertiary care hospital. Participants were staff in both units and parents of currently hospitalized children or infants. One 24-hour sound measurement and one 4-hour sound measurement linked to observed sound events were conducted in each area of the center's neonatal intensive care unit. Two of each measurement type were conducted in the pediatric intensive care unit. Focus groups were conducted with parents and staff. Transcripts were analyzed with descriptive content analysis, and themes were compared against results from the quantitative measurements. Sound levels exceeded recommended standards at nearly every time point. The most common code was related to talking. Themes from the focus groups included the critical care context and sound levels, effects of sound levels, and reducing sound levels: the way forward. Results are consistent with work conducted in other critical care environments. Staff and families realize that high sound levels can be a problem, but feel that the culture and context are not supportive of a quiet care space. High levels of ambient sound suggest that the largest changes in sound levels are likely to come from design and equipment purchase decisions. L10 and Lmax appear to be the best outcomes for measurement of behavioral interventions.
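
    The L10 and Lmax outcomes recommended above are simple summary statistics of a sampled level time series. A minimal sketch, assuming A-weighted decibel samples and the usual percentile definition of L10, is shown below.

        # Hedged sketch (assumed definitions): Lmax and L10 from sampled sound levels.
        import numpy as np

        def lmax(levels_dba):
            """Maximum sampled level over the measurement period."""
            return float(np.max(levels_dba))

        def l10(levels_dba):
            """Level exceeded 10% of the time (90th percentile of the samples)."""
            return float(np.percentile(levels_dba, 90))

        levels = np.array([52.1, 55.3, 61.0, 58.4, 49.7, 72.5, 56.2])  # example dBA samples
        print(lmax(levels), l10(levels))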

  17. The NASA-LeRC wind turbine sound prediction code

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1981-01-01

    Development of the wind turbine sound prediction code began as part of an effort to understand and reduce the noise generated by Mod-1. Tone sound levels predicted with this code are in good agreement with measured data taken in the vicinity of the Mod-1 wind turbine (less than 2 rotor diameters). Comparison in the far field indicates that propagation effects due to terrain and atmospheric conditions may amplify the actual sound levels by 6 dB. Parametric analysis using the code shows that the predominant contributors to Mod-1 rotor noise are (1) the velocity deficit in the wake of the support tower, (2) the high rotor speed, and (3) off-optimum operation.

  18. A neurally inspired musical instrument classification system based upon the sound onset.

    PubMed

    Newton, Michael J; Smith, Leslie S

    2012-06-01

    Physiological evidence suggests that sound onset detection in the auditory system may be performed by specialized neurons as early as the cochlear nucleus. Psychoacoustic evidence shows that the sound onset can be important for the recognition of musical sounds. Here the sound onset is used in isolation to form tone descriptors for a musical instrument classification task. The task involves 2085 isolated musical tones from the McGill dataset across five instrument categories. A neurally inspired tone descriptor is created using a model of the auditory system's response to sound onset. A gammatone filterbank and spiking onset detectors, built from dynamic synapses and leaky integrate-and-fire neurons, create parallel spike trains that emphasize the sound onset. These are coded as a descriptor called the onset fingerprint. Classification uses a time-domain neural network, the echo state network. Reference strategies, based upon mel-frequency cepstral coefficients, evaluated either over the whole tone or only during the sound onset, provide context to the method. Classification success rates for the neurally-inspired method are around 75%. The cepstral methods perform between 73% and 76%. Further testing with tones from the Iowa MIS collection shows that the neurally inspired method is considerably more robust when tested with data from an unrelated dataset.
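
    The onset-emphasis idea can be approximated without a full spiking model. The sketch below substitutes a Butterworth band-pass filterbank and half-wave-rectified envelope derivatives for the authors' gammatone and leaky integrate-and-fire front end; the band edges and the 50 ms onset window are illustrative assumptions.

        # Hedged sketch of an onset-emphasis descriptor (simplified stand-in for the paper's model).
        import numpy as np
        from scipy.signal import butter, sosfilt, hilbert

        def onset_fingerprint(x, fs, bands=((100, 300), (300, 900), (900, 2700), (2700, 7000))):
            feats = []
            for lo, hi in bands:
                sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
                env = np.abs(hilbert(sosfilt(sos, x)))                   # band envelope
                onset = np.maximum(np.diff(env, prepend=env[0]), 0.0)    # rising energy only
                feats.append(onset[: int(0.05 * fs)].sum())              # energy rise in first 50 ms
            return np.array(feats)

        fs = 16000
        t = np.arange(int(0.2 * fs)) / fs
        tone = np.sin(2 * np.pi * 440 * t) * np.minimum(t / 0.01, 1.0)   # tone with a 10 ms attack
        print(onset_fingerprint(tone, fs))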

  19. Coding of sounds in the auditory system and its relevance to signal processing and coding in cochlear implants.

    PubMed

    Moore, Brian C J

    2003-03-01

    To review how the properties of sounds are "coded" in the normal auditory system and to discuss the extent to which cochlear implants can and do represent these codes. Data are taken from published studies of the response of the cochlea and auditory nerve to simple and complex stimuli, in both the normal and the electrically stimulated ear. REVIEW CONTENT: The review describes: 1) the coding in the normal auditory system of overall level (which partly determines perceived loudness), spectral shape (which partly determines perceived timbre and the identity of speech sounds), periodicity (which partly determines pitch), and sound location; 2) the role of the active mechanism in the cochlea, and particularly the fast-acting compression associated with that mechanism; 3) the neural response patterns evoked by cochlear implants; and 4) how the response patterns evoked by implants differ from those observed in the normal auditory system in response to sound. A series of specific issues is then discussed, including: 1) how to compensate for the loss of cochlear compression; 2) the effective number of independent channels in a normal ear and in cochlear implantees; 3) the importance of independence of responses across neurons; 4) the stochastic nature of normal neural responses; 5) the possible role of across-channel coincidence detection; and 6) potential benefits of binaural implantation. Current cochlear implants do not adequately reproduce several aspects of the neural coding of sound in the normal auditory system. Improved electrode arrays and coding systems may lead to improved coding and, it is hoped, to better performance.

  20. A Simulation Testbed for Adaptive Modulation and Coding in Airborne Telemetry

    DTIC Science & Technology

    2014-05-29

    ... its modulation waveforms and LDPC for the FEC codes. It also uses several sets of published telemetry channel sounding data as its channel models. Within the context ... low-density parity-check (LDPC) codes with tunable code rates, and both static and dynamic telemetry channel models are included. In an effort to maximize the ...

  1. The NASA-LeRC wind turbine sound prediction code

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1981-01-01

    Since regular operation of the DOE/NASA MOD-1 wind turbine began in October 1979, about 10 nearby households have complained of noise from the machine. Development of the NASA-LeRC wind turbine sound prediction code began in May 1980 as part of an effort to understand and reduce the noise generated by MOD-1. Tone sound levels predicted with this code are in generally good agreement with measured data taken in the vicinity of the MOD-1 wind turbine (less than 2 rotor diameters). Comparison in the far field indicates that propagation effects due to terrain and atmospheric conditions may be amplifying the actual sound levels by about 6 dB. Parametric analysis using the code has shown that the predominant contributors to MOD-1 rotor noise are: (1) the velocity deficit in the wake of the support tower; (2) the high rotor speed; and (3) off-optimum operation.

  2. Intelligibility in speech maskers with a binaural cochlear implant sound coding strategy inspired by the contralateral medial olivocochlear reflex.

    PubMed

    Lopez-Poveda, Enrique A; Eustaquio-Martín, Almudena; Stohl, Joshua S; Wolford, Robert D; Schatzer, Reinhold; Gorospe, José M; Ruiz, Santiago Santa Cruz; Benito, Fernando; Wilson, Blake S

    2017-05-01

    We have recently proposed a binaural cochlear implant (CI) sound processing strategy inspired by the contralateral medial olivocochlear reflex (the MOC strategy) and shown that it improves intelligibility in steady-state noise (Lopez-Poveda et al., 2016, Ear Hear 37:e138-e148). The aim here was to evaluate possible speech-reception benefits of the MOC strategy for speech maskers, a more natural type of interferer. Speech reception thresholds (SRTs) were measured in six bilateral and two single-sided deaf CI users with the MOC strategy and with a standard (STD) strategy. SRTs were measured in unilateral and bilateral listening conditions, and for target and masker stimuli located at azimuthal angles of (0°, 0°), (-15°, +15°), and (-90°, +90°). Mean SRTs were 2-5 dB better with the MOC than with the STD strategy for spatially separated target and masker sources. For bilateral CI users, the MOC strategy (1) facilitated the intelligibility of speech in competition with spatially separated speech maskers in both unilateral and bilateral listening conditions; and (2) led to an overall improvement in spatial release from masking in the two listening conditions. Insofar as speech is a more natural type of interferer than steady-state noise, the present results suggest that the MOC strategy holds potential for promising outcomes for CI users. Copyright © 2017. Published by Elsevier B.V.

  3. Zipf's Law in Short-Time Timbral Codings of Speech, Music, and Environmental Sound Signals

    PubMed Central

    Haro, Martín; Serrà, Joan; Herrera, Perfecto; Corral, Álvaro

    2012-01-01

    Timbre is a key perceptual feature that allows discrimination between different sounds. Timbral sensations are highly dependent on the temporal evolution of the power spectrum of an audio signal. In order to quantitatively characterize such sensations, the shape of the power spectrum has to be encoded in a way that preserves certain physical and perceptual properties. Therefore, it is common practice to encode short-time power spectra using psychoacoustical frequency scales. In this paper, we study and characterize the statistical properties of such encodings, here called timbral code-words. In particular, we report on rank-frequency distributions of timbral code-words extracted from 740 hours of audio coming from disparate sources such as speech, music, and environmental sounds. Analogously to text corpora, we find a heavy-tailed Zipfian distribution with exponent close to one. Importantly, this distribution is found independently of different encoding decisions and regardless of the audio source. Further analysis of the intrinsic characteristics of the most and least frequent code-words reveals that the most frequent code-words tend to have a more homogeneous structure. We also find that speech and music databases have specific, distinctive code-words while, in the case of the environmental sounds, these database-specific code-words are not present. Finally, we find that a Yule-Simon process with memory provides a reasonable quantitative approximation for our data, suggesting the existence of a common simple generative mechanism for all considered sound sources. PMID:22479497
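
    The rank-frequency analysis described above reduces to counting code-word occurrences and fitting a line in log-log coordinates. The sketch below does this for a toy power-law corpus; the vocabulary size and the fitting method are illustrative assumptions, not the paper's corpus or estimator.

        # Hedged sketch: rank-frequency distribution of discrete "code-words" and a crude Zipf fit.
        import numpy as np
        from collections import Counter

        def zipf_exponent(tokens):
            counts = np.array(sorted(Counter(tokens).values(), reverse=True), dtype=float)
            ranks = np.arange(1, len(counts) + 1)
            slope, _ = np.polyfit(np.log(ranks), np.log(counts), 1)  # log-log linear fit
            return -slope  # an exponent close to 1 indicates Zipf-like behaviour

        # Toy corpus drawn from a power-law distribution over 500 code-words.
        rng = np.random.default_rng(4)
        vocab = np.arange(500)
        p = 1.0 / (vocab + 1.0)
        tokens = rng.choice(vocab, size=50000, p=p / p.sum())
        print(round(zipf_exponent(tokens), 2))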

  4. Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations

    PubMed Central

    Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia

    2016-01-01

    Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning (“opponent channel model”). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC. PMID:26545618

  5. Clinical evaluation of cochlear implant sound coding taking into account conjectural masking functions, MP3000™.

    PubMed

    Buechner, Andreas; Beynon, Andy; Szyfter, Witold; Niemczyk, Kazimierz; Hoppe, Ulrich; Hey, Matthias; Brokx, Jan; Eyles, Julie; Van de Heyning, Paul; Paludetti, Gaetano; Zarowski, Andrzej; Quaranta, Nicola; Wesarg, Thomas; Festen, Joost; Olze, Heidi; Dhooge, Ingeborg; Müller-Deile, Joachim; Ramos, Angel; Roman, Stephane; Piron, Jean-Pierre; Cuda, Domenico; Burdo, Sandro; Grolman, Wilko; Vaillard, Samantha Roux; Huarte, Alicia; Frachet, Bruno; Morera, Constantine; Garcia-Ibáñez, Luis; Abels, Daniel; Walger, Martin; Müller-Mazotta, Jochen; Leone, Carlo Antonio; Meyer, Bernard; Dillier, Norbert; Steffens, Thomas; Gentine, André; Mazzoli, Manuela; Rypkema, Gerben; Killian, Matthijs; Smoorenburg, Guido

    2011-11-01

    Efficacy of the SPEAK and ACE coding strategies was compared with that of a new strategy, MP3000™, by 37 European implant centers including 221 subjects. The SPEAK and ACE strategies are based on selection of 8-10 spectral components with the highest levels, while MP3000 is based on the selection of only 4-6 components, with the highest levels relative to an estimate of the spread of masking. The pulse rate per component was fixed. No significant difference was found for the speech scores and for coding preference between the SPEAK/ACE and MP3000 strategies. Battery life was 24% longer for the MP3000 strategy. With MP3000 the best results were found for a selection of six components. In addition, the best results were found for a masking function with a low-frequency slope of 50 dB/Bark and a high-frequency slope of 37 dB/Bark (50/37) as compared to the other combinations examined of 40/30 and 20/15 dB/Bark. The best results found for the steepest slopes do not seem to agree with current estimates of the spread of masking in electrical stimulation. Future research might reveal if performance with respect to SPEAK/ACE can be enhanced by increasing the number of channels in MP3000 beyond 4-6 and it should shed more light on the optimum steepness of the slopes of the masking functions applied in MP3000.
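
    The selection principle described above, picking the few channels whose levels stand out most against a masking spread specified in dB/Bark, can be sketched as a greedy n-of-m rule. The channel levels, Bark positions, and 0 dB threshold floor below are illustrative assumptions, not the clinical MP3000 implementation.

        # Hedged sketch of masking-based n-of-m channel selection (illustrative parameters).
        import numpy as np

        def select_channels(levels_db, bark, n_select=6, lo_slope=50.0, hi_slope=37.0):
            """Greedily pick channels whose level most exceeds the masking spread of those
            already selected (absolute threshold assumed at 0 dB)."""
            selected = []
            masked = np.zeros_like(levels_db)        # running masking threshold per channel
            for _ in range(n_select):
                excess = levels_db - masked
                if selected:
                    excess[selected] = -np.inf       # never pick a channel twice
                ch = int(np.argmax(excess))
                selected.append(ch)
                dist = bark - bark[ch]               # distance from the new masker (Bark)
                slope = np.where(dist < 0, lo_slope, hi_slope)
                masked = np.maximum(masked, levels_db[ch] - slope * np.abs(dist))
            return sorted(selected)

        levels = np.array([40, 62, 58, 45, 70, 66, 50, 41, 55, 48], dtype=float)  # channel levels (dB)
        bark = np.linspace(3, 20, len(levels))                                    # channel places (Bark)
        print(select_channels(levels, bark, n_select=6))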

  6. Effects of irrelevant sounds on phonological coding in reading comprehension and short-term memory.

    PubMed

    Boyle, R; Coltheart, V

    1996-05-01

    The effects of irrelevant sounds on reading comprehension and short-term memory were studied in two experiments. In Experiment 1, adults judged the acceptability of written sentences during irrelevant speech, accompanied and unaccompanied singing, instrumental music, and in silence. Sentences varied in syntactic complexity: simple sentences contained a right-branching relative clause (The applause pleased the woman that gave the speech) and syntactically complex sentences included a centre-embedded relative clause (The hay that the farmer stored fed the hungry animals). Unacceptable sentences either sounded acceptable (The dog chased the cat that eight up all his food) or did not (The man praised the child that sight up his spinach). Decision accuracy was impaired by syntactic complexity but not by irrelevant sounds. Phonological coding was indicated by increased errors on unacceptable sentences that sounded correct. These error rates were unaffected by irrelevant sounds. Experiment 2 examined the effects of irrelevant sounds on ordered recall of phonologically similar and dissimilar word lists. Phonological similarity impaired recall. Irrelevant speech reduced recall but did not interact with phonological similarity. The results of these experiments question assumptions about the relationship between speech input and phonological coding in reading and the short-term store.

  7. Development of an Acoustic Signal Analysis Tool “Auto-F” Based on the Temperament Scale

    NASA Astrophysics Data System (ADS)

    Modegi, Toshio

    The MIDI interface was originally designed for electronic musical instruments, but we consider that this music-note-based coding concept can be extended to general acoustic signal description. We proposed applying MIDI technology to the coding of biomedical auscultation sound signals such as heart sounds for retrieving medical records and performing telemedicine. We have since tried to extend our encoding targets to include vocal sounds, natural sounds, and electronic bio-signals such as ECG, using the Generalized Harmonic Analysis method. Currently, we are trying to separate vocal sounds included in popular songs and encode both the vocal sounds and the background instrumental sounds into separate MIDI channels. We are also trying to extract articulation parameters such as MIDI pitch-bend parameters in order to reproduce natural acoustic sounds using a GM-standard MIDI tone generator. In this paper, we present the overall algorithm of our acoustic signal analysis tool, based on those research works, which can analyze given time-based signals on the musical temperament scale. The prominent feature of this tool is that it produces high-precision MIDI codes, which reproduce signals similar to the given source signal using a GM-standard MIDI tone generator, and also provides the analysis results as text in XML format.
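
    At the core of any temperament-scale coder is the mapping from frequency to a MIDI note number plus a pitch-bend residual. The sketch below shows that standard mapping; the ±2 semitone bend range is an assumption, and the snippet is not the Auto-F tool itself.

        # Hedged sketch: equal-temperament quantization of frequency to MIDI note + pitch bend.
        import math

        def freq_to_midi(freq_hz, bend_range_semitones=2.0):
            """Return (note_number, pitch_bend) for a frequency; 14-bit bend centred at 8192."""
            semitones = 69.0 + 12.0 * math.log2(freq_hz / 440.0)   # MIDI note 69 = A4 = 440 Hz
            note = int(round(semitones))
            residual = semitones - note                            # fractional semitone offset
            bend = int(round(8192 + residual / bend_range_semitones * 8192))
            return note, max(0, min(16383, bend))

        for f in (440.0, 452.0, 261.63):
            print(f, freq_to_midi(f))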

  8. Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations.

    PubMed

    Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia

    2016-01-01

    Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning ("opponent channel model"). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC. © The Author 2015. Published by Oxford University Press.

  9. L-type calcium channels refine the neural population code of sound level

    PubMed Central

    Grimsley, Calum Alex; Green, David Brian

    2016-01-01

    The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1–1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. PMID:27605536

  10. Incorporation of feedback during beat synchronization is an index of neural maturation and reading skills.

    PubMed

    Woodruff Carr, Kali; Fitzroy, Ahren B; Tierney, Adam; White-Schwoch, Travis; Kraus, Nina

    2017-01-01

    Speech communication involves integration and coordination of sensory perception and motor production, requiring precise temporal coupling. Beat synchronization, the coordination of movement with a pacing sound, can be used as an index of this sensorimotor timing. We assessed adolescents' synchronization and capacity to correct asynchronies when given online visual feedback. Variability of synchronization while receiving feedback predicted phonological memory and reading sub-skills, as well as maturation of cortical auditory processing; less variable synchronization during the presence of feedback tracked with maturation of cortical processing of sound onsets and resting gamma activity. We suggest the ability to incorporate feedback during synchronization is an index of intentional, multimodal timing-based integration in the maturing adolescent brain. Precision of temporal coding across modalities is important for speech processing and literacy skills that rely on dynamic interactions with sound. Synchronization employing feedback may prove useful as a remedial strategy for individuals who struggle with timing-based language learning impairments. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Coding strategies for cochlear implants under adverse environments

    NASA Astrophysics Data System (ADS)

    Tahmina, Qudsia

    Cochlear implants are electronic prosthetic devices that restore partial hearing in patients with severe to profound hearing loss. Although most coding strategies have significantly improved the perception of speech in quiet listening conditions, limitations remain on speech perception under adverse environments such as background noise, reverberation, and band-limited channels, and we propose strategies that improve the intelligibility of speech transmitted over telephone networks, reverberated speech, and speech in the presence of background noise. For telephone-processed speech, we propose to examine the effects of adding low-frequency and high-frequency information to the band-limited telephone speech. Four listening conditions were designed to simulate the receiving frequency characteristics of telephone handsets. Results indicated improvement in cochlear implant and bimodal listening when telephone speech was augmented with high-frequency information, and this study therefore provides support for the design of algorithms to extend the bandwidth towards higher frequencies. The results also indicated added benefit from hearing aids for bimodal listeners in all four types of listening conditions. Speech understanding in acoustically reverberant environments is always a difficult task for hearing-impaired listeners. Reverberated sounds consist of direct sound, early reflections, and late reflections. Late reflections are known to be detrimental to speech intelligibility. In this study, we propose a reverberation suppression strategy based on spectral subtraction (SS) to suppress the reverberant energies from late reflections. Results from listening tests for two reverberant conditions (RT60 = 0.3 s and 1.0 s) indicated significant improvement when stimuli were processed with the SS strategy. The proposed strategy operates with little to no prior information on the signal and the room characteristics and can therefore potentially be implemented in real-time CI speech processors. For speech in background noise, we propose a mechanism underlying the contribution of harmonics to the benefit of electroacoustic stimulation in cochlear implants. The proposed strategy is based on harmonic modeling and uses a synthesis-driven approach to synthesize the harmonics in voiced segments of speech. Based on objective measures, results indicated improvement in speech quality. This study warrants further work on the development of algorithms to regenerate harmonics of voiced segments in the presence of noise.
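
    The late-reflection suppression idea is closely related to classic magnitude spectral subtraction. The sketch below shows that generic operation (an STFT, subtraction of an energy estimate, and a spectral floor); the noise estimate from early frames and all parameter values are illustrative assumptions, not the proposed CI strategy itself.

        # Hedged sketch of plain magnitude spectral subtraction (generic stand-in).
        import numpy as np
        from scipy.signal import stft, istft

        def spectral_subtract(x, fs, noise_frames=10, alpha=1.0, floor=0.05):
            f, t, X = stft(x, fs=fs, nperseg=512)
            mag, phase = np.abs(X), np.angle(X)
            noise_est = mag[:, :noise_frames].mean(axis=1, keepdims=True)  # estimate from early frames
            clean = np.maximum(mag - alpha * noise_est, floor * mag)       # subtract, keep a spectral floor
            _, y = istft(clean * np.exp(1j * phase), fs=fs, nperseg=512)
            return y

        fs = 16000
        rng = np.random.default_rng(5)
        speechlike = rng.standard_normal(fs * 2) * 0.1
        print(spectral_subtract(speechlike, fs).shape)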

  12. L-type calcium channels refine the neural population code of sound level.

    PubMed

    Grimsley, Calum Alex; Green, David Brian; Sivaramakrishnan, Shobhana

    2016-12-01

    The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1-1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. Copyright © 2016 the American Physiological Society.

  13. Histochemical changes of occlusal surface enamel of permanent teeth, where dental caries is questionable vs sound enamel surfaces.

    PubMed

    Michalaki, M; Oulis, C J; Pandis, N; Eliades, G

    2016-12-01

    The aim of this in vitro study was to classify occlusal surfaces questionable for caries (QCOS) of permanent teeth according to ICDAS codes 1, 2, and 3 and to compare them in terms of enamel mineral composition with areas of sound tissue of the same tooth. Sixty partially impacted human molars with QCOS, extracted for therapeutic reasons, were used in the study; they were photographed via a polarised light microscope and classified according to ICDAS II (into codes 1, 2, or 3). The crowns were embedded in clear self-cured acrylic resin, longitudinally sectioned at the levels of the characterised lesions, and studied by SEM/EDX to assess the enamel mineral composition of the QCOS. Univariate and multivariate random effect regressions were used for Ca (wt%), P (wt%), and Ca/P (wt%). The EDX analysis indicated changes in the Ca and P contents that were more prominent in ICDAS-II code 3 lesions compared to codes 1 and 2 lesions. In these lesions, Ca (wt%) and P (wt%) concentrations were significantly decreased (p = 0.01) in comparison with sound areas. Ca and P (wt%) contents were significantly lower (p = 0.02 and p = 0.01, respectively) for code 3 areas in comparison with codes 1 and 2 areas. Significantly higher (p = 0.01) Ca (wt%) and P (wt%) contents were found in sound areas compared to the lesion areas. The enamel of occlusal surfaces of permanent teeth with ICDAS 1, 2, and 3 lesions was found to have different Ca/P compositions, necessitating further investigation of whether these altered surfaces might behave differently during etching preparation before fissure sealant placement, compared to sound surfaces.

  14. Constructing Noise-Invariant Representations of Sound in the Auditory Pathway

    PubMed Central

    Rabinowitz, Neil C.; Willmore, Ben D. B.; King, Andrew J.; Schnupp, Jan W. H.

    2013-01-01

    Identifying behaviorally relevant sounds in the presence of background noise is one of the most important and poorly understood challenges faced by the auditory system. An elegant solution to this problem would be for the auditory system to represent sounds in a noise-invariant fashion. Since a major effect of background noise is to alter the statistics of the sounds reaching the ear, noise-invariant representations could be promoted by neurons adapting to stimulus statistics. Here we investigated the extent of neuronal adaptation to the mean and contrast of auditory stimulation as one ascends the auditory pathway. We measured these forms of adaptation by presenting complex synthetic and natural sounds, recording neuronal responses in the inferior colliculus and primary fields of the auditory cortex of anaesthetized ferrets, and comparing these responses with a sophisticated model of the auditory nerve. We find that the strength of both forms of adaptation increases as one ascends the auditory pathway. To investigate whether this adaptation to stimulus statistics contributes to the construction of noise-invariant sound representations, we also presented complex, natural sounds embedded in stationary noise, and used a decoding approach to assess the noise tolerance of the neuronal population code. We find that the code for complex sounds in the periphery is affected more by the addition of noise than the cortical code. We also find that noise tolerance is correlated with adaptation to stimulus statistics, so that populations that show the strongest adaptation to stimulus statistics are also the most noise-tolerant. This suggests that the increase in adaptation to sound statistics from auditory nerve to midbrain to cortex is an important stage in the construction of noise-invariant sound representations in the higher auditory brain. PMID:24265596

  15. Qualities of Single Electrode Stimulation as a Function of Rate and Place of Stimulation with a Cochlear Implant

    PubMed Central

    Landsberger, David M.; Vermeire, Katrien; Claes, Annes; Van Rompaey, Vincent; Van de Heyning, Paul

    2015-01-01

    Objectives Although it has been previously shown that changes in temporal coding produce changes in pitch in all cochlear regions, research has suggested that temporal coding might be best encoded in relatively apical locations. We hypothesized that although temporal coding may provide useable information at any cochlear location, low rates of stimulation might provide better sound quality in apical regions that are more likely to encode temporal information in the normal ear. In the present study, sound qualities of single electrode pulse trains were scaled to provide insight into the combined effects of cochlear location and stimulation rate on sound quality. Design Ten long term users of MED-EL cochlear implants with 31 mm electrode arrays (Standard or FLEXSOFT) were asked to scale the sound quality of single electrode pulse trains in terms of how “Clean”, “Noisy”, “High”, and “Annoying” they sounded. Pulse trains were presented on most electrodes between 1 and 12 representing the entire range of the long electrode array at stimulation rates of 100, 150, 200, 400, or 1500 pulses per second. Results While high rates of stimulation are scaled as having a “Clean” sound quality across the entire array, only the most apical electrodes (typically 1 through 3) were considered “Clean” at low rates. Low rates on electrodes 6 through 12 were not rated as “Clean” while the low rate quality of electrodes 4 and 5 were typically in between. Scaling of “Noisy” responses provided an approximately inverse pattern as “Clean” responses. “High” responses show the trade-off between rate and place of stimulation on pitch. Because “High” responses did not correlate with “Clean” responses, subjects were not rating sound quality based on pitch. Conclusions If explicit temporal coding is to be provided in a cochlear implant, it is likely to sound better when provided apically. Additionally, the finding that low rates sound clean only at apical places of stimulation is consistent with previous findings that a change in rate of stimulation corresponds to an equivalent change in perceived pitch at apical locations. Collectively, the data strongly suggests that temporal coding with a cochlear implant is optimally provided by electrodes placed well into the second cochlear turn. PMID:26583480

  16. Sound sensitivity of neurons in rat hippocampus during performance of a sound-guided task

    PubMed Central

    Vinnik, Ekaterina; Honey, Christian; Schnupp, Jan; Diamond, Mathew E.

    2012-01-01

    To investigate how hippocampal neurons encode sound stimuli, and the conjunction of sound stimuli with the animal's position in space, we recorded from neurons in the CA1 region of hippocampus in rats while they performed a sound discrimination task. Four different sounds were used, two associated with water reward on the right side of the animal and the other two with water reward on the left side. This allowed us to separate neuronal activity related to sound identity from activity related to response direction. To test the effect of spatial context on sound coding, we trained rats to carry out the task on two identical testing platforms at different locations in the same room. Twenty-one percent of the recorded neurons exhibited sensitivity to sound identity, as quantified by the difference in firing rate for the two sounds associated with the same response direction. Sensitivity to sound identity was often observed on only one of the two testing platforms, indicating an effect of spatial context on sensory responses. Forty-three percent of the neurons were sensitive to response direction, and the probability that any one neuron was sensitive to response direction was statistically independent from its sensitivity to sound identity. There was no significant coding for sound identity when the rats heard the same sounds outside the behavioral task. These results suggest that CA1 neurons encode sound stimuli, but only when those sounds are associated with actions. PMID:22219030
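
    The sound-identity measure described above (a firing-rate difference between the two sounds mapped to the same response direction) can be sketched as follows with simulated spike counts; the permutation test is an illustrative add-on, not necessarily the statistics used in the study.

```python
# Illustrative sketch (simulated spike counts, not the CA1 data): quantify
# sensitivity to sound identity as the firing-rate difference between the
# two sounds associated with the same response direction, assessed with a
# simple permutation test.
import numpy as np

rng = np.random.default_rng(1)
rates_sound_A = rng.poisson(lam=6.0, size=60)   # trials of sound A (go-left)
rates_sound_B = rng.poisson(lam=4.5, size=60)   # trials of sound B (go-left)

observed = rates_sound_A.mean() - rates_sound_B.mean()

pooled = np.concatenate([rates_sound_A, rates_sound_B])
n_a = len(rates_sound_A)
perm_diffs = []
for _ in range(5000):
    rng.shuffle(pooled)
    perm_diffs.append(pooled[:n_a].mean() - pooled[n_a:].mean())

p_value = np.mean(np.abs(perm_diffs) >= abs(observed))
print(f"mean count difference = {observed:.2f}, permutation p = {p_value:.3f}")
```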

  17. 37 CFR 270.2 - Reports of use of sound recordings under statutory license for preexisting subscription services.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... records of such use shall be kept and made available. (b) Definitions. (1) A Collective is a collection... Code and adopted pursuant to 37 CFR 251.63(b), or by decision of a Copyright Arbitration Royalty Panel... the sound recording is found; (7) The catalog number; (8) The International Standard Recording Code...

  18. 37 CFR 270.2 - Reports of use of sound recordings under statutory license for preexisting subscription services.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... records of such use shall be kept and made available. (b) Definitions. (1) A Collective is a collection... Code and adopted pursuant to 37 CFR 251.63(b), or by decision of a Copyright Arbitration Royalty Panel... the sound recording is found; (7) The catalog number; (8) The International Standard Recording Code...

  19. Deep electrode insertion and sound coding in cochlear implants.

    PubMed

    Hochmair, Ingeborg; Hochmair, Erwin; Nopp, Peter; Waller, Melissa; Jolly, Claude

    2015-04-01

    Present-day cochlear implants demonstrate remarkable speech understanding performance despite the use of coding strategies that are not optimized for the transmission of tonal information. Most systems rely on place pitch information despite possibly large deviations from correct tonotopic placement of stimulation sites. Low-frequency information is also limited, both by the constant-rate pulsatile stimulation generally used and, even more restrictively, by the limited insertion depth of the electrodes. This results in compromised perception of music and tonal languages. Newly available flexible, long, straight electrodes permit deep insertion reaching the apical region with little or no insertion trauma. This article discusses the potential benefits of deep insertion that can be obtained using pitch-locked temporal stimulation patterns. Besides access to low-frequency information, further advantages of deeply inserted long electrodes include the possibility of better approximating the correct tonotopic location of contacts, coverage of a wider range of cochlear locations, and somewhat reduced channel interaction owing to the wider contact separation for a given number of channels. A newly developed set of strategies has been shown to improve speech understanding in noise and to enhance sound quality by providing a more "natural" impression, which becomes especially obvious when listening to music. The benefits of deep insertion should not, however, be compromised by structural damage during insertion. The small cross section and the high flexibility of the new electrodes can help to ensure less traumatic insertions, as demonstrated by patients' hearing preservation rates. This article is part of a Special Issue entitled . Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  20. Hearing sounds, understanding actions: action representation in mirror neurons.

    PubMed

    Kohler, Evelyne; Keysers, Christian; Umiltà, M Alessandra; Fogassi, Leonardo; Gallese, Vittorio; Rizzolatti, Giacomo

    2002-08-02

    Many object-related actions can be recognized by their sound. We found neurons in monkey premotor cortex that discharge when the animal performs a specific action and when it hears the related sound. Most of the neurons also discharge when the monkey observes the same action. These audiovisual mirror neurons code actions independently of whether these actions are performed, heard, or seen. This discovery in the monkey homolog of Broca's area might shed light on the origin of language: audiovisual mirror neurons code abstract contents (the meaning of actions) and have the auditory access typical of human language to these contents.

  1. In Their Own Words: Interviews with Musicians Reveal the Advantages and Disadvantages of Wearing Earplugs.

    PubMed

    Beach, Elizabeth F; O'Brien, Ian

    2017-06-01

    Musicians are at risk of hearing loss from sound exposure, and earplugs form part of many musicians' hearing conservation practices. Although musicians typically report a range of difficulties when wearing earplugs, there are many who have managed to successfully incorporate earplugs into their practice of music. The study aim was to provide a detailed account of earplug usage from the perspective of the musician, including motivating factors, practical strategies, and attitudes. In-depth interviews with 23 musicians were transcribed and content analysis was performed. Responses were coded and classified into three main themes: advantages, disadvantages, and usage patterns and strategies, together with an overlapping fourth theme, youth perspectives. Several positive aspects of wearing earplugs were identified, including long-term hearing protection and reduced levels of fatigue and pain. Musicians reported that earplugs present few problems for communication, improve sound clarity in ensembles, are discreet, and are easy to handle. However, earplugs also present challenges, including an overall dullness of sound, reduced immediacy, and an impaired ability to judge balance and intonation due to the occlusion effect, all of which influence usage habits and patterns. The experiences of the younger musicians and long-term users of earplugs indicate that practice, persistence, and a flexible approach are required for successful earplug usage. In time, there may be greater acceptance of earplugs, particularly amongst a new generation of musicians, some of whom regard the earplugs as a performance enhancement tool as well as a protective device.

  2. Numerical Simulation of Noise from Supersonic Jets Passing Through a Rigid Duct

    NASA Technical Reports Server (NTRS)

    Kandula, Max

    2012-01-01

    The generation, propagation, and radiation of sound from a perfectly expanded Mach 2.5 cold supersonic jet flowing through an enclosed rigid-walled duct with an upstream J-deflector have been numerically simulated with the aid of the OVERFLOW Navier-Stokes CFD code. A one-equation turbulence model is considered. While the near-field sound sources are computed by the CFD code, the far-field sound is evaluated by a Kirchhoff surface integral formulation. Predictions of the far-field directivity of the OASPL (Overall Sound Pressure Level) agree satisfactorily with the experimental data previously reported by the author. Calculations also suggest that there is significant entrainment of air into the duct, with the mass flow rate of entrained air being about three times the jet exit mass flow rate.
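
    The OASPL figure reported above reduces to a standard calculation once a far-field pressure time history is available; the sketch below uses a synthetic pressure signal standing in for the CFD/Kirchhoff output.

```python
# Minimal OASPL (Overall Sound Pressure Level) calculation from a pressure
# time history, with a synthetic signal standing in for CFD/Kirchhoff output.
import numpy as np

P_REF = 20e-6                      # reference pressure in Pa (air)

def oaspl(pressure_pa):
    """OASPL in dB re 20 uPa from an acoustic pressure time series."""
    p = pressure_pa - np.mean(pressure_pa)
    p_rms = np.sqrt(np.mean(p ** 2))
    return 20.0 * np.log10(p_rms / P_REF)

# Synthetic example: broadband noise with ~2 Pa rms (illustrative only).
rng = np.random.default_rng(0)
p = rng.normal(scale=2.0, size=48000)
print(f"OASPL = {oaspl(p):.1f} dB")
```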

  3. 75 FR 33696 - Safety Zone: July Firework Display in Captain of the Port, Puget Sound AOR

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-15

    ...-AA00 Safety Zone: July Firework Display in Captain of the Port, Puget Sound AOR AGENCY: Coast Guard... Captain of the Port, Puget Sound AOR. (a) Safety Zone. The following area is a designated safety zone: all..., Captain of the Port, Puget Sound. [FR Doc. 2010-14294 Filed 6-14-10; 8:45 am] BILLING CODE 9110-04-P ...

  4. Learning Midlevel Auditory Codes from Natural Sound Statistics.

    PubMed

    Młynarski, Wiktor; McDermott, Josh H

    2018-03-01

    Interaction with the world requires an organism to transform sensory signals into representations in which behaviorally meaningful properties of the environment are made explicit. These representations are derived through cascades of neuronal processing stages in which neurons at each stage recode the output of preceding stages. Explanations of sensory coding may thus involve understanding how low-level patterns are combined into more complex structures. To gain insight into such midlevel representations for sound, we designed a hierarchical generative model of natural sounds that learns combinations of spectrotemporal features from natural stimulus statistics. In the first layer, the model forms a sparse convolutional code of spectrograms using a dictionary of learned spectrotemporal kernels. To generalize from specific kernel activation patterns, the second layer encodes patterns of time-varying magnitude of multiple first-layer coefficients. When trained on corpora of speech and environmental sounds, some second-layer units learned to group similar spectrotemporal features. Others instantiate opponency between distinct sets of features. Such groupings might be instantiated by neurons in the auditory cortex, providing a hypothesis for midlevel neuronal computation.
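
    A loose, non-convolutional analogue of the model's first layer can be sketched with off-the-shelf dictionary learning on spectrogram patches; the dictionary size, patch width, and random stand-in spectrogram below are illustrative assumptions, not the authors' architecture.

```python
# Rough first-layer analogue of the model: learn a sparse (non-convolutional)
# dictionary of spectrogram patches with off-the-shelf dictionary learning.
# This is only a sketch of the idea, not the authors' hierarchical model.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
# Stand-in "spectrogram": freq bins x time frames (replace with a real one).
spectrogram = np.abs(rng.normal(size=(64, 2000)))

# Cut the spectrogram into 64 x 8 patches and flatten them.
patch_width = 8
patches = np.stack([spectrogram[:, t:t + patch_width].ravel()
                    for t in range(0, spectrogram.shape[1] - patch_width, patch_width)])
patches -= patches.mean(axis=1, keepdims=True)

# Learn a dictionary of spectrotemporal kernels with a sparsity penalty.
dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
codes = dico.fit_transform(patches)          # sparse first-layer coefficients

# A second layer could then model the time-varying magnitudes of these codes.
print("fraction of non-zero coefficients:", np.mean(codes != 0))
```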

  5. SPAIDE: A Real-time Research Platform for the Clarion CII/90K Cochlear Implant

    NASA Astrophysics Data System (ADS)

    Van Immerseel, L.; Peeters, S.; Dykmans, P.; Vanpoucke, F.; Bracke, P.

    2005-12-01

    SPAIDE ( sound-processing algorithm integrated development environment) is a real-time platform of Advanced Bionics Corporation (Sylmar, Calif, USA) to facilitate advanced research on sound-processing and electrical-stimulation strategies with the Clarion CII and 90K implants. The platform is meant for testing in the laboratory. SPAIDE is conceptually based on a clear separation of the sound-processing and stimulation strategies, and, in specific, on the distinction between sound-processing and stimulation channels and electrode contacts. The development environment has a user-friendly interface to specify sound-processing and stimulation strategies, and includes the possibility to simulate the electrical stimulation. SPAIDE allows for real-time sound capturing from file or audio input on PC, sound processing and application of the stimulation strategy, and streaming the results to the implant. The platform is able to cover a broad range of research applications; from noise reduction and mimicking of normal hearing, over complex (simultaneous) stimulation strategies, to psychophysics. The hardware setup consists of a personal computer, an interface board, and a speech processor. The software is both expandable and to a great extent reusable in other applications.

  6. Magnified Neural Envelope Coding Predicts Deficits in Speech Perception in Noise.

    PubMed

    Millman, Rebecca E; Mattys, Sven L; Gouws, André D; Prendergast, Garreth

    2017-08-09

    Verbal communication in noisy backgrounds is challenging. Understanding speech in background noise that fluctuates in intensity over time is particularly difficult for hearing-impaired listeners with a sensorineural hearing loss (SNHL). The reduction in fast-acting cochlear compression associated with SNHL exaggerates the perceived fluctuations in intensity in amplitude-modulated sounds. SNHL-induced changes in the coding of amplitude-modulated sounds may have a detrimental effect on the ability of SNHL listeners to understand speech in the presence of modulated background noise. To date, direct evidence for a link between magnified envelope coding and deficits in speech identification in modulated noise has been absent. Here, magnetoencephalography was used to quantify the effects of SNHL on phase locking to the temporal envelope of modulated noise (envelope coding) in human auditory cortex. Our results show that SNHL enhances the amplitude of envelope coding in posteromedial auditory cortex, whereas it enhances the fidelity of envelope coding in posteromedial and posterolateral auditory cortex. This dissociation was more evident in the right hemisphere, demonstrating functional lateralization in enhanced envelope coding in SNHL listeners. However, enhanced envelope coding was not perceptually beneficial. Our results also show that both hearing thresholds and, to a lesser extent, magnified cortical envelope coding in left posteromedial auditory cortex predict speech identification in modulated background noise. We propose a framework in which magnified envelope coding in posteromedial auditory cortex disrupts the segregation of speech from background noise, leading to deficits in speech perception in modulated background noise. SIGNIFICANCE STATEMENT People with hearing loss struggle to follow conversations in noisy environments. Background noise that fluctuates in intensity over time poses a particular challenge. Using magnetoencephalography, we demonstrate anatomically distinct cortical representations of modulated noise in normal-hearing and hearing-impaired listeners. This work provides the first link among hearing thresholds, the amplitude of cortical representations of modulated sounds, and the ability to understand speech in modulated background noise. In light of previous work, we propose that magnified cortical representations of modulated sounds disrupt the separation of speech from modulated background noise in auditory cortex. Copyright © 2017 Millman et al.
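
    One common way to quantify envelope coding of the kind discussed above is the coherence between the stimulus envelope and a neural signal; the sketch below uses a simulated response in place of MEG source activity.

```python
# Toy sketch of quantifying envelope coding: coherence between the temporal
# envelope of an amplitude-modulated noise and a simulated cortical signal.
# (Simulated data only; the study used MEG source time courses.)
import numpy as np
from scipy.signal import hilbert, coherence

fs = 600.0                                   # Hz, an illustrative sampling rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)

# Amplitude-modulated noise stimulus (4 Hz modulation) and its envelope.
carrier = rng.normal(size=t.size)
stimulus = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * carrier
envelope = np.abs(hilbert(stimulus))

# Simulated neural response: noisy, partially phase-locked copy of the envelope.
response = 0.5 * envelope + rng.normal(scale=1.0, size=t.size)

f, coh = coherence(envelope, response, fs=fs, nperseg=1024)
print(f"coherence near 4 Hz: {coh[np.argmin(np.abs(f - 4))]:.2f}")
```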

  7. Numerical simulation of turbulent jet noise, part 2

    NASA Technical Reports Server (NTRS)

    Metcalfe, R. W.; Orszag, S. A.

    1976-01-01

    Results on the numerical simulation of jet flow fields were used to study the radiated sound field, and in addition, to extend and test the capabilities of the turbulent jet simulation codes. The principal result of the investigation was the computation of the radiated sound field from a turbulent jet. In addition, the computer codes were extended to account for the effects of compressibility and eddy viscosity, and the treatment of the nonlinear terms of the Navier-Stokes equations was modified so that they can be computed in a semi-implicit way. A summary of the flow model and a description of the numerical methods used for its solution are presented. Calculations of the radiated sound field are reported. In addition, the extensions that were made to the fundamental dynamical codes are described. Finally, the current state-of-the-art for computer simulation of turbulent jet noise is summarized.

  8. Simulation of Jet Noise with OVERFLOW CFD Code and Kirchhoff Surface Integral

    NASA Technical Reports Server (NTRS)

    Kandula, M.; Caimi, R.; Voska, N. (Technical Monitor)

    2002-01-01

    An acoustic prediction capability for supersonic axisymmetric jets was developed on the basis of OVERFLOW Navier-Stokes CFD (Computational Fluid Dynamics) code of NASA Langley Research Center. Reynolds-averaged turbulent stresses in the flow field are modeled with the aid of Spalart-Allmaras one-equation turbulence model. Appropriate acoustic and outflow boundary conditions were implemented to compute time-dependent acoustic pressure in the nonlinear source-field. Based on the specification of acoustic pressure, its temporal and normal derivatives on the Kirchhoff surface, the near-field and the far-field sound pressure levels are computed via Kirchhoff surface integral, with the Kirchhoff surface chosen to enclose the nonlinear sound source region described by the CFD code. The methods are validated by a comparison of the predictions of sound pressure levels with the available data for an axisymmetric turbulent supersonic (Mach 2) perfectly expanded jet.

  9. Simulation of Supersonic Jet Noise with the Adaptation of Overflow CFD Code and Kirchhoff Surface Integral

    NASA Technical Reports Server (NTRS)

    Kandula, Max; Caimi, Raoul; Steinrock, T. (Technical Monitor)

    2001-01-01

    An acoustic prediction capability for supersonic axisymmetric jets was developed on the basis of OVERFLOW Navier-Stokes CFD (Computational Fluid Dynamics) code of NASA Langley Research Center. Reynolds-averaged turbulent stresses in the flow field are modeled with the aid of Spalart-Allmaras one-equation turbulence model. Appropriate acoustic and outflow boundary conditions were implemented to compute time-dependent acoustic pressure in the nonlinear source-field. Based on the specification of acoustic pressure, its temporal and normal derivatives on the Kirchhoff surface, the near-field and the far-field sound pressure levels are computed via Kirchhoff surface integral, with the Kirchhoff surface chosen to enclose the nonlinear sound source region described by the CFD code. The methods are validated by a comparison of the predictions of sound pressure levels with the available data for an axisymmetric turbulent supersonic (Mach 2) perfectly expanded jet.

  10. Memory for pictures and sounds: independence of auditory and visual codes.

    PubMed

    Thompson, V A; Paivio, A

    1994-09-01

    Three experiments examined the mnemonic independence of auditory and visual nonverbal stimuli in free recall. Stimulus lists consisted of (1) pictures, (2) the corresponding environmental sounds, or (3) picture-sound pairs. In Experiment 1, free recall was tested under three learning conditions: standard intentional, intentional with a rehearsal-inhibiting distracter task, or incidental with the distracter task. In all three groups, recall was best for the picture-sound items. In addition, recall for the picture-sound stimuli appeared to be additive relative to pictures or sounds alone when the distracter task was used. Experiment 2 included two additional groups: In one, two copies of the same picture were shown simultaneously; in the other, two different pictures of the same concept were shown. There was no difference in recall among any of the picture groups; in contrast, recall in the picture-sound condition was greater than recall in either single-modality condition. However, doubling the exposure time in a third experiment resulted in additively higher recall for repeated pictures with different exemplars than ones with identical exemplars. The results are discussed in terms of dual coding theory and alternative conceptions of the memory trace.

  11. Intercomparison of three microwave/infrared high resolution line-by-line radiative transfer codes

    NASA Astrophysics Data System (ADS)

    Schreier, F.; Garcia, S. Gimeno; Milz, M.; Kottayil, A.; Höpfner, M.; von Clarmann, T.; Stiller, G.

    2013-05-01

    An intercomparison of three line-by-line (lbl) codes developed independently for atmospheric sounding - ARTS, GARLIC, and KOPRA - has been performed for a thermal infrared nadir sounding application assuming a HIRS-like (High resolution Infrared Radiation Sounder) setup. Radiances for the HIRS infrared channels and a set of 42 atmospheric profiles from the "Garand dataset" have been computed. Results of this intercomparison and a discussion of reasons of the observed differences are presented.

  12. Helioseismic Constraints on New Solar Models from the MoSEC Code

    NASA Technical Reports Server (NTRS)

    Elliott, J. R.

    1998-01-01

    Evolutionary solar models are computed using a new stellar evolution code, MOSEC (Modular Stellar Evolution Code). This code has been designed with carefully controlled truncation errors in order to achieve a precision which reflects the increasingly accurate determination of solar interior structure by helioseismology. A series of models is constructed to investigate the effects of the choice of equation of state (OPAL or MHD-E, the latter being a version of the MHD equation of state recalculated by the author), the inclusion of helium and heavy-element settling and diffusion, and the inclusion of a simple model of mixing associated with the solar tachocline. The neutrino flux predictions are discussed, while the sound speed of the computed models is compared to that of the sun via the latest inversion of SOI-NMI p-mode frequency data. The comparison between models calculated with the OPAL and MHD-E equations of state is particularly interesting because the MHD-E equation of state includes relativistic effects for the electrons, whereas neither MHD nor OPAL do. This has a significant effect on the sound speed of the computed model, worsening the agreement with the solar sound speed. Using the OPAL equation of state and including the settling and diffusion of helium and heavy elements produces agreement in sound speed with the helioseismic results to within about ±0.2%; the inclusion of mixing slightly improves the agreement.

  13. Improvement of the predicted aural detection code ICHIN (I Can Hear It Now)

    NASA Technical Reports Server (NTRS)

    Mueller, Arnold W.; Smith, Charles D.; Lemasurier, Phillip

    1993-01-01

    Acoustic tests were conducted to study the far-field sound pressure levels and aural detection ranges associated with a Sikorsky S-76A helicopter in straight and level flight at various advancing blade tip Mach numbers. The flight altitude was nominally 150 meters above ground level. This paper compares the normalized predicted aural detection distances, based on the measured far-field sound pressure levels, to the normalized measured aural detection distances obtained from sound jury response measurements obtained during the same test. Both unmodified and modified versions of the prediction code ICHIN-6 (I Can Hear It Now) were used to produce the results for this study.

  14. SAFETY ON UNTRUSTED NETWORK DEVICES (SOUND)

    DTIC Science & Technology

    2017-10-10

    in the Cyber & Communication Technologies Group , but not on the SOUND project, would review the code, design and perform attacks against a live...3.5 Red Team As part of our testing , we planned to conduct Red Team assessments. In these assessments, a group of engineers from BAE who worked...developed under the DARPA CRASH program and SOUND were designed to be companion projects. SAFE focused on the processor and the host, SOUND focused on

  15. Musical Sound Quality in Cochlear Implant Users: A Comparison in Bass Frequency Perception Between Fine Structure Processing and High-Definition Continuous Interleaved Sampling Strategies.

    PubMed

    Roy, Alexis T; Carver, Courtney; Jiradejvong, Patpong; Limb, Charles J

    2015-01-01

    Med-El cochlear implant (CI) patients are typically programmed with either the fine structure processing (FSP) or high-definition continuous interleaved sampling (HDCIS) strategy. FSP is the newer-generation strategy and aims to provide more direct encoding of fine structure information compared with HDCIS. Since fine structure information is extremely important in music listening, FSP may offer improvements in musical sound quality for CI users. Despite widespread clinical use of both strategies, few studies have assessed the possible benefits in music perception for the FSP strategy. The objective of this study is to measure the differences in musical sound quality discrimination between the FSP and HDCIS strategies. Musical sound quality discrimination was measured using a previously designed evaluation, called Cochlear Implant-MUltiple Stimulus with Hidden Reference and Anchor (CI-MUSHRA). In this evaluation, participants were required to detect sound quality differences between an unaltered real-world musical stimulus and versions of the stimulus in which varying amounts of bass (low-frequency) information were removed via a high-pass filter. Eight CI users, currently using the FSP strategy, were enrolled in this study. In the first session, participants completed the CI-MUSHRA evaluation with their FSP strategy. Patients were then programmed with the clinical-default HDCIS strategy, which they used for 2 months to allow for acclimatization. After acclimatization, each participant returned for the second session, during which they were retested with HDCIS, and then switched back to their original FSP strategy and tested acutely. Sixteen normal-hearing (NH) controls completed a CI-MUSHRA evaluation for comparison, in which NH controls listened to music samples under normal acoustic conditions, without CI stimulation. Sensitivity to high-pass filtering more closely resembled that of NH controls when CI users were programmed with the clinical-default FSP strategy compared with performance when programmed with HDCIS (mixed-design analysis of variance, p < 0.05). The clinical-default FSP strategy offers improvements in musical sound quality discrimination for CI users with respect to bass frequency perception. This improved bass frequency discrimination may in turn support enhanced musical sound quality. This is the first study that has demonstrated objective improvements in musical sound quality discrimination with the newer-generation FSP strategy. These positive results may help guide the selection of processing strategies for Med-El CI patients. In addition, CI-MUSHRA may also provide a novel method for assessing the benefits of newer processing strategies in the future.
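
    The CI-MUSHRA anchors described above are built by progressively removing bass with a high-pass filter; the sketch below shows that manipulation on a synthetic two-tone "music" clip (cutoff values and filter order are illustrative, not the published stimulus parameters).

```python
# Sketch of generating CI-MUSHRA-style "bass removed" versions of a music
# clip with a high-pass filter (illustrative; not the exact published stimuli).
import numpy as np
from scipy.signal import butter, sosfiltfilt

def remove_bass(audio, fs, cutoff_hz):
    """Return the clip with content below cutoff_hz removed (zero-phase)."""
    sos = butter(4, cutoff_hz, btype="highpass", fs=fs, output="sos")
    return sosfiltfilt(sos, audio)

fs = 44100
t = np.arange(0, 2.0, 1 / fs)
# Stand-in "music": a bass note (110 Hz) plus a higher tone (880 Hz).
clip = 0.5 * np.sin(2 * np.pi * 110 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

for cutoff in (200, 400, 800):               # progressively more bass removed
    filtered = remove_bass(clip, fs, cutoff)
    print(f"cutoff {cutoff} Hz: rms drops from {np.std(clip):.3f} to {np.std(filtered):.3f}")
```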

  16. Design of Phoneme MIDI Codes Using the MIDI Encoding Tool “Auto-F” and Realizing Voice Synthesizing Functions Based on Musical Sounds

    NASA Astrophysics Data System (ADS)

    Modegi, Toshio

    Using our previously developed audio-to-MIDI converter tool “Auto-F”, we can create MIDI data from given vocal acoustic signals, which makes it possible to play back voice-like signals with a standard MIDI synthesizer. Applying this tool, we are constructing a MIDI database that consists of simple harmonic-structured MIDI codes converted from a set of 71 recorded Japanese male and female syllables. We are also developing a novel voice-synthesizing system based on harmonically synthesizing musical sounds, which can generate MIDI data and play back voice signals with a MIDI synthesizer from Japanese kana text input, referring to the syllable MIDI code database. In this paper, we propose an improved MIDI converter tool that can produce temporally higher-resolution MIDI codes. We then propose an algorithm that separates a set of 20 consonant and vowel phoneme MIDI codes from the 71 converted syllable MIDI codes in order to construct a voice-synthesizing system. Finally, we present evaluation results comparing the voice-synthesis quality of these separated phoneme MIDI codes with that of their original syllable MIDI codes, using 4-syllable word listening tests.
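
    One elementary step in any audio-to-MIDI conversion is mapping spectral peak frequencies to MIDI note numbers; the sketch below shows that standard mapping on a synthetic harmonic frame and is not the Auto-F algorithm itself.

```python
# One step of an audio-to-MIDI conversion, sketched from first principles:
# pick spectral peaks and map their frequencies to MIDI note numbers.
# This is NOT the Auto-F algorithm, just the standard frequency-to-MIDI math.
import numpy as np

def freq_to_midi(freq_hz):
    """Standard mapping: MIDI note 69 = A4 = 440 Hz."""
    return int(round(69 + 12 * np.log2(freq_hz / 440.0)))

fs = 16000
t = np.arange(0, 0.1, 1 / fs)
# Stand-in vowel-like frame: harmonics of a 150 Hz fundamental.
frame = sum(np.sin(2 * np.pi * 150 * k * t) / k for k in range(1, 6))

spectrum = np.abs(np.fft.rfft(frame * np.hanning(frame.size)))
freqs = np.fft.rfftfreq(frame.size, 1 / fs)

# Take the few strongest bins above 50 Hz as candidate "notes".
candidates = np.argsort(spectrum)[::-1]
peaks = [i for i in candidates if freqs[i] > 50][:5]
for i in sorted(peaks, key=lambda i: freqs[i]):
    print(f"{freqs[i]:7.1f} Hz -> MIDI note {freq_to_midi(freqs[i])}")
```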

  17. Assessment and improvement of sound quality in cochlear implant users

    PubMed Central

    Caldwell, Meredith T.; Jiam, Nicole T.

    2017-01-01

    Objectives Cochlear implants (CIs) have successfully provided speech perception to individuals with sensorineural hearing loss. Recent research has focused on more challenging acoustic stimuli such as music and voice emotion. The purpose of this review is to evaluate and describe sound quality in CI users, summarizing novel findings and crucial information about how CI users experience complex sounds. Data Sources Here we review the existing literature on PubMed and Scopus to present what is known about perceptual sound quality in CI users, discuss existing measures of sound quality, explore how sound quality may be effectively studied, and examine potential strategies for improving sound quality in the CI population. Results Sound quality, defined here as the perceived richness of an auditory stimulus, is an attribute of implant-mediated listening that remains poorly studied. Sound quality is distinct from appraisal, which is generally defined as the subjective likability or pleasantness of a sound. Existing studies suggest that sound quality perception in the CI population is limited by a range of factors, most notably pitch distortion and dynamic range compression. Although there are currently very few objective measures of sound quality, the CI-MUSHRA has been used as a means of evaluating sound quality. There exist a number of promising strategies to improve sound quality perception in the CI population, including apical cochlear stimulation, pitch tuning, and noise reduction processing strategies. Conclusions In the published literature, sound quality perception is severely limited among CI users. Future research should focus on developing systematic, objective, and quantitative sound quality metrics and designing therapies to mitigate poor sound quality perception in CI users. Level of Evidence NA PMID:28894831

  18. Contributions of Sensory Coding and Attentional Control to Individual Differences in Performance in Spatial Auditory Selective Attention Tasks.

    PubMed

    Dai, Lengshi; Shinn-Cunningham, Barbara G

    2016-01-01

    Listeners with normal hearing thresholds (NHTs) differ in their ability to steer attention to whatever sound source is important. This ability depends on top-down executive control, which modulates the sensory representation of sound in the cortex. Yet, this sensory representation also depends on the coding fidelity of the peripheral auditory system. Both of these factors may thus contribute to the individual differences in performance. We designed a selective auditory attention paradigm in which we could simultaneously measure envelope following responses (EFRs, reflecting peripheral coding), onset event-related potentials (ERPs) from the scalp (reflecting cortical responses to sound) and behavioral scores. We performed two experiments that varied stimulus conditions to alter the degree to which performance might be limited due to fine stimulus details vs. due to control of attentional focus. Consistent with past work, in both experiments we find that attention strongly modulates cortical ERPs. Importantly, in Experiment I, where coding fidelity limits the task, individual behavioral performance correlates with subcortical coding strength (derived by computing how the EFR is degraded for fully masked tones compared to partially masked tones); however, in this experiment, the effects of attention on cortical ERPs were unrelated to individual subject performance. In contrast, in Experiment II, where sensory cues for segregation are robust (and thus less of a limiting factor on task performance), inter-subject behavioral differences correlate with subcortical coding strength. In addition, after factoring out the influence of subcortical coding strength, behavioral differences are also correlated with the strength of attentional modulation of ERPs. These results support the hypothesis that behavioral abilities amongst listeners with NHTs can arise due to both subcortical coding differences and differences in attentional control, depending on stimulus characteristics and task demands.
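
    "Factoring out the influence of subcortical coding strength" amounts to a partial correlation; the sketch below computes it on simulated listener measures by regressing the subcortical measure out of both variables and correlating the residuals.

```python
# Minimal sketch of "factoring out" subcortical coding strength: correlate
# behavior with cortical attentional modulation after regressing the
# subcortical measure out of both (a partial correlation). Simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 25                                        # number of simulated listeners
subcortical = rng.normal(size=n)              # e.g., EFR-based coding strength
cortical_mod = 0.6 * subcortical + rng.normal(scale=0.8, size=n)
behavior = 0.5 * subcortical + 0.4 * cortical_mod + rng.normal(scale=0.5, size=n)

def residualize(y, x):
    """Residuals of y after removing its linear dependence on x."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

r_partial, p = stats.pearsonr(residualize(behavior, subcortical),
                              residualize(cortical_mod, subcortical))
print(f"partial correlation (behavior ~ cortical | subcortical): r = {r_partial:.2f}, p = {p:.3f}")
```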

  19. Are minidisc recorders adequate for the study of respiratory sounds?

    PubMed

    Kraman, Steve S; Wodicka, George R; Kiyokawa, Hiroshi; Pasterkamp, Hans

    2002-01-01

    Digital audio tape (DAT) recorders have become the de facto gold standard recording devices for lung sounds. Sound recorded on DAT is compact-disk (CD) quality with adequate sensitivity from below 20 Hz to above 20 KHz. However, DAT recorders have drawbacks. Although small, they are relatively heavy, the recording mechanism is complex and delicate, and finding one desired track out of many is inconvenient. A more recent development in portable recording devices is the minidisc (MD) recorder. These recorders are widely available, inexpensive, small and light, rugged, mechanically simple, and record digital data in tracks that may be named and accessed directly. Minidiscs hold as much recorded sound as a compact disk but in about 1/5 of the recordable area. The data compression is achieved by use of a technique known as adaptive transform acoustic coding for minidisc (ATRAC). This coding technique makes decisions about what components of the sound would not be heard by a human listener and discards the digital information that represents these sounds. Most of this compression takes place on sounds above 5.5 KHz. As the intended use of these recorders is the storage and reproduction of music, it is unknown whether ATRAC will discard or distort significant portions of typical lung sound signals. We determined the suitability of MD recorders for respiratory sound research by comparing a variety of normal and pathologic lung sounds that were digitized directly into a computer and also after recording by a DAT recorder and 2 different MD recorders (Sharp and Sony). We found that the frequency spectra and waveforms of respiratory sounds were not distorted in any important way by recording on the two MD recorders tested.

  20. 37 CFR 270.1 - Notice of use of sound recordings under statutory license.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Notice and by the date of the signature. (e) Filing notices; fees. The original and three copies shall be... sound recordings when used under either section 112(e) or 114(d)(2) of title 17, United States Code, or... notice to sound recording copyright owners of the use of their works under section 112(e) or 114(d)(2) of...

  1. Speech processing using maximum likelihood continuity mapping

    DOEpatents

    Hogden, John E.

    2000-01-01

    Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.

  2. Speech processing using maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, J.E.

    Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.

  3. Cross-Modal Correspondences Enhance Performance on a Colour-to-Sound Sensory Substitution Device.

    PubMed

    Hamilton-Fletcher, Giles; Wright, Thomas D; Ward, Jamie

    Visual sensory substitution devices (SSDs) can represent visual characteristics through distinct patterns of sound, allowing a visually impaired user access to visual information. Previous SSDs have avoided colour and when they do encode colour, have assigned sounds to colour in a largely unprincipled way. This study introduces a new tablet-based SSD termed the ‘Creole’ (so called because it combines tactile scanning with image sonification) and a new algorithm for converting colour to sound that is based on established cross-modal correspondences (intuitive mappings between different sensory dimensions). To test the utility of correspondences, we examined the colour–sound associative memory and object recognition abilities of sighted users who had their device either coded in line with or opposite to sound–colour correspondences. Improved colour memory and reduced colour-errors were made by users who had the correspondence-based mappings. Interestingly, the colour–sound mappings that provided the highest improvements during the associative memory task also saw the greatest gains for recognising realistic objects that also featured these colours, indicating a transfer of abilities from memory to recognition. These users were also marginally better at matching sounds to images varying in luminance, even though luminance was coded identically across the different versions of the device. These findings are discussed with relevance for both colour and correspondences for sensory substitution use.
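
    As a purely illustrative example of a correspondence-style colour-to-sound mapping (and not the Creole device's published algorithm), the sketch below maps lighter colours to higher-pitched, louder tones.

```python
# Illustrative correspondence-style colour-to-sound mapping: lighter colours
# become higher-pitched and louder tones. This is a made-up example of the
# general idea, NOT the Creole device's published algorithm.
import colorsys
import numpy as np

def colour_to_tone(r, g, b, fs=16000, dur=0.3):
    """Map an RGB colour (0-1 floats) to a short sine tone."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    freq = 200.0 * 2 ** (3 * l)        # lighter colours -> higher pitch
    amp = 0.2 + 0.8 * l                # lighter colours -> louder
    t = np.arange(0, dur, 1 / fs)
    return amp * np.sin(2 * np.pi * freq * t)

for name, rgb in [("dark red", (0.4, 0.0, 0.0)),
                  ("mid green", (0.0, 0.6, 0.0)),
                  ("light yellow", (1.0, 1.0, 0.6))]:
    tone = colour_to_tone(*rgb)
    print(f"{name:12s} -> peak amplitude {tone.max():.2f}")
```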

  4. Turbofan noise generation. Volume 2: Computer programs

    NASA Technical Reports Server (NTRS)

    Ventres, C. S.; Theobald, M. A.; Mark, W. D.

    1982-01-01

    The use of a package of computer programs developed to calculate the in-duct acoustic modes excited by a fan/stator stage operating at subsonic tip speed is described. The following three noise source mechanisms are included: (1) sound generated by the rotor blades interacting with turbulence ingested into, or generated within, the inlet duct; (2) sound generated by the stator vanes interacting with the turbulent wakes of the rotor blades; and (3) sound generated by the stator vanes interacting with the velocity deficits in the mean wakes of the rotor blades. The computations for the three different noise mechanisms are coded as three separate computer program packages. The computer codes are described by means of block diagrams, tables of data and variables, and example program executions; FORTRAN listings are included.

  5. Turbofan noise generation. Volume 2: Computer programs

    NASA Astrophysics Data System (ADS)

    Ventres, C. S.; Theobald, M. A.; Mark, W. D.

    1982-07-01

    The use of a package of computer programs developed to calculate the in-duct acoustic modes excited by a fan/stator stage operating at subsonic tip speed is described. The following three noise source mechanisms are included: (1) sound generated by the rotor blades interacting with turbulence ingested into, or generated within, the inlet duct; (2) sound generated by the stator vanes interacting with the turbulent wakes of the rotor blades; and (3) sound generated by the stator vanes interacting with the velocity deficits in the mean wakes of the rotor blades. The computations for the three different noise mechanisms are coded as three separate computer program packages. The computer codes are described by means of block diagrams, tables of data and variables, and example program executions; FORTRAN listings are included.

  6. Nihilism, relativism, and Engelhardt.

    PubMed

    Wreen, M

    1998-01-01

    This paper is a critical analysis of Tristram Engelhardt's attempts to avoid unrestricted nihilism and relativism. The focus of attention is his recent book, The Foundations of Bioethics (Oxford University Press, 1996). No substantive or "content-full" bioethics (e.g., that of Roman Catholicism or the Samurai) has an intersubjectively verifiable and universally binding foundation, Engelhardt thinks, for unaided secular reason cannot show that any particular substantive morality (or moral code) is correct. He thus seems to be committed to either nihilism or relativism. The first is the view that there is not even one true or valid moral code, and the second is the view that there is a plurality of true or valid moral codes. However, Engelhardt rejects both nihilism and relativism, at least in unrestricted form. Strictly speaking, he himself is a universalist, someone who believes that there is a single true moral code. Two argumentative strategies are employed by him to fend off unconstrained nihilism and relativism. The first argues that although all attempts to establish a content-full morality on the basis of secular reason fail, secular reason can still establish a content-less, purely procedural morality. Although not content-full and incapable of providing positive direction in life, much less a meaning of life, such a morality does limit the range of relativism and nihilism. The second argues that there is a single true, content-full morality. Grace and revelation, however, are needed to make it available to us; secular reason alone is not up to the task. This second line of argument is not pursued in The Foundations at any length, but it does crop up at times, and if it is sound, nihilism and relativism can be much more thoroughly routed than the first line of argument has it. Engelhardt's position and argumentative strategies are exposed at length and accorded a detailed critical examination. In the end, it is concluded that neither strategy will do, and that Engelhardt is probably committed to some form of relativism.

  7. Launch Summary for 1979

    NASA Technical Reports Server (NTRS)

    Vostreys, R. W.

    1980-01-01

    Spacecraft launchings for 1979 are identified and listed under the categories of (1) sounding rockets, and (2) artificial Earth satellites and space probes. The sounding rockets section includes a listing of the experiments, an index of launch sites, and tables of the meanings and codes used in the launch listing.

  8. An open access database for the evaluation of heart sound algorithms.

    PubMed

    Liu, Chengyu; Springer, David; Li, Qiao; Moody, Benjamin; Juan, Ricardo Abad; Chorro, Francisco J; Castells, Francisco; Roig, José Millet; Silva, Ikaro; Johnson, Alistair E W; Syed, Zeeshan; Schmidt, Samuel E; Papadaniil, Chrysa D; Hadjileontiadis, Leontios; Naseri, Hosein; Moukadem, Ali; Dieterlen, Alain; Brandt, Christian; Tang, Hong; Samieinasab, Maryam; Samieinasab, Mohammad Reza; Sameni, Reza; Mark, Roger G; Clifford, Gari D

    2016-12-01

    In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and has been reported to have the potential value to detect pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected from a variety of clinical or nonclinical (such as in-home visits) environments and equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand corrected annotations for different heart sound states, the scoring mechanism, and associated open source code are provided. In addition, several potential benefits from the public heart sound database are discussed.

  9. The Fast Scattering Code (FSC): Validation Studies and Program Guidelines

    NASA Technical Reports Server (NTRS)

    Tinetti, Ana F.; Dunn, Mark H.

    2011-01-01

    The Fast Scattering Code (FSC) is a frequency domain noise prediction program developed at the NASA Langley Research Center (LaRC) to simulate the acoustic field produced by the interaction of known, time harmonic incident sound with bodies of arbitrary shape and surface impedance immersed in a potential flow. The code uses the equivalent source method (ESM) to solve an exterior 3-D Helmholtz boundary value problem (BVP) by expanding the scattered acoustic pressure field into a series of point sources distributed on a fictitious surface placed inside the actual scatterer. This work provides additional code validation studies and illustrates the range of code parameters that produce accurate results with minimal computational costs. Systematic noise prediction studies are presented in which monopole generated incident sound is scattered by simple geometric shapes - spheres (acoustically hard and soft surfaces), oblate spheroids, flat disk, and flat plates with various edge topologies. Comparisons between FSC simulations and analytical results and experimental data are presented.
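
    The equivalent source method can be illustrated in a few lines for the simplest case of an acoustically soft sphere: interior point sources are fitted by least squares so that the total pressure vanishes at surface collocation points. The source counts, frequency, and geometry below are arbitrary choices, not FSC defaults.

```python
# Toy equivalent-source-method (ESM) sketch: scattering of a monopole field
# by an acoustically soft (pressure-release) sphere. Interior point sources
# are fitted by least squares so the total pressure vanishes on the surface.
# Illustrative only; the FSC itself is far more general.
import numpy as np

k = 2 * np.pi * 1000 / 343.0                 # wavenumber at 1 kHz in air
rng = np.random.default_rng(0)

def greens(src, obs):
    """Free-space Green's function exp(ikr)/(4*pi*r) between point sets."""
    r = np.linalg.norm(obs[:, None, :] - src[None, :, :], axis=-1)
    return np.exp(1j * k * r) / (4 * np.pi * r)

def sphere_points(n, radius):
    v = rng.normal(size=(n, 3))
    return radius * v / np.linalg.norm(v, axis=1, keepdims=True)

surface = sphere_points(400, radius=0.5)     # collocation points on scatterer
sources = sphere_points(100, radius=0.3)     # equivalent sources inside it
monopole = np.array([[2.0, 0.0, 0.0]])       # incident-field source location

# Soft-sphere boundary condition: scattered pressure = -incident pressure.
p_inc = greens(monopole, surface)[:, 0]
A = greens(sources, surface)
strengths, *_ = np.linalg.lstsq(A, -p_inc, rcond=None)

# Scattered pressure at a far observation point.
observer = np.array([[0.0, 0.0, 10.0]])
p_scat = greens(sources, observer) @ strengths
print(f"scattered pressure magnitude at observer: {abs(p_scat[0]):.3e}")
```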

  10. Modeling the Effects of Transbasin Nonlinear Internal Waves Through the South China Sea Basin

    DTIC Science & Technology

    2013-06-01

    sound propagation through the SCS needs to be developed to help maintain tactical superiority. This model will provide valuable information for...METHODOLOGY A. ACOUSTIC MODEL 1. Ray Trace Theory Modeling of sound propagation through the ocean requires solving the governing spherical wave equation...arrival structure simulation code. The model permits the study of the physics and phenomenology of sound propagation though the SCS

  11. Efficient Coding and Statistically Optimal Weighting of Covariance among Acoustic Attributes in Novel Sounds

    PubMed Central

    Stilp, Christian E.; Kluender, Keith R.

    2012-01-01

    To the extent that sensorineural systems are efficient, redundancy should be extracted to optimize transmission of information, but perceptual evidence for this has been limited. Stilp and colleagues recently reported efficient coding of robust correlation (r = .97) among complex acoustic attributes (attack/decay, spectral shape) in novel sounds. Discrimination of sounds orthogonal to the correlation was initially inferior but later comparable to that of sounds obeying the correlation. These effects were attenuated for less-correlated stimuli (r = .54) for reasons that are unclear. Here, statistical properties of correlation among acoustic attributes essential for perceptual organization are investigated. Overall, simple strength of the principal correlation is inadequate to predict listener performance. Initial superiority of discrimination for statistically consistent sound pairs was relatively insensitive to decreased physical acoustic/psychoacoustic range of evidence supporting the correlation, and to more frequent presentations of the same orthogonal test pairs. However, increased range supporting an orthogonal dimension has substantial effects upon perceptual organization. Connectionist simulations and Eigenvalues from closed-form calculations of principal components analysis (PCA) reveal that perceptual organization is near-optimally weighted to shared versus unshared covariance in experienced sound distributions. Implications of reduced perceptual dimensionality for speech perception and plausible neural substrates are discussed. PMID:22292057
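
    The closed-form PCA view mentioned above is easy to reproduce: for two unit-variance attributes correlated at r, the covariance eigenvalues are 1+r and 1-r, so the share of variance along the principal (correlated) dimension follows directly, as the sketch below shows for the two correlation levels studied.

```python
# Sketch of the closed-form PCA view: for two acoustic attributes correlated
# at r, the covariance eigenvalues show how much variance lies along the
# correlated dimension versus the orthogonal one. Illustrative values only.
import numpy as np

for r in (0.97, 0.54):
    cov = np.array([[1.0, r],
                    [r, 1.0]])               # unit-variance attributes, correlation r
    eigvals = np.linalg.eigvalsh(cov)[::-1]  # descending order: 1+r, 1-r
    share = eigvals[0] / eigvals.sum()
    print(f"r = {r:.2f}: eigenvalues = {eigvals.round(2)}, "
          f"principal dimension carries {share:.0%} of the variance")
```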

  12. Songbirds and humans apply different strategies in a sound sequence discrimination task.

    PubMed

    Seki, Yoshimasa; Suzuki, Kenta; Osawa, Ayumi M; Okanoya, Kazuo

    2013-01-01

    The abilities of animals and humans to extract rules from sound sequences have previously been compared using observation of spontaneous responses and conditioning techniques. However, the results were inconsistently interpreted across studies possibly due to methodological and/or species differences. Therefore, we examined the strategies for discrimination of sound sequences in Bengalese finches and humans using the same protocol. Birds were trained on a GO/NOGO task to discriminate between two categories of sound stimulus generated based on an "AAB" or "ABB" rule. The sound elements used were taken from a variety of male (M) and female (F) calls, such that the sequences could be represented as MMF and MFF. In test sessions, FFM and FMM sequences, which were never presented in the training sessions but conformed to the rule, were presented as probe stimuli. The results suggested two discriminative strategies were being applied: (1) memorizing sound patterns of either GO or NOGO stimuli and generating the appropriate responses for only those sounds; and (2) using the repeated element as a cue. There was no evidence that the birds successfully extracted the abstract rule (i.e., AAB and ABB); MMF-GO subjects did not produce a GO response for FFM and vice versa. Next we examined whether those strategies were also applicable for human participants on the same task. The results and questionnaires revealed that participants extracted the abstract rule, and most of them employed it to discriminate the sequences. This strategy was never observed in bird subjects, although some participants used strategies similar to the birds when responding to the probe stimuli. Our results showed that the human participants applied the abstract rule in the task even without instruction but Bengalese finches did not, thereby reconfirming that humans have to extract abstract rules from sound sequences that is distinct from non-human animals.

  13. Critical role for cochlear hair cell BK channels for coding the temporal structure and dynamic range of auditory information for central auditory processing

    PubMed Central

    Kurt, Simone; Sausbier, Matthias; Rüttiger, Lukas; Brandt, Niels; Moeller, Christoph K.; Kindler, Jennifer; Sausbier, Ulrike; Zimmermann, Ulrike; van Straaten, Harald; Neuhuber, Winfried; Engel, Jutta; Knipper, Marlies; Ruth, Peter; Schulze, Holger

    2012-01-01

    Large conductance, voltage- and Ca2+-activated K+ (BK) channels in inner hair cells (IHCs) of the cochlea are essential for hearing. However, germline deletion of BKα, the pore-forming subunit KCNMA1 of the BK channel, surprisingly did not affect hearing thresholds in the first postnatal weeks, even though altered IHC membrane time constants, decreased IHC receptor potential alternating current/direct current ratio, and impaired spike timing of auditory fibers were reported in these mice. To investigate the role of IHC BK channels for central auditory processing, we generated a conditional mouse model with hair cell-specific deletion of BKα from postnatal day 10 onward. This had an unexpected effect on temporal coding in the central auditory system: neuronal single and multiunit responses in the inferior colliculus showed higher excitability and greater precision of temporal coding that may be linked to the improved discrimination of temporally modulated sounds observed in behavioral training. The higher precision of temporal coding, however, was restricted to slower modulations of sound and reduced stimulus-driven activity. This suggests a diminished dynamic range of stimulus coding that is expected to impair signal detection in noise. Thus, BK channels in IHCs are crucial for central coding of the temporal fine structure of sound and for detection of signals in a noisy environment.—Kurt, S., Sausbier, M., Rüttiger, L., Brandt, N., Moeller, C. K., Kindler, J., Sausbier, U., Zimmermann, U., van Straaten, H., Neuhuber, W., Engel, J., Knipper, M., Ruth, P., Schulze, H. Critical role for cochlear hair cell BK channels for coding the temporal structure and dynamic range of auditory information for central auditory processing. PMID:22691916

  14. Distinct Correlation Structure Supporting a Rate-Code for Sound Localization in the Owl’s Auditory Forebrain

    PubMed Central

    2017-01-01

    Abstract While a topographic map of auditory space exists in the vertebrate midbrain, it is absent in the forebrain. Yet, both brain regions are implicated in sound localization. The heterogeneous spatial tuning of adjacent sites in the forebrain compared to the midbrain reflects different underlying circuitries, which is expected to affect the correlation structure, i.e., signal (similarity of tuning) and noise (trial-by-trial variability) correlations. Recent studies have drawn attention to the impact of response correlations on the information readout from a neural population. We thus analyzed the correlation structure in midbrain and forebrain regions of the barn owl's auditory system. Tetrodes were used to record in the midbrain and two forebrain regions, Field L and the downstream auditory arcopallium (AAr), in anesthetized owls. Nearby neurons in the midbrain showed high signal and noise correlations (RNCs), consistent with shared inputs. As previously reported, Field L was arranged in random clusters of similarly tuned neurons. Interestingly, AAr neurons displayed homogeneous monotonic azimuth tuning, while the response variability of nearby neurons was significantly less correlated than in the midbrain. Using a decoding approach, we demonstrate that low RNC in AAr restricts the potentially detrimental effect it can have on information, assuming a rate code proposed for mammalian sound localization. This study harnesses the power of correlation structure analysis to investigate the coding of auditory space. Our findings demonstrate distinct correlation structures in the auditory midbrain and forebrain, which would be beneficial for a rate-code framework for sound localization in the nontopographic forebrain representation of auditory space. PMID:28674698
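
    The two correlation measures discussed above can be sketched for a simulated pair of neurons: signal correlation compares trial-averaged azimuth tuning, and noise correlation compares trial-to-trial residuals (toy data, not the owl recordings).

```python
# Sketch of the two correlation measures for a pair of simulated neurons:
# signal correlation = correlation of trial-averaged tuning across azimuths,
# noise correlation = correlation of trial-to-trial residuals. Toy data only.
import numpy as np

rng = np.random.default_rng(0)
azimuths = np.linspace(-90, 90, 13)          # stimulus conditions
n_trials = 30

tuning_a = np.cos(np.deg2rad(azimuths))      # two neurons with similar tuning
tuning_b = np.cos(np.deg2rad(azimuths - 20))
shared_noise = rng.normal(size=(n_trials, azimuths.size))

resp_a = tuning_a + 0.7 * shared_noise + 0.5 * rng.normal(size=shared_noise.shape)
resp_b = tuning_b + 0.7 * shared_noise + 0.5 * rng.normal(size=shared_noise.shape)

signal_corr = np.corrcoef(resp_a.mean(axis=0), resp_b.mean(axis=0))[0, 1]
residuals_a = (resp_a - resp_a.mean(axis=0)).ravel()
residuals_b = (resp_b - resp_b.mean(axis=0)).ravel()
noise_corr = np.corrcoef(residuals_a, residuals_b)[0, 1]

print(f"signal correlation = {signal_corr:.2f}, noise correlation = {noise_corr:.2f}")
```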

  15. Optimal CINAHL search strategies for identifying therapy studies and review articles.

    PubMed

    Wong, Sharon S L; Wilczynski, Nancy L; Haynes, R Brian

    2006-01-01

    To design optimal search strategies for locating sound therapy studies and review articles in CINAHL in the year 2000. An analytic survey was conducted, comparing hand searches of 75 journals with retrievals from CINAHL for 5,020 candidate search terms and 17,900 combinations for therapy and 5,977 combinations for review articles. All articles were rated with purpose and quality indicators. Candidate search strategies were used in CINAHL, and the retrievals were compared with results of the hand searches. The proposed search strategies were treated as "diagnostic tests" for sound studies and the manual review of the literature was treated as the "gold standard." Operating characteristics of the search strategies were calculated. Of the 1,383 articles about treatment, 506 (36.6%) met basic criteria for scientific merit, and 127 (17.9%) of the 711 articles classified as reviews met the criteria for systematic reviews. For locating sound treatment studies, a three-term strategy maximized sensitivity at 99.4% but with compromised specificity at 58.3%, and a two-term strategy maximized specificity at 98.5% but with compromised sensitivity at 52.0%. For detecting systematic reviews, a three-term strategy maximized sensitivity at 91.3% while keeping specificity high at 95.4%, and a single-term strategy maximized specificity at 99.6% but with compromised sensitivity at 42.5%. Three-term search strategies optimizing sensitivity and specificity achieved these values over 91% for detecting sound treatment studies and over 76% for detecting systematic reviews. Search strategies combining indexing terms and text words can achieve high sensitivity and specificity for retrieving sound treatment studies and review articles in CINAHL.
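
    The "diagnostic test" framing above reduces to the standard sensitivity/specificity calculation on a 2x2 table; the counts below are hypothetical and chosen only to roughly reproduce the reported 99.4%/58.3% figures for the three-term therapy strategy.

```python
# The "diagnostic test" framing reduces to a standard 2x2 calculation.
# Counts below are hypothetical, chosen only to approximate the reported
# 99.4% sensitivity / 58.3% specificity; they are not taken from the study.
def sensitivity_specificity(true_pos, false_neg, false_pos, true_neg):
    sensitivity = true_pos / (true_pos + false_neg)   # sound studies retrieved
    specificity = true_neg / (true_neg + false_pos)   # non-sound articles excluded
    return sensitivity, specificity

# Hypothetical retrieval: 503 of 506 sound treatment studies retrieved,
# 511 of 877 non-sound treatment articles correctly excluded.
sens, spec = sensitivity_specificity(true_pos=503, false_neg=3,
                                     false_pos=366, true_neg=511)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```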

  16. On Writing and Handwriting

    ERIC Educational Resources Information Center

    Kucera, Miloš

    2010-01-01

    Writing is often considered secondary to the spoken language, as it is only coded sound-by-sound. But other scholars have demonstrated that writing is similar to "arithmetic": a cognitive structuring, a shift to the meta-level ("for the eye"). "Handwriting" (referred to here as the cursive writing in the sense of…

  17. Hierarchical neurocomputations underlying concurrent sound segregation: connecting periphery to percept.

    PubMed

    Bidelman, Gavin M; Alain, Claude

    2015-02-01

    Natural soundscapes often contain multiple sound sources at any given time. Numerous studies have reported that in human observers, the perception and identification of concurrent sounds is paralleled by specific changes in cortical event-related potentials (ERPs). Although these studies provide a window into the cerebral mechanisms governing sound segregation, little is known about the subcortical neural architecture and hierarchy of neurocomputations that lead to this robust perceptual process. Using computational modeling, scalp-recorded brainstem/cortical ERPs, and human psychophysics, we demonstrate that a primary cue for sound segregation, i.e., harmonicity, is encoded at the auditory nerve level within tens of milliseconds after the onset of sound and is maintained, largely untransformed, in phase-locked activity of the rostral brainstem. As then indexed by auditory cortical responses, (in)harmonicity is coded in the signature and magnitude of the cortical object-related negativity (ORN) response (150-200 ms). The salience of the resulting percept is then captured in a discrete, categorical-like coding scheme by a late negativity response (N5; ~500 ms latency), just prior to the elicitation of a behavioral judgment. Subcortical activity correlated with cortical evoked responses such that weaker phase-locked brainstem responses (lower neural harmonicity) generated larger ORN amplitude, reflecting the cortical registration of multiple sound objects. Studying multiple brain indices simultaneously helps illuminate the mechanisms and time-course of neural processing underlying concurrent sound segregation and may lead to further development and refinement of physiologically driven models of auditory scene analysis. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Sensor system for heart sound biomonitor

    NASA Astrophysics Data System (ADS)

    Maple, Jarrad L.; Hall, Leonard T.; Agzarian, John; Abbott, Derek

    1999-09-01

    Heart sounds can be utilized more efficiently by medical doctors when they are displayed visually, rather than through a conventional stethoscope. A system whereby a digital stethoscope interfaces directly to a PC is described, along with the signal processing algorithms adopted. The sensor is based on a noise cancellation microphone with a 450 Hz bandwidth, and is sampled at 2250 samples/sec with 12-bit resolution. Further to this, we discuss for comparison a piezo-based sensor with a 1 kHz bandwidth. A major problem is that the recording of the heart sound into these devices is subject to unwanted background noise, which can override the heart sound and result in a poor visual representation. This noise originates from various sources such as skin contact with the stethoscope diaphragm, lung sounds, and other surrounding sounds such as speech. Furthermore, we demonstrate a solution using 'wavelet denoising'. The wavelet transform is used because of the similarity between the shape of wavelets and the time-domain shape of a heartbeat sound. Thus coding of the waveform into the wavelet domain is achieved with relatively few wavelet coefficients, in contrast to the many Fourier components that would result from conventional decomposition. We show that the background noise can be dramatically reduced by a thresholding operation in the wavelet domain. The principle is that the background noise codes into many small broadband wavelet coefficients that can be removed without significant degradation of the signal of interest.
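
    The thresholding idea described above can be sketched with the PyWavelets package; the wavelet family, decomposition depth, and threshold rule below are illustrative choices, not those of the original system.

        import numpy as np
        import pywt

        def denoise_heart_sound(x, wavelet="db6", level=5):
            coeffs = pywt.wavedec(x, wavelet, level=level)
            # Estimate the noise level from the finest detail coefficients (median
            # absolute deviation) and apply a universal threshold.
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            thr = sigma * np.sqrt(2.0 * np.log(len(x)))
            # Small broadband coefficients (mostly noise) are shrunk toward zero;
            # the few large coefficients carrying the heartbeat shape survive.
            denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(denoised, wavelet)[: len(x)]

        # Example: a noisy 1 s recording sampled at 2250 samples/s, as in the paper.
        fs = 2250
        t = np.arange(fs) / fs
        noisy = np.exp(-((t - 0.3) ** 2) / 1e-3) * np.sin(2 * np.pi * 60 * t) + 0.1 * np.random.randn(fs)
        clean = denoise_heart_sound(noisy)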

  19. Auditory-neurophysiological responses to speech during early childhood: Effects of background noise

    PubMed Central

    White-Schwoch, Travis; Davies, Evan C.; Thompson, Elaine C.; Carr, Kali Woodruff; Nicol, Trent; Bradlow, Ann R.; Kraus, Nina

    2015-01-01

    Early childhood is a critical period of auditory learning, during which children are constantly mapping sounds to meaning. But learning rarely occurs under ideal listening conditions—children are forced to listen against a relentless din. This background noise degrades the neural coding of these critical sounds, in turn interfering with auditory learning. Despite the importance of robust and reliable auditory processing during early childhood, little is known about the neurophysiology underlying speech processing in children so young. To better understand the physiological constraints these adverse listening scenarios impose on speech sound coding during early childhood, auditory-neurophysiological responses were elicited to a consonant-vowel syllable in quiet and background noise in a cohort of typically-developing preschoolers (ages 3–5 yr). Overall, responses were degraded in noise: they were smaller, less stable across trials, slower, and there was poorer coding of spectral content and the temporal envelope. These effects were exacerbated in response to the consonant transition relative to the vowel, suggesting that the neural coding of spectrotemporally-dynamic speech features is more tenuous in noise than the coding of static features—even in children this young. Neural coding of speech temporal fine structure, however, was more resilient to the addition of background noise than coding of temporal envelope information. Taken together, these results demonstrate that noise places a neurophysiological constraint on speech processing during early childhood by causing a breakdown in neural processing of speech acoustics. These results may explain why some listeners have inordinate difficulties understanding speech in noise. Speech-elicited auditory-neurophysiological responses offer objective insight into listening skills during early childhood by reflecting the integrity of neural coding in quiet and noise; this paper documents typical response properties in this age group. These normative metrics may be useful clinically to evaluate auditory processing difficulties during early childhood. PMID:26113025

  20. Retrieving Atmospheric Temperature and Moisture Profiles from NPP CRIS/ATMS Sensors Using Crimss EDR Algorithm

    NASA Technical Reports Server (NTRS)

    Liu, X.; Kizer, S.; Barnet, C.; Dvakarla, M.; Zhou, D. K.; Larar, A. M.

    2012-01-01

    The Joint Polar Satellite System (JPSS) is a U.S. National Oceanic and Atmospheric Administration (NOAA) mission in collaboration with the U.S. National Aeronautics and Space Administration (NASA) and international partners. The NPP Cross-track Infrared and Microwave Sounding Suite (CrIMSS) consists of the infrared (IR) Cross-track Infrared Sounder (CrIS) and the microwave (MW) Advanced Technology Microwave Sounder (ATMS). The CrIS instrument is a hyperspectral interferometer that measures high spectral and spatial resolution upwelling infrared radiances. The ATMS is a 22-channel radiometer similar to Advanced Microwave Sounding Units (AMSU) A and B. It measures top-of-atmosphere MW upwelling radiation and provides the capability of sounding below clouds. The CrIMSS Environmental Data Record (EDR) algorithm provides three EDRs, namely the atmospheric vertical temperature, moisture and pressure profiles (AVTP, AVMP and AVPP, respectively), with the lower tropospheric AVTP and the AVMP being JPSS Key Performance Parameters (KPPs). The operational CrIMSS EDR algorithm was originally designed to run on large IBM computers with a dedicated data management subsystem (DMS). We have ported the operational code to simple Linux systems by replacing the DMS with appropriate interfaces. We also changed the interface of the operational code so that we can read data from both the CrIMSS science code and the operational code and compare lookup tables, parameter files, and output results. The details of the CrIMSS EDR algorithm are described in reference [1]. We will present results of testing the CrIMSS EDR operational algorithm using proxy data generated from the Infrared Atmospheric Sounding Interferometer (IASI) satellite data and from the NPP CrIS/ATMS data.

  1. Transitioning from Analog to Digital Audio Recording in Childhood Speech Sound Disorders

    ERIC Educational Resources Information Center

    Shriberg, Lawrence D.; Mcsweeny, Jane L.; Anderson, Bruce E.; Campbell, Thomas F.; Chial, Michael R.; Green, Jordan R.; Hauner, Katherina K.; Moore, Christopher A.; Rusiewicz, Heather L.; Wilson, David L.

    2005-01-01

    Few empirical findings or technical guidelines are available on the current transition from analog to digital audio recording in childhood speech sound disorders. Of particular concern in the present context was whether a transition from analog- to digital-based transcription and coding of prosody and voice features might require re-standardizing…

  2. A User's Guide to the Zwikker-Kosten Transmission Line Code (ZKTL)

    NASA Technical Reports Server (NTRS)

    Kelly, J. J.; Abu-Khajeel, H.

    1997-01-01

    This user's guide documents updates to the Zwikker-Kosten Transmission Line Code (ZKTL). This code was developed for analyzing new liner concepts developed to provide increased sound absorption. Contiguous arrays of multi-degree-of-freedom (MDOF) liner elements serve as the model for these liner configurations, and Zwikker and Kosten's theory of sound propagation in channels is used to predict the surface impedance. Transmission matrices for the various liner elements incorporate both analytical and semi-empirical methods. This allows standard matrix techniques to be employed in the code to systematically calculate the composite impedance due to the individual liner elements. The ZKTL code consists of four independent subroutines:

    1. Single channel impedance calculation - linear version (SCIC)
    2. Single channel impedance calculation - nonlinear version (SCICNL)
    3. Multi-channel, multi-segment, multi-layer impedance calculation - linear version (MCMSML)
    4. Multi-channel, multi-segment, multi-layer impedance calculation - nonlinear version (MCMSMLNL)

    Detailed examples, comments, and explanations for each liner impedance computation module are included. Also contained in the guide are depictions of the interactive execution, input files and output files.
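
    The matrix bookkeeping described above can be illustrated with a minimal sketch: each channel segment contributes a 2x2 transfer matrix relating pressure and volume velocity at its two faces, the matrices are chained by multiplication, and a rigid backing closes the chain. Lossless values of the wavenumber and characteristic impedance are assumed here; Zwikker-Kosten theory would substitute complex, frequency-dependent values for narrow channels.

        import numpy as np

        def segment_matrix(k, Z0, L):
            """Transfer matrix of a uniform channel segment of length L."""
            return np.array([[np.cos(k * L), 1j * Z0 * np.sin(k * L)],
                             [1j * np.sin(k * L) / Z0, np.cos(k * L)]])

        def surface_impedance(segments):
            """Composite surface impedance of stacked segments over a rigid backing."""
            T = np.eye(2, dtype=complex)
            for k, Z0, L in segments:
                T = T @ segment_matrix(k, Z0, L)
            # Rigid backing: volume velocity at the back face is zero, so Z = T11/T21.
            return T[0, 0] / T[1, 0]

        # Example: a single 25 mm deep channel near its quarter-wavelength resonance.
        c, rho, f = 343.0, 1.21, 3400.0
        k, Z0 = 2 * np.pi * f / c, rho * c
        print(surface_impedance([(k, Z0, 0.025)]))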

  3. Multistability in auditory stream segregation: a predictive coding view

    PubMed Central

    Winkler, István; Denham, Susan; Mill, Robert; Bőhm, Tamás M.; Bendixen, Alexandra

    2012-01-01

    Auditory stream segregation involves linking temporally separate acoustic events into one or more coherent sequences. For any non-trivial sequence of sounds, many alternative descriptions can be formed, only one or very few of which emerge in awareness at any time. Evidence from studies showing bi-/multistability in auditory streaming suggests that some, perhaps many, of the alternative descriptions are represented in the brain in parallel and that they continuously vie for conscious perception. Here, based on a predictive coding view, we consider the nature of these sound representations and how they compete with each other. Predictive processing helps to maintain perceptual stability by signalling the continuation of previously established patterns as well as the emergence of new sound sources. It also provides a measure of how well each of the competing representations describes the current acoustic scene. This account of auditory stream segregation has been tested on perceptual data obtained in the auditory streaming paradigm. PMID:22371621

  4. How effectively do horizontal and vertical response strategies of long-finned pilot whales reduce sound exposure from naval sonar?

    PubMed

    Wensveen, Paul J; von Benda-Beckmann, Alexander M; Ainslie, Michael A; Lam, Frans-Peter A; Kvadsheim, Petter H; Tyack, Peter L; Miller, Patrick J O

    2015-05-01

    The behaviour of a marine mammal near a noise source can modulate the sound exposure it receives. We demonstrate that two long-finned pilot whales both surfaced in synchrony with consecutive arrivals of multiple sonar pulses. We then assess the effect of surfacing and other behavioural response strategies on the received cumulative sound exposure levels and maximum sound pressure levels (SPLs) by modelling realistic spatiotemporal interactions of a pilot whale with an approaching source. Under the propagation conditions of our model, some response strategies observed in the wild were effective in reducing received levels (e.g. movement perpendicular to the source's line of approach), but others were not (e.g. switching from deep to shallow diving; synchronous surfacing after maximum SPLs). Our study exemplifies how simulations of source-whale interactions guided by detailed observational data can improve our understanding of the motivations behind behavioural responses observed in the wild (e.g., reducing sound exposure, prey movement). Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Computation of Sound Generated by Flow Over a Circular Cylinder: An Acoustic Analogy Approach

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth S.; Cox, Jared S.; Rumsey, Christopher L.; Younis, Bassam A.

    1997-01-01

    The sound generated by viscous flow past a circular cylinder is predicted via the Lighthill acoustic analogy approach. The two-dimensional flow field is predicted using two unsteady Reynolds-averaged Navier-Stokes solvers. Flow field computations are made for laminar flow at three Reynolds numbers (Re = 1000, Re = 10,000, and Re = 90,000) and two different turbulence models at Re = 90,000. The unsteady surface pressures are utilized by an acoustics code that implements Farassat's formulation 1A to predict the acoustic field. The acoustic code is three-dimensional; 2-D results are obtained by using a long cylinder length. The 2-D predictions overpredict the acoustic amplitude; however, if correlation lengths in the range of 3 to 10 cylinder diameters are used, the predicted acoustic amplitude agrees well with experiment.

  6. Verification of the Hydrodynamic and Sediment Transport Hybrid Modeling System for Cumberland Sound and Kings Bay Navigation Channel, Georgia

    DTIC Science & Technology

    1989-07-01

    Technical Report HL-89-14; personal author(s): Granat... Hydrodynamic results from RMA-2V were used in the numerical sediment transport code STUDH in modeling the interaction of the flow transport and...

  7. Evaluation of Variable-Depth Liner Configurations for Increased Broadband Noise Reduction

    NASA Technical Reports Server (NTRS)

    Jones, M. G.; Watson, W. R.; Nark, D. M.; Howerton, B. M.

    2015-01-01

    This paper explores the effects of variable-depth geometry on the amount of noise reduction that can be achieved with acoustic liners. Results for two variable-depth liners tested in the NASA Langley Grazing Flow Impedance Tube demonstrate significant broadband noise reduction. An impedance prediction model is combined with two propagation codes to predict corresponding sound pressure level profiles over the length of the Grazing Flow Impedance Tube. The comparison of measured and predicted sound pressure level profiles is sufficiently favorable to support use of these tools for investigation of a number of proposed variable-depth liner configurations. Predicted sound pressure level profiles for these proposed configurations reveal a number of interesting features. Liner orientation clearly affects the sound pressure level profile over the length of the liner, but the effect on the total attenuation is less pronounced. The axial extent of attenuation at an individual frequency continues well beyond the location where the liner depth is optimally tuned to the quarter-wavelength of that frequency. The sound pressure level profile is significantly affected by the way in which variable-depth segments are distributed over the length of the liner. Given the broadband noise reduction capability for these liner configurations, further development of impedance prediction models and propagation codes specifically tuned for this application is warranted.
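
    As a quick aside on the quarter-wavelength tuning mentioned above, the chamber depth that is optimally tuned to a frequency f is roughly c/(4f); the short sketch below simply evaluates that rule for a few frequencies.

        c = 343.0  # speed of sound in air, m/s
        for f in (1000.0, 2000.0, 3000.0):
            print(f"{f:.0f} Hz -> quarter-wavelength depth ~ {1000 * c / (4 * f):.1f} mm")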

  8. Mechanisms underlying the temporal precision of sound coding at the inner hair cell ribbon synapse

    PubMed Central

    Moser, Tobias; Neef, Andreas; Khimich, Darina

    2006-01-01

    Our auditory system is capable of perceiving the azimuthal location of a low frequency sound source with a precision of a few degrees. This requires the auditory system to detect time differences in sound arrival between the two ears down to tens of microseconds. The detection of these interaural time differences relies on network computation by auditory brainstem neurons sharpening the temporal precision of the afferent signals. Nevertheless, the system requires the hair cell synapse to encode sound with the highest possible temporal acuity. In mammals, each auditory nerve fibre receives input from only one inner hair cell (IHC) synapse. Hence, this single synapse determines the temporal precision of the fibre. As if this was not enough of a challenge, the auditory system is also capable of maintaining such high temporal fidelity with acoustic signals that vary greatly in their intensity. Recent research has started to uncover the cellular basis of sound coding. Functional and structural descriptions of synaptic vesicle pools and estimates for the number of Ca2+ channels at the ribbon synapse have been obtained, as have insights into how the receptor potential couples to the release of synaptic vesicles. Here, we review current concepts about the mechanisms that control the timing of transmitter release in inner hair cells of the cochlea. PMID:16901948

  9. Coded wire tag recoveries from pink salmon in Prince William sound salmon fisheries, 1993. Restoration project 93067. Exxon Valdez oil spill restoration project final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharr, S.; Peckham, C.J.; Sharp, D.G.

    1995-11-01

    Coded wire tags applied to pink salmon fry in 1992 at four hatcheries in Prince William Sound were recovered in the commercial catch of 1993 and used to provide inseason estimates of hatchery contributions. These estimates were used by fishery managers to target the numerically superior hatchery returns, and reduce the pressure on oil-damaged wild stocks. Inseason estimates were made in two stages. The postseason analysis revealed that of a catch of 3.51 million pink salmon, 1.12 million were estimated to be of wild origin.

  10. Music 4C, a multi-voiced synthesis program with instruments defined in C

    NASA Astrophysics Data System (ADS)

    Beauchamp, James W.

    2003-04-01

    Music 4C is a program which runs under Unix (including Linux) and provides a means for the synthesis of arbitrary signals as defined by the C code. The program is actually a loose translation of an earlier program, Music 4BF [H. S. Howe, Jr., Electronic Music Synthesis (Norton, 1975)]. A set of instrument definitions is driven by a numerical score which consists of a series of "events." Each event gives an instrument name, start time and duration, and a number of parameters (e.g., pitch) which describe the event. Each instrument definition consists of event parameters, performance variables, initializations, and the synthesis algorithm code. Thus, the synthetic signal, no matter how complex, is precisely defined. Moreover, the resulting sounds can be overlaid in any arbitrary pattern. The program serves as a mixer of algorithmically produced sounds or recorded sounds taken from sample files or synthesized from spectrum files. A score file can be entered by hand, generated from a program, translated from a MIDI file, or generated from an alpha-numeric score using an auxiliary program, Notepro. Output sample files are in wav, snd, or aiff format. The program is provided as C source code for download.
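
    The event-list model described above can be caricatured in a few lines; the score format and the sine "instrument" below are invented for the sketch and are not Music 4C's actual syntax.

        import numpy as np

        FS = 44100

        def sine_instrument(duration, freq, amp):
            t = np.arange(int(duration * FS)) / FS
            env = np.minimum(1.0, np.minimum(t, duration - t) / 0.01)  # 10 ms ramps
            return amp * env * np.sin(2 * np.pi * freq * t)

        INSTRUMENTS = {"sine": sine_instrument}

        score = [  # (instrument, start_s, duration_s, params...)
            ("sine", 0.0, 1.0, 440.0, 0.3),
            ("sine", 0.5, 1.0, 660.0, 0.3),
        ]

        total = max(start + dur for _, start, dur, *_ in score)
        mix = np.zeros(int(total * FS) + 1)
        for name, start, dur, *params in score:
            note = INSTRUMENTS[name](dur, *params)
            i0 = int(start * FS)
            mix[i0:i0 + len(note)] += note   # overlay each event at its start time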

  11. Selective and Efficient Neural Coding of Communication Signals Depends on Early Acoustic and Social Environment

    PubMed Central

    Amin, Noopur; Gastpar, Michael; Theunissen, Frédéric E.

    2013-01-01

    Previous research has shown that postnatal exposure to simple, synthetic sounds can affect the sound representation in the auditory cortex as reflected by changes in the tonotopic map or other relatively simple tuning properties, such as AM tuning. However, their functional implications for neural processing in the generation of ethologically-based perception remain unexplored. Here we examined the effects of noise-rearing and social isolation on the neural processing of communication sounds, such as species-specific song, in the primary auditory cortex analog of adult zebra finches. Our electrophysiological recordings reveal that neural tuning to simple frequency-based synthetic sounds is initially established in all the laminae independent of patterned acoustic experience; however, we provide the first evidence that early exposure to patterned sound statistics, such as those found in native sounds, is required for the subsequent emergence of neural selectivity for complex vocalizations and for shaping neural spiking precision in superficial and deep cortical laminae, and for creating efficient neural representations of song and a less redundant ensemble code in all the laminae. Our study also provides the first causal evidence for ‘sparse coding’: when the statistics of the stimuli were changed during rearing, as in noise-rearing, the sparse or optimal representation for species-specific vocalizations disappeared. Taken together, these results imply that a layer-specific differential development of the auditory cortex requires patterned acoustic input, and a specialized and robust sensory representation of complex communication sounds in the auditory cortex requires a rich acoustic and social environment. PMID:23630587

  12. Sound Speed and Attenuation in Multiphase Media

    DTIC Science & Technology

    2007-09-30

    Number: N00014-04-1-0164. One research goal developed from shallow water (SW) acoustic transmission experiments conducted in sandy-silty areas [1]... with a propagation code, such as Kraken [11], or with a poroelastic parabolic-equation code, RAM [12,13], with depth-dependent profiles and frequency...

  13. Cochlear neuropathy and the coding of supra-threshold sound.

    PubMed

    Bharadwaj, Hari M; Verhulst, Sarah; Shaheen, Luke; Liberman, M Charles; Shinn-Cunningham, Barbara G

    2014-01-01

    Many listeners with hearing thresholds within the clinically normal range nonetheless complain of difficulty hearing in everyday settings and understanding speech in noise. Converging evidence from human and animal studies points to one potential source of such difficulties: differences in the fidelity with which supra-threshold sound is encoded in the early portions of the auditory pathway. Measures of auditory subcortical steady-state responses (SSSRs) in humans and animals support the idea that the temporal precision of the early auditory representation can be poor even when hearing thresholds are normal. In humans with normal hearing thresholds (NHTs), paradigms that require listeners to make use of the detailed spectro-temporal structure of supra-threshold sound, such as selective attention and discrimination of frequency modulation (FM), reveal individual differences that correlate with subcortical temporal coding precision. Animal studies show that noise exposure and aging can cause a loss of a large percentage of auditory nerve fibers (ANFs) without any significant change in measured audiograms. Here, we argue that cochlear neuropathy may reduce encoding precision of supra-threshold sound, and that this manifests both behaviorally and in SSSRs in humans. Furthermore, recent studies suggest that noise-induced neuropathy may be selective for higher-threshold, lower-spontaneous-rate nerve fibers. Based on our hypothesis, we suggest some approaches that may yield particularly sensitive, objective measures of supra-threshold coding deficits that arise due to neuropathy. Finally, we comment on the potential clinical significance of these ideas and identify areas for future investigation.

  14. 37 CFR 201.22 - Advance notices of potential infringement of works consisting of sounds, images, or both.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Advance notices of potential infringement of works consisting of sounds, images, or both. (a) Definitions... section 411(b) of title 17 of the United States Code, and in accordance with the provisions of this..., provided registration for the work is made within three months after its first transmission. (2) For...

  15. National Oceanic and Atmospheric Administration's Cetacean and Sound Mapping Effort: Continuing Forward with an Integrated Ocean Noise Strategy.

    PubMed

    Harrison, Jolie; Ferguson, Megan; Gedamke, Jason; Hatch, Leila; Southall, Brandon; Van Parijs, Sofie

    2016-01-01

    To help manage chronic and cumulative impacts of human activities on marine mammals, the National Oceanic and Atmospheric Administration (NOAA) convened two working groups, the Underwater Sound Field Mapping Working Group (SoundMap) and the Cetacean Density and Distribution Mapping Working Group (CetMap), with overarching effort of both groups referred to as CetSound, which (1) mapped the predicted contribution of human sound sources to ocean noise and (2) provided region/time/species-specific cetacean density and distribution maps. Mapping products were presented at a symposium where future priorities were identified, including institutionalization/integration of the CetSound effort within NOAA-wide goals and programs, creation of forums and mechanisms for external input and funding, and expanded outreach/education. NOAA is subsequently developing an ocean noise strategy to articulate noise conservation goals and further identify science and management actions needed to support them.

  16. On the Relevance of Natural Stimuli for the Study of Brainstem Correlates: The Example of Consonance Perception

    PubMed Central

    Cousineau, Marion; Bidelman, Gavin M.; Peretz, Isabelle; Lehmann, Alexandre

    2015-01-01

    Some combinations of musical tones sound pleasing to Western listeners, and are termed consonant, while others sound discordant, and are termed dissonant. The perceptual phenomenon of consonance has been traced to the acoustic property of harmonicity. It has been repeatedly shown that neural correlates of consonance can be found as early as the auditory brainstem as reflected in the harmonicity of the scalp-recorded frequency-following response (FFR). “Neural Pitch Salience” (NPS) measured from FFRs—essentially a time-domain equivalent of the classic pattern recognition models of pitch—has been found to correlate with behavioral judgments of consonance for synthetic stimuli. Following the idea that the auditory system has evolved to process behaviorally relevant natural sounds, and in order to test the generalizability of this finding made with synthetic tones, we recorded FFRs for consonant and dissonant intervals composed of synthetic and natural stimuli. We found that NPS correlated with behavioral judgments of consonance and dissonance for synthetic but not for naturalistic sounds. These results suggest that while some form of harmonicity can be computed from the auditory brainstem response, the general percept of consonance and dissonance is not captured by this measure. It might either be represented in the brainstem in a different code (such as place code) or arise at higher levels of the auditory pathway. Our findings further illustrate the importance of using natural sounds, as a complementary tool to fully-controlled synthetic sounds, when probing auditory perception. PMID:26720000
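
    A rough time-domain stand-in for the "neural pitch salience" measure is the height of the largest normalized autocorrelation peak of the response waveform within the pitch range; the sketch below illustrates only that caricature and is not the harmonic-sieve analysis used in the study.

        import numpy as np

        def pitch_salience(x, fs, fmin=80.0, fmax=400.0):
            x = x - x.mean()
            ac = np.correlate(x, x, mode="full")[len(x) - 1:]
            ac /= ac[0]                                   # normalize to the zero lag
            lags = np.arange(len(ac)) / fs
            mask = (lags >= 1.0 / fmax) & (lags <= 1.0 / fmin)
            best = np.argmax(ac[mask])
            return ac[mask][best], 1.0 / lags[mask][best]  # (salience, implied F0)

        fs = 8000
        t = np.arange(fs) / fs
        consonant_like = np.sin(2 * np.pi * 220 * t) + np.sin(2 * np.pi * 330 * t)  # 3:2 interval
        print(pitch_salience(consonant_like, fs))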

  17. Memory for product sounds: the effect of sound and label type.

    PubMed

    Ozcan, Elif; van Egmond, René

    2007-11-01

    The (mnemonic) interactions between the auditory, visual, and semantic systems have been investigated using structurally complex auditory stimuli (i.e., product sounds). Six types of product sounds (air, alarm, cyclic, impact, liquid, mechanical) that vary in spectral-temporal structure were presented in four label type conditions: self-generated text, text, image, and pictogram. A memory paradigm that incorporated free recall, recognition, and matching tasks was employed. The results for the sound type suggest that the amount of spectral-temporal structure in a sound can be indicative of memory performance. Findings related to label type suggest that 'self' creates a strong bias for the retrieval and the recognition of sounds that were self-labeled; the density and the complexity of the visual information (i.e., pictograms) hinder memory performance ('visual' overshadowing effect); and image labeling has an additive effect on the recall and matching tasks (dual coding). Thus, the findings suggest that memory performance for product sounds is task-dependent.

  18. Neural coding strategies in auditory cortex.

    PubMed

    Wang, Xiaoqin

    2007-07-01

    In contrast to the visual system, the auditory system has longer subcortical pathways and more spiking synapses between the peripheral receptors and the cortex. This unique organization reflects the needs of the auditory system to extract behaviorally relevant information from a complex acoustic environment using strategies different from those used by other sensory systems. The neural representations of acoustic information in auditory cortex can be characterized by three types: (1) isomorphic (faithful) representations of acoustic structures; (2) non-isomorphic transformations of acoustic features and (3) transformations from acoustical to perceptual dimensions. The challenge facing auditory neurophysiologists is to understand the nature of the latter two transformations. In this article, I will review recent studies from our laboratory regarding temporal discharge patterns in auditory cortex of awake marmosets and cortical representations of time-varying signals. Findings from these studies show that (1) firing patterns of neurons in auditory cortex are dependent on stimulus optimality and context and (2) the auditory cortex forms internal representations of sounds that are no longer faithful replicas of their acoustic structures.

  19. A Binaural Cochlear Implant Sound Coding Strategy Inspired by the Contralateral Medial Olivocochlear Reflex

    PubMed Central

    Eustaquio-Martín, Almudena; Stohl, Joshua S.; Wolford, Robert D.; Schatzer, Reinhold; Wilson, Blake S.

    2016-01-01

    Objectives: In natural hearing, cochlear mechanical compression is dynamically adjusted via the efferent medial olivocochlear reflex (MOCR). These adjustments probably help understanding speech in noisy environments and are not available to the users of current cochlear implants (CIs). The aims of the present study are to: (1) present a binaural CI sound processing strategy inspired by the control of cochlear compression provided by the contralateral MOCR in natural hearing; and (2) assess the benefits of the new strategy for understanding speech presented in competition with steady noise with a speech-like spectrum in various spatial configurations of the speech and noise sources. Design: Pairs of CI sound processors (one per ear) were constructed to mimic or not mimic the effects of the contralateral MOCR on compression. For the nonmimicking condition (standard strategy or STD), the two processors in a pair functioned similarly to standard clinical processors (i.e., with fixed back-end compression and independently of each other). When configured to mimic the effects of the MOCR (MOC strategy), the two processors communicated with each other and the amount of back-end compression in a given frequency channel of each processor in the pair decreased/increased dynamically (so that output levels dropped/increased) with increases/decreases in the output energy from the corresponding frequency channel in the contralateral processor. Speech reception thresholds in speech-shaped noise were measured for 3 bilateral CI users and 2 single-sided deaf unilateral CI users. Thresholds were compared for the STD and MOC strategies in unilateral and bilateral listening conditions and for three spatial configurations of the speech and noise sources in simulated free-field conditions: speech and noise sources colocated in front of the listener, speech on the left ear with noise in front of the listener, and speech on the left ear with noise on the right ear. In both bilateral and unilateral listening, the electrical stimulus delivered to the test ear(s) was always calculated as if the listeners were wearing bilateral processors. Results: In both unilateral and bilateral listening conditions, mean speech reception thresholds were comparable with the two strategies for colocated speech and noise sources, but were at least 2 dB lower (better) with the MOC than with the STD strategy for spatially separated speech and noise sources. In unilateral listening conditions, mean thresholds improved with increasing the spatial separation between the speech and noise sources regardless of the strategy but the improvement was significantly greater with the MOC strategy. In bilateral listening conditions, thresholds improved significantly with increasing the speech-noise spatial separation only with the MOC strategy. Conclusions: The MOC strategy (1) significantly improved the intelligibility of speech presented in competition with a spatially separated noise source, both in unilateral and bilateral listening conditions; (2) produced significant spatial release from masking in bilateral listening conditions, something that did not occur with fixed compression; and (3) enhanced spatial release from masking in unilateral listening conditions. The MOC strategy as implemented here, or a modified version of it, may be usefully applied in CIs and in hearing aids. PMID:26862711

  20. A Binaural Cochlear Implant Sound Coding Strategy Inspired by the Contralateral Medial Olivocochlear Reflex.

    PubMed

    Lopez-Poveda, Enrique A; Eustaquio-Martín, Almudena; Stohl, Joshua S; Wolford, Robert D; Schatzer, Reinhold; Wilson, Blake S

    2016-01-01

    In natural hearing, cochlear mechanical compression is dynamically adjusted via the efferent medial olivocochlear reflex (MOCR). These adjustments probably help understanding speech in noisy environments and are not available to the users of current cochlear implants (CIs). The aims of the present study are to: (1) present a binaural CI sound processing strategy inspired by the control of cochlear compression provided by the contralateral MOCR in natural hearing; and (2) assess the benefits of the new strategy for understanding speech presented in competition with steady noise with a speech-like spectrum in various spatial configurations of the speech and noise sources. Pairs of CI sound processors (one per ear) were constructed to mimic or not mimic the effects of the contralateral MOCR on compression. For the nonmimicking condition (standard strategy or STD), the two processors in a pair functioned similarly to standard clinical processors (i.e., with fixed back-end compression and independently of each other). When configured to mimic the effects of the MOCR (MOC strategy), the two processors communicated with each other and the amount of back-end compression in a given frequency channel of each processor in the pair decreased/increased dynamically (so that output levels dropped/increased) with increases/decreases in the output energy from the corresponding frequency channel in the contralateral processor. Speech reception thresholds in speech-shaped noise were measured for 3 bilateral CI users and 2 single-sided deaf unilateral CI users. Thresholds were compared for the STD and MOC strategies in unilateral and bilateral listening conditions and for three spatial configurations of the speech and noise sources in simulated free-field conditions: speech and noise sources colocated in front of the listener, speech on the left ear with noise in front of the listener, and speech on the left ear with noise on the right ear. In both bilateral and unilateral listening, the electrical stimulus delivered to the test ear(s) was always calculated as if the listeners were wearing bilateral processors. In both unilateral and bilateral listening conditions, mean speech reception thresholds were comparable with the two strategies for colocated speech and noise sources, but were at least 2 dB lower (better) with the MOC than with the STD strategy for spatially separated speech and noise sources. In unilateral listening conditions, mean thresholds improved with increasing the spatial separation between the speech and noise sources regardless of the strategy but the improvement was significantly greater with the MOC strategy. In bilateral listening conditions, thresholds improved significantly with increasing the speech-noise spatial separation only with the MOC strategy. The MOC strategy (1) significantly improved the intelligibility of speech presented in competition with a spatially separated noise source, both in unilateral and bilateral listening conditions; (2) produced significant spatial release from masking in bilateral listening conditions, something that did not occur with fixed compression; and (3) enhanced spatial release from masking in unilateral listening conditions. The MOC strategy as implemented here, or a modified version of it, may be usefully applied in CIs and in hearing aids.
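
    The contralateral control principle described in the two records above can be sketched as follows: in each frequency channel, the compression applied by one processor is relaxed (so its output drops, for envelopes normalized below one) as the output energy of the matching channel in the opposite processor grows. The channel layout, the frame-by-frame update, and the mapping from contralateral energy to the compression exponent are invented for illustration and are not the published parameter values.

        import numpy as np

        def moc_like_pair(env_left, env_right, c_min=0.25, c_max=1.0, gain=2.0):
            """Apply mutually dependent back-end compression to two sets of channel
            envelopes (shape: n_channels x n_frames), assumed normalized to (0, 1]."""
            out_l = np.zeros_like(env_left)
            out_r = np.zeros_like(env_right)
            prev_l = np.zeros(env_left.shape[0])
            prev_r = np.zeros(env_right.shape[0])
            for n in range(env_left.shape[1]):
                # More contralateral output -> exponent closer to 1 -> less
                # compression and lower output, mimicking efferent inhibition.
                c_l = np.clip(c_min + gain * prev_r, c_min, c_max)
                c_r = np.clip(c_min + gain * prev_l, c_min, c_max)
                out_l[:, n] = env_left[:, n] ** c_l
                out_r[:, n] = env_right[:, n] ** c_r
                prev_l, prev_r = out_l[:, n], out_r[:, n]
            return out_l, out_r

        # Example: a channel that is loud on the right relaxes compression (and so
        # lowers the output) in the matching left channel on the following frames.
        left = np.full((1, 4), 0.2)
        right = np.array([[0.2, 0.9, 0.9, 0.2]])
        print(moc_like_pair(left, right))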

  1. Modeling the propagation of nonlinear three-dimensional acoustic beams in inhomogeneous media.

    PubMed

    Jing, Yuan; Cleveland, Robin O

    2007-09-01

    A three-dimensional model of the forward propagation of nonlinear sound beams in inhomogeneous media, a generalized Khokhlov-Zabolotskaya-Kuznetsov equation, is described. The Texas time-domain code (which accounts for paraxial diffraction, nonlinearity, thermoviscous absorption, and absorption and dispersion associated with multiple relaxation processes) was extended to solve for the propagation of nonlinear beams for the case where all medium properties vary in space. The code was validated with measurements of the nonlinear acoustic field generated by a phased array transducer operating at 2.5 MHz in water. A nonuniform layer of gel was employed to create an inhomogeneous medium. There was good agreement between the code and measurements in capturing the shift in the pressure distribution of both the fundamental and second harmonic due to the gel layer. The results indicate that the numerical tool described here is appropriate for propagation of nonlinear sound beams through weakly inhomogeneous media.

  2. Reducing the ingress of urban noise through natural ventilation openings.

    PubMed

    Oldham, D J; de Salis, M H; Sharples, S

    2004-01-01

    For buildings in busy urban areas affected by high levels of road traffic noise, the potential to use natural ventilation can be limited by excessive noise entering through ventilation openings. This paper is concerned with techniques to reduce noise ingress into naturally ventilated buildings while minimizing airflow path resistance. A combined experimental and theoretical approach to the interaction of airflow and sound transmission through ventilators for natural ventilation applications is described. A key element of the investigation has been the development of testing facilities capable of measuring the airflow and sound transmission losses for a range of ventilation noise control strategies. It is demonstrated that a combination of sound reduction mechanisms -- one covering low frequency sound and another covering high frequency sound -- is required to effectively attenuate noise from typical urban sources. A method is proposed for quantifying the acoustic performance of different strategies to enable comparisons and informed decisions to be made, opening up the possibility of a design methodology for optimizing the ventilation and acoustic performance of different strategies. Techniques for combating low frequency sound must therefore be employed in tandem with techniques for reducing high frequency sound if the ingress of noise from urban sources such as road traffic is to be reduced to acceptable levels. A technique is proposed for enabling the acoustic and airflow performance of apertures for natural ventilation systems to be designed simultaneously.

  3. Broadband transmission-type coding metamaterial for wavefront manipulation for airborne sound

    NASA Astrophysics Data System (ADS)

    Li, Kun; Liang, Bin; Yang, Jing; Yang, Jun; Cheng, Jian-chun

    2018-07-01

    The recent advent of coding metamaterials, as a new class of acoustic metamaterials, substantially reduces the complexity in the design and fabrication of acoustic functional devices capable of manipulating sound waves in exotic manners by arranging coding elements with discrete phase states in specific sequences. It is therefore intriguing, both physically and practically, to pursue a mechanism for realizing broadband acoustic coding metamaterials that control transmitted waves with a fine resolution of the phase profile. Here, we propose the design of a transmission-type acoustic coding device and demonstrate its metamaterial-based implementation. The mechanism is that, instead of relying on resonant coding elements that are necessarily narrow-band, we build weak-resonant coding elements with a helical-like metamaterial with a continuously varying pitch that effectively expands the working bandwidth while maintaining the sub-wavelength resolution of the phase profile that is vital for the production of complicated wave fields. The effectiveness of our proposed scheme is numerically verified via the demonstration of three distinctive examples of acoustic focusing, anomalous refraction, and vortex beam generation in the prescribed frequency band on the basis of 1- and 2-bit coding sequences. Simulation results agree well with theoretical predictions, showing that the designed coding devices with discrete phase profiles are efficient in engineering the wavefront of outcoming waves to form the desired spatial pattern. We anticipate the realization of coding metamaterials with broadband functionality and design flexibility to open up possibilities for novel acoustic functional devices for the special manipulation of transmitted waves and underpin diverse applications ranging from medical ultrasound imaging to acoustic detections.
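
    The effect of a 1-bit coding sequence can be previewed with a simple far-field array-factor sum in which each element radiates with a 0 or pi phase; the pitch, frequency, and "00001111" supercell below are illustrative values only. For this periodic sequence the specular beam cancels and a symmetric pair of refracted beams appears.

        import numpy as np

        c = 343.0
        f = 3430.0                       # 10 cm wavelength
        lam = c / f
        pitch = lam / 2                  # element spacing
        bits = np.array([0, 0, 0, 0, 1, 1, 1, 1] * 4)   # "00001111" supercell, repeated
        phases = np.pi * bits            # 1-bit phase states: 0 or pi

        angles = np.radians(np.linspace(-90, 90, 721))
        x = np.arange(len(bits)) * pitch
        # Far-field array factor: sum of element contributions with coded phases.
        af = np.abs(np.exp(1j * (phases[None, :]
                                 + 2 * np.pi / lam * np.outer(np.sin(angles), x))).sum(axis=1))
        print(f"strongest transmission near {np.degrees(angles[np.argmax(af)]):.1f} deg")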

  4. Training in Compensatory Strategies Enhances Rapport in Interactions Involving People with Möbius Syndrome

    PubMed Central

    Michael, John; Bogart, Kathleen; Tylén, Kristian; Krueger, Joel; Bech, Morten; Østergaard, John Rosendahl; Fusaroli, Riccardo

    2015-01-01

    In the exploratory study reported here, we tested the efficacy of an intervention designed to train teenagers with Möbius syndrome (MS) to increase the use of alternative communication strategies (e.g., gestures) to compensate for their lack of facial expressivity. Specifically, we expected the intervention to increase the level of rapport experienced in social interactions by our participants. In addition, we aimed to identify the mechanisms responsible for any such increase in rapport. In the study, five teenagers with MS interacted with three naïve participants without MS before the intervention, and with three different naïve participants without MS after the intervention. Rapport was assessed by self-report and by behavioral coders who rated videos of the interactions. Individual non-verbal behavior was assessed via behavioral coders, whereas verbal behavior was automatically extracted from the sound files. Alignment was assessed using cross recurrence quantification analysis and mixed-effects models. The results showed that observer-coded rapport was greater after the intervention, whereas self-reported rapport did not change significantly. Observer-coded gesture and expressivity increased in participants with and without MS, whereas overall linguistic alignment decreased. Fidgeting and repetitiveness of verbal behavior also decreased in both groups. In sum, the intervention may impact non-verbal and verbal behavior in participants with and without MS, increasing rapport as well as overall gesturing, while decreasing alignment. PMID:26500605

  5. The Balance of Excitatory and Inhibitory Synaptic Inputs for Coding Sound Location

    PubMed Central

    Ono, Munenori

    2014-01-01

    The localization of high-frequency sounds in the horizontal plane uses an interaural-level difference (ILD) cue, yet little is known about the synaptic mechanisms that underlie the processing of this cue in the mouse inferior colliculus (IC). Here, we study the synaptic currents that process ILD in vivo and use stimuli in which ILD varies around a constant average binaural level (ABL) to approximate sounds on the horizontal plane. Monaural stimulation in either ear produced EPSCs and IPSCs in most neurons. The temporal properties of monaural responses were well matched, suggesting connected functional zones with matched inputs. In response to ABL stimuli, the EPSCs showed three patterns of preference for the sound field with the highest-level stimulus: (1) contralateral; (2) bilateral, highly lateralized; or (3) at the center near 0 ILD. EPSCs and IPSCs were well correlated except in center-preferred neurons. Summation of the monaural EPSCs predicted the binaural excitatory response but less well than the summation of monaural IPSCs. Binaural EPSCs often showed a nonlinearity that strengthened the response to specific ILDs. Extracellular spike and intracellular current recordings from the same neuron showed that the ILD tuning of the spikes was sharper than that of the EPSCs. Thus, in the IC, balanced excitatory and inhibitory inputs may be a general feature of synaptic coding for many types of sound processing. PMID:24599475

  6. Transitioning from analog to digital audio recording in childhood speech sound disorders.

    PubMed

    Shriberg, Lawrence D; McSweeny, Jane L; Anderson, Bruce E; Campbell, Thomas F; Chial, Michael R; Green, Jordan R; Hauner, Katherina K; Moore, Christopher A; Rusiewicz, Heather L; Wilson, David L

    2005-06-01

    Few empirical findings or technical guidelines are available on the current transition from analog to digital audio recording in childhood speech sound disorders. Of particular concern in the present context was whether a transition from analog- to digital-based transcription and coding of prosody and voice features might require re-standardizing a reference database for research in childhood speech sound disorders. Two research transcribers with different levels of experience glossed, transcribed, and prosody-voice coded conversational speech samples from eight children with mild to severe speech disorders of unknown origin. The samples were recorded, stored, and played back using representative analog and digital audio systems. Effect sizes calculated for an array of analog versus digital comparisons ranged from negligible to medium, with a trend for participants' speech competency scores to be slightly lower for samples obtained and transcribed using the digital system. We discuss the implications of these and other findings for research and clinical practice.

  7. Transitioning from analog to digital audio recording in childhood speech sound disorders

    PubMed Central

    Shriberg, Lawrence D.; McSweeny, Jane L.; Anderson, Bruce E.; Campbell, Thomas F.; Chial, Michael R.; Green, Jordan R.; Hauner, Katherina K.; Moore, Christopher A.; Rusiewicz, Heather L.; Wilson, David L.

    2014-01-01

    Few empirical findings or technical guidelines are available on the current transition from analog to digital audio recording in childhood speech sound disorders. Of particular concern in the present context was whether a transition from analog- to digital-based transcription and coding of prosody and voice features might require re-standardizing a reference database for research in childhood speech sound disorders. Two research transcribers with different levels of experience glossed, transcribed, and prosody-voice coded conversational speech samples from eight children with mild to severe speech disorders of unknown origin. The samples were recorded, stored, and played back using representative analog and digital audio systems. Effect sizes calculated for an array of analog versus digital comparisons ranged from negligible to medium, with a trend for participants’ speech competency scores to be slightly lower for samples obtained and transcribed using the digital system. We discuss the implications of these and other findings for research and clinical practice. PMID:16019779

  8. The Role of Inhibition in a Computational Model of an Auditory Cortical Neuron during the Encoding of Temporal Information

    PubMed Central

    Bendor, Daniel

    2015-01-01

    In auditory cortex, temporal information within a sound is represented by two complementary neural codes: a temporal representation based on stimulus-locked firing and a rate representation, where discharge rate co-varies with the timing between acoustic events but lacks a stimulus-synchronized response. Using a computational neuronal model, we find that stimulus-locked responses are generated when sound-evoked excitation is combined with strong, delayed inhibition. In contrast to this, a non-synchronized rate representation is generated when the net excitation evoked by the sound is weak, which occurs when excitation is coincident and balanced with inhibition. Using single-unit recordings from awake marmosets (Callithrix jacchus), we validate several model predictions, including differences in the temporal fidelity, discharge rates and temporal dynamics of stimulus-evoked responses between neurons with rate and temporal representations. Together these data suggest that feedforward inhibition provides a parsimonious explanation of the neural coding dichotomy observed in auditory cortex. PMID:25879843
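
    The dichotomy described above can be illustrated with a toy leaky integrate-and-fire simulation: strong inhibition arriving a few milliseconds after each excitatory event leaves a brief window for stimulus-locked spikes, whereas coincident, balanced inhibition leaves only a weak net transient, so spiking is carried by the background and is not locked to the events. All parameters are arbitrary illustration values, not fits to the marmoset data.

        import numpy as np

        def lif_response(inh_delay_ms, inh_scale, dt=0.1, t_max=500.0, rate_hz=20.0, seed=0):
            """Toy LIF neuron (times in ms) driven by periodic 2 ms excitatory pulses,
            a scaled and possibly delayed inhibitory copy, and a noisy tonic background."""
            rng = np.random.default_rng(seed)
            n = int(t_max / dt)
            tau_m, v_th, v = 10.0, 1.0, 0.0
            t = np.arange(n) * dt
            exc = 8.0 * ((t % (1000.0 / rate_hz)) < 2.0)        # "acoustic event" pulses
            inh = inh_scale * np.roll(exc, int(inh_delay_ms / dt))
            spikes = []
            for i in range(n):
                current = exc[i] - inh[i] + 0.5 + 3.0 * rng.standard_normal()
                v += dt / tau_m * (-v + current)
                if v >= v_th:
                    spikes.append(t[i])
                    v = 0.0
            return np.array(spikes)

        # Strong inhibition delayed by 5 ms: each event reliably triggers a spike at
        # its onset before the inhibition arrives (stimulus-locked response).
        locked = lif_response(inh_delay_ms=5.0, inh_scale=1.0)
        # Coincident, balanced inhibition: the net event-related drive is weak, so
        # firing is dominated by the background and shows little locking.
        unsynced = lif_response(inh_delay_ms=0.0, inh_scale=0.9)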

  9. A study of low-cost, robust assistive listening system (ALS) based on digital wireless technology.

    PubMed

    Israsena, P; Dubsok, P; Pan-Ngum, S

    2008-11-01

    We have developed a simple, low-cost digital wireless broadcasting system prototype, intended for a classroom of hearing impaired students. The system is designed to be a low-cost alternative to an existing FM system. The system implemented is for short-range communication, with a one-transmitter, multiple-receiver configuration, which is typical for these classrooms. The data is source-coded for voice-band quality, FSK modulated, and broadcasted via a 915 MHz radio frequency. A DES encryption can optionally be added for better information security. Test results show that the system operating range is approximately ten metres, and the sound quality is close to telephone quality as intended. We also discuss performance issues such as sound, power and size, as well as transmission protocols. The test results are the proof of concept that the prototype is a viable alternative to an existing FM system. Improvements can be made to the system's sound quality via techniques such as channel coding, which is also discussed.
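
    The modulation scheme mentioned above can be illustrated with a minimal binary FSK modulator and non-coherent detector; the sample rate, symbol rate, and tone frequencies below are stand-ins and do not reproduce the prototype's 915 MHz radio parameters.

        import numpy as np

        FS = 48_000            # simulation sample rate
        BAUD = 2_400           # bits per second
        F0, F1 = 2_400, 4_800  # tone frequencies for bits 0 and 1 (baseband stand-ins)
        SPB = FS // BAUD       # samples per bit

        def fsk_modulate(bits):
            t = np.arange(SPB) / FS
            tones = {0: np.sin(2 * np.pi * F0 * t), 1: np.sin(2 * np.pi * F1 * t)}
            return np.concatenate([tones[b] for b in bits])

        def fsk_demodulate(signal):
            t = np.arange(SPB) / FS
            ref0, ref1 = np.exp(2j * np.pi * F0 * t), np.exp(2j * np.pi * F1 * t)
            bits = []
            for i in range(0, len(signal) - SPB + 1, SPB):
                chunk = signal[i:i + SPB]
                # Non-coherent detection: compare correlation energy against both tones.
                bits.append(int(abs(chunk @ ref1) > abs(chunk @ ref0)))
            return bits

        payload = [1, 0, 1, 1, 0, 0, 1, 0]
        received = fsk_modulate(payload) + 0.2 * np.random.randn(len(payload) * SPB)
        assert fsk_demodulate(received) == payload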

  10. A Neural Mechanism for Time-Window Separation Resolves Ambiguity of Adaptive Coding

    PubMed Central

    Hildebrandt, K. Jannis; Ronacher, Bernhard; Hennig, R. Matthias; Benda, Jan

    2015-01-01

    The senses of animals are confronted with changing environments and different contexts. Neural adaptation is one important tool to adjust sensitivity to varying intensity ranges. For instance, in a quiet night outdoors, our hearing is more sensitive than when we are confronted with the plurality of sounds in a large city during the day. However, adaptation also removes available information on absolute sound levels and may thus cause ambiguity. Experimental data on the trade-off between benefits and loss through adaptation is scarce and very few mechanisms have been proposed to resolve it. We present an example where adaptation is beneficial for one task—namely, the reliable encoding of the pattern of an acoustic signal—but detrimental for another—the localization of the same acoustic stimulus. With a combination of neurophysiological data, modeling, and behavioral tests, we show that adaptation in the periphery of the auditory pathway of grasshoppers enables intensity-invariant coding of amplitude modulations, but at the same time, degrades information available for sound localization. We demonstrate how focusing the response of localization neurons to the onset of relevant signals separates processing of localization and pattern information temporally. In this way, the ambiguity of adaptive coding can be circumvented and both absolute and relative levels can be processed using the same set of peripheral neurons. PMID:25761097
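
    The trade-off described above can be made concrete with a toy divisive-adaptation model: after adaptation settles, the response to an amplitude-modulated envelope is nearly identical at two absolute intensities (the pattern survives, the level is lost), while a readout confined to the first few milliseconds after onset still separates the two levels. Parameters are arbitrary illustration values.

        import numpy as np

        def adapted_response(stimulus, fs, tau=0.05, rest=1.0):
            alpha = 1.0 / (tau * fs)
            state, out = rest, np.empty_like(stimulus)
            for i, s in enumerate(stimulus):
                state += alpha * (s - state)   # slow estimate of the mean level
                out[i] = s / (state + 1e-9)    # divisive gain control
            return out

        fs = 2000
        t = np.arange(int(0.5 * fs)) / fs
        am = 1.0 + 0.5 * np.sin(2 * np.pi * 10 * t)   # 10 Hz amplitude modulation
        quiet, loud = 1.0 * am, 10.0 * am

        r_quiet, r_loud = adapted_response(quiet, fs), adapted_response(loud, fs)
        late = slice(int(0.3 * fs), None)              # after adaptation has settled
        print(np.max(np.abs(r_quiet[late] - r_loud[late])))   # ~0: pattern, not level
        onset = slice(0, int(0.005 * fs))              # first 5 ms, before adaptation
        print(r_quiet[onset].mean(), r_loud[onset].mean())    # onset still separates levels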

  11. Juno: Morse code "HI" received from Earth

    NASA Image and Video Library

    2017-03-22

    During its close flyby of Earth in 2013, NASA's Jupiter-bound Juno spacecraft listened for -- and heard -- a coordinated, global transmission from amateur radio operators using its radio and plasma wave science instrument. The message said "HI" in Morse code. More details about this sound can be found here: photojournal.jpl.nasa.gov/catalog/PIA17744

  12. Neural Coding of Relational Invariance in Speech: Human Language Analogs to the Barn Owl.

    ERIC Educational Resources Information Center

    Sussman, Harvey M.

    1989-01-01

    The neuronal model shown to code sound-source azimuth in the barn owl by H. Wagner et al. in 1987 is used as the basis for a speculative brain-based human model, which can establish contrastive phonetic categories to solve the problem of perception "non-invariance." (SLD)

  13. Computational fluid dynamics simulation of sound propagation through a blade row.

    PubMed

    Zhao, Lei; Qiao, Weiyang; Ji, Liang

    2012-10-01

    The propagation of sound waves through a blade row is investigated numerically. A wave splitting method in a two-dimensional duct with arbitrary mean flow is presented, based on which the pressure amplitudes of different wave modes can be extracted at an axial plane. The propagation of a sound wave through a flat-plate blade row has been simulated by solving the unsteady Reynolds-averaged Navier-Stokes (URANS) equations. The transmission and reflection coefficients obtained by Computational Fluid Dynamics (CFD) are compared with semi-analytical results. The results indicate that the low-order URANS scheme will cause large errors if the sound pressure level is lower than -100 dB (with the product of density, main flow velocity, and speed of sound as the reference pressure). The CFD code has sufficient precision when solving the interaction of the sound wave and the blade row, provided that the boundary reflections have no substantial influence. Finally, the effects of flow Mach number, blade thickness, and blade turning angle on sound propagation are studied.
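
    The simplest, plane-wave version of the wave-splitting idea can be written down directly: from complex pressure amplitudes at two axial stations, the downstream and upstream travelling components are recovered by inverting a 2x2 system. The paper's method generalizes this to higher-order duct modes; the sketch below includes only the mean-flow correction to the axial wavenumbers.

        import numpy as np

        def split_plane_waves(p1, p2, x1, x2, f, c=343.0, M=0.0):
            k = 2 * np.pi * f / c
            kp, km = k / (1 + M), k / (1 - M)          # downstream / upstream wavenumbers
            A = np.array([[np.exp(-1j * kp * x1), np.exp(1j * km * x1)],
                          [np.exp(-1j * kp * x2), np.exp(1j * km * x2)]])
            p_plus, p_minus = np.linalg.solve(A, np.array([p1, p2]))
            return p_plus, p_minus

        # Synthetic check: build a field with known components and recover them.
        f, x1, x2 = 1000.0, 0.00, 0.05
        k = 2 * np.pi * f / 343.0
        truth = (1.0 + 0.0j, 0.3 * np.exp(1j * 0.7))
        p1 = truth[0] * np.exp(-1j * k * x1) + truth[1] * np.exp(1j * k * x1)
        p2 = truth[0] * np.exp(-1j * k * x2) + truth[1] * np.exp(1j * k * x2)
        print(split_plane_waves(p1, p2, x1, x2, f))    # ~ (1+0j, 0.3*exp(0.7j))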

  14. Physiological correlates of sound localization in a parasitoid fly, Ormia ochracea

    NASA Astrophysics Data System (ADS)

    Oshinsky, Michael Lee

    A major focus of research in the nervous system is the investigation of neural circuits. The question of how neurons connect to form functional units has driven modern neuroscience research from its inception. From the beginning, the neural circuits of the auditory system and specifically sound localization were used as a model system for investigating neural connectivity and computation. Sound localization lends itself to this task because there is no mapping of spatial information on a receptor sheet as in vision. With only one eye, an animal would still have positional information for objects. Since the receptor sheet in the ear is frequency oriented and not spatially oriented, positional information for a sound source does not exist with only one ear. The nervous system computes the location of a sound source based on differences in the physiology of the two ears. In this study, I investigated the neural circuits for sound localization in a fly, Ormia ochracea (Diptera, Tachinidae, Ormiini), which is a parasitoid of crickets. This fly possesses a unique mechanically coupled hearing organ. The two ears are contained in one air sac, and a cuticular bridge, which has a flexible spring-like structure at its center, connects them. This mechanical coupling preprocesses the sound before it is detected by the nervous system and provides the fly with directional information. The subject of this study is the neural coding of the location of sound stimuli by a mechanically coupled auditory system. In chapter 1, I present the natural history of an acoustic parasitoid and I review the peripheral processing of sound by the Ormian ear. In chapter 2, I describe the anatomy and physiology of the auditory afferents. I present this physiology in the context of sound localization. In chapter 3, I describe the directionally dependent physiology of the thoracic local and ascending acoustic interneurons. In chapter 4, I quantify the threshold and I detail the kinematics of the phonotactic walking behavior in Ormia ochracea. I also quantify the angular resolution of the phonotactic turning behavior. Using a model, I show that the temporal coding properties of the afferents provide most of the information required by the fly to localize a singing cricket.

  15. PyEPL: a cross-platform experiment-programming library.

    PubMed

    Geller, Aaron S; Schlefer, Ian K; Sederberg, Per B; Jacobs, Joshua; Kahana, Michael J

    2007-11-01

    PyEPL (the Python Experiment-Programming Library) is a Python library which allows cross-platform and object-oriented coding of behavioral experiments. It provides functions for displaying text and images onscreen, as well as playing and recording sound, and is capable of rendering 3-D virtual environments for spatial-navigation tasks. It is currently tested for Mac OS X and Linux. It interfaces with Activewire USB cards (on Mac OS X) and the parallel port (on Linux) for synchronization of experimental events with physiological recordings. In this article, we first present two sample programs which illustrate core PyEPL features. The examples demonstrate visual stimulus presentation, keyboard input, and simulation and exploration of a simple 3-D environment. We then describe the components and strategies used in implementing PyEPL.

  16. PyEPL: A cross-platform experiment-programming library

    PubMed Central

    Geller, Aaron S.; Schleifer, Ian K.; Sederberg, Per B.; Jacobs, Joshua; Kahana, Michael J.

    2009-01-01

    PyEPL (the Python Experiment-Programming Library) is a Python library which allows cross-platform and object-oriented coding of behavioral experiments. It provides functions for displaying text and images onscreen, as well as playing and recording sound, and is capable of rendering 3-D virtual environments for spatial-navigation tasks. It is currently tested for Mac OS X and Linux. It interfaces with Activewire USB cards (on Mac OS X) and the parallel port (on Linux) for synchronization of experimental events with physiological recordings. In this article, we first present two sample programs which illustrate core PyEPL features. The examples demonstrate visual stimulus presentation, keyboard input, and simulation and exploration of a simple 3-D environment. We then describe the components and strategies used in implementing PyEPL. PMID:18183912

  17. Decentralized Control of Sound Radiation from an Aircraft-Style Panel Using Iterative Loop Recovery

    NASA Technical Reports Server (NTRS)

    Schiller, Noah H.; Cabell, Randolph H.; Fuller, Chris R.

    2008-01-01

    A decentralized LQG-based control strategy is designed to reduce low-frequency sound transmission through periodically stiffened panels. While modern control strategies have been used to reduce sound radiation from relatively simple structural acoustic systems, significant implementation issues have to be addressed before these control strategies can be extended to large systems such as the fuselage of an aircraft. For instance, centralized approaches typically require a high level of connectivity and are computationally intensive, while decentralized strategies face stability problems caused by the unmodeled interaction between neighboring control units. Since accurate uncertainty bounds are not known a priori, it is difficult to ensure the decentralized control system will be robust without making the controller overly conservative. Therefore an iterative approach is suggested, which utilizes frequency-shaped loop recovery. The approach accounts for modeling error introduced by neighboring control loops, requires no communication between subsystems, and is relatively simple. The control strategy is validated using real-time control experiments performed on a built-up aluminum test structure representative of the fuselage of an aircraft. Experiments demonstrate that the iterative approach is capable of achieving 12 dB peak reductions and a 3.6 dB integrated reduction in radiated sound power from the stiffened panel.
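
    To make the control architecture concrete, the sketch below (an illustration under assumed matrices, not the authors' controller) computes LQR and Kalman gains for a single local unit with SciPy on a toy two-state model. In the iterative loop-recovery procedure described above, such a local design would be repeated with frequency-shaped weights that account for the loops already closed by neighboring units.

        # Illustrative sketch only: computing LQG (LQR + Kalman) gains for one local
        # control unit with SciPy. The state-space matrices are placeholders, not the
        # stiffened-panel model used in the paper.
        import numpy as np
        from scipy.linalg import solve_continuous_are

        A = np.array([[0.0, 1.0], [-100.0, -0.2]])   # toy lightly damped mode
        B = np.array([[0.0], [1.0]])                 # actuator input
        C = np.array([[1.0, 0.0]])                   # sensor output
        Q, R = np.eye(2), np.array([[1.0]])          # LQR weights
        V, W = np.eye(2) * 1e-3, np.array([[1e-2]])  # process / measurement noise covariances

        P = solve_continuous_are(A, B, Q, R)         # regulator Riccati solution
        K = np.linalg.solve(R, B.T @ P)              # state-feedback gain
        S = solve_continuous_are(A.T, C.T, V, W)     # estimator Riccati solution
        L = S @ C.T @ np.linalg.inv(W)               # Kalman gain
        print(K, L.ravel())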

  18. Development of Sound Localization Strategies in Children with Bilateral Cochlear Implants

    PubMed Central

    Zheng, Yi; Godar, Shelly P.; Litovsky, Ruth Y.

    2015-01-01

    Localizing sounds in our environment is one of the fundamental perceptual abilities that enable humans to communicate, and to remain safe. Because the acoustic cues necessary for computing source locations consist of differences between the two ears in signal intensity and arrival time, sound localization is fairly poor when a single ear is available. In adults who become deaf and are fitted with cochlear implants (CIs) sound localization is known to improve when bilateral CIs (BiCIs) are used compared to when a single CI is used. The aim of the present study was to investigate the emergence of spatial hearing sensitivity in children who use BiCIs, with a particular focus on the development of behavioral localization patterns when stimuli are presented in free-field horizontal acoustic space. A new analysis was implemented to quantify patterns observed in children for mapping acoustic space to a spatially relevant perceptual representation. Children with normal hearing were found to distribute their responses in a manner that demonstrated high spatial sensitivity. In contrast, children with BiCIs tended to classify sound source locations to the left and right; with increased bilateral hearing experience, they developed a perceptual map of space that was better aligned with the acoustic space. The results indicate experience-dependent refinement of spatial hearing skills in children with CIs. Localization strategies appear to undergo transitions from sound source categorization strategies to more fine-grained location identification strategies. This may provide evidence for neural plasticity, with implications for training of spatial hearing ability in CI users. PMID:26288142

  19. Development of Sound Localization Strategies in Children with Bilateral Cochlear Implants.

    PubMed

    Zheng, Yi; Godar, Shelly P; Litovsky, Ruth Y

    2015-01-01

    Localizing sounds in our environment is one of the fundamental perceptual abilities that enable humans to communicate, and to remain safe. Because the acoustic cues necessary for computing source locations consist of differences between the two ears in signal intensity and arrival time, sound localization is fairly poor when a single ear is available. In adults who become deaf and are fitted with cochlear implants (CIs) sound localization is known to improve when bilateral CIs (BiCIs) are used compared to when a single CI is used. The aim of the present study was to investigate the emergence of spatial hearing sensitivity in children who use BiCIs, with a particular focus on the development of behavioral localization patterns when stimuli are presented in free-field horizontal acoustic space. A new analysis was implemented to quantify patterns observed in children for mapping acoustic space to a spatially relevant perceptual representation. Children with normal hearing were found to distribute their responses in a manner that demonstrated high spatial sensitivity. In contrast, children with BiCIs tended to classify sound source locations to the left and right; with increased bilateral hearing experience, they developed a perceptual map of space that was better aligned with the acoustic space. The results indicate experience-dependent refinement of spatial hearing skills in children with CIs. Localization strategies appear to undergo transitions from sound source categorization strategies to more fine-grained location identification strategies. This may provide evidence for neural plasticity, with implications for training of spatial hearing ability in CI users.

  20. Decentralized Control of Sound Radiation using a High-Authority/Low-Authority Control Strategy with Anisotropic Actuators

    NASA Technical Reports Server (NTRS)

    Schiller, Noah H.; Cabell, Randolph H.; Fuller, Chris R.

    2008-01-01

    This paper describes a combined control strategy designed to reduce sound radiation from stiffened aircraft-style panels. The control architecture uses robust active damping in addition to high-authority linear quadratic Gaussian (LQG) control. Active damping is achieved using direct velocity feedback with triangularly shaped anisotropic actuators and point velocity sensors. While active damping is simple and robust, stability is guaranteed at the expense of performance. Therefore the approach is often referred to as low-authority control. In contrast, LQG control strategies can achieve substantial reductions in sound radiation. Unfortunately, the unmodeled interaction between neighboring control units can destabilize decentralized control systems. Numerical simulations show that combining active damping and decentralized LQG control can be beneficial. In particular, augmenting the in-bandwidth damping supplements the performance of the LQG control strategy and reduces the destabilizing interaction between neighboring control units.

  1. Nonlinear Acoustics: Periodic Waveguide, Scattering of Sound by Sound, Three-Layer Fluid, Finite Amplitude Sound in a Medium Having a Distribution of Relaxation Processes, and Production of an Isolated Negative Pulse in Water

    DTIC Science & Technology

    1993-06-03

    [The indexed record contains fragments of the report's reference list rather than an abstract, including: "...propagation and shape of the waveform," Conference on Lithotripsy (Extra-Corporeal Shock Wave Applications - Technical and Clinical Problems); Blackstock, "Physical aspects of lithotripsy," Paper GG1, 115th Meeting, Acoustical Society of America, Seattle, 16-20 May 1988; an abstract in J. Acoust. Soc. Am. 90, 2244(A) (1991); support in part by Grant NAG-1-1204 and the University of Southampton, England; and 1992 ONR Contract Code 1109.]

  2. Prediction of Turbulence-Generated Noise in Unheated Jets. Part 2; JeNo Users' Manual (Version 1.0)

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Wolter, John D.; Koch, L. Danielle

    2009-01-01

    JeNo (Version 1.0) is a Fortran90 computer code that calculates the far-field sound spectral density produced by axisymmetric, unheated jets at a user-specified observer location and frequency range. The user must provide a structured computational grid and a mean flow solution from a Reynolds-Averaged Navier-Stokes (RANS) code as input. Turbulence kinetic energy and its dissipation rate from a k-epsilon or k-omega turbulence model must also be provided. JeNo is a research code, and as such, its development is ongoing. The goal is to create a code that is able to accurately compute far-field sound pressure levels for jets at all observer angles and all operating conditions. In order to achieve this goal, current theories must be combined with the best practices in numerical modeling, all of which must be validated by experiment. Since the acoustic predictions from JeNo are based on the mean flow solutions from a RANS code, quality predictions depend on accurate aerodynamic input. This is why acoustic source modeling and turbulence modeling, together with the development of advanced measurement systems, are the leading areas of jet noise research at NASA Glenn Research Center.

  3. Self-Scheduling Parallel Methods for Multiple Serial Codes with Application to WOPWOP

    NASA Technical Reports Server (NTRS)

    Long, Lyle N.; Brentner, Kenneth S.

    2000-01-01

    This paper presents a scheme for efficiently running a large number of serial jobs on parallel computers. Two examples are given of computer programs that run relatively quickly, but often they must be run numerous times to obtain all the results needed. It is very common in science and engineering to have codes that are not massive computing challenges in themselves, but due to the number of instances that must be run, they do become large-scale computing problems. The two examples given here represent common problems in aerospace engineering: aerodynamic panel methods and aeroacoustic integral methods. The first example simply solves many systems of linear equations. This is representative of an aerodynamic panel code where someone would like to solve for numerous angles of attack. The complete code for this first example is included in the appendix so that it can be readily used by others as a template. The second example is an aeroacoustics code (WOPWOP) that solves the Ffowcs Williams Hawkings equation to predict the far-field sound due to rotating blades. In this example, one quite often needs to compute the sound at numerous observer locations, hence parallelization is utilized to automate the noise computation for a large number of observers.
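
    A minimal sketch of the self-scheduling idea, assuming a Python multiprocessing pool rather than the paper's actual implementation: worker processes pull independent observer locations from a shared task list as they finish, so faster workers automatically take more cases and the load balances itself.

        # Illustrative sketch (not the WOPWOP driver) of self-scheduling many
        # independent serial jobs across a pool of worker processes.
        import multiprocessing as mp
        import numpy as np

        def serial_job(observer):
            """Stand-in for one serial run, e.g. the noise prediction at one observer."""
            x, y, z = observer
            return np.hypot(np.hypot(x, y), z)       # placeholder result

        if __name__ == "__main__":
            observers = [(float(i), 10.0, 2.0) for i in range(1000)]
            with mp.Pool(processes=8) as pool:
                # chunksize=1 gives true self-scheduling: tasks handed out one at a time
                results = pool.map(serial_job, observers, chunksize=1)
            print(len(results), "observer locations processed")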

  4. Auditory responses in the amygdala to social vocalizations

    NASA Astrophysics Data System (ADS)

    Gadziola, Marie A.

    The underlying goal of this dissertation is to understand how the amygdala, a brain region involved in establishing the emotional significance of sensory input, contributes to the processing of complex sounds. The general hypothesis is that communication calls of big brown bats (Eptesicus fuscus) transmit relevant information about social context that is reflected in the activity of amygdalar neurons. The first specific aim analyzed social vocalizations emitted under a variety of behavioral contexts, and related vocalizations to an objective measure of internal physiological state by monitoring the heart rate of vocalizing bats. These experiments revealed a complex acoustic communication system among big brown bats in which acoustic cues and call structure signal the emotional state of a sender. The second specific aim characterized the responsiveness of single neurons in the basolateral amygdala to a range of social syllables. Neurons typically respond to the majority of tested syllables, but effectively discriminate among vocalizations by varying the response duration. This novel coding strategy underscores the importance of persistent firing in the general functioning of the amygdala. The third specific aim examined the influence of acoustic context by characterizing both the behavioral and neurophysiological responses to natural vocal sequences. Vocal sequences differentially modify the internal affective state of a listening bat, with lower aggression vocalizations evoking the greatest change in heart rate. Amygdalar neurons employ two different coding strategies: low background neurons respond selectively to very few stimuli, whereas high background neurons respond broadly to stimuli but demonstrate variation in response magnitude and timing. Neurons appear to discriminate the valence of stimuli, with aggression sequences evoking robust population-level responses across all sound levels. Further, vocal sequences show improved discrimination among stimuli compared to isolated syllables, and this improved discrimination is expressed in part by the timing of action potentials. Taken together, these data support the hypothesis that big brown bat social vocalizations transmit relevant information about the social context that is encoded within the discharge pattern of amygdalar neurons ultimately responsible for coordinating appropriate social behaviors. I further propose that vocalization-evoked amygdalar activity will have significant impact on subsequent sensory processing and plasticity.

  5. Noise levels in the learning-teaching activities in a dental medicine school

    NASA Astrophysics Data System (ADS)

    Matos, Andreia; Carvalho, Antonio P. O.; Fernandes, Joao C. S.

    2002-11-01

    The noise levels made by different clinical handpieces and laboratory engines are considered to be the main descriptors of acoustical comfort in learning spaces in a dental medicine school. Sound levels were measured in five types of classrooms and teaching laboratories at the University of Porto Dental Medicine School. Handpiece noise measurements were made while instruments were running free and during operations with cutting tools (tooth, metal, and acrylic). Noise levels were determined using a precision sound level meter, which was positioned at ear level and also at a one-meter distance from the operator. Some of the handpieces were brand new and the others had a few years of use. The sound levels encountered were between 60 and 99 dB(A) and were compared with the noise limits in A-weighted sound pressure level for mechanical equipment installed in educational buildings included in the Portuguese Noise Code and in other European countries' codes. The daily personal noise exposure levels (LEP,d) of the students and professors were calculated to be between 85 and 90 dB(A) and were compared with the European legal limits. Some noise limits for this type of environment are proposed and suggestions for the improvement of the acoustical environment are given.
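
    For reference, the daily personal noise exposure level is conventionally computed from the measured A-weighted equivalent level and the exposure duration, normalized to an 8-hour working day (LEP,d = LAeq,Te + 10*log10(Te/8 h)). The numbers in the sketch below are illustrative, not the study's measurements.

        # Standard daily personal noise exposure calculation (illustrative values).
        import math

        def lep_d(laeq_dba, exposure_hours, reference_hours=8.0):
            return laeq_dba + 10.0 * math.log10(exposure_hours / reference_hours)

        print(round(lep_d(92.0, 4.0), 1))   # 4 h at 92 dB(A) -> 89.0 dB(A)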

  6. Do you see what I hear: experiments in multi-channel sound and 3D visualization for network monitoring?

    NASA Astrophysics Data System (ADS)

    Ballora, Mark; Hall, David L.

    2010-04-01

    Detection of intrusions is a continuing problem in network security. Due to the large volumes of data recorded in Web server logs, analysis is typically forensic, taking place only after a problem has occurred. This paper describes a novel method of representing Web log information through multi-channel sound, while simultaneously visualizing network activity using a 3-D immersive environment. We are exploring the detection of intrusion signatures and patterns, utilizing human aural and visual pattern recognition ability to detect intrusions as they occur. IP addresses and return codes are mapped to an informative and unobtrusive listening environment to act as a situational sound track of Web traffic. Web log data is parsed and formatted using Python, then read as a data array by the synthesis language SuperCollider [1], which renders it as a sonification. This can be done either for the study of pre-existing data sets or in monitoring Web traffic in real time. Components rendered aurally include IP address, geographical information, and server Return Codes. Users can interact with the data, speeding or slowing the speed of representation (for pre-existing data sets) or "mixing" sound components to optimize intelligibility for tracking suspicious activity.
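
    The preprocessing stage described above can be pictured with a short Python sketch: parse a Common Log Format line and map the HTTP return code and source address to simple sound parameters. The field mapping and frequencies below are illustrative assumptions; in the paper the resulting data array is rendered by SuperCollider.

        # Hedged sketch of log parsing and a toy return-code-to-sound mapping.
        import re

        LOG_PATTERN = re.compile(r'(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" (\d{3}) \S+')

        def to_sound_event(log_line):
            m = LOG_PATTERN.match(log_line)
            if m is None:
                return None
            ip, status = m.group(1), int(m.group(2))
            base_freq = {2: 440.0, 3: 330.0, 4: 660.0, 5: 880.0}.get(status // 100, 220.0)
            pan = hash(ip) % 100 / 100.0   # crude spatial placement per source address
            return {"freq": base_freq, "pan": pan, "status": status}

        line = '203.0.113.7 - - [10/Oct/2010:13:55:36 -0700] "GET /index.html HTTP/1.0" 404 2326'
        print(to_sound_event(line))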

  7. Cortical encoding of pitch: Recent results and open questions

    PubMed Central

    Walker, Kerry M.M.; Bizley, Jennifer K.; King, Andrew J.; Schnupp, Jan W.H.

    2011-01-01

    It is widely appreciated that the key predictor of the pitch of a sound is its periodicity. Neural structures which support pitch perception must therefore be able to reflect the repetition rate of a sound, but this alone is not sufficient. Since pitch is a psychoacoustic property, a putative cortical code for pitch must also be able to account for the relationship between the amount to which a sound is periodic (i.e. its temporal regularity) and the perceived pitch salience, as well as limits in our ability to detect pitch changes or to discriminate rising from falling pitch. Pitch codes must also be robust in the presence of nuisance variables such as loudness or timbre. Here, we review a large body of work on the cortical basis of pitch perception, which illustrates that the distribution of cortical processes that give rise to pitch perception is likely to depend on both the acoustical features and functional relevance of a sound. While previous studies have greatly advanced our understanding, we highlight several open questions regarding the neural basis of pitch perception. These questions can begin to be addressed through a cooperation of investigative efforts across species and experimental techniques, and, critically, by examining the responses of single neurons in behaving animals. PMID:20457240

  8. Third order harmonic imaging for biological tissues using three phase-coded pulses.

    PubMed

    Ma, Qingyu; Gong, Xiufen; Zhang, Dong

    2006-12-22

    Compared to the fundamental and the second harmonic imaging, the third harmonic imaging shows significant improvements in image quality due to the better resolution, but it is degraded by the lower sound pressure and signal-to-noise ratio (SNR). In this study, a phase-coded pulse technique is proposed to selectively enhance the sound pressure of the third harmonic by 9.5 dB whereas the fundamental and the second harmonic components are efficiently suppressed and SNR is also increased by 4.7 dB. Based on the solution of the KZK nonlinear equation, the axial and lateral beam profiles of harmonics radiated from a planar piston transducer were theoretically simulated and experimentally examined. Finally, the third harmonic images using this technique were performed for several biological tissues and compared with the images obtained by the fundamental and the second harmonic imaging. Results demonstrate that the phase-coded pulse technique yields a dramatically cleaner and sharper contrast image.
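
    The selective enhancement of the third harmonic can be verified numerically: summing three copies of a nonlinearly distorted tone whose fundamentals are phased 0, 120, and 240 degrees apart cancels the fundamental and second harmonic while the third harmonic adds coherently. The sketch below uses a toy polynomial nonlinearity, not the KZK solution used in the paper.

        # Numerical illustration (not the authors' code) of three-phase pulse coding.
        import numpy as np

        fs, f0 = 50e6, 2e6                       # sample rate and fundamental (illustrative)
        t = np.arange(0, 5e-6, 1 / fs)

        def distorted_pulse(phase):
            x = np.sin(2 * np.pi * f0 * t + phase)
            return x + 0.3 * x**2 + 0.1 * x**3   # toy stand-in for nonlinear propagation

        summed = sum(distorted_pulse(k * 2 * np.pi / 3) for k in range(3))

        spectrum = np.abs(np.fft.rfft(summed))
        freqs = np.fft.rfftfreq(t.size, 1 / fs)
        for n in (1, 2, 3):
            idx = np.argmin(np.abs(freqs - n * f0))
            print(f"{n}f0 amplitude: {spectrum[idx]:.2f}")
        # The printed amplitudes show the 3*f0 component dominating 1*f0 and 2*f0.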

  9. Prediction of sound radiated from different practical jet engine inlets

    NASA Technical Reports Server (NTRS)

    Zinn, B. T.; Meyer, W. L.

    1980-01-01

    Existing computer codes for calculating the far field radiation patterns surrounding various practical jet engine inlet configurations under different excitation conditions were upgraded. The computer codes were refined and expanded so that they are now more efficient computationally by a factor of about three and they are now capable of producing accurate results up to nondimensional wave numbers of twenty. Computer programs were also developed to help generate accurate geometrical representations of the inlets to be investigated. This data is required as input for the computer programs which calculate the sound fields. This new geometry generating computer program considerably reduces the time required to generate the input data which was one of the most time consuming steps in the process. The results of sample runs using the NASA-Lewis QCSEE inlet are presented and comparison of run times and accuracy are made between the old and upgraded computer codes. The overall accuracy of the computations is determined by comparison of the results of the computations with simple source solutions.

  10. IEP goals for school-age children with speech sound disorders.

    PubMed

    Farquharson, Kelly; Tambyraja, Sherine R; Justice, Laura M; Redle, Erin E

    2014-01-01

    The purpose of the current study was to describe the current state of practice for writing Individualized Education Program (IEP) goals for children with speech sound disorders (SSDs). IEP goals for 146 children receiving services for SSDs within public school systems across two states were coded for their dominant theoretical framework and overall quality. A dichotomous scheme was used for theoretical framework coding: cognitive-linguistic or sensory-motor. Goal quality was determined by examining 7 specific indicators outlined by an empirically tested rating tool. In total, 147 long-term and 490 short-term goals were coded. The results revealed no dominant theoretical framework for long-term goals, whereas short-term goals largely reflected a sensory-motor framework. In terms of quality, the majority of speech production goals were functional and generalizable in nature, but were not able to be easily targeted during common daily tasks or by other members of the IEP team. Short-term goals were consistently rated higher in quality domains when compared to long-term goals. The current state of practice for writing IEP goals for children with SSDs indicates that theoretical framework may be eclectic in nature and likely written to support the individual needs of children with speech sound disorders. Further investigation is warranted to determine the relations between goal quality and child outcomes. (1) Identify two predominant theoretical frameworks and discuss how they apply to IEP goal writing. (2) Discuss quality indicators as they relate to IEP goals for children with speech sound disorders. (3) Discuss the relationship between long-term goals' level of quality and related theoretical frameworks. (4) Identify the areas in which business-as-usual IEP goals exhibit strong quality.

  11. Approaches to the study of neural coding of sound source location and sound envelope in real environments

    PubMed Central

    Kuwada, Shigeyuki; Bishop, Brian; Kim, Duck O.

    2012-01-01

    The major functions of the auditory system are recognition (what is the sound) and localization (where is the sound). Although each of these has received considerable attention, rarely are they studied in combination. Furthermore, the stimuli used in the bulk of studies did not represent sound location in real environments and ignored the effects of reverberation. Another ignored dimension is the distance of a sound source. Finally, there is a scarcity of studies conducted in unanesthetized animals. We illustrate a set of efficient methods that overcome these shortcomings. We use the virtual auditory space method (VAS) to efficiently present sounds at different azimuths, different distances and in different environments. Additionally, this method allows for efficient switching between binaural and monaural stimulation and alteration of acoustic cues singly or in combination to elucidate neural mechanisms underlying localization and recognition. Such procedures cannot be performed with real sound field stimulation. Our research is designed to address the following questions: Are IC neurons specialized to process 'what' and 'where' auditory information? How do reverberation and distance of the sound source affect this processing? How do IC neurons represent sound source distance? Are neural mechanisms underlying envelope processing binaural or monaural? PMID:22754505

  12. Pre-attentive processing of spectrally complex sounds with asynchronous onsets: an event-related potential study with human subjects.

    PubMed

    Tervaniemi, M; Schröger, E; Näätänen, R

    1997-05-23

    Neuronal mechanisms involved in the processing of complex sounds with asynchronous onsets were studied in reading subjects. The sound onset asynchrony (SOA) between the leading partial and the remaining complex tone was varied between 0 and 360 ms. Infrequently occurring deviant sounds (in which one out of 10 harmonics was different in pitch relative to the frequently occurring standard sound) elicited the mismatch negativity (MMN), a change-specific cortical event-related potential (ERP) component. This indicates that the pitch of the standard stimuli had been pre-attentively coded by sensory-memory traces. Moreover, when the complex-tone onset fell within the temporal integration window initiated by the leading-partial onset, the deviants elicited the N2b component. This indicates that an involuntary attention switch toward the sound change occurred. In summary, the present results support the existence of a pre-perceptual integration mechanism with a duration of 100-200 ms and emphasize its importance in switching attention towards the stimulus change.

  13. Principal cells of the brainstem's interaural sound level detector are temporal differentiators rather than integrators.

    PubMed

    Franken, Tom P; Joris, Philip X; Smith, Philip H

    2018-06-14

    The brainstem's lateral superior olive (LSO) is thought to be crucial for localizing high-frequency sounds by coding interaural sound level differences (ILD). Its neurons weigh contralateral inhibition against ipsilateral excitation, making their firing rate a function of the azimuthal position of a sound source. Since the very first in vivo recordings, LSO principal neurons have been reported to give sustained and temporally integrating 'chopper' responses to sustained sounds. Neurons with transient responses were observed but largely ignored and even considered a sign of pathology. Using the Mongolian gerbil as a model system, we have obtained the first in vivo patch clamp recordings from labeled LSO neurons and find that principal LSO neurons, the most numerous projection neurons of this nucleus, only respond at sound onset and show fast membrane features suggesting an importance for timing. These results provide a new framework to interpret previously puzzling features of this circuit. © 2018, Franken et al.

  14. SoundView: an auditory guidance system based on environment understanding for the visually impaired people.

    PubMed

    Nie, Min; Ren, Jie; Li, Zhengjun; Niu, Jinhai; Qiu, Yihong; Zhu, Yisheng; Tong, Shanbao

    2009-01-01

    Without visual information, blind people face many hardships with shopping, reading, finding objects, and so on. Therefore, we developed a portable auditory guide system, called SoundView, for visually impaired people. This prototype system consists of a mini-CCD camera, a digital signal processing unit and an earphone, working with built-in customizable auditory coding algorithms. Employing environment understanding techniques, SoundView processes the images from a camera and detects objects tagged with barcodes. The recognized objects in the environment are then encoded into stereo speech signals delivered to the blind user through an earphone. The user is able to recognize the type, motion state and location of the objects of interest with the help of SoundView. Compared with other visual assistant techniques, SoundView is object-oriented and has the advantages of low cost, small size, light weight, low power consumption and easy customization.
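
    The SoundView encoding itself is not reproduced here; purely as an illustration of how a detected object's horizontal position could be rendered as a stereo cue, the sketch below applies constant-power amplitude panning to a mono signal. The function and parameter names are assumptions.

        # Not the SoundView algorithm: minimal constant-power stereo panning sketch.
        import numpy as np

        def pan_stereo(mono, azimuth_deg):
            """azimuth_deg in [-90, 90]; -90 = far left, +90 = far right."""
            theta = (azimuth_deg + 90.0) / 180.0 * (np.pi / 2.0)
            left, right = np.cos(theta) * mono, np.sin(theta) * mono
            return np.stack([left, right], axis=-1)

        sig = np.sin(2 * np.pi * 440 * np.linspace(0, 0.01, 480))
        print(pan_stereo(sig, 30.0).shape)   # (480, 2): left/right channels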

  15. The Effect of Strain upon the Velocity of Sound and the Velocity of Free Retraction for Natural Rubber.

    DTIC Science & Technology

    1982-05-01


  16. 40 CFR 51.491 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... strategies are strategies for which adequate procedures to quantify emissions reductions or specify a program... goals. Such programs are categorized into the following three categories: Emission-limiting, market-response, and directionally-sound strategies. Emission-limiting strategies are strategies that directly...

  17. Aeroacoustic Analysis of Turbofan Noise Generation

    NASA Technical Reports Server (NTRS)

    Meyer, Harold D.; Envia, Edmane

    1996-01-01

    This report provides an updated version of analytical documentation for the V072 Rotor Wake/Stator Interaction Code. It presents the theoretical derivation of the equations used in the code and, where necessary, it documents the enhancements and changes made to the original code since its first release. V072 is a package of FORTRAN computer programs which calculate the in-duct acoustic modes excited by a fan/stator stage operating in a subsonic mean flow. Sound is generated by the stator vanes interacting with the mean wakes of the rotor blades. In this updated version, only the tonal noise produced at the blade passing frequency and its harmonics is described. The broadband noise component analysis, which was part of the original report, is not included here. The code provides outputs of modal pressure and power amplitudes generated by the rotor-wake/stator interaction. The rotor/stator stage is modeled as an ensemble of blades and vanes of zero camber and thickness enclosed within an infinite hard-walled annular duct. The amplitude of each propagating mode is computed and summed to obtain the harmonics of sound power flux within the duct for both upstream and downstream propagating modes.

  18. Neural network decoder for quantum error correcting codes

    NASA Astrophysics Data System (ADS)

    Krastanov, Stefan; Jiang, Liang

    Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.
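
    A minimal sketch of the kind of recurrent decoder described, under assumed sizes and conventions (not the authors' architecture): a GRU consumes a sequence of repeated stabilizer-measurement rounds and a linear head scores candidate corrections.

        # Illustrative recurrent syndrome decoder skeleton; dimensions and data are placeholders.
        import torch
        import torch.nn as nn

        class StabilizerDecoder(nn.Module):
            def __init__(self, n_stabilizers, n_corrections, hidden=64):
                super().__init__()
                self.rnn = nn.GRU(n_stabilizers, hidden, batch_first=True)
                self.head = nn.Linear(hidden, n_corrections)

            def forward(self, syndromes):            # (batch, rounds, n_stabilizers)
                _, h = self.rnn(syndromes)           # final hidden state: (1, batch, hidden)
                return self.head(h.squeeze(0))       # logits over candidate corrections

        model = StabilizerDecoder(n_stabilizers=8, n_corrections=16)
        fake_syndromes = torch.randint(0, 2, (32, 5, 8)).float()   # 5 measurement rounds
        print(model(fake_syndromes).shape)           # torch.Size([32, 16])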

  19. Hierarchical differences in population coding within auditory cortex.

    PubMed

    Downer, Joshua D; Niwa, Mamiko; Sutter, Mitchell L

    2017-08-01

    Most models of auditory cortical (AC) population coding have focused on primary auditory cortex (A1). Thus our understanding of how neural coding for sounds progresses along the cortical hierarchy remains obscure. To illuminate this, we recorded from two AC fields: A1 and middle lateral belt (ML) of rhesus macaques. We presented amplitude-modulated (AM) noise during both passive listening and while the animals performed an AM detection task ("active" condition). In both fields, neurons exhibit monotonic AM-depth tuning, with A1 neurons mostly exhibiting increasing rate-depth functions and ML neurons approximately evenly distributed between increasing and decreasing functions. We measured noise correlation (r_noise) between simultaneously recorded neurons and found that whereas engagement decreased average r_noise in A1, engagement increased average r_noise in ML. This finding surprised us, because attentive states are commonly reported to decrease average r_noise. We analyzed the effect of r_noise on AM coding in both A1 and ML and found that whereas engagement-related shifts in r_noise in A1 enhance AM coding, r_noise shifts in ML have little effect. These results imply that the effect of r_noise differs between sensory areas, based on the distribution of tuning properties among the neurons within each population. A possible explanation of this is that higher areas need to encode nonsensory variables (e.g., attention, choice, and motor preparation), which impart common noise, thus increasing r_noise. Therefore, the hierarchical emergence of r_noise-robust population coding (e.g., as we observed in ML) enhances the ability of sensory cortex to integrate cognitive and sensory information without a loss of sensory fidelity. NEW & NOTEWORTHY Prevailing models of population coding of sensory information are based on a limited subset of neural structures. An important and under-explored question in neuroscience is how distinct areas of sensory cortex differ in their population coding strategies. In this study, we compared population coding between primary and secondary auditory cortex. Our findings demonstrate striking differences between the two areas and highlight the importance of considering the diversity of neural structures as we develop models of population coding. Copyright © 2017 the American Physiological Society.
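
    For readers unfamiliar with the measure, r_noise is commonly estimated by correlating the trial-to-trial response residuals of simultaneously recorded neurons after removing each stimulus condition's mean. The sketch below shows one standard way to do this; it is an illustration, not the authors' analysis code.

        # Hedged sketch of a standard noise-correlation estimate.
        import numpy as np

        def noise_correlation(counts_a, counts_b, stimulus_ids):
            """counts_a, counts_b: per-trial spike counts for two neurons;
            stimulus_ids: the condition label (e.g. AM depth) of each trial."""
            za = np.empty_like(counts_a, dtype=float)
            zb = np.empty_like(counts_b, dtype=float)
            for s in np.unique(stimulus_ids):
                idx = stimulus_ids == s
                za[idx] = (counts_a[idx] - counts_a[idx].mean()) / (counts_a[idx].std() + 1e-12)
                zb[idx] = (counts_b[idx] - counts_b[idx].mean()) / (counts_b[idx].std() + 1e-12)
            return np.corrcoef(za, zb)[0, 1]

        rng = np.random.default_rng(0)
        stim = np.repeat(np.arange(4), 50)                       # 4 conditions, 50 trials each
        a, b = rng.poisson(10 + 2 * stim), rng.poisson(8 + 2 * stim)
        print(round(noise_correlation(a, b, stim), 3))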

  20. Dual Coding of Frequency Modulation in the Ventral Cochlear Nucleus.

    PubMed

    Paraouty, Nihaad; Stasiak, Arkadiusz; Lorenzi, Christian; Varnet, Léo; Winter, Ian M

    2018-04-25

    Frequency modulation (FM) is a common acoustic feature of natural sounds and is known to play a role in robust sound source recognition. Auditory neurons show precise stimulus-synchronized discharge patterns that may be used for the representation of low-rate FM. However, it remains unclear whether this representation is based on synchronization to slow temporal envelope (ENV) cues resulting from cochlear filtering or phase locking to faster temporal fine structure (TFS) cues. To investigate the plausibility of those encoding schemes, single units of the ventral cochlear nucleus of guinea pigs of either sex were recorded in response to sine FM tones centered at the unit's best frequency (BF). The results show that, in contrast to high-BF units, for modulation depths within the receptive field, low-BF units (<4 kHz) demonstrate good phase locking to TFS. For modulation depths extending beyond the receptive field, the discharge patterns follow the ENV and fluctuate at the modulation rate. The receptive field proved to be a good predictor of the ENV responses for most primary-like and chopper units. The current in vivo data also reveal a high level of diversity in responses across unit types. TFS cues are mainly conveyed by low-frequency and primary-like units and ENV cues by chopper and onset units. The diversity of responses exhibited by cochlear nucleus neurons provides a neural basis for a dual-coding scheme of FM in the brainstem based on both ENV and TFS cues. SIGNIFICANCE STATEMENT Natural sounds, including speech, convey informative temporal modulations in frequency. Understanding how the auditory system represents those frequency modulations (FM) has important implications as robust sound source recognition depends crucially on the reception of low-rate FM cues. Here, we recorded 115 single-unit responses from the ventral cochlear nucleus in response to FM and provide the first physiological evidence of a dual-coding mechanism of FM via synchronization to temporal envelope cues and phase locking to temporal fine structure cues. We also demonstrate a diversity of neural responses with different coding specializations. These results support the dual-coding scheme proposed by psychophysicists to account for FM sensitivity in humans and provide new insights on how this might be implemented in the early stages of the auditory pathway. Copyright © 2018 the authors 0270-6474/18/384123-15$15.00/0.
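
    The ENV/TFS distinction can be made concrete with a standard signal-processing decomposition (an illustration of common practice, not the authors' method): the Hilbert transform splits a narrowband FM tone into a slowly varying temporal envelope and a rapidly varying fine-structure carrier.

        # Illustrative envelope / fine-structure decomposition of a sine FM tone.
        import numpy as np
        from scipy.signal import hilbert

        fs = 48000
        t = np.arange(0, 0.2, 1 / fs)
        carrier, fm_rate, depth = 2000.0, 5.0, 200.0
        x = np.sin(2 * np.pi * carrier * t
                   + (depth / fm_rate) * np.sin(2 * np.pi * fm_rate * t))

        analytic = hilbert(x)
        env = np.abs(analytic)                        # temporal envelope (ENV) cue
        tfs = np.cos(np.unwrap(np.angle(analytic)))   # temporal fine structure (TFS) cue
        print(f"envelope range: {env.min():.2f} to {env.max():.2f}")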

  1. A Synopsis of Marine Animal Underwater Sounds in Eight Geographic Areas

    DTIC Science & Technology

    1971-05-28

    [The indexed record text is fragmentary. Recoverable content: MAMMALS - (1) HIGHEST PROBABILITY: Delphinapterus leucas (white whale or beluga) - in Apr.-June they move up the Kola River; present all months except July, Aug., and Sept. Recordings of its vocalizations are available from Dr. W. C. Cummings, NUC Code 5054. The remaining fragments mention pulse durations of about 5 msec and the authors' own estimates of source level for sounds from an individual whale.]

  2. Investigation of Liner Characteristics in the NASA Langley Curved Duct Test Rig

    NASA Technical Reports Server (NTRS)

    Gerhold, Carl H.; Brown, Martha C.; Watson, Willie R.; Jones, Michael G.

    2007-01-01

    The Curved Duct Test Rig (CDTR), which is designed to investigate propagation of sound in a duct with flow, has been developed at NASA Langley Research Center. The duct incorporates an adaptive control system to generate a tone in the duct at a specific frequency with a target Sound Pressure Level and a target mode shape. The size of the duct, the ability to isolate higher order modes, and the ability to modify the duct configuration make this rig unique among experimental duct acoustics facilities. An experiment is described in which the facility performance is evaluated by measuring the sound attenuation by a sample duct liner. The liner sample comprises one wall of the liner test section. Sound in tones from 500 to 2400 Hz, with modes that are parallel to the liner surface of order 0 to 5, and that are normal to the liner surface of order 0 to 2, can be generated incident on the liner test section. Tests are performed in which sound is generated without axial flow in the duct and with flow at a Mach number of 0.275. The attenuation of the liner is determined by comparing the sound power in a hard wall section downstream of the liner test section to the sound power in a hard wall section upstream of the liner test section. These experimentally determined attenuations are compared to numerically determined attenuations calculated by means of a finite element analysis code. The code incorporates liner impedance values educed from measured data from the NASA Langley Grazing Incidence Tube, a test rig that is used for investigating liner performance with flow and with a grazing-incident (0,0) mode. The analytical and experimental results compare favorably, indicating the validity of the finite element method and demonstrating that finite element prediction tools can be used together with experiment to characterize the liner attenuation.
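
    The attenuation metric described above reduces to a difference of sound power levels across the liner test section; a one-line sketch with illustrative power values (assumed convention, not the facility's data-reduction code):

        # Attenuation as the upstream-minus-downstream sound power level difference.
        import math

        def liner_attenuation_db(power_upstream_w, power_downstream_w):
            return 10.0 * math.log10(power_upstream_w / power_downstream_w)

        print(round(liner_attenuation_db(1.0e-6, 2.5e-7), 1))   # 6.0 dB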

  3. Representations of Pitch and Timbre Variation in Human Auditory Cortex

    PubMed Central

    2017-01-01

    Pitch and timbre are two primary dimensions of auditory perception, but how they are represented in the human brain remains a matter of contention. Some animal studies of auditory cortical processing have suggested modular processing, with different brain regions preferentially coding for pitch or timbre, whereas other studies have suggested a distributed code for different attributes across the same population of neurons. This study tested whether variations in pitch and timbre elicit activity in distinct regions of the human temporal lobes. Listeners were presented with sequences of sounds that varied in either fundamental frequency (eliciting changes in pitch) or spectral centroid (eliciting changes in brightness, an important attribute of timbre), with the degree of pitch or timbre variation in each sequence parametrically manipulated. The BOLD responses from auditory cortex increased with increasing sequence variance along each perceptual dimension. The spatial extent, region, and laterality of the cortical regions most responsive to variations in pitch or timbre at the univariate level of analysis were largely overlapping. However, patterns of activation in response to pitch or timbre variations were discriminable in most subjects at an individual level using multivoxel pattern analysis, suggesting a distributed coding of the two dimensions bilaterally in human auditory cortex. SIGNIFICANCE STATEMENT Pitch and timbre are two crucial aspects of auditory perception. Pitch governs our perception of musical melodies and harmonies, and conveys both prosodic and (in tone languages) lexical information in speech. Brightness—an aspect of timbre or sound quality—allows us to distinguish different musical instruments and speech sounds. Frequency-mapping studies have revealed tonotopic organization in primary auditory cortex, but the use of pure tones or noise bands has precluded the possibility of dissociating pitch from brightness. Our results suggest a distributed code, with no clear anatomical distinctions between auditory cortical regions responsive to changes in either pitch or timbre, but also reveal a population code that can differentiate between changes in either dimension within the same cortical regions. PMID:28025255

  4. Neural coding of sound envelope in reverberant environments.

    PubMed

    Slama, Michaël C C; Delgutte, Bertrand

    2015-03-11

    Speech reception depends critically on temporal modulations in the amplitude envelope of the speech signal. Reverberation encountered in everyday environments can substantially attenuate these modulations. To assess the effect of reverberation on the neural coding of amplitude envelope, we recorded from single units in the inferior colliculus (IC) of unanesthetized rabbit using sinusoidally amplitude modulated (AM) broadband noise stimuli presented in simulated anechoic and reverberant environments. Although reverberation degraded both rate and temporal coding of AM in IC neurons, in most neurons, the degradation in temporal coding was smaller than the AM attenuation in the stimulus. This compensation could largely be accounted for by the compressive shape of the modulation input-output function (MIOF), which describes the nonlinear transformation of modulation depth from acoustic stimuli into neural responses. Additionally, in a subset of neurons, the temporal coding of AM was better for reverberant stimuli than for anechoic stimuli having the same modulation depth at the ear. Using hybrid anechoic stimuli that selectively possess certain properties of reverberant sounds, we show that this reverberant advantage is not caused by envelope distortion, static interaural decorrelation, or spectral coloration. Overall, our results suggest that the auditory system may possess dual mechanisms that make the coding of amplitude envelope relatively robust in reverberation: one general mechanism operating for all stimuli with small modulation depths, and another mechanism dependent on very specific properties of reverberant stimuli, possibly the periodic fluctuations in interaural correlation at the modulation frequency. Copyright © 2015 the authors 0270-6474/15/354452-17$15.00/0.

  5. HCPCS Coding: An Integral Part of Your Reimbursement Strategy.

    PubMed

    Nusgart, Marcia

    2013-12-01

    The first step to a successful reimbursement strategy is to ensure that your wound care product has the most appropriate Healthcare Common Procedure Coding System (HCPCS), or billing, code for your product. The correct HCPCS code plays an essential role in patient access to new and existing technologies. When devising a strategy to obtain a HCPCS code for its product, companies must consider a number of factors as follows: (1) Has the product gone through the Food and Drug Administration (FDA) regulatory process or does it need to do so? Will the FDA code designation impact which HCPCS code will be assigned to your product? (2) In what "site of service" do you intend to market your product? Where will your customers use the product? Which coding system (CPT ® or HCPCS) applies to your product? (3) Does a HCPCS code for a similar product already exist? Does your product fit under the existing HCPCS code? (4) Does your product need a new HCPCS code? What is the linkage, if any, between coding, payment, and coverage for the product? Researchers and companies need to start early and place the same emphasis on a reimbursement strategy as they do on a regulatory strategy. Your reimbursement strategy staff should be involved early in the process, preferably during product research and development and clinical trial discussions.

  6. Healthy young adults implement distinctive avoidance strategies while walking and circumventing virtual human vs. non-human obstacles in a virtual environment.

    PubMed

    Souza Silva, Wagner; Aravind, Gayatri; Sangani, Samir; Lamontagne, Anouk

    2018-03-01

    This study examines how three types of obstacles (cylinder, virtual human and virtual human with footstep sounds) affect circumvention strategies of healthy young adults. Sixteen participants aged 25.2 ± 2.5 years (mean ± 1SD) were tested while walking overground and viewing a virtual room through a helmet mounted display. As participants walked towards a stationary target in the far space, they avoided an obstacle (cylinder or virtual human) approaching either from the right (+40°), left (-40°) or head-on (0°). Obstacle avoidance strategies were characterized using the position and orientation of the head. Repeated mixed model analysis showed smaller minimal distances (p = 0.007) while avoiding virtual humans as compared to cylinders. Footstep sounds added to virtual humans did not modify (p = 0.2) minimal distances compared to when no sound was provided. Onset times of avoidance strategies were similar across conditions (p = 0.06). Results indicate that the nature of the obstacle (human-like vs. non-human object) matters and can modify avoidance strategies. Smaller obstacle clearances in response to virtual humans may reflect the use of a less conservative avoidance strategy, due to a resemblance of obstacles to pedestrians and a recall of strategies used in daily locomotion. The lack of influence of footstep sounds supports the fact that obstacle avoidance primarily relies on visual cues and the principle of 'inverse effectiveness' whereby multisensory neurons' response to multimodal stimuli becomes weaker when the unimodal sensory stimulus (vision) is strong. Present findings should be taken into consideration to optimize the ecological validity of VR-based obstacle avoidance paradigms used in research and rehabilitation. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Auditory observation of stepping actions can cue both spatial and temporal components of gait in Parkinson's disease patients.

    PubMed

    Young, William R; Rodger, Matthew W M; Craig, Cathy M

    2014-05-01

    A common behavioural symptom of Parkinson's disease (PD) is reduced step length (SL). Whilst sensory cueing strategies can be effective in increasing SL and reducing gait variability, current cueing strategies conveying spatial or temporal information are generally confined to the use of either visual or auditory cue modalities, respectively. We describe a novel cueing strategy using ecologically-valid 'action-related' sounds (footsteps on gravel) that convey both spatial and temporal parameters of a specific action within a single cue. The current study used a real-time imitation task to examine whether PD affects the ability to re-enact changes in spatial characteristics of stepping actions, based solely on auditory information. In a second experimental session, these procedures were repeated using synthesized sounds derived from recordings of the kinetic interactions between the foot and walking surface. A third experimental session examined whether adaptations observed when participants walked to action-sounds were preserved when participants imagined either real recorded or synthesized sounds. Whilst healthy control participants were able to re-enact significant changes in SL in all cue conditions, these adaptations, in conjunction with reduced variability of SL, were only observed in the PD group when walking to, or imagining the recorded sounds. The findings show that while recordings of stepping sounds convey action information to allow PD patients to re-enact and imagine spatial characteristics of gait, synthesis of sounds purely from gait kinetics is insufficient to evoke similar changes in behaviour, perhaps indicating that PD patients have a higher threshold to cue sensorimotor resonant responses. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Time course of dynamic range adaptation in the auditory nerve

    PubMed Central

    Wang, Grace I.; Dean, Isabel; Delgutte, Bertrand

    2012-01-01

    Auditory adaptation to sound-level statistics occurs as early as in the auditory nerve (AN), the first stage of neural auditory processing. In addition to firing rate adaptation characterized by a rate decrement dependent on previous spike activity, AN fibers show dynamic range adaptation, which is characterized by a shift of the rate-level function or dynamic range toward the most frequently occurring levels in a dynamic stimulus, thereby improving the precision of coding of the most common sound levels (Wen B, Wang GI, Dean I, Delgutte B. J Neurosci 29: 13797–13808, 2009). We investigated the time course of dynamic range adaptation by recording from AN fibers with a stimulus in which the sound levels periodically switch from one nonuniform level distribution to another (Dean I, Robinson BL, Harper NS, McAlpine D. J Neurosci 28: 6430–6438, 2008). Dynamic range adaptation occurred rapidly, but its exact time course was difficult to determine directly from the data because of the concomitant firing rate adaptation. To characterize the time course of dynamic range adaptation without the confound of firing rate adaptation, we developed a phenomenological “dual adaptation” model that accounts for both forms of AN adaptation. When fitted to the data, the model predicts that dynamic range adaptation occurs as rapidly as firing rate adaptation, over 100–400 ms, and the time constants of the two forms of adaptation are correlated. These findings suggest that adaptive processing in the auditory periphery in response to changes in mean sound level occurs rapidly enough to have significant impact on the coding of natural sounds. PMID:22457465

  9. Using the structure of natural scenes and sounds to predict neural response properties in the brain

    NASA Astrophysics Data System (ADS)

    Deweese, Michael

    2014-03-01

    The natural scenes and sounds we encounter in the world are highly structured. The fact that animals and humans are so efficient at processing these sensory signals compared with the latest algorithms running on the fastest modern computers suggests that our brains can exploit this structure. We have developed a sparse mathematical representation of speech that minimizes the number of active model neurons needed to represent typical speech sounds. The model learns several well-known acoustic features of speech such as harmonic stacks, formants, onsets and terminations, but we also find more exotic structures in the spectrogram representation of sound such as localized checkerboard patterns and frequency-modulated excitatory subregions flanked by suppressive sidebands. Moreover, several of these novel features resemble neuronal receptive fields reported in the Inferior Colliculus (IC), as well as auditory thalamus (MGBv) and primary auditory cortex (A1), and our model neurons exhibit the same tradeoff in spectrotemporal resolution as has been observed in IC. To our knowledge, this is the first demonstration that receptive fields of neurons in the ascending mammalian auditory pathway beyond the auditory nerve can be predicted based on coding principles and the statistical properties of recorded sounds. We have also developed a biologically-inspired neural network model of primary visual cortex (V1) that can learn a sparse representation of natural scenes using spiking neurons and strictly local plasticity rules. The representation learned by our model is in good agreement with measured receptive fields in V1, demonstrating that sparse sensory coding can be achieved in a realistic biological setting.
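
    The "minimize the number of active model neurons" objective is the classic sparse-coding problem, min_z 0.5*||x - D z||^2 + lam*||z||_1. A compact illustration (not the authors' model) is ISTA applied to a fixed random dictionary, sketched below with placeholder sizes.

        # Minimal sparse-coding sketch via ISTA (iterative shrinkage-thresholding).
        import numpy as np

        def ista(x, D, lam=0.1, n_iter=200):
            L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
            z = np.zeros(D.shape[1])
            for _ in range(n_iter):
                grad = D.T @ (D @ z - x)
                z = z - grad / L
                z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
            return z

        rng = np.random.default_rng(0)
        D = rng.standard_normal((64, 256))           # overcomplete dictionary (placeholder)
        D /= np.linalg.norm(D, axis=0)
        x = 2.0 * D[:, 3] - 1.5 * D[:, 100]          # signal built from two dictionary elements
        z = ista(x, D)
        print(int((np.abs(z) > 1e-3).sum()), "active units")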

  10. Retrievals of methane from IASI radiance spectra and comparisons with ground-based FTIR measurements

    NASA Astrophysics Data System (ADS)

    Kerzenmacher, T.; Kumps, N.; de Mazière, M.; Kruglanski, M.; Senten, C.; Vanhaelewyn, G.; Vandaele, A. C.; Vigouroux, C.

    2009-04-01

    The Infrared Atmospheric Sounding Interferometer (IASI), launched on 19 October 2006, is a Fourier transform spectrometer onboard METOP-1, observing the radiance of the Earth's surface and atmosphere in nadir mode. The spectral range covers the 645 to 2760 cm-1 region with a resolution of 0.35 to 0.5 cm-1. A line-by-line spectral simulation and inversion code, ASIMUT, has been developed for the retrieval of chemical species from infrared spectra. The code includes an analytical calculation of the Jacobians for use in the inversion part of the algorithm based on the Optimal Estimation Method. In 2007 we conducted a measurement campaign at St Denis, Île de la Réunion, where we performed ground-based solar absorption observations with an infrared Fourier transform spectrometer. ASIMUT has been used to retrieve methane from the ground-based and collocated satellite measurements. For the latter we selected pixels that are situated over the sea. In this presentation we will show the retrieval strategies, the resulting methane column time series above St Denis and the comparisons of the satellite data with the ground-based data sets. Vertical profile information in these data sets will also be discussed.
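
    The inversion step in such retrievals typically follows Rodgers' Optimal Estimation formulation, x_hat = x_a + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - F(x_a)). The sketch below implements a single linear update with random placeholder matrices, as an illustration rather than ASIMUT's actual algorithm.

        # Illustrative single Optimal Estimation update (Rodgers n-form).
        import numpy as np

        def oem_step(y, x_a, K, S_a, S_e, forward):
            Se_inv = np.linalg.inv(S_e)
            Sa_inv = np.linalg.inv(S_a)
            gain = np.linalg.solve(K.T @ Se_inv @ K + Sa_inv, K.T @ Se_inv)
            return x_a + gain @ (y - forward(x_a))

        n_state, n_meas = 4, 10
        rng = np.random.default_rng(1)
        K = rng.standard_normal((n_meas, n_state))            # Jacobian (weighting functions)
        x_true = rng.standard_normal(n_state)
        y = K @ x_true + 0.01 * rng.standard_normal(n_meas)   # noisy synthetic measurement
        x_hat = oem_step(y, np.zeros(n_state), K,
                         np.eye(n_state), 0.01**2 * np.eye(n_meas), lambda x: K @ x)
        print(np.round(x_hat - x_true, 3))                    # residual close to zero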

  11. ANN modeling of DNA sequences: new strategies using DNA shape code.

    PubMed

    Parbhane, R V; Tambe, S S; Kulkarni, B D

    2000-09-01

    Two new encoding strategies, namely, wedge and twist codes, which are based on the DNA helical parameters, are introduced to represent DNA sequences in artificial neural network (ANN)-based modeling of biological systems. The performance of the new coding strategies has been evaluated by conducting three case studies involving mapping (modeling) and classification applications of ANNs. The proposed coding schemes have been compared rigorously and shown to outperform the existing coding strategies especially in situations wherein limited data are available for building the ANN models.
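
    The wedge/twist idea can be pictured as mapping each dinucleotide step of a sequence to a helical parameter value before feeding the resulting numeric vector to an ANN. In the sketch below the angle table is a placeholder; the published wedge and twist values are not reproduced here.

        # Illustration only: encoding a DNA sequence by per-step helical parameters.
        PLACEHOLDER_TWIST_DEG = {"AA": 35.6, "AT": 31.5, "TA": 36.0, "GC": 40.0}  # not the published table

        def encode_twist(seq, table=PLACEHOLDER_TWIST_DEG, default=34.3):
            steps = (seq[i:i + 2] for i in range(len(seq) - 1))
            return [table.get(step, default) for step in steps]

        print(encode_twist("GATTACA"))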

  12. Hair cells use active zones with different voltage dependence of Ca2+ influx to decompose sounds into complementary neural codes

    PubMed Central

    Ohn, Tzu-Lun; Rutherford, Mark A.; Jing, Zhizi; Jung, Sangyong; Duque-Afonso, Carlos J.; Hoch, Gerhard; Picher, Maria Magdalena; Scharinger, Anja; Strenzke, Nicola; Moser, Tobias

    2016-01-01

    For sounds of a given frequency, spiral ganglion neurons (SGNs) with different thresholds and dynamic ranges collectively encode the wide range of audible sound pressures. Heterogeneity of synapses between inner hair cells (IHCs) and SGNs is an attractive candidate mechanism for generating complementary neural codes covering the entire dynamic range. Here, we quantified active zone (AZ) properties as a function of AZ position within mouse IHCs by combining patch clamp and imaging of presynaptic Ca2+ influx and by immunohistochemistry. We report substantial AZ heterogeneity whereby the voltage of half-maximal activation of Ca2+ influx ranged over ∼20 mV. Ca2+ influx at AZs facing away from the ganglion activated at weaker depolarizations. Estimates of AZ size and Ca2+ channel number were correlated and larger when AZs faced the ganglion. Disruption of the deafness gene GIPC3 in mice shifted the activation of presynaptic Ca2+ influx to more hyperpolarized potentials and increased the spontaneous SGN discharge. Moreover, Gipc3 disruption enhanced Ca2+ influx and exocytosis in IHCs, reversed the spatial gradient of maximal Ca2+ influx in IHCs, and increased the maximal firing rate of SGNs at sound onset. We propose that IHCs diversify Ca2+ channel properties among AZs and thereby contribute to decomposing auditory information into complementary representations in SGNs. PMID:27462107

  13. Auditory stimuli elicit hippocampal neuronal responses during sleep

    PubMed Central

    Vinnik, Ekaterina; Antopolskiy, Sergey; Itskov, Pavel M.; Diamond, Mathew E.

    2012-01-01

    To investigate how hippocampal neurons code behaviorally salient stimuli, we recorded from neurons in the CA1 region of hippocampus in rats while they learned to associate the presence of sound with water reward. Rats learned to alternate between two reward ports at which, in 50% of the trials, sound stimuli were presented followed by water reward after a 3-s delay. Sound at the water port predicted subsequent reward delivery in 100% of the trials, and the absence of sound predicted reward omission. During this task, 40% of recorded neurons fired differently according to which of the two reward ports the rat was visiting. A smaller fraction of neurons demonstrated onset responses to sound/nosepoke (19%) and reward delivery (24%). When the sounds were played during passive wakefulness, 8% of neurons responded with short-latency onset responses; 25% of neurons responded to sounds when they were played during sleep. During sleep the short-latency responses in hippocampus are intermingled with long-lasting responses, which in the current experiment could last for 1–2 s. Based on the current findings and the results of previous experiments, we describe two types of hippocampal neuronal responses to sounds: sound-onset responses with very short latency and longer-lasting sound-specific responses that are likely to be present when the animal is actively engaged in the task. PMID:22754507

  14. Assessment of incidence of severe sepsis in Sweden using different ways of abstracting International Classification of Diseases codes: difficulties with methods and interpretation of results.

    PubMed

    Wilhelms, Susanne B; Huss, Fredrik R; Granath, Göran; Sjöberg, Folke

    2010-06-01

    To compare three International Classification of Diseases (ICD) code abstraction strategies that have previously been reported to mirror severe sepsis, by examining retrospective Swedish national data from 1987 to 2005 inclusive. Retrospective cohort study. Swedish hospital discharge database. All hospital admissions during the period 1987 to 2005 were extracted, and these patients were screened for severe sepsis using the three ICD code abstraction strategies, adapted for the Swedish version of the ICD. Two code abstraction strategies included both ICD, Ninth Revision (ICD-9) and ICD, Tenth Revision (ICD-10) codes, whereas one included ICD-10 codes alone. None. The three code abstraction strategies identified 37,990, 27,655, and 12,512 patients, respectively, with severe sepsis. The incidence increased over the years, reaching 0.35, 0.43, and 0.13 per 1000 inhabitants, respectively. During the ICD-9 period, we found 17,096 unique patients; of these, only 2789 patients (16%) met two of the code abstraction strategy lists and 14,307 (84%) met one list. The ICD-10 period included 46,979 unique patients, of whom 8% met the criteria of all three strategies, 7% met two, and 84% met only one. The three different ICD code abstraction strategies generated three almost separate cohorts of patients with severe sepsis. Thus, the ICD code abstraction strategies in use today for recording severe sepsis provide an unsatisfactory way of estimating the true incidence of severe sepsis. Further studies relating ICD code abstraction strategies to the American College of Chest Physicians/Society of Critical Care Medicine scores are needed.
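
    The comparison amounts to applying several ICD code lists to the same admissions and measuring how much the resulting cohorts overlap; a minimal sketch is shown below, with made-up code lists standing in for the published abstraction strategies.

        # Apply three ICD code abstraction strategies and measure cohort overlap.
        STRATEGIES = {                        # placeholder code lists, not the published ones
            "A": {"R65.1", "A41.9"},
            "B": {"R65.1", "J96.0"},
            "C": {"A41.9"},
        }

        def cohorts(admissions):
            """admissions: iterable of (patient_id, set_of_icd_codes) pairs."""
            hits = {name: set() for name in STRATEGIES}
            for patient_id, codes in admissions:
                for name, code_list in STRATEGIES.items():
                    if codes & code_list:                 # any code from the list present
                        hits[name].add(patient_id)
            return hits

        sample = [(1, {"R65.1"}), (2, {"A41.9", "J96.0"}), (3, {"I10"})]
        hits = cohorts(sample)
        patients = set().union(*hits.values())
        n_lists_met = {p: sum(p in h for h in hits.values()) for p in patients}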

  15. A Subjective Test of Modulated Blade Spacing for Helicopter Main Rotors

    NASA Technical Reports Server (NTRS)

    Sullivan, Brenda M.; Edwards, Bryan D.; Brentner, Kenneth S.; Booth, Earl R., Jr.

    2002-01-01

    Analytically, uneven (modulated) spacing of main rotor blades was found to reduce helicopter noise. A study was performed to see whether these reductions translated into improvements in subjective response. Using a predictive computer code, sounds were predicted for six main rotor configurations: four blades evenly spaced, five blades evenly spaced, and four configurations of five blades with modulated spacing of varying amounts. These predictions were converted to audible sounds corresponding to the level flyover, takeoff, and approach flight conditions. Subjects who heard the simulations were asked to rate the overflight sounds for noisiness on a scale of 0 to 10. In general, the evenly spaced configurations were judged less noisy than the modulated spacings, possibly because the uneven spacings produced a perceptible pulsating sound due to the very low fundamental frequency.

  16. On cortical coding of vocal communication sounds in primates

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoqin

    2000-10-01

    Understanding how the brain processes vocal communication sounds is one of the most challenging problems in neuroscience. Our understanding of how the cortex accomplishes this unique task should greatly facilitate our understanding of cortical mechanisms in general. Perception of species-specific communication sounds is an important aspect of the auditory behavior of many animal species and is crucial for their social interactions, reproductive success, and survival. The principles of neural representations of these behaviorally important sounds in the cerebral cortex have direct implications for the neural mechanisms underlying human speech perception. Our progress in this area has been relatively slow, compared with our understanding of other auditory functions such as echolocation and sound localization. This article discusses previous and current studies in this field, with emphasis on nonhuman primates, and proposes a conceptual platform to further our exploration of this frontier. It is argued that the prerequisite condition for understanding cortical mechanisms underlying communication sound perception and production is an appropriate animal model. Three issues are central to this work: (i) neural encoding of statistical structure of communication sounds, (ii) the role of behavioral relevance in shaping cortical representations, and (iii) sensory-motor interactions between vocal production and perception systems.

  17. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR CODING AND CODING VERIFICATION (HAND ENTRY) (UA-D-14.0)

    EPA Science Inventory

    The purpose of this SOP is to define the coding strategy for coding and coding verification of hand-entered data. It applies to the coding of all physical forms, especially those coded by hand. The strategy was developed for use in the Arizona NHEXAS project and the "Border" st...

  18. Evaluation of noise impact mitigation protocols to support CSS : final report.

    DOT National Transportation Integrated Search

    2008-03-01

    This research project developed and evaluated practical ways of involving the public in context sensitive sound mitigation strategies. The integrated use of photo montage, PowerPoint presentation, linked traffic sound files, and audience response sys...

  19. Analysis of the sound field in finite length infinite baffled cylindrical ducts with vibrating walls of finite impedance.

    PubMed

    Shao, Wei; Mechefske, Chris K

    2005-04-01

    This paper describes an analytical model of finite cylindrical ducts with infinite flanges. This model is used to investigate the sound radiation characteristics of the gradient coil system of a magnetic resonance imaging (MRI) scanner. The sound field in the duct satisfies both the boundary conditions at the wall and at the open ends. The vibrating cylindrical wall of the duct is assumed to be the only sound source. Different acoustic conditions for the wall (rigid and absorptive) are used in the simulations. The wave reflection phenomenon at the open ends of the finite duct is described by general radiation impedance. The analytical model is validated by comparison with its counterpart in a commercial code based on the boundary element method (BEM). The analytical model shows significant advantages over the BEM model, offering better numerical efficiency and a direct relation between the design parameters and the sound field inside the duct.

  20. Role of N-Methyl-D-Aspartate Receptors in Action-Based Predictive Coding Deficits in Schizophrenia.

    PubMed

    Kort, Naomi S; Ford, Judith M; Roach, Brian J; Gunduz-Bruce, Handan; Krystal, John H; Jaeger, Judith; Reinhart, Robert M G; Mathalon, Daniel H

    2017-03-15

    Recent theoretical models of schizophrenia posit that dysfunction of the neural mechanisms subserving predictive coding contributes to symptoms and cognitive deficits, and this dysfunction is further posited to result from N-methyl-D-aspartate glutamate receptor (NMDAR) hypofunction. Previously, by examining auditory cortical responses to self-generated speech sounds, we demonstrated that predictive coding during vocalization is disrupted in schizophrenia. To test the hypothesized contribution of NMDAR hypofunction to this disruption, we examined the effects of the NMDAR antagonist, ketamine, on predictive coding during vocalization in healthy volunteers and compared them with the effects of schizophrenia. In two separate studies, the N1 component of the event-related potential elicited by speech sounds during vocalization (talk) and passive playback (listen) were compared to assess the degree of N1 suppression during vocalization, a putative measure of auditory predictive coding. In the crossover study, 31 healthy volunteers completed two randomly ordered test days, a saline day and a ketamine day. Event-related potentials during the talk/listen task were obtained before infusion and during infusion on both days, and N1 amplitudes were compared across days. In the case-control study, N1 amplitudes from 34 schizophrenia patients and 33 healthy control volunteers were compared. N1 suppression to self-produced vocalizations was significantly and similarly diminished by ketamine (Cohen's d = 1.14) and schizophrenia (Cohen's d = .85). Disruption of NMDARs causes dysfunction in predictive coding during vocalization in a manner similar to the dysfunction observed in schizophrenia patients, consistent with the theorized contribution of NMDAR hypofunction to predictive coding deficits in schizophrenia. Copyright © 2016 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  1. Behavioral and modeling studies of sound localization in cats: effects of stimulus level and duration

    PubMed Central

    Ruhland, Janet L.; Yin, Tom C. T.; Tollin, Daniel J.

    2013-01-01

    Sound localization accuracy in elevation can be affected by sound spectrum alteration. Correspondingly, any stimulus manipulation that causes a change in the peripheral representation of the spectrum may degrade localization ability in elevation. The present study examined the influence of sound duration and level on localization performance in cats with the head unrestrained. Two cats were trained using operant conditioning to indicate the apparent location of a sound via gaze shift, which was measured with a search-coil technique. Overall, neither sound level nor duration had a notable effect on localization accuracy in azimuth, except at near-threshold levels. In contrast, localization accuracy in elevation improved as sound duration increased, and sound level also had a large effect on localization in elevation. For short-duration noise, the performance peaked at intermediate levels and deteriorated at low and high levels; for long-duration noise, this “negative level effect” at high levels was not observed. Simulations based on an auditory nerve model were used to explain the above observations and to test several hypotheses. Our results indicated that neither the flatness of sound spectrum (before the sound reaches the inner ear) nor the peripheral adaptation influences spectral coding at the periphery for localization in elevation, whereas neural computation that relies on “multiple looks” of the spectral analysis is critical in explaining the effect of sound duration, but not level. The release of negative level effect observed for long-duration sound could not be explained at the periphery and, therefore, is likely a result of processing at higher centers. PMID:23657278

  2. Degraded neural and behavioral processing of speech sounds in a rat model of Rett syndrome

    PubMed Central

    Engineer, Crystal T.; Rahebi, Kimiya C.; Borland, Michael S.; Buell, Elizabeth P.; Centanni, Tracy M.; Fink, Melyssa K.; Im, Kwok W.; Wilson, Linda G.; Kilgard, Michael P.

    2015-01-01

    Individuals with Rett syndrome have greatly impaired speech and language abilities. Auditory brainstem responses to sounds are normal, but cortical responses are highly abnormal. In this study, we used the novel rat Mecp2 knockout model of Rett syndrome to document the neural and behavioral processing of speech sounds. We hypothesized that both speech discrimination ability and the neural response to speech sounds would be impaired in Mecp2 rats. We expected that extensive speech training would improve speech discrimination ability and the cortical response to speech sounds. Our results reveal that speech responses across all four auditory cortex fields of Mecp2 rats were hyperexcitable, responded slower, and were less able to follow rapidly presented sounds. While Mecp2 rats could accurately perform consonant and vowel discrimination tasks in quiet, they were significantly impaired at speech sound discrimination in background noise. Extensive speech training improved discrimination ability. Training shifted cortical responses in both Mecp2 and control rats to favor the onset of speech sounds. While training increased the response to low frequency sounds in control rats, the opposite occurred in Mecp2 rats. Although neural coding and plasticity are abnormal in the rat model of Rett syndrome, extensive therapy appears to be effective. These findings may help to explain some aspects of communication deficits in Rett syndrome and suggest that extensive rehabilitation therapy might prove beneficial. PMID:26321676

  3. Longitudinal Relations Between Parental Writing Support and Preschoolers’ Language and Literacy Skills

    PubMed Central

    Bindman, Samantha W.; Hindman, Annemarie H.; Aram, Dorit; Morrison, Frederick J.

    2013-01-01

    Parental writing support was examined over time and in relation to children’s language and literacy skills. Seventy-seven parents and their preschoolers were videotaped writing an invitation together twice during one year. Parental writing support was coded at the level of the letter to document parents’ graphophonemic support (letter–sound correspondence), print support (letter formation), and demand for precision (expectation for correcting writing errors). Parents primarily relied on only a couple of print (i.e., the parent writing the letter alone) and graphophonemic (i.e., saying the word as a whole, dictating letters as children write) strategies. Graphophonemic and print support in preschool predicted children’s decoding skills, and graphophonemic support also predicted children’s future phonological awareness. Neither type of support predicted children’s vocabulary scores. Demand for precision occurred infrequently and was unrelated to children’s outcomes. Findings demonstrate the importance of parental writing support for augmenting children’s literacy skills. PMID:25045186

  4. Evaluation strategy : Puget Sound regional fare card : FY01 earmark evaluation

    DOT National Transportation Integrated Search

    2003-06-24

    King County Metro Transit is the lead agency responsible for implementing the Central Puget Sound Regional Fare Coordination Project (RFC Project). The project features a smart card technology that will support and link the fare collection systems of...

  5. Creating A Choral Sound.

    ERIC Educational Resources Information Center

    Leenman, Tracy E.

    1996-01-01

    Covers a variety of strategies for creating a unique and identifiable choral sound. Provides specific instructions for developing singing in unison and recommends a standing arrangement of soprano, alto, tenor, and bass quartets. Provides other tips for instrumentation, sight reading, and quality rehearsal time. (MJP)

  6. BIAS: A Regional Management of Underwater Sound in the Baltic Sea.

    PubMed

    Sigray, Peter; Andersson, Mathias; Pajala, Jukka; Laanearu, Janek; Klauson, Aleksander; Tegowski, Jaroslaw; Boethling, Maria; Fischer, Jens; Tougaard, Jakob; Wahlberg, Magnus; Nikolopoulos, Anna; Folegot, Thomas; Matuschek, Rainer; Verfuss, Ursula

    2016-01-01

    Management of the impact of underwater sound is an emerging concern worldwide. Several countries are in the process of implementing regulatory legislation. In Europe, the Marine Strategy Framework Directive was launched in 2008. This framework addresses noise impacts, and the recommendation is to deal with them on a regional level. The Baltic Sea is a semienclosed area bordered by nine states. The density of ship traffic is one of the highest in Europe, and the number of ships is estimated to double by 2030. Undoubtedly, because noise does not respect national borders, efficient management of sound in the Baltic Sea must be done on a regional scale. In line with the European Union directive, the Baltic Sea Information on the Acoustic Soundscape (BIAS) project was established to implement Descriptor 11 of the Marine Strategy Framework Directive in the Baltic Sea region. BIAS will develop tools, standards, and methodologies that will allow for cross-border handling of data and results, measure sound in 40 locations for 1 year, establish a seasonal soundscape map by combining measured sound with advanced three-dimensional modeling, and, finally, establish standards for measuring continuous sound. Results from the first phase of BIAS are presented here, with an emphasis on standards and soundscape mapping as well as the challenges related to regional handling.

  7. Mathematical simulation of sound propagation in a flow channel with impedance walls

    NASA Astrophysics Data System (ADS)

    Osipov, A. A.; Reent, K. S.

    2012-07-01

    The paper considers the specifics of calculating tonal sound propagating in a flow channel with an installed sound-absorbing device. The calculation is performed by numerically integrating the linearized nonstationary Euler equations, using a code developed by the authors based on the discontinuous Galerkin method. Using the linear theory of small perturbations, the effect of the sound-absorbing lining of the channel walls is described with the modified value of acoustic impedance proposed by the authors, for which, under flow channel conditions, the traditional classification of the active and reactive types of lining in terms of the real and imaginary impedance values, respectively, remains valid. To stabilize the computation process, a generalized impedance boundary condition is proposed in which, in addition to the impedance value itself, some additional parameters are introduced characterizing certain fictitious properties of inertia and elasticity of the impedance surface.

  8. On sound transmission through double-walled cylindrical shells lined with poroelastic material: Comparison with Zhou's results and further effect of external mean flow

    NASA Astrophysics Data System (ADS)

    Liu, Yu; He, Chuanbo

    2015-12-01

    In this discussion, the corrections to the errors found in the derivations and the numerical code of a recent analytical study (Zhou et al. Journal of Sound and Vibration 333 (7) (2014) 1972-1990) on sound transmission through double-walled cylindrical shells lined with poroelastic material are presented and discussed, as well as the further effect of the external mean flow on the transmission loss (TL). After applying the corrections, the locations of the characteristic frequencies of thin shells remain unchanged, as do the TL results above the ring frequency, where BU and UU remain the best configurations in sound insulation performance. In the low-frequency region below the ring frequency, however, the corrections attenuate the TL amplitude significantly for BU and UU, and hence the BB configuration exhibits the best performance, which is consistent with previous observations for flat sandwich panels.

  9. A simple clinical coding strategy to improve recording of child maltreatment concerns: an audit study.

    PubMed

    McGovern, Andrew Peter; Woodman, Jenny; Allister, Janice; van Vlymen, Jeremy; Liyanage, Harshana; Jones, Simon; Rafi, Imran; de Lusignan, Simon; Gilbert, Ruth

    2015-01-14

    Recording concerns about child maltreatment, including minor concerns, is recommended by the General Medical Council (GMC) and National Institute for Health and Clinical Excellence (NICE), but there is evidence of substantial under-recording. To determine whether a simple coding strategy improved recording of maltreatment-related concerns in electronic primary care records. Clinical audit of rates of maltreatment-related coding before (January 2010 to December 2011) and after (January to December 2012) the implementation of a simple coding strategy in 11 English family practices. The strategy included encouraging general practitioners to use, always and as a minimum, the Read code 'Child is cause for concern'. A total of 25,106 children aged 0-18 years were registered with these practices. We also undertook a qualitative service evaluation to investigate barriers to recording. Outcomes were recording of 1) any maltreatment-related codes, 2) child protection proceedings, and 3) child was a cause for concern. We found increased recording of any maltreatment-related code (rate ratio 1.4; 95% CI 1.1-1.6), child protection procedures (RR 1.4; 95% CI 1.1-1.6) and cause for concern (RR 2.5; 95% CI 1.8-3.4) after implementation of the coding strategy. Clinicians cited the simplicity of the coding strategy as the most important factor assisting implementation. This simple coding strategy improved clinicians' recording of maltreatment-related concerns in a small sample of practices with some 'buy-in'. Further research should investigate how recording can best support the doctor-patient relationship. HOW THIS FITS IN: Recording concerns about child maltreatment, including minor concerns, is recommended by the General Medical Council (GMC) and National Institute for Health and Clinical Excellence (NICE), but there is evidence of substantial under-recording. We describe a simple clinical coding strategy that helped general practitioners to improve recording of maltreatment-related concerns. These improvements could improve case finding of children at risk and information sharing.
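
    The before/after comparison reduces to a Poisson rate ratio with a normal-approximation confidence interval on the log scale; a minimal sketch with made-up counts (not the audit's data) is shown below.

        import math

        def rate_ratio(events_after, persontime_after, events_before, persontime_before):
            """Rate ratio for two Poisson counts with an approximate 95% CI."""
            rr = (events_after / persontime_after) / (events_before / persontime_before)
            se_log = math.sqrt(1.0 / events_after + 1.0 / events_before)
            return rr, (rr * math.exp(-1.96 * se_log), rr * math.exp(1.96 * se_log))

        # Hypothetical counts per registered child-year, not the audit's figures
        print(rate_ratio(140, 25000, 100, 25000))   # -> RR 1.4 with its 95% CI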

  10. Microsoft C#.NET program and electromagnetic depth sounding for large loop source

    NASA Astrophysics Data System (ADS)

    Prabhakar Rao, K.; Ashok Babu, G.

    2009-07-01

    A program, in the C# (C Sharp) language with the Microsoft.NET Framework, is developed to compute the normalized vertical magnetic field of a horizontal rectangular loop source placed on the surface of an n-layered earth. The field can be calculated either inside or outside the loop. Five C# classes with member functions in each class are designed to compute the kernel, the Hankel transform integral, the coefficients for cubic spline interpolation between computed values, and the normalized vertical magnetic field. The program computes the vertical magnetic field in the frequency domain using the integral expressions evaluated by a combination of straightforward numerical integration and the digital filter technique. The code utilizes different object-oriented programming (OOP) features. It finally computes the amplitude and phase of the normalized vertical magnetic field. The computed results are presented for geometric and parametric soundings. The code is developed in Microsoft.NET Visual Studio 2003 and uses various system class libraries.
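
    Of the computational steps listed, the spline stage is the easiest to illustrate; the sketch below (in Python rather than C#) densifies a sparsely computed response with a cubic spline, with a made-up decaying curve standing in for the Hankel-transform result.

        import numpy as np
        from scipy.interpolate import CubicSpline

        freqs = np.logspace(0, 4, 15)                     # Hz, sparse computation points
        hz_norm = 1.0 / (1.0 + (freqs / 300.0) ** 1.5)    # placeholder field, not the real kernel
        spline = CubicSpline(np.log10(freqs), hz_norm)    # interpolate on a log-frequency axis
        dense = spline(np.log10(np.logspace(0, 4, 200)))  # densified sounding curve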

  11. Software for storage and processing coded messages for the international exchange of meteorological information

    NASA Astrophysics Data System (ADS)

    Popov, V. N.; Botygin, I. A.; Kolochev, A. S.

    2017-01-01

    The approach represents data from the international codes for the exchange of meteorological information using a metadescription, a formalism that associates the data with certain categories of resources. Development of the metadata components was based on an analysis of data from surface meteorological observations, vertical atmospheric soundings, upper-air wind soundings, weather radar observations, satellite observations, and other sources. A common set of metadata components was formed, including classes, divisions and groups for a generalized description of the meteorological data. The structure and content of the main components of a generalized metadescription are presented in detail using the example of meteorological observations from land and sea stations. The functional structure of a distributed computing system is described; it organizes the storage of large volumes of meteorological data for further processing in the analysis and forecasting of climatic processes.

  12. Orthographic effects in spoken word recognition: Evidence from Chinese.

    PubMed

    Qu, Qingqing; Damian, Markus F

    2017-06-01

    Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.

  13. Analysis of Defenses Against Code Reuse Attacks on Modern and New Architectures

    DTIC Science & Technology

    2015-09-01

    Master of Engineering thesis by Isaac Noah Evans, submitted to the Department of Electrical Engineering and Computer Science. Excerpt: such defenses rest on a static analysis that must trade off soundness and completeness; an incomplete analysis will produce extra edges in the CFG that might allow an attacker to slip through, while an unsound analysis…

  14. Fast Scattering Code (FSC) User's Manual: Version 2

    NASA Technical Reports Server (NTRS)

    Tinetti, Ana F.; Dun, M. H.; Pope, D. Stuart

    2006-01-01

    The Fast Scattering Code (version 2.0) is a computer program for predicting the three-dimensional scattered acoustic field produced by the interaction of known, time-harmonic, incident sound with aerostructures in the presence of potential background flow. The FSC has been developed for use as an aeroacoustic analysis tool for assessing global effects on noise radiation and scattering caused by changes in configuration (geometry, component placement) and operating conditions (background flow, excitation frequency).

  15. Addressing Kitchen Contaminants for Healthy, Low-Energy Homes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stratton, J. Chris; Singer, Brett C.

    2014-01-01

    Cooking and cooking burners emit pollutants that can adversely affect indoor air quality in residences and significantly impact occupant health. Effective kitchen exhaust ventilation can reduce exposure to cooking-related air pollutants as an enabling step to healthier, low-energy homes. This report by Lawrence Berkeley National Laboratory identifies barriers to the widespread adoption of kitchen exhaust ventilation technologies and practice and proposes a suite of strategies to overcome these barriers. The recommendations have been vetted by a group of industry, regulatory, health, and research experts and stakeholders who convened for two meetings and provided input and feedback to early drafts of this document. The most fundamental barriers are (1) the common misconception, based on a sensory perception of risk, that kitchen exhaust when cooking is unnecessary and (2) the lack of a code requirement for kitchen ventilation in most U.S. locations. Highest priority objectives include the following: (1) Raise awareness among the public and the building industry of the need to install and routinely use kitchen ventilation; (2) Incorporate kitchen exhaust ventilation as a requirement of building codes and improve the mechanisms for code enforcement; (3) Provide best practice product and use-behavior guidance to ventilation equipment purchasers and installers; and (4) Develop test methods and performance targets to advance development of high performance products. A specific, urgent need is the development of an over-the-range microwave that meets the airflow and sound requirements of ASHRAE Standard 62.2.

  16. Addressing Kitchen Contaminants for Healthy, Low-Energy Homes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stratton, J. Chris; Singer, Brett C.

    2014-01-01

    Cooking and cooking burners emit pollutants that can adversely affect indoor air quality in residences and significantly impact occupant health. Effective kitchen exhaust ventilation can reduce exposure to cooking-related air pollutants as an enabling step to healthier, low-energy homes. This report identifies barriers to the widespread adoption of kitchen exhaust ventilation technologies and practice and proposes a suite of strategies to overcome these barriers. The recommendations have been vetted by a group of industry, regulatory, health, and research experts and stakeholders who convened for two web-based meetings and provided input and feedback to early drafts of this document. The most fundamental barriers are (1) the common misconception, based on a sensory perception of risk, that kitchen exhaust when cooking is unnecessary and (2) the lack of a code requirement for kitchen ventilation in most US locations. Highest priority objectives include the following: (1) Raise awareness among the public and the building industry of the need to install and routinely use kitchen ventilation; (2) Incorporate kitchen exhaust ventilation as a requirement of building codes and improve the mechanisms for code enforcement; (3) Provide best practice product and use-behavior guidance to ventilation equipment purchasers and installers; and (4) Develop test methods and performance targets to advance development of high performance products. A specific, urgent need is the development of an over-the-range microwave that meets the airflow and sound requirements of ASHRAE Standard 62.2.

  17. Generalized type II hybrid ARQ scheme using punctured convolutional coding

    NASA Astrophysics Data System (ADS)

    Kallel, Samir; Haccoun, David

    1990-11-01

    A method is presented to construct rate-compatible convolutional (RCC) codes from known high-rate punctured convolutional codes, obtained from the best rate-1/2 codes. The construction method is rather simple and straightforward, and still yields good codes. Moreover, low-rate codes can be obtained without any limit on the lowest achievable code rate. Based on the RCC codes, a generalized type-II hybrid ARQ scheme, which combines the benefits of the modified type-II hybrid ARQ strategy of Hagenauer (1988) with the code-combining ARQ strategy of Chase (1985), is proposed and analyzed. With the proposed generalized type-II hybrid ARQ strategy, the throughput increases as the starting coding rate increases, and as the channel degrades, it tends to merge with the throughput of rate-1/2 type-II hybrid ARQ schemes with code combining, thus allowing the system to be flexible and adaptive to channel conditions, even under wide noise variations and severe degradations.
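
    The core of rate-compatible puncturing is simply deleting mother-code output bits according to a periodic pattern, with lower-rate members of the family restoring previously deleted positions; a minimal sketch (the pattern and bit stream are illustrative) follows.

        import itertools

        def puncture(coded_bits, pattern):
            """Drop the positions marked 0 in a periodic puncturing pattern.

            coded_bits: serialized output of a rate-1/2 mother code (2 bits per info bit).
            pattern:    e.g. [1, 1, 1, 0] keeps 3 of every 4 bits -> rate 2/3.
            """
            return [b for b, keep in zip(coded_bits, itertools.cycle(pattern)) if keep]

        # In an ARQ retransmission, the previously punctured bits can be sent and
        # code-combined with the first transmission, lowering the effective rate.
        tx1 = puncture([0, 1, 1, 0, 1, 1, 0, 0], [1, 1, 1, 0])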

  18. Sound level exposure of high-risk infants in different environmental conditions.

    PubMed

    Byers, Jacqueline F; Waugh, W Randolph; Lowman, Linda B

    2006-01-01

    To provide descriptive information about the sound levels to which high-risk infants are exposed in various actual environmental conditions in the NICU, including the impact of physical renovation on sound levels, and to assess the contributions of various types of equipment, alarms, and activities to sound levels in simulated conditions in the NICU. Descriptive and comparative design. Convenience sample of 134 infants at a southeastern quaternary children's hospital. A-weighted decibel (dBA) sound levels under various actual and simulated environmental conditions. The renovated NICU was, on average, 4-6 dBA quieter across all environmental conditions than a comparable nonrenovated room, representing a significant sound level reduction. Sound levels remained above consensus recommendations despite physical redesign and staff training. Respiratory therapy equipment, alarms, staff talking, and infant fussiness contributed to higher sound levels. Evidence-based sound-reducing strategies are proposed. Findings were used to plan environment management as part of a developmental, family-centered care, performance improvement program and in new NICU planning.
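
    Because decibel levels are logarithmic, comparisons such as the reported 4-6 dBA difference are made on energy-averaged levels; a minimal sketch with hypothetical spot measurements (not the study's data) is shown below.

        import numpy as np

        def mean_dba(levels_dba):
            """Energy-average a set of A-weighted levels (Leq-style mean)."""
            levels = np.asarray(levels_dba, dtype=float)
            return 10.0 * np.log10(np.mean(10.0 ** (levels / 10.0)))

        renovated = [52.0, 54.5, 53.0]        # hypothetical spot levels, dBA
        nonrenovated = [57.5, 59.0, 58.0]
        reduction = mean_dba(nonrenovated) - mean_dba(renovated)   # of the order of a few dBA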

  19. Ability of primary auditory cortical neurons to detect amplitude modulation with rate and temporal codes: neurometric analysis

    PubMed Central

    Johnson, Jeffrey S.; Yin, Pingbo; O'Connor, Kevin N.

    2012-01-01

    Amplitude modulation (AM) is a common feature of natural sounds, and its detection is biologically important. Even though most sounds are not fully modulated, the majority of physiological studies have focused on fully modulated (100% modulation depth) sounds. We presented AM noise at a range of modulation depths to awake macaque monkeys while recording from neurons in primary auditory cortex (A1). The ability of neurons to detect partial AM with rate and temporal codes was assessed with signal detection methods. On average, single-cell synchrony was as sensitive as, or more sensitive than, spike count for modulation detection. Cells are less sensitive to modulation depth if tested away from their best modulation frequency, particularly for temporal measures. Mean neural modulation detection thresholds in A1 are not as sensitive as behavioral thresholds, but with phase locking the most sensitive neurons are more sensitive, suggesting that for temporal measures the lower-envelope principle cannot account for thresholds. Three methods of preanalysis pooling of spike trains (multiunit, similar to convergence from a cortical column; within cell, similar to convergence of cells with matched response properties; across cell, similar to indiscriminate convergence of cells) all result in an increase in neural sensitivity to modulation depth for both temporal and rate codes. For the across-cell method, pooling of a few dozen cells can result in detection thresholds that approximate those of the behaving animal. With synchrony measures, indiscriminate pooling results in sensitive detection of modulation frequencies between 20 and 60 Hz, suggesting that differences in AM response phase are minor in A1. PMID:22422997
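
    The two response measures compared in such neurometric analyses are spike count and synchrony (vector strength); a minimal sketch is given below, using a d'-style separation index rather than the exact ROC procedure of the study.

        import numpy as np

        def vector_strength(spike_times, modulation_freq):
            """Phase locking of spikes to the AM envelope (1 = perfect synchrony)."""
            phases = 2.0 * np.pi * modulation_freq * np.asarray(spike_times)
            return np.abs(np.mean(np.exp(1j * phases)))

        def separation_index(metric_am, metric_unmod):
            """d'-style separation of a response metric between AM and unmodulated trials."""
            pooled_sd = np.sqrt(0.5 * (np.var(metric_am) + np.var(metric_unmod)))
            return (np.mean(metric_am) - np.mean(metric_unmod)) / pooled_sd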

  20. Cortical and Sensory Causes of Individual Differences in Selective Attention Ability Among Listeners With Normal Hearing Thresholds.

    PubMed

    Shinn-Cunningham, Barbara

    2017-10-17

    This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings. The results from neuroscience and psychoacoustics are reviewed. In noisy settings, listeners focus their attention by engaging cortical brain networks to suppress unimportant sounds; they then can analyze and understand an important sound, such as speech, amidst competing sounds. Differences in the efficacy of top-down control of attention can affect communication abilities. In addition, subclinical deficits in sensory fidelity can disrupt the ability to perceptually segregate sound sources, interfering with selective attention, even in listeners with NHTs. Studies of variability in control of attention and in sensory coding fidelity may help to isolate and identify some of the causes of communication disorders in individuals presenting at the clinic with "normal hearing." How well an individual with NHTs can understand speech amidst competing sounds depends not only on the sound being audible but also on the integrity of cortical control networks and the fidelity of the representation of suprathreshold sound. Understanding the root cause of difficulties experienced by listeners with NHTs ultimately can lead to new, targeted interventions that address specific deficits affecting communication in noise. http://cred.pubs.asha.org/article.aspx?articleid=2601617.

  1. Experimental and Analytical Determination of the Geometric Far Field for Round Jets

    NASA Technical Reports Server (NTRS)

    Koch, L. Danielle; Bridges, James E.; Brown, Clifford E.; Khavaran, Abbas

    2005-01-01

    An investigation was conducted at the NASA Glenn Research Center using a set of three round jets operating under unheated subsonic conditions to address the question: "How close is too close?" Although sound sources are distributed at various distances throughout a jet plume downstream of the nozzle exit, at great distances from the nozzle the sound will appear to emanate from a point and the inverse-square law can be properly applied. Examination of normalized sound spectra at different distances from a jet, from experiments and from computational tools, established the required minimum distance for valid far-field measurements of the sound from subsonic round jets. Experimental data were acquired in the Aeroacoustic Propulsion Laboratory at the NASA Glenn Research Center. The WIND computer program solved the Reynolds-Averaged Navier-Stokes equations for aerodynamic computations; the MGBK jet-noise prediction computer code was used to predict the sound pressure levels. Results from both the experiments and the analytical exercises indicated that while the shortest measurement arc (with radius approximately 8 nozzle diameters) was already in the geometric far field for high-frequency sound (Strouhal number >5), low-frequency sound (Strouhal number <0.2) reached the geometric far field at a measurement radius of at least 50 nozzle diameters because of its extended source distribution.
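
    Two quantities organize the result: the Strouhal number of the band under test and the inverse-square (spherical-spreading) decay expected once a measurement arc is in the geometric far field. A minimal sketch follows; the numbers are illustrative.

        import numpy as np

        def strouhal(frequency_hz, nozzle_diameter_m, jet_velocity_ms):
            """Strouhal number St = f * D / U for a round jet."""
            return frequency_hz * nozzle_diameter_m / jet_velocity_ms

        def inverse_square_change(r1, r2):
            """Expected SPL change (dB) from radius r1 to r2 under spherical spreading."""
            return -20.0 * np.log10(r2 / r1)

        # In the geometric far field, doubling the measurement radius drops the level ~6 dB
        print(inverse_square_change(50.0, 100.0))   # -> about -6.0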

  2. Clinical Interventions for Hyperacusis in Adults: A Scoping Review to Assess the Current Position and Determine Priorities for Research

    PubMed Central

    Potgieter, Iskra; Baguley, David M.

    2017-01-01

    Background: There is no universally accepted definition for hyperacusis, but in general it is characterised by decreased sound tolerance to ordinary environmental sounds. Despite hyperacusis being prevalent and having significant clinical implications, much remains unknown about current management strategies. Purpose: To establish the current position of research on hyperacusis and identify research gaps to direct future research. Design and Sample: Using an established methodological framework, electronic and manual searches of databases and journals identified 43 records that met our inclusion criteria. Incorporating content and thematic analysis approaches, the definitions of hyperacusis, management strategies, and outcome measures were catalogued. Results: Only 67% of the studies provided a definition of hyperacusis, such as “reduced tolerance” or “oversensitivity to sound.” Assessments and outcome measures included Loudness Discomfort Levels, the Hyperacusis Questionnaire, and the Tinnitus Retraining Therapy (TRT) interview. Management strategies reported were Cognitive Behavioural Therapy, TRT, devices, pharmacological therapy, and surgery. Conclusions: Management strategies were typically evaluated in patients reporting hyperacusis as a secondary complaint or as part of a symptom set. As such, the outcomes reported only provided an indication of their effectiveness for hyperacusis. Randomised Controlled Trials are needed to evaluate the effectiveness of management strategies for patients experiencing hyperacusis. PMID:29312994

  3. Categorization of common sounds by cochlear implanted and normal hearing adults.

    PubMed

    Collett, E; Marx, M; Gaillard, P; Roby, B; Fraysse, B; Deguine, O; Barone, P

    2016-05-01

    Auditory categorization involves grouping of acoustic events along one or more shared perceptual dimensions which can relate to both semantic and physical attributes. This process involves both high-level cognitive processes (categorization) and low-level perceptual encoding of the acoustic signal, both of which are affected by the use of a cochlear implant (CI) device. The goal of this study was twofold: (I) to compare the categorization strategies of CI users and normal-hearing listeners (NHL), and (II) to investigate whether any characteristics of the raw acoustic signal could explain the results. 16 experienced CI users and 20 NHL were tested using a Free-Sorting Task of 16 common sounds divided into 3 predefined categories of environmental, musical and vocal sounds. Multiple Correspondence Analysis (MCA) and Hierarchical Clustering based on Principal Components (HCPC) show that CI users followed a similar categorization strategy to that of NHL and were able to discriminate between the three different types of sounds. However, results for CI users were more varied and showed less inter-participant agreement. Acoustic analysis also highlighted the average pitch salience and average autocorrelation peak as being important for the perception and categorization of the sounds. The results therefore show that on a broad level of categorization CI users may not have as many difficulties as previously thought in discriminating certain kinds of sound; however, the perception of individual sounds remains challenging. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Roles for Coincidence Detection in Coding Amplitude-Modulated Sounds

    PubMed Central

    Ashida, Go; Kretzberg, Jutta; Tollin, Daniel J.

    2016-01-01

    Many sensory neurons encode temporal information by detecting coincident arrivals of synaptic inputs. In the mammalian auditory brainstem, binaural neurons of the medial superior olive (MSO) are known to act as coincidence detectors, whereas in the lateral superior olive (LSO) roles of coincidence detection have remained unclear. LSO neurons receive excitatory and inhibitory inputs driven by ipsilateral and contralateral acoustic stimuli, respectively, and vary their output spike rates according to interaural level differences. In addition, LSO neurons are also sensitive to binaural phase differences of low-frequency tones and envelopes of amplitude-modulated (AM) sounds. Previous physiological recordings in vivo found considerable variations in monaural AM-tuning across neurons. To investigate the underlying mechanisms of the observed temporal tuning properties of LSO and their sources of variability, we used a simple coincidence counting model and examined how specific parameters of coincidence detection affect monaural and binaural AM coding. Spike rates and phase-locking of evoked excitatory and spontaneous inhibitory inputs had only minor effects on LSO output to monaural AM inputs. In contrast, the coincidence threshold of the model neuron affected both the overall spike rates and the half-peak positions of the AM-tuning curve, whereas the width of the coincidence window merely influenced the output spike rates. The duration of the refractory period affected only the low-frequency portion of the monaural AM-tuning curve. Unlike monaural AM coding, temporal factors, such as the coincidence window and the effective duration of inhibition, played a major role in determining the trough positions of simulated binaural phase-response curves. In addition, empirically-observed level-dependence of binaural phase-coding was reproduced in the framework of our minimalistic coincidence counting model. These modeling results suggest that coincidence detection of excitatory and inhibitory synaptic inputs is essential for LSO neurons to encode both monaural and binaural AM sounds. PMID:27322612
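
    A heavily simplified reading of such a coincidence-counting model is sketched below; the interaction rule and all parameter values (window, threshold, inhibitory block, refractory period) are illustrative, not the fitted values of the study.

        import numpy as np

        def coincidence_counter(exc_times, inh_times, window=0.0008,
                                threshold=3, inh_block=0.002, refractory=0.0016):
            """Toy LSO-like unit: spike when at least `threshold` excitatory inputs fall
            within `window` s and no inhibitory input arrived in the last `inh_block` s."""
            exc, inh = np.sort(exc_times), np.sort(inh_times)
            out, last_spike = [], -np.inf
            for t in exc:
                if t - last_spike < refractory:
                    continue
                n_coinc = np.sum((exc > t - window) & (exc <= t))
                blocked = np.any((inh > t - inh_block) & (inh <= t))
                if n_coinc >= threshold and not blocked:
                    out.append(t)
                    last_spike = t
            return np.array(out)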

  5. Sounding of the Ion Energization Region: Resolving Ambiguities

    NASA Technical Reports Server (NTRS)

    LaBelle, James

    2003-01-01

    Dartmouth College provided a single-channel high-frequency wave receiver to the Sounding of the Ion Energization Region: Resolving Ambiguities (SIERRA) rocket experiment launched from Poker Flat, Alaska, in January 2002. The receiver used signals from booms, probes, preamplifiers, and differential amplifiers provided by Cornell University coinvestigators. Output was to a dedicated 5 MHz telemetry link provided by WFF, with a small amount of additional Pulse Code Modulation (PCM) telemetry required for the receiver gain information. We also performed preliminary analysis of the data. The work completed is outlined below, in chronological order.

  6. Letter-Sound Reading: Teaching Preschool Children Print-to-Sound Processing

    PubMed Central

    2015-01-01

    This intervention study investigated the growth of letter sound reading and growth of consonant–vowel–consonant (CVC) word decoding abilities for a representative sample of 41 US children in preschool settings. Specifically, the study evaluated the effectiveness of a 3-step letter-sound teaching intervention in teaching pre-school children to decode, or read, single letters. The study compared a control group, which received the preschool’s standard letter-sound instruction, to an intervention group which received a 3-step letter-sound instruction intervention. The children’s growth in letter-sound reading and CVC word decoding abilities were assessed at baseline and 2, 4, 6 and 8 weeks. When compared to the control group, the growth of letter-sound reading ability was slightly higher for the intervention group. The rate of increase in letter-sound reading was significantly faster for the intervention group. In both groups, too few children learned to decode any CVC words to allow for analysis. Results of this study support the use of the intervention strategy in preschools for teaching children print-to-sound processing. PMID:26839494

  7. Designing Emotionally Sound Instruction: The FEASP-Approach.

    ERIC Educational Resources Information Center

    Astleitner, Hermann

    2000-01-01

    Presents strategies for making instruction more emotionally sound based on the FEASP (fear, envy, anger, sympathy, pleasure) approach. Highlights include the roles of emotions in cognitive instructional design, in motivational design of instruction, in affective education, and in emotional education; and a framework for Emotional Design of…

  8. Pulse Vector-Excitation Speech Encoder

    NASA Technical Reports Server (NTRS)

    Davidson, Grant; Gersho, Allen

    1989-01-01

    Proposed pulse vector-excitation speech encoder (PVXC) encodes analog speech signals into digital representation for transmission or storage at rates below 5 kilobits per second. Produces high quality of reconstructed speech, but with less computation than required by comparable speech-encoding systems. Has some characteristics of multipulse linear predictive coding (MPLPC) and of code-excited linear prediction (CELP). System uses mathematical model of vocal tract in conjunction with set of excitation vectors and perceptually-based error criterion to synthesize natural-sounding speech.

  9. Benchmark problems in computational aeroacoustics

    NASA Technical Reports Server (NTRS)

    Porter-Locklear, Freda

    1994-01-01

    A recent directive at NASA Langley is aimed at numerically predicting principal noise sources. During my summer stay, I worked with the high-order ENO code, developed by Dr. Harold Atkins for solving the unsteady compressible Navier-Stokes equations, as it applies to computational aeroacoustics (CAA). A CAA workshop, composed of six categories of benchmark problems, has been organized to test various numerical properties of the code. My task was to determine the robustness of Atkins' code for these test problems. In one category, we tested the nonlinear wave propagation of the code for the one-dimensional Euler equations, with initial pressure, density, and velocity conditions. Using freestream boundary conditions, our results were plausible. In another category, we solved the linearized two-dimensional Euler equations to test the effectiveness of radiation boundary conditions. Here we utilized MAPLE to compute eigenvalues and eigenvectors of the Jacobian given the variable and flux vectors. We experienced a minor problem with the inflow and outflow boundary conditions. Next, we solved the quasi-one-dimensional unsteady flow equations with an incoming acoustic wave of amplitude 10^-6. The small-amplitude sound wave was incident on a convergent-divergent nozzle. After finding a steady-state solution and then marching forward, our solution indicated that after 30 periods the acoustic wave had dissipated (a period being the time required for the sound wave to travel from one end of the nozzle to the other).

  10. openPSTD: The open source pseudospectral time-domain method for acoustic propagation

    NASA Astrophysics Data System (ADS)

    Hornikx, Maarten; Krijnen, Thomas; van Harten, Louis

    2016-06-01

    An open source implementation of the Fourier pseudospectral time-domain (PSTD) method for computing the propagation of sound is presented, which is geared towards applications in the built environment. Being a wave-based method, PSTD captures phenomena like diffraction, but maintains efficiency in processing time and memory usage because it allows spatial sampling close to the Nyquist criterion, thus keeping both the required spatial and temporal resolution coarse. In the implementation, the physical geometry is modelled as a composition of rectangular two-dimensional subdomains, which initially restricts the implementation to orthogonal, two-dimensional situations. The strategy of using subdomains divides the problem domain into local subsets, which enables the simulation software to be built according to Object-Oriented Programming best practices and allows room for further computational parallelization. The software is built using the open source components Blender, Numpy and Python, and has been published under an open source license itself as well. An option has been included to accelerate the calculations by partially implementing the code on the Graphics Processing Unit (GPU), which increases the throughput by up to fifteen times. The details of the implementation are reported, as well as the accuracy of the code.
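
    The core operation that gives PSTD its coarse-sampling efficiency is the Fourier spectral spatial derivative; a one-dimensional, periodic-domain sketch is shown below (openPSTD itself works on staggered, non-periodic subdomains, which this toy example does not reproduce).

        import numpy as np

        def spectral_derivative(u, dx):
            """Fourier pseudospectral spatial derivative on a periodic grid."""
            k = 2.0 * np.pi * np.fft.fftfreq(len(u), d=dx)       # angular wavenumbers
            return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

        x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
        err = np.max(np.abs(spectral_derivative(np.sin(x), x[1] - x[0]) - np.cos(x)))
        # err is near machine precision: spectral accuracy even at coarse sampling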

  11. Aeroacoustic Simulation of Nose Landing Gear on Adaptive Unstructured Grids With FUN3D

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Park, Michael A.; Lockard, David P.

    2013-01-01

    Numerical simulations have been performed for a partially-dressed, cavity-closed nose landing gear configuration that was tested in NASA Langley's closed-wall Basic Aerodynamic Research Tunnel (BART) and in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D, developed at NASA Langley Research Center, is used to compute the unsteady flow field for this configuration. Starting with a coarse grid, a series of successively finer grids were generated using the adaptive gridding methodology available in the FUN3D code. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these computations. Time-averaged and instantaneous solutions obtained on these grids are compared with the measured data. In general, the correlation with the experimental data improves with grid refinement. A similar trend is observed for sound pressure levels obtained by using these CFD solutions as input to a Ffowcs Williams-Hawkings noise propagation code to compute the farfield noise levels. In general, the numerical solutions obtained on adapted grids compare well with the hand-tuned enriched fine grid solutions and experimental data. In addition, the grid adaption strategy discussed here simplifies the grid generation process and results in improved computational efficiency of CFD simulations.

  12. Slow Temporal Integration Enables Robust Neural Coding and Perception of a Cue to Sound Source Location.

    PubMed

    Brown, Andrew D; Tollin, Daniel J

    2016-09-21

    In mammals, localization of sound sources in azimuth depends on sensitivity to interaural differences in sound timing (ITD) and level (ILD). Paradoxically, while typical ILD-sensitive neurons of the auditory brainstem require millisecond synchrony of excitatory and inhibitory inputs for the encoding of ILDs, human and animal behavioral ILD sensitivity is robust to temporal stimulus degradations (e.g., interaural decorrelation due to reverberation), or, in humans, bilateral clinical device processing. Here we demonstrate that behavioral ILD sensitivity is only modestly degraded with even complete decorrelation of left- and right-ear signals, suggesting the existence of a highly integrative ILD-coding mechanism. Correspondingly, we find that a majority of auditory midbrain neurons in the central nucleus of the inferior colliculus (of chinchilla) effectively encode ILDs despite complete decorrelation of left- and right-ear signals. We show that such responses can be accounted for by relatively long windows of bilateral excitatory-inhibitory interaction, which we explicitly measure using trains of narrowband clicks. Neural and behavioral data are compared with the outputs of a simple model of ILD processing with a single free parameter, the duration of excitatory-inhibitory interaction. Behavioral, neural, and modeling data collectively suggest that ILD sensitivity depends on binaural integration of excitation and inhibition within a ≳3 ms temporal window, significantly longer than observed in lower brainstem neurons. This relatively slow integration potentiates a unique role for the ILD system in spatial hearing that may be of particular importance when informative ITD cues are unavailable. In mammalian hearing, interaural differences in the timing (ITD) and level (ILD) of impinging sounds carry critical information about source location. However, natural sounds are often decorrelated between the ears by reverberation and background noise, degrading the fidelity of both ITD and ILD cues. Here we demonstrate that behavioral ILD sensitivity (in humans) and neural ILD sensitivity (in single neurons of the chinchilla auditory midbrain) remain robust under stimulus conditions that render ITD cues undetectable. This result can be explained by "slow" temporal integration arising from several-millisecond-long windows of excitatory-inhibitory interaction evident in midbrain, but not brainstem, neurons. Such integrative coding can account for the preservation of ILD sensitivity despite even extreme temporal degradations in ecological acoustic stimuli. Copyright © 2016 the authors 0270-6474/16/369908-14$15.00/0.

  13. Sound Symbolic Patterns in Pokémon Names.

    PubMed

    Kawahara, Shigeto; Noto, Atsushi; Kumagai, Gakuji

    2018-04-11

    This paper presents a case study of sound symbolism, cases in which certain sounds tend to be associated with particular meanings. We used the corpus of all Japanese Pokémon names available as of October 2016. We tested the effects of voiced obstruents, mora counts, and vowel quality on Pokémon characters' size, weight, strength parameters, and evolution levels. We found that the number of voiced obstruents in Pokémon names correlates positively with size, weight, evolution levels, and general strength parameters, except for speed. We argue that this result is compatible with the frequency code hypothesis of Ohala. The number of moras in Pokémon names correlates positively with size, weight, evolution levels, and all strength parameters. Vowel height is also shown to have an influence on size and weight - Pokémon characters with initial high vowels tend to be smaller and lighter, although the effect size is not very large. Not only does this paper offer a new case study of sound symbolism, it provides evidence that sound symbolism is at work when naming proper nouns. © 2018 S. Karger AG, Basel.

  14. Modelling sound propagation in the Southern Ocean to estimate the acoustic impact of seismic research surveys on marine mammals

    NASA Astrophysics Data System (ADS)

    Breitzke, Monika; Bohlen, Thomas

    2010-05-01

    Modelling sound propagation in the ocean is an essential tool for assessing the potential risk of air-gun shots to marine mammals. Based on a 2.5-D finite-difference code, a full waveform modelling approach is presented that determines both the sound exposure levels of single shots and the cumulative sound exposure levels of multiple shots fired along a seismic line. Band-limited point source approximations of compact air-gun clusters deployed by R/V Polarstern in polar regions are used as sound sources. Marine mammals are simulated as static receivers. Applications to deep and shallow water models including constant and depth-dependent sound velocity profiles of the Southern Ocean show dipole-like directivities in the case of single shots and tubular cumulative sound exposure level fields beneath the seismic line in the case of multiple shots. Compared to a semi-infinite model, incorporating seafloor reflections enhances the seismically induced noise levels close to the sea surface. Refraction due to sound velocity gradients and sound channelling in near-surface ducts are evident, but affect only low to moderate levels. Hence, exposure zone radii derived for different hearing thresholds are almost independent of the sound velocity structure. With decreasing thresholds, radii increase according to a spherical 20 log10 r law in the case of single shots and according to a cylindrical 10 log10 r law in the case of multiple shots. Doubling the shot interval diminishes the cumulative sound exposure levels by 3 dB and halves the radii. The ocean bottom properties only slightly affect the radii in shallow waters if the normal incidence reflection coefficient exceeds 0.2.
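
    The reported scaling of exposure-zone radii follows directly from the two spreading laws. The sketch below uses hypothetical source-level and threshold values (not those of the Polarstern air-gun clusters) to show how the radii grow as thresholds decrease and how a 3 dB reduction in cumulative exposure halves the cylindrical-law radius.

```python
# Sketch of the spreading-law scaling reported in the abstract, using
# hypothetical source-level and hearing-threshold numbers for illustration.
def radius_spherical(source_level_db, threshold_db):
    # single shot: received level = SL - 20*log10(r)
    return 10 ** ((source_level_db - threshold_db) / 20.0)

def radius_cylindrical(source_level_db, threshold_db):
    # cumulative exposure along the line: level = SL - 10*log10(r)
    return 10 ** ((source_level_db - threshold_db) / 10.0)

SL = 230.0                      # hypothetical source level, dB re 1 uPa @ 1 m
for thr in (190.0, 180.0, 170.0):
    r_sph = radius_spherical(SL, thr)
    r_cyl = radius_cylindrical(SL, thr)
    print(f"threshold {thr:5.1f} dB: single-shot r = {r_sph:9.0f} m, "
          f"cumulative r = {r_cyl:12.0f} m")

# Doubling the shot interval lowers cumulative exposure by 3 dB, which under
# the 10*log10(r) law halves the radius:
print(radius_cylindrical(SL - 3.0, 190.0) / radius_cylindrical(SL, 190.0))  # about 0.5
```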

  15. Directional Hearing and Sound Source Localization in Fishes.

    PubMed

    Sisneros, Joseph A; Rogers, Peter H

    2016-01-01

    Evidence suggests that the capacity for sound source localization is common to mammals, birds, reptiles, and amphibians, but surprisingly it is not known whether fish locate sound sources in the same manner (e.g., by combining binaural and monaural cues) or what computational strategies they use for successful source localization. Directional hearing and sound source localization in fishes continue to be important topics in neuroethology and in the hearing sciences, but the empirical and theoretical work on these topics has been contradictory and obscure for decades. This chapter reviews the previous behavioral work on directional hearing and sound source localization in fishes, including the most recent experiments on sound source localization by the plainfin midshipman fish (Porichthys notatus), which has proven to be an exceptional species for fish studies of sound localization. In addition, the theoretical models of directional hearing and sound source localization for fishes are reviewed, including a new model that uses a time-averaged intensity approach for source localization and has wide applicability with regard to source type, acoustic environment, and time waveform.

  16. Short-term memory coding in children with intellectual disabilities.

    PubMed

    Henry, Lucy

    2008-05-01

    To examine visual and verbal coding strategies, I asked children with intellectual disabilities and peers matched for MA and CA to perform picture memory span tasks with phonologically similar, visually similar, long, or nonsimilar named items. The CA group showed effects consistent with advanced verbal memory coding (phonological similarity and word length effects). Neither the intellectual disabilities group nor the MA group showed evidence of memory coding strategies. However, children in these groups with MAs above 6 years showed significant visual similarity and word length effects, broadly consistent with an intermediate stage of dual visual and verbal coding. These results suggest that developmental progressions in memory coding strategies are independent of intellectual disabilities status and consistent with MA.

  17. Numerical flow simulation of a reusable sounding rocket during nose-up rotation

    NASA Astrophysics Data System (ADS)

    Kuzuu, Kazuto; Kitamura, Keiichi; Fujimoto, Keiichiro; Shima, Eiji

    2010-11-01

    Flow around a reusable sounding rocket during nose-up rotation is simulated using an unstructured compressible CFD code. While a reusable sounding rocket is expected to reduce the cost of flight management, the vehicle must perform well over a wide range of flight conditions, from vertical take-off to vertical landing. The rotating motion of the body, which corresponds to the vehicle's manoeuvre just before vertical landing, is one of the flight conditions that most strongly affects its aerodynamic design. Unlike the space shuttle at landing, this vehicle must rotate nose-up from its gliding attitude to its vertical landing attitude. During this rotation, the vehicle generates massive separations in the wake. As a result, the induced flow becomes unsteady and can influence the aerodynamic characteristics of the vehicle. In this study, we focus on the analysis of such dynamic characteristics of the rotating vehicle. The numerical code employed is based on a cell-centered finite volume compressible flow solver applied to a moving grid system. The moving grid is introduced for the analysis of the rotating motion. Furthermore, in order to capture unsteady turbulence, we employ the DDES (delayed detached-eddy simulation) method as a turbulence model. In this simulation, the flight velocity is subsonic. Through this simulation, we discuss the effects of the vehicle's shape and motion on its aerodynamic characteristics.

  18. Easily extensible unix software for spectral analysis, display, modification, and synthesis of musical sounds

    NASA Astrophysics Data System (ADS)

    Beauchamp, James W.

    2002-11-01

    Software has been developed which enables users to perform time-varying spectral analysis of individual musical tones, or successions of them, and to perform further processing of the data. The package, called sndan, is freely available in source code, uses EPS graphics for display, and is written in ANSI C for ease of code modification and extension. Two analyzers, a fixed-filter-bank phase vocoder ("pvan") and a frequency-tracking analyzer ("mqan"), constitute the analysis front end of the package. While pvan's output consists of continuous amplitudes and frequencies of harmonics, mqan produces disjoint "tracks." However, another program extracts a fundamental frequency and separates harmonics from the tracks, resulting in a continuous harmonic output. "monan" is a program used to display harmonic data in a variety of formats, perform various spectral modifications, and perform additive resynthesis of the harmonic partials, including possible pitch-shifting and time-scaling. Sounds can also be synthesized according to a musical score using a companion synthesis language, Music 4C. Several other programs in the sndan suite can be used for specialized tasks, such as signal display and editing. Applications of the software include producing specialized sounds for music compositions or psychoacoustic experiments, or serving as a basis for developing new synthesis algorithms.
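
    As an illustration of the kind of analysis pvan performs, the sketch below tracks time-varying harmonic amplitudes of a tone with a known fundamental using an ordinary STFT; it is not sndan source code, and the fundamental, harmonic count and frame length are assumed for the example.

```python
# Illustrative harmonic-amplitude tracker in the spirit of a fixed-filter-bank
# phase-vocoder analysis (not sndan source code).
import numpy as np
from scipy.signal import stft

def harmonic_envelopes(x, fs, f0, n_harmonics=8, nperseg=2048):
    """Return STFT frame times and an (n_harmonics, n_frames) array of amplitudes."""
    f, frames, Z = stft(x, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
    mag = np.abs(Z)
    env = np.empty((n_harmonics, len(frames)))
    for k in range(1, n_harmonics + 1):
        bin_k = np.argmin(np.abs(f - k * f0))   # nearest STFT bin to harmonic k
        env[k - 1] = mag[bin_k]
    return frames, env

# Synthetic test tone: decaying harmonics of a 220 Hz fundamental.
fs, dur, f0 = 44100, 1.0, 220.0
t = np.arange(int(fs * dur)) / fs
x = sum((1.0 / k) * np.exp(-3.0 * k * t) * np.sin(2 * np.pi * k * f0 * t)
        for k in range(1, 9))
frames, env = harmonic_envelopes(x, fs, f0)
print(env.shape)                 # (harmonics, analysis frames)
print(env[:, 1].round(4))        # harmonic amplitudes in an early frame
```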

  19. Challenges to implementation of the WHO Global Code of Practice on International Recruitment of Health Personnel: the case of Sudan.

    PubMed

    Abuagla, Ayat; Badr, Elsheikh

    2016-06-30

    The WHO Global Code of Practice on the International Recruitment of Health Personnel (hereafter the WHO Code) was adopted by the World Health Assembly in 2010 as a voluntary instrument to address challenges of health worker migration worldwide. To ascertain its relevance and effectiveness, the implementation of the WHO Code needs to be assessed based on country experience; hence, this case study on Sudan. This qualitative study depended mainly on documentary sources in addition to key informant interviews. The authors' own experience has also informed the analysis. Migration of Sudanese health workers represents a major health system challenge. Over half of Sudanese physicians practice abroad, and new trends show the involvement of other professions and increased feminization. Traditional destinations include the Gulf States, especially Saudi Arabia, and Libya, as well as the United Kingdom and the Republic of Ireland. Low salaries, a poor work environment, and a lack of adequate professional development are the leading push factors. Massive emigration of skilled health workers has jeopardized the coverage and quality of healthcare and health professional education. Poor evidence, the lack of a national policy, and active recruitment, in addition to labour market problems, were barriers to effective migration management in Sudan. The response of destination countries in relation to cooperative arrangements with Sudan as a source country has always been suboptimal, demonstrating little attention to solidarity and ethical dimensions. The WHO Code boosted Sudan's efforts to address health worker migration and health workforce development in general. Improved migration evidence, a fostered national dialogue, and the promotion of bilateral agreements, in addition to catalysing health worker retention strategies, are some of the benefits accrued. There are, however, limitations in the publicity of the WHO Code and its incorporation into national laws and regulatory frameworks for ethical recruitment. The outlook is bleak for Sudan unless the country designs and implements a robust national policy for migration management and unless prospects for source-destination country collaboration improve within a more sound version of the WHO Code. The WHO Code catalysed some vital steps in managing migration and strengthening the national health workforce in Sudan. Nevertheless, the country has not utilized the full potential of this instrument. Revisions of the WHO Code would benefit much from the lessons of its application in the context of developing countries such as Sudan.

  20. Comparison of three coding strategies for a low cost structure light scanner

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Xu, Jun; Xu, Chenxi; Pan, Ming

    2014-12-01

    Coded structured light is widely used for 3D scanning, and different coding strategies are adopted to suit different goals. In this paper, three coding strategies are compared, and one of them is selected to implement a low-cost structured light scanner costing under €100. To reach this goal, the projector and the video camera must be as cheap as possible, which leads to some problems related to light coding. A very cheap projector cannot generate a complex intensity pattern; even if it could, the pattern could not be captured by a very cheap camera. Based on Gray codes, three different strategies are implemented and compared, called phase-shift, line-shift, and bit-shift, respectively. The bit-shift Gray code is the contribution of this paper, in which a simple, stable light pattern is used to generate dense (mean point spacing < 0.4 mm) and accurate (mean error < 0.1 mm) results. The full algorithm details and some examples are presented in the paper.
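
    The sketch below shows the common starting point of all three strategies: generating binary-reflected Gray-code stripe patterns for the projector and decoding a pixel's stripe index from the bits a camera observes. The paper's bit-shift refinement is not reproduced, and the pattern width and bit depth are arbitrary example values.

```python
# Sketch: classic Gray-code stripe patterns for a structured-light projector,
# plus decoding of a pixel's stripe index from the observed bit sequence.
import numpy as np

def gray_code_patterns(width, n_bits):
    """Return an array (n_bits, width): one binary stripe row per projected bit."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                        # binary-reflected Gray code
    patterns = np.stack([(gray >> b) & 1 for b in reversed(range(n_bits))])
    return patterns.astype(np.uint8)

def decode_column(bits):
    """Recover the projector column index from the per-pixel bit sequence (MSB first)."""
    gray = 0
    for b in bits:
        gray = (gray << 1) | int(b)
    binary, shift = gray, 1
    while gray >> shift:                             # Gray -> binary conversion
        binary ^= gray >> shift
        shift += 1
    return binary

width, n_bits = 1024, 10
pats = gray_code_patterns(width, n_bits)
col = 437
observed = pats[:, col]                              # bits a camera pixel would see
print(decode_column(observed) == col)                # True
```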

  1. Delay Analysis of Car-to-Car Reliable Data Delivery Strategies Based on Data Mulling with Network Coding

    NASA Astrophysics Data System (ADS)

    Park, Joon-Sang; Lee, Uichin; Oh, Soon Young; Gerla, Mario; Lun, Desmond Siumen; Ro, Won Woo; Park, Joonseok

    Vehicular ad hoc networks (VANETs) aim to enhance vehicle navigation safety by providing an early warning system: any chance of an accident is communicated through wireless communication between vehicles. For the warning system to work, it is crucial that safety messages be reliably delivered to the target vehicles in a timely manner, and thus a reliable and timely data dissemination service is the key building block of a VANET. A data mulling technique combined with three strategies, network coding, erasure coding and repetition coding, is proposed for the reliable and timely data dissemination service. In particular, vehicles travelling in the opposite direction on a highway are exploited as data mules, mobile nodes that physically deliver data to destinations, to overcome the intermittent network connectivity caused by sparse vehicle traffic. Using analytic models, we show that in such a highway data mulling scenario the network coding based strategy outperforms the erasure coding and repetition based strategies.
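
    The advantage of the network-coding strategy rests on a simple linear-algebra property, sketched below: a destination can recover k source packets from any k linearly independent GF(2) combinations handed over by a data mule, whereas with plain repetition a specific lost packet cannot be rebuilt from the others. The packet sizes and random-combination scheme are illustrative assumptions, not the paper's exact protocol.

```python
# Sketch of random linear network coding over GF(2): any k linearly independent
# coded packets suffice to recover the k originals.
import numpy as np

rng = np.random.default_rng(7)
k, packet_len = 4, 8
originals = rng.integers(0, 2, size=(k, packet_len), dtype=np.uint8)

def gf2_eliminate(A, B):
    """Reduce [A | B] over GF(2); returns (reduced A, reduced B, rank)."""
    A, B, rank = A.copy(), B.copy(), 0
    for col in range(A.shape[1]):
        pivot = next((r for r in range(rank, A.shape[0]) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]], B[[rank, pivot]] = A[[pivot, rank]], B[[pivot, rank]]
        for r in range(A.shape[0]):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]
                B[r] ^= B[rank]
        rank += 1
    return A, B, rank

# The mule keeps handing over random coded packets until the receiver holds k
# linearly independent ones (losses simply mean waiting for more packets).
coeff_rows, coded_rows = [], []
while True:
    c = rng.integers(0, 2, size=k, dtype=np.uint8)
    coeff_rows.append(c)
    coded_rows.append((c @ originals) % 2)           # GF(2) linear combination
    A, B, rank = gf2_eliminate(np.array(coeff_rows, dtype=np.uint8),
                               np.array(coded_rows, dtype=np.uint8))
    if rank == k:
        break

print(len(coeff_rows), "coded packets were enough")
print(np.array_equal(B[:k], originals))              # decoded packets match originals
```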

  2. Emphasis of spatial cues in the temporal fine structure during the rising segments of amplitude-modulated sounds II: single-neuron recordings

    PubMed Central

    Marquardt, Torsten; Stange, Annette; Pecka, Michael; Grothe, Benedikt; McAlpine, David

    2014-01-01

    Recently, with the use of an amplitude-modulated binaural beat (AMBB), in which sound amplitude and interaural-phase difference (IPD) were modulated with a fixed mutual relationship (Dietz et al. 2013b), we demonstrated that the human auditory system uses interaural timing differences in the temporal fine structure of modulated sounds only during the rising portion of each modulation cycle. However, the degree to which peripheral or central mechanisms contribute to the observed strong dominance of the rising slope remains to be determined. Here, by recording responses of single neurons in the medial superior olive (MSO) of anesthetized gerbils and in the inferior colliculus (IC) of anesthetized guinea pigs to AMBBs, we report a correlation between the position within the amplitude-modulation (AM) cycle generating the maximum response rate and the position at which the instantaneous IPD dominates the total neural response. The IPD during the rising segment dominates the total response in 78% of MSO neurons and 69% of IC neurons, with responses of the remaining neurons predominantly coding the IPD around the modulation maximum. The observed diversity of dominance regions within the AM cycle, especially in the IC, and its comparison with the human behavioral data suggest that only the subpopulation of neurons with rising slope dominance codes the sound-source location in complex listening conditions. A comparison of two models to account for the data suggests that emphasis on IPDs during the rising slope of the AM cycle depends on adaptation processes occurring before binaural interaction. PMID:24554782
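
    For readers unfamiliar with the stimulus, the sketch below generates a generic amplitude-modulated binaural beat: the two ears receive carriers offset by the modulation frequency, so the instantaneous IPD sweeps through a full cycle in a fixed relationship with the AM envelope. The carrier and modulation frequencies are example values, not those used in the recordings.

```python
# Sketch of a generic amplitude-modulated binaural beat (AMBB) stimulus:
# the right-ear carrier is offset by the modulation frequency, so the
# instantaneous IPD advances through one full cycle per AM period.
import numpy as np

fs, dur = 48000, 1.0
fc = 500.0          # carrier frequency, left ear (Hz)
fm = 8.0            # AM rate and binaural-beat rate (Hz)

t = np.arange(int(fs * dur)) / fs
envelope = 0.5 * (1.0 - np.cos(2 * np.pi * fm * t))     # raised-cosine AM, 0..1
left = envelope * np.sin(2 * np.pi * fc * t)
right = envelope * np.sin(2 * np.pi * (fc + fm) * t)    # beat: IPD advances at fm

ipd = (2 * np.pi * fm * t) % (2 * np.pi)                # instantaneous IPD (rad)
stimulus = np.stack([left, right], axis=1)
print(stimulus.shape, ipd[:5].round(4))
```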

  3. Multidisciplinary Analysis and Optimal Design: As Easy as it Sounds?

    NASA Technical Reports Server (NTRS)

    Moore, Greg; Chainyk, Mike; Schiermeier, John

    2004-01-01

    The viewgraph presentation examines optimal design for precision, large-aperture structures. Discussion focuses on aspects of design optimization, code architecture and current capabilities, and planned activities and suggested areas of collaboration. The discussion of design optimization examines design sensitivity analysis; practical considerations; and new analytical environments, including a finite element-based capability for high-fidelity multidisciplinary analysis, design sensitivity, and optimization. The discussion of code architecture and current capabilities includes basic thermal and structural elements, nonlinear heat transfer solution processes, and optical mode generation.

  4. Computational fluid mechanics utilizing the variational principle of modeling damping seals

    NASA Technical Reports Server (NTRS)

    Abernathy, J. M.

    1986-01-01

    A computational fluid dynamics code for application to traditional incompressible flow problems has been developed. The method is actually a slight compressibility approach which takes advantage of the bulk modulus and finite sound speed of all real fluids. The finite element numerical analog uses a dynamic differencing scheme based, in part, on a variational principle for computational fluid dynamics. The code was developed in order to study the feasibility of damping seals for high speed turbomachinery. Preliminary seal analyses have been performed.
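
    The slight-compressibility idea can be stated compactly; the relations below are a generic textbook form (the report's exact discretized formulation may differ), with K the bulk modulus, rho the density, mu the viscosity and c the resulting finite sound speed.

```latex
% Generic slight-compressibility relations (illustrative, not the report's
% exact formulation): pressure evolves from the velocity divergence through
% the bulk modulus K, giving a finite sound speed c.
\frac{\partial p}{\partial t} + K\,\nabla\!\cdot\mathbf{u} = 0, \qquad
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right)
   = -\nabla p + \mu\,\nabla^{2}\mathbf{u}, \qquad
c = \sqrt{K/\rho}.
```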

  5. Automatically Preparing Safe SQL Queries

    NASA Astrophysics Data System (ADS)

    Bisht, Prithvi; Sistla, A. Prasad; Venkatakrishnan, V. N.

    We present the first sound program source transformation approach for automatically transforming the code of a legacy web application to employ PREPARE statements in place of unsafe SQL queries. Our approach therefore opens the way for eradicating the SQL injection threat vector from legacy web applications.
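
    The transformation target can be illustrated with a minimal example. The snippet below is not the authors' tool and uses Python's sqlite3 only as a stand-in for a legacy web application's database layer; it contrasts a string-concatenated query, which an injection payload subverts, with the equivalent parameterized (PREPARE-style) query.

```python
# Illustration of the transformation target: replace a string-concatenated SQL
# query with a parameterized (PREPARE-style) query bound to the user input.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"     # classic injection payload

# Vulnerable: attacker-controlled text is spliced directly into the SQL string.
unsafe = "SELECT * FROM users WHERE name = '%s'" % user_input
print("unsafe query returns", len(conn.execute(unsafe).fetchall()), "rows")

# Safe: the query skeleton is fixed; the input is bound as data, never parsed as SQL.
safe = "SELECT * FROM users WHERE name = ?"
print("prepared query returns", len(conn.execute(safe, (user_input,)).fetchall()), "rows")
```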

  6. 40 CFR 52.2470 - Identification of plan.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...; Washington Administrative Code Chapter 173-430 (Burning of Field and Forage and Turf Grasses Grown for Seed...; Appendix G, Outline of Puget Sound Tropospheric Ozone Research Plan; and Appendix H, Prospective Vehicle... Households; Appendix H, Portland/Vancouver Carbon Monoxide Nonattainment Area Separation Documentation...

  7. The roughness of grounded ice sheet beds: Case studies from high resolution radio echo sounding studies in Antarctica

    NASA Astrophysics Data System (ADS)

    Young, Duncan; Blankeship, Donald; Beem, Lucas; Cavitte, Marie; Quartini, Enrica; Lindzey, Laura; Jackson, Charles; Roberts, Jason; Ritz, Catherine; Siegert, Martin; Greenbaum, Jamin; Frederick, Bruce

    2017-04-01

    The roughness of subglacial interfaces (as measured by airborne radar echo sounding) at length scales between the profile line spacing and the footprint of the instrument is a key, but complex, signature of glacial and geomorphic processes, material lithology, and the integrated history at the bed of ice sheets. Subglacial roughness is also intertwined with assessments of ice thickness uncertainty in radar echo sounding and with the utility of interpolation methodologies, and it is a key aspect of subglacial access strategies. Here we present an assessment of subglacial roughness estimation in both West and East Antarctica, and compare this to exposed subglacial terrains. We will use recent high resolution aerogeophysical surveys to examine what variations in roughness are a fingerprint for, assess the limits of ice thickness uncertainty quantification, and compare strategies for roughness assessment and utilization.

  8. Comparing and validating methods of reading instruction using behavioural and neural findings in an artificial orthography.

    PubMed

    Taylor, J S H; Davis, Matthew H; Rastle, Kathleen

    2017-06-01

    There is strong scientific consensus that emphasizing print-to-sound relationships is critical when learning to read alphabetic languages. Nevertheless, reading instruction varies across English-speaking countries, from intensive phonic training to multicuing environments that teach sound- and meaning-based strategies. We sought to understand the behavioral and neural consequences of these differences in relative emphasis. We taught 24 English-speaking adults to read 2 sets of 24 novel words (e.g., /buv/, /sig/), written in 2 different unfamiliar orthographies. Following pretraining on oral vocabulary, participants learned to read the novel words over 8 days. Training in 1 language was biased toward print-to-sound mappings while training in the other language was biased toward print-to-meaning mappings. Results showed striking benefits of print-sound training on reading aloud, generalization, and comprehension of single words. Univariate analyses of fMRI data collected at the end of training showed that print-meaning training, relative to print-sound training, increased neural effort in dorsal pathway regions involved in reading aloud. Conversely, activity in ventral pathway brain regions involved in reading comprehension was no different following print-meaning versus print-sound training. Multivariate analyses validated our artificial language approach, showing high similarity between the spatial distribution of fMRI activity during artificial and English word reading. Our results suggest that early literacy education should focus on the systematicities present in print-to-sound relationships in alphabetic languages, rather than teaching meaning-based strategies, in order to enhance both reading aloud and comprehension of written words. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  9. Comparing and Validating Methods of Reading Instruction Using Behavioural and Neural Findings in an Artificial Orthography

    PubMed Central

    2017-01-01

    There is strong scientific consensus that emphasizing print-to-sound relationships is critical when learning to read alphabetic languages. Nevertheless, reading instruction varies across English-speaking countries, from intensive phonic training to multicuing environments that teach sound- and meaning-based strategies. We sought to understand the behavioral and neural consequences of these differences in relative emphasis. We taught 24 English-speaking adults to read 2 sets of 24 novel words (e.g., /buv/, /sig/), written in 2 different unfamiliar orthographies. Following pretraining on oral vocabulary, participants learned to read the novel words over 8 days. Training in 1 language was biased toward print-to-sound mappings while training in the other language was biased toward print-to-meaning mappings. Results showed striking benefits of print–sound training on reading aloud, generalization, and comprehension of single words. Univariate analyses of fMRI data collected at the end of training showed that print–meaning training, relative to print–sound training, increased neural effort in dorsal pathway regions involved in reading aloud. Conversely, activity in ventral pathway brain regions involved in reading comprehension was no different following print–meaning versus print–sound training. Multivariate analyses validated our artificial language approach, showing high similarity between the spatial distribution of fMRI activity during artificial and English word reading. Our results suggest that early literacy education should focus on the systematicities present in print-to-sound relationships in alphabetic languages, rather than teaching meaning-based strategies, in order to enhance both reading aloud and comprehension of written words. PMID:28425742

  10. An Intrinsically Digital Amplification Scheme for Hearing Aids

    NASA Astrophysics Data System (ADS)

    Blamey, Peter J.; Macfarlane, David S.; Steele, Brenton R.

    2005-12-01

    Results for linear amplification and wide-dynamic-range compression were compared with a new 64-channel digital amplification strategy in three separate studies. The new strategy addresses the requirements of the hearing aid user with efficient computations on an open-platform digital signal processor (DSP). The new amplification strategy is not modeled on prior analog strategies like compression and linear amplification, but uses statistical analysis of the signal to optimize the output dynamic range in each frequency band independently. Using the open-platform DSP also provided the opportunity for blind trial comparisons of the different processing schemes in BTE and ITE devices of a high commercial standard. The speech perception scores and questionnaire results show that it is possible to provide improved audibility for sound in many narrow frequency bands while simultaneously improving comfort, speech intelligibility in noise, and sound quality.

  11. [Perception and selectivity of sound duration in the central auditory midbrain].

    PubMed

    Wang, Xin; Li, An-An; Wu, Fei-Jian

    2010-08-25

    Sound duration plays an important role in acoustic communication. The information in acoustic signals is encoded mainly in the amplitude and frequency spectra of sounds of different durations. Duration-selective neurons exist in the central auditory system, including the inferior colliculus (IC) of frogs, bats, mice and chinchillas, etc., and they are important in signal recognition and feature detection. Two generally accepted models, the "coincidence detector model" and the "anti-coincidence detector model", have been proposed to explain the mechanism of neural selective responses to sound durations, based on the study of IC neurons in bats. Although they differ in detail, both emphasize the importance of synaptic integration of excitatory and inhibitory inputs, and both are able to explain the responses of most duration-selective neurons. However, both hypotheses need to be refined, since other sound parameters, such as spectral pattern, amplitude and repetition rate, can affect the duration selectivity of the neurons. The dynamic changes of sound parameters are believed to enable the animal to effectively recognize behaviorally relevant acoustic signals. Under free-field sound stimulation, we analyzed the responses of neurons in the IC and auditory cortex of mouse and bat to sounds of different duration, frequency and amplitude, using intracellular or extracellular recording techniques. Based on our work and previous studies, this article reviews the properties of duration selectivity in the central auditory system and discusses the mechanisms of duration selectivity and the effect of other sound parameters on the duration coding of auditory neurons.

  12. Listening to the Mind: Tracing the Auditory History of Mental Illness in Archives and Exhibitions.

    PubMed

    Birdsall, Carolyn; Parry, Manon; Tkaczyk, Viktoria

    2015-11-01

    With increasing interest in the representation of histories of mental health in museums, sound has played a key role as a tool to access a range of voices. This essay discusses how sound can be used to give voice to those previously silenced. The focus is on the use of sound recording in the history of mental health care, and the archival sources left behind for potential reuse. Exhibition strategies explored include the use of sound to interrogate established narratives, to interrupt associations visitors make when viewing the material culture of mental health, and to foster empathic listening among audiences.

  13. Cortical and Sensory Causes of Individual Differences in Selective Attention Ability Among Listeners With Normal Hearing Thresholds

    PubMed Central

    2017-01-01

    Purpose This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings. Method The results from neuroscience and psychoacoustics are reviewed. Results In noisy settings, listeners focus their attention by engaging cortical brain networks to suppress unimportant sounds; they then can analyze and understand an important sound, such as speech, amidst competing sounds. Differences in the efficacy of top-down control of attention can affect communication abilities. In addition, subclinical deficits in sensory fidelity can disrupt the ability to perceptually segregate sound sources, interfering with selective attention, even in listeners with NHTs. Studies of variability in control of attention and in sensory coding fidelity may help to isolate and identify some of the causes of communication disorders in individuals presenting at the clinic with “normal hearing.” Conclusions How well an individual with NHTs can understand speech amidst competing sounds depends not only on the sound being audible but also on the integrity of cortical control networks and the fidelity of the representation of suprathreshold sound. Understanding the root cause of difficulties experienced by listeners with NHTs ultimately can lead to new, targeted interventions that address specific deficits affecting communication in noise. Presentation Video http://cred.pubs.asha.org/article.aspx?articleid=2601617 PMID:29049598

  14. Verification of the helioseismology travel-time measurement technique and the inversion procedure for sound speed using artificial data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parchevsky, K. V.; Zhao, J.; Hartlep, T.

    We performed three-dimensional numerical simulations of the solar surface acoustic wave field for the quiet Sun and for three models with different localized sound-speed perturbations in the interior with deep, shallow, and two-layer structures. We used the simulated data generated by two solar acoustics codes that employ the same standard solar model as a background model, but utilize different integration techniques and different models of stochastic wave excitation. Acoustic travel times were measured using a time-distance helioseismology technique, and compared with predictions from ray theory frequently used for helioseismic travel-time inversions. It is found that the measured travel-time shifts agree well with the helioseismic theory for sound-speed perturbations, and for the measurement procedure with and without phase-speed filtering of the oscillation signals. This testing verifies the whole measuring-filtering-inversion procedure for static sound-speed anomalies with small amplitude inside the Sun outside regions of strong magnetic field. It is shown that the phase-speed filtering, frequently used to extract specific wave packets and improve the signal-to-noise ratio, does not introduce significant systematic errors. Results of the sound-speed inversion procedure show good agreement with the perturbation models in all cases. Due to its smoothing nature, the inversion procedure may overestimate sound-speed variations in regions with sharp gradients of the sound-speed profile.

  15. Design and evaluation of a parametric model for cardiac sounds.

    PubMed

    Ibarra-Hernández, Roilhi F; Alonso-Arévalo, Miguel A; Cruz-Gutiérrez, Alejandro; Licona-Chávez, Ana L; Villarreal-Reyes, Salvador

    2017-10-01

    Heart sound analysis plays an important role in the auscultative diagnosis process to detect the presence of cardiovascular diseases. In this paper we propose a novel parametric heart sound model that accurately represents normal and pathological cardiac audio signals, also known as phonocardiograms (PCG). The proposed model considers that the PCG signal is formed by the sum of two parts: one of them is deterministic and the other one is stochastic. The first part contains most of the acoustic energy. This part is modeled by the Matching Pursuit (MP) algorithm, which performs an analysis-synthesis procedure to represent the PCG signal as a linear combination of elementary waveforms. The second part, also called the residual, is obtained after subtracting the deterministic signal from the original heart sound recording and can be accurately represented as an autoregressive process using the Linear Predictive Coding (LPC) technique. We evaluate the proposed heart sound model by performing subjective and objective tests using signals corresponding to different pathological cardiac sounds. The results of the objective evaluation show an average Percentage of Root-Mean-Square Difference of approximately 5% between the original heart sound and the reconstructed signal. For the subjective test we followed a formal methodology for perceptual evaluation of audio quality with the assistance of medical experts. Statistical results of the subjective evaluation show that our model provides a highly accurate approximation of real heart sound signals. We are not aware of any previous heart sound model that has been evaluated as rigorously as our proposal. Copyright © 2017 Elsevier Ltd. All rights reserved.
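
    A much-simplified sketch of the two-part decomposition is given below: a greedy matching-pursuit pass over a small Gabor dictionary captures the high-energy deterministic part, and Yule-Walker LPC models the stochastic residual. The dictionary, model orders and synthetic test signal are illustrative choices, not those of the evaluated model.

```python
# Simplified sketch of the deterministic + stochastic PCG decomposition:
# matching pursuit over a small Gabor dictionary, then LPC on the residual.
import numpy as np

fs, n = 2000, 2000
t = np.arange(n) / fs

def gabor_atom(center, freq, width):
    g = np.exp(-0.5 * ((t - center) / width) ** 2) * np.cos(2 * np.pi * freq * t)
    return g / np.linalg.norm(g)

# Small illustrative dictionary of unit-norm Gabor atoms.
dictionary = [gabor_atom(c, f, 0.02)
              for c in np.linspace(0.1, 0.9, 17)
              for f in (30.0, 60.0, 120.0)]

def matching_pursuit(x, atoms, n_iter=20):
    """Greedy MP: repeatedly subtract the best-matching atom from the residual."""
    residual, model = x.copy(), np.zeros_like(x)
    for _ in range(n_iter):
        coeffs = [np.dot(a, residual) for a in atoms]
        best = int(np.argmax(np.abs(coeffs)))
        model += coeffs[best] * atoms[best]
        residual -= coeffs[best] * atoms[best]
    return model, residual

def lpc(x, order=8):
    """Yule-Walker LPC coefficients from the autocorrelation of x."""
    r = np.correlate(x, x, mode="full")[len(x) - 1: len(x) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])    # x[n] ~ sum_k a[k] * x[n-1-k]

# Synthetic "heart sound": two localized bursts plus a little broadband noise.
rng = np.random.default_rng(0)
pcg = (gabor_atom(0.25, 60.0, 0.02) + 0.6 * gabor_atom(0.65, 30.0, 0.02)
       + 0.01 * rng.standard_normal(n))
deterministic, residual = matching_pursuit(pcg, dictionary)
print("residual energy fraction:",
      round(float(np.sum(residual ** 2) / np.sum(pcg ** 2)), 3))
print("LPC coefficients of the residual:", lpc(residual).round(3))
```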

  16. Auditory memory for timbre.

    PubMed

    McKeown, Denis; Wellsted, David

    2009-06-01

    Psychophysical studies are reported examining how the context of recent auditory stimulation may modulate the processing of new sounds. The question posed is how recent tone stimulation may affect ongoing performance in a discrimination task. In the task, two complex sounds occurred in successive intervals. A single target component of one complex was decreased (Experiments 1 and 2) or increased (Experiments 3, 4, and 5) in intensity on half of the trials: the task was simply to identify those trials. Prior to each trial, a pure-tone inducer was introduced either at the same frequency as the target component or at the frequency of a different component of the complex. Consistent with a frequency-specific form of disruption, discrimination performance was impaired when the inducing tone matched the frequency of the following decrement or increment. A timbre memory model (TMM) is proposed that incorporates channel-specific interference, allied to inhibition of attending, in the coding of sounds against the memory traces of recent sounds. (c) 2009 APA, all rights reserved.

  17. Digitized forensics: retaining a link between physical and digital crime scene traces using QR-codes

    NASA Astrophysics Data System (ADS)

    Hildebrandt, Mario; Kiltz, Stefan; Dittmann, Jana

    2013-03-01

    The digitization of physical traces from crime scenes in forensic investigations in effect creates a digital chain-of-custody and entrains the challenge of creating a link between the two or more representations of the same trace. To be forensically sound, the two security aspects of integrity and authenticity, in particular, need to be maintained at all times. Ensuring authenticity by technical means proves especially challenging at the boundary between the physical object and its digital representations. In this article we propose a new method of linking physical objects with their digital counterparts using two-dimensional bar codes and additional meta-data accompanying the acquired data, for integration into the conventional documentation of the collection of items of evidence (the bagging and tagging process). Using the QR-code as an exemplary bar-code implementation together with a model of the forensic process, we also supply a means to integrate our suggested approach into forensically sound proceedings as described by Holder et al. [1]. We use digital dactyloscopy as an example of a forensic discipline where progress is currently being made by digitizing some of the processing steps. We show an exemplary demonstrator of the suggested approach using a smartphone as a mobile device for the verification of the physical trace, extending the chain-of-custody from the physical to the digital domain. Our evaluation of the demonstrator focuses on the readability and the verification of its contents. We can read the bar code with various devices despite its limited size of 42 x 42 mm and the rather large amount of embedded data. Furthermore, the QR-code's error correction features help to recover the contents of damaged codes. Subsequently, our appended digital signature allows for detecting malicious manipulations of the embedded data.
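
    The linking idea can be sketched as follows: trace metadata is serialized, an authentication tag is appended, and the result is rendered as a QR code that travels with the physical evidence. The article appends a digital signature; the sketch below substitutes an HMAC-SHA256 tag so that it needs only the Python standard library plus the third-party qrcode package, and all metadata fields and the key are hypothetical.

```python
# Sketch of linking a physical trace to its digital record via a QR code that
# carries metadata plus an authentication tag. An HMAC-SHA256 tag stands in
# for the article's digital signature.
import hashlib, hmac, json

import qrcode   # pip install qrcode[pil]

SECRET_KEY = b"lab-issued-secret"          # hypothetical shared key

def make_payload(meta: dict) -> str:
    body = json.dumps(meta, sort_keys=True)
    tag = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
    return json.dumps({"meta": meta, "tag": tag}, sort_keys=True)

def verify_payload(payload: str) -> bool:
    obj = json.loads(payload)
    body = json.dumps(obj["meta"], sort_keys=True)
    expected = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, obj["tag"])

meta = {"case": "2013-017", "item": "latent print 04",
        "collected_by": "tech-12", "timestamp": "2013-03-01T10:42:00Z"}
payload = make_payload(meta)
qrcode.make(payload).save("trace_tag.png")  # printed and attached to the evidence bag
print(verify_payload(payload))              # True; any edit to meta flips this to False
```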

  18. Discovering Structure in Auditory Input: Evidence from Williams Syndrome

    ERIC Educational Resources Information Center

    Elsabbagh, Mayada; Cohen, Henri; Karmiloff-Smith, Annette

    2010-01-01

    We examined auditory perception in Williams syndrome by investigating strategies used in organizing sound patterns into coherent units. In Experiment 1, we investigated the streaming of sound sequences into perceptual units, on the basis of pitch cues, in a group of children and adults with Williams syndrome compared to typical controls. We showed…

  19. Brief Report: Impaired Differentiation of Vegetative/Affective and Intentional Nonverbal Vocalizations in a Subject with Asperger Syndrome (AS)

    ERIC Educational Resources Information Center

    Dietrich, Susanne; Hertrich, Ingo; Riedel, Andreas; Ackermann, Hermann

    2012-01-01

    The Asperger syndrome (AS) includes impaired recognition of other people's mental states. Since language-based diagnostic procedures may be confounded by cognitive-linguistic compensation strategies, nonverbal test materials were created, including human affective and vegetative sounds. Depending on video context, each sound could be interpreted…

  20. An open real-time tele-stethoscopy system.

    PubMed

    Foche-Perez, Ignacio; Ramirez-Payba, Rodolfo; Hirigoyen-Emparanza, German; Balducci-Gonzalez, Fernando; Simo-Reigadas, Francisco-Javier; Seoane-Pascual, Joaquin; Corral-Peñafiel, Jaime; Martinez-Fernandez, Andres

    2012-08-23

    Acute respiratory infections are the leading cause of childhood mortality. The lack of physicians in rural areas of developing countries makes their correct diagnosis and treatment difficult. The staff of rural health facilities (health-care technicians) may not be qualified to distinguish respiratory diseases by auscultation. For this reason, the goal of this project is the development of a tele-stethoscopy system that allows a physician to receive real-time cardio-respiratory sounds from a remote auscultation, as well as video images showing where the technician is placing the stethoscope on the patient's body. A real-time wireless stethoscopy system was designed. The initial requirements were: 1) the system must send audio and video synchronously over IP networks, without requiring an Internet connection; 2) it must preserve the quality of cardiorespiratory sounds, allowing the binaural pieces and the chestpiece of standard stethoscopes to be adapted; and 3) cardiorespiratory sounds should be recordable at both ends of the communication. In order to verify the diagnostic capacity of the system, a clinical validation with eight specialists has been designed. In a preliminary test, twelve patients were auscultated by all the physicians using the tele-stethoscopy system, versus a local auscultation using a traditional stethoscope. The system must allow listening to cardiac sounds (systolic and diastolic murmurs, gallop sound, arrhythmias) and respiratory sounds (rhonchi, rales and crepitations, wheeze, diminished and bronchial breath sounds, pleural friction rub). The design, development and initial validation of the real-time wireless tele-stethoscopy system are described in detail. The system was conceived from scratch as open-source and low-cost, and designed in such a way that many universities and small local companies in developing countries may manufacture it. Only free open-source software has been used in order to minimize manufacturing costs and to seek alliances to support its improvement and adaptation. The microcontroller firmware code, the computer software code and the PCB schematics are available for free download in a subversion repository hosted on SourceForge. It has been shown that real-time tele-stethoscopy, together with a videoconference system that allows a remote specialist to oversee the auscultation, may be a very helpful tool in rural areas of developing countries.

  1. [Functional anatomy of the cochlear nerve and the central auditory system].

    PubMed

    Simon, E; Perrot, X; Mertens, P

    2009-04-01

    The auditory pathways are a system of afferent fibers (through the cochlear nerve) and efferent fibers (through the vestibular nerve) that do not merely transmit information but truly integrate the sound stimulus at different levels, by analyzing its three fundamental elements: frequency (pitch), intensity, and spatial localization of the sound source. From the cochlea to the primary auditory cortex, the auditory fibers are organized anatomically in relation to the characteristic frequency of the sound signal that they transmit (tonotopy). Coding of the intensity of the sound signal is based on temporal recruitment (the number of action potentials) and spatial recruitment (the number of inner hair cells recruited near the cell whose characteristic frequency matches the stimulus). Spatial localization of the sound source is possible because of binaural hearing, commissural pathways at each level of the auditory system, and integration of the phase shift and the intensity difference between the signals coming from the two ears. Finally, through the efferent fibers in the vestibular nerve, higher centers exercise control over the activity of the cochlea and adjust the peripheral hearing organ to external sound conditions, thus protecting the auditory system or increasing sensitivity through the attention given to the signal.

  2. Optimization of Particle-in-Cell Codes on RISC Processors

    NASA Technical Reports Server (NTRS)

    Decyk, Viktor K.; Karmesin, Steve Roy; Boer, Aeint de; Liewer, Paulette C.

    1996-01-01

    General strategies are developed to optimize particle-in-cell codes written in Fortran for the RISC processors commonly used on massively parallel computers. These strategies include data reorganization to improve cache utilization and code reorganization to improve the efficiency of arithmetic pipelines.
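
    The data-reorganization idea can be illustrated outside Fortran. The sketch below contrasts an array-of-structures layout, in which a particle's fields are interleaved in memory, with a structure-of-arrays layout that keeps each field contiguous so a particle push streams through memory; numpy is used here only to make the layout contrast concrete.

```python
# Illustration of data reorganization for particle codes: structure-of-arrays
# keeps each field contiguous for cache-friendly, streaming particle pushes.
import numpy as np

n, dt = 100000, 0.1
rng = np.random.default_rng(0)

# Array-of-structures: one record per particle; fields are interleaved in memory.
aos = np.zeros(n, dtype=[("x", "f8"), ("y", "f8"), ("vx", "f8"), ("vy", "f8")])
aos["vx"], aos["vy"] = rng.standard_normal(n), rng.standard_normal(n)

# Structure-of-arrays: each field lives in its own contiguous array.
x, y = np.zeros(n), np.zeros(n)
vx, vy = aos["vx"].copy(), aos["vy"].copy()

def push_aos(p):
    p["x"] += p["vx"] * dt      # strided access: fields interleaved per particle
    p["y"] += p["vy"] * dt

def push_soa(x, y, vx, vy):
    x += vx * dt                # unit-stride access: friendly to caches and pipelines
    y += vy * dt

push_aos(aos)
push_soa(x, y, vx, vy)
print(np.allclose(aos["x"], x), np.allclose(aos["y"], y))   # same physics, different layout
```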

  3. The audiovisual structure of onomatopoeias: An intrusion of real-world physics in lexical creation.

    PubMed

    Taitz, Alan; Assaneo, M Florencia; Elisei, Natalia; Trípodi, Mónica; Cohen, Laurent; Sitt, Jacobo D; Trevisan, Marcos A

    2018-01-01

    Sound-symbolic word classes are found in different cultures and languages worldwide. These words are continuously produced to code complex information about events. Here we explore the capacity of creative language to transport complex multisensory information in a controlled experiment, in which our participants improvised onomatopoeias from noisy moving objects presented in audio, visual and audiovisual formats. We found that consonants communicate movement types (slide, hit or ring) mainly through the manner of articulation in the vocal tract. Vowels communicate shapes in visual stimuli (spiky or rounded) and sound frequencies in auditory stimuli through the configuration of the lips and tongue. A machine learning model was trained to classify movement types and used to validate generalizations of our results across formats. We applied the classifier to a list of cross-linguistic onomatopoeias: simple actions were correctly classified, while different aspects were selected to build onomatopoeias of complex actions. These results show how the different aspects of complex sensory information are coded and how they interact in the creation of novel onomatopoeias.
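
    As a purely hypothetical illustration of such a classifier, the sketch below maps romanized onomatopoeias to counts of consonant-manner and vowel classes and fits a scikit-learn model to predict movement type; the feature scheme, toy training items and model choice are assumptions and do not reproduce the study's pipeline.

```python
# Hypothetical movement-type classifier for romanized onomatopoeias.
# Features and training items are toy examples, not the study's data.
import numpy as np
from sklearn.linear_model import LogisticRegression

PLOSIVES, FRICATIVES, NASALS = set("pbtdkg"), set("fvszh"), set("mn")
HIGH_VOWELS, LOW_VOWELS = set("iu"), set("ao")

def features(word):
    w = word.lower()
    return [sum(ch in group for ch in w)
            for group in (PLOSIVES, FRICATIVES, NASALS, HIGH_VOWELS, LOW_VOWELS)]

train = [("tak", "hit"), ("pum", "hit"), ("bam", "hit"),
         ("shhh", "slide"), ("fiu", "slide"), ("zas", "slide"),
         ("ring", "ring"), ("ding", "ring"), ("tin", "ring")]
X = np.array([features(w) for w, _ in train])
y = [label for _, label in train]

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(np.array([features("pam"), features("fsss")])))
```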

  4. Sound-level-dependent representation of frequency modulations in human auditory cortex: a low-noise fMRI study.

    PubMed

    Brechmann, André; Baumgart, Frank; Scheich, Henning

    2002-01-01

    Recognition of sound patterns must be largely independent of level and of masking or jamming background sounds. Auditory patterns of relevance in numerous environmental sounds, species-specific vocalizations and speech are frequency modulations (FM). Level-dependent activation of the human auditory cortex (AC) in response to a large set of upward and downward FM tones was studied with low-noise (48 dB) functional magnetic resonance imaging at 3 Tesla. Separate analysis in four territories of AC was performed in each individual brain using a combination of anatomical landmarks and spatial activation criteria for their distinction. Activation of territory T1b (including primary AC) showed the most robust level dependence over the large range of 48-102 dB in terms of activated volume and blood oxygen level dependent contrast (BOLD) signal intensity. The left nonprimary territory T2 also showed a good correlation of level with activated volume but, in contrast to T1b, not with BOLD signal intensity. These findings are compatible with level coding mechanisms observed in animal AC. A systematic increase of activation with level was not observed for T1a (anterior of Heschl's gyrus) and T3 (on the planum temporale). Thus these areas might not be specifically involved in processing of the overall intensity of FM. The rostral territory T1a of the left hemisphere exhibited highest activation when the FM sound level fell 12 dB below scanner noise. This supports the previously suggested special involvement of this territory in foreground-background decomposition tasks. Overall, AC of the left hemisphere showed a stronger level-dependence of signal intensity and activated volume than the right hemisphere. But any side differences of signal intensity at given levels were lateralized to right AC. This might point to an involvement of the right hemisphere in more specific aspects of FM processing than level coding.

  5. The impact of artificially caries-affected dentin on bond strength of multi-mode adhesives

    PubMed Central

    Follak, Andressa Cargnelutti; Miotti, Leonardo Lamberti; Lenzi, Tathiane Larissa; Rocha, Rachel de Oliveira; Maxnuck Soares, Fabio Zovico

    2018-01-01

    Aim: The aim of this study is to evaluate the impact of the dentin condition on the bond strength of multi-mode adhesive systems (MMAS) to sound and artificially induced caries-affected dentin (CAD). Methods: Flat dentin surfaces of 112 bovine incisors were assigned to 16 subgroups (n = 7) according to the substrate condition (sound and CAD, produced by pH-cycling for 14 days), adhesive system (Scotchbond Universal, All-Bond Universal, Prime and Bond Elect, Adper Single Bond Plus and Clearfil SE Bond) and etching strategy (etch-and-rinse and self-etch). All systems were applied according to the manufacturer's instructions, and resin composite restorations were built. After 24 h of water storage, specimens were sectioned (0.8 mm2) and submitted to the microtensile test. Statistical Analysis: Data (MPa) were analyzed using three-way analysis of variance and Tukey's test (α = 0.05). Results: The MMAS presented similar bond strength values regardless of etching strategy for each substrate condition. Bond strength values were lower when the MMAS were applied to CAD in the etch-and-rinse strategy. Conclusion: The etching strategy did not influence the bond strength of the MMAS to sound or caries-affected dentin, considering each substrate separately. However, CAD negatively impacted the bond strength of the MMAS in etch-and-rinse mode. PMID:29674813

  6. A COTS-Based Replacement Strategy for Aging Avionics Computers

    DTIC Science & Technology

    2001-12-01

    [Abstract text not recoverable from the record; the surviving index terms are: communication control unit, COTS microprocessor, real-time operating system, native code objects and threads, legacy functions, virtual component environment, context-switch thunk, add-in replacement.]

  7. Responses of auditory-cortex neurons to structural features of natural sounds.

    PubMed

    Nelken, I; Rotman, Y; Bar Yosef, O

    1999-01-14

    Sound-processing strategies that use the highly non-random structure of natural sounds may confer evolutionary advantage to many species. Auditory processing of natural sounds has been studied almost exclusively in the context of species-specific vocalizations, although these form only a small part of the acoustic biotope. To study the relationships between properties of natural soundscapes and neuronal processing mechanisms in the auditory system, we analysed sound from a range of different environments. Here we show that for many non-animal sounds and background mixtures of animal sounds, energy in different frequency bands is coherently modulated. Co-modulation of different frequency bands in background noise facilitates the detection of tones in noise by humans, a phenomenon known as co-modulation masking release (CMR). We show that co-modulation also improves the ability of auditory-cortex neurons to detect tones in noise, and we propose that this property of auditory neurons may underlie behavioural CMR. This correspondence may represent an adaptation of the auditory system for the use of an attribute of natural sounds to facilitate real-world processing tasks.

  8. Decentralized control of sound radiation using iterative loop recovery.

    PubMed

    Schiller, Noah H; Cabell, Randolph H; Fuller, Chris R

    2010-10-01

    A decentralized model-based control strategy is designed to reduce low-frequency sound radiation from periodically stiffened panels. While decentralized control systems tend to be scalable, performance can be limited due to modeling error introduced by the unmodeled interaction between neighboring control units. Since bounds on modeling error are not known in advance, it is difficult to ensure the decentralized control system will be robust without making the controller overly conservative. Therefore an iterative approach is suggested, which utilizes frequency-shaped loop recovery. The approach accounts for modeling error introduced by neighboring control loops, requires no communication between subsystems, and is relatively simple. The control strategy is evaluated numerically using a model of a stiffened aluminum panel that is representative of the sidewall of an aircraft. Simulations demonstrate that the iterative approach can achieve significant reductions in radiated sound power from the stiffened panel without destabilizing neighboring control units.

  9. Decentralized Control of Sound Radiation Using Iterative Loop Recovery

    NASA Technical Reports Server (NTRS)

    Schiller, Noah H.; Cabell, Randolph H.; Fuller, Chris R.

    2009-01-01

    A decentralized model-based control strategy is designed to reduce low-frequency sound radiation from periodically stiffened panels. While decentralized control systems tend to be scalable, performance can be limited due to modeling error introduced by the unmodeled interaction between neighboring control units. Since bounds on modeling error are not known in advance, it is difficult to ensure the decentralized control system will be robust without making the controller overly conservative. Therefore an iterative approach is suggested, which utilizes frequency-shaped loop recovery. The approach accounts for modeling error introduced by neighboring control loops, requires no communication between subsystems, and is relatively simple. The control strategy is evaluated numerically using a model of a stiffened aluminum panel that is representative of the sidewall of an aircraft. Simulations demonstrate that the iterative approach can achieve significant reductions in radiated sound power from the stiffened panel without destabilizing neighboring control units.

  10. Validating LES for Jet Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Bridges, James

    2011-01-01

    Engineers charged with making jet aircraft quieter have long dreamed of being able to see exactly how turbulent eddies produce sound, and this dream is now coming true with the advent of large eddy simulation (LES). Two obvious challenges remain: validating the LES codes at the resolution required to see the fluid-acoustic coupling, and interpreting the massive datasets that result. This paper primarily addresses the former: the use of advanced experimental techniques such as particle image velocimetry (PIV) and Raman and Rayleigh scattering to validate the computer codes and procedures used to create LES solutions. It also addresses the latter problem in discussing which measures, critical for aeroacoustics, should be used in validating LES codes. These new diagnostic techniques deliver measurements and flow statistics of increasing sophistication and capability, but what of their accuracy? And what are the measures to be used in validation? This paper argues that the issue of accuracy be addressed by cross-facility and cross-disciplinary examination of modern datasets along with increased reporting of internal quality checks in PIV analysis. Further, it is argued that the appropriate validation metrics for aeroacoustic applications are increasingly complicated statistics that have been shown in aeroacoustic theory to be critical to flow-generated sound.

  11. Research on the optoacoustic communication system for speech transmission by variable laser-pulse repetition rates

    NASA Astrophysics Data System (ADS)

    Jiang, Hongyan; Qiu, Hongbing; He, Ning; Liao, Xin

    2018-06-01

    For optoacoustic communication from in-air platforms to submerged apparatus, a method based on speech recognition and variable laser-pulse repetition rates is proposed, which realizes character encoding and transmission for speech. First, the theory and spectral characteristics of laser-generated underwater sound are analyzed; next, character conversion and encoding for speech, as well as the code patterns for laser modulation, are studied; finally, experiments to verify the system design are carried out. Results show that the optoacoustic system, in which laser modulation is controlled by speech-to-character baseband codes, improves flexibility in the receiving location of underwater targets as well as the real-time performance of information transmission. In the overwater transmitter, a pulsed laser is triggered by speech signals at several repetition rates randomly selected in the range of one to fifty Hz, and in the underwater receiver the laser pulse repetition rate and data are recovered from the preamble and information codes of the corresponding laser-generated sound. When the energy of the laser pulse is appropriate, real-time transmission of speaker-independent speech can be realized in this way, which eases the problem of limited underwater bandwidth and provides a technical approach for air-sea communication.
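
    A hypothetical sketch of the character-to-pulse-rate idea is given below: each character maps to a repetition rate within the stated 1-50 Hz range, and each symbol is emitted as a fixed-rate preamble burst followed by information pulses at the character's rate, from which the receiver recovers the text. The rate table, preamble format and pulse counts are invented for illustration and are not the paper's coding scheme.

```python
# Hypothetical character-to-pulse-rate encoding: each symbol is a fixed-rate
# preamble burst followed by information pulses at a per-character rate.
import string

ALPHABET = string.ascii_lowercase + " "
RATES_HZ = {ch: 2 + i for i, ch in enumerate(ALPHABET)}   # 2..28 Hz, one per symbol

PREAMBLE_RATE_HZ = 50.0      # fixed high-rate burst marking a symbol boundary
PREAMBLE_PULSES = 4
SYMBOL_PULSES = 8

def encode(text):
    """Return a list of laser-pulse times (seconds) for the given text."""
    times, t = [], 0.0
    for ch in text.lower():
        rate = RATES_HZ[ch]
        for _ in range(PREAMBLE_PULSES):          # preamble at the fixed rate
            times.append(t); t += 1.0 / PREAMBLE_RATE_HZ
        for _ in range(SYMBOL_PULSES):            # information pulses at the symbol rate
            times.append(t); t += 1.0 / rate
    return times

def decode(times):
    """Recover text by measuring the repetition rate after each preamble burst."""
    out, gaps, i = [], [b - a for a, b in zip(times, times[1:])], 0
    while i < len(gaps):
        if abs(gaps[i] - 1.0 / PREAMBLE_RATE_HZ) < 1e-9:
            while i < len(gaps) and abs(gaps[i] - 1.0 / PREAMBLE_RATE_HZ) < 1e-9:
                i += 1                             # skip the preamble run
            if i < len(gaps):
                rate = round(1.0 / gaps[i])        # first information gap sets the rate
                out.append(next(c for c, r in RATES_HZ.items() if r == rate))
        else:
            i += 1
    return "".join(out)

msg = "sos"
print(decode(encode(msg)) == msg)                  # True
```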

  12. Auditory brainstem response to complex sounds: a tutorial

    PubMed Central

    Skoe, Erika; Kraus, Nina

    2010-01-01

    This tutorial provides a comprehensive overview of the methodological approach to collecting and analyzing auditory brainstem responses to complex sounds (cABRs). cABRs provide a window into how behaviorally relevant sounds such as speech and music are processed in the brain. Because temporal and spectral characteristics of sounds are preserved in this subcortical response, cABRs can be used to assess specific impairments and enhancements in auditory processing. Notably, subcortical function is neither passive nor hardwired but dynamically interacts with higher-level cognitive processes to refine how sounds are transcribed into neural code. This experience-dependent plasticity, which can occur on a number of time scales (e.g., life-long experience with speech or music, short-term auditory training, online auditory processing), helps shape sensory perception. Thus, by being an objective and non-invasive means for examining cognitive function and experience-dependent processes in sensory activity, cABRs have considerable utility in the study of populations where auditory function is of interest (e.g., auditory experts such as musicians, persons with hearing loss, auditory processing and language disorders). This tutorial is intended for clinicians and researchers seeking to integrate cABRs into their clinical and/or research programs. PMID:20084007

  13. Multi-phase SPH modelling of violent hydrodynamics on GPUs

    NASA Astrophysics Data System (ADS)

    Mokos, Athanasios; Rogers, Benedict D.; Stansby, Peter K.; Domínguez, José M.

    2015-11-01

    This paper presents the acceleration of multi-phase smoothed particle hydrodynamics (SPH) using a graphics processing unit (GPU) enabling large numbers of particles (10-20 million) to be simulated on just a single GPU card. With novel hardware architectures such as a GPU, the optimum approach to implement a multi-phase scheme presents some new challenges. Many more particles must be included in the calculation and there are very different speeds of sound in each phase with the largest speed of sound determining the time step. This requires efficient computation. To take full advantage of the hardware acceleration provided by a single GPU for a multi-phase simulation, four different algorithms are investigated: conditional statements, binary operators, separate particle lists and an intermediate global function. Runtime results show that the optimum approach needs to employ separate cell and neighbour lists for each phase. The profiler shows that this approach leads to a reduction in both memory transactions and arithmetic operations giving significant runtime gains. The four different algorithms are compared to the efficiency of the optimised single-phase GPU code, DualSPHysics, for 2-D and 3-D simulations which indicate that the multi-phase functionality has a significant computational overhead. A comparison with an optimised CPU code shows a speed up of an order of magnitude over an OpenMP simulation with 8 threads and two orders of magnitude over a single thread simulation. A demonstration of the multi-phase SPH GPU code is provided by a 3-D dam break case impacting an obstacle. This shows better agreement with experimental results than an equivalent single-phase code. The multi-phase GPU code enables a convergence study to be undertaken on a single GPU with a large number of particles that otherwise would have required large high performance computing resources.

  14. Modeling and Prediction of the Noise from Non-Axisymmetric Jets

    NASA Technical Reports Server (NTRS)

    Leib, Stewart J.

    2014-01-01

    The new source model was combined with the original sound propagation model developed for rectangular jets to produce a new version of the rectangular jet noise prediction code. This code was validated using a set of rectangular nozzles whose geometries were specified by NASA. Nozzles of aspect ratios two, four and eight were studied at jet exit Mach numbers of 0.5, 0.7 and 0.9, for a total of nine cases. Reynolds-averaged Navier-Stokes solutions for these jets were provided to the contractor for use as input to the code. Quantitative comparisons of the predicted azimuthal and polar directivity of the acoustic spectrum were made with experimental data provided by NASA. The results of these comparisons, along with a documentation of the propagation and source models, were reported in a journal article publication (Ref. 4). The complete set of computer codes and computational modules that make up the prediction scheme, along with a user's guide describing their use and example test cases, was provided to NASA as a deliverable of this task. The use of conformal mapping, along with simplified modeling of the mean flow field, for noise propagation modeling was explored for other nozzle geometries, to support the task milestone of developing methods which are applicable to other geometries and flow conditions of interest to NASA. A model to represent twin round jets using this approach was formulated and implemented. A general approach to solving the equations governing sound propagation in a locally parallel nonaxisymmetric jet was developed and implemented, in aid of the tasks and milestones charged with selecting more exact numerical methods for modeling sound propagation, and developing methods that have application to other nozzle geometries. The method is based on expansion of both the mean-flow-dependent coefficients in the governing equation and the Green's function in series of orthogonal functions. The method was coded and tested on two analytically prescribed mean flows which were meant to represent noise reduction concepts being considered by NASA. Testing (Ref. 5) showed that the method was feasible for the types of mean flows of interest in jet noise applications. Subsequently, this method was further developed to allow use of mean flow profiles obtained from a Reynolds-averaged Navier-Stokes (RANS) solution of the flow. Preliminary testing of the generalized code was among the last tasks completed under this contract. The stringent noise-reduction goals of NASA's Fundamental Aeronautics Program suggest that, in addition to potentially complex exhaust nozzle geometries, next generation aircraft will also involve tighter integration of the engine with the airframe. Therefore, noise generated and propagated by jet flows in the vicinity of solid surfaces is expected to be quite significant, and reduced-order noise prediction tools will be needed that can deal with such geometries. One important source of noise is that generated by the interaction of a turbulent jet with the edge of a solid surface (edge noise). Such noise is generated, for example, by the passing of the engine exhaust over a shielding surface, such as a wing. Work under this task supported an effort to develop a RANS-based prediction code for edge noise based on an extension of the classical Rapid Distortion Theory (RDT) to transversely sheared base flows (Refs. 6 and 7). The RDT-based theoretical analysis was applied to the generic problem of a turbulent jet interacting with the trailing edge of a flat plate.
A code was written to evaluate the formula derived for the spectrum of the noise produced by this interaction and results were compared with data taken at NASA Glenn for a variety of jet/plate configurations and flow conditions (Ref. 8). A longer-term goal of this task was to work toward the development of a high-fidelity model of sound propagation in spatially developing non-axisymmetric jets using direct numerical methods for solving the relevant equations. Working with NASA Glenn Acoustics Branch personnel, numerical methods and boundary conditions appropriate for use in a high-resolution calculation of the full equations governing sound propagation in a steady base flow were identified. Computer codes were then written (by NASA) and tested (by OAI) for an increasingly complex set of flow conditions to validate the methods. The NASA-supplied codes were ported to the High-End Computing resources of the NASA Advanced Supercomputing facility for testing and validation against analytical (where possible) and independent numerical solutions. The cases which were completed during the course of this contract were solutions of the two-dimensional linearized Euler equations with no mean flow, a uniform mean flow and a nonuniform mean flow representative of a parallel flow jet.
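
    The series-expansion approach described in the preceding record is only outlined in the abstract. As an illustration of the general structure (the actual basis functions, coordinates, and truncation used in the contract work are not specified here), such an expansion might be written as follows, with hypothetical coefficients a_{mn} and radial basis functions psi_n:

```latex
% Illustrative structure only; the abstract does not specify the basis
% functions or coordinates actually used.
G(\mathbf{x},\mathbf{y};\omega) \;\approx\;
   \sum_{m=-M}^{M} \sum_{n=0}^{N} a_{mn}(\omega)\,\psi_{n}(r)\,e^{\,i m \phi}
```

    An analogous truncated series would be used for the mean-flow-dependent coefficients of the governing equation, reducing the propagation problem to solving for the expansion coefficients.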

  15. Noise-induced hearing loss alters the temporal dynamics of auditory-nerve responses

    PubMed Central

    Scheidt, Ryan E.; Kale, Sushrut; Heinz, Michael G.

    2010-01-01

    Auditory-nerve fibers demonstrate dynamic response properties in that they adapt to rapid changes in sound level, both at the onset and offset of a sound. These dynamic response properties affect temporal coding of stimulus modulations that are perceptually relevant for many sounds such as speech and music. Temporal dynamics have been well characterized in auditory-nerve fibers from normal-hearing animals, but little is known about the effects of sensorineural hearing loss on these dynamics. This study examined the effects of noise-induced hearing loss on the temporal dynamics in auditory-nerve fiber responses from anesthetized chinchillas. Post-stimulus time histograms were computed from responses to 50-ms tones presented at characteristic frequency and 30 dB above fiber threshold. Several response metrics related to temporal dynamics were computed from post-stimulus-time histograms and were compared between normal-hearing and noise-exposed animals. Results indicate that noise-exposed auditory-nerve fibers show significantly reduced response latency, increased onset response and percent adaptation, faster adaptation after onset, and slower recovery after offset. The decrease in response latency only occurred in noise-exposed fibers with significantly reduced frequency selectivity. These changes in temporal dynamics have important implications for temporal envelope coding in hearing-impaired ears, as well as for the design of dynamic compression algorithms for hearing aids. PMID:20696230
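
    The metrics described above (response latency, onset response, percent adaptation) are all derived from the post-stimulus time histogram. The sketch below shows one way such metrics might be computed from pooled spike times; the bin width, analysis windows, and latency criterion are hypothetical choices and may differ from the study's exact definitions.

```python
# Minimal sketch: PSTH-based temporal-dynamics metrics from pooled spike
# times.  Bin width, analysis windows, and the latency criterion are
# hypothetical; the study's exact definitions may differ.
import numpy as np

def psth_metrics(spike_times_s, n_trials, tone_dur=0.050, bin_w=0.001):
    edges = np.arange(0.0, tone_dur + bin_w, bin_w)
    counts, _ = np.histogram(spike_times_s, bins=edges)
    rate = counts / (n_trials * bin_w)               # spikes/s in each bin

    latency = edges[np.argmax(rate > rate.mean() + 2 * rate.std())]
    onset_rate = rate[:10].max()                     # peak in the first 10 ms
    steady_rate = rate[-20:].mean()                  # last 20 ms of the tone
    pct_adapt = 100.0 * (onset_rate - steady_rate) / max(onset_rate, 1e-9)
    return {"latency_s": latency, "onset_rate": onset_rate,
            "steady_rate": steady_rate, "percent_adaptation": pct_adapt}

# Fabricated example: an onset-dominated, adapting response over 50 trials
rng = np.random.default_rng(0)
spikes = 0.005 + rng.exponential(0.010, 600)
spikes = np.sort(spikes[spikes < 0.050])
print(psth_metrics(spikes, n_trials=50))
```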

  16. Individual Differences Reveal Correlates of Hidden Hearing Deficits

    PubMed Central

    Masud, Salwa; Mehraei, Golbarg; Verhulst, Sarah; Shinn-Cunningham, Barbara G.

    2015-01-01

    Clinical audiometry has long focused on determining the detection thresholds for pure tones, which depend on intact cochlear mechanics and hair cell function. Yet many listeners with normal hearing thresholds complain of communication difficulties, and the causes for such problems are not well understood. Here, we explore whether normal-hearing listeners exhibit such suprathreshold deficits, affecting the fidelity with which subcortical areas encode the temporal structure of clearly audible sound. Using an array of measures, we evaluated a cohort of young adults with thresholds in the normal range to assess both cochlear mechanical function and temporal coding of suprathreshold sounds. Listeners differed widely in both electrophysiological and behavioral measures of temporal coding fidelity. These measures correlated significantly with each other. Conversely, these differences were unrelated to the modest variation in otoacoustic emissions, cochlear tuning, or the residual differences in hearing threshold present in our cohort. Electroencephalography revealed that listeners with poor subcortical encoding had poor cortical sensitivity to changes in interaural time differences, which are critical for localizing sound sources and analyzing complex scenes. These listeners also performed poorly when asked to direct selective attention to one of two competing speech streams, a task that mimics the challenges of many everyday listening environments. Together with previous animal and computational models, our results suggest that hidden hearing deficits, likely originating at the level of the cochlear nerve, are part of “normal hearing.” PMID:25653371

  17. Changes in vocal parameters with social context in humpback whales: considering the effect of bystanders.

    PubMed

    Dunlop, Rebecca A

    Many theories and communication models developed from terrestrial studies focus on a simple dyadic exchange between a sender and receiver. During social interactions, the "frequency code" hypothesis suggests that frequency characteristics of vocal signals can simultaneously encode for static signaler attributes (size or sex) and dynamic information, such as motivation or emotional state. However, the additional presence of a bystander may result in a change of signaling behavior if the costs and benefits associated with the presence of this bystander are different from that of a simple dyad. In this study, two common humpback whale social calls ("wops" and "grumbles") were tested for differences related to group social behavior and the presence of bystanders. "Wop" parameters were stable with group social behavior, but were emitted at lower (14 dB) levels in the presence of a nearby singing whale compared to when a singing whale was not in the area. "Grumbles" were emitted at lower (30-39 Hz) fundamental frequencies in affiliative compared to non-affiliative groups and, in the presence of a nearby singing whale, were also emitted at lower (14 dB) levels. Vocal rates did not significantly change. The results suggest that, in humpbacks, the frequency in certain sound types relates to the social behavior of the vocalizing group, implying a frequency code system. The presence of a nearby audible bystander (a singing whale) had no effect on this frequency code, but by reducing their acoustic level, the signal-to-noise ratio at the singer would have been below 0, making it difficult for the singer to audibly detect the group. The frequency, duration, and amplitude parameters of humpback whale social vocalizations were tested between different social contexts: group social behavior (affiliating versus non-affiliating), the presence of a nearby singing whale, and the presence of a nearby non-singing group. "Grumbles" (commonly heard low-frequency unmodulated sounds) frequencies were lower in affiliating groups compared to non-affiliating groups, suggesting a change in group motivation (such as levels of aggression). "Wop" (another common sound type) structure (frequency and duration) was similar in affiliating and non-affiliating groups. In the presence of an audible bystander (a singing whale), both sound types were emitted at similar rates, but much lower amplitudes (14 dB), vastly reducing the detectability of these sounds by the singer. This suggests that these groups were acoustically avoiding the singing whale. They did not, however, acoustically respond to the presence of a nearby non-singing group.

  18. Multipath search coding of stationary signals with applications to speech

    NASA Astrophysics Data System (ADS)

    Fehn, H. G.; Noll, P.

    1982-04-01

    This paper deals with the application of multipath search coding (MSC) concepts to the coding of stationary memoryless and correlated sources, and of speech signals, at a rate of one bit per sample. Three MSC classes are used: (1) codebook coding, or vector quantization, (2) tree coding, and (3) trellis coding. The paper describes the performance of these coders and compares it both with that of conventional coders and with rate-distortion bounds. The potential of MSC coding strategies is demonstrated by illustrations. The paper also reports on results of MSC coding of speech, where both adaptive quantization and adaptive prediction were included in the coder design.
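
    Of the three MSC classes listed above, codebook coding (vector quantization) is the simplest to sketch: at one bit per sample, a block of L samples is matched to the nearest of 2^L codevectors. The toy example below uses an untrained random codebook and a Gaussian source purely for illustration; a practical coder would train the codebook.

```python
# Minimal sketch of codebook coding (vector quantization) at 1 bit/sample:
# each block of L samples is mapped to the nearest of 2**L codevectors.
# The random (untrained) codebook and Gaussian source are toy assumptions.
import numpy as np

rng = np.random.default_rng(1)
L = 4                                         # block length -> 2**L codevectors
codebook = rng.standard_normal((2 ** L, L))   # untrained random codebook

source = rng.standard_normal(10_000)
blocks = source[: len(source) // L * L].reshape(-1, L)

# Nearest-neighbour (minimum squared error) encoding: L bits per block
dists = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
indices = dists.argmin(axis=1)
decoded = codebook[indices]

mse = np.mean((blocks - decoded) ** 2)
print(f"rate = 1 bit/sample, distortion (MSE) = {mse:.3f}")
```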

  19. The Use of Communication Strategies in the Beginner EFL Classroom

    ERIC Educational Resources Information Center

    Rodríguez Cervantes, Carmen A.; Roux Rodriguez, Ruth

    2012-01-01

    When language learners do not know how to say a word in English, they can communicate effectively by using their hands, imitating sounds, inventing new words, or describing what they mean. These ways of communicating are communication strategies (CSs). EFL teachers are not always aware of the importance of teaching communication strategies to…

  20. Pulse Code Modulation (PCM) encoder handbook for Aydin Vector MMP-600 series system

    NASA Technical Reports Server (NTRS)

    Currier, S. F.; Powell, W. R.

    1986-01-01

    The hardware and software characteristics of a time division multiplex system are described. The system is used to sample analog and digital data. The data is merged with synchronization information to produce a serial pulse coded modulation (PCM) bit stream. Information presented herein is required by users to design compatible interfaces and assure effective utilization of this encoder system. GSFC/Wallops Flight Facility has flown approximately 50 of these systems through 1984 on sounding rockets with no inflight failures. Aydin Vector manufactures all of the components for these systems.
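
    The encoder's basic operation, sampling multiple channels and merging the words with synchronization information into a serial PCM stream, can be sketched as below. The frame-sync pattern, word length, and channel count are hypothetical placeholders and do not represent the actual MMP-600 frame format.

```python
# Minimal sketch of a time-division-multiplexed PCM minor frame: channel
# words are concatenated behind a frame-sync word to form a serial bit
# stream.  The sync pattern, word length, and channel count are
# hypothetical, not the Aydin Vector MMP-600 format.

SYNC_WORD = 0xFAF320       # hypothetical 24-bit frame-sync pattern
WORD_BITS = 10             # hypothetical PCM word length

def to_bits(value, nbits):
    return [(value >> (nbits - 1 - i)) & 1 for i in range(nbits)]

def build_frame(channel_samples):
    """One PCM minor frame: sync word followed by one word per channel."""
    bits = to_bits(SYNC_WORD, 24)
    for sample in channel_samples:
        bits += to_bits(sample & ((1 << WORD_BITS) - 1), WORD_BITS)
    return bits

# Example: eight multiplexed channels sampled once
frame = build_frame([513, 0, 1023, 256, 128, 64, 32, 16])
print(len(frame), "bits:", "".join(map(str, frame))[:40], "...")
```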

  1. Language-based communication strategies that support person-centered communication with persons with dementia.

    PubMed

    Savundranayagam, Marie Y; Moore-Nielsen, Kelsey

    2015-10-01

    There are many recommended language-based strategies for effective communication with persons with dementia. What is unknown is whether effective language-based strategies are also person centered. Accordingly, the objective of this study was to examine whether language-based strategies for effective communication with persons with dementia overlapped with the following indicators of person-centered communication: recognition, negotiation, facilitation, and validation. Conversations (N = 46) between staff-resident dyads were audio-recorded during routine care tasks over 12 weeks. Staff utterances were coded twice, using language-based and person-centered categories. There were 21 language-based categories and 4 person-centered categories. There were 5,800 utterances transcribed: 2,409 without indicators, 1,699 coded as language or person centered, and 1,692 overlapping utterances. For recognition, 26% of utterances were greetings, 21% were affirmations, 13% were questions (yes/no and open-ended), and 15% involved rephrasing. Questions (yes/no, choice, and open-ended) comprised 74% of utterances that were coded as negotiation. A similar pattern was observed for utterances coded as facilitation where 51% of utterances coded as facilitation were yes/no questions, open-ended questions, and choice questions. However, 21% of facilitative utterances were affirmations and 13% involved rephrasing. Finally, 89% of utterances coded as validation were affirmations. The findings identify specific language-based strategies that support person-centered communication. However, between 1 and 4, out of a possible 21 language-based strategies, overlapped with at least 10% of utterances coded as each person-centered indicator. This finding suggests that staff need training to use more diverse language strategies that support personhood of residents with dementia.

  2. A Simulation Model of the Planetary Boundary Layer at Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Hwang, B.

    1978-01-01

    A simulation model which predicts the behavior of the Atmospheric Boundary Layer has been developed and coded. The model is partially evaluated by comparing it with laboratory measurements and the sounding measurements at Kennedy Space Center. The applicability of such an approach should prove quite widespread.

  3. 40 CFR 52.2470 - Identification of plan.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) as in effect 10/18/90; Washington Administrative Code Chapter 173-433 (Solid Fuel Burning Device... supplements to include the VMT Tracking Report data required for the Puget Sound CO Nonattainment Areas, dated... its I/M program in the two Washington ozone nonattainment areas classified as “marginal” and in the...

  4. Thinking Aloud about L2 Decoding: An Exploration of the Strategies Used by Beginner Learners when Pronouncing Unfamiliar French Words

    ERIC Educational Resources Information Center

    Woore, Robert

    2010-01-01

    "Decoding"--converting the written symbols (or graphemes) of an alphabetical writing system into the sounds (or phonemes) they represent, using knowledge of the language's symbol/sound correspondences--has been argued to be an important but neglected skill in the teaching of second language (L2) French in English secondary schools.…

  5. Efficiency of vibrational sounding in parasitoid host location depends on substrate density.

    PubMed

    Fischer, S; Samietz, J; Dorn, S

    2003-10-01

    Parasitoids of concealed hosts have to drill through a substrate with their ovipositor for successful parasitization. Hymenopteran species in this drill-and-sting guild locate immobile pupal hosts by vibrational sounding, i.e., echolocation on solid substrate. Although this host location strategy is assumed to be common among the Orussidae and Ichneumonidae, there is as yet no information on whether it is adapted to characteristics of the host microhabitat. This study examined the effect of substrate density on responsiveness and host location efficiency in two pupal parasitoids, Pimpla turionellae and Xanthopimpla stemmator (Hymenoptera: Ichneumonidae), with different host-niche specialization and corresponding ovipositor morphology. Location and frequency of ovipositor insertions were scored on cylindrical plant stem models of various densities. Substrate density had a significant negative effect on responsiveness, number of ovipositor insertions, and host location precision in both species. The more niche-specific species X. stemmator showed a higher host location precision and insertion activity. We could show that vibrational sounding is clearly adapted to the host microhabitat of the parasitoid species using this host location strategy. We suggest the attenuation of pulses during vibrational sounding as the energetically costly limiting factor for this adaptation.

  6. Understanding environmental sounds in sentence context.

    PubMed

    Uddin, Sophia; Heald, Shannon L M; Van Hedger, Stephen C; Klos, Serena; Nusbaum, Howard C

    2018-03-01

    There is debate about how individuals use context to successfully predict and recognize words. One view argues that context supports neural predictions that make use of the speech motor system, whereas other views argue for a sensory or conceptual level of prediction. While environmental sounds can convey clear referential meaning, they are not linguistic signals, and are thus neither produced with the vocal tract nor typically encountered in sentence context. We compared the effect of spoken sentence context on recognition and comprehension of spoken words versus nonspeech, environmental sounds. In Experiment 1, sentence context decreased the amount of signal needed for recognition of spoken words and environmental sounds in similar fashion. In Experiment 2, listeners judged sentence meaning in both high and low contextually constraining sentence frames, when the final word was present or replaced with a matching environmental sound. Results showed that sentence constraint affected decision time similarly for speech and nonspeech, such that high constraint sentences (i.e., frame plus completion) were processed faster than low constraint sentences for speech and nonspeech. Linguistic context facilitates the recognition and understanding of nonspeech sounds in much the same way as for spoken words. This argues against a simple form of a speech-motor explanation of predictive coding in spoken language understanding, and suggests support for conceptual-level predictions. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. A Comparison of the Habitual Landing Strategies from Differing Drop Heights of Parkour Practitioners (Traceurs) and Recreationally Trained Individuals.

    PubMed

    Standing, Regan J; Maulder, Peter S

    2015-12-01

    Parkour is an activity that encompasses methods of jumping, climbing and vaulting. With landing being a pertinent part of this practise, Parkour participants (traceurs) have devised their own habitual landing strategies, which are suggested to be a safer and more effective style of landing. The purpose of this study was to compare the habitual landing strategies of traceurs and recreationally trained individuals from differing drop heights. Comparisons between landing sound and mechanical parameters were also assessed to gauge the level of landing safety. Ten recreationally trained participants and ten traceurs performed three landings from 25% and 50% body height using their own habitual landing strategies. Results at 25% showed significantly lower maximal vertical force (39.9%, p < 0.0013, ES = -1.88), longer times to maximal vertical force (68.6%, p < 0.0015, ES = 1.72) and lower loading rates (65.1%, p < 0.0002, ES = -2.22) in the traceur group. Maximal sound was also shown to be lower (3.6%), with an effect size of -0.63; however, this was not statistically significant (p < 0.1612). At 50%, traceurs exhibited significantly different values within all variables including maximal sound (8.6%, p < 0.03, ES = -1.04), maximal vertical force (49.0%, p < 0.0002, ES = -2.38), time to maximal vertical force (65.9%, p < 0.0067, ES = 1.32) and loading rates (66.3%, p < 0.0002, ES = -2.00). Foot strike analysis revealed traceurs landed using forefoot or forefoot-midfoot strategies in 93.2% of trials, whereas recreationally trained participants used these styles in only 8.3% of these landings. To conclude, the habitual landings of traceurs are more effective at lowering the kinetic landing variables associated with a higher injury risk in comparison to recreationally trained individuals. Sound as a measure of landing effectiveness and safety holds potential significance; however, it requires further research to confirm. Key points: Habitual traceur landings were observed to be safer landing techniques in comparison to those utilised by recreationally trained individuals, due to the lower maximal vertical forces, slower times to maximal vertical force, lesser loading rates and lower maximal sound. Traceurs predominantly landed with the forefoot only, whereas recreationally trained individuals habitually utilised a forefoot to heel landing strategy. The habitual landing techniques performed by traceurs may be beneficial for other landing sports to incorporate into training to reduce injury.

  8. The equation of state of predominant detonation products

    NASA Astrophysics Data System (ADS)

    Zaug, Joseph; Crowhurst, Jonathan; Bastea, Sorin; Fried, Laurence

    2009-06-01

    The equation of state of detonation products, when incorporated into an experimentally grounded thermochemical reaction algorithm can be used to predict the performance of explosives. Here we report laser based Impulsive Stimulated Light Scattering measurements of the speed of sound from a variety of polar and nonpolar detonation product supercritical fluids and mixtures. The speed of sound data are used to improve the exponential-six potentials employed within the Cheetah thermochemical code. We will discuss the improvements made to Cheetah in terms of predictions vs. measured performance data for common polymer blended explosives. Accurately computing the chemistry that occurs from reacted binder materials is one important step forward in our efforts.
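
    For reference, one common parameterization of the exponential-six (Buckingham exp-6) pair potential mentioned above is shown below, where epsilon is the well depth, r_m the position of the potential minimum, and alpha controls the steepness of the repulsive wall; the exact functional form and parameters used in the Cheetah code are not given in the abstract.

```latex
% One common parameterization of the exp-6 pair potential; the exact form
% and parameters used in the Cheetah code are not given in the abstract.
V(r) \;=\; \frac{\varepsilon}{\alpha - 6}
      \left[\, 6\, e^{\,\alpha\left(1 - r/r_m\right)}
      \;-\; \alpha \left(\frac{r_m}{r}\right)^{6} \right],
\qquad \alpha > 6
```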

  9. Improved NASA-ANOPP Noise Prediction Computer Code for Advanced Subsonic Propulsion Systems. Volume 2; Fan Suppression Model Development

    NASA Technical Reports Server (NTRS)

    Kontos, Karen B.; Kraft, Robert E.; Gliebe, Philip R.

    1996-01-01

    The Aircraft Noise Prediction Program (ANOPP) is an industry-wide tool used to predict turbofan engine flyover noise in system noise optimization studies. Its goal is to provide the best currently available methods for source noise prediction. As part of a program to improve the Heidmann fan noise model, models for fan inlet and fan exhaust noise suppression estimation that are based on simple engine and acoustic geometry inputs have been developed. The models can be used to predict sound power level suppression and sound pressure level suppression at a position specified relative to the engine inlet.

  10. Fundamental Limits of Delay and Security in Device-to-Device Communication

    DTIC Science & Technology

    2013-01-01

    Systematic MDS (maximum distance separable) codes and random binning strategies that achieve a Pareto optimal delay-reconstruction tradeoff. The erasure MD... file, and a coding scheme based on erasure compression and Slepian-Wolf binning is presented. The coding scheme is shown to provide a Pareto optimal... The erasure MD setup is then used to propose a

  11. Final report on LDRD project : coupling strategies for multi-physics applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hopkins, Matthew Morgan; Moffat, Harry K.; Carnes, Brian

    Many current and future modeling applications at Sandia including ASC milestones will critically depend on the simultaneous solution of vastly different physical phenomena. Issues due to code coupling are often not addressed, understood, or even recognized. The objectives of the LDRD have been both in theory and in code development. We will show that we have provided a fundamental analysis of coupling, i.e., when strong coupling vs. a successive substitution strategy is needed. We have enabled the implementation of tighter coupling strategies through additions to the NOX and Sierra code suites to make coupling strategies available now. We have leveraged existing functionality to do this. Specifically, we have built into NOX the capability to handle fully coupled simulations from multiple codes, and we have also built into NOX the capability to handle Jacobian-Free Newton-Krylov simulations that link multiple applications. We show how this capability may be accessed from within the Sierra Framework as well as from outside of Sierra. The critical impact from this LDRD is that we have shown how and have delivered strategies for enabling strong Newton-based coupling while respecting the modularity of existing codes. This will facilitate the use of these codes in a coupled manner to solve multi-physics applications.
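
    The Jacobian-Free Newton-Krylov idea referenced above can be illustrated with a small sketch: the coupled problem is posed as a single nonlinear residual spanning both codes, and the Jacobian-vector products needed by the Krylov solver are formed by finite differences, so no cross-code Jacobian is ever assembled. The example below uses toy residual functions and SciPy's GMRES; it is illustrative only and does not use the NOX or Sierra APIs.

```python
# Minimal JFNK sketch for two coupled "codes" (toy residuals).  Jacobian-
# vector products are approximated by finite differences, so no cross-code
# Jacobian is assembled.  Illustrative only; not the NOX/Sierra API.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def coupled_residual(u):
    x, y = u
    r1 = x**2 + y - 3.0        # "code 1" residual, depends on both fields
    r2 = x + y**2 - 5.0        # "code 2" residual, depends on both fields
    return np.array([r1, r2])

def jfnk_solve(F, u0, tol=1e-10, max_newton=20, eps=1e-7):
    u = u0.astype(float)
    for _ in range(max_newton):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        # Matrix-free Jacobian-vector product: J v ~ (F(u + eps*v) - F(u)) / eps
        J = LinearOperator((u.size, u.size),
                           matvec=lambda v: (F(u + eps * v) - r) / eps)
        du, _ = gmres(J, -r, atol=1e-12)
        u = u + du
    return u

print(jfnk_solve(coupled_residual, np.array([1.0, 1.0])))   # ~ [1.0, 2.0]
```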

  12. Bulk viscosity of water in acoustic modal analysis and experiment

    NASA Astrophysics Data System (ADS)

    Kůrečka, Jan; Habán, Vladimír; Himr, Daniel

    2018-06-01

    Bulk viscosity is an important factor in the damping properties of fluid systems and exhibits frequency-dependent behaviour. This paper presents a comparison between modal analysis in ANSYS Acoustics, a custom code, and experimental data. The measured system consists of closed-ended, water-filled steel pipes of different lengths. The influence of the pipe wall, the flanges on both ends, and longitudinal waves in the structural part was included in the measurement evaluation, so that the obtained values of sound speed and bulk viscosity are parameters of the fluid alone. A numerical simulation was then carried out using only the fluid volume, over a range of bulk viscosity values, and the damping characteristics in this range were compared to the measured values. The results show a significant influence of sound speed; consequently, using the sound speed value regressed from the experimental data yields a better fit between measurement and computation.
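
    For orientation, the classical plane-wave attenuation coefficient attributable to shear and bulk viscosity (neglecting thermal conduction) is shown below, with rho_0 the density, c the sound speed, mu the shear viscosity, and mu_B the bulk viscosity; the modal damping model used in the paper may differ.

```latex
% Classical viscous plane-wave attenuation coefficient (thermal conduction
% neglected); the paper's modal damping model may differ.
\alpha(\omega) \;=\; \frac{\omega^{2}}{2\,\rho_{0}\,c^{3}}
      \left( \tfrac{4}{3}\,\mu \;+\; \mu_{B} \right)
```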

  13. A Hybrid RANS/LES Approach for Predicting Jet Noise

    NASA Technical Reports Server (NTRS)

    Goldstein, Marvin E.

    2006-01-01

    Hybrid acoustic prediction methods have an important advantage over the current Reynolds averaged Navier-Stokes (RANS) based methods in that they only involve modeling of the relatively universal subscale motion and not the configuration dependent larger scale turbulence. Unfortunately, they are unable to account for the high frequency sound generated by the turbulence in the initial mixing layers. This paper introduces an alternative approach that directly calculates the sound from a hybrid RANS/LES flow model (which can resolve the steep gradients in the initial mixing layers near the nozzle lip) and adopts modeling techniques similar to those used in current RANS based noise prediction methods to determine the unknown sources in the equations for the remaining unresolved components of the sound field. The resulting prediction method would then be intermediate between the current noise prediction codes and previously proposed hybrid noise prediction methods.

  14. Temporal processing and adaptation in the songbird auditory forebrain.

    PubMed

    Nagel, Katherine I; Doupe, Allison J

    2006-09-21

    Songbird auditory neurons must encode the dynamics of natural sounds at many volumes. We investigated how neural coding depends on the distribution of stimulus intensities. Using reverse-correlation, we modeled responses to amplitude-modulated sounds as the output of a linear filter and a nonlinear gain function, then asked how filters and nonlinearities depend on the stimulus mean and variance. Filter shape depended strongly on mean amplitude (volume): at low mean, most neurons integrated sound over many milliseconds, while at high mean, neurons responded more to local changes in amplitude. Increasing the variance (contrast) of amplitude modulations had less effect on filter shape but decreased the gain of firing in most cells. Both filter and gain changes occurred rapidly after a change in statistics, suggesting that they represent nonlinearities in processing. These changes may permit neurons to signal effectively over a wider dynamic range and are reminiscent of findings in other sensory systems.
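
    The linear-filter-plus-nonlinear-gain (LN) model described above can be estimated with a simple reverse-correlation recipe: the filter is taken as the spike-triggered average of the preceding stimulus envelope, and the static nonlinearity is read off by binning filtered-stimulus values against spike probability. The sketch below applies this generic recipe to synthetic data; it is not the authors' exact estimator.

```python
# Minimal LN-model sketch: estimate a linear filter by reverse correlation
# (spike-triggered average of the amplitude envelope) and a static output
# nonlinearity by binning.  Synthetic data; not the authors' exact estimator.
import numpy as np

rng = np.random.default_rng(2)
T, n_lags = 50_000, 40
stim = rng.standard_normal(T)                      # amplitude-envelope stimulus

# Ground-truth LN neuron used to generate spikes
true_filter = np.exp(-np.arange(n_lags) / 8.0) * np.sin(np.arange(n_lags) / 4.0)
drive = np.convolve(stim, true_filter, mode="full")[:T]
p_spike = 1.0 / (1.0 + np.exp(-(drive - 1.0)))     # sigmoidal gain function
spikes = rng.random(T) < 0.2 * p_spike

# 1) Reverse correlation: spike-triggered average of the preceding stimulus
spike_idx = np.flatnonzero(spikes)
spike_idx = spike_idx[spike_idx >= n_lags]
sta = np.mean([stim[i - n_lags + 1:i + 1][::-1] for i in spike_idx], axis=0)

# 2) Static nonlinearity: spike probability vs. filtered stimulus, by deciles
filtered = np.convolve(stim, sta, mode="full")[:T]
bins = np.quantile(filtered, np.linspace(0, 1, 11))
which = np.clip(np.digitize(filtered, bins) - 1, 0, 9)
gain = [spikes[which == b].mean() for b in range(10)]
print("filter peak lag:", np.argmax(np.abs(sta)), "gain per decile:", np.round(gain, 3))
```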

  15. Contrast Gain Control in Auditory Cortex

    PubMed Central

    Rabinowitz, Neil C.; Willmore, Ben D.B.; Schnupp, Jan W.H.; King, Andrew J.

    2011-01-01

    Summary The auditory system must represent sounds with a wide range of statistical properties. One important property is the spectrotemporal contrast in the acoustic environment: the variation in sound pressure in each frequency band, relative to the mean pressure. We show that neurons in ferret auditory cortex rescale their gain to partially compensate for the spectrotemporal contrast of recent stimulation. When contrast is low, neurons increase their gain, becoming more sensitive to small changes in the stimulus, although the effectiveness of contrast gain control is reduced at low mean levels. Gain is primarily determined by contrast near each neuron's preferred frequency, but there is also a contribution from contrast in more distant frequency bands. Neural responses are modulated by contrast over timescales of ∼100 ms. By using contrast gain control to expand or compress the representation of its inputs, the auditory system may be seeking an efficient coding of natural sounds. PMID:21689603

  16. Determining Position Inside Non-industrial Buildings Using Ultrasound Transducers

    PubMed Central

    Escudero, Francesc; Margalef, Jordi; Luengo, Sonia; Alsina, Maria; Ribes, Josep M.; Pérez, Juan

    2007-01-01

    Position determination inside a building where no GPS signal is received can be achieved using laser transmitters in industrial settings where no people are present, or by triangulation of signal strength (normally of electromagnetic signals) if the required accuracy is coarser than a metre. Our solution is aimed at situations where people are present and where the required accuracy is better than 30 cm, such as in shopping precincts or supermarkets. To achieve this, a network of ultrasonic transmitters that receives a synchronised time signal is fitted into the ceiling. Each transmitter has a unique identifier code and emits its code, with ASK modulation over the ultrasonic band centred on 40 kHz, at a delay with respect to the common time signal that is proportional to its code number. The receivers circulating beneath the transmitters receive the codes of those within their detection range, translate the time delays into distances, and then obtain their position by triangulation, since the receivers know the position of every transmitter. Because the receivers are synchronised neither with the common time signal nor with the actual speed of sound, whose value varies appreciably with temperature, relative humidity and atmospheric pressure, a successive approximation algorithm has been introduced. This is based on the fact that the Z coordinate of the receiver is known and constant; thus, with only three different identifiers received, it is possible to deduce the phase of the common time signal, and the speed of sound can be estimated with a fourth identifier. PMID:28903247
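
    The core geometric step described above, turning per-transmitter delays into distances and triangulating with the receiver's known Z coordinate, can be sketched as below. For simplicity the common time reference and the speed of sound are assumed known here (the real system estimates them by successive approximation with a fourth transmitter), and all positions are hypothetical example values.

```python
# Minimal sketch: convert ultrasonic time delays into distances and solve
# for the receiver's (x, y) by linearized least squares, using the known,
# constant Z coordinate of the receiver.  The time reference and speed of
# sound are assumed known here; the real system estimates them iteratively.
import numpy as np

C_SOUND = 343.0            # m/s, assumed known in this sketch
Z_RX = 1.2                 # known, constant receiver height (m)

# Ceiling transmitters: (x, y, z) positions in metres (example values)
tx = np.array([[0.0, 0.0, 3.0], [4.0, 0.0, 3.0],
               [0.0, 4.0, 3.0], [4.0, 4.0, 3.0]])

def locate_xy(delays_s):
    d = C_SOUND * np.asarray(delays_s)              # slant ranges
    r = np.sqrt(d**2 - (tx[:, 2] - Z_RX) ** 2)      # horizontal ranges
    # Linearize by differencing each circle equation against transmitter 0
    A = 2 * (tx[1:, :2] - tx[0, :2])
    b = (r[0] ** 2 - r[1:] ** 2
         + np.sum(tx[1:, :2] ** 2, axis=1) - np.sum(tx[0, :2] ** 2))
    xy, *_ = np.linalg.lstsq(A, b, rcond=None)
    return xy

# Simulated receiver at (1.0, 2.5, Z_RX): delays derived from true ranges
true_xy = np.array([1.0, 2.5])
ranges = np.linalg.norm(tx - np.array([*true_xy, Z_RX]), axis=1)
print(locate_xy(ranges / C_SOUND))                  # ~ [1.0, 2.5]
```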

  17. Ambient Air Mitigation Strategies for Reducing Exposures to Mobile Source PM2.5 Emissions

    EPA Science Inventory

    Presentation discussing ambient air mitigation strategies for near-road exposures. The presentation provides an overview of multiple methods, but focuses on the role roadside features (sound walls, vegetation) may play. This presentation summarizes previously published work by...

  18. The Electric Company Writers' Notebook.

    ERIC Educational Resources Information Center

    Children's Television Workshop, New York, NY.

    This handbook outlines the curriculum objectives for the children's television program, "The Electric Company." The first portion of the text delineates strategies for teaching symbol/sound analysis, including units on blends, letter groups, and word structure. A second section addresses strategies for reading for meaning, including…

  19. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR CODING: ARIZONA LAB DATA (UA-D-13.0)

    EPA Science Inventory

    The purpose of this SOP is to define the coding strategy for Arizona Lab Data. This strategy was developed for use in the Arizona NHEXAS project and the "Border" study. Keywords: data; coding; lab data forms.

    The National Human Exposure Assessment Survey (NHEXAS) is a federal ...

  20. [Urban noise pollution].

    PubMed

    Chouard, C H

    2001-07-01

    Noise is responsible for cochlear and general damage. Hearing loss and tinnitus greatly depend on sound intensity and duration. Short-duration sounds of sufficient intensity (gunshots or explosions) will not be described because they are not commonly encountered in the normal urban environment. Sound levels of less than 75 dB(A) are unlikely to cause permanent hearing loss, while sound levels of about 85 dB(A) with exposures of 8 h per day will produce permanent hearing loss after many years. Popular, highly amplified music is today one of the most dangerous causes of noise-induced hearing loss. The intensity of noises (airport, highway) responsible for stress and general consequences (cardiovascular) is generally lower. Individual noise sensitivity depends on several factors. Strategies to prevent damage from sound exposure should include the use of individual hearing protection devices, education programs beginning with school-age children, consumer guidance, increased product noise labelling, and hearing conservation programs for occupational settings.

  1. Guidelines for the Design, Fabrication, Testing, Installation and Operation of Srf Cavities

    NASA Astrophysics Data System (ADS)

    Theilacker, J.; Carter, H.; Foley, M.; Hurh, P.; Klebaner, A.; Krempetz, K.; Nicol, T.; Olis, D.; Page, T.; Peterson, T.; Pfund, P.; Pushka, D.; Schmitt, R.; Wands, R.

    2010-04-01

    Superconducting Radio-Frequency (SRF) cavities containing cryogens under pressure pose a potential rupture hazard to equipment and personnel. Generally, pressure vessels fall within the scope of the ASME Boiler and Pressure Vessel Code; however, the use of niobium as a material for SRF cavities is beyond the applicability of the Code. Fermilab developed a guideline to ensure sound engineering practices governing the design, fabrication, testing, installation and operation of SRF cavities. The objective of the guideline is to reduce hazards and to achieve a level of safety equivalent to that afforded by the ASME Code. The guideline addresses concerns specific to SRF cavities in the areas of materials, design and analysis, welding and brazing, pressure relieving requirements, pressure testing and quality control.

  2. Musical Experience Influences Statistical Learning of a Novel Language

    PubMed Central

    Shook, Anthony; Marian, Viorica; Bartolotti, James; Schroeder, Scott R.

    2014-01-01

    Musical experience may benefit learning a new language by enhancing the fidelity with which the auditory system encodes sound. In the current study, participants with varying degrees of musical experience were exposed to two statistically-defined languages consisting of auditory Morse-code sequences which varied in difficulty. We found an advantage for highly-skilled musicians, relative to less-skilled musicians, in learning novel Morse-code based words. Furthermore, in the more difficult learning condition, performance of lower-skilled musicians was mediated by their general cognitive abilities. We suggest that musical experience may lead to enhanced processing of statistical information and that musicians’ enhanced ability to learn statistical probabilities in a novel Morse-code language may extend to natural language learning. PMID:23505962

  3. Design and Implementation of Sound Searching Robots in Wireless Sensor Networks

    PubMed Central

    Han, Lianfu; Shen, Zhengguang; Fu, Changfeng; Liu, Chao

    2016-01-01

    A sound target-searching robot system is described which includes a 4-channel microphone array for sound collection, a magneto-resistive sensor for declination measurement, and a wireless sensor network (WSN) for exchanging information. It has an embedded sound signal enhancement, recognition and location method, and a sound searching strategy based on a digital signal processor (DSP). As the wireless network nodes, three robots form the WSN together with a personal computer (PC) in order to search for three different sound targets in task-oriented collaboration. An improved spectral subtraction method is used for noise reduction. Mel-frequency cepstral coefficients (MFCCs) are extracted as the audio signal features. Based on the K-nearest neighbor classification method, the trained feature template is matched to recognize the sound signal type. The paper utilizes an improved generalized cross correlation method to estimate the time delay of arrival (TDOA), and then employs spherical interpolation for sound location according to the TDOA and the geometrical position of the microphone array. A new mapping has been proposed to direct the motor to search for sound targets flexibly. As the sink node, the PC receives and displays the results processed in the WSN, and it also has the ultimate power to make decisions on the received results in order to improve their accuracy. The experimental results show that the designed three-robot system implements the sound target searching function without collisions and performs well. PMID:27657088
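
    The TDOA estimation step mentioned above is commonly implemented as a generalized cross correlation with phase transform (GCC-PHAT) weighting; the "improved" variant used by the authors is not specified in the abstract. The sketch below shows the standard GCC-PHAT on synthetic microphone signals.

```python
# Minimal GCC-PHAT sketch for estimating the time delay of arrival (TDOA)
# between two microphones.  The authors' "improved" generalized cross
# correlation is not specified in the abstract; this is the standard PHAT
# weighting applied to synthetic signals.
import numpy as np

def gcc_phat(sig, ref, fs):
    n = len(sig) + len(ref)
    S = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
    S /= np.abs(S) + 1e-12                    # PHAT weighting: keep phase only
    cc = np.fft.irfft(S, n)
    cc = np.concatenate((cc[-(n // 2):], cc[:n // 2]))   # centre zero lag
    lag = np.argmax(np.abs(cc)) - n // 2
    return lag / fs                           # TDOA in seconds

# Synthetic test: microphone 2 receives the same noise burst 25 samples later
fs = 16_000
rng = np.random.default_rng(3)
src = rng.standard_normal(4096)
mic1 = src
mic2 = np.concatenate((np.zeros(25), src))[:4096]
print(gcc_phat(mic2, mic1, fs) * fs)          # ~ +25 samples
```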

  4. Design and Implementation of Sound Searching Robots in Wireless Sensor Networks.

    PubMed

    Han, Lianfu; Shen, Zhengguang; Fu, Changfeng; Liu, Chao

    2016-09-21

    A sound target-searching robot system is described which includes a 4-channel microphone array for sound collection, a magneto-resistive sensor for declination measurement, and a wireless sensor network (WSN) for exchanging information. It has an embedded sound signal enhancement, recognition and location method, and a sound searching strategy based on a digital signal processor (DSP). As the wireless network nodes, three robots form the WSN together with a personal computer (PC) in order to search for three different sound targets in task-oriented collaboration. An improved spectral subtraction method is used for noise reduction. Mel-frequency cepstral coefficients (MFCCs) are extracted as the audio signal features. Based on the K-nearest neighbor classification method, the trained feature template is matched to recognize the sound signal type. The paper utilizes an improved generalized cross correlation method to estimate the time delay of arrival (TDOA), and then employs spherical interpolation for sound location according to the TDOA and the geometrical position of the microphone array. A new mapping has been proposed to direct the motor to search for sound targets flexibly. As the sink node, the PC receives and displays the results processed in the WSN, and it also has the ultimate power to make decisions on the received results in order to improve their accuracy. The experimental results show that the designed three-robot system implements the sound target searching function without collisions and performs well.

  5. Neural Representation of Concurrent Harmonic Sounds in Monkey Primary Auditory Cortex: Implications for Models of Auditory Scene Analysis

    PubMed Central

    Steinschneider, Mitchell; Micheyl, Christophe

    2014-01-01

    The ability to attend to a particular sound in a noisy environment is an essential aspect of hearing. To accomplish this feat, the auditory system must segregate sounds that overlap in frequency and time. Many natural sounds, such as human voices, consist of harmonics of a common fundamental frequency (F0). Such harmonic complex tones (HCTs) evoke a pitch corresponding to their F0. A difference in pitch between simultaneous HCTs provides a powerful cue for their segregation. The neural mechanisms underlying concurrent sound segregation based on pitch differences are poorly understood. Here, we examined neural responses in monkey primary auditory cortex (A1) to two concurrent HCTs that differed in F0 such that they are heard as two separate “auditory objects” with distinct pitches. We found that A1 can resolve, via a rate-place code, the lower harmonics of both HCTs, a prerequisite for deriving their pitches and for their perceptual segregation. Onset asynchrony between the HCTs enhanced the neural representation of their harmonics, paralleling their improved perceptual segregation in humans. Pitches of the concurrent HCTs could also be temporally represented by neuronal phase-locking at their respective F0s. Furthermore, a model of A1 responses using harmonic templates could qualitatively reproduce psychophysical data on concurrent sound segregation in humans. Finally, we identified a possible intracortical homolog of the “object-related negativity” recorded noninvasively in humans, which correlates with the perceptual segregation of concurrent sounds. Findings indicate that A1 contains sufficient spectral and temporal information for segregating concurrent sounds based on differences in pitch. PMID:25209282
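
    The stimuli described above, two concurrent harmonic complex tones with different fundamentals and an optional onset asynchrony, are straightforward to synthesize; the sketch below uses arbitrary example values for the F0s, harmonic count, and delay rather than the study's actual stimulus parameters.

```python
# Minimal stimulus sketch: two concurrent harmonic complex tones (HCTs) with
# different F0s, the second one onset-delayed.  The F0s, harmonic count, and
# delay are arbitrary example values, not the study's stimulus parameters.
import numpy as np

def hct(f0, n_harmonics, dur, fs):
    t = np.arange(int(dur * fs)) / fs
    return sum(np.sin(2 * np.pi * f0 * k * t) for k in range(1, n_harmonics + 1))

fs, dur = 44_100, 0.5
a = hct(200.0, 10, dur, fs)                    # HCT with F0 = 200 Hz
b = hct(240.0, 10, dur, fs)                    # HCT with F0 = 240 Hz
delay = int(0.05 * fs)                         # 50-ms onset asynchrony
b = np.concatenate((np.zeros(delay), b))[:len(a)]
mixture = (a + b) / np.max(np.abs(a + b))      # concurrent two-pitch stimulus
print(mixture.shape)
```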

  6. Adaptation in sound localization processing induced by interaural time difference in amplitude envelope at high frequencies.

    PubMed

    Kawashima, Takayuki; Sato, Takao

    2012-01-01

    When a second sound follows a long first sound, its location appears to be perceived away from the first one (the localization/lateralization aftereffect). This aftereffect has often been considered to reflect an efficient neural coding of sound locations in the auditory system. To understand determinants of the localization aftereffect, the current study examined whether it is induced by an interaural temporal difference (ITD) in the amplitude envelope of high frequency transposed tones (over 2 kHz), which is known to function as a sound localization cue. In Experiment 1, participants were required to adjust the position of a pointer to the perceived location of test stimuli before and after adaptation. Test and adapter stimuli were amplitude modulated (AM) sounds presented at high frequencies and their positional differences were manipulated solely by the envelope ITD. Results showed that the adapter's ITD systematically affected the perceived position of test sounds to the directions expected from the localization/lateralization aftereffect when the adapter was presented at ±600 µs ITD; a corresponding significant effect was not observed for a 0 µs ITD adapter. In Experiment 2, the observed adapter effect was confirmed using a forced-choice task. It was also found that adaptation to the AM sounds at high frequencies did not significantly change the perceived position of pure-tone test stimuli in the low frequency region (128 and 256 Hz). The findings in the current study indicate that ITD in the envelope at high frequencies induces the localization aftereffect. This suggests that ITD in the high frequency region is involved in adaptive plasticity of auditory localization processing.

  7. The Other End of the Leash: An Experimental Test to Analyze How Owners Interact with Their Pet Dogs.

    PubMed

    Cimarelli, Giulia; Turcsán, Borbála; Range, Friederike; Virányi, Zsófia

    2017-10-13

    It has been suggested that the way in which owners interact with their dogs can largely vary and influence the dog-owner bond, but very few objective studies, so far, have addressed how the owner interacts with the dog. The goal of the present study was to record dog owners' interaction styles by means of objective observation and coding. The experiment included eight standardized situations in which owners of pet dogs were asked to perform specific tasks including both positive (i.e. playing, teaching a new task, showing a preference towards an object in a food searching task, greeting after separation) and potentially distressing tasks (i.e. physical restriction during DNA sampling, putting a T-shirt onto the dog, giving basic obedience commands while the dog was distracted). The video recordings were coded off-line using a specifically designed coding scheme including scores for communication, social support, warmth, enthusiasm, and play style, as well as frequency of behaviors like petting, praising, commands, and attention sounds. Exploratory Factor Analysis of the 20 variables measured revealed 3 factors, labeled as Owner Warmth, Owner Social Support, and Owner Control, which can be viewed as analogues to parenting style dimensions. The experimental procedure introduced here represents the first standardized measure of interaction styles of dog owners. The methodology presented here is a useful tool to investigate individual variation in the interaction style of pet dog owners that can be used to explain differences in the dog-human relationship, dogs' behavioral outcomes, and dogs stress coping strategies, all crucial elements both from a theoretical and applied point of view.

  8. Labyrinth Seal Flutter Analysis and Test Validation in Support of Robust Rocket Engine Design

    NASA Technical Reports Server (NTRS)

    El-Aini, Yehia; Park, John; Frady, Greg; Nesman, Tom

    2010-01-01

    High energy-density turbomachines, like the SSME turbopumps, utilize labyrinth seals, also referred to as knife-edge seals, to control leakage flow. The pressure drop for such seals is an order of magnitude higher than that of comparable jet engine seals. This is aggravated by the requirement of tight clearances, resulting in possible unfavorable fluid-structure interaction of the seal system (seal flutter). To demonstrate these characteristics, a benchmark case of a High Pressure Oxygen Turbopump (HPOTP) outlet labyrinth seal was studied in detail. First, an analytical assessment of the seal stability was conducted using a Pratt & Whitney legacy seal flutter code. Sensitivity parameters including pressure drop, rotor-to-stator running clearances and cavity volumes were examined and modeling strategies established. Second, a concurrent experimental investigation was undertaken to validate the stability of the seal at the equivalent operating conditions of the pump. Actual pump hardware was used to construct the test rig, also referred to as the Flutter Rig. The flutter rig did not include rotational effects or temperature. However, the use of hydrogen gas at high inlet pressure provided good representation of the critical parameters affecting flutter, especially the speed of sound. The flutter code predictions showed consistent trends in good agreement with the experimental data. The rig test program produced a stability threshold empirical parameter that separated operation with and without flutter. This empirical parameter was used to establish the seal build clearances to avoid flutter while providing the required cooling flow metering. The calibrated flutter code along with the empirical flutter parameter was used to redesign the baseline seal, resulting in a flutter-free robust configuration. Provisions for incorporation of mechanical damping devices were introduced in the redesigned seal to ensure added robustness.

  9. The Other End of the Leash: An Experimental Test to Analyze How Owners Interact with Their Pet Dogs

    PubMed Central

    Cimarelli, Giulia; Turcsán, Borbála; Range, Friederike; Virányi, Zsófia

    2017-01-01

    It has been suggested that the way in which owners interact with their dogs can largely vary and influence the dog-owner bond, but very few objective studies, so far, have addressed how the owner interacts with the dog. The goal of the present study was to record dog owners' interaction styles by means of objective observation and coding. The experiment included eight standardized situations in which owners of pet dogs were asked to perform specific tasks including both positive (i.e. playing, teaching a new task, showing a preference towards an object in a food searching task, greeting after separation) and potentially distressing tasks (i.e. physical restriction during DNA sampling, putting a T-shirt onto the dog, giving basic obedience commands while the dog was distracted). The video recordings were coded off-line using a specifically designed coding scheme including scores for communication, social support, warmth, enthusiasm, and play style, as well as frequency of behaviors like petting, praising, commands, and attention sounds. Exploratory Factor Analysis of the 20 variables measured revealed 3 factors, labeled as Owner Warmth, Owner Social Support, and Owner Control, which can be viewed as analogues to parenting style dimensions. The experimental procedure introduced here represents the first standardized measure of interaction styles of dog owners. The methodology presented here is a useful tool to investigate individual variation in the interaction style of pet dog owners that can be used to explain differences in the dog-human relationship, dogs' behavioral outcomes, and dogs stress coping strategies, all crucial elements both from a theoretical and applied point of view. PMID:29053669

  10. A System for Heart Sounds Classification

    PubMed Central

    Redlarski, Grzegorz; Gradolewski, Dawid; Palkowski, Aleksander

    2014-01-01

    The future of quick and efficient disease diagnosis lies in the development of reliable non-invasive methods. For cardiac diseases – one of the major causes of death around the globe – an electronic stethoscope equipped with an automatic heart tone identification system appears to be the best solution. Thanks to advances in technology, the quality of phonocardiography signals is no longer an issue. However, appropriate algorithms for auto-diagnosis systems of heart diseases that are capable of distinguishing most known pathological states have not yet been developed. The main issue is the non-stationary character of phonocardiography signals, as well as the wide range of distinguishable pathological heart sounds. In this paper a new heart sound classification technique, which might find use in medical diagnostic systems, is presented. It is shown that by combining Linear Predictive Coding coefficients, used for feature extraction, with a classifier built by combining a Support Vector Machine and a Modified Cuckoo Search algorithm, an improvement in the performance of the diagnostic system, in terms of accuracy, complexity and range of distinguishable heart sounds, can be made. The developed system achieved accuracy above 93% for all considered cases, including simultaneous identification of twelve different heart sound classes. The system is compared with four major classification methods, demonstrating its reliability. PMID:25393113
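
    The first two stages of the pipeline above, LPC feature extraction followed by an SVM classifier, are sketched below on placeholder signals; the LPC coefficients are obtained with the autocorrelation (Yule-Walker) method, and the Modified Cuckoo Search hyperparameter tuning from the paper is omitted.

```python
# Minimal sketch: LPC feature extraction (autocorrelation / Yule-Walker
# method) followed by an SVM classifier.  The Modified Cuckoo Search tuning
# step is omitted, and the "heart sound" segments below are random
# placeholders, not real phonocardiograms.
import numpy as np
from scipy.linalg import solve_toeplitz
from sklearn.svm import SVC

def lpc_coeffs(x, order=10):
    """LPC coefficients a[1..order] via the autocorrelation method."""
    x = x - np.mean(x)
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    return solve_toeplitz((r[:-1], r[:-1]), r[1:])   # Yule-Walker equations

rng = np.random.default_rng(4)

def segment(label):
    """Placeholder 'recording': coloured noise whose spectrum depends on the class."""
    w = rng.standard_normal(2000)
    return np.convolve(w, [1.0, 0.9] if label else [1.0, -0.9], mode="same")

X = np.array([lpc_coeffs(segment(lbl)) for lbl in (0, 1) * 40])
y = np.array([0, 1] * 40)

clf = SVC(kernel="rbf").fit(X[:60], y[:60])
print("held-out accuracy:", clf.score(X[60:], y[60:]))
```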

  11. Early Career Survival: Many Find It Harder to Enter the Profession than It Sounds in the Recruitment Literature

    ERIC Educational Resources Information Center

    Rodgers, Lala

    2004-01-01

    Many find it harder to enter the profession than it sounds in the recruitment literature. This article outlines how one librarian's job searching strategies after she experienced a layoff from her dream job due to budget cuts, can help others gain, or regain, a foothold in the profession. The author of this article offers many suggestions for…

  12. Initial Teaching Orthographies.

    ERIC Educational Resources Information Center

    Dewey, Godfrey

    To achieve its purpose, an initial teaching orthography (i.t.o.) should be as simple in form and substance as possible; it should be phonemic rather than phonetic. The 40 sounds distinguished by Pitmanic shorthand and some provision for schwa can serve as a basic code. The symbols can be derived from either of two major sources--standardizing the…

  13. 40 CFR 52.2477 - .Original identification of plan section.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) as in effect 10/18/90; Washington Administrative Code Chapter 173-433 (Solid Fuel Burning Device... supplements to include the VMT Tracking Report data required for the Puget Sound CO Nonattainment Areas, dated... its I/M program in the two Washington ozone nonattainment areas classified as “marginal” and in the...

  14. 40 CFR 52.2477 - .Original identification of plan section.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) as in effect 10/18/90; Washington Administrative Code Chapter 173-433 (Solid Fuel Burning Device... supplements to include the VMT Tracking Report data required for the Puget Sound CO Nonattainment Areas, dated... its I/M program in the two Washington ozone nonattainment areas classified as “marginal” and in the...

  15. Effects of Dual Coded Multimedia Instruction Employing Image Morphing on Learning a Logographic Language

    ERIC Educational Resources Information Center

    Wang, Ling; Blackwell, Aleka Akoyunoglou

    2015-01-01

    Native speakers of alphabetic languages, which use letters governed by grapheme-phoneme correspondence rules, often find it particularly challenging to learn a logographic language whose writing system employs symbols with no direct sound-to-spelling connection but links to the visual and semantic information. The visuospatial properties of…

  16. An open real-time tele-stethoscopy system

    PubMed Central

    2012-01-01

    Background Acute respiratory infections are the leading cause of childhood mortality. The lack of physicians in rural areas of developing countries makes their correct diagnosis and treatment difficult. The staff of rural health facilities (health-care technicians) may not be qualified to distinguish respiratory diseases by auscultation. For this reason, the goal of this project is the development of a tele-stethoscopy system that allows a physician to receive real-time cardio-respiratory sounds from a remote auscultation, as well as video images showing where the technician is placing the stethoscope on the patient’s body. Methods A real-time wireless stethoscopy system was designed. The initial requirements were: 1) The system must send audio and video synchronously over IP networks, not requiring an Internet connection; 2) It must preserve the quality of cardiorespiratory sounds, allowing the binaural pieces and the chestpiece of standard stethoscopes to be adapted; and 3) Cardiorespiratory sounds should be recordable at both sides of the communication. In order to verify the diagnostic capacity of the system, a clinical validation with eight specialists has been designed. In a preliminary test, twelve patients have been auscultated by all the physicians using the tele-stethoscopy system, versus a local auscultation using a traditional stethoscope. The system must allow listening to cardiac (systolic and diastolic murmurs, gallop sound, arrhythmias) and respiratory (rhonchi, rales and crepitations, wheeze, diminished and bronchial breath sounds, pleural friction rub) sounds. Results The design, development and initial validation of the real-time wireless tele-stethoscopy system are described in detail. The system was conceived from scratch as open-source, low-cost and designed in such a way that many universities and small local companies in developing countries may manufacture it. Only free open-source software has been used in order to minimize manufacturing costs and look for alliances to support its improvement and adaptation. The microcontroller firmware code, the computer software code and the PCB schematics are available for free download in a subversion repository hosted in SourceForge. Conclusions It has been shown that real-time tele-stethoscopy, together with a videoconference system that allows a remote specialist to oversee the auscultation, may be a very helpful tool in rural areas of developing countries. PMID:22917062

  17. Assembly of the Auditory Circuitry by a Hox Genetic Network in the Mouse Brainstem

    PubMed Central

    Di Bonito, Maria; Narita, Yuichi; Avallone, Bice; Sequino, Luigi; Mancuso, Marta; Andolfi, Gennaro; Franzè, Anna Maria; Puelles, Luis; Rijli, Filippo M.; Studer, Michèle

    2013-01-01

    Rhombomeres (r) contribute to brainstem auditory nuclei during development. Hox genes are determinants of rhombomere-derived fate and neuronal connectivity. Little is known about the contribution of individual rhombomeres and their associated Hox codes to auditory sensorimotor circuitry. Here, we show that r4 contributes to functionally linked sensory and motor components, including the ventral nucleus of lateral lemniscus, posterior ventral cochlear nuclei (VCN), and motor olivocochlear neurons. Assembly of the r4-derived auditory components is involved in sound perception and depends on regulatory interactions between Hoxb1 and Hoxb2. Indeed, in Hoxb1 and Hoxb2 mutant mice the transmission of low-level auditory stimuli is lost, resulting in hearing impairments. On the other hand, Hoxa2 regulates the Rig1 axon guidance receptor and controls contralateral projections from the anterior VCN to the medial nucleus of the trapezoid body, a circuit involved in sound localization. Thus, individual rhombomeres and their associated Hox codes control the assembly of distinct functionally segregated sub-circuits in the developing auditory brainstem. PMID:23408898

  18. Assembly of the auditory circuitry by a Hox genetic network in the mouse brainstem.

    PubMed

    Di Bonito, Maria; Narita, Yuichi; Avallone, Bice; Sequino, Luigi; Mancuso, Marta; Andolfi, Gennaro; Franzè, Anna Maria; Puelles, Luis; Rijli, Filippo M; Studer, Michèle

    2013-01-01

    Rhombomeres (r) contribute to brainstem auditory nuclei during development. Hox genes are determinants of rhombomere-derived fate and neuronal connectivity. Little is known about the contribution of individual rhombomeres and their associated Hox codes to auditory sensorimotor circuitry. Here, we show that r4 contributes to functionally linked sensory and motor components, including the ventral nucleus of lateral lemniscus, posterior ventral cochlear nuclei (VCN), and motor olivocochlear neurons. Assembly of the r4-derived auditory components is involved in sound perception and depends on regulatory interactions between Hoxb1 and Hoxb2. Indeed, in Hoxb1 and Hoxb2 mutant mice the transmission of low-level auditory stimuli is lost, resulting in hearing impairments. On the other hand, Hoxa2 regulates the Rig1 axon guidance receptor and controls contralateral projections from the anterior VCN to the medial nucleus of the trapezoid body, a circuit involved in sound localization. Thus, individual rhombomeres and their associated Hox codes control the assembly of distinct functionally segregated sub-circuits in the developing auditory brainstem.

  19. Laboratory evidence for short and long-term damage to pink salmon incubating in oiled gravel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heintz, R.; Rice, S.; Wiedmer, M.

    1995-12-31

    Pink salmon, incubating in gravel contaminated with crude oil, demonstrated immediate and delayed responses in the laboratory at doses consistent with the concentrations observed in oiled streams in Prince William Sound. The authors incubated pink salmon embryos in a simulated intertidal environment with gravel contaminated by oil from the Exxon Valdez. During the incubation and emergence periods the authors quantified dose-response curves for characters affected directly by the oil. After emergence, fish were coded wire tagged and released, or cultured in netpens. Delayed responses have been observed among the cultured fish, and further observations will be made when coded wire tagged fish return in September 1995. The experiments have demonstrated that eggs need not contact oiled gravel to experience increased mortality, and doses as low as 17 ppb tPAH in water can have delayed effects on growth. A comparison of sediment tPAH concentrations from streams in Prince William Sound with these laboratory data suggests that many 1989 brood pink salmon were exposed to deleterious quantities of oil.

  20. A critical review of principal traffic noise models: Strategies and implications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garg, Naveen, E-mail: ngarg@mail.nplindia.ernet.in; Department of Mechanical, Production and Industrial Engineering, Delhi Technological University, Delhi 110042; Maji, Sagar

    2014-04-01

    The paper presents an exhaustive comparison of principal traffic noise models adopted in recent years in developed nations. The comparison is drawn on the basis of technical attributes including source modelling and sound propagation algorithms. Although the characterization of the source in terms of rolling and propulsion noise in conjunction with advanced numerical methods for sound propagation has significantly reduced the uncertainty in traffic noise predictions, the approach followed is quite complex and requires specialized mathematical skills for predictions, which is sometimes quite cumbersome for town planners. Also, it is sometimes difficult to follow the best approach when a variety of solutions have been proposed. This paper critically reviews all these aspects pertaining to the recent models developed and adapted in some countries and also discusses the strategies followed and implications of these models. - Highlights: • Principal traffic noise models developed are reviewed. • Sound propagation algorithms used in traffic noise models are compared. • Implications of models are discussed.

  1. Divided multimodal attention: sensory trace and context coding strategies in spatially congruent auditory and visual presentation.

    PubMed

    Kristjánsson, Tómas; Thorvaldsson, Tómas Páll; Kristjánsson, Arni

    2014-01-01

    Previous research involving both unimodal and multimodal studies suggests that single-response change detection is a capacity-free process while a discriminatory up or down identification is capacity-limited. The trace/context model assumes that this reflects different memory strategies rather than inherent differences between identification and detection. To perform such tasks, one of two strategies is used, a sensory trace or a context coding strategy, and if one is blocked, people will automatically use the other. A drawback to most preceding studies is that stimuli are presented at separate locations, creating the possibility of a spatial confound, which invites alternative interpretations of the results. We describe a series of experiments, investigating divided multimodal attention, without the spatial confound. The results challenge the trace/context model. Our critical experiment involved a gap before a change in volume and brightness, which according to the trace/context model blocks the sensory trace strategy, simultaneously with a roaming pedestal, which should block the context coding strategy. The results clearly show that people can use strategies other than sensory trace and context coding in the tasks and conditions of these experiments, necessitating changes to the trace/context model.

  2. Neural plasticity associated with recently versus often heard objects.

    PubMed

    Bourquin, Nathalie M-P; Spierer, Lucas; Murray, Micah M; Clarke, Stephanie

    2012-09-01

    In natural settings the same sound source is often heard repeatedly, with variations in spectro-temporal and spatial characteristics. We investigated how such repetitions influence sound representations and in particular how auditory cortices keep track of recently vs. often heard objects. A set of 40 environmental sounds was presented twice, i.e. as prime and as repeat, while subjects categorized the corresponding sound sources as living vs. non-living. Electrical neuroimaging analyses were applied to auditory evoked potentials (AEPs) comparing primes vs. repeats (effect of presentation) and the four experimental sections. Dynamic analysis of distributed source estimations revealed i) a significant main effect of presentation within the left temporal convexity at 164-215 ms post-stimulus onset; and ii) a significant main effect of section in the right temporo-parietal junction at 166-213 ms. A 3-way repeated measures ANOVA (hemisphere×presentation×section) applied to neural activity of the above clusters during the common time window confirmed the specificity of the left hemisphere for the effect of presentation, but not that of the right hemisphere for the effect of section. In conclusion, spatio-temporal dynamics of neural activity encode the temporal history of exposure to sound objects. Rapidly occurring plastic changes within the semantic representations of the left hemisphere keep track of objects heard a few seconds before, independent of the more general sound exposure history. Progressively occurring and more long-lasting plastic changes occurring predominantly within right hemispheric networks, which are known to code for perceptual, semantic and spatial aspects of sound objects, keep track of multiple exposures. Copyright © 2012 Elsevier Inc. All rights reserved.

  3. PHASE I MATERIALS PROPERTY DATABASE DEVELOPMENT FOR ASME CODES AND STANDARDS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Weiju; Lin, Lianshan

    2013-01-01

    To support the ASME Boiler and Pressure Vessel Code (BPVC) in the modern information era, development of a web-based materials property database has been initiated under the supervision of the ASME Committee on Materials. To achieve efficiency, the project heavily draws upon experience from development of the Gen IV Materials Handbook and the Nuclear System Materials Handbook. The effort is divided into two phases. Phase I is planned to deliver a materials data file warehouse that offers a depository for various files containing raw data and background information, and Phase II will provide a relational digital database with advanced features facilitating digital data processing and management. Population of the database will start with materials property data for nuclear applications and expand to data covering the entire ASME Codes and Standards, including the piping codes, as the database structure is continuously optimized. The ultimate goal of the effort is to establish a sound cyber infrastructure that supports ASME Codes and Standards development and maintenance.

  4. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--STANDARD OPERATING PROCEDURE FOR CODING: ARIZONA LAB DATA (UA-D-13.0)

    EPA Science Inventory

    The purpose of this SOP is to define the coding strategy for Arizona Lab Data. This strategy was developed for use in the Arizona NHEXAS project and the Border study. Keywords: data; coding; lab data forms.

    The U.S.-Mexico Border Program is sponsored by the Environmental Healt...

  5. Assessment of communication abilities in multilingual children: Language rights or human rights?

    PubMed

    Cruz-Ferreira, Madalena

    2018-02-01

    Communication involves a sender, a receiver and a shared code operating through shared rules. Breach of communication results from disruption to any of these basic components of a communicative chain, although assessment of communication abilities typically focuses on senders/receivers, on two assumptions: first, that their command of features and rules of the language in question (the code), such as sounds, words or word order, as described in linguists' theorisations, represents the full scope of linguistic competence; and second, that languages are stable, homogeneous entities, unaffected by their users' communicative needs. Bypassing the role of the code in successful communication assigns decisive rights to abstract languages rather than to real-life language users, routinely leading to suspected or diagnosed speech-language disorder in academic and clinical assessment of multilingual children's communicative skills. This commentary reflects on whether code-driven assessment practices comply with the spirit of Article 19 of the Universal Declaration of Human Rights.

  6. Individual differences reveal correlates of hidden hearing deficits.

    PubMed

    Bharadwaj, Hari M; Masud, Salwa; Mehraei, Golbarg; Verhulst, Sarah; Shinn-Cunningham, Barbara G

    2015-02-04

    Clinical audiometry has long focused on determining the detection thresholds for pure tones, which depend on intact cochlear mechanics and hair cell function. Yet many listeners with normal hearing thresholds complain of communication difficulties, and the causes for such problems are not well understood. Here, we explore whether normal-hearing listeners exhibit such suprathreshold deficits, affecting the fidelity with which subcortical areas encode the temporal structure of clearly audible sound. Using an array of measures, we evaluated a cohort of young adults with thresholds in the normal range to assess both cochlear mechanical function and temporal coding of suprathreshold sounds. Listeners differed widely in both electrophysiological and behavioral measures of temporal coding fidelity. These measures correlated significantly with each other. Conversely, these differences were unrelated to the modest variation in otoacoustic emissions, cochlear tuning, or the residual differences in hearing threshold present in our cohort. Electroencephalography revealed that listeners with poor subcortical encoding had poor cortical sensitivity to changes in interaural time differences, which are critical for localizing sound sources and analyzing complex scenes. These listeners also performed poorly when asked to direct selective attention to one of two competing speech streams, a task that mimics the challenges of many everyday listening environments. Together with previous animal and computational models, our results suggest that hidden hearing deficits, likely originating at the level of the cochlear nerve, are part of "normal hearing." Copyright © 2015 the authors 0270-6474/15/352161-12$15.00/0.

  7. Does conal prime CANAL more than cinal? Masked phonological priming effects in Spanish with the lexical decision task.

    PubMed

    Pollatsek, Alexander; Perea, Manuel; Carreiras, Manuel

    2005-04-01

    Evidence for an early involvement of phonology in word identification usually relies on the comparison between a target word preceded by a homophonic prime and an orthographic control (rait-RATE vs. raut-RATE). This comparison rests on the assumption that the two control primes are equally orthographically similar to the target. Here, we tested for phonological effects with a masked priming paradigm in which orthographic similarity between priming conditions was perfectly controlled at the letter level and in which identification of the prime was virtually at chance for both stimulus onset asynchronies (SOAs) (66 and 50 msec). In the key prime-target pairs, each prime differed from the target by one vowel letter, but one changed the sound of the initial c, and the other did not (cinal-CANAL vs. conal-CANAL). In the control prime-target pairs, the primes had the identical vowel manipulation, but neither changed the initial consonant sound (pinel-PANEL vs. ponel-PANEL). For both high- and low-frequency words, lexical decision responses to the target were slower when the prime changed the sound of the c than when it did not, whereas there was no difference for the controls at both SOAs. However, this phonological effect was small and was not significant when the SOA was 50 msec. The pattern of data is consistent with an early phonological coding of primes that occurs just a little later than orthographic coding.

  8. Computation of Sound Propagation by Boundary Element Method

    NASA Technical Reports Server (NTRS)

    Guo, Yueping

    2005-01-01

    This report documents the development of a Boundary Element Method (BEM) code for the computation of sound propagation in uniform mean flows. The basic formulation and implementation follow the standard BEM methodology; the convective wave equation and the boundary conditions on the surfaces of the bodies in the flow are formulated into an integral equation and the method of collocation is used to discretize this equation into a matrix equation to be solved numerically. New features discussed here include the formulation of the additional terms due to the effects of the mean flow and the treatment of the numerical singularities in the implementation by the method of collocation. The effects of mean flows introduce terms in the integral equation that contain the gradients of the unknown, which is undesirable if the gradients are treated as additional unknowns, greatly increasing the sizes of the matrix equation, or if numerical differentiation is used to approximate the gradients, introducing numerical error in the computation. It is shown that these terms can be reformulated in terms of the unknown itself, making the integral equation very similar to the case without mean flows and simple for numerical implementation. To avoid asymptotic analysis in the treatment of numerical singularities in the method of collocation, as is conventionally done, we perform the surface integrations in the integral equation by using sub-triangles so that the field point never coincides with the evaluation points on the surfaces. This simplifies the formulation and greatly facilitates the implementation. To validate the method and the code, three canonical problems are studied. They are respectively the sound scattering by a sphere, the sound reflection by a plate in uniform mean flows and the sound propagation over a hump of irregular shape in uniform flows. The first two have analytical solutions and the third is solved by the method of Computational Aeroacoustics (CAA), all of which are used for comparison with the BEM solutions. The comparisons show very good agreement and validate the accuracy of the BEM approach implemented here.
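
    The singularity treatment described above, integrating over sub-triangles so that quadrature points never coincide with the collocation point, can be sketched in a few lines. The recursive subdivision depth and the centroid quadrature rule below are illustrative choices, not the report's actual scheme.

        # Sketch of the sub-triangle idea: integrate a kernel over a surface panel with
        # centroid quadrature on refined sub-triangles, so no quadrature point coincides
        # with a collocation point placed at a panel vertex. The refinement depth and the
        # centroid rule are illustrative, not the BEM report's actual scheme.
        import numpy as np

        def subdivide(tri, depth):
            """Recursively split a triangle (3x3 array of vertices) into 4 sub-triangles."""
            if depth == 0:
                return [tri]
            a, b, c = tri
            ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2
            children = [np.array(t) for t in
                        ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca))]
            return [s for child in children for s in subdivide(child, depth - 1)]

        def panel_integral(kernel, tri, depth=3):
            """Centroid-rule quadrature of kernel(x) over the triangle tri."""
            total = 0.0
            for sub in subdivide(np.asarray(tri, dtype=float), depth):
                a, b, c = sub
                area = 0.5 * np.linalg.norm(np.cross(b - a, c - a))
                total += kernel((a + b + c) / 3.0) * area
            return total

        # Example: a 1/r kernel singular at a panel vertex; the sub-triangle centroids
        # never touch that vertex, so the quadrature stays finite.
        x0 = np.array([0.0, 0.0, 0.0])
        panel = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
        print(panel_integral(lambda x: 1.0 / np.linalg.norm(x - x0), panel))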

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singleton, Jr., Robert; Israel, Daniel M.; Doebling, Scott William

    For code verification, one compares the code output against known exact solutions. There are many standard test problems used in this capacity, such as the Noh and Sedov problems. ExactPack is a utility that integrates many of these exact solution codes into a common API (application program interface), and can be used as a stand-alone code or as a Python package. ExactPack consists of Python driver scripts that access a library of exact solutions written in Fortran or Python. The spatial profiles of the relevant physical quantities, such as the density, fluid velocity, sound speed, or internal energy, are returned at a time specified by the user. The solution profiles can be viewed and examined by a command line interface or a graphical user interface, and a number of analysis tools and unit tests are also provided. We have documented the physics of each problem in the solution library, and provided complete documentation on how to extend the library to include additional exact solutions. ExactPack’s code architecture makes it easy to extend the solution-code library to include additional exact solutions in a robust, reliable, and maintainable manner.
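
    As described, ExactPack exposes the solution library through Python driver scripts that return spatial profiles at a user-specified time. The sketch below shows that kind of usage in hedged form: the module path exactpack.solvers.noh, the Noh class, its gamma argument and the call signature are assumptions made for illustration, not a verified API.

        # Hedged usage sketch of ExactPack as a Python package, following the record's
        # description. The module path, class name, constructor argument and call
        # signature are assumptions; consult the ExactPack documentation for the real API.
        import numpy as np

        try:
            from exactpack.solvers.noh import Noh    # assumed location of a Noh solver
        except ImportError:
            Noh = None                               # ExactPack not installed; skip the demo

        if Noh is not None:
            r = np.linspace(0.0, 1.0, 200)           # spatial points for the profiles
            solver = Noh(gamma=5.0 / 3.0)            # ideal-gas Noh problem (assumed signature)
            solution = solver(r, t=0.6)              # density, velocity, sound speed, energy
            print(solution)                          # compare these profiles with code output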

  10. Auditory Distance Coding in Rabbit Midbrain Neurons and Human Perception: Monaural Amplitude Modulation Depth as a Cue

    PubMed Central

    Zahorik, Pavel; Carney, Laurel H.; Bishop, Brian B.; Kuwada, Shigeyuki

    2015-01-01

    Mechanisms underlying sound source distance localization are not well understood. Here we tested the hypothesis that a novel mechanism can create monaural distance sensitivity: a combination of auditory midbrain neurons' sensitivity to amplitude modulation (AM) depth and distance-dependent loss of AM in reverberation. We used virtual auditory space (VAS) methods for sounds at various distances in anechoic and reverberant environments. Stimulus level was constant across distance. With increasing modulation depth, some rabbit inferior colliculus neurons increased firing rates whereas others decreased. These neurons exhibited monotonic relationships between firing rates and distance for monaurally presented noise when two conditions were met: (1) the sound had AM, and (2) the environment was reverberant. The firing rates as a function of distance remained approximately constant without AM in either environment and, in an anechoic condition, even with AM. We corroborated this finding by reproducing the distance sensitivity using a neural model. We also conducted a human psychophysical study using similar methods. Normal-hearing listeners reported perceived distance in response to monaural 1 octave 4 kHz noise source sounds presented at distances of 35–200 cm. We found parallels between the rabbit neural and human responses. In both, sound distance could be discriminated only if the monaural sound in reverberation had AM. These observations support the hypothesis. When other cues are available (e.g., in binaural hearing), how much the auditory system actually uses the AM as a distance cue remains to be determined. PMID:25834060
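
    The proposed monaural cue is the distance-dependent loss of amplitude-modulation depth in reverberation. The sketch below estimates AM depth from the Hilbert envelope of a modulated noise before and after convolution with a crude synthetic reverberant tail; the sample rate, 32-Hz modulation and exponential tail are illustrative stand-ins for the study's virtual auditory space stimuli.

        # Sketch: AM depth of a modulated noise, measured from the Hilbert envelope,
        # before and after a crude synthetic "reverberant" tail. The sample rate,
        # modulation rate and exponential tail are illustrative stand-ins for the
        # study's virtual auditory space stimuli.
        import numpy as np
        from scipy.signal import fftconvolve, hilbert

        fs = 20000
        t = np.arange(0, 1.0, 1 / fs)
        carrier = np.random.randn(t.size)
        sam = (1 + np.cos(2 * np.pi * 32 * t)) * carrier   # 32-Hz, fully modulated noise

        n_rir = int(0.4 * fs)                              # 0.4-s exponentially decaying tail
        rir = np.exp(-np.arange(n_rir) / (0.1 * fs)) * np.random.randn(n_rir)
        rir[0] = 1.0                                       # keep a direct-path component
        reverberant = fftconvolve(sam, rir)[: t.size]

        def am_depth(x, fm, fs):
            """Modulation depth at fm, read from the spectrum of the Hilbert envelope."""
            env = np.abs(hilbert(x))
            spec = np.abs(np.fft.rfft(env))
            k = int(round(fm * env.size / fs))             # FFT bin of the modulation rate
            return 2 * spec[k] / spec[0]

        print("anechoic AM depth   :", am_depth(sam, 32, fs))
        print("reverberant AM depth:", am_depth(reverberant, 32, fs))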

  11. Frequency-Limiting Effects on Speech and Environmental Sound Identification for Cochlear Implant and Normal Hearing Listeners

    PubMed Central

    Chang, Son-A; Won, Jong Ho; Kim, HyangHee; Oh, Seung-Ha; Tyler, Richard S.; Cho, Chang Hyun

    2018-01-01

    Background and Objectives It is important to understand the frequency region of cues used, and not used, by cochlear implant (CI) recipients. Speech and environmental sound recognition by individuals with CI and normal-hearing (NH) was measured. Gradients were also computed to evaluate the pattern of change in identification performance with respect to the low-pass filtering or high-pass filtering cutoff frequencies. Subjects and Methods Frequency-limiting effects were implemented in the acoustic waveforms by passing the signals through low-pass filters (LPFs) or high-pass filters (HPFs) with seven different cutoff frequencies. Identification of Korean vowels and consonants produced by a male and female speaker and environmental sounds was measured. Crossover frequencies were determined for each identification test, where the LPF and HPF conditions yield identical identification scores. Results CI and NH subjects showed changes in identification performance in a similar manner as a function of cutoff frequency for the LPF and HPF conditions, suggesting that the degraded spectral information in the acoustic signals may similarly constrain the identification performance for both subject groups. However, CI subjects were generally less efficient than NH subjects in using the limited spectral information for speech and environmental sound identification due to the inefficient coding of acoustic cues through the CI sound processors. Conclusions This finding will provide vital information, in Korean, for understanding how the frequency information received through a CI processor differs from that received with normal hearing for speech and environmental sounds. PMID:29325391
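
    The frequency-limiting manipulation described above (low-pass and high-pass copies of the same material at several cutoffs) can be sketched with standard filters. The Butterworth order and the cutoff list below are illustrative, not the study's exact settings.

        # Sketch of the frequency-limiting manipulation: low-pass and high-pass versions
        # of a signal at several cutoff frequencies. The filter order and cutoff list are
        # illustrative choices, not the study's exact parameters.
        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        def frequency_limited(signal, fs, cutoff_hz, kind):
            """Return a low-pass ('lp') or high-pass ('hp') filtered copy of signal."""
            btype = "lowpass" if kind == "lp" else "highpass"
            sos = butter(4, cutoff_hz, btype=btype, fs=fs, output="sos")
            return sosfiltfilt(sos, signal)

        fs = 16000
        token = np.random.randn(fs)                          # stand-in for a recorded token
        cutoffs = [250, 500, 1000, 2000, 3000, 4000, 6000]   # seven illustrative cutoffs (Hz)
        stimuli = {(c, kind): frequency_limited(token, fs, c, kind)
                   for c in cutoffs for kind in ("lp", "hp")}
        # Plotting identification scores for these stimuli against cutoff frequency gives
        # the crossover point where the LPF and HPF conditions yield equal scores.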

  12. Frequency-Limiting Effects on Speech and Environmental Sound Identification for Cochlear Implant and Normal Hearing Listeners.

    PubMed

    Chang, Son-A; Won, Jong Ho; Kim, HyangHee; Oh, Seung-Ha; Tyler, Richard S; Cho, Chang Hyun

    2017-12-01

    It is important to understand the frequency region of cues used, and not used, by cochlear implant (CI) recipients. Speech and environmental sound recognition by individuals with CI and normal-hearing (NH) was measured. Gradients were also computed to evaluate the pattern of change in identification performance with respect to the low-pass filtering or high-pass filtering cutoff frequencies. Frequency-limiting effects were implemented in the acoustic waveforms by passing the signals through low-pass filters (LPFs) or high-pass filters (HPFs) with seven different cutoff frequencies. Identification of Korean vowels and consonants produced by a male and female speaker and environmental sounds was measured. Crossover frequencies were determined for each identification test, where the LPF and HPF conditions yield identical identification scores. CI and NH subjects showed changes in identification performance in a similar manner as a function of cutoff frequency for the LPF and HPF conditions, suggesting that the degraded spectral information in the acoustic signals may similarly constrain the identification performance for both subject groups. However, CI subjects were generally less efficient than NH subjects in using the limited spectral information for speech and environmental sound identification due to the inefficient coding of acoustic cues through the CI sound processors. This finding will provide vital information, in Korean, for understanding how the frequency information received through a CI processor differs from that received with normal hearing for speech and environmental sounds.

  13. A sound budget for the southeastern Bering Sea: measuring wind, rainfall, shipping, and other sources of underwater sound.

    PubMed

    Nystuen, Jeffrey A; Moore, Sue E; Stabeno, Phyllis J

    2010-07-01

    Ambient sound in the ocean contains quantifiable information about the marine environment. A passive aquatic listener (PAL) was deployed at a long-term mooring site in the southeastern Bering Sea from 27 April through 28 September 2004. This was a chain mooring with lots of clanking. However, the sampling strategy of the PAL filtered through this noise and allowed the background sound field to be quantified for natural signals. Distinctive signals include the sound from wind, drizzle and rain. These sources dominate the sound budget and their intensity can be used to quantify wind speed and rainfall rate. The wind speed measurement has an accuracy of +/-0.4 m s(-1) when compared to a buoy-mounted anemometer. The rainfall rate measurement is consistent with a land-based measurement in the Aleutian chain at Cold Bay, AK (170 km south of the mooring location). Other identifiable sounds include ships and short transient tones. The PAL was designed to reject transients in the range important for quantification of wind speed and rainfall, but serendipitously recorded peaks in the sound spectrum between 200 Hz and 3 kHz. Some of these tones are consistent with whale calls, but most are apparently associated with mooring self-noise.
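
    The record notes that the wind and rain contributions to the sound budget can be inverted to give wind speed and rainfall rate. The sketch below shows only the general shape of such an inversion, a band level mapped to wind speed through a regression; the 8-kHz band and the slope and offset values are made up for illustration, and the PAL's calibrated algorithms are not reproduced.

        # Sketch of the general shape of an acoustic wind-speed inversion: a band
        # sound-pressure level is mapped to wind speed through an empirical regression.
        # The 8-kHz band and the slope/offset values are made up for illustration; the
        # PAL's actual calibrated algorithms are not reproduced here.
        import numpy as np

        def band_level_db(spectrum_db, freqs_hz, f_lo=7500.0, f_hi=8500.0):
            """Mean spectral level (dB) in a band near 8 kHz."""
            band = (freqs_hz >= f_lo) & (freqs_hz <= f_hi)
            return float(np.mean(spectrum_db[band]))

        def wind_speed_from_level(level_db, slope_db_per_decade=20.0, offset_db=40.0):
            """Invert a hypothetical calibration: level = slope * log10(U) + offset."""
            return 10.0 ** ((level_db - offset_db) / slope_db_per_decade)

        freqs = np.linspace(100.0, 50000.0, 1000)      # toy spectrum: flat 50 dB
        spectrum = np.full_like(freqs, 50.0)
        print(wind_speed_from_level(band_level_db(spectrum, freqs)))   # ~3.2 with these made-up numbers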

  14. A State-of-the-Art Review: Personalization of Tinnitus Sound Therapy.

    PubMed

    Searchfield, Grant D; Durai, Mithila; Linford, Tania

    2017-01-01

    Background: There are several established, and an increasing number of putative, therapies using sound to treat tinnitus. There appear to be few guidelines for sound therapy selection and application. Aim: To review current approaches to personalizing sound therapy for tinnitus. Methods: A "state-of-the-art" review (Grant and Booth, 2009) was undertaken to answer the question: how do current sound-based therapies for tinnitus adjust for tinnitus heterogeneity? Scopus, Google Scholar, Embase and PubMed were searched for the 10-year period 2006-2016. The search strategy used the following key words: "tinnitus" AND "sound" AND "therapy" AND "guidelines" OR "personalized" OR "customized" OR "individual" OR "questionnaire" OR "selection." The results of the review were cataloged and organized into themes. Results: In total, 165 articles were reviewed in full; 83 contained sufficient details to contribute to answering the study question. The key themes identified were hearing compensation, pitch-matched therapy, maskability, reaction to sound and psychosocial factors. Although many therapies mentioned customization, few could be classified as being personalized. Several psychoacoustic and questionnaire-based methods for assisting treatment selection were identified. Conclusions: Assessment methods are available to assist clinicians in personalizing sound therapy and empower patients to be active in therapy decision-making. Most current therapies are modified using only one characteristic of the individual and/or their tinnitus.

  15. Issues and Strategies for Improving Constructibility.

    DTIC Science & Technology

    1988-09-01

    materials. First, the roof design called for the use of an asphalt coated roof felt layer below an EPDM membrane. The asphalt coated felt is not needed when a...being prepared by people trained in subjects foreign to construction. As designers, we were in fact contractually and professionally isolated from...specially constructed for sound isolation. The architect correctly specified special sound seals around the doors between the rooms in this area, but

  16. Multiplexed Detection of Cytokines Based on Dual Bar-Code Strategy and Single-Molecule Counting.

    PubMed

    Li, Wei; Jiang, Wei; Dai, Shuang; Wang, Lei

    2016-02-02

    Cytokines play important roles in the immune system and have been regarded as biomarkers. Because a single cytokine is not specific and accurate enough for strict diagnosis in practice, in this work we constructed a multiplexed detection method for cytokines based on a dual bar-code strategy and single-molecule counting. Taking interferon-γ (IFN-γ) and tumor necrosis factor-α (TNF-α) as model analytes, first, the magnetic nanobead was functionalized with the second antibody and primary bar-code strands, forming a magnetic nanoprobe. Then, through the specific reaction of the second antibody and the antigen fixed by the primary antibody, a sandwich-type immunocomplex was formed on the substrate. Next, the primary bar-code strands as amplification units triggered multibranched hybridization chain reaction (mHCR), producing nicked double-stranded polymers with multiple branched arms, which served as secondary bar-code strands. Finally, the secondary bar-code strands hybridized with the multimolecule labeled fluorescence probes, generating enhanced fluorescence signals. The numbers of fluorescence dots were counted one by one for quantification with an epi-fluorescence microscope. By integrating the primary and secondary bar-code-based amplification strategy and the multimolecule labeled fluorescence probes, this method displayed excellent sensitivity, with detection limits of 5 fM for both targets. Unlike the typical bar-code assay, in which the bar-code strands must be released and identified on a microarray, this method is more direct. Moreover, because of the selective immune reaction and the dual bar-code mechanism, the resulting method could detect the two targets simultaneously. Multiplexed analysis in human serum was also performed, suggesting that our strategy was reliable and had a great potential application in early clinical diagnosis.
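
    The readout above is counting fluorescence dots one by one in epifluorescence images. A minimal sketch of automating that count with a global threshold and connected-component labelling is given below; the threshold rule and minimum spot size are illustrative.

        # Minimal sketch: count bright fluorescent spots in a grayscale image by
        # thresholding and connected-component labelling. The threshold rule and the
        # minimum spot size are illustrative choices.
        import numpy as np
        from scipy import ndimage

        def count_spots(image, threshold=None, min_pixels=4):
            """Number of connected bright regions with at least min_pixels pixels."""
            if threshold is None:
                threshold = image.mean() + 3 * image.std()    # simple global threshold
            mask = image > threshold
            labels, n = ndimage.label(mask)                   # label connected regions
            sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
            return int(np.sum(sizes >= min_pixels))

        # Toy image: background noise plus three bright "molecules".
        rng = np.random.default_rng(0)
        img = rng.normal(10.0, 1.0, (256, 256))
        for y, x in [(40, 50), (100, 200), (180, 30)]:
            img[y:y + 2, x:x + 2] += 50.0
        print(count_spots(img))                               # expected: 3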

  17. Effects of various electrode configurations on music perception, intonation and speaker gender identification.

    PubMed

    Landwehr, Markus; Fürstenberg, Dirk; Walger, Martin; von Wedel, Hasso; Meister, Hartmut

    2014-01-01

    Advances in speech coding strategies and electrode array designs for cochlear implants (CIs) predominantly aim at improving speech perception. Current efforts are also directed at transmitting appropriate cues of the fundamental frequency (F0) to the auditory nerve with respect to speech quality, prosody, and music perception. The aim of this study was to examine the effects of various electrode configurations and coding strategies on speech intonation identification, speaker gender identification, and music quality rating. In six MED-EL CI users electrodes were selectively deactivated in order to simulate different insertion depths and inter-electrode distances when using the high definition continuous interleaved sampling (HDCIS) and fine structure processing (FSP) speech coding strategies. Identification of intonation and speaker gender was determined and music quality rating was assessed. For intonation identification HDCIS was robust against the different electrode configurations, whereas fine structure processing showed significantly worse results when a short electrode depth was simulated. In contrast, speaker gender recognition was not affected by electrode configuration or speech coding strategy. Music quality rating was sensitive to electrode configuration. In conclusion, the three experiments revealed different outcomes, even though they all addressed the reception of F0 cues. Rapid changes in F0, as seen with intonation, were the most sensitive to electrode configurations and coding strategies. In contrast, electrode configurations and coding strategies did not show large effects when F0 information was available over a longer time period, as seen with speaker gender. Music quality relies on additional spectral cues other than F0, and was poorest when a shallow insertion was simulated.

  18. A sounding rocket program in extreme and far ultraviolet interferometry

    NASA Technical Reports Server (NTRS)

    Chakrabarti, S.

    1994-01-01

    A self-compensating, all-reflection interferometric (SCARI) spectrometer was developed that can provide high resolution measurements of spectral features at any wavelength. Several mechanical components were developed that aid the instrument's performance at the short wavelength range. Examples include an optical bench and modular removable precision mechanisms for alignment. Upon alignment and lock down of the interferometer with the latter, the device is removed to minimize weight. A ray-trace code was developed to simulate the instrument's performance. Interference patterns were obtained at the shortest wavelength: the hydrogen Lyman alpha (1216 Å). A laboratory instrument was developed that will be flown aboard a Black Brant sounding rocket to study the very local interstellar medium.

  19. "Sign here": nursing value and the process of informed consent.

    PubMed

    Cook, Wesley E

    2014-01-01

    Protecting patient autonomy is a key nursing role. The Code of Ethics (American Nurses Association, 2010), contextualizes the nurse's call to advocacy within the doctrine of informed consent. This article offers a primer on the legal, ethical, and practical aspects of procedural informed consent and examines the value of nursing's role within the process. The theory of nursing's value is sound, but the literature lacks data. Higher levels of evidence are necessary to make sound decisions about best practice for the process of informed consent. As such, this article concludes that adding nursing research to the current discourse should prove most valuable to patients, providers, and the nursing profession as a whole.

  20. Restoring speech perception with cochlear implants by spanning defective electrode contacts.

    PubMed

    Frijns, Johan H M; Snel-Bongers, Jorien; Vellinga, Dirk; Schrage, Erik; Vanpoucke, Filiep J; Briaire, Jeroen J

    2013-04-01

    Even with six defective contacts, spanning can largely restore speech perception with the HiRes 120 speech processing strategy to the level supported by an intact electrode array. Moreover, the sound quality is not degraded. Previous studies have demonstrated reduced speech perception scores (SPS) with defective contacts in HiRes 120. This study investigated whether replacing defective contacts by spanning, i.e. current steering on non-adjacent contacts, is able to restore speech recognition to the level supported by an intact electrode array. Ten adult cochlear implant recipients (HiRes90K, HiFocus1J) with experience with HiRes 120 participated in this study. Three different defective electrode arrays were simulated (six separate defective contacts, three pairs or two triplets). The participants received three take-home strategies and were asked to evaluate the sound quality in five predefined listening conditions. After 3 weeks, SPS were evaluated with monosyllabic words in quiet and in speech-shaped background noise. The participants rated the sound quality equal for all take-home strategies. SPS with background noise were equal for all conditions tested. However, SPS in quiet (85% phonemes correct on average with the full array) decreased significantly with increasing spanning distance, with a 3% decrease for each spanned contact.

  1. A music quality rating test battery for cochlear implant users to compare the FSP and HDCIS strategies for music appreciation.

    PubMed

    Looi, Valerie; Winter, Philip; Anderson, Ilona; Sucher, Catherine

    2011-08-01

    The purpose of this study was to develop a music quality rating test battery (MQRTB) and pilot test it by comparing appraisal ratings from cochlear implant (CI) recipients using the fine-structure processing (FSP) and high-definition continuous interleaved sampling (HDCIS) speech processing strategies. The development of the MQRTB involved three stages: (1) Selection of test items for the MQRTB; (2) Verification of its length and complexity with normally-hearing individuals; and (3) Pilot testing with CI recipients. Part 1 involved 65 adult listeners, Part 2 involved 10 normally-hearing adults, and Part 3 involved five adult MED-EL CI recipients. The MQRTB consisted of ten songs, with ratings made on scales assessing pleasantness, naturalness, richness, fullness, sharpness, and roughness. Results of the pilot study, which compared FSP and HDCIS for music, indicated that acclimatization to a strategy had a significant effect on ratings (p < 0.05). When acclimatized to FSP, the group rated FSP as closer to 'exactly as I want it to sound' than HDCIS (p < 0.05), and rated HDCIS as sounding significantly sharper and rougher than FSP. However, when acclimatized to HDCIS, there were no significant differences between ratings. There was no effect of song familiarity or genre on ratings. Overall, the results suggest that the use of FSP as the default strategy for MED-EL recipients would have a positive effect on music appreciation, and that the MQRTB is an effective tool for assessing music sound quality.

  2. A Survey on the Feasibility of Sound Classification on Wireless Sensor Nodes

    PubMed Central

    Salomons, Etto L.; Havinga, Paul J. M.

    2015-01-01

    Wireless sensor networks are suitable to gain context awareness for indoor environments. As sound waves form a rich source of context information, equipping the nodes with microphones can be of great benefit. The algorithms to extract features from sound waves are often highly computationally intensive. This can be problematic as wireless nodes are usually restricted in resources. In order to be able to make a proper decision about which features to use, we survey how sound is used in the literature for global sound classification, age and gender classification, emotion recognition, person verification and identification and indoor and outdoor environmental sound classification. The results of the surveyed algorithms are compared with respect to accuracy and computational load. The accuracies are taken from the surveyed papers; the computational loads are determined by benchmarking the algorithms on an actual sensor node. We conclude that for indoor context awareness, the low-cost algorithms for feature extraction perform equally well as the more computationally-intensive variants. As the feature extraction still requires a large amount of processing time, we present four possible strategies to deal with this problem. PMID:25822142
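
    The survey's conclusion is that low-cost features can match the heavier ones for indoor context awareness. Two of the classically cheap frame features, zero-crossing rate and RMS energy, are sketched below; the 32-ms frame length is an illustrative choice.

        # Sketch of two low-cost frame features often used on resource-constrained nodes:
        # zero-crossing rate and RMS energy. The 32-ms frame length is illustrative.
        import numpy as np

        def frame_features(x, fs, frame_ms=32):
            """Return (zero-crossing rate, RMS energy) per non-overlapping frame."""
            n = int(fs * frame_ms / 1000)
            frames = x[: len(x) // n * n].reshape(-1, n)
            zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
            rms = np.sqrt(np.mean(frames ** 2, axis=1))
            return zcr, rms

        fs = 8000
        signal = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # one second of a 440-Hz tone
        zcr, rms = frame_features(signal, fs)
        print(zcr[:3], rms[:3])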

  3. Common humpback whale (Megaptera novaeangliae) sound types for passive acoustic monitoring.

    PubMed

    Stimpert, Alison K; Au, Whitlow W L; Parks, Susan E; Hurst, Thomas; Wiley, David N

    2011-01-01

    Humpback whales (Megaptera novaeangliae) are one of several baleen whale species in the Northwest Atlantic that coexist with vessel traffic and anthropogenic noise. Passive acoustic monitoring strategies can be used in conservation management, but the first step toward understanding the acoustic behavior of a species is a good description of its acoustic repertoire. Digital acoustic tags (DTAGs) were placed on humpback whales in the Stellwagen Bank National Marine Sanctuary to record and describe the non-song sounds being produced in conjunction with foraging activities. Peak frequencies of sounds were generally less than 1 kHz, but ranged as high as 6 kHz, and sounds were generally less than 1 s in duration. Cluster analysis distilled the dataset into eight groups of sounds with similar acoustic properties. The two most stereotyped and distinctive types ("wops" and "grunts") were also identified aurally as candidates for use in passive acoustic monitoring. This identification of two of the most common sound types will be useful for moving forward conservation efforts on this Northwest Atlantic feeding ground.
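
    The cluster analysis above distilled the tag recordings into eight groups of sounds with similar acoustic properties. The sketch below shows that kind of grouping with k-means on two simple per-call features; the feature choice, k = 8 and the random values stand in for the real measurements.

        # Sketch of grouping calls by acoustic properties with k-means, in the spirit of
        # the cluster analysis described. The two features, k = 8 and the random values
        # are placeholders for the real per-call measurements.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(1)
        peak_freq_hz = rng.uniform(100, 6000, 300)     # placeholder per-call peak frequencies
        duration_s = rng.uniform(0.05, 1.0, 300)       # placeholder per-call durations

        # Standardize so both features contribute comparably to the distance metric.
        features = np.column_stack([peak_freq_hz, duration_s])
        features = (features - features.mean(axis=0)) / features.std(axis=0)

        labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(features)
        print(np.bincount(labels))                     # number of calls per sound-type cluster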

  4. Improving Aerobic Dance Programs: The Key Role of Colleges and Universities.

    ERIC Educational Resources Information Center

    Francis, Lorna L.

    1991-01-01

    Presents strategies to help college and university professors provide practical skills needed by qualified aerobic dance instructors. An in-depth course emphasizing sound teaching strategies helps prepare dance exercise teachers. The article describes how the physical education department at San Diego State University offers aerobic dance…

  5. Employing Knowledge Transfer to Support IS Implementation in SMEs

    ERIC Educational Resources Information Center

    Wynn, Martin; Turner, Phillip; Abas, Hanida; Shen, Rui

    2009-01-01

    Information systems strategy is an increasingly important component of overall business strategy in small and medium-sized enterprises (SMEs). The need for readily available and consistent management information, drawn from integrated systems based on sound and upgradeable technologies, has led many senior company managers to review the business…

  6. The Efficacy of Shared Reading with Teens.

    ERIC Educational Resources Information Center

    Hicks, Karen; Wadlington, Beth

    An instructional strategy adapted the Big Book reading experience to the adolescent student to increase enthusiasm for reading, vocabulary development, and sound word attack and comprehension strategies. Criteria for choosing books to read aloud with teenagers include: (1) select well written books; (2) select books that reflect students'…

  7. Analysis of longitudinal data from the Puget Sound Transportation Panel : task F : cross section and dynamic analysis of activity and travel patterns in PSTP

    DOT National Transportation Integrated Search

    1995-02-01

    The profiles contained in the appendix are all in the Portland, Maine district. They are listed below by border groups as used in the study, with the U.S. Customs port codes indicated. Maine Frontier Border Crossings: Calais - Calais, Ferry Point, ME...

  8. The Contributions of Vocabulary and Letter Writing Automaticity to Word Reading and Spelling for Kindergartners

    ERIC Educational Resources Information Center

    Kim, Young-Suk; Al Otaiba, Stephanie; Puranik, Cynthia; Folsom, Jessica Sidler; Gruelich, Luana

    2014-01-01

    In the present study we examined the relation between alphabet knowledge fluency (letter names and sounds) and letter writing automaticity, and unique relations of letter writing automaticity and semantic knowledge (i.e., vocabulary) to word reading and spelling over and above code-related skills such as phonological awareness and alphabet…

  9. Using a Study Circle Model to Improve Teacher Confidence and Proficiency in Delivering Pronunciation Instruction in the Classroom

    ERIC Educational Resources Information Center

    Echelberger, Andrea; McCurdy, Suzanne Gichrist; Parrish, Betsy

    2018-01-01

    Adult English language learners are hungry for pronunciation instruction that helps them to "crack the code" of speaking intelligible English (Derwing, 2003). Research indicates benefits of pronunciation instruction with adult learners, yet many teachers believe they lack the knowledge and background to make sound instructional decisions…

  10. Improving speech perception in noise for children with cochlear implants.

    PubMed

    Gifford, René H; Olund, Amy P; Dejong, Melissa

    2011-10-01

    Current cochlear implant recipients are achieving increasingly higher levels of speech recognition; however, the presence of background noise continues to significantly degrade speech understanding for even the best performers. Newer generation Nucleus cochlear implant sound processors can be programmed with SmartSound strategies that have been shown to improve speech understanding in noise for adult cochlear implant recipients. The applicability of these strategies for use in children, however, is not fully understood nor widely accepted. To assess speech perception for pediatric cochlear implant recipients in the presence of a realistic restaurant simulation generated by an eight-loudspeaker (R-SPACE™) array in order to determine whether Nucleus sound processor SmartSound strategies yield improved sentence recognition in noise for children who learn language through the implant. Single subject, repeated measures design. Twenty-two experimental subjects with cochlear implants (mean age 11.1 yr) and 25 control subjects with normal hearing (mean age 9.6 yr) participated in this prospective study. Speech reception thresholds (SRT) in semidiffuse restaurant noise originating from an eight-loudspeaker array were assessed with the experimental subjects' everyday program incorporating Adaptive Dynamic Range Optimization (ADRO) as well as with the addition of Autosensitivity control (ASC). Adaptive SRTs with the Hearing In Noise Test (HINT) sentences were obtained for all 22 experimental subjects, and performance (in percent correct) was assessed in a fixed +6 dB SNR (signal-to-noise ratio) for a six-subject subset. Statistical analysis using a repeated-measures analysis of variance (ANOVA) evaluated the effects of the SmartSound setting on the SRT in noise. The primary findings mirrored those reported previously with adult cochlear implant recipients in that the addition of ASC to ADRO significantly improved speech recognition in noise for pediatric cochlear implant recipients. The mean degree of improvement in the SRT with the addition of ASC to ADRO was 3.5 dB for a mean SRT of 10.9 dB SNR. Thus, despite the fact that these children have acquired auditory/oral speech and language through the use of their cochlear implant(s) equipped with ADRO, the addition of ASC significantly improved their ability to recognize speech in high levels of diffuse background noise. The mean SRT for the control subjects with normal hearing was 0.0 dB SNR. Given that the mean SRT for the experimental group was 10.9 dB SNR, despite the improvements in performance observed with the addition of ASC, cochlear implants still do not completely overcome the speech perception deficit encountered in noisy environments accompanying the diagnosis of severe-to-profound hearing loss. SmartSound strategies currently available in latest generation Nucleus cochlear implant sound processors are able to significantly improve speech understanding in a realistic, semidiffuse noise for pediatric cochlear implant recipients. Despite the reluctance of pediatric audiologists to utilize SmartSound settings for regular use, the results of the current study support the addition of ASC to ADRO for everyday listening environments to improve speech perception in a child's typical everyday program. American Academy of Audiology.
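
    The speech reception thresholds above were tracked adaptively with HINT sentences in fixed-level noise. A minimal sketch of that kind of adaptive SNR track is given below; the 1-up/1-down rule, 2-dB step and the simulated listener are illustrative, not the HINT's exact procedure.

        # Minimal sketch of an adaptive SNR track for a sentence-in-noise SRT. The
        # 1-up/1-down rule, 2-dB step and the simulated listener are illustrative,
        # not the HINT's exact scoring rules.
        import random

        def simulated_response(snr_db, true_srt_db=11.0, slope=0.5):
            """Stub listener: probability of a correct sentence rises with SNR."""
            p = 1.0 / (1.0 + 10 ** (-slope * (snr_db - true_srt_db)))
            return random.random() < p

        def adaptive_srt(n_sentences=20, start_snr_db=20.0, step_db=2.0):
            snr, track = start_snr_db, []
            for _ in range(n_sentences):
                correct = simulated_response(snr)
                track.append(snr)
                snr += -step_db if correct else step_db    # down after correct, up after error
            return sum(track[-10:]) / 10                   # mean SNR of the last 10 trials

        print("estimated SRT (dB SNR):", adaptive_srt())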

  11. Perceptual consequences of disrupted auditory nerve activity.

    PubMed

    Zeng, Fan-Gang; Kong, Ying-Yee; Michalewski, Henry J; Starr, Arnold

    2005-06-01

    Perceptual consequences of disrupted auditory nerve activity were systematically studied in 21 subjects who had been clinically diagnosed with auditory neuropathy (AN), a recently defined disorder characterized by normal outer hair cell function but disrupted auditory nerve function. Neurological and electrophysiological evidence suggests that disrupted auditory nerve activity is due to desynchronized or reduced neural activity or both. Psychophysical measures showed that the disrupted neural activity has minimal effects on intensity-related perception, such as loudness discrimination, pitch discrimination at high frequencies, and sound localization using interaural level differences. In contrast, the disrupted neural activity significantly impairs timing-related perception, such as pitch discrimination at low frequencies, temporal integration, gap detection, temporal modulation detection, backward and forward masking, signal detection in noise, binaural beats, and sound localization using interaural time differences. These perceptual consequences are the opposite of what is typically observed in cochlear-impaired subjects who have impaired intensity perception but relatively normal temporal processing after taking their impaired intensity perception into account. These differences in perceptual consequences between auditory neuropathy and cochlear damage suggest the use of different neural codes in auditory perception: a suboptimal spike count code for intensity processing, a synchronized spike code for temporal processing, and a duplex code for frequency processing. We also proposed two underlying physiological models based on desynchronized and reduced discharge in the auditory nerve to successfully account for the observed neurological and behavioral data. These methods and measures cannot differentiate between these two AN models, but future studies using electric stimulation of the auditory nerve via a cochlear implant might. These results not only show the unique contribution of neural synchrony to sensory perception but also provide guidance for translational research in terms of better diagnosis and management of human communication disorders.

  12. Fine Structure Processing improves speech perception as well as objective and subjective benefits in pediatric MED-EL COMBI 40+ users.

    PubMed

    Lorens, Artur; Zgoda, Małgorzata; Obrycka, Anita; Skarżynski, Henryk

    2010-12-01

    Presently, there are only a few studies examining the benefits of fine structure information in coding strategies. Against this background, this study aims to assess the objective and subjective performance of children experienced with the C40+ cochlear implant using the CIS+ coding strategy who were upgraded to the OPUS 2 processor using FSP and HDCIS. In this prospective study, 60 children with more than 3.5 years of experience with the C40+ cochlear implant were upgraded to the OPUS 2 processor and fit and tested with HDCIS (Interval I). After 3 months of experience with HDCIS, they were fit with the FSP coding strategy (Interval II) and tested with all strategies (FSP, HDCIS, CIS+). After an additional 3-4 months, they were assessed on all three strategies and asked to choose their take-home strategy (Interval III). The children were tested using the Adaptive Auditory Speech Test, which measures speech reception threshold (SRT) in quiet and noise, at each test interval. The children were also asked to rate on a Visual Analogue Scale their satisfaction and coding strategy preference when listening to speech and a pop song. However, since not all tests could be performed at a single visit, some children were not able to complete all tests at all intervals. At the study endpoint, speech in quiet showed a significant difference in SRT of 1.0 dB between FSP and HDCIS, with FSP performing better. FSP proved a better strategy compared with CIS+, showing SRTs that were lower by 5.2 dB. Speech in noise tests showed FSP to be significantly better than CIS+ by 0.7 dB, and HDCIS to be significantly better than CIS+ by 0.8 dB. Both satisfaction and coding strategy preference ratings also revealed that the FSP and HDCIS strategies were better than the CIS+ strategy when listening to speech and music. FSP was better than HDCIS when listening to speech. This study demonstrates that long-term pediatric users of the COMBI 40+ are able to upgrade to a newer processor and coding strategy without compromising their listening performance and even improving their performance with FSP after a short time of experience. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  13. Neural coding of time-varying interaural time differences and time-varying amplitude in the inferior colliculus

    PubMed Central

    2017-01-01

    Binaural cues occurring in natural environments are frequently time varying, either from the motion of a sound source or through interactions between the cues produced by multiple sources. Yet, a broad understanding of how the auditory system processes dynamic binaural cues is still lacking. In the current study, we directly compared neural responses in the inferior colliculus (IC) of unanesthetized rabbits to broadband noise with time-varying interaural time differences (ITD) with responses to noise with sinusoidal amplitude modulation (SAM) over a wide range of modulation frequencies. On the basis of prior research, we hypothesized that the IC, one of the first stages to exhibit tuning of firing rate to modulation frequency, might use a common mechanism to encode time-varying information in general. Instead, we found weaker temporal coding for dynamic ITD compared with amplitude modulation and stronger effects of adaptation for amplitude modulation. The differences in temporal coding of dynamic ITD compared with SAM at the single-neuron level could be a neural correlate of “binaural sluggishness,” the inability to perceive fluctuations in time-varying binaural cues at high modulation frequencies, for which a physiological explanation has so far remained elusive. At ITD-variation frequencies of 64 Hz and above, where a temporal code was less effective, noise with a dynamic ITD could still be distinguished from noise with a constant ITD through differences in average firing rate in many neurons, suggesting a frequency-dependent tradeoff between rate and temporal coding of time-varying binaural information. NEW & NOTEWORTHY Humans use time-varying binaural cues to parse auditory scenes comprising multiple sound sources and reverberation. However, the neural mechanisms for doing so are poorly understood. Our results demonstrate a potential neural correlate for the reduced detectability of fluctuations in time-varying binaural information at high speeds, as occurs in reverberation. The results also suggest that the neural mechanisms for processing time-varying binaural and monaural cues are largely distinct. PMID:28381487
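
    The comparison above rests on two stimulus families: noise with sinusoidal amplitude modulation and noise whose ITD varies sinusoidally over time. A rough sketch of generating both is given below; the sample rate, 4-Hz variation rate and ±300-µs ITD excursion are illustrative, and the interpolation-based delay is a stand-in for the authors' stimulus generation.

        # Rough sketch: broadband noise with (a) sinusoidal amplitude modulation and
        # (b) a sinusoidally time-varying ITD imposed by delaying one ear's signal.
        # The sample rate, 4-Hz variation rate and +/-300-microsecond ITD excursion are
        # illustrative; the interpolation-based delay is not the authors' method.
        import numpy as np

        fs = 48000
        t = np.arange(0, 1.0, 1 / fs)
        noise = np.random.randn(t.size)

        # (a) Sinusoidal amplitude modulation at 4 Hz, 100% depth.
        sam = (1 + np.cos(2 * np.pi * 4 * t)) * noise

        # (b) Time-varying ITD: the right channel reads the noise at a sinusoidally
        # oscillating delay (linear interpolation); the left channel is undelayed.
        itd_s = 300e-6 * np.sin(2 * np.pi * 4 * t)       # ITD trajectory in seconds
        right = np.interp(t - itd_s, t, noise)           # fractional delay by interpolation
        left = noise
        binaural = np.stack([left, right], axis=1)       # two-channel stimulus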

  14. Perceived Benefits and Drawbacks of Disclosure Practices: An Analysis of PLWHAs' Strategies for Disclosing HIV Status.

    PubMed

    Catona, Danielle; Greene, Kathryn; Magsamen-Conrad, Kate

    2015-01-01

    People living with HIV/AIDS must make decisions about how, where, when, what, and to whom to disclose their HIV status. This study explores their perceptions of benefits and drawbacks of various HIV disclosure strategies. The authors interviewed 53 people living with HIV/AIDS from a large AIDS service organization in a northeastern U.S. state and used a combination of deductive and inductive coding to analyze disclosure strategies and advantages and disadvantages of disclosure strategies. Deductive codes consisted of eight strategies subsumed under three broad categories: mode (face-to-face, non-face-to-face, and third-party disclosure), context (setting, bringing a companion, and planning a time), and content (practicing and incremental disclosure). Inductive coding identified benefits and drawbacks for enacting each specific disclosure strategy. The discussion focuses on theoretical explanations for the reasons for and against disclosure strategy enactment and the utility of these findings for practical interventions concerning HIV disclosure practices and decision making.

  15. High-fidelity large eddy simulation for supersonic jet noise prediction

    NASA Astrophysics Data System (ADS)

    Aikens, Kurt M.

    The problem of intense sound radiation from supersonic jets is a concern for both civil and military applications. As a result, many experimental and computational efforts are focused at evaluating possible noise suppression techniques. Large-eddy simulation (LES) is utilized in many computational studies to simulate the turbulent jet flowfield. Integral methods such as the Ffowcs Williams-Hawkings (FWH) method are then used for propagation of the sound waves to the farfield. Improving the accuracy of this two-step methodology and evaluating beveled converging-diverging nozzles for noise suppression are the main tasks of this work. First, a series of numerical experiments are undertaken to ensure adequate numerical accuracy of the FWH methodology. This includes an analysis of different treatments for the downstream integration surface: with or without including an end-cap, averaging over multiple end-caps, and including an approximate surface integral correction term. Secondly, shock-capturing methods based on characteristic filtering and adaptive spatial filtering are used to extend a highly-parallelizable multiblock subsonic LES code to enable simulations of supersonic jets. The code is based on high-order numerical methods for accurate prediction of the acoustic sources and propagation of the sound waves. Furthermore, this new code is more efficient than the legacy version, allows cylindrical multiblock topologies, and is capable of simulating nozzles with resolved turbulent boundary layers when coupled with an approximate turbulent inflow boundary condition. Even though such wall-resolved simulations are more physically accurate, their expense is often prohibitive. To make simulations more economical, a wall model is developed and implemented. The wall modeling methodology is validated for turbulent quasi-incompressible and compressible zero pressure gradient flat plate boundary layers, and for subsonic and supersonic jets. The supersonic code additions and the wall model treatment are then utilized to simulate military-style nozzles with and without beveling of the nozzle exit plane. Experiments of beveled converging-diverging nozzles have found reduced noise levels for some observer locations. Predicting the noise for these geometries provides a good initial test of the overall methodology for a more complex nozzle. The jet flowfield and acoustic data are analyzed and compared to similar experiments and excellent agreement is found. Potential areas of improvement are discussed for future research.
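
    Among the methods listed above is adaptive spatial filtering for shock capturing. A toy one-dimensional illustration of the adaptive idea, extra smoothing applied only where a gradient-based sensor flags a likely discontinuity, is sketched below; the sensor, threshold and three-point stencil are placeholders, not the LES code's characteristic-filtering scheme.

        # Toy 1D illustration of adaptive spatial filtering for shock capturing: a
        # smoothing stencil is applied only where a gradient-based sensor flags a likely
        # discontinuity. The sensor, threshold and stencil are placeholders, not the LES
        # code's actual characteristic/adaptive filtering scheme.
        import numpy as np

        def adaptive_filter(u, threshold=0.5):
            sensor = np.abs(np.gradient(u))               # crude discontinuity sensor
            flagged = sensor > threshold * sensor.max()   # cells that get extra dissipation
            smoothed = u.copy()
            smoothed[1:-1] = 0.25 * u[:-2] + 0.5 * u[1:-1] + 0.25 * u[2:]   # 3-point filter
            return np.where(flagged, smoothed, u)         # filter only the flagged cells

        x = np.linspace(0.0, 1.0, 200)
        u = np.where(x < 0.5, 1.0, 0.1) + 0.01 * np.sin(40 * np.pi * x)     # step plus a smooth wave
        u_filtered = adaptive_filter(u)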

  16. The influence of bat echolocation call duration and timing on auditory encoding of predator distance in noctuoid moths.

    PubMed

    Gordon, Shira D; Ter Hofstede, Hannah M

    2018-03-22

    Animals co-occur with multiple predators, making sensory systems that can encode information about diverse predators advantageous. Moths in the families Noctuidae and Erebidae have ears with two auditory receptor cells (A1 and A2) used to detect the echolocation calls of predatory bats. Bat communities contain species that vary in echolocation call duration, and the dynamic range of A1 is limited by the duration of sound, suggesting that A1 provides less information about bats with shorter echolocation calls. To test this hypothesis, we obtained intensity-response functions for both receptor cells across many moth species for sound pulse durations representing the range of echolocation call durations produced by bat species in northeastern North America. We found that the threshold and dynamic range of both cells varied with sound pulse duration. The number of A1 action potentials per sound pulse increases linearly with increasing amplitude for long-duration pulses, saturating near the A2 threshold. For short sound pulses, however, A1 saturates with only a few action potentials per pulse at amplitudes far lower than the A2 threshold for both single sound pulses and pulse sequences typical of searching or approaching bats. Neural adaptation was only evident in response to approaching bat sequences at high amplitudes, not search-phase sequences. These results show that, for short echolocation calls, a large range of sound levels cannot be coded by moth auditory receptor activity, resulting in no information about the distance of a bat, although differences in activity between ears might provide information about direction. © 2018. Published by The Company of Biologists Ltd.

  17. Reverberation impairs brainstem temporal representations of voiced vowel sounds: challenging “periodicity-tagged” segregation of competing speech in rooms

    PubMed Central

    Sayles, Mark; Stasiak, Arkadiusz; Winter, Ian M.

    2015-01-01

    The auditory system typically processes information from concurrently active sound sources (e.g., two voices speaking at once), in the presence of multiple delayed, attenuated and distorted sound-wave reflections (reverberation). Brainstem circuits help segregate these complex acoustic mixtures into “auditory objects.” Psychophysical studies demonstrate a strong interaction between reverberation and fundamental-frequency (F0) modulation, leading to impaired segregation of competing vowels when segregation is on the basis of F0 differences. Neurophysiological studies of complex-sound segregation have concentrated on sounds with steady F0s, in anechoic environments. However, F0 modulation and reverberation are quasi-ubiquitous. We examine the ability of 129 single units in the ventral cochlear nucleus (VCN) of the anesthetized guinea pig to segregate the concurrent synthetic vowel sounds /a/ and /i/, based on temporal discharge patterns under closed-field conditions. We address the effects of added real-room reverberation, F0 modulation, and the interaction of these two factors, on brainstem neural segregation of voiced speech sounds. A firing-rate representation of single-vowels' spectral envelopes is robust to the combination of F0 modulation and reverberation: local firing-rate maxima and minima across the tonotopic array code vowel-formant structure. However, single-vowel F0-related periodicity information in shuffled inter-spike interval distributions is significantly degraded in the combined presence of reverberation and F0 modulation. Hence, segregation of double-vowels' spectral energy into two streams (corresponding to the two vowels), on the basis of temporal discharge patterns, is impaired by reverberation; specifically when F0 is modulated. All unit types (primary-like, chopper, onset) are similarly affected. These results offer neurophysiological insights to perceptual organization of complex acoustic scenes under realistically challenging listening conditions. PMID:25628545

  18. Observationally constrained modeling of sound in curved ocean internal waves: examination of deep ducting and surface ducting at short range.

    PubMed

    Duda, Timothy F; Lin, Ying-Tsong; Reeder, D Benjamin

    2011-09-01

    A study of 400 Hz sound focusing and ducting effects in a packet of curved nonlinear internal waves in shallow water is presented. Sound propagation roughly along the crests of the waves is simulated with a three-dimensional parabolic equation computational code, and the results are compared to measured propagation along fixed 3 and 6 km source/receiver paths. The measurements were made on the shelf of the South China Sea northeast of Tung-Sha Island. Construction of the time-varying three-dimensional sound-speed fields used in the modeling simulations was guided by environmental data collected concurrently with the acoustic data. Computed three-dimensional propagation results compare well with field observations. The simulations allow identification of time-dependent sound forward scattering and ducting processes within the curved internal gravity waves. Strong acoustic intensity enhancement was observed during passage of high-amplitude nonlinear waves over the source/receiver paths, and is replicated in the model. The waves were typical of the region (35 m vertical displacement). Two types of ducting are found in the model, which occur asynchronously. One type is three-dimensional modal trapping in deep ducts within the wave crests (shallow thermocline zones). The second type is surface ducting within the wave troughs (deep thermocline zones). © 2011 Acoustical Society of America

  19. Hearing in noisy environments: noise invariance and contrast gain control

    PubMed Central

    Willmore, Ben D B; Cooke, James E; King, Andrew J

    2014-01-01

    Contrast gain control has recently been identified as a fundamental property of the auditory system. Electrophysiological recordings in ferrets have shown that neurons continuously adjust their gain (their sensitivity to change in sound level) in response to the contrast of sounds that are heard. At the level of the auditory cortex, these gain changes partly compensate for changes in sound contrast. This means that sounds which are structurally similar, but have different contrasts, have similar neuronal representations in the auditory cortex. As a result, the cortical representation is relatively invariant to stimulus contrast and robust to the presence of noise in the stimulus. In the inferior colliculus (an important subcortical auditory structure), gain changes are less reliably compensatory, suggesting that contrast- and noise-invariant representations are constructed gradually as one ascends the auditory pathway. In addition to noise invariance, contrast gain control provides a variety of computational advantages over static neuronal representations; it makes efficient use of neuronal dynamic range, may contribute to redundancy-reducing, sparse codes for sound and allows for simpler decoding of population responses. The circuits underlying auditory contrast gain control are still under investigation. As in the visual system, these circuits may be modulated by factors other than stimulus contrast, forming a potential neural substrate for mediating the effects of attention as well as interactions between the senses. PMID:24907308

  20. Non-mineralized fibrocartilage shows the lowest elastic modulus in the rabbit supraspinatus tendon insertion: measurement with scanning acoustic microscopy.

    PubMed

    Sano, Hirotaka; Saijo, Yoshifumi; Kokubun, Shoichi

    2006-01-01

    The acoustic properties of rabbit supraspinatus tendon insertions were measured by scanning acoustic microscopy. After cutting parallel to the supraspinatus tendon fibers, specimens were fixed with 10% neutralized formalin, embedded in paraffin, and sectioned. Both the sound speed and the attenuation constant were measured at the insertion site. The 2-dimensional distribution of the sound speed and that of the attenuation constant were displayed with color-coded scales. The acoustic properties reflected both the histologic architecture and the collagen type. In the tendon proper and the non-mineralized fibrocartilage, the sound speed and attenuation constant gradually decreased as the predominant collagen type changed from I to II. In the mineralized fibrocartilage, they increased markedly with the mineralization of the fibrocartilaginous tissue. These results indicate that the non-mineralized fibrocartilage shows the lowest elastic modulus among 4 zones at the insertion site, which could be interpreted as an adaptation to various types of biomechanical stress.

  1. Perceptual Grouping Affects Pitch Judgments Across Time and Frequency

    PubMed Central

    Borchert, Elizabeth M. O.; Micheyl, Christophe; Oxenham, Andrew J.

    2010-01-01

    Pitch, the perceptual correlate of fundamental frequency (F0), plays an important role in speech, music and animal vocalizations. Changes in F0 over time help define musical melodies and speech prosody, while comparisons of simultaneous F0 are important for musical harmony, and for segregating competing sound sources. This study compared listeners’ ability to detect differences in F0 between pairs of sequential or simultaneous tones that were filtered into separate, non-overlapping spectral regions. The timbre differences induced by filtering led to poor F0 discrimination in the sequential, but not the simultaneous, conditions. Temporal overlap of the two tones was not sufficient to produce good performance; instead performance appeared to depend on the two tones being integrated into the same perceptual object. The results confirm the difficulty of comparing the pitches of sequential sounds with different timbres and suggest that, for simultaneous sounds, pitch differences may be detected through a decrease in perceptual fusion rather than an explicit coding and comparison of the underlying F0s. PMID:21077719

  2. A combined Fuzzy and Naive Bayesian strategy can be used to assign event codes to injury narratives.

    PubMed

    Marucci-Wellman, H; Lehto, M; Corns, H

    2011-12-01

    Bayesian methods show promise for classifying injury narratives from large administrative datasets into cause groups. This study examined a combined approach where two Bayesian models (Fuzzy and Naïve) were used to either classify a narrative or select it for manual review. Injury narratives were extracted from claims filed with a worker's compensation insurance provider between January 2002 and December 2004. Narratives were separated into a training set (n=11,000) and prediction set (n=3,000). Expert coders assigned two-digit Bureau of Labor Statistics Occupational Injury and Illness Classification event codes to each narrative. Fuzzy and Naïve Bayesian models were developed using manually classified cases in the training set. Two semi-automatic machine coding strategies were evaluated. The first strategy assigned cases for manual review if the Fuzzy and Naïve models disagreed on the classification. The second strategy selected additional cases for manual review from the Agree dataset using prediction strength to reach a level of 50% computer coding and 50% manual coding. When agreement alone was used as the filtering strategy, the majority were coded by the computer (n=1,928, 64%) leaving 36% for manual review. The overall combined (human plus computer) sensitivity was 0.90 and positive predictive value (PPV) was >0.90 for 11 of 18 2-digit event categories. Implementing the 2nd strategy improved results with an overall sensitivity of 0.95 and PPV >0.90 for 17 of 18 categories. A combined Naïve-Fuzzy Bayesian approach can classify some narratives with high accuracy and identify others most beneficial for manual review, reducing the burden on human coders.
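
    As a rough illustration of the filtering logic described above (not the authors' implementation), the sketch below pairs two scikit-learn text classifiers, auto-codes only the narratives on which they agree, and routes disagreements plus the weakest agreeing predictions to manual review. The ComplementNB model is a stand-in for the Fuzzy Bayesian model, which is not available off the shelf, and the variable names and 50% target are assumptions.

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB, ComplementNB

    def semi_automatic_coding(train_texts, train_codes, new_texts, target_fraction=0.5):
        # Bag-of-words features shared by both models.
        vec = CountVectorizer()
        X_train = vec.fit_transform(train_texts)
        X_new = vec.transform(new_texts)

        naive = MultinomialNB().fit(X_train, train_codes)
        fuzzy_standin = ComplementNB().fit(X_train, train_codes)  # stand-in for the Fuzzy model

        proba = naive.predict_proba(X_new)
        pred_naive = naive.classes_[proba.argmax(axis=1)]
        pred_standin = fuzzy_standin.predict(X_new)

        agree = pred_naive == pred_standin
        strength = proba.max(axis=1)  # prediction strength, used to rank agreeing cases

        # Auto-code the strongest agreeing cases up to the target fraction;
        # everything else is flagged for human coders.
        n_auto = int(target_fraction * len(new_texts))
        ranked = np.argsort(-(strength * agree))[:n_auto]
        auto_coded = {int(i): pred_naive[i] for i in ranked if agree[i]}
        manual_review = [i for i in range(len(new_texts)) if i not in auto_coded]
        return auto_coded, manual_review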

  3. Accuracy of Mobile-Based Audiometry in the Evaluation of Hearing Loss in Quiet and Noisy Environments.

    PubMed

    Saliba, Joe; Al-Reefi, Mahmoud; Carriere, Junie S; Verma, Neil; Provencal, Christiane; Rappaport, Jamie M

    2017-04-01

    Objectives (1) To compare the accuracy of 2 previously validated mobile-based hearing tests in determining pure tone thresholds and screening for hearing loss. (2) To determine the accuracy of mobile audiometry in noisy environments through noise reduction strategies. Study Design Prospective clinical study. Setting Tertiary hospital. Subjects and Methods Thirty-three adults with or without hearing loss were tested (mean age, 49.7 years; women, 42.4%). Air conduction thresholds measured as pure tone average and at individual frequencies were assessed by conventional audiogram and by 2 audiometric applications (consumer and professional) on a tablet device. Mobile audiometry was performed in a quiet sound booth and in a noisy sound booth (50 dB of background noise) through active and passive noise reduction strategies. Results On average, 91.1% (95% confidence interval [95% CI], 89.1%-93.2%) and 95.8% (95% CI, 93.5%-97.1%) of the threshold values obtained in a quiet sound booth with the consumer and professional applications, respectively, were within 10 dB of the corresponding audiogram thresholds, as compared with 86.5% (95% CI, 82.6%-88.5%) and 91.3% (95% CI, 88.5%-92.8%) in a noisy sound booth through noise cancellation. When screening for at least moderate hearing loss (pure tone average >40 dB HL), the consumer application showed a sensitivity and specificity of 87.5% and 95.9%, respectively, and the professional application, 100% and 95.9%. Overall, patients preferred mobile audiometry over conventional audiograms. Conclusion Mobile audiometry can correctly estimate pure tone thresholds and screen for moderate hearing loss. Noise reduction strategies in mobile audiometry provide a portable effective solution for hearing assessments outside clinical settings.
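
    To reproduce the two summary measures reported above on one's own data, a minimal sketch (assumed data layout, not the study's analysis code) is:

    import numpy as np

    def within_10_db(app_thresholds_db, audiogram_thresholds_db):
        """Percentage of app thresholds within 10 dB of the audiogram thresholds."""
        app = np.asarray(app_thresholds_db, dtype=float)
        ref = np.asarray(audiogram_thresholds_db, dtype=float)
        return 100.0 * np.mean(np.abs(app - ref) <= 10.0)

    def screening_metrics(app_pta_db, audiogram_pta_db, cutoff_db=40.0):
        """Sensitivity and specificity of the app for screening PTA > cutoff."""
        test_pos = np.asarray(app_pta_db) > cutoff_db
        true_pos = np.asarray(audiogram_pta_db) > cutoff_db
        sensitivity = float(np.mean(test_pos[true_pos]))
        specificity = float(np.mean(~test_pos[~true_pos]))
        return sensitivity, specificity

    # Made-up thresholds for illustration:
    print(within_10_db([25, 30, 45, 60], [20, 35, 50, 75]))  # -> 75.0
    print(screening_metrics([35, 55, 20], [30, 60, 45]))     # -> (0.5, 1.0)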

  4. Cortical processing of pitch: Model-based encoding and decoding of auditory fMRI responses to real-life sounds.

    PubMed

    De Angelis, Vittoria; De Martino, Federico; Moerel, Michelle; Santoro, Roberta; Hausfeld, Lars; Formisano, Elia

    2017-11-13

    Pitch is a perceptual attribute related to the fundamental frequency (or periodicity) of a sound. So far, the cortical processing of pitch has been investigated mostly using synthetic sounds. However, the complex harmonic structure of natural sounds may require different mechanisms for the extraction and analysis of pitch. This study investigated the neural representation of pitch in human auditory cortex using model-based encoding and decoding analyses of high field (7 T) functional magnetic resonance imaging (fMRI) data collected while participants listened to a wide range of real-life sounds. Specifically, we modeled the fMRI responses as a function of the sounds' perceived pitch height and salience (related to the fundamental frequency and the harmonic structure respectively), which we estimated with a computational algorithm of pitch extraction (de Cheveigné and Kawahara, 2002). First, using single-voxel fMRI encoding, we identified a pitch-coding region in the antero-lateral Heschl's gyrus (HG) and adjacent superior temporal gyrus (STG). In these regions, the pitch representation model combining height and salience predicted the fMRI responses comparatively better than other models of acoustic processing and, in the right hemisphere, better than pitch representations based on height/salience alone. Second, we assessed with model-based decoding that multi-voxel response patterns of the identified regions are more informative of perceived pitch than the remainder of the auditory cortex. Further multivariate analyses showed that complementing a multi-resolution spectro-temporal sound representation with pitch produces a small but significant improvement to the decoding of complex sounds from fMRI response patterns. In sum, this work extends model-based fMRI encoding and decoding methods - previously employed to examine the representation and processing of acoustic sound features in the human auditory system - to the representation and processing of a relevant perceptual attribute such as pitch. Taken together, the results of our model-based encoding and decoding analyses indicated that the pitch of complex real life sounds is extracted and processed in lateral HG/STG regions, at locations consistent with those indicated in several previous fMRI studies using synthetic sounds. Within these regions, pitch-related sound representations reflect the modulatory combination of height and the salience of the pitch percept. Copyright © 2017 Elsevier Inc. All rights reserved.
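
    A schematic of the voxel-wise encoding analysis (in the spirit of the approach described above, not the authors' pipeline) is sketched below: each voxel's response across sounds is modeled as a linear function of the two pitch features, fit with cross-validated ridge regression, and scored by the correlation between predicted and measured responses. The arrays `pitch_features` (n_sounds x 2) and `bold` (n_sounds x n_voxels) are assumed inputs.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import KFold

    def encoding_accuracy(pitch_features, bold, alpha=1.0, n_splits=5):
        """Cross-validated prediction accuracy (Pearson r) of a linear encoding model."""
        X = np.asarray(pitch_features, dtype=float)
        Y = np.asarray(bold, dtype=float)
        pred = np.zeros_like(Y)
        for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
            model = Ridge(alpha=alpha).fit(X[train], Y[train])
            pred[test] = model.predict(X[test])
        # Per-voxel correlation between measured and predicted responses.
        Yz = (Y - Y.mean(0)) / (Y.std(0) + 1e-12)
        Pz = (pred - pred.mean(0)) / (pred.std(0) + 1e-12)
        return (Yz * Pz).mean(0)  # one accuracy value per voxel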

  5. Context-aware and locality-constrained coding for image categorization.

    PubMed

    Xiao, Wenhua; Wang, Bin; Liu, Yu; Bao, Weidong; Zhang, Maojun

    2014-01-01

    Improving the coding strategy for BOF (Bag-of-Features) based feature design has drawn increasing attention in recent image categorization work. However, ambiguity in the coding procedure still impedes further development. In this paper, we introduce a context-aware and locality-constrained coding (CALC) approach that uses context information to describe objects in a discriminative way. This is achieved by learning a word-to-word co-occurrence prior and imposing this context information on locality-constrained coding. First, the local context of each category is evaluated by learning a word-to-word co-occurrence matrix representing the spatial distribution of local features in a neighboring region. Then, the learned co-occurrence matrix is used to measure the context distance between local features and code words. Finally, a coding strategy that simultaneously considers locality in feature space and context space, while introducing feature weighting, is proposed. This coding strategy not only semantically preserves the information in coding, but also alleviates the noise distortion of each class. Extensive experiments on several available datasets (Scene-15, Caltech101, and Caltech256) are conducted to validate the superiority of our algorithm by comparing it with baselines and recently published methods. Experimental results show that our method significantly improves on the baselines and achieves performance comparable to, and in some cases better than, the state of the art.
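
    The sketch below gives the flavor of such a coding step in simplified form: codeword distances in feature space and in a co-occurrence-based context space are mixed, the k nearest codewords are kept (the locality constraint), and a sum-to-one least-squares code is solved over that local basis. The mixing weight `beta`, the distance normalization, and the regularization are illustrative assumptions rather than the paper's exact formulation.

    import numpy as np

    def calc_code(x, codebook, word_context, feat_context, k=5, beta=0.5, reg=1e-4):
        """x: (d,) local feature; codebook: (K, d) visual words;
        word_context: (K, K) row-normalized word-to-word co-occurrence matrix;
        feat_context: (K,) co-occurrence profile of x's neighborhood with each word."""
        # Distances in feature space and context space, mixed with weight beta.
        d_feat = np.linalg.norm(codebook - x, axis=1)
        d_ctx = np.linalg.norm(word_context - feat_context, axis=1)
        d_mix = ((1 - beta) * d_feat / (d_feat.max() + 1e-12)
                 + beta * d_ctx / (d_ctx.max() + 1e-12))

        nearest = np.argsort(d_mix)[:k]          # locality constraint
        B = codebook[nearest] - x                # shifted local basis, shape (k, d)
        G = B @ B.T
        C = G + reg * np.trace(G) * np.eye(k)    # regularized local covariance
        w = np.linalg.solve(C, np.ones(k))
        w /= w.sum()                             # sum-to-one constraint

        code = np.zeros(len(codebook))
        code[nearest] = w
        return code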

  6. Plastic modes of listening: affordance in constructed sound environments

    NASA Astrophysics Data System (ADS)

    Sjolin, Anders

    This thesis is concerned with how the ecological approach to perception, together with listening modes, informs the creation of sound art installations, referred to in this thesis as constructed sound environments. The thesis is based on practice-based research, where the aim of the written part of this PhD project has been to critically investigate the area of sound art in order to map various approaches to participating in and listening to a constructed sound environment. The main areas have been the notion of affordance as coined by James J. Gibson (1986), listening modes as coined by Pierre Schaeffer (1966) and further developed by Michel Chion (1994), aural architects as coined by Blesser and Salter (2007), and the holistic approach to understanding sound art developed by Brandon LaBelle (2006). The findings of the written part of the thesis, based on a qualitative analysis, have informed the practice, which has resulted in artefacts in the form of seven constructed sound environments that also function as case studies for further analysis. The aim of the practice has been to exemplify the methodology, strategy and progress behind the organisation and construction of sound environments. The research points towards the acknowledgment of affordance as the crucial factor in understanding a constructed sound environment. The affordance approach rests on the idea that perceiving a sound environment is a top-down process in which the autonomic quality of a constructed sound environment is based upon the perception of structures in the sound material and their relationship with speaker placement and surrounding space. This enables a researcher to sidestep the conflicting poles of musical/abstract and non-musical/realistic classification of sound elements and to regard these poles as included, not separate, elements in the analysis of a constructed sound environment.

  7. Adoption and Black Teenagers: The Viability of a Pregnancy Resolution Strategy.

    ERIC Educational Resources Information Center

    Kalmuss, Debra

    1992-01-01

    Uses data from Cycle IV of the National Survey of Family Growth to evaluate whether adoption is a feasible pregnancy resolution strategy for African-American teenagers. Results indicated that existing data do not provide a sound basis for conclusions about whether adoption can ultimately serve as an alternative to early child rearing for larger numbers…

  8. The Effect of Adaptive Confidence Strategies in Computer-Assisted Instruction on Learning and Learner Confidence

    ERIC Educational Resources Information Center

    Warren, Richard Daniel

    2012-01-01

    The purpose of this research was to investigate the effects of including adaptive confidence strategies in instructionally sound computer-assisted instruction (CAI) on learning and learner confidence. Seventy-one general educational development (GED) learners recruited from various GED learning centers at community colleges in the southeast United…

  9. Strategy Inventory for Language Learning-ELL Student Form: Testing for Factorial Validity

    ERIC Educational Resources Information Center

    Ardasheva, Yuliya; Tretter, Thomas R.

    2013-01-01

    As the school-aged English language learner (ELL) population continues to grow in the United States and other English-speaking countries, psychometrically sound instruments to measure their language learning strategies (LLS) become ever more critical. This study adapted and validated an adult-oriented measure of LLS (50-item "Strategy…

  10. Observation of Couple Conflicts: Clinical Assessment Applications, Stubborn Truths, and Shaky Foundations

    PubMed Central

    Heyman, Richard E.

    2006-01-01

    The purpose of this review is to provide a balanced examination of the published research involving the observation of couples, with special attention toward the use of observation for clinical assessment. All published articles that (a) used an observational coding system and (b) relate to the validity of the coding system are summarized in a table. The psychometric properties of observational systems and the use of observation in clinical practice are discussed. Although advances have been made in understanding couple conflict through the use of observation, the review concludes with an appeal to the field to develop constructs in a psychometrically and theoretically sound manner. PMID:11281039

  11. Monitoring Anthropogenic Ocean Sound from Shipping Using an Acoustic Sensor Network and a Compressive Sensing Approach.

    PubMed

    Harris, Peter; Philip, Rachel; Robinson, Stephen; Wang, Lian

    2016-03-22

    Monitoring ocean acoustic noise has been the subject of considerable recent study, motivated by the desire to assess the impact of anthropogenic noise on marine life. A combination of measuring ocean sound using an acoustic sensor network and modelling sources of sound and sound propagation has been proposed as an approach to estimating the acoustic noise map within a region of interest. However, strategies for developing a monitoring network are not well established. In this paper, considerations for designing a network are investigated using a simulated scenario based on the measurement of sound from ships in a shipping lane. Using models for the sources of the sound and for sound propagation, a noise map is calculated and measurements of the noise map by a sensor network within the region of interest are simulated. A compressive sensing algorithm, which exploits the sparsity of the representation of the noise map in terms of the sources, is used to estimate the locations and levels of the sources and thence the entire noise map within the region of interest. It is shown that although the spatial resolution to which the sound sources can be identified is generally limited, estimates of aggregated measures of the noise map can be obtained that are more reliable compared with those provided by other approaches.
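
    A toy version of the sparse-recovery step is sketched below. The scattered candidate positions, the 1/r spreading model, the sensor count, and the Lasso solver are all assumptions chosen to keep the example self-contained; they are not the propagation model or algorithm used in the paper.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n_candidates, n_sensors = 200, 30
    candidates = rng.uniform(0, 10_000, size=(n_candidates, 2))  # candidate ship positions [m]
    sensors = rng.uniform(0, 10_000, size=(n_sensors, 2))        # hydrophone positions [m]

    # Propagation matrix: amplitude loss ~ 1/r (spherical spreading), r in meters.
    r = np.linalg.norm(candidates[None, :, :] - sensors[:, None, :], axis=2)
    A = 1.0 / np.maximum(r, 1.0)

    # A few ships (sparse source vector) plus measurement noise.
    x_true = np.zeros(n_candidates)
    x_true[rng.choice(n_candidates, 3, replace=False)] = rng.uniform(1e3, 5e3, 3)
    y = A @ x_true + 1e-4 * rng.standard_normal(n_sensors)

    # Sparse estimate of source levels, from which the whole noise map follows.
    x_hat = Lasso(alpha=1e-6, positive=True, max_iter=100_000).fit(A, y).coef_
    print("true source indices     :", np.flatnonzero(x_true))
    print("estimated source indices:", np.flatnonzero(x_hat > 0.05 * max(x_hat.max(), 1e-12)))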

  13. Extraction of Inter-Aural Time Differences Using a Spiking Neuron Network Model of the Medial Superior Olive.

    PubMed

    Encke, Jörg; Hemmert, Werner

    2018-01-01

    The mammalian auditory system is able to extract temporal and spectral features from sound signals at the two ears. One important cue for localization of low-frequency sound sources in the horizontal plane is the inter-aural time difference (ITD), which is first analyzed in the medial superior olive (MSO) in the brainstem. Neural recordings of ITD tuning curves at various stages along the auditory pathway suggest that ITDs in the mammalian brainstem are not represented in the form of a Jeffress-type place code. An alternative is the hemispheric opponent-channel code, according to which ITDs are encoded as the difference in the responses of the MSO nuclei in the two hemispheres. In this study, we present a physiologically plausible, spiking neuron network model of the mammalian MSO circuit and apply two different methods of extracting ITDs from arbitrary sound signals. The network model is driven by a functional model of the auditory periphery and physiological models of the cochlear nucleus and the MSO. Using a linear opponent-channel decoder, we show that the network is able to detect changes in ITD with a precision down to 10 μs and that the sensitivity of the decoder depends on the slope of the ITD-rate functions. A second approach uses an artificial neuronal network to predict ITDs directly from the spiking output of the MSO and ANF model. Using this predictor, we show that the MSO-network is able to reliably encode static and time-dependent ITDs over a large frequency range, also for complex signals like speech.
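
    A minimal sketch of the linear opponent-channel readout follows. The two sigmoidal ITD-rate functions are a toy stand-in for the spiking MSO model (their slope and best ITD are invented numbers); the decoder itself is just a straight-line mapping from the left-right rate difference to ITD, fitted on known ITDs and then applied to a new one.

    import numpy as np

    def mso_rates(itd_us, slope=0.005, best_itd_us=200.0):
        """Toy hemispheric ITD-rate functions (spikes/s); not the spiking network."""
        right = 100.0 / (1.0 + np.exp(-slope * (itd_us + best_itd_us)))
        left = 100.0 / (1.0 + np.exp(-slope * (best_itd_us - itd_us)))
        return left, right

    # Fit the linear opponent-channel decoder on known ITDs (in microseconds).
    itd_train = np.linspace(-250.0, 250.0, 101)
    left, right = mso_rates(itd_train)
    gain, offset = np.polyfit(right - left, itd_train, 1)

    # Decode an unseen ITD from the hemispheric rate difference alone.
    left_t, right_t = mso_rates(np.array([130.0]))
    itd_hat = gain * (right_t - left_t) + offset
    print(f"decoded ITD = {itd_hat[0]:.0f} us (true ITD = 130 us)")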

  14. Planetary atmosphere models: A research and instructional web-based resource

    NASA Astrophysics Data System (ADS)

    Gray, Samuel Augustine

    The effects of altitude change on the temperature, pressure, density, and speed of sound were investigated. These effects have been documented in Global Reference Atmospheric Models (GRAMs), which are used to calculate the conditions in various parts of the atmosphere for several planets. Besides GRAMs, there are several websites that provide online calculators for the 1976 US Standard Atmosphere. This thesis presents the creation of an online calculator of the atmospheres of Earth, Mars, Venus, Titan, and Neptune. The websites consist of input forms for altitude and temperature adjustment followed by a results table for the calculated data. The first phase involved creating a spreadsheet reference based on the 1976 US Standard Atmosphere and other available planetary GRAMs. Microsoft Excel was used to enter the equations obtained from the GRAMs and to graph how temperature, pressure, density, and speed of sound change with altitude. These spreadsheets were later used as a reference for the JavaScript code, both in designing the calculators and in checking their data output. The websites were created using HTML, CSS, and JavaScript. The calculators accurately display the temperature, pressure, density, and speed of sound of these planets from surface values to various altitudes within the atmosphere. These websites provide a resource for students involved in projects and classes that require knowledge of these atmospheric properties. This project also opens up new project topics for future students involved in aeronautics and astronautics.
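
    As a pointer to the kind of calculation behind such a calculator, the sketch below implements only the Earth troposphere (0-11 km) using the 1976 US Standard Atmosphere gradient layer; the planetary GRAM-based models in the thesis contain many more layers and different constants.

    import math

    T0, P0 = 288.15, 101325.0              # sea-level temperature [K] and pressure [Pa]
    LAPSE = 0.0065                         # temperature lapse rate [K/m]
    G0, R, GAMMA = 9.80665, 287.058, 1.4   # gravity [m/s^2], gas constant [J/(kg K)], heat-capacity ratio

    def standard_atmosphere(altitude_m):
        if not 0.0 <= altitude_m <= 11_000.0:
            raise ValueError("this sketch covers only the 0-11 km gradient layer")
        T = T0 - LAPSE * altitude_m               # linear temperature profile
        p = P0 * (T / T0) ** (G0 / (R * LAPSE))   # hydrostatic balance + ideal gas
        rho = p / (R * T)                         # density [kg/m^3]
        a = math.sqrt(GAMMA * R * T)              # speed of sound [m/s]
        return T, p, rho, a

    print(standard_atmosphere(5000.0))  # ~ (255.65 K, 5.40e4 Pa, 0.736 kg/m^3, 320.5 m/s)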

  15. Discrimination of brief speech sounds is impaired in rats with auditory cortex lesions

    PubMed Central

    Porter, Benjamin A.; Rosenthal, Tara R.; Ranasinghe, Kamalini G.; Kilgard, Michael P.

    2011-01-01

    Auditory cortex (AC) lesions impair complex sound discrimination. However, a recent study demonstrated spared performance on an acoustic startle response test of speech discrimination following AC lesions (Floody et al., 2010). The current study reports the effects of AC lesions on two operant speech discrimination tasks. AC lesions caused a modest and quickly recovered impairment in the ability of rats to discriminate consonant-vowel-consonant speech sounds. This result seems to suggest that AC does not play a role in speech discrimination. However, the speech sounds used in both studies differed in many acoustic dimensions and an adaptive change in discrimination strategy could allow the rats to use an acoustic difference that does not require an intact AC to discriminate. Based on our earlier observation that the first 40 ms of the spatiotemporal activity patterns elicited by speech sounds best correlate with behavioral discriminations of these sounds (Engineer et al., 2008), we predicted that eliminating additional cues by truncating speech sounds to the first 40 ms would render the stimuli indistinguishable to a rat with AC lesions. Although the initial discrimination of truncated sounds took longer to learn, the final performance paralleled rats using full-length consonant-vowel-consonant sounds. After 20 days of testing, half of the rats using speech onsets received bilateral AC lesions. Lesions severely impaired speech onset discrimination for at least one-month post lesion. These results support the hypothesis that auditory cortex is required to accurately discriminate the subtle differences between similar consonant and vowel sounds. PMID:21167211

  16. Sound exposure changes European seabass behaviour in a large outdoor floating pen: Effects of temporal structure and a ramp-up procedure.

    PubMed

    Neo, Y Y; Hubert, J; Bolle, L; Winter, H V; Ten Cate, C; Slabbekoorn, H

    2016-07-01

    Underwater sound from human activities may affect fish behaviour negatively and threaten the stability of fish stocks. However, some fundamental understanding is still lacking for adequate impact assessments and potential mitigation strategies. For example, little is known about the potential contribution of the temporal features of sound, the efficacy of ramp-up procedures, and the generalisability of results from indoor studies to the outdoors. Using a semi-natural set-up, we exposed European seabass in an outdoor pen to four treatments: 1) continuous sound, 2) intermittent sound with a regular repetition interval, 3) irregular repetition intervals and 4) a regular repetition interval with amplitude 'ramp-up'. Upon sound exposure, the fish increased swimming speed and depth, and swam away from the sound source. The behavioural readouts were generally consistent with earlier indoor experiments, but the changes and recovery were more variable and were not significantly influenced by sound intermittency and interval regularity. In addition, the 'ramp-up' procedure elicited immediate diving response, similar to the onset of treatment without a 'ramp-up', but the fish did not swim away from the sound source as expected. Our findings suggest that while sound impact studies outdoors increase ecological and behavioural validity, the inherently higher variability also reduces resolution that may be counteracted by increasing sample size or looking into different individual coping styles. Our results also question the efficacy of 'ramp-up' in deterring marine animals, which warrants more investigation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Human cortical responses to slow and fast binaural beats reveal multiple mechanisms of binaural hearing.

    PubMed

    Ross, Bernhard; Miyazaki, Takahiro; Thompson, Jessica; Jamali, Shahab; Fujioka, Takako

    2014-10-15

    When two tones with slightly different frequencies are presented to both ears, they interact in the central auditory system and induce the sensation of a beating sound. At low difference frequencies, we perceive a single sound, which is moving across the head between the left and right ears. The percept changes to loudness fluctuation, roughness, and pitch with increasing beat rate. To examine the neural representations underlying these different perceptions, we recorded neuromagnetic cortical responses while participants listened to binaural beats at a continuously varying rate between 3 Hz and 60 Hz. Binaural beat responses were analyzed as neuromagnetic oscillations following the trajectory of the stimulus rate. Responses were largest in the 40-Hz gamma range and at low frequencies. Binaural beat responses at 3 Hz showed opposite polarity in the left and right auditory cortices. We suggest that this difference in polarity reflects the opponent neural population code for representing sound location. Binaural beats at any rate induced gamma oscillations. However, the responses were largest at 40-Hz stimulation. We propose that the neuromagnetic gamma oscillations reflect postsynaptic modulation that allows for precise timing of cortical neural firing. Systematic phase differences between bilateral responses suggest that separate sound representations of a sound object exist in the left and right auditory cortices. We conclude that binaural processing at the cortical level occurs with the same temporal acuity as monaural processing whereas the identification of sound location requires further interpretation and is limited by the rate of object representations. Copyright © 2014 the American Physiological Society.

  18. 37 CFR 201.22 - Advance notices of potential infringement of works consisting of sounds, images, or both.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) Definitions. (1) An Advance Notice of Potential Infringement is a notice which, if served in accordance with section 411(b) of title 17 of the United States Code, and in accordance with the provisions of this..., provided registration for the work is made within three months after its first transmission. (2) For...

  19. Teaching Ethical Decision Making Using Dual Relationship Principles as a Case Example

    ERIC Educational Resources Information Center

    Boland-Prom, Kim; Anderson, Sandra C.

    2005-01-01

    When the National Association of Social Workers (1999) ratified the Code of Ethics in 2000, it was an acknowledgement that dual relationships can be part of sound social work practice. The educational materials that are available to educators do not move sufficiently beyond a risk-reduction approach to dual relationships to an assessment of how a…

  20. The Role of the Syllable in Foreign Language Learning: Improving Oral Production through Dual-Coded, Sound-Synchronised, Typographic Annotations

    ERIC Educational Resources Information Center

    Stenton, Anthony

    2013-01-01

    The CNRS-financed authoring system SWANS (Synchronised Web Authoring Notation System), now used in several CercleS centres, was developed by teams from four laboratories as a personalised learning tool for the purpose of making available knowledge about lexical stress patterns and mother-tongue interference in L2 speech production--helping…

  1. 50 CFR Table 16 to Part 679 - Area Codes and Descriptions for Use With State of Alaska ADF&G Commercial Operator's Annual...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    Fragment of the area code table: ADF&G statistical areas such as Outer Cook Inlet (K) and 04.100 Prince William Sound (E) are listed together with the fishery categories reported for each area (groundfish, herring, salmon, shrimp, Dungeness crab, king crab, Tanner crab, and miscellaneous shellfish).

  2. New Millenium Inflatable Structures Technology

    NASA Technical Reports Server (NTRS)

    Mollerick, Ralph

    1997-01-01

    Specific applications where inflatable technology can enable or enhance future space missions are tabulated. The applicability of the inflatable technology to large aperture infra-red astronomy missions is discussed. Space flight validation and risk reduction are emphasized along with the importance of analytical tools in deriving structurally sound concepts and performing optimizations using compatible codes. Deployment dynamics control, fabrication techniques, and system testing are addressed.

  3. TREX13 Data Analysis/Modeling

    DTIC Science & Technology

    2018-03-29

    Cover memo to Dr. Robert H. Headrick, Office of Naval Research (Code 322), Arlington, VA, 29 Mar 2018 (www.apl.washington.edu); DISTRIBUTION STATEMENT A: approved for public release. Abstract fragment: ...quantitatively impact sound behavior. To gain quantitative knowledge, TREX13 was designed to contemporaneously measure acoustic quantities and environmental...

  4. Hyperspectral IASI L1C Data Compression.

    PubMed

    García-Sobrino, Joaquín; Serra-Sagristà, Joan; Bartrina-Rapesta, Joan

    2017-06-16

    The Infrared Atmospheric Sounding Interferometer (IASI), implemented on the MetOp satellite series, represents a significant step forward in atmospheric forecast and weather understanding. The instrument provides infrared soundings of unprecedented accuracy and spectral resolution to derive humidity and atmospheric temperature profiles, as well as some of the chemical components playing a key role in climate monitoring. IASI collects rich spectral information, which results in large amounts of data (about 16 Gigabytes per day). Efficient compression techniques are requested for both transmission and storage of such huge data. This study reviews the performance of several state of the art coding standards and techniques for IASI L1C data compression. Discussion embraces lossless, near-lossless and lossy compression. Several spectral transforms, essential to achieve improved coding performance due to the high spectral redundancy inherent to IASI products, are also discussed. Illustrative results are reported for a set of 96 IASI L1C orbits acquired over a full year (4 orbits per month for each IASI-A and IASI-B from July 2013 to June 2014) . Further, this survey provides organized data and facts to assist future research and the atmospheric scientific community.
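
    As a minimal, self-contained illustration of why such spectral transforms matter (this is not one of the surveyed codecs), the sketch below builds correlated toy spectra, computes a KLT/PCA basis, and shows that a small number of transform coefficients reconstructs the data down to roughly the noise floor.

    import numpy as np

    rng = np.random.default_rng(1)
    n_pixels, n_channels = 2_000, 300  # toy stand-in for IASI's 8461 spectral channels
    base = rng.standard_normal((5, n_channels))  # five latent spectral shapes
    spectra = (rng.standard_normal((n_pixels, 5)) @ base
               + 0.01 * rng.standard_normal((n_pixels, n_channels)))

    mean = spectra.mean(axis=0)
    _, _, Vt = np.linalg.svd(spectra - mean, full_matrices=False)  # KLT / PCA basis
    coeffs = (spectra - mean) @ Vt.T                               # decorrelated coefficients

    k = 10  # keep only the 10 strongest components
    reconstructed = coeffs[:, :k] @ Vt[:k] + mean
    rms_err = np.sqrt(np.mean((reconstructed - spectra) ** 2))
    print(f"kept {k}/{n_channels} coefficients per spectrum, RMS error {rms_err:.4f}")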

  5. Liner Optimization Studies Using the Ducted Fan Noise Prediction Code TBIEM3D

    NASA Technical Reports Server (NTRS)

    Dunn, M. H.; Farassat, F.

    1998-01-01

    In this paper we demonstrate the usefulness of the ducted fan noise prediction code TBIEM3D as a liner optimization design tool. Boundary conditions on the interior duct wall allow for hard walls or a locally reacting liner with axially segmented, circumferentially uniform impedance. Two liner optimization studies are considered in which farfield noise attenuation due to the presence of a liner is maximized by adjusting the liner impedance. In the first example, the dependence of optimal liner impedance on frequency and liner length is examined. Results show that both the optimal impedance and attenuation levels are significantly influenced by liner length and frequency. In the second example, TBIEM3D is used to compare radiated sound pressure levels between optimal and non-optimal liner cases at conditions designed to simulate take-off. It is shown that significant noise reduction is achieved for most of the sound field by selecting the optimal or near optimal liner impedance. Our results also indicate that there is a relatively large region of the impedance plane over which optimal or near optimal liner behavior is attainable. This is an important conclusion for the designer since there are variations in liner characteristics due to manufacturing imprecisions.
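
    The outer loop of such an optimization can be as simple as a scan over the impedance plane. The sketch below shows that loop only: `predicted_attenuation` is a hypothetical placeholder for a call into a duct-noise prediction code such as TBIEM3D (whose interface is not reproduced here), and the smooth toy objective exists solely to make the script run.

    import numpy as np

    def predicted_attenuation(resistance, reactance):
        # Placeholder objective with a single smooth peak near Z = 2 - 1j (illustrative only).
        return 30.0 * np.exp(-((resistance - 2.0) ** 2 + (reactance + 1.0) ** 2))

    resistances = np.linspace(0.1, 5.0, 50)   # normalized resistance R/(rho*c)
    reactances = np.linspace(-4.0, 4.0, 81)   # normalized reactance X/(rho*c)
    R, X = np.meshgrid(resistances, reactances, indexing="ij")
    attenuation = predicted_attenuation(R, X)

    i, j = np.unravel_index(np.argmax(attenuation), attenuation.shape)
    print(f"best impedance ~ {resistances[i]:.2f} {reactances[j]:+.2f}j "
          f"({attenuation[i, j]:.1f} dB predicted attenuation)")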

  6. Modeling of Passive Acoustic Liners from High Fidelity Numerical Simulations

    NASA Astrophysics Data System (ADS)

    Ferrari, Marcello do Areal Souto

    Noise reduction in aviation has been an important focus of study in the last few decades. One common solution is installing acoustic liners on the internal walls of the engines. However, laboratory measurements with liners are expensive and time-consuming. The present work proposes a nonlinear physics-based time domain model to predict the acoustic behavior of a given liner in a defined flow condition. The parameters of the model are defined by analysis of accurate numerical solutions of the flow obtained from a high-fidelity numerical code. The length of the cavity is taken into account through an analytical treatment of internal reflections within the cavity. Vortices and jets originating from internal flow separations are confirmed to be important mechanisms of sound absorption, which define the overall efficiency of the liner. Numerical simulations at different frequencies, geometries, and sound pressure levels are studied in detail to define the model parameters. Comparisons with high-fidelity numerical simulations show that the proposed model is accurate, robust, and can be used to define a boundary condition simulating a liner in a high-fidelity code.

  7. Validating LES for Jet Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Bridges, James; Wernet, Mark P.

    2011-01-01

    Engineers charged with making jet aircraft quieter have long dreamed of being able to see exactly how turbulent eddies produce sound and this dream is now coming true with the advent of large eddy simulation (LES). Two obvious challenges remain: validating the LES codes at the resolution required to see the fluid-acoustic coupling, and the interpretation of the massive datasets that are produced. This paper addresses the former, the use of advanced experimental techniques such as particle image velocimetry (PIV) and Raman and Rayleigh scattering, to validate the computer codes and procedures used to create LES solutions. This paper argues that the issue of accuracy of the experimental measurements be addressed by cross-facility and cross-disciplinary examination of modern datasets along with increased reporting of internal quality checks in PIV analysis. Further, it argues that the appropriate validation metrics for aeroacoustic applications are increasingly complicated statistics that have been shown in aeroacoustic theory to be critical to flow-generated sound, such as two-point space-time velocity correlations. A brief review of data sources available is presented along with examples illustrating cross-facility and internal quality checks required of the data before it should be accepted for validation of LES.
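
    Because two-point space-time velocity correlations are singled out above as a validation metric, a minimal sketch of that statistic is given below. The two probe signals are synthetic (one is a delayed, noisier copy of the other) and stand in for PIV or LES time series at two axial stations.

    import numpy as np

    def space_time_correlation(u1, u2, max_lag):
        """Normalized cross-correlation of fluctuating velocities u1(t), u2(t) over time lags."""
        u1 = u1 - u1.mean()
        u2 = u2 - u2.mean()
        norm = np.sqrt(np.sum(u1 ** 2) * np.sum(u2 ** 2))
        lags = np.arange(-max_lag, max_lag + 1)
        corr = [np.sum(u1[max(0, -k):len(u1) - max(0, k)] *
                       u2[max(0, k):len(u2) - max(0, -k)]) / norm for k in lags]
        return lags, np.array(corr)

    # Synthetic probes: the downstream signal is the upstream one delayed by 15 samples.
    rng = np.random.default_rng(2)
    u_upstream = rng.standard_normal(5_000)
    u_downstream = np.roll(u_upstream, 15) + 0.3 * rng.standard_normal(5_000)

    lags, corr = space_time_correlation(u_upstream, u_downstream, max_lag=40)
    print("peak correlation at lag", lags[np.argmax(corr)], "samples")  # expected near +15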

  8. Screech Noise Generation From Supersonic Underexpanded Jets Investigated

    NASA Technical Reports Server (NTRS)

    Panda, Jayanta; Seasholtz, Richard G.

    2000-01-01

    Many supersonic military aircraft and some of the modern civilian aircraft (such as the Boeing 777) produce shock-associated noise. This noise is generated from the jet engine plume when the engine nozzle is operated beyond the subsonic operation limit to gain additional thrust. At these underexpanded conditions, a series of shock waves appear in the plume. The turbulent vortices present in the jet interact with the shock waves and produce the additional shock-associated noise. Screech belongs to this noise category, where sound is generated in single or multiple pure tones. The high dynamic load associated with screech can damage the tailplane. One purpose of this study at the NASA Glenn Research Center at Lewis Field was to provide an accurate data base for validating various computational fluid dynamics (CFD) codes. These codes will be used to predict the frequency and amplitude of screech tones. A second purpose was to advance the fundamental physical understanding of how shock-turbulence interactions generate sound. Previously, experiments on shock-turbulence interaction were impossible to perform because no suitable technique was available. As one part of this program, an optical Rayleigh-scattering measurement technique was devised to overcome this difficulty.

  9. Functional organization for musical consonance and tonal pitch hierarchy in human auditory cortex.

    PubMed

    Bidelman, Gavin M; Grall, Jeremy

    2014-11-01

    Pitch relationships in music are characterized by their degree of consonance, a hierarchical perceptual quality that distinguishes how pleasant musical chords/intervals sound to the ear. The origins of consonance have been debated since the ancient Greeks. To elucidate the neurobiological mechanisms underlying these musical fundamentals, we recorded neuroelectric brain activity while participants listened passively to various chromatic musical intervals (simultaneously sounding pitches) varying in their perceptual pleasantness (i.e., consonance/dissonance). Dichotic presentation eliminated acoustic and peripheral contributions that often confound explanations of consonance. We found that neural representations for pitch in early human auditory cortex code perceptual features of musical consonance and follow a hierarchical organization according to music-theoretic principles. These neural correlates emerge pre-attentively within ~ 150 ms after the onset of pitch, are segregated topographically in superior temporal gyrus with a rightward hemispheric bias, and closely mirror listeners' behavioral valence preferences for the chromatic tone combinations inherent to music. A perceptual-based organization implies that parallel to the phonetic code for speech, elements of music are mapped within early cerebral structures according to higher-order, perceptual principles and the rules of Western harmony rather than simple acoustic attributes. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events.

    PubMed

    Stekelenburg, Jeroen J; Vroomen, Jean

    2012-01-01

    In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV - V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that this N1 suppression was greater for the spatially congruent stimuli. A very early audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  11. Tone Noise Predictions for a Spacecraft Cabin Ventilation Fan Ingesting Distorted Inflow and the Challenges of Validation

    NASA Technical Reports Server (NTRS)

    Koch, L. Danielle; Shook, Tony D.; Astler, Douglas T.; Bittinger, Samantha A.

    2011-01-01

    A fan tone noise prediction code has been developed at NASA Glenn Research Center that is capable of estimating duct mode sound power levels for a fan ingesting distorted inflow. This code was used to predict the circumferential and radial mode sound power levels in the inlet and exhaust duct of an axial spacecraft cabin ventilation fan. Noise predictions at fan design rotational speed were generated. Three fan inflow conditions were studied: an undistorted inflow, a circumferentially symmetric inflow distortion pattern (cylindrical rods inserted radially into the flowpath at 15deg, 135deg, and 255deg), and a circumferentially asymmetric inflow distortion pattern (rods located at 15deg, 52deg and 173deg). Noise predictions indicate that tones are produced for the distorted inflow cases that are not present when the fan operates with an undistorted inflow. Experimental data are needed to validate these acoustic predictions, as well as the aerodynamic performance predictions. Given the aerodynamic design of the spacecraft cabin ventilation fan, a mechanical and electrical conceptual design study was conducted. Design features of a fan suitable for obtaining detailed acoustic and aerodynamic measurements needed to validate predictions are discussed.

  13. Computation of Large-Scale Structure Jet Noise Sources With Weak Nonlinear Effects Using Linear Euler

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Hixon, Ray; Mankbadi, Reda R.

    2003-01-01

    An approximate technique is presented for the prediction of the large-scale turbulent structure sound source in a supersonic jet. A linearized Euler equations code is used to solve for the flow disturbances within and near a jet with a given mean flow. Assuming a normal mode composition for the wave-like disturbances, the linear radial profiles are used in an integration of the Navier-Stokes equations. This results in a set of ordinary differential equations representing the weakly nonlinear self-interactions of the modes along with their interaction with the mean flow. Solutions are then used to correct the amplitude of the disturbances that represent the source of large-scale turbulent structure sound in the jet.
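
    The sketch below is illustrative only: it integrates a generic Landau-type amplitude system with invented coefficients to show the kind of ODE set and amplitude correction being described, not the actual equations obtained by projecting the Navier-Stokes equations onto the instability-wave modes.

    import numpy as np
    from scipy.integrate import solve_ivp

    sigma = np.array([0.30, 0.22])       # linear growth rates of two modes (invented)
    landau = np.array([[0.8, 0.5],       # self- and cross-saturation coefficients (invented)
                       [0.4, 0.9]])

    def amplitude_odes(x, a):
        # da_i/dx = sigma_i * a_i - a_i * sum_j landau_ij * a_j**2
        return sigma * a - a * (landau @ (a ** 2))

    sol = solve_ivp(amplitude_odes, (0.0, 40.0), [1e-3, 1e-3], dense_output=True)
    print("saturated mode amplitudes:", np.round(sol.y[:, -1], 3))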

  14. [Conversion of sound into auditory nerve action potentials].

    PubMed

    Encke, J; Kreh, J; Völk, F; Hemmert, W

    2016-11-01

    Outer hair cells play a major role in the hearing process: they amplify the motion of the basilar membrane up to 1000-fold and at the same time sharpen the excitation patterns. These patterns are converted by inner hair cells into action potentials of the auditory nerve. Outer hair cells are delicate structures and easily damaged, e.g., by overexposure to noise. Hearing aids can amplify the amplitude of the excitation patterns, but they cannot restore their degraded frequency selectivity. Noise overexposure also leads to delayed degeneration of auditory nerve fibers, particularly those with a low spontaneous rate, which are important for the coding of sound in noise. However, this loss cannot be diagnosed by pure-tone audiometry.

  15. Intercomparison of three microwave/infrared high resolution line-by-line radiative transfer codes

    NASA Astrophysics Data System (ADS)

    Schreier, Franz; Milz, Mathias; Buehler, Stefan A.; von Clarmann, Thomas

    2018-05-01

    An intercomparison of three line-by-line (lbl) codes developed independently for atmospheric radiative transfer and remote sensing - ARTS, GARLIC, and KOPRA - has been performed for a thermal infrared nadir sounding application assuming a HIRS-like (High-resolution Infrared Radiation Sounder) setup. Radiances for the 19 HIRS infrared channels and a set of 42 atmospheric profiles from the "Garand dataset" have been computed. The mutual differences of the equivalent brightness temperatures are presented and possible causes of disagreement are discussed. In particular, the impact of path integration schemes and atmospheric layer discretization is assessed. When the continuum absorption contribution is ignored because of the different implementations, residuals are generally in the sub-Kelvin range and smaller than 0.1 K for some window channels (and all atmospheric models and lbl codes). None of the three codes turned out to be perfect for all channels and atmospheres. Remaining discrepancies are attributed to different lbl optimization techniques. Lbl codes seem to have reached such maturity in the implementation of radiative transfer that the choice of the underlying physical models (line shape models, continua, etc.) becomes increasingly relevant.
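
    The equivalent brightness temperatures being compared can be obtained from channel radiances with the inverse Planck function. The sketch below uses the monochromatic form at the channel-center wavenumber and ignores the instrument spectral response, so it is an approximation rather than the processing used in the intercomparison.

    import math

    H = 6.62607015e-34   # Planck constant [J s]
    C = 2.99792458e8     # speed of light [m/s]
    KB = 1.380649e-23    # Boltzmann constant [J/K]

    def brightness_temperature(radiance, wavenumber_cm):
        """Radiance in W m^-2 sr^-1 (cm^-1)^-1 at wavenumber_cm [cm^-1] -> T_B [K]."""
        nu = wavenumber_cm * 100.0       # wavenumber in m^-1
        L = radiance * 1.0e-2            # radiance per m^-1 instead of per cm^-1
        c1 = 2.0 * H * C ** 2 * nu ** 3  # Planck numerator
        c2 = H * C * nu / KB             # h*c*nu/k, in kelvin
        return c2 / math.log(1.0 + c1 / L)

    # A blackbody near 300 K radiates about 0.117 W m^-2 sr^-1 (cm^-1)^-1 at 900 cm^-1:
    print(f"{brightness_temperature(0.117, 900.0):.1f} K")  # ~ 300 K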

  16. A Tool for Low Noise Procedures Design and Community Noise Impact Assessment: The Rotorcraft Noise Model (RNM)

    NASA Technical Reports Server (NTRS)

    Conner, David A.; Page, Juliet A.

    2002-01-01

    To improve aircraft noise impact modeling capabilities and to provide a tool to aid in the development of low noise terminal area operations for rotorcraft and tiltrotors, the Rotorcraft Noise Model (RNM) was developed by the NASA Langley Research Center and Wyle Laboratories. RNM is a simulation program that predicts how sound will propagate through the atmosphere and accumulate at receiver locations located on flat ground or varying terrain, for single and multiple vehicle flight operations. At the core of RNM are the vehicle noise sources, input as sound hemispheres. As the vehicle "flies" along its prescribed flight trajectory, the source sound propagation is simulated and accumulated at the receiver locations (single points of interest or multiple grid points) in a systematic time-based manner. These sound signals at the receiver locations may then be analyzed to obtain single event footprints, integrated noise contours, time histories, or numerous other features. RNM may also be used to generate spectral time history data over a ground mesh for the creation of single event sound animation videos. Acoustic properties of the noise source(s) are defined in terms of sound hemispheres that may be obtained from theoretical predictions, wind tunnel experimental results, flight test measurements, or a combination of the three. The sound hemispheres may contain broadband data (source levels as a function of one-third octave band) and pure-tone data (in the form of specific frequency sound pressure levels and phase). A PC executable version of RNM is publicly available and has been adopted by a number of organizations for Environmental Impact Assessment studies of rotorcraft noise. This paper provides a review of the required input data, the theoretical framework of RNM's propagation model and the output results. Code validation results are provided from a NATO helicopter noise flight test as well as a tiltrotor flight test program that used the RNM as a tool to aid in the development of low noise approach profiles.

  17. Everyday listening questionnaire: correlation between subjective hearing and objective performance.

    PubMed

    Brendel, Martina; Frohne-Buechner, Carolin; Lesinski-Schiedat, Anke; Lenarz, Thomas; Buechner, Andreas

    2014-01-01

    Clinical experience has demonstrated that speech understanding by cochlear implant (CI) recipients has improved over recent years with the development of new technology. The Everyday Listening Questionnaire 2 (ELQ 2) was designed to collect information regarding the challenges faced by CI recipients in everyday listening. The aim of this study was to compare self-assessment of CI users using ELQ 2 with objective speech recognition measures and to compare results between users of older and newer coding strategies. During their regular clinical review appointments a group of representative adult CI recipients implanted with the Advanced Bionics implant system were asked to complete the questionnaire. The first 100 patients who agreed to participate in this survey were recruited independent of processor generation and speech coding strategy. Correlations between subjectively scored hearing performance in everyday listening situations and objectively measured speech perception abilities were examined relative to the speech coding strategies used. When subjects were grouped by strategy there were significant differences between users of older 'standard' strategies and users of the newer, currently available strategies (HiRes and HiRes 120), especially in the categories of telephone use and music perception. Significant correlations were found between certain subjective ratings and the objective speech perception data in noise. There is a good correlation between subjective and objective data. Users of more recent speech coding strategies tend to have fewer problems in difficult hearing situations.

  18. Codes of professional conduct for Australian Defence Force military physicians: evenomating the serpent?

    PubMed

    O'Connor, Mike

    2010-09-01

    The scandal of health professionals' involvement in recent human rights abuses in United States military detention centres has prompted concern that Australian military physicians should be well protected against similar pressures to participate in harsh interrogations. A framework of military health ethics has been proposed. Would a code of professional conduct be a partial solution? This article examines the utility of professional codes: can they transform unethical behaviour or are they only of value to those who already behave ethically? How should such codes be designed, what support mechanisms should be in place and how should complaints be managed? A key recommendation is that codes of professional conduct should be accompanied by publicly transparent procedures for the investigation of serious infractions and appropriate disciplinary action when proven. The training of military physicians should also aim to develop a sound understanding of both humanitarian and human rights law. At present, both civil and military education of physicians generally lacks any component of human rights law. The Australian Defence Force (ADF) seems well placed to add codes of professional conduct to its existing ethical framework because of strong support at the highest executive levels.

  19. Phonological coding during reading.

    PubMed

    Leinenger, Mallorie

    2014-11-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early [prelexical] or that phonological codes come online late [postlexical]) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model, Van Orden, 1987; dual-route model, e.g., M. Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; parallel distributed processing model, Seidenberg & McClelland, 1989) are discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  20. Phonological coding during reading

    PubMed Central

    Leinenger, Mallorie

    2014-01-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early (pre-lexical) or that phonological codes come online late (post-lexical)) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eyetracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model (Van Orden, 1987), dual-route model (e.g., Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001), parallel distributed processing model (Seidenberg & McClelland, 1989)) are discussed. PMID:25150679

  1. Description of Existing Data for Integrated Landscape Monitoring in the Puget Sound Basin, Washington

    USGS Publications Warehouse

    Aiello, Danielle P.; Torregrosa, Alicia; Jason, Allyson L.; Fuentes, Tracy L.; Josberger, Edward G.

    2008-01-01

    This report summarizes existing geospatial data and monitoring programs for the Puget Sound Basin in northwestern Washington. This information was assembled as a preliminary data-development task for the U.S. Geological Survey (USGS) Puget Sound Integrated Landscape Monitoring (PSILM) pilot project. The PSILM project seeks to support natural resource decision-making by developing a 'whole system' approach that links ecological processes at the landscape level to the local level (Benjamin and others, 2008). Part of this effort will include building the capacity to provide cumulative information about impacts that cross jurisdictional and regulatory boundaries, such as cumulative effects of land-cover change and shoreline modification, or region-wide responses to climate change. The PSILM project study area is defined as the 23 HUC-8 (hydrologic unit code) catchments that comprise the watersheds that drain into Puget Sound and their near-shore environments. The study area includes 13 counties and more than four million people. One goal of the PSILM geospatial database is to integrate spatial data collected at multiple scales across the Puget Sound Basin marine and terrestrial landscape. The PSILM work plan specifies an iterative process that alternates between tasks associated with data development and tasks associated with research or strategy development. For example, an initial work-plan goal was to delineate the study area boundary. Geospatial data required to address this task included data from ecological regions, watersheds, jurisdictions, and other boundaries. This assemblage of data provided the basis for identifying larger research issues and delineating the study-area boundary based on these research needs. Once the study-area boundary was agreed upon, the next iteration between data development and research activities was guided by questions about data availability, data extent, data abundance, and data types. This report is not intended as an exhaustive compilation of all available geospatial data; rather, it is a collection of information about geospatial data that can be used to help answer the suite of questions posed after the study-area boundary was defined. This information will also be useful to the PSILM team for future project tasks, such as assessing monitoring gaps, exploring monitoring-design strategies, identifying and deriving landscape indicators and metrics, and visual geographic communication. The two main geospatial data types referenced in this report - base-reference layers and monitoring data - originated from numerous and varied sources. In addition to collecting information and metadata about the base-reference layers, the data themselves were also collected for project needs, such as developing maps for visual communication among team members and with outside groups. In contrast, only information about the data was typically required for the monitoring data. The information on base-reference layers and monitoring data included in this report is only as detailed as what was readily available from the sources themselves. Although this report may appear to lack consistency between data records, the varying degree of detail contained in this report is merely a reflection of varying source detail. This compilation is just a beginning. All data listed are also being catalogued in spreadsheets and knowledge-management systems. Our efforts are continual as we develop a geospatial catalog for the PSILM pilot project.

  2. Sound localization in the alligator.

    PubMed

    Bierman, Hilary S; Carr, Catherine E

    2015-11-01

    In early tetrapods, it is assumed that the tympana were acoustically coupled through the pharynx and therefore inherently directional, acting as pressure difference receivers. The later closure of the middle ear cavity in turtles, archosaurs, and mammals is a derived condition, and would have changed the ear by decoupling the tympana. Isolation of the middle ears would then have led to selection for structural and neural strategies to compute sound source localization in both archosaurs and mammalian ancestors. In the archosaurs (birds and crocodilians) the presence of air spaces in the skull provided connections between the ears that have been exploited to improve directional hearing, while neural circuits mediating sound localization are well developed. In this review, we will focus primarily on directional hearing in crocodilians, where vocalization and sound localization are thought to be ecologically important, and indicate important issues still awaiting resolution. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Dual routes for verbal repetition: articulation-based and acoustic-phonetic codes for pseudoword and word repetition, respectively.

    PubMed

    Yoo, Sejin; Chung, Jun-Young; Jeon, Hyeon-Ae; Lee, Kyoung-Min; Kim, Young-Bo; Cho, Zang-Hee

    2012-07-01

    Speech production is inextricably linked to speech perception, yet they are usually investigated in isolation. In this study, we employed a verbal-repetition task to identify the neural substrates of speech processing with two ends active simultaneously using functional MRI. Subjects verbally repeated auditory stimuli containing an ambiguous vowel sound that could be perceived as either a word or a pseudoword depending on the interpretation of the vowel. We found verbal repetition commonly activated the audition-articulation interface bilaterally at Sylvian fissures and superior temporal sulci. Contrasting word-versus-pseudoword trials revealed neural activities unique to word repetition in the left posterior middle temporal areas and activities unique to pseudoword repetition in the left inferior frontal gyrus. These findings imply that the tasks are carried out using different speech codes: an articulation-based code of pseudowords and an acoustic-phonetic code of words. It also supports the dual-stream model and imitative learning of vocabulary. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. Speech processing using conditional observable maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, John; Nix, David

    A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  5. The Therapy Process Observational Coding System for Child Psychotherapy Strategies Scale

    ERIC Educational Resources Information Center

    McLeod, Bryce D.; Weisz, John R.

    2010-01-01

    Most everyday child and adolescent psychotherapy does not follow manuals that document the procedures. Consequently, usual clinical care has remained poorly understood and rarely studied. The Therapy Process Observational Coding System for Child Psychotherapy-Strategies scale (TPOCS-S) is an observational measure of youth psychotherapy procedures…

  6. The Effect of Explicit Instruction for Story Grammar Code Strategy on Third Graders' Reading Comprehension

    ERIC Educational Resources Information Center

    De Nigris, Rosemarie Previti

    2017-01-01

    The hypothesis of the study was explicit gradual release of responsibility comprehension instruction (GRR) (Pearson & Gallagher, 1983; Fisher & Frey, 2008) with the researcher-created Story Grammar Code (SGC) strategy would significantly increase third graders' comprehension of narrative fiction and nonfiction text. SGC comprehension…

  7. Comparison of Theory and Experiment on Aeroacoustic Loads and Deflections

    NASA Astrophysics Data System (ADS)

    Campos, L. M. B. C.; Bourgine, A.; Bonomi, B.

    1999-01-01

    The correlation of acoustic pressure loads induced by a turbulent wake on a nearby structural panel is considered: this problem is relevant to the acoustic fatigue of aircraft, rocket and satellite structures. Both the correlation of acoustic pressure loads and the panel deflections, were measured in an 8-m diameter transonic wind tunnel. Using the measured correlation of acoustic pressures, as an input to a finite-element aeroelastic code, the panel response was reproduced. The latter was also satisfactorily reproduced, using again the aeroelastic code, with input given by a theoretical formula for the correlation of acoustic pressures; the derivation of this formula, and the semi-empirical parameters which appear in it, are included in this paper. The comparison of acoustic responses in aeroacoustic wind tunnels (AWT) and progressive wave tubes (PWT) shows that much work needs to be done to bridge that gap; this is important since the PWT is the standard test means, whereas the AWT is more representative of real flight conditions but also more demanding in resources. Since this may be the first instance of successful modelling of acoustic fatigue, it may be appropriate to list briefly the essential ``positive'' features and associated physical phenomena: (i) a standard aeroelastic structural code can predict acoustic fatigue, provided that the correlation of pressure loads be adequately specified; (ii) the correlation of pressure loads is determined by the interference of acoustic waves, which depends on the exact evaluation of multiple scattering integrals, involving the statistics of random phase shifts; (iii) for the relatively low frequencies (one to a few hundred Hz) of aeroacoustic fatigue, the main cause of random phase effects is scattering by irregular wakes, which are thin on wavelength scale, and appear as partially reflecting rough interfaces. It may also be appropriate to mention some of the ``negative'' features, to which may be attached illusory importance; (iv) deterministic flow features, even conspicuous or of large scale, such as convection, are not relevant to aeroacoustic fatigue, because they do not produce random phase shifts; (v) local turbulence, of scale much smaller than the wavelength of sound, cannot produce significant random phase shifts, and is also of little consequence to aeroacoustic fatigue; (vi) the precise location of sound sources can become of little consequence, after multiple scattering gives rise to a diffuse sound field; and (vii) there is not much ground for distinction between unsteady flow and sound waves, since at transonic speeds they are both associated with pressures fluctuating in time and space.

  8. Efficient audio signal processing for embedded systems

    NASA Astrophysics Data System (ADS)

    Chiu, Leung Kin

    As mobile platforms continue to pack on more computational power, electronics manufacturers start to differentiate their products by enhancing the audio features. However, consumers also demand smaller devices that can operate for a longer time, hence imposing design constraints. In this research, we investigate two design strategies that would allow us to efficiently process audio signals on embedded systems such as mobile phones and portable electronics. In the first strategy, we exploit properties of the human auditory system to process audio signals. We designed a sound enhancement algorithm to make piezoelectric loudspeakers sound "richer" and "fuller." Piezoelectric speakers have a small form factor but exhibit poor response in the low-frequency region. In the algorithm, we combine psychoacoustic bass extension and dynamic range compression to improve the perceived bass coming out from the tiny speakers. We also developed an audio energy reduction algorithm for loudspeaker power management. The perceptually transparent algorithm extends the battery life of mobile devices and prevents thermal damage in speakers. This method is similar to audio compression algorithms, which encode audio signals in such a way that the compression artifacts are not easily perceivable. Instead of reducing the storage space, however, we suppress the audio content that is below the hearing threshold, therefore reducing the signal energy. In the second strategy, we use low-power analog circuits to process the signal before digitizing it. We designed an analog front-end for sound detection and implemented it on a field programmable analog array (FPAA). The system is an example of an analog-to-information converter. The sound classifier front-end can be used in a wide range of applications because programmable floating-gate transistors are employed to store classifier weights. Moreover, we incorporated a feature selection algorithm to simplify the analog front-end. A machine learning algorithm, AdaBoost, is used to select the most relevant features for a particular sound detection application. In this classifier architecture, we combine simple "base" analog classifiers to form a strong one. We also designed the circuits to implement the AdaBoost-based analog classifier.
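
    As a hedged illustration of the energy-reduction idea summarized above (a sketch under stated assumptions, not the algorithm from this thesis), the code below zeroes spectral components whose estimated level falls below Terhardt's approximation of the absolute hearing threshold, frame by frame; the playback calibration, frame length, and synthetic test signal are assumptions made for the example.

    ```python
    import numpy as np

    def threshold_in_quiet_db(f_hz):
        """Terhardt's approximation of the absolute threshold of hearing (dB SPL)."""
        f = np.maximum(f_hz, 20.0) / 1000.0
        return 3.64 * f**-0.8 - 6.5 * np.exp(-0.6 * (f - 3.3)**2) + 1e-3 * f**4

    def suppress_inaudible(x, fs, frame=1024, full_scale_db_spl=96.0):
        """Zero spectral bins estimated to lie below the hearing threshold.

        full_scale_db_spl is an assumed playback calibration (digital full
        scale -> dB SPL); the Hann analysis window with 50% hop approximately
        sums to one, so unmodified frames overlap-add back to the input.
        """
        hop = frame // 2
        win = np.hanning(frame)
        freqs = np.fft.rfftfreq(frame, 1.0 / fs)
        thresh = threshold_in_quiet_db(freqs)
        y = np.zeros_like(x)
        zeroed = total = 0
        for start in range(0, len(x) - frame, hop):
            spec = np.fft.rfft(x[start:start + frame] * win)
            # crude per-bin level estimate relative to the assumed calibration
            level_db = full_scale_db_spl + 20 * np.log10(
                np.maximum(np.abs(spec) / (frame / 4), 1e-12))
            mask = level_db < thresh
            spec[mask] = 0.0
            zeroed += int(mask.sum())
            total += mask.size
            y[start:start + frame] += np.fft.irfft(spec)    # overlap-add
        return y, zeroed / total

    fs = 16000
    t = np.arange(fs) / fs
    x = 0.05 * np.sin(2 * np.pi * 1000 * t) + 1e-4 * np.random.randn(fs)
    y, frac = suppress_inaudible(x, fs)
    print(f"fraction of spectral bins suppressed: {frac:.2f}")
    ```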

  9. Chapter 7: Information needs and a research strategy for conserving forest carnivores

    Treesearch

    Leonard F. Ruggiero; Steven W. Buskirk; Keith B. Aubry; L. Jack Lyon; William J. Zielinski

    1994-01-01

    This forest carnivore conservation assessment summarizes what is known about the biology and ecology of the American marten, fisher, lynx, and wolverine. It is the first step in ascertaining what information we need to develop a scientifically sound strategy for species conservation. Although this assessment implies that we know what information we need to prescribe...

  10. Personal finance: there are no shortcuts to financial security.

    PubMed

    Yarkony, Kathryn

    2009-12-01

    Perioperative nurses have skills that lend themselves to sound financial decision-making, and during these difficult economic times, it is important to know how to secure earnings for the future. Key strategies include saving for retirement, consulting a financial advisor, investing in reliable vehicles, holding investments until the market stabilizes, and controlling credit card debt. Nurses can use the nursing process of assessment, diagnosis, planning, implementation, and evaluation to help them make sound financial decisions. (c) AORN, Inc, 2009.

  11. Echolocating Big Brown Bats, Eptesicus fuscus, Modulate Pulse Intervals to Overcome Range Ambiguity in Cluttered Surroundings

    PubMed Central

    Wheeler, Alyssa R.; Fulton, Kara A.; Gaudette, Jason E.; Simmons, Ryan A.; Matsuo, Ikuo; Simmons, James A.

    2016-01-01

    Big brown bats (Eptesicus fuscus) emit trains of brief, wideband frequency-modulated (FM) echolocation sounds and use echoes of these sounds to orient, find insects, and guide flight through vegetation. They are observed to emit sounds that alternate between short and long inter-pulse intervals (IPIs), forming sonar sound groups. The occurrence of these strobe groups has been linked to flight in cluttered acoustic environments, but how exactly bats use sonar sound groups to orient and navigate is still a mystery. Here, the production of sound groups during clutter navigation was examined. Controlled flight experiments were conducted where the proximity of the nearest obstacles was systematically decreased while the extended scene was kept constant. Four bats flew along a corridor of varying widths (100, 70, and 40 cm) bounded by rows of vertically hanging plastic chains while in-flight echolocation calls were recorded. Bats shortened their IPIs for more rapid spatial sampling and also grouped their sounds more tightly when flying in narrower corridors. Bats emitted echolocation calls with progressively shorter IPIs over the course of a flight, and began their flights by emitting shorter starting IPI calls when clutter was denser. The percentage of sound groups containing 3 or more calls increased with increasing clutter proximity. Moreover, IPI sequences having internal structure became more pronounced when the corridor width narrowed. A novel metric for analyzing the temporal organization of sound sequences was developed, and the results indicate that the time interval between echolocation calls depends heavily on the preceding time interval. The occurrence of specific IPI patterns was dependent upon clutter, which suggests that sonar sound grouping may be an adaptive strategy for coping with pulse-echo ambiguity in cluttered surroundings. PMID:27445723
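
    The sketch below illustrates the kind of sequential-dependence analysis described above (it is not the authors' metric): compute inter-pulse intervals from call times, measure the lag-1 dependence of each interval on its predecessor, and count sound groups of three or more calls. The grouping criterion (an interval shorter than 80% of the mean IPI) and the toy call sequence are illustrative assumptions.

    ```python
    import numpy as np

    def ipi_analysis(call_times_s, group_ratio=0.8):
        """Lag-1 dependence of inter-pulse intervals and a crude sound-group count.

        call_times_s : sorted emission times of echolocation calls [s]
        group_ratio  : an interval < group_ratio * mean IPI is treated as
                       'within-group' (illustrative criterion)
        """
        ipi = np.diff(call_times_s)
        lag1_corr = np.corrcoef(ipi[:-1], ipi[1:])[0, 1]   # dependence on preceding interval
        within = ipi < group_ratio * np.mean(ipi)
        groups, size = [], 1
        for w in within:                                    # count calls per sound group
            if w:
                size += 1
            else:
                groups.append(size)
                size = 1
        groups.append(size)
        frac_big_groups = np.mean(np.array(groups) >= 3)
        return lag1_corr, frac_big_groups

    # toy call sequence alternating short and long intervals (strobe-group-like)
    times = np.cumsum([0.0, 0.02, 0.02, 0.09, 0.02, 0.02, 0.09, 0.02, 0.02, 0.09])
    r, frac = ipi_analysis(times)
    print(f"lag-1 IPI correlation: {r:.2f}, fraction of groups with >=3 calls: {frac:.2f}")
    ```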

  12. An Efficient Variable Length Coding Scheme for an IID Source

    NASA Technical Reports Server (NTRS)

    Cheung, K. -M.

    1995-01-01

    A scheme is examined for using two alternating Huffman codes to encode a discrete independent and identically distributed source with a dominant symbol. This combined strategy, or alternating runlength Huffman (ARH) coding, was found to be more efficient than ordinary coding in certain circumstances.
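
    One possible reading of the scheme, sketched below as a toy (not necessarily Cheung's exact construction): runs of the dominant symbol are Huffman-coded with one codebook and the intervening non-dominant symbols with a second, and the total bit count is compared with a single per-symbol Huffman code. The source probabilities and the absence of a run-length cap are illustrative assumptions.

    ```python
    import heapq
    import random
    from collections import Counter

    def huffman_lengths(freqs):
        """Code lengths of a Huffman code for a symbol->count dict."""
        if len(freqs) == 1:                       # degenerate single-symbol alphabet
            return {next(iter(freqs)): 1}
        heap = [[n, i, {s: 0}] for i, (s, n) in enumerate(freqs.items())]
        heapq.heapify(heap)
        tiebreak = len(heap)
        while len(heap) > 1:
            lo = heapq.heappop(heap)
            hi = heapq.heappop(heap)
            merged = {s: d + 1 for s, d in {**lo[2], **hi[2]}.items()}
            heapq.heappush(heap, [lo[0] + hi[0], tiebreak, merged])
            tiebreak += 1
        return heap[0][2]

    def coded_bits(symbols):
        freqs = Counter(symbols)
        lengths = huffman_lengths(freqs)
        return sum(lengths[s] * n for s, n in freqs.items())

    def arh_bits(seq, dominant):
        """Alternating run-length / literal bit count (illustrative ARH variant)."""
        runs, literals, run = [], [], 0
        for s in seq:
            if s == dominant:
                run += 1
            else:
                runs.append(run)                  # run of dominants preceding this literal
                literals.append(s)
                run = 0
        runs.append(run)                          # trailing run of dominants
        return coded_bits(runs) + coded_bits(literals)

    random.seed(0)
    seq = random.choices(['a', 'b', 'c', 'd'], weights=[0.85, 0.05, 0.05, 0.05], k=5000)
    print("plain per-symbol Huffman bits:", coded_bits(seq))
    print("ARH-style bits              :", arh_bits(seq, 'a'))
    ```

    For a heavily skewed source the plain per-symbol code is pinned at a minimum of one bit per symbol, while coding run lengths of the dominant symbol lets the combined code approach the source entropy, which is the kind of circumstance the abstract refers to.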

  13. Misophonia: physiological investigations and case descriptions.

    PubMed

    Edelstein, Miren; Brang, David; Rouw, Romke; Ramachandran, Vilayanur S

    2013-01-01

    Misophonia is a relatively unexplored chronic condition in which a person experiences autonomic arousal (analogous to an involuntary "fight-or-flight" response) to certain innocuous or repetitive sounds such as chewing, pen clicking, and lip smacking. Misophonics report anxiety, panic, and rage when exposed to trigger sounds, compromising their ability to complete everyday tasks and engage in healthy and normal social interactions. Across two experiments, we measured behavioral and physiological characteristics of the condition. Interviews (Experiment 1) with misophonics showed that the most problematic sounds are generally related to other people's behavior (pen clicking, chewing sounds). Misophonics are however not bothered when they produce these "trigger" sounds themselves, and some report mimicry as a coping strategy. Next, (Experiment 2) we tested the hypothesis that misophonics' subjective experiences evoke an anomalous physiological response to certain auditory stimuli. Misophonic individuals showed heightened ratings and skin conductance responses (SCRs) to auditory, but not visual stimuli, relative to a group of typically developed controls, supporting this general viewpoint and indicating that misophonia is a disorder that produces distinct autonomic effects not seen in typically developed individuals.

  14. Misophonia: physiological investigations and case descriptions

    PubMed Central

    Edelstein, Miren; Brang, David; Rouw, Romke; Ramachandran, Vilayanur S.

    2013-01-01

    Misophonia is a relatively unexplored chronic condition in which a person experiences autonomic arousal (analogous to an involuntary “fight-or-flight” response) to certain innocuous or repetitive sounds such as chewing, pen clicking, and lip smacking. Misophonics report anxiety, panic, and rage when exposed to trigger sounds, compromising their ability to complete everyday tasks and engage in healthy and normal social interactions. Across two experiments, we measured behavioral and physiological characteristics of the condition. Interviews (Experiment 1) with misophonics showed that the most problematic sounds are generally related to other people's behavior (pen clicking, chewing sounds). Misophonics are however not bothered when they produce these “trigger” sounds themselves, and some report mimicry as a coping strategy. Next, (Experiment 2) we tested the hypothesis that misophonics' subjective experiences evoke an anomalous physiological response to certain auditory stimuli. Misophonic individuals showed heightened ratings and skin conductance responses (SCRs) to auditory, but not visual stimuli, relative to a group of typically developed controls, supporting this general viewpoint and indicating that misophonia is a disorder that produces distinct autonomic effects not seen in typically developed individuals. PMID:23805089

  15. Intercepting a sound without vision

    PubMed Central

    Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica

    2017-01-01

    Visual information is extremely important to generate internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies. However, specific spatial abilities might be impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early blind individuals’ performance in different spatial tasks. We also examined perceptual stability in the two groups of participants by matching localization accuracy in a static and a dynamic head condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds. Their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias during the localization of static sounds and a little bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations might be body-centered in blind individuals and that in sighted people the availability of visual cues during early infancy may affect sensory-motor interactions. PMID:28481939

  16. Spatial sound field synthesis and upmixing based on the equivalent source method.

    PubMed

    Bai, Mingsian R; Hsu, Hoshen; Wen, Jheng-Ciang

    2014-01-01

    Given a scarce number of recorded signals, spatial sound field synthesis with an extended sweet spot is a challenging problem in acoustic array signal processing. To address the problem, a synthesis and upmixing approach inspired by the equivalent source method (ESM) is proposed. The synthesis procedure is based on the pressure signals recorded by a microphone array and requires no source model. The array geometry can also be arbitrary. Four upmixing strategies are adopted to enhance the resolution of the reproduced sound field when there are more channels of loudspeakers than microphones. Multi-channel inverse filtering with regularization is exploited to deal with the ill-posedness in the reconstruction process. The distance between the microphone and loudspeaker arrays is optimized to achieve the best synthesis quality. To validate the proposed system, numerical simulations and subjective listening experiments are performed. The results demonstrated that all upmixing methods improved the quality of the reproduced target sound field over the original reproduction. In particular, the underdetermined ESM interpolation method yielded the best spatial sound field synthesis in terms of the reproduction error, timbral quality, and spatial quality.
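
    A hedged sketch of the regularized multichannel inverse-filtering step mentioned above, in the frequency domain: given a matrix G of loudspeaker-to-microphone transfer functions and a target pressure vector p at one frequency, solve for loudspeaker driving weights q with Tikhonov regularization. The free-field Green's functions, geometry, frequency, and regularization parameter are illustrative assumptions, not the paper's setup.

    ```python
    import numpy as np

    def regularized_inverse_filter(G, p, lam=1e-2):
        """Tikhonov-regularized multichannel inverse filtering at one frequency.

        G   : (M x L) complex matrix of loudspeaker->microphone transfer functions
        p   : (M,) target pressure vector at the microphones
        lam : regularization parameter controlling the ill-posedness trade-off
        Returns the (L,) driving weights q minimizing ||G q - p||^2 + lam ||q||^2.
        """
        L = G.shape[1]
        return np.linalg.solve(G.conj().T @ G + lam * np.eye(L), G.conj().T @ p)

    def greens(src_xy, rec_xy, k):
        """Free-field monopole Green's functions between source and receiver points."""
        r = np.linalg.norm(rec_xy[:, None, :] - src_xy[None, :, :], axis=-1)
        return np.exp(-1j * k * r) / (4 * np.pi * r)

    k = 2 * np.pi * 500 / 343.0                                  # wavenumber at 500 Hz
    mics = np.c_[np.linspace(-0.5, 0.5, 8), np.zeros(8)]         # 8-microphone line array
    spks = np.c_[np.linspace(-1.5, 1.5, 16), np.full(16, 2.0)]   # 16 loudspeakers
    target_src = np.array([[0.3, 3.0]])                          # virtual source behind them

    G = greens(spks, mics, k)                                    # (8 x 16), underdetermined
    p = greens(target_src, mics, k)[:, 0]                        # desired mic pressures
    q = regularized_inverse_filter(G, p, lam=1e-4)
    print("reproduction error at the microphones: %.2e" % np.linalg.norm(G @ q - p))
    ```

    With a small regularization parameter the residual at the control microphones is driven close to zero; larger values trade reproduction error for robustness of the loudspeaker weights, which is the usual way the ill-posedness is handled.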

  17. How to generate a sound-localization map in fish

    NASA Astrophysics Data System (ADS)

    van Hemmen, J. Leo

    2015-03-01

    How sound localization is represented in the fish brain is a research field largely unbiased by theoretical analysis and computational modeling. Yet, there is experimental evidence that the axes of particle acceleration due to underwater sound are represented through a map in the midbrain of fish, e.g., in the torus semicircularis of the rainbow trout (Wubbels et al. 1997). How does such a map arise? Fish perceive pressure gradients by their three otolithic organs, each of which comprises a dense, calcareous stone that is bathed in endolymph and attached to a sensory epithelium. In rainbow trout, the sensory epithelia of the left and right utricle lie in the horizontal plane and consist of hair cells with equally distributed preferred orientations. We model the neuronal response of this system on the basis of Schuijf's vector detection hypothesis (Schuijf et al. 1975) and introduce a temporal spike code of sound direction, where optimality of hair cell orientation θj with respect to the acceleration direction θs is mapped onto spike phases via a von Mises distribution. By learning to tune in to the earliest synchronized activity, nerve cells in the midbrain generate a map under the supervision of a locally excitatory, yet globally inhibitory visual teacher. Work done in collaboration with Daniel Begovic. Partially supported by BCCN - Munich.
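
    A toy sketch of the phase-coding idea described above (illustrative, not the authors' model): each hair-cell afferent fires at a phase drawn from a von Mises distribution whose concentration grows with how well the cell's preferred axis matches the acceleration direction, so well-matched cells synchronize most tightly. The concentration scaling and the population-vector read-out are assumptions made for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def spike_phases(theta_pref, theta_s, kappa_max=8.0):
        """One spike phase per afferent, von Mises distributed around phase 0.

        The concentration (phase locking) of afferent j grows with the match
        |cos(theta_pref[j] - theta_s)| between its preferred axis and the
        acceleration axis theta_s (axes, hence the absolute value).
        """
        kappa = kappa_max * np.abs(np.cos(theta_pref - theta_s))
        return np.array([rng.vonmises(0.0, max(k, 1e-3)) for k in kappa])

    # population of afferents with uniformly distributed preferred orientations
    theta_pref = np.linspace(0, np.pi, 60, endpoint=False)
    theta_s = np.deg2rad(40.0)                        # true acceleration axis
    phases = spike_phases(theta_pref, theta_s)

    # read-out: weight each afferent by how close its spike phase is to zero,
    # then take a population vector on doubled angles (axes have period pi)
    weights = np.clip(np.cos(phases), 0.0, None)
    est = 0.5 * np.angle(np.sum(weights * np.exp(2j * theta_pref)))
    print(f"true axis: {np.rad2deg(theta_s):.1f} deg, "
          f"estimate: {np.rad2deg(est) % 180:.1f} deg")
    ```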

  18. Experimental localization of an acoustic sound source in a wind-tunnel flow by using a numerical time-reversal technique.

    PubMed

    Padois, Thomas; Prax, Christian; Valeau, Vincent; Marx, David

    2012-10-01

    The possibility of using the time-reversal technique to localize acoustic sources in a wind-tunnel flow is investigated. While the technique is widespread, it has scarcely been used in aeroacoustics up to now. The proposed method consists of two steps: in a first experimental step, the acoustic pressure fluctuations are recorded over a linear array of microphones; in a second numerical step, the experimental data are time-reversed and used as input data for a numerical code solving the linearized Euler equations. The simulation achieves the back-propagation of the waves from the array to the source and takes into account the effect of the mean flow on sound propagation. The ability of the method to localize a sound source in a typical wind-tunnel flow is first demonstrated using simulated data. A generic experiment is then set up in an anechoic wind tunnel to validate the proposed method with a flow at Mach number 0.11. Monopolar sources are considered first, either monochromatic or with a narrow- or wide-band frequency content. The source position is estimated accurately, with an error smaller than the wavelength. An application to a dipolar sound source shows that this type of source is also very satisfactorily characterized.
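
    The sketch below is a greatly simplified, free-field surrogate of the two-step procedure (it is not the linearized-Euler back-propagation used in the paper and ignores the mean flow): record a burst on a line array, time-reverse the signals, and numerically re-propagate them onto a grid of candidate positions by re-applying each microphone's free-field delay; the energy of the back-propagated field peaks near the source. Array geometry, sound speed, grid, and source signal are illustrative assumptions.

    ```python
    import numpy as np

    c, fs = 343.0, 50_000                       # sound speed [m/s], sampling rate [Hz]
    t = np.arange(0, 0.01, 1 / fs)

    mics = np.c_[np.linspace(-0.5, 0.5, 16), np.zeros(16)]   # line array of microphones
    src = np.array([0.12, 0.8])                              # true source position [m]

    # synthesize recordings: a short tone burst delayed by each propagation time
    burst = np.sin(2 * np.pi * 4000 * t) * np.exp(-((t - 0.004) / 0.001) ** 2)
    recordings = np.array([np.interp(t - np.linalg.norm(src - m) / c, t, burst,
                                     left=0.0, right=0.0) for m in mics])

    # time-reverse and back-propagate onto a grid: re-apply each microphone's
    # delay to its reversed signal and sum (free-field surrogate of the solver)
    reversed_sig = recordings[:, ::-1]
    xs = np.linspace(-0.4, 0.4, 41)
    ys = np.linspace(0.4, 1.2, 41)
    energy = np.zeros((ys.size, xs.size))
    for iy, y in enumerate(ys):
        for ix, x in enumerate(xs):
            acc = np.zeros_like(t)
            for m_idx, m in enumerate(mics):
                tau = np.linalg.norm(np.array([x, y]) - m) / c
                acc += np.interp(t - tau, t, reversed_sig[m_idx], left=0.0, right=0.0)
            energy[iy, ix] = np.sum(acc ** 2)

    peak = np.unravel_index(np.argmax(energy), energy.shape)
    print(f"estimated source: x = {xs[peak[1]]:.2f} m, y = {ys[peak[0]]:.2f} m "
          f"(true: 0.12, 0.80)")
    ```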

  19. Auditory hedonic phenotypes in dementia: A behavioural and neuroanatomical analysis

    PubMed Central

    Fletcher, Phillip D.; Downey, Laura E.; Golden, Hannah L.; Clark, Camilla N.; Slattery, Catherine F.; Paterson, Ross W.; Schott, Jonathan M.; Rohrer, Jonathan D.; Rossor, Martin N.; Warren, Jason D.

    2015-01-01

    Patients with dementia may exhibit abnormally altered liking for environmental sounds and music, but such altered auditory hedonic responses have not been studied systematically. Here we addressed this issue in a cohort of 73 patients representing major canonical dementia syndromes (behavioural variant frontotemporal dementia (bvFTD), semantic dementia (SD), progressive nonfluent aphasia (PNFA), and amnestic Alzheimer's disease (AD)) using a semi-structured caregiver behavioural questionnaire and voxel-based morphometry (VBM) of patients' brain MR images. Behavioural responses signalling abnormal aversion to environmental sounds, aversion to music or heightened pleasure in music ('musicophilia') occurred in around half of the cohort but showed clear syndromic and genetic segregation, occurring in most patients with bvFTD but infrequently in PNFA and more commonly in association with MAPT than C9orf72 mutations. Aversion to sounds was the exclusive auditory phenotype in AD whereas more complex phenotypes including musicophilia were common in bvFTD and SD. Auditory hedonic alterations correlated with grey matter loss in a common, distributed, right-lateralised network including antero-mesial temporal lobe, insula, anterior cingulate and nucleus accumbens. Our findings suggest that abnormalities of auditory hedonic processing are a significant issue in common dementias. Sounds may constitute a novel probe of brain mechanisms for emotional salience coding that are targeted by neurodegenerative disease. PMID:25929717

  20. Acoustic effects of the ATOC signal (75 Hz, 195 dB) on dolphins and whales.

    PubMed

    Au, W W; Nachtigall, P E; Pawloski, J L

    1997-05-01

    The Acoustic Thermometry of Ocean Climate (ATOC) program of Scripps Institution of Oceanography and the Applied Physics Laboratory, University of Washington, will broadcast a low-frequency 75-Hz phase modulated acoustic signal over ocean basins in order to study ocean temperatures on a global scale and examine the effects of global warming. One of the major concerns is the possible effect of the ATOC signal on marine life, especially on dolphins and whales. In order to address this issue, the hearing sensitivity of a false killer whale (Pseudorca crassidens) and a Risso's dolphin (Grampus griseus) to the ATOC sound was measured behaviorally. A staircase procedure with the signal levels being changed in 1-dB steps was used to measure the animals' threshold to the actual ATOC coded signal. The results indicate that small odontocetes such as the Pseudorca and Grampus swimming directly above the ATOC source will not hear the signal unless they dive to a depth of approximately 400 m. A sound propagation analysis suggests that the sound-pressure level at ranges greater than 0.5 km will be less than 130 dB for depths down to about 500 m. Several species of baleen whales produce sounds much greater than 170-180 dB. With the ATOC source on the axis of the deep sound channel (greater than 800 m), the ATOC signal will probably have minimal physical and physiological effects on cetaceans.

  1. Affect in Human-Robot Interaction

    DTIC Science & Technology

    2014-01-01

    is capable of learning and producing a large number of facial expressions based on Ekman’s Facial Action Coding System, FACS (Ekman and Friesen 1978... tactile (pushed, stroked, etc.), auditory (loud sound), temperature and olfactory (alcohol, smoke, etc.). The personality of the robot consists of...robot’s behavior through decision-making, learning , or action selection, a number of researchers used the fuzzy logic approach to emotion generation

  2. Puget Sound Dissolved Oxygen Modeling Study: Development of an Intermediate-Scale Hydrodynamic Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Zhaoqing; Khangaonkar, Tarang; Labiosa, Rochelle G.

    2010-11-30

    The Washington State Department of Ecology contracted with Pacific Northwest National Laboratory to develop an intermediate-scale hydrodynamic and water quality model to study dissolved oxygen and nutrient dynamics in Puget Sound and to help define potential Puget Sound-wide nutrient management strategies and decisions. Specifically, the project is expected to help determine 1) if current and potential future nitrogen loadings from point and non-point sources are significantly impairing water quality at a large scale and 2) what level of nutrient reductions is necessary to reduce or eliminate human impacts to dissolved oxygen levels in the sensitive areas. In this study, an intermediate-scale hydrodynamic model of Puget Sound was developed to simulate the hydrodynamics of Puget Sound and the Northwest Straits for the year 2006. The model was constructed using the unstructured Finite Volume Coastal Ocean Model. The overall model grid resolution within Puget Sound in its present configuration is about 880 m. The model was driven by tides, river inflows, and meteorological forcing (wind and net heat flux) and simulated tidal circulations, temperature, and salinity distributions in Puget Sound. The model was validated against observed data of water surface elevation, velocity, temperature, and salinity at various stations within the study domain. Model validation indicated that the model simulates tidal elevations and currents in Puget Sound well and reproduces the general patterns of the temperature and salinity distributions.

  3. Acquisition of Inductive Biconditional Reasoning Skills: Training of Simultaneous and Sequential Processing.

    ERIC Educational Resources Information Center

    Lee, Seong-Soo

    1982-01-01

    Tenth-grade students (n=144) received training on one of three processing methods: coding-mapping (simultaneous), coding only, or decision tree (sequential). The induced simultaneous processing strategy worked optimally under rule learning, while the sequential strategy was difficult to induce and/or not optimal for rule-learning operations.…

  4. A State-of-the-Art Review: Personalization of Tinnitus Sound Therapy

    PubMed Central

    Searchfield, Grant D.; Durai, Mithila; Linford, Tania

    2017-01-01

    Background: There are several established, and an increasing number of putative, therapies using sound to treat tinnitus. There appear to be few guidelines for sound therapy selection and application. Aim: To review current approaches to personalizing sound therapy for tinnitus. Methods: A “state-of-the-art” review (Grant and Booth, 2009) was undertaken to answer the question: how do current sound-based therapies for tinnitus adjust for tinnitus heterogeneity? Scopus, Google Scholar, Embase and PubMed were searched for the 10-year period 2006–2016. The search strategy used the following key words: “tinnitus” AND “sound” AND “therapy” AND “guidelines” OR “personalized” OR “customized” OR “individual” OR “questionnaire” OR “selection.” The results of the review were cataloged and organized into themes. Results: In total 165 articles were reviewed in full, 83 contained sufficient details to contribute to answering the study question. The key themes identified were hearing compensation, pitched-match therapy, maskability, reaction to sound and psychosocial factors. Although many therapies mentioned customization, few could be classified as being personalized. Several psychoacoustic and questionnaire-based methods for assisting treatment selection were identified. Conclusions: Assessment methods are available to assist clinicians to personalize sound-therapy and empower patients to be active in therapy decision-making. Most current therapies are modified using only one characteristic of the individual and/or their tinnitus. PMID:28970812

  5. Sound-direction identification, interaural time delay discrimination, and speech intelligibility advantages in noise for a bilateral cochlear implant user.

    PubMed

    Van Hoesel, Richard; Ramsden, Richard; Odriscoll, Martin

    2002-04-01

    To characterize some of the benefits available from using two cochlear implants compared with just one, sound-direction identification (ID) abilities, sensitivity to interaural time delays (ITDs) and speech intelligibility in noise were measured for a bilateral multi-channel cochlear implant user. Sound-direction ID was tested in the horizontal plane. The subject was tested both unilaterally and bilaterally using two independent behind-the-ear ESPRIT (Cochlear Ltd.) processors, as well as bilaterally using custom research processors. Pink noise bursts were presented using an 11-loudspeaker array spanning the subject's frontal 180 degrees arc in an anechoic room. After each burst, the subject was asked to identify which loudspeaker had produced the sound. No explicit training and no feedback were given. Presentation levels were nominally at 70 dB SPL, except for a repeat experiment using the clinical devices where the presentation levels were reduced to 60 dB SPL to avoid activation of the devices' automatic gain control (AGC) circuits. Overall presentation levels were randomly varied by +/- 3 dB. For the research processor, a "low-update-rate" and a "high-update-rate" strategy were tested. Direct measurements of ITD just noticeable differences (JNDs) were made using a 3 AFC paradigm targeting 70% correct performance on the psychometric function. Stimuli included simple, low-rate electrical pulse trains as well as high-rate pulse trains modulated at 100 Hz. Speech data comparing monaural and binaural performance in noise were also collected with both low- and high-update-rate strategies on the research processors. Open-set sentences were presented from directly in front of the subject and competing multi-talker babble noise was presented from the same loudspeaker, or from a loudspeaker placed 90 degrees to the left or right of the subject. For the sound-direction ID task, monaural performance using the clinical devices showed large mean absolute errors of 81 degrees and 73 degrees, with standard deviations (averaged across all 11 loudspeakers) of 10 degrees and 17 degrees, for left and right ears, respectively. For bilateral device use at a presentation level of 70 dB SPL, the mean error improved to about 16 degrees with an average standard deviation of 18 degrees. When the presentation level was decreased to 60 dB SPL to avoid activation of the AGC circuits in the clinical processors, the mean response error improved further to 8 degrees with a standard deviation of 13 degrees. Further tests with the custom research processors, which had a higher stimulation rate and did not include AGCs, showed comparable response errors: around 8 or 9 degrees and a standard deviation of about 11 degrees for both update rates. The best ITD JNDs measured for this subject were between 350 and 400 microsec for simple low-rate pulse trains. Speech results showed a substantial headshadow advantage for bilateral device use when speech and noise were spatially separated, but little evidence of binaural unmasking. For spatially coincident speech and noise, listening with both ears showed similar results to listening with either side alone when loudness summation was compensated for. No significant differences were observed between binaural results for high and low update rates in any test configuration. Only for monaural listening in one test configuration did the high rate show a small but significant improvement over the low rate. Results show that even if interaural time delay cues are not well coded or perceived, bilateral implants can offer important advantages, both for speech in noise and for sound-direction identification.

  6. Rank Order Coding: a Retinal Information Decoding Strategy Revealed by Large-Scale Multielectrode Array Retinal Recordings.

    PubMed

    Portelli, Geoffrey; Barrett, John M; Hilgen, Gerrit; Masquelier, Timothée; Maccione, Alessandro; Di Marco, Stefano; Berdondini, Luca; Kornprobst, Pierre; Sernagor, Evelyne

    2016-01-01

    How a population of retinal ganglion cells (RGCs) encodes the visual scene remains an open question. Going beyond individual RGC coding strategies, results in salamander suggest that the relative latencies of a RGC pair encode spatial information. Thus, a population code based on this concerted spiking could be a powerful mechanism to transmit visual information rapidly and efficiently. Here, we tested this hypothesis in mouse by recording simultaneous light-evoked responses from hundreds of RGCs, at pan-retinal level, using a new generation of large-scale, high-density multielectrode array consisting of 4096 electrodes. Interestingly, we did not find any RGCs exhibiting a clear latency tuning to the stimuli, suggesting that in mouse, individual RGC pairs may not provide sufficient information. We show that a significant amount of information is encoded synergistically in the concerted spiking of large RGC populations. Thus, the RGC population response described with relative activities, or ranks, provides more relevant information than classical independent spike count- or latency- based codes. In particular, we report for the first time that when considering the relative activities across the whole population, the wave of first stimulus-evoked spikes is an accurate indicator of stimulus content. We show that this coding strategy coexists with classical neural codes, and that it is more efficient and faster. Overall, these novel observations suggest that already at the level of the retina, concerted spiking provides a reliable and fast strategy to rapidly transmit new visual scenes.
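
    A toy sketch of the rank-order idea described above (illustrative, not the authors' analysis pipeline): represent each stimulus by the order in which cells fire their first spikes and compare responses with a rank correlation. The latency model, jitter, and decoder below are assumptions made for the example.

    ```python
    import numpy as np
    from scipy.stats import rankdata, spearmanr

    rng = np.random.default_rng(2)

    def first_spike_latencies(stimulus_drive, jitter_ms=3.0):
        """Toy model: stronger drive -> earlier first spike, plus Gaussian jitter."""
        return 50.0 - 30.0 * stimulus_drive + rng.normal(0.0, jitter_ms, stimulus_drive.size)

    def rank_code(latencies):
        """Population rank-order code: 1 = earliest cell, N = latest."""
        return rankdata(latencies)

    # two distinct 'stimuli' driving a population of 100 RGCs, two trials each
    n_cells = 100
    drive_a = rng.uniform(0, 1, n_cells)
    drive_b = rng.uniform(0, 1, n_cells)
    trials = {name: [rank_code(first_spike_latencies(d)) for _ in range(2)]
              for name, d in [("A", drive_a), ("B", drive_b)]}

    same, _ = spearmanr(trials["A"][0], trials["A"][1])     # same stimulus, repeat trials
    diff, _ = spearmanr(trials["A"][0], trials["B"][0])     # different stimuli
    print(f"rank similarity, same stimulus: {same:.2f}; different stimuli: {diff:.2f}")
    ```

    In this toy setting the rank order is highly reproducible across repeats of the same stimulus and near chance across different stimuli, which is the sense in which the wave of first stimulus-evoked spikes can carry stimulus content.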

  7. Experimental Investigation of Propagation and Reflection Phenomena in Finite Amplitude Sound Beams.

    NASA Astrophysics Data System (ADS)

    Averkiou, Michalakis Andrea

    Measurements of finite amplitude sound beams are compared with theoretical predictions based on the KZK equation. Attention is devoted to harmonic generation and shock formation related to a variety of propagation and reflection phenomena. Both focused and unfocused piston sources were used in the experiments. The nominal source parameters are piston radii of 6-25 mm, frequencies of 1-5 MHz, and focal lengths of 10-20 cm. The research may be divided into two parts: propagation and reflection of continuous-wave focused sound beams, and propagation of pulsed sound beams. In the first part, measurements of propagation curves and beam patterns of focused pistons in water, both in the free field and following reflection from curved targets, are presented. The measurements are compared with predictions from a computer model that solves the KZK equation in the frequency domain. A novel method for using focused beams to measure target curvature is developed. In the second part, measurements of pulsed sound beams from plane pistons in both water and glycerin are presented. Very short pulses (less than 2 cycles), tone bursts (5-30 cycles), and frequency modulated (FM) pulses (10-30 cycles) were measured. Acoustic saturation of pulse propagation in water is investigated. Self-demodulation of tone bursts and FM pulses was measured in glycerin, both in the near and far fields, on and off axis. All pulse measurements are compared with numerical results from a computer code that solves the KZK equation in the time domain. A quasilinear analytical solution for the entire axial field of a self-demodulating pulse is derived in the limit of strong absorption. Taken as a whole, the measurements provide a broad data base for sound beams of finite amplitude. Overall, outstanding agreement is obtained between theory and experiment.

  8. Law No. 91, Amendment to the Penal Code, 5 September 1987.

    PubMed

    1989-01-01

    This Law replaces Article 398 of the Iraq Penal Code with the following language: "If a sound contract of marriage has been made between a perpetrator of one of the crimes mentioned in this chapter and the victim, it shall be a legal extenuating excuse for the purpose of implementing the provisions of Articles (130 and 131) of the Penal Code. If the marriage contract has been terminated by a divorce issued by the husband without a legitimate reason, or by a divorce passed by the court for such reasons related [to] a mistake or a misconduct of the husband, three years before the expiry of the sentence of the action, then, the punishment shall be reconsidered with a view to intensifying it due to a request from the public prosecution, the victim herself, or any interested person." Among the crimes mentioned in the chapter referred to in Article 398 is rape.

  9. Environmentally Sound Alternatives: Setting the Context.

    ERIC Educational Resources Information Center

    Chaudhary, Anil K.

    1989-01-01

    As former colonies struggle with economic development, consumerism competes with environmental awareness and concern. Developing countries should reject the models of the colonial past and create developmental strategies that preserve natural resources. (SK)

  10. A comparison of experiment and theory for sound propagation in variable area ducts

    NASA Technical Reports Server (NTRS)

    Nayfeh, A. H.; Kaiser, J. E.; Marshall, R. L.; Hurst, C. J.

    1980-01-01

    An experimental and analytical program has been carried out to evaluate sound suppression techniques in ducts that produce refraction effects due to axial velocity gradients. The analytical program employs a computer code based on the method of multiple scales to calculate the influence of axial variations due to slow changes in the cross-sectional area as well as transverse gradients due to the wall boundary layers. Detailed comparisons between the analytical predictions and the experimental measurements have been made. The circumferential variations of pressure amplitudes and phases at several axial positions have been examined in straight and variable area ducts, with hard walls and lined sections, and with and without a mean flow. Reasonable agreement between the theoretical and experimental results has been found.

  11. Satellite sound broadcasting system study: Mobile considerations

    NASA Technical Reports Server (NTRS)

    Golshan, Nasser

    1990-01-01

    Discussed here is the mobile reception part of a study to investigate a satellite sound broadcast system in the UHF or L bands. Existing propagation and reception measurements are used with proper interpretation to evaluate the signaling, coding, and diversity alternatives suitable for the system. Signal attenuation in streets shadowed by buildings appears to be around 29 dB, considerably higher than the 10 dB adopted by the CCIR. With the marriage of proper technologies, an LMSS-class satellite can provide substantial direct satellite audio broadcast capability in UHF or L bands for high quality mobile and portable indoor reception by low cost radio receivers. This scheme requires terrestrial repeaters for satisfactory mobile reception in urban areas. A specialized bandwidth-efficient spread spectrum signaling technique is particularly suitable for the terrestrial repeaters.

  12. Effective Connectivity Reveals Right-Hemisphere Dominance in Audiospatial Perception: Implications for Models of Spatial Neglect

    PubMed Central

    Friston, Karl J.; Mattingley, Jason B.; Roepstorff, Andreas; Garrido, Marta I.

    2014-01-01

    Detecting the location of salient sounds in the environment rests on the brain's ability to use differences in sounds arriving at both ears. Functional neuroimaging studies in humans indicate that the left and right auditory hemispaces are coded asymmetrically, with a rightward attentional bias that reflects spatial attention in vision. Neuropsychological observations in patients with spatial neglect have led to the formulation of two competing models: the orientation bias and right-hemisphere dominance models. The orientation bias model posits a symmetrical mapping between one side of the sensorium and the contralateral hemisphere, with mutual inhibition of the ipsilateral hemisphere. The right-hemisphere dominance model introduces a functional asymmetry in the brain's coding of space: the left hemisphere represents the right side, whereas the right hemisphere represents both sides of the sensorium. We used Dynamic Causal Modeling of effective connectivity and Bayesian model comparison to adjudicate between these alternative network architectures, based on human electroencephalographic data acquired during an auditory location oddball paradigm. Our results support a hemispheric asymmetry in a frontoparietal network that conforms to the right-hemisphere dominance model. We show that, within this frontoparietal network, forward connectivity increases selectively in the hemisphere contralateral to the side of sensory stimulation. We interpret this finding in light of hierarchical predictive coding as a selective increase in attentional gain, which is mediated by feedforward connections that carry precision-weighted prediction errors during perceptual inference. This finding supports the disconnection hypothesis of unilateral neglect and has implications for theories of its etiology. PMID:24695717

  13. Human auditory steady state responses to binaural and monaural beats.

    PubMed

    Schwarz, D W F; Taylor, P

    2005-03-01

    Binaural beat sensations depend upon a central combination of two different temporally encoded tones, separately presented to the two ears. We tested the feasibility to record an auditory steady state evoked response (ASSR) at the binaural beat frequency in order to find a measure for temporal coding of sound in the human EEG. We stimulated each ear with a distinct tone, both differing in frequency by 40Hz, to record a binaural beat ASSR. As control, we evoked a beat ASSR in response to both tones in the same ear. We band-pass filtered the EEG at 40Hz, averaged with respect to stimulus onset and compared ASSR amplitudes and phases, extracted from a sinusoidal non-linear regression fit to a 40Hz period average. A 40Hz binaural beat ASSR was evoked at a low mean stimulus frequency (400Hz) but became undetectable beyond 3kHz. Its amplitude was smaller than that of the acoustic beat ASSR, which was evoked at low and high frequencies. Both ASSR types had maxima at fronto-central leads and displayed a fronto-occipital phase delay of several ms. The dependence of the 40Hz binaural beat ASSR on stimuli at low, temporally coded tone frequencies suggests that it may objectively assess temporal sound coding ability. The phase shift across the electrode array is evidence for more than one origin of the 40Hz oscillations. The binaural beat ASSR is an evoked response, with novel diagnostic potential, to a signal that is not present in the stimulus, but generated within the brain.
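
    A hedged sketch of the extraction pipeline outlined above: band-pass a (here synthetic) EEG around 40Hz, average epochs time-locked to stimulus onset, and fit a sinusoid at the beat frequency by least squares to recover ASSR amplitude and phase. The filter order, epoch count, noise level, and simulated signal are illustrative assumptions, not the study's recording parameters.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    rng = np.random.default_rng(3)
    fs, f_beat = 1000, 40.0                     # sampling rate [Hz], beat frequency [Hz]

    # synthetic single-channel EEG: a weak 40 Hz ASSR buried in noise, 200 epochs of 1 s
    n_epochs, n_samp = 200, fs
    t = np.arange(n_samp) / fs
    assr = 0.5 * np.sin(2 * np.pi * f_beat * t + 0.8)          # microvolts
    eeg = assr + 20.0 * rng.standard_normal((n_epochs, n_samp))

    # band-pass around 40 Hz and average across epochs (time-locked to onset)
    b, a = butter(4, [35 / (fs / 2), 45 / (fs / 2)], btype="band")
    avg = filtfilt(b, a, eeg, axis=1).mean(axis=0)

    # least-squares sinusoid fit at the beat frequency -> amplitude and phase
    design = np.c_[np.sin(2 * np.pi * f_beat * t), np.cos(2 * np.pi * f_beat * t)]
    coef, *_ = np.linalg.lstsq(design, avg, rcond=None)
    amp = np.hypot(coef[0], coef[1])
    phase = np.arctan2(coef[1], coef[0])
    print(f"fitted ASSR amplitude: {amp:.2f} uV, phase: {phase:.2f} rad "
          f"(simulated: 0.50 uV, 0.80 rad)")
    ```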

  14. A Model for Shear Layer Effects on Engine Noise Radiation

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Farassat, F.; Pope, D. Stuart; Vatsa, V.

    2004-01-01

    Prediction of aircraft engine noise is an important aspect of addressing the issues of community noise and cabin noise control. The development of physics-based methodologies for performing such predictions has been a focus of Computational Aeroacoustics (CAA). A recent example of code development in this area is the ducted fan noise propagation and radiation code CDUCT-LaRC. Included within the code is a duct radiation model that is based on the solution of the Ffowcs Williams-Hawkings (FW-H) equation with a penetrable data surface. Testing of this equation for many acoustic problems has shown it to provide generally better results than the Kirchhoff formula for moving surfaces. Currently, the data surface is taken to be the inlet or exhaust plane for inlet or aft-fan cases, respectively. While this provides reasonable results in many situations, these choices of data surface location lead to a few limitations. For example, the shear layer between the bypass flow and the external stream can refract the sound waves radiated to the far field. Radiation results can be improved by including this effect, as well as the reflection of the sound in the bypass region from the solid surface external to the bypass duct surrounding the core flow. This work describes the implementation, and possible approximation, of a shear layer boundary condition within CDUCT-LaRC. An example application also illustrates the improvements that this extension offers for predicting noise radiation from complex inlet and bypass duct geometries, thereby providing a means to evaluate external treatments in the vicinity of the bypass duct exhaust plane.

  15. Neonatal incubators: a toxic sound environment for the preterm infant?*.

    PubMed

    Marik, Paul E; Fuller, Christopher; Levitov, Alexander; Moll, Elizabeth

    2012-11-01

    High sound pressure levels may be harmful to the maturing newborn. Current guidelines suggest that the sound pressure levels within a neonatal intensive care unit should not exceed 45 dB(A). It is likely that environmental noise as well as the noise generated by the incubator fan and respiratory equipment may contribute to the total sound pressure levels. Knowledge of the contribution of each component and source is important to develop effective strategies to reduce noise within the incubator. The objectives of this study were to determine the sound levels, sound spectra, and major sources of sound within a modern neonatal incubator (Giraffe Omnibed; GE Healthcare, Helsinki, Finland) using a sound simulation study to replicate the conditions of a preterm infant undergoing high-frequency jet ventilation (Life Pulse, Bunnell, UT). Using advanced sound data acquisition and signal processing equipment, we measured and analyzed the sound level at a dummy infant's ear and at the head level outside the enclosure. The sound data time histories were digitally acquired and processed using a digital Fast Fourier Transform algorithm to provide spectra of the sound and cumulative sound pressure levels (dBA). The simulation was done with the incubator cooling fan and ventilator switched on or off. In addition, tests were carried out with the enclosure sides closed and hood down and then with the enclosure sides open and the hood up to determine the importance of interior incubator reverberance on the interior sound levels. With all the equipment off and the hood down, the sound pressure levels were 53 dB(A) inside the incubator. The sound pressure levels increased to 68 dB(A) with all equipment switched on (approximately 10 times louder than recommended). The sound intensity was 6.0 × 10⁻⁸ W/m²; this sound level is roughly comparable with that generated by a kitchen exhaust fan on high. Turning the ventilator off reduced the overall sound pressure levels to 64 dB(A) and the sound pressure levels in the low-frequency band of 0 to 100 Hz were reduced by 10 dB(A). The incubator fan generated tones at 200, 400, and 600 Hz that raised the sound level by approximately 2-3 dB(A). Opening the enclosure (with all equipment turned on) reduced the sound levels above 50 Hz by reducing the reverberance within the enclosure. The sound levels, especially at low frequencies, within a modern incubator may reach levels that are likely to be harmful to the developing newborn. Much of the noise is at low frequencies and thus difficult to reduce by conventional means. Therefore, advanced forms of noise control are needed to address this issue.
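
    To make the dB(A) figures concrete, the sketch below computes an overall A-weighted sound pressure level from a calibrated pressure recording via an FFT, in the spirit of the processing described above; the weighting curve is the standard IEC A-weighting formula, while the function names, scaling choices, and the synthetic test tone are illustrative assumptions rather than the authors' pipeline.

        import numpy as np

        def a_weighting_db(f):
            """Standard IEC A-weighting curve, in dB, at frequencies f (Hz)."""
            f2 = np.asarray(f, dtype=float) ** 2
            ra = (12194.0 ** 2 * f2 ** 2) / (
                (f2 + 20.6 ** 2)
                * np.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
                * (f2 + 12194.0 ** 2))
            return 20 * np.log10(ra) + 2.00

        def overall_dba(pressure, fs, p_ref=20e-6):
            """Overall A-weighted SPL, dB(A), of a calibrated pressure signal in Pa
            (even-length signal assumed; no window, so pure tones should sit on a bin)."""
            n = len(pressure)
            spec = np.fft.rfft(pressure - np.mean(pressure))
            freqs = np.fft.rfftfreq(n, 1.0 / fs)
            bin_power = np.abs(spec) ** 2 / n ** 2      # per-bin mean-square pressure
            bin_power[1:-1] *= 2.0                      # fold in negative frequencies
            weight = np.zeros_like(freqs)
            weight[1:] = 10 ** (a_weighting_db(freqs[1:]) / 10.0)
            ms_weighted = np.sum(bin_power * weight)    # A-weighted mean-square pressure
            return 10 * np.log10(ms_weighted / p_ref ** 2)

        # Example: a 0.1 Pa tone at 1 kHz is about 71 dB SPL, and the A-weighting
        # is ~0 dB at 1 kHz, so the result should be close to 71 dB(A).
        fs = 48000
        t = np.arange(fs) / fs
        tone = 0.1 * np.sin(2 * np.pi * 1000 * t)
        print(overall_dba(tone, fs))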

  16. Speech Recognition with the Advanced Combination Encoder and Transient Emphasis Spectral Maxima Strategies in Nucleus 24 Recipients

    ERIC Educational Resources Information Center

    Holden, Laura K.; Vandali, Andrew E.; Skinner, Margaret W.; Fourakis, Marios S.; Holden, Timothy A.

    2005-01-01

    One of the difficulties faced by cochlear implant (CI) recipients is perception of low-intensity speech cues. A. E. Vandali (2001) has developed the transient emphasis spectral maxima (TESM) strategy to amplify short-duration, low-level sounds. The aim of the present study was to determine whether speech scores would be significantly higher with…

  17. International intellectual property strategies for therapeutic antibodies

    PubMed Central

    2011-01-01

    Therapeutic antibodies need international patent protection as their markets expand to include industrialized and emerging countries. Because international intellectual property strategies are frequently complex and costly, applicants require sound information as a basis for decisions regarding the countries in which to pursue patents. While the most important factor is the size of a given market, other factors should also be considered. PMID:22123063

  18. Learning the Rules: Observation and Imitation of a Sorting Strategy by 36-Month-Old Children

    ERIC Educational Resources Information Center

    Williamson, Rebecca A.; Jaswal, Vikram K.; Meltzoff, Andrew N.

    2010-01-01

    Two experiments were used to investigate the scope of imitation by testing whether 36-month-olds can learn to produce a categorization strategy through observation. After witnessing an adult sort a set of objects by a visible property (their color; Experiment 1) or a nonvisible property (the particular sounds produced when the objects were shaken;…

  19. Active Control Of Structure-Borne Noise

    NASA Astrophysics Data System (ADS)

    Elliott, S. J.

    1994-11-01

    The successful practical application of active noise control requires an understanding of both its acoustic limitations and the limitations of the electrical control strategy used. This paper is concerned with the active control of sound in enclosures. First, a review is presented of the fundamental physical limitations of using loudspeakers to achieve either global or local control. Both approaches are seen to have a high frequency limit, due to either the acoustic modal overlap, or the spatial correlation function of the pressure field. These physical performance limits could, in principle, be achieved with either a feedback or a feedforward control strategy. These strategies are reviewed and the use of adaptive digital filters is discussed for both approaches. The application of adaptive feedforward control in the control of engine and road noise in cars is described. Finally, an indirect approach to the active control of sound is discussed, in which the vibration is suppressed in the structural paths connecting the source of vibration to the enclosure. Two specific examples of this strategy are described, using an active automotive engine mount and the incorporation of actuators into helicopter struts to control gear-meshing tones. In both cases good passive design can minimize the complexity of the active controller.
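
    As a concrete illustration of the adaptive feedforward control discussed here, the sketch below runs a filtered-x LMS (FxLMS) controller on a simulated single-channel system; the primary and secondary path impulse responses, tap count, and step size are invented for the example and are not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative primary (noise source -> error mic) and secondary
        # (loudspeaker -> error mic) acoustic paths as short FIR filters.
        primary = np.array([0.0, 0.9, 0.5, 0.2])
        secondary = np.array([0.0, 0.7, 0.3])
        sec_estimate = secondary.copy()            # assume a good plant estimate

        n_taps, mu, n_samples = 32, 0.01, 20000
        w = np.zeros(n_taps)                       # adaptive feedforward filter
        x_buf = np.zeros(n_taps)                   # reference history for the filter
        xf_buf = np.zeros(n_taps)                  # filtered-reference history
        y_hist = np.zeros(len(secondary))          # recent control outputs
        x_hist = np.zeros(max(len(primary), len(sec_estimate)))

        errors = []
        for n in range(n_samples):
            x = rng.standard_normal()              # reference signal from the noise source
            x_hist = np.roll(x_hist, 1); x_hist[0] = x
            x_buf = np.roll(x_buf, 1);   x_buf[0] = x

            d = primary @ x_hist[:len(primary)]    # disturbance at the error microphone
            y = w @ x_buf                          # anti-noise sent to the loudspeaker
            y_hist = np.roll(y_hist, 1); y_hist[0] = y
            e = d + secondary @ y_hist             # residual measured at the error mic

            xf = sec_estimate @ x_hist[:len(sec_estimate)]   # filtered reference
            xf_buf = np.roll(xf_buf, 1); xf_buf[0] = xf
            w -= mu * e * xf_buf                   # FxLMS weight update
            errors.append(e)

        print("mean squared error, first vs last 1000 samples:",
              np.mean(np.square(errors[:1000])), np.mean(np.square(errors[-1000:])))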

  20. Digital signal processing of the phonocardiogram: review of the most recent advancements.

    PubMed

    Durand, L G; Pibarot, P

    1995-01-01

    The objective of the present paper is to provide a detailed review of the most recent developments in instrumentation and signal processing of digital phonocardiography and heart auscultation. After a short introduction, the paper presents a brief history of heart auscultation and phonocardiography, which is followed by a summary of the basic theories and controversies regarding the genesis of the heart sounds. The application of spectral analysis and the potential of new time-frequency representations and cardiac acoustic mapping to resolve the controversies and better understand the genesis and transmission of heart sounds and murmurs within the heart-thorax acoustic system are reviewed. The most recent developments in the application of linear predictive coding, spectral analysis, time-frequency representation techniques, and pattern recognition for the detection and follow-up of native and prosthetic valve degeneration and dysfunction are also presented in detail. New areas of research and clinical applications and areas of potential future developments are then highlighted. The final section is a discussion about a multidegree of freedom theory on the origin of the heart sounds and murmurs, which is completed by the authors' conclusion.
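
    Since linear predictive coding is one of the techniques reviewed, the following is a minimal sketch of LPC via the autocorrelation method (Levinson-Durbin recursion); it is generic signal-processing code rather than any specific phonocardiography implementation, and the model order and test signal are arbitrary.

        import numpy as np

        def lpc_coefficients(signal, order):
            """LPC via the autocorrelation method (Levinson-Durbin). Returns the
            predictor coefficients a[1..p] such that s[n] ~= sum_k a[k]*s[n-k],
            together with the final prediction-error energy."""
            x = np.asarray(signal, dtype=float)
            x = x - x.mean()
            r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
            a = np.zeros(order + 1)
            a[0] = 1.0
            err = r[0]
            for i in range(1, order + 1):
                acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
                k = -acc / err                      # reflection coefficient
                a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1][:i]
                err *= (1.0 - k * k)
            return -a[1:], err

        # Check on a synthetic AR(2) process s[n] = 1.3 s[n-1] - 0.4 s[n-2] + noise
        rng = np.random.default_rng(0)
        s = np.zeros(5000)
        for n in range(2, len(s)):
            s[n] = 1.3 * s[n - 1] - 0.4 * s[n - 2] + rng.standard_normal()
        print(lpc_coefficients(s, order=2)[0])      # close to [1.3, -0.4]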

  1. On the upper bound in the Bohm sheath criterion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kotelnikov, I. A., E-mail: I.A.Kotelnikov@inp.nsk.su; Skovorodin, D. I., E-mail: D.I.Skovorodin@inp.nsk.su

    2016-02-15

    The existence of an upper bound in the Bohm sheath criterion is discussed; according to this criterion, the Debye sheath at the interface between a plasma and a negatively charged electrode is stable only if the ion flow velocity in the plasma exceeds the ion sound velocity. It is stated that, with the exception of some artificial ionization models, the Bohm sheath criterion is satisfied as an equality at the lower bound, and the ion flow velocity is equal to the speed of sound. In the one-dimensional theory, a supersonic flow appears in an unrealistic model of a localized ion source whose size is less than the Debye length; however, supersonic flows seem to be possible in the two- and three-dimensional cases. The available numerical codes used to simulate charged-particle sources with a plasma emitter do not assume the presence of an upper bound in the Bohm sheath criterion; nevertheless, correspondence with experimental data is usually achieved when the ion flow velocity in the plasma is close to the ion sound velocity.
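
    A minimal numerical illustration of the marginal (lower-bound) Bohm criterion is given below: it simply compares an ion flow velocity with the ion sound speed. The formula with an adiabatic ion-temperature term and the example values are textbook assumptions, not taken from the paper.

        import numpy as np

        e = 1.602176634e-19          # elementary charge, C
        m_u = 1.66053906660e-27      # atomic mass unit, kg

        def ion_sound_speed(Te_eV, Ti_eV=0.0, mass_amu=1.0, gamma_i=3.0):
            """Ion sound speed c_s = sqrt((e*Te + gamma_i*e*Ti)/m_i), with
            temperatures in eV and the ion mass in atomic mass units."""
            m_i = mass_amu * m_u
            return np.sqrt((e * Te_eV + gamma_i * e * Ti_eV) / m_i)

        def satisfies_bohm_criterion(u_i, Te_eV, **kwargs):
            """Marginal Bohm criterion: ion flow into the sheath at least at c_s."""
            return u_i >= ion_sound_speed(Te_eV, **kwargs)

        # Hydrogen plasma with Te = 5 eV and cold ions: c_s is about 2.2e4 m/s
        print(ion_sound_speed(5.0))
        print(satisfies_bohm_criterion(1.0e4, 5.0))   # subsonic flow -> False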

  2. Effects of cacheing on multitasking efficiency and programming strategy on an ELXSI 6400

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montry, G.R.; Benner, R.E.

    1985-12-01

    The impact of a cache/shared memory architecture, and in particular the cache coherency problem, upon concurrent algorithm and program development is discussed. In this context, a simple set of programming strategies is proposed to streamline code development and improve code performance when multitasking in a cache/shared memory or distributed memory environment.

  3. Practice management companies. Creating sound information technology strategies.

    PubMed

    Cross, M A

    1997-10-01

    Practice management companies are becoming more prominent players in the health care industry. To improve the performance of the group practices that they acquire, these companies are striving to use updated information technologies.

  4. Abundance, stock origin, and length of marked and unmarked juvenile Chinook salmon in the surface waters of greater Puget Sound

    USGS Publications Warehouse

    Rice, C.A.; Greene, C.M.; Moran, P.; Teel, D.J.; Kuligowski, D.R.; Reisenbichler, R.R.; Beamer, E.M.; Karr, J.R.; Fresh, K.L.

    2011-01-01

    This study focuses on the use by juvenile Chinook salmon Oncorhynchus tshawytscha of the rarely studied neritic environment (surface waters overlaying the sublittoral zone) in greater Puget Sound. Juvenile Chinook salmon inhabit the sound from their late estuarine residence and early marine transition to their first year at sea. We measured the density, origin, and size of marked (known hatchery) and unmarked (majority naturally spawned) juveniles by means of monthly surface trawls at six river mouth estuaries in Puget Sound and the areas in between. Juvenile Chinook salmon were present in all months sampled (April-November). Unmarked fish in the northern portion of the study area showed broader seasonal distributions of density than did either marked fish in all areas or unmarked fish in the central and southern portions of the sound. Despite these temporal differences, the densities of marked fish appeared to drive most of the total density estimates across space and time. Genetic analysis and coded wire tag data provided us with documented individuals from at least 16 source populations and indicated that movement patterns and apparent residence time were, in part, a function of natal location and time passed since the release of these fish from hatcheries. Unmarked fish tended to be smaller than marked fish and had broader length frequency distributions. The lengths of unmarked fish were negatively related to the density of both marked and unmarked Chinook salmon, but those of marked fish were not. These results indicate more extensive use of estuarine environments by wild than by hatchery juvenile Chinook salmon as well as differential use (e.g., rearing and migration) of various geographic regions of greater Puget Sound by juvenile Chinook salmon in general. In addition, the results for hatchery-generated timing, density, and length differences have implications for the biological interactions between hatchery and wild fish throughout Puget Sound. © American Fisheries Society 2011.

  5. Phonological, visual, and semantic coding strategies and children's short-term picture memory span.

    PubMed

    Henry, Lucy A; Messer, David; Luger-Klein, Scarlett; Crane, Laura

    2012-01-01

    Three experiments addressed controversies in the previous literature on the development of phonological and other forms of short-term memory coding in children, using assessments of picture memory span that ruled out potentially confounding effects of verbal input and output. Picture materials were varied in terms of phonological similarity, visual similarity, semantic similarity, and word length. Older children (6/8-year-olds), but not younger children (4/5-year-olds), demonstrated robust and consistent phonological similarity and word length effects, indicating that they were using phonological coding strategies. This confirmed findings initially reported by Conrad (1971), but subsequently questioned by other authors. However, in contrast to some previous research, little evidence was found for a distinct visual coding stage at 4 years, casting doubt on assumptions that this is a developmental stage that consistently precedes phonological coding. There was some evidence for a dual visual and phonological coding stage prior to exclusive use of phonological coding at around 5-6 years. Evidence for semantic similarity effects was limited, suggesting that semantic coding is not a key method by which young children recall lists of pictures.

  6. A practical approach to portability and performance problems on massively parallel supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beazley, D.M.; Lomdahl, P.S.

    1994-12-08

    We present an overview of the tactics we have used to achieve a high level of performance while improving portability for a large-scale molecular dynamics code SPaSM. SPaSM was originally implemented in ANSI C with message passing for the Connection Machine 5 (CM-5). In 1993, SPaSM was selected as one of the winners in the IEEE Gordon Bell Prize competition for sustaining 50 Gflops on the 1024 node CM-5 at Los Alamos National Laboratory. Achieving this performance on the CM-5 required rewriting critical sections of code in CDPEAC assembler language. In addition, the code made extensive use of CM-5 parallel I/O and the CMMD message passing library. Given this highly specialized implementation, we describe how we have ported the code to the Cray T3D and high performance workstations. In addition we will describe how it has been possible to do this using a single version of source code that runs on all three platforms without sacrificing any performance. Sound too good to be true? We hope to demonstrate that one can realize both code performance and portability without relying on the latest and greatest prepackaged tool or parallelizing compiler.

  7. Ethical Dilemmas in Financial Reporting Situations and the Preferred Mode of Resolution of Ethical Conflicts as Taken by Certified and Noncertified Management Accountants in Organizations with Perceived Different Ethical Work Climates.

    ERIC Educational Resources Information Center

    McKenna, John N.

    1995-01-01

    Responses from 37.7% of 491 chief financial officers surveyed revealed a majority of organizational climates based on law and codes. Most believed their organizations attempted sound financial reporting and ethical operation. Certified accountants perceived a greater likelihood of the occurrence of ethical dilemmas than did noncertified…

  8. Perception and Neural Coding of Harmonic Fusion in Ferrets

    DTIC Science & Technology

    2004-01-01

    distinct percepts that come under the rubric of pitch, because periodicity pitch underlies speakers’ voices and speech prosody, as well as musical ...spectral fusion is unclear for sounds having predominantly low-frequency spectra such as speech, music, and many animal vocalizations. In summary...84, 560–565. von Helmholtz, H. (1863). Die Lehre von den Tonempfindungen als physiologische Grundlage für die Theorie der Musik. (Vieweg und Sohn

  9. Contributions to Automated Realtime Underwater Navigation

    DTIC Science & Technology

    2012-02-01

    help by scrubbing circuits–Scotty McCue was always there to help, and demonstrate proper circuit scrubbing technique. Carl Kaiser has been great at... Jung Lee, and of course, my committee again. My friends and fellow JP students have provided advice on code or prose, sounding boards for crazy ideas...order): Chris Murphy, Clay Kunz, Jeff Kaeli, Mark van Middlesworth, Peter Kimball, Wu-Jung Lee, Heather Beem, Derya Akkaynak Yellin, Kalina

  10. Experimental Evaluation of Acoustic Engine Liner Models Developed with COMSOL Multiphysics

    NASA Technical Reports Server (NTRS)

    Schiller, Noah H.; Jones, Michael G.; Bertolucci, Brandon

    2017-01-01

    Accurate modeling tools are needed to design new engine liners capable of reducing aircraft noise. The purpose of this study is to determine if a commercially-available finite element package, COMSOL Multiphysics, can be used to accurately model a range of different acoustic engine liner designs, and in the process, collect and document a benchmark dataset that can be used in both current and future code evaluation activities. To achieve these goals, a variety of liner samples, ranging from conventional perforate-over-honeycomb to extended-reaction designs, were installed in one wall of the grazing flow impedance tube at the NASA Langley Research Center. The liners were exposed to high sound pressure levels and grazing flow, and the effect of the liner on the sound field in the flow duct was measured. These measurements were then compared with predictions. While this report only includes comparisons for a subset of the configurations, the full database of all measurements and predictions is available in electronic format upon request. The results demonstrate that both conventional perforate-over-honeycomb and extended-reaction liners can be accurately modeled using COMSOL. Therefore, this modeling tool can be used with confidence to supplement the current suite of acoustic propagation codes, and ultimately develop new acoustic engine liners designed to reduce aircraft noise.

  11. Adaptations for Substrate Gleaning in Bats: The Pallid Bat as a Case Study.

    PubMed

    Razak, Khaleel A

    2018-06-06

    Substrate gleaning is a foraging strategy in which bats use a mixture of echolocation, prey-generated sounds, and vision to localize and hunt surface-dwelling prey. Many substrate-gleaning species depend primarily on prey-generated noise to hunt. Use of echolocation is limited to general orientation and obstacle avoidance. This foraging strategy involves a different set of selective pressures on morphology, behavior, and auditory system organization of bats compared to the use of echolocation for both hunting and navigation. Gleaning likely evolved to hunt in cluttered environments and/or as a counterstrategy to reduce detection by eared prey. Gleaning bats simultaneously receive streams of echoes from obstacles and prey-generated noise, and have to segregate these acoustic streams to attend to one or both. Not only do these bats have to be exquisitely sensitive to the soft, low frequency sounds produced by walking/rustling prey, they also have to precisely localize these sounds. Gleaners typically use low intensity echolocation calls. Such stealth echolocation requires a nervous system that is attuned to low intensity sound processing. In addition, landing on the ground to hunt may bring gleaners in close proximity to venomous prey. In fact, at least 2 gleaning bat species are known to hunt highly venomous scorpions. While a number of studies have addressed adaptations for echolocation in bats that hunt in the air, very little is known about the morphological, behavioral, and neural specializations for gleaning in bats. This review highlights the novel insights gleaning bats provide into bat evolution, particularly auditory pathway organization and ion channel structure/function relationships. Gleaning bats are found in multiple families, suggesting convergent evolution of specializations for gleaning as a foraging strategy. However, most of this review is based on recent work on a single species - the pallid bat (Antrozous pallidus) - symptomatic of the fact that more comparative work is needed to identify the mechanisms that facilitate gleaning behavior. © 2018 S. Karger AG, Basel.

  12. Evaluation of Spanwise Variable Impedance Liners with Three-Dimensional Aeroacoustics Propagation Codes

    NASA Technical Reports Server (NTRS)

    Jones, M. G.; Watson, W. R.; Nark, D. M.; Schiller, N. H.

    2017-01-01

    Three perforate-over-honeycomb liner configurations, one uniform and two with spanwise variable impedance, are evaluated based on tests conducted in the NASA Grazing Flow Impedance Tube (GFIT) with a plane-wave source. Although the GFIT is only 2" wide, spanwise impedance variability clearly affects the measured acoustic pressure field, such that three-dimensional (3D) propagation codes are required to properly predict this acoustic pressure field. Three 3D propagation codes (CHE3D, COMSOL, and CDL) are used to predict the sound pressure level and phase at eighty-seven microphones flush-mounted in the GFIT (distributed along all four walls). The CHE3D and COMSOL codes compare favorably with the measured data, regardless of whether an exit acoustic pressure or anechoic boundary condition is employed. Except for those frequencies where the attenuation is large, the CDL code also provides acceptable estimates of the measured acoustic pressure profile. The CHE3D and COMSOL predictions diverge slightly from the measured data for frequencies away from resonance, where the attenuation is noticeably reduced, particularly when an exit acoustic pressure boundary condition is used. For these conditions, the CDL code actually provides slightly more favorable comparison with the measured data. Overall, the comparisons of predicted and measured data suggest that any of these codes can be used to understand data trends associated with spanwise variable-impedance liners.

  13. Dynamic state estimation based on Poisson spike trains—towards a theory of optimal encoding

    NASA Astrophysics Data System (ADS)

    Susemihl, Alex; Meir, Ron; Opper, Manfred

    2013-03-01

    Neurons in the nervous system convey information to higher brain regions by the generation of spike trains. An important question in the field of computational neuroscience is how these sensory neurons encode environmental information in a way which may be simply analyzed by subsequent systems. Many aspects of the form and function of the nervous system have been understood using the concepts of optimal population coding. Most studies, however, have neglected the aspect of temporal coding. Here we address this shortcoming through a filtering theory of inhomogeneous Poisson processes. We derive exact relations for the minimal mean squared error of the optimal Bayesian filter and, by optimizing the encoder, obtain optimal codes for populations of neurons. We also show that a class of non-Markovian, smooth stimuli are amenable to the same treatment, and provide results for the filtering and prediction error which hold for a general class of stochastic processes. This sets a sound mathematical framework for a population coding theory that takes temporal aspects into account. It also formalizes a number of studies which discussed temporal aspects of coding using time-window paradigms, by stating them in terms of correlation times and firing rates. We propose that this kind of analysis allows for a systematic study of temporal coding and will bring further insights into the nature of the neural code.
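
    The sketch below illustrates the kind of computation involved: a grid-based Bayesian filter estimates a slowly varying stimulus from the spike counts of a Poisson population, with the posterior mean serving as the minimal mean squared error estimate. It is a discretized toy version of the filtering problem, not the paper's analytic treatment, and every parameter value is illustrative.

        import numpy as np

        rng = np.random.default_rng(1)

        # Stimulus: Ornstein-Uhlenbeck process; observations: Poisson population
        dt, T, tau, sigma = 0.01, 20.0, 1.0, 1.0
        centers = np.linspace(-3.0, 3.0, 12)        # tuning-curve centres
        width, r_max = 0.8, 20.0                    # tuning width, peak rate (Hz)

        steps = int(T / dt)
        x = np.zeros(steps)
        for t in range(1, steps):
            x[t] = x[t-1] - (x[t-1] / tau) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        rates = r_max * np.exp(-0.5 * ((x[None, :] - centers[:, None]) / width) ** 2)
        counts = rng.poisson(rates * dt)            # spike counts per neuron per time bin

        # Grid-based Bayesian filter: predict with the OU transition kernel,
        # update with the Poisson likelihood, read out the posterior mean.
        grid = np.linspace(-5.0, 5.0, 201)
        trans = np.exp(-0.5 * (grid[:, None] - grid[None, :] * (1 - dt / tau)) ** 2
                       / (sigma ** 2 * dt))
        trans /= trans.sum(axis=0, keepdims=True)   # column j: p(x_t | x_{t-1} = grid_j)
        lam = r_max * np.exp(-0.5 * ((grid[None, :] - centers[:, None]) / width) ** 2) * dt

        post = np.exp(-0.5 * grid ** 2)
        post /= post.sum()
        estimate = np.zeros(steps)
        for t in range(steps):
            post = trans @ post                     # prediction step
            loglik = (counts[:, t:t+1] * np.log(lam) - lam).sum(axis=0)
            post *= np.exp(loglik - loglik.max())   # Poisson likelihood update
            post /= post.sum()
            estimate[t] = grid @ post               # posterior-mean (MMSE) estimate

        print("filter MSE:", np.mean((estimate - x) ** 2),
              "stimulus variance:", np.var(x))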

  14. Influences of age, gender, and parents' educational level in knowledge, behavior and preferences regarding noise, from childhood to adolescence.

    PubMed

    Knobel, Keila Alessandra Baraldi; Lima, Maria Cecília Marconi Pinheiro

    2014-01-01

    Exposure to loud sound during leisure activities for long periods of time is an important target for preventive health education, especially among young people. The aim was to identify how awareness of the damaging effects of loud sounds, previous exposure to loud sounds, preferences related to sound levels, and knowledge about hearing protection relate to age, gender, and parents' educational level among children. The design was a prospective cross-sectional study. Seven hundred and forty students (5-16 years old) and 610 parents participated. Data were analyzed with the chi-square test, Fisher's exact test, and linear regression. About 86.5% of the children consider that loud sounds damage the ears and 53.7% dislike noisy places. Children were previously exposed to parties and concerts with loud music, Mardi Gras, firecrackers, loud music at home or in the car, and loud music with earphones. About 18.4% of the younger children could select the volume of the music, versus 65.3% of the older ones. Children have poor information about hearing protection and do not have hearing protection devices. Knowledge about the risks of exposure to loud sounds and about strategies to protect hearing increases with age, but preference for loud sounds and exposure to them increase too. Gender and parents' educational level have little influence on the studied variables. Many of the children's recreational activities are noisy. It is possible that the tendency of increasing preference for loud sounds with age is the result of learned behavior.

  15. Spellbinding and crooning: sound amplification, radio, and political rhetoric in international comparative perspective, 1900-1945.

    PubMed

    Wijfjes, Huub

    2014-01-01

    This article researches in an interdisciplinary way the relationship of sound technology and political culture at the beginning of the twentieth century. It sketches the different strategies that politicians--Franklin D. Roosevelt, Adolf Hitler, Winston Churchill, and Dutch prime minister Hendrikus Colijn--found for the challenges that sound amplification and radio created for their rhetoric and presentation. Taking their different political styles into account, the article demonstrates that the interconnected technologies of sound amplification and radio forced a transition from a spellbinding style based on atmosphere and pathos in a virtual environment to "political crooning" that created artificial intimacy in despatialized simultaneity. Roosevelt and Colijn created the best examples of this political crooning, while Churchill and Hitler encountered problems in this respect. Churchill's radio successes profited from the special circumstances during the first period of World War II. Hitler's speeches were integrated into a radio regime trying to shape, with dictatorial powers, a national socialistic community of listeners.

  16. Speech perception in individuals with auditory dys-synchrony.

    PubMed

    Kumar, U A; Jayaram, M

    2011-03-01

    This study aimed to evaluate the effect of lengthening the transition duration of selected speech segments upon the perception of those segments in individuals with auditory dys-synchrony. Thirty individuals with auditory dys-synchrony participated in the study, along with 30 age-matched normal hearing listeners. Eight consonant-vowel syllables were used as auditory stimuli. Two experiments were conducted. Experiment one measured the 'just noticeable difference' time: the smallest prolongation of the speech sound transition duration which was noticeable by the subject. In experiment two, speech sounds were modified by lengthening the transition duration by multiples of the just noticeable difference time, and subjects' speech identification scores for the modified speech sounds were assessed. Subjects with auditory dys-synchrony demonstrated poor processing of temporal auditory information. Lengthening of speech sound transition duration improved these subjects' perception of both the placement and voicing features of the speech syllables used. These results suggest that innovative speech processing strategies which enhance temporal cues may benefit individuals with auditory dys-synchrony.

  17. Assessing sound exposure from shipping in coastal waters using a single hydrophone and Automatic Identification System (AIS) data.

    PubMed

    Merchant, Nathan D; Witt, Matthew J; Blondel, Philippe; Godley, Brendan J; Smith, George H

    2012-07-01

    Underwater noise from shipping is a growing presence throughout the world's oceans, and may be subjecting marine fauna to chronic noise exposure with potentially severe long-term consequences. The coincidence of dense shipping activity and sensitive marine ecosystems in coastal environments is of particular concern, and noise assessment methodologies which describe the high temporal variability of sound exposure in these areas are needed. We present a method of characterising sound exposure from shipping using continuous passive acoustic monitoring combined with Automatic Identification System (AIS) shipping data. The method is applied to data recorded in Falmouth Bay, UK. Absolute and relative levels of intermittent ship noise contributions to the 24-h sound exposure level are determined using an adaptive threshold, and the spatial distribution of potential ship sources is then analysed using AIS data. This technique can be used to prioritize shipping noise mitigation strategies in coastal marine environments. Copyright © 2012 Elsevier Ltd. All rights reserved.
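
    A simplified sketch of the kind of analysis described, assuming a calibrated pressure time series in pascals: short windows are flagged as ship-dominated when their level exceeds an adaptive threshold (median background level plus a margin), and sound exposure levels are computed for the whole record and for the flagged windows. The threshold rule, window length, and synthetic example are illustrative, not the authors' exact algorithm.

        import numpy as np

        def sound_exposure_level(p, fs, p_ref=1e-6):
            """Sound exposure level, dB re 1 uPa^2 s, of a pressure series p (Pa)."""
            exposure = np.sum(np.square(p)) / fs          # integral of p^2 dt
            return 10 * np.log10(exposure / (p_ref ** 2 * 1.0))

        def ship_noise_contribution(p, fs, window_s=1.0, margin_db=6.0, p_ref=1e-6):
            """Flag windows whose RMS level exceeds an adaptive threshold and report
            the SEL of the whole record and of the flagged windows only."""
            n_win = int(window_s * fs)
            n = (len(p) // n_win) * n_win
            windows = p[:n].reshape(-1, n_win)
            rms_db = 10 * np.log10(np.mean(np.square(windows), axis=1) / p_ref ** 2)
            threshold = np.median(rms_db) + margin_db     # adaptive threshold
            flagged = rms_db > threshold
            total_sel = sound_exposure_level(p[:n], fs)
            ship_sel = (sound_exposure_level(windows[flagged].ravel(), fs)
                        if flagged.any() else float("-inf"))
            return total_sel, ship_sel, flagged

        # Synthetic example: quiet background with one louder (ship-like) passage
        fs = 1000
        rng = np.random.default_rng(3)
        p = 0.02 * rng.standard_normal(600 * fs)
        p[200 * fs:260 * fs] += 0.2 * rng.standard_normal(60 * fs)
        total_sel, ship_sel, flagged = ship_noise_contribution(p, fs)
        print(total_sel, ship_sel, int(flagged.sum()), "flagged windows")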

  18. Modeling complex tone perception: grouping harmonics with combination-sensitive neurons.

    PubMed

    Medvedev, Andrei V; Chiao, Faye; Kanwal, Jagmeet S

    2002-06-01

    Perception of complex communication sounds is a major function of the auditory system. To create a coherent percept of these sounds, the auditory system may instantaneously group or bind multiple harmonics within complex sounds. This perception strategy simplifies further processing of complex sounds and facilitates their meaningful integration with other sensory inputs. Based on experimental data and a realistic model, we propose that associative learning of combinations of harmonic frequencies and nonlinear facilitation of responses to those combinations, also referred to as "combination-sensitivity," are important for spectral grouping. For our model, we simulated combination sensitivity using Hebbian and associative types of synaptic plasticity in auditory neurons. We also provided a parallel tonotopic input that converges and diverges within the network. Neurons in higher-order layers of the network exhibited an emergent property of multifrequency tuning that is consistent with experimental findings. Furthermore, this network had the capacity to "recognize" the pitch or fundamental frequency of a harmonic tone complex even when the fundamental frequency itself was missing.
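
    In the spirit of the model described, the sketch below lets simple competitive Hebbian units learn from tonotopic activity patterns produced by harmonic complexes, so that frequently winning units end up weighting combinations of harmonically related channels. The network size, learning rule details, and stimulus statistics are invented for this illustration and do not reproduce the authors' model.

        import numpy as np

        rng = np.random.default_rng(0)

        n_channels, n_units = 64, 10               # tonotopic inputs, higher-order units
        channels = np.arange(n_channels)

        def harmonic_input(f0, n_harmonics=4, width=1.0):
            """Tonotopic activity for a harmonic complex: Gaussian bumps at f0, 2*f0, ..."""
            x = np.zeros(n_channels)
            for h in range(1, n_harmonics + 1):
                x += np.exp(-0.5 * ((channels - h * f0) / width) ** 2)
            return x

        W = np.abs(rng.normal(0.1, 0.02, size=(n_units, n_channels)))   # initial weights

        eta = 0.02
        for f0 in rng.uniform(4, 15, size=5000):   # training "sounds"
            x = harmonic_input(f0)
            y = W @ x                              # linear unit responses
            winner = np.argmax(y)                  # simple competition: one unit learns
            W[winner] += eta * y[winner] * x       # Hebbian update
            W[winner] /= np.linalg.norm(W[winner]) # normalization keeps weights bounded
            # (a fuller model would also prevent units from never winning)

        # For a trained unit, the most strongly weighted channels tend to fall near
        # harmonically related positions, i.e., the unit shows combination sensitivity.
        print("channels most strongly weighted by unit 0:",
              np.sort(np.argsort(W[0])[-4:]))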

  19. Psychophysical evidence for auditory motion parallax.

    PubMed

    Genzel, Daria; Schutte, Michael; Brimijoin, W Owen; MacNeilage, Paul R; Wiegrebe, Lutz

    2018-04-17

    Distance is important: From an ecological perspective, knowledge about the distance to either prey or predator is vital. However, the distance of an unknown sound source is particularly difficult to assess, especially in anechoic environments. In vision, changes in perspective resulting from observer motion produce a reliable, consistent, and unambiguous impression of depth known as motion parallax. Here we demonstrate with formal psychophysics that humans can exploit auditory motion parallax, i.e., the change in the dynamic binaural cues elicited by self-motion, to assess the relative depths of two sound sources. Our data show that sensitivity to relative depth is best when subjects move actively; performance deteriorates when subjects are moved by a motion platform or when the sound sources themselves move. This is true even though the dynamic binaural cues elicited by these three types of motion are identical. Our data demonstrate a perceptual strategy to segregate intermittent sound sources in depth and highlight the tight interaction between self-motion and binaural processing that allows assessment of the spatial layout of complex acoustic scenes.

  20. Auditory event perception: the source-perception loop for posture in human gait.

    PubMed

    Pastore, Richard E; Flint, Jesse D; Gaston, Jeremy R; Solomon, Matthew J

    2008-01-01

    There is a small but growing literature on the perception of natural acoustic events, but few attempts have been made to investigate complex sounds not systematically controlled within a laboratory setting. The present study investigates listeners' ability to make judgments about the posture (upright-stooped) of the walker who generated acoustic stimuli contrasted on each trial. We use a comprehensive three-stage approach to event perception, in which we develop a solid understanding of the source event and its sound properties, as well as the relationships between these two event stages. Developing this understanding helps both to identify the limitations of common statistical procedures and to develop effective new procedures for investigating not only the two information stages above, but also the decision strategies employed by listeners in making source judgments from sound. The result is a comprehensive, ultimately logical, but not necessarily expected picture of both the source-sound-perception loop and the utility of alternative research tools.

  1. Time measurements with a mobile device using sound

    NASA Astrophysics Data System (ADS)

    Wisman, Raymond F.; Spahn, Gabriel; Forinash, Kyle

    2018-05-01

    Data collection is a fundamental skill in science education, one that students generally practice in a controlled setting using equipment only available in the classroom laboratory. However, using smartphones with their built-in sensors and often free apps, many fundamental experiments can be performed outside the laboratory. Taking advantage of these tools often requires creative approaches to data collection and exploration of alternative strategies for experimental procedures. As examples, we present several experiments using smartphones and apps that record and analyze sound to measure a variety of physical properties.
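
    As a simple example of the kind of sound-based timing such apps perform, the sketch below finds the onsets of impulsive sounds (for instance two claps) in a recording using an amplitude threshold with a refractory period, and reports the interval between them; the threshold, refractory time, and synthetic signal are illustrative choices, not taken from the article.

        import numpy as np

        def event_times(audio, fs, threshold_ratio=0.5, dead_time=0.05):
            """Return onset times (s) of impulsive sounds in a mono recording,
            using a simple amplitude threshold with a refractory period."""
            env = np.abs(audio)
            threshold = threshold_ratio * env.max()
            above = np.flatnonzero(env > threshold)
            times, last = [], -np.inf
            for i in above:
                t = i / fs
                if t - last > dead_time:
                    times.append(t)
                    last = t
            return np.array(times)

        # Synthetic example: two clicks 0.30 s apart in light background noise
        fs = 44100
        audio = 0.01 * np.random.randn(fs)
        audio[int(0.20 * fs)] = 1.0
        audio[int(0.50 * fs)] = 1.0
        t = event_times(audio, fs)
        print("interval between events: %.3f s" % (t[1] - t[0]))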

  2. Deconvolution of magnetic acoustic change complex (mACC).

    PubMed

    Bardy, Fabrice; McMahon, Catherine M; Yau, Shu Hui; Johnson, Blake W

    2014-11-01

    The aim of this study was to design a novel experimental approach to investigate the morphological characteristics of auditory cortical responses elicited by rapidly changing synthesized speech sounds. Six sound-evoked magnetoencephalographic (MEG) responses were measured to a synthesized train of speech sounds using the vowels /e/ and /u/ in 17 normal hearing young adults. Responses were measured to: (i) the onset of the speech train; (ii) an F0 increment; (iii) an F0 decrement; (iv) an F2 decrement; (v) an F2 increment; and (vi) the offset of the speech train using short (jittered around 135 ms) and long (1500 ms) stimulus onset asynchronies (SOAs). The least squares (LS) deconvolution technique was used to disentangle the overlapping MEG responses in the short SOA condition only. Comparison between the morphology of the recovered cortical responses in the short and long SOA conditions showed high similarity, suggesting that the LS deconvolution technique was successful in disentangling the MEG waveforms. Waveform latencies and amplitudes were different for the two SOA conditions and were influenced by the spectro-temporal properties of the sound sequence. The magnetic acoustic change complex (mACC) for the short SOA condition showed significantly lower amplitudes and shorter latencies compared to the long SOA condition. The F0 transition showed a larger reduction in amplitude from long to short SOA compared to the F2 transition. Lateralization of the cortical responses was observed under some stimulus conditions and appeared to be associated with the spectro-temporal properties of the acoustic stimulus. The LS deconvolution technique provides a new tool to study the properties of the auditory cortical response to rapidly changing sound stimuli. The presence of the cortical auditory evoked responses for rapid transitions of synthesized speech stimuli suggests that the temporal code is preserved at the level of the auditory cortex. Further, the reduced amplitudes and shorter latencies might reflect intrinsic properties of the cortical neurons' responses to rapidly presented sounds. This is the first demonstration of the separation of overlapping cortical responses to rapidly changing speech sounds and offers a potential new biomarker of discrimination of rapid transitions of sound. Crown Copyright © 2014. Published by Elsevier Ireland Ltd. All rights reserved.
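
    The following is a minimal sketch of least squares deconvolution of overlapping evoked responses: a design matrix built from the known stimulus onsets is regressed against the continuous recording, recovering one response waveform per condition even at short, jittered SOAs. It illustrates the general LS deconvolution idea rather than the authors' specific implementation; the synthetic waveforms, onset statistics, and noise level are arbitrary.

        import numpy as np

        def ls_deconvolution(recording, onsets_by_condition, resp_len):
            """Recover overlapping evoked responses by least squares regression of
            the continuous trace on a design matrix built from stimulus onsets."""
            n = len(recording)
            n_cond = len(onsets_by_condition)
            X = np.zeros((n, n_cond * resp_len))
            for c, onsets in enumerate(onsets_by_condition):
                for k in range(resp_len):
                    idx = np.asarray(onsets) + k
                    idx = idx[idx < n]
                    X[idx, c * resp_len + k] = 1.0
            beta, *_ = np.linalg.lstsq(X, recording, rcond=None)
            return beta.reshape(n_cond, resp_len)

        # Synthetic check: two response types with heavily overlapping, jittered onsets
        rng = np.random.default_rng(2)
        fs, resp_len = 250, 100
        t = np.arange(resp_len) / fs
        true = [np.sin(2 * np.pi * 8 * t) * np.exp(-t * 20),
                -np.sin(2 * np.pi * 5 * t) * np.exp(-t * 15)]
        all_onsets = np.cumsum(rng.integers(25, 45, size=400))   # jittered short SOAs
        labels = rng.integers(0, 2, size=400)
        onsets = [all_onsets[labels == c] for c in range(2)]
        y = np.zeros(all_onsets[-1] + resp_len)
        for c in (0, 1):
            for o in onsets[c]:
                y[o:o + resp_len] += true[c]
        y += 0.5 * rng.standard_normal(len(y))
        est = ls_deconvolution(y, onsets, resp_len)
        print("max abs recovery error:", np.abs(est - np.array(true)).max())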

  3. ODECS -- A computer code for the optimal design of S.I. engine control strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arsie, I.; Pianese, C.; Rizzo, G.

    1996-09-01

    The computer code ODECS (Optimal Design of Engine Control Strategies) for the design of spark ignition engine control strategies is presented. The code builds on the authors' earlier work in this field, including original contributions on engine stochastic optimization and dynamical models. It has a modular structure and is composed of a user interface for the definition, execution, and analysis of different computations performed with four independent modules. These modules allow the following calculations: (1) definition of the engine mathematical model from steady-state experimental data; (2) engine cycle test trajectory corresponding to a vehicle transient simulation test such as the ECE15 or FTP drive test schedule; (3) evaluation of the optimal engine control maps with a steady-state approach; (4) engine dynamic cycle simulation and optimization of static control maps and/or dynamic compensation strategies, taking into account dynamical effects due to the unsteady fluxes of air and fuel and the influence of combustion chamber wall thermal inertia on fuel consumption and emissions. Moreover, in the last two modules it is possible to account for errors generated by non-deterministic behavior of sensors and actuators and their influence on global engine performance, and to compute robust strategies that are less sensitive to stochastic effects. In the paper, the four modules are described together with significant results corresponding to the simulation and the calculation of optimal control strategies for dynamic transient tests.

  4. Computational strategies for three-dimensional flow simulations on distributed computer systems. Ph.D. Thesis Semiannual Status Report, 15 Aug. 1993 - 15 Feb. 1994

    NASA Technical Reports Server (NTRS)

    Weed, Richard Allen; Sankar, L. N.

    1994-01-01

    An increasing amount of research activity in computational fluid dynamics has been devoted to the development of efficient algorithms for parallel computing systems. The increasing performance-to-price ratio of engineering workstations has led to research into procedures for implementing parallel computing systems composed of distributed workstations. This thesis proposal outlines an ongoing research program to develop efficient strategies for performing three-dimensional flow analysis on distributed computing systems. The PVM parallel programming interface was used to modify an existing three-dimensional flow solver, the TEAM code developed by Lockheed for the Air Force, to function as a parallel flow solver on clusters of workstations. Steady flow solutions were generated for three different wing and body geometries to validate the code and evaluate code performance. The proposed research will extend the parallel code development to determine the most efficient strategies for unsteady flow simulations.

  5. Vibration Suppression Strategies for Large Tension-Aligned Array Structures

    DTIC Science & Technology

    2013-11-19

    ...third strategies, Lyapunov stability theory was used to show vibration suppression. Practical issues related to actuator bandwidth were also addressed. Journal papers include: Alsahlani, A. and Mukherjee, R., “Vibration Control of a String Using a Scabbard-Like Actuator”, Journal of Sound and

  6. Rotor Wake/Stator Interaction Noise Prediction Code Technical Documentation and User's Manual

    NASA Technical Reports Server (NTRS)

    Topol, David A.; Mathews, Douglas C.

    2010-01-01

    This report documents the improvements and enhancements made by Pratt & Whitney to two NASA programs which together will calculate noise from a rotor wake/stator interaction. The code is a combination of subroutines from two NASA programs with many new features added by Pratt & Whitney. To do a calculation, V072 first uses a semi-empirical wake prediction to calculate the rotor wake characteristics at the stator leading edge. Results from the wake model are then automatically input into a rotor wake/stator interaction analytical noise prediction routine which calculates inlet and aft sound power levels for the blade-passage-frequency tones and their harmonics, along with the complex radial mode amplitudes. The code allows for a noise calculation to be performed for a compressor rotor wake/stator interaction, a fan wake/FEGV interaction, or a fan wake/core stator interaction. This report is split into two parts: the first discusses the technical documentation of the program as improved by Pratt & Whitney, and the second is a user's manual which describes how input files are created and how the code is run.

  7. Development of a Coded Aperture X-Ray Backscatter Imager for Explosive Device Detection

    NASA Astrophysics Data System (ADS)

    Faust, Anthony A.; Rothschild, Richard E.; Leblanc, Philippe; McFee, John Elton

    2009-02-01

    Defence R&D Canada has an active research and development program on detection of explosive devices using nuclear methods. One system under development is a coded aperture-based X-ray backscatter imaging detector designed to provide sufficient speed, contrast and spatial resolution to detect antipersonnel landmines and improvised explosive devices. The successful development of a hand-held imaging detector requires, among other things, a light-weight, ruggedized detector with low power requirements, supplying high spatial resolution. The University of California, San Diego-designed HEXIS detector provides a modern, large area, high-temperature CZT imaging surface, robustly packaged in a light-weight housing with sound mechanical properties. Based on the potential for the HEXIS detector to be incorporated as the detection element of a hand-held imaging detector, the authors initiated a collaborative effort to demonstrate the capability of a coded aperture-based X-ray backscatter imaging detector. This paper will discuss the landmine and IED detection problem and review the coded aperture technique. Results from initial proof-of-principle experiments will then be reported.

  8. Effect of background noise on neuronal coding of interaural level difference cues in rat inferior colliculus

    PubMed Central

    Mokri, Yasamin; Worland, Kate; Ford, Mark; Rajan, Ramesh

    2015-01-01

    Humans can accurately localize sounds even in unfavourable signal-to-noise conditions. To investigate the neural mechanisms underlying this, we studied the effect of background wide-band noise on neural sensitivity to variations in interaural level difference (ILD), the predominant cue for sound localization in azimuth for high-frequency sounds, at the characteristic frequency of cells in rat inferior colliculus (IC). Binaural noise at high levels generally resulted in suppression of responses (55.8%), but at lower levels resulted in enhancement (34.8%) as well as suppression (30.3%). When recording conditions permitted, we then examined if any binaural noise effects were related to selective noise effects at each of the two ears, which we interpreted in light of well-known differences in input type (excitation and inhibition) from each ear shaping particular forms of ILD sensitivity in the IC. At high signal-to-noise ratios (SNR), in most ILD functions (41%), the effect of background noise appeared to be due to effects on inputs from both ears, while for a large percentage (35.8%) appeared to be accounted for by effects on excitatory input. However, as SNR decreased, change in excitation became the dominant contributor to the change due to binaural background noise (63.6%). These novel findings shed light on the IC neural mechanisms for sound localization in the presence of continuous background noise. They also suggest that some effects of background noise on encoding of sound location reported to be emergent in upstream auditory areas can also be observed at the level of the midbrain. PMID:25865218

  9. Acoustic effects of the ATOC signal (75 Hz, 195 dB) on dolphins and whales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Au, W.W.; Nachtigall, P.E.; Pawloski, J.L.

    1997-05-01

    The Acoustic Thermometry of Ocean Climate (ATOC) program of Scripps Institution of Oceanography and the Applied Physics Laboratory, University of Washington, will broadcast a low-frequency 75-Hz phase modulated acoustic signal over ocean basins in order to study ocean temperatures on a global scale and examine the effects of global warming. One of the major concerns is the possible effect of the ATOC signal on marine life, especially on dolphins and whales. In order to address this issue, the hearing sensitivity of a false killer whale (Pseudorca crassidens) and a Risso's dolphin (Grampus griseus) to the ATOC sound was measured behaviorally. A staircase procedure with the signal levels being changed in 1-dB steps was used to measure the animals' threshold to the actual ATOC coded signal. The results indicate that small odontocetes such as the Pseudorca and Grampus swimming directly above the ATOC source will not hear the signal unless they dive to a depth of approximately 400 m. A sound propagation analysis suggests that the sound-pressure level at ranges greater than 0.5 km will be less than 130 dB for depths down to about 500 m. Several species of baleen whales produce sounds much greater than 170-180 dB. With the ATOC source on the axis of the deep sound channel (greater than 800 m), the ATOC signal will probably have minimal physical and physiological effects on cetaceans. © 1997 Acoustical Society of America.

  10. The not-so-silent world: Measuring Arctic, Equatorial, and Antarctic soundscapes in the Atlantic Ocean

    NASA Astrophysics Data System (ADS)

    Haver, Samara M.; Klinck, Holger; Nieukirk, Sharon L.; Matsumoto, Haru; Dziak, Robert P.; Miksis-Olds, Jennifer L.

    2017-04-01

    Anthropogenic noise in the ocean has been shown, under certain conditions, to influence the behavior and health of marine mammals. Noise from human activities may interfere with the low-frequency acoustic communication of many Mysticete species, including blue (Balaenoptera musculus) and fin whales (B. physalus). This study analyzed three soundscapes in the Atlantic Ocean, from the Arctic to the Antarctic, to document ambient sound. For 16 months beginning in August 2009, acoustic data (15-100 Hz) were collected in the Fram Strait (79°N, 5.5°E), near Ascension Island (8°S, 14.4°W) and in the Bransfield Strait (62°S, 55.5°W). Results indicate (1) the highest overall sound levels were measured in the equatorial Atlantic, in association with high levels of seismic oil and gas exploration, (2) compared to the tropics, ambient sound levels in polar regions are more seasonally variable, and (3) individual elements beget the seasonal and annual variability of ambient sound levels in high latitudes. Understanding how the variability of natural and man-made contributors to sound may elicit differences in ocean soundscapes is essential to developing strategies to manage and conserve marine ecosystems and animals.

  11. [Quality management and strategic consequences of assessing documentation and coding under the German Diagnostic Related Groups system].

    PubMed

    Schnabel, M; Mann, D; Efe, T; Schrappe, M; V Garrel, T; Gotzen, L; Schaeg, M

    2004-10-01

    The introduction of the German Diagnostic Related Groups (D-DRG) system requires redesigning administrative patient management strategies. Wrong coding leads to inaccurate grouping and endangers the reimbursement of treatment costs. This situation emphasizes the roles of documentation and coding as factors of economical success. The aims of this study were to assess the quantity and quality of initial documentation and coding (ICD-10 and OPS-301) and find operative strategies to improve efficiency and strategic means to ensure optimal documentation and coding quality. In a prospective study, documentation and coding quality were evaluated in a standardized way by weekly assessment. Clinical data from 1385 inpatients were processed for initial correctness and quality of documentation and coding. Principal diagnoses were found to be accurate in 82.7% of cases, inexact in 7.1%, and wrong in 10.1%. Effects on financial returns occurred in 16%. Based on these findings, an optimized, interdisciplinary, and multiprofessional workflow on medical documentation, coding, and data control was developed. Workflow incorporating regular assessment of documentation and coding quality is required by the DRG system to ensure efficient accounting of hospital services. Interdisciplinary and multiprofessional cooperation is recognized to be an important factor in establishing an efficient workflow in medical documentation and coding.

  12. Computational strategies for tire monitoring and analysis

    NASA Technical Reports Server (NTRS)

    Danielson, Kent T.; Noor, Ahmed K.; Green, James S.

    1995-01-01

    Computational strategies are presented for the modeling and analysis of tires in contact with pavement. A procedure is introduced for simple and accurate determination of tire cross-sectional geometric characteristics from a digitally scanned image. Three new strategies for reducing the computational effort in the finite element solution of tire-pavement contact are also presented. These strategies take advantage of the observation that footprint loads do not usually stimulate a significant tire response away from the pavement contact region. The finite element strategies differ in their level of approximation and required amount of computer resources. The effectiveness of the strategies is demonstrated by numerical examples of frictionless and frictional contact of the space shuttle Orbiter nose-gear tire. Both an in-house research code and a commercial finite element code are used in the numerical studies.

  13. Memory modulates journey-dependent coding in the rat hippocampus

    PubMed Central

    Ferbinteanu, J.; Shirvalkar, P.; Shapiro, M. L.

    2011-01-01

    Neurons in the rat hippocampus signal current location by firing in restricted areas called place fields. During goal-directed tasks in mazes, place fields can also encode past and future positions through journey-dependent activity, which could guide hippocampus-dependent behavior and underlie other temporally extended memories, such as autobiographical recollections. The relevance of journey-dependent activity for hippocampal-dependent memory, however, is not well understood. To further investigate the relationship between hippocampal journey-dependent activity and memory we compared neural firing in rats performing two mnemonically distinct but behaviorally identical tasks in the plus maze: a hippocampus-dependent spatial navigation task, and a hippocampus-independent cue response task. While place, prospective, and retrospective coding reflected temporally extended behavioral episodes in both tasks, memory strategy altered coding differently before and after the choice point. Before the choice point, when discriminative selection of memory strategy was critical, a switch between the tasks elicited a change in a field’s coding category, so that a field that signaled current location in one task coded pending journeys in the other task. After the choice point, however, when memory strategy became irrelevant, the fields preserved coding categories across tasks, so that the same field consistently signaled either current location or the recent journeys. Additionally, on the start arm firing rates were affected at comparable levels by task and journey, while on the goal arm firing rates predominantly encoded journey. The data demonstrate a direct link between journey-dependent coding and memory, and suggest that episodes are encoded by both population and firing rate coding. PMID:21697365

  14. Diet and Physical Activity Intervention Strategies for College Students

    PubMed Central

    Martinez, Yannica Theda S.; Harmon, Brook E.; Bantum, Erin O.; Strayhorn, Shaila

    2016-01-01

    Objectives To understand perceived barriers of a diverse sample of college students and their suggestions for interventions aimed at healthy eating, cooking, and physical activity. Methods Forty students (33% Asian American, 30% mixed ethnicity) were recruited. Six focus groups were audio-recorded, transcribed, and coded. Coding began with a priori codes, but allowed for additional codes to emerge. Analysis of questionnaires on participants’ dietary and physical activity practices and behaviors provided context for qualitative findings. Results Barriers included time, cost, facility quality, and intimidation. Tailoring towards a college student’s lifestyle, inclusion of hands-on skill building, and online support and resources were suggested strategies. Conclusions Findings provide direction for diet and physical activity interventions and policies aimed at college students. PMID:28480225

  15. New cochlear implant research coding strategy based on the MP3000™ strategy to reintroduce the virtual channel effect.

    PubMed

    Neben, Nicole; Lenarz, Thomas; Schuessler, Mark; Harpel, Theo; Buechner, Andreas

    2013-05-01

    In speech recognition in noise tests, a new research coding strategy designed to introduce the virtual channel effect provided no advantage over MP3000™. Although statistically significantly smaller just noticeable differences (JNDs) were obtained, the findings for pitch ranking proved to have little clinical impact. The aim of this study was to explore whether modifications to MP3000 that include sequential virtual channel stimulation would lead to further improvements in hearing, particularly for speech recognition in background noise and in competing-talker conditions, to compare results for pitch perception and melody recognition, and to informally collect subjective impressions of strategy preference. Nine experienced cochlear implant subjects were recruited for the prospective study. Two variants of the experimental strategy were compared to MP3000. The study design was a single-blinded ABCCBA cross-over trial paradigm with 3 weeks of take-home experience for each user condition. Comparing results of pitch ranking, a significantly reduced JND was identified. No significant effect of coding strategy on speech understanding in noise or competing-talker materials was found. Melody recognition skills were the same under all user conditions.

  16. The neuromechanics of hearing

    NASA Astrophysics Data System (ADS)

    Araya, Mussie K.; Brownell, William E.

    2015-12-01

    Hearing requires precise detection and coding of acoustic signals by the inner ear and equally precise communication of the information through the auditory brainstem. A membrane based motor in the outer hair cell lateral wall contributes to the transformation of sound into a precise neural code. Structural, molecular and energetic similarities between the outer hair cell and auditory brainstem neurons suggest that a similar membrane based motor may contribute to signal processing in the auditory CNS. Cooperative activation of voltage gated ion channels enhances neuronal temporal processing and increases the upper frequency limit for phase locking. We explore the possibility that membrane mechanics contribute to ion channel cooperativity as a consequence of the nearly instantaneous speed of electromechanical signaling and the fact that membrane composition and mechanics modulate ion channel function.

  17. The unstaggered extension to GFDL's FV3 dynamical core on the cubed-sphere

    NASA Astrophysics Data System (ADS)

    Chen, X.; Lin, S. J.; Harris, L.

    2017-12-01

    Finite-volume schemes have become popular for atmospheric transport since they provide intrinsic mass conservation to constituent species. Many CFD codes use unstaggered discretizations for finite volume methods with an approximate Riemann solver. However, this approach is inefficient for geophysical flows due to the complexity of the Riemann solver. We introduce a Low Mach number Approximate Riemann Solver (LMARS) simplified using assumptions appropriate for atmospheric flows: the wind speed is much slower than the sound speed, weak discontinuities, and locally uniform sound wave velocity. LMARS makes possible a Riemann-solver-based dynamical core comparable in computational efficiency to many current dynamical cores. We will present a 3D finite-volume dynamical core using LMARS in a cubed-sphere geometry with a vertically Lagrangian discretization. Results from standard idealized test cases will be discussed.
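    The abstract does not reproduce the LMARS flux formulas, but the stated assumptions (low Mach number, weak discontinuities, locally uniform sound speed) suggest a simple acoustic interface solver. The sketch below is an illustrative low-Mach flux of that general type for the 1D Euler equations; the averaging choices and variable names are assumptions for demonstration, not the published FV3/LMARS implementation.

```python
import numpy as np

def low_mach_interface_flux(rhoL, uL, pL, rhoR, uR, pR, gamma=1.4):
    """Illustrative low-Mach approximate Riemann flux for the 1D Euler equations.

    Interface pressure and velocity follow simple acoustic relations with a
    locally uniform sound speed, in the spirit of LMARS; the exact averaging
    used in FV3 may differ (assumption)."""
    # Locally uniform ("frozen") sound speed and density at the interface.
    c_bar = 0.5 * (np.sqrt(gamma * pL / rhoL) + np.sqrt(gamma * pR / rhoR))
    rho_bar = 0.5 * (rhoL + rhoR)

    # Acoustic estimates of interface pressure and normal velocity.
    p_star = 0.5 * (pL + pR) - 0.5 * rho_bar * c_bar * (uR - uL)
    u_star = 0.5 * (uL + uR) - (pR - pL) / (2.0 * rho_bar * c_bar)

    # Upwind the advected state with the interface velocity (weak discontinuities).
    rho_up, p_up = (rhoL, pL) if u_star >= 0.0 else (rhoR, pR)
    E_up = p_up / (gamma - 1.0) + 0.5 * rho_up * u_star**2  # simplified energy

    # Fluxes of mass, momentum, and total energy through the interface.
    return np.array([rho_up * u_star,
                     rho_up * u_star**2 + p_star,
                     (E_up + p_star) * u_star])
```

    Because no wave decomposition or iterative pressure solve is needed, a flux of this form costs little more than a centered scheme, which is the efficiency argument made in the abstract.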

  18. Accelerating Convolutional Sparse Coding for Curvilinear Structures Segmentation by Refining SCIRD-TS Filter Banks.

    PubMed

    Annunziata, Roberto; Trucco, Emanuele

    2016-11-01

    Deep learning has shown great potential for curvilinear structure (e.g., retinal blood vessels and neurites) segmentation as demonstrated by a recent auto-context regression architecture based on filter banks learned by convolutional sparse coding. However, learning such filter banks is very time-consuming, thus limiting the amount of filters employed and the adaptation to other data sets (i.e., slow re-training). We address this limitation by proposing a novel acceleration strategy to speed-up convolutional sparse coding filter learning for curvilinear structure segmentation. Our approach is based on a novel initialisation strategy (warm start), and therefore it is different from recent methods improving the optimisation itself. Our warm-start strategy is based on carefully designed hand-crafted filters (SCIRD-TS), modelling appearance properties of curvilinear structures which are then refined by convolutional sparse coding. Experiments on four diverse data sets, including retinal blood vessels and neurites, suggest that the proposed method reduces significantly the time taken to learn convolutional filter banks (i.e., up to -82%) compared to conventional initialisation strategies. Remarkably, this speed-up does not worsen performance; in fact, filters learned with the proposed strategy often achieve a much lower reconstruction error and match or exceed the segmentation performance of random and DCT-based initialisation, when used as input to a random forest classifier.
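    As a rough illustration of the warm-start idea (hand-crafted ridge-like filters used to initialise convolutional sparse coding instead of random or DCT filters), the sketch below builds a small oriented filter bank from second derivatives of anisotropic Gaussians. The actual SCIRD-TS parameterisation and the CSC solver are richer than this; filter shapes and sizes here are assumptions for demonstration.

```python
import numpy as np

def ridge_filter(size=11, sigma_long=3.0, sigma_short=1.0, theta=0.0):
    """Hand-crafted elongated ridge detector: the second derivative of an
    anisotropic Gaussian, rotated by theta. Loosely inspired by SCIRD-TS;
    the published parameterisation (curvature, contrast terms) is richer."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 / (2 * sigma_long**2) + yr**2 / (2 * sigma_short**2)))
    f = (yr**2 / sigma_short**4 - 1.0 / sigma_short**2) * g  # d2/dyr2 of the Gaussian
    f -= f.mean()                       # zero-mean, like a band-pass filter
    return f / np.linalg.norm(f)        # unit norm, as dictionary atoms usually are

def warm_start_bank(n_filters=8, size=11):
    """Initial filter bank: one ridge filter per orientation. A CSC solver
    (alternating sparse coding and dictionary updates) would then refine
    these filters on training patches instead of starting from noise."""
    thetas = np.linspace(0, np.pi, n_filters, endpoint=False)
    return np.stack([ridge_filter(size=size, theta=t) for t in thetas])

bank = warm_start_bank()   # shape (8, 11, 11), ready to seed CSC learning
```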

  19. Strategies for improving traveler information.

    DOT National Transportation Integrated Search

    2011-01-01

    This project developed a clear, concise, and fiscally sound plan to improve traveler information : for the Michigan Department of Transportation (DOT). The DOT has a long history of innovation : in the field of ITS, including a robust traveler inform...

  20. Simple Guidelines for Sound Investing.

    ERIC Educational Resources Information Center

    Domini, Amy L.

    1985-01-01

    Investment strategies for colleges and universities are discussed. Colleges must begin their strategic investment planning with regular sources of income to ensure year-to-year survival. Cash management, short-term investment, investment grade, and creating endowment are discussed. (MLW)

  1. 3rd grade English language learners making sense of sound

    NASA Astrophysics Data System (ADS)

    Suarez, Enrique; Otero, Valerie

    2013-01-01

    Despite the extensive body of research that supports scientific inquiry and argumentation as cornerstones of physics learning, these strategies continue to be virtually absent in most classrooms, especially those that involve students who are learning English as a second language. This study presents results from an investigation of 3rd grade students' discourse about how length and tension affect the sound produced by a string. These students came from a variety of language backgrounds, and all were learning English as a second language. Our results demonstrate varying levels, and uses, of experiential, imaginative, and mechanistic reasoning strategies. Using specific examples from students' discourse, we will demonstrate some of the productive aspects of working within multiple language frameworks for making sense of physics. Conjectures will be made about how to utilize physics as a context for English Language Learners to further conceptual understanding, while developing their competence in the English language.

  2. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    DOE PAGES

    Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; ...

    2013-01-01

    Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.

  3. The queueing perspective of asynchronous network coding in two-way relay network

    NASA Astrophysics Data System (ADS)

    Liang, Yaping; Chang, Qing; Li, Xianxu

    2018-04-01

    Asynchronous network coding (NC) has the potential to improve wireless network performance compared with routing or synchronous network coding. Recent research concentrates on the trade-off between throughput or energy consumption and delay for a pair of independent input flows. However, the implementation of NC requires a thorough investigation of its impact on the relevant queueing systems, which few works address. Moreover, few works study the probability density function (pdf) of the traffic in a network coding scenario. In this paper, a scenario with two independent Poisson input flows and one output flow is considered. The asynchronous NC-based strategy is that a new arrival evicts the head packet held in its queue while that packet is waiting for a packet from the other flow to encode with. The pdf of the output flow, which contains both coded and uncoded packets, is derived. In addition, the statistical characteristics of this strategy are analyzed. These results are verified by numerical simulations.
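    One plausible reading of the eviction strategy can be illustrated with a toy discrete-event simulation: packets from two Poisson flows arrive at the relay, a lone packet waits to be encoded with the other flow, and a same-flow arrival evicts it. The assumption that an evicted packet departs uncoded is not stated in the abstract and is made here only so the sketch produces the mixed coded/uncoded output flow the paper analyses.

```python
import random

def simulate(lam_a=1.0, lam_b=0.8, horizon=10_000.0, seed=0):
    """Toy simulation of the eviction-based asynchronous NC strategy.
    Assumptions (not in the abstract): an evicted head packet is forwarded
    uncoded, and a waiting packet is encoded with the first arrival from
    the other flow."""
    rng = random.Random(seed)
    t_a = rng.expovariate(lam_a)   # next arrival time of flow A
    t_b = rng.expovariate(lam_b)   # next arrival time of flow B
    waiting = None                 # flow label of the packet held at the relay
    coded = uncoded = 0
    t = 0.0
    while t < horizon:
        flow, t = ('A', t_a) if t_a < t_b else ('B', t_b)
        if flow == 'A':
            t_a += rng.expovariate(lam_a)
        else:
            t_b += rng.expovariate(lam_b)
        if waiting is None:
            waiting = flow          # hold the packet and wait for the other flow
        elif waiting == flow:
            uncoded += 1            # new same-flow arrival evicts the held packet
            waiting = flow
        else:
            coded += 1              # pair from both flows leaves as one coded packet
            waiting = None
    return coded, uncoded

print(simulate())  # rough proportion of coded vs uncoded departures
```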

  4. Los Alamos and Lawrence Livermore National Laboratories Code-to-Code Comparison of Inter Lab Test Problem 1 for Asteroid Impact Hazard Mitigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weaver, Robert P.; Miller, Paul; Howley, Kirsten

    The NNSA Laboratories have entered into an interagency collaboration with the National Aeronautics and Space Administration (NASA) to explore strategies for prevention of Earth impacts by asteroids. Assessment of such strategies relies upon use of sophisticated multi-physics simulation codes. This document describes the task of verifying and cross-validating, between Lawrence Livermore National Laboratory (LLNL) and Los Alamos National Laboratory (LANL), modeling capabilities and methods to be employed as part of the NNSA-NASA collaboration. The approach has been to develop a set of test problems and then to compare and contrast results obtained by use of a suite of codes, including MCNP, RAGE, Mercury, Ares, and Spheral. This document provides a short description of the codes, an overview of the idealized test problems, and discussion of the results for deflection by kinetic impactors and stand-off nuclear explosions.

  5. About the necessity to manage events coded with MedDRA prior to statistical analysis: proposal of a strategy with application to a randomized clinical trial, ANRS 099 ALIZE.

    PubMed

    Journot, Valérie; Tabuteau, Sophie; Collin, Fidéline; Molina, Jean-Michel; Chene, Geneviève; Rancinan, Corinne

    2008-03-01

    Since 2003, the Medical Dictionary for Regulatory Activities (MedDRA) has been the regulatory standard for safety reporting in clinical trials in the European Community. Yet we found no published account of practical experience with a scientifically oriented statistical analysis of events coded with MedDRA. We took advantage of a randomized trial in HIV-infected patients with MedDRA-coded events to explain the difficulties encountered during the analysis of events and the strategy developed to report events consistently with trial-specific objectives. MedDRA has a rich hierarchical structure, which allows the grouping of coded terms into 5 levels, the highest being "System Organ Class" (SOC). Each coded term may be related to several SOCs, among which one primary SOC is defined. We developed a new general 5-step strategy to select a SOC as the trial primary SOC, consistently with the trial-specific objectives for this analysis. We applied it to the ANRS 099 ALIZE trial, where all events were coded with MedDRA version 3.0, and compared the MedDRA and the ALIZE primary SOCs. In the ANRS 099 ALIZE trial, 355 patients were recruited, and 3,722 events were reported and documented, among which 35% had multiple SOCs (2 to 4). We applied the proposed 5-step strategy. Altogether, 23% of MedDRA primary SOCs were modified, mainly from the MedDRA primary SOCs "Investigations" (69%) and "Ear and labyrinth disorders" (6%) to the ALIZE primary SOCs "Hepatobiliary disorders" (35%), "Musculoskeletal and connective tissue disorders" (21%), and "Gastrointestinal disorders" (15%). MedDRA has grown considerably in size and complexity with versioning and the development of Standardized MedDRA Queries. Yet statisticians should not systematically rely on the primary SOCs proposed by MedDRA to report events. A simple general 5-step strategy to re-classify events consistently with the trial-specific objectives might be useful in HIV trials as well as in other fields.
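    The abstract does not spell out the five steps, so the following sketch is purely hypothetical: it only illustrates the general idea of overriding the MedDRA primary SOC with a trial-specific SOC priority list when a coded term maps to several SOCs.

```python
# Hypothetical illustration only: the paper's actual 5-step strategy is not
# detailed in the abstract. This sketch shows one way a trial-specific SOC
# priority list could override the MedDRA primary SOC for multi-SOC terms.

TRIAL_SOC_PRIORITY = [  # assumed, trial-specific ordering
    "Hepatobiliary disorders",
    "Musculoskeletal and connective tissue disorders",
    "Gastrointestinal disorders",
]

def trial_primary_soc(candidate_socs, meddra_primary):
    """Pick the trial primary SOC: the first candidate found in the trial
    priority list, otherwise keep the MedDRA primary SOC."""
    for soc in TRIAL_SOC_PRIORITY:
        if soc in candidate_socs:
            return soc
    return meddra_primary

# Example: a liver-enzyme term whose MedDRA primary SOC is "Investigations"
print(trial_primary_soc(
    candidate_socs={"Investigations", "Hepatobiliary disorders"},
    meddra_primary="Investigations"))   # -> "Hepatobiliary disorders"
```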

  6. Numerical modeling and experimental validation of the acoustic transmission of aircraft's double-wall structures including sound package

    NASA Astrophysics Data System (ADS)

    Rhazi, Dilal

    In the field of aeronautics, reducing the harmful effects of noise constitutes a major concern at the international level and justifies the call for further research, particularly in Canada, where aeronautics is a key economic sector operating in a context of global competition. An aircraft sidewall structure is usually of double-wall construction, with a curved, ribbed metallic skin and a lightweight composite or sandwich trim separated by a cavity filled with a noise control treatment. The latter is of great importance in the transport industry and continues to be of interest in many engineering applications. However, the insertion loss of a noise control treatment depends on the excitation of the supporting structure. In particular, turbulent boundary layer excitation is of interest to several industries. This excitation is difficult to simulate in laboratory conditions, given the prohibitive costs and difficulties associated with wind tunnel and in-flight tests. Numerical simulation is the only practical way to predict the response to such excitations and to analyze the effects of design changes on that response. Other kinds of excitation encountered in industry are the monopole, rain-on-the-roof, and diffuse acoustic field. Deterministic methods can calculate the spectral response of the system at each point. The best known are numerical methods such as the finite element and boundary element methods. These methods generally apply to the low-frequency range, where the modal behavior of the structure dominates. However, the upper frequency limit of these methods cannot be defined in a strict way, because it is related to the available computing capacity and to the nature of the mechanical system under study. With these challenges in mind, and given the limitations of the main numerical codes on the market, manufacturers have expressed the need for simple models that are available as early as the preliminary design stage. This thesis represents an attempt to address this need. A numerical tool based on two approaches (wave and modal) is developed. It allows fast computation of the vibroacoustic response of multilayer structures over the full frequency spectrum and for various kinds of excitation (monopole, rain-on-the-roof, diffuse acoustic field, turbulent boundary layer). A comparison between results obtained with the developed model, experimental tests, and the finite element method is given and discussed. The results are very promising with respect to the potential of such a model for industrial use as a prediction tool, and even for design. The code can also be integrated within an SEA (Statistical Energy Analysis) strategy in order to model a full vehicle, in particular by computing the insertion loss and the equivalent damping added by the sound package. Keywords: Transfer Matrix Method, Wave Approach, Turbulent Boundary Layer, Rain on the Roof, Monopole, Insertion Loss, Double Wall, Sound Package.
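    As the keywords indicate, the prediction tool builds on the Transfer Matrix Method. The sketch below shows the method on a deliberately simplified case, the normal-incidence transmission loss of two limp panels separated by an air gap; the panel masses, the gap, and the omission of ribs, curvature, and poroelastic layers are assumptions for illustration, not the thesis's model.

```python
import numpy as np

def double_wall_TL(f, m1=2.0, m2=2.0, gap=0.05, rho0=1.21, c0=343.0):
    """Normal-incidence transmission loss (dB) of an idealised double wall:
    two limp panels (surface masses m1, m2 in kg/m^2) separated by an air
    gap (m), assembled with 2x2 acoustic transfer matrices. A sketch of the
    transfer-matrix approach, not the thesis's full model."""
    w = 2 * np.pi * f
    k = w / c0
    Z0 = rho0 * c0                                   # impedance of the surrounding air
    T_m1 = np.array([[1, 1j * w * m1], [0, 1]])       # limp-mass layer
    T_m2 = np.array([[1, 1j * w * m2], [0, 1]])
    T_gap = np.array([[np.cos(k * gap), 1j * Z0 * np.sin(k * gap)],
                      [1j * np.sin(k * gap) / Z0, np.cos(k * gap)]])  # fluid layer
    T = T_m1 @ T_gap @ T_m2                           # chain the layers
    denom = T[0, 0] + T[0, 1] / Z0 + Z0 * T[1, 0] + T[1, 1]
    return 20 * np.log10(abs(denom) / 2.0)

for f in (100, 400, 1600):
    # TL rises steeply above the mass-air-mass resonance (about 270 Hz here)
    print(f, round(double_wall_TL(f), 1))
```

    Additional layers (a porous sound package, a septum, the trim panel) would simply add their own transfer matrices to the product, which is what makes the approach attractive for fast parametric studies.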

  7. Code-switched English Pronunciation Modeling for Swahili Spoken Term Detection (Pub Version, Open Access)

    DTIC Science & Technology

    2016-05-03

    [Only text fragments of this report are indexed; no complete abstract is available.] The fragments indicate that the work was carried out under the Babel program, an international collaborative effort, that code-switching is less well studied for English/African language pairs than for other pairs, and that analysis of the Swahili data concluded that in most cases English words were pronounced using standard English letter-to-sound rules.

  8. A Study on the Feasibility of Creating a Web-Accessible Marine Mammal Sound Library Based upon the Collections at the Woods Hole Oceanographic Institution

    DTIC Science & Technology

    2008-09-01

    [Only fragments of the report's documentation page are indexed.] Naval Postgraduate School report NPS-OC-08-005, Monterey, California; approved for public release, distribution unlimited. The indexed abstract fragment begins: "A universally accessible web-based marine..."

  9. Ethical issues in a pediatric private practice.

    PubMed

    Jakubowitz, Melissa

    2011-11-01

    Building a successful pediatric private practice requires clinical expertise and an understanding of the business process, as well as familiarity with the American Speech-Language-Hearing Association Code of Ethics. This article provides an overview of the ethical issues that may be encountered when building a practice, including a look at marketing and advertising, financial management, privacy, and documentation. Ethically sound decision making is a key to a successful business. © Thieme Medical Publishers.

  10. A hybrid finite element - statistical energy analysis approach to robust sound transmission modeling

    NASA Astrophysics Data System (ADS)

    Reynders, Edwin; Langley, Robin S.; Dijckmans, Arne; Vermeir, Gerrit

    2014-09-01

    When considering the sound transmission through a wall in between two rooms, in an important part of the audio frequency range, the local response of the rooms is highly sensitive to uncertainty in spatial variations in geometry, material properties and boundary conditions, which have a wave scattering effect, while the local response of the wall is rather insensitive to such uncertainty. For this mid-frequency range, a computationally efficient modeling strategy is adopted that accounts for this uncertainty. The partitioning wall is modeled deterministically, e.g. with finite elements. The rooms are modeled in a very efficient, nonparametric stochastic way, as in statistical energy analysis. All components are coupled by means of a rigorous power balance. This hybrid strategy is extended so that the mean and variance of the sound transmission loss can be computed as well as the transition frequency that loosely marks the boundary between low- and high-frequency behavior of a vibro-acoustic component. The method is first validated in a simulation study, and then applied for predicting the airborne sound insulation of a series of partition walls of increasing complexity: a thin plastic plate, a wall consisting of gypsum blocks, a thicker masonry wall and a double glazing. It is found that the uncertainty caused by random scattering is important except at very high frequencies, where the modal overlap of the rooms is very high. The results are compared with laboratory measurements, and both are found to agree within the prediction uncertainty in the considered frequency range.

  11. Pediatric severe sepsis in U.S. children's hospitals.

    PubMed

    Balamuth, Fran; Weiss, Scott L; Neuman, Mark I; Scott, Halden; Brady, Patrick W; Paul, Raina; Farris, Reid W D; McClead, Richard; Hayes, Katie; Gaieski, David; Hall, Matt; Shah, Samir S; Alpern, Elizabeth R

    2014-11-01

    To compare the prevalence, resource utilization, and mortality for pediatric severe sepsis identified using two established identification strategies. Observational cohort study from 2004 to 2012. Forty-four pediatric hospitals contributing data to the Pediatric Health Information Systems database. Children 18 years old or younger. We identified patients with severe sepsis or septic shock by using two International Classification of Diseases, 9th edition, Clinical Modification-based coding strategies: 1) combinations of International Classification of Diseases, 9th edition, Clinical Modification codes for infection plus organ dysfunction (combination code cohort); 2) International Classification of Diseases, 9th edition, Clinical Modification codes for severe sepsis and septic shock (sepsis code cohort). Outcomes included prevalence of severe sepsis, as well as hospital and ICU length of stay, and mortality. Outcomes were compared between the two cohorts examining aggregate differences over the study period and trends over time. The combination code cohort identified 176,124 hospitalizations (3.1% of all hospitalizations), whereas the sepsis code cohort identified 25,236 hospitalizations (0.45%), a seven-fold difference. Between 2004 and 2012, the prevalence of sepsis increased from 3.7% to 4.4% using the combination code cohort and from 0.4% to 0.7% using the sepsis code cohort (p < 0.001 for trend in each cohort). Length of stay (hospital and ICU) and costs decreased in both cohorts over the study period (p < 0.001). Overall, hospital mortality was higher in the sepsis code cohort than the combination code cohort (21.2% [95% CI, 20.7-21.8] vs 8.2% [95% CI, 8.0-8.3]). Over the 9-year study period, there was an absolute reduction in mortality of 10.9% (p < 0.001) in the sepsis code cohort and 3.8% (p < 0.001) in the combination code cohort. Prevalence of pediatric severe sepsis increased in the studied U.S. children's hospitals over the past 9 years, whereas resource utilization and mortality decreased. Epidemiologic estimates of pediatric severe sepsis varied up to seven-fold depending on the strategy used for case ascertainment.
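    The two case-ascertainment strategies can be illustrated with a short sketch that flags a hospitalization's discharge diagnoses against explicit sepsis codes or against infection-plus-organ-dysfunction combinations. The ICD-9-CM code lists below are small examples chosen for the illustration, not the full sets used in the study.

```python
# Illustrative case ascertainment only; these code lists are tiny examples,
# not the full ICD-9-CM sets used in the study.

SEPSIS_CODES = {"995.92", "785.52"}            # severe sepsis, septic shock
INFECTION_CODES = {"038.9", "486", "590.10"}   # e.g. septicemia, pneumonia, pyelonephritis
ORGAN_DYSFUNCTION_CODES = {"518.81", "584.9", "348.31"}  # resp/renal/CNS dysfunction

def classify(dx_codes):
    """Return which cohort(s) a hospitalization falls into, mirroring the two
    strategies: explicit sepsis codes vs infection plus organ dysfunction."""
    dx = set(dx_codes)
    return {
        "sepsis_code": bool(dx & SEPSIS_CODES),
        "combination": bool(dx & INFECTION_CODES) and bool(dx & ORGAN_DYSFUNCTION_CODES),
    }

print(classify(["486", "518.81"]))    # combination cohort only
print(classify(["038.9", "995.92"]))  # sepsis-code cohort only (no organ-dysfunction code)
```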

  12. Pediatric Severe Sepsis in US Children’s Hospitals

    PubMed Central

    Balamuth, Fran; Weiss, Scott L.; Neuman, Mark I.; Scott, Halden; Brady, Patrick W.; Paul, Raina; Farris, Reid W.D.; McClead, Richard; Hayes, Katie; Gaieski, David; Hall, Matt; Shah, Samir S.; Alpern, Elizabeth R.

    2014-01-01

    Objective To compare the prevalence, resource utilization, and mortality for pediatric severe sepsis identified using two established identification strategies. Design Observational cohort study from 2004–2012. Setting Forty-four pediatric hospitals contributing data to the Pediatric Health Information Systems database. Patients Children ≤18 years of age. Measurements and Main Results We identified patients with severe sepsis or septic shock by using two International Classification of Diseases, 9th edition-Clinical Modification (ICD9-CM) based coding strategies: 1) combinations of ICD9-CM codes for infection plus organ dysfunction (combination code cohort); 2) ICD9-CM codes for severe sepsis and septic shock (sepsis code cohort). Outcomes included prevalence of severe sepsis, as well as hospital and intensive care unit (ICU) length of stay (LOS), and mortality. Outcomes were compared between the two cohorts examining aggregate differences over the study period and trends over time. The combination code cohort identified 176,124 hospitalizations (3.1% of all hospitalizations), while the sepsis code cohort identified 25,236 hospitalizations (0.45%), a 7-fold difference. Between 2004 and 2012, the prevalence of sepsis increased from 3.7% to 4.4% using the combination code cohort and from 0.4% to 0.7% using the sepsis code cohort (p<0.001 for trend in each cohort). LOS (hospital and ICU) and costs decreased in both cohorts over the study period (p<0.001). Overall hospital mortality was higher in the sepsis code cohort than the combination code cohort (21.2% [95% CI: 20.7–21.8] vs. 8.2% [95% CI: 8.0–8.3]). Over the 9-year study period, there was an absolute reduction in mortality of 10.9% (p<0.001) in the sepsis code cohort and 3.8% (p<0.001) in the combination code cohort. Conclusions Prevalence of pediatric severe sepsis increased in the studied US children’s hospitals over the past 9 years, though resource utilization and mortality decreased. Epidemiologic estimates of pediatric severe sepsis varied up to 7-fold depending on the strategy used for case ascertainment. PMID:25162514

  13. Sound levels in a neonatal intensive care unit significantly exceeded recommendations, especially inside incubators.

    PubMed

    Parra, Johanna; de Suremain, Aurelie; Berne Audeoud, Frederique; Ego, Anne; Debillon, Thierry

    2017-12-01

    This study measured sound levels in a 2008-built French neonatal intensive care unit (NICU) and compared them to the 2007 American Academy of Pediatrics (AAP) recommendations. The ultimate aim was to identify factors that could influence noise levels. The study measured sound in 17 single or double rooms in the NICU. Two dosimeters were installed in each room, one inside and one outside the incubators, and these conducted measurements over a 24-hour period. The noise metrics measured were the equivalent continuous sound level (Leq), the maximum noise level (Lmax) and the noise level exceeded for 10% of the measurement period (L10). The mean Leq, L10 and Lmax were 60.4, 62.1 and 89.1 decibels (dBA), which exceeded the recommended levels of 45, 50 and 65 dBA (p < 0.001), respectively. The Leq inside the incubator was significantly higher than in the room (+8 dBA, p < 0.001). None of the newborns' characteristics, the environment or medical care was correlated to an increased noise level, except for a postconceptional age below 32 weeks. The sound levels significantly exceeded the AAP recommendations, particularly inside incubators. A multipronged strategy is required to improve the sound environment and protect the neonates' sensory development. ©2017 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.

  14. Informed spectral analysis: audio signal parameter estimation using side information

    NASA Astrophysics Data System (ADS)

    Fourer, Dominique; Marchand, Sylvain

    2013-12-01

    Parametric models are of great interest for representing and manipulating sounds. However, the quality of the resulting signals depends on the precision of the parameters. When the signals are available, these parameters can be estimated, but the presence of noise decreases the resulting precision of the estimation. Furthermore, the Cramér-Rao bound shows the minimal error reachable with the best estimator, which can be insufficient for demanding applications. These limitations can be overcome by using the coding approach which consists in directly transmitting the parameters with the best precision using the minimal bitrate. However, this approach does not take advantage of the information provided by the estimation from the signal and may require a larger bitrate and a loss of compatibility with existing file formats. The purpose of this article is to propose a compromised approach, called the 'informed approach,' which combines analysis with (coded) side information in order to increase the precision of parameter estimation using a lower bitrate than pure coding approaches, the audio signal being known. Thus, the analysis problem is presented in a coder/decoder configuration where the side information is computed and inaudibly embedded into the mixture signal at the coder. At the decoder, the extra information is extracted and is used to assist the analysis process. This study proposes applying this approach to audio spectral analysis using sinusoidal modeling which is a well-known model with practical applications and where theoretical bounds have been calculated. This work aims at uncovering new approaches for audio quality-based applications. It provides a solution for challenging problems like active listening of music, source separation, and realistic sound transformations.
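    A stripped-down illustration of the informed idea follows: the decoder's blind estimate (an FFT peak) is refined by a small quantized correction computed at the coder, where the true parameter is known. The inaudible embedding of the side information into the audio signal, which the paper performs, is not shown; the quantization step and signal parameters are assumptions.

```python
import numpy as np

fs, n = 16000, 1024
true_f = 440.37                        # sinusoid frequency, known at the coder

def blind_estimate(x):
    """Decoder-style blind estimate: frequency of the largest FFT peak (coarse)."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    return np.argmax(spec) * fs / len(x)

# --- Coder side -------------------------------------------------------------
t = np.arange(n) / fs
x = np.sin(2 * np.pi * true_f * t) + 0.1 * np.random.randn(n)  # noisy observation
coarse = blind_estimate(x)                  # what the decoder would find on its own
step = 0.1                                  # Hz; quantization step sets the side-info bitrate
side_info = int(round((true_f - coarse) / step))   # small integer to embed/transmit

# --- Decoder side -----------------------------------------------------------
refined = blind_estimate(x) + side_info * step     # estimate corrected by side info
print(coarse, refined)   # refined value lies within about step/2 of true_f
```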

  15. Working memory: a developmental study of phonological recoding.

    PubMed

    Palmer, S

    2000-05-01

    A cross-sectional study using children aged 3 to 7 years and a cross-sequential study using children aged between 5 and 8 years showed that the development of phonological recoding in working memory was more complex than the simple dichotomous picture portrayed in the current literature. It appears that initially children use no strategy in recall, which is proposed to represent the level of automatic activation of representations in long-term memory and the storage capacity of the central executive. This is followed by a period in which a visual strategy prevails, followed by a period of dual visual-verbal coding before the adult-like strategy of verbal coding finally emerges. The results are discussed in terms of three working memory models (Baddeley, 1990; Engle, 1996; Logie, 1996) where strategy use is seen as the development of attentional processes and phonological recoding as the development of inhibitory mechanisms in the central executive to suppress the habitual response set of visual coding.

  16. Active acoustical impedance using distributed electrodynamical transducers.

    PubMed

    Collet, M; David, P; Berthillier, M

    2009-02-01

    New miniaturization and integration capabilities available from emerging microelectromechanical system (MEMS) technology will allow silicon-based artificial skins involving thousands of elementary actuators to be developed in the near future. SMART structures combining large arrays of elementary motion pixels coated with macroscopic components are thus being studied so that fundamental properties such as shape, stiffness, and even reflectivity of light and sound could be dynamically adjusted. This paper investigates the acoustic impedance capabilities of a set of distributed transducers connected with a suitable controlling strategy. Research in this domain aims at designing integrated active interfaces with a desired acoustical impedance for reaching an appropriate global acoustical behavior. This generic problem is intrinsically connected with the control of multiphysical systems based on partial differential equations (PDEs) and with the notion of multiscaled physics when a dense array of electromechanical systems (or MEMS) is considered. By using specific techniques based on PDE control theory, a simple boundary control equation capable of annihilating the wave reflections has been built. The obtained strategy is also discretized as a low order time-space operator for experimental implementation by using a dense network of interlaced microphones and loudspeakers. The resulting quasicollocated architecture guarantees robustness and stability margins. This paper aims at showing how a well controlled semidistributed active skin can substantially modify the sound transmissibility or reflectivity of the corresponding homogeneous passive interface. In Sec. IV, numerical and experimental results demonstrate the capabilities of such a method for controlling sound propagation in ducts. Finally, in Sec. V, an energy-based comparison with a classical open-loop strategy underlines the system's efficiency.

  17. SSPARR: Development of an efficient autonomous sampling strategy

    NASA Astrophysics Data System (ADS)

    Chayes, D. N.

    2013-12-01

    The Seafloor Sounding in Polar and Remote Regions (SSPARR) effort was launched in 2004 with funding from the US National Science Foundation (Anderson et al., 2005). Experiments with a prototype were encouraging (Greenspan et al., 2012; Chayes et al., 2012), and we are proceeding toward building and testing units for deployment during the 2014 season in ice-covered parts of the Arctic Ocean. The simplest operational mode for a SSPARR buoy will be to wake and sample on a fixed time interval. A slightly more complex mode will check the distance traveled since the previous sounding and potentially return to sleep mode if the buoy has not traveled far enough to make a significant new measurement. We are developing a mode that will use a sampling strategy based on querying an on-board copy of the best available digital terrain model (DTM), e.g. IBCAO in the Arctic, to help decide whether it is appropriate to turn on the echo sounder and make a new measurement. We anticipate that a robust strategy of this type will allow a buoy to operate substantially longer on a fixed battery size. Anderson, R., D. Chayes, et al. (2005). "Seafloor Soundings in Polar and Remote Regions - A new instrument for unattended bathymetric observations," Eos Trans. AGU 86(18): Abstract C43A-10. Greenspan, D., D. Porter, et al. (2012). "IBuoy: Expendable Echo Sounder Buoy with Satellite Telemetry." EOS Fall Meeting Supplement C13E-0660. Chayes, D. N., S. A. Goemmer, et al. (2012). "SSPARR-3: A cost-effective autonomous drifting echosounder." EOS Fall Meeting supplement C13E-0659.
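    A hedged sketch of the DTM-informed wake-up decision described above: the buoy samples only if it has drifted far enough since the previous sounding and the on-board terrain model suggests the new measurement would differ meaningfully from the last one. The distance and depth-change thresholds, and the exact decision rule, are assumptions; the abstract does not specify them.

```python
import math

MIN_DISTANCE_KM = 2.0          # assumed thresholds, not from the abstract
MIN_EXPECTED_CHANGE_M = 50.0

def haversine_km(a, b, R=6371.0):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    la1, lo1, la2, lo2 = map(math.radians, (*a, *b))
    h = (math.sin((la2 - la1) / 2) ** 2
         + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(h))

def should_sound(prev_fix, new_fix, prev_depth, dtm_depth):
    """Decide whether to power up the echo sounder on wake-up.

    prev_fix/new_fix: (lat, lon) of the last sounding and the current GPS fix.
    prev_depth: last measured depth (m). dtm_depth: callable returning the
    best-available DTM depth (e.g. IBCAO) at a position. Both the interface
    and the decision rule are assumptions about how the on-board logic might
    be organised."""
    if haversine_km(prev_fix, new_fix) < MIN_DISTANCE_KM:
        return False                                   # has not drifted far enough
    expected_change = abs(dtm_depth(new_fix) - prev_depth)
    return expected_change >= MIN_EXPECTED_CHANGE_M    # sample only where the DTM
                                                       # suggests new information
```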

  18. The nurse as investor: using the strategies of Sarbanes-Oxley corporate legislation to radically transform the work environment of nurses.

    PubMed

    Beason, Charlotte F

    2005-01-01

    Experts in creative management recommend that managers routinely explore the practices of other disciplines to develop innovative strategies for their organizations. The author examines the provisions of the Sarbanes-Oxley Corporate Responsibility Act of 2002, designed to ensure sound corporate fiscal practices, and proposes a model using the same actions to radically transform the nursing work environment.

  19. Towards efficient data exchange and sharing for big-data driven materials science: metadata and data formats

    NASA Astrophysics Data System (ADS)

    Ghiringhelli, Luca M.; Carbogno, Christian; Levchenko, Sergey; Mohamed, Fawzi; Huhs, Georg; Lüders, Martin; Oliveira, Micael; Scheffler, Matthias

    2017-11-01

    With big-data driven materials research, the new paradigm of materials science, sharing and wide accessibility of data are becoming crucial aspects. Obviously, a prerequisite for data exchange and big-data analytics is standardization, which means using consistent and unique conventions for, e.g., units, zero baselines, and file formats. There are two main strategies to achieve this goal. One accepts the heterogeneous nature of the community, which comprises scientists from physics, chemistry, bio-physics, and materials science, by complying with the diverse ecosystem of computer codes and thus develops "converters" for the input and output files of all important codes. These converters then translate the data of each code into a standardized, code-independent format. The other strategy is to provide standardized open libraries that code developers can adopt for shaping their inputs, outputs, and restart files directly into the same code-independent format. In this perspective paper, we present both strategies and argue that they can and should be regarded as complementary, if not even synergetic. The formats and conventions presented here were agreed upon by two teams, the Electronic Structure Library (ESL) of the European Center for Atomic and Molecular Computations (CECAM) and the NOvel MAterials Discovery (NOMAD) Laboratory, a European Centre of Excellence (CoE). A key element of this work is the definition of hierarchical metadata describing state-of-the-art electronic-structure calculations.
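    A minimal sketch of the first ("converter") strategy, assuming invented metadata keys: a code-specific output record is translated into a code-independent dictionary with fixed units and conventions. The actual NOMAD/ESL metadata hierarchy is far richer than this.

```python
# Illustrative converter sketch. The metadata keys, the raw-record fields and
# the unit choices below are assumptions for demonstration; they are not the
# actual NOMAD/ESL metadata definitions.

HARTREE_TO_EV = 27.211386245988
BOHR_TO_ANGSTROM = 0.529177210903

def convert_codeA_output(raw):
    """Translate one code's native output (here: energies in Hartree,
    lengths in Bohr) into a code-independent record with fixed units."""
    return {
        "program_name": raw["code"],
        "total_energy_eV": raw["etot_hartree"] * HARTREE_TO_EV,
        "lattice_constant_angstrom": raw["alat_bohr"] * BOHR_TO_ANGSTROM,
        "xc_functional": raw["xc"].upper(),
    }

record = convert_codeA_output(
    {"code": "codeA", "etot_hartree": -76.4383, "alat_bohr": 10.26, "xc": "pbe"})
print(record)   # same keys and units regardless of which code produced the data
```

    The second strategy would move this mapping upstream: the code itself writes the standardized record through a shared library, so no converter is needed afterwards.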

  20. Cortical network differences in the sighted versus early blind for recognition of human-produced action sounds

    PubMed Central

    Lewis, James W.; Frum, Chris; Brefczynski-Lewis, Julie A.; Talkington, William J.; Walker, Nathan A.; Rapuano, Kristina M.; Kovach, Amanda L.

    2012-01-01

    Both sighted and blind individuals can readily interpret meaning behind everyday real-world sounds. In sighted listeners, we previously reported that regions along the bilateral posterior superior temporal sulci (pSTS) and middle temporal gyri (pMTG) are preferentially activated when presented with recognizable action sounds. These regions have generally been hypothesized to represent primary loci for complex motion processing, including visual biological motion processing and audio-visual integration. However, it remained unclear whether, or to what degree, life-long visual experience might impact functions related to hearing perception or memory of sound-source actions. Using functional magnetic resonance imaging (fMRI), we compared brain regions activated in congenitally blind versus sighted listeners in response to hearing a wide range of recognizable human-produced action sounds (excluding vocalizations) versus unrecognized, backward-played versions of those sounds. Here we show that recognized human action sounds commonly evoked activity in both groups along most of the left pSTS/pMTG complex, though with relatively greater activity in the right pSTS/pMTG by the blind group. These results indicate that portions of the postero-lateral temporal cortices contain domain-specific hubs for biological and/or complex motion processing independent of sensory-modality experience. Contrasting the two groups, the sighted listeners preferentially activated bilateral parietal plus medial and lateral frontal networks, while the blind listeners preferentially activated left anterior insula plus bilateral anterior calcarine and medial occipital regions, including what would otherwise have been visual-related cortex. These global-level network differences suggest that blind and sighted listeners may preferentially use different memory retrieval strategies when attempting to recognize action sounds. PMID:21305666

  1. Puget Sound Dissolved Oxygen Modeling Study: Development of an Intermediate Scale Water Quality Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khangaonkar, Tarang; Sackmann, Brandon S.; Long, Wen

    2012-10-01

    The Salish Sea, including Puget Sound, is a large estuarine system bounded by over seven thousand miles of complex shorelines; it consists of several subbasins and many large inlets with distinct properties of their own. Pacific Ocean water enters Puget Sound through the Strait of Juan de Fuca at depth over the Admiralty Inlet sill. Ocean water mixed with freshwater discharges from runoff, rivers, and wastewater outfalls exits Puget Sound through the brackish surface outflow layer. Nutrient pollution is considered one of the largest threats to Puget Sound. There is considerable interest in understanding the effect of nutrient loads on the water quality and ecological health of Puget Sound in particular and the Salish Sea as a whole. The Washington State Department of Ecology (Ecology) contracted with Pacific Northwest National Laboratory (PNNL) to develop a coupled hydrodynamic and water quality model. The water quality model simulates algae growth, dissolved oxygen (DO), and nutrient dynamics in Puget Sound to inform potential Puget Sound-wide nutrient management strategies. Specifically, the project is expected to help determine 1) if current and potential future nitrogen loadings from point and non-point sources are significantly impairing water quality at a large scale and 2) what level of nutrient reductions are necessary to reduce or control human impacts to DO levels in the sensitive areas. The project did not include any additional data collection but instead relied on currently available information. This report describes the model development effort conducted during the period 2009 to 2012 under a U.S. Environmental Protection Agency (EPA) cooperative agreement with PNNL, Ecology, and the University of Washington awarded under the National Estuary Program.

  2. Active control of sound transmission through partitions composed of discretely controlled modules

    NASA Astrophysics Data System (ADS)

    Leishman, Timothy W.

    This thesis provides a detailed theoretical and experimental investigation of active segmented partitions (ASPs) for the control of sound transmission. ASPs are physically segmented arrays of interconnected acoustically and structurally small modules that are discretely controlled using electronic controllers. Theoretical analyses of the thesis first address physical principles fundamental to ASP modeling and experimental measurement techniques. Next, they explore specific module configurations, primarily using equivalent circuits. Measured normal-incidence transmission losses and related properties of experimental ASPs are determined using plane wave tubes and the two-microphone transfer function technique. A scanning laser vibrometer is also used to evaluate distributed transmitting surface vibrations. ASPs have the inherent potential to provide excellent active sound transmission control (ASTC) through lightweight structures, using very practical control strategies. The thesis analyzes several unique ASP configurations and evaluates their abilities to produce high transmission losses via global minimization of normal transmitting surface vibrations. A novel dual diaphragm configuration is shown to employ this strategy particularly well. It uses an important combination of acoustical actuation and mechano-acoustical segmentation to produce exceptionally high transmission loss (e.g., 50 to 80 dB) over a broad frequency range-including lower audible frequencies. Such performance is shown to be comparable to that produced by much more massive partitions composed of thick layers of steel or concrete and sand. The configuration uses only simple localized error sensors and actuators, permitting effective use of independent single-channel controllers in a decentralized format. This work counteracts the commonly accepted notion that active vibration control of partitions is an ineffective means of controlling sound transmission. With appropriate construction, actuation, and error sensing, ASPs can achieve high sound transmission loss through efficient global control of transmitting surface vibrations. This approach is applicable to a wide variety of source and receiving spaces-and to both near fields and far fields.

  3. Auditory learning through active engagement with sound: biological impact of community music lessons in at-risk children

    PubMed Central

    Kraus, Nina; Slater, Jessica; Thompson, Elaine C.; Hornickel, Jane; Strait, Dana L.; Nicol, Trent; White-Schwoch, Travis

    2014-01-01

    The young nervous system is primed for sensory learning, facilitating the acquisition of language and communication skills. Social and linguistic impoverishment can limit these learning opportunities, eventually leading to language-related challenges such as poor reading. Music training offers a promising auditory learning strategy by directing attention to meaningful acoustic elements of the soundscape. In light of evidence that music training improves auditory skills and their neural substrates, there are increasing efforts to enact community-based programs to provide music instruction to at-risk children. Harmony Project is a community foundation that has provided free music instruction to over 1000 children from Los Angeles gang-reduction zones over the past decade. We conducted an independent evaluation of biological effects of participating in Harmony Project by following a cohort of children for 1 year. Here we focus on a comparison between students who actively engaged with sound through instrumental music training vs. students who took music appreciation classes. All children began with an introductory music appreciation class, but midway through the year half of the children transitioned to the instrumental training. After the year of training, the children who actively engaged with sound through instrumental music training had faster and more robust neural processing of speech than the children who stayed in the music appreciation class, observed in neural responses to a speech sound /d/. The neurophysiological measures found to be enhanced in the instrumentally-trained children have been previously linked to reading ability, suggesting a gain in neural processes important for literacy stemming from active auditory learning. Despite intrinsic constraints on our study imposed by a community setting, these findings speak to the potential of active engagement with sound (i.e., music-making) to engender experience-dependent neuroplasticity and may inform the development of strategies for auditory learning. PMID:25414631

  4. Auditory learning through active engagement with sound: biological impact of community music lessons in at-risk children.

    PubMed

    Kraus, Nina; Slater, Jessica; Thompson, Elaine C; Hornickel, Jane; Strait, Dana L; Nicol, Trent; White-Schwoch, Travis

    2014-01-01

    The young nervous system is primed for sensory learning, facilitating the acquisition of language and communication skills. Social and linguistic impoverishment can limit these learning opportunities, eventually leading to language-related challenges such as poor reading. Music training offers a promising auditory learning strategy by directing attention to meaningful acoustic elements of the soundscape. In light of evidence that music training improves auditory skills and their neural substrates, there are increasing efforts to enact community-based programs to provide music instruction to at-risk children. Harmony Project is a community foundation that has provided free music instruction to over 1000 children from Los Angeles gang-reduction zones over the past decade. We conducted an independent evaluation of biological effects of participating in Harmony Project by following a cohort of children for 1 year. Here we focus on a comparison between students who actively engaged with sound through instrumental music training vs. students who took music appreciation classes. All children began with an introductory music appreciation class, but midway through the year half of the children transitioned to the instrumental training. After the year of training, the children who actively engaged with sound through instrumental music training had faster and more robust neural processing of speech than the children who stayed in the music appreciation class, observed in neural responses to a speech sound /d/. The neurophysiological measures found to be enhanced in the instrumentally-trained children have been previously linked to reading ability, suggesting a gain in neural processes important for literacy stemming from active auditory learning. Despite intrinsic constraints on our study imposed by a community setting, these findings speak to the potential of active engagement with sound (i.e., music-making) to engender experience-dependent neuroplasticity and may inform the development of strategies for auditory learning.

  5. Traffic safety information systems international scan : strategy implementation white paper

    DOT National Transportation Integrated Search

    2006-09-01

    Safety data provide the key to making sound decisions on the design and operation of roadways, but deficiencies in many States safety databases do not allow for good decisionmaking. The Federal Highway Administration (FHWA), the American Associati...

  6. Managing livestock using animal behavior: Mixed-species stocking and flerds

    USDA-ARS?s Scientific Manuscript database

    Mixed-species stocking can foster sound landscape management while offering economic and ecological advantages compared to mono-species stocking. Producers contemplating a mixed-species enterprise should reflect on several considerations before implementing this animal management strategy. Factors...

  7. Auditory sequence analysis and phonological skill

    PubMed Central

    Grube, Manon; Kumar, Sukhbinder; Cooper, Freya E.; Turton, Stuart; Griffiths, Timothy D.

    2012-01-01

    This work tests the relationship between auditory and phonological skill in a non-selected cohort of 238 school students (age 11) with the specific hypothesis that sound-sequence analysis would be more relevant to phonological skill than the analysis of basic, single sounds. Auditory processing was assessed across the domains of pitch, time and timbre; a combination of six standard tests of literacy and language ability was used to assess phonological skill. A significant correlation between general auditory and phonological skill was demonstrated, plus a significant, specific correlation between measures of phonological skill and the auditory analysis of short sequences in pitch and time. The data support a limited but significant link between auditory and phonological ability with a specific role for sound-sequence analysis, and provide a possible new focus for auditory training strategies to aid language development in early adolescence. PMID:22951739

  8. Computation of interaural time difference in the owl's coincidence detector neurons.

    PubMed

    Funabiki, Kazuo; Ashida, Go; Konishi, Masakazu

    2011-10-26

    Both the mammalian and avian auditory systems localize sound sources by computing the interaural time difference (ITD) with submillisecond accuracy. The neural circuits for this computation in birds consist of axonal delay lines and coincidence detector neurons. Here, we report the first in vivo intracellular recordings from coincidence detectors in the nucleus laminaris of barn owls. Binaural tonal stimuli induced sustained depolarizations (DC) and oscillating potentials whose waveforms reflected the stimulus. The amplitude of this sound analog potential (SAP) varied with ITD, whereas DC potentials did not. The amplitude of the SAP was correlated with firing rate in a linear fashion. Spike shape, synaptic noise, the amplitude of SAP, and responsiveness to current pulses differed between cells at different frequencies, suggesting an optimization strategy for sensing sound signals in neurons tuned to different frequencies.

  9. Afraid of being "witchy with a 'b'": a qualitative study of how gender influences residents' experiences leading cardiopulmonary resuscitation.

    PubMed

    Kolehmainen, Christine; Brennan, Meghan; Filut, Amarette; Isaac, Carol; Carnes, Molly

    2014-09-01

    Ineffective leadership during cardiopulmonary resuscitation ("code") can negatively affect a patient's likelihood of survival. In most teaching hospitals, internal medicine residents lead codes. In this study, the authors explored internal medicine residents' experiences leading codes, with a particular focus on how gender influences the code leadership experience. The authors conducted individual, semistructured telephone or in-person interviews with 25 residents (May 2012 to February 2013) from 9 U.S. internal medicine residency programs. They audio recorded and transcribed the interviews and then thematically analyzed the transcribed text. Participants viewed a successful code as one with effective leadership. They agreed that the ideal code leader was an authoritative presence; spoke with a deep, loud voice; used clear, direct communication; and appeared calm. Although equally able to lead codes as their male colleagues, female participants described feeling stress from having to violate gender behavioral norms in the role of code leader. In response, some female participants adopted rituals to signal the suspension of gender norms while leading a code. Others apologized afterwards for their counternormative behavior. Ideal code leadership embodies highly agentic, stereotypical male behaviors. Female residents employed strategies to better integrate the competing identities of code leader and female gender. In the future, residency training should acknowledge how female gender stereotypes may conflict with the behaviors required to enact code leadership and offer some strategies, such as those used by the female residents in this study, to help women integrate these dual identities.

  10. Large Eddy Simulation of Sound Generation by Turbulent Reacting and Nonreacting Shear Flows

    NASA Astrophysics Data System (ADS)

    Najafi-Yazdi, Alireza

    The objective of the present study was to investigate the mechanisms of sound generation by subsonic jets. Large eddy simulations were performed along with bandpass filtering of the flow and sound in order to gain further insight into the role of coherent structures in subsonic jet noise generation. A sixth-order compact scheme was used for spatial discretization of the fully compressible Navier-Stokes equations. Time integration was performed through the use of the standard fourth-order, explicit Runge-Kutta scheme. An implicit low dispersion, low dissipation Runge-Kutta (ILDDRK) method was developed and implemented for simulations involving sources of stiffness such as flows near solid boundaries, or combustion. A surface integral acoustic analogy formulation, called Formulation 1C, was developed for farfield sound pressure calculations. Formulation 1C was derived based on the convective wave equation in order to take into account the presence of a mean flow. The formulation was derived to be easy to implement as a numerical post-processing tool for CFD codes. Sound radiation from an unheated, Mach 0.9 jet at Reynolds number 400,000 was considered. The effect of mesh size on the accuracy of the nearfield flow and farfield sound results was studied. It was observed that insufficient grid resolution in the shear layer results in unphysical laminar vortex pairing, and increased sound pressure levels in the farfield. Careful examination of the bandpass filtered pressure field suggested that there are two mechanisms of sound radiation in unheated subsonic jets that can occur in all scales of turbulence. The first mechanism is the stretching and the distortion of coherent vortical structures, especially close to the termination of the potential core. As eddies are bent or stretched, a portion of their kinetic energy is radiated. This mechanism is quadrupolar in nature, and is responsible for strong sound radiation at aft angles. The second sound generation mechanism appears to be associated with the transverse vibration of the shear-layer interface within the ambient quiescent flow, and has dipolar characteristics. This mechanism is believed to be responsible for sound radiation along the sideline directions. Jet noise suppression through the use of microjets was studied. The microjet injection induced secondary instabilities in the shear layer which triggered the transition to turbulence, and suppressed laminar vortex pairing. This in turn resulted in a reduction of OASPL at almost all observer locations. In all cases, the bandpass filtering of the nearfield flow and the associated sound provides revealing details of the sound radiation process. The results suggest that circumferential modes are significant and need to be included in future wavepacket models for jet noise prediction. Numerical simulations of sound radiation from nonpremixed flames were also performed. The simulations featured the solution of the fully compressible Navier-Stokes equations. Therefore, sound generation and radiation were directly captured in the simulations. A thickened flamelet model was proposed for nonpremixed flames. The model yields artificially thickened flames which can be better resolved on the computational grid, while retaining the physically correct values of the total heat released into the flow. Combustion noise has monopolar characteristics for low frequencies. For high frequencies, the sound field is no longer omni-directional. Major sources of sound appear to be located in the jet shear layer within one potential core length from the jet nozzle.

  11. A Strategy for Reusing the Data of Electronic Medical Record Systems for Clinical Research.

    PubMed

    Matsumura, Yasushi; Hattori, Atsushi; Manabe, Shiro; Tsuda, Tsutomu; Takeda, Toshihiro; Okada, Katsuki; Murata, Taizo; Mihara, Naoki

    2016-01-01

    There is a great need to reuse data stored in electronic medical records (EMR) databases for clinical research. We previously reported the development of a system in which progress notes and case report forms (CRFs) were simultaneously recorded using a template in the EMR in order to exclude redundant data entry. To make the data collection process more efficient, we are developing a system in which data originally stored in the EMR database can be populated within a frame in a template. We developed interface plugin modules that retrieve data from the databases of other EMR applications. A universal keyword written in a template master is converted to a local code using a data conversion table; the objective data are then retrieved from the corresponding database. The template element data, which are entered through a template, are stored in the template element database. To retrieve data entered by other templates, the objective data are designated by the template element code together with the template code, or by the concept code if one is written for the element. When the application systems in the EMR generate documents, they also generate a PDF file and a corresponding document profile XML, which includes the important data, and send them to the document archive server and the data sharing server, respectively. In the data sharing server, the data are represented as items, each identified by an item code together with a document class code, and its value. By linking a concept code to an item identifier, the objective data can be retrieved by designating a concept code. We employed a flexible strategy in which a unique identifier for a hospital is initially attached to all of the data that the hospital generates. The identifier is secondarily linked with concept codes. Data that are not linked with a concept code can also be retrieved using the unique identifier of the hospital. This strategy makes it possible to reuse any of a hospital's data.
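    A minimal sketch of the keyword-to-local-code conversion described above, with invented keywords, codes, and query function: the template master carries a universal keyword, the conversion table maps it to the hospital's local code, and the value is then retrieved from that hospital's EMR database.

```python
# Hypothetical illustration of the conversion step described in the abstract.
# Keywords, local codes and the query function are invented for the example.

CONVERSION_TABLE = {
    "hospital_A": {"BODY_WEIGHT": "LAB-0012", "SERUM_CREATININE": "LAB-0345"},
    "hospital_B": {"BODY_WEIGHT": "W001",     "SERUM_CREATININE": "CRE01"},
}

def fetch_template_value(hospital, universal_keyword, query_local_db):
    """Resolve a universal keyword to the hospital's local code, then pull
    the value from that hospital's EMR database via the supplied query."""
    local_code = CONVERSION_TABLE[hospital][universal_keyword]
    return query_local_db(local_code)

# Example with a stand-in local database
demo_db = {"LAB-0012": 62.5}
print(fetch_template_value("hospital_A", "BODY_WEIGHT", demo_db.get))  # -> 62.5
```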

  12. Short-Term Memory Coding in Children with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Henry, Lucy

    2008-01-01

    To examine visual and verbal coding strategies, I asked children with intellectual disabilities and peers matched for MA and CA to perform picture memory span tasks with phonologically similar, visually similar, long, or nonsimilar named items. The CA group showed effects consistent with advanced verbal memory coding (phonological similarity and…

  13. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR CODING: DESCRIPTIVE QUESTIONNAIRE (UA-D-6.0)

    EPA Science Inventory

    The purpose of this SOP is to define the coding strategy for the Descriptive Questionnaire. This questionnaire was developed for use in the Arizona NHEXAS project and the "Border" study. Keywords: data; coding; descriptive questionnaire.

    The National Human Exposure Assessment...

  14. Combined electric and acoustic hearing performance with Zebra® speech processor: speech reception, place, and temporal coding evaluation.

    PubMed

    Vaerenberg, Bart; Péan, Vincent; Lesbros, Guillaume; De Ceulaer, Geert; Schauwers, Karen; Daemers, Kristin; Gnansia, Dan; Govaerts, Paul J

    2013-06-01

    To assess the auditory performance of Digisonic® cochlear implant users with electric stimulation (ES) and electro-acoustic stimulation (EAS) with special attention to the processing of low-frequency temporal fine structure. Six patients implanted with a Digisonic® SP implant and showing low-frequency residual hearing were fitted with the Zebra® speech processor providing both electric and acoustic stimulation. Assessment consisted of monosyllabic speech identification tests in quiet and in noise at different presentation levels, and a pitch discrimination task using harmonic and disharmonic intonating complex sounds (Vaerenberg et al., 2011). These tests investigate place and time coding through pitch discrimination. All tasks were performed with ES only and with EAS. Speech results in noise showed significant improvement with EAS when compared to ES. Whereas EAS did not yield better results in the harmonic intonation test, the improvements in the disharmonic intonation test were remarkable, suggesting better coding of pitch cues requiring phase locking. These results suggest that patients with residual hearing in the low-frequency range still have good phase-locking capacities, allowing them to process fine temporal information. ES relies mainly on place coding but provides poor low-frequency temporal coding, whereas EAS also provides temporal coding in the low-frequency range. Patients with residual phase-locking capacities can make use of these cues.

  15. Effect of Casino-Related Sound, Red Light and Pairs on Decision-Making During the Iowa Gambling Task

    PubMed Central

    Noël, Xavier; Bechara, Antoine; Vanavermaete, Nora; Verbanck, Paul; Kornreich, Charles

    2014-01-01

    Casino venues are often characterized by “warm” colors, reward-related sounds, and the presence of others. These factors have long been identified as key factors in energizing gambling. However, few empirical studies have examined their impact on gambling behaviors. Here, we aimed to explore the impact of combined red light and casino-related sounds, with or without the presence of another participant, on gambling-related behaviors. Gambling behavior was estimated with the Iowa Gambling Task (IGT). Eighty non-gambling participants took part in one of four experimental conditions (20 participants in each condition): (1) IGT without casino-related sound and under normal (white) light (control), (2) IGT with combined casino-related sound and red light (casino alone), (3) IGT with combined casino-related sound, red light and in front of another participant (casino competition—implicit), and (4) IGT with combined casino-related sound, red light and against another participant (casino competition—explicit). Results showed that, in contrast to the control condition, participants in the three “casino” conditions did not exhibit slower deck selection reaction times after losses than after rewards. Moreover, participants in the two “competition” conditions displayed lower deck selection reaction times after losses and rewards, as compared with the control and “casino alone” conditions. These findings suggest that a casino environment may diminish the time used for reflecting and thinking before acting after losses. These findings are discussed along with methodological limitations, potential directions for future studies, and implications for enhancing prevention strategies for abnormal gambling. PMID:24414096

  16. Effect of casino-related sound, red light and pairs on decision-making during the Iowa gambling task.

    PubMed

    Brevers, Damien; Noël, Xavier; Bechara, Antoine; Vanavermaete, Nora; Verbanck, Paul; Kornreich, Charles

    2015-06-01

    Casino venues are often characterized by "warm" colors, reward-related sounds, and the presence of others. These factors have long been identified as key factors in energizing gambling. However, few empirical studies have examined their impact on gambling behaviors. Here, we aimed to explore the impact of combined red light and casino-related sounds, with or without the presence of another participant, on gambling-related behaviors. Gambling behavior was estimated with the Iowa Gambling Task (IGT). Eighty non-gambling participants took part in one of four experimental conditions (20 participants in each condition): (1) IGT without casino-related sound and under normal (white) light (control), (2) IGT with combined casino-related sound and red light (casino alone), (3) IGT with combined casino-related sound, red light and in front of another participant (casino competition-implicit), and (4) IGT with combined casino-related sound, red light and against another participant (casino competition-explicit). Results showed that, in contrast to the control condition, participants in the three "casino" conditions did not exhibit slower deck selection reaction times after losses than after rewards. Moreover, participants in the two "competition" conditions displayed lower deck selection reaction times after losses and rewards, as compared with the control and "casino alone" conditions. These findings suggest that a casino environment may diminish the time used for reflecting and thinking before acting after losses. These findings are discussed along with methodological limitations, potential directions for future studies, and implications for enhancing prevention strategies for abnormal gambling.

  17. Is the Speech Transmission Index (STI) a robust measure of sound system speech intelligibility performance?

    NASA Astrophysics Data System (ADS)

    Mapp, Peter

    2002-11-01

    Although RaSTI is a good indicator of the speech intelligibility capability of auditoria and similar spaces, during the past 2-3 years it has been shown that RaSTI is not a robust predictor of sound system intelligibility performance. Instead, it is now recommended, within both national and international codes and standards, that full STI measurement and analysis be employed. However, new research is reported that indicates that STI is neither as flawless nor as robust as many believe. The paper highlights a number of potential error mechanisms. It is shown that the measurement technique and signal excitation stimulus can have a significant effect on the overall result and accuracy, particularly where DSP-based equipment is employed. It is also shown that in its current state of development, STI is not capable of appropriately accounting for a number of fundamental speech and system attributes, including typical sound system frequency response variations and anomalies. This is particularly shown to be the case when a system is operating under reverberant conditions. Comparisons between actual system measurements and corresponding word score data are reported, in which errors of up to 50% were observed. The implications for VA and PA system performance verification will be discussed.

  18. Decoding auditory spatial and emotional information encoding using multivariate versus univariate techniques.

    PubMed

    Kryklywy, James H; Macpherson, Ewan A; Mitchell, Derek G V

    2018-04-01

    Emotion can have diverse effects on behaviour and perception, modulating function in some circumstances, and sometimes having little effect. Recently, it was identified that part of the heterogeneity of emotional effects could be due to a dissociable representation of emotion in dual pathway models of sensory processing. Our previous fMRI experiment using traditional univariate analyses showed that emotion modulated processing in the auditory 'what' but not 'where' processing pathway. The current study aims to further investigate this dissociation using a more recently emerging multi-voxel pattern analysis searchlight approach. While undergoing fMRI, participants localized sounds of varying emotional content. A searchlight multi-voxel pattern analysis was conducted to identify activity patterns predictive of sound location and/or emotion. Relative to the prior univariate analysis, MVPA indicated larger overlapping spatial and emotional representations of sound within early secondary regions associated with auditory localization. However, consistent with the univariate analysis, these two dimensions were increasingly segregated in late secondary and tertiary regions of the auditory processing streams. These results, while complementary to our original univariate analyses, highlight the utility of multiple analytic approaches for neuroimaging, particularly for neural processes with known representations dependent on population coding.

  19. Inspecting Engineering Samples

    NASA Image and Video Library

    2017-12-08

    Goddard's Ritsko Wins 2011 SAVE Award The winner of the 2011 SAVE Award is Matthew Ritsko, a Goddard financial manager. His tool lending library would track and enable sharing of expensive space-flight tools and hardware after projects no longer need them. This set of images represents the types of tools used at NASA. To read more go to: www.nasa.gov/topics/people/features/ritsko-save.html Dr. Doug Rabin (Code 671) and PI La Vida Cooper (Code 564) inspect engineering samples of the HAS-2 imager, which will be tested and read out using a custom ASIC with a 16-bit ADC (analog to digital converter) and CDS (correlated double sampling) circuit designed by the Code 564 ASIC group as part of an FY10 IRAD. The purpose of the IRAD was to develop a high-resolution digitizer for Heliophysics applications such as imaging. Future goals for the collaboration include characterization testing and eventually a sounding rocket flight of the integrated system. *ASIC = Application Specific Integrated Circuit NASA/GSFC/Chris Gunn

  20. Acoustic Prediction State of the Art Assessment

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.

    2007-01-01

    The acoustic assessment task for both the Subsonic Fixed Wing and the Supersonic projects under NASA's Fundamental Aeronautics Program was designed to assess the current state-of-the-art in noise prediction capability and to establish baselines for gauging future progress. The documentation of our current capabilities included quantifying the differences between predictions of noise from computer codes and measurements of noise from experimental tests. Quantifying the accuracy of both the computed and experimental results further enhanced the credibility of the assessment. This presentation gives sample results from codes representative of NASA's capabilities in aircraft noise prediction both for systems and components. These include semi-empirical, statistical, analytical, and numerical codes. System level results are shown for both aircraft and engines. Component level results are shown for a landing gear prototype, for fan broadband noise, for jet noise from a subsonic round nozzle, and for propulsion airframe aeroacoustic interactions. Additional results are shown for modeling of the acoustic behavior of duct acoustic lining and the attenuation of sound in lined ducts with flow.

  1. Insights into inner ear-specific gene regulation: epigenetics and non-coding RNAs in inner ear development and regeneration

    PubMed Central

    Avraham, Karen B.

    2016-01-01

    The vertebrate inner ear houses highly specialized sensory organs, tuned to detect and encode sound, head motion and gravity. Gene expression programs under the control of transcription factors orchestrate the formation and specialization of the non-sensory inner ear labyrinth and its sensory constituents. More recently, epigenetic factors and non-coding RNAs emerged as an additional layer of gene regulation, both in inner ear development and disease. In this review, we provide an overview on how epigenetic modifications and non-coding RNAs, in particular microRNAs (miRNAs), influence gene expression and summarize recent discoveries that highlight their critical role in the proper formation of the inner ear labyrinth and its sensory organs. In contrast to non-mammalian vertebrates, adult mammals lack the ability to regenerate inner ear mechano-sensory hair cells. Finally, we discuss recent insights into how epigenetic factors and miRNAs may facilitate, or in the case of mammals, restrict sensory hair cell regeneration. PMID:27836639

  2. Anticounterfeiting Quick Response Code with Emission Color of Invisible Metal-Organic Frameworks as Encoding Information.

    PubMed

    Wang, Yong-Mei; Tian, Xue-Tao; Zhang, Hui; Yang, Zhong-Rui; Yin, Xue-Bo

    2018-06-21

    Counterfeiting is a global epidemic that is compelling the development of new anticounterfeiting strategies. Herein, we report a novel multiple-anticounterfeiting encoding strategy based on invisible fluorescent quick response (QR) codes with emission color as the information storage unit. The strategy requires red, green, and blue (RGB) light-emitting materials for different emission colors as encrypting information, single excitation for all of the emissions for practicability, and ultraviolet (UV) excitation for invisibility under daylight. Therefore, RGB light-emitting nanoscale metal-organic frameworks (NMOFs) are designed as inks to construct the colorful light-emitting boxes for information encrypting, while three black vertex boxes are used for positioning. Full-color emissions are obtained by mixing the trichromatic NMOF inks in an inkjet printer. The encrypted information capacity is easily adjusted through the number of light-emitting boxes and the effectively unlimited range of emission colors. The information is decoded with specific excitation light at 275 nm, making the QR codes invisible under daylight. The composition of the inks, the invisibility, the inkjet printing, and the abundant encrypted information all contribute to multiple anticounterfeiting. The proposed QR code pattern holds great potential for advanced anticounterfeiting.
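
    Purely as a conceptual sketch of using emission color as the information storage unit, the snippet below packs three bits into each cell as an RGB on/off combination; the cell layout, bit packing, and function name are invented for illustration and do not describe the authors' printed QR scheme.

    ```python
    # Conceptual sketch: store 3 bits per cell as an RGB on/off combination,
    # mimicking the idea of emission color as the information storage unit.
    # This illustrates the encoding idea only, not the printed QR workflow.

    def bits_to_rgb_cells(data: bytes) -> list:
        """Map every 3 bits of input to one (R, G, B) cell, 255 = ink present."""
        bitstring = "".join(f"{byte:08b}" for byte in data)
        bitstring += "0" * (-len(bitstring) % 3)   # pad to a multiple of 3 bits
        cells = []
        for i in range(0, len(bitstring), 3):
            r, g, b = (int(bit) * 255 for bit in bitstring[i:i + 3])
            cells.append((r, g, b))
        return cells

    cells = bits_to_rgb_cells(b"OK")
    # 16 bits -> 18 bits after padding -> 6 colored cells
    print(len(cells), cells[:3])
    ```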

  3. A concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1985-01-01

    A concatenated coding scheme for error control in data communications was analyzed. The inner code is used for both error correction and detection; the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. The probability of undetected error of the proposed scheme is derived, and an efficient method for computing this probability is presented. The throughput efficiency of the proposed error control scheme, incorporated with a selective-repeat ARQ retransmission strategy, is analyzed.
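
    The accept-or-retransmit decision described above can be sketched as follows; the inner and outer decoder functions are hypothetical placeholders rather than the specific codes analyzed in the report.

    ```python
    # Sketch of the accept/retransmit decision in a concatenated scheme with ARQ:
    # the inner code corrects and detects errors, the outer code only detects.
    # The two decoder callables are hypothetical placeholders.
    from typing import Callable, Optional, Tuple

    def receive_block(
        received: bytes,
        inner_decode: Callable[[bytes], Optional[bytes]],   # returns None on decoding failure
        outer_check: Callable[[bytes], bool],               # True if no error detected
    ) -> Tuple[bool, Optional[bytes]]:
        """Return (accept, data). A False accept triggers a selective-repeat request."""
        inner_out = inner_decode(received)
        if inner_out is None:           # inner decoder failed -> request retransmission
            return False, None
        if not outer_check(inner_out):  # outer code detected residual errors -> retransmit
            return False, None
        return True, inner_out          # block accepted (may still carry undetected errors)
    ```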

  4. Incorporating Midbrain Adaptation to Mean Sound Level Improves Models of Auditory Cortical Processing

    PubMed Central

    Schoppe, Oliver; King, Andrew J.; Schnupp, Jan W.H.; Harper, Nicol S.

    2016-01-01

    Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear–nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. This improves their predictive power across a wide range of stimuli but keeps model complexity low as it introduces no new free parameters. Incorporating the adaptive coding properties of neurons will likely improve receptive field models in other sensory modalities too. PMID:26758822
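
    A minimal sketch of the preprocessing stage described above (per-channel high-pass filtering of the spectrogram with channel-specific time constants, followed by half-wave rectification) is given below; the time constants and input values are arbitrary assumptions, not the fitted model parameters.

    ```python
    # Sketch of adaptation-style preprocessing: subtract a running (low-pass)
    # estimate of the mean level in each frequency channel, i.e. high-pass filter
    # with a channel-specific time constant, then half-wave rectify.
    # Time constants and input values are illustrative assumptions.
    import numpy as np

    def ic_adaptation(spec: np.ndarray, tau_frames: np.ndarray) -> np.ndarray:
        """spec: (n_freq, n_time) spectrogram; tau_frames: per-channel time constants."""
        n_freq, n_time = spec.shape
        alpha = np.exp(-1.0 / tau_frames)       # per-channel smoothing factor
        mean_est = spec[:, 0].copy()
        out = np.zeros_like(spec)
        for t in range(n_time):
            mean_est = alpha * mean_est + (1.0 - alpha) * spec[:, t]
            out[:, t] = np.maximum(spec[:, t] - mean_est, 0.0)   # half-wave rectification
        return out

    spec = np.abs(np.random.default_rng(1).standard_normal((32, 200)))
    adapted = ic_adaptation(spec, tau_frames=np.linspace(5.0, 50.0, 32))
    ```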

  5. Surveying multidisciplinary aspects in real-time distributed coding for Wireless Sensor Networks.

    PubMed

    Braccini, Carlo; Davoli, Franco; Marchese, Mario; Mongelli, Maurizio

    2015-01-27

    Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, "real-time" coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions, with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision rooted in Signal Processing consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide the taxonomy of the various formulations, a survey of the vast related literature, examples from the authors' own research, and some highlights on the inter-play of the different theories.
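
    The pragmatic "fix the encoders to linear functions and optimize the decoder" viewpoint can be illustrated with a scalar Gaussian example; the variances and encoder gains below are arbitrary assumptions, and channel noise is omitted for brevity.

    ```python
    # Two sensors observe a Gaussian source in noise, apply fixed linear (scalar)
    # encoders, and a fusion center forms the linear MMSE estimate of the source.
    # All variances and gains are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    var_x, var_n1, var_n2 = 1.0, 0.2, 0.5   # source and observation-noise variances
    a1, a2 = 0.8, 0.6                       # fixed linear encoder gains (power-limited)

    x = rng.normal(0.0, np.sqrt(var_x), n)
    z1 = a1 * (x + rng.normal(0.0, np.sqrt(var_n1), n))
    z2 = a2 * (x + rng.normal(0.0, np.sqrt(var_n2), n))
    z = np.vstack([z1, z2])                 # what the fusion center receives

    # LMMSE decoder: x_hat = C_xz C_zz^{-1} z
    C_xz = np.array([a1 * var_x, a2 * var_x])
    C_zz = np.array([[a1**2 * (var_x + var_n1), a1 * a2 * var_x],
                     [a1 * a2 * var_x,          a2**2 * (var_x + var_n2)]])
    w = np.linalg.solve(C_zz, C_xz)         # decoder weights
    x_hat = w @ z
    print("empirical MSE:", np.mean((x - x_hat) ** 2))
    ```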

  6. Statistical classification of drug incidents due to look-alike sound-alike mix-ups.

    PubMed

    Wong, Zoie Shui Yee

    2016-06-01

    It has been recognised that medication names that look or sound similar are a cause of medication errors. This study builds statistical classifiers for identifying medication incidents due to look-alike sound-alike mix-ups. A total of 227 patient safety incident advisories related to medication were obtained from the Canadian Patient Safety Institute's Global Patient Safety Alerts system. Eight feature selection strategies based on frequent terms, frequent drug terms and constituent terms were applied. Statistical text classifiers based on logistic regression, support vector machines with linear, polynomial, radial-basis and sigmoid kernels, and decision trees were trained and tested. The models developed achieved an average accuracy above 0.8 across all model settings. The receiver operating characteristic curves indicated that the classifiers performed reasonably well. The results obtained in this study suggest that statistical text classification can be a feasible method for identifying medication incidents due to look-alike sound-alike mix-ups based on a database of advisories from Global Patient Safety Alerts. © The Author(s) 2014.
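
    A minimal text-classification sketch in the spirit of the study is shown below; the advisory texts, labels, and feature settings are invented placeholders rather than the study's data or its eight feature-selection strategies.

    ```python
    # Sketch: bag-of-terms features + logistic regression to flag advisories that
    # describe look-alike/sound-alike (LASA) mix-ups. Texts and labels are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = [
        "hydroxyzine dispensed instead of hydralazine due to similar name",
        "patient fall in ward bathroom, no medication involved",
        "vinblastine selected in place of vincristine on order entry screen",
        "delayed transfer to imaging, no drug name confusion",
    ]
    labels = [1, 0, 1, 0]   # 1 = LASA-related incident, 0 = other (invented)

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1),
                          LogisticRegression(max_iter=1000))
    model.fit(texts, labels)
    print(model.predict(["clonidine given instead of klonopin, names sound alike"]))
    ```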

  7. Anxiety sensitivity and auditory perception of heartbeat.

    PubMed

    Pollock, R A; Carter, A S; Amir, N; Marks, L E

    2006-12-01

    Anxiety sensitivity (AS) is the fear of sensations associated with autonomic arousal. AS has been associated with the development and maintenance of panic disorder. Given that panic patients often rate cardiac symptoms as the most fear-provoking feature of a panic attack, AS individuals may be especially responsive to cardiac stimuli. Consequently, we developed a signal-in-white-noise detection paradigm to examine the strategies that high and low AS individuals use to detect and discriminate normal and abnormal heartbeat sounds. Compared to low AS individuals, high AS individuals demonstrated a greater propensity to report the presence of normal, but not abnormal, heartbeat sounds. High and low AS individuals did not differ in their ability to perceive normal heartbeat sounds against a background of white noise; however, high AS individuals consistently demonstrated lower ability to discriminate abnormal heartbeats from background noise and between abnormal and normal heartbeats. AS was characterized by an elevated false alarm rate across all tasks. These results suggest that heartbeat sounds may be fear-relevant cues for AS individuals, and may affect their attention and perception in tasks involving threat signals.
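
    Detection and discrimination performance in a paradigm like this is commonly summarized with signal-detection measures; the sketch below computes hit rate, false-alarm rate, sensitivity (d'), and criterion from hypothetical trial counts, not from the study's data.

    ```python
    # Standard signal-detection summary: hit rate, false-alarm rate, d' and criterion.
    # The trial counts below are hypothetical, not data from the study.
    from scipy.stats import norm

    hits, misses = 42, 8                  # "heartbeat present" trials
    false_alarms, correct_rej = 15, 35    # noise-only trials

    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rej)

    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)              # sensitivity
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))   # response bias
    print(f"d' = {d_prime:.2f}, criterion = {criterion:.2f}")
    ```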

  8. Long-term underwater sound measurements in the shipping noise indicator bands 63Hz and 125Hz from the port of Falmouth Bay, UK.

    PubMed

    Garrett, J K; Blondel, Ph; Godley, B J; Pikesley, S K; Witt, M J; Johanning, L

    2016-09-15

    Chronic low-frequency anthropogenic sound, such as shipping noise, may be negatively affecting marine life. The EU's Marine Strategy Framework Directive (MSFD) includes a specific indicator focused on this noise. This indicator is the yearly average sound level in third-octave bands with centre frequencies at 63 Hz and 125 Hz. These levels are described for Falmouth Bay, UK, an active port at the entrance to the English Channel. Underwater sound was recorded for 30 min h⁻¹ over the period June 2012 to November 2013 for a total of 435 days. Mean third-octave levels were louder in the 125-Hz band (annual mean level of 96.0 dB re 1 μPa) than in the 63-Hz band (92.6 dB re 1 μPa). These levels and variations are assessed as a function of seasons, shipping activity and wave height, providing comparison points for future monitoring activities, including the MSFD and emerging international regulation. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
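
    Because decibel levels cannot be averaged arithmetically, a yearly mean level for an MSFD indicator band is normally formed in the linear (mean-square pressure) domain; the sketch below shows that computation on made-up band levels, not the Falmouth Bay data.

    ```python
    # Average third-octave band levels over a year by converting to linear
    # (mean-square pressure) units, averaging, and converting back to dB re 1 uPa.
    # The input levels are made-up values, not the reported measurements.
    import numpy as np

    levels_db = np.array([94.0, 97.5, 92.3, 99.1, 95.8])   # per-recording band levels

    linear = 10.0 ** (levels_db / 10.0)       # proportional to mean-square pressure
    annual_mean_db = 10.0 * np.log10(linear.mean())
    print(f"annual mean level: {annual_mean_db:.1f} dB re 1 uPa")
    ```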

  9. Spectral summation and facilitation in on- and off-responses for optimized representation of communication calls in mouse inferior colliculus.

    PubMed

    Akimov, Alexander G; Egorova, Marina A; Ehret, Günter

    2017-02-01

    Selectivity for processing of species-specific vocalizations and communication sounds has often been associated with the auditory cortex. The midbrain inferior colliculus, however, is the first center in the auditory pathways of mammals integrating acoustic information processed in separate nuclei and channels in the brainstem and, therefore, could significantly contribute to enhancing the perception of species' communication sounds. Here, we used natural wriggling calls of mouse pups, which communicate need for maternal care to adult females, and a further 15 synthesized sounds to test the hypothesis that neurons in the central nucleus of the inferior colliculus of adult females optimize their response rates for reproduction of the three main harmonics (formants) of wriggling calls. The results confirmed the hypothesis, showing that average response rates, as recorded extracellularly from single units, were highest and spectral facilitation most effective for both onset and offset responses to the call and call models with three resolved frequencies according to critical bands in perception. In addition, the general on- and/or off-response enhancement in almost half of the 122 investigated neurons favors perception not only of single calls but also of vocalization rhythm. In summary, our study provides strong evidence that critical-band resolved frequency components within a communication sound increase the probability of its perception by boosting the signal-to-noise ratio of neural response rates within the inferior colliculus by at least 20% (our criterion for facilitation). These mechanisms, including enhancement of rhythm coding, are generally favorable to the processing of other animal and human vocalizations, including formants of speech sounds. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  10. Integrating speech in time depends on temporal expectancies and attention.

    PubMed

    Scharinger, Mathias; Steinberg, Johanna; Tavano, Alessandro

    2017-08-01

    Sensory information that unfolds in time, such as in speech perception, relies on efficient chunking mechanisms in order to yield optimally sized units for further processing. Whether or not two successive acoustic events receive a one-unit or a two-unit interpretation seems to depend on the fit between their temporal extent and a stipulated temporal window of integration. However, there is ongoing debate on how flexible this temporal window of integration should be, especially for the processing of speech sounds. Furthermore, there is no direct evidence of whether attention may modulate the temporal constraints on the integration window. For this reason, we here examine how different word durations, which lead to different temporal separations of sound onsets, interact with attention. In an Electroencephalography (EEG) study, participants actively and passively listened to words where word-final consonants were occasionally omitted. Words had either a natural duration or were artificially prolonged in order to increase the separation of speech sound onsets. Omission responses to incomplete speech input, originating in left temporal cortex, decreased when the critical speech sound was separated from previous sounds by more than 250 msec, i.e., when the separation was larger than the stipulated temporal window of integration (125-150 msec). Attention, on the other hand, only increased omission responses for stimuli with natural durations. We complemented the event-related potential (ERP) analyses by a frequency-domain analysis at the stimulus presentation rate. Notably, the power at the stimulation frequency showed the same duration and attention effects as the omission responses. We interpret these findings against the background of existing research on temporal integration windows and further suggest that our findings may be accounted for within the framework of predictive coding. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Practice management.

    PubMed

    Althausen, Peter L; Mead, Lisa

    2014-07-01

    The practicing orthopaedic traumatologist must have a sound knowledge of business fundamentals to be successful in the changing healthcare environment. Practice management encompasses multiple topics including governance, the financial aspects of billing and coding, physician extender management, ancillary service development, information technology, transcription utilization, and marketing. Some of these are universal, but several of these areas may be most applicable to the private practice of medicine. Attention to each component is vital to develop an understanding of the intricacies of practice management.

  12. A Multigrid Approach to Embedded-Grid Solvers

    DTIC Science & Technology

    1992-08-01

    previously as a Master's Thesis at the University of Florida; not edited by TESCO, Inc. ... domain decomposition techniques in order to accurately model the aerodynamics of complex geometries [4, 5, 11, 12, 13, 24]. Although these high... quantities subscripted by ∞ denote reference values in the undisturbed gas, where a∞ = (γ p∞/ρ∞)^(1/2) is the speed of sound in the...

  13. Acoustic fill factors for a 120 inch diameter fairing

    NASA Technical Reports Server (NTRS)

    Lee, Y. Albert

    1992-01-01

    Data from the acoustic test of a 120-inch diameter payload fairing were collected and an analysis of acoustic fill factors was performed. Correction factors for obtaining a weighted spatial average of the interior sound pressure level (SPL) were derived based on this database and a normalized 200-inch diameter fairing database. The weighted fill factors were determined and compared with fill factors derived from statistical energy analysis (VAPEPS code). The comparison is found to be reasonable.

  14. Public Law 94-553-Oct. 19, 1976. An Act For the General Revision of the Copyright Law, Title 17 of the United States Code, and for Other Purposes. Title 17-Copyrights. Ninety-Fourth Congress.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC.

    The copyright law of the United States is amended in its entirety by this act that takes effect in 1978. Literary works; musical works; dramatic works; pantomimes and choreographic works; pictorial, graphic, and sculptural works; motion pictures and other audiovisual works; and sound recordings are included in the subject matter of copyright.…

  15. Advance of the Black Flags: Symbolism, Social Identity, and Psychological Operations in Violent Conflict

    DTIC Science & Technology

    2015-12-01

    priorities. Cross points out that even the definition of music itself is somewhat subjective, as sound, rhythm, melody, and even body movement... to disrupt the conditions that allow a violent enemy to develop. The literature indicates that music is a universal social phenomenon. Music is a... Upon viewing the sample films and videos and listening to the music samples, a set of codes was developed and organized into a codebook in order

  16. Maritime Security: Potential Terrorist Attacks and Protection Priorities

    DTIC Science & Technology

    2007-01-09

    "Liquefied Natural Gas: Siting and Safety." Feb. 15, 2005. 108 U.S. Coast Guard. U.S. Coast Guard Captain of the Port Long Island Sound Waterways... Order Code RL33787, Maritime Security: Potential Terrorist Attacks and Protection Priorities, January 9, 2007, Paul W. Parfomak and John Frittelli...

  17. Multisensory guidance of orienting behavior.

    PubMed

    Maier, Joost X; Groh, Jennifer M

    2009-12-01

    We use both vision and audition when localizing objects and events in our environment. However, these sensory systems receive spatial information in different coordinate systems: sounds are localized using inter-aural and spectral cues, yielding a head-centered representation of space, whereas the visual system uses an eye-centered representation of space, based on the site of activation on the retina. In addition, the visual system employs a place-coded, retinotopic map of space, whereas the auditory system's representational format is characterized by broad spatial tuning and a lack of topographical organization. A common view is that the brain needs to reconcile these differences in order to control behavior, such as orienting gaze to the location of a sound source. To accomplish this, it seems that either auditory spatial information must be transformed from a head-centered rate code to an eye-centered map to match the frame of reference used by the visual system, or vice versa. Here, we review a number of studies that have focused on the neural basis underlying such transformations in the primate auditory system. Although these studies have found some evidence for such transformations, many differences in the way the auditory and visual systems encode space exist throughout the auditory pathway. We review these differences at the neural level and discuss them in relation to differences in the way auditory and visual information is used in guiding orienting movements.

  18. Equilibrium states of homogeneous sheared compressible turbulence

    NASA Astrophysics Data System (ADS)

    Riahi, M.; Lili, T.

    2011-06-01

    Equilibrium states of homogeneous compressible turbulence subjected to rapid shear are studied using rapid distortion theory (RDT). The purpose of this study is to determine numerical solutions of the unsteady linearized equations governing the evolution of double-correlation spectra. In this work, an RDT code developed by the authors solves these equations for compressible homogeneous shear flows. Numerical integration of these equations is carried out using a simple and accurate second-order scheme. The two Mach numbers relevant to homogeneous shear flow are the turbulent Mach number Mt, given by the root mean square turbulent velocity fluctuation divided by the speed of sound, and the gradient Mach number Mg, which is the mean shear rate times the transverse integral scale of the turbulence divided by the speed of sound. Validation of this code is performed by comparing RDT results with the direct numerical simulations (DNS) of [A. Simone, G.N. Coleman, and C. Cambon, J. Fluid Mech. 330, 307 (1997)] and [S. Sarkar, J. Fluid Mech. 282, 163 (1995)] for various values of the initial gradient Mach number Mg0. It was found that RDT is valid for small values of the non-dimensional time St (St < 3.5). It is important to note that RDT is also valid for large values of St (St > 10), in particular for large values of Mg0. This essential feature justifies the resort to RDT in order to determine equilibrium states in the compressible regime.
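
    The two Mach numbers defined above are simple ratios; the short sketch below computes them from representative values that are assumed purely for illustration.

    ```python
    # Turbulent and gradient Mach numbers for homogeneous shear flow:
    #   Mt = q / a      with q the rms turbulent velocity and a the speed of sound
    #   Mg = S * L / a  with S the mean shear rate and L the transverse integral scale
    # The numerical values are assumed for illustration only.
    q = 20.0      # rms turbulent velocity fluctuation [m/s]
    a = 340.0     # speed of sound [m/s]
    S = 5000.0    # mean shear rate [1/s]
    L = 0.01      # transverse integral length scale [m]

    Mt = q / a
    Mg = S * L / a
    print(f"Mt = {Mt:.3f}, Mg = {Mg:.3f}")
    ```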

  19. Accessing Talent: The Foundation of a U.S. Army Officer Corps Strategy

    DTIC Science & Technology

    2010-02-01

    strategy grounded in sound theory. ENDNOTES: 1. Janet C. Lowe, Warren Buffett Speaks: Wit and Wisdom from the World's Greatest Investor, New York... is what you get. (Warren Buffett [1]) INTRODUCTION: Since its completion in 1883, the Brooklyn Bridge has been a symbol of American ingenuity and... marketing efforts must account for these deviations since they are likely to play an important role in the market for new officer talent

  20. The Influence of Learning Strategies and Performance Strategies upon Engineering Design.

    DTIC Science & Technology

    1979-09-12

    of an intruder alarm system. Subjects were provided with details of how simple devices function, how detectors could be wired together, etc., and... experimenter, myself. I had the strong impression at that stage (some year or more ago), that many of the innovations were due to the experimenter, even though... accidentally introduced. On listening to sample tapes (all sessions were sound recorded and many video recorded) this pessimistic impression is
