Issues in Humanoid Audition and Sound Source Localization by Active Audition
NASA Astrophysics Data System (ADS)
Nakadai, Kazuhiro; Okuno, Hiroshi G.; Kitano, Hiroaki
In this paper, we present an active audition system implemented on the humanoid robot "SIG the humanoid". The audition system for highly intelligent humanoids localizes sound sources and recognizes auditory events in the auditory scene. Active audition as reported in this paper enables SIG to track sources by integrating audition, vision, and motor movements. Given multiple sound sources in the auditory scene, SIG actively moves its head to improve localization by aligning its microphones orthogonal to a sound source and by capturing possible sound sources by vision. However, such active head movement inevitably creates motor noise. The system adaptively cancels motor noise using motor control signals and the cover acoustics. The experimental results demonstrate that active audition, by integrating audition, vision, and motor control, attains sound source tracking in a variety of conditions.
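The abstract does not spell out SIG's localization algorithm. As a hedged illustration of the underlying idea only (a microphone pair estimates source direction from the interaural time difference, which active head motion can drive toward zero), here is a minimal cross-correlation sketch; the sampling rate, delay, and 18 cm microphone baseline are hypothetical, not SIG's geometry:

```python
import numpy as np

def estimate_lag(left, right):
    """Lag (in samples) of `right` relative to `left`, via cross-correlation."""
    corr = np.correlate(left, right, mode="full")
    return np.argmax(corr) - (len(right) - 1)

def azimuth_deg(itd_s, mic_distance_m, c=343.0):
    """Far-field azimuth (degrees) implied by an interaural time difference."""
    return np.degrees(np.arcsin(np.clip(c * itd_s / mic_distance_m, -1.0, 1.0)))

fs = 16000
rng = np.random.default_rng(0)
src = rng.standard_normal(1600)                  # 0.1 s of broadband source signal
delay = 5                                        # samples later at the right mic
left = src
right = np.concatenate([np.zeros(delay), src[:-delay]])

lag = estimate_lag(left, right)                  # negative: right lags left
angle = azimuth_deg(abs(lag) / fs, 0.18)         # assumed 18 cm baseline
```

Turning the head until the estimated lag reaches zero places the pair broadside to the source, which is the geometric intuition behind the "aligning microphones orthogonal to the sound source" strategy above.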
The Integrated Sounding System: Description and Preliminary Observations from TOGA COARE.
NASA Astrophysics Data System (ADS)
Parsons, David; Dabberdt, Walter; Cole, Harold; Hock, Terrence; Martin, Charles; Barrett, Anne-Leslie; Miller, Erik; Spowart, Michael; Howard, Michael; Ecklund, Warner; Carter, David; Gage, Kenneth; Wilson, John
1994-04-01
An Integrated Sounding System (ISS) that combines state-of-the-art remote and in situ sensors into a single transportable facility has been developed jointly by the National Center for Atmospheric Research (NCAR) and the Aeronomy Laboratory of the National Oceanic and Atmospheric Administration (NOAA/AL). The instrumentation for each ISS includes a 915-MHz wind profiler, a Radio Acoustic Sounding System (RASS), an Omega-based NAVAID sounding system, and an enhanced surface meteorological station. The general philosophy behind the ISS is that integrating various measurement systems overcomes each system's respective limitations while taking advantage of its positive attributes. The individual observing systems within the ISS provide high-level data products to a central workstation that manages and integrates these measurements. The ISS software package performs a wide range of functions: real-time data acquisition, database support, and graphical displays; data archival and communications; and operational and post-time analysis. The first deployment of the ISS consists of six sites in the western tropical Pacific: four land-based deployments and two ship-based deployments. The sites serve the Coupled Ocean-Atmosphere Response Experiment (COARE) of the Tropical Ocean and Global Atmosphere (TOGA) program and TOGA's enhanced atmospheric monitoring effort. Examples of ISS data taken during this deployment are shown to demonstrate the capabilities of this new sounding system and the performance of these in situ and remote sensing instruments in a moist tropical environment. In particular, a strong convective outflow with a pronounced impact on the atmospheric boundary layer and on heat fluxes from the ocean surface was examined with a shipboard ISS. If such strong outflows occur commonly, they may prove to be an important component of the surface energy budget of the western tropical Pacific.
Synthesis of Systemic Functional Theory & Dynamical Systems Theory for Socio-Cultural Modeling
2011-01-26
is, language and other resources (e.g. images and sound resources) are conceptualised as inter-locking systems of meaning which realise four...hierarchical ranks and strata (e.g. sounds, word groups, clauses, and complex discourse structures in language, and elements, figures and episodes in images ...integrating platform for describing how language and other resources (e.g. images and sound) work together to fulfil particular objectives. While
Development of sound measurement systems for auditory functional magnetic resonance imaging.
Nam, Eui-Cheol; Kim, Sam Soo; Lee, Kang Uk; Kim, Sang Sik
2008-06-01
Auditory functional magnetic resonance imaging (fMRI) requires quantification of sound stimuli in the magnetic environment and adequate isolation of background noise. We report the development of two novel sound measurement systems that accurately measure the sound intensity inside the ear while simultaneously providing scanner-noise protection similar to or greater than that of earmuffs. First, we placed a 2.6 x 2.6-mm microphone in an insert phone connected to a headphone [microphone-integrated, foam-tipped insert-phone with a headphone (MIHP)]. This attenuated scanner noise by 37.8+/-4.6 dB, better than the reference attenuation obtained with earmuffs. Second, a nonmetallic optical microphone was integrated with a headphone [optical microphone in a headphone (OMHP)]; it effectively detected changes in sound intensity caused by variable compression of the headphone cushions. Wearing the OMHP reduced noise by 28.5+/-5.9 dB and did not affect echo-planar magnetic resonance images. We also performed an auditory fMRI study using the MIHP system and demonstrated an increase in auditory cortical activation following a 10-dB increment in the intensity of sound stimulation. These two newly developed sound measurement systems achieved accurate quantification of sound stimuli while maintaining a level of noise protection similar to that of earmuffs in the auditory fMRI experiment.
High Definition Sounding System Test and Integration with NASA Atmospheric Science Program Aircraft
2013-09-30
of the High Definition Sounding System (HDSS) on NASA high-altitude Airborne Science Program platforms, specifically the NASA P-3 and NASA WB-57. When...demonstrate the system reliability in a Global Hawk's 62,000-ft altitude regime of thin air and very cold temperatures. APPROACH: Mission Profile: One or more WB...57 test flights will prove airworthiness and verify the High Definition Sounding System (HDSS) is safe and functional at high altitudes, essentially
Integrating sensorimotor systems in a robot model of cricket behavior
NASA Astrophysics Data System (ADS)
Webb, Barbara H.; Harrison, Reid R.
2000-10-01
The mechanisms by which animals manage sensorimotor integration and the coordination of different behaviors can be investigated in robot models. In previous work, the first author built a robot that localizes sound based on close modeling of the auditory and neural system of the cricket. It is known that the cricket combines its response to sound with other sensorimotor activities, such as an optomotor reflex and reactions to mechanical stimulation of the antennae and cerci. Behavioral evidence suggests some ways these behaviors may be integrated. We have tested the addition of an optomotor response, using an analog VLSI circuit developed by the second author, to the sound-localizing behavior and have shown that it can, as in the cricket, improve the directness of the robot's path to sound. In particular, it substantially improves behavior when the robot is subject to a motor disturbance. Our aim is to better understand how the insect brain controls complex combinations of behavior, with the hope that this will also suggest novel mechanisms for sensory integration on robots.
[Functional anatomy of the cochlear nerve and the central auditory system].
Simon, E; Perrot, X; Mertens, P
2009-04-01
The auditory pathways are a system of afferent fibers (through the cochlear nerve) and efferent fibers (through the vestibular nerve), which are not limited to a simple information transmitting system but create a veritable integration of the sound stimulus at the different levels, by analyzing its three fundamental elements: frequency (pitch), intensity, and spatial localization of the sound source. From the cochlea to the primary auditory cortex, the auditory fibers are organized anatomically in relation to the characteristic frequency of the sound signal that they transmit (tonotopy). Coding the intensity of the sound signal is based on temporal recruitment (the number of action potentials) and spatial recruitment (the number of inner hair cells recruited near the cell of the frequency that is characteristic of the stimulus). Because of binaural hearing, commissural pathways at each level of the auditory system and integration of the phase shift and the difference in intensity between signals coming from both ears, spatial localization of the sound source is possible. Finally, through the efferent fibers in the vestibular nerve, higher centers exercise control over the activity of the cochlea and adjust the peripheral hearing organ to external sound conditions, thus protecting the auditory system or increasing sensitivity by the attention given to the signal.
Bonfiglio, Paolo; Pompoli, Francesco; Lionti, Riccardo
2016-04-01
The transfer matrix method is a well-established prediction tool for the simulation of sound transmission loss and the sound absorption coefficient of flat multilayer systems. Much research has been dedicated to enhancing the accuracy of the method by introducing a finite size effect of the structure to be simulated. The aim of this paper is to present a reduced-order integral formulation to predict radiation efficiency and radiation impedance for a panel with equal lateral dimensions. The results are presented and discussed for different materials in terms of radiation efficiency, sound transmission loss, and the sound absorption coefficient. Finally, the application of the proposed methodology for rectangular multilayer systems is also investigated and validated against experimental data.
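The paper's reduced-order radiation formulation is not reproduced in the abstract, but the transfer matrix method it builds on can be sketched. Assuming normal incidence and simple homogeneous fluid layers (a deliberate simplification of the multilayer systems the paper treats), each layer contributes a 2x2 matrix, and chain-multiplying the matrices of a stack yields its transmission loss:

```python
import numpy as np

def layer_matrix(freq, thickness, density, speed):
    """2x2 transfer matrix of a homogeneous fluid layer at normal incidence."""
    k = 2 * np.pi * freq / speed          # wavenumber in the layer
    Z = density * speed                   # characteristic impedance of the layer
    kd = k * thickness
    return np.array([[np.cos(kd), 1j * Z * np.sin(kd)],
                     [1j * np.sin(kd) / Z, np.cos(kd)]])

def transmission_loss(T, rho0=1.21, c0=343.0):
    """Normal-incidence TL (dB) of a system between two air half-spaces."""
    Z0 = rho0 * c0
    return 20 * np.log10(abs(T[0, 0] + T[0, 1] / Z0 + Z0 * T[1, 0] + T[1, 1]) / 2)

# A multilayer stack is just a matrix product: T = T1 @ T2 @ ...
tl_air = transmission_loss(layer_matrix(1000.0, 0.05, 1.21, 343.0))     # air "layer": 0 dB
tl_dense = transmission_loss(layer_matrix(1000.0, 0.05, 1000.0, 1500.0))  # dense layer
```

A sanity check on the formulation: a layer with the same impedance and wavenumber as air is acoustically invisible, so its TL is zero; a dense layer gives a large positive TL. The finite-size radiation-efficiency corrections that are the paper's actual contribution sit on top of this infinite-panel baseline.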
An integrated system for dynamic control of auditory perspective in a multichannel sound field
NASA Astrophysics Data System (ADS)
Corey, Jason Andrew
An integrated system providing dynamic control of sound source azimuth, distance and proximity to a room boundary within a simulated acoustic space is proposed for use in multichannel music and film sound production. The system has been investigated, implemented, and psychoacoustically tested within the ITU-R BS.775 recommended five-channel (3/2) loudspeaker layout. The work brings together physical and perceptual models of room simulation to allow dynamic placement of virtual sound sources at any location of a simulated space within the horizontal plane. The control system incorporates a number of modules including simulated room modes, "fuzzy" sources, and tracking early reflections, whose parameters are dynamically changed according to sound source location within the simulated space. The control functions of the basic elements, derived from theories of perception of a source in a real room, have been carefully tuned to provide efficient, effective, and intuitive control of a sound source's perceived location. Seven formal listening tests were conducted to evaluate the effectiveness of the algorithm design choices. The tests evaluated: (1) loudness calibration of multichannel sound images; (2) the effectiveness of distance control; (3) the resolution of distance control provided by the system; (4) the effectiveness of the proposed system when compared to a commercially available multichannel room simulation system in terms of control of source distance and proximity to a room boundary; (5) the role of tracking early reflection patterns on the perception of sound source distance; (6) the role of tracking early reflection patterns on the perception of lateral phantom images. The listening tests confirm the effectiveness of the system for control of perceived sound source distance, proximity to room boundaries, and azimuth, through fine, dynamic adjustment of parameters according to source location. 
All of the parameters are grouped and controlled together to create a perceptually strong impression of source location and movement within a simulated space.
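The thesis's distance and reflection-tracking modules are specific to that work, but the azimuth layer of a multichannel panner of this kind conventionally rests on constant-power panning between adjacent loudspeakers of the 3/2 layout. A hedged sketch of that building block (the angles below are illustrative, not taken from the thesis):

```python
import numpy as np

def constant_power_pan(angle, left_angle, right_angle):
    """Constant-power gain pair for a source between two adjacent loudspeakers."""
    frac = (angle - left_angle) / (right_angle - left_angle)  # 0..1 across the pair
    theta = frac * (np.pi / 2)
    return np.cos(theta), np.sin(theta)        # (left gain, right gain)

# Source a third of the way from the centre speaker (0 deg) to the right one (+30 deg):
gl, gr = constant_power_pan(10.0, 0.0, 30.0)
```

Because gl**2 + gr**2 stays equal to 1, the summed acoustic power is roughly constant as the phantom source sweeps across the pair, which is why loudness calibration of multichannel images (listening test 1 above) is tractable at all.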
A Flexible 360-Degree Thermal Sound Source Based on Laser Induced Graphene
Tao, Lu-Qi; Liu, Ying; Ju, Zhen-Yi; Tian, He; Xie, Qian-Yi; Yang, Yi; Ren, Tian-Ling
2016-01-01
A flexible sound source is essential in a fully flexible system, and it is hard to integrate a conventional sound source based on a piezoelectric element into one. Moreover, the sound pressure from the back side of a sound source is usually weaker than that from the front side. With the help of direct laser writing (DLW) technology, the fabrication of a flexible 360-degree thermal sound source becomes possible. A 650-nm low-power laser was used to reduce the graphene oxide (GO). The stripped laser-induced graphene thermal sound source was then attached to the surface of a cylindrical bottle so that it could emit sound in all directions (360 degrees). The sound pressure level and directivity of the sound source were tested, and the results were in good agreement with theoretical predictions. Because of its 360-degree sound field, high flexibility, high efficiency, low cost, and good reliability, the 360-degree thermal acoustic sound source can be widely applied in consumer electronics, multimedia systems, and ultrasonic detection and imaging. PMID:28335239
Hologlyphics: volumetric image synthesis performance system
NASA Astrophysics Data System (ADS)
Funk, Walter
2008-02-01
This paper describes a novel volumetric image synthesis system and artistic technique, which generate moving volumetric images in real time, integrated with music. The system, called the Hologlyphic Funkalizer, is performance based, wherein the images and sound are controlled by a live performer, for the purposes of entertaining a live audience and creating a performance art form unique to volumetric and autostereoscopic images. While currently configured for a specific parallax barrier display, the Hologlyphic Funkalizer's architecture is completely adaptable to various volumetric and autostereoscopic display technologies. Sound is distributed through a multi-channel audio system; currently a quadraphonic speaker setup is implemented. The system controls volumetric image synthesis and the production of music and spatial sound via acoustic analysis and human gestural control, using a dedicated control panel, motion sensors, and multiple musical keyboards. Music can be produced by external acoustic instruments, pre-recorded sounds, or custom audio synthesis integrated with the volumetric image synthesis. Aspects of the sound can control the evolution of images and vice versa. Sounds can be associated and interact with images; for example, voice synthesis can be combined with an animated volumetric mouth, where nuances of generated speech modulate the mouth's expressiveness. Different images can be sent to up to 4 separate displays. The system applies many novel volumetric special effects, and extends several film and video special effects into the volumetric realm. Extensive and varied content has been developed and shown to live audiences by a live performer. Real-world applications will be explored, with feedback on the human factors.
NCAR Integrated Sounding System Observations during the SOAS / SAS Field Campaign
NASA Astrophysics Data System (ADS)
Brown, W. O.; Moore, J.
2013-12-01
The National Center for Atmospheric Research (NCAR) Earth Observing Laboratory (EOL) deployed an Integrated Sounding System (ISS) for the SOAS (Southern Oxidant and Aerosol Study) field campaign in Alabama in the summer of 2013. The ISS was split between two sites: a former NWS site approximately 1 km from the main SOAS chemistry ground site near Centerville, AL, and, about 20 km to the south, the Alabama fish hatchery site approximately 1 km from the flux tower site near Marion, AL. At the former NWS site we launched 106 radiosonde soundings and operated a 915 MHz boundary-layer radar wind profiler with RASS (Radio Acoustic Sounding System), a ceilometer, and various surface meteorological sensors. At the fish hatchery (AABC) site we operated a Leosphere Windcube 200S Doppler lidar and a Metek mini-Doppler sodar. Other NCAR facilities at the AABC site included a 45-m instrumented flux tower. This poster presents a sampling of observations made by these instruments, including examples of boundary-layer evolution and structure, and summarizes the performance of the instrumentation.
Integrated Syntactic/Semantic XML Data Validation with a Reusable Software Component
ERIC Educational Resources Information Center
Golikov, Steven
2013-01-01
Data integration is a critical component of enterprise system integration, and XML data validation is the foundation for sound data integration of XML-based information systems. Since B2B e-commerce relies on data validation as one of the critical components for enterprise integration, it is imperative for financial industries and e-commerce…
Prediction of break-out sound from a rectangular cavity via an elastically mounted panel.
Wang, Gang; Li, Wen L; Du, Jingtao; Li, Wanyou
2016-02-01
The break-out sound from a cavity via an elastically mounted panel is predicted in this paper. The vibroacoustic system model is derived based on the so-called spectro-geometric method, in which the solution over each sub-domain is invariably expressed as a modified Fourier series expansion. Unlike the traditional modal superposition methods, the continuity of the normal velocities is faithfully enforced on the interfaces between the flexible panel and the (interior and exterior) acoustic media. A fully coupled vibroacoustic system is obtained by taking into account the strong coupling between the vibration of the elastic panel and the sound fields on both sides. The typically time-consuming quadruple integrals encountered in determining the sound power radiated from a panel are effectively avoided by reducing them, via discrete cosine transforms, to a number of single integrals that are subsequently calculated analytically in closed form. Several numerical examples are presented to validate the system model, examine the effects of panel mounting conditions on sound transmission, and demonstrate the dependence of the "measured" transmission loss on the size of the source room.
Development of analog watch with minute repeater
NASA Astrophysics Data System (ADS)
Okigami, Tomio; Aoyama, Shigeru; Osa, Takashi; Igarashi, Kiyotaka; Ikegami, Tomomi
A complementary metal-oxide-semiconductor (CMOS) large-scale integrated circuit was developed for an electronic minute repeater. It is equipped with a synthetic struck-sound circuit that generates the natural struck sound a minute repeater requires. This circuit consists of an envelope-curve drawing circuit, a frequency mixer, a polyphonic mixer, and a booster circuit built using analog circuit technology. The large-scale integrated circuit is a single-chip microcomputer with motor drivers and input ports in addition to the synthetic struck-sound circuit, making it possible to build an electronic minute repeater system at very low cost in comparison with the conventional type.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lei, Y; Zhu, X; Zheng, D
Purpose: Tracking a surrogate placed on the patient's skin surface sometimes yields problematic signals for certain patients, such as shallow breathers. This in turn impairs 4D CT image quality and dosimetric accuracy. In this pilot study, we explored the feasibility of monitoring human breathing motion by integrating a breathing sound signal with surface surrogates. Methods: The breathing sound signals were acquired through a microphone attached adjacent to a volunteer's nostrils, and breathing curves were extracted using a low-pass filter. Simultaneously, the Real-time Position Management™ (RPM) system from Varian was employed on the volunteer to monitor respiratory motion in both shallow and deep breathing modes. A similar experiment was performed using the Calypso system, with three beacons taped on the volunteer's abdominal region to capture breathing motion. The period of each breathing curve was calculated with autocorrelation functions. The coherence and consistency between breathing signals acquired by the different methods were examined. Results: Clear breathing patterns were revealed by the sound signal, which was coherent with the signals obtained from both the RPM and Calypso systems. For shallow breathing, the breathing-cycle periods were 3.00±0.19 s (sound) and 3.00±0.21 s (RPM); for deep breathing, the periods were 3.49±0.11 s (sound) and 3.49±0.12 s (RPM). Compared with the 4.54±0.66 s period recorded by the Calypso system, the sound signal measured 4.64±0.54 s. The additional sound signal could supplement surface monitoring and provide new parameters to model hysteresis in lung motion. Conclusion: Our preliminary study shows that the breathing sound signal provides a way comparable to the RPM system to evaluate respiratory motion. Its instantaneous and robust characteristics make it suitable either as an independent or as an auxiliary method to manage respiratory motion in radiotherapy.
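The period extraction via autocorrelation described in the Methods can be illustrated with a minimal Python sketch; the sampling rate and the simulated 3 s breathing cycle below are hypothetical stand-ins for the low-pass-filtered microphone signal, not the study's data:

```python
import numpy as np

def breathing_period(signal, fs):
    """Estimate the dominant breathing period (s) via autocorrelation."""
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]   # non-negative lags only
    trough = np.argmax(np.diff(ac) > 0)                 # where ac stops decreasing
    peak = trough + np.argmax(ac[trough:])              # first major peak after lag 0
    return peak / fs

fs = 50.0                                   # Hz, assumed sampling rate
t = np.arange(0, 30, 1 / fs)
breath = np.sin(2 * np.pi * t / 3.0)        # simulated 3 s breathing cycle
period = breathing_period(breath, fs)
```

Skipping past the zero-lag maximum before searching for a peak is the standard trick for period estimation; on real (noisier) breathing curves one would low-pass filter first, as the abstract describes.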
Nair, Erika L; Sousa, Rhonda; Wannagot, Shannon
Guidelines established by the AAA currently recommend behavioral testing when fitting frequency-modulated (FM) systems to individuals with cochlear implants (CIs). A protocol for completing electroacoustic measures has not yet been validated for personal FM systems or digital modulation (DM) systems coupled to CI sound processors. In response, some professionals have used or altered the AAA electroacoustic verification steps for fitting FM systems to hearing aids when fitting FM systems to CI sound processors. More recently, steps were outlined in a proposed protocol. The purpose of this research is to review and compare the electroacoustic test measures outlined in a 2013 article by Schafer and colleagues in the Journal of the American Academy of Audiology, "A Proposed Electroacoustic Test Protocol for Personal FM Receivers Coupled to Cochlear Implant Sound Processors," to the AAA electroacoustic verification steps for fitting FM systems to hearing aids when fitting DM systems to CI users. Electroacoustic measures were conducted on 71 CI sound processors and Phonak Roger DM systems using the proposed protocol and an adapted AAA protocol. Phonak's recommended default receiver gain setting was used for each CI sound processor manufacturer and adjusted if necessary to achieve transparency. Electroacoustic measures were conducted on Cochlear and Advanced Bionics (AB) sound processors. In this study, 28 Cochlear Nucleus 5/CP810 sound processors, 26 Cochlear Nucleus 6/CP910 sound processors, and 17 AB Naida CI Q70 sound processors were coupled in various combinations to Phonak Roger DM dedicated receivers (25 Phonak Roger 14 receivers, the dedicated Cochlear receiver, and 9 Phonak Roger 17 receivers, the dedicated AB receiver) and 20 Phonak Roger Inspiro transmitters.
Employing both the AAA and the Schafer et al. protocols, electroacoustic measurements were conducted with the Audioscan Verifit in a clinical setting on 71 CI sound processors and Phonak Roger DM systems to determine transparency and verify FM advantage, comparing speech inputs (65 dB SPL) in an effort to achieve equal outputs. If transparency was not achieved at Phonak's recommended default receiver gain, adjustments were made to the receiver gain. The integrity of the signal was monitored with the appropriate manufacturer's monitor earphones. Using the AAA hearing aid protocol, 50 of the 71 CI sound processors achieved transparency, and 59 of the 71 achieved transparency when using the proposed protocol at Phonak's recommended default receiver gain. After the receiver gain was adjusted, 3 of 21 CI sound processors still did not meet transparency using the AAA protocol, and 2 of 12 still did not using the Schafer et al. proposed protocol. Both protocols were shown to yield reliable electroacoustic measurements and to demonstrate transparency. Both are considered clinically feasible and address the needs of populations unable to reliably report on the integrity of their personal DM systems. American Academy of Audiology
Franken, Tom P; Joris, Philip X; Smith, Philip H
2018-06-14
The brainstem's lateral superior olive (LSO) is thought to be crucial for localizing high-frequency sounds by coding interaural sound level differences (ILD). Its neurons weigh contralateral inhibition against ipsilateral excitation, making their firing rate a function of the azimuthal position of a sound source. Since the very first in vivo recordings, LSO principal neurons have been reported to give sustained and temporally integrating 'chopper' responses to sustained sounds. Neurons with transient responses were observed but largely ignored and even considered a sign of pathology. Using the Mongolian gerbil as a model system, we have obtained the first in vivo patch clamp recordings from labeled LSO neurons and find that principal LSO neurons, the most numerous projection neurons of this nucleus, only respond at sound onset and show fast membrane features suggesting an importance for timing. These results provide a new framework to interpret previously puzzling features of this circuit. © 2018, Franken et al.
Evaluation and Systems Integration of Physical Security Barrier Systems
1991-05-30
Report-index fragment (partially garbled): entries cover barrier response/deterrent systems (e.g., foam, sound, light, Nitinol), nonmagnetic Nitinol "memory metal" alloys, and a procedure to integrate physical security barrier systems.
Sound source measurement by using a passive sound insulation and a statistical approach
NASA Astrophysics Data System (ADS)
Dragonetti, Raffaele; Di Filippo, Sabato; Mercogliano, Francesco; Romano, Rosario A.
2015-10-01
This paper describes a measurement technique developed by the authors that allows acoustic measurements to be carried out inside noisy environments while reducing background noise effects. The proposed method is based on integrating a traditional passive noise insulation system with a statistical approach. The latter is applied to signals picked up by the usual sensors (microphones and accelerometers) equipping the passive sound insulation system. The statistical approach improves, at low frequency, on the sound insulation provided by the passive system alone. The developed measurement technique has been validated by means of numerical simulations and by measurements carried out inside a real noisy environment. For the case studies reported here, an average improvement of about 10 dB has been obtained in a frequency range up to about 250 Hz. Considerations on the lowest sound pressure level that can be measured by applying the proposed method, and on the measurement error related to its application, are reported as well.
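The abstract does not detail the statistical approach. One standard statistical treatment of repeated, synchronized sensor records is coherent averaging, which suppresses uncorrelated background noise by 10·log10(N) dB over N records, consistent in magnitude with the ~10 dB improvement reported; this sketch is an assumption-laden illustration, not the authors' method:

```python
import numpy as np

def snr_db(clean, noisy):
    """Signal-to-noise ratio (dB), given the known clean reference."""
    return 10 * np.log10(np.mean(clean**2) / np.mean((noisy - clean)**2))

rng = np.random.default_rng(0)
fs = 1000
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t)                 # tone to be measured

# Ten synchronized records, each buried in independent background noise:
trials = [signal + rng.standard_normal(t.size) for _ in range(10)]

single = snr_db(signal, trials[0])                  # one record
averaged = snr_db(signal, np.mean(trials, axis=0))  # coherent average of ten
```

The averaged record gains close to 10·log10(10) = 10 dB of SNR over a single record, because the coherent tone adds in amplitude while the independent noise adds only in power.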
Strori, Dorina; Zaar, Johannes; Cooke, Martin; Mattys, Sven L
2018-01-01
Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice- and sound-specificity effects. In Experiment 1, we examined two conditions in which integrality is high. Namely, the classic voice-specificity effect (Exp. 1a) was compared with a condition in which the intensity envelope of a background sound was modulated along the intensity envelope of the accompanying spoken word (Exp. 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: a change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect. Rather, it is conditioned by the extent to which words and sounds are perceived as integral as opposed to distinct auditory objects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farren Hunt
Idaho National Laboratory (INL) performed an Annual Effectiveness Review of the Integrated Safety Management System (ISMS), per 48 Code of Federal Regulations (CFR) 970.5223-1, "Integration of Environment, Safety and Health into Work Planning and Execution." The annual review assessed Integrated Safety Management (ISM) effectiveness, provided feedback to maintain system integrity, and identified target areas for focused improvements and assessments for fiscal year (FY) 2013. Results of the FY 2012 annual effectiveness review demonstrated that INL's ISMS program was significantly strengthened. Actions implemented by INL demonstrate that the overall Integrated Safety Management System is sound and ensures safe and successful performance of work while protecting workers, the public, and the environment. This report also provides several opportunities for improvement that will help further strengthen the ISM program and the pursuit of safety excellence. Demonstrated leadership and commitment, continued surveillance, and dedicated resources have been instrumental in maturing a sound ISMS program. Based upon interviews with personnel, reviews of assurance activities, and analysis of ISMS process implementation, this effectiveness review concludes that ISM is institutionalized and "Effective."
NED-2: A decision support system for integrated forest ecosystem management
Mark J. Twery; Peter D. Knopp; Scott A. Thomasma; H. Michael Rauscher; Donald E. Nute; Walter D. Potter; Frederick Maier; Jin Wang; Mayukh Dass; Hajime Uchiyama; Astrid Glende; Robin E. Hoffman
2005-01-01
NED-2 is a Windows-based system designed to improve project-level planning and decision making by providing useful and scientifically sound information to natural resource managers. Resources currently addressed include visual quality, ecology, forest health, timber, water, and wildlife. NED-2 expands on previous versions of NED applications by integrating treatment...
Active noise control for infant incubators.
Yu, Xun; Gujjula, Shruthi; Kuo, Sen M
2009-01-01
This paper presents an active noise control (ANC) system for infant incubators. Experimental results show that global noise reduction can be achieved for infant-incubator ANC systems. An audio-integration algorithm is presented that introduces a healthy (intrauterine) audio sound into the ANC system to mask the residual noise and soothe the infant. A carbon-nanotube-based transparent thin-film speaker is also introduced as the actuator that generates the destructive secondary sound for the ANC system; it significantly saves space in the congested incubator without blocking the view of doctors and nurses.
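The abstract does not give the controller details; practical incubator ANC typically uses filtered-x LMS with a measured secondary path. As a hedged illustration of the core adaptive-cancellation idea only, here is a single-channel LMS sketch with an idealized (identity) secondary path and a synthetic tonal noise, all simplifying assumptions:

```python
import numpy as np

fs = 8000
t = np.arange(0, 2, 1 / fs)
noise = np.sin(2 * np.pi * 200 * t)       # tonal fan noise at the reference mic
primary = 0.8 * np.roll(noise, 3)         # same noise at the error mic (delayed path)

taps, mu = 16, 0.01                       # adaptive-filter length and step size
w = np.zeros(taps)
err = np.zeros(t.size)
for n in range(taps, t.size):
    x = noise[n - taps:n][::-1]           # most recent reference frame
    y = w @ x                             # anti-noise emitted by the film speaker
    err[n] = primary[n] - y               # residual measured at the error mic
    w += mu * err[n] * x                  # LMS weight update

residual_power = np.mean(err[-4000:]**2)  # converged residual over the last 0.5 s
```

After convergence the residual power at the error microphone is a small fraction of the uncontrolled noise power; the masking audio described above would be added on top of this residual rather than cancelled.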
Sounds of silence: How to animate virtual worlds with sound
NASA Technical Reports Server (NTRS)
Astheimer, Peter
1993-01-01
Sounds are an integral and sometimes annoying part of our daily life. Virtual worlds which imitate natural environments gain a lot of authenticity from fast, high quality visualization combined with sound effects. Sounds help to increase the degree of immersion for human dwellers in imaginary worlds significantly. The virtual reality toolkit of IGD (Institute for Computer Graphics) features a broad range of standard visual and advanced real-time audio components which interpret an object-oriented definition of the scene. The virtual reality system 'Virtual Design' realized with the toolkit enables the designer of virtual worlds to create a true audiovisual environment. Several examples on video demonstrate the usage of the audio features in Virtual Design.
Karipidis, Iliana I.; Pleisch, Georgette; Röthlisberger, Martina; Hofstetter, Christoph; Dornbierer, Dario; Stämpfli, Philipp; Brem, Silvia
2017-02-01
Learning letter-speech sound correspondences is a major step in reading acquisition and is severely impaired in children with dyslexia. Up to now, it remains largely unknown how quickly neural networks adopt specific functions during audiovisual integration of linguistic information when prereading children learn letter-speech sound correspondences. Here, we simulated the process of learning letter-speech sound correspondences in 20 prereading children (6.13-7.17 years) at varying risk for dyslexia by training artificial letter-speech sound correspondences within a single experimental session. Subsequently, we simultaneously acquired event-related potentials (ERP) and functional magnetic resonance imaging (fMRI) scans during implicit audiovisual presentation of trained and untrained pairs. Audiovisual integration of trained pairs correlated with individual learning rates in right superior temporal, left inferior temporal, and bilateral parietal areas and with phonological awareness in left temporal areas. Correspondingly, a differential left-lateralized parietooccipitotemporal ERP at 400 ms for trained pairs correlated with learning achievement and familial risk. Finally, a late (650 ms) posterior negativity indicating audiovisual congruency of trained pairs was associated with increased fMRI activation in the left occipital cortex. Taken together, a short (<30 min) letter-speech sound training initializes audiovisual integration in neural systems that are responsible for processing linguistic information in proficient readers. To conclude, the ability to learn grapheme-phoneme correspondences, the familial history of reading disability, and phonological awareness of prereading children account for the degree of audiovisual integration in a distributed brain network. Such findings on emerging linguistic audiovisual integration could allow for distinguishing between children with typical and atypical reading development. Hum Brain Mapp 38:1038-1055, 2017.
© 2016 Wiley Periodicals, Inc.
A neurally inspired musical instrument classification system based upon the sound onset.
Newton, Michael J; Smith, Leslie S
2012-06-01
Physiological evidence suggests that sound onset detection in the auditory system may be performed by specialized neurons as early as the cochlear nucleus. Psychoacoustic evidence shows that the sound onset can be important for the recognition of musical sounds. Here the sound onset is used in isolation to form tone descriptors for a musical instrument classification task. The task involves 2085 isolated musical tones from the McGill dataset across five instrument categories. A neurally inspired tone descriptor is created using a model of the auditory system's response to sound onset. A gammatone filterbank and spiking onset detectors, built from dynamic synapses and leaky integrate-and-fire neurons, create parallel spike trains that emphasize the sound onset. These are coded as a descriptor called the onset fingerprint. Classification uses a time-domain neural network, the echo state network. Reference strategies, based upon mel-frequency cepstral coefficients, evaluated either over the whole tone or only during the sound onset, provide context to the method. Classification success rates for the neurally-inspired method are around 75%. The cepstral methods perform between 73% and 76%. Further testing with tones from the Iowa MIS collection shows that the neurally inspired method is considerably more robust when tested with data from an unrelated dataset.
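As a toy illustration of onset-driven processing, a single leaky integrate-and-fire unit can be driven by the positive part of the envelope derivative; the parameters and the single-channel front end are simplifications invented here, whereas the system above uses a full gammatone filterbank with dynamic synapses per channel:

```python
import numpy as np

def onset_spikes(signal, fs, tau=0.01, threshold=0.05):
    """Toy spiking onset detector (illustrative; not the paper's model)."""
    env = np.abs(signal)
    drive = np.maximum(np.diff(env, prepend=env[0]), 0.0)  # onset energy only
    alpha = np.exp(-1.0 / (tau * fs))                      # per-sample leak
    v, spikes = 0.0, []
    for i, d in enumerate(drive):
        v = alpha * v + d          # leaky integration of onset energy
        if v > threshold:
            spikes.append(i)       # emit a spike ...
            v = 0.0                # ... and reset the membrane
    return spikes

fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t) * (t > 0.5)  # tone starting at 0.5 s
spikes = onset_spikes(tone, fs)
print(spikes[0] / fs)  # first spike lands just after the 0.5 s onset
```

A bank of such units across frequency channels yields the parallel spike trains that the abstract codes as an "onset fingerprint".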
Felix, Richard A; Portfors, Christine V
2007-06-01
Individuals with age-related hearing loss often have difficulty understanding complex sounds such as speech. The C57BL/6 mouse suffers from progressive sensorineural hearing loss and thus is an effective tool for dissecting the neural mechanisms underlying changes in complex sound processing observed in humans. Neural mechanisms important for processing complex sounds include multiple tuning and combination sensitivity, and these responses are common in the inferior colliculus (IC) of normal hearing mice. We examined neural responses in the IC of C57BL/6 mice to single tones and combinations of tones to examine the extent of spectral integration in the IC after age-related high-frequency hearing loss. Ten percent of the neurons were tuned to multiple frequency bands and an additional 10% displayed non-linear facilitation to the combination of two different tones (combination sensitivity). No combination-sensitive inhibition was observed. By comparing these findings to spectral integration properties in the IC of normal hearing CBA/CaJ mice, we suggest that high-frequency hearing loss affects some of the neural mechanisms in the IC that underlie the processing of complex sounds. The loss of spectral integration properties in the IC during aging likely impairs the central auditory system's ability to process complex sounds such as speech.
Integration and segregation in auditory scene analysis
NASA Astrophysics Data System (ADS)
Sussman, Elyse S.
2005-03-01
Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and that the integration of sequential elements into perceptual units takes place on the already-formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.
Ganguli, S
1976-11-01
This paper introduces an integrated, objective and biomechanically sound approach for the analysis and evaluation of the functional status of lower extremity amputee-appliance systems. The method is demonstrated here in its application to the unilateral lower extremity amputee-axillary crutches system and the unilateral below-knee amputee-PTB prosthesis system, both of which are commonly encountered in day-to-day rehabilitation practice.
Multi-sensory learning and learning to read.
Blomert, Leo; Froyen, Dries
2010-09-01
The basis of literacy acquisition in alphabetic orthographies is the learning of the associations between the letters and the corresponding speech sounds. In spite of this primacy in learning to read, there is only scarce knowledge on how this audiovisual integration process works and which mechanisms are involved. Recent electrophysiological studies of letter-speech sound processing have revealed that normally developing readers take years to automate these associations and dyslexic readers hardly exhibit automation of these associations. It is argued that the reason for this effortful learning may reside in the nature of the audiovisual process that is recruited for the integration of in principle arbitrarily linked elements. It is shown that letter-speech sound integration does not resemble the processes involved in the integration of natural audiovisual objects such as audiovisual speech. The automatic symmetrical recruitment of the assumedly uni-sensory visual and auditory cortices in audiovisual speech integration does not occur for letter and speech sound integration. It is also argued that letter-speech sound integration only partly resembles the integration of arbitrarily linked unfamiliar audiovisual objects. Letter-sound integration and artificial audiovisual objects share the necessity of a narrow time window for integration to occur. However, they differ from these artificial objects, because they constitute an integration of partly familiar elements which acquire meaning through the learning of an orthography. 
Although letter-speech sound pairs share similarities with audiovisual speech processing as well as with unfamiliar, arbitrary objects, it seems that letter-speech sound pairs develop into unique audiovisual objects that furthermore have to be processed in a unique way in order to enable fluent reading and thus very likely recruit other neurobiological learning mechanisms than the ones involved in learning natural or arbitrary unfamiliar audiovisual associations. Copyright 2010 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Hemmatian, M.; Sedaghati, R.
2016-04-01
This study aims to investigate the effect of using magnetorheological elastomer (MRE)-based adaptive tuned vibration absorbers (ATVA) on the sound transmission through an elastic plate. The sound transmission loss (STL) of an elastic circular thin plate is analytically studied. The plate is excited by a plane acoustic wave as the incident sound, and the displacement of the plate is calculated using the corresponding mode shapes of the system for the clamped boundary condition. The Rayleigh integral approach is used to express the transmitted sound pressure in terms of the plate's displacement modal amplitude. In order to increase the sound transmission loss of the plate, the MRE-based ATVA is considered. The basic idea is to be able to change the stiffness of the ATVA by varying the magnetic field in order to reduce the transmitted acoustic energy of the host structure over a wide frequency range. Here, an MRE-based ATVA operating in the shear mode, consisting of an oscillator mass, a magnetic conductor, coils, and the MRE, is investigated. In order to predict the viscoelastic characteristics of the field-dependent MRE based on the applied magnetic field, the double-pole model is used. Finally, MRE-based ATVAs are integrated with the plate to absorb the plate energy with the aim of decreasing the transmitted sound power. Results show that the plate with integrated MRE-based ATVAs suppresses the axisymmetric vibration of the plate and thus considerably improves the STL. Parametric studies on the influence of the position of the MRE-based ATVAs and the effects of applied current on their performance are also presented.
Calibration of phase contrast imaging on HL-2A Tokamak
NASA Astrophysics Data System (ADS)
Yu, Y.; Gong, S. B.; Xu, M.; Xiao, C. J.; Jiang, W.; Zhong, W. L.; Shi, Z. B.; Wang, H. J.; Wu, Y. F.; Yuan, B. D.; Lan, T.; Ye, M. Y.; Duan, X. R.; HL-2A Team
2017-10-01
Phase contrast imaging (PCI) has recently been developed on the HL-2A tokamak. In this article we present the calibration of this diagnostic. The system diagnoses chord-integral density fluctuations by measuring the phase shift of a CO2 laser beam with a wavelength of 10.6 μm as the beam passes through the plasma. Sound waves are used to calibrate the PCI diagnostic. The signal series in different PCI channels show a pronounced modulation of the incident laser beam by the sound wave. A frequency-wavenumber spectrum is obtained. Calibrations by sound waves with different frequencies exhibit a maximal wavenumber response of 12 cm-1. The conversion relationship between the chord-integral plasma density fluctuation and the signal intensity is 2.3 × 1013 m-2/mV, indicating a high sensitivity.
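Applying the reported conversion relationship is a single multiplication; the constant below is the value quoted above (2.3 × 10^13 m^-2 per mV), while the example signal amplitude is invented:

```python
CAL = 2.3e13  # m^-2 per mV, the sound-wave calibration constant quoted above

def line_integrated_density_fluctuation(signal_mv):
    """Convert a PCI detector signal amplitude (mV) to the corresponding
    chord-integral plasma density fluctuation (m^-2)."""
    return CAL * signal_mv

# A hypothetical 0.5 mV signal corresponds to 1.15e13 m^-2:
print(line_integrated_density_fluctuation(0.5))
```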
NASA Astrophysics Data System (ADS)
Waheed, R.; Tarar, W.; Saeed, H. A.
2016-08-01
Sound-proof canopies for diesel power generators are fabricated with a layer of sound-absorbing material applied to all the inner walls. The physical properties of the majority of commercially available sound-proofing materials reveal that a material with a high sound absorption coefficient has very low thermal conductivity. Consequently, a good sound-absorbing material is also a good heat insulator. In this research it has been found through various experiments that ordinary sound-proofing materials tend to raise the inside temperature of the sound-proof enclosure in certain turbo engines by capturing the heat produced by the engine and not allowing it to be transferred to the atmosphere. The same phenomenon is studied by creating a finite element model of the sound-proof enclosure and performing steady-state and transient thermal analyses. The prospects of using aluminium foam as a sound-proofing material have been studied, and it is found that the inside temperature of the sound-proof enclosure can be cut down to the safe working temperature of the power generator engine without compromising sound proofing.
ERIC Educational Resources Information Center
Koh, Joyce Hwee Ling
2017-01-01
E-learning quality depends on sound pedagogical integration between the content resources and lesson activities within an e-learning system. This study proposes that a meaningful learning with technology framework can be used to guide the design and integration of content resources with e-learning activities in ways that promote learning…
Absolute auditory threshold: testing the absolute.
Heil, Peter; Matysiak, Artur
2017-11-02
The mechanisms underlying the detection of sounds in quiet, one of the simplest tasks for auditory systems, are debated. Several models proposed to explain the threshold for sounds in quiet and its dependence on sound parameters include a minimum sound intensity ('hard threshold'), below which sound has no effect on the ear. Also, many models are based on the assumption that threshold is mediated by integration of a neural response proportional to sound intensity. Here, we test these ideas. Using an adaptive forced choice procedure, we obtained thresholds of 95 normal-hearing human ears for 18 tones (3.125 kHz carrier) in quiet, each with a different temporal amplitude envelope. Grand-mean thresholds and standard deviations were well described by a probabilistic model according to which sensory events are generated by a Poisson point process with a low rate in the absence, and higher, time-varying rates in the presence, of stimulation. The subject actively evaluates the process and bases the decision on the number of events observed. The sound-driven rate of events is proportional to the temporal amplitude envelope of the bandpass-filtered sound raised to an exponent. We find no evidence for a hard threshold: When the model is extended to include such a threshold, the fit does not improve. Furthermore, we find an exponent of 3, consistent with our previous studies and further challenging models that are based on the assumption of the integration of a neural response that, at threshold sound levels, is directly proportional to sound amplitude or intensity. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
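The counting model described above can be sketched numerically: sensory events arise from a Poisson process whose rate is a spontaneous rate plus a driven term proportional to the amplitude envelope raised to the exponent 3, and the detection decision rests on the observed event count. The exponent matches the abstract, but every other parameter value below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def event_count(envelope, dt, r_spont=5.0, gain=50.0, exponent=3.0):
    """Draw a Poisson event count for one observation interval.

    rate(t) = r_spont + gain * envelope(t)**exponent; r_spont and gain
    are hypothetical, not fitted values from the study.
    """
    rate = r_spont + gain * envelope ** exponent   # events per second
    return rng.poisson(rate * dt).sum()

dt = 1e-3                                  # 1 ms bins
t = np.arange(0.0, 0.3, dt)                # 300 ms observation interval
env_silent = np.zeros_like(t)
env_tone = 0.8 * np.ones_like(t)           # constant-envelope tone

# The observer bases the forced-choice decision on the number of events:
# averaged over trials, the tone interval yields reliably more events.
silent = np.mean([event_count(env_silent, dt) for _ in range(200)])
tone = np.mean([event_count(env_tone, dt) for _ in range(200)])
print(silent, tone)
```

Note there is no hard threshold anywhere in this sketch: an arbitrarily weak envelope still raises the event rate slightly, consistent with the paper's conclusion.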
NASA Technical Reports Server (NTRS)
Platt, R.
1998-01-01
This is the Performance Verification Report. The process specification establishes the requirements for the comprehensive performance test (CPT) and limited performance test (LPT) of the Earth Observing System Advanced Microwave Sounding Unit-A2 (EOS/AMSU-A2), referred to as the unit. The unit is defined on drawing 1356006.
Multi-domain boundary element method for axi-symmetric layered linear acoustic systems
NASA Astrophysics Data System (ADS)
Reiter, Paul; Ziegelwanger, Harald
2017-12-01
Homogeneous porous materials like rock wool or synthetic foam are the main tool for acoustic absorption. The conventional absorbing structure for sound-proofing consists of one or multiple absorbers placed in front of a rigid wall, with or without air gaps in between. Various models exist to describe these so-called multi-layered acoustic systems mathematically for incoming plane waves. However, there is no efficient method to calculate the sound field in a half-space above a multi-layered acoustic system for an incoming spherical wave. In this work, an axi-symmetric multi-domain boundary element method (BEM) for absorbing multi-layered acoustic systems and incoming spherical waves is introduced. In the proposed BEM formulation, a complex wave number is used to model absorbing materials as a fluid, and a coordinate transformation is introduced which simplifies the singular integrals of the conventional BEM to non-singular radial and angular integrals. The radial and angular parts are integrated analytically and numerically, respectively. The output of the method can be interpreted as a numerical half-space Green's function for grounds consisting of layered materials.
Aquatic Acoustic Metrics Interface
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-12-18
Fishes and marine mammals may suffer a range of potential effects from exposure to intense underwater sound generated by anthropogenic activities such as pile driving, shipping, sonars, and underwater blasting. Several underwater sound recording (USR) devices have been built to acquire samples of the underwater sound generated by anthropogenic activities. Software becomes indispensable for processing and analyzing the audio files recorded by these USRs. The new Aquatic Acoustic Metrics Interface Utility Software (AAMI) is specifically designed for analysis of underwater sound recordings to provide data in metrics that facilitate evaluation of the potential impacts of the sound on aquatic animals. In addition to the basic functions, such as loading and editing audio files recorded by USRs and batch processing of sound files, the software utilizes recording system calibration data to compute important parameters in physical units. The software also facilitates comparison of the noise sound sample metrics with biological measures such as audiograms of the sensitivity of aquatic animals to the sound, integrating various components into a single analytical frame.
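The conversion from recorded samples to physical units that such software performs rests on the recording chain's calibration. A minimal sketch, assuming a flat sensitivity in volts per pascal (the sensitivity value and test signal are hypothetical; real calibration curves are frequency-dependent):

```python
import numpy as np

P_REF = 1e-6  # reference pressure for underwater sound: 1 µPa

def spl_db(samples, sensitivity_v_per_pa):
    """Convert recorded voltage samples to an RMS sound pressure level.

    sensitivity_v_per_pa is the hydrophone-plus-recorder calibration
    factor (volts per pascal); the value used below is made up.
    """
    pressure = samples / sensitivity_v_per_pa        # volts -> pascals
    rms = np.sqrt(np.mean(pressure ** 2))
    return 20.0 * np.log10(rms / P_REF)

# A 1 Pa RMS sine recorded through a 0.1 V/Pa chain gives 120 dB re 1 µPa:
t = np.arange(0, 1.0, 1 / 48000)
volts = 0.1 * np.sqrt(2) * np.sin(2 * np.pi * 100 * t)
print(spl_db(volts, 0.1))
```

Levels in physical units like this are what can then be compared against audiograms of aquatic animals.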
USDA-ARS?s Scientific Manuscript database
Sustainable agriculture is ecologically sound, economically viable, socially just, and humane. These four goals for sustainability can be applied to all aspects of any agricultural system, from production and marketing, to processing and consumption. Integrated Pest Management (IPM) may be conside...
ERIC Educational Resources Information Center
Nash, Hannah M.; Gooch, Debbie; Hulme, Charles; Mahajan, Yatin; McArthur, Genevieve; Steinmetzger, Kurt; Snowling, Margaret J.
2017-01-01
The "automatic letter-sound integration hypothesis" (Blomert, [Blomert, L., 2011]) proposes that dyslexia results from a failure to fully integrate letters and speech sounds into automated audio-visual objects. We tested this hypothesis in a sample of English-speaking children with dyslexic difficulties (N = 13) and samples of…
Educational Adequacy: Building an Adequate School Finance System.
ERIC Educational Resources Information Center
National Conference of State Legislatures, Denver, CO.
This report suggests a framework for approaching and integrating adequacy as a cornerstone principle in developing a sound state school finance system. The text defines student performance-centered expectations for the education system and suggests that districts determine the educational capacity needed to allow each student reasonable…
An integrated analysis-synthesis array system for spatial sound fields.
Bai, Mingsian R; Hua, Yi-Hsin; Kuo, Chia-Hao; Hsieh, Yu-Hao
2015-03-01
An integrated recording and reproduction array system for spatial audio is presented within a generic framework akin to the analysis-synthesis filterbanks in discrete-time signal processing. In the analysis stage, a microphone array "encodes" the sound field by using the plane-wave decomposition. The directions of arrival of the plane-wave components that comprise the sound field of interest are estimated by multiple signal classification. Next, the source signals are extracted by using a deconvolution procedure. In the synthesis stage, a loudspeaker array "decodes" the sound field by reconstructing the plane-wave components obtained in the analysis stage. This synthesis stage is carried out by pressure matching in the interior domain of the loudspeaker array. The deconvolution problem is solved by truncated singular value decomposition or convex optimization algorithms. For high-frequency reproduction, which suffers from the spatial aliasing problem, vector panning is utilized. Listening tests are undertaken to evaluate the deconvolution method, vector panning, and a hybrid approach that combines both methods to cover frequency ranges below and above the spatial aliasing frequency. Localization and timbral attributes are considered in the subjective evaluation. The results show that the hybrid approach performs the best in overall preference. In addition, there is a trade-off between reproduction performance and the external radiation.
Spacecraft Internal Acoustic Environment Modeling
NASA Technical Reports Server (NTRS)
Chu, Shao-sheng R.; Allen, Christopher S.
2009-01-01
Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources, later with experimental data. In FY09, the physical mockup developed in FY08, with an interior geometric shape similar to the Orion CM (Crew Module) IML (Interior Mold Line), was used to validate SEA (Statistical Energy Analysis) acoustic model development with realistic ventilation fan sources. The sound power levels of these sources were unknown a priori, as opposed to previous studies in which an RSS (Reference Sound Source) with known sound power level was used. The modeling results were evaluated based on comparisons to measurements of sound pressure levels over a wide frequency range, including the frequency range where SEA gives good results. Sound intensity measurement was performed over a rectangular grid system enclosing the ventilation fan source. Sound intensities were measured at the top, front, back, right, and left surfaces of the grid system. Sound intensity at the bottom surface was not measured, but sound-blocking material was placed under the bottom surface to reflect most of the incident sound energy back to the remaining measured surfaces. Integrating the measured sound intensities over the measured surfaces yields the estimated sound power of the source. The reverberation time T60 of the mockup interior had been modified to match the reverberation levels of the ISS US Lab interior for the speech frequency bands (0.5, 1, 2, and 4 kHz) by attaching appropriately sized Thinsulate sound absorption material to the interior wall of the mockup. Sound absorption of the Thinsulate was modeled by three methods: the Sabine equation with the measured mockup interior reverberation time T60, a layup model based on past impedance-tube testing, and the layup model plus an air absorption correction.
The evaluation/validation was carried out by acquiring octave-band microphone data simultaneously at ten fixed locations throughout the mockup. SPLs (Sound Pressure Levels) predicted by our SEA model match well with measurements for our CM mockup, which has a more complicated shape. Additionally in FY09, background NC (Noise Criterion) noise simulation and MRT (Modified Rhyme Test) procedures were developed and performed in the mockup to determine the maximum noise level in the CM habitable volume for fair crew voice communications. Numerous demonstrations of the simulated noise environment in the mockup and the associated SIL (Speech Interference Level) via MRT were performed for various communities, including members from NASA and Orion prime-/sub-contractors. Also, a new HSIR (Human-Systems Integration Requirement) for limiting pre- and post-landing SIL was proposed.
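The sound-power estimate described above, integrating measured normal sound intensity over the surfaces of the measurement grid with the bottom face blocked, reduces to a sum of intensity times area per face. All intensity and area values below are made up for illustration:

```python
import math

# Mean normal sound intensity (W/m^2) and area (m^2) per measured face;
# the bottom face is excluded, as in the measurement described above.
faces = {
    "top":   (2.0e-4, 0.24),
    "front": (1.5e-4, 0.12),
    "back":  (1.4e-4, 0.12),
    "left":  (1.6e-4, 0.18),
    "right": (1.7e-4, 0.18),
}

sound_power = sum(i * a for i, a in faces.values())        # watts
power_level_db = 10.0 * math.log10(sound_power / 1e-12)    # re 1 pW
print(f"W = {sound_power:.3e} W, Lw = {power_level_db:.1f} dB re 1 pW")
```

The resulting source power is what feeds the SEA model in place of a calibrated reference sound source.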
Happel, Max F. K.; Ohl, Frank W.
2017-01-01
Robust perception of auditory objects over a large range of sound intensities is a fundamental feature of the auditory system. However, firing characteristics of single neurons across the entire auditory system, like the frequency tuning, can change significantly with stimulus intensity. Physiological correlates of level-constancy of auditory representations hence should be manifested on the level of larger neuronal assemblies or population patterns. In this study we have investigated how information of frequency and sound level is integrated on the circuit-level in the primary auditory cortex (AI) of the Mongolian gerbil. We used a combination of pharmacological silencing of corticocortically relayed activity and laminar current source density (CSD) analysis. Our data demonstrate that with increasing stimulus intensities progressively lower frequencies lead to the maximal impulse response within cortical input layers at a given cortical site inherited from thalamocortical synaptic inputs. We further identified a temporally precise intercolumnar synaptic convergence of early thalamocortical and horizontal corticocortical inputs. Later tone-evoked activity in upper layers showed a preservation of broad tonotopic tuning across sound levels without shifts towards lower frequencies. Synaptic integration within corticocortical circuits may hence contribute to a level-robust representation of auditory information on a neuronal population level in the auditory cortex. PMID:28046062
Methods of recording and analysing cough sounds.
Subburaj, S; Parvez, L; Rajagopalan, T G
1996-01-01
Efforts have been directed to evolve a computerized system for acquisition and multi-dimensional analysis of the cough sound. The system consists of a PC-AT486 computer with an ADC board having 12 bit resolution. The audio cough sound is acquired using a sensitive miniature microphone at a sampling rate of 8 kHz in the computer and simultaneously recorded in real time using a digital audio tape recorder which also serves as a back up. Analysis of the cough sound is done in time and frequency domains using the digitized data which provide numerical values for key parameters like cough counts, bouts, their intensity and latency. In addition, the duration of each event and cough patterns provide a unique tool which allows objective evaluation of antitussive and expectorant drugs. Both on-line and off-line checks ensure error-free performance over long periods of time. The entire system has been evaluated for sensitivity, accuracy, precision and reliability. Successful use of this system in clinical studies has established what perhaps is the first integrated approach for the objective evaluation of cough.
Examining INM Accuracy Using Empirical Sound Monitoring and Radar Data
NASA Technical Reports Server (NTRS)
Miller, Nicholas P.; Anderson, Grant S.; Horonjeff, Richard D.; Kimura, Sebastian; Miller, Jonathan S.; Senzig, David A.; Thompson, Richard H.; Shepherd, Kevin P. (Technical Monitor)
2000-01-01
Aircraft noise measurements were made using noise monitoring systems at Denver International and Minneapolis St. Paul Airports. Measured sound exposure levels for a large number of operations of a wide range of aircraft types were compared with predictions using the FAA's Integrated Noise Model. In general it was observed that measured levels exceeded the predicted levels by a significant margin. These differences varied according to the type of aircraft and also depended on the distance from the aircraft. Many of the assumptions which affect the predicted sound levels were examined but none were able to fully explain the observed differences.
Sound symbolism scaffolds language development in preverbal infants.
Asano, Michiko; Imai, Mutsumi; Kita, Sotaro; Kitajo, Keiichi; Okada, Hiroyuki; Thierry, Guillaume
2015-02-01
A fundamental question in language development is how infants start to assign meaning to words. Here, using three Electroencephalogram (EEG)-based measures of brain activity, we establish that preverbal 11-month-old infants are sensitive to the non-arbitrary correspondences between language sounds and concepts, that is, to sound symbolism. In each trial, infant participants were presented with a visual stimulus (e.g., a round shape) followed by a novel spoken word that either sound-symbolically matched ("moma") or mismatched ("kipi") the shape. Amplitude increase in the gamma band showed perceptual integration of visual and auditory stimuli in the match condition within 300 msec of word onset. Furthermore, phase synchronization between electrodes at around 400 msec revealed intensified large-scale, left-hemispheric communication between brain regions in the mismatch condition as compared to the match condition, indicating heightened processing effort when integration was more demanding. Finally, event-related brain potentials showed an increased adult-like N400 response - an index of semantic integration difficulty - in the mismatch as compared to the match condition. Together, these findings suggest that 11-month-old infants spontaneously map auditory language onto visual experience by recruiting a cross-modal perceptual processing system and a nascent semantic network within the first year of life. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
An Audio Architecture Integrating Sound and Live Voice for Virtual Environments
2002-09-01
implementation of a virtual environment. As real-world training locations become scarce and training budgets are trimmed, training system developers look more and more towards virtual environments as the answer. Virtual environments provide training system developers with several key benefits
Absolute calibration of Phase Contrast Imaging on HL-2A tokamak
NASA Astrophysics Data System (ADS)
Yu, Yi; Gong, Shaobo; Xu, Min; Wu, Yifan; Yuan, Boda; Ye, Minyou; Duan, Xuru; HL-2A Team
2017-10-01
Phase contrast imaging (PCI) has recently been developed on the HL-2A tokamak. In this article we present the calibration of this diagnostic. The system diagnoses chord-integral density fluctuations by measuring the phase shift of a CO2 laser beam with a wavelength of 10.6 μm as the beam passes through the plasma. Sound waves are used to calibrate the PCI diagnostic. The signal series in different PCI channels show a pronounced modulation of the incident laser beam by the sound wave. A frequency-wavenumber spectrum is obtained. Calibrations by sound waves with different frequencies exhibit a maximal wavenumber response of 12 cm-1. The conversion relationship between the chord-integral plasma density fluctuation and the signal intensity is 2.3 × 1013 m-2/mV, indicating a high sensitivity. Supported by the National Magnetic Confinement Fusion Energy Research Project (Grant Nos. 2015GB120002 and 2013GB107001).
An FPGA-Based Rapid Wheezing Detection System
Lin, Bor-Shing; Yen, Tian-Shiue
2014-01-01
Wheezing is often treated as a crucial indicator in the diagnosis of obstructive pulmonary diseases. A rapid wheezing detection system may help physicians to monitor patients over the long term. In this study, a portable wheezing detection system based on a field-programmable gate array (FPGA) is proposed. This system accelerates wheezing detection, and can be used either as a single-process system or as an integrated part of another biomedical signal detection system. The system segments sound signals into 2-second units. A short-time Fourier transform was used to determine the relationship between the time and frequency components of wheezing sound data. A spectrogram was processed using 2D bilateral filtering, edge detection, multithreshold image segmentation, morphological image processing, and image labeling, to extract wheezing features according to computerized respiratory sound analysis (CORSA) standards. These features were then used to train the support vector machine (SVM) and build the classification models. The trained model was used to analyze sound data to detect wheezing. The system runs on a Xilinx Virtex-6 FPGA ML605 platform. The experimental results revealed that the system offered excellent wheezing recognition performance (0.912). The detection process runs at a clock frequency of 51.97 MHz and performs rapid wheezing classification. PMID:24481034
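The spectrogram-then-classify pipeline described above can be illustrated with a much-simplified software sketch. This is not the paper's FPGA implementation: the window sizes, the peak-dominance feature, and the 0.2 threshold are assumptions standing in for the CORSA-based image-processing features and the SVM.

```python
import numpy as np

def stft_power(x, fs, win=256, hop=128):
    """Power spectrogram via a Hann-windowed short-time Fourier transform."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2

def wheeze_score(x, fs):
    """Fraction of frames whose power spectrum is dominated by one narrow
    peak -- a crude stand-in for the paper's CORSA feature extraction.
    The 0.2 dominance threshold is an assumption."""
    P = stft_power(x, fs)
    return float(np.mean(P.max(axis=1) / (P.sum(axis=1) + 1e-12) > 0.2))

fs = 4000
t = np.arange(0, 2.0, 1 / fs)         # one 2-second analysis unit
rng = np.random.default_rng(0)
wheeze = np.sin(2 * np.pi * 400 * t) + 0.2 * rng.standard_normal(t.size)
breath = rng.standard_normal(t.size)  # broadband breath noise only
print(wheeze_score(wheeze, fs) > wheeze_score(breath, fs))  # True
```

A tonal wheeze concentrates energy in a narrow frequency ridge, so its peak-to-total power ratio stays high across frames, while broadband breath noise does not.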
ERIC Educational Resources Information Center
Clayton, Francina J.; Hulme, Charles
2018-01-01
The automatic letter-sound integration hypothesis proposes that the decoding difficulties seen in dyslexia arise from a specific deficit in establishing automatic letter-sound associations. We report the findings of 2 studies in which we used a priming task to assess automatic letter-sound integration. In Study 1, children between 5 and 7 years of…
Source and listener directivity for interactive wave-based sound propagation.
Mehra, Ravish; Antani, Lakulish; Kim, Sujeong; Manocha, Dinesh
2014-04-01
We present an approach to model dynamic, data-driven source and listener directivity for interactive wave-based sound propagation in virtual environments and computer games. Our directional source representation is expressed as a linear combination of elementary spherical harmonic (SH) sources. In the preprocessing stage, we precompute and encode the propagated sound fields due to each SH source. At runtime, we perform the SH decomposition of the varying source directivity interactively and compute the total sound field at the listener position as a weighted sum of precomputed SH sound fields. We propose a novel plane-wave decomposition approach based on higher-order derivatives of the sound field that enables dynamic HRTF-based listener directivity at runtime. We provide a generic framework to incorporate our source and listener directivity in any offline or online frequency-domain wave-based sound propagation algorithm. We have integrated our sound propagation system in Valve's Source game engine and use it to demonstrate realistic acoustic effects such as sound amplification, diffraction low-passing, scattering, localization, externalization, and spatial sound, generated by wave-based propagation of directional sources and listener in complex scenarios. We also present results from our preliminary user study.
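The runtime step described above, computing the total field at the listener as a weighted sum of precomputed SH sound fields, can be sketched as follows; the field data, array shapes, and names are hypothetical stand-ins for the precomputed per-SH-source fields.

```python
import numpy as np

# Hypothetical precomputed data: one propagated sound field per elementary
# SH source, sampled at P listener positions (random stand-in values).
rng = np.random.default_rng(1)
num_sh, P = 9, 4  # SH sources up to order 2: (2+1)^2 = 9
sh_fields = rng.standard_normal((num_sh, P)) + 1j * rng.standard_normal((num_sh, P))

def total_field(sh_coeffs, sh_fields):
    """Runtime step: the field at the listener is a weighted sum of the
    precomputed SH fields, with weights given by the SH decomposition of
    the (possibly time-varying) source directivity."""
    return np.tensordot(sh_coeffs, sh_fields, axes=1)

# An omnidirectional source projects entirely onto Y00, so its propagated
# field equals the first precomputed SH field.
omni = np.zeros(num_sh); omni[0] = 1.0
print(np.allclose(total_field(omni, sh_fields), sh_fields[0]))  # True
```

Because the per-source fields are precomputed offline, the runtime cost is just this weighted sum plus the SH projection of the current directivity.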
General RMP Guidance - Chapter 6: Prevention Program (Program 2)
Sound prevention practices are founded on safety information, hazard review, operating procedures, training, maintenance, compliance audits, and accident investigation. These must be integrated into a risk management system that you implement consistently.
75 FR 42431 - Notice of Intent To Grant Partially Exclusive License; METOCEAN Data System
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-21
... exclusive license, with exclusive fields of use in portable acoustic scoring, acoustic sounding and..., issued February 7, 2006, entitled ``Integrated Maritime Portable Acoustic Scoring and Simulator Control...
A spatially collocated sound thrusts a flash into awareness
Aller, Máté; Giani, Anette; Conrad, Verena; Watanabe, Masataka; Noppeney, Uta
2015-01-01
To interact effectively with the environment, the brain integrates signals from multiple senses. It is currently unclear to what extent spatial information can be integrated across different senses in the absence of awareness. Combining dynamic continuous flash suppression (CFS) and spatial audiovisual stimulation, the current study investigated whether a sound facilitates a concurrent visual flash to elude flash suppression and enter perceptual awareness depending on audiovisual spatial congruency. Our results demonstrate that a concurrent sound boosts unaware visual signals into perceptual awareness. Critically, this process depended on the spatial congruency of the auditory and visual signals, pointing toward low-level mechanisms of audiovisual integration. Moreover, the concurrent sound biased the reported location of the flash as a function of flash visibility. The spatial bias of sounds on reported flash location was strongest for flashes that were judged invisible. Our results suggest that multisensory integration is a critical mechanism that enables signals to enter conscious perception. PMID:25774126
Sounds can boost the awareness of visual events through attention without cross-modal integration.
Pápai, Márta Szabina; Soto-Faraco, Salvador
2017-01-31
Cross-modal interactions can lead to enhancement of visual perception, even for visual events below awareness. However, the underlying mechanism is still unclear. Can purely bottom-up cross-modal integration break through the threshold of awareness? We used a binocular rivalry paradigm to measure perceptual switches after brief flashes or sounds which sometimes co-occurred. When flashes at the suppressed eye coincided with sounds, perceptual switches occurred earliest. Yet, contrary to the hypothesis of cross-modal integration, this facilitation never exceeded what probability summation of independent sensory signals would predict. A follow-up experiment replicated the same pattern of results using silent gaps embedded in continuous noise instead of sounds. This manipulation should weaken putative sound-flash integration while keeping the gaps salient as bottom-up attention cues. Additional results showed that spatial congruency between flashes and sounds did not determine the effectiveness of cross-modal facilitation, which was again no better than probability summation. Thus, the present findings fail to fully support the hypothesis of bottom-up cross-modal integration, above and beyond the independent contribution of two transient signals, as an account of cross-modal enhancement of visual events below the level of awareness.
The Research Path to the Virtual Class. ZIFF Papiere 105.
ERIC Educational Resources Information Center
Rajasingham, Lalita
This paper describes a project conducted in 1991-92, based on research conducted in 1986-87 that demonstrated the need for a telecommunications system with the capacity of integrated services digital networks (ISDN) that would allow for sound, vision, and integrated computer services. Called the Tri-Centre Project, it set out to explore, from the…
ERIC Educational Resources Information Center
Wei, Wei; Yue, Kwok-Bun
2017-01-01
Concept map (CM) is a theoretically sound yet easy to learn tool and can be effectively used to represent knowledge. Even though many disciplines have adopted CM as a teaching and learning tool to improve learning effectiveness, its application in IS curriculum is sparse. Meaningful learning happens when one iteratively integrates new concepts and…
1997-09-30
field experiments in Puget Sound. Each research vessel will use multi-sensor profiling instrument packages which obtain high-resolution physical...field deployment of the wireless network is planned for May-July, 1998, at Orcas Island, WA. IMPACT We expect that wireless communication systems will...East Sound project to be a first step toward continental shelf and open ocean deployments with the next generation of wireless and satellite
Nagashino, Hirofumi; Kinouchi, Yohsuke; Danesh, Ali A; Pandya, Abhijit S
2013-01-01
Tinnitus is the perception of sound in the ears or in the head when no external source is present. Sound therapy is one of the most effective techniques proposed for tinnitus treatment. In order to investigate the mechanisms of tinnitus generation and the clinical effects of sound therapy, we have proposed conceptual and computational models with plasticity using a neural oscillator or a neuronal network model. In the present paper, we propose a neuronal network model with simplified tonotopicity of the auditory system as a more detailed structure. In this model an integrate-and-fire neuron model is employed and homeostatic plasticity is incorporated. The computer simulation results show that the present model can reproduce the generation of oscillation and its cessation by external input, suggesting that this framework is promising as a model of tinnitus generation and the effects of sound therapy.
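The model's two building blocks can be sketched minimally, assuming a standard leaky integrate-and-fire neuron and a simple rate-based homeostatic rule; the parameters and the `homeostatic_gain` form are illustrative assumptions, not the authors' equations.

```python
import numpy as np

def lif_spikes(i_ext, dt=1e-3, tau=0.02, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: dv/dt = (-v + i_ext)/tau,
    with a spike and reset whenever v crosses threshold."""
    v, spikes = 0.0, 0
    for i in i_ext:
        v += dt * (-v + i) / tau
        if v >= v_th:
            spikes += 1
            v = v_reset
    return spikes

def homeostatic_gain(rate, target=10.0, gain=1.0, eta=0.01):
    """Homeostatic plasticity (sketch): nudge the input gain so the firing
    rate drifts toward a target -- the kind of rule that can sustain or
    silence ongoing oscillation depending on external input."""
    return gain + eta * (target - rate)

n = lif_spikes(np.full(1000, 2.0))  # 1 s of constant supra-threshold drive
print(n > 0)                        # True: the neuron fires repeatedly
print(homeostatic_gain(rate=n) < 1.0)  # True: above-target rate lowers gain
```

Supra-threshold drive yields sustained firing, and the homeostatic rule then reduces the gain, illustrating how external input can push such a network out of a self-sustained (tinnitus-like) oscillatory state.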
Aquatic Acoustic Metrics Interface Utility for Underwater Sound Monitoring and Analysis
Ren, Huiying; Halvorsen, Michele B.; Deng, Zhiqun Daniel; Carlson, Thomas J.
2012-01-01
Fishes and marine mammals may suffer a range of potential effects from exposure to intense underwater sound generated by anthropogenic activities such as pile driving, shipping, sonars, and underwater blasting. Several underwater sound recording (USR) devices have been built to acquire samples of the underwater sound generated by anthropogenic activities. Software is indispensable for processing and analyzing the audio files recorded by these USRs. In this paper, we provide a detailed description of a new software package, the Aquatic Acoustic Metrics Interface (AAMI), specifically designed for analysis of underwater sound recordings to provide data in metrics that facilitate evaluation of the potential impacts of the sound on aquatic animals. In addition to basic functions, such as loading and editing audio files recorded by USRs and batch processing of sound files, the software utilizes recording system calibration data to compute important parameters in physical units. The software also facilitates comparison of the recorded sound metrics with biological measures, such as audiograms of the sensitivity of aquatic animals to sound, integrating the various components into a single analytical framework. The features of the AAMI software are discussed, and several case studies are presented to illustrate its functionality. PMID:22969353
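The calibration-based conversion to physical units described above might look like this in outline; the function and parameter names are assumptions for illustration, not the AAMI API.

```python
import numpy as np

def spl_rms_db(samples_v, cal_v_per_upa):
    """Convert a recorded voltage trace to rms sound pressure level
    (dB re 1 μPa) using the recording system's calibration -- the kind of
    physical-units metric the AAMI computes from calibration data."""
    pressure_upa = samples_v / cal_v_per_upa   # volts -> micropascals
    rms = np.sqrt(np.mean(pressure_upa ** 2))
    return 20.0 * np.log10(rms)

# Synthetic 500 Hz recording, 1 s at 48 kHz, with an assumed hydrophone
# sensitivity of 1e-9 V/μPa.
t = np.linspace(0, 1, 48000, endpoint=False)
tone_v = 1e-4 * np.sin(2 * np.pi * 500 * t)
print(round(spl_rms_db(tone_v, cal_v_per_upa=1e-9)))  # 97
```

Such a level could then be compared directly against a species audiogram expressed in the same dB re 1 μPa units.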
HESTIA Commodities Exchange Pallet and Sounding Rocket Test Stand
NASA Technical Reports Server (NTRS)
Chaparro, Javier
2013-01-01
During my Spring 2016 internship, my two major contributions were the design of the Commodities Exchange Pallet and the design of a test stand for a 100 pounds-thrust sounding rocket. The Commodities Exchange Pallet is a prototype developed for the Human Exploration Spacecraft Testbed for Integration and Advancement (HESTIA) program. Under the HESTIA initiative, the Commodities Exchange Pallet was developed as a method for demonstrating multi-system integration through the transportation of In-Situ Resource Utilization produced oxygen and water to a human habitat. Ultimately, this prototype's performance will allow for future evaluation of integration, which may lead to the development of a flight-capable pallet for future deep-space exploration missions. For HESTIA, my main task was to design the Commodities Exchange Pallet system to be used for completing an integration demonstration. Under the guidance of my mentor, I designed both the structural frame and the fluid delivery system for the commodities pallet. The fluid delivery system includes a liquid-oxygen to gaseous-oxygen system, a water delivery system, and a carbon-dioxide compressor system. The structural frame is designed to meet safety and transportation requirements, as well as to interface with the ER division's Portable Utility Pallet. The commodities pallet structure also includes independent oxygen/water instrumentation panels for operation and system monitoring. My major accomplishments for the Commodities Exchange Pallet were the completion of the fluid delivery system and structural frame designs. In addition, parts selection was completed in order to expedite construction of the prototype, scheduled to begin in May of 2016. Once the commodities pallet is assembled and tested, it is expected to complete a fully integrated transfer demonstration with the ISRU unit and the Environmental Control and Life Support System test chamber in September of 2016.
In addition to the development of the Commodities Exchange Pallet, I also assisted in preparation for testing the upper stage of a sounding rocket developed as a Center Innovation Fund project. The main objective of this project is to demonstrate the integration between a propulsion system and a solid oxide fuel cell (SOFC). The upper stage and SOFC are scheduled to complete an integrated test in August of 2016. As part of preparation for the scheduled testing, I was responsible for designing the upper stage's test stand/support structure and the main engine plume deflector to be used during hot-fire testing (fig. 3). The structural components of the test stand need to meet safety requirements for operation of the propulsion system, which consists of a 100 pounds-thrust main engine and two 15 pounds-thrust reaction control thrusters. My main accomplishment for this project was the completion of the design and the parts selection for construction of the structure, scheduled to begin in late April of 2016.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-31
... Financial Integrity of Markets c. Price Discovery d. Sound Risk Management Practices e. Other Public Interest Considerations E. Costs and Benefits ...
ERIC Educational Resources Information Center
Sabatini, John P.
An analysis was conducted of the results of a formative evaluation of the LiteracyLink "Workplace Essential Skills" (WES) learning system conducted in the fall of 1998. (The WES learning system is a multimedia learning system integrating text, sound, graphics, animation, video, and images in a computer system and includes a videotape series, a…
Modeling complex tone perception: grouping harmonics with combination-sensitive neurons.
Medvedev, Andrei V; Chiao, Faye; Kanwal, Jagmeet S
2002-06-01
Perception of complex communication sounds is a major function of the auditory system. To create a coherent percept of these sounds, the auditory system may instantaneously group or bind multiple harmonics within complex sounds. This perception strategy simplifies further processing of complex sounds and facilitates their meaningful integration with other sensory inputs. Based on experimental data and a realistic model, we propose that associative learning of combinations of harmonic frequencies and nonlinear facilitation of responses to those combinations, also referred to as "combination-sensitivity," are important for spectral grouping. For our model, we simulated combination sensitivity using Hebbian and associative types of synaptic plasticity in auditory neurons. We also provided a parallel tonotopic input that converges and diverges within the network. Neurons in higher-order layers of the network exhibited an emergent property of multifrequency tuning that is consistent with experimental findings. Furthermore, this network had the capacity to "recognize" the pitch or fundamental frequency of a harmonic tone complex even when the fundamental frequency itself was missing.
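The idea of learning combination sensitivity through Hebbian plasticity can be sketched with a toy network; this is not the authors' model, and the initial weights, learning rate, and normalization scheme are assumptions.

```python
import numpy as np

def train_hebb(patterns, eta=0.1, epochs=50):
    """Hebbian learning on tonotopic input channels: a weight grows when
    its input is active together with the postsynaptic response, and the
    weight vector is renormalized to stay bounded."""
    w = np.full(patterns.shape[1], 0.1)  # small uniform initial weights
    for _ in range(epochs):
        for x in patterns:
            y = w @ x                    # postsynaptic response
            w += eta * y * x             # Hebbian update
            w /= max(np.linalg.norm(w), 1e-9)
    return w

# Training set: harmonics 1 and 3 always co-occur (a "combination"),
# so the neuron should come to prefer that pair of channels.
patterns = np.array([[1.0, 0.0, 1.0, 0.0]] * 5)
w = train_hebb(patterns)
resp_combo = w @ np.array([1.0, 0.0, 1.0, 0.0])
resp_single = w @ np.array([1.0, 0.0, 0.0, 0.0])
print(resp_combo > resp_single)  # True: the combination drives it harder
```

After training, the learned weights concentrate on the co-occurring channels, giving the facilitated response to the harmonic combination that the abstract calls combination sensitivity.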
Employing Knowledge Transfer to Support IS Implementation in SMEs
ERIC Educational Resources Information Center
Wynn, Martin; Turner, Phillip; Abas, Hanida; Shen, Rui
2009-01-01
Information systems strategy is an increasingly important component of overall business strategy in small and medium-sized enterprises (SMEs). The need for readily available and consistent management information, drawn from integrated systems based on sound and upgradeable technologies, has led many senior company managers to review the business…
Cortical Interactions Underlying the Production of Speech Sounds
ERIC Educational Resources Information Center
Guenther, Frank H.
2006-01-01
Speech production involves the integration of auditory, somatosensory, and motor information in the brain. This article describes a model of speech motor control in which a feedforward control system, involving premotor and primary motor cortex and the cerebellum, works in concert with auditory and somatosensory feedback control systems that…
NASA Astrophysics Data System (ADS)
Mirus, B. B.; Baum, R. L.; Stark, B.; Smith, J. B.; Michel, A.
2015-12-01
Previous USGS research on landslide potential in hillside areas and coastal bluffs around Puget Sound, WA, has identified rainfall thresholds and antecedent moisture conditions that correlate with heightened probability of shallow landslides. However, physically based assessments of temporal and spatial variability in landslide potential require improved quantitative characterization of the hydrologic controls on landslide initiation in heterogeneous geologic materials. Here we present preliminary steps towards integrating monitoring of hydrologic response with physically based numerical modeling to inform the development of a landslide warning system for a railway corridor along the eastern shore of Puget Sound. We instrumented two sites along the steep coastal bluffs - one active landslide and one currently stable slope with the potential for failure - to monitor rainfall, soil-moisture, and pore-pressure dynamics in near-real time. We applied a distributed model of variably saturated subsurface flow for each site, with heterogeneous hydraulic-property distributions based on our detailed site characterization of the surficial colluvium and the underlying glacial-lacustrine deposits that form the bluffs. We calibrated the model with observed volumetric water content and matric potential time series, then used simulated pore pressures from the calibrated model to calculate the suction stress and the corresponding distribution of the factor of safety against landsliding with the infinite slope approximation. Although the utility of the model is limited by uncertainty in the deeper groundwater flow system, the continuous simulation of near-surface hydrologic response can help to quantify the temporal variations in the potential for shallow slope failures at the two sites. Thus the integration of near-real time monitoring and physically based modeling contributes a useful tool towards mitigating hazards along the Puget Sound railway corridor.
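The factor-of-safety step described above can be illustrated with a textbook infinite-slope formulation extended with suction stress. This is a hedged sketch, not the authors' calibrated model, and the parameter values are invented for illustration.

```python
import math

def factor_of_safety(c, phi_deg, gamma, z, beta_deg, sigma_s=0.0):
    """Infinite-slope factor of safety with suction stress (one common
    textbook form, not the authors' exact code):
      FS = [c + (gamma*z*cos^2(beta) - sigma_s) * tan(phi)]
           / (gamma*z*sin(beta)*cos(beta))
    where c is cohesion (Pa), phi the friction angle, gamma unit weight
    (N/m^3), z slip depth (m), beta slope angle, and sigma_s suction
    stress (Pa, negative when the soil is unsaturated)."""
    phi, beta = math.radians(phi_deg), math.radians(beta_deg)
    tau = gamma * z * math.sin(beta) * math.cos(beta)   # driving shear stress
    sigma_n = gamma * z * math.cos(beta) ** 2           # total normal stress
    return (c + (sigma_n - sigma_s) * math.tan(phi)) / tau

# Wetting during a storm drives suction stress toward zero, so FS drops.
dry = factor_of_safety(c=5e3, phi_deg=33, gamma=19e3, z=2.0, beta_deg=40,
                       sigma_s=-10e3)
wet = factor_of_safety(c=5e3, phi_deg=33, gamma=19e3, z=2.0, beta_deg=40,
                       sigma_s=0.0)
print(dry > wet)  # True
```

Feeding simulated pore pressures from the hydrologic model into such a relation is what turns monitored rainfall into a time series of landslide potential.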
NASA Technical Reports Server (NTRS)
Fay, M.
1998-01-01
This Contamination Control Plan is submitted in response to the Contract Data Requirements List (CDRL) 007 under contract NAS5-32314 for the Earth Observing System (EOS) Advanced Microwave Sounding Unit A (AMSU-A). In response to the CDRL instructions, this document defines the level of cleanliness and the methods/procedures to be followed to achieve adequate cleanliness/contamination control, and defines the required approach to maintain cleanliness/contamination control through shipping, observatory integration, test, and flight. This plan is also applicable to the Meteorological Satellite (METSAT) except where requirements are identified as EOS-specific. This plan is based on two key factors: a. The EOS/METSAT AMSU-A instruments are not highly contamination sensitive. b. Potential contamination of other EOS instruments is a key concern, as addressed in Section 9.0 of the Performance Assurance Requirements for EOS/METSAT Integrated Programs AMSU-A Instrument (MR) (NASA Specification S-480-79).
Design of laser monitoring and sound localization system
NASA Astrophysics Data System (ADS)
Liu, Yu-long; Xu, Xi-ping; Dai, Yu-ming; Qiao, Yang
2013-08-01
In this paper, a novel design for a laser monitoring and sound localization system is proposed. It uses a laser to monitor indoor conversation and locate its source. At present, most laser monitors in China, whether laboratory or field instruments, use a photodiode or phototransistor as the detector. At the laser receivers of those facilities, the light beam is adjusted so that only part of the detector window receives it. The reflected beam deviates from its original path because of the vibration of the monitored window, which shifts the imaging spot on the photodiode or phototransistor. This method is limited, however, both because it admits considerable stray light into the receiver and because it yields only a single photocurrent output. We therefore propose a new method based on a quadrant detector, which uses the relation of the optical integrals among the quadrants to locate the imaging spot. This method eliminates background disturbance and acquires two-dimensional spot-vibration data. The principle of the whole system is as follows. A collimated laser beam is reflected from a window that vibrates with the sound source, so the reflected beam is modulated by the vibration. The optical signals are collected by quadrant detectors and then processed by photoelectric converters and the corresponding circuits, and the speech signals are eventually reconstructed. In addition, sound source localization is implemented by detecting three different reflected beams simultaneously. An indoor mathematical model based on the principle of Time Difference Of Arrival (TDOA) is established to calculate the two-dimensional coordinates of the sound source. Experiments showed that this system is able to monitor an indoor sound source beyond 15 meters with high-quality speech reconstruction and to locate the sound source position accurately.
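The TDOA localization step can be sketched as follows, assuming three sensing points in a plane and a brute-force grid search; the paper does not describe its solver in this detail, so the geometry and method here are illustrative.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def tdoa(src, sensors):
    """Time differences of arrival relative to sensor 0."""
    d = np.linalg.norm(sensors - src, axis=1)
    return (d - d[0]) / C

def locate(measured, sensors, grid=np.linspace(0.0, 15.0, 151)):
    """Brute-force search for the 2-D source position whose predicted
    TDOAs best reproduce the measured ones (a minimal sketch; a real
    system would solve the hyperbolic equations directly)."""
    best, best_err = None, np.inf
    for x in grid:
        for y in grid:
            cand = np.array([x, y])
            err = np.sum((tdoa(cand, sensors) - measured) ** 2)
            if err < best_err:
                best, best_err = cand, err
    return best

sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # three pickups
true_src = np.array([6.0, 4.5])
est = locate(tdoa(true_src, sensors), sensors)
print(np.allclose(est, true_src, atol=0.1))  # True
```

Three sensing points give two independent time differences, which is the minimum needed to pin down a source position in the plane.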
Development of the Astrobee F sounding rocket system.
NASA Technical Reports Server (NTRS)
Jenkins, R. B.; Taylor, J. P.; Honecker, H. J., Jr.
1973-01-01
The development of the Astrobee F sounding rocket vehicle through the first flight test at NASA-Wallops Station is described. Design and development of a 15 in. diameter, dual-thrust, solid propellant motor demonstrating several new technology features provided the basis for the flight vehicle. The 'F' motor test program described demonstrated the following advanced propulsion technology: tandem dual grain configuration, low burning rate HTPB case-bonded propellant, and molded plastic nozzle. The resultant motor, integrated into a flight vehicle, was successfully flown with extensive diagnostic instrumentation.
Data processing for the DMSP microwave radiometer system
NASA Technical Reports Server (NTRS)
Rigone, J. L.; Stogryn, A. P.
1977-01-01
A software program was developed and tested to process microwave radiometry data to be acquired by the microwave sensor (SSM/T) on the Defense Meteorological Satellite Program spacecraft. The SSM/T 7-channel microwave radiometer and systems data will be data-linked to Air Force Global Weather Central (AFGWC) where they will be merged with ephemeris data prior to product processing for use in the AFGWC upper air data base (UADB). The overall system utilizes an integrated design to provide atmospheric temperature soundings for global applications. The fully automated processing at AFGWC was accomplished by four related computer processor programs to produce compatible UADB soundings, evaluate system performance, and update the a priori developed inversion matrices. Tests with simulated data produced results significantly better than climatology.
[Perception and selectivity of sound duration in the central auditory midbrain].
Wang, Xin; Li, An-An; Wu, Fei-Jian
2010-08-25
Sound duration plays an important role in acoustic communication. Information in acoustic signals is encoded mainly in the amplitude and frequency spectra of sounds of different durations. Duration-selective neurons exist in the central auditory system, including the inferior colliculus (IC) of frogs, bats, mice and chinchillas, and they are important in signal recognition and feature detection. Two generally accepted models, the "coincidence detector model" and the "anti-coincidence detector model", have been proposed to explain the mechanism of neural selective responses to sound durations, based on the study of IC neurons in bats. Although they differ in details, both emphasize the importance of synaptic integration of excitatory and inhibitory inputs, and both are able to explain the responses of most duration-selective neurons. However, both hypotheses need to be refined, since other sound parameters, such as spectral pattern, amplitude and repetition rate, can affect the duration selectivity of the neurons. The dynamic changes of sound parameters are believed to enable the animal to effectively recognize behaviorally relevant acoustic signals. Under free-field sound stimulation, we analyzed the neural responses in the IC and auditory cortex of mouse and bat to sounds of different duration, frequency and amplitude, using intracellular or extracellular recording techniques. Based on our work and previous studies, this article reviews the properties of duration selectivity in the central auditory system and discusses the mechanisms of duration selectivity and the effect of other sound parameters on the duration coding of auditory neurons.
Microsoft C#.NET program and electromagnetic depth sounding for large loop source
NASA Astrophysics Data System (ADS)
Prabhakar Rao, K.; Ashok Babu, G.
2009-07-01
A program, in the C# (C Sharp) language with the Microsoft .NET Framework, is developed to compute the normalized vertical magnetic field of a horizontal rectangular loop source placed on the surface of an n-layered earth. The field can be calculated either inside or outside the loop. Five C# classes with member functions in each class are designed to compute the kernel, the Hankel transform integral, the coefficients for cubic spline interpolation between computed values, and the normalized vertical magnetic field. The program computes the vertical magnetic field in the frequency domain using the integral expressions, evaluated by a combination of straightforward numerical integration and the digital filter technique. The code utilizes different object-oriented programming (OOP) features. It finally computes the amplitude and phase of the normalized vertical magnetic field. The computed results are presented for geometric and parametric soundings. The code is developed in Microsoft Visual Studio .NET 2003 and uses various system class libraries.
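The Hankel-transform evaluation at the heart of such codes can be illustrated with a small sketch; the abstract's program is in C#, so this Python version is only an analogue, and it uses the simpler of the two schemes mentioned (straightforward numerical integration), checked against a known closed form. A production code would switch to a digital filter for the oscillatory tail.

```python
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

def hankel0(f, r, lam_max=200.0):
    """Zeroth-order Hankel transform H(r) = ∫₀^∞ f(λ) J0(λ r) λ dλ,
    truncated at lam_max and evaluated by adaptive quadrature."""
    val, _ = quad(lambda lam: f(lam) * j0(lam * r) * lam,
                  0.0, lam_max, limit=500)
    return val

# Check against the closed form ∫₀^∞ e^{-λ} J0(λ r) λ dλ = (1 + r²)^{-3/2}.
r = 2.0
err = abs(hankel0(lambda lam: np.exp(-lam), r) - (1 + r**2) ** -1.5)
print(err < 1e-6)  # True
```

The exponential kernel here stands in for the layered-earth kernel function that the C# classes compute; the transform machinery is the same.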
Code of Federal Regulations, 2010 CFR
2010-07-01
... 85 dBA, or equivalently a dose of 50%, integrating all sound levels from 80 dBA to at least 130 dBA... Protection Level. A TWA8 of 105 dBA, or equivalently, a dose of 800% of that permitted by the standard, integrating all sound levels from 90 dBA to at least 140 dBA. Exchange rate. The amount of increase in sound...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 85 dBA, or equivalently a dose of 50%, integrating all sound levels from 80 dBA to at least 130 dBA... Protection Level. A TWA8 of 105 dBA, or equivalently, a dose of 800% of that permitted by the standard, integrating all sound levels from 90 dBA to at least 140 dBA. Exchange rate. The amount of increase in sound...
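The dose/TWA equivalences quoted in the excerpts above follow from the regulation's 5-dB exchange rate; a small sketch reproduces both numbers (the function name is illustrative, but the formula is the standard one relating an 8-hour time-weighted average to percent dose).

```python
import math

def twa8_from_dose(dose_pct, criterion=90.0, exchange_rate=5.0):
    """8-hour time-weighted average (dBA) from percent noise dose:
    TWA8 = 90 + (5 / log10 2) * log10(D / 100), i.e. every doubling or
    halving of the dose shifts the TWA by the 5-dB exchange rate."""
    k = exchange_rate / math.log10(2.0)  # 5 / log10(2) ≈ 16.61
    return criterion + k * math.log10(dose_pct / 100.0)

print(round(twa8_from_dose(50.0)))   # 85  -- the action-level equivalence
print(round(twa8_from_dose(800.0)))  # 105 -- the 800% protection level
```

A dose of 50% thus corresponds to a TWA8 of 85 dBA and a dose of 800% to 105 dBA, matching the two equivalences stated in the regulation text.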
NASA Astrophysics Data System (ADS)
Liu, Jinmei; Cui, Nuanyang; Gu, Long; Chen, Xiaobo; Bai, Suo; Zheng, Youbin; Hu, Caixia; Qin, Yong
2016-02-01
An integrated triboelectric nanogenerator (ITNG) with a three-dimensional structure benefiting sound propagation and adsorption is demonstrated to more effectively harvest sound energy with improved output performance. With different multifunctional integrated layers working harmonically, it could generate a short-circuit current up to 2.1 mA, an open-circuit voltage up to 232 V and a maximum charging rate of 453 μC s⁻¹ for a 1 mF capacitor, which are 4.6 times, 2.6 times and 7.4 times the highest reported values, respectively. Further study shows that the ITNG works well under sound in a wide range of sound intensity levels (SILs) and frequencies, and its output is sensitive to the SIL and frequency of the sound, which reveals that the ITNG can act as a self-powered active sensor for real-time noise surveillance and health care. Moreover, this generator can be used to directly power the Fe(OH)3 sol electrophoresis and shows great potential as a wireless power supply in the electrochemical industry.
Litovsky, Ruth Y.; Godar, Shelly P.
2010-01-01
The precedence effect refers to the fact that humans are able to localize sound in reverberant environments, because the auditory system assigns greater weight to the direct sound (lead) than the later-arriving sound (lag). In this study, absolute sound localization was studied for single source stimuli and for dual source lead-lag stimuli in 4–5 year old children and adults. Lead-lag delays ranged from 5–100 ms. Testing was conducted in free field, with pink noise bursts emitted from loudspeakers positioned on a horizontal arc in the frontal field. Listeners indicated how many sounds were heard and the perceived location of the first- and second-heard sounds. Results suggest that at short delays (up to 10 ms), the lead dominates sound localization strongly at both ages, and localization errors are similar to those with single-source stimuli. At longer delays errors can be large, stemming from over-integration of the lead and lag, interchanging of perceived locations of the first-heard and second-heard sounds due to temporal order confusion, and dominance of the lead over the lag. The errors are greater for children than adults. Results are discussed in the context of maturation of auditory and non-auditory factors. PMID:20968369
Development of the engineering design integration (EDIN) system: A computer aided design development
NASA Technical Reports Server (NTRS)
Glatt, C. R.; Hirsch, G. N.
1977-01-01
The EDIN (Engineering Design Integration) System which provides a collection of hardware and software, enabling the engineer to perform man-in-the-loop interactive evaluation of aerospace vehicle concepts, was considered. Study efforts were concentrated in the following areas: (1) integration of hardware with the Univac Exec 8 System; (2) development of interactive software for the EDIN System; (3) upgrading of the EDIN technology module library to an interactive status; (4) verification of the soundness of the developing EDIN System; (5) support of NASA in design analysis studies using the EDIN System; (6) provide training and documentation in the use of the EDIN System; and (7) provide an implementation plan for the next phase of development and recommendations for meeting long range objectives.
Local, state, federal, tribal and private stakeholders have committed significant resources to restoring Puget Sound’s terrestrial-marine ecosystem. Though jurisdictional issues have promoted a fragmented approach to restoration planning, there is growing recognition that a...
How learning to abstract shapes neural sound representations
Ley, Anke; Vroomen, Jean; Formisano, Elia
2014-01-01
The transformation of acoustic signals into abstract perceptual representations is the essence of the efficient and goal-directed neural processing of sounds in complex natural environments. While the human and animal auditory system is perfectly equipped to process spectrotemporal sound features, adequate sound identification and categorization require neural sound representations that are invariant to irrelevant stimulus parameters. Crucially, what is relevant and irrelevant is not necessarily intrinsic to the physical stimulus structure but needs to be learned over time, often through integration of information from other senses. This review discusses the main principles underlying categorical sound perception with a special focus on the role of learning and neural plasticity. We examine the role of different neural structures along the auditory processing pathway in the formation of abstract sound representations with respect to hierarchical as well as dynamic and distributed processing models. Whereas most fMRI studies on categorical sound processing have employed speech sounds, the emphasis of the current review lies on the contribution of empirical studies using natural or artificial sounds that enable separating acoustic and perceptual processing levels and avoid interference with existing category representations. Finally, we discuss the opportunities of modern analysis techniques such as multivariate pattern analysis (MVPA) in studying categorical sound representations. With their increased sensitivity to distributed activation changes, even in the absence of changes in overall signal level, these analysis techniques provide a promising tool to reveal the neural underpinnings of perceptually invariant sound representations. PMID:24917783
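As a toy illustration of the MVPA logic discussed in this review, the sketch below simulates two sound categories whose mean activation is identical while a weak difference is distributed across voxels, then recovers the category with a leave-one-out nearest-centroid classifier. All data are synthetic, and the classifier choice is illustrative, not taken from the review.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_trials = 50, 40
# Two categories share the same overall signal level, but a weak
# multivariate difference is spread across voxels.
pattern = rng.standard_normal(n_voxels) * 0.5
X_a = rng.standard_normal((n_trials, n_voxels)) + pattern
X_b = rng.standard_normal((n_trials, n_voxels)) - pattern

def nearest_centroid_accuracy(X_a, X_b):
    """Leave-one-out accuracy: classify each trial by the closer
    class centroid computed from the remaining trials."""
    X = np.vstack([X_a, X_b])
    y = np.array([0] * len(X_a) + [1] * len(X_b))
    correct = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        c0 = X[mask & (y == 0)].mean(axis=0)
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        correct += pred == y[i]
    return correct / len(X)

acc = nearest_centroid_accuracy(X_a, X_b)
```

A univariate test on any single voxel would see mostly noise here; pooling all voxels recovers the category reliably, which is the sensitivity advantage the review attributes to MVPA.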
Liu, B; Wang, Z; Wu, G; Meng, X
2011-04-28
In this paper, we aim to study the cognitive integration of asynchronous natural or non-natural auditory and visual information in videos of real-world events. Videos with asynchronous semantically consistent or inconsistent natural sound or speech were used as stimuli in order to compare the difference and similarity between multisensory integrations of videos with asynchronous natural sound and speech. The event-related potential (ERP) results showed that N1 and P250 components were elicited irrespective of whether natural sounds were consistent or inconsistent with critical actions in videos. Videos with inconsistent natural sound could elicit N400-P600 effects compared to videos with consistent natural sound, which was similar to the results from unisensory visual studies. Videos with semantically consistent or inconsistent speech could both elicit N1 components. Meanwhile, videos with inconsistent speech would elicit N400-LPN effects in comparison with videos with consistent speech, which showed that this semantic processing was probably related to recognition memory. Moreover, the N400 effect elicited by videos with semantically inconsistent speech was larger and later than that elicited by videos with semantically inconsistent natural sound. Overall, multisensory integration of videos with natural sound or speech could be roughly divided into two stages. For the videos with natural sound, the first stage might reflect the connection between the received information and the stored information in memory; and the second one might stand for the evaluation process of inconsistent semantic information. For the videos with speech, the first stage was similar to the first stage of videos with natural sound; while the second one might be related to recognition memory process. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
ECOLOGICAL AND SOCIOECONOMIC BENEFITS OF RESTORING AMD-IMPAIRED STREAMS: EMERGY-BASED VALUATION
Sound environmental decisions require an integrated, systemic method of valuation that accurately accounts for environmental and social, as well as economic, costs and benefits. More inclusive methods are particularly needed for assessing ecological benefits because these are so...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 20026. (e) For more detailed description of NASA's organizational structure, see the “U.S. Government..., and structure of the universe, the solar system, and the integrated functioning of the Earth. The...-occupied spacecraft, sounding rockets, balloons, aircraft, and ground-based research to conduct its...
An intelligent artificial throat with sound-sensing ability based on laser induced graphene
Tao, Lu-Qi; Tian, He; Liu, Ying; Ju, Zhen-Yi; Pang, Yu; Chen, Yuan-Quan; Wang, Dan-Yang; Tian, Xiang-Guang; Yan, Jun-Chao; Deng, Ning-Qin; Yang, Yi; Ren, Tian-Ling
2017-01-01
Traditional sound sources and sound detectors are usually independent, discrete devices operating in the human hearing range. To minimize device size and integrate it with wearable electronics, there is an urgent need to realize the functional integration of generating and detecting sound in a single device. Here we show an intelligent laser-induced graphene artificial throat, which can both generate and detect sound in a single device. More importantly, the intelligent artificial throat will significantly assist the disabled, because simple throat vibrations such as a hum, cough, or scream with different intensity or frequency from a mute person can be detected and converted into controllable sounds. Furthermore, the laser-induced graphene artificial throat has the advantages of one-step fabrication, high efficiency, excellent flexibility, and low cost, and it will open up practical applications in voice control, wearable electronics, and many other areas. PMID:28232739
Consort 1 sounding rocket flight
NASA Technical Reports Server (NTRS)
Wessling, Francis C.; Maybee, George W.
1989-01-01
This paper describes a payload of six experiments developed for a 7-min microgravity flight aboard the Consort 1 sounding rocket, in order to investigate the effects of low gravity on certain material processes. The experiments were designed to test the effect of microgravity on the demixing of aqueous polymer two-phase systems, the electrodeposition process, the production of elastomer-modified epoxy resins, the foam formation process and the characteristics of foam, material dispersion, and metal sintering. The apparatuses designed for these experiments are examined, and the rocket-payload integration and operations are discussed.
Liu, Jinmei; Cui, Nuanyang; Gu, Long; Chen, Xiaobo; Bai, Suo; Zheng, Youbin; Hu, Caixia; Qin, Yong
2016-03-07
An integrated triboelectric nanogenerator (ITNG) with a three-dimensional structure benefiting sound propagation and absorption is demonstrated to more effectively harvest sound energy with improved output performance. With different multifunctional integrated layers working harmonically, it can generate a short-circuit current of up to 2.1 mA and an open-circuit voltage of up to 232 V, and its maximum charging rate can reach 453 μC s⁻¹ for a 1 mF capacitor; these are 4.6 times, 2.6 times, and 7.4 times the highest previously reported values, respectively. Further study shows that the ITNG works well over a wide range of sound intensity levels (SILs) and frequencies, and its output is sensitive to the SIL and frequency of the sound, which reveals that the ITNG can act as a self-powered active sensor for real-time noise surveillance and health care. Moreover, this generator can be used to directly power Fe(OH)3 sol electrophoresis and shows great potential as a wireless power supply in the electrochemical industry.
Brown, Andrew D; Tollin, Daniel J
2016-09-21
In mammals, localization of sound sources in azimuth depends on sensitivity to interaural differences in sound timing (ITD) and level (ILD). Paradoxically, while typical ILD-sensitive neurons of the auditory brainstem require millisecond synchrony of excitatory and inhibitory inputs for the encoding of ILDs, human and animal behavioral ILD sensitivity is robust to temporal stimulus degradations (e.g., interaural decorrelation due to reverberation), or, in humans, bilateral clinical device processing. Here we demonstrate that behavioral ILD sensitivity is only modestly degraded with even complete decorrelation of left- and right-ear signals, suggesting the existence of a highly integrative ILD-coding mechanism. Correspondingly, we find that a majority of auditory midbrain neurons in the central nucleus of the inferior colliculus (of chinchilla) effectively encode ILDs despite complete decorrelation of left- and right-ear signals. We show that such responses can be accounted for by relatively long windows of bilateral excitatory-inhibitory interaction, which we explicitly measure using trains of narrowband clicks. Neural and behavioral data are compared with the outputs of a simple model of ILD processing with a single free parameter, the duration of excitatory-inhibitory interaction. Behavioral, neural, and modeling data collectively suggest that ILD sensitivity depends on binaural integration of excitation and inhibition within a ≳3 ms temporal window, significantly longer than observed in lower brainstem neurons. This relatively slow integration potentiates a unique role for the ILD system in spatial hearing that may be of particular importance when informative ITD cues are unavailable. In mammalian hearing, interaural differences in the timing (ITD) and level (ILD) of impinging sounds carry critical information about source location. 
However, natural sounds are often decorrelated between the ears by reverberation and background noise, degrading the fidelity of both ITD and ILD cues. Here we demonstrate that behavioral ILD sensitivity (in humans) and neural ILD sensitivity (in single neurons of the chinchilla auditory midbrain) remain robust under stimulus conditions that render ITD cues undetectable. This result can be explained by "slow" temporal integration arising from several-millisecond-long windows of excitatory-inhibitory interaction evident in midbrain, but not brainstem, neurons. Such integrative coding can account for the preservation of ILD sensitivity despite even extreme temporal degradations in ecological acoustic stimuli. Copyright © 2016 the authors 0270-6474/16/369908-14$15.00/0.
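The role of the integration-window duration can be illustrated with a minimal energy-based sketch (not the authors' neuron model): fully decorrelated left and right noise still carries a stable level difference, and a longer energy-integration window estimates that ILD with less variance. All signal parameters below are illustrative assumptions.

```python
import numpy as np

fs = 48000
rng = np.random.default_rng(2)
# Fully decorrelated left/right noise carrying a fixed 6 dB level
# difference: an ILD with no usable interaural timing structure.
left = rng.standard_normal(fs) * 10**(6 / 20)
right = rng.standard_normal(fs)

def windowed_ild(left, right, fs, window_ms):
    """ILD estimate (dB) at each position of a sliding
    energy-integration window of the given duration."""
    n = int(window_ms * fs / 1000)
    k = np.ones(n) / n
    e_l = np.convolve(left**2, k, mode="valid")
    e_r = np.convolve(right**2, k, mode="valid")
    return 10 * np.log10(e_l / e_r)

ild_short = windowed_ild(left, right, fs, window_ms=0.5)
ild_long = windowed_ild(left, right, fs, window_ms=3.0)
```

Both window lengths recover the 6 dB ILD on average, but the ~3 ms window does so with markedly smaller moment-to-moment variance, consistent with the idea that slow binaural integration preserves ILD sensitivity when timing cues are destroyed.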
Humpback whale bioacoustics: From form to function
NASA Astrophysics Data System (ADS)
Mercado, Eduardo, III
This thesis investigates how humpback whales produce, perceive, and use sounds from a comparative and computational perspective. Biomimetic models are developed within a systems-theoretic framework and then used to analyze the properties of humpback whale sounds. First, sound transmission is considered in terms of possible production mechanisms and the propagation characteristics of shallow water environments frequented by humpback whales. A standard source-filter model (used to describe human sound production) is shown to be well suited for characterizing sound production by humpback whales. Simulations of sound propagation based on normal mode theory reveal that optimal frequencies for long range propagation are higher than the frequencies used most often by humpbacks, and that sounds may contain spectral information indicating how far they have propagated. Next, sound reception is discussed. A model of human auditory processing is modified to emulate humpback whale auditory processing as suggested by cochlear anatomical dimensions. This auditory model is used to generate visual representations of humpback whale sounds that more clearly reveal what features are likely to be salient to listening whales. Additionally, the possibility that an unusual sensory organ (the tubercle) plays a role in acoustic processing is assessed. Spatial distributions of tubercles are described that suggest tubercles may be useful for localizing sound sources. Finally, these models are integrated with self-organizing feature maps to create a biomimetic sound classification system, and a detailed analysis of individual sounds and sound patterns in humpback whale 'songs' is performed. This analysis provides evidence that song sounds and sound patterns vary substantially in terms of detectability and propagation potential, suggesting that they do not all serve the same function. 
New quantitative techniques are also presented that allow for more objective characterizations of the long term acoustic features of songs. The quantitative framework developed in this thesis provides a basis for theoretical consideration of how humpback whales (and other cetaceans) might use sound. Evidence is presented suggesting that vocalizing humpbacks could use sounds not only to convey information to other whales, but also to collect information about other whales. In particular, it is suggested that some sounds currently believed to be primarily used as communicative signals, might be primarily used as sonar signals. This theoretical framework is shown to be generalizable to other baleen whales and to toothed whales.
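The source-filter idea referenced above can be sketched in a few lines: a periodic pulse source shaped by a single damped resonator. The parameter values below are illustrative placeholders, not measurements from humpback whales or from this thesis.

```python
import numpy as np

fs = 8000
f0, resonance, bw = 200.0, 800.0, 120.0   # illustrative values only

# Source: a periodic pulse train at the fundamental frequency f0.
source = np.zeros(fs)                      # 1 s of signal
source[::int(fs / f0)] = 1.0

# Filter: impulse response of one damped resonator (one "formant").
t = np.arange(int(0.05 * fs)) / fs
h = np.exp(-np.pi * bw * t) * np.sin(2 * np.pi * resonance * t)

voiced = np.convolve(source, h)[:fs]
spectrum = np.abs(np.fft.rfft(voiced))
peak_hz = np.fft.rfftfreq(fs, 1 / fs)[np.argmax(spectrum)]
```

The output spectrum keeps the source's harmonic spacing (f0) while the resonator determines which harmonic dominates, which is exactly the separation of excitation and vocal-tract shaping that makes the source-filter model attractive for characterizing vocal production.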
Earth Observing System/Advanced Microwave Sounding Unit-A (EOS/AMSU-A) software management plan
NASA Technical Reports Server (NTRS)
Schwantje, Robert
1994-01-01
This document defines the responsibilities for the management of the life-cycle development of the flight software installed in the AMSU-A instruments and the ground support software used in the test and integration of the AMSU-A instruments.
Puget Sound Area Electric Reliability Plan : Appendix E, Transmission Reinforcement Analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
United States. Bonneville Power Administration.
1992-04-01
The purpose of this appendix to the draft environmental impact statement (EIS) is to provide an update of the latest study work done on transmission system options for the Puget Sound Area Electric Reliability Plan. Also included in the attachments to the EIS are two reports analyzing the voltage stability of the Puget Sound transmission system and a review by Power Technologies, Inc. of the BPA voltage stability analysis and reactive options. Five transmission line options and several reactive options are presently being considered by the Transmission Team as possible solutions for the plan. The first two line options would be built on new rights-of-way adjacent (as much as possible) to existing corridors. The reactive options would optimize the existing transmission system capability by adding new stations for series capacitors and/or switchgear. The other three line options are rebuilds or upgrades of existing cross-mountain transmission lines. These options are listed below and include a preliminary assessment of the additional transmission system reinforcement required to integrate the new facilities into the existing transmission system. Plans were designed to provide at least 500 MVAR reactive margin.
Integrated decision support tools for Puget Sound salmon recovery planning
We developed a set of tools to provide decision support for community-based salmon recovery planning in Salish Sea watersheds. Here we describe how these tools are being integrated and applied in collaboration with Puget Sound tribes and community stakeholders to address restora...
Decentralized Control of Sound Radiation from an Aircraft-Style Panel Using Iterative Loop Recovery
NASA Technical Reports Server (NTRS)
Schiller, Noah H.; Cabell, Randolph H.; Fuller, Chris R.
2008-01-01
A decentralized LQG-based control strategy is designed to reduce low-frequency sound transmission through periodically stiffened panels. While modern control strategies have been used to reduce sound radiation from relatively simple structural acoustic systems, significant implementation issues have to be addressed before these control strategies can be extended to large systems such as the fuselage of an aircraft. For instance, centralized approaches typically require a high level of connectivity and are computationally intensive, while decentralized strategies face stability problems caused by the unmodeled interaction between neighboring control units. Since accurate uncertainty bounds are not known a priori, it is difficult to ensure the decentralized control system will be robust without making the controller overly conservative. Therefore an iterative approach is suggested, which utilizes frequency-shaped loop recovery. The approach accounts for modeling error introduced by neighboring control loops, requires no communication between subsystems, and is relatively simple. The control strategy is validated using real-time control experiments performed on a built-up aluminum test structure representative of the fuselage of an aircraft. Experiments demonstrate that the iterative approach is capable of achieving 12 dB peak reductions and a 3.6 dB integrated reduction in radiated sound power from the stiffened panel.
Earth Observing System (EOS)/Advanced Microwave Sounding Unit-A (AMSU-A) software assurance plan
NASA Technical Reports Server (NTRS)
Schwantje, Robert; Smith, Claude
1994-01-01
This document defines the responsibilities of Software Quality Assurance (SQA) for the development of the flight software installed in EOS/AMSU-A instruments and the ground support software used in the test and integration of the EOS/AMSU-A instruments.
The Pesticide Problem: Is Any Amount Safe?
ERIC Educational Resources Information Center
Cooper, Susan
1991-01-01
Discusses the use of integrated pest management to foster a safe school environment free from pesticides. This effective, environmentally sound system minimizes human exposure and reduces the toxicity of materials used to control pests. Parents, teachers, and students can educate themselves to improve school pest control practices. (SM)
Active acoustical impedance using distributed electrodynamical transducers.
Collet, M; David, P; Berthillier, M
2009-02-01
New miniaturization and integration capabilities available from emerging microelectromechanical system (MEMS) technology will allow silicon-based artificial skins involving thousands of elementary actuators to be developed in the near future. SMART structures combining large arrays of elementary motion pixels coated with macroscopic components are thus being studied so that fundamental properties such as shape, stiffness, and even reflectivity of light and sound could be dynamically adjusted. This paper investigates the acoustic impedance capabilities of a set of distributed transducers connected with a suitable controlling strategy. Research in this domain aims at designing integrated active interfaces with a desired acoustical impedance for reaching an appropriate global acoustical behavior. This generic problem is intrinsically connected with the control of multiphysical systems based on partial differential equations (PDEs) and with the notion of multiscaled physics when a dense array of electromechanical systems (or MEMS) is considered. By using specific techniques based on PDE control theory, a simple boundary control equation capable of annihilating the wave reflections has been built. The obtained strategy is also discretized as a low order time-space operator for experimental implementation by using a dense network of interlaced microphones and loudspeakers. The resulting quasicollocated architecture guarantees robustness and stability margins. This paper aims at showing how a well controlled semidistributed active skin can substantially modify the sound transmissibility or reflectivity of the corresponding homogeneous passive interface. In Sec. IV, numerical and experimental results demonstrate the capabilities of such a method for controlling sound propagation in ducts. Finally, in Sec. V, an energy-based comparison with a classical open-loop strategy underlines the system's efficiency.
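The target of such impedance control can be stated with the standard normal-incidence plane-wave relation R = (Z - Z0)/(Z + Z0): driving the boundary impedance toward the characteristic impedance of the medium drives the reflection coefficient to zero. A minimal sketch of that relation follows (an illustration of the acoustic target, not the paper's PDE-based boundary controller):

```python
def reflection_coefficient(z_boundary, rho=1.2, c=343.0):
    """Pressure reflection coefficient of a normally incident plane
    wave at a boundary of specific acoustic impedance z_boundary
    (in rayl), for a medium of density rho and sound speed c."""
    z0 = rho * c                    # characteristic impedance of air
    return (z_boundary - z0) / (z_boundary + z0)

r_matched = reflection_coefficient(1.2 * 343.0)   # actively matched skin
r_rigid = reflection_coefficient(1.0e9)           # nearly rigid wall
```

An active skin that forces its surface impedance to rho*c therefore "annihilates" reflections, while a passive rigid interface reflects nearly everything; the distributed controller in the paper can be read as enforcing this condition locally across the array.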
Bio-Inspired Micromechanical Directional Acoustic Sensor
NASA Astrophysics Data System (ADS)
Swan, William; Alves, Fabio; Karunasiri, Gamani
Conventional directional sound sensors employ an array of spatially separated microphones, and the direction is determined using arrival times and amplitudes. In nature, insects such as the Ormia ochracea fly can determine the direction of sound using a hearing organ much smaller than the wavelength of the sound it detects. The fly's eardrums are mechanically coupled, separated by only about 1 mm, and have remarkable directional sensitivity. A micromechanical sensor based on the fly's hearing system was designed and fabricated on a silicon-on-insulator (SOI) substrate using MEMS technology. The sensor consists of two 1 mm² wings connected to each other by a bridge and to the substrate by two torsional legs. The dimensions of the sensor and the material stiffness determine its frequency response. The vibration of the wings in response to incident sound at the bending resonance was measured using a laser vibrometer and found to be about 1 μm/Pa. The electronic response of the sensor to sound was measured using integrated comb-finger capacitors and found to be about 25 V/Pa. The fabricated sensors showed good directional sensitivity. In this talk, the design, fabrication, and characteristics of the directional sound sensor will be described. Supported by ONR and TDSI.
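The mechanically coupled wings can be caricatured as a two-degree-of-freedom oscillator driven by two slightly delayed pressures. The sketch below, with illustrative parameters not taken from the fabricated sensor, shows the key effect: with the coupling bridge removed, a pure arrival-time difference produces no amplitude difference between the wings, while coupling converts it into a measurable one.

```python
import numpy as np

# Two-degree-of-freedom caricature of the coupled wings: each wing has
# mass m, damping c, and stiffness k; a bridge of stiffness k_c couples
# them. All parameter values are illustrative, not the sensor's.
m, c, k, k_c = 1.0e-9, 1.0e-6, 1.0, 0.3

def wing_amplitudes(freq_hz, delay_s, coupling):
    """Steady-state displacement magnitudes of both wings under unit
    pressure drive, with the second input delayed by delay_s."""
    M = np.diag([m, m])
    C = np.diag([c, c])
    K = np.array([[k + coupling, -coupling],
                  [-coupling, k + coupling]])
    w = 2.0 * np.pi * freq_hz
    F = np.array([1.0, np.exp(-1j * w * delay_s)])  # arrival-time cue
    X = np.linalg.solve(-w**2 * M + 1j * w * C + K, F)
    return np.abs(X)

# A 5 microsecond arrival difference, probed between the two coupled
# resonances where the mode responses interfere.
a_coupled = wing_amplitudes(5600.0, 5e-6, coupling=k_c)
a_uncoupled = wing_amplitudes(5600.0, 5e-6, coupling=0.0)
gain_coupled = a_coupled.max() / a_coupled.min()
gain_uncoupled = a_uncoupled.max() / a_uncoupled.min()
```

This mechanical conversion of a tiny interaural time difference into an interaural amplitude difference is the widely cited explanation for the Ormia ear, and it is the behavior such MEMS designs aim to reproduce.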
Mesoscale and severe storms (MASS) data management and analysis system
NASA Technical Reports Server (NTRS)
Hickey, J. S.; Karitani, S.; Dickerson, M.
1984-01-01
Progress on the Mesoscale and Severe Storms (MASS) data management and analysis system is described. An interactive atmospheric database management software package to convert four types of data (Sounding, Single Level, Grid, Image) into standard random-access formats is implemented and integrated with the MASS AVE80 Series general-purpose plotting and graphics display data analysis software package. An interactive analysis and display graphics software package (AVE80) to analyze large volumes of conventional and satellite-derived meteorological data is enhanced to provide imaging/color graphics display utilizing color video hardware integrated into the MASS computer system. Local and remote smart-terminal capability is provided by installing Apple III computer systems within individual scientists' offices and integrating them with the MASS system, thus providing color video, graphics, and character display of the four data types.
Harrison, Jolie; Ferguson, Megan; Gedamke, Jason; Hatch, Leila; Southall, Brandon; Van Parijs, Sofie
2016-01-01
To help manage chronic and cumulative impacts of human activities on marine mammals, the National Oceanic and Atmospheric Administration (NOAA) convened two working groups, the Underwater Sound Field Mapping Working Group (SoundMap) and the Cetacean Density and Distribution Mapping Working Group (CetMap), with the overarching effort of both groups referred to as CetSound, which (1) mapped the predicted contribution of human sound sources to ocean noise and (2) provided region/time/species-specific cetacean density and distribution maps. Mapping products were presented at a symposium where future priorities were identified, including institutionalization/integration of the CetSound effort within NOAA-wide goals and programs, creation of forums and mechanisms for external input and funding, and expanded outreach/education. NOAA is subsequently developing an ocean noise strategy to articulate noise conservation goals and further identify science and management actions needed to support them.
NASA Technical Reports Server (NTRS)
Yunck, Tom P.; Hajj, George A.
2003-01-01
The vast illuminating power of the Global Positioning System (GPS), which transformed space geodesy in the 1990s, is now serving to probe the earth's fluid envelope in unique ways. Three distinct techniques have emerged: ground-based sensing of the integrated atmospheric moisture; space-based profiling of atmospheric refractivity, pressure, temperature, moisture, and other properties by active limb sounding; and surface (ocean and ice) altimetry and scatterometry with reflected signals detected from space. Ground-based GPS moisture sensing is already in provisional use for numerical weather prediction. Limb sounding, while less mature, offers a bevy of attractions, including high accuracy, stability, and vertical resolution; all-weather operation; and exceptionally low cost. GPS bistatic radar, or 'reflectometry,' is the least advanced but shows promise for a number of niche applications.
NASA Technical Reports Server (NTRS)
Haapala, C.
1999-01-01
This is the Performance Verification Report, Antenna Drive Subassembly, Antenna Drive Subsystem, METSAT AMSU-A2 (P/N 1331200-2, SN: 108), for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A).
Field refurbishment of recoverable sounding rocket payloads.
NASA Technical Reports Server (NTRS)
Needleman, H. C.; Tackett, C. D.
1973-01-01
Sounding rocket payload field refurbishment has been shown to be an effective means of obtaining additional scientific data with substantial time and monetary savings. In a recent campaign, three successful missions were flown using two payloads. Field-refurbished hardware from two previously flown and recovered payloads was field-integrated to form a third payload. Although this operational method may result in compromises in the refurbished system, it allows for quick turnaround when the mission requires it. This paper describes the recent success of this approach with the Dudley Observatory Nike-Apache micrometeorite collection experiments launched from Kiruna, Sweden, in October 1972.
Differential Neural Contributions to Native- and Foreign-Language Talker Identification
ERIC Educational Resources Information Center
Perrachione, Tyler K.; Pierrehumbert, Janet B.; Wong, Patrick C. M.
2009-01-01
Humans are remarkably adept at identifying individuals by the sound of their voice, a behavior supported by the nervous system's ability to integrate information from voice and speech perception. Talker-identification abilities are significantly impaired when listeners are unfamiliar with the language being spoken. Recent behavioral studies…
Camera! Action! Collaborate with Digital Moviemaking
ERIC Educational Resources Information Center
Swan, Kathleen Owings; Hofer, Mark; Levstik, Linda S.
2007-01-01
Broadly defined, digital moviemaking integrates a variety of media (images, sound, text, video, narration) to communicate with an audience. There is near-ubiquitous access to the necessary software (MovieMaker and iMovie are bundled free with their respective operating systems) and hardware (computers with Internet access, digital cameras, etc.).…
Automatic fixation facility for plant seedlings in the TEXUS Sounding Rocket Programme.
Tewinkel, M; Burfeindt, J; Rank, P; Volkmann, D
1991-10-01
Automatic chemical fixation of plant seedlings within a 6 min period of reduced gravity (10⁻⁴ g) was performed on three ballistic rocket flights provided by the German Sounding Rocket Programme TEXUS (Technologische Experimente unter Schwerelosigkeit = Technological Experiments in Microgravity). The described TEXUS experiment module consists of a standard experiment housing with batteries, cooling and heating systems, a timer, and a data recording unit. Typically, 60 min before launch an experiment plug-in unit containing chambers with the plant material, the fixation system, and the temperature sensors is installed into the module, which is already integrated in the payload section of the sounding rocket (late access). During the ballistic flight the plant chambers are rapidly filled at pre-selected instants to preserve the cell structure of gravity-sensing cells. After landing the plant material is processed for transmission electron microscopy. To date, three experiments have been successfully performed with cress roots (Lepidium sativum L.). Detailed improvements resulted in an automatic fixation facility which in principle can be used in unmanned missions.
Yip, Marcus; Jin, Rui; Nakajima, Hideko Heidi; Stankovic, Konstantina M; Chandrakasan, Anantha P
2015-01-01
A system-on-chip for an invisible, fully-implantable cochlear implant is presented. Implantable acoustic sensing is achieved by interfacing the SoC to a piezoelectric sensor that detects the sound-induced motion of the middle ear. Measurements from human cadaveric ears demonstrate that the sensor can detect sounds between 40 and 90 dB SPL over the speech bandwidth. A highly-reconfigurable digital sound processor enables system power scalability by reconfiguring the number of channels, and provides programmable features to enable a patient-specific fit. A mixed-signal arbitrary waveform neural stimulator enables energy-optimal stimulation pulses to be delivered to the auditory nerve. The energy-optimal waveform is validated with in-vivo measurements from four human subjects which show a 15% to 35% energy saving over the conventional rectangular waveform. Prototyped in a 0.18 μm high-voltage CMOS technology, the SoC in 8-channel mode consumes 572 μW of power including stimulation. The SoC integrates implantable acoustic sensing, sound processing, and neural stimulation on one chip to minimize the implant size, and proof-of-concept is demonstrated with measurements from a human cadaver ear.
Leng, Shuang; Tan, Ru San; Chai, Kevin Tshun Chuan; Wang, Chao; Ghista, Dhanjoo; Zhong, Liang
2015-07-10
Most heart diseases are associated with and reflected by the sounds that the heart produces. Heart auscultation, defined as listening to the heart sound, has been a very important method for the early diagnosis of cardiac dysfunction. Traditional auscultation requires substantial clinical experience and good listening skills. The emergence of the electronic stethoscope has paved the way for a new field of computer-aided auscultation. This article provides an in-depth study of (1) electronic stethoscope technology and (2) methodology for the diagnosis of cardiac disorders based on computer-aided auscultation. The paper is based on a comprehensive review of (1) literature articles, (2) market (state-of-the-art) products, and (3) smartphone stethoscope apps. It covers in depth every key component of a computer-aided system with an electronic stethoscope, from sensor design, front-end circuitry, denoising algorithms, and heart sound segmentation to the final machine learning techniques. Our intent is to provide an informative and illustrative presentation of the electronic stethoscope that is valuable and beneficial to academics, researchers, and engineers in the technical field, as well as to medical professionals, to facilitate its clinical use. The paper provides the technological and medical basis for the development and commercialization of a real-time integrated heart sound detection, acquisition, and quantification system.
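As a hedged illustration of the heart sound segmentation step surveyed above, the sketch below locates candidate heart sounds in a synthetic signal using a short-time energy envelope and a threshold, one common family of segmentation methods; the synthetic S1/S2 bursts, window length, and threshold are assumptions for demonstration, not the specific methods reviewed in the article.

```python
import numpy as np

# Illustrative envelope-based heart sound segmentation
# (synthetic signal and all parameters are assumed values).
fs = 2000                          # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)

def pip(center, dur=0.05, freq=60.0):
    """Synthetic tone burst standing in for a heart sound (S1 or S2)."""
    env = np.exp(-0.5 * ((t - center) / (dur / 4)) ** 2)
    return env * np.sin(2 * np.pi * freq * t)

x = pip(0.10) + 0.6 * pip(0.45)    # one "S1" and one weaker "S2"

# Short-time energy envelope: 50 ms moving average of the squared signal.
env = np.convolve(x ** 2, np.ones(100) / 100, mode="same")

# Threshold the envelope to find candidate segment onsets.
mask = env > 0.2 * env.max()
starts = np.flatnonzero(np.diff(mask.astype(int)) == 1)
print(len(starts))                 # 2 onsets: one per simulated heart sound
```

Real recordings would of course need the denoising stage first; this sketch only shows why an energy envelope turns an oscillatory burst into a detectable bump.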
Vortex/Body Interaction and Sound Generation in Low-Speed Flow
NASA Technical Reports Server (NTRS)
Kao, Hsiao C.
1998-01-01
The problem of sound generation by vortices interacting with an arbitrary body in a low-speed flow has been investigated by the method of matched asymptotic expansions. For the purpose of this report, it is convenient to divide the problem into three parts. In the first part the mechanism of the vortex/body interaction, which is essentially the inner solution in the inner region, is examined. The trajectories for a system of vortices rotating about their centroid are found to undergo enormous changes after interaction; from this, some interesting properties emerged. In the second part, the problem is formulated, the outer solution is found, matching is implemented, and solutions for acoustic pressure are obtained. In the third part, Fourier integrals are evaluated and predicted results presented. An examination of these results reveals the following: (a) the background noise can be either augmented or attenuated by a body after interaction, (b) sound generated by vortex/body interaction obeys a scaling factor, (c) sound intensity can be reduced substantially by positioning the vortex system on the "favorable" side of the body instead of the "unfavorable" side, and (d) acoustic radiation from vortex/bluff-body interaction is less than that from vortex/airfoil interaction under most circumstances.
NASA Technical Reports Server (NTRS)
2000-01-01
This is the As-Designed Parts List and the Electrical, Electronic, and Electromechanical (EEE) As-Built Parts Lists for the AMSU-A Instruments, for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A).
NASA Technical Reports Server (NTRS)
Pines, D.
1999-01-01
This is the Performance Verification Report, METSAT (Meteorological Satellites) Phase Locked Oscillator Assembly, P/N 1348360-1, S/N F09 and F10, for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A).
Severe Speech Sound Disorders: An Integrated Multimodal Intervention
ERIC Educational Resources Information Center
King, Amie M.; Hengst, Julie A.; DeThorne, Laura S.
2013-01-01
Purpose: This study introduces an integrated multimodal intervention (IMI) and examines its effectiveness for the treatment of persistent and severe speech sound disorders (SSD) in young children. The IMI is an activity-based intervention that focuses simultaneously on increasing the "quantity" of a child's meaningful productions of target words…
NASA Technical Reports Server (NTRS)
1996-01-01
This Failure Modes and Effects Analysis (FMEA) is for the Advanced Microwave Sounding Unit-A (AMSU-A) instruments that are being designed and manufactured for the Meteorological Satellites Project (METSAT) and the Earth Observing System (EOS) integrated programs. The FMEA analyzes the design of the METSAT and EOS instruments as they currently exist. This FMEA is intended to identify METSAT and EOS failure modes and their effect on spacecraft-instrument and instrument-component interfaces. The prime objective of this FMEA is to identify potential catastrophic and critical failures so that susceptibility to the failures and their effects can be eliminated from the METSAT/EOS instruments.
Development of an alarm sound database and simulator.
Takeuchi, Akihiro; Hirose, Minoru; Shinbo, Toshiro; Imai, Megumi; Mamorita, Noritaka; Ikeda, Noriaki
2006-10-01
The purpose of this study was to develop an interactive software package of alarm sounds to present, recognize, and share problems about alarm sounds among medical staff and medical device manufacturers. The alarm sounds were recorded under variable alarm conditions in WAV files. The alarm conditions were arbitrarily induced by modifying attachments of various medical devices. The software package, which integrated an alarm sound database and simulator, was used to assess the ability of medical staff to identify the monitor that sounded an alarm. Eighty alarm sound files (40 MB in total) were recorded from 41 medical devices made by 28 companies. There were three pairs of similar alarm sounds that could not easily be distinguished, and two alarm sounds that had different priorities, either low or high. The alarm sound database was created in an Excel file (ASDB.xls, 170 kB; 40 MB with photos) and included a list of file names hyperlinked to the alarm sound files. An alarm sound simulator (AlmSS) was constructed with two modules: one for simultaneously playing alarm sound files and one for designing new alarm sounds. The AlmSS was used in an assessment to determine whether 19 clinical engineers could identify 13 alarm sounds by their distinctive sounds alone. They were asked to choose from a list of devices and to rate the priority of each alarm. The overall correct identification rate of the alarm sounds was 48%, and six characteristic alarm sounds were correctly recognized by between 63% and 100% of the subjects. The overall recognition rate of the alarm sound priority was only 27%. We have developed an interactive software package of alarm sounds by integrating the database and the alarm sound simulator (URL: http://info.ahs.kitasato-u.ac.jp/tkweb/alarm/asdb.html ). The AlmSS was useful for replaying multiple alarm sounds simultaneously and for designing new alarm sounds interactively.
Seasonal transport variations in the straits connecting Prince William Sound to the Gulf of Alaska
NASA Astrophysics Data System (ADS)
Halverson, Mark J.; Bélanger, Claude; Gay, Shelton M.
2013-07-01
Exchange of water between Prince William Sound and the Gulf of Alaska has a significant impact on the Sound's circulation and biological productivity. Current meter records from moored instruments in the two major straits connecting Prince William Sound to the Gulf of Alaska are analyzed to characterize the seasonal variations in water exchange. Eight individual deployments, each lasting about 6 months, were made during the years 2005-2010. Two moorings were placed across each passage to account for horizontal flow variability. Monthly averaged, depth-integrated transport in winter is characterized by a strong barotropic inflow through Hinchinbrook Entrance and outflow through Montague Strait. The transport through each passage can reach 0.2 Sv, which could replenish the volume of Prince William Sound in as little as 3 months. Depth-integrated transport is weaker and more variable in direction in summer than in winter, implying that Prince William Sound is not always a simple flow-through system. Monthly transports range between -0.05 and 0.08 Sv in each passage, and the corresponding flushing times exceed 1 year. The flow through both passages is highly baroclinic in the summer, so the layer transports can be significant. For example, the deep inflow through Hinchinbrook Entrance can reach 0.05 Sv, which would flush the deep regions of Prince William Sound (>400 m) in only 23 days. The transport imbalance between Montague Strait and Hinchinbrook Entrance cannot be accounted for by other terms in a volume budget, such as local freshwater input, meaning the imbalance is mostly a result of under-resolving the cross-strait flow variability. The magnitude of the monthly mean depth-integrated transport through Montague Strait and Hinchinbrook Entrance depends non-linearly on the shelf winds. Strong downwelling conditions, characteristic of the winter, drive inflow through Hinchinbrook Entrance, which is balanced by outflow through Montague Strait.
Weak downwelling or upwelling conditions, characteristic of the summer, allow deep water from below the shelf break to flow in through Hinchinbrook Entrance.
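The flushing times quoted above follow from a simple ratio of basin volume to volumetric transport. A minimal sketch of that arithmetic (the two volumes are assumed round numbers chosen to be consistent with the figures in the abstract, not measured values):

```python
SV = 1.0e6  # 1 sverdrup (Sv) = 1e6 m^3/s

def flushing_time_days(volume_m3, transport_sv):
    """Days needed to replace a basin volume at a steady transport."""
    return volume_m3 / (transport_sv * SV) / 86400.0

# Assumed volumes (illustrative, consistent with the abstract's figures):
total_volume = 1.5e12  # whole Prince William Sound, m^3
deep_volume = 1.0e11   # deep (>400 m) regions, m^3

print(round(flushing_time_days(total_volume, 0.2)))   # ~87 days, i.e. ~3 months
print(round(flushing_time_days(deep_volume, 0.05)))   # ~23 days
```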
Modified modular imaging system designed for a sounding rocket experiment
NASA Astrophysics Data System (ADS)
Veach, Todd J.; Scowen, Paul A.; Beasley, Matthew; Nikzad, Shouleh
2012-09-01
We present the design and system calibration results from the fabrication of a charge-coupled device (CCD) based imaging system, designed using a modified modular imager cell (MIC), for an ultraviolet sounding rocket mission. The heart of the imaging system is the MIC, which provides the video pre-amplifier circuitry and CCD clock level filtering. The MIC is designed on a standard four-layer FR4 printed circuit board (PCB) with surface-mount and through-hole components for ease of testing and lower fabrication cost. The imager is a 3.5k by 3.5k LBNL p-channel CCD with enhanced quantum efficiency in the UV, achieved using delta-doping technology at JPL. Readout of the detector is performed by the recently released PCIe/104 Small-Cam CCD controller from Astronomical Research Cameras, Inc. (ARC). The PCIe/104 Small-Cam system has the same capabilities as its larger PCI counterparts but in a smaller form factor, which makes it ideally suited for sub-orbital ballistic missions. Overall control is accomplished using a PCIe/104 computer from RTD Embedded Technologies, Inc. The design, fabrication, and testing were done at the Laboratory for Astronomical and Space Instrumentation (LASI) at Arizona State University. Integration and flight calibration are to be completed at the University of Colorado Boulder before integration into CHESS.
Rainsticks: Integrating Culture, Folklore, and the Physics of Sound
ERIC Educational Resources Information Center
Moseley, Christine; Fies, Carmen
2007-01-01
The purpose of this activity is for students to build a rainstick out of materials in their own environment and imitate the sound of rain while investigating the physical principles of sound. Students will be able to relate the sound produced by an instrument to the type and quantity of materials used in its construction.
Global Picture Archiving and Communication Systems (GPACS): An Overview
1994-04-01
a separate entity in most hospitals because of the integration problems that exist. Eventually these systems should be connected so they appear to the...extremely important to efficient image data transfer include the protocol being used between the two transferring entities. Image data is currently...images, sound or video. The actual database consists of a collection of persistent data that is used by an application system of some entity, in this
NASA Technical Reports Server (NTRS)
1973-01-01
Design and development efforts for a spaceborne modular computer system are reported. An initial baseline description is followed by an interface design that includes definition of the overall system response to all classes of failure. Final versions of the register-level designs for all module types were completed. Packaging, support, and control executive software, including memory utilization estimates and a design verification plan, were formalized to ensure a soundly integrated design of the digital computer system.
Human-inspired sound environment recognition system for assistive vehicles
NASA Astrophysics Data System (ADS)
González Vidal, Eduardo; Fredes Zarricueta, Ernesto; Auat Cheein, Fernando
2015-02-01
Objective. The human auditory system acquires environmental information under sound stimuli faster than visual or touch systems, which in turn, allows for faster human responses to such stimuli. It also complements senses such as sight, where direct line-of-view is necessary to identify objects, in the environment recognition process. This work focuses on implementing human reaction to sound stimuli and environment recognition on assistive robotic devices, such as robotic wheelchairs or robotized cars. These vehicles need environment information to ensure safe navigation. Approach. In the field of environment recognition, range sensors (such as LiDAR and ultrasonic systems) and artificial vision devices are widely used; however, these sensors depend on environment constraints (such as lighting variability or color of objects), and sound can provide important information for the characterization of an environment. In this work, we propose a sound-based approach to enhance the environment recognition process, mainly for cases that compromise human integrity, according to the International Classification of Functioning (ICF). Our proposal is based on a neural network implementation that is able to classify up to 15 different environments, each selected according to the ICF considerations on environment factors in the community-based physical activities of people with disabilities. Main results. The accuracy rates in environment classification range from 84% to 93%. This classification is later used to constrain assistive vehicle navigation in order to protect the user during daily activities. This work also includes real-time outdoor experimentation (performed on an assistive vehicle) by seven volunteers with different disabilities (but without cognitive impairment and experienced in the use of wheelchairs), statistical validation, comparison with previously published work, and a discussion section where the pros and cons of our system are evaluated. Significance.
The proposed sound-based system is very efficient at providing general descriptions of the environment. Such descriptions are focused on vulnerable situations described by the ICF. The volunteers answered a questionnaire regarding the importance of constraining the vehicle velocities in risky environments, showing that all the volunteers felt comfortable with the system and its performance.
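The classification stage described above can be sketched as a small feed-forward network mapping a vector of acoustic features to one of the 15 ICF-derived environment classes; the feature count, layer sizes, and random (untrained) weights below are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal feed-forward classifier sketch: acoustic features -> one of
# 15 environment classes. Sizes and weights are illustrative assumptions.
N_FEATURES, N_HIDDEN, N_CLASSES = 26, 32, 15

W1 = rng.normal(scale=0.1, size=(N_FEATURES, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(scale=0.1, size=(N_HIDDEN, N_CLASSES))
b2 = np.zeros(N_CLASSES)

def classify(features):
    """Return the predicted class index and the softmax probabilities."""
    h = np.tanh(features @ W1 + b1)
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return int(np.argmax(p)), p

label, probs = classify(rng.normal(size=N_FEATURES))
print(label, probs.shape)  # an environment index in [0, 15) and 15 probabilities
```

In the actual system the output class would then be mapped to a velocity constraint for the assistive vehicle.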
ERIC Educational Resources Information Center
Forrest, Charles
1988-01-01
Reviews technological developments centered around microcomputers that have led to the design of integrated workstations. Topics discussed include methods of information storage, information retrieval, telecommunications networks, word processing, data management, graphics, interactive video, sound, interfaces, artificial intelligence, hypermedia,…
On the Acoustics of Emotion in Audio: What Speech, Music, and Sound have in Common.
Weninger, Felix; Eyben, Florian; Schuller, Björn W; Mortillaro, Marcello; Scherer, Klaus R
2013-01-01
Without doubt, there is emotional information in almost any kind of sound received by humans every day: be it the affective state of a person transmitted by means of speech; the emotion intended by a composer while writing a musical piece, or conveyed by a musician while performing it; or the affective state connected to an acoustic event occurring in the environment, in the soundtrack of a movie, or in a radio play. In the field of affective computing, there is currently some loosely connected research concerning each of these phenomena, but a holistic computational model of affect in sound is still lacking. In turn, for tomorrow's pervasive technical systems, including affective companions and robots, it is expected to be highly beneficial to understand the affective dimensions of "the sound that something makes," in order to evaluate the system's auditory environment and its own audio output. This article aims at a first step toward a holistic computational model: starting from standard acoustic feature extraction schemes in the domains of speech, music, and sound analysis, we interpret the worth of individual features across these three domains, considering four audio databases with observer annotations in the arousal and valence dimensions. In the results, we find that, by selection of appropriate descriptors, cross-domain arousal and valence regression is feasible, achieving significant correlations with the observer annotations of up to 0.78 for arousal (training on sound and testing on enacted speech) and 0.60 for valence (training on enacted speech and testing on music). The high degree of cross-domain consistency in encoding the two main dimensions of affect may be attributable to the co-evolution of speech and music from multimodal affect bursts, including the integration of nature sounds for expressive effects.
NASA Technical Reports Server (NTRS)
Haigh, R.; Krimchansky, S. (Technical Monitor)
2000-01-01
This is the Performance Verification Report, METSAT (S/N 108) AMSU-A1 Receiver Assemblies P/N 1356429-1 S/N F05 and P/N 1356409-1 S/N F05, for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A). The ATP for the AMSU-A Receiver Subsystem, AE-26002/6A, is prepared to describe in detail the configuration of the test setups and the procedures of the tests to verify that the receiver subsystem meets the specifications as required either in the AMSU-A Instrument Performance and Operation Specifications, S-480-80, or in AMSU-A Receiver Subsystem Specifications, AE-26608, derived by the Aerojet System Engineering. Test results that verify the conformance to the specifications demonstrate the acceptability of that particular receiver subsystem.
Sound For Animation And Virtual Reality
NASA Technical Reports Server (NTRS)
Hahn, James K.; Docter, Pete; Foster, Scott H.; Mangini, Mark; Myers, Tom; Wenzel, Elizabeth M.; Null, Cynthia (Technical Monitor)
1995-01-01
Sound is an integral part of the experience in computer animation and virtual reality. In this course, we will present some of the important technical issues in sound modeling, rendering, and synchronization as well as the "art" and business of sound that are being applied in animations, feature films, and virtual reality. The central theme is to bring leading researchers and practitioners from various disciplines to share their experiences in this interdisciplinary field. The course will give the participants an understanding of the problems and techniques involved in producing and synchronizing sounds, sound effects, dialogue, and music. The problem spans a number of domains including computer animation and virtual reality. Since sound has been an integral part of animations and films much longer than for computer-related domains, we have much to learn from traditional animation and film production. By bringing leading researchers and practitioners from a wide variety of disciplines, the course seeks to give the audience a rich mixture of experiences. It is expected that the audience will be able to apply what they have learned from this course in their research or production.
ERIC Educational Resources Information Center
Noguchi, Masaki; Hudson Kam, Carla L.
2018-01-01
In human languages, different speech sounds can be contextual variants of a single phoneme, called allophones. Learning which sounds are allophones is an integral part of the acquisition of phonemes. Whether given sounds are separate phonemes or allophones in a listener's language affects speech perception. Listeners tend to be less sensitive to…
Deficient multisensory integration in schizophrenia: an event-related potential study.
Stekelenburg, Jeroen J; Maes, Jan Pieter; Van Gool, Arthur R; Sitskoorn, Margriet; Vroomen, Jean
2013-07-01
In many natural audiovisual events (e.g., the sight of a face articulating the syllable /ba/), the visual signal precedes the sound and thus allows observers to predict the onset and the content of the sound. In healthy adults, the N1 component of the event-related brain potential (ERP), reflecting neural activity associated with basic sound processing, is suppressed if a sound is accompanied by a video that reliably predicts sound onset. If the sound does not match the content of the video (e.g., hearing /ba/ while lipreading /fu/), the later occurring P2 component is affected. Here, we examined whether these visual information sources affect auditory processing in patients with schizophrenia. The electroencephalography (EEG) was recorded in 18 patients with schizophrenia and compared with that of 18 healthy volunteers. As stimuli we used video recordings of natural actions in which visual information preceded and predicted the onset of the sound that was either congruent or incongruent with the video. For the healthy control group, visual information reduced the auditory-evoked N1 if compared to a sound-only condition, and stimulus-congruency affected the P2. This reduction in N1 was absent in patients with schizophrenia, and the congruency effect on the P2 was diminished. Distributed source estimations revealed deficits in the network subserving audiovisual integration in patients with schizophrenia. The results show a deficit in multisensory processing in patients with schizophrenia and suggest that multisensory integration dysfunction may be an important and, to date, under-researched aspect of schizophrenia.
Tervaniemi, M; Schröger, E; Näätänen, R
1997-05-23
Neuronal mechanisms involved in the processing of complex sounds with asynchronous onsets were studied in reading subjects. The sound onset asynchrony (SOA) between the leading partial and the remaining complex tone was varied between 0 and 360 ms. Infrequently occurring deviant sounds (in which one out of 10 harmonics differed in pitch from the frequently occurring standard sound) elicited the mismatch negativity (MMN), a change-specific cortical event-related potential (ERP) component. This indicates that the pitch of the standard stimuli had been pre-attentively coded by sensory-memory traces. Moreover, when the complex-tone onset fell within the temporal integration window initiated by the leading-partial onset, the deviants elicited the N2b component. This indicates that an involuntary attention switch towards the sound change occurred. In summary, the present results support the existence of a pre-perceptual integration mechanism of 100-200 ms duration and emphasize its importance in switching attention towards a stimulus change.
Sound It Out! Phonics in a Comprehensive Reading System
ERIC Educational Resources Information Center
Savage, John
2006-01-01
Rather than treating phonics as an end in itself, this brief text shows how phonics fits into the overall process of a child's learning to read. It helps students understand how phonics can be integrated successfully into an effective classroom reading program. While it includes a wealth of suggestions for practical classroom applications, the…
The Functional Neuroanatomy of Prelexical Processing in Speech Perception
ERIC Educational Resources Information Center
Scott, Sophie K.; Wise, Richard J. S.
2004-01-01
In this paper we attempt to relate the prelexical processing of speech, with particular emphasis on functional neuroimaging studies, to the study of auditory perceptual systems by disciplines in the speech and hearing sciences. The elaboration of the sound-to-meaning pathways in the human brain enables their integration into models of the human…
Innovative Use of Smartphones for a Sound Resonance Tube Experiment
ERIC Educational Resources Information Center
Tho, Siew Wei; Yeung, Yau Yuen
2014-01-01
A Smartphone is not only a mobile device that is used for communication but is also integrated with a personal digital assistant (PDA) and other technological capabilities such as built-in acceleration, magnetic and light sensors, microphone, camera and Global Positioning System (GPS) unit. This handheld device has become very popular with the…
Acoustic agglomeration of fine particles based on a high intensity acoustical resonator
NASA Astrophysics Data System (ADS)
Zhao, Yun; Zeng, Xinwu; Tian, Zhangfu
2015-10-01
Acoustic agglomeration (AA) is considered to be a promising method for reducing the air pollution caused by fine aerosol particles. Removal efficiency and energy consumption are the primary parameters and generally conflict with each other in industrial applications. It has been shown that removal efficiency increases with sound intensity and that an optimal frequency exists for a given polydisperse aerosol. Accordingly, a high-efficiency, low-energy-cost removal system was constructed using acoustical resonance. A high-intensity standing wave is generated by a tube system with an abrupt section change, driven by four loudspeakers. A numerical model of the tube system was built based on the finite element method, and the resonance condition and SPL increase were confirmed. Extensive tests were carried out to investigate the acoustic field in the agglomeration chamber. Removal efficiency for fine particles was tested by comparing filter paper mass and particle size distribution under different operating conditions, including sound pressure level (SPL) and frequency. The experimental study demonstrated that agglomeration increases with sound pressure level. Sound pressure level in the agglomeration chamber is between 145 dB and 165 dB from 500 Hz to 2 kHz. The resonance frequency can be predicted with quarter-wave tube theory. A sound pressure level gain of more than 10 dB is obtained at the resonance frequency. With the help of high-intensity sound waves, fine particles are reduced greatly, and the AA effect is enhanced at high SPL. The optimal frequency is 1.1 kHz for aerosol generated from coal ash. In the resonance tube, the higher resonance frequencies are not integer multiples of the first. As a result, strong nonlinearity is avoided by this dissonant characteristic, and shock waves were not found in the test results. The mechanism and testing system can be applied effectively to industrial processes in the future.
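The quarter-wave prediction mentioned above is simply f1 = c/(4L) for a tube closed at one end. A minimal sketch (the tube length is an assumed value that happens to land near the reported 1.1 kHz optimum; the formula predicts only the fundamental, since the abstract notes that the higher modes of this abrupt-section tube are not integer multiples of it):

```python
def quarter_wave_f1(length_m, c=343.0):
    """Fundamental of an ideal closed-open tube: f1 = c / (4 * L)."""
    return c / (4.0 * length_m)

# Assumed effective tube length of ~78 mm gives a fundamental near the
# 1.1 kHz optimal frequency reported for the coal-ash aerosol.
print(round(quarter_wave_f1(0.078)))  # 1099 Hz
```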
Mel'tser, A V; Erastova, N V; Kiselev, A V
2013-01-01
Providing the population with quality drinking water is one of the priority tasks of state policy aimed at maintaining the health of citizens. Hygienic rating of drinking water quality requires assurance of its epidemiological and radiation safety, the harmlessness of its chemical composition, and good organoleptic properties. Numerous data demonstrate the relationship between the chemical composition of drinking water and human health, and therefore hygienically sound measures to improve the efficiency of water treatment are an increasing priority. High water quality is the result of solving a complex of tasks, including an integral approach to assessing drinking water quality and the use of hygienically sound decisions in the modernization of water treatment systems. The results of the integral assessment of drinking water harmlessness have shown its relevance to the development and implementation of management decisions. The use of the spatial characteristics of integrated indices makes it possible to visualize changes in drinking water quality at all stages of production and transportation from the standpoint of health risks, to evaluate the effectiveness of technological solutions, and to set priorities for investment.
ERIC Educational Resources Information Center
Villasenor, Romana F.; Smith, Sarah L.; Jewell, Vanessa D.
2018-01-01
This systematic review evaluates current evidence for using sound-based interventions (SBIs) to improve educational participation for children with challenges in sensory processing and integration. Databases searched included CINAHL, MEDLINE Complete, PsychINFO, ERIC, Web of Science, and Cochrane. No studies explicitly measured participation-level…
NASA Technical Reports Server (NTRS)
Heffner, R. J.
1998-01-01
This is the Engineering Test Report, AMSU-A1 METSAT Instrument (S/N 105) Qualification Level Vibration Tests of December 1998 (S/O 605445, OC-419), for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A).
NASA Technical Reports Server (NTRS)
Valdez, A.
2000-01-01
This is the Engineering Test Report, Radiated Emissions and SARR, SARP, DCS Receivers, Link Frequencies EMI Sensitive Band Test Results, AMSU-A1, S/N 109, for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A).
NASA Technical Reports Server (NTRS)
Brooks, Rodney Allen; Stein, Lynn Andrea
1994-01-01
We describe a project to capitalize on newly available levels of computational resources in order to understand human cognition. We will build an integrated physical system including vision, sound input and output, and dextrous manipulation, all controlled by a continuously operating large scale parallel MIMD computer. The resulting system will learn to 'think' by building on its bodily experiences to accomplish progressively more abstract tasks. Past experience suggests that in attempting to build such an integrated system we will have to fundamentally change the way artificial intelligence, cognitive science, linguistics, and philosophy think about the organization of intelligence. We expect to be able to better reconcile the theories that will be developed with current work in neuroscience.
Electromagnetic mapping of buried paleochannels in eastern Abu Dhabi Emirate, U.A.E.
Fitterman, D.V.; Menges, C.M.; Al Kamali, A.M.; Essa, Jama F.
1991-01-01
Transient electromagnetic soundings and terrain conductivity meter measurements were used to map paleochannel geometry in the Al Jaww Plain of eastern Abu Dhabi Emirate, U.A.E. as part of an integrated hydrogeologic study of the Quaternary alluvial aquifer system. Initial interpretation of the data without benefit of well log information was able to map the depth to a conductive clay layer of Tertiary age that forms the base of the aquifer. Comparison of the results with induction logs reveals that a resistive zone exists that was incorporated into the interpretation and its lateral extent mapped with the transient electromagnetic sounding data. © 1991.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Copping, Andrea E.; Yang, Zhaoqing; Voisin, Nathalie
2013-12-01
Final Report for the EPA-sponsored project Snow Caps to White Caps that provides data products and insight for water resource managers to support their predictions and management actions to address future changes in water resources (fresh and marine) in the Puget Sound basin. This report details the efforts of a team of scientists and engineers from Pacific Northwest National Laboratory (PNNL) and the University of Washington (UW) to examine the movement of water in the Snohomish Basin, within the watershed and the estuary, under present and future conditions, using a set of linked numerical models.
Integrating speech in time depends on temporal expectancies and attention.
Scharinger, Mathias; Steinberg, Johanna; Tavano, Alessandro
2017-08-01
Sensory information that unfolds in time, such as in speech perception, relies on efficient chunking mechanisms in order to yield optimally sized units for further processing. Whether two successive acoustic events receive a one-unit or a two-unit interpretation seems to depend on the fit between their temporal extent and a stipulated temporal window of integration. However, there is ongoing debate on how flexible this temporal window of integration should be, especially for the processing of speech sounds. Furthermore, there is no direct evidence of whether attention may modulate the temporal constraints on the integration window. For this reason, we here examine how different word durations, which lead to different temporal separations of sound onsets, interact with attention. In an Electroencephalography (EEG) study, participants actively and passively listened to words in which word-final consonants were occasionally omitted. Words either had a natural duration or were artificially prolonged in order to increase the separation of speech sound onsets. Omission responses to incomplete speech input, originating in left temporal cortex, decreased when the critical speech sound was separated from previous sounds by more than 250 msec, i.e., when the separation was larger than the stipulated temporal window of integration (125-150 msec). Attention, on the other hand, only increased omission responses for stimuli with natural durations. We complemented the event-related potential (ERP) analyses with a frequency-domain analysis at the stimulus presentation rate. Notably, the power at the stimulation frequency showed the same duration and attention effects as the omission responses. We interpret these findings against the background of existing research on temporal integration windows and further suggest that they may be accounted for within the framework of predictive coding.
Yan, W Y; Li, L; Yang, Y G; Lin, X L; Wu, J Z
2016-08-01
We designed a computer-based respiratory sound analysis system to identify normal pediatric lung sounds, and set out to verify its validity. First, we downloaded standard lung sounds from the network database (website: http://www.easyauscultation.com/lung-sounds-reference-guide) and recorded 3 samples of abnormal lung sounds (rhonchi, wheeze and crackles) from three patients of the Department of Pediatrics, the First Affiliated Hospital of Xiamen University. We regarded these lung sounds as "reference lung sounds". The "test lung sounds" were recorded from 29 children from the Kindergarten of Xiamen University. We recorded lung sounds with a portable electronic stethoscope, and valid lung sounds were selected by manual identification. We introduced Mel-frequency cepstral coefficients (MFCC) to extract lung sound features and dynamic time warping (DTW) for signal classification. We had 39 standard lung sounds and recorded 58 test lung sounds. The system was run on the 58 test recordings, identifying 52 correctly and 6 incorrectly, for an accuracy of 89.7%. Based on MFCC and DTW, our computer-based respiratory sound analysis system can effectively identify healthy lung sounds of children (accuracy of 89.7%), demonstrating the reliability of the lung sound analysis system.
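The DTW classification step described in this abstract can be sketched in a few lines of Python. This is an illustrative reimplementation, not the authors' code; it operates on 1-D feature sequences standing in for per-frame MFCC features, and the reference labels are hypothetical.

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D feature sequences."""
    INF = float("inf")
    n, m = len(a), len(b)
    # D[i][j] = minimal accumulated cost aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def classify(test_seq, references):
    """Assign the label of the reference sequence nearest in DTW distance."""
    return min(references, key=lambda label: dtw_distance(test_seq, references[label]))
```

A test recording is then labeled by its nearest reference, mirroring the paper's nearest-template decision rule.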
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Shiyuan, E-mail: redaple@bit.edu.cn; Sun, Haoyu, E-mail: redaple@bit.edu.cn; Xu, Chunguang, E-mail: redaple@bit.edu.cn
The echo signal energy is directly affected by the incident sound beam eccentricity or angle when detecting inner longitudinal cracks in thick-walled pipes. A method for analyzing the relationship between the echo signal energy and the incident eccentricity is proposed, which can be used to estimate the echo signal energy when testing an inside-wall longitudinal crack of a pipe, using a shear wave mode-converted from a compression wave with the water-immersion method, by performing a two-dimensional integration of an "energy coefficient" in both the circumferential and axial directions. The calculation model is established for the cylindrical sound beam case, in which the refraction and reflection energy coefficients of the different rays within the whole sound beam are treated as different. The echo signal energy is calculated for a particular cylindrical sound beam testing different pipes: a beam with a diameter of 0.5 inch (12.7 mm) testing a φ279.4 mm pipe and a φ79.4 mm one. As a comparison, the results of both the two-dimensional integration and the one-dimensional (circumferential direction) integration are listed, and only the former agrees well with experimental results. The estimation method proves to be valid and shows that the usual practice of simplifying the sound beam as a single ray for estimating echo signal energy and choosing the optimal incident eccentricity is not appropriate.
NASA Astrophysics Data System (ADS)
Zhou, Shiyuan; Sun, Haoyu; Xu, Chunguang; Cao, Xiandong; Cui, Liming; Xiao, Dingguo
2015-03-01
The echo signal energy is directly affected by the incident sound beam eccentricity or angle when detecting inner longitudinal cracks in thick-walled pipes. A method for analyzing the relationship between the echo signal energy and the incident eccentricity is proposed, which can be used to estimate the echo signal energy when testing an inside-wall longitudinal crack of a pipe, using a shear wave mode-converted from a compression wave with the water-immersion method, by performing a two-dimensional integration of an "energy coefficient" in both the circumferential and axial directions. The calculation model is established for the cylindrical sound beam case, in which the refraction and reflection energy coefficients of the different rays within the whole sound beam are treated as different. The echo signal energy is calculated for a particular cylindrical sound beam testing different pipes: a beam with a diameter of 0.5 inch (12.7 mm) testing a φ279.4 mm pipe and a φ79.4 mm one. As a comparison, the results of both the two-dimensional integration and the one-dimensional (circumferential direction) integration are listed, and only the former agrees well with experimental results. The estimation method proves to be valid and shows that the usual practice of simplifying the sound beam as a single ray for estimating echo signal energy and choosing the optimal incident eccentricity is not appropriate.
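The two-dimensional integration over the circumferential and axial directions described above can be approximated with a standard numerical rule. The sketch below is a generic 2-D trapezoidal integrator in Python with a placeholder integrand; it is not the paper's energy-coefficient model, only the integration machinery such a model would plug into.

```python
def integrate_2d(f, a, b, c, d, nx=50, ny=50):
    """2-D trapezoidal rule for f(x, y) over the rectangle [a, b] x [c, d]."""
    hx, hy = (b - a) / nx, (d - c) / ny
    total = 0.0
    for i in range(nx + 1):
        wx = 0.5 if i in (0, nx) else 1.0  # edge samples get half weight
        for j in range(ny + 1):
            wy = 0.5 if j in (0, ny) else 1.0
            total += wx * wy * f(a + i * hx, c + j * hy)
    return total * hx * hy
```

In the paper's setting the integrand would be the ray-dependent energy coefficient as a function of circumferential and axial position within the beam footprint; here any smooth f(x, y) can be supplied.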
Liu, Baolin; Wu, Guangning; Wang, Zhongning; Ji, Xiang
2011-07-01
In the real world, some of the auditory and visual information received by the human brain is temporally asynchronous. How is such information integrated in cognitive processing in the brain? In this paper, we aimed to study the semantic integration of asynchronous audio-visual information in cognitive processing using the ERP (event-related potential) method. Subjects were presented with videos of real-world events in which the auditory and visual information were temporally asynchronous. When the critical action preceded the sound, sounds incongruous with the preceding critical actions elicited an N400 effect when compared to the congruous condition. This result demonstrates that the semantic contextual integration indexed by N400 also applies to the cognitive processing of multisensory information. In addition, the N400 effect is early in latency when contrasted with other visually induced N400 studies, showing that cross-modal information is facilitated in time when contrasted with visual information in isolation. When the sound preceded the critical action, a larger late positive wave was observed in the incongruous condition compared to the congruous condition. This P600 might represent a reanalysis process, in which the mismatch between the critical action and the preceding sound was evaluated, showing that environmental sound may affect the cognitive processing of a visual event.
Research on the Integration of IT Network Technology and TV Production and Broadcasting System
NASA Astrophysics Data System (ADS)
Zhang, Wenqing
2017-12-01
In recent years, with the development of China's economy and the progress of science and technology, China's TV industry has made great progress and provided a new platform for residents to understand social conditions. In this situation, in order to safeguard the efficiency of the TV system and the steady improvement of its quality, technical staff have strengthened the rational use of IT technology, and on that basis promoted the sound development of the television production system. Accordingly, this paper focuses on the connotation of IT network technology and discusses its integration with the design of TV production and broadcasting systems, in the hope of realizing the sustainable development of China's TV industry.
Complete geometric computer simulation of a classical guitar
NASA Astrophysics Data System (ADS)
Bader, Rolf
2005-04-01
The aim of formulating a complete model of a classical guitar body as a transient-time geometry is to gain detailed insight into the vibrating and coupling behavior of the time-dependent guitar system. In particular, the evolution of the guitar's initial transient can be examined in great detail, and the sounds produced by this computer implementation can be listened to. A stand-alone software package was therefore developed to build, calculate, and visualize the guitar. The model splits the guitar body into top plate, back plate, ribs, neck, enclosed air, and strings, and couples these parts together, including the coupling of bending waves and in-plane waves of the plates, to serve a better understanding of the coupling between the guitar parts and between these two kinds of waves. The resulting waveforms are integrated over the geometry, and the resulting sounds reveal the different roles and contributions of the different guitar body parts to the guitar sound. A cooperation with guitar makers has been established, as the effects of changes in the guitar's geometry on the resulting sound can be assessed in the computer simulation, and promising new sound qualities can then be used in real instrument production.
Geometric Constraints on Human Speech Sound Inventories
Dunbar, Ewan; Dupoux, Emmanuel
2016-01-01
We investigate the idea that the languages of the world have developed coherent sound systems in which having one sound increases or decreases the chances of having certain other sounds, depending on shared properties of those sounds. We investigate the geometries of sound systems that are defined by the inherent properties of sounds. We document three typological tendencies in sound system geometries: economy, a tendency for the differences between sounds in a system to be definable on a relatively small number of independent dimensions; local symmetry, a tendency for sound systems to have relatively large numbers of pairs of sounds that differ only on one dimension; and global symmetry, a tendency for sound systems to be relatively balanced. The finding of economy corroborates previous results; the two symmetry properties have not been previously documented. We also investigate the relation between the typology of inventory geometries and the typology of individual sounds, showing that the frequency distribution with which individual sounds occur across languages works in favor of both local and global symmetry. PMID:27462296
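The economy and local-symmetry measures described in this abstract can be illustrated on a toy inventory encoded as feature tuples. The encoding below is a hypothetical two-feature system for illustration only, not the authors' feature set or metric definitions.

```python
def economy(inventory):
    """Number of feature dimensions that actually vary across the inventory.

    A more economical system distinguishes its sounds on fewer dimensions.
    """
    return sum(len(set(col)) > 1 for col in zip(*inventory))

def local_symmetry_pairs(inventory):
    """Count pairs of sounds that differ on exactly one feature dimension."""
    sounds = list(inventory)
    pairs = 0
    for i in range(len(sounds)):
        for j in range(i + 1, len(sounds)):
            diffs = sum(x != y for x, y in zip(sounds[i], sounds[j]))
            if diffs == 1:
                pairs += 1
    return pairs
```

A fully "crossed" inventory such as {(0,0), (0,1), (1,0), (1,1)} uses only 2 dimensions for 4 sounds and yields 4 minimally different pairs, illustrating both economy and local symmetry.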
14 CFR 36.6 - Incorporation by reference.
Code of Federal Regulations, 2010 CFR
2010-01-01
... No. 179, entitled “Precision Sound Level Meters,” dated 1973. (ii) IEC Publication No. 225, entitled... 1966. (iii) IEC Publication No. 651, entitled “Sound Level Meters,” first edition, dated 1979. (iv) IEC... edition, dated 1976. (v) IEC Publication No. 804, entitled “Integrating-averaging Sound Level Meters...
A comparative study of wood highway sound barriers
Stefan Grgurevich; Thomas Boothby; Harvey Manbeck; Courtney Burroughs; Stephen Cegelka; Craig Bernecker; Michael A. Ritter
2002-01-01
Prototype designs for wood highway sound barriers meeting the multiple criteria of structural integrity, acoustic effectiveness, durability, and potential for public acceptance have been developed. Existing installations of wood sound barriers were reviewed and measurements conducted in the field to estimate acoustic insertion losses. A complete matrix of design...
Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search
Song, Kai; Liu, Qi; Wang, Qi
2011-01-01
Bionic technology provides a new source of inspiration for mobile robot navigation, since it explores ways to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other in target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of the microphone array. Furthermore, this paper presents a heading-direction-based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction measured by a magnetoresistive sensor and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, each robot can communicate with the others via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within a distance of 2 m, while the two hearing robots can quickly localize and track the olfactory robot within 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability. PMID:22319401
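The time-delay-estimation step used by the hearing robots can be sketched as a brute-force cross-correlation between two microphone signals, followed by a far-field angle estimate. This is an illustrative Python sketch under simplified assumptions (a two-microphone array, plane-wave arrival), not the authors' implementation; the function names and parameter values are hypothetical.

```python
import math

def estimate_delay(x, y, max_lag):
    """Return the lag (in samples) at which y best correlates with x."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        s = 0.0
        for i in range(len(x)):
            j = i + lag
            if 0 <= j < len(y):
                s += x[i] * y[j]  # cross-correlation at this lag
        if s > best_val:
            best_val, best_lag = s, lag
    return best_lag

def arrival_angle(delay_samples, fs, mic_spacing, c=343.0):
    """Far-field arrival angle (degrees) from inter-microphone delay."""
    tau = delay_samples / fs
    # clamp to the valid asin domain against rounding of c*tau/d
    ratio = max(-1.0, min(1.0, c * tau / mic_spacing))
    return math.degrees(math.asin(ratio))
```

With the delay in hand, the geometry of the array converts it to a bearing that the navigation algorithm can steer toward.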
Characterizing, synthesizing, and/or canceling out acoustic signals from sound sources
Holzrichter, John F [Berkeley, CA; Ng, Lawrence C [Danville, CA
2007-03-13
A system for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate and animate sound sources. Electromagnetic sensors monitor excitation sources in sound producing systems, such as animate sound sources such as the human voice, or from machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The systems disclosed enable accurate calculation of transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
Sound radiation from a subsonic rotor subjected to turbulence
NASA Technical Reports Server (NTRS)
Sevik, M.
1974-01-01
The broadband sound radiated by a subsonic rotor subjected to turbulence in the approach stream has been analyzed. The power spectral density of the sound intensity has been found to depend on a characteristic time scale, namely the integral scale of the turbulence divided by the axial flow velocity, as well as on several length-scale ratios. These consist of the ratios of the integral scale to the acoustic wavelength, rotor radius, and blade chord. Due to the simplified model chosen, only a limited number of cascade parameters appear. Limited comparisons with experimental data indicate good agreement with predicted values.
NASA Technical Reports Server (NTRS)
Heffner, R.
2000-01-01
This is the Engineering Test Report, AMSU-A2 METSAT Instrument (S/N 108) Acceptance Level Vibration Test of Dec 1999/Jan 2000 (S/O 784077, OC-454), for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A).
NASA Technical Reports Server (NTRS)
Valdez, A.
2000-01-01
This is the Engineering Test Report, Radiated Emissions and SARR, SARP, DCS Receivers, Link Frequencies EMI Sensitive Band Test Results, AMSU-A2, S/N 108, for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A).
Ludeña-Choez, Jimmy; Quispe-Soncco, Raisa; Gallardo-Antolín, Ascensión
2017-01-01
Feature extraction for Acoustic Bird Species Classification (ABSC) tasks has traditionally been based on parametric representations that were specifically developed for speech signals, such as Mel Frequency Cepstral Coefficients (MFCC). However, the discrimination capabilities of these features for ABSC could be enhanced by accounting for the vocal production mechanisms of birds, and, in particular, the spectro-temporal structure of bird sounds. In this paper, a new front-end for ABSC is proposed that incorporates this specific information through the non-negative decomposition of bird sound spectrograms. It consists of the following two different stages: short-time feature extraction and temporal feature integration. In the first stage, which aims at providing a better spectral representation of bird sounds on a frame-by-frame basis, two methods are evaluated. In the first method, cepstral-like features (NMF_CC) are extracted by using a filter bank that is automatically learned by means of the application of Non-Negative Matrix Factorization (NMF) on bird audio spectrograms. In the second method, the features are directly derived from the activation coefficients of the spectrogram decomposition as performed through NMF (H_CC). The second stage summarizes the most relevant information contained in the short-time features by computing several statistical measures over long segments. The experiments show that the use of NMF_CC and H_CC in conjunction with temporal integration significantly improves the performance of a Support Vector Machine (SVM)-based ABSC system with respect to conventional MFCC.
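The core of the spectrogram decomposition described above, Non-Negative Matrix Factorization, can be sketched with the standard multiplicative-update rules for the Frobenius objective. The pure-Python version below is illustrative only (library implementations are far faster), and the tiny test matrix is a stand-in for a real bird sound spectrogram.

```python
import random

def nmf(V, k, iters=300, seed=0):
    """Multiplicative-update NMF: V (n x m, nonnegative) ~ W (n x k) @ H (k x m)."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    # strictly positive random initialization keeps the updates well-defined
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(k)]
    eps = 1e-9

    def matmul(A, B):
        return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def T(A):
        return [list(row) for row in zip(*A)]

    for _ in range(iters):
        WH = matmul(W, H)
        # H <- H * (W^T V) / (W^T W H)
        WtV, WtWH = matmul(T(W), V), matmul(T(W), WH)
        for a in range(k):
            for j in range(m):
                H[a][j] *= WtV[a][j] / (WtWH[a][j] + eps)
        WH = matmul(W, H)
        # W <- W * (V H^T) / (W H H^T)
        VHt, WHHt = matmul(V, T(H)), matmul(WH, T(H))
        for i in range(n):
            for a in range(k):
                W[i][a] *= VHt[i][a] / (WHHt[i][a] + eps)
    return W, H
```

In the paper's front-end, the columns of W would play the role of a learned spectral filter bank (NMF_CC) while the rows of H supply the activation coefficients (H_CC).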
Wallez, Catherine; Schaeffer, Jennifer; Meguerditchian, Adrien; Vauclair, Jacques; Schapiro, Steven J.; Hopkins, William D.
2013-01-01
Studies of oro-facial asymmetries in nonhuman primates have largely demonstrated a right-hemispheric dominance for communicative signals and the conveyance of emotional information. A recent study on chimpanzees reported the first evidence of a significant left-hemispheric dominance for attention-getting sounds and a rightward bias for species-typical vocalizations (Losin, Russell, Freeman, Meguerditchian, Hopkins & Fitch, 2008). The current study sought to extend the findings of Losin et al. (2008) with additional oro-facial assessments in a new colony of chimpanzees. When combining the two populations, the results indicated a consistent leftward bias for attention-getting sounds and a right lateralization for species-typical vocalizations. Collectively, the results suggest that voluntarily controlled oro-facial and gestural communication might share the same left-hemispheric specialization and might have coevolved into a single integrated system present in a common hominid ancestor. PMID:22867751
Temporal processing and adaptation in the songbird auditory forebrain.
Nagel, Katherine I; Doupe, Allison J
2006-09-21
Songbird auditory neurons must encode the dynamics of natural sounds at many volumes. We investigated how neural coding depends on the distribution of stimulus intensities. Using reverse-correlation, we modeled responses to amplitude-modulated sounds as the output of a linear filter and a nonlinear gain function, then asked how filters and nonlinearities depend on the stimulus mean and variance. Filter shape depended strongly on mean amplitude (volume): at low mean, most neurons integrated sound over many milliseconds, while at high mean, neurons responded more to local changes in amplitude. Increasing the variance (contrast) of amplitude modulations had less effect on filter shape but decreased the gain of firing in most cells. Both filter and gain changes occurred rapidly after a change in statistics, suggesting that they represent nonlinearities in processing. These changes may permit neurons to signal effectively over a wider dynamic range and are reminiscent of findings in other sensory systems.
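For discrete spikes, the reverse-correlation estimate of the linear filter in such a linear-nonlinear model reduces to a spike-triggered average of the stimulus. The sketch below is a minimal illustration of that estimator, not the authors' analysis code; variable names and the toy stimulus are hypothetical.

```python
def spike_triggered_average(stimulus, spike_times, window):
    """Estimate the linear filter as the mean stimulus segment preceding each spike.

    stimulus: 1-D amplitude sequence; spike_times: sample indices of spikes;
    window: filter length in samples.
    """
    # collect the `window` samples of stimulus immediately before each spike
    segments = [stimulus[t - window:t] for t in spike_times if t >= window]
    n = len(segments)
    return [sum(seg[i] for seg in segments) / n for i in range(window)]
```

The output nonlinearity (the gain function) would then be estimated separately, e.g. by comparing the distribution of filtered stimulus values at spike times against the overall distribution.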
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borg, Lori; Tobin, David; Reale, Anthony
This IOP has been a coordinated effort involving the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility, the University of Wisconsin (UW)-Madison, and the JPSS project to validate SNPP NOAA Unique Combined Atmospheric Processing System (NUCAPS) temperature and moisture sounding products from the Cross-track Infrared Sounder (CrIS) and the Advanced Technology Microwave Sounder (ATMS). In this arrangement, funding for radiosondes was provided by the JPSS project to ARM. These radiosondes were launched coincident with the SNPP satellite overpasses (OP) at four of the ARM field sites beginning in July 2012 and running through September 2017. Combined with other ARM data, an assessment of the radiosonde data quality was performed and post-processing corrections applied, producing an ARM site Best Estimate (BE) product. The SNPP targeted radiosondes were integrated into the NOAA Products Validation System (NPROVS+), which collocated the radiosondes with satellite products (NOAA, National Aeronautics and Space Administration [NASA], European Organisation for the Exploitation of Meteorological Satellites [EUMETSAT], Geostationary Operational Environmental Satellite [GOES], Constellation Observing System for Meteorology, Ionosphere, and Climate [COSMIC]) and Numerical Weather Prediction (NWP) forecasts for use in product assessment and algorithm development. This work was a fundamental, integral, and cost-effective part of the SNPP validation effort and provided critical accuracy assessments of the SNPP temperature and water vapor soundings.
Moving Digital Libraries into the Student Learning Space: The GetSmart Experience
ERIC Educational Resources Information Center
Marshall, Byron B.; Chen, Hsinchun; Shen, Rao; Fox, Edward A.
2006-01-01
The GetSmart system was built to support theoretically sound learning processes in a digital library environment by integrating course management, digital library, and concept mapping components to support a constructivist, six-step, information search process. In the fall of 2002 more than 100 students created 1400 concept maps as part of…
DIY Soundcard Based Temperature Logging System. Part I: Design
ERIC Educational Resources Information Center
Nunn, John
2016-01-01
This paper aims to enable schools to make their own low-cost temperature logging instrument and to learn something about its calibration in the process. This paper describes how a thermistor can be integrated into a simple potential divider circuit which is powered with the sound output of a computer and monitored by the microphone input. The…
ERIC Educational Resources Information Center
Stevenson, Ryan A.; Zemtsov, Raquel K.; Wallace, Mark T.
2012-01-01
Human multisensory systems are known to bind inputs from the different sensory modalities into a unified percept, a process that leads to measurable behavioral benefits. This integrative process can be observed through multisensory illusions, including the McGurk effect and the sound-induced flash illusion, both of which demonstrate the ability of…
Design, Fabrication, and Characterization of a Microelectromechanical Directional Microphone
2011-06-01
NASA Technical Reports Server (NTRS)
Baumeister, K. J.; Horowitz, S. J.
1982-01-01
An iterative finite element integral technique is used to predict the sound field radiated from the JT15D turbofan inlet. The sound field is divided into two regions: the sound field within and near the inlet which is computed using the finite element method and the radiation field beyond the inlet which is calculated using an integral solution technique. The velocity potential formulation of the acoustic wave equation was employed in the program. For some single mode JT15D data, the theory and experiment are in good agreement for the far field radiation pattern as well as suppressor attenuation. Also, the computer program is used to simulate flight effects that cannot be performed on a ground static test stand.
Zhao, Sipei; Qiu, Xiaojun; Cheng, Jianchun
2015-09-01
This paper proposes a different method for calculating a sound field diffracted by a rigid barrier based on the integral equation method, where a virtual boundary is assumed above the rigid barrier to divide the whole space into two subspaces. Based on the Kirchhoff-Helmholtz equation, the sound field in each subspace is determined with the source inside and the boundary conditions on the surface, and then the diffracted sound field is obtained by using the continuation conditions on the virtual boundary. Simulations are carried out to verify the feasibility of the proposed method. Compared to the MacDonald method and other existing methods, the proposed method is a rigorous solution for whole space and is also much easier to understand.
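The starting point of the method, the Kirchhoff-Helmholtz integral equation, can be stated in its standard form (generic notation, not reproduced from the paper):

```latex
p(\mathbf{r}) = p_{\mathrm{inc}}(\mathbf{r})
  + \int_{S} \left[ G(\mathbf{r},\mathbf{r}_s)\,
      \frac{\partial p(\mathbf{r}_s)}{\partial n}
    - p(\mathbf{r}_s)\,
      \frac{\partial G(\mathbf{r},\mathbf{r}_s)}{\partial n} \right] \mathrm{d}S,
\qquad
G(\mathbf{r},\mathbf{r}_s) = \frac{e^{ik|\mathbf{r}-\mathbf{r}_s|}}{4\pi\,|\mathbf{r}-\mathbf{r}_s|}.
```

Here S is the boundary of the subspace containing the field point, G is the free-space Green's function, and n the boundary normal; in the proposed method, continuity of pressure and normal velocity on the virtual boundary above the barrier links the two subspace solutions.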
Practical Application of Sheet Lead for Sound Barriers.
ERIC Educational Resources Information Center
Lead Industries Association, New York, NY.
Techniques for improving sound barriers through the use of lead sheeting are described. To achieve an ideal sound barrier, a material should have the following properties: (1) high density, (2) freedom from stiffness, (3) good damping capacity, and (4) integrity as a non-permeable membrane. Lead combines these desired properties to a greater…
NASA Astrophysics Data System (ADS)
West, Eva
2012-11-01
Researchers have highlighted the increasing problem of loud sounds among young people in leisure-time environments, recently even emphasizing portable music players, because of the risk of suffering from hearing impairments such as tinnitus. However, there is a lack of studies investigating compulsory-school students' standpoints and explanations in connection with teaching interventions integrating school subject content with auditory health. In addition, there are few health-related studies in the international science education literature. This paper explores students' standpoints on loud sounds including the use of hearing-protection devices in connection with a teaching intervention based on a teaching-learning sequence about sound, hearing and auditory health. Questionnaire data from 199 students, in grades 4, 7 and 8 (aged 10-14), from pre-, post- and delayed post-tests were analysed. Additionally, information on their experiences of tinnitus as well as their listening habits regarding portable music players was collected. The results show that more students make healthier choices in questions of loud sounds after the intervention, and especially among the older ones this result remains or is further improved one year later. There are also signs of positive behavioural change in relation to loud sounds. Significant gender differences are found; generally, the girls show more healthy standpoints and expressions than boys do. If this can be considered to be an outcome of students' improved and integrated knowledge about sound, hearing and health, then this emphasizes the importance of integrating health issues into regular school science.
Holzrichter, John F.; Burnett, Greg C.; Ng, Lawrence C.
2003-01-01
A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
Holzrichter, John F; Burnett, Greg C; Ng, Lawrence C
2013-05-21
A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
Holzrichter, John F.; Burnett, Greg C.; Ng, Lawrence C.
2007-10-16
A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
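The transfer-function idea running through these patent records can be illustrated in general terms. The sketch below is not the patented electromagnetic-sensor method; it is a standard H1 spectral estimate, H(f) = S_yx(f) / S_xx(f), relating a measured excitation x to an acoustical output y. The toy system (a gain of 0.5 with a 3-sample delay) and all signal names are hypothetical:

```python
import numpy as np

def estimate_transfer_function(x, y, nfft=256):
    """H1 estimator: H(f) = S_yx(f) / S_xx(f), averaged over windowed segments."""
    n_seg = len(x) // nfft
    win = np.hanning(nfft)
    Sxx = np.zeros(nfft // 2 + 1)
    Syx = np.zeros(nfft // 2 + 1, dtype=complex)
    for k in range(n_seg):
        xs = np.fft.rfft(win * x[k * nfft:(k + 1) * nfft])
        ys = np.fft.rfft(win * y[k * nfft:(k + 1) * nfft])
        Sxx += (xs * np.conj(xs)).real   # auto-spectrum of the excitation
        Syx += ys * np.conj(xs)          # cross-spectrum output vs. excitation
    return Syx / Sxx

# Toy "sound producing system": output is the excitation scaled by 0.5
# and delayed by 3 samples, so |H(f)| should be ~0.5 at every frequency.
rng = np.random.default_rng(0)
x = rng.standard_normal(8192)
y = 0.5 * np.roll(x, 3)
H = estimate_transfer_function(x, y)
```

Once H is known, the same relation can be run forward (synthesis, multiplying an excitation spectrum by H) or inverted (cancellation), which is the gist of the applications the abstract lists.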
Wind-tunnel acoustic results of two rotor models with several tip designs
NASA Technical Reports Server (NTRS)
Martin, R. M.; Connor, A. B.
1986-01-01
A three-phase research program has been undertaken to study the acoustic signals due to the aerodynamic interaction of rotorcraft main rotors and tail rotors. During the first phase, two different rotor models with several interchangeable tips were tested in the Langley 4- by 7-Meter Tunnel on the U.S. Army rotor model system. An extensive acoustic data base was acquired, with special emphasis on blade-vortex interaction (BVI) noise. The details of the experimental procedure, acoustic data acquisition, and reduction are documented. The overall sound pressure level (OASPL) of the high-twist rotor systems is relatively insensitive to flight speed but generally increases with rotor tip-path-plane angle. The OASPL of the high-twist rotors is dominated by acoustic energy in the low-frequency harmonics. The OASPL of the low-twist rotor systems shows more dependence on flight speed than that of the high-twist rotors, in addition to being quite sensitive to tip-path-plane angle. An integrated sound pressure level band-limited to 500-3000 Hz is a useful metric to quantify the occurrence of BVI noise. The OASPL of the low-twist rotors is strongly influenced by the band-limited sound levels, indicating that blade-vortex impulsive noise is a dominant noise source for this rotor design. The midfrequency acoustic levels for both rotors show a very strong dependence on rotor tip-path-plane angle. The tip-path-plane angle at which the maximum midfrequency sound level occurs consistently decreases with increasing flight speed. The maximum midfrequency sound level measured at a given location is constant regardless of the flight speed.
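A band-limited sound pressure level of the kind used above can be sketched from a pressure time series. The sampling rate, the test tone, and the dB re 20 µPa reference below are assumptions for the example, not values from the study:

```python
import numpy as np

def band_limited_spl(p, fs, f_lo=500.0, f_hi=3000.0, p_ref=20e-6):
    """SPL (dB re 20 uPa) of the f_lo..f_hi band of pressure signal p (Pa)."""
    spec = np.fft.rfft(p)
    freqs = np.fft.rfftfreq(len(p), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    # Parseval: mean-square pressure contributed by the band
    # (factor 2 for the one-sided spectrum; DC/Nyquist ignored for brevity)
    ms = 2.0 * np.sum(np.abs(spec[band]) ** 2) / len(p) ** 2
    return 10.0 * np.log10(ms / p_ref ** 2)

# A 1 kHz tone of 1 Pa amplitude (rms = 1/sqrt(2) Pa) falls inside the band,
# so the band-limited level equals the tone's full SPL, about 91 dB.
fs = 44100
t = np.arange(fs) / fs
spl = band_limited_spl(np.sin(2 * np.pi * 1000 * t), fs)
```

Energy outside 500-3000 Hz (e.g. the low-frequency rotor harmonics) is excluded by the band mask, which is what lets the metric isolate BVI-type content.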
Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study.
Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong
2015-01-01
A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190-210 ms, for 1 kHz stimuli from 170-200 ms, for 2.5 kHz stimuli from 140-200 ms, and for 5 kHz stimuli from 100-200 ms. These findings suggest that a higher-frequency sound signal paired with visual stimuli might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late-latency (300-340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies.
Virtual environment display for a 3D audio room simulation
NASA Astrophysics Data System (ADS)
Chapin, William L.; Foster, Scott
1992-06-01
Recent developments in virtual 3D audio and synthetic aural environments have produced a complex acoustical room simulation. The acoustical simulation models a room with walls, ceiling, and floor of selected sound reflecting/absorbing characteristics and unlimited independent localizable sound sources. This non-visual acoustic simulation, implemented with four audio Convolvotrons by Crystal River Engineering and coupled to the listener with a Polhemus Isotrak tracking the listener's head position and orientation, and stereo headphones returning binaural sound, is quite compelling to most listeners with eyes closed. This immersive effect should be reinforced when properly integrated into a full, multi-sensory virtual environment presentation. This paper discusses the design of an interactive, visual virtual environment, complementing the acoustic model and specified to: 1) allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; 2) reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; 3) enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and 4) serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, is discussed through the development of four iterative configurations. The installed system implements a head-coupled, wide-angle, stereo-optic tracker/viewer and multi-computer simulation control. The portable demonstration system implements a head-mounted wide-angle, stereo-optic display, separate head and pointer electro-magnetic position trackers, a heterogeneous parallel graphics processing system, and object-oriented C++ program code.
Optimal wavelet denoising for smart biomonitor systems
NASA Astrophysics Data System (ADS)
Messer, Sheila R.; Agzarian, John; Abbott, Derek
2001-03-01
Future smart systems promise many benefits for biomedical diagnostics. The ideal is simple portable systems that display and interpret information from smart integrated probes or MEMS-based devices. In this paper, we discuss a step towards this vision with a heart bio-monitor case study. An electronic stethoscope is used to record heart sounds, and the problem of removing noise from the signal is addressed via the use of wavelets and averaging. In our example of heartbeat analysis, phonocardiograms (PCGs) have many advantages in that they may be replayed and analysed for spectral and frequency information. Many sources of noise may pollute a PCG, including foetal breath sounds if the subject is pregnant, lung and breath sounds, environmental noise and noise from contact between the recording device and the skin. Wavelets can be employed to denoise the PCG. The signal is decomposed by a discrete wavelet transform. Due to the efficient decomposition of heart signals, their wavelet coefficients tend to be much larger than those due to noise. Thus, coefficients below a certain level are regarded as noise and are thresholded out. The signal can then be reconstructed without significant loss of information. The questions that this study attempts to answer are which wavelet families, levels of decomposition, and thresholding techniques best remove the noise in a PCG. The use of averaging in combination with wavelet denoising is also addressed. Possible applications of the Hilbert transform to heart sound analysis are discussed.
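A minimal version of the decompose-threshold-reconstruct scheme described above can be sketched with a hand-rolled Haar transform. The study compares several wavelet families and thresholding rules; the Haar filter and the universal soft threshold below are one simple choice, and the test signal is synthetic rather than a real PCG:

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar wavelet transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(x, levels=3):
    """Soft-threshold detail coefficients with the universal threshold."""
    approx, details = x, []
    for _ in range(levels):
        approx, d = haar_dwt(approx)
        details.append(d)
    sigma = np.median(np.abs(details[0])) / 0.6745     # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))          # universal threshold
    details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in details]
    for d in reversed(details):
        approx = haar_idwt(approx, d)
    return approx

# Synthetic stand-in for a noisy recording: a slow oscillation plus noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * rng.standard_normal(1024)
out = denoise(noisy)
```

Because the transform is orthonormal, the smooth signal concentrates in a few large coefficients while the noise spreads thinly across all of them, which is exactly the property the thresholding exploits.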
Diversity and evolution of sound production in the social behavior of Chaetodon butterflyfishes.
Tricas, Timothy C; Boyle, Kelly S
2015-05-15
Fish produce context-specific sounds during social communication, but it is not known how acoustic behaviors have evolved in relation to specializations of the auditory system. Butterflyfishes (family Chaetodontidae) have a well-defined phylogeny and produce pulsed communication sounds during social interactions on coral reefs. Recent work indicates that two sound production mechanisms exist in the bannerfish clade and that other mechanisms are used in the Chaetodon clade, which is distinguished by an auditory specialization, the laterophysic connection (LC). Here, we determine the kinematic action patterns associated with sound production during social interactions in four Chaetodon subgenera and the non-laterophysic fish Forcipiger flavissimus. Some Chaetodon species share the head bob acoustic behavior with F. flavissimus, which along with other sounds in the 100-1000 Hz spectrum, are probably adequate to stimulate the ear, swim bladder or LC of a receiver fish. In contrast, only Chaetodon species produced the tail slap sound, which involves a 1-30 Hz hydrodynamic pulse that is likely to stimulate the receiver's ear and lateral line at close distances, but not the swim bladder or LC. Reconstructions of ancestral character states appear equivocal for the head bob and divergent for the tail slap acoustic behaviors. Independent contrast analysis shows a correlation between sound duration and stimulus intensity characters. The intensities of the tail slap and body pulse sounds in Chaetodon species are correlated with body size and can provide honest communication signals. Future studies on fish acoustic communication should investigate low-frequency and infrasound acoustic fields to understand the integrated function of the ear and lateral line, and their evolutionary patterns. © 2015. Published by The Company of Biologists Ltd.
A two-beam acoustic system for tissue analysis.
Sachs, T D; Janney, C D
1977-03-01
In the 'thermo-acoustic sensing technique' (TAST), a burst of sound, called the 'thermometer' beam, is passed through tissue and its transit time is measured. A focused sound field, called the heating field, then warms a small volume in the path of the thermometer beam, in proportion to the absorption. Finally, the thermometer-beam burst is repeated and its transit time subtracted from that of the initial thermometer burst. This difference measures the velocity perturbation in the tissue produced by the heating field. The transit time difference is t_d = K ∫_{-∞}^{∞} I P dχ, where K is the instrument constant, I the heating field intensity, and P a perturbation factor which characterizes the tissue. The integration is carried out along the path of the thermometer beam. The perturbation factor is P = (formula: see text), where C is the specific heat, ρ the density, V the velocity of sound, (formula: see text) the temperature coefficient of velocity, and α the heating field absorption coefficient, which is apparently sensitive to tissue structure and condition. Experiments on a fixed human brain showed an ability to distinguish between various tissue types combined with a spatial resolution of better than 3 mm. Should predictions based on the data and theory prove correct, TAST may become a non-invasive alternative to biopsy.
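The path integral t_d = K ∫ I P dχ can be evaluated numerically once the intensity and perturbation profiles along the beam are given. All values below (the instrument constant, the Gaussian heating focus, and the two-region tissue profile) are hypothetical, chosen only to exercise the formula:

```python
import numpy as np

# Hypothetical profiles along the thermometer-beam path chi (cm):
chi = np.linspace(0.0, 3.0, 301)                  # 3 cm path, 0.01 cm steps
I = np.exp(-((chi - 1.5) / 0.2) ** 2)             # focused heating field
P = np.where(chi < 1.5, 1.0, 2.0) * 1e-3          # two tissue regions with
                                                  # different perturbation factors
K = 5.0e-2                                        # instrument constant (assumed)

f = I * P                                         # integrand of t_d = K * int(I P dchi)
t_d = K * float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(chi)))  # trapezoidal rule
```

Because the heating focus straddles the boundary between the two tissue regions, t_d lands between the values the two regions would give alone, which is the mixing effect the instrument-constant calibration has to account for.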
NASA Astrophysics Data System (ADS)
Lee, Woo Seok; Jeong, Wonhee; Ahn, Kang-Hun
2014-12-01
We provide a simple dynamical model of a hair cell with an afferent neuron where the spectral and the temporal responses are controlled by the hair bundle's criticality and the neuron's excitability. To demonstrate that these parameters, indeed, specify the resolution of the sound encoding, we fabricate a neuromorphic device that models the hair cell bundle and its afferent neuron. Then, we show that the neural response of the biomimetic system encodes sounds with either high temporal or spectral resolution or with a combination of both resolutions. Our results suggest that the hair cells may easily specialize to fulfil various roles in spite of their similar physiological structures.
Status of the Micro-X Sounding Rocket X-Ray Spectrometer
NASA Technical Reports Server (NTRS)
Goldfinger, D. C.; Adams, J. S.; Baker, R.; Bandler, S. R.; Danowski, M. E.; Doriese, W. B.; Eckart, M. E.; Figueroa-Feliciano, E.; Hilton, G. C.; Hubbard, A. J. F.;
2016-01-01
Micro-X is a sounding rocket borne X-ray telescope that utilizes transition edge sensors to perform imaging spectroscopy with a high level of energy resolution. Its 2.1 m focal length X-ray optic has an effective area of 300 sq cm, a field of view of 11.8 arcmin, and a bandpass of 0.1-2.5 keV. The detector array has 128 pixels and an intrinsic energy resolution of 4.5 eV FWHM. The integration of the system has progressed with functional tests of the detectors and electronics complete, and performance characterization of the detectors is underway. We present an update of ongoing progress in preparation for the upcoming launch of the instrument.
ERIC Educational Resources Information Center
Vilorio, Dennis
2011-01-01
In this article, the author talks about a small occupation of sound reproduction specialists known as foley artists. Foley artists work behind the scenes in filmmaking and television, using props to recreate all the physical sounds that are integrated into a movie or TV show. These sounds need to be recreated because the microphones used on a set…
Liu, Baolin; Wang, Zhongning; Jin, Zhixing
2009-09-11
In real life, the human brain usually receives information through visual and auditory channels and processes the multisensory information, but studies on the integrated processing of dynamic visual and auditory information are relatively few. In this paper, we designed an experiment in which common-scenario, real-world videos with matched and mismatched actions (images) and sounds were presented as stimuli, aiming to study the integrated processing of synchronized visual and auditory information in videos of real-world events in the human brain through the use of event-related potential (ERP) methods. Experimental results showed that videos of mismatched actions (images) and sounds elicited a larger P400 than videos of matched actions (images) and sounds. We believe that the P400 waveform might be related to the cognitive integration processing of mismatched multisensory information in the human brain. The results also indicated that synchronized multisensory inputs can interfere with each other, influencing the outcome of cognitive integration processing.
LOx / LCH4: A Unifying Technology for Future Exploration
NASA Technical Reports Server (NTRS)
Falker, John; Terrier, Douglas; Clayton, Ronald G.; Banker, Brian; Ryan, Abigail
2015-01-01
Reduced mass due to increasing commonality between spacecraft subsystems such as power and propulsion has been identified as critical to enabling human missions to Mars. This project represents the first ever integrated propulsion and power system testing and lays the foundations for future sounding rocket flight testing, which will yield the first in-space ignition of a LOx / LCH4 rocket engine.
Western Grid Can Handle High Renewables in Challenging Conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-11-01
Fact sheet outlining the key findings of Phase 3 of the Western Wind and Solar Integration Study (WWSIS-3). NREL and GE find that with good system planning, sound engineering practices, and commercially available technologies, the Western grid can maintain reliability and stability during the crucial first minute after grid disturbances with high penetrations of wind and solar power.
Web-based courses. More than curriculum.
Mills, M E; Fisher, C; Stair, N
2001-01-01
Online program development depends on an educationally and technologically sound curriculum supported by a solid infrastructure. Creation of a virtual environment through design of online registration and records, financial aid, orientation, advisement, resources, and evaluation and assessment provides students with access and program integrity. The planning of an academic support system as an electronic environment provides challenges and institutional issues requiring systematic analysis.
Side-branch resonators modelling with Green's function methods
NASA Astrophysics Data System (ADS)
Perrey-Debain, E.; Maréchal, R.; Ville, J. M.
2014-09-01
This paper deals with strategies for computing efficiently the propagation of sound waves in ducts containing passive components. In many cases of practical interest, these components are acoustic cavities which are connected to the duct. Though standard Finite Element software could be used for the numerical prediction of sound transmission through such a system, the method is known to be extremely demanding, both in terms of data preparation and computation, especially in the mid-frequency range. To alleviate this, a numerical technique that exploits the benefit of the FEM and the BEM approach has been devised. First, a set of eigenmodes is computed in the cavity to produce a numerical impedance matrix connecting the pressure and the acoustic velocity on the duct wall interface. Then an integral representation for the acoustic pressure in the main duct is used. By choosing an appropriate Green's function for the duct, the integration procedure is limited to the duct-cavity interface only. This allows an accurate computation of the scattering matrix of such an acoustic system with a numerical complexity that grows very mildly with the frequency. Typical applications involving Helmholtz and Herschel-Quincke resonators are presented.
NASA Technical Reports Server (NTRS)
Kaplan, Michael L.; Lux, Kevin M.; Cetola, Jeffrey D.; Huffman, Allan W.; Riordan, Allen J.; Slusser, Sarah W.; Lin, Yuh-Lang; Charney, Joseph J.; Waight, Kenneth T.
2004-01-01
Real-time prediction of environments predisposed to producing moderate-severe aviation turbulence is studied. We describe the numerical model and its postprocessing system designed for the prediction of environments predisposed to severe aviation turbulence, and present numerous examples of its utility. The numerical model is MASS version 5.13, which is integrated over three different grid matrices in real time on a university workstation in support of NASA Langley Research Center's B-757 turbulence research flight missions. The postprocessing system includes several turbulence-related products, including four turbulence forecasting indices, winds, streamlines, turbulence kinetic energy, and Richardson numbers. Additionally, there are convective products including precipitation, cloud height, cloud mass fluxes, lifted index, and K-index. Furthermore, soundings, sounding parameters, and Froude number plots are also provided. The horizontal cross-section plot products are provided from 16 000 to 46 000 ft in 2000-ft intervals. Products are available every 3 hours at the 60- and 30-km grid interval and every 1.5 hours at the 15-km grid interval. The model is initialized from the NWS ETA analyses and integrated two times a day.
Emotional sounds modulate early neural processing of emotional pictures
Gerdes, Antje B. M.; Wieser, Matthias J.; Bublatzky, Florian; Kusay, Anita; Plichta, Michael M.; Alpers, Georg W.
2013-01-01
In our natural environment, emotional information is conveyed by converging visual and auditory information; multimodal integration is of utmost importance. In the laboratory, however, emotion researchers have mostly focused on the examination of unimodal stimuli. Few existing studies on multimodal emotion processing have focused on human communication such as the integration of facial and vocal expressions. Extending the concept of multimodality, the current study examines how the neural processing of emotional pictures is influenced by simultaneously presented sounds. Twenty pleasant, unpleasant, and neutral pictures of complex scenes were presented to 22 healthy participants. On the critical trials these pictures were paired with pleasant, unpleasant, and neutral sounds. Sound presentation started 500 ms before picture onset and each stimulus presentation lasted for 2 s. EEG was recorded from 64 channels and ERP analyses focused on the picture onset. In addition, valence and arousal ratings were obtained. Previous findings for the neural processing of emotional pictures were replicated. Specifically, unpleasant compared to neutral pictures were associated with an increased parietal P200 and a more pronounced centroparietal late positive potential (LPP), independent of the accompanying sound valence. For audiovisual stimulation, increased parietal P100 and P200 were found in response to all pictures which were accompanied by unpleasant or pleasant sounds compared to pictures with neutral sounds. Most importantly, incongruent audiovisual pairs of unpleasant pictures and pleasant sounds enhanced parietal P100 and P200 compared to pairings with congruent sounds. Taken together, the present findings indicate that emotional sounds modulate early stages of visual processing and, therefore, provide an avenue by which multimodal experience may enhance perception. PMID:24151476
Neural Representation of a Target Auditory Memory in a Cortico-Basal Ganglia Pathway
Bottjer, Sarah W.
2013-01-01
Vocal learning in songbirds, like speech acquisition in humans, entails a period of sensorimotor integration during which vocalizations are evaluated via auditory feedback and progressively refined to achieve an imitation of memorized vocal sounds. This process requires the brain to compare feedback of current vocal behavior to a memory of target vocal sounds. We report the discovery of two distinct populations of neurons in a cortico-basal ganglia circuit of juvenile songbirds (zebra finches, Taeniopygia guttata) during vocal learning: (1) one in which neurons are selectively tuned to memorized sounds and (2) another in which neurons are selectively tuned to self-produced vocalizations. These results suggest that neurons tuned to learned vocal sounds encode a memory of those target sounds, whereas neurons tuned to self-produced vocalizations encode a representation of current vocal sounds. The presence of neurons tuned to memorized sounds is limited to early stages of sensorimotor integration: after learning, the incidence of neurons encoding memorized vocal sounds was greatly diminished. In contrast to this circuit, neurons known to drive vocal behavior through a parallel cortico-basal ganglia pathway show little selective tuning until late in learning. One interpretation of these data is that representations of current and target vocal sounds in the shell circuit are used to compare ongoing patterns of vocal feedback to memorized sounds, whereas the parallel core circuit has a motor-related role in learning. Such a functional subdivision is similar to mammalian cortico-basal ganglia pathways in which associative-limbic circuits mediate goal-directed responses, whereas sensorimotor circuits support motor aspects of learning. PMID:24005299
A real-time electronic imaging system for solar X-ray observations from sounding rockets
NASA Technical Reports Server (NTRS)
Davis, J. M.; Ting, J. W.; Gerassimenko, M.
1979-01-01
A real-time imaging system for displaying the solar coronal soft X-ray emission, focussed by a grazing incidence telescope, is described. The design parameters of the system, which is to be used primarily as part of a real-time control system for a sounding rocket experiment, are identified. Their achievement with a system consisting of a microchannel plate, for the conversion of X-rays into visible light, and a slow-scan vidicon, for recording and transmission of the integrated images, is described in detail. The system has a quantum efficiency better than 8% above 8 Å, a dynamic range of 1000 coupled with a sensitivity to single photoelectrons, and provides a spatial resolution of 15 arc seconds over a field of view of 40 x 40 square arc minutes. The incident radiation is filtered to eliminate wavelengths longer than 100 Å. Each image contains 3.93 x 10^5 bits of information and is transmitted to the ground where it is processed by a mini-computer and displayed in real-time on a standard TV monitor.
Manfredi, Mirella; Cohn, Neil; Kutas, Marta
2017-06-01
Researchers have long questioned whether information presented through different sensory modalities involves distinct or shared semantic systems. We investigated uni-sensory cross-modal processing by recording event-related brain potentials to words replacing the climactic event in a visual narrative sequence (comics). We compared Onomatopoeic words, which phonetically imitate action sounds (Pow!), with Descriptive words, which describe an action (Punch!), that were (in)congruent within their sequence contexts. Across two experiments, larger N400s appeared to Anomalous Onomatopoeic or Descriptive critical panels than to their congruent counterparts, reflecting a difficulty in semantic access/retrieval. Also, Descriptive words evinced a greater late frontal positivity compared to Onomatopoetic words, suggesting that, though plausible, they may be less predictable/expected in visual narratives. Our results indicate that uni-sensory cross-modal integration of word/letter-symbol strings within visual narratives elicits ERP patterns typically observed for written sentence processing, thereby suggesting the engagement of similar domain-independent integration/interpretation mechanisms. Copyright © 2017 Elsevier Inc. All rights reserved.
Brown, William; Liu, Connie; John, Rita Marie; Ford, Phoebe
2014-01-01
Developing gross and fine motor skills and expressing complex emotion is critical for child development. We introduce "StorySense", an eBook-integrated mobile app prototype that can sense face and sound topologies and identify movement and expression to promote children's motor skills and emotional development. Currently, most interactive eBooks on mobile devices only leverage "low-motor" interaction (i.e. tapping or swiping). Our app senses a greater breadth of motion (e.g. clapping, snapping, and face tracking), and dynamically alters the storyline according to physical responses in ways that encourage the performance of predetermined motor skills ideal for a child's gross and fine motor development. In addition, our app can capture changes in facial topology, which can later be mapped using the Facial Action Coding System (FACS) for later interpretation of emotion. StorySense expands the human computer interaction vocabulary for mobile devices. Potential clinical applications include child development, physical therapy, and autism.
Visible Contrast Energy Metrics for Detection and Discrimination
NASA Technical Reports Server (NTRS)
Ahumada, Albert; Watson, Andrew
2013-01-01
Contrast energy was proposed by Watson, Robson, & Barlow as a useful metric for representing luminance contrast target stimuli because it represents the detectability of the stimulus in photon noise for an ideal observer. Like the eye, the ear is a complex transducer system, but relatively simple sound level meters are used to characterize sounds. These meters provide a range of frequency sensitivity functions and integration times depending on the intended use. We propose here the use of a range of contrast energy measures with different spatial frequency contrast sensitivity weightings, eccentricity sensitivity weightings, and temporal integration times. When detection thresholds are plotted using such measures, the results show what the eye sees best when these variables are taken into account in a standard way. The suggested weighting functions revise the Standard Spatial Observer for luminance contrast detection and extend it into the near periphery. Under the assumption that detection is limited only by internal noise, discrimination performance can be predicted by metrics based on the visible energy of the difference images.
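A weighted contrast-energy measure of the kind proposed above can be sketched as squared contrast integrated over area and time. The weighting map, pixel size, and Gabor-like target below are illustrative assumptions, not the calibrated Standard Spatial Observer weights:

```python
import numpy as np

def contrast_energy(contrast, deg_per_pixel, duration_s, weight=None):
    """Energy = sum of (weighted) squared contrast x pixel area x duration.

    `weight` is an optional per-pixel sensitivity map (e.g. a contrast
    sensitivity or eccentricity weighting); None gives the unweighted
    ideal-observer case of Watson, Robson, & Barlow.
    """
    w = np.ones_like(contrast) if weight is None else weight
    pixel_area = deg_per_pixel ** 2                    # deg^2 per pixel
    return np.sum((w * contrast) ** 2) * pixel_area * duration_s

# A 64x64 Gabor-like target: 10% peak contrast, 4 cycles/deg carrier,
# Gaussian envelope, 0.05 deg/pixel, shown for 100 ms.
y, x = np.mgrid[-32:32, -32:32] * 0.05                 # coordinates in degrees
target = 0.1 * np.exp(-(x**2 + y**2) / 0.5) * np.cos(2 * np.pi * 4 * x)
E = contrast_energy(target, 0.05, 0.1)
```

Swapping in different `weight` maps and `duration_s` values reproduces the paper's idea of a family of meters, analogous to the different frequency weightings and time constants of a sound level meter.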
Manfredi, Mirella; Cohn, Neil; Kutas, Marta
2017-01-01
Researchers have long questioned whether information presented through different sensory modalities involves distinct or shared semantic systems. We investigated uni-sensory cross-modal processing by recording event-related brain potentials to words replacing the climactic event in a visual narrative sequence (comics). We compared Onomatopoeic words, which phonetically imitate action sounds (Pow!), with Descriptive words, which describe an action (Punch!), that were (in)congruent within their sequence contexts. Across two experiments, larger N400s appeared to Anomalous Onomatopoeic or Descriptive critical panels than to their congruent counterparts, reflecting a difficulty in semantic access/retrieval. Also, Descriptive words evinced a greater late frontal positivity compared to Onomatopoetic words, suggesting that, though plausible, they may be less predictable/expected in visual narratives. Our results indicate that uni-sensory cross-modal integration of word/letter-symbol strings within visual narratives elicits ERP patterns typically observed for written sentence processing, thereby suggesting the engagement of similar domain-independent integration/interpretation mechanisms. PMID:28242517
Feasibility study of a game integrating assessment and therapy of tinnitus.
Wise, K; Kobayashi, K; Searchfield, G D
2015-07-15
Tinnitus, head and ear noise, is due to maladaptive plastic changes in auditory and associated neural networks. Tinnitus has been traditionally managed through the use of sound to passively mask or facilitate habituation to tinnitus, a process that may take 6-12 months. A game-based perceptual training method, requiring localisation and selective attention to sounds, was developed and customised to the individual's tinnitus perception. Eight participants tested the game's usability at home. Each participant successfully completed 30 min of training, for 20 days, along with daily psychoacoustic assessment of tinnitus pitch and loudness. The training period and intensity of training appear sufficient to reduce tinnitus handicap. The training approach used may be a viable alternative to frequency discrimination based training for treating tinnitus (Hoare et al., 2014) and a useful tool in exploring learning mechanisms in the auditory system. Integration of tinnitus assessment with therapy in a game is feasible, and the method(s) warrant further investigation. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
1999-01-01
This is the Performance Verification Report, METSAT (S/N 109) AMSU-A1 Receiver Assemblies, P/N 1356429-1 S/N F06 and P/N 1356409 S/N F06, for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A).
Sonic morphology: Aesthetic dimensional auditory spatial awareness
NASA Astrophysics Data System (ADS)
Whitehouse, Martha M.
The sound and ceramic sculpture installation, "Skirting the Edge: Experiences in Sound & Form," is an integration of art and science demonstrating the concept of sonic morphology. "Sonic morphology" is herein defined as aesthetic three-dimensional auditory spatial awareness. The exhibition explicates my empirical phenomenal observations that sound has a three-dimensional form. Composed of ceramic sculptures that allude to different social and physical situations, coupled with sound compositions that enhance and create a three-dimensional auditory and visual aesthetic experience (see accompanying DVD), the exhibition supports the research question, "What is the relationship between sound and form?" Precisely how people aurally experience three-dimensional space involves an integration of spatial properties, auditory perception, individual history, and cultural mores. People also utilize environmental sound events as a guide in social situations and in remembering their personal history, as well as a guide in moving through space. Aesthetically, sound affects the fascination, meaning, and attention one has within a particular space. Sonic morphology brings art forms such as a movie, video, sound composition, and musical performance into the cognitive scope by generating meaning from the link between the visual and auditory senses. This research examined sonic morphology as an extension of musique concrète, sound as object, originating in Pierre Schaeffer's work in the 1940s. Pointing, as John Cage did, to the corporeal three-dimensional experience of "all sound," I composed works that took their total form only through the perceiver-participant's engagement with the exhibition. While contemporary artist Alvin Lucier creates artworks that draw attention to making sound visible, "Skirting the Edge" engages the perceiver-participant visually and aurally, leading to recognition of sonic morphology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khangaonkar, Tarang; Sackmann, Brandon; Long, Wen
2012-08-14
Nutrient pollution from rivers, nonpoint source runoff, and nearly 100 wastewater discharges is a potential threat to the ecological health of Puget Sound, with evidence of hypoxia in some basins. However, the relative contributions of loads entering Puget Sound from natural and anthropogenic sources, and the effects of exchange flow from the Pacific Ocean, are not well understood. A quantitative model of Puget Sound is therefore presented to help improve our understanding of the annual biogeochemical cycles in this system, using the unstructured-grid Finite-Volume Coastal Ocean Model framework and the Integrated Compartment Model (CE-QUAL-ICM) water quality kinetics. Results based on 2006 data show that phytoplankton growth and die-off, succession between two species of algae, nutrient dynamics, and dissolved oxygen in Puget Sound are strongly tied to seasonal variation of temperature, solar radiation, and the annual exchange and flushing induced by upwelled Pacific Ocean waters. Concentrations in the mixed outflow surface layer, occupying approximately the upper 5-20 m of the water column, show strong effects of eutrophication from natural and anthropogenic sources: spring and summer algal blooms accompanied by depleted nutrients but high dissolved oxygen levels. The bottom layer reflects the dissolved oxygen and nutrient concentrations of upwelled Pacific Ocean water, modulated by mixing with biologically active surface outflow in the Strait of Juan de Fuca prior to entering Puget Sound over Admiralty Inlet. The effect of reflux mixing at the Admiralty Inlet sill, which results in lower nutrient and higher dissolved oxygen levels in the bottom waters of Puget Sound than in the incoming upwelled Pacific Ocean water, is reproduced. Finally, by late winter, with the reduction in algal activity, water column constituents of interest were renewed and the system appeared to reset with cooler, higher-nutrient, and higher-dissolved-oxygen waters from the Pacific Ocean.
Watershed System Model: The Essentials to Model Complex Human-Nature System at the River Basin Scale
NASA Astrophysics Data System (ADS)
Li, Xin; Cheng, Guodong; Lin, Hui; Cai, Ximing; Fang, Miao; Ge, Yingchun; Hu, Xiaoli; Chen, Min; Li, Weiyue
2018-03-01
Watershed system models are urgently needed to understand complex watershed systems and to support integrated river basin management. Early watershed modeling efforts focused on the representation of hydrologic processes, while next-generation watershed models should represent the coevolution of the water-land-air-plant-human nexus in a watershed and provide decision-making support. We propose a new modeling framework and discuss the know-how approach to incorporate emerging knowledge into integrated models through data-exchange interfaces. We argue that the modeling environment is a useful tool for enabling effective model integration, as well as for creating domain-specific models of river basin systems. The grand challenges in developing next-generation watershed system models include, but are not limited to, providing an overarching framework for linking natural and social sciences, building a scientifically based decision support system, quantifying and controlling uncertainties, and taking advantage of new technologies and new findings in the various disciplines of watershed science. The eventual goal is to build transdisciplinary, scientifically sound, and scale-explicit watershed system models that are codesigned by multidisciplinary communities.
The Calibration and error analysis of Shallow water (less than 100m) Multibeam Echo-Sounding System
NASA Astrophysics Data System (ADS)
Lin, M.
2016-12-01
Multibeam echo-sounders (MBES) have been developed to gather bathymetric and acoustic data for more efficient and more accurate mapping of the oceans. This gain in efficiency does not come without drawbacks: the finer the resolution of a remote sensing instrument, the harder it is to calibrate, and this is the case for multibeam echo-sounding systems. We are no longer dealing with sounding lines between which the bathymetry must be interpolated to produce consistent representations of the seafloor; we now need to match together strips (swaths) of totally ensonified seabed. As a consequence, misalignment and time-lag problems emerge as artifacts in the bathymetry from adjacent or overlapping swaths, particularly when operating in shallow water. More importantly, one must still verify that bathymetric data meet accuracy requirements. This paper summarizes the system integration involved in MBES and identifies the various sources of error pertaining to shallow-water surveys (100 m and less). A systematic method for the calibration of shallow-water MBES is proposed and presented as a set of field procedures. The procedures aim at detecting, quantifying, and correcting systematic instrumental and installation errors; calibrating for variations in the speed of sound in the water column, which are natural in origin, is therefore not addressed in this document. The calibration data are compared against International Hydrographic Organization (IHO) and other related standards. The goal is to establish, for a specific area, a model that corrects errors due to the instruments. We construct a patch-test procedure that identifies the possible sources of erroneous soundings and computes the error values needed to compensate for them. In the Hypack system, the patch test involves four corrections: roll, GPS latency, pitch, and yaw. Because these four corrections affect one another, each survey line is run separately during calibration; the GPS latency test synchronizes the GPS with the echo sounder. Future studies of shallower portions of an area can, through this procedure, obtain more accurate sounding values and support more detailed research.
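The effect of a residual roll bias, the first of the four patch-test corrections, can be illustrated with a minimal flat-seafloor geometry sketch (all parameter values here are hypothetical; a real patch test estimates the bias from overlapping reciprocal lines rather than from a known bottom):

```python
import math

def beam_depth(slant_range_m, beam_angle_deg, roll_bias_deg=0.0):
    """Depth of one beam sounding on an assumed flat seafloor.

    An uncalibrated roll bias adds directly to the beam angle, so it
    displaces outer beams far more than the nadir beam, which is why
    the roll test is run over a flat area with overlapping swaths.
    """
    angle = math.radians(beam_angle_deg + roll_bias_deg)
    return slant_range_m * math.cos(angle)

true_depth = 50.0                                     # flat bottom at 50 m
# 1 degree of uncorrected roll: error at nadir vs. at a 60-degree outer beam
nadir_err = abs(beam_depth(true_depth, 0.0, 1.0) - true_depth)
outer_range = true_depth / math.cos(math.radians(60.0))  # slant range to 60 deg
outer_err = abs(beam_depth(outer_range, 60.0, 1.0) - true_depth)
```

In this toy geometry the nadir error is under a centimetre while the outer-beam error exceeds a metre, illustrating why roll errors dominate in the outer beams of a shallow-water swath.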
Near-infrared image-guided laser ablation of artificial caries lesions.
Tao, You-Chen; Fan, Kenneth; Fried, Daniel
2007-01-01
Laser removal of dental hard tissue can be combined with optical, spectral or acoustic feedback systems to selectively ablate dental caries and restorative materials. Near-infrared (NIR) imaging has considerable potential for the optical discrimination of sound and demineralized tissue. The objective of this study was to test the hypothesis that two-dimensional NIR images of demineralized tooth surfaces can be used to guide CO2 laser ablation for the selective removal of artificial caries lesions. Highly patterned artificial lesions were produced by submerging 5 × 5 mm2 bovine enamel samples in a demineralizing solution for a 9-day period while sound areas were protected with acid-resistant varnish. NIR imaging and polarization-sensitive optical coherence tomography (PS-OCT) were used to acquire depth-resolved images at a wavelength of 1310 nm. An image-processing module was developed to analyze the NIR images and to generate optical maps. The optical maps were used to control a CO2 laser for the selective removal of the lesions at a uniform depth. This experiment showed that the patterned artificial lesions were removed selectively using the optical maps, with minimal damage to sound enamel areas. Post-ablation NIR and PS-OCT imaging confirmed that demineralized areas were removed while sound enamel was conserved. This study successfully demonstrated that near-IR imaging can be integrated with a CO2 laser ablation system for the selective removal of dental caries.
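The step from NIR image to ablation map can be sketched under two simplifying assumptions: that demineralized enamel simply appears as brighter pixels in 1310-nm reflectance, and that a fixed threshold is enough to segment it. Both are illustrative stand-ins for the paper's actual image-processing module:

```python
import numpy as np

def lesion_map(nir_reflectance, threshold):
    """Binary ablation map from an NIR reflectance image.

    Pixels brighter than the threshold are flagged as demineralized and
    marked for laser removal (threshold value and the thresholding step
    itself are hypothetical simplifications).
    """
    return np.asarray(nir_reflectance, dtype=float) > threshold

# Toy 4x4 "image": a bright demineralized patch in the upper-left corner
img = np.array([[0.9, 0.8, 0.2, 0.1],
                [0.8, 0.9, 0.2, 0.2],
                [0.2, 0.1, 0.1, 0.2],
                [0.1, 0.2, 0.2, 0.1]])
mask = lesion_map(img, threshold=0.5)
```

The resulting boolean mask is the "optical map" a scanning laser controller would iterate over, firing only on True pixels.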
Nonlinear Bubble Interactions in Acoustic Pressure Fields
NASA Technical Reports Server (NTRS)
Barbat, Tiberiu; Ashgriz, Nasser; Liu, Ching-Shi
1996-01-01
Systems consisting of a two-phase mixture, such as clouds of bubbles or drops, show many common features in their responses to different external force fields. Of particular interest is the effect of an unsteady pressure field applied to these systems, a case in which the coupling of the vibrations induced in two neighboring components (two drops or two bubbles) may result in an interaction force between them. This behavior was explained by Bjerknes, who postulated that every body moving in an accelerating fluid is subjected to a 'kinetic buoyancy' equal to the product of the acceleration of the fluid and the mass of the fluid displaced by the body. An external sound wave applied to a system of drops/bubbles triggers secondary sound waves from each component of the system. These secondary pressure fields, integrated over the surface of a neighboring drop/bubble, may produce a force additional to the effect of the primary sound wave on each component. Under certain conditions, the magnitude of these secondary forces may significantly change the dynamics of each component, and thus the behavior of the entire system. In a system containing bubbles, the sound wave radiated by one bubble at the location of a neighboring one is dominated by the volume oscillation mode, and its effects can be important over a large range of frequencies. The interaction forces in a system consisting of drops are much smaller than those in a system consisting of bubbles. Therefore, as a first step towards understanding drop-drop interaction subject to external pressure fluctuations, it is more convenient to study bubble interactions. This paper presents experimental results and theoretical predictions concerning the interaction and motion of two levitated air bubbles in water in the presence of a high-frequency acoustic field (22-23 kHz).
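The sign of the resulting secondary Bjerknes force can be sketched numerically using the standard Minnaert resonance formula and the classic linear-theory rule that two bubbles driven on the same side of both of their resonances pulsate in phase and attract, while a drive frequency lying between the two resonances gives repulsion (all bubble sizes here are hypothetical):

```python
import math

def minnaert_frequency_hz(radius_m, p0=101325.0, gamma=1.4, rho=998.0):
    """Minnaert resonance frequency of an air bubble in water."""
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * radius_m)

def bubbles_attract(f_drive_hz, r1_m, r2_m):
    """Linear-theory sign rule for the secondary Bjerknes force:
    in-phase pulsation (drive below or above both resonances) attracts;
    a drive between the two resonances repels. Real acoustic fields add
    nonlinear corrections not captured here."""
    lo, hi = sorted((minnaert_frequency_hz(r1_m), minnaert_frequency_hz(r2_m)))
    return not (lo < f_drive_hz < hi)

# Two 100-um bubbles (resonance near 33 kHz) driven at 22.5 kHz: in phase.
same_attract = bubbles_attract(22500.0, 100e-6, 100e-6)
# A 100-um and a 200-um bubble (about 33 and 16 kHz): 22.5 kHz lies between.
mixed_attract = bubbles_attract(22500.0, 100e-6, 200e-6)
```

The 22-23 kHz drive used in the experiments thus selects attraction or repulsion depending on where the two bubble resonances sit relative to it.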
Developing an Acoustic Sensing Yarn for Health Surveillance in a Military Setting.
Hughes-Riley, Theodore; Dias, Tilak
2018-05-17
Overexposure to high levels of noise can cause permanent hearing disorders, which have a significant adverse effect on the quality of life of those affected. Injury due to noise can affect people in a variety of careers including construction workers, factory workers, and members of the armed forces. By monitoring the noise exposure of workers, overexposure can be avoided and suitable protective equipment can be provided. This work focused on the creation of a noise dosimeter suitable for use by members of the armed forces, where a discrete dosimeter was integrated into a textile helmet cover. In this way the sensing elements could be incorporated very close to the ears, providing a highly representative indication of the sound level entering the body, and also creating a device that would not interfere with military activities. This was achieved by utilising commercial microelectromechanical system microphones integrated within the fibres of yarn to create an acoustic sensing yarn. The acoustic sensing yarns were fully characterised over a range of relevant sound levels and frequencies at each stage in the yarn production process. The yarns were ultimately integrated into a knitted helmet cover to create a functional acoustic sensing helmet cover prototype.
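Converting levels measured by such a dosimeter into a daily exposure figure could follow the NIOSH recommended exposure limit (85 dBA criterion level, 3-dB exchange rate). A minimal sketch of that standard calculation, not the device's actual firmware:

```python
def niosh_allowed_hours(level_dba, criterion=85.0, exchange_rate=3.0):
    """Permissible daily duration under the NIOSH REL: 8 h at the
    85 dBA criterion level, halved for every 3 dB above it."""
    return 8.0 / (2.0 ** ((level_dba - criterion) / exchange_rate))

def daily_dose_percent(exposures):
    """exposures: iterable of (level_dBA, hours); 100% = full allowance."""
    return 100.0 * sum(hours / niosh_allowed_hours(level)
                       for level, hours in exposures)

# Four hours at 85 dBA plus one hour at 94 dBA: 50% + 100% = 150%
dose = daily_dose_percent([(85.0, 4.0), (94.0, 1.0)])
```

A dose above 100% indicates the wearer has exceeded the daily allowance and should be moved to a quieter task or given better hearing protection.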
Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study
Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong
2015-01-01
A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area from 190–210 ms for 0.5 kHz auditory stimuli, from 170–200 ms for 1 kHz stimuli, from 140–200 ms for 2.5 kHz stimuli, and from 100–200 ms for 5 kHz stimuli. These findings suggest that a higher-frequency sound signal paired with visual stimuli may be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late-latency (300–340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies. PMID:26384256
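The additive-model logic commonly used to quantify audiovisual interactions in ERP data, comparing the bimodal response against the sum of the unisensory responses, can be sketched as follows; the waveforms here are synthetic stand-ins, not the study's data:

```python
import numpy as np

def av_interaction(erp_av, erp_a, erp_v):
    """Additive-model audiovisual interaction: AV - (A + V).
    Nonzero values index super- or sub-additive integration; inputs are
    assumed to be time-locked, baseline-corrected average waveforms."""
    return np.asarray(erp_av) - (np.asarray(erp_a) + np.asarray(erp_v))

# Synthetic waveforms over 0-400 ms with a super-additive AV response
t = np.linspace(0.0, 0.4, 401)
erp_a = np.sin(2 * np.pi * 5 * t)          # auditory-only ERP
erp_v = 0.5 * np.sin(2 * np.pi * 5 * t)    # visual-only ERP
erp_av = 1.8 * np.sin(2 * np.pi * 5 * t)   # bimodal ERP, exceeds A + V
diff = av_interaction(erp_av, erp_a, erp_v)
```

Time windows where `diff` deviates reliably from zero (e.g. the 100-200 ms occipital windows reported above) are taken as evidence of integration beyond simple summation.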
Sound Radiated by a Wave-Like Structure in a Compressible Jet
NASA Technical Reports Server (NTRS)
Golubev, V. V.; Prieto, A. F.; Mankbadi, R. R.; Dahl, M. D.; Hixon, R.
2003-01-01
This paper extends the analysis of acoustic radiation from the source model representing spatially-growing instability waves in a round jet at high speeds. Compared to previous work, a modified approach to the sound source modeling is examined that employs a set of solutions to linearized Euler equations. The sound radiation is then calculated using an integral surface method.
Takegata, Rika; Brattico, Elvira; Tervaniemi, Mari; Varyagina, Olga; Näätänen, Risto; Winkler, István
2005-09-01
The role of attention in conjoining features of an object has been a topic of much debate. Studies using the mismatch negativity (MMN), an index of detecting acoustic deviance, suggested that the conjunctions of auditory features are preattentively represented in the brain. These studies, however, used sequentially presented sounds and thus are not directly comparable with visual studies of feature integration. Therefore, the current study presented an array of spatially distributed sounds to determine whether the auditory features of concurrent sounds are correctly conjoined without focal attention directed to the sounds. Two types of sounds differing from each other in timbre and pitch were repeatedly presented together while subjects were engaged in a visual n-back working-memory task and ignored the sounds. Occasional reversals of the frequent pitch-timbre combinations elicited MMNs of a very similar amplitude and latency irrespective of the task load. This result suggested preattentive integration of auditory features. However, performance in a subsequent target-search task with the same stimuli indicated the occurrence of illusory conjunctions. The discrepancy between the results obtained with and without focal attention suggests that illusory conjunctions may occur during voluntary access to the preattentively encoded object representations.
Interior sound field control using generalized singular value decomposition in the frequency domain.
Pasco, Yann; Gauthier, Philippe-Aubert; Berry, Alain; Moreau, Stéphane
2017-01-01
The problem of controlling a sound field inside a region surrounded by acoustic control sources is considered. Inspired by the Kirchhoff-Helmholtz integral, the use of double-layer source arrays allows such control and, by approximating the sources as monopole and radial dipole transducers, avoids modification of the external sound field by the control sources. However, the practical implementation of the Kirchhoff-Helmholtz integral in physical space leads to large numbers of control sources and error sensors, along with excessive controller complexity in three dimensions. The present study investigates the potential of the Generalized Singular Value Decomposition (GSVD) to reduce controller complexity and to separate the effects of the control sources on the interior and exterior sound fields, respectively. A proper truncation of the singular basis provided by the GSVD factorization is shown to lead to effective cancellation of the interior sound field at frequencies below the spatial Nyquist frequency of the control source array while leaving the exterior sound field almost unchanged. Proofs of concept are provided through simulations of interior problems in a free-field scenario with circular arrays and in a reflective environment with square arrays.
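NumPy and SciPy expose no direct GSVD routine, but the core idea of solving for control-source gains on a truncated singular basis can be sketched with an ordinary truncated-SVD least-squares solve. This is a simplified stand-in for the paper's GSVD formulation (which truncates a joint interior/exterior basis); the transfer matrix and target pressures are random toys:

```python
import numpy as np

def control_gains(G_interior, p_target, k):
    """Control-source gains from a rank-k truncated-SVD least-squares
    solve: drive the interior field toward -p_target using only the
    k strongest singular modes of the interior transfer matrix."""
    U, s, Vh = np.linalg.svd(G_interior, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]          # invert only the retained modes
    return -Vh.conj().T @ (s_inv * (U.conj().T @ p_target))

# Toy interior problem: 8 error microphones, 6 control sources
rng = np.random.default_rng(0)
G = rng.standard_normal((8, 6)) + 1j * rng.standard_normal((8, 6))
p = rng.standard_normal(8) + 1j * rng.standard_normal(8)
residual_full = np.linalg.norm(p + G @ control_gains(G, p, k=6)) / np.linalg.norm(p)
residual_trunc = np.linalg.norm(p + G @ control_gains(G, p, k=3)) / np.linalg.norm(p)
```

Truncating the basis (smaller `k`) trades some interior cancellation for a simpler, better-conditioned controller, mirroring the trade-off the paper exploits with the GSVD.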
Auditory integration training and other sound therapies for autism spectrum disorders (ASD).
Sinha, Yashwant; Silove, Natalie; Hayen, Andrew; Williams, Katrina
2011-12-07
Auditory integration therapy was developed as a technique for improving abnormal sound sensitivity in individuals with behavioural disorders including autism spectrum disorders. Other sound therapies bearing similarities to auditory integration therapy include the Tomatis Method and Samonas Sound Therapy. To determine the effectiveness of auditory integration therapy or other methods of sound therapy in individuals with autism spectrum disorders. For this update, we searched the following databases in September 2010: CENTRAL (2010, Issue 2), MEDLINE (1950 to September week 2, 2010), EMBASE (1980 to Week 38, 2010), CINAHL (1937 to current), PsycINFO (1887 to current), ERIC (1966 to current), LILACS (September 2010) and the reference lists of published papers. One new study was found for inclusion. Randomised controlled trials involving adults or children with autism spectrum disorders. Treatment was auditory integration therapy or other sound therapies involving listening to music modified by filtering and modulation. Control groups could involve no treatment, a waiting list, usual therapy or a placebo equivalent. The outcomes were changes in core and associated features of autism spectrum disorders, auditory processing, quality of life and adverse events. Two independent review authors performed data extraction. All outcome data in the included papers were continuous. We calculated point estimates and standard errors from t-test scores and post-intervention means. Meta-analysis was inappropriate for the available data. We identified six randomised controlled trials of auditory integration therapy and one of Tomatis therapy, involving a total of 182 individuals aged three to 39 years. Two were cross-over trials. Five trials had fewer than 20 participants. Allocation concealment was inadequate for all studies. Twenty different outcome measures were used and only two outcomes were used by three or more studies.
Meta-analysis was not possible due to very high heterogeneity or the presentation of data in unusable forms. Three studies (Bettison 1996; Zollweg 1997; Mudford 2000) did not demonstrate any benefit of auditory integration therapy over control conditions. Three studies (Veale 1993; Rimland 1995; Edelson 1999) reported improvements at three months for the auditory integration therapy group based on the Aberrant Behaviour Checklist, but they used a total score rather than subgroup scores, which is of questionable validity, and Veale's results did not reach statistical significance. Rimland 1995 also reported improvements at three months in the auditory integration therapy group for the Aberrant Behaviour Checklist subgroup scores. The study addressing Tomatis therapy (Corbett 2008) described an improvement in language with no difference between treatment and control conditions and did not report on the behavioural outcomes that were used in the auditory integration therapy trials. There is no evidence that auditory integration therapy or other sound therapies are effective as treatments for autism spectrum disorders. As synthesis of existing data has been limited by the disparate outcome measures used between studies, there is not sufficient evidence to prove that this treatment is not effective. However, of the seven studies including 182 participants that have been reported to date, only two (with an author in common), involving a total of 35 participants, report statistically significant improvements in the auditory integration therapy group and for only two outcome measures (Aberrant Behaviour Checklist and Fisher's Auditory Problems Checklist). As such, there is no evidence to support the use of auditory integration therapy at this time.
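The data-extraction step the review describes, recovering a standard error from a reported t statistic and post-intervention means, follows directly from the definition of the t statistic for a mean difference. A minimal sketch with made-up trial numbers:

```python
def se_from_t(mean_treatment, mean_control, t_value):
    """Standard error of a mean difference recovered from a reported
    t statistic: since t = (m1 - m2) / SE, we have SE = (m1 - m2) / t."""
    return (mean_treatment - mean_control) / t_value

# Hypothetical trial: post-intervention means 42.0 vs 48.5, reported t = -2.1
se = se_from_t(42.0, 48.5, -2.1)
```

The recovered standard error (and the mean difference as the point estimate) is exactly what would feed into a meta-analysis, had the outcome measures been comparable across trials.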
Sala, Marco; Casacci, Luca Pietro; Balletto, Emilio; Bonelli, Simona; Barbero, Francesca
2014-01-01
About 10,000 arthropods live as ants' social parasites and have evolved a number of mechanisms allowing them to penetrate and survive inside the ant nests. Many of them can intercept and manipulate their host communication systems. This is particularly important for butterflies of the genus Maculinea, which spend the majority of their lifecycle inside Myrmica ant nests. Once in the colony, caterpillars of Maculinea “predatory species” directly feed on the ant larvae, while those of “cuckoo species” are fed primarily by attendance workers, by trophallaxis. It has been shown that Maculinea cuckoo larvae are able to reach a higher social status within the colony's hierarchy by mimicking the acoustic signals of their host queen ants. In this research we tested if, when and how myrmecophilous butterflies may change sound emissions depending on their integration level and on stages of their life cycle. We studied how a Maculinea predatory species (M. teleius) can acoustically interact with their host ants and highlighted differences with respect to a cuckoo species (M. alcon). We recorded sounds emitted by Maculinea larvae as well as by their Myrmica hosts, and performed playback experiments to assess the parasites' capacity to interfere with the host acoustic communication system. We found that, although varying between and within butterfly species, the larval acoustic emissions are more similar to queens' than to workers' stridulations. Nevertheless playback experiments showed that ant workers responded most strongly to the sounds emitted by the integrated (i.e. post-adoption) larvae of the cuckoo species, as well as by those of predatory species recorded before any contact with the host ants (i.e. in pre-adoption), thereby revealing the role of acoustic signals both in parasite integration and in adoption rituals. We discuss our findings in the broader context of parasite adaptations, comparing effects of acoustical and chemical mimicry. PMID:24718496
Simulating effects of highway embankments on estuarine circulation
Lee, Jonathan K.; Schaffranek, Raymond W.; Baltzer, Robert A.
1994-01-01
A two-dimensional, depth-averaged, finite-difference numerical model was used to simulate tidal circulation and mass transport in the Port Royal Sound, South Carolina, estuarine system. The purpose of the study was to demonstrate the utility of the Surface-Water Integrated Flow and Transport model (SWIFT2D) for evaluating changes in circulation patterns and mass transport caused by highway-crossing embankments. A model of a subregion of Port Royal Sound including the highway crossings, with a grid size of 61 m (200 ft), was derived from a 183-m (600-ft) model of the entire Port Royal Sound estuarine system. The 183-m model was used to compute boundary-value data for the 61-m submodel, which was then used to simulate flow conditions with and without the highway embankments in place. The numerical simulations show that, with the highway embankments in place, mass transport between the Broad River and Battery Creek is reduced and mass transport between the Beaufort River and Battery Creek is increased. The net result is that mass transport into and out of upper Battery Creek is reduced. The presence of the embankments also alters circulation patterns within Battery Creek.
Mobile Disdrometer Observations of Nocturnal Mesoscale Convective Systems During PECAN
NASA Astrophysics Data System (ADS)
Bodine, D. J.; Rasmussen, K. L.
2015-12-01
Understanding microphysical processes in nocturnal mesoscale convective systems (MCSs) is an important objective of the Plains Elevated Convection At Night (PECAN) experiment, which occurred from 1 June - 15 July 2015 in the central Great Plains region of the United States. Observations of MCSs were collected using a large array of mobile and fixed instrumentation, including ground-based radars, soundings, PECAN Integrated Sounding Arrays (PISAs), and aircraft. In addition to these observations, three mobile Parsivel disdrometers were deployed to obtain drop-size distribution (DSD) measurements to further explore microphysical processes in convective and stratiform regions of nocturnal MCSs. Disdrometers were deployed within close range of a multiple frequency network of mobile and fixed dual-polarization radars (5 - 30 km range), and near mobile sounding units and PISAs. Using mobile disdrometer and multiple-wavelength, dual-polarization radar data, microphysical properties of convective and stratiform regions of MCSs are investigated. The analysis will also examine coordinated Range-Height Indicator (RHI) scans over the disdrometers to elucidate vertical DSD structure. Analysis of dense observations obtained during PECAN in combination with mobile disdrometer DSD measurements contributes to a greater understanding of the structural characteristics and evolution of nocturnal MCSs.
Directly solar-pumped iodine laser for beamed power transmission in space
NASA Technical Reports Server (NTRS)
Choi, S. H.; Meador, W. E.; Lee, J. H.
1992-01-01
A new approach to the development of a 50-kW directly solar-pumped iodine laser (DSPIL) system as a space-based power station was made using a confocal unstable resonator (CUR). The CUR-based DSPIL offers advantages such as enhanced performance, reduced total mass, and a simplicity that alleviates the complexities inherent in the previous master oscillator/power amplifier (MOPA) configurations. In this design, a single CUR-based DSPIL with 50-kW output power was defined and compared to the MOPA-based DSPIL. Integrating multiple modules for power requirements greater than 50 kW is a physically and structurally sounder approach than building a single large system. An integrated system of multiple modules can respond to various mission power requirements by combining and aiming the coherent beams at the user's receiver.
Numerical evaluation of propeller noise including nonlinear effects
NASA Technical Reports Server (NTRS)
Korkan, K. D.; Von Lavante, E.; Bober, L. J.
1986-01-01
Propeller noise in the acoustic near field is determined by integrating the pressure-time history, in the tangential direction, of a numerically generated flowfield around an SR-3-type propfan, including the shock-wave system in the vicinity of the propeller tip. This acoustic analysis yields overall sound pressure levels, and the associated frequency spectra, as a function of observer location.
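The final step of such an analysis, converting a pressure-time history into an overall sound pressure level (OASPL) and a narrow-band spectrum, can be sketched generically. This is not the paper's flowfield code; the 1 kHz test tone, sampling rate, and Hann window are illustrative assumptions.

```python
import numpy as np

P_REF = 20e-6  # standard reference pressure in air, Pa


def oaspl(p, p_ref=P_REF):
    """Overall sound pressure level (dB) from a pressure-time history (Pa)."""
    p_rms = np.sqrt(np.mean(np.square(p - np.mean(p))))
    return 20.0 * np.log10(p_rms / p_ref)


def spectrum_spl(p, fs, p_ref=P_REF):
    """One-sided narrow-band SPL spectrum (dB) and frequency axis (Hz)."""
    n = len(p)
    window = np.hanning(n)
    # amplitude-corrected one-sided FFT of the windowed, zero-mean signal
    spec = np.fft.rfft((p - np.mean(p)) * window)
    amp = 2.0 * np.abs(spec) / np.sum(window)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    with np.errstate(divide="ignore"):
        spl = 20.0 * np.log10((amp / np.sqrt(2.0)) / p_ref)
    return freqs, spl


# Illustrative check: a 1 kHz tone of 1 Pa amplitude -> OASPL ~ 91 dB
fs = 48000
t = np.arange(fs) / fs
p = 1.0 * np.sin(2 * np.pi * 1000 * t)
print(round(oaspl(p), 1))  # prints 91.0
```

In the paper's setting, `p` would be sampled from the computed flowfield at each observer location, and the spectrum would expose the blade-passage harmonics.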
The Shock and Vibration Digest. Volume 15, Number 8
1983-08-01
a number of cracks have occurred in rotor shafts of turbogenerator systems. Methods for detecting such cracks have thus become important, and…Bearing-Foundation Systems Caused by Electrical System Faults," IFTOMM, p 177. 95. Ming, H., Sgroi, V., and Malanoski, S.B., "Fan/Foundation…vibration fundamentals, deterministic and random signals, convolution integrals, wave motion, continuous systems, sound propagation outdoors
Low Boom Flight Demonstrator Briefing
2018-04-03
Dr. Ed Waggoner, program director, Integrated Aviation Systems Program, NASA, speaks at a briefing on the Low Boom Flight Demonstrator, Tuesday, April 3, 2018 at NASA Headquarters in Washington. This new experimental aircraft will cut cross country travel times in half by flying faster than the speed of sound without creating a sonic boom, enabling travel from New York to Los Angeles in two hours. Photo Credit: (NASA/Aubrey Gemignani)
Mission Assurance, Threat Alert, Disaster Resiliency and Response (MATADRR) Product Reference Guide
2014-06-01
communication systems customized for military, government, healthcare, higher education and commercial organizations. The AtHoc solutions automate the end…how to develop an integrated operational picture across the local, state and military environment where they operate. Considerations such as the…services are used to support sound decision making in disaster response and civil-military humanitarian assistance operations, as well as in disaster
A generalized sound extrapolation method for turbulent flows
NASA Astrophysics Data System (ADS)
Zhong, Siyang; Zhang, Xin
2018-02-01
Sound extrapolation methods are often used to compute acoustic far-field directivities from near-field flow data in aeroacoustics applications. The results may be erroneous if the volume integrals are neglected (to save computational cost) while non-acoustic fluctuations are collected on the integration surfaces. In this work, we develop a new sound extrapolation method based on an acoustic analogy using Taylor's hypothesis (Taylor 1938 Proc. R. Soc. Lond. A 164, 476-490. (doi:10.1098/rspa.1938.0032)). A convection operator is used to filter out the acoustically inefficient components of the turbulent flow, and an acoustics-dominant indirect variable D_c p' is solved. The sound pressure p' in the far field is computed from D_c p' based on the asymptotic properties of the Green's function. Validation results for benchmark problems with well-defined sources match well with the exact solutions. For aeroacoustics applications: the sound predictions for aerofoil-gust interaction are close to those of an earlier method specially developed to remove the effect of vortical fluctuations (Zhong & Zhang 2017 J. Fluid Mech. 820, 424-450. (doi:10.1017/jfm.2017.219)); for the case of vortex-shedding noise from a cylinder, the off-body predictions of the proposed method match well with the on-body Ffowcs Williams and Hawkings result; and different integration surfaces yield close predictions (of both spectra and far-field directivities) for a co-flowing jet case using an established direct numerical simulation database. The results suggest that the method may be a potential candidate for sound projection in aeroacoustics applications.
3D Sound Techniques for Sound Source Elevation in a Loudspeaker Listening Environment
NASA Astrophysics Data System (ADS)
Kim, Yong Guk; Jo, Sungdong; Kim, Hong Kook; Jang, Sei-Jin; Lee, Seok-Pil
In this paper, we propose several 3D sound techniques for sound source elevation in stereo loudspeaker listening environments. The proposed method integrates a head-related transfer function (HRTF) for sound positioning with early reflections for adding a reverberant circumstance. In addition, spectral notch filtering and directional band boosting techniques are included to increase elevation perception. In order to evaluate the elevation performance of the proposed method, subjective listening tests were conducted using several kinds of sound sources, such as white noise, sound effects, speech, and music samples. The tests show that the perceived elevation achieved by the proposed method is around 17° to 21° when the stereo loudspeakers are located on the horizontal plane.
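One of the cues the abstract names, spectral notch filtering, can be sketched with a standard IIR notch. This is a generic illustration, not the paper's filter design: the 8 kHz centre frequency and Q value are assumptions chosen only because pinna-related elevation notches typically fall in that region.

```python
import numpy as np
from scipy import signal

fs = 44100
notch_hz, q = 8000.0, 5.0  # illustrative pinna-notch centre and sharpness
b, a = signal.iirnotch(notch_hz, q, fs)


def pinna_notch(x):
    """Apply the elevation-cue spectral notch to an audio signal."""
    return signal.lfilter(b, a, x)


# Verify the shape of the cue: energy at the notch frequency is strongly
# attenuated while distant frequencies pass nearly unchanged.
w, h = signal.freqz(b, a, worN=[1000.0, notch_hz], fs=fs)
print(abs(h[0]) > 0.9, abs(h[1]) < 0.01)  # prints: True True
```

In a full renderer this filter would sit alongside HRTF convolution, early-reflection synthesis, and band boosting, each contributing a partial elevation cue.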
Intertrial auditory neural stability supports beat synchronization in preschoolers
Carr, Kali Woodruff; Tierney, Adam; White-Schwoch, Travis; Kraus, Nina
2016-01-01
The ability to synchronize motor movements along with an auditory beat places stringent demands on the temporal processing and sensorimotor integration capabilities of the nervous system. Links between millisecond-level precision of auditory processing and the consistency of sensorimotor beat synchronization implicate fine auditory neural timing as a mechanism for forming stable internal representations of, and behavioral reactions to, sound. Here, for the first time, we demonstrate a systematic relationship between consistency of beat synchronization and trial-by-trial stability of subcortical speech processing in preschoolers (ages 3 and 4 years old). We conclude that beat synchronization might provide a useful window into millisecond-level neural precision for encoding sound in early childhood, when speech processing is especially important for language acquisition and development. PMID:26760457
Evaluation of noise impact mitigation protocols to support CSS : final report.
DOT National Transportation Integrated Search
2008-03-01
This research project developed and evaluated practical ways of involving the public in context sensitive sound mitigation strategies. The integrated use of photo montage, PowerPoint presentation, linked traffic sound files, and audience response sys...
Potential implications of acoustic stimuli as a non-physical barrier to silver carp and bighead carp
Murchy, Kelsie; Cupp, Aaron R.; Amberg, Jon J.; Vetter, Brooke J.; Fredricks, Kim; Gaikowski, Mark; Mensinger, Allen F.
2017-01-01
The effectiveness of an acoustic barrier to deter the movement of silver carp, Hypophthalmichthys molitrix (Valenciennes) and bighead carp, H. nobilis (Richardson) was evaluated. A pond (10 m × 5 m × 1.2 m) was divided in half by a concrete-block barrier with a channel (1 m across) allowing fish access to each side. Underwater speakers were placed on each side of the barrier opening, and an outboard motor noise (broadband sound; 0.06–10 kHz) was broadcast to repel carp that approached within 1 m of the channel. Broadband sound was effective at reducing the number of successful crossings in schools of silver carp, bighead carp and a combined school. Repulsion rates were 82.5% (silver carp), 93.7% (bighead carp) and 90.5% (combined). This study demonstrates that broadband sound is effective in deterring carp and could be used as a deterrent in an integrated pest management system.
Wireless remote liquid level detector and indicator for well testing
Fasching, George E.; Evans, Donald M.; Ernest, John H.
1985-01-01
An acoustic system is provided for measuring the fluid level in oil, gas, or water wells under pressure conditions that does not require an electrical link to the surface for level detection. A battery-powered sound transmitter is integrated with a liquid sensor in the form of a conductivity probe, enclosed in a sealed housing which is lowered into a well by means of a wire line reel assembly. The sound transmitter generates an intense, identifiable acoustic emission when the sensor contacts liquid in the well. The acoustic emissions propagate up the well, which functions as a waveguide, and are detected by an acoustic transducer. The output signal from the transducer is filtered to reject noise outside the acoustic signal spectrum. The filtered signal is used to indicate to an operator that the liquid level in the well has been reached, and the depth is read from a footage counter coupled to the wire line reel assembly at the instant the sound signal is received.
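The surface-side processing, band-pass filtering the transducer output and flagging when in-band energy crosses a threshold, can be sketched as follows. This is a minimal illustration of the filter-then-indicate scheme, not the actual instrument: the 1 kHz emission band, frame length, and threshold are assumptions.

```python
import numpy as np
from scipy import signal


def detect_tone(x, fs, f_lo=900.0, f_hi=1100.0, threshold=0.1):
    """Band-pass the transducer signal around the transmitter's assumed
    emission band and report whether its short-time RMS envelope ever
    exceeds the threshold (i.e. the liquid-contact emission arrived)."""
    sos = signal.butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    y = signal.sosfilt(sos, x)
    frame = int(0.01 * fs)  # 10 ms analysis frames
    n = len(y) // frame
    env = np.sqrt(np.mean(y[: n * frame].reshape(n, frame) ** 2, axis=1))
    return bool(np.any(env > threshold)), env


# Synthetic check: a 1 kHz burst buried in broadband noise is detected,
# while noise alone stays below the threshold.
fs = 8000
t = np.arange(fs) / fs
noise = 0.05 * np.random.default_rng(2).standard_normal(t.size)
burst = np.where((t > 0.5) & (t < 0.6), np.sin(2 * np.pi * 1000 * t), 0.0)
hit, _ = detect_tone(noise + burst, fs)
print(hit)  # prints: True
```

Because the filter passes only the transmitter's band, broadband well noise contributes little to the envelope, which is the noise-rejection property the abstract describes.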
Influences on infant speech processing: toward a new synthesis.
Werker, J F; Tees, R C
1999-01-01
To comprehend and produce language, we must be able to recognize the sound patterns of our language and the rules for how these sounds "map on" to meaning. Human infants are born with a remarkable array of perceptual sensitivities that allow them to detect the basic properties that are common to the world's languages. During the first year of life, these sensitivities undergo modification reflecting an exquisite tuning to just that phonological information that is needed to map sound to meaning in the native language. We review this transition from language-general to language-specific perceptual sensitivity that occurs during the first year of life and consider whether the changes propel the child into word learning. To account for the broad-based initial sensitivities and subsequent reorganizations, we offer an integrated transactional framework based on the notion of a specialized perceptual-motor system that has evolved to serve human speech, but which functions in concert with other developing abilities. In so doing, we highlight the links between infant speech perception, babbling, and word learning.
Neuronal encoding of sound, gravity, and wind in the fruit fly.
Matsuo, Eriko; Kamikouchi, Azusa
2013-04-01
The fruit fly Drosophila melanogaster responds behaviorally to sound, gravity, and wind. Exposure to male courtship songs results in reduced locomotion in females, whereas males begin to chase each other. When agitated, fruit flies tend to move against gravity. When faced with air currents, they 'freeze' in place. Based on recent studies, Johnston's hearing organ, the antennal ear of the fruit fly, serves as a sensor for all of these mechanosensory stimuli. Compartmentalization of sense cells in Johnston's organ into vibration-sensitive and deflection-sensitive neural groups allows this single organ to mediate such varied functions. Sound and gravity/wind signals sensed by these two neuronal groups travel in parallel from the fly ear to the brain, feeding into neural pathways reminiscent of the auditory and vestibular pathways in the human brain. Studies of the similarities between mammals and flies will lead to a better understanding of the principles of how sound and gravity information is encoded in the brain. Here, we review recent advances in our understanding of these principles and discuss the advantages of the fruit fly as a model system to explore the fundamental principles of how neural circuits and their ensembles process and integrate sensory information in the brain.
Parietal disruption alters audiovisual binding in the sound-induced flash illusion.
Kamke, Marc R; Vieth, Harrison E; Cottrell, David; Mattingley, Jason B
2012-09-01
Selective attention and multisensory integration are fundamental to perception, but little is known about whether, or under what circumstances, these processes interact to shape conscious awareness. Here, we used transcranial magnetic stimulation (TMS) to investigate the causal role of attention-related brain networks in multisensory integration between visual and auditory stimuli in the sound-induced flash illusion. The flash illusion is a widely studied multisensory phenomenon in which a single flash of light is falsely perceived as multiple flashes in the presence of irrelevant sounds. We investigated the hypothesis that extrastriate regions involved in selective attention, specifically within the right parietal cortex, exert an influence on the multisensory integrative processes that cause the flash illusion. We found that disruption of the right angular gyrus, but not of the adjacent supramarginal gyrus or of a sensory control site, enhanced participants' veridical perception of the multisensory events, thereby reducing their susceptibility to the illusion. Our findings suggest that the same parietal networks that normally act to enhance perception of attended events also play a role in the binding of auditory and visual stimuli in the sound-induced flash illusion.
Fraga González, Gorka; Žarić, Gojko; Tijms, Jurgen; Bonte, Milene; van der Molen, Maurits W.
2015-01-01
A recent account of dyslexia assumes that a failure to develop automated letter-speech sound integration might be responsible for the observed lack of reading fluency. This study uses a pre-test-training-post-test design to evaluate the effects of a training program based on letter-speech sound associations with a special focus on gains in reading fluency. A sample of 44 children with dyslexia and 23 typical readers, aged 8 to 9, was recruited. Children with dyslexia were randomly allocated to either the training program group (n = 23) or a waiting-list control group (n = 21). The training intensively focused on letter-speech sound mapping and consisted of 34 individual sessions of 45 minutes over a five month period. The children with dyslexia showed substantial reading gains for the main word reading and spelling measures after training, improving at a faster rate than typical readers and waiting-list controls. The results are interpreted within the conceptual framework assuming a multisensory integration deficit as the most proximal cause of dysfluent reading in dyslexia. Trial Registration: ISRCTN register ISRCTN12783279 PMID:26629707
Soldier Perceptions of the Rapid Decision Trainer
2005-05-01
utility. "Integrated 3D spatialized sound, supporting the most common sound formats including Wav and Midi." SCORM Integration… A major objective of this…"low" and "very low" ratings in a similar manner for the lowest ratings categories. Pre-LFX Questionnaire: overall training value of the RDT. Lieutenants…on school computers and has issues similar to ActiveX; however, applets…issued the RDT on CD-ROM to each IOBC student…installed on the client
The sound symbolism bootstrapping hypothesis for language acquisition and language evolution
Imai, Mutsumi; Kita, Sotaro
2014-01-01
Sound symbolism is a non-arbitrary relationship between speech sounds and meaning. We review evidence that, contrary to the traditional view in linguistics, sound symbolism is an important design feature of language, which affects online processing of language, and most importantly, language acquisition. We propose the sound symbolism bootstrapping hypothesis, claiming that (i) pre-verbal infants are sensitive to sound symbolism, due to a biologically endowed ability to map and integrate multi-modal input, (ii) sound symbolism helps infants gain referential insight for speech sounds, (iii) sound symbolism helps infants and toddlers associate speech sounds with their referents to establish a lexical representation and (iv) sound symbolism helps toddlers learn words by allowing them to focus on referents embedded in a complex scene, alleviating Quine's problem. We further explore the possibility that sound symbolism is deeply related to language evolution, drawing the parallel between historical development of language across generations and ontogenetic development within individuals. Finally, we suggest that sound symbolism bootstrapping is a part of a more general phenomenon of bootstrapping by means of iconic representations, drawing on similarities and close behavioural links between sound symbolism and speech-accompanying iconic gesture. PMID:25092666
Acoustic Performance of a Real-Time Three-Dimensional Sound-Reproduction System
NASA Technical Reports Server (NTRS)
Faller, Kenneth J., II; Rizzi, Stephen A.; Aumann, Aric R.
2013-01-01
The Exterior Effects Room (EER) is a 39-seat auditorium at the NASA Langley Research Center and was built to support psychoacoustic studies of aircraft community noise. The EER has a real-time simulation environment which includes a three-dimensional sound-reproduction system. This system requires real-time application of equalization filters to compensate for spectral coloration of the sound reproduction due to installation and room effects. This paper describes the efforts taken to develop the equalization filters for use in the real-time sound-reproduction system and the subsequent analysis of the system's acoustic performance. The acoustic performance of the compensated and uncompensated sound-reproduction system is assessed for its crossover performance, its performance under stationary and dynamic conditions, the maximum spatialized sound pressure level it can produce from a single virtual source, and the spatial uniformity of a generated sound field. Additionally, application examples are given to illustrate the compensated sound-reproduction system's performance using recorded aircraft flyovers.
NASA Technical Reports Server (NTRS)
Platt, R.
1999-01-01
This is the Performance Verification Report, Initial Comprehensive Performance Test Report, P/N 1331200-2-IT, S/N 105/A2, for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A). The specification establishes the requirements for the Comprehensive Performance Test (CPT) and Limited Performance Test (LPT) of the Advanced Microwave Sounding Unit-A2 (AMSU-A2), referred to herein as the unit. The unit is defined on Drawing 1331200. Test procedure sequence: the sequence in which the several phases of this test procedure shall take place is shown in Figure 1, but the phases can be performed in any order.
Measor, Kevin R; Leavell, Brian C; Brewton, Dustin H; Rumschlag, Jeffrey; Barber, Jesse R; Razak, Khaleel A
2017-01-01
In active sensing, animals make motor adjustments to match sensory inputs to specialized neural circuitry. Here, we describe an active sensing system for sound level processing. The pallid bat uses downward frequency-modulated (FM) sweeps as echolocation calls for general orientation and obstacle avoidance. The bat's auditory cortex contains a region selective for these FM sweeps (FM sweep-selective region, FMSR). We show that the vast majority of FMSR neurons are sensitive and strongly selective for relatively low levels (30-60 dB SPL). Behavioral testing shows that when a flying bat approaches a target, it reduces output call levels to keep echo levels between ∼30 and 55 dB SPL. Thus, the pallid bat behaviorally matches echo levels to an optimized neural representation of sound levels. FMSR neurons are more selective for sound levels of FM sweeps than tones, suggesting that across-frequency integration enhances level tuning. Level-dependent timing of high-frequency sideband inhibition in the receptive field shapes increased level selectivity for FM sweeps. Together with previous studies, these data indicate that the same receptive field properties shape multiple filters (sweep direction, rate, and level) for FM sweeps, a sound common in multiple vocalizations, including human speech. The matched behavioral and neural adaptations for low-intensity echolocation in the pallid bat will facilitate foraging with reduced probability of acoustic detection by prey.
Auditory Signal Processing in Communication: Perception and Performance of Vocal Sounds
Prather, Jonathan F.
2013-01-01
Learning and maintaining the sounds we use in vocal communication require accurate perception of the sounds we hear performed by others and feedback-dependent imitation of those sounds to produce our own vocalizations. Understanding how the central nervous system integrates auditory and vocal-motor information to enable communication is a fundamental goal of systems neuroscience, and insights into the mechanisms of those processes will profoundly enhance clinical therapies for communication disorders. Gaining the high-resolution insight necessary to define the circuits and cellular mechanisms underlying human vocal communication is presently impractical. Songbirds are the best animal model of human speech, and this review highlights recent insights into the neural basis of auditory perception and feedback-dependent imitation in those animals. Neural correlates of song perception are present in auditory areas, and those correlates are preserved in the auditory responses of downstream neurons that are also active when the bird sings. Initial tests indicate that singing-related activity in those downstream neurons is associated with vocal-motor performance as opposed to the bird simply hearing itself sing. Therefore, action potentials related to auditory perception and action potentials related to vocal performance are co-localized in individual neurons. Conceptual models of song learning involve comparison of vocal commands and the associated auditory feedback to compute an error signal that is used to guide refinement of subsequent song performances, yet the sites of that comparison remain unknown. Convergence of sensory and motor activity onto individual neurons points to a possible mechanism through which auditory and vocal-motor signals may be linked to enable learning and maintenance of the sounds used in vocal communication. PMID:23827717
A simple computer-based measurement and analysis system of pulmonary auscultation sounds.
Polat, Hüseyin; Güler, Inan
2004-12-01
Listening to various lung sounds has proven to be an important diagnostic tool for detecting and monitoring certain types of lung disease. In this study, a computer-based system was designed for easy measurement and analysis of lung sounds using the software package DasyLAB. The designed system can digitally record lung sounds captured with an electronic stethoscope plugged into a sound card on a portable computer, display the lung sound waveform for each auscultation site, save the recording in ASCII format, acoustically reproduce the lung sound, edit and print the sound waveforms, display the time-expanded waveform, compute the Fast Fourier Transform (FFT), and display the power spectrum and spectrogram.
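The analysis stage the abstract lists, an FFT-based power spectrum plus a spectrogram, is a routine computation that can be sketched with SciPy. This is a generic sketch, not the DasyLAB implementation: the 8 kHz sampling rate, segment lengths, and the synthetic "wheeze-like" tone are assumptions for illustration.

```python
import numpy as np
from scipy import signal


def analyze_lung_sound(x, fs):
    """Return the averaged power spectrum and the spectrogram of a
    lung-sound recording (1-D array of samples at rate fs in Hz)."""
    x = x - np.mean(x)  # remove DC offset from the sound-card capture
    freqs, psd = signal.welch(x, fs, nperseg=1024)          # power spectrum
    f, t_spec, sxx = signal.spectrogram(x, fs, nperseg=256, noverlap=128)
    return (freqs, psd), (f, t_spec, sxx)


# Hypothetical input: a 150 Hz wheeze-like tone in background noise
fs = 8000
t = np.arange(2 * fs) / fs
x = np.sin(2 * np.pi * 150 * t)
x += 0.1 * np.random.default_rng(0).standard_normal(t.size)
(freqs, psd), (f, t_spec, sxx) = analyze_lung_sound(x, fs)
print(int(freqs[np.argmax(psd)]))  # dominant frequency near the 150 Hz tone
```

A clinical front end would add the waveform display and ASCII export around the same core transforms.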
NASA Technical Reports Server (NTRS)
Platt, R.
1999-01-01
This is the Performance Verification Report, Final Comprehensive Performance Test (CPT) Report, for the Integrated Advanced Microwave Sounding Unit-A (AMSU-A). This specification establishes the requirements for the CPT and Limited Performance Test (LPT) of the AMSU-1A, referred to herein as the unit. The sequence in which the several phases of this test procedure shall take place is shown.
Musical training increases functional connectivity, but does not enhance mu suppression.
Wu, C Carolyn; Hamm, Jeff P; Lim, Vanessa K; Kirk, Ian J
2017-09-01
Musical training provides an ideal platform for investigating action representation for sound. Learning to play an instrument requires integration of sensory and motor perception-action processes. Functional neuroimaging studies have indicated that listening to trained music can result in the activity in premotor areas, even after a short period of training. These studies suggest that action representation systems are heavily dependent on specific sensorimotor experience. However, others suggest that because humans naturally move to music, sensorimotor training is not necessary and there is a more general action representation for music. We previously demonstrated that EEG mu suppression, commonly implemented to demonstrate mirror-neuron-like action representation while observing movements, can also index action representations for sounds in pianists. The current study extends these findings to a group of non-musicians who learned to play randomised sequences on a piano, in order to acquire specific sound-action mappings for the five fingers of their right hand. We investigated training-related changes in neural dynamics as indexed by mu suppression and task-related coherence measures. To test the specificity of training effects, we included sounds similar to those encountered in the training and additionally rhythm sequences. We found no effect of training on mu suppression between pre- and post-training EEG recordings. However, task-related coherence indexing functional connectivity between electrodes over audiomotor areas increased after training. These results suggest that long-term training in musicians and short-term training in novices may be associated with different stages of audiomotor integration that can be reflected in different EEG measures. Furthermore, the changes in functional connectivity were specifically found for piano tones, and were not apparent when participants listened to rhythms, indicating some degree of specificity related to training. 
Acoustics of Jet Surface Interaction - Scrubbing Noise
NASA Technical Reports Server (NTRS)
Khavaran, Abbas
2014-01-01
Concepts envisioned for the future of civil air transport consist of unconventional propulsion systems in close proximity to the structure or embedded in the airframe. While such integrated systems are intended to shield noise from the community, they also introduce new sources of sound. Sound generation due to the interaction of a jet flow past a nearby solid surface is investigated here using the generalized acoustic analogy theory. The analysis applies to the boundary-layer noise generated at and near a wall, and excludes the scattered noise component that is produced at the leading or the trailing edge. While compressibility effects are relatively unimportant at very low Mach numbers, frictional heat generation and the thermal gradient normal to the surface could play important roles in the generation and propagation of sound in high-speed jets of practical interest. A general expression is given for the spectral density of the far-field sound as governed by the variable-density Pridmore-Brown equation. The propagation Green's function is solved numerically for a high-aspect-ratio rectangular jet, starting with the boundary conditions on the surface and subject to specified mean velocity and temperature profiles between the surface and the observer. It is shown that the magnitude of the Green's function decreases with increasing source frequency and/or jet temperature. The phase remains constant for a rigid surface, but varies with source location when subject to an impedance-type boundary condition. The Green's function in the absence of the surface, and flight effects, are also investigated.
ERIC Educational Resources Information Center
Fussler, Herman H.; Payne, Charles T.
The project's second year (1967/68) was devoted to upgrading the computer operating software and programs to increase versatility and reliability. General conclusions about the program after 24 months of operation are that the project's objectives are sound and that effective utilization of computer-aided bibliographic data processing is essential…
Compression of auditory space during forward self-motion.
Teramoto, Wataru; Sakamoto, Shuichi; Furune, Fumimasa; Gyoba, Jiro; Suzuki, Yôiti
2012-01-01
Spatial inputs from the auditory periphery can be changed with movements of the head or whole body relative to the sound source. Nevertheless, humans can perceive a stable auditory environment and appropriately react to a sound source. This suggests that the inputs are reinterpreted in the brain, while being integrated with information on the movements. Little is known, however, about how these movements modulate auditory perceptual processing. Here, we investigate the effect of linear acceleration on auditory space representation. Participants were passively transported forward/backward at constant accelerations using a robotic wheelchair. An array of loudspeakers was aligned parallel to the motion direction along a wall to the right of the listener. A short noise burst was presented during the self-motion from one of the loudspeakers when the listener's physical coronal plane reached the location of one of the speakers (null point). In Experiments 1 and 2, the participants indicated whether the sound was presented forward or backward relative to their subjective coronal plane. The results showed that the sound position aligned with the subjective coronal plane was displaced ahead of the null point only during forward self-motion and that the magnitude of the displacement increased with increasing acceleration. Experiment 3 investigated the structure of the auditory space in the traveling direction during forward self-motion. The sounds were presented at various distances from the null point. The participants indicated the perceived sound location by pointing with a rod. All the sounds that were actually located in the traveling direction were perceived as being biased towards the null point. These results suggest a distortion of the auditory space in the direction of movement during forward self-motion.
The underlying mechanism might involve anticipatory spatial shifts in auditory receptive field locations driven by afferent signals from the vestibular system.
An Overview of the NASA Sounding Rockets and Balloon Programs
NASA Technical Reports Server (NTRS)
Flowers, Bobby J.; Needleman, Harvey C.
1999-01-01
The U.S. National Aeronautics and Space Administration (NASA) Sounding Rockets and Balloon Programs conduct a combined total of approximately fifty to sixty missions per year in support of the NASA scientific community. These missions are provided in support of investigations sponsored by NASA's Offices of Space Science, Life and Microgravity Sciences & Applications, and Earth Science. The Goddard Space Flight Center has management and implementation responsibility for these programs. The NASA Sounding Rockets Program has continued to support the science community by integrating their experiments into the sounding rocket payload and providing the rocket vehicle and launch operations necessary to provide the altitude/time required to obtain the science objectives. The sounding rockets continue to provide a cost-effective way to make in situ observations from 50 to 1500 km in the near-earth environment and to uniquely cover the altitude regime between 50 km and 130 km above the Earth's surface, which is physically inaccessible to either balloons or satellites. A new architecture for providing this support has been introduced this year with the establishment of the NASA Sounding Rockets Contract. The Program has continued to introduce improvements into its operations and ground and flight systems. An overview of the NASA Sounding Rockets Program with special emphasis on the new support contract will be presented. The NASA Balloon Program continues to make advancements and developments in its capabilities for support of the scientific ballooning community. Long duration balloon (LDB) flight is a prominent aspect of the program, with two campaigns scheduled for this calendar year. Two flights are scheduled in the Northern Hemisphere from Fairbanks, Alaska, in June and two flights are scheduled from McMurdo, Antarctica, in the Southern Hemisphere in December.
The comprehensive balloon research and development (R&D) effort has continued with advances being made across the spectrum of balloon related disciplines. As a result of these technology advancements a new ultra long duration balloon project (ULDB) for the development of a 100- day duration balloon capability has been initiated. The ULDB will rely upon new balloon materials and designs to accomplish its goals. The Program has also continued to introduce new technology and improvements into flights systems, ground systems and operational techniques. An overview of the various aspects of the NASA Balloon Program will be presented.
NASA Astrophysics Data System (ADS)
Miller, Robert E. (Robin)
2005-04-01
In acoustic spaces that are played as extensions of musical instruments, tonality is a major contributor to the experience of reality. Tonality is described as a process of integration in our consciousness, over the reverberation time of the room, of many sonic arrivals in three dimensions, each directionally coded in a learned response by the listener's unique head-related transfer function (HRTF). Preserving this complex 3D directionality is key to lifelike reproduction of a recording. Conventional techniques such as stereo or 5.1-channel surround sound position the listener at the apex of a triangle or the center of a circle, not the center of the sphere of lifelike hearing. A periphonic reproduction system for music and movie entertainment, Virtual Reality, and Training Simulation termed PerAmbio 3D/2D (Pat. pending) is described in theory and subjective tests that capture the 3D sound field with a microphone array and transform the periphonic signals into ordinary 6-channel media for either decoderless 2D replay on 5.1 systems, or lossless 3D replay with a decoder and five additional speakers. PerAmbio 3D/2D is described as a practical approach to preserving the spatial perception of reality, where the listening room and speakers disappear, leaving the acoustical impression of the original venue.
Heart sounds: are you listening? Part 1.
Reimer-Kent, Jocelyn
2013-01-01
All nurses should have an understanding of heart sounds and be proficient in cardiac auscultation. Unfortunately, this skill is not part of many nursing school curricula, nor is it necessarily a required skill for employment. Yet, being able to listen to and accurately describe heart sounds has tangible benefits for the patient, as it is an integral part of a complete cardiac assessment. In this two-part article, I will review the fundamentals of cardiac auscultation, explain how cardiac anatomy and physiology relate to heart sounds, and describe the various heart sounds. Whether you are a beginner or a seasoned nurse, it is never too early or too late to add this important diagnostic skill to your assessment tool kit.
The effect of frequency-specific sound signals on the germination of maize seeds.
Vicient, Carlos M
2017-07-25
The effects of sound treatments on the germination of maize seeds were determined. White noise and bass sounds (300 Hz) had a positive effect on the germination rate: a 3 h treatment increased germination by about 8%, and a 5 h treatment by about 10%. Fast-green staining showed that at least part of the effect of sound is due to a physical alteration in the integrity of the pericarp, increasing its porosity and facilitating water and oxygen uptake. Accordingly, when the pericarp was removed from the seeds, the positive effect of sound on germination disappeared.
Brown, William; Liu, Connie; John, Rita Marie; Ford, Phoebe
2014-01-01
Developing gross and fine motor skills and expressing complex emotion is critical for child development. We introduce “StorySense”, an eBook-integrated mobile app prototype that can sense face and sound topologies and identify movement and expression to promote children’s motor skills and emotional development. Currently, most interactive eBooks on mobile devices only leverage “low-motor” interaction (i.e., tapping or swiping). Our app senses a greater breadth of motion (e.g., clapping, snapping, and face tracking), and dynamically alters the storyline according to physical responses in ways that encourage the performance of predetermined motor skills ideal for a child’s gross and fine motor development. In addition, our app can capture changes in facial topology, which can be mapped using the Facial Action Coding System (FACS) for later interpretation of emotion. StorySense expands the human computer interaction vocabulary for mobile devices. Potential clinical applications include child development, physical therapy, and autism. PMID:25954336
Sound source localization on an axial fan at different operating points
NASA Astrophysics Data System (ADS)
Zenger, Florian J.; Herold, Gert; Becker, Stefan; Sarradj, Ennes
2016-08-01
A generic fan with unskewed fan blades is investigated using a microphone array method. The relative motion of the fan with respect to the stationary microphone array is compensated by interpolating the microphone data to a virtual rotating array with the same rotational speed as the fan. Hence, beamforming algorithms with deconvolution, in this case CLEAN-SC, could be applied. Sound maps and integrated spectra of sub-components are evaluated for five operating points. At selected frequency bands, the presented method yields sound maps featuring a clear circular source pattern corresponding to the nine fan blades. Depending on the adjusted operating point, sound sources are located on the leading or trailing edges of the fan blades. Integrated spectra show that in most cases leading edge noise is dominant for the low-frequency part and trailing edge noise for the high-frequency part. The shift from leading to trailing edge noise is strongly dependent on the operating point and frequency range considered.
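The virtual-rotating-array step described above can be sketched as follows. This is a simplified illustration (uniform microphone ring, linear interpolation in angle, hypothetical geometry and rotation rate), not the authors' implementation:

```python
# Simplified sketch of the virtual rotating array idea: pressures from a
# stationary ring of microphones are re-sampled, at each time step, at
# virtual positions co-rotating with the fan, using linear interpolation
# in angle between the two neighbouring physical microphones.
# Geometry and rotation rate here are hypothetical.
import numpy as np

def virtual_rotating_array(p, fs, omega):
    """p: (n_samples, n_mics) pressures on a uniform ring; omega: rad/s.
    Returns pressures re-sampled at microphones co-rotating at omega."""
    n, m = p.shape
    dtheta = 2 * np.pi / m                          # angular mic spacing
    shift = omega * (np.arange(n) / fs) / dtheta    # rotation in mic spacings
    k = np.floor(shift).astype(int)
    frac = shift - k
    rows = np.arange(n)
    out = np.empty_like(p)
    for j in range(m):                              # virtual channel j
        a = (j + k) % m                             # trailing physical mic
        b = (j + k + 1) % m                         # leading physical mic
        out[:, j] = (1 - frac) * p[rows, a] + frac * p[rows, b]
    return out
```

In the co-rotating frame a source fixed to a blade appears stationary, which is what makes conventional beamforming with deconvolution (such as CLEAN-SC) applicable.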
Yost, William A; Zhong, Xuan; Najam, Anbar
2015-11-01
In four experiments listeners were rotated or were stationary. Sounds came from a stationary loudspeaker or rotated from loudspeaker to loudspeaker around an azimuth array. When either sounds or listeners rotate, the auditory cues used for sound source localization change, but in the everyday world listeners perceive sound rotation only when sounds rotate, not when listeners rotate. In the everyday world sound source locations are referenced to positions in the environment (a world-centric reference system). The auditory cues for sound source location indicate locations relative to the head (a head-centric reference system), not locations relative to the world. This paper deals with a general hypothesis that the world-centric location of sound sources requires the auditory system to have information about the auditory cues used for sound source location and cues about head position. The use of visual and vestibular information in determining rotating head position in sound rotation perception was investigated. The experiments show that sound rotation perception when sources and listeners rotate was based on acoustic, visual, and, perhaps, vestibular information. The findings are consistent with the general hypothesis and suggest that sound source localization is not based just on acoustics. It is a multisystem process.
The sound symbolism bootstrapping hypothesis for language acquisition and language evolution.
Imai, Mutsumi; Kita, Sotaro
2014-09-19
Sound symbolism is a non-arbitrary relationship between speech sounds and meaning. We review evidence that, contrary to the traditional view in linguistics, sound symbolism is an important design feature of language, which affects online processing of language, and most importantly, language acquisition. We propose the sound symbolism bootstrapping hypothesis, claiming that (i) pre-verbal infants are sensitive to sound symbolism, due to a biologically endowed ability to map and integrate multi-modal input, (ii) sound symbolism helps infants gain referential insight for speech sounds, (iii) sound symbolism helps infants and toddlers associate speech sounds with their referents to establish a lexical representation and (iv) sound symbolism helps toddlers learn words by allowing them to focus on referents embedded in a complex scene, alleviating Quine's problem. We further explore the possibility that sound symbolism is deeply related to language evolution, drawing the parallel between historical development of language across generations and ontogenetic development within individuals. Finally, we suggest that sound symbolism bootstrapping is a part of a more general phenomenon of bootstrapping by means of iconic representations, drawing on similarities and close behavioural links between sound symbolism and speech-accompanying iconic gesture. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Integrating Satellite, Radar and Surface Observation with Time and Space Matching
NASA Astrophysics Data System (ADS)
Ho, Y.; Weber, J.
2015-12-01
The Integrated Data Viewer (IDV) from Unidata is a Java™-based software framework for analyzing and visualizing geoscience data. It brings together the ability to display and work with satellite imagery, gridded data, surface observations, balloon soundings, NWS WSR-88D Level II and Level III RADAR data, and NOAA National Profiler Network data, all within a unified interface. Applying time and space matching to the satellite, radar, and surface observation datasets will automatically synchronize the display from different data sources and spatially subset them to match the display area in the view window. These features allow IDV users to effectively integrate these observations and provide three-dimensional views of the weather system to better understand the underlying dynamics and physics of weather phenomena.
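The "time matching" part of such synchronization can be illustrated with a minimal nearest-timestamp lookup. The function name and tolerance below are hypothetical, not part of the IDV API:

```python
# Hedged sketch of time matching: for a chosen display time, pick from a
# dataset the observation whose timestamp is nearest, within a tolerance.
import bisect

def nearest_within(timestamps, t, tol):
    """Nearest value to t in a sorted list, or None if more than tol away."""
    i = bisect.bisect_left(timestamps, t)
    candidates = [timestamps[j] for j in (i - 1, i) if 0 <= j < len(timestamps)]
    best = min(candidates, key=lambda s: abs(s - t), default=None)
    return best if best is not None and abs(best - t) <= tol else None
```

Each data source would be queried this way at every display step, so that all layers show observations valid at (approximately) the same time.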
CANSAT: Design of a Small Autonomous Sounding Rocket Payload
NASA Technical Reports Server (NTRS)
Berman, Joshua; Duda, Michael; Garnand-Royo, Jeff; Jones, Alexa; Pickering, Todd; Tutko, Samuel
2009-01-01
CanSat is an international student design-build-launch competition organized by the American Astronautical Society (AAS) and American Institute of Aeronautics and Astronautics (AIAA). The competition is also sponsored by the Naval Research Laboratory (NRL), the National Aeronautics and Space Administration (NASA), AGI, Orbital Sciences Corporation, Praxis Incorporated, and SolidWorks. Specifically, the 2009 Virginia Tech CanSat Team is funded by BAE Systems, Incorporated of Manassas, Virginia. The objective of the 2009 CanSat competition is to complete remote sensing missions by designing a small autonomous sounding rocket payload. The payload designed will follow and perform to a specific set of mission requirements for the 2009 competition. The competition encompasses a complete life-cycle of one year which includes all phases of design, integration, testing, reviews, and launch.
NASA Astrophysics Data System (ADS)
Alhroob, M.; Battistin, M.; Berry, S.; Bitadze, A.; Bonneau, P.; Boyd, G.; Crespo-Lopez, O.; Degeorge, C.; Deterre, C.; Di Girolamo, B.; Doubek, M.; Favre, G.; Hallewell, G.; Katunin, S.; Lombard, D.; Madsen, A.; McMahon, S.; Nagai, K.; O'Rourke, A.; Pearson, B.; Robinson, D.; Rossi, C.; Rozanov, A.; Stanecka, E.; Strauss, M.; Vacek, V.; Vaglio, R.; Young, J.; Zwalinski, L.
2017-01-01
The development of custom ultrasonic instrumentation was motivated by the need for continuous real-time monitoring of possible leaks and mass flow measurement in the evaporative cooling systems of the ATLAS silicon trackers. The instruments use pairs of ultrasonic transducers transmitting sound bursts and measuring transit times in opposite directions. The gas flow rate is calculated from the difference in transit times, while the sound velocity is deduced from their average. The gas composition is then evaluated by comparison with a molar composition vs. sound velocity database, based on the direct dependence between sound velocity and component molar concentration in a gas mixture at a known temperature and pressure. The instrumentation has been developed in several geometries, with five instruments now integrated and in continuous operation within the ATLAS Detector Control System (DCS) and its finite state machine. One instrument monitors C3F8 coolant leaks into the Pixel detector N2 envelope with a molar resolution better than 2 × 10⁻⁵, and has indicated a level of 0.14% when all the cooling loops of the recently re-installed Pixel detector are operational. Another instrument monitors air ingress into the C3F8 condenser of the new C3F8 thermosiphon coolant recirculator, with sub-percent precision. The recent effect of introducing a small N2 volume into the 9.5 m³ total volume of the thermosiphon system was clearly seen with this instrument. Custom microcontroller-based readout has been developed for the instruments, allowing readout into the ATLAS DCS via Modbus TCP/IP over Ethernet. The instrumentation has many potential applications where continuous binary gas composition measurement is required, including in hydrocarbon and anaesthetic gas mixtures.
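The counter-propagating time-of-flight relations such instruments are built on can be sketched as follows; the path length and transit times used in the example are illustrative values, not the ATLAS geometry:

```python
# Sketch of the time-of-flight relations: sound speed comes from the
# average of inverse transit times, and the flow component along the
# acoustic path from their difference. Inputs here are illustrative.
def sound_velocity_and_flow(path_length, t_down, t_up):
    """path_length in m, transit times in s; returns (c, v) in m/s."""
    c = (path_length / 2.0) * (1.0 / t_down + 1.0 / t_up)  # sound speed
    v = (path_length / 2.0) * (1.0 / t_down - 1.0 / t_up)  # flow along path
    return c, v
```

With c in hand, the molar composition of a known binary mixture follows from a lookup in a sound-velocity-versus-concentration table at the measured temperature and pressure, as described in the abstract.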
Neurobiology of Everyday Communication: What Have We Learned From Music?
Kraus, Nina; White-Schwoch, Travis
2016-06-09
Sound is an invisible but powerful force that is central to everyday life. Studies in the neurobiology of everyday communication seek to elucidate the neural mechanisms underlying sound processing, their stability, their plasticity, and their links to language abilities and disabilities. This sound processing lies at the nexus of cognitive, sensorimotor, and reward networks. Music provides a powerful experimental model to understand these biological foundations of communication, especially with regard to auditory learning. We review studies of music training that employ a biological approach to reveal the integrity of sound processing in the brain, the bearing these mechanisms have on everyday communication, and how these processes are shaped by experience. Together, these experiments illustrate that music works in synergistic partnership with language skills and the ability to make sense of speech in complex, everyday listening environments. The active, repeated engagement with sound demanded by music making augments the neural processing of speech, eventually cascading to listening and language. This generalization from music to everyday communication illustrates both that these auditory brain mechanisms have a profound potential for plasticity and that sound processing is biologically intertwined with listening and language skills. A new wave of studies has pushed neuroscience beyond the traditional laboratory by revealing the effects of community music training in underserved populations. These community-based studies reinforce laboratory work and highlight how the auditory system achieves a remarkable balance between stability and flexibility in processing speech. Moreover, these community studies have the potential to inform health care, education, and social policy by lending a neurobiological perspective to their efficacy. © The Author(s) 2016.
Transmission loss of orthogonally rib-stiffened double-panel structures with cavity absorption.
Xin, F X; Lu, T J
2011-04-01
The transmission loss of sound through infinite orthogonally rib-stiffened double-panel structures having cavity-filling fibrous sound absorptive materials is theoretically investigated. The propagation of sound across the fibrous material is characterized using an equivalent fluid model, and the motions of the rib-stiffeners are described by including all possible vibrations, i.e., flexural displacements, bending, and torsional rotations. The effects of fluid-structure coupling are accounted for by enforcing velocity continuity conditions at fluid-panel interfaces. By taking full advantage of the periodic nature of the double-panel, the space-harmonic approach and virtual work principle are applied to solve the sets of resultant governing equations, which are eventually truncated as a finite system of simultaneous algebraic equations and numerically solved insofar as the solution converges. To validate the proposed model, a comparison between the present model predictions and existing numerical and experimental results for a simplified version of the double-panel structure is carried out, with overall agreement achieved. The model is subsequently employed to explore the influence of the fluid-structure coupling between the fluid in the cavity and the two panels on sound transmission across the orthogonally rib-stiffened double-panel structure. The obtained results demonstrate that this fluid-structure coupling significantly affects sound transmission loss (STL) at low frequencies and cannot be ignored when the rib-stiffeners are sparsely distributed. As a highlight of this research, an integrated optimization algorithm toward lightweight, high-stiffness, and superior sound insulation capability is proposed, based on which a preliminary optimal design of the double-panel structure is performed.
Multichannel sound reinforcement systems at work in a learning environment
NASA Astrophysics Data System (ADS)
Malek, John; Campbell, Colin
2003-04-01
Many people have experienced the entertaining benefits of a surround sound system, either in their own home or in a movie theater, but another application exists for multichannel sound that has for the most part gone unused. This is the application of multichannel sound systems to the learning environment. By incorporating a 7.1 surround processor and a touch panel interface programmable control system, the main lecture hall at the University of Michigan Taubman College of Architecture and Urban Planning has been converted from an ordinary lecture hall to a working audiovisual laboratory. The multichannel sound system is used in a wide variety of experiments, including exposure to sounds to test listeners' aural perception of the tonal characteristics of varying pitch, reverberation, speech transmission index, and sound-pressure level. The touch panel's custom interface allows a variety of user groups to control different parts of the AV system and provides preset capability that allows for numerous system configurations.
A Forensically Sound Adversary Model for Mobile Devices.
Do, Quang; Martini, Ben; Choo, Kim-Kwang Raymond
2015-01-01
In this paper, we propose an adversary model to facilitate forensic investigations of mobile devices (e.g. Android, iOS and Windows smartphones) that can be readily adapted to the latest mobile device technologies. This is essential given the ongoing and rapidly changing nature of mobile device technologies. An integral principle and significant constraint upon forensic practitioners is that of forensic soundness. Our adversary model specifically considers and integrates the constraints of forensic soundness on the adversary, in our case, a forensic practitioner. One construction of the adversary model is an evidence collection and analysis methodology for Android devices. Using the methodology with six popular cloud apps, we were successful in extracting various information of forensic interest in both the external and internal storage of the mobile device.
Psychoacoustics
NASA Astrophysics Data System (ADS)
Moore, Brian C. J.
An Algorithm for Controlled Integration of Sound and Text.
ERIC Educational Resources Information Center
Wohlert, Harry S.; McCormick, Martin
1985-01-01
A serious drawback in introducing sound into computer programs for teaching foreign language speech has been the lack of an algorithm to turn off the cassette recorder immediately to keep screen text and audio in synchronization. This article describes a program which solves that problem. (SED)
Modeling sound due to over-snow vehicles in Yellowstone and Grand Teton National Parks
DOT National Transportation Integrated Search
2006-10-01
A modified version of the FAA's Integrated Noise Model (INM) Version 6.2 was used to model the sound of over-snow vehicles (OSVs) (snowmobiles and snowcoaches) in Yellowstone and Grand Teton National Parks for ten modeling scenarios provided by...
The Central Role of Recognition in Auditory Perception: A Neurobiological Model
ERIC Educational Resources Information Center
McLachlan, Neil; Wilson, Sarah
2010-01-01
The model presents neurobiologically plausible accounts of sound recognition (including absolute pitch), neural plasticity involved in pitch, loudness and location information integration, and streaming and auditory recall. It is proposed that a cortical mechanism for sound identification modulates the spectrotemporal response fields of inferior…
Tan, Zhixiang; Zhang, Yi; Zeng, Deping; Wang, Hua
2015-04-01
In this paper, we propose a heart sound envelope extraction system. The system was implemented in LabVIEW based on the Hilbert-Huang transform (HHT). We first used a sound card to collect the heart sound, and then implemented the complete program of signal acquisition, pretreatment, and envelope extraction in LabVIEW based on the theory of the HHT. Finally, we used a case study to show that the system could easily collect heart sounds, preprocess them, and extract the envelope. The system retained and displayed the characteristics of the heart sound envelope well, and its program and methods are relevant to other research, such as work on vibration and voice.
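The envelope stage of such a pipeline can be illustrated with the analytic-signal (Hilbert) step. This is a generic sketch using a synthetic test signal, not the authors' LabVIEW implementation:

```python
# Generic sketch of envelope extraction: the magnitude of the analytic
# signal (computed via the Hilbert transform) gives the instantaneous
# envelope of a band-limited signal such as a heart sound.
import numpy as np
from scipy.signal import hilbert

def envelope(x):
    """Instantaneous envelope as the magnitude of the analytic signal."""
    return np.abs(hilbert(x))
```

For an amplitude-modulated carrier the returned curve tracks the modulation, which is the property the heart sound envelope relies on.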
Neural Decoding of Bistable Sounds Reveals an Effect of Intention on Perceptual Organization
2018-01-01
Auditory signals arrive at the ear as a mixture that the brain must decompose into distinct sources based to a large extent on acoustic properties of the sounds. An important question concerns whether listeners have voluntary control over how many sources they perceive. This has been studied using pure high (H) and low (L) tones presented in the repeating pattern HLH-HLH-, which can form a bistable percept heard either as an integrated whole (HLH-) or as segregated into high (H-H-) and low (-L-) sequences. Although instructing listeners to try to integrate or segregate sounds affects reports of what they hear, this could reflect a response bias rather than a perceptual effect. We had human listeners (15 males, 12 females) continuously report their perception of such sequences and recorded neural activity using MEG. During neutral listening, a classifier trained on patterns of neural activity distinguished between periods of integrated and segregated perception. In other conditions, participants tried to influence their perception by allocating attention either to the whole sequence or to a subset of the sounds. They reported hearing the desired percept for a greater proportion of time than when listening neutrally. Critically, neural activity supported these reports; stimulus-locked brain responses in auditory cortex were more likely to resemble the signature of segregation when participants tried to hear segregation than when attempting to perceive integration. These results indicate that listeners can influence how many sound sources they perceive, as reflected in neural responses that track both the input and its perceptual organization. SIGNIFICANCE STATEMENT Can we consciously influence our perception of the external world? We address this question using sound sequences that can be heard either as coming from a single source or as two distinct auditory streams. 
Listeners reported spontaneous changes in their perception between these two interpretations while we recorded neural activity to identify signatures of such integration and segregation. They also indicated that they could, to some extent, choose between these alternatives. This claim was supported by corresponding changes in responses in auditory cortex. By linking neural and behavioral correlates of perception, we demonstrate that the number of objects that we perceive can depend not only on the physical attributes of our environment, but also on how we intend to experience it. PMID:29440556
The power of product integrity.
Clark, K B; Fujimoto, T
1990-01-01
In the dictionary, integrity means wholeness, completeness, soundness. In products, integrity is the source of sustainable competitive advantage. Products with integrity perform superbly, provide good value, and satisfy customers' expectations in every respect, including such intangibles as their look and feel. Consider this example from the auto industry. In 1987, Mazda put a racy four-wheel steering system in a five-door family hatchback. Honda introduced a comparable system in the Prelude, a sporty, two-door coupe. Most of Honda's customers installed the new technology; Mazda's system sold poorly. Potential customers felt the fit--or misfit--between the car and the new component, and they responded accordingly. Companies that consistently develop products with integrity are coherent, integrated organizations. This internal integrity is visible at the level of strategy and structure, in management and organization, and in the skills, attitudes, and behavior of individual designers, engineers, and operators. Moreover, these companies are integrated externally: customers become part of the development organization. Integrity starts with a product concept that describes the new product from the potential customer's perspective--"pocket rocket" for a sporty, subcompact car, for example. Whether the final product has integrity will depend on two things: how well the concept satisfies potential customers' wants and needs and how completely the concept has been embodied in the product's details. In the most successful development organizations, "heavyweight" product managers are responsible for leading both tasks, as well as for guiding the creation of a strong product concept.
Door latching recognition apparatus and process
Eakle, Jr., Robert F.
2012-05-15
An acoustic door latch detector is provided in which a sound recognition sensor is integrated into a door or door lock mechanism. The programmable sound recognition sensor can be trained to recognize the acoustic signature of the door and door lock mechanism being properly engaged and secured. The acoustic sensor will signal a first indicator indicating that proper closure was detected or sound an alarm condition if the proper acoustic signature is not detected within a predetermined time interval.
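One simple way such a recognition sensor could compare captured audio against a trained signature is peak sliding cross-correlation. The sketch below is one possible approach, hypothetical and not the patented mechanism:

```python
# Hypothetical sketch of signature matching: standardize the captured
# audio and the trained latch signature, then take the peak sliding
# cross-correlation; a peak above threshold counts as a proper closure.
import numpy as np

def matches_signature(x, template, threshold=0.8):
    """True if the standardized capture contains a segment that
    correlates with the signature above the threshold."""
    x = (x - x.mean()) / (x.std() + 1e-12)
    tpl = (template - template.mean()) / (template.std() + 1e-12)
    corr = np.correlate(x, tpl, mode="valid") / len(tpl)
    return float(corr.max()) >= threshold
```

In the device described above, a match would trigger the closure indicator, and the absence of a match within the time window would raise the alarm.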
Shallow Water UXO Technology Demonstration Site, Scoring Record Number 2
2006-09-01
The Sound Metrics Corporation High Frequency Imaging Sonar (HFIS) (fig. 4) dual-frequency imaging sonar operates at 1.1 and 1.8 MHz. For this...the HFIS unit was determined using a National Marine Electronics Association (NMEA) GPRMC string from a Leica GPS system antenna mounted directly...above the HFIS instrument. This permits the image data to be integrated with the Multiple Frequency Sub-Bottom Profiler (MFSBP) and MGS data during
Vocal characteristics of pygmy blue whales and their change over time.
Gavrilov, Alexander N; McCauley, Robert D; Salgado-Kent, Chandra; Tripovich, Joy; Burton, Chris
2011-12-01
Vocal characteristics of pygmy blue whales of the eastern Indian Ocean population were analyzed using data from a hydroacoustic station deployed off Cape Leeuwin in Western Australia as part of the Comprehensive Nuclear-Test-Ban Treaty monitoring network, from two acoustic observatories of the Australian Integrated Marine Observing System, and from individual sea noise loggers deployed in the Perth Canyon. These data were collected from 2002 to 2010, inclusive. It is shown that the themes of pygmy blue whale songs consist of either two or three repeating tonal sounds with harmonics. The most intense sound of the tonal theme was estimated to correspond to a source level of 179 ± 2 dB re 1 μPa at 1 m, measured for 120 calls from seven different animals. Short-duration calls of impulsive downswept sound from pygmy blue whales were weaker, with the source level estimated to vary between 168 and 176 dB. A gradual decrease in call frequency, with a mean rate estimated at 0.35 ± 0.3 Hz/year, was observed over nine years in the third harmonic of tonal sound 2 in the whale song theme, which corresponds to a negative trend of about 0.12 Hz/year in the call fundamental frequency. © 2011 Acoustical Society of America
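The harmonic-to-fundamental conversion reported above is straightforward: a trend measured on the nth harmonic divides by n to give the fundamental trend (0.35 / 3 ≈ 0.12 Hz/year). A minimal sketch using an ordinary least-squares slope; the 66 Hz starting frequency and the perfectly linear synthetic yearly values are illustrative assumptions, not data from the study:

```python
def linear_trend(years, freqs):
    """Ordinary least-squares slope of frequency (Hz) versus time (years)."""
    n = len(years)
    my = sum(years) / n
    mf = sum(freqs) / n
    num = sum((y - my) * (f - mf) for y, f in zip(years, freqs))
    den = sum((y - my) ** 2 for y in years)
    return num / den

# Synthetic yearly measurements of the third harmonic, declining 0.35 Hz/year
years = list(range(2002, 2011))
third_harmonic = [66.0 - 0.35 * (y - 2002) for y in years]

slope3 = linear_trend(years, third_harmonic)  # ~ -0.35 Hz/year
fundamental_trend = slope3 / 3                # ~ -0.12 Hz/year
```

On real, noisy call measurements the same slope estimator applies; only the uncertainty of the fit changes.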
NASA Astrophysics Data System (ADS)
Longmore, S. P.; Knaff, J. A.; Schumacher, A.; Dostalek, J.; DeMaria, R.; Chirokova, G.; DeMaria, M.; Powell, D. C.; Sigmund, A.; Yu, W.
2014-12-01
The Colorado State University (CSU) Cooperative Institute for Research in the Atmosphere (CIRA) has recently deployed a tropical cyclone (TC) intensity and surface wind radii estimation algorithm that utilizes Suomi National Polar-orbiting Partnership (S-NPP) satellite Advanced Technology Microwave Sounder (ATMS) data and Advanced Microwave Sounding Unit (AMSU) data from the NOAA-18, NOAA-19, and MetOp-A polar-orbiting satellites for testing, integration and operations for the Product System Development and Implementation (PSDI) projects at NOAA's National Environmental Satellite, Data, and Information Service (NESDIS). This presentation discusses the evolution of the CIRA NPP/AMSU TC algorithms internally at CIRA and their migration and integration into the NOAA Data Exploitation (NDE) development and testing frameworks. The discussion will focus on 1) the development cycle of internal NPP/AMSU TC algorithm components by scientists and software engineers, 2) the exchange of these components into the NPP/AMSU TC software systems using the Subversion version control system and other exchange methods, 3) testing, debugging and integration of the NPP/AMSU TC systems both at CIRA and NESDIS, and 4) the update cycle of new releases through continuous integration. Lastly, a discussion of the methods that were effective and those that need revision will be detailed for the next iteration of the NPP/AMSU TC system.
Spiesberger, John L
2013-02-01
The hypothesis tested is that internal gravity waves limit the coherent integration time of sound at 1346 km range in the Pacific Ocean at 133 Hz and a pulse resolution of 0.06 s. Six months of continuous transmissions at about 18 min intervals are examined. The source and receiver are mounted on the bottom of the ocean with timing governed by atomic clocks, so the measured variability is due only to fluctuations in the ocean. A model for the propagation of sound through fluctuating internal waves is run without any tuning to the data. Excellent resemblance is found between the modeled and measured probability distributions of integration time up to five hours.
NASA Astrophysics Data System (ADS)
Probst, Ron N.; Rypka, Dann
2005-09-01
Pre-engineered and manufactured sound isolation rooms were developed to ensure guaranteed sound isolation while offering the unique ability to be disassembled and relocated without loss of acoustic performance. Case studies of pre-engineered sound isolation rooms used for music practice and various radio broadcast purposes are highlighted. Three prominent universities wrestle with the challenges of growth and expansion while responding to the specialized acoustic requirements of these spaces. Reduced state funding for universities requires close examination of all options while ensuring sound isolation requirements are achieved. Changing curriculum, renovation, and new construction make pre-engineered and manufactured rooms with guaranteed acoustical performance good investments now and for the future. An added benefit is the optional integration of active acoustics to provide simulations of other spaces or venues along with the benefit of sound isolation.
49 CFR 325.37 - Location and operation of sound level measurement system; highway operations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 5 2011-10-01 2011-10-01 false Location and operation of sound level measurement...; Highway Operations § 325.37 Location and operation of sound level measurement system; highway operations. (a) The microphone of a sound level measurement system that conforms to the rules in § 325.23 of this...
Statistical properties of Chinese phonemic networks
NASA Astrophysics Data System (ADS)
Yu, Shuiyuan; Liu, Haitao; Xu, Chunshan
2011-04-01
The study of properties of speech sound systems is of great significance in understanding the human cognitive mechanism and the working principles of speech sound systems. Some properties of speech sound systems, such as the listener-oriented feature and the talker-oriented feature, have been unveiled with the statistical study of phonemes in human languages and the research of the interrelations between human articulatory gestures and the corresponding acoustic parameters. With all the phonemes of speech sound systems treated as a coherent whole, our research, which focuses on the dynamic properties of speech sound systems in operation, investigates some statistical parameters of Chinese phoneme networks based on real text and dictionaries. The findings are as follows: phonemic networks have high connectivity degrees and short average distances; the degrees follow a normal distribution and the weighted degrees follow a power-law distribution; vowels enjoy higher priority than consonants in the actual operation of speech sound systems; the phonemic networks have high robustness against targeted attacks and random errors. In addition, for investigating the structural properties of a speech sound system, a statistical study of dictionaries is conducted, which shows that shorter words and syllables are more frequent and that the longer a word is, the shorter the syllables composing it tend to be. From these structural properties and dynamic properties one can derive the following conclusion: the static structure of a speech sound system tends to promote communication efficiency and save articulation effort, while the dynamic operation of this system gives preference to reliable transmission and easy recognition. In short, a speech sound system is an effective, efficient and reliable communication system optimized in many aspects.
Coupled auralization and virtual video for immersive multimedia displays
NASA Astrophysics Data System (ADS)
Henderson, Paul D.; Torres, Rendell R.; Shimizu, Yasushi; Radke, Richard; Lonsway, Brian
2003-04-01
The implementation of maximally-immersive interactive multimedia in exhibit spaces requires not only the presentation of realistic visual imagery but also the creation of a perceptually accurate aural experience. While conventional implementations treat audio and video problems as essentially independent, this research seeks to couple the visual sensory information with dynamic auralization in order to enhance perceptual accuracy. A working system has been developed that integrates accurate auralizations with virtual video techniques for both interactive presentation and multi-way communication. The current system utilizes a multi-channel loudspeaker array and real-time signal processing techniques for synthesizing the direct sound, early reflections, and reverberant field excited by a moving sound source whose path may be interactively defined in real-time or derived from coupled video tracking data. In this implementation, any virtual acoustic environment may be synthesized and presented in a perceptually-accurate fashion to many participants over a large listening and viewing area. Subject tests support the hypothesis that the cross-modal coupling of aural and visual displays significantly affects perceptual localization accuracy.
Lamichhane, Jay Ram; Devos, Yann; Beckie, Hugh J; Owen, Micheal D K; Tillie, Pascal; Messéan, Antoine; Kudsk, Per
2017-06-01
Conventionally bred herbicide-tolerant (CHT) and genetically modified herbicide-tolerant (GMHT) crops have changed weed management practices and made an important contribution to the global production of some commodity crops. However, a concern is that farm management practices associated with the cultivation of herbicide-tolerant (HT) crops further deplete farmland biodiversity and accelerate the evolution of herbicide-resistant (HR) weeds. Diversification in crop systems and weed management practices can enhance farmland biodiversity and reduce the risk of weeds evolving herbicide resistance. Therefore, HT crops are most effective and sustainable as a component of an integrated weed management (IWM) system. IWM advocates the use of multiple effective strategies or tactics to manage weed populations in a manner that is economically and environmentally sound. In practice, however, the potential benefits of IWM with HT crops are seldom realized because a wide range of technical and socio-economic factors hamper the transition to IWM. Here, we discuss the major factors that limit the integration of HT crops and their associated farm management practices in IWM systems. Based on the experience gained in countries where CHT or GMHT crops are widely grown and the increased familiarity with their management, we propose five actions to facilitate the integration of HT crops in IWM systems within the European Union.
NASA Astrophysics Data System (ADS)
Grossman, E. E.; Rosenbauer, R. J.; Takesue, R. K.; Gelfenbaum, G.; Reisenbichler, R.; Paulson, A.; Sexton, N. R.; Labiosa, B.; Beamer, E. M.; Hood, G.; Wyllie-Echeverria, S.
2006-12-01
Historic land use, ongoing resource extraction, and population expansion throughout Puget Sound have scientists and managers rapidly seeking effective restoration strategies to recover salmon (a cultural icon) as well as a host of other endangered species and threatened habitats. Of principal concern is the reduction of salmon (Oncorhynchus spp.) and the diminished carrying capacity of critical habitat in deltaic regions. Delta habitats, essential to salmon survival, have lost 70 to 80% of their area since ~1850 and are now adjusting to a new suite of environmental changes associated with land use practices, including wetland restoration, and regional climate change. The USGS Coastal Habitats in Puget Sound Project, in collaboration with partners from the Skagit River System Cooperative, University of Washington, and other federal, state, and local agencies, is integrating geologic, biologic, hydrologic, and socioeconomic information to quantify changes in the distribution and function of deltaic-estuarine nearshore habitats and better predict "possible futures". We are combining detailed geologic and geochemical analyses of sedimentary environments, plant biomarkers (n-alkanes, PAHs, fatty acids, and sterols), and compound-specific isotopes to estimate historic habitat coverage, eelgrass (Zostera marina) abundance, and modern characteristics of nutrient cycling. Hydrologic and sediment transport processes are being measured to characterize the physical processes shaping modern habitats, including sediment transport and freshwater mixing, which control the temporal and spatial pattern of substrate and water column conditions available as habitat. We are using geophysical, remote sensing, and modeling techniques to determine large-scale coastal morphologic and land-use change and characterize how alteration of physical, hydrologic, and biogeochemical processes influences the dynamics of freshwater mixing, and sediment and nutrient transport in the nearshore.
To assist restoration planning, we are integrating a Geographic Information System of land use, ecologic, and hydrodynamic attributes with a hydrodynamic process model to (1) quantitatively estimate land-use impacts on ecologic functions and (2) to provide decision-support tools to help develop and implement effective restoration strategies that will balance socioeconomic demands and ecologic function of the Puget Sound lowlands.
NASA Technical Reports Server (NTRS)
Brooks, T. F.
1977-01-01
The Kirchhoff integral formulation is evaluated for its effectiveness in quantitatively predicting the sound radiated from an oscillating airfoil whose chord length is comparable with the acoustic wavelength. A rigid airfoil section was oscillated at small amplitude in a medium at rest to produce the sound field. Simultaneous amplitude and phase measurements were made of surface pressure and surface velocity distributions and the acoustic free field. Measured surface pressure and motion are used in applying the theory, and airfoil thickness and contour are taken into account. The result was that the theory overpredicted the sound pressure level by 2 to 5 dB, depending on direction. Differences are also noted in the sound field phase behavior.
Tuning the cognitive environment: Sound masking with 'natural' sounds in open-plan offices
NASA Astrophysics Data System (ADS)
DeLoach, Alana
With the gain in popularity of open-plan office design and the engineering efforts to achieve acoustical comfort for building occupants, a majority of workers still report dissatisfaction with their workplace environment. Office acoustics influence organizational effectiveness, efficiency, and satisfaction through meeting appropriate requirements for speech privacy and ambient sound levels. Implementing a sound masking system is one tried-and-true method of achieving privacy goals. Although each sound masking system is tuned for its specific environment, the signal itself, random steady-state electronic noise, has remained the same for decades. This research explores how 'natural' sounds may be used as an alternative to the standard masking signal employed so ubiquitously in sound masking systems in the contemporary office environment. As an unobtrusive background sound possessing the appropriate spectral characteristics, this proposed use of 'natural' sounds for masking challenges the convention that masking sounds should be as meaningless as possible. Through the pilot study presented in this work, we hypothesize that 'natural' sounds as sound maskers will be as effective at masking distracting background noise as the conventional masking sound, will enhance cognitive functioning, and will increase participant (worker) satisfaction.
Temporal integration at consecutive processing stages in the auditory pathway of the grasshopper.
Wirtssohn, Sarah; Ronacher, Bernhard
2015-04-01
Temporal integration in the auditory system of locusts was quantified by presenting single clicks and click pairs while performing intracellular recordings. Auditory neurons were studied at three processing stages, which form a feed-forward network in the metathoracic ganglion. Receptor neurons and most first-order interneurons ("local neurons") encode the signal envelope, while second-order interneurons ("ascending neurons") tend to extract more complex, behaviorally relevant sound features. In different neuron types of the auditory pathway we found three response types: no significant temporal integration (some ascending neurons), leaky energy integration (receptor neurons and some local neurons), and facilitatory processes (some local and ascending neurons). The receptor neurons integrated input over very short time windows (<2 ms). Temporal integration on longer time scales was found at subsequent processing stages, indicative of within-neuron computations and network activity. These different strategies, realized at separate processing stages and in parallel neuronal pathways within one processing stage, could enable the grasshopper's auditory system to evaluate longer time windows and thus to implement temporal filters, while at the same time maintaining a high temporal resolution. Copyright © 2015 the American Physiological Society.
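The "leaky energy integration" response type described above can be sketched as a first-order leaky integrator driven by click energy: a click pair falling within the integration window summates, while widely spaced clicks are encoded almost independently. A minimal illustration; the 2 ms time constant, unit click energies, and step size are illustrative assumptions, not parameters from the study:

```python
import math

def leaky_integrator_peak(click_times_ms, tau_ms, dt=0.01, t_end=20.0):
    """Peak response of a leaky integrator (time constant tau_ms, in ms)
    driven by unit-energy clicks at the given times (ms)."""
    acc, peak, t, i = 0.0, 0.0, 0.0, 0
    decay = math.exp(-dt / tau_ms)
    clicks = sorted(click_times_ms)
    while t <= t_end:
        acc *= decay                          # exponential leak each step
        while i < len(clicks) and clicks[i] <= t:
            acc += 1.0                        # instantaneous click energy
            i += 1
        peak = max(peak, acc)
        t += dt
    return peak

single = leaky_integrator_peak([0.0], tau_ms=2.0)           # 1.0
pair_close = leaky_integrator_peak([0.0, 1.0], tau_ms=2.0)  # > 1: summation
pair_far = leaky_integrator_peak([0.0, 10.0], tau_ms=2.0)   # ~ 1: no summation
```

The neurons showing facilitation or no integration would need different models; this sketch covers only the leaky-energy case.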
A Space Based Internet Protocol System for Launch Vehicle Tracking and Control
NASA Technical Reports Server (NTRS)
Bull, Barton; Grant, Charles; Morgan, Dwayne; Streich, Ron; Bauer, Frank (Technical Monitor)
2001-01-01
Personnel from the Goddard Space Flight Center Wallops Flight Facility (GSFC/WFF) in Virginia are responsible for the overall management of the NASA Sounding Rocket and Scientific Balloon Programs. Payloads are generally in support of NASA's Space Science Enterprise's missions and return a variety of scientific data as well as providing a reasonably economical means of conducting engineering tests for instruments and devices used on satellites and other spacecraft. Sounding rockets used by NASA can carry payloads of various weights to altitudes from 50 km to more than 1,300 km. Scientific balloons can carry a payload weighing as much as 3,630 kg to an altitude of 42 km. Launch activities for both are conducted not only from established ranges, but also from remote locations worldwide requiring mobile tracking and command equipment to be transported and set up at considerable expense. The advent of low earth orbit (LEO) commercial communications satellites provides an opportunity to dramatically reduce tracking and control costs of these launch vehicles and Unpiloted Aerial Vehicles (UAVs) by reducing or eliminating this ground infrastructure. Additionally, since data transmission is by packetized Internet Protocol (IP), data can be received and commands initiated from practically any location. A low cost Commercial Off The Shelf (COTS) system is currently under development for sounding rockets that also has application to UAVs and scientific balloons. Due to the relatively low data rate (9600 baud) currently available, the system will first be used to provide GPS data for tracking and vehicle recovery. Range safety requirements for launch vehicles usually stipulate at least two independent tracking sources. Most sounding rockets flown by NASA now carry GPS receivers that output position data via the payload telemetry system to the ground station. The Flight Modem can be configured as a completely separate link thereby eliminating the requirement for tracking radar.
The system architecture that integrates antennas, GPS receiver, commercial satellite packet data modem, and a single board computer with custom software is described along with the technical challenges and the plan for their resolution. These include antenna development, high Doppler rates, reliability, environmental ruggedness, hand over between satellites, and data security. An aggressive test plan is included which, in addition to environmental testing, measures bit error rate, latency and antenna patterns. Actual launches on a sounding rocket and various aircraft flights have taken place. Flight tests are planned for the near future on aircraft, long duration balloons and sounding rockets. These results, as well as the current status of the project, are reported.
The perception of microsound and its musical implications.
Roads, Curtis
2003-11-01
Sound particles or microsounds last only a few milliseconds, near the threshold of auditory perception. We can easily analyze the physical properties of sound particles either individually or in masses. However, correlating these properties with human perception remains complicated. One cannot speak of a single time frame, or a "time constant" for the auditory system. The hearing mechanism involves many different agents, each of which operates on its own timescale. The signals being sent by diverse hearing agents are integrated by the brain into a coherent auditory picture. The pioneer of "sound quanta," Dennis Gabor (1900-1979), suggested that at least two mechanisms are at work in microevent detection: one that isolates events, and another that ascertains their pitch. Human hearing imposes a certain minimum duration in order to establish a firm sense of pitch, amplitude, and timbre. This paper traces disparate strands of literature on the topic and summarizes their meaning. Specifically, we examine the perception of intensity and pitch of microsounds, the phenomena of tone fusion and fission, temporal auditory acuity, and preattentive perception. The final section examines the musical implications of microsonic analysis, synthesis, and transformation.
Automated analysis of blood pressure measurements (Korotkov sound)
NASA Technical Reports Server (NTRS)
Golden, D. P.; Hoffler, G. W.; Wolthuis, R. A.
1972-01-01
Automatic system for noninvasive measurements of arterial blood pressure is described. System uses Korotkov sound processor logic ratios to identify Korotkov sounds. Schematic diagram of system is provided to show components and method of operation.
Second Sound in Systems of One-Dimensional Fermions
Matveev, K. A.; Andreev, A. V.
2017-12-27
We study sound in Galilean invariant systems of one-dimensional fermions. At low temperatures, we find a broad range of frequencies in which in addition to the waves of density there is a second sound corresponding to ballistic propagation of heat in the system. The damping of the second sound mode is weak, provided the frequency is large compared to a relaxation rate that is exponentially small at low temperatures. At lower frequencies the second sound mode is damped, and the propagation of heat is diffusive.
Auditory scene analysis in school-aged children with developmental language disorders
Sussman, E.; Steinschneider, M.; Lee, W.; Lawson, K.
2014-01-01
Natural sound environments are dynamic, with overlapping acoustic input originating from simultaneously active sources. A key function of the auditory system is to integrate sensory inputs that belong together and segregate those that come from different sources. We hypothesized that this skill is impaired in individuals with phonological processing difficulties. There is considerable disagreement about whether phonological impairments observed in children with developmental language disorders can be attributed to specific linguistic deficits or to more general acoustic processing deficits. However, most tests of general auditory abilities have been conducted with a single set of sounds. We assessed the ability of school-aged children (7–15 years) to parse complex auditory non-speech input, and determined whether the presence of phonological processing impairments was associated with stream perception performance. A key finding was that children with language impairments did not show the same developmental trajectory for stream perception as typically developing children. In addition, children with language impairments required larger frequency separations between sounds to hear distinct streams compared to age-matched peers. Furthermore, phonological processing ability was a significant predictor of stream perception measures, but only in the older age groups. No such association was found in the youngest children. These results indicate that children with language impairments have difficulty parsing speech streams, or identifying individual sound events when there are competing sound sources. We conclude that language group differences may in part reflect fundamental maturational disparities in the analysis of complex auditory scenes. PMID:24548430
40 CFR 205.54-2 - Sound data acquisition system.
Code of Federal Regulations, 2010 CFR
2010-07-01
... meets the “fast” dynamic requirement of a precision sound level meter indicating meter system for the... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Sound data acquisition system. 205.54... data acquisition system. (a) Systems employing tape recorders and graphic level recorders may be...
49 CFR 325.57 - Location and operation of sound level measurement systems; stationary test.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 5 2012-10-01 2012-10-01 false Location and operation of sound level measurement...; Stationary Test § 325.57 Location and operation of sound level measurement systems; stationary test. (a) The microphone of a sound level measurement system that conforms to the rules in § 325.23 shall be located at a...
49 CFR 325.57 - Location and operation of sound level measurement systems; stationary test.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 5 2013-10-01 2013-10-01 false Location and operation of sound level measurement...; Stationary Test § 325.57 Location and operation of sound level measurement systems; stationary test. (a) The microphone of a sound level measurement system that conforms to the rules in § 325.23 shall be located at a...
49 CFR 325.57 - Location and operation of sound level measurement systems; stationary test.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 5 2011-10-01 2011-10-01 false Location and operation of sound level measurement...; Stationary Test § 325.57 Location and operation of sound level measurement systems; stationary test. (a) The microphone of a sound level measurement system that conforms to the rules in § 325.23 shall be located at a...
2015-09-30
...in situ processing algorithms for sound and motion data. In a parallel project Dr. Andrews at the Alaska SeaLife Center teamed with Wildlife ...from Wildlife Computers to produce a highly integrated Sound and Motion Recording and Telemetry (SMRT) tag. The complete tag development is expected
Lung sound analysis for wheeze episode detection.
Jain, Abhishek; Vepa, Jithendra
2008-01-01
Listening to and interpreting lung sounds with a stethoscope has long been an important component of screening and diagnosing lung diseases. However, this practice has always been vulnerable to poor audibility, inter-observer variation (between different physicians), and poor reproducibility. Computerized analysis of lung sounds is therefore seen as a probable aid to objective diagnosis of lung diseases. In this paper we aim at automatic analysis of lung sounds for wheeze episode detection and quantification. The proposed algorithm integrates and analyses a set of parameters based on the ATS (American Thoracic Society) definition of wheezes. It is robust and computationally simple, and yielded a sensitivity of 84% and specificity of 86%.
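The reported accuracy figures are standard confusion-matrix quantities. A minimal sketch; the counts below are illustrative, chosen only to reproduce the reported 84% sensitivity and 86% specificity, and are not data from the paper:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts consistent with the reported figures
sens, spec = sensitivity_specificity(tp=84, fn=16, tn=86, fp=14)
```

Here a "positive" is a detected wheeze episode; sensitivity measures missed wheezes, specificity measures false alarms on normal breath sounds.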
Development of Improved Surface Integral Methods for Jet Aeroacoustic Predictions
NASA Technical Reports Server (NTRS)
Pilon, Anthony R.; Lyrintzis, Anastasios S.
1997-01-01
The accurate prediction of aerodynamically generated noise has become an important goal over the past decade. Aeroacoustics must now be an integral part of the aircraft design process. The direct calculation of aerodynamically generated noise with CFD-like algorithms is plausible. However, large computer time and memory requirements often make these predictions impractical. It is therefore necessary to separate the aeroacoustics problem into two parts, one in which aerodynamic sound sources are determined, and another in which the propagating sound is calculated. This idea is applied in acoustic analogy methods. However, in the acoustic analogy, the determination of far-field sound requires the solution of a volume integral. This volume integration again leads to impractical computer requirements. An alternative to the volume integrations can be found in the Kirchhoff method. In this method, Green's theorem for the linear wave equation is used to determine sound propagation based on quantities on a surface surrounding the source region. The change from volume to surface integrals represents a tremendous savings in the computer resources required for an accurate prediction. This work is concerned with the development of enhancements of the Kirchhoff method for use in a wide variety of aeroacoustics problems. This enhanced method, the modified Kirchhoff method, is shown to be a Green's function solution of Lighthill's equation. It is also shown rigorously to be identical to the methods of Ffowcs Williams and Hawkings. This allows for development of versatile computer codes which can easily alternate between the different Kirchhoff and Ffowcs Williams-Hawkings formulations, using the most appropriate method for the problem at hand. The modified Kirchhoff method is developed primarily for use in jet aeroacoustics predictions. Applications of the method are shown for two dimensional and three dimensional jet flows. 
Additionally, the enhancements are generalized so that they may be used in any aeroacoustics problem.
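The volume-to-surface reduction at the heart of the Kirchhoff method can be illustrated with a small numerical check of the Kirchhoff-Helmholtz integral: pressure and its normal derivative on a closed surface suffice to reconstruct the exterior field, with no volume integral. This sketch is not the paper's modified Kirchhoff implementation; the monopole test field, spherical surface, wavenumber, and grid sizes are illustrative choices:

```python
import cmath
import math

def kirchhoff_pressure(x_obs, k=1.0, a=1.0, n_theta=400, n_phi=40):
    """Evaluate p(x) = surface integral of [p dG/dn - G dp/dn] over a sphere
    of radius a, with surface data taken from a unit monopole exp(ikr)/r at
    the origin and G(r) = exp(ikr)/(4 pi r) the free-space Green's function.
    The normal n points outward into the fluid."""
    p_s = cmath.exp(1j * k * a) / a                            # p on the sphere
    dp_dn = cmath.exp(1j * k * a) * (1j * k * a - 1.0) / a**2  # dp/dr at r = a
    xo, yo, zo = x_obs
    total = 0.0 + 0.0j
    dth, dph = math.pi / n_theta, 2.0 * math.pi / n_phi
    for it in range(n_theta):
        th = (it + 0.5) * dth                 # midpoint rule in theta
        st, ct = math.sin(th), math.cos(th)
        for jp in range(n_phi):
            ph = jp * dph                     # periodic trapezoid in phi
            y = (a * st * math.cos(ph), a * st * math.sin(ph), a * ct)
            r = math.dist(x_obs, y)
            G = cmath.exp(1j * k * r) / (4.0 * math.pi * r)
            # dG/dn = grad_y G . n, using (y - x).n = (a^2 - x.y)/a on the sphere
            ydotx = y[0] * xo + y[1] * yo + y[2] * zo
            dG_dn = (cmath.exp(1j * k * r) * (1j * k * r - 1.0)
                     / (4.0 * math.pi * r**3) * (a * a - ydotx) / a)
            total += (p_s * dG_dn - G * dp_dn) * (a * a * st * dth * dph)
    return total

p_surface = kirchhoff_pressure((0.0, 0.0, 3.0))  # compare with exp(3j)/3
```

The surface integral reproduces the monopole field at the exterior observer, which is exactly the saving the abstract describes: once surface data are known, no volume integration over the source region is required.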
Computation of Sound Propagation by Boundary Element Method
NASA Technical Reports Server (NTRS)
Guo, Yueping
2005-01-01
This report documents the development of a Boundary Element Method (BEM) code for the computation of sound propagation in uniform mean flows. The basic formulation and implementation follow the standard BEM methodology; the convective wave equation and the boundary conditions on the surfaces of the bodies in the flow are formulated into an integral equation, and the method of collocation is used to discretize this equation into a matrix equation to be solved numerically. New features discussed here include the formulation of the additional terms due to the effects of the mean flow and the treatment of the numerical singularities in the implementation by the method of collocation. The effects of mean flows introduce terms in the integral equation that contain the gradients of the unknown, which is undesirable if the gradients are treated as additional unknowns, greatly increasing the size of the matrix equation, or if numerical differentiation is used to approximate the gradients, introducing numerical error into the computation. It is shown that these terms can be reformulated in terms of the unknown itself, making the integral equation very similar to the case without mean flows and simple for numerical implementation. To avoid asymptotic analysis in the treatment of numerical singularities in the method of collocation, as is conventionally done, we perform the surface integrations in the integral equation by using sub-triangles so that the field point never coincides with the evaluation points on the surfaces. This simplifies the formulation and greatly facilitates the implementation. To validate the method and the code, three canonical problems are studied: the sound scattering by a sphere, the sound reflection by a plate in uniform mean flows, and the sound propagation over a hump of irregular shape in uniform flows.
The first two have analytical solutions, and the third is solved by the method of Computational Aeroacoustics (CAA), against which the BEM solutions are compared. The comparisons show very good agreement and validate the accuracy of the BEM approach implemented here.
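The sub-triangle treatment of the weak singularity can be sketched as follows. This is a minimal illustration only, assuming a flat panel, a centroid-rule quadrature, and the static Green's function magnitude 1/(4πr) in place of the convective kernel; because every quadrature point is a sub-triangle centroid, a field point placed at a panel vertex never coincides with an evaluation point.

```python
import math

def subdivide(tri):
    """Split a triangle (three (x, y) vertices) into four congruent
    sub-triangles using the edge midpoints."""
    a, b, c = tri
    mid = lambda p, q: ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)
    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def area(tri):
    (x1, y1), (x2, y2), (x3, y3) = tri
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0

def integrate_green(tri, field_pt, levels=5):
    """Centroid-rule integral of 1/(4*pi*r) over a panel. The quadrature
    points are sub-triangle centroids, so the weakly singular kernel is
    never evaluated at r = 0 even when field_pt is a panel vertex."""
    tris = [tri]
    for _ in range(levels):
        tris = [s for t in tris for s in subdivide(t)]
    total = 0.0
    for t in tris:
        cx = sum(v[0] for v in t) / 3.0
        cy = sum(v[1] for v in t) / 3.0
        r = math.hypot(cx - field_pt[0], cy - field_pt[1])
        total += area(t) / (4.0 * math.pi * r)
    return total
```

Refining `levels` converges even when the field point sits at a vertex, since the integral is only weakly singular there.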
Intensity-invariant coding in the auditory system.
Barbour, Dennis L
2011-11-01
The auditory system faithfully represents sufficient details from sound sources such that downstream cognitive processes are capable of acting upon this information effectively even in the face of signal uncertainty, degradation or interference. This robust sound source representation leads to an invariance in perception vital for animals to interact effectively with their environment. Due to unique nonlinearities in the cochlea, sound representations early in the auditory system exhibit a large amount of variability as a function of stimulus intensity. In other words, changes in stimulus intensity, such as for sound sources at differing distances, create a unique challenge for the auditory system to encode sounds invariantly across the intensity dimension. This challenge and some strategies available to sensory systems to eliminate intensity as an encoding variable are discussed, with a special emphasis upon sound encoding. Copyright © 2011 Elsevier Ltd. All rights reserved.
Multisensory integration in complete unawareness: evidence from audiovisual congruency priming.
Faivre, Nathan; Mudrik, Liad; Schwartz, Naama; Koch, Christof
2014-11-01
Multisensory integration is thought to require conscious perception. Although previous studies have shown that an invisible stimulus could be integrated with an audible one, none have demonstrated integration of two subliminal stimuli of different modalities. Here, pairs of identical or different audiovisual target letters (the sound /b/ with the written letter "b" or "m," respectively) were preceded by pairs of masked identical or different audiovisual prime digits (the sound /6/ with the written digit "6" or "8," respectively). In three experiments, awareness of the audiovisual digit primes was manipulated, such that participants were either unaware of the visual digit, the auditory digit, or both. Priming of the semantic relations between the auditory and visual digits was found in all experiments. Moreover, a further experiment showed that unconscious multisensory integration was not obtained when participants did not undergo prior conscious training of the task. This suggests that following conscious learning, unconscious processing suffices for multisensory integration. © The Author(s) 2014.
Music and language: relations and disconnections.
Kraus, Nina; Slater, Jessica
2015-01-01
Music and language provide an important context in which to understand the human auditory system. While they perform distinct and complementary communicative functions, music and language are both rooted in the human desire to connect with others. Since sensory function is ultimately shaped by what is biologically important to the organism, the human urge to communicate has been a powerful driving force in both the evolution of auditory function and the ways in which it can be changed by experience within an individual lifetime. This chapter emphasizes the highly interactive nature of the auditory system as well as the depth of its integration with other sensory and cognitive systems. From the origins of music and language to the effects of auditory expertise on the neural encoding of sound, we consider key themes in auditory processing, learning, and plasticity. We emphasize the unique role of the auditory system as the temporal processing "expert" in the brain, and explore relationships between communication and cognition. We demonstrate how experience with music and language can have a significant impact on underlying neural function, and that auditory expertise strengthens some of the very same aspects of sound encoding that are deficient in impaired populations. © 2015 Elsevier B.V. All rights reserved.
Mechanics aspects of NDE by sound and ultrasound
NASA Technical Reports Server (NTRS)
Fu, L. S.
1982-01-01
Nondestructive evaluation (NDE) is considered a means to detect the energy release mechanism of defects and the interaction of microstructures within materials with sound waves and/or ultrasonic waves. Ultrasonic inspection involves the frequency range 20 kHz-1 GHz, with amplitudes depending on the sensitivity of the test instrumentation. Pulse-echo systems are most frequently used in NDE. Information is extracted from the signals through measurements of the signal velocity, the attenuation, and the acoustic emission when stress is applied, and through calculation of the acoustoelastic coefficients. Fracture properties, tensile and shear strengths, the interlaminar shear strength, the cohesive strength, yield and impact strengths, the hardness, and the residual stress can be assayed by ultrasonic methods. Finally, attention is given to analytical treatment of the derived data, with mention given to transition matrix, integral equation, and eigenstrain approaches.
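The velocity and attenuation measurements mentioned above reduce to simple round-trip arithmetic on back-wall echoes. A minimal sketch, assuming echo amplitudes are already corrected for diffraction and interface losses:

```python
import math

def pulse_echo_velocity(thickness_m, echo_delay_s):
    """Sound speed from the round-trip time of the first back-wall echo:
    the pulse travels through the thickness twice."""
    return 2.0 * thickness_m / echo_delay_s

def attenuation_db_per_m(amp1, amp2, thickness_m):
    """Attenuation from two successive back-wall echoes: each extra echo
    travels one additional round trip of 2 * thickness."""
    return 20.0 * math.log10(amp1 / amp2) / (2.0 * thickness_m)
```

For a 10 mm steel plate with a first back-wall echo delay of 3.39 µs, the first function gives a velocity of about 5.9 km/s, consistent with longitudinal waves in steel.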
Mayo, L.R.; Trabant, D.C.; March, Rod; Haeberli, Wilfried
1979-01-01
A 1-year data-collection program on Columbia Glacier, Alaska, has produced a data set consisting of near-surface ice kinematics, mass balance, and altitude change at 57 points, and 34 ice radar soundings. These data, presented in two tables, are part of the basic data required for glacier dynamic analysis, computer models, and predictions of the number and size of icebergs that Columbia Glacier will calve into the shipping lanes of eastern Prince William Sound. A metric, sea-level coordinate system was developed for use in surveying throughout the basin. Its use is explained and monument coordinates are listed. A series of seven integrated calculator programs was used in both the field and the office to reduce the surveying data. These programs are thoroughly documented and explained in the report. (Kosco-USGS)
The Electronic Flight Bag: A Multi-Function Tool for the Modern Cockpit
2002-08-01
56K Modem, Sound Card, Touchscreen, USB, PCMCIA, IR; 56K Modem, Sound Card, Touchscreen, USB, IrDA, PCMCIA, Wireless LAN, Touchscreen, Integrated...card, internal modem and augmented internal battery. It is designed to complement the PID for use in the classroom, at home, or on the road.
Ideophones in Japanese modulate the P2 and late positive complex responses
Lockwood, Gwilym; Tuomainen, Jyrki
2015-01-01
Sound-symbolism, or the direct link between sound and meaning, is typologically and behaviorally attested across languages. However, neuroimaging research has mostly focused on artificial non-words or individual segments, which do not represent sound-symbolism in natural language. We used EEG to compare Japanese ideophones, which are phonologically distinctive sound-symbolic lexical words, and arbitrary adverbs during a sentence reading task. Ideophones elicited a larger visual P2 response than arbitrary adverbs, as well as a sustained late positive complex. Our results and previous literature suggest that the larger P2 may indicate the integration of sound and sensory information by association in response to the distinctive phonology of ideophones. The late positive complex may reflect the facilitated lexical retrieval of arbitrary words in comparison to ideophones. This account provides new evidence that ideophones exhibit similar cross-modal correspondences to those which have been proposed for non-words and individual sounds. PMID:26191031
A Space Based Internet Protocol System for Sub-Orbital Tracking and Control
NASA Technical Reports Server (NTRS)
Bull, Barton; Grant, Charles; Morgan, Dwayne; Streich, Ron; Bauer, Frank (Technical Monitor)
2001-01-01
Personnel from the Goddard Space Flight Center Wallops Flight Facility (GSFC/WFF) in Virginia are responsible for the overall management of the NASA Sounding Rocket Program. Payloads are generally in support of NASA's Space Science Enterprise's missions and return a variety of scientific data as well as providing a reasonably economical means of conducting engineering tests for instruments and devices used on satellites and other spacecraft. The fifteen types of sounding rockets used by NASA can carry payloads of various weights to altitudes from 50 km to more than 1,300 km. Launch activities are conducted not only from established missile ranges, but also from remote locations worldwide, requiring mobile tracking and command equipment to be transported and set up at considerable expense. The advent of low earth orbit (LEO) commercial communications satellites provides an opportunity to dramatically reduce tracking and control costs of launch vehicles and Unpiloted Aerial Vehicles (UAVs) by reducing or eliminating this ground infrastructure. Additionally, since data transmission is by packetized Internet Protocol (IP), data can be received and commands initiated from practically any location. A low-cost Commercial Off The Shelf (COTS) system is currently under development for sounding rockets, which also has application to UAVs and scientific balloons. Due to the relatively low data rate (9600 baud) currently available, the system will first be used to provide GPS data for tracking and vehicle recovery. Range safety requirements for launch vehicles usually stipulate at least two independent tracking sources. Most sounding rockets flown by NASA now carry GPS receivers that output position data via the payload telemetry system to the ground station. The Flight Modem can be configured as a completely separate link, thereby eliminating the requirement for tracking radar.
The system architecture, which integrates antennas, a GPS receiver, a commercial satellite packet data modem, and a single-board computer with custom software, is described along with the technical challenges and the plan for their resolution. These include antenna development, high Doppler rates, reliability, environmental ruggedness, handover between satellites, and data security. An aggressive test plan is included which, in addition to environmental testing, measures bit error rate, latency, and antenna patterns. Actual flight tests are planned for the near future on aircraft, long-duration balloons, and sounding rockets, and these results as well as the current status of the project are reported.
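Packetized IP transmission of GPS position data can be sketched as framing an NMEA 0183 sentence and sending it as a single UDP datagram. The checksum and `$...*hh` framing follow the NMEA standard; the transport choice and function names are illustrative assumptions, since the Flight Modem's actual protocol is not described here:

```python
import socket

def nmea_checksum(body):
    """NMEA 0183 checksum: XOR of all characters between '$' and '*',
    rendered as two uppercase hex digits."""
    c = 0
    for ch in body:
        c ^= ord(ch)
    return "%02X" % c

def frame_sentence(body):
    """Wrap a sentence body (e.g. 'GPGGA,...') in standard '$...*hh' framing."""
    return "$%s*%s\r\n" % (body, nmea_checksum(body))

def send_position(sock, addr, body):
    """Send one framed position sentence as a single UDP/IP datagram."""
    sock.sendto(frame_sentence(body).encode("ascii"), addr)
```

Because each fix is a self-contained datagram, position reports can be received at any IP-reachable ground station, which is the property the abstract highlights.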
D'Souza, Dean; D'Souza, Hana; Johnson, Mark H; Karmiloff-Smith, Annette
2016-08-01
Typically-developing (TD) infants can construct unified cross-modal percepts, such as a speaking face, by integrating auditory-visual (AV) information. This skill is a key building block upon which higher-level skills, such as word learning, are built. Because word learning is seriously delayed in most children with neurodevelopmental disorders, we assessed the hypothesis that this delay partly results from a deficit in integrating AV speech cues. AV speech integration has rarely been investigated in neurodevelopmental disorders, and never previously in infants. We probed for the McGurk effect, which occurs when the auditory component of one sound (/ba/) is paired with the visual component of another sound (/ga/), leading to the perception of an illusory third sound (/da/ or /tha/). We measured AV integration in 95 infants/toddlers with Down, fragile X, or Williams syndrome, whom we matched on Chronological and Mental Age to 25 TD infants. We also assessed a more basic AV perceptual ability: sensitivity to matching vs. mismatching AV speech stimuli. Infants with Williams syndrome failed to demonstrate a McGurk effect, indicating poor AV speech integration. Moreover, while the TD children discriminated between matching and mismatching AV stimuli, none of the other groups did, hinting at a basic deficit or delay in AV speech processing, which is likely to constrain subsequent language development. Copyright © 2016 Elsevier Inc. All rights reserved.
2014-09-30
repeating pulse-like signals were investigated. Software prototypes were developed and integrated into distinct streams of research; projects...to study complex sound archives spanning large spatial and temporal scales. A new post-processing method for detection and classification was also...false positive rates. HK-ANN was successfully tested for a large minke whale dataset, but could easily be used on other signal types. Various
Sound radiation from a flanged inclined duct.
McAlpine, Alan; Daymond-King, Alex P; Kempton, Andrew J
2012-12-01
A simple method to calculate sound radiation from a flanged inclined duct is presented. An inclined annular duct is terminated by a rigid vertical plane. The duct termination is representative of a scarfed exit. The concept of a scarfed duct has been examined in turbofan aero-engines as a means of potentially shielding a portion of the radiated sound from direct transmission to the ground. The sound field inside the annular duct is expressed in terms of spinning modes. Exterior to the duct, the radiated sound field owing to each mode can be expressed in terms of its directivity pattern, which is found by evaluating an appropriate form of Rayleigh's integral. The asymmetry is shown to affect the amplitude of the principal lobe of the directivity pattern, and to alter the proportion of the sound power radiated up or down. The methodology detailed in this article provides a simple engineering approach to investigate the sound radiation for a three-dimensional problem.
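Evaluating a Rayleigh integral for far-field directivity can be illustrated on the simpler textbook case of a baffled circular piston (not the paper's inclined annular duct): sum the far-field phase factor over the aperture and normalize by the on-axis value. A numerical sketch:

```python
import cmath, math

def piston_directivity(ka, theta, n=60):
    """Normalized far-field pressure of a baffled circular piston of
    radius a at polar angle theta, from the far-field Rayleigh integral:
    sum exp(-i*k*x*sin(theta)) over a Cartesian grid covering the
    aperture, then divide by the on-axis value (the point count)."""
    a = 1.0
    k = ka / a
    h = 2.0 * a / n
    s = 0.0 + 0.0j
    count = 0
    for i in range(n):
        for j in range(n):
            x = -a + (i + 0.5) * h
            y = -a + (j + 0.5) * h
            if x * x + y * y <= a * a:  # keep points inside the aperture
                s += cmath.exp(-1j * k * x * math.sin(theta))
                count += 1
    return abs(s) / count
```

For the axisymmetric piston this reproduces the classical 2J1(v)/v pattern with v = ka·sin(theta); an inclined annular exit would replace the aperture shape and phase term accordingly.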
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edgeworth R. Westwater; Kenneth S. Gage; Yong Han
1996-09-06
From January 6 to February 28, 1993, the second phase of the Prototype Radiation Observation Experiment (PROBE) was conducted in Kavieng, Papua New Guinea. Data taken during PROBE included frequent radiosondes, 915 MHz Wind Profiler/Radio Acoustic Sounding System (RASS) observations of winds and temperatures, and lidar measurements of cloud-base heights. In addition, a dual-channel Microwave Water Substance Radiometer (MWSR) at 23.87 and 31.65 GHz and a Fourier Transform Infrared Radiometer (FTIR) were operated. The FTIR operated between 500 and 2000 cm{sup -1} and measured some of the first high spectral resolution (1 cm{sup -1}) radiation data taken in the tropics. The microwave radiometer provided continuous measurements with 30-second resolution of precipitable water vapor (PWV) and integrated cloud liquid (ICL), the RASS measured virtual temperature profiles every 30 minutes, and the cloud lidar provided episodic measurements of clouds every minute. The RASS, MWSR, and FTIR data taken during PROBE were compared with radiosonde data. Broadband longwave and shortwave irradiance data and lidar data were used to identify the presence of cirrus clouds and clear conditions. Comparisons were made between measured and calculated radiance during clear conditions, using radiosonde data as input to a Line-By-Line Radiative Transfer Model. Comparisons of RASS-measured virtual temperature with radiosonde data revealed a significant cold bias below 500 m.
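Dual-channel retrievals of precipitable water vapor like the MWSR's are commonly implemented as a linear statistical regression on the two brightness temperatures. A minimal sketch; the default coefficients are placeholders for illustration, not the PROBE values, which must be derived by regression against site radiosondes:

```python
def retrieve_pwv_cm(tb23_k, tb31_k, coeffs=(0.0, 0.1, -0.05)):
    """Linear statistical retrieval of precipitable water vapor (cm)
    from brightness temperatures (K) at 23.87 GHz (water-vapor line)
    and 31.65 GHz (liquid-sensitive window channel). The second channel
    enters with opposite sign so cloud liquid is largely removed."""
    a0, a1, a2 = coeffs
    return a0 + a1 * tb23_k + a2 * tb31_k
```

The same two channels, with a different coefficient set, yield integrated cloud liquid; that is why a dual-channel radiometer can report PWV and ICL simultaneously at 30-second resolution.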
Poppe, Lawrence J.; McMullen, Katherine Y.; Danforth, William W.; Blankenship, Mark R.; Clos, Andrew R.; Glomb, Kimberly A.; Lewit, Peter G.; Nadeau, Megan A.; Wood, Douglas A.; Parker, Castleton E.
2014-01-01
Detailed bathymetric maps of the sea floor in Rhode Island and Block Island Sounds are of great interest to the New York, Rhode Island, and Massachusetts research and management communities because of this area's ecological, recreational, and commercial importance. Geologically interpreted digital terrain models from individual surveys provide important benthic environmental information, yet many applications of this information require a geographically broader perspective. For example, individual surveys are of limited use for the planning and construction of cross-sound infrastructure, such as cables and pipelines, or for the testing of regional circulation models. To address this need, we integrated 14 contiguous multibeam bathymetric datasets that were produced by the National Oceanic and Atmospheric Administration during charting operations into one digital terrain model that covers much of Block Island Sound and extends eastward across Rhode Island Sound. The new dataset, which covers over 1244 square kilometers, is adjusted to mean lower low water, gridded to 4-meter resolution, and provided in Universal Transverse Mercator Zone 19, North American Datum of 1983 and geographic World Geodetic Survey of 1984 projections. This resolution is adequate for sea-floor feature and process interpretation but is small enough to be queried and manipulated with standard Geographic Information System programs and to allow for future growth. Natural features visible in the data include boulder lag deposits of winnowed Pleistocene strata, sand-wave fields, and scour depressions that reflect the strength of oscillating tidal currents and scour by storm-induced waves. Bedform asymmetry allows interpretations of net sediment transport. Anthropogenic features visible in the data include shipwrecks and dredged channels. 
Together the merged data reveal a larger, more continuous perspective of bathymetric topography than previously available, providing a fundamental framework for research and resource management activities offshore of Rhode Island.
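Merging contiguous survey grids into one terrain model, as described above, amounts to accumulating depth values per grid cell and averaging where surveys overlap. A toy sketch with sparse dictionaries standing in for the 4-meter grids; the actual USGS processing (datum adjustment, regridding, edge matching) is far more involved:

```python
def merge_tiles(tiles):
    """Merge bathymetric tiles, each a dict mapping a (row, col) grid
    cell to a depth value, averaging cells covered by several surveys."""
    acc, cnt = {}, {}
    for tile in tiles:
        for cell, depth in tile.items():
            acc[cell] = acc.get(cell, 0.0) + depth
            cnt[cell] = cnt.get(cell, 0) + 1
    return {cell: acc[cell] / cnt[cell] for cell in acc}
```

Averaging in the overlap zones suppresses survey-to-survey offsets, which is what lets the 14 charting datasets read as one continuous surface.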
Goal-Directed Behavior and Instrumental Devaluation: A Neural System-Level Computational Model
Mannella, Francesco; Mirolli, Marco; Baldassarre, Gianluca
2016-01-01
Devaluation is the key experimental paradigm used to demonstrate the presence of instrumental behaviors guided by goals in mammals. We propose a neural system-level computational model to address the question of which brain mechanisms allow the current value of rewards to control instrumental actions. The model pivots on and shows the computational soundness of the hypothesis for which the internal representation of instrumental manipulanda (e.g., levers) activate the representation of rewards (or “action-outcomes”, e.g., foods) while attributing to them a value which depends on the current internal state of the animal (e.g., satiation for some but not all foods). The model also proposes an initial hypothesis of the integrated system of key brain components supporting this process and allowing the recalled outcomes to bias action selection: (a) the sub-system formed by the basolateral amygdala and insular cortex acquiring the manipulanda-outcomes associations and attributing the current value to the outcomes; (b) three basal ganglia-cortical loops selecting respectively goals, associative sensory representations, and actions; (c) the cortico-cortical and striato-nigro-striatal neural pathways supporting the selection, and selection learning, of actions based on habits and goals. The model reproduces and explains the results of several devaluation experiments carried out with control rats and rats with pre- and post-training lesions of the basolateral amygdala, the nucleus accumbens core, the prelimbic cortex, and the dorso-medial striatum. The results support the soundness of the hypotheses of the model and show its capacity to integrate, at the system-level, the operations of the key brain structures underlying devaluation. Based on its hypotheses and predictions, the model also represents an operational framework to support the design and analysis of new experiments on the motivational aspects of goal-directed behavior. PMID:27803652
Performance of active feedforward control systems in non-ideal, synthesized diffuse sound fields.
Misol, Malte; Bloch, Christian; Monner, Hans Peter; Sinapius, Michael
2014-04-01
The acoustic performance of passive or active panel structures is usually tested in sound transmission loss facilities. A reverberant sending room, equipped with one or a number of independent sound sources, is used to generate a diffuse sound field excitation which acts as a disturbance source on the structure under investigation. The spatial correlation and coherence of such a synthesized non-ideal diffuse-sound-field excitation, however, might deviate significantly from the ideal case. This has consequences for the operation of an active feedforward control system which heavily relies on the acquisition of coherent disturbance source information. This work, therefore, evaluates the spatial correlation and coherence of ideal and non-ideal diffuse sound fields and considers the implications on the performance of a feedforward control system. The system under consideration is an aircraft-typical double panel system, equipped with an active sidewall panel (lining), which is realized in a transmission loss facility. Experimental results for different numbers of sound sources in the reverberation room are compared to simulation results of a comparable generic double panel system excited by an ideal diffuse sound field. It is shown that the number of statistically independent noise sources acting on the primary structure of the double panel system depends not only on the type of diffuse sound field but also on the sample lengths of the processed signals. The experimental results show that the number of reference sensors required for a defined control performance exhibits an inverse relationship to control filter length.
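The spatial correlation of the ideal diffuse field against which the synthesized fields are judged has a simple closed form: the pressure correlation between two points a distance r apart at wavenumber k is sin(kr)/(kr). A one-line sketch:

```python
import math

def diffuse_field_correlation(k, r):
    """Spatial correlation of pressure in an ideal diffuse sound field
    between two points separated by distance r, at wavenumber k:
    sin(kr)/(kr), with the limiting value 1 at zero separation."""
    kr = k * r
    if kr == 0.0:
        return 1.0
    return math.sin(kr) / kr
```

The first zero at kr = π means microphones half a wavelength apart sample uncorrelated pressures, which is one reason the number of coherent reference sensors a feedforward controller needs grows with frequency.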
[Synchronous playing and acquiring of heart sounds and electrocardiogram based on LabVIEW].
Dan, Chunmei; He, Wei; Zhou, Jing; Que, Xiaosheng
2008-12-01
This paper describes a comprehensive system that acquires heart sounds and the electrocardiogram (ECG) in parallel and synchronizes their display and playback, so that auscultation and phonocardiogram review can be coordinated. The hardware system, with a C8051F340 microcontroller at its core, acquires the heart sound and ECG signals synchronously and sends them to their respective indicators. Heart sounds are displayed and played simultaneously by controlling the moments at which data are written to the indicator and the sound output device. In clinical testing, heart sounds were successfully located relative to the ECG and played in real time.
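Locating heart sounds against the ECG presupposes finding the R peaks that anchor each cardiac cycle. A naive single-channel sketch; a real system would use a robust detector such as Pan-Tompkins, and the threshold and refractory period here are illustrative:

```python
def find_r_peaks(ecg, fs, threshold):
    """Naive R-peak detector: samples above threshold that are local
    maxima, separated by at least a 200 ms refractory period.
    ecg: list of samples, fs: sampling rate in Hz."""
    refractory = int(0.2 * fs)
    peaks = []
    last = -refractory
    for i in range(1, len(ecg) - 1):
        if (ecg[i] > threshold and ecg[i] >= ecg[i - 1]
                and ecg[i] > ecg[i + 1] and i - last >= refractory):
            peaks.append(i)
            last = i
    return peaks
```

Once R peaks are indexed, the first heart sound (S1) can be searched for in a short window after each peak, aligning the phonocardiogram with the ECG as the abstract describes.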
NASA Technical Reports Server (NTRS)
Lorenz, E.
1999-01-01
This report comprises the Electrical, Electronic, and Electromechanical (EEE) As-Designed Parts List to be used in the Integrated Advanced Microwave Sounding Unit-A (AMSU-A) instrument. The purpose of the EEE As-Designed Parts List is to provide a listing of EEE parts identified for use on the Integrated AMSU-A. All EEE parts used on the AMSU-A must meet the parts control requirements as defined in the Parts Control Plan (PCP). All part applications are reviewed by the Parts Control Board (PCB) and granted approval if PCP requirements are met. The "As-Designed Parts List" indicates PCB approval status, and thus also serves as the Program Approved Parts List.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olama, Mohammed M; McNair, Wade; Sukumar, Sreenivas R
2014-01-01
In the last three decades, there has been an exponential growth in the area of information technology providing the information processing needs of data-driven businesses in government, science, and private industry in the form of capturing, staging, integrating, conveying, analyzing, and transferring data that will help knowledge workers and decision makers make sound business decisions. Data integration across enterprise warehouses is one of the most challenging steps in the big data analytics strategy. Several levels of data integration have been identified across enterprise warehouses: data accessibility, common data platform, and consolidated data model. Each level of integration has its own set of complexities that requires a certain amount of time, budget, and resources to implement. Such levels of integration are designed to address the technical challenges inherent in consolidating the disparate data sources. In this paper, we present a methodology based on industry best practices to measure the readiness of an organization and its data sets against the different levels of data integration. We introduce a new Integration Level Model (ILM) tool, which is used for quantifying an organization and data system's readiness to share data at a certain level of data integration. It is based largely on the established and accepted framework provided in the Data Management Association (DAMA-DMBOK). It comprises several key data management functions and supporting activities, together with several environmental elements that describe and apply to each function. The proposed model scores the maturity of a system's data governance processes and provides a pragmatic methodology for evaluating integration risks. The higher the computed scores, the better managed the source data system and the greater the likelihood that the data system can be brought in at a higher level of integration.
NASA Astrophysics Data System (ADS)
Olama, Mohammed M.; McNair, Allen W.; Sukumar, Sreenivas R.; Nutaro, James J.
2014-05-01
In the last three decades, there has been an exponential growth in the area of information technology providing the information processing needs of data-driven businesses in government, science, and private industry in the form of capturing, staging, integrating, conveying, analyzing, and transferring data that will help knowledge workers and decision makers make sound business decisions. Data integration across enterprise warehouses is one of the most challenging steps in the big data analytics strategy. Several levels of data integration have been identified across enterprise warehouses: data accessibility, common data platform, and consolidated data model. Each level of integration has its own set of complexities that requires a certain amount of time, budget, and resources to implement. Such levels of integration are designed to address the technical challenges inherent in consolidating the disparate data sources. In this paper, we present a methodology based on industry best practices to measure the readiness of an organization and its data sets against the different levels of data integration. We introduce a new Integration Level Model (ILM) tool, which is used for quantifying an organization and data system's readiness to share data at a certain level of data integration. It is based largely on the established and accepted framework provided in the Data Management Association (DAMA-DMBOK). It comprises several key data management functions and supporting activities, together with several environmental elements that describe and apply to each function. The proposed model scores the maturity of a system's data governance processes and provides a pragmatic methodology for evaluating integration risks. The higher the computed scores, the better managed the source data system and the greater the likelihood that the data system can be brought in at a higher level of integration.
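A readiness score of the kind the ILM produces can be sketched as an average of per-function maturity ratings mapped onto the three integration levels. The function names, the 0-5 scale, and the thresholds below are illustrative assumptions, not the published model:

```python
def ilm_readiness(function_scores):
    """function_scores: dict mapping a data-management function name
    (e.g. 'governance', 'metadata') to a maturity rating on a 0-5
    scale. Returns the mean score and the highest integration level it
    suggests; the level thresholds here are hypothetical."""
    mean = sum(function_scores.values()) / len(function_scores)
    if mean >= 4.0:
        level = "consolidated data model"
    elif mean >= 2.5:
        level = "common data platform"
    else:
        level = "data accessibility"
    return mean, level
```

Mapping a single aggregate score to a target level mirrors the paper's claim that higher computed scores indicate a data system can be brought in at a higher level of integration.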
Temporal signatures of processing voiceness and emotion in sound
Gunter, Thomas C.
2017-01-01
This study explored the temporal course of vocal and emotional sound processing. Participants detected rare repetitions in a stimulus stream comprising neutral and surprised non-verbal exclamations and spectrally rotated control sounds. Spectral rotation preserved some acoustic and emotional properties of the vocal originals. Event-related potentials elicited to unrepeated sounds revealed effects of voiceness and emotion. Relative to non-vocal sounds, vocal sounds elicited a larger centro-parietally distributed N1. This effect was followed by greater positivity to vocal relative to non-vocal sounds beginning with the P2 and extending throughout the recording epoch (N4, late positive potential) with larger amplitudes in female than in male listeners. Emotion effects overlapped with the voiceness effects but were smaller and differed topographically. Voiceness and emotion interacted only for the late positive potential, which was greater for vocal-emotional as compared with all other sounds. Taken together, these results point to a multi-stage process in which voiceness and emotionality are represented independently before being integrated in a manner that biases responses to stimuli with socio-emotional relevance. PMID:28338796
Temporal signatures of processing voiceness and emotion in sound.
Schirmer, Annett; Gunter, Thomas C
2017-06-01
This study explored the temporal course of vocal and emotional sound processing. Participants detected rare repetitions in a stimulus stream comprising neutral and surprised non-verbal exclamations and spectrally rotated control sounds. Spectral rotation preserved some acoustic and emotional properties of the vocal originals. Event-related potentials elicited to unrepeated sounds revealed effects of voiceness and emotion. Relative to non-vocal sounds, vocal sounds elicited a larger centro-parietally distributed N1. This effect was followed by greater positivity to vocal relative to non-vocal sounds beginning with the P2 and extending throughout the recording epoch (N4, late positive potential) with larger amplitudes in female than in male listeners. Emotion effects overlapped with the voiceness effects but were smaller and differed topographically. Voiceness and emotion interacted only for the late positive potential, which was greater for vocal-emotional as compared with all other sounds. Taken together, these results point to a multi-stage process in which voiceness and emotionality are represented independently before being integrated in a manner that biases responses to stimuli with socio-emotional relevance. © The Author (2017). Published by Oxford University Press.
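The event-related potentials reported in these records are obtained by averaging EEG epochs time-locked to stimulus onsets. A minimal single-channel sketch with baseline correction by the pre-stimulus mean; real pipelines add filtering, artifact rejection, and multi-channel topographies:

```python
def erp_average(eeg, event_indices, pre, post):
    """Average EEG epochs time-locked to events. Each epoch spans
    [event - pre, event + post) samples and is baseline-corrected by
    subtracting its own pre-stimulus mean before averaging."""
    acc = [0.0] * (pre + post)
    n = 0
    for ev in event_indices:
        if ev - pre < 0 or ev + post > len(eeg):
            continue  # skip events too close to the recording edges
        epoch = eeg[ev - pre:ev + post]
        baseline = sum(epoch[:pre]) / pre
        corrected = [s - baseline for s in epoch]
        acc = [a + s for a, s in zip(acc, corrected)]
        n += 1
    return [a / n for a in acc]
```

Averaging over many epochs cancels activity that is not time-locked to the stimulus, which is how components such as the N1, P2, and late positive potential emerge from the raw EEG.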
Radiated BPF sound measurement of centrifugal compressor
NASA Astrophysics Data System (ADS)
Ohuchida, S.; Tanaka, K.
2013-12-01
A technique to measure the radiated BPF sound from an automotive turbocharger compressor impeller is proposed in this paper. Where there is high-level background noise in the measurement environment, it is difficult to discriminate the target component from the background. Because the BPF sound in this study was measured in a room with such conditions, no discrete BPF peak was initially found in the sound spectrum. Taking the directionality of the source into consideration, a microphone covered with a parabolic cone was selected, and with this technique the discrete BPF peak was clearly observed. Since the level of the measured sound was amplified by the cone's area-integration effect, a correction was needed to obtain the true level. To this end, sound measurements with and without the parabolic cone were conducted for a fixed source, and their level differences were used as correction factors. Consideration is given to the sound propagation mechanism, using the measured BPF as well as the result of a simple model experiment. The present method is generally applicable to sound measurements conducted with a high level of background noise.
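The correction procedure amounts to subtracting the cone's measured gain, in decibels, from every BPF level. A sketch of the with/without-cone calibration idea (frequency dependence of the correction is omitted for brevity):

```python
def cone_correction_db(level_with_cone_db, level_without_cone_db):
    """Gain of the parabolic cone, determined from calibration
    measurements of a fixed reference source with and without it."""
    return level_with_cone_db - level_without_cone_db

def corrected_bpf_level_db(measured_db, correction_db):
    """Remove the cone's area-integration gain from a measured BPF level."""
    return measured_db - correction_db
```

In practice the correction would be tabulated per frequency band, since a parabolic reflector's gain rises with frequency.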
Stekelenburg, Jeroen J; Keetels, Mirjam; Vroomen, Jean
2018-05-01
Numerous studies have demonstrated that the vision of lip movements can alter the perception of auditory speech syllables (McGurk effect). While there is ample evidence for integration of text and auditory speech, there are only a few studies on the orthographic equivalent of the McGurk effect. Here, we examined whether written text, like visual speech, can induce an illusory change in the perception of speech sounds on both the behavioural and neural levels. In a sound categorization task, we found that both text and visual speech changed the identity of speech sounds from an /aba/-/ada/ continuum, but the size of this audiovisual effect was considerably smaller for text than visual speech. To examine at which level in the information processing hierarchy these multisensory interactions occur, we recorded electroencephalography in an audiovisual mismatch negativity (MMN, a component of the event-related potential reflecting preattentive auditory change detection) paradigm in which deviant text or visual speech was used to induce an illusory change in a sequence of ambiguous sounds halfway between /aba/ and /ada/. We found that only deviant visual speech induced an MMN, but not deviant text, which induced a late P3-like positive potential. These results demonstrate that text has much weaker effects on sound processing than visual speech does, possibly because text has different biological roots than visual speech. © 2018 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Controlling sound radiation through an opening with secondary loudspeakers along its boundaries.
Wang, Shuping; Tao, Jiancheng; Qiu, Xiaojun
2017-10-17
We propose a virtual sound barrier system that blocks sound transmission through openings without affecting access, light, or air circulation. The proposed system applies an active noise control technique to cancel sound transmission with a double-layered loudspeaker array at the edge of the opening. Unlike traditional transparent glass windows, recently invented double-glazed ventilation windows, planar active sound barriers, or other metamaterial designs for reducing sound transmission, the secondary loudspeakers are placed only along the boundaries of the opening, which makes a practically invisible installation possible. Simulation and experimental results demonstrate its feasibility for broadband sound control, especially for low-frequency sound, which is usually hard to attenuate with existing methods.
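The core mechanism of such secondary-source control can be sketched with free-field monopoles: drive a boundary loudspeaker so the total pressure is nulled at a control point. This is a minimal illustration of the cancellation principle, not the authors' multi-loudspeaker algorithm; all geometry and the frequency are assumptions.

```python
import cmath, math

# Minimal active-control sketch: one secondary monopole driven to cancel
# a primary monopole's pressure at a single control point (hypothetical
# geometry; the paper uses a double-layered array along the opening edge).
k = 2 * math.pi * 100 / 343.0          # wavenumber at 100 Hz, c = 343 m/s

def green(src, rcv):
    """Free-field Green's function exp(-jkr)/(4*pi*r)."""
    r = math.dist(src, rcv)
    return cmath.exp(-1j * k * r) / (4 * math.pi * r)

primary   = (0.0, 0.0, -1.0)    # noise source behind the opening
secondary = (0.3, 0.0,  0.0)    # loudspeaker on the opening boundary
control   = (0.0, 0.0,  2.0)    # listening point outside

# Drive strength that nulls the total pressure at the control point:
q = -green(primary, control) / green(secondary, control)
residual = green(primary, control) + q * green(secondary, control)  # ~ 0
```

Away from the control point the field is not nulled, which is why practical systems use many secondary sources and a broader cost function.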
Impairments of Multisensory Integration and Cross-Sensory Learning as Pathways to Dyslexia
Hahn, Noemi; Foxe, John J.; Molholm, Sophie
2014-01-01
Two sensory systems are intrinsic to learning to read. Written words enter the brain through the visual system and associated sounds through the auditory system. The task before the beginning reader is quite basic. She must learn correspondences between orthographic tokens and phonemic utterances, and she must do this to the point that there is seamless automatic ‘connection’ between these sensorially distinct units of language. It is self-evident then that learning to read requires formation of cross-sensory associations to the point that deeply encoded multisensory representations are attained. While the majority of individuals manage this task to a high degree of expertise, some struggle to attain even rudimentary capabilities. Why do dyslexic individuals, who learn well in myriad other domains, fail at this particular task? Here, we examine the literature as it pertains to multisensory processing in dyslexia. We find substantial support for multisensory deficits in dyslexia, and make the case that to fully understand its neurological basis, it will be necessary to thoroughly probe the integrity of auditory-visual integration mechanisms. PMID:25265514
Development of Virtual Auditory Interfaces
2001-03-01
…reference to compare the sound in the VE with the real-world experience. 4. Lessons from the Entertainment Industry: The entertainment industry has… created a system called "Fantasound" which wrapped the musical compositions and sound… The first system uses a portable Sony TCD-D8 DAT audio… data set, including sound recordings and sound measurements… systems are currently being evaluated… even though we have the technology to create astounding…
Calibration of Clinical Audio Recording and Analysis Systems for Sound Intensity Measurement.
Maryn, Youri; Zarowski, Andrzej
2015-11-01
Sound intensity is an important acoustic feature of voice/speech signals. Yet recordings are performed with different microphone, amplifier, and computer configurations, and it is therefore crucial to calibrate sound intensity measures of clinical audio recording and analysis systems on the basis of output of a sound-level meter. This study was designed to evaluate feasibility, validity, and accuracy of calibration methods, including audiometric speech noise signals and human voice signals under typical speech conditions. Calibration consisted of 3 comparisons between data from 29 measurement microphone-and-computer systems and data from the sound-level meter: signal-specific comparison with audiometric speech noise at 5 levels, signal-specific comparison with natural voice at 3 levels, and cross-signal comparison with natural voice at 3 levels. Intensity measures from recording systems were then linearly converted into calibrated data on the basis of these comparisons, and validity and accuracy of calibrated sound intensity were investigated. Very strong correlations and quasisimilarity were found between calibrated data and sound-level meter data across calibration methods and recording systems. Calibration of clinical sound intensity measures according to this method is feasible, valid, accurate, and representative for a heterogeneous set of microphones and data acquisition systems in real-life circumstances with distinct noise contexts.
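The linear conversion described in the abstract can be sketched as an ordinary least-squares fit mapping the recording system's raw readout to the sound-level meter's dB SPL. All numbers below are invented for illustration, not the study's data.

```python
# Illustrative calibration sketch (made-up readings): fit a line from a
# recording system's uncalibrated intensity readout to dB SPL measured
# by a sound-level meter, then use the line to calibrate new readings.
meter_db  = [55.0, 65.0, 75.0, 85.0, 95.0]   # sound-level meter (reference)
system_au = [17.5, 22.5, 27.5, 32.5, 37.5]   # recording system (arbitrary units)

n = len(meter_db)
mx = sum(system_au) / n
my = sum(meter_db) / n
slope = sum((x - mx) * (y - my) for x, y in zip(system_au, meter_db)) \
        / sum((x - mx) ** 2 for x in system_au)
intercept = my - slope * mx

def calibrate(raw):
    """Convert a raw system reading into calibrated dB SPL."""
    return slope * raw + intercept
```

In the study, separate fits per signal type (speech noise vs. natural voice) and per recording system would be computed the same way.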
NASA Technical Reports Server (NTRS)
Bayliss, A.
1978-01-01
The scattering of the sound of a jet engine by an airplane fuselage is modeled by solving the axially symmetric Helmholtz equation exterior to a long thin ellipsoid. The integral equation method based on the single layer potential formulation is used. A family of coordinate systems on the body is introduced and an algorithm is presented to determine the optimal coordinate system. Numerical results verify that the optimal choice enables the solution to be computed with a grid that is coarse relative to the wavelength.
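For context, the single-layer potential formulation referred to above represents the scattered field by a source density on the body surface; a sketch in standard notation (assumed here, not taken verbatim from the report):

```latex
% Exterior Helmholtz problem: (\nabla^2 + k^2)\,u = 0 outside the surface S.
% Single-layer representation with the free-space Green's function:
u(\mathbf{x}) = \int_S \sigma(\mathbf{y})\, G(\mathbf{x},\mathbf{y})\, \mathrm{d}S(\mathbf{y}),
\qquad
G(\mathbf{x},\mathbf{y}) = \frac{e^{ik|\mathbf{x}-\mathbf{y}|}}{4\pi\,|\mathbf{x}-\mathbf{y}|}.
% Enforcing the boundary condition on S yields an integral equation for the
% unknown density \sigma, which is then discretized and solved numerically.
```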
[Should disease management be feared? (1): hospital care].
Gaspoz, J M; Rutschmann, O
2005-11-23
The goals of disease management are: (1) an integrated health care delivery system; (2) knowledge-based care; (3) elaborate information systems; (4) continuous quality improvement. In-hospital disease management and, more specifically, critical pathways establish standardized care plans, set goals, and time actions to reach these goals. They can reduce variations in practice patterns and resource utilization without compromising quality of care. Such strategies contribute to quality-improvement programs in hospitals when they involve and empower all actors in a given process of care, are not imposed from outside, and use sound and rigorous development and evaluation methods.
Structural integrity test and assessment.
NASA Technical Reports Server (NTRS)
Suggs, F.; Poe, R.; Sannicandro, R.
1972-01-01
The feasibility of using an ultrasonic system on board the Space Shuttle Orbiter to facilitate structural evaluation and assessment was studied. Two factors are considered that could limit the capability of an ultrasonic system: (1) the effect of structure configuration and (2) the noise generated during vehicle launch. Results of the study indicate that although the structural configuration has direct bearing on sound propagation, strategic location of transducers will still permit flaw detection. The ultrasonic response data show that a severe acoustic environment does not interfere significantly with either propagation and reflection of surface waves or detection of crack-like flaws in the structure.
Karipidis, Iliana I; Pleisch, Georgette; Brandeis, Daniel; Roth, Alexander; Röthlisberger, Martina; Schneebeli, Maya; Walitza, Susanne; Brem, Silvia
2018-05-08
During reading acquisition, neural reorganization of the human brain facilitates the integration of letters and speech sounds, which enables successful reading. Neuroimaging and behavioural studies have established that impaired audiovisual integration of letters and speech sounds is a core deficit in individuals with developmental dyslexia. This longitudinal study aimed to identify neural and behavioural markers of audiovisual integration that are related to future reading fluency. We simulated the first step of reading acquisition by performing artificial-letter training with prereading children at risk for dyslexia. Multiple logistic regressions revealed that our training provides new precursors of reading fluency at the beginning of reading acquisition. In addition, an event-related potential around 400 ms and functional magnetic resonance imaging activation patterns in the left planum temporale to audiovisual correspondences improved cross-validated prediction of future poor readers. Finally, an exploratory analysis combining simultaneously acquired electroencephalography and hemodynamic data suggested that modulation of temporoparietal brain regions depended on future reading skills. The multimodal approach demonstrates neural adaptations to audiovisual integration in the developing brain that are related to reading outcome. Despite potential limitations arising from the restricted sample size, our results may have promising implications both for identifying poor-reading children and for monitoring early interventions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...
Sussman, Elyse; Winkler, István; Kreuzer, Judith; Saher, Marieke; Näätänen, Risto; Ritter, Walter
2002-12-01
Our previous study showed that the auditory context could influence whether two successive acoustic changes occurring within the temporal integration window (approximately 200 ms) were pre-attentively encoded as a single auditory event or as two discrete events (Cogn Brain Res 12 (2001) 431). The aim of the current study was to assess whether top-down processes could influence the stimulus-driven processes in determining what constitutes an auditory event. The electroencephalogram (EEG) was recorded from 11 scalp electrodes to frequently occurring standard and infrequently occurring deviant sounds. Within the stimulus blocks, deviants either occurred only in pairs (successive feature changes) or both singly and in pairs. Event-related potential indices of change and target detection, the mismatch negativity (MMN) and the N2b component, respectively, were compared with simultaneously measured performance in discriminating the deviants. Even though subjects could voluntarily distinguish the two successive auditory feature changes from each other, as also indicated by the elicitation of the N2b target-detection response, top-down processes did not modify the event organization reflected by the MMN response. Top-down processes can extract elemental auditory information from a single integrated acoustic event, but the extraction occurs at a later processing stage than the one whose outcome is indexed by MMN. Initial processes of auditory event-formation are thus fully governed by the context within which the sounds occur: perceiving the deviants as two separate sound events (the top-down effect) did not change their initial neural representation as one event (indexed by the MMN) in the absence of a corresponding change in the stimulus-driven sound organization.
Simulation of Jet Noise with OVERFLOW CFD Code and Kirchhoff Surface Integral
NASA Technical Reports Server (NTRS)
Kandula, M.; Caimi, R.; Voska, N. (Technical Monitor)
2002-01-01
An acoustic prediction capability for supersonic axisymmetric jets was developed on the basis of the OVERFLOW Navier-Stokes CFD (Computational Fluid Dynamics) code of NASA Langley Research Center. Reynolds-averaged turbulent stresses in the flow field are modeled with the Spalart-Allmaras one-equation turbulence model. Appropriate acoustic and outflow boundary conditions were implemented to compute the time-dependent acoustic pressure in the nonlinear source field. From the acoustic pressure and its temporal and normal derivatives on the Kirchhoff surface, the near-field and far-field sound pressure levels are computed via the Kirchhoff surface integral, with the Kirchhoff surface chosen to enclose the nonlinear sound-source region described by the CFD code. The method is validated by comparing the predicted sound pressure levels with available data for an axisymmetric turbulent supersonic (Mach 2) perfectly expanded jet.
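As context for the Kirchhoff step, the standard retarded-time surface integral has the form below (textbook notation; the report's exact formulation may differ in detail). Here r is the distance from a surface element to the observer, n the outward surface normal, and c the sound speed:

```latex
% Far-field pressure from a Kirchhoff surface S enclosing the source
% region, with all bracketed quantities evaluated at the retarded time:
p(\mathbf{x},t) = \frac{1}{4\pi} \oint_S
\left[ \frac{p}{r^{2}}\frac{\partial r}{\partial n}
     - \frac{1}{r}\frac{\partial p}{\partial n}
     + \frac{1}{c\,r}\frac{\partial r}{\partial n}\frac{\partial p}{\partial \tau}
\right]_{\tau = t - r/c} \mathrm{d}S
```

The CFD solution supplies p, its normal derivative, and its time derivative on S; the integral then propagates the sound to near- and far-field observer locations.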
Experimental implementation of acoustic impedance control by a 2D network of distributed smart cells
NASA Astrophysics Data System (ADS)
David, P.; Collet, M.; Cote, J.-M.
2010-03-01
New miniaturization and integration capabilities available from emerging microelectromechanical system (MEMS) technology will allow silicon-based artificial skins involving thousands of elementary actuators to be developed in the near future. Smart structures combining large arrays of elementary motion pixels are thus being studied so that fundamental properties could be dynamically adjusted. This paper investigates the acoustical capabilities of a network of distributed transducers connected with a suitable controlling strategy. The research aims at designing an integrated active interface for sound attenuation by using suitable changes of acoustical impedance. The control strategy is based on partial differential equations (PDE) and the multiscaled physics of electromechanical elements. Specific techniques based on PDE control theory have provided a simple boundary control equation able to annihilate the reflections of acoustic waves. To experimentally implement the method, the control strategy is discretized as a first order time-space operator. The obtained quasi-collocated architecture, composed of a large number of sensors and actuators, provides high robustness and stability. The experimental results demonstrate how a well controlled active skin can substantially modify sound reflectivity of the acoustical interface and reduce the propagation of acoustic waves.
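The "annihilation of reflections" targeted by the boundary control can be summarized with the standard impedance picture (textbook relation, not the paper's PDE controller):

```latex
% Normal-incidence reflection coefficient at a boundary with acoustic
% impedance Z = p / v_n, where \rho c is the characteristic impedance of air:
R = \frac{Z - \rho c}{Z + \rho c}
% Driving the active skin so that Z \to \rho c yields R \to 0, i.e. an
% absorbing, reflection-free acoustical interface.
```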
Blind subjects construct conscious mental images of visual scenes encoded in musical form.
Cronly-Dillon, J; Persaud, K C; Blore, R
2000-01-01
Blind (previously sighted) subjects are able to analyse, describe and graphically represent a number of high-contrast visual images translated into musical form de novo. We presented musical transforms of a random assortment of photographic images of objects and urban scenes to such subjects, a few of which depicted architectural and other landmarks that may be useful in navigating a route to a particular destination. Our blind subjects were able to use the sound representation to construct a conscious mental image that was revealed by their ability to depict a visual target by drawing it. We noted the similarity between the way the visual system integrates information from successive fixations to form a representation that is stable across eye movements and the way a succession of image frames (encoded in sound) which depict different portions of the image are integrated to form a seamless mental image. Finally, we discuss the profound resemblance between the way a professional musician carries out a structural analysis of a musical composition in order to relate its structure to the perception of musical form and the strategies used by our blind subjects in isolating structural features that collectively reveal the identity of visual form. PMID:11413637
Conversion of environmental data to a digital-spatial database, Puget Sound area, Washington
Uhrich, M.A.; McGrath, T.S.
1997-01-01
Data and maps from the Puget Sound Environmental Atlas, compiled for the U.S. Environmental Protection Agency, the Puget Sound Water Quality Authority, and the U.S. Army Corps of Engineers, have been converted into a digital-spatial database using a geographic information system. Environmental data for the Puget Sound area, collected from sources other than the Puget Sound Environmental Atlas by different Federal, State, and local agencies, also have been converted into this digital-spatial database. Background on the geographic-information-system planning process, the design and implementation of the geographic-information-system database, and the reasons for conversion to this digital-spatial database are included in this report. The Puget Sound Environmental Atlas data layers include information about seabird nesting areas, eelgrass and kelp habitat, marine mammal and fish areas, and shellfish resources and bed certification. Data layers from sources other than the Puget Sound Environmental Atlas include the Puget Sound shoreline, the water-body system, shellfish growing areas, recreational shellfish beaches, sewage-treatment outfalls, upland hydrography, watershed and political boundaries, and geographic names. The sources of data, descriptions of the data layers, and the steps and errors of processing associated with conversion to a digital-spatial database used in development of the Puget Sound Geographic Information System also are included in this report. The appendixes contain data dictionaries for each of the resource layers and error values for the conversion of Puget Sound Environmental Atlas data.
Computer-aided auscultation learning system for nursing technique instruction.
Hou, Chun-Ju; Chen, Yen-Ting; Hu, Ling-Chen; Chuang, Chih-Chieh; Chiu, Yu-Hsien; Tsai, Ming-Shih
2008-01-01
Pulmonary auscultation is a physical assessment skill learned by nursing students for examining the respiratory system. Generally, a sound simulator equipped mannequin is used to group teach auscultation techniques via classroom demonstration. However, nursing students cannot readily duplicate this learning environment for self-study. The advancement of electronic and digital signal processing technologies facilitates simulating this learning environment. This study aims to develop a computer-aided auscultation learning system for assisting teachers and nursing students in auscultation teaching and learning. This system provides teachers with signal recording and processing of lung sounds and immediate playback of lung sounds for students. A graphical user interface allows teachers to control the measuring device, draw lung sound waveforms, highlight lung sound segments of interest, and include descriptive text. Effects on learning lung sound auscultation were evaluated for verifying the feasibility of the system. Fifteen nursing students voluntarily participated in the repeated experiment. The results of a paired t test showed that auscultative abilities of the students were significantly improved by using the computer-aided auscultation learning system.
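The paired t-test evaluation mentioned above can be sketched from first principles. The scores below are invented (the study enrolled 15 students; this sketch uses 5 for brevity):

```python
import math

# Illustrative paired t-test on pre/post auscultation scores
# (made-up numbers, not the study's data):
pre  = [60.0, 55.0, 70.0, 65.0, 58.0]
post = [75.0, 68.0, 80.0, 78.0, 70.0]

diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t = mean_d / (sd_d / math.sqrt(n))   # paired t statistic, df = n - 1

# Two-tailed critical value for df = 4, alpha = 0.05 is about 2.776;
# |t| above it indicates a significant pre-to-post improvement.
significant = abs(t) > 2.776
```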
Evidence for Diminished Multisensory Integration in Autism Spectrum Disorders
ERIC Educational Resources Information Center
Stevenson, Ryan A.; Siemann, Justin K.; Woynaroski, Tiffany G.; Schneider, Brittany C.; Eberly, Haley E.; Camarata, Stephen M.; Wallace, Mark T.
2014-01-01
Individuals with autism spectrum disorders (ASD) exhibit alterations in sensory processing, including changes in the integration of information across the different sensory modalities. In the current study, we used the sound-induced flash illusion to assess multisensory integration in children with ASD and typically-developing (TD) controls.…
Microwave-excited ultrasound and thermoacoustic dual imaging
NASA Astrophysics Data System (ADS)
Ding, Wenzheng; Ji, Zhong; Xing, Da
2017-05-01
We designed a microwave-excited ultrasound (MUI) and thermoacoustic dual imaging system. Under pulsed microwave excitation, the piezoelectric transducer used for thermoacoustic signal detection also emits a highly directional ultrasonic beam based on the inverse piezoelectric effect. With this beam, the ultrasonic transmitter circuitry of a traditional ultrasound imaging (TUI) system can be replaced by a microwave source. In other words, TUI can be fully integrated into the thermoacoustic imaging system by sharing the microwave excitation source and the transducer. Moreover, the signals of the two imaging modalities do not interfere with each other because of the sound path difference, so MUI can be performed simultaneously with microwave-induced thermoacoustic imaging. In this study, the performance characteristics and imaging capabilities of the hybrid system are demonstrated. The results indicate that our design provides an easy route to low-cost platform integration and has the potential to offer a clinically useful dual-modality tool for accurate disease detection.
Possibilities of psychoacoustics to determine sound quality
NASA Astrophysics Data System (ADS)
Genuit, Klaus
For some years, acoustic engineers have increasingly become aware of the importance of analyzing and minimizing noise problems not only with regard to the A-weighted sound pressure level, but to design sound quality. It is relatively easy to determine the A-weighted SPL according to international standards. However, the objective evaluation to describe subjectively perceived sound quality - taking into account psychoacoustic parameters such as loudness, roughness, fluctuation strength, sharpness and so forth - is more difficult. On the one hand, the psychoacoustic measurement procedures which are known so far have yet not been standardized. On the other hand, they have only been tested in laboratories by means of listening tests in the free-field and one sound source and simple signals. Therefore, the results achieved cannot be transferred to complex sound situations with several spatially distributed sound sources without difficulty. Due to the directional hearing and selectivity of human hearing, individual sound events can be selected among many. Already in the late seventies a new binaural Artificial Head Measurement System was developed which met the requirements of the automobile industry in terms of measurement technology. The first industrial application of the Artificial Head Measurement System was in 1981. Since that time the system was further developed, particularly by the cooperation between HEAD acoustics and Mercedes-Benz. In addition to a calibratable Artificial Head Measurement System which is compatible with standard measurement technologies and has transfer characteristics comparable to human hearing, a Binaural Analysis System is now also available. This system permits the analysis of binaural signals regarding physical and psychoacoustic procedures. 
Moreover, the signals to be analyzed can be simultaneously monitored through headphones and manipulated in the time and frequency domains so that the signal components responsible for noise annoyance can be found. Especially in complex sound situations with several spatially distributed sound sources, standard one-channel measurement methods cannot adequately determine sound quality, acoustic comfort, or the annoyance of sound events.
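The A-weighted SPL that the passage contrasts with richer psychoacoustic metrics is itself easy to compute; a sketch of the standard A-weighting curve in its analytic form (as standardized in IEC 61672):

```python
import math

# Standard A-weighting curve (IEC 61672 analytic form). This is the
# conventional single-number frequency weighting the passage contrasts
# with psychoacoustic metrics such as loudness and roughness.
def a_weight(f):
    """A-weighting in dB at frequency f (Hz); ~0 dB at 1 kHz."""
    f2 = f * f
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20.0 * math.log10(ra) + 2.00   # offset normalizes 1 kHz to 0 dB
```

The heavy attenuation at low frequencies (around -19 dB at 100 Hz) is exactly why A-weighted levels can misrepresent perceived sound quality for spectrally complex sources.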
Georgoulas, George; Georgopoulos, Voula C; Stylios, Chrysostomos D
2006-01-01
This paper proposes a novel integrated methodology to extract features and classify speech sounds with intent to detect the possible existence of a speech articulation disorder in a speaker. Articulation, in effect, is the specific and characteristic way that an individual produces the speech sounds. A methodology to process the speech signal, extract features and finally classify the signal and detect articulation problems in a speaker is presented. The use of support vector machines (SVMs), for the classification of speech sounds and detection of articulation disorders is introduced. The proposed method is implemented on a data set where different sets of features and different schemes of SVMs are tested leading to satisfactory performance.
Reproduction of a higher-order circular harmonic field using a linear array of loudspeakers.
Lee, Jung-Min; Choi, Jung-Woo; Kim, Yang-Hann
2015-03-01
This paper presents a direct formula for reproducing a sound field consisting of higher-order circular harmonics with polar phase variation. Sound fields with phase variation can be used for synthesizing various spatial attributes, such as the perceived width or the location of a virtual sound source. To reproduce such a sound field using a linear loudspeaker array, the driving function of the array is derived in the format of an integral formula. The proposed function shows fewer reproduction errors than a conventional formula focused on magnitude variations. In addition, analysis of the sweet spot reveals that its shape can be asymmetric, depending on the order of harmonics.
Nordahl, Rolf; Turchet, Luca; Serafin, Stefania
2011-09-01
We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based on physical models. Two experiments were conducted. In the first experiment, the ability of subjects to recognize the surface they were exposed to was assessed. In the second experiment, the sound synthesis engine was enhanced with environmental sounds. Results show that, in some conditions, adding a soundscape significantly improves the recognition of the simulated environment.
Third Graders Explore Sound Concepts through Online Research Compared to Making Musical Instruments
ERIC Educational Resources Information Center
Borsay, Kyrie D.; Foss, Page
2016-01-01
This study is an exploration of several lessons on sound taught to third grade students using one of the Next Generation Science Standards (3-5-ETS1) and arts integration. A counterbalanced, pretest- posttest- distal posttest design experiment was conducted to compare student knowledge and attitudes between the control and experimental conditions.…
Using Sound Knowledge to Teach about Noise-Induced Hearing Loss
ERIC Educational Resources Information Center
McDonnough, Jacqueline T.; Matkins, Juanita Jo
2007-01-01
Throughout our lives we are surrounded by sounds in our environment. Our ability to hear plays an essential part in our everyday existence. Students should develop an understanding of the role technology plays in personal and social decisions. If we are to meet these goals we need to integrate aspects of responsible behavior toward hearing health…
Range of sound levels in the outdoor environment
Lewis S. Goodfriend
1977-01-01
Current methods of measuring and rating noise in a metropolitan area are examined, including real-time spectrum analysis and sound-level integration, producing a single-number value representing the noise impact for each hour or each day. Methods of noise rating for metropolitan areas are reviewed, and the various measures from multidimensional rating methods such as...
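The sound-level integration mentioned above reduces a series of SPL samples to a single-number rating; a minimal sketch of the standard equivalent continuous level (Leq), with hypothetical hourly samples:

```python
import math

# Sketch of sound-level integration: the equivalent continuous level
# L_eq is the energy average of SPL samples, the single-number value
# per hour or per day that the survey describes.
def leq(levels_db):
    """Energy-average (equivalent continuous) level of SPL samples."""
    mean_energy = sum(10 ** (l / 10.0) for l in levels_db) / len(levels_db)
    return 10.0 * math.log10(mean_energy)

hourly = leq([60.0, 70.0, 80.0])   # dominated by the loudest samples
```

Because the average is taken on energy rather than on dB values, a few loud events dominate the rating, which is the intended behavior for noise-impact assessment.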
An automated computerized auscultation and diagnostic system for pulmonary diseases.
Abbas, Ali; Fahim, Atef
2010-12-01
Respiratory sounds are significant because they provide valuable information on the health of the respiratory system. Sounds emanating from the respiratory system are uneven and vary significantly from one individual to another, and for the same individual over time. In and of themselves they are not direct proof of an ailment, but rather an inference that one exists. Auscultation diagnosis is a skill that is acquired and honed by practice; hence it is common to seek confirmation using invasive and potentially harmful imaging techniques such as X-rays. This research focuses on developing an automated auscultation diagnostic system that overcomes the limitations inherent in traditional auscultation techniques. The system uses a front-end filtering module based on adaptive neural-network (NN) noise cancellation to eliminate spurious sound signals, such as those from the heart, the intestine, and ambient noise. To date, the core diagnosis module is capable of distinguishing lung sounds from non-lung sounds, normal lung sounds from abnormal ones, and wheezes from crackles as indicators of different ailments.
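The paper's front end uses a neural-network noise canceller; as a stand-in for the same reference-based idea, here is the classic LMS adaptive canceller on fully synthetic signals (the coupling filter, frequencies, and step size are all assumptions for illustration):

```python
import math

# Stand-in sketch: a classic LMS adaptive canceller (the paper uses a
# neural-network variant of the same reference-based idea). A reference
# microphone captures the interference; the filter learns the coupling
# path and subtracts its estimate from the primary (chest) microphone.
def lms_cancel(primary, reference, taps=4, mu=0.01):
    w = [0.0] * taps
    cleaned = []
    for k in range(len(primary)):
        x = [reference[k - i] if k - i >= 0 else 0.0 for i in range(taps)]
        est = sum(wi * xi for wi, xi in zip(w, x))   # estimated interference
        e = primary[k] - est                          # cleaned sample
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, x)]
        cleaned.append(e)
    return cleaned

N = 3000
lung  = [math.sin(2 * math.pi * 0.01 * k) for k in range(N)]   # wanted signal
ref   = [math.sin(2 * math.pi * 0.20 * k) for k in range(N)]   # interference reference
noise = [0.8 * ref[k] - (0.3 * ref[k - 1] if k > 0 else 0.0) for k in range(N)]
mixed = [s + v for s, v in zip(lung, noise)]

out = lms_cancel(mixed, ref)
# Residual error power after convergence (last 500 samples):
resid = sum((out[k] - lung[k]) ** 2 for k in range(N - 500, N)) / 500
```

Because the synthetic coupling path is a 2-tap FIR, a 4-tap LMS filter can model it and the residual interference shrinks well below the original noise power.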
Hyperspectral Remote Sensing of Atmospheric Profiles from Satellites and Aircraft
NASA Technical Reports Server (NTRS)
Smith, W. L.; Zhou, D. K.; Harrison, F. W.; Revercomb, H. E.; Larar, A. M.; Huang, H. L.; Huang, B.
2001-01-01
A future hyperspectral resolution remote imaging and sounding system, called the GIFTS (Geostationary Imaging Fourier Transform Spectrometer), is described. An airborne system, which produces the type of hyperspectral resolution sounding data to be achieved with the GIFTS, has been flown on high altitude aircraft. Results from simulations and from the airborne measurements are presented to demonstrate the revolutionary remote sounding capabilities to be realized with future satellite hyperspectral remote imaging/sounding systems.
NASA Astrophysics Data System (ADS)
Folmer, M. J.; Berndt, E.; Malloy, K.; Mazur, K.; Sienkiewicz, J. M.; Phillips, J.; Goldberg, M.
2017-12-01
The Joint Polar Satellite System (JPSS) was added to the Satellite Proving Ground for Marine, Precipitation, and Satellite Analysis in late 2012, just in time to introduce forecasters to the very high-resolution imagery available from the Suomi-National Polar Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) instrument when observing and forecasting Hurricane Sandy (2012). Since that time, more polar products have been introduced to the forecast routines at the National Weather Service (NWS) Ocean Prediction Center (OPC), Weather Prediction Center (WPC), Tropical Analysis and Forecast Branch (TAFB) of the National Hurricane Center (NHC), and the Satellite Analysis Branch (SAB) of the National Environmental Satellite, Data, and Information Service (NESDIS). These new data sets have led to research projects at the OPC and TAFB that have specifically been looking into the early identification of stratospheric intrusions that lead to explosive cyclogenesis or extratropical transition of tropical cyclones. Currently NOAA Unique CrIS/ATMS Processing System (NUCAPS) temperature and moisture soundings are available in AWIPS-II as a point-based display. Traditionally soundings are used to anticipate and forecast severe convection, however unique and valuable information can be gained from soundings for other forecasting applications, such as extratropical transition, especially in data sparse regions. Additional research has been conducted to look at how JPSS CrIS/ATMS NUCAPS soundings might help forecasters identify the pre-extratropical transition or pre-explosive cyclogenesis environments, leading to earlier diagnosis and better public advisories. CrIS/ATMS NUCAPS soundings, IASI and NUCAPS ozone products, NOAA G-IV GPS dropwindsondes, the Air Mass RGB, and single water vapor channels have been analyzed to look for the precursors to these high impact events. 
This presentation seeks to show some early analysis and potential uses of the polar-orbiting datasets to complement the geostationary imagery and thereby lead to earlier identification and possible warnings.
On the Acoustics of Emotion in Audio: What Speech, Music, and Sound have in Common
Weninger, Felix; Eyben, Florian; Schuller, Björn W.; Mortillaro, Marcello; Scherer, Klaus R.
2013-01-01
Without doubt, there is emotional information in almost any kind of sound received by humans every day: be it the affective state of a person transmitted by means of speech; the emotion intended by a composer while writing a musical piece, or conveyed by a musician while performing it; or the affective state connected to an acoustic event occurring in the environment, in the soundtrack of a movie, or in a radio play. In the field of affective computing, there is currently some loosely connected research concerning either of these phenomena, but a holistic computational model of affect in sound is still lacking. In turn, for tomorrow’s pervasive technical systems, including affective companions and robots, it is expected to be highly beneficial to understand the affective dimensions of “the sound that something makes,” in order to evaluate the system’s auditory environment and its own audio output. This article aims at a first step toward a holistic computational model: starting from standard acoustic feature extraction schemes in the domains of speech, music, and sound analysis, we interpret the worth of individual features across these three domains, considering four audio databases with observer annotations in the arousal and valence dimensions. In the results, we find that by selection of appropriate descriptors, cross-domain arousal, and valence regression is feasible achieving significant correlations with the observer annotations of up to 0.78 for arousal (training on sound and testing on enacted speech) and 0.60 for valence (training on enacted speech and testing on music). The high degree of cross-domain consistency in encoding the two main dimensions of affect may be attributable to the co-evolution of speech and music from multimodal affect bursts, including the integration of nature sounds for expressive effects. PMID:23750144
Tidoni, Emmanuele; Gergondet, Pierre; Fusco, Gabriele; Kheddar, Abderrahmane; Aglioti, Salvatore M
2017-06-01
The efficient control of our body and successful interaction with the environment are possible through the integration of multisensory information. Brain-computer interface (BCI) may allow people with sensorimotor disorders to actively interact in the world. In this study, visual information was paired with auditory feedback to improve the BCI control of a humanoid surrogate. Healthy and spinal cord injured (SCI) people were asked to embody a humanoid robot and complete a pick-and-place task by means of a visual evoked potentials BCI system. Participants observed the remote environment from the robot's perspective through a head mounted display. Human-footsteps and computer-beep sounds were used as synchronous/asynchronous auditory feedback. Healthy participants achieved better placing accuracy when listening to human footstep sounds relative to a computer-generated sound. SCI people demonstrated more difficulty in steering the robot during asynchronous auditory feedback conditions. Importantly, subjective reports highlighted that the BCI mask overlaying the display did not limit the observation of the scenario and the feeling of being in control of the robot. Overall, the data seem to suggest that sensorimotor-related information may improve the control of external devices. Further studies are required to understand how the contribution of residual sensory channels could improve the reliability of BCI systems.
EPA Science Matters Newsletter: Volume 1, Number 3
2017-02-14
The term 'scientific integrity' is often used to describe an essential pillar of our work. It reflects our understanding that sound science is an irreplaceable necessity in ensuring the integrity of our actions and our decisions.
Development of Prototype of Whistling Sound Counter based on Piezoelectric Bone Conduction
NASA Astrophysics Data System (ADS)
Mori, Mikio; Ogihara, Mitsuhiro; Kyuu, Ten; Taniguchi, Shuji; Kato, Shozo; Araki, Chikahiro
Recently, some professional whistlers have set up music schools that teach musical whistling. As in singing, in musical whistling the whistling sound should not break, even when the whistling goes on for more than 3 min. For this, it is advisable to practice the “Pii” sound, which involves whistling it continuously 100 times at the same pitch. However, when practicing alone, a whistler finds it difficult to count his/her own whistling sounds. In this paper, we propose a whistling sound counter based on piezoelectric bone conduction. This system consists of five parts. The gain of the amplifier section of this counter is variable, and the center frequency (f0) of the BPF part is also variable. In this study, we developed a prototype of the system and tested it. For this, we simultaneously counted the whistling sounds of nine people using the proposed system. The proposed system showed good performance in a noisy environment. We also propose an examination system for awarding grades in musical whistling, which administers the license examination in musical whistling on a personal computer. The proposed system can be used to administer the 5th grade exam for musical whistling.
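A minimal sketch of the band-pass-and-count idea behind such a counter, assuming a 1 kHz whistle pitch, a Butterworth band-pass, and an envelope threshold (none of which are taken from the prototype's actual amplifier/BPF design):

```python
# Illustrative sketch (assumed parameters, not the prototype's circuit):
# count discrete whistle tones by band-pass filtering around the whistle
# pitch and counting rising edges of the smoothed envelope.
import numpy as np
from scipy import signal

fs = 8000
t = np.arange(0, 3.0, 1.0 / fs)
rng = np.random.default_rng(1)
x = 0.05 * rng.standard_normal(t.size)     # background noise
for start in (0.2, 1.2, 2.2):              # three 0.3 s whistles at 1 kHz
    burst = (t >= start) & (t < start + 0.3)
    x[burst] += np.sin(2 * np.pi * 1000.0 * t[burst])

# Band-pass around the whistle pitch, then smooth the analytic envelope
sos = signal.butter(4, [900.0, 1100.0], btype="bandpass", fs=fs, output="sos")
env = np.abs(signal.hilbert(signal.sosfilt(sos, x)))
env = np.convolve(env, np.ones(80) / 80, mode="same")   # ~10 ms smoothing

above = env > 0.5                                       # envelope threshold
count = int(np.sum(np.diff(above.astype(int)) == 1))    # rising edges
print(count)  # → 3
```

In a real device the threshold and pass band would track the whistler's pitch, which is what the counter's variable gain and variable BPF center frequency provide.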
IASI Radiance Data Assimilation in Local Ensemble Transform Kalman Filter
NASA Astrophysics Data System (ADS)
Cho, K.; Hyoung-Wook, C.; Jo, Y.
2016-12-01
The Korea Institute of Atmospheric Prediction Systems (KIAPS) is developing an NWP model with data assimilation systems. The Local Ensemble Transform Kalman Filter (LETKF) system, one of the data assimilation systems, has been developed for the KIAPS Integrated Model (KIM) based on a cubed-sphere grid and has successfully assimilated real data. The LETKF data assimilation system has been extended to 4D-LETKF, which considers time-evolving error covariance within the assimilation window, and to IASI radiance data assimilation using KPOP (KIAPS package for observation processing) with RTTOV (Radiative Transfer for TOVS). The LETKF system has been running semi-operational predictions including conventional (sonde, aircraft) observations and AMSU-A (Advanced Microwave Sounding Unit-A) radiance data since April. Recently, in July, the semi-operational prediction system added radiance observations including GPS-RO, AMV, and IASI (Infrared Atmospheric Sounding Interferometer) data. A set of simulations of KIM with ne30np4 and 50 vertical levels (model top at 0.3 hPa) was carried out for short-range forecasts (10 days) within the semi-operational LETKF prediction system with 50 ensemble members. To isolate the IASI impact, our experiments added only conventional and IASI radiance data to the same semi-operational prediction setup. We carried out sensitivity tests for the IASI thinning method (3D and 4D). The number of IASI observations was increased by temporal (4D) thinning, and we expect the IASI radiance data to improve the forecast skill of the model.
NASA Technical Reports Server (NTRS)
1981-01-01
The Space Transportation System (STS) is discussed, including the launch processing system, the thermal protection subsystem, meteorological research, sound suppression water system, rotating service structure, improved hypergol removal systems, fiber optics research, precision positioning, remote controlled solid rocket booster nozzle plugs, ground operations for Centaur orbital transfer vehicle, parachute drying, STS hazardous waste disposal and recycle, toxic waste technology and control concepts, fast analytical densitometry study, shuttle inventory management system, operational intercommunications system improvement, and protective garment ensemble. Terrestrial applications are also covered, including LANDSAT applications to water resources, satellite freeze forecast system, application of ground penetrating radar to soil survey, turtle tracking, evaluating computer drawn ground cover maps, sparkless load pulsar, and coupling a microcomputer and computing integrator with a gas chromatograph.
[A focused sound field measurement system by LabVIEW].
Jiang, Zhan; Bai, Jingfeng; Yu, Ying
2014-05-01
In this paper, according to the requirements of focused sound field measurement, a focused sound field measurement system was established based on the LabVIEW virtual instrument platform. The system can automatically search for the focus position of the sound field and adjust the scanning path according to the size of the focal region. Three-dimensional sound field scanning time was reduced from 888 hours with a uniform step to 9.25 hours with a variable step, improving the efficiency of focused sound field measurement. There is a certain deviation between the measurement results and theoretical calculations: the focal-plane -6 dB width difference rate was 3.691%, and the beam-axis -6 dB length difference rate was 12.937%.
SAFT-assisted sound beam focusing using phased arrays (PA-SAFT) for non-destructive evaluation
NASA Astrophysics Data System (ADS)
Nanekar, Paritosh; Kumar, Anish; Jayakumar, T.
2015-04-01
Focusing of sound has always been a subject of interest in ultrasonic non-destructive evaluation. An integrated approach to sound beam focusing using phased array and synthetic aperture focusing technique (PA-SAFT) has been developed in the authors' laboratory. The approach involves SAFT processing on ultrasonic B-scan image collected by a linear array transducer using a divergent sound beam. The objective is to achieve sound beam focusing using fewer elements than the ones required using conventional phased array. The effectiveness of the approach is demonstrated on aluminium blocks with artificial flaws and steel plate samples with embedded volumetric weld flaws, such as slag and clustered porosities. The results obtained by the PA-SAFT approach are found to be comparable to those obtained by conventional phased array and full matrix capture - total focusing method approaches.
Arctic Ocean Model Intercomparison Using Sound Speed
NASA Astrophysics Data System (ADS)
Dukhovskoy, D. S.; Johnson, M. A.
2002-05-01
The monthly and annual means from three Arctic Ocean-sea ice climate model simulations are compared for the period 1979-1997. Sound speed is used to integrate model outputs of temperature and salinity along a section between Barrow and Franz Josef Land. A statistical approach is used to test for differences among the three models for two basic data subsets. We integrated and then analyzed an upper layer between 2 and 50 m, and also a deep layer from 500 m to the bottom. The deep layer is characterized by low time-variability. No high-frequency signals appear in the deep layer; they are filtered out in the upper layer. There is no seasonal signal in the deep layer, and the monthly means oscillate insignificantly about the long-period mean. For the deep ocean the long-period mean can be considered quasi-constant, at least within the 19-year period of our analysis. Thus we assumed that the deep ocean would be the best choice for comparing the means of the model outputs. The upper (mixed) layer was chosen to contrast the deep layer dynamics. There are distinct seasonal and interannual signals in the sound speed time series in this layer. The mixed layer is a major link in the ocean-air interaction mechanism. Thus, different mean states of the upper layer in the models might cause different responses in other components of the Arctic climate system. The upper layer also strongly reflects any differences in atmospheric forcing. To compare data from the three models we have used a one-sample t-test for the population mean, the Wilcoxon one-sample signed-rank test (when the requirement of normality of the tested data is violated), and the one-way ANOVA method and F-test to verify our hypothesis that the model outputs have the same mean sound speed. The different statistical approaches have shown that all models have different mean characteristics of the deep and upper layers of the Arctic Ocean.
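The statistical comparison described above can be sketched with standard SciPy routines; the sound-speed values and sample sizes below are synthetic stand-ins invented for illustration, not the study's data.

```python
# Sketch of the abstract's three tests on synthetic model outputs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic monthly-mean sound speeds (m/s) for three hypothetical models
model_a = rng.normal(1460.0, 0.5, size=60)
model_b = rng.normal(1460.2, 0.5, size=60)
model_c = rng.normal(1463.0, 0.5, size=60)

# One-sample t-test: does one model's mean differ from a reference value?
t_stat, t_p = stats.ttest_1samp(model_a, popmean=1460.0)

# Wilcoxon signed-rank test: nonparametric fallback if normality is violated
w_stat, w_p = stats.wilcoxon(model_a - 1460.0)

# One-way ANOVA (F-test): do all three models share a common mean sound speed?
f_stat, f_p = stats.f_oneway(model_a, model_b, model_c)
print(f_p < 0.05)  # model_c's 3 m/s offset makes the ANOVA significant
```

The same pattern applies per layer: run the parametric test when the sample passes a normality check, and the signed-rank test otherwise.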
The biomechanics of one-footed vertical jump performance in unilateral trans-tibial amputees.
Strike, S C; Diss, C
2005-04-01
This study investigated vertical jumps from single support for two trans-tibial amputees from a standing position. The mechanisms used to achieve flight and the compensatory mechanisms used in the production of force in the absence of plantarflexors are detailed. Two participants completed countermovement maximum vertical jumps from the prosthetic and the sound limbs. The jumps were recorded by a seven-camera VICON 512 motion analysis system integrated with a Kistler forceplate. Flight height was 5 cm jumping from the prosthetic side and 18-19 cm from the sound side. The countermovement was shallower and its duration was less on the prosthetic side compared to the sound side. The reduced and passive range of motion at the prosthesis resulted in an asymmetrical countermovement for both participants with the knee and ankle joints most affected. The duration of the push-off phase was not consistently affected. At take-off the joints on the sound side reached close to full extension while on the prosthetic side they remained more flexed. Joint extension velocity in the push-off phase was similar for both participants on the sound side, though the timing for participant 2 illustrated earlier peaks. The pattern of joint extension velocity was not a smooth proximal to distal sequence on the prosthetic side. The magnitude and timing of the inter-segment extensor moments were asymmetrical for both subjects. The power pattern was asymmetrical in both the countermovement and push-off phases; the lack of power generation at the ankle affected that produced at the remaining joints.
49 CFR 325.25 - Calibration of measurement systems.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Standard Institute Standard Methods for Measurements of Sound Pressure Levels (ANSI S1.13-1971) for field... sound level measurement system must be calibrated and appropriately adjusted at one or more frequencies... 5-15 minutes thereafter, until it has been determined that the sound level measurement system has...
49 CFR 325.25 - Calibration of measurement systems.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Standard Institute Standard Methods for Measurements of Sound Pressure Levels (ANSI S1.13-1971) for field... sound level measurement system must be calibrated and appropriately adjusted at one or more frequencies... 5-15 minutes thereafter, until it has been determined that the sound level measurement system has...
33 CFR 164.43 - Automatic Identification System Shipborne Equipment-Prince William Sound.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 33 Navigation and Navigable Waters 2 2012-07-01 2012-07-01 false Automatic Identification System Shipborne Equipment-Prince William Sound. 164.43 Section 164.43 Navigation and Navigable Waters COAST GUARD... Automatic Identification System Shipborne Equipment—Prince William Sound. (a) Until December 31, 2004, each...
33 CFR 164.43 - Automatic Identification System Shipborne Equipment-Prince William Sound.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 33 Navigation and Navigable Waters 2 2013-07-01 2013-07-01 false Automatic Identification System Shipborne Equipment-Prince William Sound. 164.43 Section 164.43 Navigation and Navigable Waters COAST GUARD... Automatic Identification System Shipborne Equipment—Prince William Sound. (a) Until December 31, 2004, each...
33 CFR 164.43 - Automatic Identification System Shipborne Equipment-Prince William Sound.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false Automatic Identification System Shipborne Equipment-Prince William Sound. 164.43 Section 164.43 Navigation and Navigable Waters COAST GUARD... Automatic Identification System Shipborne Equipment—Prince William Sound. (a) Until December 31, 2004, each...
Dance, Stephen; Backus, Bradford; Morales, Lorenzo
2018-01-01
Introduction: The effect of a sound reinforcement system, in terms of speech intelligibility, has been systematically determined under realistic conditions. Different combinations of ambient and reverberant conditions representative of a classroom environment have been investigated. Materials and Methods: By comparing the measured speech transmission index metric with and without the system in the same space under different room acoustics conditions, it was possible to determine when the system was most effective. A new simple criterion, equivalent noise reduction (ENR), was introduced to determine the effectiveness of the sound reinforcement system which can be used to predict the speech transmission index based on the ambient sound pressure and reverberation time with and without amplification. Results: This criterion had a correlation, R2 > 0.97. It was found that sound reinforcement provided no benefit if the competing noise level was less than 40 dBA. However, the maximum benefit of such a system was equivalent to a 7.7 dBA noise reduction. Conclusion: Using the ENR model, it would be possible to determine the suitability of implementing sound reinforcement systems in any room, thus providing a tool to determine if natural acoustic treatment or sound field amplification would be of most benefit to the occupants of any particular room. PMID:29785972
Virtual targeting in three-dimensional space with sound and light interference
NASA Astrophysics Data System (ADS)
Chua, Florence B.; DeMarco, Robert M.; Bergen, Michael T.; Short, Kenneth R.; Servatius, Richard J.
2006-05-01
Law enforcement and the military are critically concerned with the targeting and firing accuracy of opponents. Stimuli which impede opponent targeting and firing accuracy can be incorporated into defense systems. An automated virtual firing range was developed to assess human targeting accuracy under conditions of sound and light interference, while avoiding dangers associated with live fire. This system has the ability to quantify sound and light interference effects on targeting and firing accuracy in three dimensions. This was achieved by development of a hardware and software system that presents the subject with a sound or light target, preceded by sound or light interference. Sony Xplod™ 4-way speakers present sound interference and sound targets. The Martin® MiniMAC™ Profile operates as a source of light interference, while a red laser light serves as a target. A tracking system was created to monitor toy gun movement and firing in three-dimensional space. Data are collected via the Ascension® Flock of Birds™ tracking system and a custom National Instruments® LabVIEW™ 7.0 program to monitor gun movement and firing. A test protocol examined system parameters. Results confirm that the system enables tracking of virtual shots from a fired simulation gun to determine shot accuracy and location in three dimensions.
Evaluation of a low-cost 3D sound system for immersive virtual reality training systems.
Doerr, Kai-Uwe; Rademacher, Holger; Huesgen, Silke; Kubbat, Wolfgang
2007-01-01
Since Head Mounted Displays (HMD), datagloves, tracking systems, and powerful computer graphics resources are nowadays in an affordable price range, the usage of PC-based "Virtual Training Systems" becomes very attractive. However, due to the limited field of view of HMD devices, additional modalities have to be provided to benefit from 3D environments. A 3D sound simulation can improve the capabilities of VR systems dramatically. Unfortunately, realistic 3D sound simulations are expensive and demand a tremendous amount of computational power to calculate reverberation, occlusion, and obstruction effects. To use 3D sound in a PC-based training system as a way to direct and guide trainees to observe specific events in 3D space, a cheaper alternative has to be provided, so that a broader range of applications can take advantage of this modality. To address this issue, we focus in this paper on the evaluation of a low-cost 3D sound simulation that is capable of providing traceable 3D sound events. We describe our experimental system setup using conventional stereo headsets in combination with a tracked HMD device and present our results with regard to precision, speed, and used signal types for localizing simulated sound events in a virtual training environment.
Riede, Tobias; Goller, Franz
2010-10-01
Song production in songbirds is a model system for studying learned vocal behavior. As in humans, bird phonation involves three main motor systems (respiration, vocal organ and vocal tract). The avian respiratory mechanism uses pressure regulation in air sacs to ventilate a rigid lung. In songbirds sound is generated with two independently controlled sound sources, which reside in a uniquely avian vocal organ, the syrinx. However, the physical sound generation mechanism in the syrinx shows strong analogies to that in the human larynx, such that both can be characterized as myoelastic-aerodynamic sound sources. Similarities include active adduction and abduction, oscillating tissue masses which modulate flow rate through the organ and a layered structure of the oscillating tissue masses giving rise to complex viscoelastic properties. Differences in the functional morphology of the sound producing system between birds and humans require specific motor control patterns. The songbird vocal apparatus is adapted for high speed, suggesting that temporal patterns and fast modulation of sound features are important in acoustic communication. Rapid respiratory patterns determine the coarse temporal structure of song and maintain gas exchange even during very long songs. The respiratory system also contributes to the fine control of airflow. Muscular control of the vocal organ regulates airflow and acoustic features. The upper vocal tract of birds filters the sounds generated in the syrinx, and filter properties are actively adjusted. Nonlinear source-filter interactions may also play a role. The unique morphology and biomechanical system for sound production in birds presents an interesting model for exploring parallels in control mechanisms that give rise to highly convergent physical patterns of sound generation. More comparative work should provide a rich source for our understanding of the evolution of complex sound producing systems. Copyright © 2009 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weng, Yu-Chi, E-mail: clyde.weng@gmail.com; Fujiwara, Takeshi
2011-06-15
In order to develop a sound material-cycle society, cost-effective municipal solid waste (MSW) management systems are required for the municipalities in the context of the integrated accounting system for MSW management. Firstly, this paper attempts to establish an integrated cost-benefit analysis (CBA) framework for evaluating the effectiveness of MSW management systems. In this paper, detailed cost/benefit items due to waste problems are particularly clarified. The stakeholders of MSW management systems, including the decision-makers of the municipalities and the citizens, are expected to reconsider the waste problems in depth and thus take wise actions with the aid of the proposed CBA framework. Secondly, focusing on the financial cost, this study develops a generalized methodology to evaluate the financial cost-effectiveness of MSW management systems, simultaneously considering the treatment technological levels and policy effects. The impacts of the influencing factors on the annual total and average financial MSW operation and maintenance (O and M) costs are analyzed in the Taiwanese case study with a demonstrative short-term future projection of the financial costs under scenario analysis. The established methodology would contribute to the evaluation of the current policy measures and to the modification of the policy design for the municipalities.
Gover, Bradford N; Ryan, James G; Stinson, Michael R
2002-11-01
A measurement system has been developed that is capable of analyzing the directional and spatial variations in a reverberant sound field. A spherical, 32-element array of microphones is used to generate a narrow beam that is steered in 60 directions. Using an omnidirectional loudspeaker as excitation, the sound pressure arriving from each steering direction is measured as a function of time, in the form of pressure impulse responses. By subsequent analysis of these responses, the variation of arriving energy with direction is studied. The directional diffusion and directivity index of the arriving sound can be computed, as can the energy decay rate in each direction. An analysis of the 32 microphone responses themselves allows computation of the point-to-point variation of reverberation time and of sound pressure level, as well as the spatial cross-correlation coefficient, over the extent of the array. The system has been validated in simple sound fields in an anechoic chamber and in a reverberation chamber. The system characterizes these sound fields as expected, both quantitatively from the measures and qualitatively from plots of the arriving energy versus direction. It is anticipated that the system will be of value in evaluating the directional distribution of arriving energy and the degree and diffuseness of sound fields in rooms.
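As a rough illustration of how a microphone array forms a steerable beam of this kind, here is a minimal frequency-domain delay-and-sum sketch; the six-microphone geometry, sampling rate, and plane-wave signal are invented for the example and are not the paper's 32-element spherical design.

```python
# Hedged sketch: delay-and-sum beam steering for a small microphone array.
import numpy as np

C = 343.0  # speed of sound in air, m/s

def delay_and_sum(signals, mic_pos, steer_dir, fs):
    """Compensate each channel's plane-wave delay for steer_dir and average."""
    delays = mic_pos @ steer_dir / C                 # seconds, per microphone
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    aligned = spectra * np.exp(-2j * np.pi * freqs * delays[:, None])
    return np.fft.irfft(aligned.mean(axis=0), n=n)

# Six microphones on the coordinate axes, 0.1 m from the array center
mic_pos = 0.1 * np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                          [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
fs, f0, n = 16000, 2000.0, 1600
t = np.arange(n) / fs
src_dir = np.array([1.0, 0.0, 0.0])                  # plane wave from +x
src_delays = mic_pos @ src_dir / C
signals = np.sin(2 * np.pi * f0 * (t[None, :] + src_delays[:, None]))

on = delay_and_sum(signals, mic_pos, src_dir, fs)
off = delay_and_sum(signals, mic_pos, np.array([0.0, 1.0, 0.0]), fs)
print(np.mean(on**2) > np.mean(off**2))  # steering at the source wins
```

Sweeping `steer_dir` over many directions and plotting the beam power against direction is the basic mechanism behind the directional energy maps the abstract describes.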
Intrinsic motivation and attentional capture from gamelike features in a visual search task.
Miranda, Andrew T; Palmer, Evan M
2014-03-01
In psychology research studies, the goals of the experimenter and the goals of the participants often do not align. Researchers are interested in having participants who take the experimental task seriously, whereas participants are interested in earning their incentive (e.g., money or course credit) as quickly as possible. Creating experimental methods that are pleasant for participants and that reward them for effortful and accurate data generation, while not compromising the scientific integrity of the experiment, would benefit both experimenters and participants alike. Here, we explored a gamelike system of points and sound effects that rewarded participants for fast and accurate responses. We measured participant engagement at both cognitive and perceptual levels and found that the point system (which invoked subtle, anonymous social competition between participants) led to positive intrinsic motivation, while the sound effects (which were pleasant and arousing) led to attentional capture for rewarded colors. In a visual search task, points were awarded after each trial for fast and accurate responses, accompanied by short, pleasant sound effects. We adapted a paradigm from Anderson, Laurent, and Yantis (Proceedings of the National Academy of Sciences 108(25):10367-10371, 2011b), in which participants completed a training phase during which red and green targets were probabilistically associated with reward (a point bonus multiplier). During a test phase, no points or sounds were delivered, color was irrelevant to the task, and previously rewarded targets were sometimes presented as distractors. Significantly longer response times on trials in which previously rewarded colors were present demonstrated attentional capture, and positive responses to a five-question intrinsic-motivation scale demonstrated participant engagement.
Experiments on active isolation using distributed PVDF error sensors
NASA Technical Reports Server (NTRS)
Lefebvre, S.; Guigou, C.; Fuller, C. R.
1992-01-01
A control system based on a two-channel narrow-band LMS algorithm is used to isolate periodic vibration at low frequencies on a structure composed of a rigid top plate mounted on a flexible receiving plate. The control performance of distributed PVDF error sensors and accelerometer point sensors is compared. For both sensors, high levels of global reduction, up to 32 dB, have been obtained. It is found that, by driving the PVDF strip output voltage to zero, the controller may force the structure to vibrate so that the integration of the strain under the length of the PVDF strip is zero. This ability of the PVDF sensors to act as spatial filters is especially relevant in active control of sound radiation. It is concluded that the PVDF sensors are flexible, nonfragile, and inexpensive and can be used as strain sensors for active control applications of vibration isolation and sound radiation.
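The narrow-band LMS control loop described above can be illustrated with a minimal single-channel sketch; the disturbance tone, step size, and quadrature-reference scheme are textbook assumptions, not details taken from the experiment.

```python
# Illustrative single-tone LMS sketch: two adaptive weights on quadrature
# references cancel a periodic disturbance measured at an error sensor.
import numpy as np

fs, f0, n = 2000, 60.0, 4000           # sample rate, disturbance tone, samples
t = np.arange(n) / fs
d = 0.8 * np.sin(2 * np.pi * f0 * t + 0.7)   # tonal disturbance at the sensor

# Quadrature reference signals at the known disturbance frequency
x = np.stack([np.sin(2 * np.pi * f0 * t), np.cos(2 * np.pi * f0 * t)])

w = np.zeros(2)                        # adaptive weights
mu = 0.02                              # LMS step size
e = np.empty(n)
for k in range(n):
    y = w @ x[:, k]                    # anti-disturbance control output
    e[k] = d[k] - y                    # residual at the error sensor
    w += 2 * mu * e[k] * x[:, k]       # LMS weight update

# Residual power late in the run is far below the disturbance power
print(np.mean(e[-n // 4:]**2) < 1e-3 * np.mean(d**2))
```

A two-channel controller as in the experiment runs such a loop per error sensor, and with a PVDF strip as the sensor, `e[k]` is the integrated strain under the strip rather than a point acceleration.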
Li, Wei; Torres, David; Díaz, Ramón; Wang, Zhengjun; Wu, Changsheng; Wang, Chuan; Lin Wang, Zhong; Sepúlveda, Nelson
2017-05-16
Ferroelectret nanogenerators were recently introduced as a promising alternative technology for harvesting kinetic energy. Here we report the device's intrinsic properties that allow for the bidirectional conversion of energy between electrical and mechanical domains; thus extending its potential use in wearable electronics beyond the power generation realm. This electromechanical coupling, combined with their flexibility and thin film-like form, bestows dual-functional transducing capabilities to the device that are used in this work to demonstrate its use as a thin, wearable and self-powered loudspeaker or microphone patch. To determine the device's performance and applicability, sound pressure level is characterized in both space and frequency domains for three different configurations. The confirmed device's high performance is further validated through its integration in three different systems: a music-playing flag, a sound recording film and a flexible microphone for security applications.
The Neural Representation of Consonant-Vowel Transitions in Adults Who Wear Hearing Aids
Tremblay, Kelly L.; Kalstein, Laura; Billings, Cuttis J.; Souza, Pamela E.
2006-01-01
Hearing aids help compensate for disorders of the ear by amplifying sound; however, their effectiveness also depends on the central auditory system's ability to represent and integrate spectral and temporal information delivered by the hearing aid. The authors report that the neural detection of time-varying acoustic cues contained in speech can be recorded in adult hearing aid users using the acoustic change complex (ACC). Seven adults (50–76 years) with mild to severe sensorineural hearing loss participated in the study. When presented with 2 identifiable consonant-vowel (CV) syllables (“shee” and “see”), the neural detection of CV transitions (as indicated by the presence of a P1-N1-P2 response) was different for each speech sound. More specifically, the latency of the evoked neural response coincided in time with the onset of the vowel, similar to the latency patterns the authors previously reported in normal-hearing listeners. PMID:16959736
Development of an International Research Project of Monsoon Asia Integrated Regional Study (MAIRS)
NASA Astrophysics Data System (ADS)
Fu, C.
2006-05-01
Monsoon Asia has been recommended as one of the critical regions for integrated study of global change. Among a number of reasons, the most significant feature of Monsoon Asia is that it is a region where the major features of the landscape, such as vegetation, soil, and water systems, have developed mainly under the most representative monsoon climate. On the other hand, Monsoon Asia is a region with the most active human development. It has more than 5000 years of civilization and the highest population density in the world, holding 57 percent of the world's population. It has also seen the most rapid development in recent decades and is projected to maintain high growth rates in the foreseeable future. The interactions of the human-monsoon system and their linkages with earth system dynamics are a challenging issue for global change research and for a sustainable Asia. A science plan for MAIRS is being drafted by the SSC of MAIRS under the guidance of START, and an international project office of MAIRS was formally opened in IAP/Chinese Academy of Sciences with the support of the Chinese government. The overall objectives of MAIRS, which will combine field experiments, process studies, and modeling components, are: 1) to better understand how human activities in the region are interacting with and altering the natural regional variability of the atmospheric, terrestrial, and marine components of the environment; 2) to contribute to a sound scientific basis for sustainable regional development; and 3) to develop a predictive capability for estimating changes in global-regional linkages in the Earth System and to recognize, on a sound scientific basis, the future consequences of such changes.
Is health systems integration being advanced through Local Health District planning?
Saunders, Carla; Carter, David J
2017-05-01
Objective Delivering genuine integrated health care is one of three strategic directions in the New South Wales (NSW) Government State Health Plan: Towards 2021. This study investigated the current key health service plan of each NSW Local Health District (LHD) to evaluate the extent and nature of health systems integration strategies that are currently planned. Methods A scoping review was conducted to identify common key principles and practices for successful health systems integration to enable the development of an appraisal tool to content assess LHD strategic health service plans. Results The strategies that are planned for health systems integration across LHDs focus most often on improvements in coordination, health care access and care delivery for complex at-risk patients across the care continuum by both state- and commonwealth-funded systems, providers and agencies. The most common reasons given for integrated activities were to reduce avoidable hospitalisation, avoid inappropriate emergency department attendance and improve patient care. Conclusions Despite the importance of health systems integration and finding that all NSW LHDs have made some commitment towards integration in their current strategic health plans, this analysis suggests that health systems integration is in relatively early development across NSW. What is known about the topic? Effective approaches to managing complex chronic diseases have been found to involve health systems integration, which necessitates sound communication and connection between healthcare providers across community and hospital settings. Planning based on current health systems integration knowledge to ensure the efficient use of scarce resources is a responsibility of all health systems. What does this paper add? Appropriate planning and implementation of health systems integration is becoming an increasingly important expectation and requirement of effective health systems. 
The present study is the first of its kind to assess the planned activity in health systems integration in the NSW public health system. NSW health districts play a central role in health systems integration; each health service plan outlines the strategic directions for the development and delivery of all state-funded services across each district for the coming years, equating to hundreds of millions of dollars in health sector funding. The inclusion of effective health systems integration strategies allows Local Health Districts to lay the foundation for quality patient outcomes and long-term financial sustainability despite projected increases in demand for health services. What are the implications for practice? Establishing robust ongoing mechanisms for effective health systems integration is now a necessary part of health planning. The present study identifies several key areas and strategies that are wide in scope and indicative of efforts towards health systems integration, which may support Local Health Districts and other organisations in systematic planning and implementation.
49 CFR 325.23 - Type of measurement systems which may be used.
Code of Federal Regulations, 2011 CFR
2011-10-01
... may be used. The sound level measurement system must meet or exceed the requirements of American National Standard Specification for Sound Level Meters (ANSI S1.4-1971), approved April 27, 1971, issued by..., New York, New York, 10018. (a) A Type 1 sound level meter; (b) A Type 2 sound level meter; or (c) A...
Integration of upper air data in the MeteoSwiss Data Warehouse
NASA Astrophysics Data System (ADS)
Musa, M.; Haeberli, Ch.; Ruffieux, D.
2010-09-01
Over the last 10 years MeteoSwiss has established a Data Warehouse in order to obtain a single, integrated data platform for all kinds of meteorological and climatological data. In the MeteoSwiss Data Warehouse, data and metadata are held in a metadata-driven relational database. To reach this goal, we started in a first step with the integration of current and historical data from our surface stations, including routines for aggregation and calculation and the implementation of enhanced quality control tools. In 2008 we began integrating current and historical upper air data into the Data Warehouse: soundings (PTU, wind, and ozone), profilers of any kind such as wind profilers and radiometers, profiles calculated from numerical weather models, and AMDAR data. The dataset also includes high-resolution sounding data from the station Payerne and TEMP data from 20 European stations since 1942. A critical point was to work out a concept for a general architecture that could deal with all the different types of data. While integrating the data itself, all metadata of the aerological station Payerne were transferred and imported into the central metadata repository of the Data Warehouse. The real-time and daily QC tools, as well as the routines for aggregation and calculation, were implemented analogously to those for the surface data. The quality control tools include plausibility tests such as limit tests, consistency tests at the same level, and vertical consistency tests. From the beginning, the aim was to support the MeteoSwiss integration strategy, which deals with all aspects of integration: the various observing technologies and platforms, observing systems outside MeteoSwiss, and the data and metadata themselves. This kind of integration comprises all aspects of "Enterprise Data Integration".
After the integration, the historical as well as the current upper air data are available to climatologists and meteorologists with standardized access for data retrieval and visualization. We are convinced that making these data accessible to scientists is a valuable contribution to a better understanding of high-resolution climatology.
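As an illustration of the kinds of plausibility checks described in the record above, here is a minimal sketch of a limit test and a vertical consistency test on a toy temperature sounding. The thresholds and data are invented for illustration; they are not MeteoSwiss code or operational limits.

```python
# Sketch of two QC checks of the kind named above: a limit (range) test
# and a vertical consistency test. All thresholds are illustrative.

def limit_test(value, lower, upper):
    """Pass a measurement only if it falls inside its plausible physical range."""
    return lower <= value <= upper

def vertical_consistency_test(profile, max_lapse=0.015):
    """Flag adjacent levels whose temperature gradient (K per m) is implausible.

    profile: list of (height_m, temperature_K) tuples, sorted by height.
    Returns the index of the upper level of each suspect pair.
    """
    suspect = []
    for i in range(1, len(profile)):
        dz = profile[i][0] - profile[i - 1][0]
        dt = profile[i][1] - profile[i - 1][1]
        if dz > 0 and abs(dt / dz) > max_lapse:
            suspect.append(i)
    return suspect

# Toy sounding: a 10 K jump and a 17 K drop over 500 m should be flagged.
sounding = [(0, 288.0), (500, 285.0), (1000, 295.0), (1500, 278.0)]
print(limit_test(288.0, 180.0, 330.0))      # True: plausible temperature
print(vertical_consistency_test(sounding))  # [2, 3]: implausible lapse rates
```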
Near-infrared image-guided laser ablation of dental decay
NASA Astrophysics Data System (ADS)
Tao, You-Chen; Fried, Daniel
2009-09-01
Image-guided laser ablation systems are now feasible for dentistry with the recent development of nondestructive high-contrast imaging modalities such as near-IR (NIR) imaging and optical coherence tomography (OCT) that are capable of discriminating between sound and demineralized dental enamel at the early stages of development. Our objective is to demonstrate that images of demineralized tooth surfaces have sufficient contrast to be used to guide a CO2 laser for the selective removal of natural and artificial caries lesions. NIR imaging and polarization-sensitive optical coherence tomography (PS-OCT) operating at 1310-nm are used to acquire images of natural lesions on extracted human teeth and highly patterned artificial lesions produced on bovine enamel. NIR and PS-OCT images are analyzed and converted to binary maps designating the areas on the samples to be removed by a CO2 laser to selectively remove the lesions. Postablation NIR and PS-OCT images confirmed preferential removal of demineralized areas with minimal damage to sound enamel areas. These promising results suggest that NIR and PS-OCT imaging systems can be integrated with a CO2 laser ablation system for the selective removal of dental caries.
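The conversion of a contrast image into a binary map of areas to remove can be sketched as a simple threshold operation. This is illustrative only, with invented pixel values; the image analysis in the study above is more involved than a single global threshold.

```python
# Illustrative sketch (not the authors' algorithm): thresholding a grayscale
# lesion-contrast image into a binary ablation map, i.e. the kind of 0/1
# target mask that could be handed to a scanned laser.

def binary_ablation_map(image, threshold):
    """Return a 2D map of 0/1 flags: 1 marks demineralized pixels to ablate."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

# Toy 3x4 "NIR contrast" image: higher values = more demineralization.
nir = [
    [10, 12, 80, 85],
    [11, 70, 90, 88],
    [ 9, 10, 15, 75],
]
target = binary_ablation_map(nir, threshold=60)
for row in target:
    print(row)
```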
Near-infrared image-guided laser ablation of dental decay
Tao, You-Chen; Fried, Daniel
2009-01-01
Image-guided laser ablation systems are now feasible for dentistry with the recent development of nondestructive high-contrast imaging modalities such as near-IR (NIR) imaging and optical coherence tomography (OCT) that are capable of discriminating between sound and demineralized dental enamel at the early stages of development. Our objective is to demonstrate that images of demineralized tooth surfaces have sufficient contrast to be used to guide a CO2 laser for the selective removal of natural and artificial caries lesions. NIR imaging and polarization-sensitive optical coherence tomography (PS-OCT) operating at 1310-nm are used to acquire images of natural lesions on extracted human teeth and highly patterned artificial lesions produced on bovine enamel. NIR and PS-OCT images are analyzed and converted to binary maps designating the areas on the samples to be removed by a CO2 laser to selectively remove the lesions. Postablation NIR and PS-OCT images confirmed preferential removal of demineralized areas with minimal damage to sound enamel areas. These promising results suggest that NIR and PS-OCT imaging systems can be integrated with a CO2 laser ablation system for the selective removal of dental caries. PMID:19895146
Near-infrared image-guided laser ablation of dental decay.
Tao, You-Chen; Fried, Daniel
2009-01-01
Image-guided laser ablation systems are now feasible for dentistry with the recent development of nondestructive high-contrast imaging modalities such as near-IR (NIR) imaging and optical coherence tomography (OCT) that are capable of discriminating between sound and demineralized dental enamel at the early stages of development. Our objective is to demonstrate that images of demineralized tooth surfaces have sufficient contrast to be used to guide a CO(2) laser for the selective removal of natural and artificial caries lesions. NIR imaging and polarization-sensitive optical coherence tomography (PS-OCT) operating at 1310-nm are used to acquire images of natural lesions on extracted human teeth and highly patterned artificial lesions produced on bovine enamel. NIR and PS-OCT images are analyzed and converted to binary maps designating the areas on the samples to be removed by a CO(2) laser to selectively remove the lesions. Postablation NIR and PS-OCT images confirmed preferential removal of demineralized areas with minimal damage to sound enamel areas. These promising results suggest that NIR and PS-OCT imaging systems can be integrated with a CO(2) laser ablation system for the selective removal of dental caries.
Petrini, Karin; McAleer, Phil; Pollick, Frank
2010-04-06
In the present study we applied a paradigm often used in face-voice affect perception to solo music improvisation to examine how the emotional valence of sound and gesture are integrated when perceiving an emotion. Three brief excerpts expressing emotion produced by a drummer and three by a saxophonist were selected. From these bimodal congruent displays the audio-only, visual-only, and audiovisually incongruent conditions (obtained by combining the two signals both within and between instruments) were derived. In Experiment 1 twenty musical novices judged the perceived emotion and rated the strength of each emotion. The results indicate that sound dominated the visual signal in the perception of affective expression, though this was more evident for the saxophone. In Experiment 2 a further sixteen musical novices were asked to either pay attention to the musicians' movements or to the sound when judging the perceived emotions. The results showed no effect of visual information when judging the sound. On the contrary, when judging the emotional content of the visual information, performance worsened in the incongruent condition that combined different emotional auditory and visual information from the same instrument. The effect of emotionally discordant information thus became evident only when the auditory and visual signals belonged to the same categorical event despite their temporal mismatch. This suggests that the integration of emotional information may be reinforced by its semantic attributes but might be independent of temporal features.
Integration: Dirty Word or Golden Key?
ERIC Educational Resources Information Center
Kerry, Trevor
2007-01-01
This article examines the notion of integrated studies as a way of organising curriculum in schools. Drawing on the insights of educational philosophy, curriculum theory and learning theory it establishes the soundness of a theoretical case for integration. It examines what this view means for the art and science of teaching, and notes examples of…
Apprenez la Science en Francais?
ERIC Educational Resources Information Center
Edmonds, Juliet; Jacobs, Pippa
2011-01-01
The authors explain why integrating the teaching of science and French is not as ridiculous as it may at first sound. They describe how this innovative integrated approach works in a primary school in Oxfordshire. The project involves Content and Language Integrated Learning (CLIL). CLIL is a method whereby a curriculum subject is planned and…
Liew, Kongmeng; Lindborg, PerMagnus; Rodrigues, Ruth; Styles, Suzy J.
2018-01-01
Noise has become integral to electroacoustic music aesthetics. In this paper, we define noise as sound that is high in auditory roughness, and examine its effect on cross-modal mapping between sound and visual shape in participants. In order to preserve the ecological validity of contemporary music aesthetics, we developed Rama, a novel interface, for presenting experimentally controlled blocks of electronically generated sounds that varied systematically in roughness, and actively collected data from audience interaction. These sounds were then embedded as musical drones within the overall sound design of a multimedia performance with live musicians. Audience members listened to these sounds, and collectively voted to create the shape of a visual graphic, presented as part of the audio–visual performance. The results of the concert setting were replicated in a controlled laboratory environment to corroborate the findings. Results show a consistent effect of auditory roughness on shape design, with rougher sounds corresponding to spikier shapes. We discuss the implications, as well as evaluate the audience interface. PMID:29515494
The informativity of sound modulates crossmodal facilitation of visual discrimination: a fMRI study.
Li, Qi; Yu, Hongtao; Li, Xiujun; Sun, Hongzan; Yang, Jingjing; Li, Chunlin
2017-01-18
Many studies have investigated behavioral crossmodal facilitation when a visual stimulus is accompanied by a concurrent task-irrelevant sound. Lippert and colleagues reported that a concurrent task-irrelevant sound reduced the uncertainty of the timing of the visual display and improved perceptional responses (informative sound). However, the neural mechanism by which the informativity of sound affected crossmodal facilitation of visual discrimination remained unclear. In this study, we used event-related functional MRI to investigate the neural mechanisms underlying the role of informativity of sound in crossmodal facilitation of visual discrimination. Significantly faster reaction times were observed when there was an informative relationship between auditory and visual stimuli. The functional MRI results showed sound informativity-induced activation enhancement including the left fusiform gyrus and the right lateral occipital complex. Further correlation analysis showed that the right lateral occipital complex was significantly correlated with the behavioral benefit in reaction times. This suggests that this region was modulated by the informative relationship within audiovisual stimuli that was learnt during the experiment, resulting in late-stage multisensory integration and enhanced behavioral responses.
Liew, Kongmeng; Lindborg, PerMagnus; Rodrigues, Ruth; Styles, Suzy J
2018-01-01
Noise has become integral to electroacoustic music aesthetics. In this paper, we define noise as sound that is high in auditory roughness, and examine its effect on cross-modal mapping between sound and visual shape in participants. In order to preserve the ecological validity of contemporary music aesthetics, we developed Rama, a novel interface, for presenting experimentally controlled blocks of electronically generated sounds that varied systematically in roughness, and actively collected data from audience interaction. These sounds were then embedded as musical drones within the overall sound design of a multimedia performance with live musicians. Audience members listened to these sounds, and collectively voted to create the shape of a visual graphic, presented as part of the audio-visual performance. The results of the concert setting were replicated in a controlled laboratory environment to corroborate the findings. Results show a consistent effect of auditory roughness on shape design, with rougher sounds corresponding to spikier shapes. We discuss the implications, as well as evaluate the audience interface.
Excessive exposure of sick neonates to sound during transport
Buckland, L; Austin, N; Jackson, A; Inder, T
2003-01-01
Objective: To determine the levels of sound to which infants are exposed during routine transport by ambulance, aircraft, and helicopter. Design: Sound levels during 38 consecutive journeys from a regional level III neonatal intensive care unit were recorded using a calibrated data logging sound meter (Quest 2900). The meter was set to record "A" weighted slow response integrated sound levels, which emulates the response of the human ear, and "C" weighted response sound levels as a measure of total sound level exposure for all frequencies. The information was downloaded to a computer using MS HyperTerminal. The resulting data were stored, and a graphical profile was generated for each journey using SigmaPlot software. Setting: Eight journeys involved ambulance transport on country roads, 24 involved fixed wing aircraft, and four were by helicopter. Main outcome measures: Relations between decibel levels and events or changes in transport mode were established by correlating the time logged on the sound meter with the standard transport documentation sheet. Results: The highest sound levels were recorded during air transport. However, mean sound levels for all modes of transport exceeded the recommended levels for neonatal intensive care. The maximum sound levels recorded were extremely high at greater than 80 dB in the "A" weighted hearing range and greater than 120 dB in the total frequency range. Conclusions: This study raises major concerns about the excessive exposure of the sick newborn to sound during transportation. PMID:14602701
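The integrated levels reported by a data-logging meter such as the one above are energy averages, not arithmetic means of the decibel readings. A minimal sketch of the standard equivalent continuous level (Leq) calculation, using hypothetical sample values rather than the study's data:

```python
import math

# Sketch of the equivalent continuous sound level (Leq): decibel samples
# are converted to relative power, averaged, and converted back to dB.
def leq(samples_db):
    """Energy-average a list of sound levels in dB into a single Leq in dB."""
    mean_power = sum(10 ** (L / 10) for L in samples_db) / len(samples_db)
    return 10 * math.log10(mean_power)

# Hypothetical one-minute log during an air transport leg (one sample / 12 s).
log = [72.0, 75.0, 80.0, 78.0, 74.0]
print(round(leq(log), 1))  # 76.7 -- dominated by the loudest samples
```

Note that the result sits well above the arithmetic mean of the samples (75.8 dB) because the loudest intervals dominate the energy average.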
NASA Technical Reports Server (NTRS)
Mehitretter, R.
1996-01-01
A stress analysis of the primary structure of the Meteorological Satellites Project (METSAT) Advanced Microwave Sounding Units-A, A1 Module, performed using the Meteorological Operational (METOP) qualification-level 9.66 grms random vibration PSD spectrum, is presented. The random vibration structural margins of safety and natural frequency predictions are summarized.
Comparison of sound speed measurements on two different ultrasound tomography devices
NASA Astrophysics Data System (ADS)
Sak, Mark; Duric, Neb; Littrup, Peter; Bey-Knight, Lisa; Sherman, Mark; Gierach, Gretchen; Malyarenko, Antonina
2014-03-01
Ultrasound tomography (UST) employs sound waves to produce three-dimensional images of breast tissue and precisely measures variations in sound speed arising from breast tissue composition. High breast density is a strong breast cancer risk factor, and sound speed is directly proportional to breast density. UST provides a quantitative measure of breast density based on three-dimensional imaging without compression, thereby overcoming the shortcomings of many other imaging modalities. The quantitative UST breast density measures are tied to an external standard, so sound speed measurements in breast tissue should be independent of the specific hardware. The work presented here compares breast sound speed measurements obtained with two different UST devices. The Computerized Ultrasound Risk Evaluation (CURE) system located at the Karmanos Cancer Institute in Detroit, Michigan was recently replaced with the SoftVue ultrasound tomographic device. Ongoing clinical trials have used images generated from both sets of hardware, so maintaining consistency in sound speed measurements is important. During an overlap period when both systems were in the same exam room, a total of 12 patients had one or both of their breasts imaged on both systems on the same day. Twenty-two sound speed scans from each system were analyzed and the average breast sound speeds compared. Images were either reconstructed from saved raw data (for both CURE and SoftVue) or created during image acquisition (saved in DICOM format, SoftVue scans only). The sound speed measurements from the two systems were strongly and positively correlated. The average difference in sound speed between the two sets of data was on the order of 1-2 m/s, and this difference was not statistically significant. The only sets of images that showed a statistical difference were the DICOM images created during the SoftVue scan compared with the SoftVue images reconstructed from the raw data.
However, the discrepancy between the sound speed values could be easily handled by uniformly increasing the DICOM sound speed by approximately 0.5 m/s. These results suggest that there is no fundamental difference in sound speed measurement for the two systems and support combining data generated with these instruments in future studies.
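The paired device comparison described above, a mean difference of 1-2 m/s alongside a strong positive correlation, can be sketched as follows. The sound-speed values are toy numbers chosen for illustration, not the study's data.

```python
import math
import statistics

# Sketch of a paired device comparison: mean difference and Pearson
# correlation between sound speeds from two systems (toy values).

def mean_difference(a, b):
    return statistics.fmean(x - y for x, y in zip(a, b))

def pearson(a, b):
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return cov / var

cure    = [1445.0, 1480.0, 1462.0, 1510.0, 1473.0]  # m/s, hypothetical
softvue = [1444.0, 1478.5, 1460.5, 1509.0, 1471.0]

print(round(mean_difference(cure, softvue), 2))  # small systematic offset
print(round(pearson(cure, softvue), 4))          # near-perfect agreement
```

A constant offset like this would shift the mean difference without degrading the correlation, which is why the abstract can propose a uniform additive correction for the DICOM images.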
Levings, C D; Varela, D E; Mehlenbacher, N M; Barry, K L; Piercey, G E; Guo, M; Harrison, P J
2005-12-01
We investigated the effect of acid mine drainage (AMD) from an abandoned copper mine at Britannia Beach (Howe Sound, BC, Canada) on primary productivity and chlorophyll a levels in the receiving waters of Howe Sound before, during, and after freshet from the Squamish River. Elevated concentrations of copper (integrated average through the water column >0.050 mg l(-1)) in nearshore waters indicated that under some conditions a small gyre near the mouth of Britannia Creek may have retained the AMD from Britannia Creek and from a 30-m deep water outfall close to shore. Regression and correlation analyses indicated that copper negatively affected primary productivity during April (pre-freshet) and November (post-freshet). Negative effects of copper on primary productivity were not supported statistically for July (freshet), possibly because of additional effects such as turbidity from the Squamish River. Depth-integrated average and surface chlorophyll a were correlated to copper concentrations in April. During this short study we demonstrated that copper concentrations from the AMD discharge can negatively affect both primary productivity and the standing stock of primary producers in Howe Sound.
Bahloul, Amel; Simmler, Marie-Christine; Michel, Vincent; Leibovici, Michel; Perfettini, Isabelle; Roux, Isabelle; Weil, Dominique; Nouaille, Sylvie; Zuo, Jian; Zadro, Cristina; Licastro, Danilo; Gasparini, Paolo; Avan, Paul; Hardelin, Jean-Pierre; Petit, Christine
2009-01-01
Loud sound exposure is a significant cause of hearing loss worldwide. We asked whether a lack of vezatin, a ubiquitous adherens junction protein, could result in noise-induced hearing loss. Conditional mutant mice bearing non-functional vezatin alleles only in the sensory cells of the inner ear (hair cells) indeed exhibited irreversible hearing loss after only one minute of exposure to a 105 dB broadband sound. In addition, mutant mice spontaneously underwent late onset progressive hearing loss and vestibular dysfunction related to substantial hair cell death. We establish that vezatin is an integral membrane protein with two adjacent transmembrane domains and cytoplasmic N- and C-terminal regions. Late recruitment of vezatin at junctions between MDCKII cells indicates that the protein does not play a role in the formation of junctions, but rather participates in their stability. Moreover, we show that vezatin directly interacts with radixin in its actin-binding conformation. Accordingly, we provide evidence that vezatin associates with actin filaments at cell–cell junctions. Our results emphasize the overlooked role of the junctions between hair cells and their supporting cells in the auditory epithelium's resilience to sound trauma. PMID:20049712
Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations
Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia
2016-01-01
Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning (“opponent channel model”). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC. PMID:26545618
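The opponent channel idea above, two broadly tuned channels preferring opposite hemifields, whose combined activity encodes azimuth independently of overall response gain, can be sketched numerically. The tuning function and its parameters here are illustrative, not the paper's fitted model.

```python
import math

# Toy opponent-channel code: two broadly (sigmoidally) tuned channels
# prefer opposite hemifields. Their normalized difference varies
# monotonically with azimuth, while a common gain change (e.g. from a
# change in sound level) cancels exactly. Parameters are illustrative.

def channel(azimuth_deg, preferred_sign, gain=1.0):
    """Broad sigmoidal tuning favoring one hemifield (+1 right, -1 left)."""
    return gain / (1 + math.exp(-preferred_sign * azimuth_deg / 30.0))

def opponent_readout(azimuth_deg, gain=1.0):
    r = channel(azimuth_deg, +1, gain)
    l = channel(azimuth_deg, -1, gain)
    return (r - l) / (r + l)  # gain-invariant opponent signal

for az in (-90, 0, 90):
    print(az, round(opponent_readout(az), 3))   # monotonic in azimuth
# The readout is unchanged when the overall gain (sound level) varies:
print(round(opponent_readout(45, gain=1.0), 6),
      round(opponent_readout(45, gain=3.0), 6))
```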
Interior and exterior sound field control using general two-dimensional first-order sources.
Poletti, M A; Abhayapala, T D
2011-01-01
Reproduction of a given sound field interior to a circular loudspeaker array without producing an undesirable exterior sound field is an unsolved problem over a broad band of frequencies. At low frequencies, by implementing the Kirchhoff-Helmholtz integral using a circular discrete array of line-source loudspeakers, a sound field can be recreated within the array with no exterior sound field, provided that the loudspeakers have variable first-order azimuthal polar responses that combine a two-dimensional (2D) monopole and a radially oriented 2D dipole. This paper examines the performance of circular discrete arrays of line-source loudspeakers which also include a tangential dipole, providing general variable-directivity responses in azimuth. It is shown that at low frequencies the tangential dipoles are not required, but that near and above the Nyquist frequency the tangential dipoles can both improve the interior accuracy and reduce the exterior sound field. The additional dipoles extend the useful range of the array by around an octave.
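A general 2D first-order azimuthal response of the kind discussed can be written D(phi) = a + b cos(phi) + c sin(phi), where the sine term is the tangential dipole. A short sketch showing how that term steers the pattern; the coefficients are illustrative, not the paper's optimized values.

```python
import math

# General 2D first-order polar response: monopole term a, radial dipole
# term b*cos(phi), tangential dipole term c*sin(phi). Coefficients are
# illustrative.

def first_order_response(phi, a, b, c):
    return a + b * math.cos(phi) + c * math.sin(phi)

# A cardioid using only the monopole and radial dipole ...
print(round(first_order_response(0.0, 0.5, 0.5, 0.0), 3))      # 1.0 on-axis
print(round(first_order_response(math.pi, 0.5, 0.5, 0.0), 3))  # 0.0 rear null
# ... and the same cardioid rotated 90 degrees by the tangential dipole.
print(round(first_order_response(math.pi / 2, 0.5, 0.0, 0.5), 3))  # 1.0
```

This is why adding the tangential dipole yields fully general variable-directivity responses in azimuth: any first-order pattern orientation is reachable by choosing b and c.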
Portable system for auscultation and lung sound analysis.
Nabiev, Rustam; Glazova, Anna; Olyinik, Valery; Makarenkova, Anastasiia; Makarenkov, Anatolii; Rakhimov, Abdulvosid; Felländer-Tsai, Li
2014-01-01
A portable system for auscultation and lung sound analysis has been developed, comprising an original electronic stethoscope coupled with mobile devices and special algorithms for the automated analysis of pulmonary sound signals. The system is intended for monitoring the health status of patients with various pulmonary diseases.
An Improved Theoretical Aerodynamic Derivatives Computer Program for Sounding Rockets
NASA Technical Reports Server (NTRS)
Barrowman, J. S.; Fan, D. N.; Obosu, C. B.; Vira, N. R.; Yang, R. J.
1979-01-01
The paper outlines a Theoretical Aerodynamic Derivatives (TAD) computer program for computing the aerodynamics of sounding rockets. TAD outputs include normal force, pitching moment and rolling moment coefficient derivatives as well as center-of-pressure locations as a function of the flight Mach number. TAD is applicable to slender finned axisymmetric vehicles at small angles of attack in subsonic and supersonic flows. TAD improvement efforts include extending Mach number regions of applicability, improving accuracy, and replacement of some numerical integration algorithms with closed-form integrations. Key equations used in TAD are summarized and typical TAD outputs are illustrated for a second-stage Tomahawk configuration.
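The component buildup behind a code like TAD follows the Barrowman approach at subsonic speeds: each component contributes a normal force coefficient derivative CN_alpha and a center of pressure, and the vehicle center of pressure is their CN_alpha-weighted average. A sketch with assumed component values (these numbers are illustrative, not Tomahawk data):

```python
# Barrowman-style component buildup (subsonic sketch): total CN_alpha is
# the sum of component derivatives, and the vehicle center of pressure is
# the CN_alpha-weighted average of component CP locations.

def total_cp(components):
    """components: list of (cn_alpha_per_rad, cp_location_m) pairs,
    locations measured aft from the nose tip."""
    cna_total = sum(cna for cna, _ in components)
    xcp = sum(cna * x for cna, x in components) / cna_total
    return cna_total, xcp

# Conical nose: CN_alpha = 2 per radian for any nose shape, CP at 2/3 of
# an assumed 0.7 m length. Fin set values are hypothetical.
nose = (2.0, 0.467)
fins = (8.0, 2.10)
cna, xcp = total_cp([nose, fins])
print(cna, round(xcp, 3))  # 10.0 1.773
```

The same weighted-average structure underlies the pitching moment derivative, since each component's moment arm is measured from a common reference point.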
Sound Transmission Loss Through a Corrugated-Core Sandwich Panel with Integrated Acoustic Resonators
NASA Technical Reports Server (NTRS)
Schiller, Noah H.; Allen, Albert R.; Zalewski, Bart F.; Beck, Benjamin S.
2014-01-01
The goal of this study is to better understand the effect of structurally integrated resonators on the transmission loss of a sandwich panel. The sandwich panel has facesheets over a corrugated core, which creates long aligned chambers that run parallel to the facesheets. When ports are introduced through the facesheet, the long chambers within the core can be used as low-frequency acoustic resonators. Because the resonators are integrated within the structure, they contribute to the static load-bearing capability of the panel while also attenuating noise. An analytical model of a panel with embedded resonators is derived and compared with numerical simulations. Predictions show that acoustic resonators can significantly improve the transmission loss of the sandwich panel around the natural frequency of the resonators. In one configuration with 0.813 m long internal chambers, the diffuse field transmission loss is improved by more than 22 dB around 104 Hz. The benefit is achieved with no added mass or volume relative to the baseline structure. The embedded resonators are effective because they radiate sound out of phase with the structure, resulting in destructive interference and hence less transmitted sound power.
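The reported resonance frequency is consistent with a simple quarter-wave estimate: a chamber closed at one end and ported at the other resonates near c/(4L). A minimal sketch (assuming c ≈ 343 m/s for air at room temperature; end corrections and internal geometry are neglected, which plausibly accounts for the small offset from the reported 104 Hz):

```python
# Quarter-wave estimate for a ported chamber of length L.
# Assumes air at ~20 C; end corrections neglected, so this is only an estimate.
c = 343.0   # speed of sound in air, m/s (assumed)
L = 0.813   # internal chamber length from the abstract, m
f_quarter = c / (4.0 * L)  # ~105.5 Hz, near the reported 104 Hz peak
```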
The impact of sound-field systems on learning and attention in elementary school classrooms.
Dockrell, Julie E; Shield, Bridget
2012-08-01
The authors evaluated the installation and use of sound-field systems to investigate the impact of these systems on teaching and learning in elementary school classrooms. The evaluation included acoustic surveys of classrooms, questionnaire surveys of students and teachers, and experimental testing of students with and without the use of sound-field systems. In this article, the authors report students' perceptions of classroom environments and objective data evaluating change in performance on cognitive and academic assessments with amplification over a 6-month period. Teachers were positive about the use of sound-field systems in improving children's listening and attention to verbal instructions. Over time, students in amplified classrooms did not differ from those in nonamplified classrooms in their reports of listening conditions, nor did their performance differ in measures of numeracy, reading, or spelling. Use of sound-field systems in the classrooms resulted in significantly larger gains in performance in the number of correct items on the nonverbal measure of speed of processing and the measure of listening comprehension. Analysis controlling for classroom acoustics indicated that students' listening comprehension scores improved significantly in amplified classrooms with poorer acoustics but not in amplified classrooms with better acoustics. Both teacher ratings and student performance on standardized tests indicated that sound-field systems improved performance on children's understanding of spoken language. However, academic attainments showed no benefits from the use of sound-field systems. Classroom acoustics were a significant factor influencing the efficacy of sound-field systems; children in classes with poorer acoustics benefited in listening comprehension, whereas there was no additional benefit for children in classrooms with better acoustics.
INTEGRATED HUMAN EXPOSURE SOURCE-TO-DOSE MODELING
The NERL human exposure research program is designed to provide a sound, scientifically-based approach to understanding how people are actually exposed to pollutants and the factors and pathways influencing exposure and dose. This research project serves to integrate and incorpo...
Classification of biological cells using a sound wave based flow cytometer
NASA Astrophysics Data System (ADS)
Strohm, Eric M.; Gnyawali, Vaskar; Van De Vondervoort, Mia; Daghighi, Yasaman; Tsai, Scott S. H.; Kolios, Michael C.
2016-03-01
A flow cytometer that uses sound waves to determine the size of biological cells is presented. In this system, a microfluidic device made of polydimethylsiloxane (PDMS) was developed to hydrodynamically flow-focus cells in single file through a target area. An ultrasound transducer with a 375 MHz center frequency was integrated into the microfluidic device; aligned opposite the transducer was a pulsed 532 nm laser focused into the device by a 10x objective. Each passing cell was insonified with a high-frequency ultrasound pulse and irradiated with the laser. The resulting ultrasound and photoacoustic waves from each cell were analyzed using signal processing methods, where features in the power spectra were compared to theoretical models to calculate the cell size. Two cell lines with different size distributions were used to test the system: acute myeloid leukemia (AML) cells and melanoma cells. Over 200 cells were measured using this system. The average calculated diameter of the AML cells was 10.4 +/- 2.5 μm using ultrasound, and 11.4 +/- 2.3 μm using photoacoustics. The average diameter of the melanoma cells was 16.2 +/- 2.9 μm using ultrasound, and 18.9 +/- 3.5 μm using photoacoustics. The cell sizes calculated using ultrasound and photoacoustic methods agreed with measurements using a Coulter Counter, where the AML cells were 9.8 +/- 1.8 μm and the melanoma cells were 16.0 +/- 2.5 μm. These results demonstrate a high-speed method of assessing cell size using sound waves, which is an alternative to traditional flow cytometry techniques.
Heart Sound Biometric System Based on Marginal Spectrum Analysis
Zhao, Zhidong; Shen, Qinqin; Ren, Fangqin
2013-01-01
This work presents a heart sound biometric system based on marginal spectrum analysis, which is a new feature extraction technique for identification purposes. This heart sound identification system is comprised of signal acquisition, pre-processing, feature extraction, training, and identification. Experiments on the selection of the optimal values for the system parameters are conducted. The results indicate that the new spectrum coefficients result in a significant increase in the recognition rate of 94.40% compared with that of the traditional Fourier spectrum (84.32%) based on a database of 280 heart sounds from 40 participants. PMID:23429515
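A marginal spectrum is normally obtained from the Hilbert-Huang transform by integrating the time-frequency Hilbert spectrum over time. As an illustrative sketch only (it skips the empirical mode decomposition step used in the full method and applies the Hilbert transform directly to the raw signal; all function names are mine), instantaneous amplitude can be accumulated into frequency bins with plain NumPy:

```python
import numpy as np

def analytic_signal(x):
    """FFT-based Hilbert transform: returns the complex analytic signal."""
    n = len(x)
    X = np.fft.fft(x)
    w = np.zeros(n)
    w[0] = 1.0
    if n % 2 == 0:
        w[n // 2] = 1.0
        w[1:n // 2] = 2.0
    else:
        w[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * w)

def marginal_spectrum(x, fs, nbins=64):
    """Accumulate instantaneous amplitude into frequency bins:
    a time-integrated (marginal) view of the Hilbert spectrum."""
    z = analytic_signal(x)
    amp = np.abs(z)[:-1]
    phase = np.unwrap(np.angle(z))
    inst_f = np.diff(phase) * fs / (2.0 * np.pi)  # instantaneous frequency, Hz
    edges = np.linspace(0.0, fs / 2.0, nbins + 1)
    spec, _ = np.histogram(inst_f, bins=edges, weights=amp)
    return edges[:-1], spec

# sanity check on a pure tone: the marginal spectrum peaks at the tone frequency
fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2.0 * np.pi * 50.0 * t)
freqs, spec = marginal_spectrum(x, fs)
peak_freq = freqs[np.argmax(spec)]
```

For a real heart sound the feature vector would be built from such bin energies; the paper's own coefficient definition may differ in detail.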
Integrated System Health Management: Foundational Concepts, Approach, and Implementation
NASA Technical Reports Server (NTRS)
Figueroa, Fernando
2009-01-01
A sound basis to guide the community in the conception and implementation of ISHM (Integrated System Health Management) capability in operational systems was provided. The concept of an "ISHM Model of a System" and a related architecture defined as a unique Data, Information, and Knowledge (DIaK) architecture were described. The ISHM architecture is independent of the typical system architecture, which is based on grouping physical elements that are assembled to make up a subsystem, with subsystems combining to form systems, and so on. It was emphasized that ISHM capability needs to be implemented first at a low functional capability level (FCL), i.e., a limited ability to detect anomalies, diagnose, determine consequences, etc. As algorithms and tools to augment or improve the FCL are identified, they should be incorporated into the system. This means that the architecture, DIaK management, and software must be modular and standards-based in order to enable systematic augmentation of the FCL (no ad hoc modifications). A set of technologies (and tools) needed to implement ISHM was described. One essential tool is a software environment to create the ISHM Model. The software environment encapsulates DIaK and an infrastructure to focus DIaK on determining health (detect anomalies, determine causes, determine effects, and provide integrated awareness of the system to the operator). The environment includes gateways to communicate in accordance with standards, especially the IEEE 1451.1 Standard for Smart Sensors and Actuators.
Sound source localization inspired by the ears of the Ormia ochracea
NASA Astrophysics Data System (ADS)
Kuntzman, Michael L.; Hall, Neal A.
2014-07-01
The parasitoid fly Ormia ochracea has the remarkable ability to locate crickets using audible sound. This ability is remarkable because the fly's hearing mechanism spans only 1.5 mm, which is 50× smaller than the wavelength of sound emitted by the cricket. The hearing mechanism is, for all practical purposes, a point in space with no significant interaural time or level differences to draw from. It has been discovered that evolution has empowered the fly with a hearing mechanism that utilizes multiple vibration modes to amplify interaural time and level differences. Here, we present a fully integrated, man-made mimic of the Ormia's hearing mechanism capable of replicating the remarkable sound localization ability of this special fly. A silicon-micromachined prototype is presented which uses multiple piezoelectric sensing ports to simultaneously transduce two orthogonal vibration modes of the sensing structure, thereby enabling simultaneous measurement of sound pressure and pressure gradient.
Noise and stress: a comprehensive approach.
Westman, J C; Walters, J R
1981-01-01
The fundamental purposes of hearing are to alert and to warn. As a result, sound directly evokes emotions and actions. The processing of sound by the brain is outlined to provide a biological and psychological basis for understanding the way in which sound can become a human stressor. The auditory orienting response, startle reflex, and defensive response translate sound stimuli into action and sometimes into stress-induced bodily changes through "fight or flight" neural mechanisms. The literature on the health and mental health effects of noise is then reviewed in the context of an integrated model that offers a holistic approach to noise research and public policy formulation. The thesis of this paper is that research on, and efforts to prevent or minimize, the harmful effects of noise have suffered from the lack of a full appreciation of the ways in which humans process and react to sound. PMID:7333243
Determination of equivalent sound speed profiles for ray tracing in near-ground sound propagation.
Prospathopoulos, John M; Voutsinas, Spyros G
2007-09-01
The determination of appropriate sound speed profiles for modeling near-ground propagation is investigated using a ray tracing model capable of performing axisymmetric calculations of the sound field around an isolated source. Eigenrays are traced using an iterative procedure that integrates the trajectory equations for each ray launched from the source in a specific direction. Sound energy losses are calculated by introducing into the equations appropriate coefficients representing the effects of ground and atmospheric absorption and the interaction with atmospheric turbulence. The model is validated against analytical and numerical predictions of other methodologies for simple cases, as well as against measurements for nonrefractive atmospheric environments. A systematic investigation of near-ground propagation in downward- and upward-refracting atmospheres is made using experimental data. Guidelines for the suitable simulation of the wind velocity profile are derived by correlating predictions with measurements.
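The trajectory integration described above can be illustrated with a minimal 2D sketch (not the authors' model; the function names and the simple Euler stepping are my assumptions). In a stratified medium where the effective sound speed c(z) varies only with height, a ray's elevation angle evolves with curvature -(1/c)(dc/dz)cos(theta), so a sound speed that increases with height bends rays back toward the ground (downward refraction):

```python
import numpy as np

def trace_ray(c, dcdz, z0, theta0, ds=0.5, n_steps=2000):
    """Trace one ray in a stratified medium by Euler-integrating the
    2D trajectory equations; theta is elevation from horizontal (rad)."""
    x, z, th = 0.0, z0, theta0
    path = [(x, z)]
    for _ in range(n_steps):
        x += ds * np.cos(th)
        z += ds * np.sin(th)
        # ray curvature in a stratified medium:
        # d(theta)/ds = -(1/c) * (dc/dz) * cos(theta)
        th -= ds * (dcdz(z) / c(z)) * np.cos(th)
        path.append((x, z))
    return np.array(path)

# downward-refracting example: sound speed increases with height
refracted = trace_ray(c=lambda z: 340.0 + 0.1 * z, dcdz=lambda z: 0.1,
                      z0=2.0, theta0=np.radians(5.0))
```

With dc/dz = 0 the same routine produces a straight ray, which is a convenient sanity check; a full eigenray search would iterate over launch angles until the ray hits the receiver.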
Effect of ultrasonic cavitation on measurement of sound pressure using hydrophone
NASA Astrophysics Data System (ADS)
Thanh Nguyen, Tam; Asakura, Yoshiyuki; Okada, Nagaya; Koda, Shinobu; Yasuda, Keiji
2017-07-01
Effect of ultrasonic cavitation on sound pressure at the fundamental, second harmonic, and first ultraharmonic frequencies was investigated from low to high ultrasonic intensities. The driving frequencies were 22, 304, and 488 kHz. Sound pressure was measured using a needle-type hydrophone and ultrasonic cavitation was estimated from the broadband integrated pressure (BIP). With increasing square root of electric power applied to a transducer, the sound pressure at the fundamental frequency linearly increased initially, dropped at approximately the electric power of cavitation inception, and afterward increased again. The sound pressure at the second harmonic frequency was detected just below the electric power of cavitation inception. The first ultraharmonic component appeared at around the electric power of cavitation inception at 304 and 488 kHz. However, at 22 kHz, the first ultraharmonic component appeared at a higher electric power than that of cavitation inception.
Chulach, Teresa; Gagnon, Marilou
2016-03-01
Nurse practitioners (NPs), as advanced practice nurses, have evolved over the years to become recognized as an important and growing trend in Canada and worldwide. In spite of sound evidence as to the effectiveness of NPs in primary care and other care settings, role implementation and integration continue to pose significant challenges. This article utilizes postcolonial theory, as articulated by Homi Bhabha, to examine and challenge traditional ideologies and structures that have shaped the development, implementation and integration of the NP role to this day. Specifically, we utilize Bhabha's concepts of third space, hybridity, identity and agency in order to further conceptualize the nurse practitioner role, to examine how the role challenges some of the inherent assumptions within the healthcare system and to explore how development of each of these concepts may prove useful in the integration of nurse practitioners within the healthcare system. Our analysis casts light on the importance of a broader, power structure analysis and illustrates how colonial assumptions operating within our current healthcare system entrench, expand and re-invent, as well as mask, the structures and practices that serve to impede nurse practitioners' full integration and contributions. Suggestions are made for future analysis and research. © 2015 John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Bhat, Biliyar N.
2008-01-01
The Ares I Crew Launch Vehicle Upper Stage is designed and developed based on sound systems engineering principles. Systems engineering starts with the Concept of Operations and mission requirements, which in turn determine the launch system architecture and its performance requirements. The Ares I Upper Stage is designed and developed to meet these requirements. Designers depend on support from materials, processes, and manufacturing during the design, development, and verification of subsystems and components. The requirements relative to reliability, safety, operability, and availability are also dependent on materials availability, characterization, process maturation, and vendor support. This paper discusses the roles and responsibilities of materials and manufacturing engineering during the various phases of Ares I Upper Stage development, including design and analysis, hardware development, and test and verification. Emphasis is placed on how materials, processes, and manufacturing support is integrated across the Upper Stage Project, both horizontally and vertically. In addition, the paper describes the approach used to ensure compliance with materials, processes, and manufacturing requirements during the project cycle, with a focus on hardware systems design and development.
NASA Astrophysics Data System (ADS)
Koike, T.; Lawford, R. G.; Cripe, D.
2012-12-01
It is critically important to recognize and co-manage the fundamental linkages across the water-dependent domains; land use, including deforestation; ecosystem services; and food-, energy- and health-securities. Sharing coordinated, comprehensive and sustained observations and information for sound decision-making is a first step; however, to take full advantage of these opportunities, we need to develop an effective collaboration mechanism for working together across different disciplines, sectors and agencies, and thereby gain a holistic view of the continuity between environmentally sustainable development, climate change adaptation and enhanced resilience. To promote effective multi-sectoral, interdisciplinary collaboration based on coordinated and integrated efforts, the Global Earth Observation System of Systems (GEOSS) is now developing a "GEOSS Water Cycle Integrator (WCI)", which integrates "Earth observations", "modeling", "data and information", "management systems" and "education systems". GEOSS/WCI sets up "work benches" by which partners can share data, information and applications in an interoperable way, exchange knowledge and experiences, deepen mutual understanding and work together effectively to ultimately respond to issues of both mitigation and adaptation. (A work bench is a virtual geographical or phenomenological space where experts and managers collaborate to use information to address a problem within that space). GEOSS/WCI enhances the coordination of efforts to strengthen individual, institutional and infrastructure capacities, especially for effective interdisciplinary coordination and integration. GEO has established the GEOSS Asian Water Cycle Initiative (AWCI) and GEOSS African Water Cycle Coordination Initiative (AfWCCI). 
Through regional, inter-disciplinary, multi-sectoral integration and inter-agency coordination in Asia and Africa, GEOSS/WCI is now leading to effective actions and public awareness in support of water security and sustainable development.
Night-day-night sleep-wakefulness monitoring by ambulatory integrated circuit memories.
Yamamoto, M; Nakao, M; Katayama, N; Waku, M; Suzuki, K; Irokawa, K; Abe, M; Ueno, T
1999-04-01
A medium-sized portable digital recorder with fully integrated circuit (IC) memories for sleep monitoring has been developed. It has five amplifiers for EEG, EMG, EOG, ECG, and a signal of body acceleration or respiration sound; four event markers; an 8-channel A/D converter; a digital signal processor (DSP); 192 Mbytes of IC flash memory; and batteries. The whole system weighs 1200 g including batteries and fits into a small bag worn on the subject's waist or carried in the hand. The sampling rate for each input channel is programmable through the DSP. This apparatus is valuable for continuously monitoring sleep-wakefulness states over 24 h, making night-day-night recordings possible in a hospital, at home, or in a car.
Simulation of prenatal maternal sounds in NICU incubators: a pilot safety and feasibility study.
Panagiotidis, John; Lahav, Amir
2010-10-01
This pilot study evaluated the safety and feasibility of an innovative audio system for transmitting maternal sounds to NICU incubators. A sample of biological sounds, consisting of voice and heartbeat, were recorded from a mother of a premature infant admitted to our unit. The maternal sounds were then played back inside an unoccupied incubator via a specialized audio system originated and compiled in our lab. We performed a series of evaluations to determine the safety and feasibility of using this system in NICU incubators. The proposed audio system was found to be safe and feasible, meeting criteria for humidity and temperature resistance, as well as for safe noise levels. Simulation of maternal sounds using this system seems achievable and applicable and received local support from medical staff. Further research and technology developments are needed to optimize the design of the NICU incubators to preserve the acoustic environment of the womb.
Fundamental plasma emission involving ion sound waves
NASA Technical Reports Server (NTRS)
Cairns, Iver H.
1987-01-01
The theory for fundamental plasma emission by the three-wave processes L ± S → T (where L, S, and T denote Langmuir, ion sound, and transverse waves, respectively) is developed. Kinematic constraints on the characteristics and growth lengths of waves participating in the wave processes are identified. In addition, the rates, path-integrated wave temperatures, and limits on the brightness temperature of the radiation are derived.
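The kinematic constraints follow from frequency and wavevector matching for the three-wave coalescence/decay. A standard statement of these matching conditions (not reproduced verbatim from the paper) is:

```latex
% Frequency and wavevector matching for L \pm S \to T:
\omega_T = \omega_L \pm \omega_S, \qquad
\mathbf{k}_T = \mathbf{k}_L \pm \mathbf{k}_S,
% with the usual dispersion relations
\omega_L^2 = \omega_p^2 + 3 k_L^2 V_e^2, \qquad
\omega_S = k_S v_s, \qquad
\omega_T^2 = \omega_p^2 + k_T^2 c^2 .
% Since \omega_S \ll \omega_L, the emission is near the plasma frequency
% (\omega_T \approx \omega_p, hence "fundamental" emission), so
% k_T = \sqrt{\omega_T^2 - \omega_p^2}\,/\,c \ll k_L,
% and wavevector matching forces k_S \approx \mp k_L .
```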
Saito, Kaoru; Nakamura, Kazuhiko; Ueta, Mutsuyuki; Kurosawa, Reiko; Fujiwara, Akio; Kobayashi, Hill Hiroki; Nakayama, Masaya; Toko, Ayako; Nagahama, Kazuyo
2015-11-01
We have developed a system that streams and archives live sound from remote areas across Japan via an unmanned automatic camera. The system was used to carry out pilot bird censuses in woodland; this allowed us to examine the use of live sound transmission and the role of social media as a mediator in remote scientific monitoring. The system has been streaming sounds 8 h per day for more than five years. We demonstrated that: (1) the transmission of live sound from a remote woodland could be used effectively to monitor birds in a remote location; (2) the simultaneous involvement of several participants via Internet Relay Chat to listen to live sound transmissions could enhance the accuracy of census data collection; and (3) interactions through Twitter allowed members of the public to engage or help with the remote monitoring of birds and experience inaccessible nature through the use of novel technologies.
Low-cost compact ECG with graphic LCD and phonocardiogram system design.
Kara, Sadik; Kemaloğlu, Semra; Kirbaş, Samil
2006-06-01
To date, many different ECG devices have been made in developing countries. In this study, a low-cost, small-sized, portable ECG device with an LCD screen and a phonocardiograph were designed. With the designed system, heart sounds acquired synchronously with the ECG signal can be heard with high sensitivity. The system consists of three units: Unit 1, the ECG circuit with its filter and amplifier structure; Unit 2, the heart sound acquisition circuit; and Unit 3, the microcontroller, graphic LCD, and the unit that sends the ECG signal to a computer. Because of its small size and other benefits, the system can be used easily in different departments of a hospital, in health institutions and clinics, in village clinics, and also in homes. In this way, it is possible to see the ECG signal and hear the heart sounds synchronously and sensitively. In conclusion, because the sounds are played through a small speaker, both the doctor and the patient can hear them; thus, the patient hears his or her own heart sounds and is informed by the doctor about his or her health condition.
Sayles, Jesse S; Baggio, Jacopo A
2017-01-15
Governance silos are settings in which different organizations work in isolation and avoid sharing information and strategies. Silos are a fundamental challenge for environmental planning and problem solving, which generally require collaboration. Silos can be overcome by creating governance networks. Studying the structure and function of these networks is important for understanding how to create institutional arrangements that can respond to the biophysical dynamics of a specific natural resource system (i.e., social-ecological, or institutional fit). Using the case of salmon restoration in a sub-basin of Puget Sound, USA, we assess network integration, considering three different reasons for network collaborations (i.e., mandated, funded, and shared interest relationships) and analyze how these different collaboration types relate to productivity based on practitioners' assessments. We also illustrate how specific and targeted network interventions might enhance the network. To do so, we use a mixed methods approach that combines quantitative social network analysis (SNA) and qualitative interview analysis. Overall, the sub-basin's governance network is fairly well integrated, but several concerning gaps exist. Funded, mandated, and shared interest relationships lead to different network patterns. Mandated relationships are associated with lower productivity than shared interest relationships, highlighting the benefit of genuine collaboration in collaborative watershed governance. Lastly, quantitative and qualitative data comparisons strengthen recent calls to incorporate geographic space and the role of individual actors versus organizational culture into natural resource governance research using SNA. Copyright © 2016 Elsevier Ltd. All rights reserved.
Senkowski, Daniel; Saint-Amour, Dave; Kelly, Simon P; Foxe, John J
2007-07-01
In everyday life, we continuously and effortlessly integrate the multiple sensory inputs from objects in motion. For instance, the sound and the visual percept of vehicles in traffic provide us with complementary information about the location and motion of vehicles. Here, we used high-density electrical mapping and local auto-regressive average (LAURA) source estimation to study the integration of multisensory objects in motion as reflected in event-related potentials (ERPs). A randomized stream of naturalistic multisensory-audiovisual (AV), unisensory-auditory (A), and unisensory-visual (V) "splash" clips (i.e., a drop falling and hitting a water surface) was presented among non-naturalistic abstract motion stimuli. The visual clip onset preceded the "splash" onset by 100 ms for multisensory stimuli. For naturalistic objects early multisensory integration effects beginning 120-140 ms after sound onset were observed over posterior scalp, with distributed sources localized to occipital cortex, temporal lobule, insular, and medial frontal gyrus (MFG). These effects, together with longer latency interactions (210-250 and 300-350 ms) found in a widespread network of occipital, temporal, and frontal areas, suggest that naturalistic objects in motion are processed at multiple stages of multisensory integration. The pattern of integration effects differed considerably for non-naturalistic stimuli. Unlike naturalistic objects, no early interactions were found for non-naturalistic objects. The earliest integration effects for non-naturalistic stimuli were observed 210-250 ms after sound onset including large portions of the inferior parietal cortex (IPC). As such, there were clear differences in the cortical networks activated by multisensory motion stimuli as a consequence of the semantic relatedness (or lack thereof) of the constituent sensory elements.
Differential neural contributions to native- and foreign-language talker identification
Perrachione, Tyler K.; Pierrehumbert, Janet B.; Wong, Patrick C.M.
2009-01-01
Humans are remarkably adept at identifying individuals by the sound of their voice, a behavior supported by the nervous system’s ability to integrate information from voice and speech perception. Talker-identification abilities are significantly impaired when listeners are unfamiliar with the language being spoken. Recent behavioral studies describing the language-familiarity effect implicate functionally integrated neural systems for speech and voice perception, yet specific neuroscientific evidence demonstrating the basis for such integration has not yet been shown. Listeners in the present study learned to identify voices speaking a familiar (native) or unfamiliar (foreign) language. The talker-identification performance of neural circuitry in each cerebral hemisphere was assessed using dichotic listening. To determine the relative contribution of circuitry in each hemisphere to ecological (binaural) talker identification abilities, we compared the predictive capacity of dichotic performance on binaural performance across languages. We found listeners’ right-ear (left hemisphere) performance to be a better predictor of overall accuracy in their native language than a foreign one. The enhanced predictive capacity of the classically language-dominant left-hemisphere on overall talker-identification accuracy demonstrates functionally integrated neural systems for speech and voice perception during natural talker identification. PMID:19968445
Woodruff Carr, Kali; Fitzroy, Ahren B; Tierney, Adam; White-Schwoch, Travis; Kraus, Nina
2017-01-01
Speech communication involves integration and coordination of sensory perception and motor production, requiring precise temporal coupling. Beat synchronization, the coordination of movement with a pacing sound, can be used as an index of this sensorimotor timing. We assessed adolescents' synchronization and capacity to correct asynchronies when given online visual feedback. Variability of synchronization while receiving feedback predicted phonological memory and reading sub-skills, as well as maturation of cortical auditory processing; less variable synchronization during the presence of feedback tracked with maturation of cortical processing of sound onsets and resting gamma activity. We suggest the ability to incorporate feedback during synchronization is an index of intentional, multimodal timing-based integration in the maturing adolescent brain. Precision of temporal coding across modalities is important for speech processing and literacy skills that rely on dynamic interactions with sound. Synchronization employing feedback may prove useful as a remedial strategy for individuals who struggle with timing-based language learning impairments. Copyright © 2016 Elsevier Inc. All rights reserved.
Noise Reduction in Breath Sound Files Using Wavelet Transform Based Filter
NASA Astrophysics Data System (ADS)
Syahputra, M. F.; Situmeang, S. I. G.; Rahmat, R. F.; Budiarto, R.
2017-04-01
The development of science and technology in the field of healthcare increasingly provides convenience in diagnosing respiratory system problems. Recording breath sounds is one example of these developments. Breath sounds are recorded using a digital stethoscope and then stored in an audio file. These breath sounds are analyzed by health practitioners to diagnose the symptoms of disease or illness. However, the breath sounds are not free from interference signals. Therefore, a noise filter or signal interference reduction system is required so that the breath sound component that contains the information signal can be clarified. In this study, we designed a wavelet transform based filter using a Daubechies wavelet with four wavelet transform coefficients. In testing on ten types of breath sound data, the largest SNR, 74.3685 dB, was obtained for bronchial sounds.
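A Daubechies wavelet with four coefficients is the D4 wavelet. As an illustrative sketch only (one decomposition level, periodic boundary handling, and soft thresholding are my assumptions, not details from the paper), a D4-based denoising step can be written with plain NumPy:

```python
import numpy as np

# Daubechies D4 analysis filters: the "four wavelet transform coefficients"
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))  # low-pass
g = np.array([h[3], -h[2], h[1], -h[0]])                             # high-pass

def dwt_step(x):
    """One level of the periodic D4 transform -> (approximation, detail)."""
    n = len(x)
    idx = (np.arange(0, n, 2)[:, None] + np.arange(4)) % n
    return x[idx] @ h, x[idx] @ g

def idwt_step(a, d):
    """Exact inverse of dwt_step (the transform is orthonormal)."""
    n = 2 * len(a)
    x = np.zeros(n)
    for i in range(len(a)):
        for k in range(4):
            x[(2 * i + k) % n] += a[i] * h[k] + d[i] * g[k]
    return x

def denoise(x, threshold):
    """Soft-threshold the detail coefficients, then reconstruct."""
    a, d = dwt_step(x)
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)
    return idwt_step(a, d)

def snr_db(clean, estimate):
    """SNR in decibels, as used to score the filtered breath sounds."""
    return 10.0 * np.log10(np.sum(clean**2) / np.sum((clean - estimate) ** 2))
```

Without thresholding, the transform reconstructs the signal exactly, which is a convenient correctness check; practical use would cascade several decomposition levels.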
49 CFR Appendix E to Part 222 - Requirements for Wayside Horns
Code of Federal Regulations, 2011 CFR
2011-10-01
..., indicates that the system is not operating as intended; 4. Horn system must provide a minimum sound level of... locomotive engineer to sound the locomotive horn for at least 15 seconds prior to arrival at the crossing in...; 5. Horn system must sound at a minimum of 15 seconds prior to the train's arrival at the crossing...
49 CFR Appendix E to Part 222 - Requirements for Wayside Horns
Code of Federal Regulations, 2010 CFR
2010-10-01
..., indicates that the system is not operating as intended; 4. Horn system must provide a minimum sound level of... locomotive engineer to sound the locomotive horn for at least 15 seconds prior to arrival at the crossing in...; 5. Horn system must sound at a minimum of 15 seconds prior to the train's arrival at the crossing...
The Integration and Applications of Organic Thin Film Transistors and Ferroelectric Polymers
NASA Astrophysics Data System (ADS)
Hsu, Yu-Jen
Organic thin film transistors and ferroelectric polymer (polyvinylidene difluoride) sheet material are integrated to form various sensors for stress/strain, acoustic wave, and infrared (heat) sensing applications. Unlike silicon-based transistors, organic thin film transistors can be fabricated and processed at room temperature and integrated with a variety of substrates. Polyvinylidene difluoride (PVDF), meanwhile, exhibits ferroelectric properties that are highly useful for sensor applications. The wide frequency bandwidth (0.001 Hz to 10 GHz), vast dynamic range (100n to 10M psi), and high elastic compliance (up to 3 percent) make PVDF a more suitable candidate than ceramic piezoelectric materials for thin and flexible sensor applications. However, its low Curie temperature may have impeded its integration with silicon technology. Organic thin film transistors do not have this processing-temperature limitation and hence can serve as transimpedance amplifiers to convert the charge signal generated by PVDF into a current signal that is more measurable and less affected by downstream parasitics. Piezoelectric sensors are useful for a range of applications, but passive arrays suffer from crosstalk and signal attenuation, which have complicated the development of array-based PVDF sensors. We have used organic field effect transistors, which are compatible with the low Curie temperature of the flexible piezoelectric polymer PVDF, to monolithically fabricate transimpedance amplifiers directly on the sensor surface and convert the piezoelectric charge signal into a current signal that can be detected even in the presence of parasitic capacitances. The device couples the voltage generated by the PVDF film under strain into the gate of the organic thin film transistors (OFETs) using an arrangement that allows the full piezoelectric voltage to couple to the channel while also increasing the charge retention time.
A bipolar detector is created by using a UV-ozone treatment to shift the threshold voltage and increase the transistor current under both compressive and tensile strain. An array of strain sensors that maps the strain field on a PVDF film surface is demonstrated in this work. Experience with the strain sensor inspired a tone analyzer built on a distributed resonator architecture on a tensioned piezoelectric PVDF sheet, which serves as both the resonator and the detection element. Two architectures are demonstrated: one uses distributed, directly addressed elements as a proof of concept, and the other monolithically integrates organic thin film transistor-based transimpedance amplifiers with the PVDF sheet to convert the piezoelectric charge signal into a current signal for future applications such as sound field imaging. The PVDF sheet is instrumented along its length, and the amplitude response at 15 sites is recorded and analyzed as a function of the excitation frequency. The dominant frequency component of an incoming sound is determined by linear system decomposition of the time-averaged response of the sheet, with no time-domain detection. Our design determines the spectral composition of a sound using the mechanical signal processing provided by the amplitude response, eliminating the need for time-domain electronic signal processing of the incoming signal. The concepts of the PVDF strain sensor and the tone analyzer motivate an active matrix microphone formed by integrating organic thin film transistors with a freestanding piezoelectric polymer sheet. Localized acoustic pressure detection is enabled by switch transistors and local transimpedance amplification built into the active matrix architecture.
The detection frequency ranges from DC to 15 kHz; the bandwidth is extended using an architecture that provides virtually zero gate/source and gate/drain capacitance at the sensing transistors and low overlap capacitance at the switch transistors. A series of measurements demonstrates localized acoustic wave detection, high-pitch sound diffraction pattern mapping, and directional listening. This system permits the direct visualization of a two-dimensional sound field in a format that was previously inaccessible. In addition to piezoelectricity, PVDF also exhibits pyroelectricity, which is likewise valuable for sensing. An integration of PVDF and OFETs for infrared (IR) heat sensing is demonstrated to prove the concept of converting a pyroelectric charge signal to an electric current signal. The basic pyroelectricity of the PVDF sheet is first examined before building an organic-transistor-integrated IR sensor. Two architectures are then designed and tested: the first uses a structure similar to the PVDF strain sensor, and the second uses a PVDF capacitor to gate the integrated OFETs. The conversion from a pyroelectric signal to a transistor current signal is observed and characterized. This design provides a flexible, gain-tunable IR heat sensor.
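The charge-to-voltage coupling described in this work can be illustrated with a back-of-the-envelope estimate. This sketch is not the authors' design; the material constants and geometry below are typical literature figures assumed purely for illustration.

```python
# Illustrative estimate of the voltage a strained PVDF film couples into an
# OFET gate: stress -> piezoelectric charge -> voltage across the film
# capacitance. All numerical values are assumed, not taken from the work.

def pvdf_gate_voltage(strain, youngs_modulus_pa, d31_c_per_n, film_area_m2,
                      film_capacitance_f):
    """Open-circuit voltage available at the transistor gate."""
    stress = youngs_modulus_pa * strain            # Pa (N/m^2)
    charge = d31_c_per_n * stress * film_area_m2   # piezoelectric charge, C
    return charge / film_capacitance_f             # V = Q / C

# Assumed values: E ~ 3 GPa, d31 ~ 23 pC/N, 1 cm^2 film, 1 nF capacitance.
v = pvdf_gate_voltage(strain=0.001, youngs_modulus_pa=3e9,
                      d31_c_per_n=23e-12, film_area_m2=1e-4,
                      film_capacitance_f=1e-9)
print(f"coupled gate voltage ~ {v:.2f} V")
```

A volt-scale signal at 0.1 percent strain is why coupling the full piezoelectric voltage to the channel, rather than dividing it across parasitic capacitances, matters.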
Integrating MPI and deduplication engines: a software architecture roadmap.
Baksi, Dibyendu
2009-03-01
The objective of this paper is to clarify the major concepts related to the architecture and design of patient identity management software systems so that an implementor looking to solve a specific integration problem in the context of a Master Patient Index (MPI) and a deduplication engine can address the relevant issues. The ideas presented are illustrated in the context of a reference use case from the Integrating the Healthcare Enterprise Patient Identifier Cross-referencing (IHE PIX) profile. Sound software engineering principles using the design paradigm of model-driven architecture (MDA) are applied to define different views of the architecture. The main contribution of the paper is a clear software architecture roadmap for implementors of patient identity management systems. Conceptual design in terms of static and dynamic views of the interfaces is provided as an example of a platform-independent model. This makes the roadmap applicable to any specific MPI solution, deduplication library, or software platform. Stakeholders in need of integration of MPIs and deduplication engines can evaluate vendor-specific solutions and software platform technologies in terms of fundamental concepts and can make informed decisions that preserve investment. This also allows freedom from vendor lock-in and the ability to kick-start integration efforts based on a solid architecture.
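The integration seam the paper describes, an MPI that delegates duplicate detection to a pluggable deduplication engine and answers PIX-style cross-reference queries, can be sketched as a minimal platform-independent model. Every class and method name here is hypothetical; a real engine would use probabilistic record linkage rather than this toy scorer.

```python
# Hypothetical PIM sketch: an MPI cross-referencing local IDs across
# ID domains, with matching delegated to a swappable deduplication engine.
from dataclasses import dataclass

@dataclass(frozen=True)
class PatientRecord:
    domain: str        # assigning authority / patient-ID domain
    local_id: str
    name: str
    birth_date: str

class DeduplicationEngine:
    """Toy pairwise scorer standing in for a probabilistic matcher."""
    def match(self, a: PatientRecord, b: PatientRecord) -> float:
        score = 0.0
        score += 0.6 if a.name.lower() == b.name.lower() else 0.0
        score += 0.4 if a.birth_date == b.birth_date else 0.0
        return score

class MasterPatientIndex:
    """Links (domain, local_id) pairs judged to be the same person."""
    def __init__(self, engine: DeduplicationEngine, threshold: float = 0.8):
        self.engine, self.threshold = engine, threshold
        self.records = {}   # (domain, local_id) -> PatientRecord
        self.links = {}     # (domain, local_id) -> linked-key set

    def register(self, rec: PatientRecord) -> None:
        key = (rec.domain, rec.local_id)
        self.records[key] = rec
        self.links.setdefault(key, {key})
        for other_key, other in list(self.records.items()):
            if other_key != key and self.engine.match(rec, other) >= self.threshold:
                merged = self.links[key] | self.links[other_key]
                for k in merged:
                    self.links[k] = merged

    def pix_query(self, domain, local_id, target_domain):
        """Return the cross-referenced ID in target_domain, if linked."""
        for d, i in self.links.get((domain, local_id), set()):
            if d == target_domain:
                return i
        return None

mpi = MasterPatientIndex(DeduplicationEngine())
mpi.register(PatientRecord("hospitalA", "123", "Jane Doe", "1980-05-01"))
mpi.register(PatientRecord("clinicB", "987", "Jane Doe", "1980-05-01"))
print(mpi.pix_query("hospitalA", "123", "clinicB"))
```

Because the engine sits behind a single `match` interface, a vendor library can replace the toy scorer without touching the MPI, which is the vendor-neutrality the roadmap argues for.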
NASA Astrophysics Data System (ADS)
Yang, Jihee; Ihas, Gary G.; Ekdahl, Dan
2017-10-01
It is common for a physical system to resonate at a particular frequency whose value depends on physical parameters that may change in time. Often, one would like to automatically track this signal as the frequency changes, measuring, for example, its amplitude. In scientific research, one would also like to use standard methods, such as lock-in amplifiers, to improve the signal-to-noise ratio. We present a complete He II second sound system that uses positive feedback to generate a sinusoidal signal of constant amplitude via automatic gain control. This signal is used to produce temperature/entropy waves (second sound) in superfluid helium-4 (He II). A lock-in amplifier limits the oscillation to a desirable frequency and demodulates the received sound signal. Using this tracking system, a second sound signal probed turbulent decay in He II. We present results showing that the tracking system is more reliable than a conventional fixed-frequency method; there is less correlation with temperature (frequency) fluctuation when the tracking system is used.
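The demodulation step a lock-in amplifier performs can be sketched in a few lines: multiply the received signal by quadrature references at the drive frequency and average, which rejects components at other frequencies. The sample rate, drive frequency, and amplitude below are invented for illustration.

```python
# Minimal software lock-in: recover the amplitude of the signal component at
# a known reference frequency. Parameters are illustrative, not from the paper.
import math

def lock_in_amplitude(signal, fs, f_ref):
    """Amplitude of the component of `signal` at f_ref (fs = sample rate)."""
    n = len(signal)
    x = sum(s * math.cos(2 * math.pi * f_ref * i / fs) for i, s in enumerate(signal))
    y = sum(s * math.sin(2 * math.pi * f_ref * i / fs) for i, s in enumerate(signal))
    # In-phase and quadrature averages; the factor 2 restores peak amplitude.
    return 2 * math.hypot(x / n, y / n)

fs, f0, amp = 10_000.0, 250.0, 0.7   # sample rate, drive frequency, amplitude
sig = [amp * math.sin(2 * math.pi * f0 * i / fs + 0.3) for i in range(10_000)]
print(round(lock_in_amplitude(sig, fs, f0), 3))
```

The phase offset (0.3 rad here) does not matter because both quadratures are combined, which is what lets a tracking lock-in follow a drifting resonance.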
Numerical Simulation of Noise from Supersonic Jets Passing Through a Rigid Duct
NASA Technical Reports Server (NTRS)
Kandula, Max
2012-01-01
The generation, propagation, and radiation of sound from a perfectly expanded Mach 2.5 cold supersonic jet flowing through an enclosed rigid-walled duct with an upstream J-deflector have been numerically simulated with the aid of the OVERFLOW Navier-Stokes CFD code. A one-equation turbulence model is considered. While the near-field sound sources are computed by the CFD code, the far-field sound is evaluated by a Kirchhoff surface integral formulation. Predictions of the far-field directivity of the OASPL (overall sound pressure level) agree satisfactorily with experimental data previously reported by the author. Calculations also suggest that there is significant entrainment of air into the duct, with the mass flow rate of entrained air being about three times the jet exit mass flow rate.
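For reference, the Kirchhoff surface integral method referred to above is standard; in its textbook form (not reproduced from this paper, and with signs depending on the convention for the surface normal), the far-field pressure follows from surface values of the pressure and its derivatives evaluated at the retarded time:

```latex
% Classical Kirchhoff formulation for a stationary control surface S with
% outward normal n; tau = t - r/c is the retarded time, r the distance from
% the surface element to the observer at x, c the ambient sound speed.
p(\mathbf{x}, t) = \frac{1}{4\pi} \oint_S \left[
    \frac{p}{r^{2}}\frac{\partial r}{\partial n}
  + \frac{1}{c\,r}\frac{\partial r}{\partial n}\frac{\partial p}{\partial \tau}
  - \frac{1}{r}\frac{\partial p}{\partial n}
\right]_{\tau = t - r/c} \mathrm{d}S
```

The CFD solution supplies p and its normal derivative on S; the integral then propagates the sound to the far field without resolving the acoustics on the CFD grid all the way out.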
Recognition of Isolated Non-Speech Sounds.
1987-05-31
A stapler could be presented within a set of paper-shuffling sounds and within a set of sounds characteristic of entering a room. The former context should act in a top-down manner to suggest a stapler event for the sound, whereas the latter context will suggest that a light has been switched on.
Goverman, I L
1994-11-01
Group Health Cooperative of Puget Sound (GHC), a large staff-model health maintenance organization based in Seattle, is redesigning its information systems to provide the systems and information needed to support its quality agenda. Long-range planning for GHC's information resources was done in three phases. In assessment, interviews, surveys, and a benchmarking effort identified strengths and weaknesses of the existing information systems. We concluded that we needed to improve clinical care and patient management systems and enhance health plan applications. In direction setting, we developed six objectives (for example, approach information systems in a way that is consistent with quality improvement principles). Detailed planning was used to define projects, timing, and resource allocations. Some of the most important efforts in the resulting five-year plan include the development of (1) a computerized patient record; (2) a provider-based clinical workstation for access to patient information, order entry, results reporting, guidelines, and reminders; (3) a comprehensive set of patient management and service quality systems; (4) reengineered structures, policies, and processes within the health plan, supported by a complete set of integrated information systems; (5) a standardized, high-capacity communications network to provide linkages both within GHC and among its business partners; and (6) a revised oversight structure for information services, which forms partnerships with users. A quality focus ensured that each project not only produced its own benefits but also supported the larger organizational goals associated with "total" quality.
50 CFR 27.71 - Motion or sound pictures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 50 Wildlife and Fisheries 6 2010-10-01 2010-10-01 false Motion or sound pictures. 27.71 Section 27... (CONTINUED) THE NATIONAL WILDLIFE REFUGE SYSTEM PROHIBITED ACTS Disturbing Violations: Light and Sound Equipment § 27.71 Motion or sound pictures. The taking or filming of any motion or sound pictures on a...
50 CFR 27.71 - Motion or sound pictures.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 50 Wildlife and Fisheries 9 2012-10-01 2012-10-01 false Motion or sound pictures. 27.71 Section 27... (CONTINUED) THE NATIONAL WILDLIFE REFUGE SYSTEM PROHIBITED ACTS Disturbing Violations: Light and Sound Equipment § 27.71 Motion or sound pictures. The taking or filming of any motion or sound pictures on a...
50 CFR 27.71 - Motion or sound pictures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 50 Wildlife and Fisheries 8 2011-10-01 2011-10-01 false Motion or sound pictures. 27.71 Section 27... (CONTINUED) THE NATIONAL WILDLIFE REFUGE SYSTEM PROHIBITED ACTS Disturbing Violations: Light and Sound Equipment § 27.71 Motion or sound pictures. The taking or filming of any motion or sound pictures on a...
Musical Sophistication and the Effect of Complexity on Auditory Discrimination in Finnish Speakers.
Dawson, Caitlin; Aalto, Daniel; Šimko, Juraj; Vainio, Martti; Tervaniemi, Mari
2017-01-01
Musical experiences and native language are both known to affect auditory processing. The present work aims to disentangle the influences of native language phonology and musicality on behavioral and subcortical sound feature processing in a population of musically diverse Finnish speakers, as well as to investigate the specificity of enhancement from musical training. Finnish speakers are highly sensitive to duration cues, since in Finnish vowel and consonant duration determine word meaning. Using a correlational approach with a set of behavioral sound feature discrimination tasks, brainstem recordings, and a musical sophistication questionnaire, we find no evidence that musical sophistication is associated with more precise duration processing in Finnish speakers, either in the auditory brainstem response or in behavioral tasks. More musically sophisticated speakers do, however, show enhanced pitch discrimination compared to Finnish speakers with less musical experience, and greater duration modulation in a complex task. These results are consistent with a ceiling effect for certain sound features corresponding to the phonology of the native language, leaving an opportunity for music experience-based enhancement of sound features not explicitly encoded in the language (such as pitch, which is not explicitly encoded in Finnish). Finally, the pattern of duration modulation in more musically sophisticated Finnish speakers suggests integrated feature processing for greater efficiency in real-world musical situations. These results have implications for research into the specificity of plasticity in the auditory system, as well as the interaction of specific language features with musical experience.
NASA Astrophysics Data System (ADS)
Gover, Bradford Noel
The problem of hands-free speech pick-up is introduced, and we identify how details of the spatial properties of the reverberant field may be useful for the enhanced design of microphone arrays. From this motivation, a broadly applicable measurement system has been developed for the analysis of directional and spatial variations in reverberant sound fields. Two spherical, 32-element arrays of microphones are used to generate narrow beams over two different frequency ranges, together covering 300-3300 Hz. Using an omnidirectional loudspeaker as excitation in a room, the pressure impulse response in each of 60 steering directions is measured. Through analysis of these responses, the variation of arriving energy with direction is studied. The system was first validated in simple sound fields in an anechoic chamber and in a reverberation chamber. The system characterizes these sound fields as expected, both quantitatively through numerical descriptors and qualitatively from plots of the arriving energy versus direction. The system was then used to measure the sound fields in several actual rooms. Through both qualitative and quantitative output, these sound fields were seen to be highly anisotropic, influenced greatly by the direct sound and early-arriving reflections. Furthermore, the rate of sound decay was not independent of direction, sound being absorbed more rapidly in some directions than in others. These results are discussed in the context of the original motivation, and methods for their application to enhanced speech pick-up using microphone arrays are proposed.
AUDIS wear: a smartwatch based assistive device for ubiquitous awareness of environmental sounds.
Mielke, Matthias; Bruck, Rainer
2016-08-01
A multitude of assistive devices is available for deaf people (i.e. deaf, deafened, and hard of hearing). Besides hearing and communication aids, devices to access environmental sounds are available commercially. But the devices have two major drawbacks: 1. they are targeted at indoor environments (e.g. home or work), and 2. only specific events are supported (e.g. the doorbell or telephone). Recent research shows that important sounds can occur in all contexts and that the interests in sounds are diverse. These drawbacks can be tackled by using modern information and communication technology that enables the development of new and improved assistive devices. The smartwatch, a new computing platform in the form of a wristwatch, offers new potential for assistive technology. Its design promises a perfect integration into various different social contexts and thus blends perfectly into the user's life. Based on a smartwatch and algorithms from pattern recognition, a prototype for awareness of environmental sounds is presented here. It observes the acoustic environment of the user and detects environmental sounds. A vibration is triggered when a sound is detected and the type of sound is shown on the display. The design of the prototype was discussed with deaf people in semi-structured interviews, leading to a set of implications for the design of such a device.
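The observe-detect-notify loop such a device runs can be sketched naively: frame the microphone stream, flag frames whose energy exceeds a noise floor, and raise a notification. This is not the authors' recognizer; a real system classifies the sound type with trained pattern-recognition models, and all parameters below are invented.

```python
# Naive energy-threshold event detector standing in for the pattern-recognition
# stage of a sound-awareness device. Frame length and threshold are assumptions.
def rms(frame):
    """Root-mean-square energy of one frame of samples."""
    return (sum(x * x for x in frame) / len(frame)) ** 0.5

def detect_events(samples, frame_len=160, threshold=0.1):
    """Yield (frame_index, energy) for frames exceeding the noise floor."""
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        energy = rms(samples[i:i + frame_len])
        if energy > threshold:
            yield i // frame_len, energy

quiet = [0.001] * 160
loud = [0.5 if i % 2 else -0.5 for i in range(160)]  # stand-in "doorbell" burst
events = list(detect_events(quiet + loud + quiet))
print(events)
```

In the prototype described above, a detection like this would trigger the smartwatch vibration, with the classified sound type shown on the display.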
Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations.
Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia
2016-01-01
Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning ("opponent channel model"). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC.
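The opponent-channel readout described above can be illustrated with a toy model: two broadly tuned populations preferring opposite hemifields, with azimuth decoded from their normalized difference. The sigmoidal tuning curves and the 30-degree slope constant are illustrative assumptions, not fits to the fMRI data.

```python
# Toy opponent-channel model: azimuth is read out from the normalized
# difference of two hemifield-tuned channels, so a common level-dependent
# gain cancels. Tuning shapes and constants are invented for illustration.
import math

def channel_response(azimuth_deg, preferred_side):
    """Broad sigmoidal hemifield tuning; +1 = right-preferring, -1 = left."""
    return 1.0 / (1.0 + math.exp(-preferred_side * azimuth_deg / 30.0))

def opponent_signal(azimuth_deg, gain=1.0):
    right = gain * channel_response(azimuth_deg, +1)
    left = gain * channel_response(azimuth_deg, -1)
    # Normalizing by the summed activity removes the common gain, which is
    # the sense in which this readout is robust to sound-level variations.
    return (right - left) / (right + left)

# Monotonic in azimuth and identical at both gains:
for az in (-60, 0, 60):
    print(az, round(opponent_signal(az, gain=1.0), 3),
          round(opponent_signal(az, gain=2.0), 3))
```

The individual channels here are broad and level-dependent, yet the difference signal is monotonic in azimuth and gain-free, mirroring the level-invariant decoding the study reports.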
Integrated risk framework for onsite wastewater treatment systems.
Carroll, Steven; Goonetilleke, Ashantha; Thomas, Evan; Hargreaves, Megan; Frost, Ray; Dawes, Les
2006-08-01
Onsite wastewater treatment systems (OWTS) are becoming increasingly important for the treatment and dispersal of effluent in new urbanised developments that are not serviced by centralised wastewater collection and treatment systems. However, the current standards and guidelines adopted by many local authorities for assessing suitable site and soil conditions for OWTS are increasingly coming under scrutiny due to the public health and environmental impacts caused by poorly performing systems, in particular septic tank-soil adsorption systems. In order to achieve sustainable onsite wastewater treatment with minimal impacts on the environment and public health, more appropriate means of assessment are required. This paper highlights an integrated risk based approach for assessing the inherent hazards associated with OWTS in order to manage and mitigate the environmental and public health risks inherent with onsite wastewater treatment. In developing a sound and cohesive integrated risk framework for OWTS, several key issues must be recognised. These include the inclusion of relevant stakeholders throughout framework development, the integration of scientific knowledge, data and analysis with risk assessment and management ideals, and identification of the appropriate performance goals for successful management and mitigation of associated risks. These issues were addressed in the development of the risk framework to provide a generic approach to assessing risk from OWTS. The utilisation of the developed risk framework for achieving more appropriate assessment and management techniques for OWTS is presented in a case study for the Gold Coast region, Queensland State, Australia.
The Audible Human Project: Modeling Sound Transmission in the Lungs and Torso
NASA Astrophysics Data System (ADS)
Dai, Zoujun
Auscultation has been used qualitatively by physicians for hundreds of years to aid in the monitoring and diagnosis of pulmonary diseases. Alterations in the structure and function of the pulmonary system that occur in disease or injury often give rise to measurable changes in lung sound production and transmission. Numerous acoustic measurements have revealed the differences of breath sounds and transmitted sounds in the lung under normal and pathological conditions. Compared to the extensive cataloging of lung sound measurements, the mechanism of sound transmission in the pulmonary system and how it changes with alterations of lung structural and material properties has received less attention. A better understanding of sound transmission and how it is altered by injury and disease might improve interpretation of lung sound measurements, including new lung imaging modalities that are based on an array measurement of the acoustic field on the torso surface via contact sensors or are based on a three-dimensional measurement of the acoustic field throughout the lungs and torso using magnetic resonance elastography. A long-term goal of the Audible Human Project (AHP) is to develop a computational acoustic model that would accurately simulate generation, transmission and noninvasive measurement of sound and vibration within the pulmonary system and torso caused by both internal (e.g. respiratory function) and external (e.g. palpation) sources. The goals of this dissertation research, fitting within the scope of the AHP, are to develop specific improved theoretical understandings, computational algorithms and experimental methods aimed at transmission and measurement. The research objectives undertaken in this dissertation are as follows. (1) Improve theoretical modeling and experimental identification of viscoelasticity in soft biological tissues. (2) Develop a poroviscoelastic model for lung tissue vibroacoustics.
(3) Improve lung airway acoustics modeling and its coupling to the lung parenchyma; and (4) Develop improved techniques in array acoustic measurement, on the torso surface, of sound transmitted through the pulmonary system and torso. Tissue Viscoelasticity. Two experimental approaches were used to identify shear viscoelasticity. The first approach is to directly estimate the frequency-dependent surface wave speed and then to optimize the coefficients in an assumed viscoelastic model type. The second approach is to measure the complex-valued frequency response function (FRF) between the excitation location and points at known radial distances. The FRF has embedded in it frequency-dependent information about both surface wave phase speed and attenuation that can be used to directly estimate the complex shear modulus. The coefficients in an assumed viscoelastic tissue model type can then be optimized. Poroviscoelasticity Model for Lung Vibro-acoustics. A poroviscoelastic model based on the Biot theory of wave propagation in porous media was used for compression waves in the lungs. This model predicts a fast compression wave speed close to the one predicted by the effective medium theory at low frequencies, and an additional slow compression wave due to the out-of-phase motion of the air and the lung parenchyma. Both compression wave speeds vary with frequency. The fast compression wave speed and attenuation were measured on an excised pig lung under two different transpulmonary pressures. Good agreement was achieved between experimental observations and theoretical predictions. Sound Transmission in Airways and Coupling to Lung Parenchyma. A computer-generated airway tree was simplified to 255 segments and integrated into the lung geometry from the Visible Human Male for numerical simulations. Acoustic impedance boundary conditions were applied at the ends of the terminal segments to represent the unmodeled downstream airway segments.
Experiments were also carried out on a preserved pig lung and similar trends of lung surface velocity distribution were observed between the experiments and simulations. This approach provides a feasible way of simplifying the airway tree and greatly reduces the computation time. Acoustic Measurements of Sound Transmission in Human Subjects. Scanning laser Doppler vibrometry (SLDV) was used as a gold standard for transmitted sound measurements on a human subject. A low cost piezodisk sensor array was also constructed as an alternative to SLDV. The advantages and disadvantages of each technique are discussed.
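The second identification approach above, estimating the complex shear modulus from FRF-derived phase speed and attenuation, can be sketched numerically. The plane-shear-wave relation used here (G* = rho (omega/k*)^2 with k* = omega/c - i*alpha) is a standard simplification, and the surface-wave corrections used in practice, along with all parameter values, are assumptions for illustration only.

```python
# Sketch: recover the complex shear modulus G* = G' + iG'' from measured
# frequency-dependent phase speed c and attenuation alpha, via the complex
# wavenumber of a plane shear wave. Values are soft-tissue-like assumptions.
import math

def complex_shear_modulus(rho, omega, phase_speed, attenuation):
    k = complex(omega / phase_speed, -attenuation)  # complex wavenumber, 1/m
    return rho * (omega / k) ** 2                   # Pa; real part = storage,
                                                    # imaginary part = loss

# Illustrative values: rho = 1000 kg/m^3, f = 100 Hz, c = 2.5 m/s, alpha = 50 Np/m
G = complex_shear_modulus(1000.0, 2 * math.pi * 100.0, 2.5, 50.0)
print(f"G' = {G.real:.0f} Pa, G'' = {G.imag:.0f} Pa")
```

Repeating this at each measured frequency gives the frequency-dependent modulus to which the coefficients of an assumed viscoelastic model type can then be fit.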
The TAME Project: Towards improvement-oriented software environments
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Rombach, H. Dieter
1988-01-01
Experience from a dozen years of analyzing software engineering processes and products is summarized as a set of software engineering and measurement principles that argue for software engineering process models that integrate sound planning and analysis into the construction process. In the TAME (Tailoring A Measurement Environment) project at the University of Maryland, such an improvement-oriented software engineering process model was developed that uses the goal/question/metric paradigm to integrate the constructive and analytic aspects of software development. The model provides a mechanism for formalizing the characterization and planning tasks, controlling and improving projects based on quantitative analysis, learning in a deeper and more systematic way about the software process and product, and feeding the appropriate experience back into the current and future projects. The TAME system is an instantiation of the TAME software engineering process model as an ISEE (integrated software engineering environment). The first in a series of TAME system prototypes has been developed. An assessment of experience with this first limited prototype is presented including a reassessment of its initial architecture.
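The goal/question/metric linkage the TAME model formalizes can be sketched as a small data structure; the goal, questions, and metric values below are invented examples, not taken from the project.

```python
# Minimal GQM sketch: goals are refined into questions, questions into
# metrics. Walking the tree top-down yields the measurement plan;
# interpreting metrics bottom-up answers the questions and assesses the goal.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    value: float

@dataclass
class Question:
    text: str
    metrics: list

@dataclass
class Goal:
    purpose: str
    questions: list

goal = Goal(
    purpose="Improve defect detection during code review",   # invented example
    questions=[
        Question("How many defects escape review?",
                 [Metric("post-release defects per KLOC", 1.8)]),
        Question("How much effort does review cost?",
                 [Metric("review hours per KLOC", 4.2)]),
    ],
)

for q in goal.questions:
    for m in q.metrics:
        print(f"{q.text} -> {m.name} = {m.value}")
```

Keeping the constructive process and this analytic tree in one model is what lets measurement results feed back into planning for the next project, as the paper describes.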
SPAIDE: A Real-time Research Platform for the Clarion CII/90K Cochlear Implant
NASA Astrophysics Data System (ADS)
Van Immerseel, L.; Peeters, S.; Dykmans, P.; Vanpoucke, F.; Bracke, P.
2005-12-01
SPAIDE (sound-processing algorithm integrated development environment) is a real-time platform from Advanced Bionics Corporation (Sylmar, Calif, USA) that facilitates advanced research on sound-processing and electrical-stimulation strategies with the Clarion CII and 90K implants. The platform is meant for laboratory testing. SPAIDE is conceptually based on a clear separation of sound-processing and stimulation strategies and, in particular, on the distinction between sound-processing and stimulation channels and electrode contacts. The development environment has a user-friendly interface for specifying sound-processing and stimulation strategies, and includes the possibility of simulating the electrical stimulation. SPAIDE allows real-time sound capture from a file or the audio input of a PC, sound processing and application of the stimulation strategy, and streaming of the results to the implant. The platform covers a broad range of research applications, from noise reduction and mimicking of normal hearing, through complex (simultaneous) stimulation strategies, to psychophysics. The hardware setup consists of a personal computer, an interface board, and a speech processor. The software is both expandable and to a great extent reusable in other applications.
49 CFR 210.25 - Measurement criteria and procedures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... American National Standard Institute Standards, “Method for Measurement of Sound Pressure Levels,” (ANSI S1... measurement indicating a violation. (ii) The sound level measurement system shall be checked not less than... calibrator of the microphone coupler type designed for the sound level measurement system in use shall be...
Yes, You Can Learn Foreign Language Pronunciation by Sight!
ERIC Educational Resources Information Center
Richmond, Edmun B.; And Others
1979-01-01
Describes the Envelope Vowel Approximation System (EVAS), a foreign language pronunciation learning system which allows students to see as well as hear a pedagogical model of a sound, and to compare their own utterances of that sound to the model as they pronounce the same sound. (Author/CMV)
Poppe, L.J.; Danforth, W.W.; McMullen, K.Y.; Parker, Castle E.; Doran, E.F.
2011-01-01
Detailed bathymetric maps of the sea floor in Long Island Sound are of great interest to the Connecticut and New York research and management communities because of this estuary's ecological, recreational, and commercial importance. The completed, geologically interpreted digital terrain models (DTMs), ranging in area from 12 to 293 square kilometers, provide important benthic environmental information, yet many applications require a geographically broader perspective. For example, individual surveys are of limited use for the planning and construction of cross-sound infrastructure, such as cables and pipelines, or for the testing of regional circulation models. To address this need, we integrated 12 multibeam and 2 LIDAR (Light Detection and Ranging) contiguous bathymetric DTMs, produced by the National Oceanic and Atmospheric Administration during charting operations, into one dataset that covers much of eastern Long Island Sound and extends into westernmost Block Island Sound. The new dataset is adjusted to mean lower low water, is gridded to 4-meter resolution, and is provided in UTM Zone 18 NAD83 and geographic WGS84 projections. This resolution is adequate for interpreting sea-floor features and processes, yet the dataset remains small enough to be queried and manipulated with standard Geographic Information System programs and to allow for future growth. Natural features visible in the grid include exposed bedrock outcrops, boulder lag deposits of submerged moraines, sand-wave fields, and scour depressions that reflect the strength of the oscillating and asymmetric tidal currents. Bedform asymmetry allows interpretations of net sediment transport. Anthropogenic artifacts visible in the bathymetric data include a dredged channel, shipwrecks, dredge spoils, mooring anchors, prop-scour depressions, buried cables, and bridge footings.
Together the merged data reveal a larger, more continuous perspective of bathymetric topography than previously available, providing a fundamental framework for research and resource management activities in this major east-coast estuary.
Weng, Yu-Chi; Fujiwara, Takeshi
2011-06-01
In order to develop a sound material-cycle society, cost-effective municipal solid waste (MSW) management systems are required for the municipalities in the context of the integrated accounting system for MSW management. Firstly, this paper attempts to establish an integrated cost-benefit analysis (CBA) framework for evaluating the effectiveness of MSW management systems. In this paper, detailed cost/benefit items due to waste problems are particularly clarified. The stakeholders of MSW management systems, including the decision-makers of the municipalities and the citizens, are expected to reconsider the waste problems in depth and thus take wise actions with the aid of the proposed CBA framework. Secondly, focusing on the financial cost, this study develops a generalized methodology to evaluate the financial cost-effectiveness of MSW management systems, simultaneously considering the treatment technological levels and policy effects. The impacts of the influencing factors on the annual total and average financial MSW operation and maintenance (O&M) costs are analyzed in the Taiwanese case study with a demonstrative short-term future projection of the financial costs under scenario analysis. The established methodology would contribute to the evaluation of the current policy measures and to the modification of the policy design for the municipalities. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.
Uses of Integrated Media Instruction in a Self-Contained Class for Children with Mild Disabilities.
ERIC Educational Resources Information Center
Narita, Shigeru
This conference paper describes the use of integrated media-oriented instruction in a self-contained class at Yokohama Municipal Elementary School in Japan. Three students with mild disabilities, in grades 5 and 6, participated in the project. Integrated media (IM) is defined as the linkage of text, sound, video, graphics, and the computer in such…
The Developmental Process of Vowel Integration as Found in Children in Grades 1-3.
ERIC Educational Resources Information Center
Bentz, Darrell; Szymczuk, Mike
A study was designed to investigate the auditory-visual integrative abilities of primary grade children for five long vowels and five short vowels. The Vowel Integration Test (VIT), composed of 35 nonsense words having all the long and short vowel sounds, was administered to students in 64 schools over a period of two years. Students' indications…
Engineering Internship Program Report
NASA Technical Reports Server (NTRS)
Bosch, Brian Y.
1994-01-01
Towards the end of the summer, I prepared for a presentation to the chief of the Flight Crew Support Division to obtain funding for Phase 1 of the project. I presented information on the tracking systems, David Ray presented on the POGO and PABF and the integration of the virtual reality systems, and Mike Van Chau talked about other hardware issues such as head-mounted displays, 3-D sound, gloves, graphics platforms, and other peripherals. The funding was approved, and work was to begin at the end of August on evaluating a couple of the tracking systems, integrating the graphics platform and video equipment with the POGO, and building a larger gantry for the POGO. This tour I learned how to effectively gather information and present it in a convincing form to gain funding. I explored an entirely new area of technology, virtual reality, from its most general form down to the finer details of its tracking systems. The experiences over the summer have added a lot of detail to my picture of work at the Johnson Space Center, of life within NASA, and of the many possibilities for becoming involved with the space program.
Refraction of Sound Emitted Near Solid Boundaries from a Sheared Jet
NASA Technical Reports Server (NTRS)
Dill, Loren H.; Oyedrian, Ayo A.; Krejsa, Eugene A.
1998-01-01
A mathematical model is developed to describe the sound emitted from an arbitrary point within a turbulent flow near solid boundaries. A unidirectional, transversely sheared mean flow is assumed, and the cross-section of the cold jet is of arbitrary shape. The analysis begins with Lilley's formulation of aerodynamic noise and, depending upon the specific model of turbulence used, leads via Fourier analysis to an expression for the spectral density of the intensity of the far-field sound emitted from a unit volume of turbulence. The expressions require solution of a reduced Green's function of Lilley's equation as well as certain moving-axis velocity correlations of the turbulence. Integration over the entire flow field is required in order to predict the sound emitted by the complete flow. Calculations are presented for sound emitted from a plug-flow jet exiting a semi-infinite flat duct. Polar plots of the far-field directivity show the dependence upon frequency and source position within the duct. Certain model problems are suggested to investigate the effect of duct termination, duct geometry, and mean flow shear upon the far-field sound.
NASA Astrophysics Data System (ADS)
Roozen, N. B.; Muellner, H.; Labelle, L.; Rychtáriková, M.; Glorieux, C.
2015-06-01
Structural details and workmanship can cause considerable differences in sound insulation properties of timber frame partitions. In this study, the influence of panel fastening is investigated experimentally by means of standardized sound reduction index measurements, supported by detailed scanning laser Doppler vibrometry. In particular the effect of the number of screws used to fasten the panels to the studs, and the tightness of the screws, is studied using seven different configurations of lightweight timber frame building elements. In the frequency range from 300 to 4000 Hz, differences in the weighted sound reduction index RW as large as 10 dB were measured, suggesting that the method of fastening can have a large impact on the acoustic performance of building elements. Using the measured vibrational responses of the element, its acoustic radiation efficiency was computed numerically by means of a Rayleigh integral. The increased radiation efficiency partly explains the reduced sound reduction index. Loosening the screws, or reducing the number of screws, lowers the radiation efficiency, and significantly increases the sound reduction index of the partition.
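The radiation-efficiency computation described in this abstract can be sketched numerically. The following is a minimal illustration, not the authors' code: it assumes a baffled, flat element discretized into equal-area patches and uses the standard radiation-resistance-matrix form of the discrete Rayleigh integral; the function name and parameters are invented for the sketch.

```python
import numpy as np

def radiation_efficiency(v, xy, dS, freq, rho=1.21, c=343.0):
    """Radiation efficiency of a baffled element from complex normal
    velocities v (one per patch, centres xy, equal patch area dS),
    via the discrete Rayleigh integral in its radiation-resistance-
    matrix form."""
    omega = 2 * np.pi * freq
    k = omega / c
    # pairwise distances between patch centres
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    # R_ij = (w^2 rho dS^2 / 4 pi c) * sin(k r_ij)/(k r_ij); np.sinc is
    # sin(pi x)/(pi x), hence the division by pi
    R = (omega**2 * rho * dS**2 / (4 * np.pi * c)) * np.sinc(k * d / np.pi)
    W_rad = 0.5 * np.real(v.conj() @ R @ v)   # radiated sound power
    S = dS * len(v)                           # total radiating area
    v2 = 0.5 * np.mean(np.abs(v) ** 2)        # spatially averaged <v^2>
    return W_rad / (rho * c * S * v2)
```

Fed with scanning-vibrometer velocity fields at each frequency band, such a routine yields the radiation-efficiency trend the paper uses to explain the measured sound reduction indices.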
A Selective Deficit in Phonetic Recalibration by Text in Developmental Dyslexia.
Keetels, Mirjam; Bonte, Milene; Vroomen, Jean
2018-01-01
Upon hearing an ambiguous speech sound, listeners may adjust their perceptual interpretation of the speech input in accordance with contextual information, like accompanying text or lipread speech (i.e., phonetic recalibration; Bertelson et al., 2003). As developmental dyslexia (DD) has been associated with reduced integration of text and speech sounds, we investigated whether this deficit becomes manifest when text is used to induce this type of audiovisual learning. Adults with DD and normal readers were exposed to ambiguous consonants halfway between /aba/ and /ada/ together with text or lipread speech. After this audiovisual exposure phase, they categorized auditory-only ambiguous test sounds. Results showed that individuals with DD, unlike normal readers, did not use text to recalibrate their phoneme categories, whereas their recalibration by lipread speech was spared. Individuals with DD demonstrated similar deficits when ambiguous vowels (halfway between /wIt/ and /wet/) were recalibrated by text. These findings indicate that DD is related to a specific letter-speech sound association deficit that extends over phoneme classes (vowels and consonants), but - as lipreading was spared - does not extend to a more general audio-visual integration deficit. In particular, these results highlight diminished reading-related audiovisual learning in addition to the commonly reported phonological problems in developmental dyslexia.
Tolaymat, Thabet; El Badawy, Amro; Sequeira, Reynold; Genaidy, Ash
2015-04-01
There is an urgent need for a trans-disciplinary approach for the collective evaluation of engineered nanomaterial (ENM) benefits and risks. Currently, research studies are mostly focused on examining effects at individual endpoints with emphasis on ENM risk effects. Less research work is pursuing the integration needed to advance the science of sustainable ENMs. Therefore, the primary objective of this article is to discuss the system-of-systems (SoS) approach as a broad and integrated paradigm to examine ENM benefits and risks to society, environment, and economy (SEE) within a sustainability context. The aims are focused on: (a) current approaches in the scientific literature and the need for a broad and integrated approach, (b) documentation of ENM SoS in terms of architecture and governing rules and practices within sustainability context, and (c) implementation plan for the road ahead. In essence, the SoS architecture is a communication vehicle offering the opportunity to track benefits and risks in an integrated fashion so as to understand the implications and make decisions about advancing the science of sustainable ENMs. In support of the SoS architecture, we propose using an analytic-based decision support system consisting of a knowledge base and analytic engine along the benefit and risk informatics routes in the SEE system to build sound decisions on what constitutes sustainable and unsustainable ENMs in spite of the existing uncertainties and knowledge gaps. The work presented herein is neither a systematic review nor a critical appraisal of the scientific literature. Rather, it is a position paper that largely expresses the views of the authors based on their expert opinion drawn from industrial and academic experience. Copyright © 2014. Published by Elsevier B.V.
Revision of civil aircraft noise data for the Integrated Noise Model (INM)
DOT National Transportation Integrated Search
1986-09-30
This report provides noise data for the Integrated Noise Model (INM) and is referred to as data base number nine. Air-to-ground sound level versus distance data for civil (and some military) aircraft in a form useful for airport noise contour computa...
NASA Astrophysics Data System (ADS)
Christ, A. J.; Marchant, D. R.
2017-12-01
During the LGM, grounded glacier ice filled the Ross Embayment and deposited glacial drift on volcanic islands and peninsulas in McMurdo Sound, as well as along coastal regions of the Transantarctic Mountains (TAM), including the McMurdo Dry Valleys and Royal Society Range. The flow geometry and retreat history of this ice remain debated, with contrasting views yielding divergent implications for both the fundamental cause of Antarctic ice expansion and the interaction and behavior of ice derived from East and West Antarctica during late Quaternary time. We present terrestrial geomorphologic evidence that enables the reconstruction of former ice elevations, ice-flow paths, and ice-marginal environments in McMurdo Sound. Radiocarbon dates of fossil algae interbedded with ice-marginal sediments provide a coherent timeline for local ice retreat. These data are integrated with marine-sediment records and multi-beam data to reconstruct late glacial dynamics of grounded ice in McMurdo Sound and the western Ross Sea. The combined dataset suggests a dominance of ice flow toward the TAM in McMurdo Sound during all phases of glaciation, with thick, grounded ice at or near its maximum extent between 19.6 and 12.3 calibrated thousands of years before present (cal. ka). Our data show no significant advance of locally derived ice from the TAM into McMurdo Sound, consistent with the assertion that Late Pleistocene expansion of grounded ice in McMurdo Sound, and throughout the wider Ross Embayment, occurred in response to lower eustatic sea level and the resulting advance of marine-based outlet glaciers and ice streams (and perhaps also reduced oceanic heat flux), rather than local increases in precipitation and ice accumulation.
Finally, when combined with allied data across the wider Ross Embayment, which show that widespread deglaciation outside McMurdo Sound did not commence until 13.1 ka, the implication is that retreat of grounded glacier ice in the Ross Embayment did not add significantly to SLR during Meltwater Pulse 1a (14.0-14.5 ka).
Virtual environment display for a 3D audio room simulation
NASA Technical Reports Server (NTRS)
Chapin, William L.; Foster, Scott H.
1992-01-01
The development of a virtual environment simulation system integrating a 3D acoustic audio model with an immersive 3D visual scene is discussed. The system complements the acoustic model and is specified to: allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, are discussed through the development of four iterative configurations.
Huh, Dongeun; Fujioka, Hideki; Tung, Yi-Chung; Futai, Nobuyuki; Paine, Robert; Grotberg, James B; Takayama, Shuichi
2007-11-27
We describe a microfabricated airway system integrated with computerized air-liquid two-phase microfluidics that enables on-chip engineering of human airway epithelia and precise reproduction of physiologic or pathologic liquid plug flows found in the respiratory system. Using this device, we demonstrate cellular-level lung injury under flow conditions that cause symptoms characteristic of a wide range of pulmonary diseases. Specifically, propagation and rupture of liquid plugs that simulate surfactant-deficient reopening of closed airways lead to significant injury of small airway epithelial cells by generating deleterious fluid mechanical stresses. We also show that the explosive pressure waves produced by plug rupture enable detection of the mechanical cellular injury as crackling sounds.
Real time sound analysis for medical remote monitoring.
Istrate, Dan; Binet, Morgan; Cheng, Sreng
2008-01-01
The increase of the aging population in Europe means more people living alone at home with an increased risk of home accidents or falls. In order to prevent or detect a distress situation in the case of an elderly person living alone, a remote monitoring system based on sound environment analysis can be used. We have already proposed a system which monitors the sound environment and identifies everyday life sounds and distress expressions in order to contribute to an alarm decision. This first system uses a classical sound card on a PC or embedded PC, allowing only a single channel to be monitored. In this paper, we propose a new architecture for the remote monitoring system, which relies on a real-time multichannel implementation based on a USB acquisition card. This structure allows monitoring eight channels in order to cover all the rooms of an apartment. In addition, an SNR estimate is currently used to adapt the recognition models to the environment.
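The abstract mentions SNR estimation driving model adaptation but does not describe the estimator. As a hedged illustration of one common heuristic (treat the quietest frames as noise and the loudest as signal plus noise), a per-channel estimate might look like the following; `frame_snr_db` and its parameters are invented for this sketch and are not the paper's method.

```python
import numpy as np

def frame_snr_db(signal, frame_len=512, noise_quantile=0.1):
    """Rough SNR estimate for one audio channel: frame the signal,
    take a low energy quantile as the noise floor and a high one as
    signal-plus-noise, and return the ratio in dB."""
    n = len(signal) // frame_len
    frames = signal[: n * frame_len].reshape(n, frame_len)
    energy = np.mean(frames**2, axis=1)           # per-frame energy
    noise = np.quantile(energy, noise_quantile)
    sig = np.quantile(energy, 1 - noise_quantile)
    # guard against silent or all-noise input before taking the log
    return 10 * np.log10(max(sig - noise, 1e-12) / max(noise, 1e-12))
```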
Makeyev, Oleksandr; Sazonov, Edward; Schuckers, Stephanie; Lopez-Meyer, Paulo; Melanson, Ed; Neuman, Michael
2007-01-01
In this paper we propose a sound recognition technique based on the limited receptive area (LIRA) neural classifier and the continuous wavelet transform (CWT). The LIRA neural classifier was developed as a multipurpose image recognition system. Previous tests of LIRA demonstrated good results in different image recognition tasks, including handwritten digit recognition, face recognition, metal surface texture recognition, and micro-workpiece shape recognition. We propose a sound recognition technique in which scalograms of sound instances serve as inputs to the LIRA neural classifier. The methodology was tested on recognition of swallowing sounds. Swallowing sound recognition may be employed in systems for automated swallowing assessment and diagnosis of swallowing disorders. The experimental results suggest high efficiency and reliability of the proposed approach.
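The abstract pairs a CWT scalogram front end with the LIRA classifier. A minimal sketch of the scalogram stage alone follows (the classifier itself is out of scope here; the Morlet wavelet and its parameters are illustrative choices, not taken from the paper):

```python
import numpy as np

def morlet_scalogram(x, fs, freqs, w0=6.0):
    """|CWT| scalogram of signal x (sample rate fs) at the given centre
    frequencies, using a Morlet wavelet. The resulting 2-D magnitude
    array is the kind of image a classifier would consume."""
    out = np.empty((len(freqs), len(x)))
    for i, f in enumerate(freqs):
        s = w0 * fs / (2 * np.pi * f)              # scale for centre freq f
        t = np.arange(-4 * s, 4 * s + 1) / s       # +-4 scale widths support
        wavelet = np.exp(1j * w0 * t) * np.exp(-t**2 / 2) / np.sqrt(s)
        out[i] = np.abs(np.convolve(x, wavelet, mode="same"))
    return out
```

Each row of the output is the response at one frequency, so a pure tone lights up the row whose centre frequency matches it.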
NASA Technical Reports Server (NTRS)
1998-01-01
An adaptive control algorithm with on-line system identification capability has been developed. One of the great advantages of this scheme is that an additional system identification mechanism, such as an uncorrelated random signal generator serving as the source of system identification, is not required. A time-varying plate-cavity system is used to demonstrate the control performance of this algorithm. The time-varying system consists of a stainless-steel plate bolted down on a rigid cavity opening, where the cavity depth was changed with respect to time. For a given externally located harmonic sound excitation, the system identification and the control are executed simultaneously to minimize the transmitted sound in the cavity. The control performance of the algorithm is examined for two cases. In the first case, with all the water drained, the external disturbance frequency is swept at 1 Hz/sec. The result shows an excellent frequency tracking capability with cavity internal sound suppression of 40 dB. In the second case, the cavity is initially empty and the water level is then raised to 3/20 full in 60 seconds while the external sound excitation is held at a fixed frequency. Hence, the cavity resonant frequency decreases and passes the external sound excitation frequency. The algorithm shows 40 dB transmitted noise suppression without compromising its system identification tracking capability.
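The abstract does not give the algorithm's equations. As a generic, hedged sketch of the on-line system-identification ingredient only, a plain LMS FIR identifier is shown below; this is a textbook building block, not the paper's coupled control/identification scheme, and all names are invented.

```python
import numpy as np

def lms_identify(x, d, n_taps=16, mu=0.01):
    """On-line LMS identification of an unknown FIR plant: x is the
    excitation seen by the plant, d the measured plant output.
    Returns the estimated tap-weight vector."""
    w = np.zeros(n_taps)
    buf = np.zeros(n_taps)                 # most recent inputs, newest first
    for xi, di in zip(x, d):
        buf = np.roll(buf, 1)
        buf[0] = xi
        e = di - w @ buf                   # instantaneous prediction error
        w += mu * e * buf                  # stochastic-gradient tap update
    return w
```

Because the update runs sample by sample, the estimate keeps tracking as the plant (here, the cavity depth) drifts, which is the behavior the abstract reports.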
NASA Technical Reports Server (NTRS)
Murray, B.
1976-01-01
The construction of a high-resolution imaging telescope experiment payload suitable for launch on an Astrobee F sounding rocket was proposed. Integration, launch, and subsequent data analysis efforts were also included. The payload utilizes major component subassemblies from the HEAO-B satellite program which were nonflight development units for that program. These were the X-ray mirror and the high-resolution imager brassboard detector. The properties of the mirror and detector were discussed. The availability of these items for a sounding rocket experiment was explored with the HEAO-B project office.
Nosal, Eva-Marie; Hodgson, Murray; Ashdown, Ian
2004-08-01
This paper explores acoustical (or time-dependent) radiosity--a geometrical-acoustics sound-field prediction method that assumes diffuse surface reflection. The literature of acoustical radiosity is briefly reviewed and the advantages and disadvantages of the method are discussed. A discrete form of the integral equation that results from meshing the enclosure boundaries into patches is presented and used in a discrete-time algorithm. Furthermore, an averaging technique is used to reduce computational requirements. To generalize to nonrectangular rooms, a spherical-triangle method is proposed as a means of evaluating the integrals over solid angles that appear in the discrete form of the integral equation. The evaluation of form factors, which also appear in the numerical solution, is discussed for rectangular and nonrectangular rooms. This algorithm and associated methods are validated by comparison of the steady-state predictions for a spherical enclosure to analytical solutions.
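As a hedged illustration of one ingredient of such radiosity algorithms, the classical point-to-patch form-factor approximation between two patches can be coded directly. This uses the textbook formula F_ij ≈ cosθ_i cosθ_j A_j / (π r²); the paper evaluates the underlying integrals more carefully (e.g., via its spherical-triangle method), so treat this as a simplified stand-in.

```python
import numpy as np

def form_factor(ci, ni, cj, nj, Aj):
    """Point-to-patch form factor from patch i (centre ci, unit normal
    ni) to patch j (centre cj, unit normal nj, area Aj). Returns 0 if
    either patch faces away from the other."""
    r = cj - ci                              # vector from patch i to patch j
    dist = np.linalg.norm(r)
    cos_i = max(np.dot(ni, r) / dist, 0.0)   # angle at the emitting patch
    cos_j = max(np.dot(nj, -r) / dist, 0.0)  # angle at the receiving patch
    return cos_i * cos_j * Aj / (np.pi * dist**2)
```

In a time-dependent radiosity solver these factors weight the energy exchanged between patches at each discrete time step, with propagation delays dist/c added separately.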
Idrobo-Ávila, Ennio H.; Loaiza-Correa, Humberto; van Noorden, Leon; Muñoz-Bolaños, Flavio G.; Vargas-Cañas, Rubiel
2018-01-01
Background: For some time now, the effects of sound, noise, and music on the human body have been studied. However, despite the research done over time, it is still not completely clear what influence, interaction, and effects sounds have on the human body. That is why it is necessary to conduct new research on this topic. Thus, in this paper, a systematic review is undertaken in order to integrate research related to several types of sound, both pleasant and unpleasant, specifically noise and music. In addition, it includes as much research as possible to give stakeholders a more general vision of the relevant elements regarding methodologies, study subjects, stimuli, analyses, and experimental designs in general. This study has been conducted in order to make a genuine contribution to this area and perhaps to raise the quality of future research on sound and its effects on ECG signals. Methods: This review was carried out by independent researchers, through three search equations, in four different databases spanning engineering, medicine, and psychology. Inclusion and exclusion criteria were applied and studies published between 1999 and 2017 were considered. The selected documents were read and analyzed independently by each group of researchers, and conclusions were subsequently agreed among all of them. Results: Despite the differences between the outcomes of the selected studies, some common factors were found among them. Thus, in noise studies where both BP and HR increased or tended to increase, it was noted that HRV (HF and LF/HF) changes with both sound and noise stimuli, whereas GSR changes with sound and musical stimuli. Furthermore, LF also showed changes with exposure to noise. Conclusion: In many cases, samples displayed a limitation in experimental design, and in diverse studies there was a lack of a control group. There was considerable variability in the presented stimuli, providing a wide overview of the effects they could produce in humans.
In the listening sessions, there were numerous examples of good practice in experimental design, such as the use of headphones and comfortable positions for study subjects, while the listening sessions lasted 20 min in most of the studies. PMID:29872400
On the Possible Detection of Lightning Storms by Elephants
Kelley, Michael C.; Garstang, Michael
2013-01-01
Simple Summary We use data similar to those taken by the International Monitoring System for the detection of nuclear explosions to determine whether elephants might be capable of detecting and locating the source of sounds generated by thunderstorms. Knowledge that elephants might be capable of responding to such storms, particularly at the end of the dry season when migrations are initiated, is of considerable interest to management and conservation. Abstract Theoretical calculations suggest that sounds produced by thunderstorms and detected at distances ≥ 100 km by a system similar to the International Monitoring System (IMS) for the detection of nuclear explosions are at sound pressure levels equal to or greater than 6 × 10⁻³ Pa. Such sound pressure levels are well within the range of elephant hearing. The frequencies carrying these sounds might allow for interaural time delays such that adult elephants could not only hear but also locate the source of these sounds. Determining whether it is possible for elephants to hear and locate thunderstorms contributes to the question of whether elephant movements are triggered or influenced by these abiotic sounds. PMID:26487406
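The abstract's pressure figure converts directly to a decibel level, and the localization cue it invokes is the interaural time delay. A short worked check follows; the 0.5 m ear separation is an illustrative number, not a value from the paper.

```python
import numpy as np

# Sound pressure level of a 6e-3 Pa signal re the 20 uPa reference:
spl_db = 20 * np.log10(6e-3 / 20e-6)   # roughly 49.5 dB

# Maximum interaural time delay for an assumed ~0.5 m ear separation,
# the cue that would let an elephant localize the source direction:
itd_max_s = 0.5 / 343.0                # roughly 1.46 ms
```

A level near 50 dB is comfortably above elephant hearing thresholds at low frequencies, and a millisecond-scale maximum delay is large compared with typical neural timing resolution, consistent with the paper's argument.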
NASA Technical Reports Server (NTRS)
Hagedorn, N. H.; Prokipius, P. R.
1977-01-01
A test program was conducted to evaluate the design of a heat and product-water removal system to be used with a fuel cell having static water removal and evaporative cooling. The program, which was conducted on a breadboard version of the system, provided a general assessment of the design in terms of operational integrity and transient stability. This assessment showed that, on the whole, the concept appears to be inherently sound but that several facets will require additional study in refining the design. These involve interactions that occur between pressure regulators in the pumping loop when they are not correctly matched, and the question of whether an ejector is necessary in the system.
Li, Qi; Yu, Hongtao; Wu, Yan; Gao, Ning
2016-08-26
The integration of multiple sensory inputs is essential for perception of the external world. The spatial factor is a fundamental property of multisensory audiovisual integration. Previous studies of the spatial constraints on bimodal audiovisual integration have mainly focused on the spatial congruity of audiovisual information. However, the effect of spatial reliability within audiovisual information on bimodal audiovisual integration remains unclear. In this study, we used event-related potentials (ERPs) to examine the effect of spatial reliability of task-irrelevant sounds on audiovisual integration. Three relevant ERP components emerged: the first at 140-200ms over a wide central area, the second at 280-320ms over the fronto-central area, and a third at 380-440ms over the parieto-occipital area. Our results demonstrate that ERP amplitudes elicited by audiovisual stimuli with reliable spatial relationships are larger than those elicited by stimuli with inconsistent spatial relationships. In addition, we hypothesized that spatial reliability within an audiovisual stimulus enhances feedback projections to the primary visual cortex from multisensory integration regions. Overall, our findings suggest that the spatial linking of visual and auditory information depends on spatial reliability within an audiovisual stimulus and occurs at a relatively late stage of processing. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Orban, David A; Soltis, Joseph; Perkins, Lori; Mellen, Jill D
2017-05-01
A clear need for evidence-based animal management in zoos and aquariums has been expressed by industry leaders. Here, we show how individual animal welfare monitoring can be combined with measurement of environmental conditions to inform science-based animal management decisions. Over the last several years, Disney's Animal Kingdom® has been undergoing significant construction and exhibit renovation, warranting institution-wide animal welfare monitoring. Animal care and science staff developed a model that tracked animal keepers' daily assessments of an animal's physical health, behavior, and responses to husbandry activity; these data were matched to different external stimuli and environmental conditions, including sound levels. A case study of a female giant anteater and her environment is presented to illustrate how this process worked. Associated with this case, several sound-reducing barriers were tested for efficacy in mitigating sound. Integrating daily animal welfare assessment with environmental monitoring can lead to a better understanding of animals and their sensory environment and positively impact animal welfare. © 2017 Wiley Periodicals, Inc.
Mission-Oriented Sensor Arrays and UAVs - a Case Study on Environmental Monitoring
NASA Astrophysics Data System (ADS)
Figueira, N. M.; Freire, I. L.; Trindade, O.; Simões, E.
2015-08-01
This paper presents a new concept of UAV mission design in geomatics, applied to the generation of thematic maps for a multitude of civilian and military applications. We discuss the architecture of Mission-Oriented Sensor Arrays (MOSA), proposed in Figueira et al. (2013), aimed at splitting and decoupling the mission-oriented part of the system (non-safety-critical hardware and software) from the aircraft control systems (safety-critical). As a case study, we present an environmental monitoring application for the automatic generation of thematic maps to track gunshot activity in conservation areas. The MOSA modeled for this application integrates information from a thermal camera and an on-the-ground microphone array. The use of microphone array technology is of particular interest in this paper. These arrays allow estimation of the direction-of-arrival (DOA) of the incoming sound waves. Information about events of interest is obtained by fusing the data provided by the on-the-ground microphone array with information from the processing of thermal imagery captured by the UAV. Preliminary results show the feasibility of the on-the-ground sound processing array and of the simulation of the main processing module, which is to be embedded into a UAV in future work. The main contributions of this paper are the proposed MOSA system, including its concepts, models, and architecture.
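A two-microphone DOA estimate via cross-correlation can be sketched as follows. This is a minimal stand-in for the paper's (unspecified) array processing, with all names invented: the time delay between the microphones is found from the cross-correlation peak and converted to an arrival angle from broadside.

```python
import numpy as np

def doa_from_tdoa(x1, x2, fs, mic_dist, c=343.0):
    """Direction of arrival (degrees from broadside) for a microphone
    pair, from the cross-correlation time delay between channels x1
    and x2 separated by mic_dist metres."""
    corr = np.correlate(x1, x2, mode="full")
    lag = np.argmax(corr) - (len(x2) - 1)     # peak lag in samples
    tau = lag / fs                            # delay; positive when x2 leads x1
    sin_theta = np.clip(c * tau / mic_dist, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))
```

A full array would combine several such pairwise estimates (or use a steered beamformer) for a robust bearing; gunshot impulses, being broadband, give sharp correlation peaks well suited to this approach.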
A biophysical model for modulation frequency encoding in the cochlear nucleus.
Eguia, Manuel C; Garcia, Guadalupe C; Romano, Sebastian A
2010-01-01
Encoding of amplitude-modulated (AM) acoustic signals is one of the most compelling tasks for the mammalian auditory system: environmental sounds, after being filtered and transduced by the cochlea, become narrowband AM signals. Despite much experimental work devoted to understanding how the auditory system extracts and encodes AM information, the neural mechanisms underlying this remarkable feature are far from being understood (Joris et al., 2004). One of the most widely accepted theories for this processing is the existence of a periodotopic organization (based on temporal information) across the more extensively studied tonotopic axis (Frisina et al., 1990b). In this work, we will review some recent advances in the study of the mechanisms involved in neural processing of AM sounds, and propose an integrated model that runs from the external ear, through the cochlea and the auditory nerve, up to a sub-circuit of the cochlear nucleus (the first processing unit in the central auditory system). We will show that by varying the amount of inhibition in our model we can obtain a range of best modulation frequencies (BMF) in some principal cells of the cochlear nucleus. This could be a basis for a synchronicity-based, low-level periodotopic organization. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
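To make the notion of a modulation frequency concrete, here is a minimal sketch (ours, not the authors' biophysical model) that recovers the AM rate of a narrowband signal from the spectrum of its rectified envelope; the carrier, depth and rate are illustrative values:

```python
import numpy as np

def modulation_frequency(signal, fs):
    """Estimate the dominant amplitude-modulation rate of a narrowband
    signal: full-wave rectify to approximate the envelope, remove DC,
    and pick the strongest peak in the envelope spectrum."""
    env = np.abs(signal)
    env = env - env.mean()
    spec = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
    return float(freqs[int(np.argmax(spec))])

fs = 16000
t = np.arange(fs) / fs                                     # 1 s of signal
carrier = np.sin(2 * np.pi * 1000.0 * t)                   # 1 kHz carrier
am = (1.0 + 0.8 * np.sin(2 * np.pi * 40.0 * t)) * carrier  # 40 Hz AM
```

A model cell tuned to a 40 Hz best modulation frequency would respond most strongly to exactly this kind of envelope periodicity.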
Cognitive neuroscience: integration of sight and sound outside of awareness?
Noel, Jean-Paul; Wallace, Mark; Blake, Randolph
2015-02-16
A recent study found that auditory and visual information can be integrated even when you are completely unaware of hearing or seeing the paired stimuli, but only if you have received prior, conscious exposure to the paired stimuli. Copyright © 2015 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Macht, Konrad
1978-01-01
Discusses a "rational concept" of integration of audiovisual teaching aids into the foreign language teaching process that would be based on a positive evaluation of teacher-centered instruction. Offers a model for integration of human and technical media. (IFS/WGA)
Unconscious Cross-Modal Priming of Auditory Sound Localization by Visual Words
ERIC Educational Resources Information Center
Ansorge, Ulrich; Khalid, Shah; Laback, Bernhard
2016-01-01
Little is known about the cross-modal integration of unconscious and conscious information. In the current study, we therefore tested whether the spatial meaning of an unconscious visual word, such as "up", influences the perceived location of a subsequently presented auditory target. Although cross-modal integration of unconscious…
2001-10-25
wavelet decomposition of signals and classification using neural network. Inputs to the system are the heart sound signals acquired by a stethoscope in a...Proceedings. pp. 415–418, 1990. [3] G. Ergun, “An intelligent diagnostic system for interpretation of antepartum fetal heart rate tracings based on ANNs and...AN INTELLIGENT PATTERN RECOGNITION SYSTEM BASED ON NEURAL NETWORK AND WAVELET DECOMPOSITION FOR INTERPRETATION OF HEART SOUNDS I. TURKOGLU1, A
Functional Mobility Testing: A Novel Method to Establish Human System Interface Design Requirements
NASA Technical Reports Server (NTRS)
England, Scott A.; Benson, Elizabeth A.; Rajulu, Sudhakar
2008-01-01
Across all fields of human-system interface design it is vital to possess a sound methodology dictating the constraints on the system based on the capabilities of the human user. These limitations may be based on strength, mobility, dexterity, cognitive ability, etc., and combinations thereof. Data collected in an isolated environment to determine, for example, maximal strength or maximal range of motion would indeed be adequate for establishing not-to-exceed design limitations; however, these constraints on the system may be excessive compared with what is minimally needed. Resources may potentially be saved by having a technique to determine the minimum measurements a system must accommodate. This paper specifically deals with the creation of a novel methodology for establishing mobility requirements for a new generation of space suit design concepts. Historically, the Space Shuttle and International Space Station vehicle and space hardware design requirements documents, such as the Man-Systems Integration Standards and the International Space Station Flight Crew Integration Standard, explicitly stated that designers should strive to provide the maximum joint range of motion capabilities exhibited by a minimally clothed human subject. In the course of developing the Human-Systems Integration Requirements (HSIR) for the new space exploration initiative (Constellation), an effort was made to redefine the mobility requirements in the interest of safety and cost. Systems designed for manned space exploration can receive compounded gains from simplified designs that are both initially less expensive to produce and lighter, and thereby cheaper to launch.
NASA Astrophysics Data System (ADS)
Royston, Thomas J.; Yazicioglu, Yigit; Loth, Francis
2003-02-01
The response at the surface of an isotropic viscoelastic medium to buried fundamental acoustic sources is studied theoretically, computationally and experimentally. Finite and infinitesimal monopole and dipole sources within the low audible frequency range (40-400 Hz) are considered. Analytical and numerical integral solutions that account for compression, shear and surface wave response to the buried sources are formulated and compared with numerical finite element simulations and experimental studies on finite dimension phantom models. It is found that at low audible frequencies, compression and shear wave propagation from point sources can both be significant, with shear wave effects becoming less significant as frequency increases. Additionally, it is shown that simple closed-form analytical approximations based on an infinite medium model agree well with numerically obtained ``exact'' half-space solutions for the frequency range and material of interest in this study. The focus here is on developing a better understanding of how biological soft tissue affects the transmission of vibro-acoustic energy from biological acoustic sources below the skin surface, whose typical spectral content is in the low audible frequency range. Examples include sound radiated from pulmonary, gastro-intestinal and cardiovascular system functions, such as breath sounds, bowel sounds and vascular bruits, respectively.
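For orientation on why both compression and shear waves matter in this frequency range, here is a small sketch (with illustrative soft-tissue-like parameter values of our own choosing, not taken from the study) computing the bulk compression and shear wave speeds from Young's modulus, Poisson ratio and density via the Lamé parameters. In a nearly incompressible medium the shear speed is far below the compression speed, so shear wavelengths at low audible frequencies become comparable to tissue dimensions:

```python
import math

def wave_speeds(E, nu, rho):
    """Bulk compression (cp) and shear (cs) wave speeds of a linear
    elastic medium, from Young's modulus E (Pa), Poisson ratio nu,
    and density rho (kg/m^3), via the Lame parameters."""
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))
    mu = E / (2 * (1 + nu))
    cp = math.sqrt((lam + 2 * mu) / rho)
    cs = math.sqrt(mu / rho)
    return cp, cs

# Assumed phantom-like values: E ~ 25 kPa, nearly incompressible,
# density ~ water. These are illustrative, not the study's parameters.
cp, cs = wave_speeds(25e3, 0.4999, 1000.0)
```

At 100 Hz a shear speed of about 3 m/s gives a shear wavelength of a few centimeters, on the order of organ dimensions, which is why shear propagation cannot be neglected at the low end of this band.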
Large Eddy Simulation of Sound Generation by Turbulent Reacting and Nonreacting Shear Flows
NASA Astrophysics Data System (ADS)
Najafi-Yazdi, Alireza
The objective of the present study was to investigate the mechanisms of sound generation by subsonic jets. Large eddy simulations were performed along with bandpass filtering of the flow and sound in order to gain further insight into the role of coherent structures in subsonic jet noise generation. A sixth-order compact scheme was used for spatial discretization of the fully compressible Navier-Stokes equations. Time integration was performed through the use of the standard fourth-order, explicit Runge-Kutta scheme. An implicit low dispersion, low dissipation Runge-Kutta (ILDDRK) method was developed and implemented for simulations involving sources of stiffness such as flows near solid boundaries, or combustion. A surface integral acoustic analogy formulation, called Formulation 1C, was developed for farfield sound pressure calculations. Formulation 1C was derived based on the convective wave equation in order to take into account the presence of a mean flow. The formulation was derived to be easy to implement as a numerical post-processing tool for CFD codes. Sound radiation from an unheated, Mach 0.9 jet at Reynolds number 400,000 was considered. The effect of mesh size on the accuracy of the nearfield flow and farfield sound results was studied. It was observed that insufficient grid resolution in the shear layer results in unphysical laminar vortex pairing, and increased sound pressure levels in the farfield. Careful examination of the bandpass filtered pressure field suggested that there are two mechanisms of sound radiation in unheated subsonic jets that can occur in all scales of turbulence. The first mechanism is the stretching and the distortion of coherent vortical structures, especially close to the termination of the potential core. As eddies are bent or stretched, a portion of their kinetic energy is radiated. This mechanism is quadrupolar in nature, and is responsible for strong sound radiation at aft angles.
The second sound generation mechanism appears to be associated with the transverse vibration of the shear-layer interface within the ambient quiescent flow, and has dipolar characteristics. This mechanism is believed to be responsible for sound radiation along the sideline directions. Jet noise suppression through the use of microjets was studied. The microjet injection induced secondary instabilities in the shear layer which triggered the transition to turbulence, and suppressed laminar vortex pairing. This in turn resulted in a reduction of OASPL at almost all observer locations. In all cases, the bandpass filtering of the nearfield flow and the associated sound provides revealing details of the sound radiation process. The results suggest that circumferential modes are significant and need to be included in future wavepacket models for jet noise prediction. Numerical simulations of sound radiation from nonpremixed flames were also performed. The simulations featured the solution of the fully compressible Navier-Stokes equations. Therefore, sound generation and radiation were directly captured in the simulations. A thickened flamelet model was proposed for nonpremixed flames. The model yields artificially thickened flames which can be better resolved on the computational grid, while retaining the physically correct values of the total heat released into the flow. Combustion noise has monopolar characteristics for low frequencies. For high frequencies, the sound field is no longer omni-directional. Major sources of sound appear to be located in the jet shear layer within one potential core length from the jet nozzle.
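The standard fourth-order explicit Runge-Kutta scheme mentioned above for time integration can be sketched in a few lines. This is a generic textbook implementation, not the solver used in the thesis, verified here on the scalar test problem y' = y:

```python
def rk4_step(f, t, y, dt):
    """One step of the classical explicit fourth-order Runge-Kutta
    scheme: four stage evaluations of f(t, y), combined with weights
    1/6, 2/6, 2/6, 1/6."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, y + 0.5 * dt * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate y' = y from y(0) = 1 to t = 1; the result should match e
# to roughly fourth-order accuracy in the step size.
y, t, dt = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: y, t, y, dt)
    t += dt
```

In a compressible LES code the same update is applied to the full vector of conserved variables, with f given by the spatially discretized Navier-Stokes right-hand side; stiffness (walls, combustion chemistry) is what motivates the implicit ILDDRK variant the thesis develops.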
Physiological correlates of sound localization in a parasitoid fly, Ormia ochracea
NASA Astrophysics Data System (ADS)
Oshinsky, Michael Lee
A major focus of research in the nervous system is the investigation of neural circuits. The question of how neurons connect to form functional units has driven modern neuroscience research from its inception. From the beginning, the neural circuits of the auditory system, and specifically sound localization, were used as a model system for investigating neural connectivity and computation. Sound localization lends itself to this task because there is no mapping of spatial information onto a receptor sheet as in vision. With only one eye, an animal would still have positional information for objects. Since the receptor sheet in the ear is frequency oriented and not spatially oriented, positional information for a sound source does not exist with only one ear. The nervous system computes the location of a sound source based on differences in the physiology of the two ears. In this study, I investigated the neural circuits for sound localization in a fly, Ormia ochracea (Diptera, Tachinidae, Ormiini), which is a parasitoid of crickets. This fly possesses a unique mechanically coupled hearing organ. The two ears are contained in one air sac and are connected by a cuticular bridge that has a flexible, spring-like structure at its center. This mechanical coupling preprocesses the sound before it is detected by the nervous system and provides the fly with directional information. The subject of this study is the neural coding of the location of sound stimuli by a mechanically coupled auditory system. In chapter 1, I present the natural history of an acoustic parasitoid and review the peripheral processing of sound by the Ormian ear. In chapter 2, I describe the anatomy and physiology of the auditory afferents. I present this physiology in the context of sound localization. In chapter 3, I describe the direction-dependent physiology of the thoracic local and ascending acoustic interneurons.
In chapter 4, I quantify the threshold and I detail the kinematics of the phonotactic walking behavior in Ormia ochracea. I also quantify the angular resolution of the phonotactic turning behavior. Using a model, I show that the temporal coding properties of the afferents provide most of the information required by the fly to localize a singing cricket.
Integrated farm sustainability assessment for the environmental management of rural activities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stachetii Rodrigues, Geraldo, E-mail: stacheti@cnpma.embrapa.b; Aparecida Rodrigues, Izilda, E-mail: isis@cnpma.embrapa.b; Almeida Buschinelli, Claudio Cesar de, E-mail: buschi@cnpma.embrapa.b
2010-07-15
Farmers have been increasingly called upon to respond to an ongoing redefinition in consumers' demands, having as a converging theme the search for sustainable production practices. In order to satisfy this objective, instruments for the environmental management of agricultural activities have been sought out. Environmental impact assessment methods are appropriate tools to address the choice of technologies and management practices to minimize negative effects of agricultural development, while maximizing productive efficiency, sound usage of natural resources, conservation of ecological assets and equitable access to wealth generation means. The 'system for weighted environmental impact assessment of rural activities' (APOIA-NovoRural) presented in this paper is organized to provide integrated farm sustainability assessment according to quantitative environmental standards and defined socio-economic benchmarks. The system integrates sixty-two objective indicators in five sustainability dimensions - (i) Landscape ecology, (ii) Environmental quality (atmosphere, water and soil), (iii) Sociocultural values, (iv) Economic values, and (v) Management and administration. Impact indices are expressed in three integration levels: (i) specific indicators, that offer a diagnostic and managerial tool for farmers and rural administrators, by pointing out particular attributes of the rural activities that may be failing to comply with defined environmental performance objectives; (ii) integrated sustainability dimensions, that show decision-makers the major contributions of the rural activities toward local sustainable development, facilitating the definition of control actions and promotion measures; and (iii) aggregated sustainability index, that can be considered a yardstick for eco-certification purposes.
Nine fully documented case studies carried out with the APOIA-NovoRural system, focusing on different scales, diverse rural activities/farming systems, and contrasting spatial/territorial contexts, attest to the malleability of the method and its applicability as an integrated farm environmental management tool.
NASA Astrophysics Data System (ADS)
Koike, Toshio; Lawford, Richard; Cripe, Douglas
2013-04-01
It is critically important to recognize and co-manage the fundamental linkages across the water-dependent domains: land use (including deforestation), ecosystem services, and food, energy and health security. Sharing coordinated, comprehensive and sustained observations and information for sound decision-making is a first step; however, to take full advantage of these opportunities, we need to develop an effective collaboration mechanism for working together across different disciplines, sectors and agencies, and thereby gain a holistic view of the continuity between environmentally sustainable development, climate change adaptation and enhanced resilience. To promote effective multi-sectoral, interdisciplinary collaboration based on coordinated and integrated efforts, the intergovernmental Group on Earth Observations (GEO) is implementing the Global Earth Observation System of Systems (GEOSS). A component of GEOSS now under development is the "GEOSS Water Cycle Integrator (WCI)", which integrates Earth observations, modeling, data and information, management systems and education systems. GEOSS/WCI sets up "work benches" by which partners can share data, information and applications in an interoperable way, exchange knowledge and experiences, deepen mutual understanding and work together effectively to ultimately respond to issues of both mitigation and adaptation. (A work bench is a virtual geographical or phenomenological space where experts and managers collaborate to use information to address a problem within that space.) GEOSS/WCI enhances the coordination of efforts to strengthen individual, institutional and infrastructure capacities, especially for effective interdisciplinary coordination and integration. GEO has established the GEOSS Asian Water Cycle Initiative (AWCI) and the GEOSS African Water Cycle Coordination Initiative (AfWCCI).
Through regional, inter-disciplinary, multi-sectoral integration and inter-agency coordination in Asia and Africa, GEOSS/WCI is now leading to effective actions and public awareness in support of water security and sustainable development.
2010-09-30
environmental impact than do 5 historic approaches used in Navy environmental assessments (EA) and impact statements (EIS). Many previous methods...of Sound on the Marine Environment (ESME) program contributes to the ultimate goal of creating an environmental assessment tool for activities that...expand the species library available for use in 3MB, 2) continue incorporating the ability to project environmental influences on simulated animal
2011-09-30
capability to emulate the dive and movement behavior of marine mammals provides a significant advantage to modeling environmental impact than do historic...approaches used in Navy environmental assessments (EA) and impact statements (EIS). Many previous methods have been statistical or pseudo-statistical...Siderius. 2011. Comparison of methods used for computing the impact of sound on the marine environment, Marine Environmental Research, 71:342-350. [published
Performance on Tests of Central Auditory Processing by Individuals Exposed to High-Intensity Blasts
2012-07-01
percent (gap detected on at least four of the six presentations), with all longer durations receiving a score greater than 50 percent. Binaural ...Processing and Sound Localization Temporal precision of neural firing is also involved in binaural processing and localization of sound in space. The...Masking Level Difference (MLD) test evaluates the integrity of the earliest sites of binaural comparison and sensitivity to interaural phase in the
NASA Technical Reports Server (NTRS)
Serke, David J.; Politovich, Marcia K.; Reehorst, Andrew L.; Gaydos, Andrew
2009-01-01
The Alliance Icing Research Study-II (AIRS-II) field program was conducted near Montreal, Canada during the winter of 2003. The NASA Icing Remote Detection System (NIRSS) was deployed to detect in-flight icing hazards and consisted of a vertically pointing multichannel radiometer, a ceilometer and an x-band cloud radar. The radiometer was used to derive atmospheric temperature soundings and integrated liquid water, while the ceilometer and radar were used only to define cloud boundaries. The purpose of this study is to show that the radar reflectivity profiles from AIRS-II case studies could be used to provide a qualitative indication of icing hazard.
NASA Technical Reports Server (NTRS)
Biaggi-Labiosa, Azlin
2016-01-01
Present an overview of the Nanotechnology Project at NASA's Game Changing Technology Industry Day. Mature and demonstrate flight readiness of CNT reinforced composites for future NASA mission applications: sounding rocket test in a multi-experiment payload; integrate into a cold gas thruster system as propellant storage. The technology would provide the means for reduced COPV mass and improved damage tolerance, and would flight qualify CNT reinforced composites. PROBLEM/NEED BEING ADDRESSED: Reduce weight and enhance the performance and damage tolerance of aerospace structures. GAME-CHANGING SOLUTION: Improve mechanical properties of CNTs to eventually replace CFRP (lighter and stronger); first flight-testing of a CNT reinforced composite structural component as part of an operational flight system. UNIQUENESS: CNT manufacturing methods developed; flight qualification of CNT reinforced composites.
NASA's Hybrid Reality Lab: One Giant Leap for Full Dive
NASA Technical Reports Server (NTRS)
Delgado, Francisco J.; Noyes, Matthew
2017-01-01
This presentation demonstrates how NASA is using consumer VR headsets, game engine technology and NVIDIA GPUs to create highly immersive future training systems augmented with extremely realistic haptic feedback, sound and additional sensory information, and how these can be used to improve the engineering workflow. Included in this presentation are an environment simulation of the ISS, where users can interact with virtual objects, handrails, and tracked physical objects while inside VR; integration of consumer VR headsets with the Active Response Gravity Offload System; and a space habitat architectural evaluation tool. Attendees will learn how the best elements of real and virtual worlds can be combined into a hybrid reality environment with tangible engineering and scientific applications.
ERIC Educational Resources Information Center
Rossing, Thomas D.
1980-01-01
Described are the components for a high-fidelity sound-reproducing system which focuses on various program sources, the amplifier, and loudspeakers. Discussed in detail are amplifier power and distortion, air suspension, loudspeaker baffles and enclosures, bass-reflex enclosure, drone cones, rear horn and acoustic labyrinth enclosures, horn…
SoundProof: A Smartphone Platform for Wireless Monitoring of Wildlife and Environment
NASA Astrophysics Data System (ADS)
Lukac, M.; Monibi, M.; Lane, M. L.; Howell, L.; Ramanathan, N.; Borker, A.; McKown, M.; Croll, D.; Terschy, B.
2011-12-01
We are developing an open-source, low-cost wildlife and environmental monitoring solution based on Android smartphones. Using a smartphone instead of a traditional microcontroller or single board computer has several advantages: smartphones are single integrated devices with multiple radios and a battery; they have a robust software interface which enables customization; and are field-tested by millions of users daily. Consequently, smartphones can improve the cost, configurability, and real-time access to data for environmental monitoring, ultimately replacing existing monitoring solutions which are proprietary, difficult to customize, expensive, and require labor-intensive maintenance. While smartphones can radically change environmental and wildlife monitoring, there are a number of technical challenges to address. We present our smartphone-based platform, SoundProof, discuss the challenges of building an autonomous system based on Android phones, and our ongoing efforts to enable environmental monitoring. Our system is built using robust off-the-shelf hardware and mature open-source software where available, to increase scalability and ease of installation. Key features include: * High-quality acoustic signal collection from external microphones to monitor wildlife populations. * Real-time data access, remote programming, and configuration of the field sensor via wireless cellular or WiFi channels, accessible from a website. * Waterproof packaging and solar charger setup for long-term field deployments. * Rich instrumentation of the end-to-end system to quickly identify and debug problems. * Supplementary mesh networking system with long-range wireless antennae to provide coverage when no cell network is available. We have deployed this system to monitor Rufous-crowned Sparrows on Anacapa Island, Chinese Crested Terns on the Matsu Islands in Taiwan, and Ashy Storm Petrels on South East Farallon Island.
We have testbeds at two UC Natural Reserves to field-test new or exploratory features before deployment. Side-by-side validation data collected in the field using SoundProof and state-of-the-art wildlife monitoring solutions, including the Cornell ARU and Wildlife Acoustic's Songmeter, demonstrate that acoustic signals collected with cellphones provide sufficient data integrity for measuring the success of bird conservation efforts, measuring bird relative abundance and detecting elusive species. We are extending this platform to numerous other areas of environmental monitoring. Recent developments such as the Android Open Accessory, the IOIO Board, MicroBridge, Amarino, and Cellbots enable microcontrollers to talk with Android applications, making it affordable and feasible to extend our platform to operate with the most common sensors.
Hatch, Leila T; Clark, Christopher W; Van Parijs, Sofie M; Frankel, Adam S; Ponirakis, Dimitri W
2012-12-01
The effects of chronic exposure to increasing levels of human-induced underwater noise on marine animal populations reliant on sound for communication are poorly understood. We sought to further develop methods of quantifying the effects of communication masking associated with human-induced sound on contact-calling North Atlantic right whales (Eubalaena glacialis) in an ecologically relevant area (~10,000 km²) and time period (peak feeding time). We used an array of temporary, bottom-mounted, autonomous acoustic recorders in the Stellwagen Bank National Marine Sanctuary to monitor ambient noise levels, measure levels of sound associated with vessels, and detect and locate calling whales. We related wind speed, as recorded by regional oceanographic buoys, to ambient noise levels. We used vessel-tracking data from the Automatic Identification System to quantify acoustic signatures of large commercial vessels. On the basis of these integrated sound fields, median signal excess (the difference between the signal-to-noise ratio and the assumed recognition differential) for contact-calling right whales was negative (-1 dB) under current ambient noise levels and was further reduced (-2 dB) by the addition of noise from ships. Compared with potential communication space available under historically lower noise conditions, calling right whales may have lost, on average, 63-67% of their communication space. One or more of the 89 calling whales in the study area was exposed to noise levels ≥120 dB re 1 μPa by ships for 20% of the month, and a maximum of 11 whales were exposed to noise at or above this level during a single 10-min period. These results highlight the limitations of exposure-threshold (i.e., dose-response) metrics for assessing chronic anthropogenic noise effects on communication opportunities.
Our methods can be used to integrate chronic and wide-ranging noise effects in emerging ocean-planning forums that seek to improve management of cumulative effects of noise on marine species and their habitats. ©2012 Society for Conservation Biology.
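The signal-excess metric the study reports is simple to state in code. This sketch uses illustrative dB values and an assumed 10 dB recognition differential; it is not the study's measured sound fields or its actual parameter values:

```python
def signal_excess(source_level, transmission_loss, noise_level,
                  recognition_differential=10.0):
    """Signal excess in dB: the received signal-to-noise ratio minus an
    assumed recognition differential. Values below zero suggest a call
    is unlikely to be recognized at the receiver. All inputs are in dB;
    the 10 dB default recognition differential is an illustrative
    assumption, not the study's value."""
    snr = (source_level - transmission_loss) - noise_level
    return snr - recognition_differential

# Illustrative (not measured) values: the same call received against a
# quiet ambient floor versus a floor raised 10 dB by passing ship noise.
se_quiet = signal_excess(175.0, 70.0, 95.0)
se_ship = signal_excess(175.0, 70.0, 105.0)
```

Summing the area over which signal excess stays non-negative, relative to the same area under a historical noise floor, is the essence of the communication-space comparison the abstract describes.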
Hsiao, Chun-Jen; Hsu, Chih-Hsiang; Lin, Ching-Lung; Wu, Chung-Hsin; Jen, Philip Hung-Sun
2016-08-17
Although echolocating bats and other mammals share the basic design of laryngeal apparatus for sound production and auditory system for sound reception, they have a specialized laryngeal mechanism for ultrasonic sound emissions as well as a highly developed auditory system for processing species-specific sounds. Because the sounds used by bats for echolocation and rodents for communication are quite different, there must be differences in the central nervous system devoted to producing and processing species-specific sounds between them. The present study examines the difference in the relative size of several brain structures and expression of auditory-related and vocal-related proteins in the central nervous system of echolocating bats and rodents. Here, we report that bats using constant frequency-frequency-modulated sounds (CF-FM bats) and FM bats for echolocation have a larger volume of midbrain nuclei (inferior and superior colliculi) and cerebellum relative to the size of the brain than rodents (mice and rats). However, the former have a smaller volume of the cerebrum and olfactory bulb, but greater expression of otoferlin and forkhead box protein P2 than the latter. Although the size of both midbrain colliculi is comparable in both CF-FM and FM bats, CF-FM bats have a larger cerebrum and greater expression of otoferlin and forkhead box protein P2 than FM bats. These differences in brain structure and protein expression are discussed in relation to their biologically relevant sounds and foraging behavior.
NASA Astrophysics Data System (ADS)
Itoh, Kosuke; Nakada, Tsutomu
2013-04-01
Deterministic nonlinear dynamical processes are ubiquitous in nature. Chaotic sounds generated by such processes may appear irregular and random in waveform, but these sounds are mathematically distinguished from random stochastic sounds in that they contain deterministic short-time predictability in their temporal fine structures. We show that the human brain distinguishes deterministic chaotic sounds from spectrally matched stochastic sounds in neural processing and perception. Deterministic chaotic sounds, even without being attended to, elicited greater cerebral cortical responses than the surrogate control sounds after about 150 ms in latency after sound onset. Listeners also clearly discriminated these sounds in perception. The results support the hypothesis that the human auditory system is sensitive to the subtle short-time predictability embedded in the temporal fine structure of sounds.
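The distinction the abstract draws, deterministic short-time predictability versus spectrally matched randomness, can be demonstrated with a crude nonlinear-prediction test. This is our sketch, not the stimuli or analysis of the study: predictability survives in a chaotic logistic-map series but is destroyed by shuffling its values:

```python
import numpy as np

def one_step_prediction_error(x, k=5):
    """Crude nonlinear prediction: predict each point's successor as the
    mean successor of its k nearest neighbours in value (excluding the
    point itself). A deterministic series yields a far smaller error
    than a shuffled surrogate with the identical value distribution."""
    n = len(x) - 1
    total = 0.0
    for i in range(n):
        d = np.abs(x[:n] - x[i])
        d[i] = np.inf                      # exclude the point itself
        nbrs = np.argsort(d)[:k]
        total += abs(x[nbrs + 1].mean() - x[i + 1])
    return total / n

# Logistic map in its chaotic regime, and a value-shuffled surrogate.
x = np.empty(500)
x[0] = 0.4
for i in range(499):
    x[i + 1] = 3.9 * x[i] * (1.0 - x[i])
rng = np.random.default_rng(0)
surrogate = rng.permutation(x)
e_det = one_step_prediction_error(x)
e_sur = one_step_prediction_error(surrogate)
```

The study's phase-randomized surrogates preserve the power spectrum rather than just the value distribution, but the underlying contrast is the same: only the deterministic series has structure in its temporal fine detail.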
Cortical activity patterns predict robust speech discrimination ability in noise
Shetake, Jai A.; Wolf, Jordan T.; Cheung, Ryan J.; Engineer, Crystal T.; Ram, Satyananda K.; Kilgard, Michael P.
2012-01-01
The neural mechanisms that support speech discrimination in noisy conditions are poorly understood. In quiet conditions, spike timing information appears to be used in the discrimination of speech sounds. In this study, we evaluated the hypothesis that spike timing is also used to distinguish between speech sounds in noisy conditions that significantly degrade neural responses to speech sounds. We tested speech sound discrimination in rats and recorded primary auditory cortex (A1) responses to speech sounds in background noise of different intensities and spectral compositions. Our behavioral results indicate that rats, like humans, are able to accurately discriminate consonant sounds even in the presence of background noise that is as loud as the speech signal. Our neural recordings confirm that speech sounds evoke degraded but detectable responses in noise. Finally, we developed a novel neural classifier that mimics behavioral discrimination. The classifier discriminates between speech sounds by comparing the A1 spatiotemporal activity patterns evoked on single trials with the average spatiotemporal patterns evoked by known sounds. Unlike classifiers in most previous studies, this classifier is not provided with the stimulus onset time. Neural activity analyzed with the use of relative spike timing was well correlated with behavioral speech discrimination in quiet and in noise. Spike timing information integrated over longer intervals was required to accurately predict rat behavioral speech discrimination in noisy conditions. The similarity of neural and behavioral discrimination of speech in noise suggests that humans and rats may employ similar brain mechanisms to solve this problem. PMID:22098331
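The template-matching idea behind the classifier can be sketched as follows. This is a toy Euclidean-distance version with made-up patterns; the published classifier compares spatiotemporal spike patterns using relative spike timing and is not given the stimulus onset time:

```python
import numpy as np

def classify(trial, templates):
    """Assign a single-trial activity pattern to the class whose average
    evoked pattern (template) it most resembles, here by Euclidean
    distance over the flattened neuron-by-time matrix."""
    labels = list(templates)
    dists = [np.linalg.norm(trial - templates[lab]) for lab in labels]
    return labels[int(np.argmin(dists))]

# Toy 2-neuron x 3-time-bin "average patterns" for two consonant classes
# (hypothetical values, purely for illustration).
templates = {
    "b": np.array([[1.0, 0.2, 0.0], [0.0, 0.8, 0.1]]),
    "d": np.array([[0.0, 0.9, 0.1], [1.0, 0.1, 0.0]]),
}
noisy_trial = np.array([[0.9, 0.3, 0.0], [0.1, 0.7, 0.2]])
```

A single noisy trial still lands nearest its own class template, which is the behavior-matching property the study exploits when relating neural discrimination to speech discrimination in noise.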
What is the link between synaesthesia and sound symbolism?
Bankieris, Kaitlyn; Simner, Julia
2015-01-01
Sound symbolism is a property of certain words which have a direct link between their phonological form and their semantic meaning. In certain instances, sound symbolism can allow non-native speakers to understand the meanings of etymologically unfamiliar foreign words, although the mechanisms driving this are not well understood. We examined whether sound symbolism might be mediated by the same types of cross-modal processes that typify synaesthetic experiences. Synaesthesia is an inherited condition in which sensory or cognitive stimuli (e.g., sounds, words) cause additional, unusual cross-modal percepts (e.g., sounds trigger colours, words trigger tastes). Synaesthesia may be an exaggeration of normal cross-modal processing, and if so, there may be a link between synaesthesia and the type of cross-modality inherent in sound symbolism. To test this we predicted that synaesthetes would have superior understanding of unfamiliar (sound symbolic) foreign words. In our study, 19 grapheme-colour synaesthetes and 57 non-synaesthete controls were presented with 400 adjectives from 10 unfamiliar languages and were asked to guess the meaning of each word in a two-alternative forced-choice task. Both groups showed superior understanding compared to chance levels, but synaesthetes significantly outperformed controls. This heightened ability suggests that sound symbolism may rely on the types of cross-modal integration that drive synaesthetes’ unusual experiences. It also suggests that synaesthesia endows or co-occurs with heightened multi-modal skills, and that this can arise in domains unrelated to the specific form of synaesthesia. PMID:25498744