Sample records for active sound control

  1. Spherical loudspeaker array for local active control of sound.

    PubMed

    Rafaely, Boaz

    2009-05-01

    Active control of sound has been employed to reduce noise levels around listeners' heads using destructive interference from noise-canceling sound sources. Recently, spherical loudspeaker arrays have been studied as multiple-channel sound sources capable of generating sound fields of high complexity. In this paper, the potential use of a spherical loudspeaker array for local active control of sound is investigated. A theoretical analysis of the primary and secondary sound fields around a spherical sound source reveals that the natural quiet zones for the spherical source are shell-shaped. Using numerical optimization, quiet zones with other shapes are designed, showing potential for quiet zones with extents significantly larger than the well-known limit of a tenth of a wavelength for monopole sources. The paper presents several simulation examples showing quiet zones in various configurations.
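
    The tenth-of-a-wavelength quiet-zone limit for monopole sources mentioned above is easy to probe numerically. Below is a minimal free-field sketch (the geometry, frequencies, and the 10 dB threshold are illustrative assumptions, not taken from the paper): a secondary monopole is driven to cancel a primary monopole at one point, and the width of the resulting quiet zone is measured on a transverse line through that point.

```python
import numpy as np

def g(r, k):
    """Free-field monopole Green's function (unit source strength)."""
    return np.exp(-1j * k * r) / (4 * np.pi * r)

def quiet_zone_width(f, c=343.0):
    """Width (m) of the 10 dB quiet zone around the cancellation point.

    Primary monopole at the origin, secondary monopole 2 m away on the
    x axis, total pressure nulled 0.5 m beyond the secondary source; the
    zone is measured on a transverse line through the cancellation point.
    """
    k = 2 * np.pi * f / c
    # Secondary source strength that nulls the total pressure at x0.
    q = -g(2.5, k) / g(0.5, k)
    y = np.linspace(-0.5, 0.5, 4000)        # transverse offsets through x0
    rp = np.hypot(2.5, y)                   # distance to primary source
    rs = np.hypot(0.5, y)                   # distance to secondary source
    atten = 20 * np.log10(np.abs(g(rp, k) + q * g(rs, k)) / np.abs(g(rp, k)))
    quiet = y[atten < -10.0]                # points with >10 dB reduction
    return quiet.max() - quiet.min()

for f in (250.0, 500.0, 1000.0):
    print(f"{f:6.0f} Hz: 10 dB quiet zone {quiet_zone_width(f):.3f} m wide")
```

    As expected, the zone shrinks as the frequency rises, consistent with the wavelength-scale limits discussed for monopole sources.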

  2. Active control and sound synthesis--two different ways to investigate the influence of the modal parameters of a guitar on its sound.

    PubMed

    Benacchio, Simon; Mamou-Mani, Adrien; Chomette, Baptiste; Finel, Victor

    2016-03-01

    The vibrational behavior of musical instruments is usually studied using physical modeling and simulations. Recently, active control has proven its efficiency to experimentally modify the dynamical behavior of musical instruments. This approach could also be used as an experimental tool to systematically study fine physical phenomena. This paper proposes to use modal active control as an alternative to sound simulation to study the complex case of the coupling between classical guitar strings and soundboard. A comparison between modal active control and sound simulation investigates the advantages, the drawbacks, and the limits of these two approaches.

  3. Controlling sound radiation through an opening with secondary loudspeakers along its boundaries.

    PubMed

    Wang, Shuping; Tao, Jiancheng; Qiu, Xiaojun

    2017-10-17

    We propose a virtual sound barrier system that blocks sound transmission through openings without affecting access, light, and air circulation. The proposed system applies an active control technique to cancel sound transmission with a double-layered loudspeaker array at the edge of the opening. Unlike traditional transparent glass windows, recently invented double-glazed ventilation windows, planar active sound barriers, or other metamaterials designed to reduce sound transmission, the secondary loudspeakers are placed only along the boundaries of the opening, which opens the possibility of making the system invisible. Simulation and experimental results demonstrate its feasibility for broadband sound control, especially for low-frequency sound, which is usually hard to attenuate with existing methods.

  4. A multichannel amplitude and relative-phase controller for active sound quality control

    NASA Astrophysics Data System (ADS)

    Mosquera-Sánchez, Jaime A.; Desmet, Wim; de Oliveira, Leopoldo P. R.

    2017-05-01

    The enhancement of the sound quality of periodic disturbances for a number of listeners within an enclosure often confronts difficulties caused by cross-channel interferences, which arise from simultaneously profiling the primary sound at each error sensor. These interferences may deteriorate the original sound at each listener, which is an unacceptable result from the point of view of sound quality control. In this paper we provide experimental evidence on controlling both amplitude and relative-phase functions of stationary complex primary sounds for a number of listeners within a cavity, attaining amplifications of twice the original value, reductions on the order of 70 dB, and relative-phase shifts within ±π rad, all in an interference-free control scenario. To accomplish such burdensome control targets, we have designed a multichannel active sound profiling scheme that bases its operation on exchanging time-domain control signals among the control units during uptime. Provided the real parts of the eigenvalues of persistently excited control matrices are positive, the proposed multichannel array is able to counterbalance cross-channel interferences while attaining demanding control targets. Moreover, regularization of unstable control matrices does not prevent the proposed array from providing interference-free amplitude and relative-phase control, although the system performance degrades with the amount of regularization needed. The assessment of Loudness and Roughness metrics on the controlled primary sound shows that the proposed distributed control scheme noticeably outperforms current techniques, since active amplitude- and/or relative-phase-based enhancement of the auditory qualities of a primary sound no longer implies causing interference among different positions. In this regard, experimental results also confirm the effectiveness of the proposed scheme in stably enhancing the sound qualities of periodic sounds for multiple listeners within a cavity.

  5. Active noise control using a steerable parametric array loudspeaker.

    PubMed

    Tanaka, Nobuo; Tanaka, Motoki

    2010-06-01

    Active noise control can suppress sound at designated control points, but the sound pressure away from those targeted locations is likely to increase, for a clear reason: a control source normally radiates sound omnidirectionally. To cope with this problem, this paper introduces a parametric array loudspeaker (PAL), which produces a spatially focused sound beam owing to the ultrasound used as the carrier wave, thereby allowing one to suppress the sound pressure at a designated point without causing spillover in the rest of the sound field. First, the fundamental characteristics of the PAL are reviewed. The scattered pressure in the near field contributed by the source strength of the PAL is then described, which is needed for the design of an active noise control system. Furthermore, the optimal control law for minimizing the sound pressure at the control points is derived, and the control effect is investigated analytically and experimentally. With a view to tracking a moving target point, a steerable PAL based upon a phased array scheme is presented, with the result that a moving zone of quiet can be generated without mechanically rotating the PAL. An experiment is finally conducted, demonstrating the validity of the proposed method.
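
    In the multichannel least-squares setting, the kind of optimal control law mentioned in the abstract takes the classic form q = -(G^H G)^{-1} G^H p, where G holds the secondary-source-to-microphone transfer functions and p the primary pressures. A minimal sketch with an illustrative random plant (the matrix sizes and values are assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 3        # error microphones, secondary channels (assumed sizes)
# Complex plant G (secondary source -> microphone transfer functions) and
# primary pressures p; random stand-ins for measured quantities.
G = rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))
p_primary = rng.normal(size=M) + 1j * rng.normal(size=M)

# Least-squares optimal source strengths: q = -(G^H G)^{-1} G^H p
q_opt = -np.linalg.solve(G.conj().T @ G, G.conj().T @ p_primary)

p_residual = p_primary + G @ q_opt
reduction_db = 10 * np.log10(
    np.sum(np.abs(p_primary) ** 2) / np.sum(np.abs(p_residual) ** 2)
)
print(f"sum-of-squared-pressure reduction: {reduction_db:.2f} dB")
# Optimality check: the residual is orthogonal to the columns of G.
print(bool(np.allclose(G.conj().T @ p_residual, 0)))
```

    With an unstructured random plant the reduction is modest; large reductions arise only when the secondary sources can closely reproduce the primary field at the sensors.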

  6. A double-panel active segmented partition module using decoupled analog feedback controllers: numerical model.

    PubMed

    Sagers, Jason D; Leishman, Timothy W; Blotter, Jonathan D

    2009-06-01

    Low-frequency sound transmission has long plagued the sound isolation performance of lightweight partitions. Over the past two decades, researchers have investigated actively controlled structures to prevent sound transmission from a source space into a receiving space. An approach using active segmented partitions (ASPs) seeks to improve low-frequency sound isolation capabilities. An ASP is a partition that has been mechanically and acoustically segmented into a number of small, individually controlled modules. This paper provides a theoretical and numerical development of a single ASP module configuration, wherein each panel of the double-panel structure is independently actuated and controlled by an analog feedback controller. A numerical model is developed to estimate frequency response functions for the purpose of controller design, to understand the effects of acoustic coupling between the panels, to predict the transmission loss of the module in both passive and active states, and to demonstrate that the proposed ASP module will produce bidirectional sound isolation.

  7. Active/Passive Control of Sound Radiation from Panels using Constrained Layer Damping

    NASA Technical Reports Server (NTRS)

    Gibbs, Gary P.; Cabell, Randolph H.

    2003-01-01

    A hybrid passive/active noise control system utilizing constrained layer damping and model predictive feedback control is presented. This system is used to control the sound radiation of panels due to broadband disturbances. To facilitate the hybrid system design, a methodology for placement of constrained layer damping which targets selected modes based on their relative radiated sound power is developed. The placement methodology is utilized to determine two constrained layer damping configurations for experimental evaluation of a hybrid system. The first configuration targets the (4,1) panel mode which is not controllable by the piezoelectric control actuator, and the (2,3) and (5,2) panel modes. The second configuration targets the (1,1) and (3,1) modes. The experimental results demonstrate the improved reduction of radiated sound power using the hybrid passive/active control system as compared to the active control system alone.

  8. Issues in Humanoid Audition and Sound Source Localization by Active Audition

    NASA Astrophysics Data System (ADS)

    Nakadai, Kazuhiro; Okuno, Hiroshi G.; Kitano, Hiroaki

    In this paper, we present an active audition system implemented on the humanoid robot "SIG the humanoid". The audition system of a highly intelligent humanoid localizes sound sources and recognizes auditory events in the auditory scene. The active audition reported in this paper enables SIG to track sources by integrating audition, vision, and motor movements. Given multiple sound sources in the auditory scene, SIG actively moves its head to improve localization by aligning its microphones orthogonal to a sound source and by capturing possible sound sources with vision. However, such active head movement inevitably creates motor noise. The system adaptively cancels motor noise using motor control signals and the cover acoustics. The experimental results demonstrate that active audition, by integration of audition, vision, and motor control, attains sound source tracking in a variety of conditions.

  9. Performance of active feedforward control systems in non-ideal, synthesized diffuse sound fields.

    PubMed

    Misol, Malte; Bloch, Christian; Monner, Hans Peter; Sinapius, Michael

    2014-04-01

    The acoustic performance of passive or active panel structures is usually tested in sound transmission loss facilities. A reverberant sending room, equipped with one or a number of independent sound sources, is used to generate a diffuse sound field excitation which acts as a disturbance source on the structure under investigation. The spatial correlation and coherence of such a synthesized non-ideal diffuse-sound-field excitation, however, might deviate significantly from the ideal case. This has consequences for the operation of an active feedforward control system, which heavily relies on the acquisition of coherent disturbance source information. This work, therefore, evaluates the spatial correlation and coherence of ideal and non-ideal diffuse sound fields and considers the implications for the performance of a feedforward control system. The system under consideration is an aircraft-typical double panel system, equipped with an active sidewall panel (lining), which is realized in a transmission loss facility. Experimental results for different numbers of sound sources in the reverberation room are compared to simulation results of a comparable generic double panel system excited by an ideal diffuse sound field. It is shown that the number of statistically independent noise sources acting on the primary structure of the double panel system depends not only on the type of diffuse sound field but also on the sample lengths of the processed signals. The experimental results show that the number of reference sensors required for a defined control performance exhibits an inverse relationship to the control filter length.
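
    For reference, an ideal diffuse field has the closed-form spatial correlation sinc(kr) = sin(kr)/(kr) between two points a distance r apart, and a field synthesized from many uncorrelated plane waves should approach it. A small Monte-Carlo check (the frequency, spacing range, and wave count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
c, f = 343.0, 500.0                      # assumed speed of sound and frequency
k = 2 * np.pi * f / c
r = np.linspace(0.01, 1.0, 200)          # sensor separations, m

# Closed-form spatial correlation of an ideal diffuse field: sinc(kr).
corr_theory = np.sin(k * r) / (k * r)

# Monte-Carlo estimate: average over many plane waves with directions drawn
# uniformly on the sphere (u = cosine of the angle to the sensor axis).
n_waves = 20000
u = rng.uniform(-1.0, 1.0, n_waves)
corr_est = np.cos(np.outer(r, k * u)).mean(axis=1)

print(f"max deviation from sinc(kr): {np.max(np.abs(corr_est - corr_theory)):.4f}")
```

    A sending room driven by only a few sources cannot realize this average over directions, which is one way the synthesized field deviates from the ideal case discussed above.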

  10. Frequency-independent radiation modes of interior sound radiation: Experimental study and global active control

    NASA Astrophysics Data System (ADS)

    Hesse, C.; Papantoni, V.; Algermissen, S.; Monner, H. P.

    2017-08-01

    Active control of structural sound radiation is a promising technique to overcome the poor passive acoustic isolation performance of lightweight structures in the low-frequency region. Active structural acoustic control commonly aims at the suppression of the far-field radiated sound power. This paper is concerned with the active control of sound radiation into acoustic enclosures. Experimental results of a coupled rectangular plate-fluid system under stochastic excitation are presented. The amplitudes of the frequency-independent interior radiation modes are determined in real-time using a set of structural vibration sensors, for the purpose of estimating their contribution to the acoustic potential energy in the enclosure. This approach is validated by acoustic measurements inside the cavity. Utilizing a feedback control approach, a broadband reduction of the global acoustic response inside the enclosure is achieved.

  11. Active control of noise on the source side of a partition to increase its sound isolation

    NASA Astrophysics Data System (ADS)

    Tarabini, Marco; Roure, Alain; Pinhede, Cedric

    2009-03-01

    This paper describes a local active noise control system that virtually increases the sound isolation of a dividing wall by means of a secondary source array. With the proposed method, the sound pressure on the source side of the partition is reduced using an array of loudspeakers that generates destructive interference on the wall surface, where an array of error microphones is placed. The reduction of sound pressure on the incident side of the wall is expected to decrease the sound radiated into the contiguous room. The method's efficiency was experimentally verified by measuring the insertion loss of the active noise control system; in order to investigate the possibility of using a large number of actuators, a decentralized FXLMS control algorithm was used. Active control performance and stability were tested with different array configurations, loudspeaker directivities, and enclosure characteristics (sound source position and absorption coefficient). The influence of all these parameters was investigated with a factorial design of experiments. The main outcome of the experimental campaign was that the insertion loss produced by the secondary source array, in the 50-300 Hz frequency range, was close to 10 dB. In addition, the analysis of variance showed that the active noise control performance can be optimized with a proper choice of the directional characteristics of the secondary sources and the distance between loudspeakers and error microphones.
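
    The decentralized FXLMS algorithm used above builds on the single-channel filtered-x LMS update, in which the reference signal is filtered through an estimate of the secondary path before the weight update. A minimal single-channel sketch with toy FIR paths (all plant coefficients, tap counts, and the step size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy FIR primary path P and secondary path S (illustrative assumptions).
P = np.array([0.0, 0.5, 0.3, 0.1])
S = np.array([0.0, 0.6, 0.2])
S_hat = S.copy()                   # secondary-path estimate (perfect here)

n_taps, mu, n = 16, 0.01, 20000
w = np.zeros(n_taps)               # adaptive control filter
x = rng.normal(size=n)             # reference signal
d = np.convolve(x, P)[:n]          # disturbance at the error microphone
xf = np.convolve(x, S_hat)[:n]     # filtered reference

x_buf = np.zeros(n_taps)
fx_buf = np.zeros(n_taps)
y_buf = np.zeros(len(S))
e_hist = np.zeros(n)

for i in range(n):
    x_buf = np.roll(x_buf, 1); x_buf[0] = x[i]
    y = w @ x_buf                          # control signal
    y_buf = np.roll(y_buf, 1); y_buf[0] = y
    e = d[i] + S @ y_buf                   # error mic: disturbance + anti-noise
    fx_buf = np.roll(fx_buf, 1); fx_buf[0] = xf[i]
    w -= mu * e * fx_buf                   # FXLMS weight update
    e_hist[i] = e

before = np.mean(e_hist[:200] ** 2)        # error power before convergence
after = np.mean(e_hist[-2000:] ** 2)       # error power after convergence
print(f"residual error power reduced by {10 * np.log10(before / after):.1f} dB")
```

    In the decentralized variant each unit runs an update like this with only its own error sensor, which is what makes large actuator counts practical at the cost of ignoring cross-channel paths.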

  12. Investigation of spherical loudspeaker arrays for local active control of sound.

    PubMed

    Peleg, Tomer; Rafaely, Boaz

    2011-10-01

    Active control of sound can be employed globally to reduce noise levels in an entire enclosure, or locally around a listener's head. Recently, spherical loudspeaker arrays have been studied as multiple-channel sources for local active control of sound, presenting the fundamental theory and several active control configurations. In this paper, important aspects of using a spherical loudspeaker array for local active control of sound are further investigated. First, the feasibility of creating sphere-shaped quiet zones away from the source is studied both theoretically and numerically, showing that these quiet zones are associated with sound amplification and poor system robustness. To mitigate the latter, the design of shell-shaped quiet zones around the source is investigated. A combination of two spherical sources is then studied with the aim of enlarging the quiet zone. The two sources are employed to generate quiet zones that surround a rigid sphere, investigating the application of active control around a listener's head. A significant improvement in performance is demonstrated in this case over a conventional headrest-type system that uses two monopole secondary sources. Finally, several simulations are presented to support the theoretical work and to demonstrate the performance and limitations of the system.

  13. The use of an active controlled enclosure to attenuate sound radiation from a heavy radiator

    NASA Astrophysics Data System (ADS)

    Sun, Yao; Yang, Tiejun; Zhu, Minggang; Pan, Jie

    2017-03-01

    Active structural-acoustic control usually experiences difficulty in the control of heavy sources, or of sources where direct application of control forces is not practical. To overcome this difficulty, an actively controlled enclosure, which forms a cavity with both flexible and open boundaries, is employed. This configuration permits indirect implementation of active control, in which the control inputs can be applied to subsidiary structures rather than to the sources. To determine the control effectiveness of the configuration, the vibro-acoustic behavior of the system, which consists of a top plate with an opening, a sound cavity, and a source panel, is investigated in this paper. A complete mathematical model of the system is formulated using modified Fourier series formulations, and the governing equations are solved using the Rayleigh-Ritz method. The coupling mechanisms of a partly opened cavity and a plate are analysed in terms of modal responses and directivity patterns. Furthermore, to attenuate the sound power radiated from both the top panel and the opening, two strategies are studied: minimizing the total radiated power and cancelling the volume velocity. Moreover, three control configurations are compared: using a point force on the control panel (structural control), using a sound source in the cavity (acoustical control), and applying hybrid structural-acoustical control. In addition, the effects of the boundary conditions of the control panel on the sound radiation and control performance are discussed.

  14. Decentralized Control of Sound Radiation using a High-Authority/Low-Authority Control Strategy with Anisotropic Actuators

    NASA Technical Reports Server (NTRS)

    Schiller, Noah H.; Cabell, Randolph H.; Fuller, Chris R.

    2008-01-01

    This paper describes a combined control strategy designed to reduce sound radiation from stiffened aircraft-style panels. The control architecture uses robust active damping in addition to high-authority linear quadratic Gaussian (LQG) control. Active damping is achieved using direct velocity feedback with triangularly shaped anisotropic actuators and point velocity sensors. While active damping is simple and robust, stability is guaranteed at the expense of performance. Therefore the approach is often referred to as low-authority control. In contrast, LQG control strategies can achieve substantial reductions in sound radiation. Unfortunately, the unmodeled interaction between neighboring control units can destabilize decentralized control systems. Numerical simulations show that combining active damping and decentralized LQG control can be beneficial. In particular, augmenting the in-bandwidth damping supplements the performance of the LQG control strategy and reduces the destabilizing interaction between neighboring control units.
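
    Direct velocity feedback as described above is simple to reason about on a single-degree-of-freedom model: a collocated force proportional to negative velocity adds directly to the passive damping term, which is why stability is easy to guarantee while performance stays modest. A minimal sketch (all parameter values are illustrative assumptions):

```python
import numpy as np

# Single-degree-of-freedom sketch of direct velocity feedback ("low-authority"
# active damping): the collocated control force f_c = -g * velocity simply
# adds the gain g to the passive damping coefficient c.
m, c, k = 1.0, 2.0, 1.0e4          # mass (kg), damping (N s/m), stiffness (N/m)
zeta_passive = c / (2 * np.sqrt(k * m))

for g in (0.0, 10.0, 40.0):
    zeta = (c + g) / (2 * np.sqrt(k * m))   # feedback gain adds damping
    peak = 1.0 / (2 * zeta)                 # resonant magnification (small-zeta)
    print(f"g = {g:5.1f} N s/m -> zeta = {zeta:.3f}, resonant peak x{peak:.1f}")
```

    The high-authority LQG layer then works on a better-damped, easier-to-stabilize plant, which is the point of the combined strategy.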

  15. Multichannel feedforward control schemes with coupling compensation for active sound profiling

    NASA Astrophysics Data System (ADS)

    Mosquera-Sánchez, Jaime A.; Desmet, Wim; de Oliveira, Leopoldo P. R.

    2017-05-01

    Active sound profiling comprises a number of control techniques that enable the equalization, rather than the mere reduction, of acoustic noise. Challenges may arise when trying to achieve distinct targeted sound profiles simultaneously at multiple locations, e.g., within a vehicle cabin. This paper introduces distributed multichannel control schemes for independently tailoring structure-borne sound reaching a number of locations within a cavity. The proposed techniques address the cross interactions amongst feedforward active sound profiling units, which compensate for interferences of the primary sound at each location of interest by exchanging run-time data amongst the control units while attaining the desired control targets. Computational complexity, convergence, and stability of the proposed multichannel schemes are examined in light of the physical system on which they are implemented. The tuning performance of the proposed algorithms is benchmarked against centralized and pure-decentralized control schemes through computer simulations on a simplified numerical model, which has also been subjected to plant magnitude variations. Provided that the representation of the plant is accurate enough, the proposed multichannel control schemes are shown to be the only ones that properly deliver targeted active sound profiling tasks at each error sensor location. Experimental results in a 1:3-scaled vehicle mock-up further demonstrate that the proposed schemes are able to attain reductions of more than 60 dB upon periodic disturbances at a number of positions, while resolving cross-channel interferences. Moreover, when the sensor/actuator placement is found to be deficient at a given frequency, the inclusion of a regularization parameter in the cost function does not hinder the proper operation of the proposed compensation schemes while it assures their stability, at the expense of some control performance.

  16. Active room compensation for sound reinforcement using sound field separation techniques.

    PubMed

    Heuchel, Franz M; Fernandez-Grande, Efren; Agerkvist, Finn T; Shabalina, Elena

    2018-03-01

    This work investigates how the sound field created by a sound reinforcement system can be controlled at low frequencies. An indoor control method is proposed which actively absorbs the sound incident on a reflecting boundary using an array of secondary sources. The sound field is separated into incident and reflected components by a microphone array close to the secondary sources, enabling the minimization of reflected components by means of optimal signals for the secondary sources. The method is purely feed-forward and assumes constant room conditions. Three different sound field separation techniques for the modeling of the reflections are investigated based on plane wave decomposition, equivalent sources, and the Spatial Fourier transform. Simulations and an experimental validation are presented, showing that the control method performs similarly well at enhancing low frequency responses with the three sound separation techniques. Resonances in the entire room are reduced, although the microphone array and secondary sources are confined to a small region close to the reflecting wall. Unlike previous control methods based on the creation of a plane wave sound field, the investigated method works in arbitrary room geometries and primary source positions.
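
    The plane-wave variant of the sound field separation step can be sketched in one dimension: with two microphones in front of the reflecting boundary, the incident and reflected complex amplitudes follow from a 2x2 linear solve. A minimal sketch (the frequency, microphone positions, and the reflection coefficient used only to synthesize the test data are illustrative assumptions):

```python
import numpy as np

c, f = 343.0, 100.0                  # assumed speed of sound and frequency
k = 2 * np.pi * f / c

# Synthetic 1-D field in front of a reflecting wall at x = 0: an incident
# wave travelling towards the wall plus a reflected wave with coefficient R.
A, R = 1.0 + 0.0j, 0.6 * np.exp(1j * 0.3)
def pressure(x):
    return A * np.exp(1j * k * x) + R * A * np.exp(-1j * k * x)

x1, x2 = 0.2, 0.5                    # assumed microphone positions, m
p = np.array([pressure(x1), pressure(x2)])

# Separation: solve the 2x2 system for incident/reflected amplitudes.
M = np.array([[np.exp(1j * k * x1), np.exp(-1j * k * x1)],
              [np.exp(1j * k * x2), np.exp(-1j * k * x2)]])
A_est, B_est = np.linalg.solve(M, p)
print(f"recovered reflection coefficient: {B_est / A_est:.4f}")
```

    With the reflected amplitude in hand, the secondary sources can be driven to cancel it, which is the absorption step described in the abstract; the equivalent-source and spatial-Fourier variants generalize this solve to non-planar fields.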

  17. Active Noise Control Experiments using Sound Energy Flux

    NASA Astrophysics Data System (ADS)

    Krause, Uli

    2015-03-01

    This paper reports on the latest results concerning the active noise control approach using the net flow of acoustic energy. The test set-up consists of two loudspeakers simulating the engine noise and two smaller loudspeakers which belong to the active noise control system. The system is completed by two acceleration sensors and one microphone per loudspeaker. The microphones are located in the near sound field of the loudspeakers. The control algorithm, including the update equation of the feed-forward controller, is introduced. Numerical simulations are performed with a comparison to a state-of-the-art method minimising the radiated sound power. The proposed approach is experimentally validated.
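
    The net flow of acoustic energy (the active sound intensity) can be estimated from a pair of closely spaced microphones with the standard p-p approximation I ≈ Im(p1 p2*)/(2 ρ ω Δx). A minimal sketch on a synthetic plane wave, whose exact intensity |p|²/(2ρc) is known (the frequency, spacing, and amplitude are illustrative assumptions):

```python
import numpy as np

rho, c, f = 1.21, 343.0, 200.0      # air density, speed of sound, frequency
w = 2 * np.pi * f
k = w / c
dx = 0.05                           # microphone spacing, m

# Plane wave travelling in +x; exact time-averaged intensity |p|^2/(2 rho c).
p_amp = 1.0
p1 = p_amp * np.exp(-1j * k * (-dx / 2))    # pressure at x = -dx/2
p2 = p_amp * np.exp(-1j * k * (+dx / 2))    # pressure at x = +dx/2

# Two-microphone (p-p) estimate of the active intensity at the midpoint.
I_est = np.imag(p1 * np.conj(p2)) / (2 * rho * w * dx)
I_exact = p_amp ** 2 / (2 * rho * c)
print(f"estimated {I_est:.6f} W/m^2 vs exact {I_exact:.6f} W/m^2")
```

    The estimate is biased by a sinc(k dx) factor, so the spacing must stay small against the wavelength; here the error is below one percent.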

  18. Shaping reverberating sound fields with an actively tunable metasurface.

    PubMed

    Ma, Guancong; Fan, Xiying; Sheng, Ping; Fink, Mathias

    2018-06-26

    A reverberating environment is a common complex medium for airborne sound, with familiar examples such as music halls and lecture theaters. The complexity of reverberating sound fields has hindered their meaningful control. Here, by combining acoustic metasurface and adaptive wavefield shaping, we demonstrate the versatile control of reverberating sound fields in a room. This is achieved through the design and the realization of a binary phase-modulating spatial sound modulator that is based on an actively reconfigurable acoustic metasurface. We demonstrate useful functionalities including the creation of quiet zones and hotspots in a typical reverberating environment.
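
    The binary phase-modulation idea can be sketched abstractly: given complex transfer coefficients from each metasurface element to a target point, choosing each element's phase from {0, π} so that its contribution projects positively onto a common reference phase concentrates energy at the target. A toy sketch with a random stand-in for a reverberant response (the element count and statistics are illustrative assumptions, not the paper's hardware):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 256                             # metasurface elements (assumed)
# Complex transfer coefficients from each element to the target hotspot;
# a random vector stands in for a measured reverberant response.
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)

def focus(s):
    """Intensity at the target for a binary phase pattern s in {-1, +1}^N."""
    return np.abs(np.sum(s * h)) ** 2

# Reference level: average intensity over random binary patterns.
baseline = np.mean([focus(rng.choice([-1.0, 1.0], size=N)) for _ in range(200)])

# Binary optimization: for each trial reference phase, flip every element so
# its contribution projects positively; keep the best trial.
best = None
for phi in np.linspace(0, np.pi, 64, endpoint=False):
    s = np.sign(np.real(h * np.exp(-1j * phi)))
    s[s == 0] = 1.0
    if best is None or focus(s) > focus(best):
        best = s

print(f"hotspot intensity gain over random phases: x{focus(best) / baseline:.1f}")
```

    The same machinery with the objective minimized instead of maximized yields a quiet zone; the paper's contribution is realizing such a reconfigurable binary modulator acoustically.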

  19. Theoretical and experimental study on active sound transmission control based on single structural mode actuation using point force actuators.

    PubMed

    Sanada, Akira; Tanaka, Nobuo

    2012-08-01

    This study deals with the feedforward active control of sound transmission through a simply supported rectangular panel using vibration actuators. The control effect largely depends on the excitation method, including the number and locations of the actuators. In order to obtain a large control effect at low frequencies over a wide frequency range, an active sound transmission control method based on single structural mode actuation is proposed. Then, with the goal of examining the feasibility of the proposed method, the (1, 3) mode is selected as the target mode and a modal actuation method using six point-force actuators is considered. Assuming that single-input single-output feedforward control is used, the sound transmission obtained when the transmitted sound power is minimized is calculated for several actuation methods. Simulation results showed that the (1, 3) modal actuation is globally effective at reducing the sound transmission by more than 10 dB in the low-frequency range for both normal and oblique incidence. Finally, experimental results also showed that a large reduction could be achieved in the low-frequency range, which proves the validity and feasibility of the proposed method.
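
    The single-mode actuation idea can be sketched for a simply supported panel: weighting point forces by the target mode shape on a regular grid delivers a generalized force only to the target mode, thanks to the discrete orthogonality of sine functions on such a grid. A minimal sketch (the panel dimensions, the 2 x 3 grid, and the actuator positions are illustrative assumptions, not the paper's layout):

```python
import numpy as np

a, b = 0.8, 0.5                       # assumed panel dimensions, m

def mode(m, n, x, y):
    """Mode shape of a simply supported rectangular panel."""
    return np.sin(m * np.pi * x / a) * np.sin(n * np.pi * y / b)

# Six point-force actuators on a regular 2 x 3 interior grid; the discrete
# sine functions are orthogonal on this grid, enabling single-mode actuation.
xs = np.array([a * i / 3 for i in (1, 2) for _ in (1, 2, 3)])
ys = np.array([b * j / 4 for _ in (1, 2) for j in (1, 2, 3)])

# Force weights proportional to the target (1, 3) mode shape.
F = mode(1, 3, xs, ys)

# Generalized (modal) force delivered to each low-order mode.
Qs = {}
for m in (1, 2):
    for n in (1, 2, 3):
        Qs[(m, n)] = float(np.sum(F * mode(m, n, xs, ys)))
        print(f"mode ({m},{n}): modal force {Qs[(m, n)]:+.3f}")
```

    Only the (1, 3) entry is nonzero, so a single-input controller driving these weighted forces acts on one structural mode at a time.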

  20. Anticipated Effectiveness of Active Noise Control in Propeller Aircraft Interiors as Determined by Sound Quality Tests

    NASA Technical Reports Server (NTRS)

    Powell, Clemans A.; Sullivan, Brenda M.

    2004-01-01

    Two experiments were conducted, using sound quality engineering practices, to determine the subjective effectiveness of hypothetical active noise control systems in a range of propeller aircraft. The two tests differed by the type of judgments made by the subjects: pair comparisons in the first test and numerical category scaling in the second. Although the results of the two tests were in general agreement that the hypothetical active control measures improved the interior noise environments, the pair comparison method appears to be more sensitive to subtle changes in the characteristics of the sounds which are related to passenger preference.

  21. Active Control of Sound Radiation due to Subsonic Wave Scattering from Discontinuities on Thin Elastic Beams.

    NASA Astrophysics Data System (ADS)

    Guigou, Catherine Renee J.

    1992-01-01

    Much progress has been made in recent years in active control of sound radiation from vibrating structures. Reduction of the far-field acoustic radiation can be obtained by directly modifying the response of the structure through structural inputs rather than by adding acoustic sources. Discontinuities, which are present in many structures, are often important in terms of sound radiation due to wave-scattering behavior at their location. In this thesis, an edge or boundary type discontinuity (clamped edge) and a point discontinuity (blocking mass) are analytically studied in terms of sound radiation. When subsonic vibrational waves impinge on these discontinuities, large scattered sound levels are radiated. Active control is then achieved by applying either control forces, which approximate shakers, or pairs of control moments, which approximate piezoelectric actuators, near the discontinuity. Active control of sound radiation from a simply supported beam is also examined. For a single frequency, the flexural response of the beam subject to an incident wave or an input force (disturbance) and to control forces or control moments is expressed in terms of waves of both propagating and near-field types. The far-field radiated pressure is then evaluated in terms of the structural response, using Rayleigh's formula or a stationary-phase approach, depending upon the application. The control force and control moment magnitudes are determined by optimizing a quadratic cost function directly related to the control performance. Once the optimal control complex amplitudes are determined, they can be resubstituted into the constitutive equations for the system under study and the minimized radiated fields can be evaluated. High attenuation in radiated sound power and radiated acoustic pressure is found to be possible when one or two active control actuators are located near the discontinuity; this attenuation is shown to be mostly associated with local changes in the beam response near the discontinuity. The effect of the control actuators on the far-field radiated pressure, the wavenumber spectrum, the flexural displacement, and the near-field time-averaged intensity and pressure distributions is studied in order to further understand the control mechanisms. The influence of the near-field structural waves is investigated as well. Some experimental results are presented for comparison.

  22. Active control of counter-rotating open rotor interior noise in a Dornier 728 experimental aircraft

    NASA Astrophysics Data System (ADS)

    Haase, Thomas; Unruh, Oliver; Algermissen, Stephan; Pohl, Martin

    2016-08-01

    The fuel consumption of future civil aircraft needs to be reduced because of the CO2 restrictions declared by the European Union. Consistent lightweight design and a new engine concept called the counter-rotating open rotor are seen as key technologies in the attempt to reach these ambitious goals. Bearing in mind that counter-rotating open rotor engines emit very high sound pressures at low frequencies and that lightweight structures have poor transmission loss in the lower frequency range, these key technologies raise new questions with regard to acoustic passenger comfort. One of the promising solutions for the reduction of sound pressure levels inside the aircraft cabin is active sound and vibration systems. So far, active concepts have rarely been investigated for a counter-rotating open rotor pressure excitation on complex airframe structures. Hence, the state of the art is augmented by the preliminary study presented in this paper. The study shows how an active vibration control system can influence the sound transmission of counter-rotating open rotor noise through a complex airframe structure into the cabin. Furthermore, open questions on the way towards the realisation of an active control system are addressed. In this phase, an active feedforward control system is investigated in a fully equipped Dornier 728 experimental prototype aircraft. In particular, the sound transmission through the airframe, the coupling of classical actuators (inertial and piezoelectric patch actuators) into the structure, and the performance of the active vibration control system with different error sensors are investigated. It can be shown that the active control system achieves a reduction of up to 5 dB at several counter-rotating open rotor frequencies, but also that better performance could be achieved through further optimisations.

  23. Evaluating the performance of active noise control systems in commercial and industrial applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Depies, C.; Deneen, S.; Lowe, M.

    1995-06-01

Active sound cancellation technology is increasingly being used to quiet commercial and industrial air-moving devices. Engineers and designers are implementing active or combined active/passive technology to control sound quality in the workplace and the acoustical environment in residential areas near industrial facilities. Sound level measurements made before and after the installation of active systems have proved that significant improvements in sound quality can be obtained even if there is little or no change in the NC/RC or dBA numbers. Noise produced by centrifugal and vane-axial fans, pumps, and blowers, commonly used for ventilation and material movement in industry, is frequently dominated by high-amplitude tonal noise at low frequencies. The low-frequency noise produced by commercial air handlers, by contrast, often has less tonal and more broadband character, resulting in audible duct rumble and objectionable room spectra. Because the A-weighting network, which is commonly used for industrial noise measurements, de-emphasizes low frequencies, its single-number rating can be misleading in terms of judging the overall subjective sound quality in impacted areas and assessing the effectiveness of noise control measures. Similarly, NC values, traditionally used for commercial HVAC acoustical design criteria, can be governed by noise at any frequency and cannot accurately depict human judgment of the aural comfort level. Analyses of frequency spectrum characteristics provide the most effective means of assessing sound quality and determining mitigative measures for achieving suitable background sound levels.
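The low-frequency de-emphasis of the A-weighting network discussed above can be made concrete with the standard analytic weighting curve from IEC 61672. The short Python sketch below is illustrative only (it is not part of the original record); it shows that a 50 Hz fan tone is discounted by roughly 30 dB, which is why dBA ratings can mask strong low-frequency content.

```python
import math

def a_weight_db(f: float) -> float:
    """A-weighting relative response in dB (IEC 61672 analytic form),
    normalised to 0 dB at 1 kHz."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.00

# Low-frequency tonal noise is heavily discounted by the A-weighting:
for f in (50, 125, 1000):
    print(f"{f:>5} Hz: {a_weight_db(f):6.1f} dB re unweighted")
```

At 50 Hz the weighting is about -30 dB and at 125 Hz about -16 dB, so a dominant fan tone in that range barely moves the single-number dBA rating.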

  4. Active noise control using a distributed mode flat panel loudspeaker.

    PubMed

    Zhu, H; Rajamani, R; Dudney, J; Stelson, K A

    2003-07-01

    A flat panel distributed mode loudspeaker (DML) has many advantages over traditional cone speakers in terms of its weight, size, and durability. However, its frequency response is uneven and complex, thus bringing its suitability for active noise control (ANC) under question. This paper presents experimental results demonstrating the effective use of panel DML speakers in an ANC application. Both feedback and feedforward control techniques are considered. Effective feedback control with a flat panel speaker could open up a whole range of new noise control applications and has many advantages over feedforward control. The paper develops a new control algorithm to attenuate tonal noise of a known frequency by feedback control. However, due to the uneven response of the speakers, feedback control is found to be only moderately effective even for this narrow-band application. Feedforward control proves to be most capable for the flat panel speaker. Using feedforward control, the sound pressure level can be significantly reduced in close proximity to an error microphone. The paper demonstrates an interesting application of the flat panel in which the panel is placed in the path of sound and effectively used to block sound transmission using feedforward control. This is a new approach to active noise control enabled by the use of flat panels and can be used to prevent sound from entering into an enclosure in the first place rather than the traditional approach of attempting to cancel sound after it enters the enclosure.

  5. Actuator placement for active sound and vibration control

    NASA Technical Reports Server (NTRS)

    1997-01-01

Two refereed journal publications and ten talks given at conferences, seminars, and colloquia resulted from research supported by NASA. They are itemized in this report. The two publications were entitled "Reactive Tabu Search and Sensor Selection in Active Structural Acoustic Control Problems" and "Quelling Cabin Noise in Turboprop Aircraft via Active Control." The conference presentations covered various aspects of actuator placement, including location problems, for active sound and vibration control of cylinders, of commuter jets, of propeller-driven or turboprop aircraft, and for quelling aircraft cabin or interior noise.

  6. Reduction of interior sound fields in flexible cylinders by active vibration control

    NASA Technical Reports Server (NTRS)

    Jones, J. D.; Fuller, C. R.

    1988-01-01

The mechanisms of interior sound reduction through active control of a thin flexible shell's vibrational response are evaluated using an analytical model. The noise source is a single exterior acoustic monopole. The active control model is evaluated for harmonic excitation; the results indicate spatially averaged noise reductions in excess of 20 dB over the source plane for acoustic resonant conditions inside the cavity.

  7. Aircraft panel with sensorless active sound power reduction capabilities through virtual mechanical impedances

    NASA Astrophysics Data System (ADS)

    Boulandet, R.; Michau, M.; Micheau, P.; Berry, A.

    2016-01-01

    This paper deals with an active structural acoustic control approach to reduce the transmission of tonal noise in aircraft cabins. The focus is on the practical implementation of the virtual mechanical impedances method by using sensoriactuators instead of conventional control units composed of separate sensors and actuators. The experimental setup includes two sensoriactuators developed from the electrodynamic inertial exciter and distributed over an aircraft trim panel which is subject to a time-harmonic diffuse sound field. The target mechanical impedances are first defined by solving a linear optimization problem from sound power measurements before being applied to the test panel using a complex envelope controller. Measured data are compared to results obtained with sensor-actuator pairs consisting of an accelerometer and an inertial exciter, particularly as regards sound power reduction. It is shown that the two types of control unit provide similar performance, and that here virtual impedance control stands apart from conventional active damping. In particular, it is clear from this study that extra vibrational energy must be provided by the actuators for optimal sound power reduction, mainly due to the high structural damping in the aircraft trim panel. Concluding remarks on the benefits of using these electrodynamic sensoriactuators to control tonal disturbances are also provided.

  8. Mathematically trivial control of sound using a parametric beam focusing source.

    PubMed

    Tanaka, Nobuo; Tanaka, Motoki

    2011-01-01

By exploiting a case usually regarded as trivial, this paper presents global active noise control using a parametric beam focusing source (PBFS). For a dipole model in which one monopole serves as the primary sound source and the other as the control source, the control effect achievable by minimizing the total acoustic power depends on the distance between the two. When the distance becomes zero, the total acoustic power becomes null; hence this is nothing less than a trivial case. Because of practical constraints, however, it is difficult to place a control source close enough to a primary source. By projecting the sound beam of a parametric array loudspeaker onto the target sound source (the primary source), a virtual sound source may be created on the target, thereby enabling collocation of the two sources. To further ensure the feasibility of the trivial case, a PBFS is introduced to match the size of the control source to that of the primary source. The reflected sound wave of the PBFS, which is tantamount to the output of the virtual sound source, is used to suppress the primary sound. Finally, a numerical analysis as well as an experiment is conducted, verifying the validity of the proposed methodology.
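The distance dependence described above follows from the classical free-field result for a pair of point monopoles driven in antiphase (a textbook result, not taken from the paper itself): the total radiated power relative to the primary source alone is W/W0 = 2(1 - sinc(kd)), which vanishes as the separation d goes to zero and explains why collocation makes the problem "trivial".

```python
import math

def power_ratio(kd: float) -> float:
    """Total radiated power of a primary monopole plus an equal-strength
    antiphase control monopole at separation d, normalised by the power
    of the primary source alone: W/W0 = 2*(1 - sinc(k*d))."""
    sinc = 1.0 if kd == 0.0 else math.sin(kd) / kd
    return 2.0 * (1.0 - sinc)

# Collocated sources cancel completely; separation erodes the control effect.
for frac in (0.0, 0.1, 0.5):          # separation as a fraction of a wavelength
    kd = 2.0 * math.pi * frac
    print(f"d = {frac:4.2f} wavelengths -> W/W0 = {power_ratio(kd):.3f}")
```

At a tenth of a wavelength the attenuation is already down to about 9 dB, and at half a wavelength the antiphase pair radiates *more* power than the primary source alone (W/W0 = 2), which is the basis of the well-known tenth-of-a-wavelength rule mentioned in the first record of this page.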

  9. Ventilation duct with concurrent acoustic feed-forward and decentralised structural feedback active control

    NASA Astrophysics Data System (ADS)

    Rohlfing, J.; Gardonio, P.

    2014-02-01

    This paper presents theoretical and experimental work on concurrent active noise and vibration control for a ventilation duct. The active noise control system is used to reduce the air-borne noise radiated via the duct outlet whereas the active vibration control system is used to both reduce the structure-borne noise radiated by the duct wall and to minimise the structural feed-through effect that reduces the effectiveness of the active noise control system. An elemental model based on structural mobility functions and acoustic impedance functions has been developed to investigate the principal effects and limitations of feed-forward active noise control and decentralised velocity feedback vibration control. The principal simulation results have been contrasted and validated with measurements taken on a laboratory duct set-up, equipped with an active noise control system and a decentralised vibration control system. Both simulations and experimental results show that the air-borne noise radiated from the duct outlet can be significantly attenuated using the feed-forward active noise control. In the presence of structure-borne noise the performance of the active noise control system is impaired by a structure-borne feed-through effect. Also the sound radiation from the duct wall is increased. In this case, if the active noise control is combined with a concurrent active vibration control system, the sound radiation by the duct outlet is further reduced and the sound radiation from the duct wall at low frequencies reduces noticeably.

  10. Inducing physiological stress recovery with sounds of nature in a virtual reality forest--results from a pilot study.

    PubMed

    Annerstedt, Matilda; Jönsson, Peter; Wallergård, Mattias; Johansson, Gerd; Karlson, Björn; Grahn, Patrik; Hansen, Ase Marie; Währborg, Peter

    2013-06-13

Experimental research on stress recovery in natural environments is limited, as is study of the effect of sounds of nature. After inducing stress by means of a virtual stress test, we explored physiological recovery in two different virtual natural environments (with and without exposure to sounds of nature) and in one control condition. Cardiovascular data and saliva cortisol were collected. Repeated-measures ANOVA indicated parasympathetic activation in the group subjected to sounds of nature in a virtual natural environment, suggesting that enhanced stress recovery may occur in such surroundings. The group that recovered in virtual nature without sound and the control group displayed no particular autonomic activation or deactivation. The results demonstrate a potential mechanistic link between nature, the sounds of nature, and stress recovery, and suggest the potential importance of virtual reality as a tool in this research field. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Peripheral mechanisms for vocal production in birds - differences and similarities to human speech and singing.

    PubMed

    Riede, Tobias; Goller, Franz

    2010-10-01

Song production in songbirds is a model system for studying learned vocal behavior. As in humans, bird phonation involves three main motor systems (respiration, vocal organ and vocal tract). The avian respiratory mechanism uses pressure regulation in air sacs to ventilate a rigid lung. In songbirds sound is generated with two independently controlled sound sources, which reside in a uniquely avian vocal organ, the syrinx. However, the physical sound generation mechanism in the syrinx shows strong analogies to that in the human larynx, such that both can be characterized as myoelastic-aerodynamic sound sources. Similarities include active adduction and abduction, oscillating tissue masses which modulate flow rate through the organ and a layered structure of the oscillating tissue masses giving rise to complex viscoelastic properties. Differences in the functional morphology of the sound producing system between birds and humans require specific motor control patterns. The songbird vocal apparatus is adapted for high speed, suggesting that temporal patterns and fast modulation of sound features are important in acoustic communication. Rapid respiratory patterns determine the coarse temporal structure of song and maintain gas exchange even during very long songs. The respiratory system also contributes to the fine control of airflow. Muscular control of the vocal organ regulates airflow and acoustic features. The upper vocal tract of birds filters the sounds generated in the syrinx, and filter properties are actively adjusted. Nonlinear source-filter interactions may also play a role. The unique morphology and biomechanical system for sound production in birds presents an interesting model for exploring parallels in control mechanisms that give rise to highly convergent physical patterns of sound generation. More comparative work should provide a rich source for our understanding of the evolution of complex sound producing systems. Copyright © 2009 Elsevier Inc. All rights reserved.

  12. Vibro-acoustic model of an active aircraft cabin window

    NASA Astrophysics Data System (ADS)

    Aloufi, Badr; Behdinan, Kamran; Zu, Jean

    2017-06-01

This paper presents the modeling and design of an active structural acoustic control (ASAC) system for controlling the low-frequency sound field transmitted through an aircraft cabin window. The system uses stacked piezoelectric elements arranged to generate out-of-plane actuation point forces acting on the window panel boundaries. A theoretical vibro-acoustic model of the active quadruple-panel system is developed to characterize its dynamic behavior and to achieve a good understanding of the active control performance and the physical phenomena underlying the sound transmission loss (STL) characteristics. The quadruple-panel system represents the passenger window design used in some classes of modern aircraft, with an exterior double pane of Plexiglas, an interior dust-cover pane, and a glazed dimmable pane, all separated by thin air cavities. The STL characteristics of identical-pane window configurations with different piezoelectric actuator sets are analyzed. A parametric study investigates the influence of important control parameters, such as the input voltage and the number and location of the actuator elements, on the STL. In addition, a mathematical model for obtaining the optimal input voltage is developed to improve the acoustic attenuation capability of the control system. In general, the results indicate that the proposed ASAC design offers a considerable improvement in the passive sound loss performance of the cabin window without significant side effects, such as weight increase, on the original design. The results also show that active control with piezoelectric actuators bonded to the dust-cover pane generates high structural vibrations in the radiating panel (the dust cover) and an increase in radiated sound power. High active acoustic attenuation can be achieved by designing the ASAC system to apply control forces on the inner Plexiglas or dimmable panel, by installing the actuators on the boundaries of one of these two panels. In some cases, increasing the number of actuators improves the active control performance by controlling more structural modes; however, this decreases the STL of the passively controlled system because the stiffer piezoelectric actuators add structure-borne sound transmission paths.
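The optimal-input-voltage idea above can be illustrated with a generic least-squares formulation (the transfer matrix and primary field below are random placeholders, not the paper's model): the control voltages that minimise the summed squared pressure at a set of field points are the least-squares solution of G u = -p_p, where G maps actuator voltages to radiated pressures.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical complex transfer matrix G: actuator voltages -> pressures at
# field points, and a primary pressure field p_p transmitted through the panel.
n_pts, n_act = 12, 3
G = rng.standard_normal((n_pts, n_act)) + 1j * rng.standard_normal((n_pts, n_act))
p_p = rng.standard_normal(n_pts) + 1j * rng.standard_normal(n_pts)

# Optimal voltages minimising the residual |p_p + G u|^2 (least squares):
u_opt = -np.linalg.lstsq(G, p_p, rcond=None)[0]

before = float(np.sum(np.abs(p_p) ** 2))
after = float(np.sum(np.abs(p_p + G @ u_opt) ** 2))
print(f"sum |p|^2: {before:.2f} -> {after:.2f}")
```

With more field points than actuators the residual is nonzero, mirroring the paper's observation that attenuation is limited by how many structural modes the actuator set can address.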

  13. Causal feedforward control of a stochastically excited fuselage structure with active sidewall panel.

    PubMed

    Misol, Malte; Haase, Thomas; Monner, Hans Peter; Sinapius, Michael

    2014-10-01

    This paper provides experimental results of an aircraft-relevant double panel structure mounted in a sound transmission loss facility. The primary structure of the double panel system is excited either by a stochastic point force or by a diffuse sound field synthesized in the reverberation room of the transmission loss facility. The secondary structure, which is connected to the frames of the primary structure, is augmented by actuators and sensors implementing an active feedforward control system. Special emphasis is placed on the causality of the active feedforward control system and its implications on the disturbance rejection at the error sensors. The coherence of the sensor signals is analyzed for the two different disturbance excitations. Experimental results are presented regarding the causality, coherence, and disturbance rejection of the active feedforward control system. Furthermore, the sound transmission loss of the double panel system is evaluated for different configurations of the active system. A principal result of this work is the evidence that it is possible to strongly influence the transmission of stochastic disturbance sources through double panel configurations by means of an active feedforward control system.

  14. Active Control of Panel Vibrations Induced by a Boundary Layer Flow

    NASA Technical Reports Server (NTRS)

    Chow, Pao-Liu

    1998-01-01

    In recent years, active and passive control of sound and vibration in aeroelastic structures have received a great deal of attention due to many potential applications to aerospace and other industries. There exists a great deal of research work done in this area. Recent advances in the control of sound and vibration can be found in the several conference proceedings. In this report we will summarize our research findings supported by the NASA grant NAG-1-1175. The problems of active and passive control of sound and vibration has been investigated by many researchers for a number of years. However, few of the articles are concerned with the sound and vibration with flow-structure interaction. Experimental and numerical studies on the coupling between panel vibration and acoustic radiation due to flow excitation have been done by Maestrello and his associates at NASA/Langley Research Center. Since the coupled system of nonlinear partial differential equations is formidable, an analytical solution to the full problem seems impossible. For this reason, we have to simplify the problem to that of the nonlinear panel vibration induced by a uniform flow or a boundary-layer flow with a given wall pressure distribution. Based on this simplified model, we have been able to study the control and stabilization of the nonlinear panel vibration, which have not been treated satisfactorily by other authors. The vibration suppression will clearly reduce the sound radiation power from the panel. The major research findings will be presented in the next three sections. In Section II we shall describe our results on the boundary control of nonlinear panel vibration, with or without flow excitation. Section III is concerned with active control of the vibration and sound radiation from a nonlinear elastic panel. A detailed description of our work on the parametric vibrational control of nonlinear elastic panel will be presented in Section IV. 
This paper will be submitted to the Journal of the Acoustical Society of America for publication.

  15. Active control of sound radiation from a vibrating rectangular panel by sound sources and vibration inputs - An experimental comparison

    NASA Technical Reports Server (NTRS)

    Fuller, C. R.; Hansen, C. H.; Snyder, S. D.

    1991-01-01

Active control of sound radiation from a rectangular panel by two different methods has been experimentally studied and compared. In the first method, a single control force applied directly to the structure is used with a single error microphone located in the radiated acoustic field. Global attenuation of radiated sound was observed to occur by two main mechanisms. For 'on-resonance' excitation, the control force had the effect of increasing the total panel input impedance presented to the noise source, thus reducing all radiated sound. For 'off-resonance' excitation, the control force tends not to modify the total panel response amplitude significantly, but rather to restructure the relative phases of the modes, leading to a more complex vibration pattern and a decrease in radiation efficiency. For acoustic control, the second method, the number of acoustic sources required for global reduction was seen to increase with panel modal order. The mechanism in this case was that the acoustic sources tended to create an inverse pressure distribution at the panel surface and thus 'unload' the panel by reducing the panel radiation impedance. In general, control by structural inputs appears more effective than control by acoustic sources for structurally radiated noise.

  16. Foam-PVDF smart skin for active control of sound

    NASA Astrophysics Data System (ADS)

    Fuller, Chris R.; Guigou, Cathy; Gentry, C. A.

    1996-05-01

This work is concerned with the development and testing of a foam-PVDF smart skin designed for active noise control. The smart skin is designed to reduce sound through the passive absorption of the foam (effective at higher frequencies) and the active input of an embedded PVDF element driven by an oscillating electrical input (effective at lower frequencies). It is primarily intended for use in an aircraft fuselage to reduce interior noise associated with turbulent boundary layer excitation. The device consists of cylindrically curved sections of PVDF piezoelectric film embedded in partially reticulated polyurethane acoustic foam. The active PVDF layer was configured to behave linearly and to couple the predominantly in-plane strain due to the piezoelectric effect to the vertical motion needed to accelerate fluid particles and hence radiate sound away from the foam surface. For performance testing, the foam-PVDF element was mounted near the surface of an oscillating rigid piston set in a baffle in an anechoic chamber. A far-field and a near-field microphone were each used as the error sensor and compared in terms of their efficiency in controlling far-field sound radiation. A feedforward LMS controller was used to minimize the error sensor signal under broadband excitation (0-1.6 kHz). The potential of the smart foam-PVDF skin for globally reducing sound radiation is demonstrated, as more than 20 dB of attenuation is obtained over the studied frequency band. The device thus has the potential of simultaneously controlling low- and high-frequency sound in a very thin, compact arrangement.
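The feedforward LMS controllers that recur throughout these records are typically implementations of the filtered-x LMS (FXLMS) algorithm: the reference signal is filtered through an estimate of the secondary path before it drives the weight update. The following is a minimal single-channel sketch with synthetic FIR primary and secondary paths and a perfect secondary-path estimate (all numeric values are hypothetical, chosen only to demonstrate convergence):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical FIR models: primary path P (disturbance -> error mic) and
# secondary path S (control actuator -> error mic).
P = np.array([0.0, 0.6, 0.3, 0.1])
S = np.array([0.5, 0.25])
S_hat = S.copy()                  # assume a perfect secondary-path estimate

L, mu, N = 16, 0.01, 20000        # filter length, step size, samples
w = np.zeros(L)                   # adaptive control filter weights
x_buf = np.zeros(L)               # reference signal history
xf_buf = np.zeros(L)              # filtered-reference history
y_buf = np.zeros(len(S))          # control output history

sq_err = []
for n in range(N):
    x_buf[1:] = x_buf[:-1]
    x_buf[0] = rng.standard_normal()        # broadband reference
    y = w @ x_buf                           # control filter output
    y_buf[1:] = y_buf[:-1]
    y_buf[0] = y
    d = P @ x_buf[: len(P)]                 # primary disturbance at the mic
    e = d + S @ y_buf                       # residual at the error mic
    xf_buf[1:] = xf_buf[:-1]
    xf_buf[0] = S_hat @ x_buf[: len(S_hat)] # reference filtered through S_hat
    w -= mu * e * xf_buf                    # FXLMS weight update
    sq_err.append(e * e)

before = float(np.mean(sq_err[:500]))
after = float(np.mean(sq_err[-500:]))
print(f"mean squared error: {before:.4f} -> {after:.2e}")
```

The converged filter approximates -P(z)/S(z); in a real system S_hat is identified offline or online, and the achievable attenuation depends on how well it matches the true secondary path.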

  17. Wave field synthesis, adaptive wave field synthesis and ambisonics using decentralized transformed control: Potential applications to sound field reproduction and active noise control

    NASA Astrophysics Data System (ADS)

    Gauthier, Philippe-Aubert; Berry, Alain; Woszczyk, Wieslaw

    2005-09-01

Sound field reproduction finds applications in listening to prerecorded music or in synthesizing virtual acoustics. The objective is to recreate a sound field in a listening environment. Wave field synthesis (WFS) is a known open-loop technology which assumes that the reproduction environment is anechoic. Classical WFS therefore does not perform well in a real reproduction space such as a room. Previous work has suggested that it is physically possible to reproduce a progressive wave field in an in-room situation using active control approaches. In this paper, a formulation of adaptive wave field synthesis (AWFS) introduces practical possibilities for adaptive sound field reproduction combining WFS and active control (with WFS departure penalization) with a limited number of error sensors. AWFS includes WFS and closed-loop ``Ambisonics'' as limiting cases. This leads to the modification of the multichannel filtered-reference least-mean-square (FXLMS) and filtered-error LMS (FELMS) adaptive algorithms for AWFS. Decentralization of AWFS for sound field reproduction is introduced on the basis of the sources' and sensors' radiation modes. Such decoupling may lead to decentralized control of source strength distributions and may reduce the computational burden of the FXLMS and FELMS algorithms used for AWFS. [Work funded by NSERC, NATEQ, Université de Sherbrooke and VRQ.]

  18. Numerical Comparison of Active Acoustic and Structural Noise Control in a Stiffened Double Wall Cylinder

    NASA Technical Reports Server (NTRS)

    Grosveld, Ferdinand W.

    1996-01-01

    The active acoustic and structural noise control characteristics of a double wall cylinder with and without ring stiffeners were numerically evaluated. An exterior monopole was assumed to acoustically excite the outside of the double wall cylinder at an acoustic cavity resonance frequency. Structural modal vibration properties of the inner and outer shells were analyzed by post-processing the results from a finite element analysis. A boundary element approach was used to calculate the acoustic cavity response and the coupled structural-acoustic interaction. In the frequency region of interest, below 500 Hz, all structural resonant modes were found to be acoustically slow and the nonresonant modal response to be dominant. Active sound transmission control was achieved by control forces applied to the inner or outer shell, or acoustic control monopoles placed just outside the inner or outer shell. A least mean square technique was used to minimize the interior sound pressures at the nodes of a data recovery mesh. Results showed that single acoustic control monopoles placed just outside the inner or outer shells resulted in better sound transmission control than six distributed point forces applied to either one of the shells. Adding stiffeners to the double wall structure constrained the modal vibrations of the shells, making the double wall stiffer with associated higher modal frequencies. Active noise control obtained for the stiffened double wall configurations was less than for the unstiffened cylinder. In all cases, the acoustic control monopoles controlled the sound transmission into the interior better than the structural control forces.

  19. Active control of sound transmission through partitions composed of discretely controlled modules

    NASA Astrophysics Data System (ADS)

    Leishman, Timothy W.

This thesis provides a detailed theoretical and experimental investigation of active segmented partitions (ASPs) for the control of sound transmission. ASPs are physically segmented arrays of interconnected acoustically and structurally small modules that are discretely controlled using electronic controllers. The theoretical analyses first address physical principles fundamental to ASP modeling and experimental measurement techniques, then explore specific module configurations, primarily using equivalent circuits. Measured normal-incidence transmission losses and related properties of experimental ASPs are determined using plane wave tubes and the two-microphone transfer function technique. A scanning laser vibrometer is also used to evaluate distributed transmitting-surface vibrations. ASPs have the inherent potential to provide excellent active sound transmission control (ASTC) through lightweight structures, using very practical control strategies. The thesis analyzes several unique ASP configurations and evaluates their abilities to produce high transmission losses via global minimization of normal transmitting-surface vibrations. A novel dual-diaphragm configuration is shown to employ this strategy particularly well. It uses an important combination of acoustical actuation and mechano-acoustical segmentation to produce exceptionally high transmission loss (e.g., 50 to 80 dB) over a broad frequency range, including lower audible frequencies. Such performance is shown to be comparable to that produced by much more massive partitions composed of thick layers of steel or concrete and sand. The configuration uses only simple localized error sensors and actuators, permitting effective use of independent single-channel controllers in a decentralized format. This work counteracts the commonly accepted notion that active vibration control of partitions is an ineffective means of controlling sound transmission. With appropriate construction, actuation, and error sensing, ASPs can achieve high sound transmission loss through efficient global control of transmitting-surface vibrations. This approach is applicable to a wide variety of source and receiving spaces, and to both near fields and far fields.
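The two-microphone transfer function technique cited above recovers the incident and reflected plane-wave components in a tube from the transfer function between two microphone positions (the Chung-Blaser method, standardized in ISO 10534-2). The sketch below shows the wave-decomposition step and verifies it on a synthetic standing-wave field; the geometry and frequency values are illustrative, not taken from the thesis.

```python
import numpy as np

def reflection_coefficient(p1, p2, k, x1, s):
    """Two-microphone transfer-function estimate of the complex reflection
    coefficient (Chung-Blaser / ISO 10534-2). x1 is the distance from the
    sample to mic 1, s the mic spacing (mic 2 sits at x1 - s, nearer the
    sample)."""
    h12 = p2 / p1
    return ((h12 - np.exp(-1j * k * s))
            / (np.exp(1j * k * s) - h12)
            * np.exp(2j * k * x1))

# Synthetic check: build the standing-wave pressures for a known R, recover it.
k = 2 * np.pi * 500 / 343.0            # 500 Hz tone in air
x1, s = 0.30, 0.05                     # illustrative tube geometry [m]
R_true = 0.6 * np.exp(0.3j)
p = lambda x: np.exp(1j * k * x) + R_true * np.exp(-1j * k * x)

R_est = reflection_coefficient(p(x1), p(x1 - s), k, x1, s)
print(abs(R_est - R_true))             # ~ 0, up to round-off
```

The same wave-decomposition idea, applied on both sides of a partition, yields the incident and transmitted wave amplitudes from which normal-incidence transmission loss is computed.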

  20. Active control of aircraft engine inlet noise using compact sound sources and distributed error sensors

    NASA Technical Reports Server (NTRS)

    Burdisso, Ricardo (Inventor); Fuller, Chris R. (Inventor); O'Brien, Walter F. (Inventor); Thomas, Russell H. (Inventor); Dungan, Mary E. (Inventor)

    1996-01-01

An active noise control system using a compact sound source is effective in reducing aircraft engine duct noise. The fan noise from a turbofan engine is controlled using an adaptive filtered-x LMS algorithm. Single- and multi-channel control systems are used to control the fan blade passage frequency (BPF) tone, and both the BPF tone and its first harmonic, for a plane wave excitation. A multi-channel control system is used to control any spinning mode, and to control both fan tones and a high-pressure compressor BPF tone simultaneously. In order to make active control of turbofan inlet noise a viable technology, a compact sound source is employed to generate the control field. This control-field sound source consists of an array of identical thin, cylindrically curved panels with an inner radius of curvature corresponding to that of the engine inlet. These panels are flush mounted inside the inlet duct and sealed on all edges to prevent leakage around the panel and to minimize the aerodynamic losses created by the addition of the panels. Each panel is driven by one or more piezoelectric force transducers mounted on the surface of the panel. The response of the panel to excitation is maximized when it is driven at its resonance; therefore, the panel is designed such that its fundamental frequency is near the tone to be canceled, typically 2000-4000 Hz.

  1. Active control of aircraft engine inlet noise using compact sound sources and distributed error sensors

    NASA Technical Reports Server (NTRS)

    Burdisso, Ricardo (Inventor); Fuller, Chris R. (Inventor); O'Brien, Walter F. (Inventor); Thomas, Russell H. (Inventor); Dungan, Mary E. (Inventor)

    1994-01-01

An active noise control system using a compact sound source is effective in reducing aircraft engine duct noise. The fan noise from a turbofan engine is controlled using an adaptive filtered-x LMS algorithm. Single- and multi-channel control systems are used to control the fan blade passage frequency (BPF) tone, and both the BPF tone and its first harmonic, for a plane wave excitation. A multi-channel control system is used to control any spinning mode, and to control both fan tones and a high-pressure compressor BPF tone simultaneously. In order to make active control of turbofan inlet noise a viable technology, a compact sound source is employed to generate the control field. This control-field sound source consists of an array of identical thin, cylindrically curved panels with an inner radius of curvature corresponding to that of the engine inlet. These panels are flush mounted inside the inlet duct and sealed on all edges to prevent leakage around the panel and to minimize the aerodynamic losses created by the addition of the panels. Each panel is driven by one or more piezoelectric force transducers mounted on the surface of the panel. The response of the panel to excitation is maximized when it is driven at its resonance; therefore, the panel is designed such that its fundamental frequency is near the tone to be canceled, typically 2000-4000 Hz.

  2. Potential Subjective Effectiveness of Active Interior Noise Control in Propeller Airplanes

    NASA Technical Reports Server (NTRS)

    Powell, Clemans A.; Sullivan, Brenda M.

    2000-01-01

    Active noise control technology offers the potential for weight-efficient aircraft interior noise reduction, particularly for propeller aircraft. However, there is little information on how passengers respond to this type of interior noise control. This paper presents results of two experiments that use sound quality engineering practices to determine the subjective effectiveness of hypothetical active noise control (ANC) systems in a range of propeller aircraft. The two experiments differed by the type of judgments made by the subjects: pair comparisons based on preference in the first and numerical category scaling of noisiness in the second. Although the results of the two experiments were in general agreement that the hypothetical active control measures improved the interior noise environments, the pair comparison method appears to be more sensitive to subtle changes in the characteristics of the sounds which are related to passenger preference. The reductions in subjective response due to the ANC conditions were predicted with reasonable accuracy by reductions in measured loudness level. Inclusion of corrections for the sound quality characteristics of tonality and fluctuation strength in multiple regression models improved the prediction of the ANC effects.

  3. Brain responses to sound intensity changes dissociate depressed participants and healthy controls.

    PubMed

    Ruohonen, Elisa M; Astikainen, Piia

    2017-07-01

    Depression is associated with bias in emotional information processing, but less is known about the processing of neutral sensory stimuli. Of particular interest is processing of sound intensity, which is suggested to indicate central serotonergic function. We tested whether event-related brain potentials (ERPs) to occasional changes in sound intensity can dissociate first-episode depressed, recurrent depressed and healthy control participants. The first-episode depressed group showed larger N1 amplitude to deviant sounds compared to the recurrent depression group and the control participants. In addition, both depression groups, but not the control group, showed larger N1 amplitude to deviant than standard sounds. Whether these manifestations of sensory over-excitability in depression are directly related to serotonergic neurotransmission requires further research. The method, based on ERPs to sound intensity change, is a fast and low-cost way to objectively measure brain activation and holds promise as a future diagnostic tool. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Prevalence of high frequency hearing loss consistent with noise exposure among people working with sound systems and general population in Brazil: A cross-sectional study

    PubMed Central

    El Dib, Regina P; Silva, Edina MK; Morais, José F; Trevisani, Virgínia FM

    2008-01-01

    Background Music is ever present in our daily lives, establishing a link between humans and the arts through the senses and pleasure. Sound technicians are the link between musicians and audiences or consumers. Recently, general concern has arisen regarding occurrences of hearing loss induced by noise from excessively amplified sound-producing activities within leisure and professional environments. Sound technicians' activities expose them to the risk of hearing loss, and consequently put at risk their quality of life, the quality of the musical product and consumers' hearing. The aim of this study was to measure the prevalence of high frequency hearing loss consistent with noise exposure among sound technicians in Brazil and compare this with a control group without occupational noise exposure. Methods This was a cross-sectional study comparing 177 participants in two groups: 82 sound technicians and 95 controls (non-sound technicians). A questionnaire on music listening habits and associated complaints was applied, and data were gathered regarding the professionals' numbers of working hours per day and both groups' hearing complaints and presence of tinnitus. The participants' ear canals were visually inspected using an otoscope. Hearing assessments were performed (tonal and speech audiometry) using a portable digital AD 229 E audiometer funded by FAPESP. Results There was no statistically significant difference between the sound technicians and controls regarding age and gender. Thus, the study sample was homogeneous and would be unlikely to lead to bias in the results. A statistically significant difference in hearing loss was observed between the groups: 50% among the sound technicians and 10.5% among the controls. The difference can be attributed to exposure to high sound levels. 
Conclusion The sound technicians presented a higher prevalence of high frequency hearing loss consistent with noise exposure than did the general population, although the possibility of residual confounding due to unmeasured factors such as socioeconomic status cannot be ruled out. PMID:18462490
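
    The reported group difference (50% of 82 technicians vs. 10.5% of 95 controls) can be checked with a standard two-proportion z-test. The case counts below are reconstructed from the reported percentages and are an assumption; the abstract does not state which test the authors used.

```python
from math import sqrt, erf

# Case counts reconstructed from the reported prevalences (an assumption):
x1, n1 = 41, 82      # sound technicians with high-frequency hearing loss (50%)
x2, n2 = 10, 95      # controls (10.5%)

p1, p2 = x1 / n1, x2 / n2
p = (x1 + x2) / (n1 + n2)                        # pooled proportion under H0
se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))       # SE of the difference in proportions
z = (p1 - p2) / se
p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # normal tail via erf

print(round(z, 2), p_two_sided)
```

    The difference is roughly six standard errors, so the significance claim is robust to the exact rounding of the reported percentages.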

  5. Application of sound and temperature to control boundary-layer transition

    NASA Technical Reports Server (NTRS)

    Maestrello, Lucio; Parikh, Paresh; Bayliss, A.; Huang, L. S.; Bryant, T. D.

    1987-01-01

    The growth and decay of a wave packet convecting in a boundary layer over a concave-convex surface and its active control by localized surface heating are studied numerically using direct computations of the Navier-Stokes equations. The resulting sound radiations are computed using linearized Euler equations with the pressure from the Navier-Stokes solution as a time-dependent boundary condition. It is shown that on the concave portion the amplitude of the wave packet increases and its bandwidth broadens, while on the convex portion some of the components in the packet are stabilized. The pressure field decays exponentially away from the surface and then algebraically, exhibiting a decay characteristic of acoustic waves in two dimensions. The far-field acoustic behavior exhibits a super-directivity type of behavior with a beaming downstream. Active control by surface heating is shown to reduce the growth of the wave packet but to have little effect on acoustic far field behavior for the cases considered. Active control by sound emanating from the surface of an airfoil in the vicinity of the leading edge is experimentally investigated. The purpose is to control the separated region at high angles of attack. The results show that injection of sound at the shedding frequency of the flow is effective in increasing lift and reducing drag.

  6. Robust Feedback Control of Flow Induced Structural Radiation of Sound

    NASA Technical Reports Server (NTRS)

    Heatwole, Craig M.; Bernhard, Robert J.; Franchek, Matthew A.

    1997-01-01

    A significant component of the interior noise of aircraft and automobiles is a result of turbulent boundary layer excitation of the vehicular structure. In this work, active robust feedback control of the noise due to this non-predictable excitation is investigated. Both an analytical model and experimental investigations are used to determine the characteristics of the flow induced structural sound radiation problem. The problem is shown to be broadband in nature with large system uncertainties associated with the various operating conditions. Furthermore, the delay associated with sound propagation is shown to restrict the use of microphone feedback. State-of-the-art control methodologies, μ synthesis and adaptive feedback control, are evaluated and shown to have limited success in solving this problem. A robust frequency domain controller design methodology is developed for the problem of sound radiated from turbulent flow driven plates. The control design methodology uses frequency domain sequential loop shaping techniques. System uncertainty, sound pressure level reduction performance, and actuator constraints are included in the design process. Using this design method, phase lag was added using non-minimum phase zeros such that the beneficial plant dynamics could be used. This general control approach has application to lightly damped vibration and sound radiation problems where there are high bandwidth control objectives requiring a low controller DC gain and a low controller order.

  7. Active Control by Conservation of Energy Concept

    NASA Technical Reports Server (NTRS)

    Maestrello, Lucio

    2000-01-01

    Three unrelated experiments are discussed; each was extremely sensitive to initial conditions. The initial conditions are the beginnings of the origins of the information that nonlinearity displays. Initial conditions make the phenomenon unstable and unpredictable. With the knowledge of the initial conditions, active control requires far less power than that present in the system response. The first experiment is on the control of shocks from an axisymmetric supersonic jet; the second, control of a nonlinear panel response forced by turbulent boundary layer and sound; the third, control of subharmonic and harmonics of a panel forced by sound. In all three experiments, control is achieved by redistribution of periodic energy response such that the energy is nearly preserved from a previous uncontrolled state. This type of active control improves the performance of the system being controlled.

  8. Coherent active methods for applications in room acoustics.

    PubMed

    Guicking, D; Karcher, K; Rollwage, M

    1985-10-01

    An adjustment of reverberation time in rooms is often desired, even for low frequencies where passive absorbers fail. Among the active (electroacoustic) systems, incoherent ones permit lengthening of reverberation time only, whereas coherent active methods allow sound absorption as well. A coherent-active wall lining consists of loudspeakers with microphones in front and adjustable control electronics. The microphones pick up the incident sound and drive the speakers in such a way that the reflection coefficient takes on prescribed values. An experimental device for the one-dimensional case allows reflection coefficients between almost zero and about 1.5 to be realized below 1000 Hz. The extension to three dimensions presents problems, especially due to near-field effects. Experiments with a 3 X 3 loudspeaker array and computer simulations proved that the amplitude reflection coefficient can be adjusted between 10% and 200% for sinusoidal waves at normal and oblique incidence. Future developments have to make the system work with broadband excitation and in more diffuse sound fields. It is also planned to combine the active reverberation control with active diffusion control.
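
    The reflection coefficient realized by such a lining in the one-dimensional case can be measured with the classical two-microphone method. The sketch below only verifies the algebra on a synthetic field: the frequency, microphone positions and the complex reflection coefficient are assumed values, not data from the paper.

```python
import numpy as np

# Synthetic 1-D duct field p(x) = exp(-jkx) + R*exp(+jkx), lining at x = 0.
c = 343.0                              # speed of sound (m/s)
f = 500.0                              # test frequency, below the 1000 Hz limit above
k = 2 * np.pi * f / c                  # wavenumber
R_true = 0.4 * np.exp(1j * 0.3)        # assumed complex reflection coefficient

def p(x):
    return np.exp(-1j * k * x) + R_true * np.exp(1j * k * x)

x1, x2 = 0.10, 0.15                    # assumed microphone positions (m)
H = p(x2) / p(x1)                      # transfer function between the two microphones

# Solving p(x2) = H * p(x1) for R gives the two-microphone estimate:
R_est = (np.exp(-1j * k * x2) - H * np.exp(-1j * k * x1)) / \
        (H * np.exp(1j * k * x1) - np.exp(1j * k * x2))
print(R_est)                           # recovers R_true
```

    The microphone spacing must avoid multiples of a half wavelength, where the two pressures carry no independent information.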

  9. Source sparsity control of sound field reproduction using the elastic-net and the lasso minimizers.

    PubMed

    Gauthier, P-A; Lecomte, P; Berry, A

    2017-04-01

    Sound field reproduction is aimed at the reconstruction of a sound pressure field in an extended area using dense loudspeaker arrays. In some circumstances, sound field reproduction is targeted at the reproduction of a sound field captured using microphone arrays. Although methods and algorithms already exist to convert microphone array recordings to loudspeaker array signals, one remaining research question is how to control the spatial sparsity in the resulting loudspeaker array signals and what would be the resulting practical advantages. Sparsity is an interesting feature for spatial audio since it can drastically reduce the number of concurrently active reproduction sources and, therefore, increase the spatial contrast of the solution at the expense of a difference between the target and reproduced sound fields. In this paper, the application of the elastic-net cost function to sound field reproduction is compared to the lasso cost function. It is shown that the elastic-net can induce solution sparsity and overcomes limitations of the lasso: The elastic-net solves the non-uniqueness of the lasso solution, induces source clustering in the sparse solution, and provides a smoother solution within the activated source clusters.
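
    A minimal way to reproduce the lasso/elastic-net comparison is proximal-gradient (ISTA) iteration, where the elastic-net proximal step is a soft threshold followed by a ridge shrinkage. The plant matrix, source layout and penalty weights below are invented for illustration, and the problem is real-valued for simplicity, unlike the complex frequency-domain formulation typical of sound field reproduction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reproduction problem: m microphone targets, n candidate loudspeakers.
m, n = 40, 120
A = rng.standard_normal((m, n))     # source-to-microphone plant (real-valued here)
x_true = np.zeros(n)
x_true[[5, 6, 7, 60]] = [1.0, 0.8, 0.6, -0.9]   # a source cluster plus one isolated source
b = A @ x_true                      # target sound field samples

def solve(l1, l2, iters=3000):
    """ISTA for 0.5*||Ax - b||^2 + l1*||x||_1 + 0.5*l2*||x||^2 (elastic net)."""
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/Lipschitz constant of the gradient
    for _ in range(iters):
        v = x - step * (A.T @ (A @ x - b))      # gradient step on the data-fit term
        v = np.sign(v) * np.maximum(np.abs(v) - step * l1, 0.0)   # l1 prox: soft threshold
        x = v / (1.0 + step * l2)               # l2 prox: ridge shrinkage
    return x

x_lasso = solve(l1=0.5, l2=0.0)     # lasso: l2 weight zero
x_enet = solve(l1=0.5, l2=0.5)      # elastic net

k_lasso = int(np.sum(np.abs(x_lasso) > 1e-3))
k_enet = int(np.sum(np.abs(x_enet) > 1e-3))
print(k_lasso, k_enet)              # both far below the 120 candidate sources
```

    Both penalties drastically reduce the number of concurrently active sources; the added l2 term is what gives the elastic net its unique solution and its tendency to keep correlated sources together.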

  10. Scanning silence: mental imagery of complex sounds.

    PubMed

    Bunzeck, Nico; Wuestenberg, Torsten; Lutz, Kai; Heinze, Hans-Jochen; Jancke, Lutz

    2005-07-15

    In this functional magnetic resonance imaging (fMRI) study, we investigated the neural basis of mental auditory imagery of familiar complex sounds that did not contain language or music. In the first condition (perception), the subjects watched familiar scenes and listened to the corresponding sounds that were presented simultaneously. In the second condition (imagery), the same scenes were presented silently and the subjects had to mentally imagine the appropriate sounds. During the third condition (control), the participants watched a scrambled version of the scenes without sound. To overcome the disadvantages of the stray acoustic scanner noise in auditory fMRI experiments, we applied sparse temporal sampling technique with five functional clusters that were acquired at the end of each movie presentation. Compared to the control condition, we found bilateral activations in the primary and secondary auditory cortices (including Heschl's gyrus and planum temporale) during perception of complex sounds. In contrast, the imagery condition elicited bilateral hemodynamic responses only in the secondary auditory cortex (including the planum temporale). No significant activity was observed in the primary auditory cortex. The results show that imagery and perception of complex sounds that do not contain language or music rely on overlapping neural correlates of the secondary but not primary auditory cortex.

  11. Active control of turbulent boundary layer-induced sound transmission through the cavity-backed double panels

    NASA Astrophysics Data System (ADS)

    Caiazzo, A.; Alujević, N.; Pluymers, B.; Desmet, W.

    2018-05-01

    This paper presents a theoretical study of active control of turbulent boundary layer (TBL) induced sound transmission through cavity-backed double panels. The aerodynamic model used is based on the Corcos wall pressure distribution. The structural-acoustic model encompasses a source panel (skin panel), coupled through an acoustic cavity to the radiating panel (trim panel). The radiating panel is backed by a larger acoustic enclosure (the back cavity). A feedback control unit is located inside the acoustic cavity between the two panels. It consists of a control force actuator and a sensor mounted at the actuator footprint on the radiating panel. The control actuator can react off the source panel. It is driven by an amplified velocity signal measured by the sensor. A fully coupled analytical structural-acoustic model is developed to study the effects of the active control on the sound transmission into the back cavity. The stability and performance of the active control system are first studied on a reduced order model in which only two fundamental modes of the fully coupled system are retained. Then a full order model is considered, with a number of modes large enough to yield accurate simulation results up to 1000 Hz. It is shown that convincing reductions of the TBL-induced vibrations of the radiating panel and of the sound pressure inside the back cavity can be expected. The reductions are more pronounced for a certain class of systems, characterised by a fundamental natural frequency of the skin panel that is higher than that of the trim panel.
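
    The effect of the velocity feedback unit can be illustrated on a single structural mode: an ideal collocated velocity feedback force simply adds viscous damping, lowering the receptance peak. All parameter values below are assumed, and the sketch ignores the coupled cavity dynamics treated in the paper.

```python
import numpy as np

# Single-mode sketch of the trim panel with ideal collocated velocity feedback.
m = 1.0                                # modal mass (kg), assumed
f0 = 80.0                              # fundamental frequency (Hz), assumed
k = m * (2 * np.pi * f0) ** 2          # modal stiffness
c = 2 * 0.01 * np.sqrt(k * m)          # light inherent damping (zeta = 1%)
g = 20.0                               # feedback gain: control force = -g * velocity

w = 2 * np.pi * np.linspace(20.0, 200.0, 2000)   # frequency grid (rad/s)

def receptance(c_total):
    # X/F of m*x'' + c_total*x' + k*x = F*e^{jwt}
    return 1.0 / (-m * w ** 2 + 1j * c_total * w + k)

H_passive = np.abs(receptance(c))
H_active = np.abs(receptance(c + g))   # velocity feedback simply adds damping

i0 = np.argmin(np.abs(w - 2 * np.pi * f0))
red_db = 20 * np.log10(H_passive[i0] / H_active[i0])
print(round(red_db, 1))                # attenuation at resonance, in dB
```

    Away from resonance the two curves coincide, which is why such active damping mainly helps the low-frequency resonant transmission that passive treatments miss.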

  12. Superior analgesic effect of an active distraction versus pleasant unfamiliar sounds and music: the influence of emotion and cognitive style.

    PubMed

    Villarreal, Eduardo A Garza; Brattico, Elvira; Vase, Lene; Østergaard, Leif; Vuust, Peter

    2012-01-01

    Listening to music has been found to reduce acute and chronic pain. The underlying mechanisms are poorly understood; however, emotion and cognitive mechanisms have been suggested to influence the analgesic effect of music. In this study we investigated the influence of familiarity, emotional and cognitive features, and cognitive style on music-induced analgesia. Forty-eight healthy participants were divided into three groups (empathizers, systemizers and balanced) and received acute pain induced by heat while listening to different sounds. Participants listened to unfamiliar Mozart music rated with high valence and low arousal, unfamiliar environmental sounds with similar valence and arousal as the music, an active distraction task (mental arithmetic) and a control, and rated the pain. Data showed that the active distraction led to significantly less pain than did the music or sounds. Both unfamiliar music and sounds reduced pain significantly when compared to the control condition; however, music was no more effective than sound to reduce pain. Furthermore, we found correlations between pain and emotion ratings. Finally, systemizers reported less pain during the mental arithmetic compared with the other two groups. These findings suggest that familiarity may be key in the influence of the cognitive and emotional mechanisms of music-induced analgesia, and that cognitive styles may influence pain perception.

  13. Developing an active artificial hair cell using nonlinear feedback control

    NASA Astrophysics Data System (ADS)

    Joyce, Bryan S.; Tarazaga, Pablo A.

    2015-09-01

    The hair cells in the mammalian cochlea convert sound-induced vibrations into electrical signals. These cells have inspired a variety of artificial hair cells (AHCs) to serve as biologically inspired sound, fluid flow, and acceleration sensors and could one day replace damaged hair cells in humans. Most of these AHCs rely on passive transduction of stimulus while it is known that the biological cochlea employs active processes to amplify sound-induced vibrations and improve sound detection. In this work, an active AHC mimics the active, nonlinear behavior of the cochlea. The AHC consists of a piezoelectric bimorph beam subjected to a base excitation. A feedback control law is used to reduce the linear damping of the beam and introduce a cubic damping term which gives the AHC the desired nonlinear behavior. Model and experimental results show the AHC amplifies the response due to small base accelerations, has a higher frequency sensitivity than the passive system, and exhibits a compressive nonlinearity like that of the mammalian cochlea. This bio-inspired accelerometer could lead to new sensors with lower thresholds of detection, improved frequency sensitivities, and wider dynamic ranges.
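
    The control law described above (reduced linear damping plus an added cubic damping term) can be sketched on a single-mode oscillator. The parameters below are assumed, not the authors' values; the point is the compressive nonlinearity: the small-signal gain greatly exceeds the large-signal gain.

```python
import numpy as np

# x'' + (2*zeta*w0 - g1)*x' + g3*x'**3 + w0**2*x = F*cos(w0*t)
# g1 partially cancels the linear damping; g3 adds the cubic damping term.
w0 = 2 * np.pi * 1.0                 # resonance (rad/s), assumed
zeta = 0.05                          # passive damping ratio, assumed
g1 = 0.8 * (2 * zeta * w0)           # feedback removes 80% of the linear damping
g3 = 1.0                             # cubic damping gain, assumed

def steady_amplitude(F, T=150.0, dt=1e-3):
    """Drive at resonance and return the peak displacement over the last cycles."""
    x, v, peak = 0.0, 0.0, 0.0
    for i in range(int(T / dt)):
        t = i * dt
        a = F * np.cos(w0 * t) - (2 * zeta * w0 - g1) * v - g3 * v ** 3 - w0 ** 2 * x
        v += a * dt
        x += v * dt                  # semi-implicit Euler step
        if t > T - 10.0:
            peak = max(peak, abs(x))
    return peak

gain_small = steady_amplitude(0.01) / 0.01
gain_large = steady_amplitude(1.0) / 1.0
print(gain_small, gain_large)        # compressive: small inputs see far higher gain
```

    This gain-vs-level compression is the signature behavior the active AHC borrows from the cochlea: weak stimuli are amplified proportionally more than strong ones.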

  14. Active control of spectral detail radiated by an air-loaded impacted membrane

    NASA Astrophysics Data System (ADS)

    Rollow, J. Douglas, IV

    An active control system is developed to independently operate on the vibration of individual modes of an air-loaded drum head, resulting in changes in the acoustic field radiated from the structure. The timbre of the system is investigated, and techniques for changing the characteristic frequencies by means of the control system are proposed. A feedforward control system is constructed for empirical investigation of this approach, creating a musical instrument which can produce a variety of sounds not available with strictly mechanical systems. The work is motivated by applications for actively controlled structures, active control of sound quality, and musical acoustics. The instrument consists of a Mylar timpano head stretched over an enclosure which has been outfitted with electroacoustic drivers. Sensors are arranged on the surface of the drum head and combined to measure modal vibration, and the array of drivers allows independent control of these modes. A signal processor is used to form modal control filters which can modify the loading of each mode, changing the time-dependent and spectral characteristics, and therefore the timbre, of the radiated sound. A theoretical formulation of active control of structural vibration by means of fluid-coupled actuators is expressed, and computational solutions show the effects of fluid loading and the radiated field. Experimental results with the new instrument are shown, with implementations of the control system providing a demonstrated degree of control, and illustrating several limitations of such systems.

  15. Hybrid mode-scattering/sound-absorbing segmented liner system and method

    NASA Technical Reports Server (NTRS)

    Walker, Bruce E. (Inventor); Hersh, Alan S. (Inventor); Rice, Edward J. (Inventor)

    1999-01-01

    A hybrid mode-scattering/sound-absorbing segmented liner system and method in which an initial sound field within a duct is steered or scattered into higher-order modes in a first mode-scattering segment such that it is more readily and effectively absorbed in a second sound-absorbing segment. The mode-scattering segment is preferably a series of active control components positioned along the annulus of the duct, each of which includes a controller and a resonator into which a piezoelectric transducer generates the steering noise. The sound-absorbing segment is positioned acoustically downstream of the mode-scattering segment, and preferably comprises a honeycomb-backed passive acoustic liner. The invention is particularly adapted for use in turbofan engines, both in the inlet and exhaust.

  16. Active control of turbulent boundary layer sound transmission into a vehicle interior

    NASA Astrophysics Data System (ADS)

    Caiazzo, A.; Alujević, N.; Pluymers, B.; Desmet, W.

    2016-09-01

    In high speed automotive, aerospace, and railway transportation, the turbulent boundary layer (TBL) is one of the most important sources of interior noise. The stochastic pressure distribution associated with the turbulence is able to excite significantly structural vibration of vehicle exterior panels. They radiate sound into the vehicle through the interior panels. Therefore, the air flow noise becomes very influential when it comes to the noise vibration and harshness assessment of a vehicle, in particular at low frequencies. Normally, passive solutions, such as sound absorbing materials, are used for reducing the TBL-induced noise transmission into a vehicle interior, which generally improve the structure sound isolation performance. These can achieve excellent isolation performance at higher frequencies, but are unable to deal with the low-frequency interior noise components. In this paper, active control of TBL noise transmission through an acoustically coupled double panel system into a rectangular cavity is examined theoretically. The Corcos model of the TBL pressure distribution is used to model the disturbance. The disturbance is rejected by an active vibration isolation unit reacting between the exterior and the interior panels. Significant reductions of the low-frequency vibrations of the interior panel and the sound pressure in the cavity are observed.

  17. Modulation of electrocortical brain activity by attention in individuals with and without tinnitus.

    PubMed

    Paul, Brandon T; Bruce, Ian C; Bosnyak, Daniel J; Thompson, David C; Roberts, Larry E

    2014-01-01

    Age and hearing-level matched tinnitus and control groups were presented with a 40 Hz AM sound using a carrier frequency of either 5 kHz (in the tinnitus frequency region of the tinnitus subjects) or 500 Hz (below this region). On attended blocks subjects pressed a button after each sound indicating whether a single 40 Hz AM pulse of variable increased amplitude (target, probability 0.67) had or had not occurred. On passive blocks subjects rested and ignored the sounds. The amplitude of the 40 Hz auditory steady-state response (ASSR) localizing to primary auditory cortex (A1) increased with attention in control groups probed at 500 Hz and 5 kHz and in the tinnitus group probed at 500 Hz, but not in the tinnitus group probed at 5 kHz (128 channel EEG). N1 amplitude (this response localizing to nonprimary cortex, A2) increased with attention at both sound frequencies in controls but at neither frequency in tinnitus. We suggest that tinnitus-related neural activity occurring in the 5 kHz but not the 500 Hz region of tonotopic A1 disrupted attentional modulation of the 5 kHz ASSR in tinnitus subjects, while tinnitus-related activity in A1 distributing nontonotopically in A2 impaired modulation of N1 at both sound frequencies.

  18. Modulation of Electrocortical Brain Activity by Attention in Individuals with and without Tinnitus

    PubMed Central

    Paul, Brandon T.; Bruce, Ian C.; Bosnyak, Daniel J.; Thompson, David C.; Roberts, Larry E.

    2014-01-01

    Age and hearing-level matched tinnitus and control groups were presented with a 40 Hz AM sound using a carrier frequency of either 5 kHz (in the tinnitus frequency region of the tinnitus subjects) or 500 Hz (below this region). On attended blocks subjects pressed a button after each sound indicating whether a single 40 Hz AM pulse of variable increased amplitude (target, probability 0.67) had or had not occurred. On passive blocks subjects rested and ignored the sounds. The amplitude of the 40 Hz auditory steady-state response (ASSR) localizing to primary auditory cortex (A1) increased with attention in control groups probed at 500 Hz and 5 kHz and in the tinnitus group probed at 500 Hz, but not in the tinnitus group probed at 5 kHz (128 channel EEG). N1 amplitude (this response localizing to nonprimary cortex, A2) increased with attention at both sound frequencies in controls but at neither frequency in tinnitus. We suggest that tinnitus-related neural activity occurring in the 5 kHz but not the 500 Hz region of tonotopic A1 disrupted attentional modulation of the 5 kHz ASSR in tinnitus subjects, while tinnitus-related activity in A1 distributing nontonotopically in A2 impaired modulation of N1 at both sound frequencies. PMID:25024849

  19. Auditory cortex controls sound-driven innate defense behaviour through corticofugal projections to inferior colliculus.

    PubMed

    Xiong, Xiaorui R; Liang, Feixue; Zingg, Brian; Ji, Xu-ying; Ibrahim, Leena A; Tao, Huizhong W; Zhang, Li I

    2015-06-11

    Defense against environmental threats is essential for animal survival. However, the neural circuits responsible for transforming unconditioned sensory stimuli and generating defensive behaviours remain largely unclear. Here, we show that corticofugal neurons in the auditory cortex (ACx) targeting the inferior colliculus (IC) mediate an innate, sound-induced flight behaviour. Optogenetic activation of these neurons, or their projection terminals in the IC, is sufficient for initiating flight responses, while the inhibition of these projections reduces sound-induced flight responses. Corticocollicular axons monosynaptically innervate neurons in the cortex of the IC (ICx), and optogenetic activation of the projections from the ICx to the dorsal periaqueductal gray is sufficient for provoking flight behaviours. Our results suggest that ACx can both amplify innate acoustic-motor responses and directly drive flight behaviours in the absence of sound input through corticocollicular projections to ICx. Such corticofugal control may be a general feature of innate defense circuits across sensory modalities.

  20. Sperm whales reduce foraging effort during exposure to 1-2 kHz sonar and killer whale sounds.

    PubMed

    Isojunno, Saana; Cure, Charlotte; Kvadsheim, Petter Helgevold; Lam, Frans-Peter Alexander; Tyack, Peter Lloyd; Wensveen, Paul Jacobus; Miller, Patrick James O'Malley

    2016-01-01

    The time and energetic costs of behavioral responses to incidental and experimental sonar exposures, as well as control stimuli, were quantified using hidden state analysis of time series of acoustic and movement data recorded by tags (DTAG) attached to 12 sperm whales (Physeter macrocephalus) using suction cups. Behavioral state transition modeling showed that tagged whales switched to a non-foraging, non-resting state during both experimental transmissions of low-frequency active sonar from an approaching vessel (LFAS; 1-2 kHz, source level 214 dB re 1 µPa m, four tag records) and playbacks of potential predator (killer whale, Orcinus orca) sounds broadcast at naturally occurring sound levels as a positive control from a drifting boat (five tag records). Time spent in foraging states and the probability of prey capture attempts were reduced during these two types of exposures with little change in overall locomotion activity, suggesting an effect on energy intake with no immediate compensation. Whales switched to the active non-foraging state over received sound pressure levels of 131-165 dB re 1 µPa during LFAS exposure. In contrast, no changes in foraging behavior were detected in response to experimental negative controls (no-sonar ship approach or noise control playback) or to experimental medium-frequency active sonar exposures (MFAS; 6-7 kHz, source level 199 dB re 1 µPa m, received sound pressure level [SPL] = 73-158 dB re 1 µPa). Similarly, there was no reduction in foraging effort for three whales exposed to incidental, unidentified 4.7-5.1 kHz sonar signals received at lower levels (SPL = 89-133 dB re 1 µPa). These results demonstrate that similar to predation risk, exposure to sonar can affect functional behaviors, and indicate that increased perception of risk with higher source level or lower frequency may modulate how sperm whales respond to anthropogenic sound.
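
    The hidden state analysis referenced above is, in spirit, HMM decoding of behavioural states from tag time series. The toy below is not the authors' model: it is a two-state Poisson HMM with invented rates, transition probabilities and synthetic buzz counts, decoded with the Viterbi algorithm.

```python
import numpy as np
from math import lgamma

# Two-state Poisson HMM (foraging = 0, non-foraging = 1) decoded with Viterbi.
# Rates, transitions and the buzz counts are all invented for illustration.
rates = np.array([3.0, 0.2])                     # mean buzzes per minute in each state
logA = np.log(np.array([[0.95, 0.05],
                        [0.05, 0.95]]))          # sticky transition probabilities
logpi = np.log(np.array([0.5, 0.5]))             # uniform initial distribution

obs = np.array([4, 3, 5, 2, 4, 0, 0, 1, 0, 0, 3, 4])   # synthetic buzz counts/min

def log_poisson(x, lam):
    return x * np.log(lam) - lam - lgamma(x + 1)

T, S = len(obs), 2
delta = np.zeros((T, S))                         # best log-probability ending in state s
psi = np.zeros((T, S), dtype=int)                # backpointers
delta[0] = logpi + np.array([log_poisson(obs[0], r) for r in rates])
for t in range(1, T):
    for s in range(S):
        cand = delta[t - 1] + logA[:, s]
        psi[t, s] = int(np.argmax(cand))
        delta[t, s] = cand[psi[t, s]] + log_poisson(obs[t], rates[s])

path = np.zeros(T, dtype=int)                    # backtrack the most likely state path
path[-1] = int(np.argmax(delta[-1]))
for t in range(T - 2, -1, -1):
    path[t] = psi[t + 1, path[t + 1]]
print(path)                                      # high-count stretches decode as state 0
```

    The sticky transition matrix smooths over the single spurious buzz in the quiet stretch, which is the practical reason such state-space models are preferred over thresholding raw counts.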

  1. A taste for words and sounds: a case of lexical-gustatory and sound-gustatory synesthesia

    PubMed Central

    Colizoli, Olympia; Murre, Jaap M. J.; Rouw, Romke

    2013-01-01

    Gustatory forms of synesthesia involve the automatic and consistent experience of tastes that are triggered by non-taste related inducers. We present a case of lexical-gustatory and sound-gustatory synesthesia within one individual, SC. Most words and a subset of non-linguistic sounds induce the experience of taste, smell and physical sensations for SC. SC's lexical-gustatory associations were significantly more consistent than those of a group of controls. We tested for effects of presentation modality (visual vs. auditory), taste-related congruency, and synesthetic inducer-concurrent direction using a priming task. SC's performance did not differ significantly from a trained control group. We used functional magnetic resonance imaging to investigate the neural correlates of SC's synesthetic experiences by comparing her brain activation to the literature on brain networks related to language, music, and sound processing, in addition to synesthesia. Words that induced a strong taste were contrasted to words that induced weak-to-no tastes (“tasty” vs. “tasteless” words). Brain activation was also measured during passive listening to music and environmental sounds. Brain activation patterns showed evidence that two regions are implicated in SC's synesthetic experience of taste and smell: the left anterior insula and left superior parietal lobe. Anterior insula activation may reflect the synesthetic taste experience. The superior parietal lobe is proposed to be involved in binding sensory information across sub-types of synesthetes. We conclude that SC's synesthesia is genuine and reflected in her brain activation. The type of inducer (visual-lexical, auditory-lexical, and non-lexical auditory stimuli) could be differentiated based on patterns of brain activity. PMID:24167497

  2. Superior Analgesic Effect of an Active Distraction versus Pleasant Unfamiliar Sounds and Music: The Influence of Emotion and Cognitive Style

    PubMed Central

    Garza Villarreal, Eduardo A.; Brattico, Elvira; Vase, Lene; Østergaard, Leif; Vuust, Peter

    2012-01-01

    Listening to music has been found to reduce acute and chronic pain. The underlying mechanisms are poorly understood; however, emotion and cognitive mechanisms have been suggested to influence the analgesic effect of music. In this study we investigated the influence of familiarity, emotional and cognitive features, and cognitive style on music-induced analgesia. Forty-eight healthy participants were divided into three groups (empathizers, systemizers and balanced) and received acute pain induced by heat while listening to different sounds. Participants listened to unfamiliar Mozart music rated with high valence and low arousal, unfamiliar environmental sounds with similar valence and arousal as the music, an active distraction task (mental arithmetic) and a control, and rated the pain. Data showed that the active distraction led to significantly less pain than did the music or sounds. Both unfamiliar music and sounds reduced pain significantly when compared to the control condition; however, music was no more effective than sound to reduce pain. Furthermore, we found correlations between pain and emotion ratings. Finally, systemizers reported less pain during the mental arithmetic compared with the other two groups. These findings suggest that familiarity may be key in the influence of the cognitive and emotional mechanisms of music-induced analgesia, and that cognitive styles may influence pain perception. PMID:22242169

  3. Reduction of external noise of mobile energy facilities by using active noise control system in muffler

    NASA Astrophysics Data System (ADS)

    Polivaev, O. I.; Kuznetsov, A. N.; Larionov, A. N.; Beliansky, R. G.

    2018-03-01

    The paper describes a method for reducing the emission of low-frequency noise from modern automotive vehicles into the environment. The importance of reducing the external noise of modern mobile energy facilities made in Russia is substantiated. Standard methods for controlling external noise are of low efficiency against low-frequency sound waves, yet it is the low-frequency region of the spectrum that carries most of the power of the noise emitted by machinery. The most effective way to attenuate such sound waves is to use an active noise control system. A muffler design incorporating such a system is presented; it reduced noise emission into the environment by 7-11 dB and increased acoustic comfort at the operator's workplace by 3-5 dB.
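
    The reported reductions can be put in linear terms: a level drop of ΔL dB corresponds to a pressure-amplitude factor of 10^(ΔL/20) and a power factor of 10^(ΔL/10). A minimal sketch (the function name is illustrative, not from the record):

```python
def db_reduction_to_ratios(delta_db):
    """Convert a sound-pressure-level reduction (dB) to linear factors."""
    pressure_ratio = 10 ** (delta_db / 20)  # amplitude factor
    power_ratio = 10 ** (delta_db / 10)     # power/intensity factor
    return pressure_ratio, power_ratio

# The 7-11 dB external reduction above corresponds to roughly a
# 2.2-3.5x drop in pressure amplitude (5-12.6x in radiated power).
```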

  4. Active Control Of Structure-Borne Noise

    NASA Astrophysics Data System (ADS)

    Elliott, S. J.

    1994-11-01

    The successful practical application of active noise control requires an understanding of both its acoustic limitations and the limitations of the electrical control strategy used. This paper is concerned with the active control of sound in enclosures. First, a review is presented of the fundamental physical limitations of using loudspeakers to achieve either global or local control. Both approaches are seen to have a high frequency limit, due to either the acoustic modal overlap, or the spatial correlation function of the pressure field. These physical performance limits could, in principle, be achieved with either a feedback or a feedforward control strategy. These strategies are reviewed and the use of adaptive digital filters is discussed for both approaches. The application of adaptive feedforward control in the control of engine and road noise in cars is described. Finally, an indirect approach to the active control of sound is discussed, in which the vibration is suppressed in the structural paths connecting the source of vibration to the enclosure. Two specific examples of this strategy are described, using an active automotive engine mount and the incorporation of actuators into helicopter struts to control gear-meshing tones. In both cases good passive design can minimize the complexity of the active controller.
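
    The adaptive feedforward strategy reviewed above is commonly realized with a filtered-x LMS (FxLMS) controller. The following single-channel sketch assumes an estimate `s_hat` of the secondary-path impulse response is available; all names and parameters are illustrative, not taken from the paper:

```python
import numpy as np

def fxlms(x, d, s_hat, n_taps=32, mu=0.01):
    """Single-channel filtered-x LMS sketch.

    x: reference signal, d: disturbance at the error microphone,
    s_hat: estimated secondary-path impulse response.
    Returns the residual error signal e."""
    w = np.zeros(n_taps)            # adaptive control filter weights
    x_buf = np.zeros(n_taps)        # reference taps for the controller
    y_buf = np.zeros(len(s_hat))    # controller output into secondary path
    xs_buf = np.zeros(len(s_hat))   # reference taps to be filtered by s_hat
    fx_buf = np.zeros(n_taps)       # filtered-reference taps for the update
    e = np.empty(len(x))
    for n in range(len(x)):
        x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
        y = w @ x_buf                               # anti-noise sample
        y_buf = np.roll(y_buf, 1); y_buf[0] = y
        e[n] = d[n] - s_hat @ y_buf                 # residual at error mic
        xs_buf = np.roll(xs_buf, 1); xs_buf[0] = x[n]
        fx_buf = np.roll(fx_buf, 1); fx_buf[0] = s_hat @ xs_buf
        w += mu * e[n] * fx_buf                     # LMS weight update
    return e
```

    With a tonal disturbance and a reasonable secondary-path estimate, the residual decays toward zero, which is the behaviour exploited in the engine- and road-noise applications described above.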

  5. Low-frequency acoustic pressure, velocity, and intensity thresholds in a bottlenose dolphin (Tursiops truncatus) and white whale (Delphinapterus leucas)

    NASA Astrophysics Data System (ADS)

    Finneran, James J.; Carder, Donald A.; Ridgway, Sam H.

    2002-01-01

    The relative contributions of acoustic pressure and particle velocity to the low-frequency, underwater hearing abilities of the bottlenose dolphin (Tursiops truncatus) and white whale (Delphinapterus leucas) were investigated by measuring (masked) hearing thresholds while manipulating the relationship between the pressure and velocity. This was accomplished by varying the distance within the near field of a single underwater sound projector (experiment I) and using two underwater sound projectors and an active sound control system (experiment II). The results of experiment I showed no significant change in pressure thresholds as the distance between the subject and the sound source was changed. In contrast, velocity thresholds tended to increase and intensity thresholds tended to decrease as the source distance decreased. These data suggest that acoustic pressure is a better indicator of threshold, compared to particle velocity or mean active intensity, in the subjects tested. Interpretation of the results of experiment II (the active sound control system) was difficult because of complex acoustic conditions and the unknown effects of the subject on the generated acoustic field; however, these data also tend to support the results of experiment I and suggest that odontocete thresholds should be reported in units of acoustic pressure, rather than intensity.
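
    The pressure-velocity relationship manipulated in experiment I follows from the near field of a monopole, where the particle velocity carries an extra 1/(jkr) term relative to the pressure. A sketch of the magnitude ratio (seawater properties and the function name are our assumptions, not from the record):

```python
import numpy as np

RHO_C = 1.5e6  # characteristic impedance of seawater, approx. Pa*s/m

def velocity_pressure_ratio(f_hz, r_m, c=1500.0):
    """|v|/|p| for a monopole: (1/(rho*c)) * |1 + 1/(j*k*r)|.

    Far from the source (k*r >> 1) this tends to 1/(rho*c); within the
    near field (k*r < 1) the velocity grows relative to the pressure,
    which is why varying source distance decouples the two quantities."""
    k = 2 * np.pi * f_hz / c
    return np.abs(1 + 1 / (1j * k * r_m)) / RHO_C
```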

  6. Control of boundary layer transition location and plate vibration in the presence of an external acoustic field

    NASA Technical Reports Server (NTRS)

    Maestrello, L.; Grosveld, F. W.

    1991-01-01

    The experiment is aimed at controlling the boundary layer transition location and the plate vibration when excited by a flow and an upstream sound source. Sound has been found to affect the flow at the leading edge and the response of a flexible plate in a boundary layer. Because the sound induces early transition, the panel vibration is acoustically coupled to the turbulent boundary layer by the upstream radiation. Localized surface heating at the leading edge delays the transition location downstream of the flexible plate. The response of the plate excited by a turbulent boundary layer (without sound) shows that the plate is forced to vibrate at different frequencies and with different amplitudes as the flow velocity changes indicating that the plate is driven by the convective waves of the boundary layer. The acoustic disturbances induced by the upstream sound dominate the response of the plate when the boundary layer is either turbulent or laminar. Active vibration control was used to reduce the sound induced displacement amplitude of the plate.

  7. Active noise control for infant incubators.

    PubMed

    Yu, Xun; Gujjula, Shruthi; Kuo, Sen M

    2009-01-01

    This paper presents an active noise control (ANC) system for infant incubators. Experimental results show that global noise reduction can be achieved for infant-incubator ANC systems. An audio-integration algorithm is presented that introduces a healthy (intrauterine) audio sound into the ANC system to mask the residual noise and soothe the infant. A carbon-nanotube-based transparent thin-film speaker is also introduced as the actuator that generates the destructive secondary sound; it saves significant space in the congested incubator without blocking the view of doctors and nurses.
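
    One hedged reading of the audio-integration idea: the masking audio is added to the anti-noise sent to the loudspeaker, and its secondary-path-filtered contribution is subtracted from the error-microphone signal so the ANC update reacts only to residual noise. A minimal sketch of that subtraction step (our interpretation, not code from the paper):

```python
import numpy as np

def error_without_audio(e_mic, audio, s_hat):
    """Remove the injected audio's contribution from the error-mic signal.

    e_mic: microphone signal (residual noise + audio through secondary path),
    audio: samples of the masking sound sent to the speaker,
    s_hat: estimated secondary-path impulse response."""
    audio_at_mic = np.convolve(audio, s_hat)[:len(e_mic)]
    return e_mic - audio_at_mic
```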

  8. A Functional Neuroimaging Study of Sound Localization: Visual Cortex Activity Predicts Performance in Early-Blind Individuals

    PubMed Central

    Gougoux, Frédéric; Zatorre, Robert J; Lassonde, Maryse; Voss, Patrice

    2005-01-01

    Blind individuals often demonstrate enhanced nonvisual perceptual abilities. However, the neural substrate that underlies this improved performance remains to be fully understood. An earlier behavioral study demonstrated that some early-blind people localize sounds more accurately than sighted controls using monaural cues. In order to investigate the neural basis of these behavioral differences in humans, we carried out functional imaging studies using positron emission tomography and a speaker array that permitted pseudo-free-field presentations within the scanner. During binaural sound localization, a sighted control group showed decreased cerebral blood flow in the occipital lobe, which was not seen in early-blind individuals. During monaural sound localization (one ear plugged), the subgroup of early-blind subjects who were behaviorally superior at sound localization displayed two activation foci in the occipital cortex. This effect was not seen in blind persons who did not have superior monaural sound localization abilities, nor in sighted individuals. The degree of activation of one of these foci was strongly correlated with sound localization accuracy across the entire group of blind subjects. The results show that those blind persons who perform better than sighted persons recruit occipital areas to carry out auditory localization under monaural conditions. We therefore conclude that computations carried out in the occipital cortex specifically underlie the enhanced capacity to use monaural cues. Our findings shed light not only on intermodal compensatory mechanisms, but also on individual differences in these mechanisms and on inhibitory patterns that differ between sighted individuals and those deprived of vision early in life. PMID:15678166

  9. Loud and angry: sound intensity modulates amygdala activation to angry voices in social anxiety disorder

    PubMed Central

    Simon, Doerte; Becker, Michael; Mothes-Lasch, Martin; Miltner, Wolfgang H.R.

    2017-01-01

    Abstract Angry expressions of both voices and faces represent disorder-relevant stimuli in social anxiety disorder (SAD). Although individuals with SAD show greater amygdala activation to angry faces, previous work has failed to find comparable effects for angry voices. Here, we investigated whether voice sound-intensity, a modulator of a voice’s threat-relevance, affects brain responses to angry prosody in SAD. We used event-related functional magnetic resonance imaging to explore brain responses to voices varying in sound intensity and emotional prosody in SAD patients and healthy controls (HCs). Angry and neutral voices were presented either with normal or high sound amplitude, while participants had to decide upon the speaker’s gender. Loud vs normal voices induced greater insula activation, and angry vs neutral prosody greater orbitofrontal cortex activation in SAD as compared with HC subjects. Importantly, an interaction of sound intensity, prosody and group was found in the insula and the amygdala. In particular, the amygdala showed greater activation to loud angry voices in SAD as compared with HC subjects. This finding demonstrates a modulating role of voice sound-intensity on amygdalar hyperresponsivity to angry prosody in SAD and suggests that abnormal processing of interpersonal threat signals in amygdala extends beyond facial expressions in SAD. PMID:27651541

  10. Control of sound radiation from a wavepacket over a curved surface

    NASA Technical Reports Server (NTRS)

    Maestrello, Lucio; El Hady, Nabil M.

    1989-01-01

    Active control of the far-field acoustic pressure resulting from the growth and decay of a wavepacket convecting in a boundary layer over a concave-convex surface is investigated numerically using direct computations of the Navier-Stokes equations. The resulting sound radiation is computed using the linearized Euler equations, with the pressure from the Navier-Stokes solution imposed as a time-dependent boundary condition. The acoustic far field exhibits a directivity pattern pointing upstream relative to the flow direction. A fixed control algorithm is used in which the attenuation signal is synthesized by a filter adapted to the amplitude-time response of the outgoing acoustic wave.

  11. Selective entrainment of brain oscillations drives auditory perceptual organization.

    PubMed

    Costa-Faidella, Jordi; Sussman, Elyse S; Escera, Carles

    2017-10-01

    Perceptual sound organization supports our ability to make sense of the complex acoustic environment, to understand speech and to enjoy music. However, the neuronal mechanisms underlying the subjective experience of perceiving univocal auditory patterns that can be listened to, despite hearing all sounds in a scene, are poorly understood. We hereby investigated the manner in which competing sound organizations are simultaneously represented by specific brain activity patterns and the way attention and task demands prime the internal model generating the current percept. Using a selective attention task on ambiguous auditory stimulation coupled with EEG recordings, we found that the phase of low-frequency oscillatory activity dynamically tracks multiple sound organizations concurrently. However, whereas the representation of ignored sound patterns is circumscribed to auditory regions, large-scale oscillatory entrainment in auditory, sensory-motor and executive-control network areas reflects the active perceptual organization, thereby giving rise to the subjective experience of a unitary percept. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Cross-Polarization Optical Coherence Tomography with Active Maintenance of the Circular Polarization of a Sounding Wave in a Common Path System

    NASA Astrophysics Data System (ADS)

    Gelikonov, V. M.; Romashov, V. N.; Shabanov, D. V.; Ksenofontov, S. Yu.; Terpelov, D. A.; Shilyagin, P. A.; Gelikonov, G. V.; Vitkin, I. A.

    2018-05-01

    We consider a cross-polarization optical coherence tomography system with a common path for the sounding and reference waves and active maintenance of the circular polarization of the sounding wave. The system is based on forming birefringent characteristics of the total optical path that are equivalent to a quarter-wave plate with its optical axes oriented at 45° with respect to the linearly polarized reference wave. Conditions under which any light-polarization state can be obtained using a two-element phase controller are derived. The dependence of the local cross-scattering coefficient of light in a model medium and in biological tissue on the polarization state of the sounding wave is demonstrated. Active maintenance of the circular polarization of the sounding wave in this common-path system (including a flexible probe) is shown to be necessary to realize uniform, optimal conditions for cross-polarization studies of biological tissue.

  13. Auditory cortex controls sound-driven innate defense behaviour through corticofugal projections to inferior colliculus

    PubMed Central

    Xiong, Xiaorui R.; Liang, Feixue; Zingg, Brian; Ji, Xu-ying; Ibrahim, Leena A.; Tao, Huizhong W.; Zhang, Li I.

    2015-01-01

    Defense against environmental threats is essential for animal survival. However, the neural circuits responsible for transforming unconditioned sensory stimuli and generating defensive behaviours remain largely unclear. Here, we show that corticofugal neurons in the auditory cortex (ACx) targeting the inferior colliculus (IC) mediate an innate, sound-induced flight behaviour. Optogenetic activation of these neurons, or their projection terminals in the IC, is sufficient for initiating flight responses, while the inhibition of these projections reduces sound-induced flight responses. Corticocollicular axons monosynaptically innervate neurons in the cortex of the IC (ICx), and optogenetic activation of the projections from the ICx to the dorsal periaqueductal gray is sufficient for provoking flight behaviours. Our results suggest that ACx can both amplify innate acoustic-motor responses and directly drive flight behaviours in the absence of sound input through corticocollicular projections to ICx. Such corticofugal control may be a general feature of innate defense circuits across sensory modalities. PMID:26068082

  14. A cry in the dark: depressed mothers show reduced neural activation to their own infant’s cry

    PubMed Central

    Ablow, Jennifer C.

    2012-01-01

    This study investigated depression-related differences in primiparous mothers’ neural response to their own infant’s distress cues. Mothers diagnosed with major depressive disorder (n = 11) and comparison mothers with no diagnosable psychopathology (n = 11) were exposed to their own 18-month-old infant’s cry sound, as well as an unfamiliar infant’s cry and a control sound, during functional neuroimaging. Depressed mothers’ response to own infant cry greater than other sounds was compared to non-depressed mothers’ response in the whole brain [false discovery rate (FDR) corrected]. A continuous measure of self-reported depressive symptoms (CESD) was also tested as a predictor of maternal response. Non-depressed mothers activated to their own infant’s cry greater than control sound in a distributed network of para/limbic and prefrontal regions, whereas depressed mothers as a group failed to show activation. Non-depressed compared to depressed mothers showed significantly greater striatal (caudate, nucleus accumbens) and medial thalamic activation. Additionally, mothers with lower depressive symptoms activated more strongly in left orbitofrontal, dorsal anterior cingulate and medial superior frontal regions. Non-depressed compared to depressed mothers activated uniquely to own infant greater than other infant cry in occipital fusiform areas. Disturbance of these neural networks involved in emotional response and regulation may help to explain parenting deficits in depressed mothers. PMID:21208990

  15. Association between patterns of jaw motor activity during sleep and clinical signs and symptoms of sleep bruxism.

    PubMed

    Yoshida, Yuya; Suganuma, Takeshi; Takaba, Masayuki; Ono, Yasuhiro; Abe, Yuka; Yoshizawa, Shuichiro; Sakai, Takuro; Yoshizawa, Ayako; Nakamura, Hirotaka; Kawana, Fusae; Baba, Kazuyoshi

    2017-08-01

    The aim of this study was to investigate the association between patterns of jaw motor activity during sleep and clinical signs and symptoms of sleep bruxism. A total of 35 university students and staff members participated in this study after providing informed consent. All participants were divided into either a sleep bruxism group (n = 21) or a control group (n = 14), based on the following clinical diagnostic criteria: (1) reports of tooth-grinding sounds for at least two nights a week during the preceding 6 months by their sleep partner; (2) presence of tooth attrition with exposed dentin; (3) reports of morning masticatory muscle fatigue or tenderness; and (4) presence of masseter muscle hypertrophy. Video-polysomnography was performed in the sleep laboratory for two nights. Sleep bruxism episodes were measured using masseter electromyography, visually inspected and then categorized into phasic or tonic episodes. Phasic episodes were categorized further into episodes with or without grinding sounds as evaluated by audio signals. Sleep bruxism subjects with reported grinding sounds had a significantly higher total number of phasic episodes with grinding sounds than subjects without reported grinding sounds or controls (Kruskal-Wallis/Steel-Dwass tests; P < 0.05). Similarly, sleep bruxism subjects with tooth attrition exhibited significantly longer phasic burst durations than those without or controls (Kruskal-Wallis/Steel-Dwass tests; P < 0.05). Furthermore, sleep bruxism subjects with morning masticatory muscle fatigue or tenderness exhibited significantly longer tonic burst durations than those without or controls (Kruskal-Wallis/Steel-Dwass tests; P < 0.05). These results suggest that each clinical sign and symptom of sleep bruxism represents different aspects of jaw motor activity during sleep. © 2016 European Sleep Research Society.

  16. Experiments on active isolation using distributed PVDF error sensors

    NASA Technical Reports Server (NTRS)

    Lefebvre, S.; Guigou, C.; Fuller, C. R.

    1992-01-01

    A control system based on a two-channel narrow-band LMS algorithm is used to isolate periodic vibration at low frequencies on a structure composed of a rigid top plate mounted on a flexible receiving plate. The control performance of distributed PVDF error sensors and accelerometer point sensors is compared. For both sensors, high levels of global reduction, up to 32 dB, have been obtained. It is found that, by driving the PVDF strip output voltage to zero, the controller may force the structure to vibrate so that the integration of the strain under the length of the PVDF strip is zero. This ability of the PVDF sensors to act as spatial filters is especially relevant in active control of sound radiation. It is concluded that the PVDF sensors are flexible, nonfragile, and inexpensive and can be used as strain sensors for active control applications of vibration isolation and sound radiation.
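
    A narrow-band LMS controller of the kind used in this experiment can be reduced, per frequency, to two adaptive weights acting on quadrature reference signals. A minimal single-channel sketch (the experiment itself used two channels with physical actuators and PVDF sensors; names and parameters here are illustrative):

```python
import numpy as np

def narrowband_lms(d, f0, fs, mu=0.01):
    """Cancel a tone at f0 (Hz) from d using a two-weight quadrature LMS."""
    n = len(d)
    t = np.arange(n) / fs
    xc = np.cos(2 * np.pi * f0 * t)   # in-phase reference
    xs = np.sin(2 * np.pi * f0 * t)   # quadrature reference
    wc = ws = 0.0
    e = np.empty(n)
    for i in range(n):
        y = wc * xc[i] + ws * xs[i]   # synthesized cancelling signal
        e[i] = d[i] - y
        wc += mu * e[i] * xc[i]       # LMS weight updates
        ws += mu * e[i] * xs[i]
    return e
```

    Because the two weights set the amplitude and phase of a single tone, this structure converges quickly for periodic disturbances, which is why narrow-band LMS suits the periodic vibration-isolation problem described above.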

  17. Active control of sound transmission through a rectangular panel using point-force actuators and piezoelectric film sensors.

    PubMed

    Sanada, Akira; Higashiyama, Kouji; Tanaka, Nobuo

    2015-01-01

    This study deals with the active control of sound transmission through a rectangular panel, based on single input, single output feedforward vibration control using point-force actuators and piezoelectric film sensors. It focuses on the phenomenon in which the sound power transmitted through a finite-sized panel drops significantly at some frequencies just below the resonance frequencies of the panel in the low-frequency range as a result of modal coupling cancellation. In a previous study, it was shown that when point-force actuators are located on nodal lines for the frequency at which this phenomenon occurs, a force equivalent to the incident sound wave can act on the panel. In this study, a practical method for sensing volume velocity using a small number of piezoelectric film strips is investigated. It is found that two quadratically shaped piezoelectric film strips, attached at the same nodal lines as those where the actuators were placed, can sense the volume velocity approximately in the low-frequency range. Results of simulations show that combining the proposed actuation method and the sensing method can achieve a practical control effect at low frequencies over a wide frequency range. Finally, experiments are carried out to demonstrate the validity and feasibility of the proposed method.

  18. Cross-Modal Perception of Noise-in-Music: Audiences Generate Spiky Shapes in Response to Auditory Roughness in a Novel Electroacoustic Concert Setting

    PubMed Central

    Liew, Kongmeng; Lindborg, PerMagnus; Rodrigues, Ruth; Styles, Suzy J.

    2018-01-01

    Noise has become integral to electroacoustic music aesthetics. In this paper, we define noise as sound that is high in auditory roughness, and examine its effect on cross-modal mapping between sound and visual shape in participants. In order to preserve the ecological validity of contemporary music aesthetics, we developed Rama, a novel interface, for presenting experimentally controlled blocks of electronically generated sounds that varied systematically in roughness, and actively collected data from audience interaction. These sounds were then embedded as musical drones within the overall sound design of a multimedia performance with live musicians. Audience members listened to these sounds and collectively voted to create the shape of a visual graphic, presented as part of the audio–visual performance. The results of the concert setting were replicated in a controlled laboratory environment to corroborate the findings. Results show a consistent effect of auditory roughness on shape design, with rougher sounds corresponding to spikier shapes. We discuss the implications, as well as evaluate the audience interface. PMID:29515494

  19. Corollary discharge provides the sensory content of inner speech.

    PubMed

    Scott, Mark

    2013-09-01

    Inner speech is one of the most common, but least investigated, mental activities humans perform. It is an internal copy of one's external voice and so is similar to a well-established component of motor control: corollary discharge. Corollary discharge is a prediction of the sound of one's voice generated by the motor system. This prediction is normally used to filter self-caused sounds from perception, which segregates them from externally caused sounds and prevents the sensory confusion that would otherwise result. The similarity between inner speech and corollary discharge motivates the theory, tested here, that corollary discharge provides the sensory content of inner speech. The results reported here show that inner speech attenuates the impact of external sounds. This attenuation was measured using a context effect (an influence of contextual speech sounds on the perception of subsequent speech sounds), which weakens in the presence of speech imagery that matches the context sound. Results from a control experiment demonstrated this weakening in external speech as well. Such sensory attenuation is a hallmark of corollary discharge.

  1. An experimental model to measure the ability of headphones with active noise control to reduce patient's exposure to noise in an intensive care unit.

    PubMed

    Gallacher, Stuart; Enki, Doyo; Stevens, Sian; Bennett, Mark J

    2017-10-17

    Defining the association between excessive noise in intensive care units, sleep disturbance and morbidity, including delirium, is confounded by the difficulty of implementing successful strategies to reduce patients' exposure to noise. Active noise control devices may prove to be useful adjuncts, but there is currently little data quantifying their ability to reduce noise in this complex environment. Sound meters were embedded in the auditory meatus of three polystyrene model heads placed in patient bays of a cardiac ICU: one with no headphones (control), one with headphones alone and one with headphones using active noise control. Sound levels were recorded at 1 Hz for ten days, and the noise levels in each group were compared using repeated-measures MANOVA and subsequent pairwise testing. Multivariate testing demonstrated a significant difference in mean noise exposure levels between the three groups (p < 0.001). Pairwise testing showed that the reduction in noise was greatest with headphones and active noise control; the mean reduction in noise exposure between this group and the control over 24 h was 6.8 (0.66) dB. The use of active noise control was also associated with a reduction in exposure to high-intensity sound events over the course of the day. Active noise cancellation, as delivered by noise-cancelling headphones, was thus associated with a significant reduction in noise exposure in our model of a cardiac ICU. This is the first study to examine the potential effectiveness of active noise control for adult patients in an intensive care environment, and it shows that active noise control is a candidate technology for reducing the noise exposure patients experience during intensive care stays.
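
    Averaging sound levels recorded in dB, as in the 24 h comparison above, is normally done on an energy basis (the equivalent continuous level, Leq) rather than by taking the arithmetic mean of the dB values. A minimal sketch (the study does not publish its code; this is the standard formula):

```python
import numpy as np

def leq_db(spl_db):
    """Equivalent continuous sound level from SPL samples in dB."""
    spl_db = np.asarray(spl_db, dtype=float)
    return 10 * np.log10(np.mean(10 ** (spl_db / 10)))
```

    Note that a single loud event pulls Leq up far more than the arithmetic mean would suggest: leq_db([50, 80]) is about 77 dB, not 65 dB, which is why reducing high-intensity events matters for overall exposure.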

  2. Experimental Simulation of Active Control With On-line System Identification on Sound Transmission Through an Elastic Plate

    NASA Technical Reports Server (NTRS)

    1998-01-01

    An adaptive control algorithm with on-line system identification capability has been developed. A great advantage of this scheme is that no auxiliary identification mechanism, such as an additional uncorrelated random signal generator, is required. A time-varying plate-cavity system is used to demonstrate the control performance of the algorithm. The system consists of a stainless-steel plate bolted over the opening of a rigid cavity whose effective depth is varied over time by changing the water level inside it. For a given externally located harmonic sound excitation, system identification and control are executed simultaneously to minimize the sound transmitted into the cavity. The control performance is examined for two cases. In the first, the cavity is empty and the external disturbance frequency is swept at 1 Hz/s; the result shows excellent frequency-tracking capability, with internal sound suppression of 40 dB. In the second, the cavity is initially empty and the water level is then raised to 3/20 full over 60 seconds while the external excitation frequency is held fixed, so the cavity resonance frequency decreases and passes through the excitation frequency. The algorithm again achieves 40 dB of transmitted-noise suppression without compromising its system-identification tracking capability.

  3. Human brain regions involved in recognizing environmental sounds.

    PubMed

    Lewis, James W; Wightman, Frederic L; Brefczynski, Julie A; Phinney, Raymond E; Binder, Jeffrey R; DeYoe, Edgar A

    2004-09-01

    To identify the brain regions preferentially involved in environmental sound recognition (comprising portions of a putative auditory 'what' pathway), we collected functional imaging data while listeners attended to a wide range of sounds, including those produced by tools, animals, liquids and dropped objects. These recognizable sounds, in contrast to unrecognizable, temporally reversed control sounds, evoked activity in a distributed network of brain regions previously associated with semantic processing, located predominantly in the left hemisphere, but also included strong bilateral activity in posterior portions of the middle temporal gyri (pMTG). Comparisons with earlier studies suggest that these bilateral pMTG foci partially overlap cortex implicated in high-level visual processing of complex biological motion and recognition of tools and other artifacts. We propose that the pMTG foci process multimodal (or supramodal) information about objects and object-associated motion, and that this may represent 'action' knowledge that can be recruited for purposes of recognition of familiar environmental sound-sources. These data also provide a functional and anatomical explanation for the symptoms of pure auditory agnosia for environmental sounds reported in human lesion studies.

  4. A wavenumber approach to analysing the active control of plane waves with arrays of secondary sources

    NASA Astrophysics Data System (ADS)

    Elliott, Stephen J.; Cheer, Jordan; Bhan, Lam; Shi, Chuang; Gan, Woon-Seng

    2018-04-01

    The active control of an incident sound field with an array of secondary sources is a fundamental problem in active control. In this paper the optimal performance of an infinite array of secondary sources in controlling a plane incident sound wave is first considered in free space. An analytic solution for normal incidence plane waves is presented, indicating a clear cut-off frequency for good performance, when the separation distance between the uniformly-spaced sources is equal to a wavelength. The extent of the near field pressure close to the source array is also quantified, since this determines the positions of the error microphones in a practical arrangement. The theory is also extended to oblique incident waves. This result is then compared with numerical simulations of controlling the sound power radiated through an open aperture in a rigid wall, subject to an incident plane wave, using an array of secondary sources in the aperture. In this case the diffraction through the aperture becomes important when its size is comparable with the acoustic wavelength, in which case only a few sources are necessary for good control. When the size of the aperture is large compared to the wavelength, diffraction is less important but more secondary sources are needed for good control, and the results then become similar to those for the free-field problem with an infinite source array.
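    The optimal-performance calculation described above can be sketched as a least-squares problem: given the transfer matrix from secondary monopoles to error microphones, choose the complex source strengths that minimise the residual pressure of a normally incident plane wave. A minimal free-field sketch; the geometry, wavelength, and array spacing are illustrative assumptions, not the paper's values:

```python
import numpy as np

k = 2 * np.pi / 0.5                      # wavenumber for a 0.5 m wavelength

def monopole(src, obs):
    """Free-field monopole transfer function exp(-jkr)/(4*pi*r)."""
    r = np.linalg.norm(obs - src)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

# secondary sources spaced 0.2 m (less than a wavelength) along x at y = 0
sources = np.array([[0.2 * i, 0.0] for i in range(-5, 6)])
# error microphones a short distance downstream of the array
mics = np.array([[0.2 * i, 0.3] for i in range(-5, 6)])

# primary field: normally incident plane wave travelling in +y
p_primary = np.exp(-1j * k * mics[:, 1])

# transfer matrix Z[m, s]: pressure at mic m due to unit-strength source s
Z = np.array([[monopole(s, m) for s in sources] for m in mics])

# least-squares optimal source strengths minimising |p_primary + Z q|^2
q, *_ = np.linalg.lstsq(Z, -p_primary, rcond=None)
residual = p_primary + Z @ q
reduction_db = 10 * np.log10(np.sum(np.abs(p_primary)**2) /
                             np.sum(np.abs(residual)**2))
print(f"pressure reduction at the error mics: {reduction_db:.1f} dB")
```

    With as many microphones as sources and sub-wavelength spacing, the residual at the error positions is driven essentially to zero; the paper's cut-off behaviour appears when the source spacing approaches a wavelength.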

  5. Loud and angry: sound intensity modulates amygdala activation to angry voices in social anxiety disorder.

    PubMed

    Simon, Doerte; Becker, Michael; Mothes-Lasch, Martin; Miltner, Wolfgang H R; Straube, Thomas

    2017-03-01

    Angry expressions of both voices and faces represent disorder-relevant stimuli in social anxiety disorder (SAD). Although individuals with SAD show greater amygdala activation to angry faces, previous work has failed to find comparable effects for angry voices. Here, we investigated whether voice sound-intensity, a modulator of a voice's threat-relevance, affects brain responses to angry prosody in SAD. We used event-related functional magnetic resonance imaging to explore brain responses to voices varying in sound intensity and emotional prosody in SAD patients and healthy controls (HCs). Angry and neutral voices were presented either with normal or high sound amplitude, while participants had to decide upon the speaker's gender. Loud vs normal voices induced greater insula activation, and angry vs neutral prosody greater orbitofrontal cortex activation in SAD as compared with HC subjects. Importantly, an interaction of sound intensity, prosody and group was found in the insula and the amygdala. In particular, the amygdala showed greater activation to loud angry voices in SAD as compared with HC subjects. This finding demonstrates a modulating role of voice sound-intensity on amygdalar hyperresponsivity to angry prosody in SAD and suggests that abnormal processing of interpersonal threat signals in amygdala extends beyond facial expressions in SAD. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  6. Frequency-independent radiation modes of interior sound radiation: An analytical study

    NASA Astrophysics Data System (ADS)

    Hesse, C.; Vivar Perez, J. M.; Sinapius, M.

    2017-03-01

    Global active control methods of sound radiation into acoustic cavities necessitate the formulation of the interior sound field in terms of the surrounding structural velocity. This paper proposes an efficient approach to do this by presenting an analytical method to describe the radiation modes of interior sound radiation. The method requires no knowledge of the structural modal properties, which are often difficult to obtain in control applications. The procedure is exemplified for two generic systems of fluid-structure interaction, namely a rectangular plate coupled to a cuboid cavity and a hollow cylinder with the fluid in its enclosed cavity. The radiation modes are described as a subset of the acoustic eigenvectors on the structural-acoustic interface. For the two studied systems, they are therefore independent of frequency.

  7. Modeling of influencing parameters in active noise control on an enclosure wall

    NASA Astrophysics Data System (ADS)

    Tarabini, Marco; Roure, Alain

    2008-04-01

    This paper investigates, by means of a numerical model, the possibility of using an active noise barrier to virtually reduce the acoustic transparency of a partition wall inside an enclosure. The room is modeled with the image method as a rectangular enclosure with a stationary point source; the active barrier is set up by an array of loudspeakers and error microphones and is meant to minimize the squared sound pressure on a wall with the use of a decentralized control. Simulations investigate the effects of the enclosure characteristics and of the barrier geometric parameters on the sound pressure attenuation on the controlled partition, on the whole enclosure potential energy and on the diagonal control stability. Performances are analyzed in a frequency range of 25-300 Hz at discrete 25 Hz steps. Influencing parameters and their effects on the system performances are identified with a statistical inference procedure. Simulation results show that the sound pressure on the controlled partition can be reduced on average. In the investigated configuration, the surface attenuation and the diagonal control stability are mainly driven by the distance between the loudspeakers and the error microphones and by the loudspeaker directivity; minor effects are due to the distance between the error microphones and the wall, the wall reflectivity, and the active barrier grid meshing. Room dimensions and source position have negligible effects. Experimental results point out the validity of the model and the efficiency of the barrier in the reduction of the wall acoustic transparency.

  8. Sound-field reproduction in-room using optimal control techniques: simulations in the frequency domain.

    PubMed

    Gauthier, Philippe-Aubert; Berry, Alain; Woszczyk, Wieslaw

    2005-02-01

    This paper describes the simulations and results obtained when applying optimal control to progressive sound-field reproduction (mainly for audio applications) over an area using multiple monopole loudspeakers. The model simulates a reproduction system that operates either in free field or in a closed space approaching a typical listening room, and is based on optimal control in the frequency domain. This rather simple approach is chosen for the purpose of physical investigation, especially in terms of sensing microphones and reproduction loudspeakers configurations. Other issues of interest concern the comparison with wave-field synthesis and the control mechanisms. The results suggest that in-room reproduction of sound field using active control can be achieved with a residual normalized squared error significantly lower than open-loop wave-field synthesis in the same situation. Active reproduction techniques have the advantage of automatically compensating for the room's natural dynamics. For the considered cases, the simulations show that optimal control results are not sensitive (in terms of reproduction error) to wall absorption in the reproduction room. A special surrounding configuration of sensors is introduced for a sensor-free listening area in free field.

  9. Interaction Metrics for Feedback Control of Sound Radiation from Stiffened Panels

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph H.; Cox, David E.; Gibbs, Gary P.

    2003-01-01

    Interaction metrics developed for the process control industry are used to evaluate decentralized control of sound radiation from bays on an aircraft fuselage. The metrics are applied to experimentally measured frequency response data from a model of an aircraft fuselage. The purpose is to understand how coupling between multiple bays of the fuselage can destabilize or limit the performance of a decentralized active noise control system. The metrics quantitatively verify observations from a previous experiment, in which decentralized controllers performed worse than centralized controllers. The metrics do not appear to be useful for explaining control spillover which was observed in a previous experiment.
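    The abstract does not name the interaction metric, but a standard one from the process-control literature is the relative gain array (RGA), which flags channel pairings where decentralized loops will fight each other. A minimal sketch, with a hypothetical 2x2 plant matrix standing in for the measured frequency-response data:

```python
import numpy as np

def rga(G):
    """Relative Gain Array: elementwise product of G and inv(G).T.
    Diagonal entries near 1 suggest decentralized (loop-by-loop) control
    behaves like independent SISO loops; entries far from 1, or negative,
    indicate strong cross-coupling between channels."""
    return G * np.linalg.inv(G).T

# hypothetical 2x2 plant: two actuator/sensor pairs on adjacent fuselage bays
G = np.array([[1.0, 0.8],
              [0.8, 1.0]])
print(rga(G))
```

    Each row and column of the RGA sums to one, so strongly coupled channels push the diagonal well above unity, which is one quantitative way of anticipating the destabilisation the abstract describes.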

  10. Physical mechanisms of active control of sound transmission through rib stiffened double-panel structure

    NASA Astrophysics Data System (ADS)

    Ma, Xiyue; Chen, Kean; Ding, Shaohu; Yu, Haoxin

    2016-06-01

    This paper presents an analytical investigation of the physical mechanisms of actively controlling sound transmission through a rib-stiffened double-panel structure using a point source in the cavity. Combined modal expansion and vibro-acoustic coupling methods are applied to establish the theoretical model of this active structure. Under the condition of minimizing the radiated power of the radiating ribbed plate, the physical mechanisms are interpreted in detail from the point of view of modal coupling, similar to the approach used in the existing literature. The results demonstrate that the pattern of sound energy transmission and the physical mechanisms for the rib-stiffened double-panel structure both change, and are affected by the coupling effects of the rib, when compared with analytical results for the unribbed double-panel case. By taking the coupling effects of the rib into consideration, the cavity-mode suppression and rearrangement mechanisms obtained in previous investigations are modified and supplemented for the ribbed-plate case, giving a clear interpretation of the physical behavior involved in the active rib-stiffened double-panel structure.

  11. Cancelation and its simulation using Matlab according to active noise control case study of automotive noise silencer

    NASA Astrophysics Data System (ADS)

    Alfisyahrin; Isranuri, I.

    2018-02-01

    Active noise control is a technique for countering noise with sound, or, in signal-processing terms, countering a signal with a signal. It can be used to attenuate relevant noise according to the requirements of the engineering task, here reducing automotive muffler noise to a minimum. The objective of this study is to develop an active noise control scheme that cancels the noise of an automotive exhaust (silencer) through signal-processing simulation. The noise generator of the controller produces a signal opposing the amplitude and frequency of the automotive noise. The steps are as follows. First, the noise of the automotive silencer was measured to characterize its amplitude and frequency content; the field data were captured on the muffler (silencer) of a 2009 Toyota Kijang Capsule. A counter-sound with the same character as the source signal is then generated by a signal function: the source signal inverted in phase by 180 degrees, produced by a signal/noise generator instrument. MATLAB is used to simulate, via the FFT, the processing of the noise generated by the exhaust, and the field data are compared with Fourier-transform simulation calculations. The noise cancellation process is examined through computer simulation. The result shows a sound attenuation (noise cancellation) of 33.7%, obtained by comparing the level of the source signal with the residual level after adding the opposing signal. It can thus be concluded that the noise signal can be attenuated by 33.7%.
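    The described procedure (FFT characterisation of the noise, generation of a 180-degree-inverted counter-signal, and comparison of source and residual levels) can be sketched in a few lines. The signal below is synthetic and the numbers are illustrative, not the paper's Toyota Kijang measurements:

```python
import numpy as np

fs = 8000
t = np.arange(0, 1, 1 / fs)
# hypothetical exhaust note: two harmonics plus broadband noise
rng = np.random.default_rng(0)
noise = (np.sin(2 * np.pi * 90 * t) + 0.5 * np.sin(2 * np.pi * 180 * t)
         + 0.1 * rng.standard_normal(len(t)))

# characterise the dominant component via the FFT
spec = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
k0 = np.argmax(np.abs(spec))
f0 = freqs[k0]                           # dominant frequency
amp = 2 * np.abs(spec[k0]) / len(t)      # its amplitude
phase0 = np.angle(spec[k0])              # its phase

# anti-noise: same amplitude, 180 degrees out of phase with that tone
anti = -amp * np.cos(2 * np.pi * f0 * t + phase0)
residual = noise + anti

rms = lambda s: np.sqrt(np.mean(s**2))
att = 100 * (1 - rms(residual) / rms(noise))
print(f"RMS attenuation: {att:.1f}%")
```

    Cancelling only the dominant tone leaves the other harmonics and the broadband floor, which is why single-frequency phase inversion yields a partial percentage reduction rather than silence.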

  12. Active Noise Control for Dishwasher noise

    NASA Astrophysics Data System (ADS)

    Lee, Nokhaeng; Park, Youngjin

    2016-09-01

    The dishwasher is a useful home appliance, continually used for automatically washing dishes. It is commonly placed in the kitchen in a built-in style for practicality and better use of space. In this environment people are easily exposed to dishwasher noise, so it is an important issue for consumers, especially those living in open, narrow spaces. Current sound power levels of the noise are about 40-50 dBA, achieved by removing noise sources and passively insulating the acoustic path. For further reduction, quiet modes with lower cycle speeds have been introduced, but these deteriorate the washing capacity. Against this background, we propose active noise control for dishwasher noise. The noise is observed to propagate mainly from the lower part of the front side, and control speakers are placed in that part for collocation. An observation part that estimates the sound field distribution and a control part that generates the anti-noise are designed for active noise control. Simulation results show that the proposed active noise control scheme could be a potential application for dishwasher noise reduction.

  13. A Lightweight Loudspeaker for Aircraft Communications and Active Noise Control

    NASA Technical Reports Server (NTRS)

    Warnaka, Glenn E.; Kleinle, Mark; Tsangaris, Parry; Oslac, Michael J.; Moskow, Harry J.

    1992-01-01

    A series of new, lightweight loudspeakers for use on commercial aircraft has been developed. The loudspeakers use NdFeB magnets and aluminum alloy frames to reduce the weight. The NdFeB magnet is virtually encapsulated by steel in the new speaker designs. Active noise reduction using internal loudspeakers was demonstrated to be effective in 1983. A weight, space, and cost efficient method for creating the active sound attenuating fields is to use the existing cabin loudspeakers for both communication and sound attenuation. This will require some additional loudspeaker design considerations.

  14. Active control of panel vibrations induced by a boundary layer flow

    NASA Technical Reports Server (NTRS)

    Chow, Pao-Liu

    1995-01-01

    The problems of active and passive control of sound and vibration have been investigated by many researchers for a number of years. However, few articles are concerned with sound and vibration involving flow-structure interaction. Experimental and numerical studies on the coupling between panel vibration and acoustic radiation due to flow excitation have been done by Maestrello and his associates at NASA/Langley Research Center. Since the coupled system of nonlinear partial differential equations is formidable, an analytical solution to the full problem seems impossible. For this reason, we simplify the problem to that of nonlinear panel vibration induced by a uniform flow or a boundary-layer flow with a given wall pressure distribution. Based on this simplified model, we have been able to consider the control and stabilization of the nonlinear panel vibration, which have not been treated satisfactorily by other authors. Although sound radiation has not been included, the vibration suppression will clearly reduce the sound radiation power from the panel. The major research findings are presented in three sections. Section two describes results on the boundary control of nonlinear panel vibration, with or without flow excitation. Sections three and four are concerned with analytical and numerical results on the optimal control of the linear and nonlinear panel vibrations, respectively, excited by the flow pressure fluctuations. Finally, in section five, we draw some conclusions from the research findings.

  15. Fibre architecture and song activation rates of syringeal muscles are not lateralized in the European starling

    PubMed Central

    Uchida, A. M.; Meyers, R. A.; Cooper, B. G.; Goller, F.

    2010-01-01

    The songbird vocal organ, the syrinx, is composed of two sound generators, which are independently controlled by sets of two extrinsic and four intrinsic muscles. These muscles rank among the fastest vertebrate muscles, but the molecular and morphological foundations of this rapid physiological performance are unknown. Here we show that the four intrinsic muscles in the syrinx of male European starlings (Sturnus vulgaris) are composed of fast oxidative and superfast fibres. Dorsal and ventral tracheobronchialis muscles contain slightly more superfast fibres relative to the number of fast oxidative fibres than dorsal and ventral syringealis muscles. This morphological difference is not reflected in the highest, burst-like activation rate of the two muscle groups during song as assessed with electromyographic recordings. No difference in fibre type ratio was found between the corresponding muscles of the left and right sound generators. Airflow and electromyographic measurements during song indicate that maximal activation rate and speed of airflow regulation do not differ between the two sound sources. Whereas the potential for high-speed muscular control exists on both sides, the two sound generators are used differentially for modulation of acoustic parameters. These results show that large numbers of superfast fibre types are present in intrinsic syringeal muscles of a songbird, providing further confirmation of rapid contraction kinetics. However, syringeal muscles are composed of two fibre types which raises questions about the neuromuscular control of this heterogeneous muscle architecture. PMID:20228343

  16. Fibre architecture and song activation rates of syringeal muscles are not lateralized in the European starling.

    PubMed

    Uchida, A M; Meyers, R A; Cooper, B G; Goller, F

    2010-04-01

    The songbird vocal organ, the syrinx, is composed of two sound generators, which are independently controlled by sets of two extrinsic and four intrinsic muscles. These muscles rank among the fastest vertebrate muscles, but the molecular and morphological foundations of this rapid physiological performance are unknown. Here we show that the four intrinsic muscles in the syrinx of male European starlings (Sturnus vulgaris) are composed of fast oxidative and superfast fibres. Dorsal and ventral tracheobronchialis muscles contain slightly more superfast fibres relative to the number of fast oxidative fibres than dorsal and ventral syringealis muscles. This morphological difference is not reflected in the highest, burst-like activation rate of the two muscle groups during song as assessed with electromyographic recordings. No difference in fibre type ratio was found between the corresponding muscles of the left and right sound generators. Airflow and electromyographic measurements during song indicate that maximal activation rate and speed of airflow regulation do not differ between the two sound sources. Whereas the potential for high-speed muscular control exists on both sides, the two sound generators are used differentially for modulation of acoustic parameters. These results show that large numbers of superfast fibre types are present in intrinsic syringeal muscles of a songbird, providing further confirmation of rapid contraction kinetics. However, syringeal muscles are composed of two fibre types which raises questions about the neuromuscular control of this heterogeneous muscle architecture.

  17. Cell-specific gain modulation by synaptically released zinc in cortical circuits of audition.

    PubMed

    Anderson, Charles T; Kumar, Manoj; Xiong, Shanshan; Tzounopoulos, Thanos

    2017-09-09

    In many excitatory synapses, mobile zinc is found within glutamatergic vesicles and is coreleased with glutamate. Ex vivo studies established that synaptically released (synaptic) zinc inhibits excitatory neurotransmission at lower frequencies of synaptic activity but enhances steady state synaptic responses during higher frequencies of activity. However, it remains unknown how synaptic zinc affects neuronal processing in vivo. Here, we imaged the sound-evoked neuronal activity of the primary auditory cortex in awake mice. We discovered that synaptic zinc enhanced the gain of sound-evoked responses in CaMKII-expressing principal neurons, but it reduced the gain of parvalbumin- and somatostatin-expressing interneurons. This modulation was sound intensity-dependent and, in part, NMDA receptor-independent. By establishing a previously unknown link between synaptic zinc and gain control of auditory cortical processing, our findings advance understanding about cortical synaptic mechanisms and create a new framework for approaching and interpreting the role of the auditory cortex in sound processing.

  18. Virtual sensors for active noise control in acoustic-structural coupled enclosures using structural sensing: robust virtual sensor design.

    PubMed

    Halim, Dunant; Cheng, Li; Su, Zhongqing

    2011-03-01

    The work was aimed to develop a robust virtual sensing design methodology for sensing and active control applications of vibro-acoustic systems. The proposed virtual sensor was designed to estimate a broadband acoustic interior sound pressure using structural sensors, with robustness against certain dynamic uncertainties occurring in an acoustic-structural coupled enclosure. A convex combination of Kalman sub-filters was used during the design, accommodating different sets of perturbed dynamic model of the vibro-acoustic enclosure. A minimax optimization problem was set up to determine an optimal convex combination of Kalman sub-filters, ensuring an optimal worst-case virtual sensing performance. The virtual sensing and active noise control performance was numerically investigated on a rectangular panel-cavity system. It was demonstrated that the proposed virtual sensor could accurately estimate the interior sound pressure, particularly the one dominated by cavity-controlled modes, by using a structural sensor. With such a virtual sensing technique, effective active noise control performance was also obtained even for the worst-case dynamics. © 2011 Acoustical Society of America
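    The minimax convex-combination idea can be illustrated schematically: given the estimation error of each Kalman sub-filter under each perturbed plant model, pick the convex weight that minimises the worst-case error. The error table below is hypothetical, and the errors are assumed (conservatively, since a norm of a convex combination is bounded by the combination of norms) to combine linearly in the weights:

```python
import numpy as np

# hypothetical: errors of two sub-filters under three perturbed plant models
# err[i, j] = mean-square estimation error of sub-filter j under model i
err = np.array([[1.0, 4.0],
                [3.0, 1.5],
                [2.0, 2.5]])

# combined estimate = a * filter1 + (1 - a) * filter2; sweep the weight a
alphas = np.linspace(0, 1, 1001)
worst = np.array([np.max(a * err[:, 0] + (1 - a) * err[:, 1])
                  for a in alphas])
a_star = alphas[np.argmin(worst)]       # minimax convex weight
print(f"minimax weight a* = {a_star:.3f}, "
      f"worst-case error = {worst.min():.3f}")
```

    Neither sub-filter alone is acceptable under every model, but the blended estimator caps the error under the worst perturbation, which is the design goal the abstract's minimax optimization formalises.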

  19. Radiometric sounding system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whiteman, C.D.; Anderson, G.A.; Alzheimer, J.M.

    1995-04-01

    Vertical profiles of solar and terrestrial radiative fluxes are key research needs for global climate change research. These fluxes are expected to change as radiatively active trace gases are emitted to the earth's atmosphere as a consequence of energy production and industrial and other human activities. Models suggest that changes in the concentration of such gases will lead to radiative flux divergences that will produce global warming of the earth's atmosphere. Direct measurements of the vertical variation of solar and terrestrial radiative fluxes that lead to these flux divergences have been largely unavailable because of the expense of making such measurements from airplanes. These measurements are needed to improve existing atmospheric radiative transfer models, especially under the cloudy conditions where the models have not been adequately tested. A tethered-balloon-borne Radiometric Sounding System has been developed at Pacific Northwest Laboratory to provide an inexpensive means of making routine vertical soundings of radiative fluxes in the earth's atmospheric boundary layer to altitudes up to 1500 m above ground level. Such vertical soundings would supplement measurements being made from aircraft and towers. The key technical challenge in the design of the Radiometric Sounding System is to develop a means of keeping the radiometers horizontal while the balloon ascends and descends in a turbulent atmospheric environment. This problem has been addressed by stabilizing a triangular radiometer-carrying platform that is carried on the tetherline of a balloon sounding system. The platform, carried 30 m or more below the balloon to reduce the balloon's effect on the radiometric measurements, is leveled by two automatic control loops that activate motors, gears and pulleys when the platform is off-level. The sensitivity of the automatic control loops to oscillatory motions of various frequencies and amplitudes can be adjusted using filters.

  20. Musicianship enhances ipsilateral and contralateral efferent gain control to the cochlea.

    PubMed

    Bidelman, Gavin M; Schneider, Amy D; Heitzmann, Victoria R; Bhagat, Shaum P

    2017-02-01

    Human hearing sensitivity is easily compromised with overexposure to excessively loud sounds, leading to permanent hearing damage. Consequently, finding activities and/or experiential factors that distinguish "tender" from "tough" ears (i.e., acoustic vulnerability) would be important for identifying people at higher risk for hearing damage. To regulate sound transmission and protect the inner ear against acoustic trauma, the auditory system modulates gain control to the cochlea via biological feedback of the medial olivocochlear (MOC) efferents, a neuronal pathway linking the lower brainstem and cochlear outer hair cells. We hypothesized that a salient form of auditory experience shown to have pervasive neuroplastic benefits, namely musical training, might act to fortify hearing through tonic engagement of these reflexive pathways. By measuring MOC efferent feedback via otoacoustic emissions (cochlear emitted sounds), we show that dynamic ipsilateral and contralateral cochlear gain control is enhanced in musically-trained individuals. Across all participants, MOC strength was correlated with the years of listeners' training, suggesting that efferent gain control is experience-dependent. Our data provide new evidence that intensive listening experience(s) (e.g., musicianship) can strengthen the ipsi/contralateral MOC efferent system and sound regulation to the inner ear. Implications for reducing acoustic vulnerability to damaging sounds are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Brain Networks of Novelty-Driven Involuntary and Cued Voluntary Auditory Attention Shifting

    PubMed Central

    Huang, Samantha; Belliveau, John W.; Tengshe, Chinmayi; Ahveninen, Jyrki

    2012-01-01

    In everyday life, we need a capacity to flexibly shift attention between alternative sound sources. However, relatively little work has been done to elucidate the mechanisms of attention shifting in the auditory domain. Here, we used a mixed event-related/sparse-sampling fMRI approach to investigate this essential cognitive function. In each 10-sec trial, subjects were instructed to wait for an auditory “cue” signaling the location where a subsequent “target” sound was likely to be presented. The target was occasionally replaced by an unexpected “novel” sound in the uncued ear, to trigger involuntary attention shifting. To maximize the attention effects, cues, targets, and novels were embedded within dichotic 800-Hz vs. 1500-Hz pure-tone “standard” trains. The sound of clustered fMRI acquisition (starting at t = 7.82 sec) served as a controlled trial-end signal. Our approach revealed notable activation differences between the conditions. Cued voluntary attention shifting activated the superior intraparietal sulcus (IPS), whereas novelty-triggered involuntary orienting activated the inferior IPS and certain subareas of the precuneus. Clearly more widespread activations were observed during voluntary than involuntary orienting in the premotor cortex, including the frontal eye fields. Moreover, we found evidence for a frontoinsular-cingular attentional control network, consisting of the anterior insula, inferior frontal cortex, and medial frontal cortices, which were activated during both target discrimination and voluntary attention shifting. Finally, novels and targets activated much wider areas of superior temporal auditory cortices than shifting cues. PMID:22937153

  2. Active noise control: A tutorial for HVAC designers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gelin, L.J.

    1997-08-01

    This article will identify the capabilities and limitations of ANC in its application to HVAC noise control. ANC can be used in ducted HVAC systems to cancel ductborne, low-frequency fan noise by injecting sound waves of equal amplitude and opposite phase into an air duct, as close as possible to the source of the unwanted noise. Destructive interference of the fan noise and injected noise results in sound cancellation. The noise problems that it solves are typically described as rumble, roar or throb, all of which are difficult to address using traditional noise control methods. This article will also contrast the use of active against passive noise control techniques. The main differences between the two noise control measures are acoustic performance, energy consumption, and design flexibility. The article will first present the fundamentals and basic physics of ANC. The application to real HVAC systems will follow.
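    The "equal amplitude and opposite phase" requirement above is unforgiving: the depth of cancellation degrades quickly with any mismatch in the injected wave. A short sketch (idealised single-tone superposition, with hypothetical error values) of how the residual level depends on amplitude and phase errors in the anti-noise:

```python
import numpy as np

def residual_db(amp_err=0.0, phase_err_deg=0.0):
    """Residual level (dB relative to the uncancelled noise) when the
    anti-noise has a relative amplitude error and a phase error away
    from the ideal 180 degrees."""
    g = 1.0 + amp_err
    phi = np.deg2rad(180.0 + phase_err_deg)
    # |1 + g*exp(j*phi)| is the residual of unit noise plus anti-noise
    r = abs(1 + g * np.exp(1j * phi))
    return 20 * np.log10(r)

for amp, ph in [(0.1, 0.0), (0.0, 10.0), (0.1, 10.0), (0.01, 1.0)]:
    print(f"amp err {amp:+.2f}, phase err {ph:4.1f} deg -> "
          f"{residual_db(amp, ph):6.1f} dB")
```

    Even a 10% amplitude error limits cancellation to about 20 dB, which is why the article stresses placing the injection point close to the noise source, where the required amplitude and phase are easiest to match.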

  3. From electromyographic activity to frequency modulation in zebra finch song.

    PubMed

    Döppler, Juan F; Bush, Alan; Goller, Franz; Mindlin, Gabriel B

    2018-02-01

    Behavior emerges from the interaction between the nervous system and peripheral devices. In the case of birdsong production, a delicate and fast control of several muscles is required to control the configuration of the syrinx (the avian vocal organ) and the respiratory system. In particular, the syringealis ventralis muscle is involved in the control of the tension of the vibrating labia and thus affects the frequency modulation of the sound. Nevertheless, the translation of the instructions (which are electrical in nature) into acoustical features is complex and involves nonlinear, dynamical processes. In this work, we present a model of the dynamics of the syringealis ventralis muscle and the labia, which allows calculating the frequency of the generated sound, using as input the electrical activity recorded in the muscle. In addition, the model provides a framework to interpret inter-syllabic activity and hints at the importance of the biomechanical dynamics in determining behavior.

  4. Auditory-Motor Processing of Speech Sounds

    PubMed Central

    Möttönen, Riikka; Dutton, Rebekah; Watkins, Kate E.

    2013-01-01

    The motor regions that control movements of the articulators activate during listening to speech and contribute to performance in demanding speech recognition and discrimination tasks. Whether the articulatory motor cortex modulates auditory processing of speech sounds is unknown. Here, we aimed to determine whether the articulatory motor cortex affects the auditory mechanisms underlying discrimination of speech sounds in the absence of demanding speech tasks. Using electroencephalography, we recorded responses to changes in sound sequences, while participants watched a silent video. We also disrupted the lip or the hand representation in left motor cortex using transcranial magnetic stimulation. Disruption of the lip representation suppressed responses to changes in speech sounds, but not piano tones. In contrast, disruption of the hand representation had no effect on responses to changes in speech sounds. These findings show that disruptions within, but not outside, the articulatory motor cortex impair automatic auditory discrimination of speech sounds. The findings provide evidence for the importance of auditory-motor processes in efficient neural analysis of speech sounds. PMID:22581846

  5. Diel patterns in underwater sounds produced by beluga whales and Pacific white-sided dolphins at John G. Shedd Aquarium

    NASA Astrophysics Data System (ADS)

    Brickman, Jon; Tanchez, Erin; Thomas, Jeanette

    2005-09-01

    Diel patterns in underwater sounds from five beluga whales (Delphinapterus leucas) and five Pacific white-sided dolphins (Lagenorhynchus obliquidens) housed at John G. Shedd Aquarium in Chicago, IL were studied. Underwater sounds were sampled systematically over 24-h periods by using a battery-operated cassette recorder and an Ithaco 605C hydrophone controlled by a digital timer, which activated every hour and then shut off after 2.5 min. Belugas produced 14 sound types and Pacific white-sided dolphins produced 5. For each species, the use of some sounds was correlated with other sounds. The diel pattern for both species was similar and mostly affected by the presence of humans. Sounds gradually increased after the staff and visitors arrived, peaked during midday, gradually decreased as the closing of the aquarium approached, and were minimal overnight. These data can help identify the best time of day to make recordings and perhaps could be used to examine social, reproductive, or health changes in these captive cetaceans.

  6. Comparison of the Effects of Benson Muscle Relaxation and Nature Sounds on the Fatigue in Patients With Heart Failure: A Randomized Controlled Clinical Trial.

    PubMed

    Seifi, Leila; Najafi Ghezeljeh, Tahereh; Haghani, Hamid

    This study was conducted with the aim of comparing the effects of Benson muscle relaxation and nature sounds on fatigue in patients with heart failure. Fatigue and exercise intolerance, prevalent symptoms experienced by patients with heart failure, can cause the loss of independence in the activities of daily living. They can also impair self-care and increase dependence on others, which can subsequently reduce quality of life. This randomized controlled clinical trial was conducted in an urban area of Iran in 2016. The sample consisted of 105 hospitalized patients with heart failure chosen using a convenience sampling method. They were assigned to relaxation, nature sounds, and control groups using a randomized block design. In addition to routine care, the Benson muscle relaxation and nature sounds groups received interventions in mornings and evenings, twice a day for 20 minutes, over 3 consecutive days. A 9-item questionnaire was used to collect data regarding fatigue before and after the interventions. Relaxation and nature sounds reduced fatigue in patients with heart failure in comparison to the control group. However, no statistically significant difference was observed between the interventions. Benson muscle relaxation and nature sounds are alternative methods for the reduction of fatigue in patients with heart failure. They are inexpensive, easy to administer, and, depending on patients' preferences, can be used by nurses along with routine nursing interventions.

  7. Auditory symptom provocation in dental phobia: a near-infrared spectroscopy study.

    PubMed

    Köchel, Angelika; Plichta, Michael M; Schäfer, Axel; Schöngassner, Florian; Fallgatter, Andreas J; Schienle, Anne

    2011-09-26

    The act of drilling a tooth is among the most feared situations for patients suffering from dental phobia. We presented 25 female patients and 24 nonphobic women with the sound of a dental drill, as well as pleasant and neutral sounds. Brain activation was recorded via near-infrared spectroscopy in fronto-parietal and premotor areas. The groups differed in supplementary motor area (SMA) recruitment. Relative to controls, the phobic patients displayed increased oxyhemoglobin while presented with the phobia-relevant sound, but showed comparable activation in the other conditions. As the SMA is engaged in the preparation of motor actions, the increased response in patients might mirror the priming of flight behavior during exposure. We found no indication of an emotional modulation of parietal and dorsomedial prefrontal cortex activation. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  8. Functional Brain Activation Differences in School-Age Children with Speech Sound Errors: Speech and Print Processing

    ERIC Educational Resources Information Center

    Preston, Jonathan L.; Felsenfeld, Susan; Frost, Stephen J.; Mencl, W. Einar; Fulbright, Robert K.; Grigorenko, Elena L.; Landi, Nicole; Seki, Ayumi; Pugh, Kenneth R.

    2012-01-01

    Purpose: To examine neural response to spoken and printed language in children with speech sound errors (SSE). Method: Functional magnetic resonance imaging was used to compare processing of auditorily and visually presented words and pseudowords in 17 children with SSE, ages 8;6[years;months] through 10;10, with 17 matched controls. Results: When…

  9. English Pronunciation: A Systematic Approach to Word-Stress and Vowel-Sounds.

    ERIC Educational Resources Information Center

    Carmona, Francisco

    A handbook on English word stress and stressed-vowel sounds is based on the idea that these segments are, in most cases, controlled by phonological context and their pronunciation can be understood through a system of rules. It serves as a reference for teachers and as a text for students. Chapters address these topics: word stress and active and…

  10. Experimental study of a smart foam sound absorber.

    PubMed

    Leroy, Pierre; Berry, Alain; Herzog, Philippe; Atalla, Noureddine

    2011-01-01

    This article presents the experimental implementation and results of a hybrid passive/active absorber (smart foam) combining a passive absorbent (foam) and a curved polyvinylidene fluoride (PVDF) film actuator bonded to the rear surface of the foam. Various smart foam prototypes were built and tested in active absorption experiments conducted in an impedance tube under plane wave propagation conditions at frequencies between 100 and 1500 Hz. Three control cases were tested. The first used a fixed controller derived in the frequency domain from estimations of the primary disturbance at a directive microphone position in the tube and the transfer function between the control PVDF and the directive microphone. The other two used an adaptive time-domain feedforward controller to absorb either a single-frequency incident wave or a broadband incident wave. The nonlinearity of the smart foams and the causality constraint were identified as important factors influencing active control performance. The effectiveness of the various smart foam prototypes is discussed in terms of the active and passive absorption coefficients as well as the control voltage of the PVDF actuator normalized by the incident sound pressure.

  11. Righting elicited by novel or familiar auditory or vestibular stimulation in the haloperidol-treated rat: rat posturography as a model to study anticipatory motor control.

    PubMed

    Clark, Callie A M; Sacrey, Lori-Ann R; Whishaw, Ian Q

    2009-09-15

    External cues, including familiar music, can release Parkinson's disease patients from catalepsy, but the neural basis of the effect is not well understood. In the present study, posturography, the study of posture and its allied reflexes, was used to develop an animal model that could be used to investigate the underlying neural mechanisms of this sound-induced behavioral activation. In the rat, akinetic catalepsy induced by a dopamine D2 receptor antagonist (haloperidol, 5 mg/kg) can model human catalepsy. Using this model, two experiments examined whether novel versus familiar sound stimuli could interrupt haloperidol-induced catalepsy in the rat. Rats were placed on a variably inclined grid and novel or familiar auditory cues (single key jingle or multiple key jingles) were presented. The dependent variable was movement by the rats to regain equilibrium as assessed with a movement notation score. The sound cues enhanced movements used to regain postural stability, and familiar sound stimuli were more effective than unfamiliar sound stimuli. The results are discussed in relation to the idea that nonlemniscal and lemniscal auditory pathways differentially contribute to behavioral activation versus tonotopic processing of sound.

  12. The Impact of Eliminating Extraneous Sound and Light on Students' Achievement: An Empirical Study

    ERIC Educational Resources Information Center

    Mangipudy, Rajarajeswari

    2010-01-01

    The impact of eliminating extraneous sound and light on students' achievement was investigated under four conditions: Light and Sound controlled, Sound Only controlled, Light Only controlled and neither Light nor Sound controlled. Group, age and gender were the control variables. Four randomly selected groups of high school freshmen students with…

  13. Sound-Making Actions Lead to Immediate Plastic Changes of Neuromagnetic Evoked Responses and Induced β-Band Oscillations during Perception.

    PubMed

    Ross, Bernhard; Barat, Masihullah; Fujioka, Takako

    2017-06-14

    Auditory and sensorimotor brain areas interact during the action-perception cycle of sound making. Neurophysiological evidence of a feedforward model of the action and its outcome has been associated with attenuation of the N1 wave of auditory evoked responses elicited by self-generated sounds, such as talking and singing or playing a musical instrument. Moreover, neural oscillations at β-band frequencies have been related to predicting the sound outcome after action initiation. We hypothesized that a newly learned action-perception association would immediately modify interpretation of the sound during subsequent listening. Nineteen healthy young adults (7 female, 12 male) participated in three magnetoencephalographic recordings while first passively listening to recorded sounds of a bell ringing, then actively striking the bell with a mallet, and then again listening to recorded sounds. Auditory cortex activity showed characteristic P1-N1-P2 waves. The N1 was attenuated during sound making, while P2 responses were unchanged. In contrast, P2 became larger when listening after sound making compared with the initial naive listening. The P2 increase occurred immediately, while in previous learning-by-listening studies P2 increases occurred on a later day. Also, reactivity of β-band oscillations, as well as θ coherence between auditory and sensorimotor cortices, was stronger in the second listening block. These changes were significantly larger than those observed in control participants (eight female, five male), who triggered recorded sounds by a key press. We propose that P2 characterizes familiarity with sound objects, whereas β-band oscillation signifies involvement of the action-perception cycle, and both measures objectively indicate functional neuroplasticity in auditory perceptual learning. 
SIGNIFICANCE STATEMENT While suppression of auditory responses to self-generated sounds is well known, it is not clear whether the learned action-sound association modifies subsequent perception. Our study demonstrated the immediate effects of sound-making experience on perception using magnetoencephalographic recordings, as reflected in the increased auditory evoked P2 wave, increased responsiveness of β oscillations, and enhanced connectivity between auditory and sensorimotor cortices. The importance of motor learning was underscored as the changes were much smaller in a control group using a key press to generate the sounds instead of learning to play the musical instrument. The results support the rapid integration of a feedforward model during perception and provide a neurophysiological basis for the application of music making in motor rehabilitation training. Copyright © 2017 the authors.

  14. Active acoustical impedance using distributed electrodynamical transducers.

    PubMed

    Collet, M; David, P; Berthillier, M

    2009-02-01

    New miniaturization and integration capabilities available from emerging microelectromechanical system (MEMS) technology will allow silicon-based artificial skins involving thousands of elementary actuators to be developed in the near future. SMART structures combining large arrays of elementary motion pixels coated with macroscopic components are thus being studied so that fundamental properties such as shape, stiffness, and even reflectivity of light and sound could be dynamically adjusted. This paper investigates the acoustic impedance capabilities of a set of distributed transducers connected with a suitable controlling strategy. Research in this domain aims at designing integrated active interfaces with a desired acoustical impedance for reaching an appropriate global acoustical behavior. This generic problem is intrinsically connected with the control of multiphysical systems based on partial differential equations (PDEs) and with the notion of multiscaled physics when a dense array of electromechanical systems (or MEMS) is considered. By using specific techniques based on PDE control theory, a simple boundary control equation capable of annihilating the wave reflections has been built. The obtained strategy is also discretized as a low order time-space operator for experimental implementation by using a dense network of interlaced microphones and loudspeakers. The resulting quasicollocated architecture guarantees robustness and stability margins. This paper aims at showing how a well controlled semidistributed active skin can substantially modify the sound transmissibility or reflectivity of the corresponding homogeneous passive interface. In Sec. IV, numerical and experimental results demonstrate the capabilities of such a method for controlling sound propagation in ducts. Finally, in Sec. V, an energy-based comparison with a classical open-loop strategy underlines the system's efficiency.

  15. Stereotypic Laryngeal and Respiratory Motor Patterns Generate Different Call Types in Rat Ultrasound Vocalization

    PubMed Central

    RIEDE, TOBIAS

    2014-01-01

    Rodents produce highly variable ultrasound whistles as communication signals, unlike many other mammals, which employ flow-induced vocal fold oscillations to produce sound. The role of larynx muscles in controlling sound features across different call types in ultrasound vocalization (USV) was investigated using laryngeal muscle electromyographic (EMG) activity, subglottal pressure measurements and vocal sound output in awake and spontaneously behaving Sprague–Dawley rats. Results support the hypothesis that glottal shape determines fundamental frequency. EMG activities of thyroarytenoid and cricothyroid muscles were aligned with call duration. EMG intensity increased with fundamental frequency. Phasic activities of both muscles were aligned with fast changing fundamental frequency contours, for example in trills. Activities of the sternothyroid and sternohyoid muscles, two muscles involved in vocal production in other mammals, are not critical for the production of rat USV. To test how stereotypic laryngeal and respiratory activity are across call types and individuals, sets of ten EMG and subglottal pressure parameters were measured in six different call types from six rats. Using discriminant function analysis, on average 80% of parameter sets were correctly assigned to their respective call type. This was significantly higher than the chance level. Since fundamental frequency features of USV are tightly associated with stereotypic activity of intrinsic laryngeal muscles and muscles contributing to build-up of subglottal pressure, USV provide insight into the neurophysiological control of peripheral vocal motor patterns. PMID:23423862
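    The assignment of EMG/pressure parameter sets to call types can be illustrated with a toy sketch. This is not the study's analysis: it uses synthetic data and a nearest-centroid rule as a simplified stand-in for discriminant function analysis, with hypothetical "trill" and "flat" call types.

```python
import random

# Toy illustration: assign 2-D "EMG intensity / subglottal pressure"
# parameter sets to call types via a nearest-centroid rule (a simplified
# stand-in for discriminant function analysis). Data are synthetic.

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(x, centroids):
    # assign to the call type whose centroid is nearest (squared Euclidean)
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda ct: dist2(x, centroids[ct]))

random.seed(0)
# two synthetic call types with different mean parameter values
train = {
    "trill": [[5 + random.gauss(0, 0.5), 2 + random.gauss(0, 0.5)] for _ in range(20)],
    "flat":  [[2 + random.gauss(0, 0.5), 4 + random.gauss(0, 0.5)] for _ in range(20)],
}
cents = {ct: centroid(vs) for ct, vs in train.items()}

# held-out parameter sets with known call types
test = [("trill", [5.1, 2.2]), ("flat", [1.8, 4.1])]
acc = sum(classify(x, cents) == ct for ct, x in test) / len(test)
print(acc)
```

    With two balanced classes, the chance level for such an assignment is 50%, which is the baseline against which the study's ~80% figure was compared.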

  16. Health Activities Project (HAP): Sight and Sound Module.

    ERIC Educational Resources Information Center

    Buller, Dave; And Others

    Contained within this Health Activities Project (HAP) learning packet are activities for children in grades 5-8. Design of the activities centers around the idea that students can control their own health and safety. Within this module are teacher and student folios describing six activities which involve students in restricting their vision by…

  17. Lateralization as a symmetry breaking process in birdsong

    NASA Astrophysics Data System (ADS)

    Trevisan, M. A.; Cooper, B.; Goller, F.; Mindlin, G. B.

    2007-03-01

    Singing by songbirds is among the most convincing examples in the animal kingdom of functional lateralization of the brain, a feature usually associated with human language. Lateralization is expressed as one or both of the bird’s sound sources being active during the vocalization. Normal songs require high coordination between the vocal organ and respiratory activity, which is bilaterally symmetric. Moreover, the physical and neural substrate used to produce the song lacks obvious asymmetries. In this work we show that complex spatiotemporal patterns of motor activity controlling airflow through the sound sources can be explained in terms of spontaneous symmetry breaking bifurcations. This analysis also provides a framework from which to study the effects of imperfections in the system’s symmetries. A physical model of the avian vocal organ is used to generate synthetic sounds, which allows us to predict acoustical signatures of the song and compare the predictions of the model with experimental data.

  18. Topography of sound level representation in the FM sweep selective region of the pallid bat auditory cortex.

    PubMed

    Measor, Kevin; Yarrow, Stuart; Razak, Khaleel A

    2018-05-26

    Sound level processing is a fundamental function of the auditory system. To determine how the cortex represents sound level, it is important to quantify how changes in level alter the spatiotemporal structure of cortical ensemble activity. This is particularly true for echolocating bats, which have control over, and often rapidly adjust, call level to actively change echo level. To understand how cortical activity may change with sound level, here we mapped response rate and latency changes with sound level in the auditory cortex of the pallid bat. The pallid bat uses a 60-30 kHz downward frequency modulated (FM) sweep for echolocation. Neurons tuned to frequencies between 30 and 70 kHz in the auditory cortex are selective for the properties of FM sweeps used in echolocation, forming the FM sweep selective region (FMSR). The FMSR is strongly selective for sound level between 30 and 50 dB SPL. Here we mapped the topography of level selectivity in the FMSR using downward FM sweeps and show that neurons with more monotonic rate level functions are located in caudomedial regions of the FMSR overlapping with high frequency (50-60 kHz) neurons. Non-monotonic neurons dominate the FMSR and are distributed across the entire region, but there is no evidence for amplitopy. We also examined how first spike latency of FMSR neurons changes with sound level. The majority of FMSR neurons exhibit paradoxical latency shift, wherein the latency increases with sound level. Moreover, neurons with paradoxical latency shifts are more strongly level selective and are tuned to lower sound levels than neurons in which latencies decrease with level. These data indicate a clustered arrangement of neurons according to monotonicity, with no strong evidence for finer scale topography, in the FMSR. The latency analysis suggests mechanisms for strong level selectivity that is based on relative timing of excitatory and inhibitory inputs. 
Taken together, these data suggest how the spatiotemporal spread of cortical activity may represent sound level. Copyright © 2018. Published by Elsevier B.V.

  19. Electroacoustic control of Rijke tube instability

    NASA Astrophysics Data System (ADS)

    Zhang, Yumin; Huang, Lixi

    2017-11-01

    Unsteady heat release coupled with pressure fluctuation triggers thermoacoustic instability, which may severely damage a combustion chamber. This study demonstrates an electroacoustic approach to suppressing thermoacoustic instability in a Rijke tube by altering the wall boundary condition. An electrically shunted loudspeaker driver device is connected as a side-branch to the main tube via a small aperture. Tests in an impedance tube show that this device has a sound absorption coefficient of up to 40% under normal incidence from 100 Hz to 400 Hz, namely over two octaves. Experimental results demonstrate that such broadband acoustic performance can effectively eliminate the Rijke-tube instability from 94 Hz to 378 Hz (when the tube length varies from 1.8 m to 0.9 m, the first mode frequency for the former is 94 Hz and the second mode frequency for the latter is 378 Hz). Theoretical investigation reveals that the device acts as a damper, draining out sound energy through a tiny hole to eliminate the instability. Finally, it is also estimated from the experimental data that a small amount of sound energy is actually absorbed when the system undergoes a transition from the unstable to the stable state if the control is activated. When the system is stabilized, no sound is radiated, so no sound energy needs to be absorbed by the control device.
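    The normal-incidence absorption coefficient quoted above relates to the pressure reflection coefficient R by alpha = 1 - |R|^2; a 40% absorption therefore corresponds to |R| of about 0.77. A minimal sketch of this relation (the normalized impedance value below is hypothetical, not a measurement of the shunted-loudspeaker device):

```python
# Sketch: normal-incidence absorption coefficient from a complex
# pressure reflection coefficient R, alpha = 1 - |R|^2.

def absorption_coefficient(R):
    """alpha = 1 - |R|^2 for a complex pressure reflection coefficient."""
    return 1.0 - abs(R) ** 2

def reflection_from_impedance(z):
    """R = (z - 1) / (z + 1) for a surface impedance z normalized by rho*c."""
    return (z - 1) / (z + 1)

z = 2.0 + 1.0j           # hypothetical normalized wall impedance
R = reflection_from_impedance(z)
print(round(absorption_coefficient(R), 3))
```

    A perfectly matched boundary (z = 1, R = 0) gives alpha = 1, i.e., total absorption; a rigid wall (z -> infinity, |R| -> 1) gives alpha = 0.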

  20. Sound levels in modern rodent housing rooms are an uncontrolled environmental variable with fluctuations mainly due to human activities

    PubMed Central

    Lauer, Amanda M.; May, Bradford J.; Hao, Ziwei Judy; Watson, Julie

    2009-01-01

    Noise in animal housing facilities is an environmental variable that can affect hearing, behavior and physiology in mice. The authors measured sound levels in two rodent housing rooms (room 1 and room 2) during several 24-h periods. Room 1, which was subject to heavy personnel traffic, contained ventilated racks and static cages that housed large numbers of mice. Room 2 was accessed by only a few staff members and contained only static cages that housed fewer mice. In both rooms, background sound levels were about 80 dB, and transient noises caused sound levels to temporarily rise 30–40 dB above the baseline level; such peaks occurred frequently during work hours (8:30 AM to 4:30 PM) and infrequently during non-work hours. Noise peaks during work hours in room 1 occurred about twice as often as in room 2 (P = 0.01). Use of changing stations located in the rooms caused background noise to increase by about 10 dB. Loud noise and noise variability were attributed mainly to personnel activity. Attempts to reduce noise should concentrate on controlling sounds produced by in-room activities and experimenter traffic; this may reduce the variability of research outcomes and improve animal welfare. PMID:19384312
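    The measurement logic above can be sketched in a few lines: sound pressure is converted to dB SPL against the 20 µPa reference for air, and transients are flagged when they exceed the baseline by some threshold. The level series and the 30 dB threshold below are illustrative, not the study's data.

```python
import math

# Sketch: RMS sound pressure -> dB SPL, plus flagging of transient peaks
# that exceed a baseline by a threshold. Values are illustrative only.

P_REF = 20e-6  # reference pressure for airborne sound, in pascals

def spl_db(p_rms):
    """Sound pressure level in dB SPL for an RMS pressure in pascals."""
    return 20.0 * math.log10(p_rms / P_REF)

def find_peaks(levels_db, baseline_db, threshold_db=30.0):
    """Return indices where the level exceeds baseline by >= threshold."""
    return [i for i, level in enumerate(levels_db)
            if level - baseline_db >= threshold_db]

# an ~80 dB baseline with two transients rising ~35 dB above it
levels = [80, 81, 115, 79, 80, 116, 80]
print(find_peaks(levels, baseline_db=80))
```

    For reference, a 0.2 Pa RMS pressure corresponds to 80 dB SPL, the approximate background level the authors report.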

  1. Promises of formal and informal musical activities in advancing neurocognitive development throughout childhood.

    PubMed

    Putkinen, Vesa; Tervaniemi, Mari; Saarikivi, Katri; Huotilainen, Minna

    2015-03-01

    Adult musicians show superior neural sound discrimination when compared to nonmusicians. However, it is unclear whether these group differences reflect the effects of experience or preexisting neural enhancement in individuals who seek out musical training. Tracking how brain function matures over time in musically trained and nontrained children can shed light on this issue. Here, we review our recent longitudinal event-related potential (ERP) studies that examine how formal musical training and less formal musical activities influence the maturation of brain responses related to sound discrimination and auditory attention. These studies found that musically trained school-aged children and preschool-aged children attending a musical playschool show more rapid maturation of neural sound discrimination than their control peers. Importantly, we found no evidence for pretraining group differences. In a related cross-sectional study, we found ERP and behavioral evidence for improved executive functions and control over auditory novelty processing in musically trained school-aged children and adolescents. Taken together, these studies provide evidence for the causal role of formal musical training and less formal musical activities in shaping the development of important neural auditory skills and suggest transfer effects with domain-general implications. © 2015 New York Academy of Sciences.

  2. Hybrid Active-Passive Systems for Control of Aircraft Interior Noise

    NASA Technical Reports Server (NTRS)

    Fuller, Chris R.

    1999-01-01

    Previous work has demonstrated the large potential of hybrid active-passive systems for attenuating interior noise in aircraft fuselages. The main advantage of an active-passive system is that, by utilizing the natural dynamics of the actuator system, the control actuator power and weight are markedly reduced and stability/robustness is enhanced. Three different active-passive approaches were studied in the past year. The first technique utilizes multiple tunable vibration absorbers (ATVA) for reducing narrowband sound radiated from panels and transmitted through fuselage structures. The focus is on reducing interior noise due to propeller or turbofan harmonic excitation. Two types of tunable vibration absorbers were investigated: a solid-state system based upon a piezoelectric mechanical exciter and an electromechanical system based upon a Motran shaker. Both of these systems utilize a mass-spring dynamic effect to maximize the output force near resonance of the shaker system and so can also be used as vibration absorbers. The dynamic properties of the absorbers (i.e., resonance frequency) were modified using a feedback signal from an accelerometer mounted on the active mass, passed through a compensator and fed into the drive component of the shaker system (piezoelectric element or voice coil, respectively). The feedback loop consisted of a two-coefficient FIR filter, implemented on a DSP, where the input is the acceleration of the ATVA mass and the output is a force acting in parallel with the stiffness of the absorber. By separating the feedback signal into real and imaginary components, the effective natural frequency and damping of the ATVA can be altered independently. This approach gave control of the resonance frequencies while also allowing the simultaneous removal of damping from the ATVA, thus increasing the ease of controllability and effectiveness. 
To obtain a "tuned" vibration absorber, the chosen resonant frequency was set to the excitation frequency. To minimize sound radiation, a gradient descent algorithm was developed which globally adapted the resonance frequencies of multiple ATVAs while minimizing a cost function based upon the radiated sound power or sound energy obtained from an array of microphones.
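    The gradient-descent idea described above can be sketched as follows. The quadratic cost function, the "tuned" frequencies, and the step sizes below are hypothetical stand-ins for the measured radiated sound power and the real absorber dynamics; the gradient is estimated by finite differences of cost measurements, as a controller without an analytic plant model would do.

```python
# Sketch: adapt each absorber's resonance frequency by gradient descent
# on a cost measured from a microphone array. The quadratic cost below
# is a hypothetical stand-in for radiated sound power; frequencies in Hz.

def sound_power_cost(freqs, optimal=(240.0, 360.0)):
    # hypothetical measurement: minimal when each ATVA is tuned
    return sum((f - o) ** 2 for f, o in zip(freqs, optimal))

def tune(freqs, step=0.1, eps=0.5, iters=200):
    freqs = list(freqs)
    for _ in range(iters):
        for i in range(len(freqs)):
            # central finite-difference gradient from two cost "measurements"
            up, down = list(freqs), list(freqs)
            up[i] += eps
            down[i] -= eps
            grad = (sound_power_cost(up) - sound_power_cost(down)) / (2 * eps)
            freqs[i] -= step * grad
    return freqs

tuned = tune([200.0, 400.0])  # detuned starting frequencies
print([round(f) for f in tuned])
```

    With this cost, each frequency contracts geometrically toward its tuned value; in practice the cost surface is measured, noisy, and possibly multimodal, which is why the paper adapts all ATVAs globally.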

  3. Fatigue sensation induced by the sounds associated with mental fatigue and its related neural activities: revealed by magnetoencephalography.

    PubMed

    Ishii, Akira; Tanaka, Masaaki; Iwamae, Masayoshi; Kim, Chongsoo; Yamano, Emi; Watanabe, Yasuyoshi

    2013-06-13

    It has been proposed that an inappropriately conditioned fatigue sensation could be one cause of chronic fatigue. Although classical conditioning of the fatigue sensation has been reported in rats, there have been no reports in humans. Our aim was to examine whether classical conditioning of the mental fatigue sensation can take place in humans and to clarify the neural mechanisms of fatigue sensation using magnetoencephalography (MEG). Ten and nine healthy volunteers participated in a conditioning and a control experiment, respectively. In the conditioning experiment, we used metronome sounds as conditioned stimuli and two-back task trials as unconditioned stimuli to cause fatigue sensation. Participants underwent MEG measurement while listening to the metronome sounds for 6 min. Thereafter, fatigue-inducing mental task trials (two-back task trials), which are demanding working-memory task trials, were performed for 60 min; metronome sounds were started 30 min after the start of the task trials (conditioning session). The next day, neural activities while listening to the metronome for 6 min were measured. Levels of fatigue sensation were also assessed using a visual analogue scale. In the control experiment, participants listened to the metronome on the first and second days, but did not perform the conditioning session. MEG was not recorded in the control experiment. The level of fatigue sensation caused by listening to the metronome on the second day was significantly higher relative to that on the first day only when participants performed the conditioning session on the first day. Equivalent current dipoles (ECDs) in the insular cortex, with mean latencies of approximately 190 ms, were observed in six of eight participants after the conditioning session, although ECDs were not identified in any participant before the conditioning session. 
We demonstrated that the metronome sounds can cause mental fatigue sensation as a result of repeated pairings of the sounds with mental fatigue and that the insular cortex is involved in the neural substrates of this phenomenon.

  4. Open Source Tools for Temporally Controlled Rodent Behavior Suitable for Electrophysiology and Optogenetic Manipulations.

    PubMed

    Solari, Nicola; Sviatkó, Katalin; Laszlovszky, Tamás; Hegedüs, Panna; Hangya, Balázs

    2018-01-01

    Understanding how the brain controls behavior requires observing and manipulating neural activity in awake behaving animals. Neuronal firing is timed at millisecond precision. Therefore, to decipher temporal coding, it is necessary to monitor and control animal behavior at the same level of temporal accuracy. However, it is technically challenging to deliver sensory stimuli and reinforcers as well as to read the behavioral responses they elicit with millisecond precision. Presently available commercial systems often excel in specific aspects of behavior control, but they do not provide a customizable environment allowing flexible experimental design while maintaining high standards for temporal control necessary for interpreting neuronal activity. Moreover, delay measurements of stimulus and reinforcement delivery are largely unavailable. We combined microcontroller-based behavior control with a sound delivery system for playing complex acoustic stimuli, fast solenoid valves for precisely timed reinforcement delivery and a custom-built sound attenuated chamber using high-end industrial insulation materials. Together this setup provides a physical environment to train head-fixed animals, enables calibrated sound stimuli and precisely timed fluid and air puff presentation as reinforcers. We provide latency measurements for stimulus and reinforcement delivery and an algorithm to perform such measurements on other behavior control systems. Combined with electrophysiology and optogenetic manipulations, the millisecond timing accuracy will help interpret temporally precise neural signals and behavioral changes. Additionally, since software and hardware provided here can be readily customized to achieve a large variety of paradigms, these solutions enable an unusually flexible design of rodent behavioral experiments.

  5. Sound field measurement in a double layer cavitation cluster by rugged miniature needle hydrophones.

    PubMed

    Koch, Christian

    2016-03-01

    During multi-bubble cavitation, the bubbles tend to organize themselves into clusters, and thus understanding the properties and dynamics of clustering is essential for controlling technical applications of cavitation. Sound field measurements are a potential technique for providing valuable experimental information about the status of cavitation clouds. Using purpose-made, rugged, wideband, small-sized needle hydrophones, sound field measurements in bubble clusters were performed and time-dependent sound pressure waveforms were acquired and analyzed in the frequency domain up to 20 MHz. The cavitation clusters were synchronously observed by an electron multiplying charge-coupled device (EMCCD) camera, and the relation between the sound field measurements and cluster behaviour was investigated. Depending on the driving power, three ranges could be identified and characteristic properties assigned. At low power settings, no transient and no or very low stable cavitation activity can be observed. The medium range is characterized by strong pressure peaks and various bubble cluster forms. At high power, a stable double layer was observed, which grew with further increasing power and became quite dynamic; the sound field was irregular and the fundamental at the driving frequency decreased. Between the bubble clouds, sound field properties were found to be completely different from those inside the clouds, where cavitation activity is high: in between, the sound field pressure amplitude was quite small and no collapses were detected. Copyright © 2015. Published by Elsevier B.V.

  6. The Brain Basis for Misophonia.

    PubMed

    Kumar, Sukhbinder; Tansley-Hancock, Olana; Sedley, William; Winston, Joel S; Callaghan, Martina F; Allen, Micah; Cope, Thomas E; Gander, Phillip E; Bamiou, Doris-Eva; Griffiths, Timothy D

    2017-02-20

    Misophonia is an affective sound-processing disorder characterized by the experience of strong negative emotions (anger and anxiety) in response to everyday sounds, such as those generated by other people eating, drinking, chewing, and breathing [1-8]. The commonplace nature of these sounds (often referred to as "trigger sounds") makes misophonia a devastating disorder for sufferers and their families, and yet nothing is known about the underlying mechanism. Using functional and structural MRI coupled with physiological measurements, we demonstrate that misophonic subjects show specific trigger-sound-related responses in brain and body. Specifically, fMRI showed that in misophonic subjects, trigger sounds elicit greatly exaggerated blood-oxygen-level-dependent (BOLD) responses in the anterior insular cortex (AIC), a core hub of the "salience network" that is critical for perception of interoceptive signals and emotion processing. Trigger sounds in misophonics were associated with abnormal functional connectivity between AIC and a network of regions responsible for the processing and regulation of emotions, including ventromedial prefrontal cortex (vmPFC), posteromedial cortex (PMC), hippocampus, and amygdala. Trigger sounds elicited heightened heart rate (HR) and galvanic skin response (GSR) in misophonic subjects, which were mediated by AIC activity. Questionnaire analysis showed that misophonic subjects perceived their bodies differently: they scored higher on interoceptive sensibility than controls, consistent with abnormal functioning of AIC. Finally, brain structural measurements implied greater myelination within vmPFC in misophonic individuals. Overall, our results show that misophonia is a disorder in which abnormal salience is attributed to particular sounds based on the abnormal activation and functional connectivity of AIC. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  7. The NASA Sounding Rocket Program and space sciences.

    PubMed

    Gurkin, L W

    1992-10-01

    High altitude suborbital rockets (sounding rockets) have been extensively used for space science research in the post-World War II period; the NASA Sounding Rocket Program has been ongoing since the inception of the Agency and supports all space science disciplines. In recent years, sounding rockets have been utilized to provide a low gravity environment for materials processing research, particularly in the commercial sector. Sounding rockets offer unique features as a low gravity flight platform. Quick response and low cost combine to provide more frequent spaceflight opportunities. Suborbital spacecraft design practice has achieved a high level of sophistication which optimizes the limited available flight times. High data-rate telemetry, real-time ground up-link command and down-link video data are routinely used in sounding rocket payloads. Standard, off-the-shelf, active control systems are available which limit payload body rates such that the gravitational environment remains less than 10⁻⁴ g during the control period. Operational launch vehicles are available which can provide up to 7 minutes of experiment time for experiment weights up to 270 kg. Standard payload recovery systems allow soft impact retrieval of payloads. When launched from White Sands Missile Range, New Mexico, payloads can be retrieved and returned to the launch site within hours.

  8. The NASA Sounding Rocket Program and space sciences

    NASA Technical Reports Server (NTRS)

    Gurkin, L. W.

    1992-01-01

    High altitude suborbital rockets (sounding rockets) have been extensively used for space science research in the post-World War II period; the NASA Sounding Rocket Program has been ongoing since the inception of the Agency and supports all space science disciplines. In recent years, sounding rockets have been utilized to provide a low gravity environment for materials processing research, particularly in the commercial sector. Sounding rockets offer unique features as a low gravity flight platform. Quick response and low cost combine to provide more frequent spaceflight opportunities. Suborbital spacecraft design practice has achieved a high level of sophistication which optimizes the limited available flight times. High data-rate telemetry, real-time ground up-link command and down-link video data are routinely used in sounding rocket payloads. Standard, off-the-shelf, active control systems are available which limit payload body rates such that the gravitational environment remains less than 10⁻⁴ g during the control period. Operational launch vehicles are available which can provide up to 7 minutes of experiment time for experiment weights up to 270 kg. Standard payload recovery systems allow soft impact retrieval of payloads. When launched from White Sands Missile Range, New Mexico, payloads can be retrieved and returned to the launch site within hours.

  9. Short-Term Effects of Binaural Beats on EEG Power, Functional Connectivity, Cognition, Gait and Anxiety in Parkinson's Disease.

    PubMed

    Gálvez, Gerardo; Recuero, Manuel; Canuet, Leonides; Del-Pozo, Francisco

    2018-06-01

    We applied rhythmic binaural sound to Parkinson's Disease (PD) patients to investigate its influence on several symptoms of this disease and on electrophysiology (electrocardiography and electroencephalography (EEG)). We conducted a double-blind, randomized controlled study in which rhythmic binaural beats and a control stimulation were administered over two randomized and counterbalanced sessions (within-subjects repeated-measures design). Patients ([Formula: see text], age [Formula: see text], stage I-III Hoehn & Yahr scale) participated in two sessions of sound stimulation for 10[Formula: see text]min separated by a minimum of 7 days. Data were collected immediately before and after both stimulations, with the following results: (1) a decrease in theta activity, (2) a general decrease in Functional Connectivity (FC), and (3) an improvement in working memory performance. However, no significant changes were identified in the gait performance, heart rate or anxiety level of the patients. With regard to the control stimulation, we did not identify significant changes in the variables analyzed. The use of binaural-rhythm stimulation for PD, as designed in this study, seems to be an effective, portable, inexpensive and noninvasive method of modulating brain activity. This influence on brain activity did not induce changes in anxiety or gait parameters; however, it resulted in a normalization of EEG power (altered in PD), a normalization of brain FC (also altered in PD) and an improvement in working memory (a normalizing effect). In summary, we consider that sound, particularly binaural-rhythmic sound, may be an assistive tool in the treatment of PD; however, more research is needed before this type of stimulation can be considered an effective therapy.
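
Setting the clinical questions aside, the stimulus itself is simple to construct: one pure tone per ear, with the two frequencies differing by the desired beat rate. A minimal sketch (the carrier, beat rate, and sampling rate below are hypothetical, not the study's parameters):

```python
import math

def binaural_beat(f_carrier, f_beat, fs, duration):
    """Generate a stereo pair whose channels differ by f_beat Hz;
    the listener perceives a beat at that difference frequency."""
    n = int(fs * duration)
    left  = [math.sin(2 * math.pi * f_carrier * k / fs) for k in range(n)]
    right = [math.sin(2 * math.pi * (f_carrier + f_beat) * k / fs) for k in range(n)]
    return left, right

left, right = binaural_beat(f_carrier=200.0, f_beat=6.0, fs=8000, duration=1.0)

# Acoustically mixing the two channels (what a microphone would record)
# shows an amplitude envelope fluctuating at the 6 Hz difference frequency.
mixed = [l + r for l, r in zip(left, right)]
print(len(mixed))   # → 8000 samples for one second at fs = 8000 Hz
```

In actual binaural-beat stimulation the two tones are kept separate (one per headphone channel), so the beat arises neurally rather than acoustically; the mixed signal above only illustrates the difference-frequency envelope.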

  10. Multiple target sound quality balance for hybrid electric powertrain noise

    NASA Astrophysics Data System (ADS)

    Mosquera-Sánchez, J. A.; Sarrazin, M.; Janssens, K.; de Oliveira, L. P. R.; Desmet, W.

    2018-01-01

    The integration of the electric motor into the powertrain in hybrid electric vehicles (HEVs) presents acoustic stimuli that elicit new perceptions. The large number of spectral components, as well as the wider bandwidth of this kind of noise, pose new challenges to current noise, vibration and harshness (NVH) approaches. This paper presents a framework for enhancing the sound quality (SQ) of the hybrid electric powertrain noise perceived inside the passenger compartment. Compared with current active sound quality control (ASQC) schemes, where the SQ improvement is just an effect of the control actions, the proposed technique features an optimization stage, which enables the NVH specialist to actively implement the amplitude balance of the tones that best fits the auditory expectations. Since Loudness, Roughness, Sharpness and Tonality are the most relevant SQ metrics for interior HEV noise, they are used as performance metrics in the concurrent optimization analysis, which, eventually, drives the control design method. Thus, multichannel active sound profiling systems that feature cross-channel compensation schemes are guided by the multi-objective optimization stage, by means of optimal sets of amplitude gain factors that can be implemented at each single sensor location, while minimizing cross-channel effects that can either degrade the original SQ condition, or even hinder the implementation of independent SQ targets. The proposed framework is verified experimentally, with realistic stationary hybrid electric powertrain noise, showing SQ enhancement for multiple locations within a scaled vehicle mock-up. The results show total success rates in excess of 90%, which indicate that the proposed method is promising, not only for the improvement of the SQ of HEV noise, but also for a variety of periodic disturbances with similar features.
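
The amplitude-balancing idea can be illustrated with a toy example: stand-in cost functions play the role of the SQ metrics, and a grid search plays the role of the multi-objective optimization stage that selects a tone's gain factor. Everything here is a hedged sketch; the metric definitions, weights, and tone amplitudes are invented for illustration and are not the paper's method:

```python
# Toy stand-ins for SQ metrics of a two-tone powertrain noise:
# "loudness" grows with total level, "tonality" grows when one tone dominates.
def loudness(a1, a2):
    return a1 ** 2 + a2 ** 2

def tonality(a1, a2):
    return abs(a1 - a2) / max(a1, a2)

def balance_gains(a1, a2, w_loud=1.0, w_tonal=1.0):
    """Grid-search the gain applied to tone 2 that minimizes a weighted
    sum of the two toy metrics (a stand-in for the optimization stage)."""
    best_g, best_cost = None, float('inf')
    for i in range(101):
        g = i / 100.0                    # candidate gain factor in [0, 1]
        cost = w_loud * loudness(a1, g * a2) + w_tonal * tonality(a1, g * a2)
        if cost < best_cost:
            best_g, best_cost = g, cost
    return best_g

# With tone 1 fixed at 0.5 and tone 2 at 1.0, the weighted optimum scales
# tone 2 down to match tone 1 (zero tonality at moderate loudness).
print(balance_gains(0.5, 1.0))   # → 0.5
```

In the paper's framework the search is multi-objective and per-sensor, with cross-channel compensation; this sketch only shows the basic shape of trading metrics against each other through gain factors.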

  11. Degraded speech sound processing in a rat model of fragile X syndrome

    PubMed Central

    Engineer, Crystal T.; Centanni, Tracy M.; Im, Kwok W.; Rahebi, Kimiya C.; Buell, Elizabeth P.; Kilgard, Michael P.

    2014-01-01

    Fragile X syndrome is the most common inherited form of intellectual disability and the leading genetic cause of autism. Impaired phonological processing in fragile X syndrome interferes with the development of language skills. Although auditory cortex responses are known to be abnormal in fragile X syndrome, it is not clear how these differences impact speech sound processing. This study provides the first evidence that the cortical representation of speech sounds is impaired in Fmr1 knockout rats, despite normal speech discrimination behavior. Evoked potentials and spiking activity in response to speech sounds, noise burst trains, and tones were significantly degraded in primary auditory cortex, anterior auditory field and the ventral auditory field. Neurometric analysis of speech evoked activity using a pattern classifier confirmed that activity in these fields contains significantly less information about speech sound identity in Fmr1 knockout rats compared to control rats. Responses were normal in the posterior auditory field, which is associated with sound localization. The greatest impairment was observed in the ventral auditory field, which is related to emotional regulation. Dysfunction in the ventral auditory field may contribute to poor emotional regulation in fragile X syndrome and may help explain the observation that later auditory evoked responses are more disturbed in fragile X syndrome compared to earlier responses. Rodent models of fragile X syndrome are likely to prove useful for understanding the biological basis of fragile X syndrome and for testing candidate therapies. PMID:24713347

  12. The Use of an Open Field Model to Assess Sound-Induced Fear and Anxiety Associated Behaviors in Labrador Retrievers.

    PubMed

    Gruen, Margaret E; Case, Beth C; Foster, Melanie L; Lazarowski, Lucia; Fish, Richard E; Landsberg, Gary; DePuy, Venita; Dorman, David C; Sherman, Barbara L

    2015-01-01

    Previous studies have shown that the playing of thunderstorm recordings during an open-field task elicits fearful or anxious responses in adult beagles. The goal of our study was to apply this open field test to assess sound-induced behaviors in Labrador retrievers drawn from a pool of candidate improvised explosive device (IED)-detection dogs. Being robust to fear-inducing sounds and recovering quickly is a critical requirement of these military working dogs. This study presented male and female dogs with 3 minutes of either ambient noise (Days 1, 3 and 5), recorded thunderstorm (Day 2), or gunfire (Day 4) sounds in an open field arena. Behavioral and physiological responses were assessed and compared to control (ambient noise) periods. An observer blinded to sound treatment analyzed video records of the 9-minute daily test sessions. Additional assessments included measurement of distance traveled (activity), heart rate, body temperature, and salivary cortisol concentrations. Overall, there was a decline in distance traveled and heart rate within each day and over the five-day test period, suggesting that dogs habituated to the open field arena. Behavioral postures and expressions were assessed using a standardized rubric to score behaviors linked to canine fear and anxiety. These fear/anxiety scores were used to evaluate changes in behaviors following exposure to a sound stressor. Compared to control periods, there was an overall increase in fear/anxiety scores during thunderstorm and gunfire sound stimuli treatment periods. Fear/anxiety scores were correlated with distance traveled and heart rate, and scores in response to thunderstorm and gunfire were correlated with each other. Dogs showed higher fear/anxiety scores during periods after the sound stimuli compared to control periods. In general, candidate IED-detection Labrador retrievers responded to sound stimuli and recovered quickly, although dogs stratified in their response to sound stimuli; some dogs were robust to fear/anxiety responses. The results suggest that the open field sound test may be a useful method to evaluate the suitability of dogs for IED-detection training.

  13. The Use of an Open Field Model to Assess Sound-Induced Fear and Anxiety Associated Behaviors in Labrador Retrievers

    PubMed Central

    Gruen, Margaret E.; Case, Beth C.; Foster, Melanie L.; Lazarowski, Lucia; Fish, Richard E.; Landsberg, Gary; DePuy, Venita; Dorman, David C.; Sherman, Barbara L.

    2015-01-01

    Previous studies have shown that the playing of thunderstorm recordings during an open-field task elicits fearful or anxious responses in adult beagles. The goal of our study was to apply this open field test to assess sound-induced behaviors in Labrador retrievers drawn from a pool of candidate improvised explosive device (IED)-detection dogs. Being robust to fear-inducing sounds and recovering quickly is a critical requirement of these military working dogs. This study presented male and female dogs with 3 minutes of either ambient noise (Days 1, 3 and 5), recorded thunderstorm (Day 2), or gunfire (Day 4) sounds in an open field arena. Behavioral and physiological responses were assessed and compared to control (ambient noise) periods. An observer blinded to sound treatment analyzed video records of the 9-minute daily test sessions. Additional assessments included measurement of distance traveled (activity), heart rate, body temperature, and salivary cortisol concentrations. Overall, there was a decline in distance traveled and heart rate within each day and over the five-day test period, suggesting that dogs habituated to the open field arena. Behavioral postures and expressions were assessed using a standardized rubric to score behaviors linked to canine fear and anxiety. These fear/anxiety scores were used to evaluate changes in behaviors following exposure to a sound stressor. Compared to control periods, there was an overall increase in fear/anxiety scores during thunderstorm and gunfire sound stimuli treatment periods. Fear/anxiety scores were correlated with distance traveled and heart rate, and scores in response to thunderstorm and gunfire were correlated with each other. Dogs showed higher fear/anxiety scores during periods after the sound stimuli compared to control periods. In general, candidate IED-detection Labrador retrievers responded to sound stimuli and recovered quickly, although dogs stratified in their response to sound stimuli; some dogs were robust to fear/anxiety responses. The results suggest that the open field sound test may be a useful method to evaluate the suitability of dogs for IED-detection training. PMID:26273235

  14. Lightweight fiber optic microphones and accelerometers

    NASA Astrophysics Data System (ADS)

    Bucaro, J. A.; Lagakos, N.

    2001-06-01

    We have designed, fabricated, and tested two lightweight fiber optic sensors for the dynamic measurement of acoustic pressure and acceleration. These sensors, one a microphone and the other an accelerometer, are required for active blanket sound control technology under development in our laboratory. The sensors were designed to perform to certain specifications dictated by our active sound control application and to do so without exhibiting sensitivity to the high electrical voltages expected to be present. Furthermore, the devices had to be small (volumes less than 1.5 cm³) and light (less than 2 g). To achieve these design criteria, we modified and extended fiber optic reflection microphone and fiber microbend displacement device designs reported in the literature. After fabrication, the performance of each sensor type was determined from measurements made in a dynamic pressure calibrator and on a shaker table. The fiber optic microbend accelerometer, which weighs less than 1.8 g, was found to meet all performance goals including 1% linearity, 90 dB dynamic range, and a minimum detectable acceleration of 0.2 mg/√Hz. The fiber optic microphone, which weighs less than 1.3 g, also met all goals including 1% linearity, 85 dB dynamic range, and a minimum detectable acoustic pressure level of 0.016 Pa/√Hz. In addition to our specific use in active sound control, these sensors appear to have application in a variety of other areas.
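
The quoted detection limits are spectral densities, so the total RMS noise in a measurement scales with the square root of the bandwidth. A quick worked example (the 1 kHz control band is an assumed figure for illustration, not from the paper):

```python
import math

def noise_floor(density, bandwidth_hz):
    """Total RMS noise for a flat spectral density (units/√Hz) over a bandwidth."""
    return density * math.sqrt(bandwidth_hz)

# Microphone: 0.016 Pa/√Hz over a hypothetical 1 kHz control band.
p_rms = noise_floor(0.016, 1000.0)
# Accelerometer: 0.2 mg/√Hz over the same band (result in units of g).
a_rms = noise_floor(0.2e-3, 1000.0)

print(round(p_rms, 3))   # → 0.506 Pa
print(round(a_rms, 4))   # → 0.0063 g
```

So over a 1 kHz band the microphone's noise floor is about 0.5 Pa RMS (roughly 88 dB SPL), which is why the per-√Hz figure, not the total, is the meaningful specification for narrowband control applications.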

  15. Noise, anti-noise and fluid flow control.

    PubMed

    Williams, J E Ffowcs

    2002-05-15

    This paper celebrates Thomas Young's discovery that wave interference was responsible for much that is known about light and colour. A substantial programme of work has been aimed at controlling the noise of aerodynamic flows. Much of that field can be explained in terms of interference and it is argued in this paper that the theoretical techniques for analysing noise can also be seen to rest on interference effects. Interference can change the character of wave fields to produce, out of well-ordered fields, wave systems quite different from the interfering wave elements. Lighthill's acoustic analogy is described as an example of this effect, an example in which the exact model of turbulence-generated noise is seen to consist of elementary interfering sound waves; waves that are sometimes heard in advance of their sources. The paper goes on to describe an emerging field of technology where sound is suppressed by superimposing on it a destructively interfering secondary sound; one designed and manufactured specifically for interference. That sound is known as anti-sound, or anti-noise when the sound is chaotic enough. Examples are then referred to where the noisy effect to be controlled is actually a disturbance of a linearly unstable system; a disturbance that is destroyed by destructive interference with a deliberately constructed antidote. The practical benefits of this kind of instability control are much greater and can even change the whole character of flows. It is argued that completely unnatural unstable conditions can be held with active controllers generating destructively interfering elements. Examples are given in which gravitational instability of stratified fluids can be prevented. The Kelvin-Helmholtz instability of shear flows can also be avoided by simple controls. Those are speculative examples of what might be possible in future developments of an interference effect, which has made anti-noise a useful technology.
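
The core idea of anti-sound described above is plain superposition: a secondary wave of equal amplitude and opposite phase cancels the primary. The sketch below also shows how sensitive cancellation is to amplitude mismatch; the signals and the 5% mismatch figure are hypothetical:

```python
import math

fs = 8000
f = 100.0
n = 400

primary = [math.sin(2 * math.pi * f * k / fs) for k in range(n)]

# Ideal anti-noise: equal amplitude, opposite phase -- total cancellation.
anti = [-p for p in primary]
residual = [p + a for p, a in zip(primary, anti)]

# A 5% amplitude mismatch leaves a residual about 26 dB below the primary.
anti_mismatch = [-0.95 * p for p in primary]
residual_mismatch = [p + a for p, a in zip(primary, anti_mismatch)]

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

print(rms(residual))                                                   # → 0.0
print(round(20 * math.log10(rms(residual_mismatch) / rms(primary)), 1))  # → -26.0
```

The same arithmetic applied to phase shows why anti-noise works best for low-frequency, well-ordered fields: a fixed timing error corresponds to a growing phase error as frequency rises, and the residual grows with it.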

  16. Speech Sound Processing Deficits and Training-Induced Neural Plasticity in Rats with Dyslexia Gene Knockdown

    PubMed Central

    Centanni, Tracy M.; Chen, Fuyi; Booker, Anne M.; Engineer, Crystal T.; Sloan, Andrew M.; Rennaker, Robert L.; LoTurco, Joseph J.; Kilgard, Michael P.

    2014-01-01

    In utero RNAi of the dyslexia-associated gene Kiaa0319 in rats (KIA-) degrades cortical responses to speech sounds and increases trial-by-trial variability in onset latency. We tested the hypothesis that KIA- rats would be impaired at speech sound discrimination. KIA- rats needed twice as much training in quiet conditions to perform at control levels and remained impaired at several speech tasks. Focused training using truncated speech sounds was able to normalize speech discrimination in quiet and background noise conditions. Training also normalized trial-by-trial neural variability and temporal phase locking. Cortical activity from speech trained KIA- rats was sufficient to accurately discriminate between similar consonant sounds. These results provide the first direct evidence that assumed reduced expression of the dyslexia-associated gene KIAA0319 can cause phoneme processing impairments similar to those seen in dyslexia and that intensive behavioral therapy can eliminate these impairments. PMID:24871331

  17. Nonspeech oral motor treatment issues related to children with developmental speech sound disorders.

    PubMed

    Ruscello, Dennis M

    2008-07-01

    This article examines nonspeech oral motor treatments (NSOMTs) in the population of clients with developmental speech sound disorders. NSOMTs are a collection of nonspeech methods and procedures that claim to influence tongue, lip, and jaw resting postures; increase strength; improve muscle tone; facilitate range of motion; and develop muscle control. In the case of developmental speech sound disorders, NSOMTs are employed before or simultaneous with actual speech production treatment. First, NSOMTs are defined for the reader, and there is a discussion of NSOMTs under the categories of active muscle exercise, passive muscle exercise, and sensory stimulation. Second, different theories underlying NSOMTs along with the implications of the theories are discussed. Finally, a review of pertinent investigations is presented. The application of NSOMTs is questionable due to a number of reservations that include (a) the implied cause of developmental speech sound disorders, (b) neurophysiologic differences between the limbs and oral musculature, (c) the development of new theories of movement and movement control, and (d) the paucity of research literature concerning NSOMTs. There is no substantive evidence to support NSOMTs as interventions for children with developmental speech sound disorders.

  18. A unified approach for the spatial enhancement of sound

    NASA Astrophysics Data System (ADS)

    Choi, Joung-Woo; Jang, Ji-Ho; Kim, Yang-Hann

    2005-09-01

    This paper aims to control the sound field spatially, so that the desired or target acoustic variable is enhanced within a zone where a listener is located. This is somewhat analogous to having manipulators that can draw sounds to any place. It also means that one can see the controlled shape of the sound field in frequency or in real time: the former assures practical applicability, for example, listening-zone control for music, while the latter provides a means of analyzing the sound field. With these considerations, a unified approach is proposed that can enhance selected acoustic variables using multiple sources. Three kinds of acoustic variables that have to do with the magnitude and direction of the sound field are formulated and enhanced. The first, which concerns the spatial control of acoustic potential energy, enables one to create a zone of loud sound over an area. Alternatively, one can control the directional characteristics of the sound field by controlling directional energy density, or enhance the magnitude and direction of sound at the same time by controlling acoustic intensity. Through various examples, it is shown that these acoustic variables can be controlled successfully by the proposed approach.
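
For the acoustic-potential-energy case, a standard formulation (the geometry and frequency below are invented for illustration, not taken from the paper) maximizes the zone energy wᴴ(HᴴH)w over unit-norm source weights w, where H maps source inputs to pressures at zone points; the maximizer is the dominant eigenvector of HᴴH, found here by power iteration:

```python
import cmath
import math

def monopole(src, pt, k):
    """Free-field monopole transfer function (Green's function)."""
    r = math.dist(src, pt)
    return cmath.exp(-1j * k * r) / (4 * math.pi * r)

k = 2 * math.pi * 500 / 343                           # wavenumber at 500 Hz
sources = [(-0.5, 0.0), (0.0, 0.0), (0.5, 0.0)]       # hypothetical 3-source array
zone = [(x / 10.0, 2.0) for x in range(-3, 4)]        # listening-zone points

# H[m][l]: pressure at zone point m due to unit input on source l.
H = [[monopole(s, p, k) for s in sources] for p in zone]

# A = H^H H; its dominant eigenvector gives the source weights that
# maximize acoustic potential energy in the zone for unit input effort.
L = len(sources)
A = [[sum(H[m][i].conjugate() * H[m][j] for m in range(len(zone)))
      for j in range(L)] for i in range(L)]

w = [1.0 + 0j] * L
for _ in range(200):                                  # power iteration
    w = [sum(A[i][j] * w[j] for j in range(L)) for i in range(L)]
    norm = math.sqrt(sum(abs(v) ** 2 for v in w))
    w = [v / norm for v in w]

energy = sum(abs(sum(H[m][l] * w[l] for l in range(L))) ** 2
             for m in range(len(zone)))
print(energy > 0)   # → True
```

Because A is Hermitian positive semidefinite, the Rayleigh quotient is non-decreasing along the power iteration, so the resulting weights deliver at least as much zone energy as uniform weighting of the same total effort.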

  19. Design And Construction of an Impedance Tube for Measuring Sound Absorptivity and Transmissibility of Materials Using Transfer Function Method

    NASA Astrophysics Data System (ADS)

    Gowda, Haarish Kapaninaikappa

    Noise is defined as unwanted sound; perceived in excess, it can cause many harmful effects such as annoyance, interference with speech, and hearing loss, hence the need to control noise in practical situations. Noise can be controlled actively and/or passively; here we discuss passive noise control techniques. Passive noise control involves using energy-dissipating or reflecting materials such as absorbers or barriers, respectively. Damping and isolating materials are also used in eliminating structure-borne noise. These materials exhibit reflection, absorption and transmission loss when sound from a source is incident upon them. Thus, there is a need to characterize the acoustical properties of these materials for practical use. The theoretical background of random-incidence sound absorption measured in a reverberation room and of normal-incidence sound absorption measured with an impedance tube is well documented. The transfer matrix method for measuring transmission loss and absorption coefficient using an impedance tube is very attractive since it is rather inexpensive and fast. In this research, a low-cost impedance tube is constructed and the transfer function method is used to measure both the absorption and transmissibility of materials. Equipment and measurement instruments available in the laboratory were used in the construction of the tube, in keeping with cost-effectiveness. Care was taken in the precise construction of the tube to ensure better measurement results. Various samples, ranging from hard non-porous to soft porous materials, were then tested for absorption and sound transmission loss. Absorption values were also compared with the reverberation room method for the available samples, further confirming the reliability of the newly constructed tube for future measurements.
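
The transfer function method referred to above is commonly formalized (e.g. in ISO 10534-2) through the transfer function H12 between two microphones in the tube. The sketch below builds a synthetic standing-wave field for a known reflection coefficient and checks that the formula recovers it; the frequency, microphone positions, and sample reflection are all hypothetical:

```python
import cmath
import math

def reflection_coefficient(H12, k, x1, s):
    """Two-microphone transfer-function method (as in ISO 10534-2):
    H12 = p(mic 2)/p(mic 1), with mic 1 the farther microphone at
    distance x1 from the sample face and mic spacing s = x1 - x2."""
    num = H12 - cmath.exp(-1j * k * s)
    den = cmath.exp(1j * k * s) - H12
    return (num / den) * cmath.exp(2j * k * x1)

# Synthetic check: standing-wave field p(x) = e^{jkx} + R e^{-jkx}
# in front of a sample at x = 0 with a known reflection coefficient.
c, f = 343.0, 1000.0
k = 2 * math.pi * f / c
R_true = 0.6 * cmath.exp(1j * 0.8)        # hypothetical sample reflection
x1, x2 = 0.25, 0.20                       # microphone positions (m)
p = lambda x: cmath.exp(1j * k * x) + R_true * cmath.exp(-1j * k * x)
H12 = p(x2) / p(x1)

R_est = reflection_coefficient(H12, k, x1, x1 - x2)
alpha = 1 - abs(R_est) ** 2               # normal-incidence absorption coefficient
print(round(abs(R_est), 3), round(alpha, 3))   # → 0.6 0.64
```

In a real measurement H12 comes from the cross- and auto-spectra of the two microphone signals, and the mic spacing limits the usable frequency range (the method degrades where k·s approaches a multiple of π).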

  20. Active Control of Radiated Sound with Integrated Piezoelectric Composite Structures. Volume 3: Appendices (Concl.)

    DTIC Science & Technology

    1998-11-06


  1. Different neural activities support auditory working memory in musicians and bilinguals.

    PubMed

    Alain, Claude; Khatamian, Yasha; He, Yu; Lee, Yunjo; Moreno, Sylvain; Leung, Ada W S; Bialystok, Ellen

    2018-05-17

    Musical training and bilingualism benefit executive functioning and working memory (WM); however, the brain networks supporting this advantage are not well specified. Here, we used functional magnetic resonance imaging and the n-back task to assess WM for spatial (sound location) and nonspatial (sound category) auditory information in musician monolinguals (musicians), nonmusician bilinguals (bilinguals), and nonmusician monolinguals (controls). Musicians outperformed bilinguals and controls on the nonspatial WM task. Overall, spatial and nonspatial WM were associated with greater activity in dorsal and ventral brain regions, respectively. Increasing WM load yielded similar recruitment of the anterior-posterior attention network in all three groups. In both tasks and both levels of difficulty, musicians showed lower brain activity than controls in the superior frontal gyrus and dorsolateral prefrontal cortex (DLPFC) bilaterally, a finding that may reflect improved and more efficient use of neural resources. Bilinguals showed enhanced activity in language-related areas (i.e., left DLPFC and left supramarginal gyrus) relative to musicians and controls, which could be associated with the need to suppress interference associated with competing semantic activations from multiple languages. These findings indicate that the auditory WM advantage in musicians and bilinguals is mediated by different neural networks specific to each life experience. © 2018 New York Academy of Sciences.

  2. A new kind of universal smart home security safety monitoring system

    NASA Astrophysics Data System (ADS)

    Li, Biqing; Li, Zhao

    2018-04-01

    With rising living standards and quality of life, home safety and security have become important concerns. This graduation project uses wireless transmission and an STC89C52 microcontroller as the host controller for a passive-infrared anti-theft monitoring system. The system mainly consists of the main control circuit, a power supply circuit, a human-body motion detection module, a sound-and-light alarm circuit, and a record-and-display circuit. Its main function is to detect human activity and transmit that information to the control panel, where the microcontroller program triggers the sound-and-light alarm circuit while recording the alarm location and time; the records can be checked at any time, ultimately achieving the purpose of monitoring. The advantage of using a pyroelectric infrared sensor is that it can be installed in a hidden place where it is not easily noticed, at low cost and with good detection results, giving the approach broad prospects for development.

  3. Local feedback control of light honeycomb panels.

    PubMed

    Hong, Chinsuk; Elliott, Stephen J

    2007-01-01

    This paper summarizes theoretical and experimental work on the feedback control of sound radiation from honeycomb panels using piezoceramic actuators. It is motivated by the problem of sound transmission in aircraft, specifically the active control of trim panels. Trim panels are generally honeycomb structures designed to meet the design requirement of low weight and high stiffness. They are resiliently mounted to the fuselage for the passive reduction of noise transmission. Local coupling of the closely spaced sensor and actuator was observed experimentally and modeled using a single degree of freedom system. The effect of the local coupling was to roll off the response between the actuator and sensor at high frequencies, so that a feedback control system can have high gain margins. Unfortunately, only relatively poor global performance is then achieved because of localization of reduction around the actuator. This localization prompts the investigation of a multichannel active control system. Global reduction was predicted using a model of 12-channel direct velocity feedback control. The multichannel system, however, does not appear to yield a significant improvement in the performance because of decreased gain margin.
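
Direct velocity feedback works because a control force proportional to (and opposing) velocity acts as added damping. A single-degree-of-freedom sketch of that effect (all parameter values hypothetical):

```python
import math

def free_decay_peak(g, m=1.0, c=0.5, k=1000.0, x0=0.01, dt=1e-4, t_end=1.0):
    """Semi-implicit Euler simulation of m*x'' + (c + g)*x' + k*x = 0,
    i.e. direct velocity feedback with gain g acting as added damping.
    Returns the peak |x| over the last 20% of the run."""
    x, v = x0, 0.0
    n = int(t_end / dt)
    peak, tail = 0.0, int(0.8 * n)
    for i in range(n):
        a = -((c + g) * v + k * x) / m
        v += a * dt
        x += v * dt
        if i >= tail:
            peak = max(peak, abs(x))
    return peak

no_fb = free_decay_peak(g=0.0)
with_fb = free_decay_peak(g=10.0)
print(with_fb < no_fb)   # → True: feedback gain speeds up the decay
```

Because the feedback force only ever extracts energy, an ideal collocated velocity-feedback loop is unconditionally stable; the gain-margin limits discussed in the paper arise from the non-ideal sensor-actuator coupling at high frequencies.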

  4. Structural Acoustics and Vibrations

    NASA Astrophysics Data System (ADS)

    Chaigne, Antoine

    This chapter is devoted to vibrations of structures and to their coupling with the acoustic field. Depending on the context, the radiated sound can be judged as desirable, as is mostly the case for musical instruments, or undesirable, like noise generated by machinery. In architectural acoustics, one main goal is to limit the transmission of sound through walls. In the automobile industry, the engineers have to control the noise generated inside and outside the passenger compartment. This can be achieved by means of passive or active damping. In general, there is a strong need for quieter products and better sound quality generated by the structures in our daily environment.

  6. New recursive-least-squares algorithms for nonlinear active control of sound and vibration using neural networks.

    PubMed

    Bouchard, M

    2001-01-01

    In recent years, a few articles describing the use of neural networks for nonlinear active control of sound and vibration have been published. Using a control structure with two multilayer feedforward neural networks (one as a nonlinear controller and one as a nonlinear plant model), steepest descent algorithms based on two distinct gradient approaches were introduced for the training of the controller network. The two gradient approaches were sometimes called the filtered-x approach and the adjoint approach. Some recursive-least-squares algorithms were also introduced, using the adjoint approach. In this paper, a heuristic procedure is introduced for the development of recursive-least-squares algorithms based on the filtered-x and the adjoint gradient approaches. This leads to the development of new recursive-least-squares algorithms for the training of the controller neural network in the two-network structure. These new algorithms produce better convergence performance than previously published algorithms. Differences in the performance of algorithms using the filtered-x and the adjoint gradient approaches are discussed in the paper. The computational load of the algorithms discussed in the paper is evaluated for multichannel systems of nonlinear active control. Simulation results are presented to compare the convergence performance of the algorithms, showing the convergence gain provided by the new algorithms.
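
    The filtered-x gradient approach referred to above is easiest to see in its linear steepest-descent form, before the recursive-least-squares or neural-network extensions. Below is a minimal single-channel filtered-x LMS sketch; the signals, secondary-path model, and step size are hypothetical illustration values:

```python
import math

def fxlms(x, d, s_path, n_taps=8, mu=0.02):
    """Single-channel filtered-x LMS: the linear steepest-descent core that
    the recursive-least-squares and neural-network variants build on.
    x: reference signal; d: disturbance at the error sensor;
    s_path: FIR model of the secondary path (assumed known)."""
    w = [0.0] * n_taps                 # controller FIR weights
    xbuf = [0.0] * n_taps              # reference history (controller input)
    fxbuf = [0.0] * n_taps             # filtered-reference history
    ybuf = [0.0] * len(s_path)         # controller-output history
    xsbuf = [0.0] * len(s_path)        # reference history for secondary-path filtering
    errors = []
    for xn, dn in zip(x, d):
        xbuf = [xn] + xbuf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, xbuf))            # controller output
        ybuf = [y] + ybuf[:-1]
        e = dn + sum(si * yi for si, yi in zip(s_path, ybuf))  # residual at error sensor
        xsbuf = [xn] + xsbuf[:-1]
        fx = sum(si * xi for si, xi in zip(s_path, xsbuf))     # filtered reference
        fxbuf = [fx] + fxbuf[:-1]
        w = [wi - mu * e * fxi for wi, fxi in zip(w, fxbuf)]   # gradient-descent update
        errors.append(e)
    return errors

# Tonal disturbance and a delay-and-scale secondary path (values hypothetical).
N = 4000
x = [math.sin(0.1 * math.pi * n) for n in range(N)]
d = [0.8 * math.sin(0.1 * math.pi * n + 0.3) for n in range(N)]
e = fxlms(x, d, s_path=[0.0, 0.9])

early = sum(v * v for v in e[:200])
late = sum(v * v for v in e[-200:])
print(f"residual power dropped by {10 * math.log10(early / late):.1f} dB")
```

    Filtering the reference through the secondary-path model is what keeps the gradient estimate aligned with the true error gradient; the RLS variants discussed in the paper replace the scalar step size with a recursively updated inverse correlation matrix to speed up convergence.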

  7. In Situ Mortality Experiments with Juvenile Sea Bass (Dicentrarchus labrax) in Relation to Impulsive Sound Levels Caused by Pile Driving of Windmill Foundations

    PubMed Central

    Debusschere, Elisabeth; De Coensel, Bert; Bajek, Aline; Botteldooren, Dick; Hostens, Kris; Vanaverbeke, Jan; Vandendriessche, Sofie; Van Ginderdeuren, Karl; Vincx, Magda; Degraer, Steven

    2014-01-01

    Impact assessments of offshore wind farm installations and operations on the marine fauna are performed in many countries. Yet, only limited quantitative data on the physiological impact of impulsive sounds on (juvenile) fishes during pile driving of offshore wind farm foundations are available. Our current knowledge on fish injury and mortality due to pile driving is mainly based on laboratory experiments, in which high-intensity pile driving sounds are generated inside acoustic chambers. To validate these lab results, an in situ field experiment was carried out on board a pile-driving vessel. Juvenile European sea bass (Dicentrarchus labrax) of 68 and 115 days post hatching were exposed to pile-driving sounds as close as 45 m from the actual pile driving activity. Fish were exposed to strikes with a sound exposure level between 181 and 188 dB re 1 µPa²·s. The number of strikes ranged from 1739 to 3067, resulting in a cumulative sound exposure level between 215 and 222 dB re 1 µPa²·s. Control treatments consisted of fish not exposed to pile driving sounds. No differences in immediate mortality were found between exposed and control fish groups. Nor were any differences in delayed mortality noted between the two groups up to 14 days after exposure. Our in situ experiments largely confirm the mortality results of the lab experiments found in other studies. PMID:25275508
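
    For equal-energy strikes, the cumulative sound exposure level follows from the single-strike level as SEL_cum = SEL_single + 10·log10(N). A quick sketch of the formula; note that the study's per-trial pairings of strike level and strike count differ, so the example values below only illustrate the relationship rather than reproduce the reported 215-222 dB range exactly:

```python
import math

def cumulative_sel(single_strike_sel_db, n_strikes):
    """Cumulative sound exposure level for n equal-energy strikes:
    SEL_cum = SEL_single + 10 * log10(n), in dB re 1 uPa^2*s."""
    return single_strike_sel_db + 10 * math.log10(n_strikes)

# Bounds taken from the abstract above (not matched strike-for-strike).
low = cumulative_sel(181, 1739)
high = cumulative_sel(188, 3067)
print(f"illustrative cumulative SEL: {low:.1f} to {high:.1f} dB re 1 uPa^2*s")
```

    Because the scale is logarithmic, doubling the number of strikes adds only about 3 dB to the cumulative exposure.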

  8. Toward Inverse Control of Physics-Based Sound Synthesis

    NASA Astrophysics Data System (ADS)

    Pfalz, A.; Berdahl, E.

    2017-05-01

    Long Short-Term Memory networks (LSTMs) can be trained to realize inverse control of physics-based sound synthesizers. Physics-based sound synthesizers simulate the laws of physics to produce output sound according to input gesture signals. When a user's gestures are measured in real time, she or he can use them to control physics-based sound synthesizers, thereby creating simulated virtual instruments. An intriguing question is how to program a computer to learn to play such physics-based models. This work demonstrates that LSTMs can be trained to accomplish this inverse control task with four physics-based sound synthesizers.

  9. Hearing in nonprofessional pop/rock musicians.

    PubMed

    Schmuziger, Nicolas; Patscheke, Jochen; Probst, Rudolf

    2006-08-01

    The purpose of this study was to evaluate the hearing and subjective auditory symptoms in a group of nonprofessional pop/rock musicians who had experienced repeated exposures to intense sound levels during at least 5 yr of musical activity. An evaluation of both ears in 42 nonprofessional pop/rock musicians included pure-tone audiometry in the conventional and extended high-frequency range, the measurement of uncomfortable loudness levels, and an assessment of tinnitus and hypersensitivity to sound. Exclusion criteria were (a) the occurrence of acoustic trauma, (b) excessive noise exposure during occupational activities, (c) a history of recurrent otitis media, (d) previous ear surgery, (e) a fracture of the cranium, (f) ingestion of potentially ototoxic drugs, and (g) reported hearing difficulties within the immediate family. These audiometric results were then compared with a control group of 20 otologically normal young adults with no history of long-term noise exposure. After adjusting for age and gender, relative to ISO 7029, the mean hearing threshold in the frequency range of 3 to 8 kHz was 6 dB in the musicians and 1.5 dB in the control group. This difference was statistically significant (Mann-Whitney rank sum test, p < 0.001). A significant difference was also observed between musicians using regular hearing protection during their activities (average 3 to 8 kHz thresholds = 2.4 dB) and musicians who never used such hearing protection (average 3 to 8 kHz thresholds = 8.2 dB), after adjusting for age and gender (Mann-Whitney rank sum test, p = 0.006). Eleven of the musicians (26%) were found to be hypersensitive to sound, and seven (17%) presented with tinnitus. Tinnitus assessment, however, did not reveal any clinically significant psychological distress in these individuals. 
Tinnitus and hypersensitivity to sound were observed in a significant minority within a group of nonprofessional pop/rock musicians who had experienced repeated exposure to intense sound levels over at least 5 yr but with minimal impact on their lives. Moreover, hearing loss was minimal in the subjects who always used ear protection, being only 0.9 dB higher than the control group. In contrast, hearing loss was significantly more pronounced, at 6.7 dB higher than the control group, in those musicians who never used ear protection. Continued education about the risk to hearing and the benefits of the persistent use of ear protection is warranted for musicians who are exposed frequently to intense sound levels.

  10. Functional MRI of the vocalization-processing network in the macaque brain

    PubMed Central

    Ortiz-Rios, Michael; Kuśmierek, Paweł; DeWitt, Iain; Archakov, Denis; Azevedo, Frederico A. C.; Sams, Mikko; Jääskeläinen, Iiro P.; Keliris, Georgios A.; Rauschecker, Josef P.

    2015-01-01

    Using functional magnetic resonance imaging in awake behaving monkeys we investigated how species-specific vocalizations are represented in auditory and auditory-related regions of the macaque brain. We found clusters of active voxels along the ascending auditory pathway that responded to various types of complex sounds: inferior colliculus (IC), medial geniculate nucleus (MGN), auditory core, belt, and parabelt cortex, and other parts of the superior temporal gyrus (STG) and sulcus (STS). Regions sensitive to monkey calls were most prevalent in the anterior STG, but some clusters were also found in frontal and parietal cortex on the basis of comparisons between responses to calls and environmental sounds. Surprisingly, we found that spectrotemporal control sounds derived from the monkey calls (“scrambled calls”) also activated the parietal and frontal regions. Taken together, our results demonstrate that species-specific vocalizations in rhesus monkeys activate preferentially the auditory ventral stream, and in particular areas of the antero-lateral belt and parabelt. PMID:25883546

  11. Open Source Tools for Temporally Controlled Rodent Behavior Suitable for Electrophysiology and Optogenetic Manipulations

    PubMed Central

    Solari, Nicola; Sviatkó, Katalin; Laszlovszky, Tamás; Hegedüs, Panna; Hangya, Balázs

    2018-01-01

    Understanding how the brain controls behavior requires observing and manipulating neural activity in awake behaving animals. Neuronal firing is timed at millisecond precision. Therefore, to decipher temporal coding, it is necessary to monitor and control animal behavior at the same level of temporal accuracy. However, it is technically challenging to deliver sensory stimuli and reinforcers as well as to read the behavioral responses they elicit with millisecond precision. Presently available commercial systems often excel in specific aspects of behavior control, but they do not provide a customizable environment allowing flexible experimental design while maintaining the high standards of temporal control necessary for interpreting neuronal activity. Moreover, delay measurements of stimulus and reinforcement delivery are largely unavailable. We combined microcontroller-based behavior control with a sound delivery system for playing complex acoustic stimuli, fast solenoid valves for precisely timed reinforcement delivery and a custom-built sound-attenuated chamber using high-end industrial insulation materials. Together, this setup provides a physical environment for training head-fixed animals and enables calibrated sound stimuli and precisely timed fluid and air-puff presentation as reinforcers. We provide latency measurements for stimulus and reinforcement delivery and an algorithm to perform such measurements on other behavior control systems. Combined with electrophysiology and optogenetic manipulations, the millisecond timing accuracy will help interpret temporally precise neural signals and behavioral changes. Additionally, since the software and hardware provided here can be readily customized to achieve a large variety of paradigms, these solutions enable an unusually flexible design of rodent behavioral experiments. PMID:29867383

  12. Enhanced Memory Consolidation Via Automatic Sound Stimulation During Non-REM Sleep.

    PubMed

    Leminen, Miika M; Virkkala, Jussi; Saure, Emma; Paajanen, Teemu; Zee, Phyllis C; Santostasi, Giovanni; Hublin, Christer; Müller, Kiti; Porkka-Heiskanen, Tarja; Huotilainen, Minna; Paunio, Tiina

    2017-03-01

    Slow-wave sleep (SWS) slow waves and sleep spindle activity have been shown to be crucial for memory consolidation. Recently, memory consolidation has been causally facilitated in human participants via auditory stimuli phase-locked to SWS slow waves. Here, we aimed to develop a new acoustic stimulus protocol to facilitate learning and to validate it using different memory tasks. Most importantly, the stimulation setup was automated to be applicable for ambulatory home use. Fifteen healthy participants slept 3 nights in the laboratory. Learning was tested with 4 memory tasks (word pairs, serial finger tapping, picture recognition, and face-name association). Additional questionnaires addressed subjective sleep quality and overnight changes in mood. During the stimulus night, auditory stimuli were adjusted and targeted by an unsupervised algorithm to be phase-locked to the negative peak of slow waves in SWS. During the control night no sounds were presented. Results showed that the sound stimulation increased both slow wave (p = .002) and sleep spindle activity (p < .001). When overnight improvement of memory performance was compared between stimulus and control nights, we found a significant effect in the word pair task but not in the other memory tasks. The stimulation did not affect sleep structure or subjective sleep quality. We showed that the memory effect of the SWS-targeted, individually triggered single-sound stimulation is specific to verbal associative memory. Moreover, the ambulatory and automated sound stimulus setup was promising and allows for a broad range of potential follow-up studies. © Sleep Research Society 2017. Published by Oxford University Press [on behalf of the Sleep Research Society].
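
    Phase-locking stimuli to the negative peak of slow waves amounts to detecting local minima in the (band-limited) EEG and triggering sounds there. A toy sketch on a synthetic ~1 Hz signal; a real system must predict the peak online from a noisy recording, and the threshold and sampling rate below are hypothetical:

```python
import math

def negative_peak_indices(signal, threshold=-0.5):
    """Indices where the signal has a local minimum below threshold --
    a toy stand-in for targeting the negative peak of SWS slow waves."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] < threshold and signal[i] < signal[i - 1] and signal[i] <= signal[i + 1]:
            peaks.append(i)          # a sound would be triggered at this sample
    return peaks

fs = 100                             # Hz, hypothetical sampling rate
slow_wave = [math.sin(2 * math.pi * 1.0 * n / fs) for n in range(300)]  # ~1 Hz tone
triggers = negative_peak_indices(slow_wave)
print(triggers)
```

    On this clean signal the troughs fall exactly at samples 75, 175, and 275; the thresholding step is what restricts stimulation to high-amplitude slow waves rather than every small dip.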

  13. 40 CFR 81.32 - Puget Sound Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 18 2014-07-01 2014-07-01 false Puget Sound Intrastate Air Quality...) AIR PROGRAMS (CONTINUED) DESIGNATION OF AREAS FOR AIR QUALITY PLANNING PURPOSES Designation of Air Quality Control Regions § 81.32 Puget Sound Intrastate Air Quality Control Region. The Puget Sound...

  14. 40 CFR 81.32 - Puget Sound Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 17 2011-07-01 2011-07-01 false Puget Sound Intrastate Air Quality...) AIR PROGRAMS (CONTINUED) DESIGNATION OF AREAS FOR AIR QUALITY PLANNING PURPOSES Designation of Air Quality Control Regions § 81.32 Puget Sound Intrastate Air Quality Control Region. The Puget Sound...

  15. 40 CFR 81.32 - Puget Sound Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 18 2012-07-01 2012-07-01 false Puget Sound Intrastate Air Quality...) AIR PROGRAMS (CONTINUED) DESIGNATION OF AREAS FOR AIR QUALITY PLANNING PURPOSES Designation of Air Quality Control Regions § 81.32 Puget Sound Intrastate Air Quality Control Region. The Puget Sound...

  16. 40 CFR 81.32 - Puget Sound Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 18 2013-07-01 2013-07-01 false Puget Sound Intrastate Air Quality...) AIR PROGRAMS (CONTINUED) DESIGNATION OF AREAS FOR AIR QUALITY PLANNING PURPOSES Designation of Air Quality Control Regions § 81.32 Puget Sound Intrastate Air Quality Control Region. The Puget Sound...

  17. 40 CFR 81.32 - Puget Sound Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 17 2010-07-01 2010-07-01 false Puget Sound Intrastate Air Quality...) AIR PROGRAMS (CONTINUED) DESIGNATION OF AREAS FOR AIR QUALITY PLANNING PURPOSES Designation of Air Quality Control Regions § 81.32 Puget Sound Intrastate Air Quality Control Region. The Puget Sound...

  18. 140. Detail of north control panel in control room, looking ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    140. Detail of north control panel in control room, looking north. This panel monitors a variety of activities: gages indicate the level of Lake Tapps and level of the circular forebay; wattmeters indicate output of exciters. Photo by Jet Lowe, HAER, 1989. - Puget Sound Power & Light Company, White River Hydroelectric Project, 600 North River Avenue, Dieringer, Pierce County, WA

  19. 75 FR 38406 - Amendment of Norton Sound Low and Control 1234L Offshore Airspace Areas; Alaska

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-02

    ...-0071; Airspace Docket No. 10-AAL-1] RIN 2120-AA66 Amendment of Norton Sound Low and Control 1234L.... SUMMARY: This action modifies the Norton Sound Low and Control 1234L Offshore Airspace Areas in Alaska... rulemaking (NPRM) to modify two Alaskan Offshore Airspace Areas, Norton Sound Low, and Control 1234L (75 FR...

  20. Contributions of rapid neuromuscular transmission to the fine control of acoustic parameters of birdsong.

    PubMed

    Mencio, Caitlin; Kuberan, Balagurunathan; Goller, Franz

    2017-02-01

    Neural control of complex vocal behaviors, such as birdsong and speech, requires integration of biomechanical nonlinearities through muscular output. Although control of airflow and tension of vibrating tissues are known functions of vocal muscles, it remains unclear how specific muscle characteristics contribute to specific acoustic parameters. To address this gap, we removed heparan sulfate chains using heparitinases to perturb neuromuscular transmission subtly in the syrinx of adult male zebra finches (Taeniopygia guttata). Infusion of heparitinases into ventral syringeal muscles altered their excitation threshold and reduced neuromuscular transmission changing their ability to modulate airflow. The changes in muscle activation dynamics caused a reduction in frequency modulation rates and elimination of many high-frequency syllables but did not alter the fundamental frequency of syllables. Sound amplitude was reduced and sound onset pressure was increased, suggesting a role of muscles in the induction of self-sustained oscillations under low-airflow conditions, thus enhancing vocal efficiency. These changes were reversed to preinfusion levels by 7 days after infusion. These results illustrate complex interactions between the control of airflow and tension and further define the importance of syringeal muscle in the control of a variety of acoustic song characteristics. In summary, the findings reported here show that altering neuromuscular transmission can lead to reversible changes to the acoustic structure of song. Understanding the full extent of muscle involvement in song production is critical in decoding the motor program for the production of complex vocal behavior, including our search for parallels between birdsong and human speech motor control. It is largely unknown how fine motor control of acoustic parameters is achieved in vocal organs. 
Subtle manipulation of syringeal muscle function was used to test how active motor control influences acoustic parameters. Slowed activation kinetics of muscles reduced frequency modulation and, unexpectedly, caused a distinct decrease in sound amplitude and increase in phonation onset pressure. These results show that active control enhances the efficiency of energy conversion in the syrinx. Copyright © 2017 the American Physiological Society.

  1. Modeling and analysis of secondary sources coupling for active sound field reduction in confined spaces

    NASA Astrophysics Data System (ADS)

    Montazeri, Allahyar; Taylor, C. James

    2017-10-01

    This article addresses the coupling of acoustic secondary sources in a confined space in a sound field reduction framework. By considering the coupling of sources in a rectangular enclosure, the set of coupled equations governing its acoustical behavior is solved. The model obtained in this way is used to analyze the behavior of multi-input multi-output (MIMO) active sound field control (ASC) systems, where the coupling of sources cannot be neglected. In particular, the article develops the analytical results to analyze the effect of coupling of an array of secondary sources on the sound pressure levels inside an enclosure, when an array of microphones is used to capture the acoustic characteristics of the enclosure. The results are supported by extensive numerical simulations showing how coupling of loudspeakers through acoustic modes of the enclosure will change the strength of, and hence the driving voltage signal applied to, the secondary loudspeakers. The practical significance of this model is to provide better insight into the performance of sound reproduction/reduction systems in confined spaces when arrays of loudspeakers and microphones are placed within a fraction of a wavelength of the excitation signal to reduce/reproduce the sound field. This is of particular importance because the interaction of different sources affects their radiation impedance, depending on the electromechanical properties of the loudspeakers.

  2. Local Probing Spinel and Perovskite Complex Magnetic Systems

    NASA Astrophysics Data System (ADS)

    Oliveira, Goncalo Nuno de Pinho

    Noise is defined as unwanted sound; when perceived in excess, it can cause many harmful effects such as annoyance, interference with speech, and hearing loss, hence the need to control noise in practical situations. Noise can be controlled actively and/or passively; here we discuss passive noise control techniques. Passive noise control involves using energy-dissipating or reflecting materials, such as absorbers or barriers respectively. Damping and isolating materials are also used to eliminate structure-borne noise. These materials exhibit properties such as reflection, absorption and transmission loss when excited by a sound source. Thus, there is a need to characterize the acoustical properties of these materials for practical use. The theoretical background of random-incidence sound absorption measured in a reverberation room and of normal-incidence sound absorption measured with an impedance tube is well documented. The transfer matrix method for measuring transmission loss and absorption coefficient using an impedance tube is very attractive, since it is rather inexpensive and fast. In this research, a low-cost impedance tube was constructed, using the transfer function method to measure both the absorption and the transmissibility of materials. Equipment and measurement instruments available in the laboratory were used in the construction of the tube, adhering to cost-effectiveness. Care was taken in the precise construction of the tube to ensure better measurement results. Various samples, ranging from hard non-porous to soft porous materials, were then tested for absorption and sound transmission loss. Absorption values were also compared with the reverberation room method for the available samples, further ensuring the reliability of the newly constructed tube for future measurements.
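
    The transfer function method mentioned above recovers the sample's reflection coefficient, and hence its absorption coefficient, from the complex transfer function measured between the two microphones. A sketch following the ISO 10534-2 style formulation, checked against a synthetic measurement; the geometry values are hypothetical and the sign conventions are one common choice among several:

```python
import cmath
import math

def absorption_coefficient(h12, freq, s, x1, c=343.0):
    """Normal-incidence absorption coefficient from the two-microphone
    transfer-function (impedance-tube) method. h12 = p2/p1 with mic 1
    farther from the sample; s = mic spacing; x1 = distance from the
    sample face to mic 1."""
    k = 2 * math.pi * freq / c
    r = (h12 - cmath.exp(-1j * k * s)) / (cmath.exp(1j * k * s) - h12)
    r *= cmath.exp(2j * k * x1)          # shift the reference plane to the sample face
    return 1.0 - abs(r) ** 2

# Synthetic check: build h12 from a known reflection coefficient r0 and
# confirm the method recovers alpha = 1 - |r0|^2 (all values hypothetical).
freq, s, x1, c = 500.0, 0.05, 0.15, 343.0
k = 2 * math.pi * freq / c
r0 = 0.5
p = lambda x: cmath.exp(1j * k * x) + r0 * cmath.exp(-1j * k * x)  # standing-wave field
h12 = p(x1 - s) / p(x1)
print(round(absorption_coefficient(h12, freq, s, x1), 3))
```

    The synthetic round trip recovers alpha = 0.75 for r0 = 0.5; in a real tube the main practical limits are the microphone-spacing frequency range and calibration of the inter-microphone phase.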

  3. Effect of sound-related activities on human behaviours and acoustic comfort in urban open spaces.

    PubMed

    Meng, Qi; Kang, Jian

    2016-12-15

    Human activities are important to landscape design and urban planning; however, the effect of sound-related activities on human behaviours and acoustic comfort has not been considered. The objective of this study is to explore how human behaviours and acoustic comfort in urban open spaces can be changed by sound-related activities. On-site measurements were performed at a case study site in Harbin, China, and an acoustic comfort survey was simultaneously conducted. In terms of the effect of sound activities on human behaviours, music-related activities caused 5.1-21.5% of persons who pass by the area to stand and watch the activity, while there was little effect on the number of persons who performed exercises during the activity. Human activities generally have little effect on the behaviour of pedestrians when only 1 to 3 persons are involved in the activities, while a marked effect on the behaviour of pedestrians is noted when >6 persons are involved. In terms of the effect of activities on acoustic comfort, music-related activities can increase the sound level from 10.8 to 16.4 dBA, while human activities such as RS and PC can increase the sound level from 9.6 to 12.8 dBA; however, they lead to very different acoustic comfort. The acoustic comfort of persons can differ with activities; for example, the acoustic comfort of persons who stand and watch can be increased by music-related activities, while the acoustic comfort of persons who sit and watch can be decreased by human sound-related activities. Some sound-related activities can show opposite trends in acoustic comfort between visitors and citizens. Persons with higher income prefer music sound-related activities, while those with lower income prefer human sound-related activities. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Modeling vocalization with ECoG cortical activity recorded during vocal production in the macaque monkey.

    PubMed

    Fukushima, Makoto; Saunders, Richard C; Fujii, Naotaka; Averbeck, Bruno B; Mishkin, Mortimer

    2014-01-01

    Vocal production is an example of controlled motor behavior with high temporal precision. Previous studies have decoded auditory evoked cortical activity while monkeys listened to vocalization sounds. On the other hand, there have been few attempts at decoding motor cortical activity during vocal production. Here we recorded cortical activity during vocal production in the macaque with a chronically implanted electrocorticographic (ECoG) electrode array. The array detected robust activity in motor cortex during vocal production. We used a nonlinear dynamical model of the vocal organ to reduce the dimensionality of 'Coo' calls produced by the monkey. We then used linear regression to evaluate the information in motor cortical activity for this reduced representation of calls. This simple linear model accounted for about 65% of the variance in the reduced sound representations, supporting the feasibility of using the dynamical model of the vocal organ for decoding motor cortical activity during vocal production.
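
    The variance-explained figure comes from an ordinary least-squares fit. As a one-dimensional stand-in for the multivariate decoder described above, a sketch of computing R² on synthetic data (the variable names and data are illustrative only):

```python
def r_squared(x, y):
    """Fraction of variance in y explained by an ordinary least-squares fit
    y ~ a*x + b -- a 1-D stand-in for the multivariate linear decoder."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx                        # slope
    b = my - a * mx                      # intercept
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Synthetic "neural feature" x and "sound parameter" y, with a small
# deterministic perturbation standing in for unexplained variance.
x = list(range(10))
y = [2 * xi + 1 + (0.5 if i % 2 else -0.5) for i, xi in enumerate(x)]
print(f"variance explained: {100 * r_squared(x, y):.1f}%")
```

    R² is exactly this "fraction of variance accounted for", so a reported value of about 65% means the residual variance of the reduced call representation is roughly a third of its total variance.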

  5. Cortical activation with sound stimulation in cochlear implant users demonstrated by positron emission tomography.

    PubMed

    Naito, Y; Okazawa, H; Honjo, I; Hirano, S; Takahashi, H; Shiomi, Y; Hoji, W; Kawano, M; Ishizu, K; Yonekura, Y

    1995-07-01

    Six postlingually deaf patients using multi-channel cochlear implants were examined by positron emission tomography (PET) using 15O-labeled water. Changes in regional cerebral blood flow (rCBF) were measured during different sound stimuli. The stimulation paradigms employed consisted of two sets of three different conditions; (1) no sound stimulation with the speech processor of the cochlear implant system switched off, (2) hearing white noise and (3) hearing sequential Japanese sentences. In the primary auditory area, the mean rCBF increase during noise stimulation was significantly greater on the side contralateral to the implant than on the ipsilateral side. Speech stimulation caused significantly greater rCBF increase compared with noise stimulation in the left immediate auditory association area (P < 0.01), the bilateral auditory association areas (P < 0.01), the posterior part of the bilateral inferior frontal gyri; the Broca's area (P < 0.01) and its right hemisphere homologue (P < 0.05). Activation of cortices related to verbal and non-verbal sound recognition was clearly demonstrated in the current subjects probably because complete silence was attained in the control condition.

  6. Acoustic transmission and radiation analysis of adaptive shape-memory alloy reinforced laminated plates

    NASA Astrophysics Data System (ADS)

    Liang, C.; Rogers, C. A.; Fuller, C. R.

    1991-02-01

    A theoretical analysis of sound transmission/radiation of shape-memory alloy (SMA) hybrid composite panels is presented. Unlike other composite materials, SMA hybrid composite is dynamically tunable by electrical activation of the SMA fibers and has numerous active control capabilities. Two of the concepts that will be briefly described and utilized in this paper are referred to as active property tuning (APT) and active strain energy tuning (ASET). Tuning or activating the embedded shape-memory alloy fibers in conventional composite materials changes the overall stiffness of the SMA hybrid composite structure and consequently changes natural frequency and mode shapes. The sound transmission and radiation from a composite panel is related to its frequency and mode shapes. Because of the capability to change both the natural frequency and mode shapes, the acoustic characteristics of SMA hybrid composite plates can be changed as well. The directivity pattern, radiation efficiency, and transmission loss of laminated composite materials are investigated based on 'composite' mode shapes in order to derive a basic understanding of the nature and authority of acoustic control by use of SMA hybrid composites.
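
    The frequency shift underlying APT and ASET follows from the modal relation f proportional to sqrt(k/m): activating the SMA fibers raises the stiffness, so each natural frequency scales with the square root of the stiffness ratio. A one-mode sketch; the modal mass, stiffness, and the 30% activation gain are assumed illustration values, not taken from the paper:

```python
import math

def natural_freq_hz(k, m):
    """Natural frequency of a one-degree-of-freedom modal model:
    f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(k / m) / (2 * math.pi)

m = 0.5                      # modal mass, kg (hypothetical)
k_off = 2.0e5                # modal stiffness with SMA fibers unactivated (hypothetical)
k_on = 1.3 * k_off           # assumed ~30% stiffness increase on activation

f_off = natural_freq_hz(k_off, m)
f_on = natural_freq_hz(k_on, m)
print(f"{f_off:.1f} Hz -> {f_on:.1f} Hz ({100 * (f_on / f_off - 1):.0f}% shift)")
```

    Shifting a panel resonance away from a tonal excitation in this way changes both the transmission loss at that frequency and, through the altered mode shapes, the radiation efficiency.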

  7. Snap Your Fingers! An ERP/sLORETA Study Investigating Implicit Processing of Self- vs. Other-Related Movement Sounds Using the Passive Oddball Paradigm

    PubMed Central

    Justen, Christoph; Herbert, Cornelia

    2016-01-01

    So far, neurophysiological studies have investigated implicit and explicit self-related processing particularly for self-related stimuli such as one's own face or name. The present study extends previous research to the implicit processing of self-related movement sounds and explores their spatio-temporal dynamics. Event-related potentials (ERPs) were assessed while participants (N = 12 healthy subjects) listened passively to previously recorded self- and other-related finger snapping sounds, presented either as deviants or standards during an oddball paradigm. Passive listening to low (500 Hz) and high (1000 Hz) pure tones served as additional control. For self- vs. other-related finger snapping sounds, analysis of ERPs revealed significant differences in the time windows of the N2a/MMN and P3. A subsequent source localization analysis with standardized low-resolution brain electromagnetic tomography (sLORETA) revealed increased cortical activation in distinct motor areas such as the supplementary motor area (SMA) in the N2a/mismatch negativity (MMN) as well as the P3 time window during processing of self- and other-related finger snapping sounds. In contrast, brain regions associated with self-related processing [e.g., right anterior/posterior cingulate cortex (ACC/PCC)] as well as the right inferior parietal lobule (IPL) showed increased activation particularly during processing of self- vs. other-related finger snapping sounds in the time windows of the N2a/MMN (ACC/PCC) or the P3 (IPL). None of these brain regions showed enhanced activation while listening passively to low (500 Hz) and high (1000 Hz) pure tones.
Taken together, the current results indicate (1) a specific role of motor regions such as SMA during auditory processing of movement-related information, regardless of whether this information is self- or other-related, (2) activation of neural sources such as the ACC/PCC and the IPL during implicit processing of self-related movement stimuli, and (3) their differential temporal activation during deviance (N2a/MMN – ACC/PCC) and target detection (P3 – IPL) of self- vs. other-related movement sounds. PMID:27777557

  8. Effects of emotionally charged sounds in schizophrenia patients using exploratory eye movements: comparison with healthy subjects.

    PubMed

    Ishii, Youhei; Morita, Kiichiro; Shouji, Yoshihisa; Nakashima, Youko; Uchimura, Naohisa

    2010-02-01

    Emotion-associated sounds have been suggested to exert important effects upon human personal relationships. The present study aimed to characterize the effects of the sounds of crying or laughing on visual cognitive function in schizophrenia patients. We recorded exploratory eye movements in 24 schizophrenia patients (mean age, 27.0 +/- 6.1 years; 14 male, 10 female) and age-matched controls. The total eye scanning length (TESL) and total number of gaze points in the left (left TNGP) and right (right TNGP) visual fields of the screen and the number of researching areas (NRA) were determined using eye-mark recording in the presence/absence of emotionally charged sounds. Controls' TESL for smiling pictures was longer than that for crying pictures irrespective of sounds. Patients' TESL for smiling pictures, however, was shorter than for crying pictures irrespective of the sounds. The left TNGP for smiling pictures was lower in patients than controls independent of sound. Importantly, the right TNGP was significantly larger with laughing sounds than in the absence of sound. In controls, the NRA for smiling pictures was significantly greater than for crying pictures irrespective of sound. Patients' NRA did not significantly differ between smiling and crying pictures irrespective of sound. Eye movements in schizophrenia patients' left field for smiling pictures associated with laughing sounds particularly differed from those in controls, suggesting impaired visual cognitive function associated with positive emotion, also involving pleasure-related sounds, in schizophrenia.

  9. Brain Areas Controlling Heart Rate Variability in Tinnitus and Tinnitus-Related Distress

    PubMed Central

    Vanneste, Sven; De Ridder, Dirk

    2013-01-01

Background Tinnitus is defined as an intrinsic sound perception that cannot be attributed to an external sound source. Distress in tinnitus patients is related to increased beta activity in the dorsal part of the anterior cingulate, and the amount of distress correlates with the activity of a network consisting of the amygdala, anterior cingulate cortex, insula, and parahippocampus. Previous research also revealed that distress is associated with a higher sympathetic (S) tone in tinnitus patients and tinnitus suppression with an increased parasympathetic (PS) tone. Methodology The aim of the present study is to investigate the relationship between tinnitus distress and the autonomic nervous system, and to identify which cortical areas mediate autonomic nervous system influences on tinnitus distress, using source-localized resting-state electroencephalogram (EEG) recordings and electrocardiogram (ECG). Twenty-one tinnitus patients were included in this study. Conclusions The results indicate that the dorsal and subgenual anterior cingulate, as well as the left and right insula, are important in the central control of heart rate variability in tinnitus patients. Whereas the sympathovagal balance is controlled by the subgenual and pregenual anterior cingulate cortex, the right insula controls sympathetic activity and the left insula the parasympathetic activity. The perceived distress in tinnitus patients seems to be sympathetically mediated. PMID:23533644

  10. Experimental implementation of acoustic impedance control by a 2D network of distributed smart cells

    NASA Astrophysics Data System (ADS)

    David, P.; Collet, M.; Cote, J.-M.

    2010-03-01

    New miniaturization and integration capabilities available from emerging microelectromechanical system (MEMS) technology will allow silicon-based artificial skins involving thousands of elementary actuators to be developed in the near future. Smart structures combining large arrays of elementary motion pixels are thus being studied so that fundamental properties could be dynamically adjusted. This paper investigates the acoustical capabilities of a network of distributed transducers connected with a suitable controlling strategy. The research aims at designing an integrated active interface for sound attenuation by using suitable changes of acoustical impedance. The control strategy is based on partial differential equations (PDE) and the multiscaled physics of electromechanical elements. Specific techniques based on PDE control theory have provided a simple boundary control equation able to annihilate the reflections of acoustic waves. To experimentally implement the method, the control strategy is discretized as a first order time-space operator. The obtained quasi-collocated architecture, composed of a large number of sensors and actuators, provides high robustness and stability. The experimental results demonstrate how a well controlled active skin can substantially modify sound reflectivity of the acoustical interface and reduce the propagation of acoustic waves.
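The boundary-control goal described above, annihilating reflections by adjusting the acoustical impedance of the interface, can be illustrated with the plane-wave reflection coefficient. This is a minimal sketch, not the authors' controller; the air parameters are assumed values. When the controlled boundary impedance Z matches the characteristic impedance ρc of the medium, the reflection coefficient vanishes:

```python
# Plane-wave reflection at a boundary of acoustic impedance Z, in a
# medium of characteristic impedance Zc = rho * c. Driving Z toward Zc
# (the aim of an impedance-matching control law) removes the reflection.
rho, c = 1.21, 343.0              # air density (kg/m^3) and sound speed (m/s), assumed
Zc = rho * c                      # characteristic impedance, ~415 rayl

def reflection_coefficient(Z):
    """R = (Z - Zc) / (Z + Zc); R = 0 means no reflected wave."""
    return (Z - Zc) / (Z + Zc)

print(reflection_coefficient(Zc))     # matched (controlled) boundary -> 0.0
print(reflection_coefficient(0.0))    # pressure-release surface -> -1.0
print(reflection_coefficient(1e12))   # nearly rigid wall -> ~1.0
```

The two limiting cases (rigid wall, pressure release) bracket the behavior the distributed smart cells modify in between.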

  11. Fatigue sensation induced by the sounds associated with mental fatigue and its related neural activities: revealed by magnetoencephalography

    PubMed Central

    2013-01-01

Background It has been proposed that an inappropriately conditioned fatigue sensation could be one cause of chronic fatigue. Although classical conditioning of the fatigue sensation has been reported in rats, there have been no reports in humans. Our aim was to examine whether classical conditioning of the mental fatigue sensation can take place in humans and to clarify the neural mechanisms of fatigue sensation using magnetoencephalography (MEG). Methods Ten and nine healthy volunteers participated in a conditioning and a control experiment, respectively. In the conditioning experiment, we used metronome sounds as conditioned stimuli and two-back task trials as unconditioned stimuli to cause fatigue sensation. Participants underwent MEG measurement while listening to the metronome sounds for 6 min. Thereafter, fatigue-inducing mental task trials (two-back task trials), which are demanding working-memory task trials, were performed for 60 min; metronome sounds were started 30 min after the start of the task trials (conditioning session). The next day, neural activities while listening to the metronome for 6 min were measured. Levels of fatigue sensation were also assessed using a visual analogue scale. In the control experiment, participants listened to the metronome on the first and second days, but they did not perform the conditioning session. MEG was not recorded in the control experiment. Results The level of fatigue sensation caused by listening to the metronome on the second day was significantly higher relative to that on the first day only when participants performed the conditioning session on the first day. Equivalent current dipoles (ECDs) in the insular cortex, with mean latencies of approximately 190 ms, were observed in six of eight participants after the conditioning session, although ECDs were not identified in any participant before the conditioning session.
Conclusions We demonstrated that the metronome sounds can cause mental fatigue sensation as a result of repeated pairings of the sounds with mental fatigue and that the insular cortex is involved in the neural substrates of this phenomenon. PMID:23764106

  12. A New Look at an Old Activity: Resonance Tubes Used to Teach Resonance

    ERIC Educational Resources Information Center

    Nelson, Jim; Nelson, Jane

    2017-01-01

    There are several variations of resonance laboratory activities used to determine the speed of sound. This is "not" one of them. This activity uses the resonance tube idea to teach "resonance," not to verify the speed of sound. Prior to this activity, the speed of sound has already been measured using computer sound-sensors and…

  13. Auditory perception and attention as reflected by the brain event-related potentials in children with Asperger syndrome.

    PubMed

    Lepistö, T; Silokallio, S; Nieminen-von Wendt, T; Alku, P; Näätänen, R; Kujala, T

    2006-10-01

    Language development is delayed and deviant in individuals with autism, but proceeds quite normally in those with Asperger syndrome (AS). We investigated auditory-discrimination and orienting in children with AS using an event-related potential (ERP) paradigm that was previously applied to children with autism. ERPs were measured to pitch, duration, and phonetic changes in vowels and to corresponding changes in non-speech sounds. Active sound discrimination was evaluated with a sound-identification task. The mismatch negativity (MMN), indexing sound-discrimination accuracy, showed right-hemisphere dominance in the AS group, but not in the controls. Furthermore, the children with AS had diminished MMN-amplitudes and decreased hit rates for duration changes. In contrast, their MMN to speech pitch changes was parietally enhanced. The P3a, reflecting involuntary orienting to changes, was diminished in the children with AS for speech pitch and phoneme changes, but not for the corresponding non-speech changes. The children with AS differ from controls with respect to their sound-discrimination and orienting abilities. The results of the children with AS are relatively similar to those earlier obtained from children with autism using the same paradigm, although these clinical groups differ markedly in their language development.

  14. Getting the cocktail party started: masking effects in speech perception

    PubMed Central

    Evans, S; McGettigan, C; Agnew, ZK; Rosen, S; Scott, SK

    2016-01-01

Spoken conversations typically take place in noisy environments, and different kinds of masking sounds place differing demands on cognitive resources. Previous studies, examining the modulation of neural activity associated with the properties of competing sounds, have shown that additional speech streams engage the superior temporal gyrus. However, the absence of a condition in which target speech was heard without additional masking made it difficult to identify brain networks specific to masking and to ascertain the extent to which competing speech was processed equivalently to target speech. In this study, we scanned young healthy adults with continuous functional Magnetic Resonance Imaging (fMRI), whilst they listened to stories masked by sounds that differed in their similarity to speech. We show that auditory attention and control networks are activated during attentive listening to masked speech in the absence of an overt behavioural task. We demonstrate that competing speech is processed predominantly in the left hemisphere within the same pathway as target speech but is not treated equivalently within that stream, and that individuals who perform better in speech-in-noise tasks activate the left mid-posterior superior temporal gyrus more. Finally, we identify neural responses associated with the onset of sounds in the auditory environment; this activity was found within right-lateralised frontal regions, consistent with a phasic alerting response. Taken together, these results provide a comprehensive account of the neural processes involved in listening in noise. PMID:26696297

  15. Mind the Gap: Two Dissociable Mechanisms of Temporal Processing in the Auditory System

    PubMed Central

    Anderson, Lucy A.

    2016-01-01

    High temporal acuity of auditory processing underlies perception of speech and other rapidly varying sounds. A common measure of auditory temporal acuity in humans is the threshold for detection of brief gaps in noise. Gap-detection deficits, observed in developmental disorders, are considered evidence for “sluggish” auditory processing. Here we show, in a mouse model of gap-detection deficits, that auditory brain sensitivity to brief gaps in noise can be impaired even without a general loss of central auditory temporal acuity. Extracellular recordings in three different subdivisions of the auditory thalamus in anesthetized mice revealed a stimulus-specific, subdivision-specific deficit in thalamic sensitivity to brief gaps in noise in experimental animals relative to controls. Neural responses to brief gaps in noise were reduced, but responses to other rapidly changing stimuli unaffected, in lemniscal and nonlemniscal (but not polysensory) subdivisions of the medial geniculate body. Through experiments and modeling, we demonstrate that the observed deficits in thalamic sensitivity to brief gaps in noise arise from reduced neural population activity following noise offsets, but not onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive channels underlying auditory temporal processing, and suggest that gap-detection deficits can arise from specific impairment of the sound-offset-sensitive channel. SIGNIFICANCE STATEMENT The experimental and modeling results reported here suggest a new hypothesis regarding the mechanisms of temporal processing in the auditory system. Using a mouse model of auditory temporal processing deficits, we demonstrate the existence of specific abnormalities in auditory thalamic activity following sound offsets, but not sound onsets. These results reveal dissociable sound-onset-sensitive and sound-offset-sensitive mechanisms underlying auditory processing of temporally varying sounds. 
Furthermore, the findings suggest that auditory temporal processing deficits, such as impairments in gap-in-noise detection, could arise from reduced brain sensitivity to sound offsets alone. PMID:26865621

  16. Neural Correlates of Natural Human Echolocation in Early and Late Blind Echolocation Experts

    PubMed Central

    Thaler, Lore; Arnott, Stephen R.; Goodale, Melvyn A.

    2011-01-01

    Background A small number of blind people are adept at echolocating silent objects simply by producing mouth clicks and listening to the returning echoes. Yet the neural architecture underlying this type of aid-free human echolocation has not been investigated. To tackle this question, we recruited echolocation experts, one early- and one late-blind, and measured functional brain activity in each of them while they listened to their own echolocation sounds. Results When we compared brain activity for sounds that contained both clicks and the returning echoes with brain activity for control sounds that did not contain the echoes, but were otherwise acoustically matched, we found activity in calcarine cortex in both individuals. Importantly, for the same comparison, we did not observe a difference in activity in auditory cortex. In the early-blind, but not the late-blind participant, we also found that the calcarine activity was greater for echoes reflected from surfaces located in contralateral space. Finally, in both individuals, we found activation in middle temporal and nearby cortical regions when they listened to echoes reflected from moving targets. Conclusions These findings suggest that processing of click-echoes recruits brain regions typically devoted to vision rather than audition in both early and late blind echolocation experts. PMID:21633496

  17. Quadratic Optimization in the Problems of Active Control of Sound

    NASA Technical Reports Server (NTRS)

    Loncaric, J.; Tsynkov, S. V.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

We analyze the problem of suppressing the unwanted component of a time-harmonic acoustic field (noise) on a predetermined region of interest. The suppression is rendered by active means, i.e., by introducing additional acoustic sources called controls that generate the appropriate anti-sound. Previously, we have obtained general solutions for active controls in both continuous and discrete formulations of the problem. We have also obtained optimal solutions that minimize the overall absolute acoustic source strength of active control sources. These optimal solutions happen to be particular layers of monopoles on the perimeter of the protected region. Mathematically, minimization of acoustic source strength is equivalent to minimization in the sense of L1. By contrast, in the current paper we formulate and study optimization problems that involve quadratic functions of merit. Specifically, we minimize the L2 norm of the control sources, and we consider both unconstrained and constrained minimization. The unconstrained L2 minimization is certainly the easiest problem to address numerically. On the other hand, the constrained approach allows one to analyze sophisticated geometries. In a special case, we compare our finite-difference optimal solutions to the continuous optimal solutions obtained previously using a semi-analytic technique. We also show that the optima obtained in the sense of L2 differ drastically from those obtained in the sense of L1.
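A minimal numerical sketch of the unconstrained L2 problem described above (not the authors' code): assume the continuous problem has been discretized into a complex transfer matrix `A` mapping n candidate control sources on the perimeter to m points of the protected region, with `b` the unwanted field at those points. The matrix and field below are random stand-ins for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretization: m observation points in the protected
# region, n candidate control (anti-sound) sources on its perimeter.
m, n = 40, 12
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))  # transfer matrix
b = rng.standard_normal(m) + 1j * rng.standard_normal(m)            # unwanted field

# Unconstrained L2 problem: choose complex source strengths g that
# minimize ||A g + b||_2; lstsq returns the least-squares solution.
g, *_ = np.linalg.lstsq(A, -b, rcond=None)

residual = np.linalg.norm(A @ g + b)
print(residual <= np.linalg.norm(b))  # True: g = 0 is always feasible, so the optimum cannot be worse
```

The constrained variants studied in the paper replace this plain least-squares step with a quadratic program over the same source strengths.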

  18. Music training alters the course of adolescent auditory development.

    PubMed

    Tierney, Adam T; Krizman, Jennifer; Kraus, Nina

    2015-08-11

    Fundamental changes in brain structure and function during adolescence are well-characterized, but the extent to which experience modulates adolescent neurodevelopment is not. Musical experience provides an ideal case for examining this question because the influence of music training begun early in life is well-known. We investigated the effects of in-school music training, previously shown to enhance auditory skills, versus another in-school training program that did not focus on development of auditory skills (active control). We tested adolescents on neural responses to sound and language skills before they entered high school (pretraining) and again 3 y later. Here, we show that in-school music training begun in high school prolongs the stability of subcortical sound processing and accelerates maturation of cortical auditory responses. Although phonological processing improved in both the music training and active control groups, the enhancement was greater in adolescents who underwent music training. Thus, music training initiated as late as adolescence can enhance neural processing of sound and confer benefits for language skills. These results establish the potential for experience-driven brain plasticity during adolescence and demonstrate that in-school programs can engender these changes.

  19. Music training alters the course of adolescent auditory development

    PubMed Central

    Tierney, Adam T.; Krizman, Jennifer; Kraus, Nina

    2015-01-01

    Fundamental changes in brain structure and function during adolescence are well-characterized, but the extent to which experience modulates adolescent neurodevelopment is not. Musical experience provides an ideal case for examining this question because the influence of music training begun early in life is well-known. We investigated the effects of in-school music training, previously shown to enhance auditory skills, versus another in-school training program that did not focus on development of auditory skills (active control). We tested adolescents on neural responses to sound and language skills before they entered high school (pretraining) and again 3 y later. Here, we show that in-school music training begun in high school prolongs the stability of subcortical sound processing and accelerates maturation of cortical auditory responses. Although phonological processing improved in both the music training and active control groups, the enhancement was greater in adolescents who underwent music training. Thus, music training initiated as late as adolescence can enhance neural processing of sound and confer benefits for language skills. These results establish the potential for experience-driven brain plasticity during adolescence and demonstrate that in-school programs can engender these changes. PMID:26195739

  20. Atypical speech versus non-speech detection and discrimination in 4- to 6- yr old children with autism spectrum disorder: An ERP study.

    PubMed

    Galilee, Alena; Stefanidou, Chrysi; McCleery, Joseph P

    2017-01-01

    Previous event-related potential (ERP) research utilizing oddball stimulus paradigms suggests diminished processing of speech versus non-speech sounds in children with an Autism Spectrum Disorder (ASD). However, brain mechanisms underlying these speech processing abnormalities, and to what extent they are related to poor language abilities in this population remain unknown. In the current study, we utilized a novel paired repetition paradigm in order to investigate ERP responses associated with the detection and discrimination of speech and non-speech sounds in 4- to 6-year old children with ASD, compared with gender and verbal age matched controls. ERPs were recorded while children passively listened to pairs of stimuli that were either both speech sounds, both non-speech sounds, speech followed by non-speech, or non-speech followed by speech. Control participants exhibited N330 match/mismatch responses measured from temporal electrodes, reflecting speech versus non-speech detection, bilaterally, whereas children with ASD exhibited this effect only over temporal electrodes in the left hemisphere. Furthermore, while the control groups exhibited match/mismatch effects at approximately 600 ms (central N600, temporal P600) when a non-speech sound was followed by a speech sound, these effects were absent in the ASD group. These findings suggest that children with ASD fail to activate right hemisphere mechanisms, likely associated with social or emotional aspects of speech detection, when distinguishing non-speech from speech stimuli. Together, these results demonstrate the presence of atypical speech versus non-speech processing in children with ASD when compared with typically developing children matched on verbal age.

  1. Atypical speech versus non-speech detection and discrimination in 4- to 6- yr old children with autism spectrum disorder: An ERP study

    PubMed Central

    Stefanidou, Chrysi; McCleery, Joseph P.

    2017-01-01

Previous event-related potential (ERP) research utilizing oddball stimulus paradigms suggests diminished processing of speech versus non-speech sounds in children with an Autism Spectrum Disorder (ASD). However, brain mechanisms underlying these speech processing abnormalities, and to what extent they are related to poor language abilities in this population remain unknown. In the current study, we utilized a novel paired repetition paradigm in order to investigate ERP responses associated with the detection and discrimination of speech and non-speech sounds in 4- to 6-year-old children with ASD, compared with gender and verbal age matched controls. ERPs were recorded while children passively listened to pairs of stimuli that were either both speech sounds, both non-speech sounds, speech followed by non-speech, or non-speech followed by speech. Control participants exhibited N330 match/mismatch responses measured from temporal electrodes, reflecting speech versus non-speech detection, bilaterally, whereas children with ASD exhibited this effect only over temporal electrodes in the left hemisphere. Furthermore, while the control groups exhibited match/mismatch effects at approximately 600 ms (central N600, temporal P600) when a non-speech sound was followed by a speech sound, these effects were absent in the ASD group. These findings suggest that children with ASD fail to activate right hemisphere mechanisms, likely associated with social or emotional aspects of speech detection, when distinguishing non-speech from speech stimuli. Together, these results demonstrate the presence of atypical speech versus non-speech processing in children with ASD when compared with typically developing children matched on verbal age. PMID:28738063

  2. Test of a non-physical barrier consisting of light, sound, and bubble screen to block upstream movement of sea lamprey in an experimental raceway

    USGS Publications Warehouse

    Miehls, Scott M.; Johnson, Nicholas S.; Hrodey, Pete J.

    2017-01-01

    Control of the invasive Sea Lamprey Petromyzon marinus is critical for management of commercial and recreational fisheries in the Laurentian Great Lakes. Use of physical barriers to block Sea Lampreys from spawning habitat is a major component of the control program. However, the resulting interruption of natural streamflow and blockage of nontarget species present substantial challenges. Development of an effective nonphysical barrier would aid the control of Sea Lampreys by eliminating their access to spawning locations while maintaining natural streamflow. We tested the effect of a nonphysical barrier consisting of strobe lights, low-frequency sound, and a bubble screen on the movement of Sea Lampreys in an experimental raceway designed as a two-choice maze with a single main channel fed by two identical inflow channels (one control and one blocked). Sea Lampreys were more likely to move upstream during trials when the strobe light and low-frequency sound were active compared with control trials and trials using the bubble screen alone. For those Sea Lampreys that did move upstream to the confluence of inflow channels, no combination of stimuli or any individual stimulus significantly influenced the likelihood that Sea Lampreys would enter the blocked inflow channel, enter the control channel, or return downstream.

  3. Structural Acoustic Prediction and Interior Noise Control Technology

    NASA Technical Reports Server (NTRS)

    Mathur, G. P.; Chin, C. L.; Simpson, M. A.; Lee, J. T.; Palumbo, Daniel L. (Technical Monitor)

    2001-01-01

    This report documents the results of Task 14, "Structural Acoustic Prediction and Interior Noise Control Technology". The task was to evaluate the performance of tuned foam elements (termed Smart Foam) both analytically and experimentally. Results taken from a three-dimensional finite element model of an active, tuned foam element are presented. Measurements of sound absorption and sound transmission loss were taken using the model. These results agree well with published data. Experimental performance data were taken in Boeing's Interior Noise Test Facility where 12 smart foam elements were applied to a 757 sidewall. Several configurations were tested. Noise reductions of 5-10 dB were achieved over the 200-800 Hz bandwidth of the controller. Accelerometers mounted on the panel provided a good reference for the controller. Configurations with far-field error microphones outperformed near-field cases.

  4. Active Control of Fan-Generated Tone Noise

    NASA Technical Reports Server (NTRS)

    Gerhold, Carl H.

    1995-01-01

This paper reports on an experiment to control the noise radiated from the inlet of a ducted fan using a time-domain active adaptive system. The control sound source consists of loudspeakers arranged in a ring around the fan duct. The error sensor is located in the fan duct. The purpose of this experiment is to demonstrate that the in-duct error sensor reduces mode spillover in the far field, thereby increasing the efficiency of the control system. The control system is found to reduce the blade passage frequency tone significantly in the acoustic far field when the mode orders of the noise source and of the control source are the same and the dominant wave in the duct is a plane wave. The presence of higher-order modes in the duct reduces the noise reduction efficiency, particularly near the mode cut-on where the standing wave component is strong, but the control system converges stably. The control system is also stable and converges when the first circumferential mode is generated in the duct. The control system is found to reduce the fan noise in the far field on an arc around the fan inlet by as much as 20 dB with none of the sound amplification associated with mode spillover.
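The time-domain adaptive control idea can be sketched with a two-weight LMS canceller locked onto a single tone standing in for the blade passage frequency. This is an illustrative toy, not the experimental controller; the sample rate, tone frequency, step size, and quadrature reference signals are all assumptions.

```python
import numpy as np

# Toy LMS adaptive canceller for a single tone: the adaptive weights
# synthesize the anti-noise that drives the error-sensor signal to zero.
fs, f0, n_steps = 8000, 250, 4000
t = np.arange(n_steps) / fs
noise = np.sin(2 * np.pi * f0 * t)           # tone seen at the error sensor
ref = np.stack([np.sin(2 * np.pi * f0 * t),  # in-phase reference
                np.cos(2 * np.pi * f0 * t)]) # quadrature reference

w = np.zeros(2)        # adaptive weights
mu = 0.01              # LMS step size (assumed)
err = np.empty(n_steps)
for k in range(n_steps):
    y = w @ ref[:, k]              # control (anti-noise) output
    e = noise[k] - y               # residual at the error sensor
    w += 2 * mu * e * ref[:, k]    # LMS weight update
    err[k] = e

# Residual power at the end of the run vs. at the start:
print(np.mean(err[-500:]**2) / np.mean(err[:500]**2))  # ratio << 1 after convergence
```

With quadrature references the two weights converge to the amplitude and phase of the tone, which is the essence of a single-frequency adaptive canceller.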

  5. Effectiveness of an acoustical product in reducing high-frequency sound within unoccupied incubators.

    PubMed

    Kellam, Barbara; Bhatia, Jatinder

    2009-08-01

Few noise measurement studies in the neonatal intensive care unit have reported sound frequencies within incubators. Sound frequencies within incubators are markedly different from sound frequencies within the gravid uterus. This article reports the results of sound spectral analysis (SSA) within unoccupied incubators under control and treatment conditions. SSA indicated that acoustical foam panels (treatment condition) markedly reduced sound frequencies ≥500 Hz when compared with the control condition. The main findings of this study (a) illustrate the need to monitor high-frequency sound within incubators and (b) indicate one method to reduce atypical sound exposure within incubators.

  6. The Effect of Exposure to High Noise Levels on the Performance and Rate of Error in Manual Activities.

    PubMed

    Khajenasiri, Farahnaz; Zamanian, Alireza; Zamanian, Zahra

    2016-03-01

Sound is a significant environmental factor for people's health: it plays an important role in both physical and psychological injury and affects individuals' performance and productivity. The aim of this study was to determine the effect of exposure to high noise levels on the performance and rate of error in manual activities. This interventional study was conducted on 50 students at Shiraz University of Medical Sciences (25 males and 25 females), in which each person served as his or her own control. Performance was assessed at sound levels of 70, 90, and 110 dB, varying the physical features and conditions of the sound source, using the Two-Arm Coordination Test. The data were analyzed using SPSS version 16. Repeated measurements were used to compare the length of performance as well as the errors measured in the test. Based on the results, we found a direct and significant association between sound level and the length of performance. Moreover, participants' performance differed significantly across sound levels (at 110 dB as opposed to 70 and 90 dB, p < 0.05 and p < 0.001, respectively). This study found that a sound level of 110 dB had an important effect on individuals' performance, i.e., performance decreased.

  7. The Role of Xylitol Gum Chewing in Restoring Postoperative Bowel Activity After Cesarean Section.

    PubMed

    Lee, Jian Tao; Hsieh, Mei-Hui; Cheng, Po-Jen; Lin, Jr-Rung

    2016-03-01

    The goal of this study was to evaluate the effects of xylitol gum chewing on gastrointestinal recovery after cesarean section. Women who underwent cesarean section (N = 120) were randomly allocated into Group A (xylitol gum), Group B (nonxylitol gum), or the control group (no chewing gum). Every 2 hr post-cesarean section and until first flatus, Groups A and B received two pellets of chewing gum and were asked to chew for 15 min. The times to first bowel sounds, first flatus, and first defecation were then compared among the three groups. Group A had the shortest mean time to first bowel sounds (6.9 ± 1.7 hr), followed by Group B (8 ± 1.6 hr) and the control group (12.8 ± 2.5 hr; one-way analysis of variance, p < .001; Scheffe's post hoc comparisons, p < .05). The gum-chewing groups demonstrated a faster return of flatus than the control group did (p < .001), but the time to flatus did not differ significantly between the gum-chewing groups. Additionally, the differences in the time to first defecation were not significant. After cesarean section, chewing gum increased participants' return of bowel activity, as measured by the appearance of bowel sounds and the passage of flatus. In this context, xylitol-containing gum may be superior to xylitol-free gum. © The Author(s) 2015.

  8. Detection of active noise control on the standard motorcycle exhaust Supra X 125 D using PVC pipe technique form Y

    NASA Astrophysics Data System (ADS)

    Isranuri, I.; Alfisyahrin; Nasution, A. R.

    2018-02-01

This study aims to achieve noise reduction on the Supra X 125D motorcycle exhaust using the active noise control method. A Y-shaped PVC pipe is bolted to the exhaust, and a loudspeaker is sealed into the branch of the Y. This loudspeaker serves as a secondary noise source to counter the primary noise of the Supra X 125D exhaust. The sound generator in this study is the ISD 4004 module, which records the source noise; a phase-reversing circuit then inverts the recorded signal by 180°. The inverted noise emitted by the sound generator meets the source noise, and the superposition of the two opposite-phase sounds produces a noise reduction detected at the end of the Y-shaped PVC pipe. The phase-reversing circuit uses a 1 kΩ feedback resistor, 2 kΩ input resistors, and a 2500 μF, 16 V capacitor, with an ICL 7660 and a TL 702 CP as amplifiers. Test results show a maximum reduction of 2 dB along the Z axis at an engine speed of 1000 rpm and of 1.5 dB along the Z axis at 2000 rpm.
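The cancellation principle used here, summing the source noise with a 180°-inverted copy, can be illustrated numerically. A minimal sketch (not the paper's hardware), assuming a pure tone and a small residual phase error `phi` left by the inverting circuit; the residual amplitude is 2·sin(phi/2) times the original.

```python
import numpy as np

# Destructive interference: a secondary source replays the primary tone
# with an intended 180° inversion. A residual phase error phi in the
# inverting circuit limits cancellation to 2*sin(phi/2) of the original.
fs, f0 = 44100, 440        # sample rate and tone frequency (assumed)
t = np.arange(fs) / fs     # one second of signal
primary = np.sin(2 * np.pi * f0 * t)

for phi_deg in (0.0, 1.0, 10.0):
    phi = np.radians(phi_deg)
    secondary = -np.sin(2 * np.pi * f0 * t + phi)  # "inverted" copy with error phi
    residual = primary + secondary
    ratio = np.sqrt(np.mean(residual**2) / np.mean(primary**2))
    print(f"{phi_deg:5.1f} deg error -> residual {ratio:.4f} of original amplitude")
```

A perfect inversion cancels the tone exactly; even a 10° phase error already leaves about 17% of the original amplitude, which is why phase accuracy dominates the achievable reduction in dB.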

  9. Reduction of the Radiating Sound of a Submerged Finite Cylindrical Shell Structure by Active Vibration Control

    PubMed Central

    Kim, Heung Soo; Sohn, Jung Woo; Jeon, Juncheol; Choi, Seung-Bok

    2013-01-01

In this work, active vibration control of an underwater cylindrical shell structure was investigated, to suppress structural vibration and structure-borne noise in water. Finite element modeling of the submerged cylindrical shell structure was developed, and experimentally evaluated. Modal reduction was conducted to obtain the reduced system equation for the active feedback control algorithm. Three Macro Fiber Composites (MFCs) were used as actuators and sensors. One MFC was used as an exciter. The optimum control algorithm was designed based on the reduced system equations. The active control performance was then evaluated using the lab scale underwater cylindrical shell structure. Structural vibration and structure-borne noise of the underwater cylindrical shell structure were reduced significantly by activating the optimal controller associated with the MFC actuators. The results show that active vibration control of the underwater structure is a useful means to reduce structure-borne noise in water. PMID:23389344

  10. Reduction of the radiating sound of a submerged finite cylindrical shell structure by active vibration control.

    PubMed

    Kim, Heung Soo; Sohn, Jung Woo; Jeon, Juncheol; Choi, Seung-Bok

    2013-02-06

    In this work, active vibration control of an underwater cylindrical shell structure was investigated to suppress structural vibration and structure-borne noise in water. A finite element model of the submerged cylindrical shell structure was developed and experimentally evaluated. Modal reduction was conducted to obtain the reduced system equation for the active feedback control algorithm. Three Macro Fiber Composites (MFCs) were used as actuators and sensors. One MFC was used as an exciter. The optimal control algorithm was designed based on the reduced system equations. The active control performance was then evaluated using the lab-scale underwater cylindrical shell structure. Structural vibration and structure-borne noise of the underwater cylindrical shell structure were reduced significantly by activating the optimal controller associated with the MFC actuators. The results demonstrate that active vibration control of the underwater structure is a useful means to reduce structure-borne noise in water.

  11. Community Relations: DOD’s Approach for Using Resources Reflects Sound Management Principles

    DTIC Science & Technology

    2016-09-01

    COMMUNITY RELATIONS DOD’s Approach for Using Resources Reflects Sound Management Principles Report to...Sound Management Principles What GAO Found The Department of Defense’s (DOD) approach for determining which community relations activities to...undertake reflects sound management principles —both for activities requested by non-DOD entities and for activities initiated by the department. DOD and

  12. Different categories of living and non-living sound-sources activate distinct cortical networks

    PubMed Central

    Engel, Lauren R.; Frum, Chris; Puce, Aina; Walker, Nathan A.; Lewis, James W.

    2009-01-01

    With regard to hearing perception, it remains unclear as to whether, or the extent to which, different conceptual categories of real-world sounds and related categorical knowledge are differentially represented in the brain. Semantic knowledge representations are reported to include the major divisions of living versus non-living things, plus more specific categories including animals, tools, biological motion, faces, and places—categories typically defined by their characteristic visual features. Here, we used functional magnetic resonance imaging (fMRI) to identify brain regions showing preferential activity to four categories of action sounds, which included non-vocal human and animal actions (living), plus mechanical and environmental sound-producing actions (non-living). The results showed a striking antero-posterior division in cortical representations for sounds produced by living versus non-living sources. Additionally, there were several significant differences by category, depending on whether the task was category-specific (e.g. human or not) versus non-specific (detect end-of-sound). In general, (1) human-produced sounds yielded robust activation in the bilateral posterior superior temporal sulci independent of task. Task demands modulated activation of left-lateralized fronto-parietal regions, bilateral insular cortices, and subcortical regions previously implicated in observation-execution matching, consistent with “embodied” and mirror-neuron network representations subserving recognition. (2) Animal action sounds preferentially activated the bilateral posterior insulae. (3) Mechanical sounds activated the anterior superior temporal gyri and parahippocampal cortices. (4) Environmental sounds preferentially activated dorsal occipital and medial parietal cortices. 
Overall, this multi-level dissociation of networks for preferentially representing distinct sound-source categories provides novel support for grounded cognition models that may underlie organizational principles for hearing perception. PMID:19465134

  13. Mind-wandering and alterations to default mode network connectivity when listening to naturalistic versus artificial sounds.

    PubMed

    Gould van Praag, Cassandra D; Garfinkel, Sarah N; Sparasci, Oliver; Mees, Alex; Philippides, Andrew O; Ware, Mark; Ottaviani, Cristina; Critchley, Hugo D

    2017-03-27

    Naturalistic environments have been demonstrated to promote relaxation and wellbeing. We assess opposing theoretical accounts for these effects through investigation of autonomic arousal and alterations of activation and functional connectivity within the default mode network (DMN) of the brain while participants listened to sounds from artificial and natural environments. We found no evidence for increased DMN activity in the naturalistic compared to artificial or control conditions; however, seed-based functional connectivity showed a shift from anterior to posterior midline functional coupling in the naturalistic condition. These changes were accompanied by an increase in peak high frequency heart rate variability, indicating an increase in parasympathetic activity in the naturalistic condition in line with the Stress Recovery Theory of nature exposure. Changes in heart rate and the peak high frequency were correlated with baseline functional connectivity within the DMN and baseline parasympathetic tone, respectively, highlighting the importance of individual neural and autonomic differences in the response to nature exposure. Our findings may help explain reported health benefits of exposure to natural environments, through identification of alterations to autonomic activity and functional coupling within the DMN when listening to naturalistic sounds.

  14. Mind-wandering and alterations to default mode network connectivity when listening to naturalistic versus artificial sounds

    PubMed Central

    Gould van Praag, Cassandra D.; Garfinkel, Sarah N.; Sparasci, Oliver; Mees, Alex; Philippides, Andrew O.; Ware, Mark; Ottaviani, Cristina; Critchley, Hugo D.

    2017-01-01

    Naturalistic environments have been demonstrated to promote relaxation and wellbeing. We assess opposing theoretical accounts for these effects through investigation of autonomic arousal and alterations of activation and functional connectivity within the default mode network (DMN) of the brain while participants listened to sounds from artificial and natural environments. We found no evidence for increased DMN activity in the naturalistic compared to artificial or control conditions; however, seed-based functional connectivity showed a shift from anterior to posterior midline functional coupling in the naturalistic condition. These changes were accompanied by an increase in peak high frequency heart rate variability, indicating an increase in parasympathetic activity in the naturalistic condition in line with the Stress Recovery Theory of nature exposure. Changes in heart rate and the peak high frequency were correlated with baseline functional connectivity within the DMN and baseline parasympathetic tone, respectively, highlighting the importance of individual neural and autonomic differences in the response to nature exposure. Our findings may help explain reported health benefits of exposure to natural environments, through identification of alterations to autonomic activity and functional coupling within the DMN when listening to naturalistic sounds. PMID:28345604

  15. Feasibility of using fMRI to study mothers responding to infant cries.

    PubMed

    Lorberbaum, J P; Newman, J D; Dubno, J R; Horwitz, A R; Nahas, Z; Teneback, C C; Bloomer, C W; Bohning, D E; Vincent, D; Johnson, M R; Emmanuel, N; Brawman-Mintzer, O; Book, S W; Lydiard, R B; Ballenger, J C; George, M S

    1999-01-01

    While parenting is a universal human behavior, its neuroanatomic basis is currently unknown. Animal data suggest that the cingulate may play an important function in mammalian parenting behavior. For example, in rodents cingulate lesions impair maternal behavior. Here, in an attempt to understand the brain basis of human maternal behavior, we had mothers listen to recorded infant cries and white noise control sounds while they underwent functional MRI (fMRI) of the brain. We hypothesized that mothers would show significantly greater cingulate activity during the cries compared to the control sounds. Of 7 subjects scanned, 4 had fMRI data suitable for analysis. When fMRI data were averaged for these 4 subjects, the anterior cingulate and right medial prefrontal cortex were the only brain regions showing statistically increased activity with the cries compared to white noise control sounds (cluster analysis with one-tailed z-map threshold of P < 0.001 and spatial extent threshold of P < 0.05). These results demonstrate the feasibility of using fMRI to study brain activity in mothers listening to infant cries and that the anterior cingulate may be involved in mothers listening to crying babies. We are currently replicating this study in a larger group of mothers. Future work in this area may help (1) unravel the functional neuroanatomy of the parent-infant bond and (2) examine whether markers of this bond, such as maternal brain response to infant crying, can predict maternal style (i.e., child neglect), offspring temperament, or offspring depression or anxiety.

  16. A Hearing-Based, Frequency Domain Sound Quality Model for Combined Aerodynamic and Power Transmission Response with Application to Rotorcraft Interior Noise

    NASA Astrophysics Data System (ADS)

    Sondkar, Pravin B.

    The severity of combined aerodynamic and power transmission response in high-speed, high power density systems such as a rotorcraft is still a major cause of annoyance in spite of recent advances in passive, semi-active, and active control. With further increases in the capacity and power of this class of machinery systems, the acoustic noise levels are expected to increase even more. To achieve further improvements in sound quality, a more refined understanding of the factors and attributes controlling human perception is needed. In the case of rotorcraft systems, the perceived quality of the interior sound field is a major determining factor of passenger comfort. Traditionally, this sound quality factor is determined by measuring the response of a chosen set of juries who are asked to compare their qualitative reactions to two or more sounds based on their subjective impressions. This type of testing is very time-consuming, costly, often inconsistent, and not useful for practical design purposes. Furthermore, there is no known universal model for sound quality. The primary aim of this research is to achieve significant improvements in quantifying the sound quality of combined aerodynamic and power transmission response in high-speed, high power density machinery systems such as a rotorcraft by applying relevant objective measures related to the spectral characteristics of the sound field. Two models have been proposed in this dissertation research. First, a classical multivariate regression analysis model based on currently known sound quality metrics as well as some new metrics derived in this study is presented. Even though the analysis resulted in the best possible multivariate model as a measure of the acoustic noise quality, it does not incorporate the human judgment mechanism. The regression model can change depending on specific application, nature of the sounds, and types of juries used in the study. 
Also, it predicts only the averaged preference scores and does not explain why two jury members differ in their judgment. To address the above shortcomings of applying regression analysis, a new human judgment model is proposed to further improve the ability to predict the degree of subjective annoyance. The human judgment model involves extraction of subjective attributes and their values using a proposed artificial jury processor. In this approach, a set of ear transfer functions is employed to compute the characteristics of sound pressure waves as perceived subjectively by humans. The resulting basilar membrane displacement data from this proposed model are then applied to analyze the attribute values. This model makes it possible to examine the highly sophisticated human judgment mechanism. Since the human judgment model is essentially based on jury attributes that are not expected to change significantly with application or nature of the sound field, it gives a more common basis on which to evaluate sound quality. This model also attempts to explain the inter-juror differences in opinion, which is critical in understanding the variability in human response.

  17. A New Look at an Old Activity: Resonance Tubes Used to Teach Resonance

    NASA Astrophysics Data System (ADS)

    Nelson, Jim; Nelson, Jane

    2017-12-01

    There are several variations of resonance laboratory activities used to determine the speed of sound. This is not one of them. This activity uses the resonance tube idea to teach resonance, not to verify the speed of sound. Prior to this activity, the speed of sound has already been measured using computer sound-sensors and timing echoes produced in long tubes like carpet tubes. There are other methods to determine the speed of sound. Some methods are referenced at the end of this article. The students already know the speed of sound when they are confronted with data that contradict their prior knowledge. Here, the mystery is something the students solve with the help of a series of demonstrations by the instructor.
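    The echo-timing measurement mentioned above (timing a pulse down a long tube and back) amounts to v = 2L / t. The tube length and echo time below are made-up illustrative numbers, not data from the article.

```python
# Echo-timing sketch of a speed-of-sound measurement: a sound pulse
# travels down a tube of length L and back, so v = 2L / t.
def speed_of_sound(tube_length_m, echo_time_s):
    """Speed of sound from the round-trip echo time in a long tube."""
    return 2.0 * tube_length_m / echo_time_s

# Illustrative numbers: a 4 m carpet tube and a 23.3 ms round trip,
# which lands near the textbook value of ~343 m/s at room temperature.
print(round(speed_of_sound(4.0, 0.0233), 1))
```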

  18. Neuro-cognitive aspects of "OM" sound/syllable perception: A functional neuroimaging study.

    PubMed

    Kumar, Uttam; Guleria, Anupam; Khetrapal, Chunni Lal

    2015-01-01

    The sound "OM" is believed to bring mental peace and calm. The cortical activation associated with listening to the sound "OM", in contrast to a similar non-meaningful sound (TOM) and to listening to a meaningful Hindi word (AAM), has been investigated using functional magnetic resonance imaging (fMRI). The behaviour interleaved gradient technique was employed in order to avoid interference of scanner noise. The results reveal that listening to the "OM" sound in contrast to the meaningful Hindi word condition activates areas of bilateral cerebellum, left middle frontal gyrus (dorsolateral middle frontal/BA 9), right precuneus (BA 5) and right supramarginal gyrus (SMG). Listening to the "OM" sound in contrast to the "non-meaningful" sound condition leads to cortical activation in bilateral middle frontal (BA 9), right middle temporal (BA 37), right angular gyrus (BA 40), right SMG and right superior middle frontal gyrus (BA 8). The conjunction analysis reveals that the common neural regions activated in listening to the "OM" sound during both conditions are middle frontal (left dorsolateral middle frontal cortex) and right SMG. The results suggest that listening to the "OM" sound recruits neural systems implicated in emotional empathy.

  19. Safety and walking ability of KAFO users with the C-Brace® Orthotronic Mobility System, a new microprocessor stance and swing control orthosis

    PubMed Central

    Pröbsting, Eva; Kannenberg, Andreas; Zacharias, Britta

    2016-01-01

    Background: There are clear indications for benefits of stance control orthoses compared to locked knee ankle foot orthoses. However, stance control orthoses still have limited function compared with a sound human leg. Objectives: The aim of this study was to evaluate the potential benefits of a microprocessor stance and swing control orthosis compared to stance control orthoses and locked knee ankle foot orthoses in activities of daily living. Study design: Survey of lower limb orthosis users before and after fitting of a microprocessor stance and swing control orthosis. Methods: Thirteen patients with various lower limb pareses completed a baseline survey for their current orthotic device (locked knee ankle foot orthosis or stance control orthosis) and a follow-up for the microprocessor stance and swing control orthosis with the Orthosis Evaluation Questionnaire, a new self-reported outcome measure devised by modifying the Prosthesis Evaluation Questionnaire for use in lower limb orthotics, and the Activities of Daily Living Questionnaire. Results: The Orthosis Evaluation Questionnaire results demonstrated significant improvements by microprocessor stance and swing control orthosis use in the total score and the domains of ambulation (p = .001), paretic limb health (p = .04), sounds (p = .02), and well-being (p = .01). Activities of Daily Living Questionnaire results showed significant improvements with the microprocessor stance and swing control orthosis with regard to perceived safety and difficulty of activities of daily living. Conclusion: The microprocessor stance and swing control orthosis may facilitate an easier, more physiological, and safer execution of many activities of daily living compared to traditional leg orthosis technologies. Clinical relevance This study compared patient-reported outcomes of a microprocessor stance and swing control orthosis (C-Brace) to those with traditional knee ankle foot orthosis and stance control orthosis devices. 
The C-Brace offers new functions including controlled knee flexion during weight bearing and dynamic swing control, resulting in significant improvements in perceived orthotic mobility and safety. PMID:27151648

  20. Safety and walking ability of KAFO users with the C-Brace® Orthotronic Mobility System, a new microprocessor stance and swing control orthosis.

    PubMed

    Pröbsting, Eva; Kannenberg, Andreas; Zacharias, Britta

    2017-02-01

    There are clear indications for benefits of stance control orthoses compared to locked knee ankle foot orthoses. However, stance control orthoses still have limited function compared with a sound human leg. The aim of this study was to evaluate the potential benefits of a microprocessor stance and swing control orthosis compared to stance control orthoses and locked knee ankle foot orthoses in activities of daily living. Survey of lower limb orthosis users before and after fitting of a microprocessor stance and swing control orthosis. Thirteen patients with various lower limb pareses completed a baseline survey for their current orthotic device (locked knee ankle foot orthosis or stance control orthosis) and a follow-up for the microprocessor stance and swing control orthosis with the Orthosis Evaluation Questionnaire, a new self-reported outcome measure devised by modifying the Prosthesis Evaluation Questionnaire for use in lower limb orthotics, and the Activities of Daily Living Questionnaire. The Orthosis Evaluation Questionnaire results demonstrated significant improvements by microprocessor stance and swing control orthosis use in the total score and the domains of ambulation (p = .001), paretic limb health (p = .04), sounds (p = .02), and well-being (p = .01). Activities of Daily Living Questionnaire results showed significant improvements with the microprocessor stance and swing control orthosis with regard to perceived safety and difficulty of activities of daily living. The microprocessor stance and swing control orthosis may facilitate an easier, more physiological, and safer execution of many activities of daily living compared to traditional leg orthosis technologies. Clinical relevance This study compared patient-reported outcomes of a microprocessor stance and swing control orthosis (C-Brace) to those with traditional knee ankle foot orthosis and stance control orthosis devices. 
The C-Brace offers new functions including controlled knee flexion during weight bearing and dynamic swing control, resulting in significant improvements in perceived orthotic mobility and safety.

  1. Comparison of various decentralised structural and cavity feedback control strategies for transmitted noise reduction through a double panel structure

    NASA Astrophysics Data System (ADS)

    Ho, Jen-Hsuan; Berkhoff, Arthur

    2014-03-01

    This paper compares various decentralised control strategies, including structural and acoustic actuator-sensor configuration designs, to reduce noise transmission through a double panel structure. The comparison is based on identical control stability indexes. The double panel structure consists of two panels with air in between and offers the advantages of low sound transmission at high frequencies, low heat transmission, and low weight. The double panel structure is widely used, for example in the aerospace and automotive industries. Nevertheless, the resonance of the cavity and the poor sound transmission loss at low frequencies limit the double panel's noise control performance. Applying active structural acoustic control to the panels or active noise control to the cavity has been discussed in many papers. In this paper, the resonances of the panels and the cavity are considered simultaneously to further reduce the transmitted noise through an existing double panel structure. A structural-acoustic coupled model is developed to investigate and compare various structural control and cavity control methods. Numerical analysis and real-time control results show that structural control should be applied to both panels. Three types of cavity control sources are presented and compared. The results indicate that the largest noise reduction is obtained with cavity control by loudspeakers modified to operate as incident pressure sources.

  2. Evaluation of the acoustic coordinated reset (CR®) neuromodulation therapy for tinnitus: study protocol for a double-blind randomized placebo-controlled trial.

    PubMed

    Hoare, Derek J; Pierzycki, Robert H; Thomas, Holly; McAlpine, David; Hall, Deborah A

    2013-07-10

    Current theories of tinnitus assume that the phantom sound is generated either through increased spontaneous activity of neurons in the auditory brain, or through pathological temporal firing patterns of the spontaneous neuronal discharge, or a combination of both factors. With this in mind, Tass and colleagues recently tested a number of temporally patterned acoustic stimulation strategies in a proof of concept study. Potential therapeutic sound regimes were derived according to a paradigm assumed to disrupt hypersynchronous neuronal activity, and promote plasticity mechanisms that stabilize a state of asynchronous spontaneous activity. This would correspond to a permanent reduction of tinnitus. The proof of concept study, conducted in Germany, confirmed the safety of the acoustic stimuli for use in tinnitus, and exploratory results indicated modulation of tinnitus-related pathological synchronous activity with potential therapeutic benefit. The most effective stimulation paradigm is now in clinical use as a sound therapy device, the acoustic coordinated reset (CR®) neuromodulation (Adaptive Neuromodulation GmbH (ANM), Köln, Germany). To measure the efficacy of CR® neuromodulation, we devised a powered, two-center, randomized controlled trial (RCT) compliant with the reporting standards defined in the Consolidated Standards of Reporting Trials (CONSORT) Statement. The RCT design also addresses the recent call for international standards within the tinnitus community for high-quality clinical trials. The design uses a between-subjects comparison with minimized allocation of participants to treatment and placebo groups. A minimization approach was selected to ensure that the two groups are balanced with respect to age, gender, hearing, and baseline tinnitus severity. The protocol ensures double blinding, with crossover of the placebo group to receive the proprietary intervention after 12 weeks. 
The primary endpoints are the pre- and post-treatment measures that provide the primary measures of efficacy, namely a validated and sensitive questionnaire measure of the functional impact of tinnitus. The trial is also designed to capture secondary changes in tinnitus handicap, quality (pitch, loudness, bandwidth), and changes in tinnitus-related pathological synchronous brain activity using electroencephalography (EEG). This RCT was designed to provide a confident high-level estimate of the efficacy of sound therapy using CR® neuromodulation compared to a well-matched placebo intervention, and uniquely in terms of sound therapy, examine the physiological effects of the intervention against its putative mechanism of action. ClinicalTrials.gov, NCT01541969.

  3. Interior sound field control using generalized singular value decomposition in the frequency domain.

    PubMed

    Pasco, Yann; Gauthier, Philippe-Aubert; Berry, Alain; Moreau, Stéphane

    2017-01-01

    The problem of controlling a sound field inside a region surrounded by acoustic control sources is considered. Inspired by the Kirchhoff-Helmholtz integral, the use of double-layer source arrays allows such control while avoiding modification of the external sound field by the control sources, by approximating the sources as monopole and radial dipole transducers. However, the practical implementation of the Kirchhoff-Helmholtz integral in physical space leads to large numbers of control sources and error sensors along with excessive controller complexity in three dimensions. The present study investigates the potential of the Generalized Singular Value Decomposition (GSVD) to reduce the controller complexity and separate the effect of control sources on the interior and exterior sound fields, respectively. A proper truncation of the singular basis provided by the GSVD factorization is shown to lead to effective cancellation of the interior sound field at frequencies below the spatial Nyquist frequency of the control source array while leaving the exterior sound field almost unchanged. Proofs of concept are provided for interior problems by simulations in a free-field scenario with circular arrays and in a reflective environment with square arrays.
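    A much simpler relative of the GSVD approach above is plain least-squares pressure matching: given a transfer matrix from control sources to interior error microphones and the primary pressure at those microphones, solve for the source strengths that minimize the interior field. This sketch is not the paper's method; the array sizes and the random transfer matrix and primary field are invented purely for illustration.

```python
import numpy as np

# Toy interior-cancellation problem: Z maps control source strengths q to
# complex pressures at interior error microphones; p is the primary field.
rng = np.random.default_rng(0)
n_mics, n_srcs = 8, 4
Z = rng.standard_normal((n_mics, n_srcs)) + 1j * rng.standard_normal((n_mics, n_srcs))
p = rng.standard_normal(n_mics) + 1j * rng.standard_normal(n_mics)

# Least-squares optimum: q minimizing ||p + Z q||, i.e. q = -pinv(Z) @ p,
# the classic multichannel active noise control solution.
q = -np.linalg.pinv(Z) @ p
residual = p + Z @ q                # pressure left at the microphones

print(np.linalg.norm(residual) < np.linalg.norm(p))   # True: field reduced
```

    With more microphones than sources (8 vs. 4 here) the residual is nonzero but strictly smaller than the uncontrolled field; the GSVD machinery in the paper goes further by also constraining what the sources do to the exterior field.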

  4. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS)

    PubMed Central

    Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory

    2017-01-01

    Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices) and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000 Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. 
Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to conscious phantom sound perception and potentially serve as an objective measure of central neural pathology. PMID:28604786

  5. Knowledge and Practice of Nursing Staff towards Infection Control Measures in the Palestinian Hospitals

    ERIC Educational Resources Information Center

    Fashafsheh, Imad; Ayed, Ahmad; Eqtait, Faeda; Harazneh, Lubna

    2015-01-01

    Health care professionals are constantly exposed to microorganisms. Many of which can cause serious or even lethal infections. Nurses in particular are often exposed to various infections during the course of carrying out their nursing activities. Therefore nurses should have sound knowledge and strict adherence to infection control practice. Aim…

  6. Active control of fan noise

    NASA Astrophysics Data System (ADS)

    Yamasaki, Nobuhiko; Tajima, Hirotoshi

    2008-06-01

    In wake-rotor interaction fan noise, a number of interacting modes are generated at the blade passing frequency (BPF) and its harmonics, prescribed by the numbers of stator and rotor blades, among other factors. In the present study, suppression of the dominant mode is attempted using secondary sound from loudspeaker actuators. One of the novel features of the present system is the adoption of a control board with Field Programmable Gate Array (FPGA) hardware and LabVIEW software to synchronize the circumferentially installed loudspeaker actuators with the relative location of the rotating blades at arbitrary fan rotational speeds. The experiments were conducted at three rotational speeds: 2004, 3150, and 4002 rpm. A reduction in the sound pressure level (SPL) was observed at all three rotational speeds. The SPL at the BPF was reduced by approximately 13 dB in the 2004 rpm case, but smaller reductions were attained in the other cases, probably due to the inefficiency of the loudspeaker actuators at high frequencies.
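    The tone targeted by the secondary loudspeakers is set by the blade passing frequency, BPF = harmonic × blade count × shaft speed in rev/s. The blade count below is hypothetical (the abstract does not give one), so the resulting frequencies are purely illustrative.

```python
# Blade passing frequency sketch: BPF = harmonic x blade count x rev/s.
def bpf(n_blades, rpm, harmonic=1):
    """Blade passing frequency in Hz for a given harmonic."""
    return harmonic * n_blades * rpm / 60.0

# Hypothetical 16-blade rotor at the three test speeds from the abstract.
for rpm in (2004, 3150, 4002):
    print(rpm, round(bpf(16, rpm), 1))   # 534.4, 840.0, 1067.2 Hz
```

    The higher speeds push the BPF (and its harmonics) upward, which is consistent with the abstract's remark that actuator efficiency at high frequencies limited the reduction there.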

  7. A study of methods to predict and measure the transmission of sound through the walls of light aircraft

    NASA Technical Reports Server (NTRS)

    Bernhard, R. J.; Bolton, J. S.; Gardner, B.; Mickol, J.; Mollo, C.; Bruer, C.

    1986-01-01

    Progress was made in the following areas: development of a numerical/empirical noise source identification procedure using boundary element techniques; identification of structure-borne noise paths using structural intensity and finite element methods; development of a design optimization numerical procedure to be used to study active noise control in three-dimensional geometries; measurement of dynamic properties of acoustical foams and incorporation of these properties in models governing three-dimensional wave propagation in foams; and structure-borne sound path identification by use of the Wigner distribution.

  8. Acoustic processing method for MS/MS experiments

    NASA Technical Reports Server (NTRS)

    Whymark, R. R.

    1973-01-01

    Acoustical methods in which intense sound beams can be used to control the position of objects are considered. The position control arises from the radiation force experienced by a body placed in a sound field. A description of the special properties of intense sound fields useful for position control is followed by a discussion of the more obvious methods of position control, namely the use of multiple sound beams. A new type of acoustic position control device is reported that has the advantages of simplicity and reliability and utilizes only a single sound beam. Finally, a description is given of an experimental single-beam levitator and the results obtained in a number of key levitation experiments.

  9. Selective attention to sound location or pitch studied with fMRI.

    PubMed

    Degerman, Alexander; Rinne, Teemu; Salmi, Juha; Salonen, Oili; Alho, Kimmo

    2006-03-10

    We used 3-T functional magnetic resonance imaging to compare the brain mechanisms underlying selective attention to sound location and pitch. In different tasks, the subjects (N = 10) attended to a designated sound location or pitch or to pictures presented on the screen. In the Attend Location conditions, the sound location varied randomly (left or right), while the pitch was kept constant (high or low). In the Attend Pitch conditions, sounds of randomly varying pitch (high or low) were presented at a constant location (left or right). Both attention to location and attention to pitch produced enhanced activity (in comparison with activation caused by the same sounds when attention was focused on the pictures) in widespread areas of the superior temporal cortex. Attention to either sound feature also activated prefrontal and inferior parietal cortical regions. These activations were stronger during attention to location than during attention to pitch. Attention to location but not to pitch produced a significant increase of activation in the premotor/supplementary motor cortices of both hemispheres and in the right prefrontal cortex, while no area showed activity specifically related to attention to pitch. The present results suggest some differences in the attentional selection of sounds on the basis of their location and pitch consistent with the suggested auditory "what" and "where" processing streams.

  10. How to Design a Quiet School.

    ERIC Educational Resources Information Center

    The American School Board Journal, 1968

    1968-01-01

    Problems of sound insulation and control, particularly in areas with a high level of exterior noise, resulted in this study to determine approaches to sound control in school construction. After a general discussion of noise problems in school districts and teaching situations, two examples of sound control solutions are…

  11. An integrated system for dynamic control of auditory perspective in a multichannel sound field

    NASA Astrophysics Data System (ADS)

    Corey, Jason Andrew

    An integrated system providing dynamic control of sound source azimuth, distance and proximity to a room boundary within a simulated acoustic space is proposed for use in multichannel music and film sound production. The system has been investigated, implemented, and psychoacoustically tested within the ITU-R BS.775 recommended five-channel (3/2) loudspeaker layout. The work brings together physical and perceptual models of room simulation to allow dynamic placement of virtual sound sources at any location of a simulated space within the horizontal plane. The control system incorporates a number of modules including simulated room modes, "fuzzy" sources, and tracking early reflections, whose parameters are dynamically changed according to sound source location within the simulated space. The control functions of the basic elements, derived from theories of perception of a source in a real room, have been carefully tuned to provide efficient, effective, and intuitive control of a sound source's perceived location. Seven formal listening tests were conducted to evaluate the effectiveness of the algorithm design choices. The tests evaluated: (1) loudness calibration of multichannel sound images; (2) the effectiveness of distance control; (3) the resolution of distance control provided by the system; (4) the effectiveness of the proposed system when compared to a commercially available multichannel room simulation system in terms of control of source distance and proximity to a room boundary; (5) the role of tracking early reflection patterns on the perception of sound source distance; (6) the role of tracking early reflection patterns on the perception of lateral phantom images. The listening tests confirm the effectiveness of the system for control of perceived sound source distance, proximity to room boundaries, and azimuth, through fine, dynamic adjustment of parameters according to source location. All of the parameters are grouped and controlled together to create a perceptually strong impression of source location and movement within a simulated space.

  12. Activation of auditory cortex by anticipating and hearing emotional sounds: an MEG study.

    PubMed

    Yokosawa, Koichi; Pamilo, Siina; Hirvenkari, Lotta; Hari, Riitta; Pihko, Elina

    2013-01-01

    To study how auditory cortical processing is affected by anticipating and hearing of long emotional sounds, we recorded auditory evoked magnetic fields with a whole-scalp MEG device from 15 healthy adults who were listening to emotional or neutral sounds. Pleasant, unpleasant, or neutral sounds, each lasting for 6 s, were played in a random order, preceded by 100-ms cue tones (0.5, 1, or 2 kHz) 2 s before the onset of the sound. The cue tones, indicating the valence of the upcoming emotional sounds, evoked typical transient N100m responses in the auditory cortex. During the rest of the anticipation period (until the beginning of the emotional sound), auditory cortices of both hemispheres generated slow shifts of the same polarity as N100m. During anticipation, the relative strengths of the auditory-cortex signals depended on the upcoming sound: towards the end of the anticipation period the activity became stronger when the subject was anticipating emotional rather than neutral sounds. During the actual emotional and neutral sounds, sustained fields were predominant in the left hemisphere for all sounds. The measured DC MEG signals during both anticipation and hearing of emotional sounds implied that following the cue that indicates the valence of the upcoming sound, the auditory-cortex activity is modulated by the upcoming sound category during the anticipation period.

  14. Affective Evaluations of and Reactions to Exterior and Interior Vehicle Auditory Quality

    NASA Astrophysics Data System (ADS)

    Västfjäll, D.; Gulbol, M.-A.; Kleiner, M.; Gärling, T.

    2002-08-01

    Affective reactions to and evaluations of auditory stimuli are fundamental components of human perception. In three experiments, participants rated their affective reactions (how pleasant I feel) and preferences for these affective reactions (how much I like the way I feel) as well as affective evaluations (how pleasant the sound is) to interior and exterior binaurally recorded vehicle sounds varying in physical properties. Consistent with previous research, it was found that the orthogonal affect dimensions of valence (unpleasant-pleasant) and arousal or activation (deactivation-activation) discriminated between affective reactions induced by the different qualities of the sounds. Moreover, preference for affective reactions was related to both valence and activation. Affective evaluations (powerful-powerless/passive-active and unpleasant-pleasant) correlated significantly with affective reactions to the same sounds in both within-subjects and between-subjects designs. Standard sound quality metrics derived from the sounds correlated, however, poorly with the affective ratings of interior sounds and only moderately with affective ratings of exterior sounds. Taken together, the results suggest that affect is an important component in product auditory quality optimization.

  15. Dissociating verbal and nonverbal audiovisual object processing.

    PubMed

    Hocking, Julia; Price, Cathy J

    2009-02-01

    This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.

  16. Letter names and phonological awareness help children to learn letter-sound relations.

    PubMed

    Cardoso-Martins, Cláudia; Mesquita, Tereza Cristina Lara; Ehri, Linnea

    2011-05-01

    Two experimental training studies with Portuguese-speaking preschoolers in Brazil were conducted to investigate whether children benefit from letter name knowledge and phonological awareness in learning letter-sound relations. In Experiment 1, two groups of children were compared. The experimental group was taught the names of letters whose sounds occur either at the beginning (e.g., the letter /be/) or in the middle (e.g., the letter /'eli/) of the letter name. The control group was taught the shapes of the letters but not their names. Then both groups were taught the sounds of the letters. Results showed an advantage for the experimental group, but only for beginning-sound letters. Experiment 2 investigated whether training in phonological awareness could boost the learning of letter sounds, particularly middle-sound letters. In addition to learning the names of beginning- and middle-sound letters, children in the experimental group were taught to categorize words according to rhyme and alliteration, whereas controls were taught to categorize the same words semantically. All children were then taught the sounds of the letters. Results showed that children who were given phonological awareness training found it easier to learn letter sounds than controls. This was true for both types of letters, but especially for middle-sound letters. Copyright © 2011. Published by Elsevier Inc.

  17. Cortical Activation Patterns Evoked by Temporally Asymmetric Sounds and Their Modulation by Learning

    PubMed Central

    Horikawa, Junsei

    2017-01-01

    When complex sounds are reversed in time, the original and reversed versions are perceived differently in spectral and temporal dimensions despite their identical duration and long-term spectrum-power profiles. Spatiotemporal activation patterns evoked by temporally asymmetric sound pairs demonstrate how the temporal envelope determines the readout of the spectrum. We examined the patterns of activation evoked by a temporally asymmetric sound pair in the primary auditory field (AI) of anesthetized guinea pigs and determined how discrimination training modified these patterns. Optical imaging using a voltage-sensitive dye revealed that a forward ramped-down natural sound (F) consistently evoked much stronger responses than its time-reversed, ramped-up counterpart (revF). The spatiotemporal maximum peak (maxP) of F-evoked activation was always greater than that of revF-evoked activation, and these maxPs were significantly separated within the AI. Although discrimination training did not affect the absolute magnitude of these maxPs, the revF-to-F ratio of the activation peaks calculated at the location where hemispheres were maximally activated (i.e., F-evoked maxP) was significantly smaller in the trained group. The F-evoked activation propagated across the AI along the temporal axis to the ventroanterior belt field (VA), with the local activation peak within the VA being significantly larger in the trained than in the naïve group. These results suggest that the innate network is more responsive to natural sounds of ramped-down envelopes than their time-reversed, unnatural sounds. The VA belt field activation might play an important role in emotional learning of sounds through its connections with amygdala. PMID:28451640

  18. Sound representation in higher language areas during language generation

    PubMed Central

    Magrassi, Lorenzo; Aromataris, Giuseppe; Cabrini, Alessandro; Annovazzi-Lodi, Valerio; Moro, Andrea

    2015-01-01

    How language is encoded by neural activity in the higher-level language areas of humans is still largely unknown. We investigated whether the electrophysiological activity of Broca’s area correlates with the sound of the utterances produced. During speech perception, the electric cortical activity of the auditory areas correlates with the sound envelope of the utterances. In our experiment, we compared the electrocorticogram recorded during awake neurosurgical operations in Broca’s area and in the dominant temporal lobe with the sound envelope of single words versus sentences read aloud or mentally by the patients. Our results indicate that the electrocorticogram correlates with the sound envelope of the utterances, starting before any sound is produced and even in the absence of speech, when the patient is reading mentally. No correlations were found when the electrocorticogram was recorded in the superior parietal gyrus, an area not directly involved in language generation, or in Broca’s area when the participants were executing a repetitive motor task, which did not include any linguistic content, with their dominant hand. The distribution of suprathreshold correlations across frequencies of cortical activities varied whether the sound envelope derived from words or sentences. Our results suggest the activity of language areas is organized by sound when language is generated before any utterance is produced or heard. PMID:25624479

  19. Effect of nature-based sound therapy on agitation and anxiety in coronary artery bypass graft patients during the weaning of mechanical ventilation: A randomised clinical trial.

    PubMed

    Aghaie, Bahman; Rejeh, Nahid; Heravi-Karimooi, Majideh; Ebadi, Abbas; Moradian, Seyed Tayeb; Vaismoradi, Mojtaba; Jasper, Melanie

    2014-04-01

    Weaning from mechanical ventilation is a frequent nursing activity in critical care. Nature-based sound as a non-pharmacological and nursing intervention effective in other contexts may be an efficient approach to alleviating anxiety, agitation and adverse effects of sedative medication in patients undergoing weaning from mechanical ventilation. This study identified the effect of nature-based sound therapy on agitation and anxiety in coronary artery bypass graft patients during weaning from mechanical ventilation. A randomised clinical trial design was used. 120 coronary artery bypass graft patients aged 45-65 years undergoing weaning from mechanical ventilation were randomly assigned to intervention and control groups. Patients in the intervention group listened to nature-based sounds through headphones; the control group had headphones with no sound. Haemodynamic variables, anxiety levels and agitation were assessed using the Faces Anxiety Scale and Richmond Agitation Sedation Scale, respectively. Patients in both groups had vital signs recorded after the first trigger, at 20 min intervals throughout the procedure, immediately after the procedure, 20 min after extubation, and 30 min after extubation. Data were collected over 5 months from December 2012 to April 2013. The intervention group had significantly lower anxiety and agitation levels than the control group. Regarding haemodynamic variables, a significant time trend and interaction was reported between time and group (p<0.001). A significant difference was also found between the anxiety (p<0.002) and agitation (p<0.001) scores in the two groups. Nature-based sound can provide an effective method of decreasing potential adverse haemodynamic responses arising from anxiety and agitation in weaning from mechanical ventilation in coronary artery bypass graft patients. Nurses can incorporate this intervention as a non-pharmacological intervention into the daily care of patients undergoing weaning from mechanical ventilation in order to reduce their anxiety and agitation. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. The effect of speaking rate on serial-order sound-level errors in normal healthy controls and persons with aphasia.

    PubMed

    Fossett, Tepanta R D; McNeil, Malcolm R; Pratt, Sheila R; Tompkins, Connie A; Shuster, Linda I

    Although many speech errors can be generated at either a linguistic or motoric level of production, phonetically well-formed sound-level serial-order errors are generally assumed to result from disruption of phonologic encoding (PE) processes. An influential model of PE (Dell, 1986; Dell, Burger & Svec, 1997) predicts that speaking rate should affect the relative proportion of these serial-order sound errors (anticipations, perseverations, exchanges). These predictions have been extended to, and have special relevance for, persons with aphasia (PWA) because of the increased frequency with which speech errors occur and because their localization within the functional linguistic architecture may help in diagnosis and treatment. Supporting evidence regarding the effect of speaking rate on phonological encoding has been provided by studies using young normal language (NL) speakers and computer simulations. Limited data exist for older NL users and no group data exist for PWA. This study tested the phonologic encoding properties of Dell's model of speech production (Dell, 1986; Dell et al., 1997), which predicts that increasing speaking rate affects the relative proportion of serial-order sound errors (i.e., anticipations, perseverations, and exchanges). The effects of speech rate on the error ratios of anticipation/exchange (AE), anticipation/perseveration (AP) and vocal reaction time (VRT) were examined in 16 normal healthy controls (NHC) and 16 PWA without concomitant motor speech disorders. The participants were recorded performing a phonologically challenging (tongue twister) speech production task at their typical and two faster speaking rates. A significant effect of increased rate was obtained for the AP but not the AE ratio. Significant effects of group and rate were obtained for VRT. Although the significant effect of rate for the AP ratio provided evidence that changes in speaking rate did affect PE, the results failed to support the model-derived predictions regarding the direction of change for error type proportions. The current findings argued for an alternative concept of the role of activation and decay in influencing types of serial-order sound errors. Rather than a slow activation decay rate (Dell, 1986), the results of the current study were more compatible with an alternative explanation of rapid activation decay or slow build-up of residual activation.

  1. The contribution of visual areas to speech comprehension: a PET study in cochlear implants patients and normal-hearing subjects.

    PubMed

    Giraud, Anne Lise; Truy, Eric

    2002-01-01

    Early visual cortex can be recruited by meaningful sounds in the absence of visual information. This occurs in particular in cochlear implant (CI) patients whose dependency on visual cues in speech comprehension is increased. Such cross-modal interaction mirrors the response of early auditory cortex to mouth movements (speech reading) and may reflect the natural expectancy of the visual counterpart of sounds, lip movements. Here we pursue the hypothesis that visual activations occur specifically in response to meaningful sounds. We performed PET in both CI patients and controls, while subjects listened either to their native language or to a completely unknown language. A recruitment of early visual cortex, the left posterior inferior temporal gyrus (ITG) and the left superior parietal cortex was observed in both groups. While no further activation occurred in the group of normal-hearing subjects, CI patients additionally recruited the right perirhinal/fusiform and mid-fusiform, the right temporo-occipito-parietal (TOP) junction and the left inferior prefrontal cortex (LIPF, Broca's area). This study confirms a participation of visual cortical areas in semantic processing of speech sounds. Observation of early visual activation in normal-hearing subjects shows that auditory-to-visual cross-modal effects can also be recruited under natural hearing conditions. In cochlear implant patients, speech activates the mid-fusiform gyrus in the vicinity of the so-called face area. This suggests that specific cross-modal interaction involving advanced stages in the visual processing hierarchy develops after cochlear implantation and may be the correlate of increased usage of lip-reading.

  2. Discriminating between auditory and motor cortical responses to speech and non-speech mouth sounds

    PubMed Central

    Agnew, Z.K.; McGettigan, C.; Scott, S.K.

    2012-01-01

    Several perspectives on speech perception posit a central role for the representation of articulations in speech comprehension, supported by evidence for premotor activation when participants listen to speech. However no experiments have directly tested whether motor responses mirror the profile of selective auditory cortical responses to native speech sounds, or whether motor and auditory areas respond in different ways to sounds. We used fMRI to investigate cortical responses to speech and non-speech mouth (ingressive click) sounds. Speech sounds activated bilateral superior temporal gyri more than other sounds, a profile not seen in motor and premotor cortices. These results suggest that there are qualitative differences in the ways that temporal and motor areas are activated by speech and click sounds: anterior temporal lobe areas are sensitive to the acoustic/phonetic properties while motor responses may show more generalised responses to the acoustic stimuli. PMID:21812557

  3. The Role of Audio-Visual Feedback in a Thought-Based Control of a Humanoid Robot: A BCI Study in Healthy and Spinal Cord Injured People.

    PubMed

    Tidoni, Emmanuele; Gergondet, Pierre; Fusco, Gabriele; Kheddar, Abderrahmane; Aglioti, Salvatore M

    2017-06-01

    The efficient control of our body and successful interaction with the environment are possible through the integration of multisensory information. Brain-computer interface (BCI) may allow people with sensorimotor disorders to actively interact in the world. In this study, visual information was paired with auditory feedback to improve the BCI control of a humanoid surrogate. Healthy and spinal cord injured (SCI) people were asked to embody a humanoid robot and complete a pick-and-place task by means of a visual evoked potentials BCI system. Participants observed the remote environment from the robot's perspective through a head mounted display. Human-footsteps and computer-beep sounds were used as synchronous/asynchronous auditory feedback. Healthy participants achieved better placing accuracy when listening to human footstep sounds relative to a computer-generated sound. SCI people demonstrated more difficulty in steering the robot during asynchronous auditory feedback conditions. Importantly, subjective reports highlighted that the BCI mask overlaying the display did not limit the observation of the scenario and the feeling of being in control of the robot. Overall, the data seem to suggest that sensorimotor-related information may improve the control of external devices. Further studies are required to understand how the contribution of residual sensory channels could improve the reliability of BCI systems.

  4. A study of methods to predict and measure the transmission of sound through the walls of light aircraft

    NASA Technical Reports Server (NTRS)

    Bernhard, R. J.; Bolton, J. S.

    1988-01-01

    The objectives are: measurement of dynamic properties of acoustical foams and incorporation of these properties in models governing three-dimensional wave propagation in foams; tests to measure sound transmission paths in the HP137 Jetstream 3; and formulation of a finite element energy model. In addition, the effort to develop a numerical/empirical noise source identification technique was completed. The investigation of a design optimization technique for active noise control was also completed. Monthly progress reports which detail the progress made toward each of the objectives are summarized.

  5. Sound localization and auditory response capabilities in round goby (Neogobius melanostomus)

    NASA Astrophysics Data System (ADS)

    Rollo, Audrey K.; Higgs, Dennis M.

    2005-04-01

    A fundamental role in vertebrate auditory systems is determining the direction of a sound source. While fish show directional responses to sound, sound localization remains in dispute. The species used in the current study, Neogobius melanostomus (round goby), uses sound in reproductive contexts, with both male and female gobies showing directed movement towards a calling male. A two-choice laboratory experiment was used (active versus quiet speaker) to analyze the behavior of gobies in response to sound stimuli. When conspecific male spawning sounds were played, gobies moved in a direct path to the active speaker, suggesting true localization of sound. Of the animals that responded to conspecific sounds, 85% of the females and 66% of the males moved directly to the sound source. Auditory playback of natural and synthetic sounds showed differential behavioral specificity. Of gobies that responded, 89% were attracted to the speaker playing Padogobius martensii sounds, 87% to a 100 Hz tone, 62% to white noise, and 56% to Gobius niger sounds. Swimming speed and mean path angle to the speaker will also be presented. Results suggest a strong localization of the round goby to a sound source, with some differential sound specificity.

  6. Relations among pure-tone sound stimuli, neural activity, and the loudness sensation

    NASA Technical Reports Server (NTRS)

    Howes, W. L.

    1972-01-01

    Both the physiological and psychological responses to pure-tone sound stimuli are used to derive formulas which: (1) relate the loudness, loudness level, and sound-pressure level of pure tones; (2) apply continuously over most of the acoustic regime, including the loudness threshold; and (3) contain no undetermined coefficients. Some of the formulas are fundamental for calculating the loudness of any sound. Power-law formulas relating the pure-tone sound stimulus, neural activity, and loudness are derived from published data.
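
    The power-law relations the abstract refers to are in the spirit of the standard sone/phon loudness scales. As a hedged illustration (these are the textbook definitions, not the report's own coefficient-free formulas): loudness in sones doubles for every 10-phon increase in loudness level, with 40 phons defined as 1 sone.

```python
import math

def sones_from_phons(phons):
    """Stevens' sone scale: 1 sone at 40 phons, doubling per +10 phons."""
    return 2.0 ** ((phons - 40.0) / 10.0)

def phons_from_sones(sones):
    """Inverse mapping back to loudness level in phons."""
    return 40.0 + 10.0 * math.log2(sones)
```

    For a 1 kHz pure tone, the loudness level in phons equals the sound-pressure level in dB by definition, so these two helpers also connect SPL to perceived loudness in that special case.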

  7. [Functional anatomy of the cochlear nerve and the central auditory system].

    PubMed

    Simon, E; Perrot, X; Mertens, P

    2009-04-01

    The auditory pathways are a system of afferent fibers (through the cochlear nerve) and efferent fibers (through the vestibular nerve), which are not limited to a simple information transmitting system but create a veritable integration of the sound stimulus at the different levels, by analyzing its three fundamental elements: frequency (pitch), intensity, and spatial localization of the sound source. From the cochlea to the primary auditory cortex, the auditory fibers are organized anatomically in relation to the characteristic frequency of the sound signal that they transmit (tonotopy). Coding the intensity of the sound signal is based on temporal recruitment (the number of action potentials) and spatial recruitment (the number of inner hair cells recruited near the cell of the frequency that is characteristic of the stimulus). Because of binaural hearing, commissural pathways at each level of the auditory system and integration of the phase shift and the difference in intensity between signals coming from both ears, spatial localization of the sound source is possible. Finally, through the efferent fibers in the vestibular nerve, higher centers exercise control over the activity of the cochlea and adjust the peripheral hearing organ to external sound conditions, thus protecting the auditory system or increasing sensitivity by the attention given to the signal.
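
    The binaural cues mentioned above (phase shift and intensity difference between the ears) are often summarized with a simple path-difference model. The sketch below uses the textbook approximation ITD ≈ d·sin(θ)/c with a nominal head width; it is an idealization for illustration, not a formula from this review.

```python
import math

def itd_seconds(azimuth_deg, head_width_m=0.175, speed_of_sound=343.0):
    """Interaural time difference under the simple path-difference model:
    the extra travel distance to the far ear is ~ d * sin(azimuth)."""
    return head_width_m * math.sin(math.radians(azimuth_deg)) / speed_of_sound
```

    A source straight ahead gives zero ITD; a source at 90° azimuth gives roughly 0.5 ms, near the upper limit of the cue for a human-sized head.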

  8. Fibro-vascular coupling in the control of cochlear blood flow.

    PubMed

    Dai, Min; Shi, Xiaorui

    2011-01-01

    Transduction of sound in the cochlea is metabolically demanding. The lateral wall and hair cells are critically vulnerable to hypoxia, especially at high sound levels, and tight control over cochlear blood flow (CBF) is a physiological necessity. Yet despite the importance of CBF for hearing, consensus on what mechanisms are involved has not been obtained. We report on a local control mechanism for regulating inner ear blood flow involving fibrocyte signaling. Fibrocytes in the super-strial region are spatially distributed near pre-capillaries of the spiral ligament of the albino guinea pig cochlear lateral wall, as shown in transmission electron microscope and confocal images. Immunohistochemical techniques reveal the inter-connected fibrocytes to be positive for Na+/K+ ATPase β1 and S100. The connected fibrocytes display more Ca(2+) signaling than other cells in the cochlear lateral wall as indicated by fluorescence of a Ca(2+) sensor, fluo-4. Elevation of Ca(2+) in fibrocytes, induced by photolytic uncaging of the divalent ion chelator o-nitrophenyl EGTA, results in propagation of a Ca(2+) signal to neighboring vascular cells and vasodilation in capillaries. Of more physiological significance, fibrocyte to vascular cell coupled signaling was found to mediate the sound-stimulated increase in cochlear blood flow (CBF). Cyclooxygenase-1 (COX-1) was required for capillary dilation. The findings provide the first evidence that signaling between fibrocytes and vascular cells modulates CBF and is a key mechanism for meeting the cellular metabolic demand of increased sound activity.

  9. PAH bioconcentration in Mytilus sp from Sinclair Inlet, WA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frazier, J.; Young, D.; Ozretich, R.

    1995-12-31

    Approximately 20 polynuclear aromatic hydrocarbons (PAH) were measured by GC/MS in seawater and whole soft tissues of the intertidal mussel Mytilus sp. collected in July 1991 within and around Puget Sound's Sinclair Inlet. Low variability was observed in the water concentrations collected over three days at control sites, yielding reliable values for the exposure levels experienced by this bioindicator mollusk. Mean water concentrations of acenaphthene, phenanthrene, and fluoranthene in the control region were 2.7 ± 0.8, 2.8 ± 0.8, and 3.1 ± 0.7 ng/liter, respectively. Levels measured near sites of vessel activity were higher but much more variable; this reduced the reliability of the tissue/water bioconcentration factors (BCF) obtained from these samples. An empirical model relating values of Log BCF and Log Kow for the control zone samples supports the utility of this estuarine bioindicator for monitoring general levels of PAH in nearshore surface waters.
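
    As a sketch of the arithmetic behind such a model (the helper names and the regression data below are hypothetical, not values from this study), the bioconcentration factor is simply the tissue/water concentration ratio, and the empirical model is an ordinary least-squares line through (log Kow, log BCF) pairs:

```python
import math

def bioconcentration_factor(c_tissue, c_water):
    """BCF as the tissue/water concentration ratio (units must be
    consistent, e.g. ng/kg tissue over ng/L water, giving L/kg)."""
    return c_tissue / c_water

def fit_log_bcf_vs_log_kow(pairs):
    """Ordinary least-squares fit of log10(BCF) = slope*log10(Kow) + b."""
    xs = [math.log10(kow) for kow, _ in pairs]
    ys = [math.log10(bcf) for _, bcf in pairs]
    n = len(pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical (Kow, BCF) pairs -- illustrative only, not study data:
pairs = [(10**3.9, 10**2.8), (10**4.5, 10**3.3), (10**5.2, 10**3.9)]
slope, intercept = fit_log_bcf_vs_log_kow(pairs)
```

    A slope near 1 on the log-log plot would indicate that uptake tracks octanol-water partitioning closely, which is the usual justification for using mussels as passive samplers of dissolved PAH.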

  10. Human Auditory and Adjacent Nonauditory Cerebral Cortices Are Hypermetabolic in Tinnitus as Measured by Functional Near-Infrared Spectroscopy (fNIRS)

    PubMed Central

    Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul

    2016-01-01

    Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception. PMID:27042360

  11. Active control of wake/blade-row interaction noise through the use of blade surface actuators

    NASA Technical Reports Server (NTRS)

    Kousen, Kenneth A.; Verdon, Joseph M.

    1993-01-01

    A combined analytical/computational approach for controlling the noise generated by wake/blade-row interaction through the use of anti-sound actuators on the blade surfaces is described. A representative two-dimensional section of a fan stage, composed of an upstream fan rotor and a downstream fan exit guide vane (FEGV), is examined. An existing model for the wakes generated by the rotor is analyzed to provide realistic magnitudes for the vortical excitations imposed at the inlet to the FEGV. The acoustic response of the FEGV is determined at multiples of the blade passing frequency (BPF) by using the linearized unsteady flow analysis, LINFLO. Acoustic field contours are presented at each multiple of BPF, illustrating the generated acoustic response disturbances. Anti-sound is then provided by placing oscillating control surfaces, whose lengths and locations are specified arbitrarily, on the blades. An analysis is then conducted to determine the complex amplitudes required for the control surface motions to best reduce the noise. It is demonstrated that if the number of acoustic response modes to be controlled is equal to the number of available independent control surfaces, complete noise cancellation can be achieved. A weighted least squares minimization procedure for the control equations is given for cases in which the number of acoustic modes exceeds the number of available control surfaces. The effectiveness of the control is measured by the magnitude of a propagating acoustic response vector, which is related to the circumferentially averaged sound pressure level (SPL), and is minimized by a standard least-squares minimization procedure.

  12. Active control of wake/blade-row interaction noise through the use of blade surface actuators

    NASA Astrophysics Data System (ADS)

    Kousen, Kenneth A.; Verdon, Joseph M.

    1993-12-01

    A combined analytical/computational approach for controlling the noise generated by wake/blade-row interaction through the use of anti-sound actuators on the blade surfaces is described. A representative two-dimensional section of a fan stage, composed of an upstream fan rotor and a downstream fan exit guide vane (FEGV), is examined. An existing model for the wakes generated by the rotor is analyzed to provide realistic magnitudes for the vortical excitations imposed at the inlet to the FEGV. The acoustic response of the FEGV is determined at multiples of the blade passing frequency (BPF) by using the linearized unsteady flow analysis, LINFLO. Acoustic field contours are presented at each multiple of BPF, illustrating the generated acoustic response disturbances. Anti-sound is then provided by placing oscillating control surfaces, whose lengths and locations are specified arbitrarily, on the blades. An analysis is then conducted to determine the complex amplitudes required for the control surface motions to best reduce the noise. It is demonstrated that if the number of acoustic response modes to be controlled is equal to the number of available independent control surfaces, complete noise cancellation can be achieved. A weighted least squares minimization procedure for the control equations is given for cases in which the number of acoustic modes exceeds the number of available control surfaces. The effectiveness of the control is measured by the magnitude of a propagating acoustic response vector, which is related to the circumferentially averaged sound pressure level (SPL), and is minimized by a standard least-squares minimization procedure.
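
    The control strategy described in these records can be sketched in a few lines of linear algebra. This is a minimal illustration, not the LINFLO analysis: assume each control surface contributes acoustic mode amplitudes linearly, stack those contributions into a complex matrix, and solve for the surface motions that cancel the uncontrolled response in the least-squares sense:

```python
import numpy as np

# Assume each control surface k produces acoustic mode amplitudes A[:, k]
# per unit motion, and p holds the uncontrolled (wake-generated) mode
# amplitudes. Choosing complex surface amplitudes x to minimize
# |A x + p|^2 is an ordinary least-squares problem; with as many
# independent surfaces as modes the residual noise is zero.
rng = np.random.default_rng(0)
n_modes, n_surfaces = 3, 3
A = (rng.normal(size=(n_modes, n_surfaces))
     + 1j * rng.normal(size=(n_modes, n_surfaces)))
p = rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)

x, *_ = np.linalg.lstsq(A, -p, rcond=None)
residual = np.linalg.norm(A @ x + p)  # ~0: complete cancellation
```

    With fewer surfaces than modes the same `lstsq` call returns the amplitudes that minimize the residual rather than zeroing it, which mirrors the paper's weighted least-squares fallback.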

  13. Pain empathy in schizophrenia: an fMRI study

    PubMed Central

    Jimenez, Amy M.; Lee, Junghee; Wynn, Jonathan K.; Eisenberger, Naomi I.; Green, Michael F.

    2016-01-01

    Abstract Although it has been proposed that schizophrenia is characterized by impaired empathy, several recent studies found intact neural responses on tasks measuring the affective subdomain of empathy. This study further examined affective empathy in 21 schizophrenia outpatients and 21 healthy controls using a validated pain empathy paradigm with two components: (i) observing videos of people described as medical patients who were receiving a painful sound stimulation treatment; (ii) listening to the painful sounds (to create regions of interest). The observing videos component incorporated experimental manipulations of perspective taking (instructions to imagine ‘Self’ vs ‘Other’ experiencing pain) and cognitive appraisal (information about whether treatment was ‘Effective’ vs ‘Not Effective’). When considering activation across experimental conditions, both groups showed similar dorsal anterior cingulate cortex (dACC) and anterior insula (AI) activation while merely observing others in pain. However, there were group differences associated with perspective taking: controls showed relatively greater dACC and AI activation for the Self vs Other contrast whereas patients showed relatively greater activation in these and additional regions for the Other vs Self contrast. Although patients demonstrated grossly intact neural activity while observing others in pain, they showed more subtle abnormalities when required to toggle between imagining themselves vs others experiencing pain. PMID:26746181

  14. Pain empathy in schizophrenia: an fMRI study.

    PubMed

    Horan, William P; Jimenez, Amy M; Lee, Junghee; Wynn, Jonathan K; Eisenberger, Naomi I; Green, Michael F

    2016-05-01

    Although it has been proposed that schizophrenia is characterized by impaired empathy, several recent studies found intact neural responses on tasks measuring the affective subdomain of empathy. This study further examined affective empathy in 21 schizophrenia outpatients and 21 healthy controls using a validated pain empathy paradigm with two components: (i) observing videos of people described as medical patients who were receiving a painful sound stimulation treatment; (ii) listening to the painful sounds (to create regions of interest). The observing videos component incorporated experimental manipulations of perspective taking (instructions to imagine 'Self' vs 'Other' experiencing pain) and cognitive appraisal (information about whether treatment was 'Effective' vs 'Not Effective'). When considering activation across experimental conditions, both groups showed similar dorsal anterior cingulate cortex (dACC) and anterior insula (AI) activation while merely observing others in pain. However, there were group differences associated with perspective taking: controls showed relatively greater dACC and AI activation for the Self vs Other contrast whereas patients showed relatively greater activation in these and additional regions for the Other vs Self contrast. Although patients demonstrated grossly intact neural activity while observing others in pain, they showed more subtle abnormalities when required to toggle between imagining themselves vs others experiencing pain. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  15. Sounds Activate Visual Cortex and Improve Visual Discrimination

    PubMed Central

    Störmer, Viola S.; Martinez, Antigona; McDonald, John J.; Hillyard, Steven A.

    2014-01-01

    A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. PMID:25031419

  16. Mechanosensory hair cells express two molecularly distinct mechanotransduction channels

    PubMed Central

    Zhao, Bo; Cunningham, Christopher; Harkins-Perry, Sarah; Coste, Bertrand; Ranade, Sanjeev; Zebarjadi, Navid; Beurg, Maryline; Fettiplace, Robert; Patapoutian, Ardem; Mueller, Ulrich

    2016-01-01

    Auditory hair cells contain mechanotransduction channels that rapidly open in response to sound-induced vibrations. Surprisingly, we report here that auditory hair cells contain two molecularly distinct mechanotransduction channels. One ion channel is activated by sound and is responsible for sensory transduction. This sensory transduction channel is expressed in hair-cell stereocilia and previous studies show that its activity is affected by mutations in the genes encoding the transmembrane proteins TMHS/LHFPL5, TMIE and TMC1/2. We show here that the second ion channel is expressed at the apical surface of hair cells and contains the Piezo2 protein. The activity of the Piezo2-dependent channel is controlled by the intracellular Ca2+ concentration and can be recorded following disruption of the sensory transduction machinery or more generally by disruption of the sensory epithelium. We thus conclude that hair cells express two molecularly and functionally distinct mechanotransduction channels with different subcellular distribution. PMID:27893727

  17. Assessment of transfemoral amputees using a passive microprocessor-controlled knee versus an active powered microprocessor-controlled knee for level walking.

    PubMed

    Creylman, Veerle; Knippels, Ingrid; Janssen, Paul; Biesbrouck, Evelyne; Lechler, Knut; Peeraer, Louis

    2016-12-19

    In transfemoral (TF) amputees, the forward propulsion of the prosthetic leg in swing has to be carried out mainly by the hip muscles. With hip strength being the strongest predictor of ambulation ability, an active powered knee joint could have a positive influence, lowering hip loading and contributing to ambulation mobility. To assess this, the gait of four TF amputees was measured for level walking, first while using a passive microprocessor-controlled prosthetic knee (P-MPK) and subsequently while using an active powered microprocessor-controlled prosthetic knee (A-MPK). Furthermore, to assess long-term effects of the use of an A-MPK, a 4-week follow-up case study was performed. The kinetics and kinematics of the gait of the four TF amputees were assessed while walking first with the P-MPK and then with the A-MPK. For one amputee, a follow-up study was performed: he used the A-MPK for 4 weeks, and his gait was measured weekly. The range of motion of the knee was higher on both the prosthetic and the sound leg with the A-MPK compared to the P-MPK. Maximum hip torque (HT) during early stance increased for the prosthetic leg and decreased for the sound leg with the A-MPK compared to the P-MPK. During late stance, the maximum HT decreased for the prosthetic leg. The difference between prosthetic and sound leg in HT disappeared when using the A-MPK. Also, an increase in stance phase duration was observed. The follow-up study showed an increase in confidence with the A-MPK over time. Results suggested that, partially due to an induced knee flexion during stance, HT can be diminished when walking with the A-MPK compared to the P-MPK. The single-case follow-up study showed positive trends, indicating that an adaptation time is beneficial for the A-MPK.

  18. Prevalence of different temporomandibular joint sounds, with emphasis on disc-displacement, in patients with temporomandibular disorders and controls.

    PubMed

    Elfving, Lars; Helkimo, Martti; Magnusson, Tomas

    2002-01-01

    Temporomandibular joint (TMJ) sounds are very common among patients with temporomandibular disorders (TMD), but also in non-patient populations. A variety of causes of TMJ sounds have been suggested, e.g., arthrotic changes in the TMJs, anatomical variations, muscular incoordination, and disc displacement. In the present investigation, the prevalence and type of different joint sounds were registered in 125 consecutive patients with suspected TMD and in 125 matched controls. Some kind of joint sound was recorded in 56% of the TMD patients and in 36% of the controls. The awareness of joint sounds was higher among TMD patients than among controls (88% and 60%, respectively). The most common sound recorded in both groups was reciprocal clicking, indicative of disc displacement, while not a single case fulfilled the criteria for clicking due to muscular incoordination. In the TMD group, women with disc displacement reported sleeping on the stomach significantly more often than women without disc displacement did. An increased general joint laxity was found in 39% of the TMD patients with disc displacement, but in only 9% of the patients with disc displacement in the control group. To conclude, disc displacement is probably the most common cause of TMJ sounds, while the existence of TMJ sounds due to muscular incoordination can be questioned. Furthermore, sleeping on the stomach might be associated with disc displacement, while general joint laxity is probably not a causative factor but a care-seeking factor in patients with disc displacement.

  19. The Shock and Vibration Digest. Volume 18, Number 8

    DTIC Science & Technology

    1986-08-01

    ...the swash plate. This is an active control system... vibration can be reduced by separation of... element program model. Structure-borne sound intensity has been tried earlier on thin-plate constructions in... The agreement is shown to be very good... predicting the response of two displacement-controlled laboratory tests that were used for the determination of the model parameters. 86-1532

  20. Combination sound and vibration isolation curb for rooftop air-handling systems

    NASA Astrophysics Data System (ADS)

    Paige, Thomas S.

    2005-09-01

    This paper introduces the new Model ESSR Sound and Vibration Isolation Curb manufactured by Kinetics Noise Control, Inc. This product was specially designed to address all of the common transmission paths associated with noise and vibration sources from roof-mounted air-handling equipment. These include: reduction of airborne fan noise in supply and return air ductwork, reduction of duct rumble and breakout noise, reduction of direct airborne sound transmission through the roof deck, and reduction of vibration and structure-borne noise transmission to the building structure. Upgrade options are available for increased seismic restraint and wind-load protection. The advantages of this new system over the conventional approach of installing separate duct silencers in the room ceiling space below the rooftop unit are discussed. Several case studies are presented with the emphasis on completed projects pertaining to classrooms and school auditorium applications. Some success has also been achieved by adding active noise control components to improve low-frequency attenuation. This is an innovative product designed for conformance with the new classroom acoustics standard ANSI S12.60.

  1. 78 FR 59095 - Agency Information Collection Activities: Information Collection Renewal; Submission for OMB...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-25

    ... Activities: Information Collection Renewal; Submission for OMB Review; Guidance on Sound Incentive... concerning renewal of an information collection titled, ``Guidance on Sound Incentive Compensation Practices... following collection: Title: Guidance on Sound Incentive Compensation Policies. OMB Number: 1557-0245...

  2. Threshold for onset of injury in Chinook salmon from exposure to impulsive pile driving sounds.

    PubMed

    Halvorsen, Michele B; Casper, Brandon M; Woodley, Christa M; Carlson, Thomas J; Popper, Arthur N

    2012-01-01

    The risk of effects to fishes and other aquatic life from impulsive sound produced by activities such as pile driving and seismic exploration is increasing throughout the world, particularly with the increased exploitation of oceans for energy production. At the same time, there are few data that provide insight into the effects of these sounds on fishes. The goal of this study was to provide quantitative data to define the levels of impulsive sound that could result in the onset of barotrauma to fish. A High Intensity Controlled Impedance Fluid filled wave Tube was developed that enabled laboratory simulation of high-energy impulsive sounds characteristic of aquatic far-field, plane-wave acoustic conditions. The sounds used were based upon the impulsive sounds generated by an impact hammer striking a steel shell pile. Neutrally buoyant juvenile Chinook salmon (Oncorhynchus tshawytscha) were exposed to impulsive sounds and subsequently evaluated for barotrauma injuries. Observed injuries ranged from mild hematomas at the lowest sound exposure levels to organ hemorrhage at the highest sound exposure levels. Frequencies of observed injuries were used to compute a biological response weighted index (RWI) to evaluate the physiological impact of injuries at the different exposure levels. As single strike and cumulative sound exposure levels (SEL(ss) and SEL(cum), respectively) increased, RWI values increased. Based on the results, tissue damage associated with adverse physiological costs occurred when the RWI was greater than 2. In terms of sound exposure levels, an RWI of 2 was achieved for 1920 strikes by 177 dB re 1 µPa(2)⋅s SEL(ss) yielding a SEL(cum) of 210 dB re 1 µPa(2)⋅s, and for 960 strikes by 180 dB re 1 µPa(2)⋅s SEL(ss) yielding a SEL(cum) of 210 dB re 1 µPa(2)⋅s. These metrics define thresholds for onset of injury in juvenile Chinook salmon.
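
    The single-strike and cumulative levels quoted in this record are consistent with simple energy summation over equal strikes, SEL(cum) = SEL(ss) + 10·log10(N). A quick check of the two reported injury-threshold combinations:

```python
import math

def sel_cum(sel_ss_db, n_strikes):
    """Cumulative sound exposure level for n equal-energy strikes:
    SELcum = SELss + 10*log10(n), in dB re 1 uPa^2*s."""
    return sel_ss_db + 10 * math.log10(n_strikes)

print(round(sel_cum(177, 1920)))  # -> 210
print(round(sel_cum(180, 960)))   # -> 210
```

    Note the trade-off this implies: halving the number of strikes allows each strike to be 3 dB higher for the same cumulative exposure.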

  3. Threshold for Onset of Injury in Chinook Salmon from Exposure to Impulsive Pile Driving Sounds

    PubMed Central

    Halvorsen, Michele B.; Casper, Brandon M.; Woodley, Christa M.; Carlson, Thomas J.; Popper, Arthur N.

    2012-01-01

    The risk of effects to fishes and other aquatic life from impulsive sound produced by activities such as pile driving and seismic exploration is increasing throughout the world, particularly with the increased exploitation of oceans for energy production. At the same time, there are few data that provide insight into the effects of these sounds on fishes. The goal of this study was to provide quantitative data to define the levels of impulsive sound that could result in the onset of barotrauma to fish. A High Intensity Controlled Impedance Fluid filled wave Tube was developed that enabled laboratory simulation of high-energy impulsive sounds characteristic of aquatic far-field, plane-wave acoustic conditions. The sounds used were based upon the impulsive sounds generated by an impact hammer striking a steel shell pile. Neutrally buoyant juvenile Chinook salmon (Oncorhynchus tshawytscha) were exposed to impulsive sounds and subsequently evaluated for barotrauma injuries. Observed injuries ranged from mild hematomas at the lowest sound exposure levels to organ hemorrhage at the highest sound exposure levels. Frequencies of observed injuries were used to compute a biological response weighted index (RWI) to evaluate the physiological impact of injuries at the different exposure levels. As single strike and cumulative sound exposure levels (SELss and SELcum, respectively) increased, RWI values increased. Based on the results, tissue damage associated with adverse physiological costs occurred when the RWI was greater than 2. In terms of sound exposure levels, an RWI of 2 was achieved for 1920 strikes by 177 dB re 1 µPa2⋅s SELss yielding a SELcum of 210 dB re 1 µPa2⋅s, and for 960 strikes by 180 dB re 1 µPa2⋅s SELss yielding a SELcum of 210 dB re 1 µPa2⋅s. These metrics define thresholds for onset of injury in juvenile Chinook salmon. PMID:22745695

  4. Active Damping Using Distributed Anisotropic Actuators

    NASA Technical Reports Server (NTRS)

    Schiller, Noah H.; Cabell, Randolph H.; Quinones, Juan D.; Wier, Nathan C.

    2010-01-01

    A helicopter structure experiences substantial high-frequency mechanical excitation from powertrain components such as gearboxes and drive shafts. The resulting structure-borne vibration excites the windows, which then radiate sound into the passenger cabin. In many cases the radiated sound power can be reduced by adding damping. This can be accomplished using passive or active approaches. Passive treatments such as constrained layer damping tend to reduce window transparency. Therefore this paper focuses on an active approach utilizing compact decentralized control units distributed around the perimeter of the window. Each control unit consists of a triangularly shaped piezoelectric actuator, a miniature accelerometer, and analog electronics. Earlier work has shown that this type of system can increase damping up to approximately 1 kHz. However, at higher frequencies the mismatch between the distributed actuator and the point sensor caused control spillover. This paper describes new anisotropic actuators that can be used to improve the bandwidth of the control system. The anisotropic actuators are composed of piezoelectric material sandwiched between interdigitated electrodes, which enables the application of the electric field in a preferred in-plane direction. When shaped correctly, the anisotropic actuators outperform traditional isotropic actuators by reducing the mismatch between the distributed actuator and point sensor at high frequencies. Testing performed on a Plexiglas panel, representative of a helicopter window, shows that the control units can increase damping at low frequencies. However, high-frequency performance was still limited due to the flexible boundary conditions present on the test structure.
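
    A common decentralized scheme for collocated actuator/sensor pairs like these is direct velocity feedback; the abstract does not state the control law, so the single-mode model below is only an assumed illustration of how such feedback adds damping (parameter values are arbitrary):

```python
import math

# Single-mode sketch of collocated velocity feedback (assumed control
# law, not taken from the paper): actuator force u = -g * velocity adds
# directly to the mode's damping coefficient.
m = 1.0                             # modal mass, kg
k = (2 * math.pi * 500.0) ** 2      # stiffness for a 500 Hz mode, N/m
c = 2.0                             # light inherent damping, N*s/m
g = 50.0                            # feedback gain, N*s/m

def damping_ratio(c_total):
    return c_total / (2 * math.sqrt(m * k))

zeta_open = damping_ratio(c)        # without control
zeta_closed = damping_ratio(c + g)  # velocity feedback adds g
```

    The spillover problem the paper describes arises when the sensed point velocity no longer represents the distributed actuator's modal input at high frequencies, so the effective feedback can inject rather than remove energy.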

  5. Differential pathologies resulting from sound exposure: Tinnitus vs hearing loss

    NASA Astrophysics Data System (ADS)

    Longenecker, Ryan James

    The first step in identifying the mechanism(s) responsible for tinnitus development would be to discover a neural correlate that is differentially expressed in tinnitus-positive compared to tinnitus-negative animals. Previous research has identified several neural correlates of tinnitus in animals that have tested positive for tinnitus. However, it is unknown whether all or some of these correlates are linked to tinnitus or whether they are a byproduct of hearing loss, a common outcome of tinnitus induction. Abnormally high spontaneous activity has frequently been linked to tinnitus. However, while some studies demonstrate that hyperactivity correlates positively with behavioral evidence of tinnitus, others show that when all animals develop hyperactivity after sound exposure, not all exposed animals show evidence of tinnitus. My working hypothesis is that certain aspects of hyperactivity are linked to tinnitus while other aspects are linked to hearing loss. The first specific aim utilized the gap-induced prepulse inhibition of the acoustic startle reflex (GIPAS) to monitor the development of tinnitus in CBA/CaJ mice during one year following sound exposure. Immediately after sound exposure, GIPAS testing revealed widespread gap detection deficits across all frequencies, which were likely due to temporary threshold shifts. However, three months after sound exposure these deficits were limited to a narrow frequency band and were consistently detected up to one year after exposure. This suggests the development of chronic tinnitus is a long-lasting and highly dynamic process. The second specific aim assessed hearing loss in sound-exposed mice using several techniques. Auditory brainstem responses (ABRs) recorded immediately after sound exposure reveal large-magnitude deficits in all exposed mice. However, at the three-month period, thresholds return to control levels in all mice, suggesting that ABRs are not a reliable tool for assessing permanent hearing loss. Input/output functions of the acoustic startle reflex show that after sound exposure the magnitude of startle responses decreases in most mice, to varying degrees. Lastly, PPI audiometry was able to detect specific behavioral threshold deficits for each mouse after sound exposure. These deficits persist past initial threshold shifts and reveal frequency-specific permanent threshold shifts. The third specific aim examined hyperactivity and increased bursting activity in the inferior colliculus after sound exposure in relation to tinnitus and hearing loss. Spontaneous firing rates were increased in all mice after sound exposure regardless of behavioral evidence of tinnitus. However, abnormally increased bursting activity was not found in the animals identified with tinnitus but was exhibited in a mouse with broad-band severe threshold deficits. CBA/CaJ mice are a good model for both tinnitus development and noise-induced hearing loss studies. Hyperactivity, which was evident in all exposed animals, does not seem to be well correlated with behavioral evidence of tinnitus but is more likely a general result of acoustic overexposure. Data from one animal strongly suggest that widespread severe threshold deficits are linked to an elevation of bursting activity predominantly ipsilateral to the side of sound exposure. This result is intriguing and should be followed up in further studies. Data obtained in this study provide new insights into underlying neural pathologies following sound exposure and have possible clinical applications for the development of effective treatments and diagnostic tools for tinnitus and hearing loss.
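
    The GIPAS behavioral test can be summarized by a simple ratio (this is the commonly used form of the metric in the GPIAS literature, assumed here rather than quoted from the dissertation): startle amplitude when a silent gap precedes the startle pulse, divided by startle amplitude without a gap.

```python
# Assumed form of the gap-startle metric, not the dissertation's
# exact analysis code:
def gap_inhibition_ratio(startle_with_gap, startle_without_gap):
    """Ratio < 1 means the silent gap inhibited the startle reflex;
    values near 1 suggest a gap-detection deficit (tinnitus may be
    'filling in' the gap at that background frequency)."""
    return startle_with_gap / startle_without_gap

control = gap_inhibition_ratio(0.45, 1.0)  # strong inhibition
deficit = gap_inhibition_ratio(0.95, 1.0)  # weak inhibition
```

    Running the test with background noise in different frequency bands, as the first specific aim does, localizes the deficit to the band where the ratio approaches 1.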

  6. A Generative Model of Speech Production in Broca’s and Wernicke’s Areas

    PubMed Central

    Price, Cathy J.; Crinion, Jenny T.; MacSweeney, Mairéad

    2011-01-01

    Speech production involves the generation of an auditory signal from the articulators and vocal tract. When the intended auditory signal does not match the produced sounds, subsequent articulatory commands can be adjusted to reduce the difference between the intended and produced sounds. This requires an internal model of the intended speech output that can be compared to the produced speech. The aim of this functional imaging study was to identify brain activation related to the internal model of speech production after activation related to vocalization, auditory feedback, and movement in the articulators had been controlled. There were four conditions: silent articulation of speech, non-speech mouth movements, finger tapping, and visual fixation. In the speech conditions, participants produced the mouth movements associated with the words “one” and “three.” We eliminated auditory feedback from the spoken output by instructing participants to articulate these words without producing any sound. The non-speech mouth movement conditions involved lip pursing and tongue protrusions to control for movement in the articulators. The main difference between our speech and non-speech mouth movement conditions is that prior experience producing speech sounds leads to the automatic and covert generation of auditory and phonological associations that may play a role in predicting auditory feedback. We found that, relative to non-speech mouth movements, silent speech activated Broca’s area in the left dorsal pars opercularis and Wernicke’s area in the left posterior superior temporal sulcus. We discuss these results in the context of a generative model of speech production and propose that Broca’s and Wernicke’s areas may be involved in predicting the speech output that follows articulation. These predictions could provide a mechanism by which rapid movement of the articulators is precisely matched to the intended speech outputs during future articulations. PMID:21954392

  7. Optimization of low frequency sound absorption by cell size control and multiscale poroacoustics modeling

    NASA Astrophysics Data System (ADS)

    Park, Ju Hyuk; Yang, Sei Hyun; Lee, Hyeong Rae; Yu, Cheng Bin; Pak, Seong Yeol; Oh, Chi Sung; Kang, Yeon June; Youn, Jae Ryoun

    2017-06-01

    Sound absorption of a polyurethane (PU) foam was predicted for various geometries in order to fabricate the optimum microstructure of a sound-absorbing foam. Multiscale numerical analysis of sound absorption was carried out by solving flow problems in a representative unit cell (RUC) and the pressure acoustics equation using the Johnson-Champoux-Allard (JCA) model. From the numerical analysis, the theoretical optimum cell diameter for low-frequency sound absorption was found to lie in the vicinity of 400 μm for a 2 cm-80 K foam (thickness of 2 cm and density of 80 kg/m3). An ultrasonic foaming method was employed to modulate the microcellular structure of the PU foam; mechanical activation alone was used to manipulate the internal structure, without any other treatment. The mean cell diameter of the PU foam gradually decreased as the amplitude of the ultrasonic waves increased. It was empirically found that the reduction of mean cell diameter induced by the ultrasonic wave enhances acoustic damping efficiency in low frequency ranges. Moreover, further analyses were performed with several acoustic evaluation factors: root mean square (RMS) values, noise reduction coefficients (NRC), and 1/3 octave band spectrograms.
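
    The JCA equivalent-fluid model used in this study is straightforward to evaluate numerically. The sketch below (parameter values are illustrative assumptions, not the paper's fitted values) computes the normal-incidence absorption coefficient of a hard-backed porous layer from the five JCA macroscopic parameters:

```python
import cmath
import math

def jca_absorption(f, d, sigma, phi, alpha_inf, lam, lam_p,
                   rho0=1.213, c0=343.0, eta=1.84e-5,
                   gamma=1.4, P0=101325.0, Pr=0.71):
    """Normal-incidence absorption coefficient of a hard-backed porous
    layer of thickness d, using the Johnson-Champoux-Allard (JCA)
    equivalent-fluid model with the e^{jwt} sign convention."""
    w = 2 * math.pi * f
    # Effective density (visco-inertial effects)
    G = cmath.sqrt(1 + 4j * alpha_inf**2 * eta * rho0 * w
                   / (sigma**2 * lam**2 * phi**2))
    rho = (alpha_inf * rho0 / phi) * (1 + sigma * phi * G
                                      / (1j * w * rho0 * alpha_inf))
    # Effective bulk modulus (thermal effects)
    Gp = cmath.sqrt(1 + 1j * rho0 * w * Pr * lam_p**2 / (16 * eta))
    K = (gamma * P0 / phi) / (gamma - (gamma - 1)
        / (1 + 8 * eta * Gp / (1j * lam_p**2 * Pr * w * rho0)))
    Zc = cmath.sqrt(rho * K)            # characteristic impedance
    kc = w * cmath.sqrt(rho / K)        # complex wavenumber
    Zs = -1j * Zc / cmath.tan(kc * d)   # surface impedance, rigid backing
    R = (Zs - rho0 * c0) / (Zs + rho0 * c0)
    return 1 - abs(R)**2

# Illustrative parameters for a 2 cm thick foam (assumed, not fitted):
alpha_1k = jca_absorption(1000, 0.02, sigma=10500, phi=0.97,
                          alpha_inf=1.05, lam=100e-6, lam_p=200e-6)
```

    Sweeping the frequency reproduces the familiar behavior of a thin hard-backed layer: low absorption at low frequencies rising toward a quarter-wavelength peak. Shrinking the cell (and hence pore) size effectively raises the flow resistivity σ, which is the lever the ultrasonic foaming method exploits to push absorption toward lower frequencies.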

  8. Acute Warm-up Effects in Submaximal Athletes: An EMG Study of Skilled Violinists.

    PubMed

    McCrary, J Matt; Halaki, Mark; Sorkin, Evgeny; Ackermann, Bronwen J

    2016-02-01

    Warm-up is commonly recommended for injury prevention and performance enhancement across all activities, yet this recommendation is not supported by evidence for repetitive submaximal activities such as instrumental music performance. The objective of this study is to quantify the effects of cardiovascular, core muscle, and musical warm-ups on muscle activity levels, musical performance, and subjective experience in skilled violinists. Fifty-five undergraduate, postgraduate, or professional violinists performed five randomly ordered 45-s musical excerpts of varying physical demands both before and after a randomly assigned 15-min, moderate-intensity cardiovascular, core muscle, musical (technical violin exercises), or inactive control warm-up protocol. Surface EMG data were obtained for 16 muscles of the trunk, shoulders, and right arm during each musical performance. Sound recording and perceived exertion (RPE) data were also obtained. Sound recordings were randomly ordered and rated for performance quality by blinded adjudicators. Questionnaire data regarding participant pain sites and fitness levels were used to stratify participants according to pain and fitness levels. Data were analyzed using two- and three-factor ANCOVA (surface EMG and sound recording) and Wilcoxon matched pairs tests (RPE). None of the three warm-up protocols had significant effects on muscle activity levels (P ≥ 0.10). Performance quality did not significantly increase (P ≥ 0.21). RPE significantly decreased (P < 0.05) after warm-up for each of the three experimental warm-ups; control condition RPE did not significantly decrease (P > 0.23). Acute physiological and musical benefits from cardiovascular, core muscle, and musical warm-ups in skilled violinists are limited to decreases in RPE. This investigation provides data from the performing arts in support of sports medical evidence suggesting that warm-up only effectively enhances maximal strength and power performance.

  9. High resolution 1H NMR-based metabonomic study of the auditory cortex analogue of developing chick (Gallus gallus domesticus) following prenatal chronic loud music and noise exposure.

    PubMed

    Kumar, Vivek; Nag, Tapas Chandra; Sharma, Uma; Mewar, Sujeet; Jagannathan, Naranamangalam R; Wadhwa, Shashi

    2014-10-01

    Proper functional development of the auditory cortex (ACx) critically depends on early relevant sensory experiences. Exposure to high-intensity noise (industrial/traffic) and music, a current public health concern, may disrupt the proper development of the ACx and associated behaviour. The biochemical mechanisms associated with such activity-dependent changes during development are poorly understood. Here we report the effects of prenatal chronic (last 10 days of incubation) exposure to music and noise at 110 dB sound pressure level (SPL) on the metabolic profile of the auditory cortex analogue/field L (AuL) in domestic chicks. Perchloric acid extracts of the AuL of post-hatch day 1 chicks from the control, music, and noise groups were subjected to high-resolution (700 MHz) 1H NMR spectroscopy. Multivariate regression analysis of the concentration data of 18 metabolites revealed a significant class separation between the control and loud-sound-exposed groups, indicating a metabolic perturbation. Comparison of absolute metabolite concentrations showed that overstimulation with loud sound, independent of spectral characteristics (music or noise), led to extensive usage of major energy metabolites, e.g., glucose, β-hydroxybutyrate and ATP. On the other hand, high glutamine levels and sustained levels of neuromodulators and alternate energy sources, e.g., creatine, ascorbate and lactate, indicated a systems-level restorative measure in a condition of neuronal hyperactivity. At the same time, decreased aspartate and taurine levels in the noise group suggested a differential impact of prenatal chronic loud noise over music exposure. Thus prenatal exposure to loud sound, especially noise, alters the metabolic activity in the AuL, which in turn can affect functional development and later auditory-associated behaviour. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Correlation between Identification Accuracy and Response Confidence for Common Environmental Sounds

    DTIC Science & Technology

    set of environmental sounds with stimulus control and precision. The present study is one in a series of efforts to provide a baseline evaluation of a...sounds from six broad categories: household items, alarms, animals, human generated, mechanical, and vehicle sounds. Each sound was presented five times

  11. Salient sounds activate human visual cortex automatically.

    PubMed

    McDonald, John J; Störmer, Viola S; Martinez, Antigona; Feng, Wenfeng; Hillyard, Steven A

    2013-05-22

    Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, this study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2-4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of colocalized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task.

  12. Salient sounds activate human visual cortex automatically

    PubMed Central

    McDonald, John J.; Störmer, Viola S.; Martinez, Antigona; Feng, Wenfeng; Hillyard, Steven A.

    2013-01-01

    Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, the present study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2, 3, and 4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of co-localized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task. PMID:23699530

  13. Attentional Capacity Limits Gap Detection during Concurrent Sound Segregation.

    PubMed

    Leung, Ada W S; Jolicoeur, Pierre; Alain, Claude

    2015-11-01

    Detecting a brief silent interval (i.e., a gap) is more difficult when listeners perceive two concurrent sounds rather than one, as in a harmonic complex in which one mistuned harmonic segregates from the otherwise in-tune harmonics. This impairment in gap detection may reflect the interaction of low-level encoding or the division of attention between two sound objects, both of which could interfere with signal detection. To distinguish between these two alternatives, we compared ERPs during active and passive listening with complex harmonic tones that could include a gap, a mistuned harmonic, both features, or neither. During active listening, participants indicated whether they heard a gap irrespective of mistuning. During passive listening, participants watched a subtitled muted movie of their choice while the same sounds were presented. Gap detection was impaired when the complex sounds included a mistuned harmonic that popped out as a separate object. The ERP analysis revealed an early gap-related activity that was little affected by mistuning during the active or passive listening condition. However, during active listening, there was a marked decrease in the late positive wave that is thought to index attention and response-related processes. These results suggest that the limitation in detecting the gap is related to attentional processing, possibly attention divided between the concurrent sound objects, rather than deficits in preattentional sensory encoding.

  14. Annelids. A Multimedia CD-ROM. [CD-ROM].

    ERIC Educational Resources Information Center

    2001

    This CD-ROM is designed for classroom and individual use to teach and learn about annelids. Integrated animations, custom graphics, three-dimensional representations, photographs, and sound are featured for use in user-controlled activities. Interactive lessons are available to reinforce the subject material. Pre- and post-testing sections are…

  15. Management of Student Aid.

    ERIC Educational Resources Information Center

    Nevin, Jeanne, Ed.

    The principles, practices, responsibilities, and controls in student financial aid are described in this manual. It traces the flow of funds, management activities, and legal issues as they occur in the process. The emphasis is on sound management principles of a general and permanent nature rather than on specific government requirements that may…

  16. 46 CFR 63.25-7 - Exhaust gas boilers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... results in inadequate heat transfer, a high temperature alarm or low flow alarm must be activated. An... insufficient to ensure proper heat transfer. Additionally, an audible alarm must automatically sound, and a... water level, the control system must supply the feed water at a rate sufficient to ensure proper heat...

  17. 46 CFR 63.25-7 - Exhaust gas boilers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... results in inadequate heat transfer, a high temperature alarm or low flow alarm must be activated. An... insufficient to ensure proper heat transfer. Additionally, an audible alarm must automatically sound, and a... water level, the control system must supply the feed water at a rate sufficient to ensure proper heat...

  18. Environmental Involvement. . . A Teacher's Guide (2nd Edition).

    ERIC Educational Resources Information Center

    1971

    Presented in this teacher's guide are ideas and projects to help students develop an awareness and appreciation of their environment. Sharpening the senses is emphasized through activities dealing with water quality, sound qualities, and noise, air quality, solid waste control, and soil management. The text is divided into four levels roughly…

  19. Long-term high-intensity sound stimulation inhibits h current (Ih) in CA1 pyramidal neurons.

    PubMed

    Cunha, A O S; Ceballos, C C; de Deus, J L; Leão, R M

    2018-05-19

    Afferent neurotransmission to hippocampal pyramidal cells can lead to long-term changes in their intrinsic membrane properties and affect many ion currents. One of the most plastic neuronal currents is the hyperpolarization-activated cationic current (Ih), which changes in CA1 pyramidal cells in response to many types of physiological and pathological processes, including auditory stimulation. Recently we demonstrated that long-term potentiation (LTP) in rat hippocampal Schaffer-CA1 synapses is depressed by high-intensity sound stimulation. Here we investigated whether long-term high-intensity sound stimulation could affect the intrinsic membrane properties of rat CA1 pyramidal neurons. Our results showed that Ih is depressed by long-term high-intensity sound exposure (1 minute of 110 dB sound, applied twice per day for 10 days). This resulted in a decreased resting membrane potential, increased membrane input resistance and time constant, and decreased action potential threshold. In addition, CA1 pyramidal neurons from sound-exposed animals fired more action potentials than neurons from control animals; however, this effect was not caused by the decreased Ih. Interestingly, a single episode (1 minute) of 110 dB sound stimulation, which also inhibits hippocampal LTP, did not affect Ih or firing in pyramidal neurons, suggesting that the effects on Ih are long-term responses to high-intensity sound exposure. Our results show that prolonged exposure to high-intensity sound affects the intrinsic membrane properties of hippocampal pyramidal neurons, mainly by decreasing the amplitude of Ih. This article is protected by copyright. All rights reserved.

  20. Overview on the diversity of sounds produced by clownfishes (Pomacentridae): importance of acoustic signals in their peculiar way of life.

    PubMed

    Colleye, Orphal; Parmentier, Eric

    2012-01-01

    Clownfishes (Pomacentridae) are brightly colored coral reef fishes well known for their mutualistic symbiosis with tropical sea anemones. These fishes live in social groups in which there is a size-based dominance hierarchy. In this structure, where sex is socially controlled, agonistic interactions are numerous and serve to maintain size differences between individuals adjacent in rank. Clownfishes are also prolific callers whose sounds seem to play an important role in the social hierarchy. Here, we review and synthesize the diversity of sounds produced by clownfishes in order to emphasize the importance of acoustic signals in their way of life. Recording the different acoustic behaviors indicated that sounds fall into two main categories: aggressive sounds produced in conjunction with threat postures (charge and chase), and submissive sounds always emitted when fish exhibited head-shaking movements (i.e., a submissive posture). Both types of sounds showed size-related intraspecific variation in dominant frequency and pulse duration: smaller individuals produce higher-frequency, shorter-duration pulses than larger ones, and vice versa. Consequently, these sonic features might be useful cues for individual recognition within the group, an observation of particular importance given the size-based hierarchy in clownfish groups. On the other hand, no acoustic signal was associated with the different reproductive activities. Unlike other pomacentrids, clownfishes do not produce sounds for mate attraction but to attain and defend breeding status, which may explain why selective pressure has not been strong enough to promote call diversification in this group.

  1. Deficient multisensory integration in schizophrenia: an event-related potential study.

    PubMed

    Stekelenburg, Jeroen J; Maes, Jan Pieter; Van Gool, Arthur R; Sitskoorn, Margriet; Vroomen, Jean

    2013-07-01

    In many natural audiovisual events (e.g., the sight of a face articulating the syllable /ba/), the visual signal precedes the sound and thus allows observers to predict the onset and the content of the sound. In healthy adults, the N1 component of the event-related brain potential (ERP), reflecting neural activity associated with basic sound processing, is suppressed if a sound is accompanied by a video that reliably predicts sound onset. If the sound does not match the content of the video (e.g., hearing /ba/ while lipreading /fu/), the later occurring P2 component is affected. Here, we examined whether these visual information sources affect auditory processing in patients with schizophrenia. The electroencephalography (EEG) was recorded in 18 patients with schizophrenia and compared with that of 18 healthy volunteers. As stimuli we used video recordings of natural actions in which visual information preceded and predicted the onset of the sound that was either congruent or incongruent with the video. For the healthy control group, visual information reduced the auditory-evoked N1 if compared to a sound-only condition, and stimulus-congruency affected the P2. This reduction in N1 was absent in patients with schizophrenia, and the congruency effect on the P2 was diminished. Distributed source estimations revealed deficits in the network subserving audiovisual integration in patients with schizophrenia. The results show a deficit in multisensory processing in patients with schizophrenia and suggest that multisensory integration dysfunction may be an important and, to date, under-researched aspect of schizophrenia. Copyright © 2013. Published by Elsevier B.V.

  2. Neuromagnetic recordings reveal the temporal dynamics of auditory spatial processing in the human cortex.

    PubMed

    Tiitinen, Hannu; Salminen, Nelli H; Palomäki, Kalle J; Mäkinen, Ville T; Alku, Paavo; May, Patrick J C

    2006-03-20

    In an attempt to delineate the assumed 'what' and 'where' processing streams, we studied the processing of spatial sound in the human cortex by using magnetoencephalography in the passive and active recording conditions and two kinds of spatial stimuli: individually constructed, highly realistic spatial (3D) stimuli and stimuli containing interaural time difference (ITD) cues only. The auditory P1m, N1m, and P2m responses of the event-related field were found to be sensitive to the direction of sound source in the azimuthal plane. In general, the right-hemispheric responses to spatial sounds were more prominent than the left-hemispheric ones. The right-hemispheric P1m and N1m responses peaked earlier for sound sources in the contralateral than for sources in the ipsilateral hemifield and the peak amplitudes of all responses reached their maxima for contralateral sound sources. The amplitude of the right-hemispheric P2m response reflected the degree of spatiality of sound, being twice as large for the 3D than ITD stimuli. The results indicate that the right hemisphere is specialized in the processing of spatial cues in the passive recording condition. Minimum current estimate (MCE) localization revealed that temporal areas were activated both in the active and passive condition. This initial activation, taking place at around 100 ms, was followed by parietal and frontal activity at 180 and 200 ms, respectively. The latter activations, however, were specific to attentional engagement and motor responding. This suggests that parietal activation reflects active responding to a spatial sound rather than auditory spatial processing as such.

  3. Sounds activate visual cortex and improve visual discrimination.

    PubMed

    Feng, Wenfeng; Störmer, Viola S; Martinez, Antigona; McDonald, John J; Hillyard, Steven A

    2014-07-16

    A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. Copyright © 2014 the authors.

  4. Recording and Analysis of Bowel Sounds.

    PubMed

    Zaborski, Daniel; Halczak, Miroslaw; Grzesiak, Wilhelm; Modrzejewski, Andrzej

    2015-01-01

    The aim of this study was to construct an electronic bowel sound recording system and determine its usefulness for the diagnosis of appendicitis, mechanical ileus and diffuse peritonitis. A group of 67 subjects aged 17 to 88 years, including 15 controls, was examined. Bowel sounds were recorded using an electret microphone placed on the right side of the hypogastrium and connected to a laptop computer. The method of adjustable grids (converted into binary matrices) was used for bowel sound analysis. Significantly fewer (p ≤ 0.05) sounds were found in the mechanical ileus (1004.4) and diffuse peritonitis (466.3) groups than in the controls (2179.3). After superimposing adjustable binary matrices on combined sounds (interval between sounds <0.01 s), significant relationships (p ≤ 0.05) were found between particular positions in the matrices (row-column) and the patient groups. These included the A1_T1 and A1_T2 positions and mechanical ileus, as well as the A1_T2 and A1_T4 positions and appendicitis. For diffuse peritonitis, the significant positions were A5_T4 and A1_T4. Differences were noted in the number of sounds and binary matrices in the groups of patients with acute abdominal diseases. Certain features of bowel sounds characteristic of individual abdominal diseases were identified. Abbreviations: BS: bowel sound; APP: appendicitis; IL: mechanical ileus; PE: diffuse peritonitis; CG: control group; NSI: number of sound impulses; NCI: number of combined sound impulses; MBS: mean bit-similarity; TMIN: minimum time between impulses; TMAX: maximum time between impulses; TMEAN: mean time between impulses. Zaborski D, Halczak M, Grzesiak W, Modrzejewski A. Recording and Analysis of Bowel Sounds. Euroasian J Hepato-Gastroenterol 2015;5(2):67-73.

  5. Technology and Music Education in a Digitized, Disembodied, Posthuman World

    ERIC Educational Resources Information Center

    Thwaites, Trevor

    2014-01-01

    Digital forms of sound manipulation are eroding traditional methods of sound development and transmission, causing a disjuncture in the ontology of music. Sound, the ambient phenomenon, is becoming disrupted and decentred by the struggles between long established controls, beliefs and desires as well as controls from within technologized contexts.…

  6. Cognitive Control of Involuntary Distraction by Deviant Sounds

    ERIC Educational Resources Information Center

    Parmentier, Fabrice B. R.; Hebrero, Maria

    2013-01-01

    It is well established that a task-irrelevant sound (deviant sound) departing from an otherwise repetitive sequence of sounds (standard sounds) elicits an involuntary capture of attention and orienting response toward the deviant stimulus, resulting in the lengthening of response times in an ongoing task. Some have argued that this type of…

  7. Letter-Sound Reading: Teaching Preschool Children Print-to-Sound Processing

    PubMed Central

    2015-01-01

    This intervention study investigated the growth of letter sound reading and growth of consonant–vowel–consonant (CVC) word decoding abilities for a representative sample of 41 US children in preschool settings. Specifically, the study evaluated the effectiveness of a 3-step letter-sound teaching intervention in teaching pre-school children to decode, or read, single letters. The study compared a control group, which received the preschool’s standard letter-sound instruction, to an intervention group which received a 3-step letter-sound instruction intervention. The children’s growth in letter-sound reading and CVC word decoding abilities were assessed at baseline and 2, 4, 6 and 8 weeks. When compared to the control group, the growth of letter-sound reading ability was slightly higher for the intervention group. The rate of increase in letter-sound reading was significantly faster for the intervention group. In both groups, too few children learned to decode any CVC words to allow for analysis. Results of this study support the use of the intervention strategy in preschools for teaching children print-to-sound processing. PMID:26839494

  8. Auditory orientation in crickets: Pattern recognition controls reactive steering

    NASA Astrophysics Data System (ADS)

    Poulet, James F. A.; Hedwig, Berthold

    2005-10-01

    Many groups of insects are specialists in exploiting sensory cues to locate food resources or conspecifics. To achieve orientation, bees and ants analyze the polarization pattern of the sky, male moths orient along the females' odor plume, and cicadas, grasshoppers, and crickets use acoustic signals to locate singing conspecifics. In comparison with olfactory and visual orientation, where learning is involved, auditory processing underlying orientation in insects appears to be more hardwired and genetically determined. In each of these examples, however, orientation requires a recognition process identifying the crucial sensory pattern to interact with a localization process directing the animal's locomotor activity. Here, we characterize this interaction. Using a sensitive trackball system, we show that, during cricket auditory behavior, the recognition process that is tuned toward the species-specific song pattern controls the amplitude of auditory evoked steering responses. Females perform small reactive steering movements toward any sound patterns. Hearing the male's calling song increases the gain of auditory steering within 2-5 s, and the animals even steer toward nonattractive sound patterns inserted into the species-specific pattern. This gain control mechanism in the auditory-to-motor pathway allows crickets to pursue species-specific sound patterns temporarily corrupted by environmental factors and may reflect the organization of recognition and localization networks in insects.

  9. Facilitating emergent literacy: efficacy of a model that partners speech-language pathologists and educators.

    PubMed

    Girolametto, Luigi; Weitzman, Elaine; Greenberg, Janice

    2012-02-01

    This study examined the efficacy of a professional development program for early childhood educators that facilitated emergent literacy skills in preschoolers. The program, led by a speech-language pathologist, focused on teaching alphabet knowledge, print concepts, sound awareness, and decontextualized oral language within naturally occurring classroom interactions. Twenty educators were randomly assigned to experimental and control groups. Educators each recruited 3 to 4 children from their classrooms to participate. The experimental group participated in 18 hr of group training and 3 individual coaching sessions with a speech-language pathologist. The effects of intervention were examined in 30 min of videotaped interaction, including storybook reading and a post-story writing activity. At posttest, educators in the experimental group used a higher rate of utterances that included print/sound references and decontextualized language than the control group. Similarly, the children in the experimental group used a significantly higher rate of utterances that included print/sound references and decontextualized language compared to the control group. These findings suggest that professional development provided by a speech-language pathologist can yield short-term changes in the facilitation of emergent literacy skills in early childhood settings. Future research is needed to determine the impact of this program on the children's long-term development of conventional literacy skills.

  10. Prey survival by predator intimidation: an experimental study of peacock butterfly defence against blue tits

    PubMed Central

    Vallin, Adrian; Jakobsson, Sven; Lind, Johan; Wiklund, Christer

    2005-01-01

    Long-lived butterflies that hibernate as adults are expected to have well-developed antipredation devices as a result of their long exposure to natural enemies. The peacock butterfly, Inachis io, for instance, is a cryptic leaf mimic when resting, but shifts to active defence when disturbed, performing a repeated sequence of movements exposing major eyespots on the wings accompanied by a hissing noise. We studied the effect of visual and auditory defence by staging experiments in which wild-caught blue tits, Parus caeruleus, were presented with one of six kinds of experimentally manipulated living peacock butterflies as follows: butterflies with eyespots painted over and their controls (painted on another part of the wing), butterflies with their sound production aborted (small part of wings removed) and their controls, and butterflies with eyespots painted over and sound production aborted and their controls. The results showed that eyespots alone, or in combination with sound, constituted an effective defence; only 1 out of 34 butterflies with intact eyespots was killed, whereas 13 out of 20 butterflies without eyespots were killed. The killed peacocks were eaten, indicating that they are not distasteful. Hence, intimidation by bluffing can be an efficient means of defence for an edible prey. PMID:16024383

  11. Understanding speech when wearing communication headsets and hearing protectors with subband processing.

    PubMed

    Brammer, Anthony J; Yu, Gongqiang; Bernstein, Eric R; Cherniack, Martin G; Peterson, Donald R; Tufts, Jennifer B

    2014-08-01

    An adaptive, delayless, subband feed-forward control structure is employed to improve the speech signal-to-noise ratio (SNR) in the communication channel of a circumaural headset/hearing protector (HPD) from 90 Hz to 11.3 kHz, and to provide active noise control (ANC) from 50 to 800 Hz to complement the passive attenuation of the HPD. The task involves optimizing the speech SNR for each communication channel subband, subject to limiting the maximum sound level at the ear, maintaining a speech SNR preferred by users, and reducing large inter-band gain differences to improve speech quality. The performance of a proof-of-concept device has been evaluated in a pseudo-diffuse sound field when worn by human subjects under conditions of environmental noise and speech that do not pose a risk to hearing, and by simulation for other conditions. For the environmental noises employed in this study, subband speech SNR control combined with subband ANC produced greater improvement in word scores than subband ANC alone, and improved the consistency of word scores across subjects. The simulation employed a subject-specific linear model, and predicted that word scores are maintained in excess of 90% for sound levels outside the HPD of up to ∼115 dBA.
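    The delayless subband structure used in this record is not specified in the abstract, so the toy sketch below collapses it to a single full-band feed-forward filtered-x LMS (FxLMS) loop: a reference signal passes through an adaptive controller to generate anti-noise, and the residual at the ear drives the adaptation. All signals and path coefficients are synthetic assumptions for illustration, not the authors' design.

```python
import numpy as np

# Toy feed-forward ANC with filtered-x LMS (FxLMS). The primary path p
# (noise source -> eardrum) and secondary path s (loudspeaker -> eardrum)
# are invented FIR filters; s is assumed perfectly known, so the same
# coefficients serve as the secondary-path model for the filtered-x signal.
rng = np.random.default_rng(0)
N, L, mu = 20000, 16, 0.002
p = np.array([0.0, 0.9, 0.4, 0.1])   # primary path (assumed)
s = np.array([0.0, 0.7, 0.2])        # secondary path (assumed)
x = rng.standard_normal(N)           # reference noise outside the ear cup
d = np.convolve(x, p)[:N]            # disturbance reaching the ear
xf = np.convolve(x, s)[:N]           # "filtered-x": reference through s

w = np.zeros(L)                      # adaptive control filter
ybuf = np.zeros(len(s))              # recent anti-noise samples
e = np.zeros(N)                      # residual at the error microphone
for n in range(N):
    xv = x[max(0, n - L + 1):n + 1][::-1]
    xv = np.pad(xv, (0, L - len(xv)))        # newest sample first
    ybuf = np.roll(ybuf, 1)
    ybuf[0] = w @ xv                         # anti-noise sample y[n]
    e[n] = d[n] - s @ ybuf                   # residual after cancellation
    xfv = xf[max(0, n - L + 1):n + 1][::-1]
    xfv = np.pad(xfv, (0, L - len(xfv)))
    w += mu * e[n] * xfv                     # FxLMS coefficient update

before = np.mean(e[:2000] ** 2)      # residual power early in adaptation
after = np.mean(e[-2000:] ** 2)      # residual power after convergence
```

    A subband implementation applies essentially this update independently in each frequency band, which is what allows the per-band gain and SNR constraints described in the abstract.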

  12. Maintenance of neuronal size gradient in MNTB requires sound-evoked activity.

    PubMed

    Weatherstone, Jessica H; Kopp-Scheinpflug, Conny; Pilati, Nadia; Wang, Yuan; Forsythe, Ian D; Rubel, Edwin W; Tempel, Bruce L

    2017-02-01

    The medial nucleus of the trapezoid body (MNTB) is an important source of inhibition during the computation of sound location. It transmits fast and precisely timed action potentials at high frequencies; this requires an efficient calcium clearance mechanism, in which plasma membrane calcium ATPase 2 (PMCA2) is a key component. Deafwaddler (dfw2J) mutant mice have a null mutation in PMCA2 causing deafness in homozygotes (dfw2J/dfw2J) and high-frequency hearing loss in heterozygotes (+/dfw2J). Despite the deafness phenotype, no significant differences in MNTB volume or cell number were observed in dfw2J homozygous mutants, suggesting that PMCA2 is not required for MNTB neuron survival. The MNTB tonotopic axis encodes high to low sound frequencies across the medial to lateral dimension. We discovered a cell size gradient along this axis: lateral neuronal somata are significantly larger than medially located somata. This size gradient is decreased in +/dfw2J and absent in dfw2J/dfw2J. The lack of acoustically driven input suggests that sound-evoked activity is required for maintenance of the cell size gradient. This hypothesis was corroborated by selective elimination of auditory hair cell activity, either by hair cell elimination in Pou4f3-DTR mice or by inner ear tetrodotoxin (TTX) treatment. The change in soma size was reversible and recovered within 7 days of TTX treatment, suggesting that regulation of the gradient is dependent on synaptic activity and that these changes are plastic rather than permanent. NEW & NOTEWORTHY Neurons of the medial nucleus of the trapezoid body (MNTB) act as fast-spiking inhibitory interneurons within the auditory brain stem. The MNTB is topographically organized, with low sound frequencies encoded laterally and high frequencies medially. We discovered a cell size gradient along this axis: lateral neurons are larger than medial neurons.
The absence of this gradient in deaf mice lacking plasma membrane calcium ATPase 2 suggests an activity-dependent, calcium-mediated mechanism that controls neuronal soma size. Copyright © 2017 the American Physiological Society.

  13. An investigation of the auditory perception of western lowland gorillas in an enrichment study.

    PubMed

    Brooker, Jake S

    2016-09-01

    Previous research has highlighted the varied effects of auditory enrichment on different captive animals. This study investigated how manipulating musical components can influence the behavior of a group of captive western lowland gorillas (Gorilla gorilla gorilla) at Bristol Zoo. The gorillas were observed during exposure to classical music, rock-and-roll music, and rainforest sounds. The two music conditions were modified to create five further conditions: unmanipulated, decreased pitch, increased pitch, decreased tempo, and increased tempo. We compared the prevalence of activity, anxiety, and social behaviors between the standard conditions. We also compared the prevalence of each of these behaviors across the manipulated conditions of each type of music, independently and collectively. Control observations with no sound exposure were regularly scheduled between the observations of the 12 auditory conditions. The results suggest that naturalistic rainforest sounds had no influence on the anxiety of captive gorillas, contrary to past research. The tempo of music appears to be significantly associated with activity levels among this group, and social behavior may be affected by pitch. Low-tempo music also may be effective at reducing anxiety behavior in captive gorillas. Regulated auditory enrichment may provide an effective means of calming gorillas or of facilitating active behavior. Zoo Biol. 35:398-408, 2016. © 2016 Wiley Periodicals, Inc.

  14. Behaviours Associated with Acoustic Communication in Nile Tilapia (Oreochromis niloticus)

    PubMed Central

    Longrie, Nicolas; Poncin, Pascal; Denoël, Mathieu; Gennotte, Vincent; Delcourt, Johann; Parmentier, Eric

    2013-01-01

    Background Sound production is widespread among fishes and accompanies many social interactions. The literature reports twenty-nine cichlid species known to produce sounds during aggressive and courtship displays, but the precise range of behavioural contexts is unclear. This study aims to describe the various Oreochromis niloticus behaviours that are associated with sound production in order to delimit the role of sound during different activities, including agonistic behaviours, pit activities, and reproduction and parental care by males and females of the species. Methodology/Principal Findings Sounds mostly occur during the day. The sounds recorded during this study accompany previously known behaviours, and no particular behaviour is systematically associated with sound production. Males and females make sounds during territorial defence but not during courtship and mating. Sounds support visual behaviours but are not used alone. During agonistic interactions, a calling Oreochromis niloticus does not bite after producing sounds, and more sounds are produced in defence of territory than in the domination of individuals. Females produce sounds to defend eggs but not larvae. Conclusion/Significance Sounds are produced to reinforce visual behaviours. Moreover, comparisons with O. mossambicus indicate that two sister species can differ in their use of sound, their acoustic characteristics, and the function of sound production. These findings support the role of sounds in differentiating species and promoting speciation. They also make clear that the association of sounds with specific life-cycle roles cannot be generalized to the entire taxon. PMID:23620756

  15. Expertise with artificial non-speech sounds recruits speech-sensitive cortical regions

    PubMed Central

    Leech, Robert; Holt, Lori L.; Devlin, Joseph T.; Dick, Frederic

    2009-01-01

    Regions of the human temporal lobe show greater activation for speech than for other sounds. These differences may reflect intrinsically specialized domain-specific adaptations for processing speech, or they may be driven by the significant expertise we have in listening to the speech signal. To test the expertise hypothesis, we used a video-game-based paradigm that tacitly trained listeners to categorize acoustically complex, artificial non-linguistic sounds. Before and after training, we used functional MRI to measure how expertise with these sounds modulated temporal lobe activation. Participants’ ability to explicitly categorize the non-speech sounds predicted the change in pre- to post-training activation in speech-sensitive regions of the left posterior superior temporal sulcus, suggesting that emergent auditory expertise may help drive this functional regionalization. Thus, seemingly domain-specific patterns of neural activation in higher cortical regions may be driven in part by experience-based restructuring of high-dimensional perceptual space. PMID:19386919

  16. Clicks, whistles and pulses: Passive and active signal use in dolphin communication

    NASA Astrophysics Data System (ADS)

    Herzing, Denise L.

    2014-12-01

    The search for signals out of noise is a problem not only with radio signals from the sky but in the study of animal communication. Dolphins use multiple modalities to communicate, including body postures, touch, vision, and, most elaborately, sound. Like SETI radio signal searches, dolphin sound analysis includes the detection, recognition, analysis, and interpretation of signals. Dolphins use both passive listening and active production to communicate. Dolphins use three main types of acoustic signals: frequency-modulated whistles (narrowband with harmonics), echolocation (broadband clicks), and burst-pulsed sounds (packets of closely spaced broadband clicks). Dolphin sound analysis has focused on frequency-modulated whistles, yet the most commonly used signals are burst-pulsed sounds which, due to their graded and overlapping nature and bimodal inter-click interval (ICI) rates, are hard to categorize. We will look at: 1) the mechanism of sound production and categories of sound types, 2) sound analysis techniques and information content, and 3) examples of lessons learned in the study of dolphin acoustics. The goal of this paper is to provide perspective on how animal communication studies might provide insight into both passive and active SETI in the larger context of searching for life signatures.
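    The bimodal ICI distribution that makes burst-pulsed sounds hard to categorize can be computed directly from detected click times. Below is a minimal sketch with synthetic click trains; the click rates, the pause, and the 10 ms mode boundary are hypothetical illustrations, not values from the record above:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical click times (s): an echolocation train (~50 ms ICI)
    # followed, after a 0.5 s pause, by a burst-pulse packet (~2 ms ICI)
    echo_train = np.cumsum(rng.normal(0.050, 0.005, 40))
    burst_packet = echo_train[-1] + 0.5 + np.cumsum(rng.normal(0.002, 0.0002, 60))
    clicks = np.concatenate([echo_train, burst_packet])

    ici_ms = np.diff(clicks) * 1000.0   # inter-click intervals in milliseconds

    # Separate the two modes at a (hypothetical) 10 ms boundary
    burst_mode = ici_ms[ici_ms < 10.0]
    echo_mode = ici_ms[ici_ms >= 10.0]
    print(len(burst_mode), len(echo_mode))
    ```

    In real recordings the graded, overlapping nature of burst-pulsed sounds means the two modes blur into one another, which is exactly why a fixed boundary like this one is only a first approximation.
    
    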

  17. Integrating sensorimotor systems in a robot model of cricket behavior

    NASA Astrophysics Data System (ADS)

    Webb, Barbara H.; Harrison, Reid R.

    2000-10-01

    The mechanisms by which animals manage sensorimotor integration and coordination of different behaviors can be investigated in robot models. In previous work the first author has built a robot that localizes sound, based on close modeling of the auditory and neural system of the cricket. It is known that the cricket combines its response to sound with other sensorimotor activities such as an optomotor reflex and reactions to mechanical stimulation of the antennae and cerci. Behavioral evidence suggests some ways these behaviors may be integrated. We have tested the addition of an optomotor response, using an analog VLSI circuit developed by the second author, to the sound-localizing behavior and have shown that it can, as in the cricket, improve the directness of the robot's path to sound. In particular, it substantially improves behavior when the robot is subject to a motor disturbance. Our aim is to better understand how the insect brain functions in controlling complex combinations of behavior, with the hope that this will also suggest novel mechanisms for sensory integration on robots.

  18. Sound recordings of road maintenance equipment on the Lincoln National Forest, New Mexico

    Treesearch

    D. K. Delaney; T. G. Grubb

    2004-01-01

    The purpose of this pilot study was to record, characterize, and quantify road maintenance activity in Mexican spotted owl (Strix occidentalis lucida) habitat to gauge potential sound level exposure for owls during road maintenance activities. We measured sound levels from three different types of road maintenance equipment (rock crusher/loader,...

  19. Attentional Capture by Deviant Sounds: A Noncontingent Form of Auditory Distraction?

    ERIC Educational Resources Information Center

    Vachon, François; Labonté, Katherine; Marsh, John E.

    2017-01-01

    The occurrence of an unexpected, infrequent sound in an otherwise homogeneous auditory background tends to disrupt the ongoing cognitive task. This "deviation effect" is typically explained in terms of attentional capture whereby the deviant sound draws attention away from the focal activity, regardless of the nature of this activity.…

  20. Cortical network differences in the sighted versus early blind for recognition of human-produced action sounds

    PubMed Central

    Lewis, James W.; Frum, Chris; Brefczynski-Lewis, Julie A.; Talkington, William J.; Walker, Nathan A.; Rapuano, Kristina M.; Kovach, Amanda L.

    2012-01-01

    Both sighted and blind individuals can readily interpret meaning behind everyday real-world sounds. In sighted listeners, we previously reported that regions along the bilateral posterior superior temporal sulci (pSTS) and middle temporal gyri (pMTG) are preferentially activated when presented with recognizable action sounds. These regions have generally been hypothesized to represent primary loci for complex motion processing, including visual biological motion processing and audio-visual integration. However, it remained unclear whether, or to what degree, life-long visual experience might impact functions related to hearing perception or memory of sound-source actions. Using functional magnetic resonance imaging (fMRI), we compared brain regions activated in congenitally blind versus sighted listeners in response to hearing a wide range of recognizable human-produced action sounds (excluding vocalizations) versus unrecognized, backward-played versions of those sounds. Here we show that recognized human action sounds commonly evoked activity in both groups along most of the left pSTS/pMTG complex, though with relatively greater activity in the right pSTS/pMTG by the blind group. These results indicate that portions of the postero-lateral temporal cortices contain domain-specific hubs for biological and/or complex motion processing independent of sensory-modality experience. Contrasting the two groups, the sighted listeners preferentially activated bilateral parietal plus medial and lateral frontal networks, while the blind listeners preferentially activated left anterior insula plus bilateral anterior calcarine and medial occipital regions, including what would otherwise have been visual-related cortex. These global-level network differences suggest that blind and sighted listeners may preferentially use different memory retrieval strategies when attempting to recognize action sounds. PMID:21305666

  1. Active noise control: a review of the field.

    PubMed

    Gordon, R T; Vining, W D

    1992-11-01

    Active noise control (ANC) is the application of the principle of the superposition of waves to noise attenuation problems. Much progress has been made toward applying ANC to narrow-band, low-frequency noise in confined spaces. During this same period, the application of ANC to broad-band noise or noise in three-dimensional spaces has seen little progress because of the recent quantification of serious physical limitations, most importantly noncausality, stability, spatial mismatch, and the infinite-gain controller requirement. ANC employs superposition to induce destructive interference to effect the attenuation of noise. ANC was believed to utilize the mechanism of phase cancellation to achieve the desired attenuation; however, current literature points to other mechanisms that may be operating in ANC. Categories of ANC are one-dimensional field and duct noise, enclosed spaces and interior noise, noise in three-dimensional spaces, and personal hearing protection. Development of active noise control stems from potential advantages in cost, size, and effectiveness. There are two approaches to ANC. In the first, the original sound is processed and injected back into the sound field in antiphase. The second approach is to synthesize a cancelling waveform. ANC of turbulent flow in pipes and ducts is the largest area in the field, and much work has been done on the actual mechanisms involved and on the causal versus noncausal aspects of system controllers. Fan and propeller noise can be divided into two categories: noise generated directly as blade-passing tones and noise generated as a result of blade-tip turbulence inducing vibration in structures. Three-dimensional spaces present a noise environment where the physical limitations are magnified and the infinite-gain controller requirement is confronted. Personal hearing protection has been shown to be best suited to the control of periodic, low-frequency noise.
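    The first approach described in this record (injecting the processed sound back in antiphase) can be illustrated with a minimal single-tone sketch; the 200 Hz tone and the 10-degree phase error are hypothetical values chosen for illustration:

    ```python
    import numpy as np

    fs = 16000                           # sample rate (Hz)
    t = np.arange(0, 0.05, 1 / fs)       # 50 ms window (10 full cycles at 200 Hz)
    noise = np.sin(2 * np.pi * 200 * t)  # primary noise: a 200 Hz tone

    # Ideal secondary source: an exact antiphase copy gives complete cancellation
    residual_ideal = noise + (-noise)

    # A real controller has phase error; attenuation degrades quickly with it
    phase_error = np.deg2rad(10)
    anti = -np.sin(2 * np.pi * 200 * t + phase_error)
    residual_real = noise + anti

    def attenuation_db(before, after):
        """Residual RMS level relative to the uncontrolled noise, in dB."""
        rms = lambda x: np.sqrt(np.mean(x ** 2))
        return 20 * np.log10(rms(after) / rms(before))

    # A 10-degree phase error already limits attenuation to roughly -15 dB
    print(round(attenuation_db(noise, residual_real), 1))
    ```

    The rapid loss of attenuation with phase error is one face of the causality and stability limitations the review discusses: for broad-band noise a controller cannot maintain near-zero phase error at all frequencies simultaneously.
    
    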

  2. Acoustic transistor: Amplification and switch of sound by sound

    NASA Astrophysics Data System (ADS)

    Liang, Bin; Kan, Wei-wei; Zou, Xin-ye; Yin, Lei-lei; Cheng, Jian-chun

    2014-08-01

    We designed an acoustic transistor to manipulate sound in a manner similar to the manipulation of electric current by its electrical counterpart. The acoustic transistor is a three-terminal device with the essential ability to use a small monochromatic acoustic signal to control a much larger output signal within a broad frequency range. The output and controlling signals have the same frequency, suggesting the possibility of cascading the structure to amplify an acoustic signal. Capable of amplifying and switching sound by sound, acoustic transistors have various potential applications and may open the way to the design of conceptual devices such as acoustic logic gates.

  3. Sound sensitivity of neurons in rat hippocampus during performance of a sound-guided task

    PubMed Central

    Vinnik, Ekaterina; Honey, Christian; Schnupp, Jan; Diamond, Mathew E.

    2012-01-01

    To investigate how hippocampal neurons encode sound stimuli, and the conjunction of sound stimuli with the animal's position in space, we recorded from neurons in the CA1 region of hippocampus in rats while they performed a sound discrimination task. Four different sounds were used, two associated with water reward on the right side of the animal and the other two with water reward on the left side. This allowed us to separate neuronal activity related to sound identity from activity related to response direction. To test the effect of spatial context on sound coding, we trained rats to carry out the task on two identical testing platforms at different locations in the same room. Twenty-one percent of the recorded neurons exhibited sensitivity to sound identity, as quantified by the difference in firing rate for the two sounds associated with the same response direction. Sensitivity to sound identity was often observed on only one of the two testing platforms, indicating an effect of spatial context on sensory responses. Forty-three percent of the neurons were sensitive to response direction, and the probability that any one neuron was sensitive to response direction was statistically independent from its sensitivity to sound identity. There was no significant coding for sound identity when the rats heard the same sounds outside the behavioral task. These results suggest that CA1 neurons encode sound stimuli, but only when those sounds are associated with actions. PMID:22219030

  4. Effects of sounds generated by a dental turbine and a stream on regional cerebral blood flow and cardiovascular responses.

    PubMed

    Mishima, Riho; Kudo, Takumu; Tsunetsugu, Yuko; Miyazaki, Yoshifumi; Yamamura, Chie; Yamada, Yoshiaki

    2004-09-01

    Effects of sound generated by a dental turbine and a small stream (murmur) and the effects of no sound (null, control) on heart rate, systolic and diastolic blood pressure, and hemodynamic changes (oxygenated, deoxygenated, and total hemoglobin concentrations) in the frontal cortex were measured in 18 young volunteers. Questionnaires completed by the volunteers were also evaluated. Near-infrared spectroscopy and the Finapres technique were employed to measure hemodynamic and vascular responses, respectively. The subjects assessed the murmur, null, and turbine sounds as "pleasant," "natural," and "unpleasant," respectively. Blood pressures changed in response to the murmur, null, and turbine sound stimuli as expected: lower than the control level, unchanged, and higher than the control level, respectively. Mean blood pressure values tended to increase gradually over the recording time even during the null sound stimulation, possibly because of the recording environment. Oxygenated hemoglobin concentrations decreased drastically in response to the dental turbine sound, while deoxygenated hemoglobin concentrations remained unchanged and thus total hemoglobin concentrations decreased (due to the decreased oxygenated hemoglobin concentrations). Hemodynamic responses to the murmuring sound and the null sound were slight or unchanged, respectively. Surprisingly, heart rate measurements remained fairly stable in response to the stimulatory noises. In conclusion, we demonstrate here that sound generated by a dental turbine may affect cerebral blood flow and metabolism as well as autonomic responses. Copyright 2004 The Society of the Nippon Dental University

  5. Applications of the TIROS-N sounding and cloud motion wind enhancement for the FGGE 'special effort'. [Global Weather Experiment

    NASA Technical Reports Server (NTRS)

    Atlas, R.

    1980-01-01

    In January of 1978, a panel of experts recommended that a 'special effort' be made to enhance and edit satellite soundings and cloud tracked winds in data sparse regions. It was felt that these activities would be necessary to obtain maximum benefits from an evaluation of satellite data during the Global Weather Experiment (FGGE). The 'special effort' is being conducted for the two special observing periods of FGGE. More than sixty cases have been selected for enhancement on the basis of meteorological interest. These cases include situations of blocking, cutoff low development, cyclogenesis, and tropical circulations. The sounding data enhancement process consists of supplementing the operational satellite sounding data set with higher resolution soundings in meteorologically active regions, and with new soundings where data voids or soundings of questionable quality exist.

  6. Fine manipulation of sound via lossy metamaterials with independent and arbitrary reflection amplitude and phase.

    PubMed

    Zhu, Yifan; Hu, Jie; Fan, Xudong; Yang, Jing; Liang, Bin; Zhu, Xuefeng; Cheng, Jianchun

    2018-04-24

    The fine manipulation of sound fields is critical in acoustics yet is restricted by the coupled amplitude and phase modulations in existing wave-steering metamaterials. Commonly, unavoidable losses make it difficult to control coupling, thereby limiting device performance. Here we show the possibility of tailoring the loss in metamaterials to realize fine control of sound in three-dimensional (3D) space. Quantitative studies on the parameter dependence of reflection amplitude and phase identify quasi-decoupled points in the structural parameter space, allowing arbitrary amplitude-phase combinations for reflected sound. We further demonstrate the significance of our approach for sound manipulation by producing self-bending beams, multifocal focusing, and a single-plane two-dimensional hologram, as well as a multi-plane 3D hologram with quality better than the previous phase-controlled approach. Our work provides a route for harnessing sound via engineering the loss, enabling promising device applications in acoustics and related fields.
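    The focusing described in this record follows from per-cell phase conjugation once reflection amplitude and phase are decoupled. Below is a minimal free-field sketch; the array geometry, frequency, and focal point are hypothetical, and the cells are idealized as lossless monopole re-radiators rather than the lossy metamaterial units of the paper:

    ```python
    import numpy as np

    c = 343.0                  # speed of sound in air (m/s)
    f = 3000.0                 # working frequency (Hz), hypothetical
    k = 2 * np.pi * f / c      # wavenumber

    # Linear array of 32 reflecting unit cells, 1 cm pitch; focus 0.3 m away
    xs = (np.arange(32) - 15.5) * 0.01
    focus = np.array([0.0, 0.3])

    # Phase conjugation: choose each cell's reflection phase so that all
    # contributions arrive in phase at the focal point
    r_focus = np.hypot(xs - focus[0], focus[1])
    cell_phase = -k * r_focus
    cell_amp = np.ones_like(xs)   # amplitude remains free to choose independently

    def field(p):
        """Sum the (idealized, lossless) monopole contributions at point p."""
        r = np.hypot(xs - p[0], p[1])
        return np.sum(cell_amp * np.exp(1j * (k * r + cell_phase)) / r)

    on_axis = abs(field(focus))
    off_axis = abs(field(np.array([0.15, 0.3])))
    print(on_axis > 2 * off_axis)  # energy concentrates at the focal point
    ```

    Superposing a second conjugated phase profile (with amplitude split between them) extends the same idea to multiple foci, which is the building block of the multifocal focusing and holograms the paper demonstrates.
    
    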

  7. Suppression of noise-induced hyperactivity in the dorsal cochlear nucleus following application of the cholinergic agonist, carbachol

    PubMed Central

    Manzoor, N.F.; Chen, G.; Kaltenbach, J.A.

    2013-01-01

    Increased spontaneous firing (hyperactivity) is induced in fusiform cells of the dorsal cochlear nucleus (DCN) following intense sound exposure and is implicated as a possible neural correlate of noise-induced tinnitus. Previous studies have shown that in normal hearing animals, fusiform cell activity can be modulated by activation of parallel fibers, which represent the axons of granule cells. The modulation consists of a transient excitation followed by a more prolonged period of inhibition, presumably reflecting direct excitatory inputs to fusiform cells and an indirect inhibitory input to fusiform cells from the granule cell-cartwheel cell system. We hypothesized that since granule cells can be activated by cholinergic inputs, it might be possible to suppress tinnitus-related hyperactivity of fusiform cells using the cholinergic agonist, carbachol. To test this hypothesis, we recorded multiunit spontaneous activity in the fusiform soma layer (FSL) of the DCN in control and tone-exposed hamsters (10 kHz, 115 dB SPL, 4 h) before and after application of carbachol to the DCN surface. In both exposed and control animals, 100 µM carbachol had a transient excitatory effect on spontaneous activity followed by a rapid weakening of activity to near or below normal levels. In exposed animals, the weakening of activity was powerful enough to completely abolish the hyperactivity induced by intense sound exposure. This suppressive effect was partially reversed by application of atropine and was not associated with significant changes in neural best frequencies (BF) or BF thresholds. These findings demonstrate that noise-induced hyperactivity can be pharmacologically controlled and raise the possibility that attenuation of tinnitus may be achievable by using an agonist of the cholinergic system. PMID:23721928

  8. Suppression of noise-induced hyperactivity in the dorsal cochlear nucleus following application of the cholinergic agonist, carbachol.

    PubMed

    Manzoor, N F; Chen, G; Kaltenbach, J A

    2013-07-26

    Increased spontaneous firing (hyperactivity) is induced in fusiform cells of the dorsal cochlear nucleus (DCN) following intense sound exposure and is implicated as a possible neural correlate of noise-induced tinnitus. Previous studies have shown that in normal hearing animals, fusiform cell activity can be modulated by activation of parallel fibers, which represent the axons of granule cells. The modulation consists of a transient excitation followed by a more prolonged period of inhibition, presumably reflecting direct excitatory inputs to fusiform cells and an indirect inhibitory input to fusiform cells from the granule cell-cartwheel cell system. We hypothesized that since granule cells can be activated by cholinergic inputs, it might be possible to suppress tinnitus-related hyperactivity of fusiform cells using the cholinergic agonist, carbachol. To test this hypothesis, we recorded multiunit spontaneous activity in the fusiform soma layer (FSL) of the DCN in control and tone-exposed hamsters (10 kHz, 115 dB SPL, 4 h) before and after application of carbachol to the DCN surface. In both exposed and control animals, 100 μM carbachol had a transient excitatory effect on spontaneous activity followed by a rapid weakening of activity to near or below normal levels. In exposed animals, the weakening of activity was powerful enough to completely abolish the hyperactivity induced by intense sound exposure. This suppressive effect was partially reversed by application of atropine and was usually not associated with significant changes in neural best frequencies (BF) or BF thresholds. These findings demonstrate that noise-induced hyperactivity can be pharmacologically controlled and raise the possibility that attenuation of tinnitus may be achievable by using an agonist of the cholinergic system. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. 76 FR 34627 - Proposed Modification of Offshore Airspace Areas: Norton Sound Low, Control 1234L and Control...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-14

    ...: Norton Sound Low, Control 1234L and Control 1487L; Alaska AGENCY: Federal Aviation Administration (FAA... Low, Control 1234L, and Control 1487L Offshore Airspace Areas in Alaska. The airspace floors would be... there is a requirement to provide Instrument Flight Rules (IFR) en route Air Traffic Control (ATC...

  10. Food Chains & Webs. A Multimedia CD-ROM. [CD-ROM].

    ERIC Educational Resources Information Center

    2001

    This CD-ROM is designed for classroom and individual use to teach and learn about food chains and food webs. Integrated animations, custom graphics, three-dimensional representations, photographs, and sound are featured for use in user-controlled activities. Interactive lessons are available to reinforce the subject material. Pre- and post-testing…

  11. DNA: The Molecule of Life. A Multimedia CD-ROM. [CD-ROM].

    ERIC Educational Resources Information Center

    2001

    This CD-ROM is designed for classroom and individual use to teach and learn about DNA. Integrated animations, custom graphics, three-dimensional representations, photographs, and sound are featured for use in user-controlled activities. Interactive lessons are available to reinforce the subject material. Pre- and post-testing sections are also…

  12. Litter Control, Waste Management, and Recycling Resource Unit, K-6. Bulletin 1722.

    ERIC Educational Resources Information Center

    Louisiana State Dept. of Education, Baton Rouge.

    This unit provides elementary teachers with ideas for assisting their students in developing an understanding and appreciation of sound resource use. It contains projects and activities that focus on both the litter problem and on waste management solutions. These materials can be adapted and modified to accommodate different grade levels and…

  13. An intelligent artificial throat with sound-sensing ability based on laser induced graphene

    PubMed Central

    Tao, Lu-Qi; Tian, He; Liu, Ying; Ju, Zhen-Yi; Pang, Yu; Chen, Yuan-Quan; Wang, Dan-Yang; Tian, Xiang-Guang; Yan, Jun-Chao; Deng, Ning-Qin; Yang, Yi; Ren, Tian-Ling

    2017-01-01

    Traditional sound sources and sound detectors are usually independent and discrete devices in the human hearing range. To minimize device size and integrate sound functions with wearable electronics, there is an urgent need to realize the functional integration of generating and detecting sound in a single device. Here we show an intelligent laser-induced graphene artificial throat, which can not only generate sound but also detect sound in a single device. More importantly, the intelligent artificial throat will significantly assist the disabled, because simple throat vibrations from a mute person, such as a hum, cough, or scream of differing intensity or frequency, can be detected and converted into controllable sounds. Furthermore, the laser-induced graphene artificial throat has the advantages of one-step fabrication, high efficiency, excellent flexibility, and low cost, and it will open up practical applications in voice control, wearable electronics, and many other areas. PMID:28232739

  14. An intelligent artificial throat with sound-sensing ability based on laser induced graphene.

    PubMed

    Tao, Lu-Qi; Tian, He; Liu, Ying; Ju, Zhen-Yi; Pang, Yu; Chen, Yuan-Quan; Wang, Dan-Yang; Tian, Xiang-Guang; Yan, Jun-Chao; Deng, Ning-Qin; Yang, Yi; Ren, Tian-Ling

    2017-02-24

    Traditional sound sources and sound detectors are usually independent and discrete devices in the human hearing range. To minimize device size and integrate sound functions with wearable electronics, there is an urgent need to realize the functional integration of generating and detecting sound in a single device. Here we show an intelligent laser-induced graphene artificial throat, which can not only generate sound but also detect sound in a single device. More importantly, the intelligent artificial throat will significantly assist the disabled, because simple throat vibrations from a mute person, such as a hum, cough, or scream of differing intensity or frequency, can be detected and converted into controllable sounds. Furthermore, the laser-induced graphene artificial throat has the advantages of one-step fabrication, high efficiency, excellent flexibility, and low cost, and it will open up practical applications in voice control, wearable electronics, and many other areas.

  15. An intelligent artificial throat with sound-sensing ability based on laser induced graphene

    NASA Astrophysics Data System (ADS)

    Tao, Lu-Qi; Tian, He; Liu, Ying; Ju, Zhen-Yi; Pang, Yu; Chen, Yuan-Quan; Wang, Dan-Yang; Tian, Xiang-Guang; Yan, Jun-Chao; Deng, Ning-Qin; Yang, Yi; Ren, Tian-Ling

    2017-02-01

    Traditional sound sources and sound detectors are usually independent and discrete devices in the human hearing range. To minimize device size and integrate sound functions with wearable electronics, there is an urgent need to realize the functional integration of generating and detecting sound in a single device. Here we show an intelligent laser-induced graphene artificial throat, which can not only generate sound but also detect sound in a single device. More importantly, the intelligent artificial throat will significantly assist the disabled, because simple throat vibrations from a mute person, such as a hum, cough, or scream of differing intensity or frequency, can be detected and converted into controllable sounds. Furthermore, the laser-induced graphene artificial throat has the advantages of one-step fabrication, high efficiency, excellent flexibility, and low cost, and it will open up practical applications in voice control, wearable electronics, and many other areas.

  16. Effects of musical expertise on oscillatory brain activity in response to emotional sounds.

    PubMed

    Nolden, Sophie; Rigoulot, Simon; Jolicoeur, Pierre; Armony, Jorge L

    2017-08-01

    Emotions can be conveyed through a variety of channels in the auditory domain, be it via music, non-linguistic vocalizations, or speech prosody. Moreover, recent studies suggest that expertise in one sound category can impact the processing of emotional sounds in other sound categories, as they found that musicians process emotional musical and vocal sounds more efficiently than non-musicians. However, the neural correlates of these modulations, especially their time course, are not very well understood. Consequently, we focused here on how the neural processing of emotional information varies as a function of sound category and the expertise of participants. The electroencephalogram (EEG) of 20 non-musicians and 17 musicians was recorded while they listened to vocal (speech and vocalizations) and musical sounds. The amplitude of EEG-oscillatory activity in the theta, alpha, beta, and gamma bands was quantified, and Independent Component Analysis (ICA) was used to identify underlying components of brain activity in each band. Category differences were found in the theta and alpha bands, due to larger responses to music and speech than to vocalizations, and in posterior beta, mainly due to differential processing of speech. In addition, we observed greater activation in frontal theta and alpha for musicians than for non-musicians, as well as an interaction between expertise and the emotional content of sounds in frontal alpha. The results reflect musicians' expertise in recognizing emotion-conveying music, which seems also to generalize to emotional expressions conveyed by the human voice, in line with previous accounts of effects of expertise on the processing of musical and vocal sounds. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. 77 FR 49412 - Takes of Marine Mammals Incidental to Specified Activities; Navy Research, Development, Test and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-16

    ...; high-pitched sounds contain high frequencies and low-pitched sounds contain low frequencies. Natural... estimated to occur between approximately 150 Hz and 160 kHz. High-frequency cetaceans (eight species of true... masking by high frequency sound. Human data indicate low-frequency sound can mask high-frequency sounds (i...

  18. Assessing the potential for passive radio sounding of Europa and Ganymede with RIME and REASON

    NASA Astrophysics Data System (ADS)

    Schroeder, Dustin M.; Romero-Wolf, Andrew; Carrer, Leonardo; Grima, Cyril; Campbell, Bruce A.; Kofman, Wlodek; Bruzzone, Lorenzo; Blankenship, Donald D.

    2016-12-01

    Recent work has raised the potential for Jupiter's decametric radiation to be used as a source for passive radio sounding of its icy moons. Two radar sounding instruments, the Radar for Icy Moon Exploration (RIME) and the Radar for Europa Assessment and Sounding: Ocean to Near-surface (REASON) have been selected for ESA and NASA missions to Ganymede and Europa. Here, we revisit the projected performance of the passive sounding concept and assess the potential for its implementation as an additional mode for RIME and REASON. We find that the Signal to Noise Ratio (SNR) of passive sounding can approach or exceed that of active sounding in a noisy sub-Jovian environment, but that active sounding achieves a greater SNR in the presence of quiescent noise and outperforms passive sounding in terms of clutter. We also compare the performance of passive sounding at the 9 MHz HF center frequency of RIME and REASON to other frequencies within the Jovian decametric band. We conclude that the addition of a passive sounding mode on RIME or REASON stands to enhance their science return by enabling sub-Jovian HF sounding in the presence of decametric noise, but that there is not a compelling case for implementation at a different frequency.

  19. Neural Decoding of Bistable Sounds Reveals an Effect of Intention on Perceptual Organization

    PubMed Central

    2018-01-01

    Auditory signals arrive at the ear as a mixture that the brain must decompose into distinct sources based to a large extent on acoustic properties of the sounds. An important question concerns whether listeners have voluntary control over how many sources they perceive. This has been studied using pure high (H) and low (L) tones presented in the repeating pattern HLH-HLH-, which can form a bistable percept heard either as an integrated whole (HLH-) or as segregated into high (H-H-) and low (-L-) sequences. Although instructing listeners to try to integrate or segregate sounds affects reports of what they hear, this could reflect a response bias rather than a perceptual effect. We had human listeners (15 males, 12 females) continuously report their perception of such sequences and recorded neural activity using MEG. During neutral listening, a classifier trained on patterns of neural activity distinguished between periods of integrated and segregated perception. In other conditions, participants tried to influence their perception by allocating attention either to the whole sequence or to a subset of the sounds. They reported hearing the desired percept for a greater proportion of time than when listening neutrally. Critically, neural activity supported these reports; stimulus-locked brain responses in auditory cortex were more likely to resemble the signature of segregation when participants tried to hear segregation than when attempting to perceive integration. These results indicate that listeners can influence how many sound sources they perceive, as reflected in neural responses that track both the input and its perceptual organization. SIGNIFICANCE STATEMENT Can we consciously influence our perception of the external world? We address this question using sound sequences that can be heard either as coming from a single source or as two distinct auditory streams. 
Listeners reported spontaneous changes in their perception between these two interpretations while we recorded neural activity to identify signatures of such integration and segregation. They also indicated that they could, to some extent, choose between these alternatives. This claim was supported by corresponding changes in responses in auditory cortex. By linking neural and behavioral correlates of perception, we demonstrate that the number of objects that we perceive can depend not only on the physical attributes of our environment, but also on how we intend to experience it. PMID:29440556

  20. Overview on the Diversity of Sounds Produced by Clownfishes (Pomacentridae): Importance of Acoustic Signals in Their Peculiar Way of Life

    PubMed Central

    Colleye, Orphal; Parmentier, Eric

    2012-01-01

    Background Clownfishes (Pomacentridae) are brightly colored coral reef fishes well known for their mutualistic symbiosis with tropical sea anemones. These fishes live in social groups with a size-based dominance hierarchy. In this structure, where sex is socially controlled, agonistic interactions are numerous and serve to maintain size differences between individuals adjacent in rank. Clownfishes are also prolific callers whose sounds seem to play an important role in the social hierarchy. Here, we review and synthesize the diversity of sounds produced by clownfishes in order to emphasize the importance of acoustic signals in their way of life. Methodology/Principal Findings Recording the different acoustic behaviors indicated that sounds fall into two main categories: aggressive sounds produced in conjunction with threat postures (charge and chase), and submissive sounds always emitted when fish exhibited head-shaking movements (i.e. a submissive posture). Both types of sounds showed size-related intraspecific variation in dominant frequency and pulse duration: smaller individuals produce higher-frequency and shorter-duration pulses than larger ones. Consequently, these sonic features might be useful cues for individual recognition within the group, an observation of particular importance given the size-based hierarchy in clownfish groups. On the other hand, no acoustic signal was associated with the different reproductive activities. Conclusions/Significance Unlike other pomacentrids, clownfishes do not produce sounds for mate attraction but to attain and defend breeding status, which explains why selective pressure has not been strong enough to promote call diversification in this group. PMID:23145114

  1. Salient, Irrelevant Sounds Reflexively Induce Alpha Rhythm Desynchronization in Parallel with Slow Potential Shifts in Visual Cortex.

    PubMed

    Störmer, Viola; Feng, Wenfeng; Martinez, Antigona; McDonald, John; Hillyard, Steven

    2016-03-01

    Recent findings suggest that a salient, irrelevant sound attracts attention to its location involuntarily and facilitates processing of a colocalized visual event [McDonald, J. J., Störmer, V. S., Martinez, A., Feng, W. F., & Hillyard, S. A. Salient sounds activate human visual cortex automatically. Journal of Neuroscience, 33, 9194-9201, 2013]. Associated with this cross-modal facilitation is a sound-evoked slow potential over the contralateral visual cortex termed the auditory-evoked contralateral occipital positivity (ACOP). Here, we further tested the hypothesis that a salient sound captures visual attention involuntarily by examining sound-evoked modulations of the occipital alpha rhythm, which has been strongly associated with visual attention. In two purely auditory experiments, lateralized irrelevant sounds triggered a bilateral desynchronization of occipital alpha-band activity (10-14 Hz) that was more pronounced in the hemisphere contralateral to the sound's location. The timing of the contralateral alpha-band desynchronization overlapped with that of the ACOP (∼240-400 msec), and both measures of neural activity were estimated to arise from neural generators in the ventral-occipital cortex. The magnitude of the lateralized alpha desynchronization was correlated with ACOP amplitude on a trial-by-trial basis and between participants, suggesting that they arise from or are dependent on a common neural mechanism. These results support the hypothesis that the sound-induced alpha desynchronization and ACOP both reflect the involuntary cross-modal orienting of spatial attention to the sound's location.

  2. Behavioral response of manatees to variations in environmental sound levels

    USGS Publications Warehouse

    Miksis-Olds, Jennifer L.; Wagner, Tyler

    2011-01-01

    Florida manatees (Trichechus manatus latirostris) inhabit coastal regions because they feed on the aquatic vegetation that grows in shallow waters, which are the same areas where human activities are greatest. Noise produced from anthropogenic and natural sources has the potential to affect these animals by eliciting responses ranging from mild behavioral changes to extreme aversion. Sound levels were calculated from recordings made throughout behavioral observation periods. An information theoretic approach was used to investigate the relationship between behavior patterns and sound level. Results indicated that elevated sound levels affect manatee activity and are a function of behavioral state. The proportion of time manatees spent feeding and milling changed in response to sound level. When ambient sound levels were highest, more time was spent in the directed, goal-oriented behavior of feeding, whereas less time was spent engaged in undirected behavior such as milling. This work illustrates how shifts in activity of individual manatees may be useful parameters for identifying impacts of noise on manatees and might inform population level effects.

  3. Geologic interpretation and multibeam bathymetry of the sea floor in southeastern Long Island Sound

    USGS Publications Warehouse

    Poppe, Lawrence J.; Ackerman, Seth D.; Doran, Elizabeth F.; Moser, Marc S.; Stewart, Helen F.; Forfinski, Nicholas A.; Gardner, Uther L.; Keene, Jennifer A.

    2006-01-01

    Digital terrain models (DTMs) produced from multibeam echosounder (MBES) bathymetric data provide valuable base maps for marine geological interpretations (e.g. Todd and others, 1999; Mosher and Thomson, 2002; ten Brink and others, 2004; Poppe and others, 2006a,b). These maps help define the geological variability of the sea floor (one of the primary controls of benthic habitat diversity); improve our understanding of the processes that control the distribution and transport of bottom sediments, the distribution of benthic habitats and associated infaunal community structures; and provide a detailed framework for future research, monitoring, and management activities. The bathymetric survey interpreted herein (National Oceanic and Atmospheric Administration (NOAA) survey H11255) covers roughly 95 km² of sea floor in southeastern Long Island Sound (fig. 1). This bathymetry has been examined in relation to seismic reflection data collected concurrently, as well as archived seismic profiles acquired as part of a long-standing geologic mapping partnership between the State of Connecticut and the U.S. Geological Survey (USGS). The objective of this work was to use these geophysical data sets to interpret geomorphological attributes of the sea floor in terms of the Quaternary geologic history and modern sedimentary processes within Long Island Sound.

  4. Distinct sensory representations of wind and near-field sound in the Drosophila brain

    PubMed Central

    Yorozu, Suzuko; Wong, Allan; Fischer, Brian J.; Dankert, Heiko; Kernan, Maurice J.; Kamikouchi, Azusa; Ito, Kei; Anderson, David J.

    2009-01-01

    Behavioral responses to wind are thought to play a critical role in controlling the dispersal and population genetics of wild Drosophila species [1,2], as well as their navigation in flight [3], but their underlying neurobiological basis is unknown. We show that Drosophila melanogaster, like wild-caught Drosophila strains [4], exhibits robust wind-induced suppression of locomotion (WISL), in response to air currents delivered at speeds normally encountered in nature [1,2]. Here we identify wind-sensitive neurons in Johnston's Organ (JO), an antennal mechanosensory structure previously implicated in near-field sound detection (reviewed in [5,6]). Using Gal4 lines targeted to different subsets of JO neurons [7], and a genetically encoded calcium indicator [8], we show that wind and near-field sound (courtship song) activate distinct populations of JO neurons, which project to different regions of the antennal and mechanosensory motor center (AMMC) in the central brain. Selective genetic ablation of wind-sensitive JO neurons in the antenna abolishes WISL behavior, without impairing hearing. Different neuronal subsets within the wind-sensitive population, moreover, respond to different directions of arista deflection caused by airflow and project to different regions of the AMMC, providing a rudimentary map of wind direction in the brain. Importantly, sound- and wind-sensitive JO neurons exhibit different intrinsic response properties: the former are phasically activated by small, bi-directional displacements of the aristae, while the latter are tonically activated by unidirectional, static deflections of larger magnitude. These different intrinsic properties are well suited to the detection of oscillatory pulses of near-field sound and laminar airflow, respectively. These data identify wind-sensitive neurons in JO, a structure that has been primarily associated with hearing, and reveal how the brain can distinguish different types of air particle movements using a common sensory organ. PMID:19279637

  5. 77 FR 19301 - Prince William Sound Regional Citizens' Advisory Council Charter Renewal

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-30

    ... DEPARTMENT OF HOMELAND SECURITY Coast Guard [USCG-2012-0099] Prince William Sound Regional... Prince William Sound Regional Citizens' Advisory Council (PWSRCAC) as an alternative voluntary advisory group for Prince William Sound, Alaska. This certification allows the PWSRCAC to monitor the activities...

  6. Aquatic Acoustic Metrics Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-12-18

    Fishes and marine mammals may suffer a range of potential effects from exposure to intense underwater sound generated by anthropogenic activities such as pile driving, shipping, sonars, and underwater blasting. Several underwater sound recording (USR) devices have been built to acquire samples of the underwater sound generated by anthropogenic activities. Software becomes indispensable for processing and analyzing the audio files recorded by these USRs. The new Aquatic Acoustic Metrics Interface Utility Software (AAMI) is specifically designed for analysis of underwater sound recordings to provide data in metrics that facilitate evaluation of the potential impacts of the sound on aquatic animals. In addition to the basic functions, such as loading and editing audio files recorded by USRs and batch processing of sound files, the software utilizes recording system calibration data to compute important parameters in physical units. The software also facilitates comparison of the noise sound sample metrics with biological measures such as audiograms of the sensitivity of aquatic animals to the sound, integrating various components into a single analytical frame.
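The abstract does not detail AAMI's algorithms, but the "physical units" step it mentions, applying recording-system calibration so that metrics come out as sound pressure levels, can be sketched as follows. The ADC and hydrophone sensitivity figures below are illustrative assumptions, not AAMI's actual values:

```python
import numpy as np

def counts_to_pascals(counts, adc_fullscale_v, adc_max_count, sens_v_per_pa):
    """Apply system calibration: ADC counts -> volts -> pressure in Pa."""
    volts = counts / adc_max_count * adc_fullscale_v
    return volts / sens_v_per_pa

def spl_db(pressure_pa, p_ref=1e-6):
    """RMS sound pressure level in dB re p_ref (1 uPa, the underwater convention)."""
    rms = np.sqrt(np.mean(np.square(pressure_pa)))
    return 20.0 * np.log10(rms / p_ref)

# Illustrative: a 10 Hz tone of 1 Pa amplitude sampled at 1 kHz for 1 s.
fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
pressure = 1.0 * np.sin(2.0 * np.pi * 10.0 * t)
level = spl_db(pressure)  # ~117 dB re 1 uPa
```

A 1 Pa amplitude tone works out to about 117 dB re 1 µPa; comparing such calibrated levels against species audiograms is the kind of integration the software describes.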

  7. Differential presence of anthropogenic compounds dissolved in the marine waters of Puget Sound, WA and Barkley Sound, BC.

    PubMed

    Keil, Richard; Salemme, Keri; Forrest, Brittany; Neibauer, Jaqui; Logsdon, Miles

    2011-11-01

    Organic compounds were evaluated in March 2010 at 22 stations in Barkley Sound, Vancouver Island, Canada, and at 66 locations in Puget Sound. Of 37 compounds, 15 were xenobiotics, 8 were determined to have an anthropogenic imprint over natural sources, and 13 were presumed to be of natural or mixed origin. The three most frequently detected compounds were salicylic acid, vanillin and thymol. The three most abundant compounds were diethylhexyl phthalate (DEHP), ethyl vanillin and benzaldehyde (∼600 ng L(-1) on average). Concentrations of xenobiotics were 10-100 times higher in Puget Sound relative to Barkley Sound. Three compound couplets are used to illustrate the influence of human activity on marine waters: vanillin and ethyl vanillin, salicylic acid and acetylsalicylic acid, and cinnamaldehyde and cinnamic acid. Ratios indicate that anthropogenic activities are the predominant source of these chemicals in Puget Sound. Published by Elsevier Ltd.

  8. Auditory processing assessment suggests that Wistar audiogenic rat neural networks are prone to entrainment.

    PubMed

    Pinto, Hyorrana Priscila Pereira; Carvalho, Vinícius Rezende; Medeiros, Daniel de Castro; Almeida, Ana Flávia Santos; Mendes, Eduardo Mazoni Andrade Marçal; Moraes, Márcio Flávio Dutra

    2017-04-07

    Epilepsy is a neurological disease related to the occurrence of pathological oscillatory activity, but the basic physiological mechanisms of seizure remain to be understood. Our working hypothesis is that specific sensory processing circuits may present an abnormally enhanced predisposition for coordinated firing in the dysfunctional brain. Such facilitated entrainment could share a similar mechanistic process with those expediting the propagation of epileptiform activity throughout the brain. To test this hypothesis, we employed the Wistar audiogenic rat (WAR) reflex animal model, which is characterized by seizures reliably triggered by sound. Sound stimulation was modulated in amplitude to produce an auditory steady-state evoked response (ASSR; ~53.71 Hz) that covers bottom-up and top-down processing on a time scale compatible with the dynamics of the epileptic condition. Data from inferior colliculus (IC) c-Fos immunohistochemistry and electrographic recordings were gathered for both the control Wistar group and WARs. Under 85-dB SPL auditory stimulation, compared to controls, the WARs presented a higher number of Fos-positive cells (at the IC and auditory temporal lobe) and a significant increase in ASSR-normalized energy. Similarly, the 110-dB SPL sound stimulation also statistically increased ASSR-normalized energy during ictal and post-ictal periods. However, at the transition from the physiological to the pathological state (pre-ictal period), the WAR ASSR analysis demonstrated a decline in normalized energy and a significant increase in circular variance values compared to those of controls. These results indicate an enhanced coordinated-firing state for WARs, except immediately before seizure onset (suggesting pre-ictal neuronal desynchronization from the external sensory drive). These results suggest a competing myriad of interferences among different networks that, after seizure onset, converge to a massive oscillatory circuit. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
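Circular variance, used above as an index of desynchronization from the stimulus, measures dispersion of the ASSR phase across trials. A minimal NumPy sketch (the authors' exact estimator is not specified in the abstract):

```python
import numpy as np

def circular_variance(phases_rad):
    """Circular variance of phase angles: 1 - |mean resultant vector|.
    0 means perfect phase locking to the stimulus; 1 means uniform phases."""
    phases = np.asarray(phases_rad, dtype=float)
    return 1.0 - np.abs(np.mean(np.exp(1j * phases)))

# Illustrative extremes: fully locked trials vs. maximally dispersed trials.
cv_locked = circular_variance(np.zeros(100))                  # -> 0.0
cv_dispersed = circular_variance(np.linspace(0, 2 * np.pi, 8,
                                             endpoint=False))  # -> ~1.0
```

A rise in this quantity just before seizure onset is what the abstract reports as pre-ictal desynchronization.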

  9. Active localization of virtual sounds

    NASA Technical Reports Server (NTRS)

    Loomis, Jack M.; Hebert, C.; Cicinelli, J. G.

    1991-01-01

    We describe a virtual sound display built around a 12 MHz 80286 microcomputer and special purpose analog hardware. The display implements most of the primary cues for sound localization in the ear-level plane. Static information about direction is conveyed by interaural time differences and, for frequencies above 1800 Hz, by head sound shadow (interaural intensity differences) and pinna sound shadow. Static information about distance is conveyed by variation in sound pressure (first power law) for all frequencies, by additional attenuation in the higher frequencies (simulating atmospheric absorption), and by the proportion of direct to reverberant sound. When the user actively locomotes, the changing angular position of the source occasioned by head rotations provides further information about direction and the changing angular velocity produced by head translations (motion parallax) provides further information about distance. Judging both from informal observations by users and from objective data obtained in an experiment on homing to virtual and real sounds, we conclude that simple displays such as this are effective in creating the perception of external sounds to which subjects can home with accuracy and ease.
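The display's exact cue model is not specified beyond the abstract, but two of the cues it names can be approximated with textbook formulas: the Woodworth spherical-head interaural time difference and inverse-distance ("first power law") attenuation. The head radius and speed of sound below are nominal values, not parameters from the paper:

```python
import math

C = 343.0    # nominal speed of sound in air, m/s
A = 0.0875   # nominal head radius, m

def itd_seconds(azimuth_deg):
    """Woodworth spherical-head interaural time difference (|azimuth| <= 90 deg)."""
    theta = math.radians(azimuth_deg)
    return (A / C) * (theta + math.sin(theta))

def spreading_loss_db(r_near, r_far):
    """Level drop from spherical spreading, p ~ 1/r: 6 dB per doubling of distance."""
    return 20.0 * math.log10(r_far / r_near)

itd_90 = itd_seconds(90.0)          # roughly 0.66 ms for a fully lateral source
loss = spreading_loss_db(1.0, 2.0)  # ~6.02 dB
```

Head rotation changes the azimuth (and hence the ITD) in real time, which is precisely the dynamic cue the display exploits when the user locomotes.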

  10. Listening panel agreement and characteristics of lung sounds digitally recorded from children aged 1–59 months enrolled in the Pneumonia Etiology Research for Child Health (PERCH) case–control study

    PubMed Central

    Park, Daniel E; Watson, Nora L; Buck, W Chris; Bunthi, Charatdao; Devendra, Akash; Ebruke, Bernard E; Elhilali, Mounya; Emmanouilidou, Dimitra; Garcia-Prats, Anthony J; Githinji, Leah; Hossain, Lokman; Madhi, Shabir A; Moore, David P; Mulindwa, Justin; Olson, Dan; Awori, Juliet O; Vandepitte, Warunee P; Verwey, Charl; West, James E; Knoll, Maria D; O'Brien, Katherine L; Feikin, Daniel R; Hammit, Laura L

    2017-01-01

    Introduction Paediatric lung sound recordings can be systematically assessed, but methodological feasibility and validity is unknown, especially from developing countries. We examined the performance of acoustically interpreting recorded paediatric lung sounds and compared sound characteristics between cases and controls. Methods Pneumonia Etiology Research for Child Health staff in six African and Asian sites recorded lung sounds with a digital stethoscope in cases and controls. Cases aged 1–59 months had WHO severe or very severe pneumonia; age-matched community controls did not. A listening panel assigned examination results of normal, crackle, wheeze, crackle and wheeze or uninterpretable, with adjudication of discordant interpretations. Classifications were recategorised into any crackle, any wheeze or abnormal (any crackle or wheeze) and primary listener agreement (first two listeners) was analysed among interpretable examinations using the prevalence-adjusted, bias-adjusted kappa (PABAK). We examined predictors of disagreement with logistic regression and compared case and control lung sounds with descriptive statistics. Results Primary listeners considered 89.5% of 792 case and 92.4% of 301 control recordings interpretable. Among interpretable recordings, listeners agreed on the presence or absence of any abnormality in 74.9% (PABAK 0.50) of cases and 69.8% (PABAK 0.40) of controls, presence/absence of crackles in 70.6% (PABAK 0.41) of cases and 82.4% (PABAK 0.65) of controls and presence/absence of wheeze in 72.6% (PABAK 0.45) of cases and 73.8% (PABAK 0.48) of controls. Controls, tachypnoea, >3 uninterpretable chest positions, crying, upper airway noises and study site predicted listener disagreement. Among all interpretable examinations, 38.0% of cases and 84.9% of controls were normal (p<0.0001); wheezing was the most common sound (49.9%) in cases. 
Conclusions Listening panel and case–control data suggest that our methodology is feasible and likely valid, and that small airway inflammation is common in WHO pneumonia. Digital auscultation may be an important future pneumonia diagnostic in developing countries. PMID:28883927
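PABAK, reported throughout the results above, is Cohen's kappa with chance agreement fixed at 0.5 (removing prevalence and bias effects), so for two raters making a binary judgment it reduces to a one-liner; the agreement/PABAK pairs in the abstract are mutually consistent under it:

```python
def pabak(observed_agreement):
    """Prevalence-adjusted, bias-adjusted kappa for two raters and a binary
    classification: chance agreement is fixed at 0.5, so
    kappa = (p_o - 0.5) / (1 - 0.5) = 2 * p_o - 1."""
    return 2.0 * observed_agreement - 1.0

# Agreement / PABAK pairs reported in the study; each rounds consistently.
reported = [(0.749, 0.50), (0.698, 0.40), (0.706, 0.41),
            (0.824, 0.65), (0.726, 0.45), (0.738, 0.48)]
checks = [round(pabak(p_o), 2) == k for p_o, k in reported]  # all True
```

For example, 74.9% observed agreement gives 2(0.749) - 1 = 0.498, the PABAK of 0.50 reported for any abnormality in cases.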

  11. Listening panel agreement and characteristics of lung sounds digitally recorded from children aged 1-59 months enrolled in the Pneumonia Etiology Research for Child Health (PERCH) case-control study.

    PubMed

    McCollum, Eric D; Park, Daniel E; Watson, Nora L; Buck, W Chris; Bunthi, Charatdao; Devendra, Akash; Ebruke, Bernard E; Elhilali, Mounya; Emmanouilidou, Dimitra; Garcia-Prats, Anthony J; Githinji, Leah; Hossain, Lokman; Madhi, Shabir A; Moore, David P; Mulindwa, Justin; Olson, Dan; Awori, Juliet O; Vandepitte, Warunee P; Verwey, Charl; West, James E; Knoll, Maria D; O'Brien, Katherine L; Feikin, Daniel R; Hammit, Laura L

    2017-01-01

    Paediatric lung sound recordings can be systematically assessed, but methodological feasibility and validity is unknown, especially from developing countries. We examined the performance of acoustically interpreting recorded paediatric lung sounds and compared sound characteristics between cases and controls. Pneumonia Etiology Research for Child Health staff in six African and Asian sites recorded lung sounds with a digital stethoscope in cases and controls. Cases aged 1-59 months had WHO severe or very severe pneumonia; age-matched community controls did not. A listening panel assigned examination results of normal, crackle, wheeze, crackle and wheeze or uninterpretable, with adjudication of discordant interpretations. Classifications were recategorised into any crackle, any wheeze or abnormal (any crackle or wheeze) and primary listener agreement (first two listeners) was analysed among interpretable examinations using the prevalence-adjusted, bias-adjusted kappa (PABAK). We examined predictors of disagreement with logistic regression and compared case and control lung sounds with descriptive statistics. Primary listeners considered 89.5% of 792 case and 92.4% of 301 control recordings interpretable. Among interpretable recordings, listeners agreed on the presence or absence of any abnormality in 74.9% (PABAK 0.50) of cases and 69.8% (PABAK 0.40) of controls, presence/absence of crackles in 70.6% (PABAK 0.41) of cases and 82.4% (PABAK 0.65) of controls and presence/absence of wheeze in 72.6% (PABAK 0.45) of cases and 73.8% (PABAK 0.48) of controls. Controls, tachypnoea, >3 uninterpretable chest positions, crying, upper airway noises and study site predicted listener disagreement. Among all interpretable examinations, 38.0% of cases and 84.9% of controls were normal (p<0.0001); wheezing was the most common sound (49.9%) in cases. 
Listening panel and case-control data suggest that our methodology is feasible and likely valid, and that small airway inflammation is common in WHO pneumonia. Digital auscultation may be an important future pneumonia diagnostic in developing countries.

  12. Sounds Alive: A Noise Workbook.

    ERIC Educational Resources Information Center

    Dickman, Donna McCord

    Sarah Screech, Danny Decibel, Sweetie Sound and Neil Noisy describe their experiences in the world of sound and noise to elementary students. Presented are their reports, games and charts which address sound measurement, the effects of noise on people, methods of noise control, and related areas. The workbook is intended to stimulate students'…

  13. Hearing Tests on Mobile Devices: Evaluation of the Reference Sound Level by Means of Biological Calibration.

    PubMed

    Masalski, Marcin; Kipiński, Lech; Grysiński, Tomasz; Kręcicki, Tomasz

    2016-05-30

    Hearing tests carried out in home settings by means of mobile devices require previous calibration of the reference sound level. Mobile devices with bundled headphones create the possibility of applying a predefined level for a particular model as an alternative to calibrating each device separately. The objective of this study was to determine the reference sound level for sets composed of a mobile device and bundled headphones. Reference sound levels for Android-based mobile devices were determined using an open-access mobile phone app by means of biological calibration, that is, in relation to the normal-hearing threshold. The examinations were conducted in 2 groups: an uncontrolled and a controlled one. In the uncontrolled group, fully automated self-measurements were carried out in home conditions by 18- to 35-year-old subjects, without prior hearing problems, recruited online. Calibration was conducted as a preliminary step in preparation for further examination. In the controlled group, audiologist-assisted examinations were performed in a sound booth, on normal-hearing subjects verified through pure-tone audiometry, recruited offline from among the workers and patients of the clinic. In both groups, the reference sound levels were determined on a subject's mobile device using Békésy audiometry. The reference sound levels were compared between the groups, and intramodel and intermodel analyses were carried out as well. In the uncontrolled group, 8988 calibrations were conducted on 8620 different devices representing 2040 models. In the controlled group, 158 calibrations (test and retest) were conducted on 79 devices representing 50 models. Result analysis was performed for the 10 most frequently used models in both groups. The difference in reference sound levels between the uncontrolled and controlled groups was 1.50 dB (SD 4.42). The mean SD of the reference sound level determined for devices within the same model was 4.03 dB (95% CI 3.93-4.11). 
Statistically significant differences were found across models. Reference sound levels determined in the uncontrolled group are comparable to the values obtained in the controlled group. This validates the use of biological calibration in the uncontrolled group for determining the predefined reference sound level for new devices. Moreover, due to a relatively small deviation of the reference sound level for devices of the same model, it is feasible to conduct hearing screening on devices calibrated with the predefined reference sound level.

  14. Is 1/f sound more effective than simple resting in reducing stress response?

    PubMed

    Oh, Eun-Joo; Cho, Il-Young; Park, Soon-Kwon

    2014-01-01

    It has previously been suggested that listening to 1/f sound effectively reduces stress. However, findings have been inconsistent, and further study of the relationship between 1/f sound and the stress response is needed. The present study examined whether sound with 1/f properties (1/f sound) affects stress-induced electroencephalogram (EEG) changes. Twenty-six subjects who voluntarily participated in the study were randomly assigned to the experimental or control group. Data from four participants were excluded because of EEG artifacts. A mental arithmetic task was used as a stressor. Participants in the experimental group listened to 1/f sound for 5 minutes and 33 seconds, while participants in the control group sat quietly for the same duration. EEG recordings were obtained at various points throughout the experiment. After the experiment, participants completed a questionnaire on the affective impact of the 1/f sound. The results indicated that the mental arithmetic task effectively induced a stress response measurable by EEG. Relative theta power at all electrode sites was significantly lower than baseline in both the control and experimental groups. Relative alpha power was significantly lower, and relative beta power significantly higher, in the T3 and T4 areas. Secondly, 1/f sound and simple resting affected task-associated EEG changes in a similar manner. Finally, participants reported in the questionnaire that they experienced a positive feeling in response to the 1/f sound. Our results suggest that a commercialized 1/f sound product is no more effective than simple resting in alleviating the physiological stress response.

  15. Simplified Rotation In Acoustic Levitation

    NASA Technical Reports Server (NTRS)

    Barmatz, M. B.; Gaspar, M. S.; Trinh, E. H.

    1989-01-01

    New technique based on old discovery used to control orientation of object levitated acoustically in axisymmetric chamber. Method does not require expensive equipment like additional acoustic drivers of precisely adjustable amplitude, phase, and frequency. Reflecting object acts as second source of sound. If reflecting object large enough, close enough to levitated object, or focuses reflected sound sufficiently, Rayleigh torque exerted on levitated object by reflected sound controls orientation of object.

  16. Cluster-Randomized Controlled Trial Evaluating the Effectiveness of Computer-Assisted Intervention Delivered by Educators for Children with Speech Sound Disorders

    ERIC Educational Resources Information Center

    McLeod, Sharynne; Baker, Elise; McCormack, Jane; Wren, Yvonne; Roulstone, Sue; Crowe, Kathryn; Masso, Sarah; White, Paul; Howland, Charlotte

    2017-01-01

    Purpose: The aim was to evaluate the effectiveness of computer-assisted input-based intervention for children with speech sound disorders (SSD). Method: The Sound Start Study was a cluster-randomized controlled trial. Seventy-nine early childhood centers were invited to participate, 45 were recruited, and 1,205 parents and educators of 4- and…

  17. Active structural acoustic control of noise transmission through double panel systems

    NASA Astrophysics Data System (ADS)

    Carneal, James P.; Fuller, Chris R.

    1995-04-01

    A preliminary parametric study of active control of sound transmission through double panel systems has been experimentally performed. The technique used is the active structural acoustic control (ASAC) approach where control inputs, in the form of piezoelectric actuators, were applied to the structure while the radiated pressure field was minimized. Results indicate the application of control inputs to the radiating panel resulted in greater transmission loss due to its direct effect on the nature of the structural-acoustic coupling between the radiating panel and the receiving chamber. Increased control performance was seen in a double panel system consisting of a stiffer radiating panel with a lower modal density. As expected, more effective control of a radiating panel excited on-resonance is achieved over one excited off-resonance. In general, the results validate the ASAC approach for double panel systems and demonstrate that it is possible to take advantage of double panel behavior to enhance control performance, although it is clear that further research must be done to understand the physics involved.

  18. Comparison of Tooth Color Change After Bleaching With Conventional and Different Light-Activated Methods.

    PubMed

    Shahabi, Sima; Assadian, Hadi; Mahmoudi Nahavandi, Alireza; Nokhbatolfoghahaei, Hanieh

    2018-01-01

    Introduction: The demand for esthetic dental treatments has increased in recent years, mainly due to improved oral hygiene and better maintenance of oral health and teeth in older individuals. Bleaching of discolored anterior teeth is the most popular esthetic dental treatment. Even individuals with sound teeth and adequate esthetics seek whiter teeth in the anterior region. The aim of this study was to evaluate tooth color changes following conventional in-office bleaching compared to light-activated methods using different light sources. Methods: Seventy sound anterior teeth (devoid of caries and/or fracture), extracted for periodontal and orthodontic reasons, were selected and allocated to 7 groups: (A) control, (B) conventional bleaching, (C) LED-activated bleaching, (D) KTP laser-activated bleaching, (E) diode laser-activated bleaching, (F) Nd:YAG laser-activated bleaching, and (G) CO2 laser-activated bleaching. Colorimetric evaluation was carried out before and after treatment using a spectrophotoradiometer. Data were analyzed by one- and two-way analysis of variance (ANOVA) as well as multiple comparison methods. Results: All bleaching procedures were effective in reducing the yellowness index. However, KTP laser-activated bleaching was significantly more effective than the other techniques at the 95% confidence level. The CO2 laser-activated method also outperformed groups E, F, and G, whereas conventional bleaching without light activation was not effective at all, yielding results similar to those of the control group. Furthermore, groups E and G produced almost the same decrease in the yellowness index. Conclusion: All bleaching techniques were effective; however, KTP laser-activated bleaching was significantly more efficient, closely followed by the CO2 laser-activated technique.

  19. STS-96 FD Highlights and Crew Activities Report: Flight Day 06

    NASA Technical Reports Server (NTRS)

    1999-01-01

    On this sixth day of the STS-96 Discovery mission, the flight crew, Commander Kent V. Rominger, Pilot Rick D. Husband, and Mission Specialists Ellen Ochoa, Tamara E. Jernigan, Daniel T. Barry, Julie Payette, and Valery Ivanovich Tokarev are seen performing logistics transfer activities within the Discovery/International Space Station orbiting complex. Ochoa, Jernigan, Husband and Barry devote a significant part of their day to the transfer of bags of different sizes and shapes from the SPACEHAB module in Discovery's cargo bay to resting places inside the International Space Station. Payette and Tokarev complete the maintenance on the storage batteries. Barry and Tokarev complete installation of the remaining sound mufflers over the fans in Zarya. Barry then measures the sound levels at different positions inside the module. Rominger and Tokarev conduct a news conference with Russian reporters from the Mission Control Center in Moscow.

  20. Quantification of sound instability in embouchure tremor based on the time-varying fundamental frequency.

    PubMed

    Lee, André; Voget, Jakob; Furuya, Shinichi; Morise, Masanori; Altenmüller, Eckart

    2016-05-01

    Task-specific tremor in musicians is an involuntary oscillating muscular activity, mostly of the hand or the embouchure, which predominantly occurs while playing the instrument. In contrast to arm or hand tremors, which have been examined and objectified based on movement kinematics and muscular activity, embouchure tremor has not yet been investigated. To quantify and describe embouchure tremor, we analysed sound production and investigated the fluctuation of the time-varying fundamental frequency of sustained notes. A comparison between patients with embouchure tremor and healthy controls showed a significantly higher fluctuation of the fundamental frequency for the patients at the high pitch, with a tremor frequency range between 3 and 8 Hz. The present findings provide further information about a scarcely described movement disorder and further evaluate a new quantification method for embouchure tremor, which has recently been established for embouchure dystonia.
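    The record does not reproduce the authors' exact fluctuation measure. A generic way to quantify tremor-band instability of an extracted f0 contour is the fraction of the contour's variance that falls in the 3-8 Hz band; the sketch below assumes an already-extracted contour (all names, rates, and parameters are illustrative):

```python
import numpy as np

def tremor_band_fluctuation(f0, fs_contour=100.0, band=(3.0, 8.0)):
    """Fraction of an f0 contour's variance lying in the tremor band.
    f0 : fundamental-frequency contour in Hz, sampled at fs_contour (Hz)."""
    x = f0 - np.mean(f0)                       # remove the sustained pitch
    spec_pow = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs_contour)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spec_pow[in_band].sum() / spec_pow[1:].sum()

# A synthetic sustained 440 Hz note with a 5 Hz, +/-3 Hz wobble
# scores near 1; a 1 Hz drift (outside 3-8 Hz) scores near 0.
t = np.arange(0, 2.0, 0.01)                    # 2 s contour at 100 Hz
ratio = tremor_band_fluctuation(440 + 3 * np.sin(2 * np.pi * 5 * t))
```
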

  1. Orofacial muscular activity and related skin movement during the preparatory and sustained phases of tone production on the French horn.

    PubMed

    Hirano, Takeshi; Kudo, Kazutoshi; Ohtsuki, Tatsuyuki; Kinoshita, Hiroshi

    2013-07-01

    This study investigated activity of the embouchure-related orofacial muscles during pre- and postattack phases of sound production by 10 trained French-horn players. Surface electromyograms (EMG) from five selected facial muscles, and related facial skin kinematics, were examined in relation to the pitch and intensity of the tone produced. No difference in EMGs and facial kinematics between the two phases was found, indicating the importance of appropriate formation of the preattack embouchure. EMGs in all muscles during the postattack phase increased linearly with an increase in pitch, and they also increased with tone intensity without interacting with the pitch effect. Orofacial skin movement remained constant across all pitches and intensities except for lateral retraction of the lips during high-pitch tone production. Contraction of the orofacial muscles is fundamentally isometric, whereby tension on the lips and the cheeks is regulated for flexible control of sound parameters.

  2. A Sound Therapy-Based Intervention to Expand the Auditory Dynamic Range for Loudness among Persons with Sensorineural Hearing Losses: A Randomized Placebo-Controlled Clinical Trial

    PubMed Central

    Formby, Craig; Hawley, Monica L.; Sherlock, LaGuinn P.; Gold, Susan; Payne, JoAnne; Brooks, Rebecca; Parton, Jason M.; Juneau, Roger; Desporte, Edward J.; Siegle, Gregory R.

    2015-01-01

    The primary aim of this research was to evaluate the validity, efficacy, and generalization of principles underlying a sound therapy–based treatment for promoting expansion of the auditory dynamic range (DR) for loudness. The basic sound therapy principles, originally devised for treatment of hyperacusis among patients with tinnitus, were evaluated in this study in a target sample of unsuccessfully fit and/or problematic prospective hearing aid users with diminished DRs (owing to their elevated audiometric thresholds and reduced sound tolerance). Secondary aims included: (1) delineation of the treatment contributions from the counseling and sound therapy components to the full-treatment protocol and, in turn, the isolated treatment effects from each of these individual components to intervention success; and (2) characterization of the respective dynamics for full, partial, and control treatments. Thirty-six participants with bilateral sensorineural hearing losses and reduced DRs, which affected their actual or perceived ability to use hearing aids, were enrolled in and completed a placebo-controlled (for sound therapy) randomized clinical trial. The 2 × 2 factorial trial design was implemented with or without various assignments of counseling and sound therapy. Specifically, participants were assigned randomly to one of four treatment groups (nine participants per group), including: (1) group 1—full treatment achieved with scripted counseling plus sound therapy implemented with binaural sound generators; (2) group 2—partial treatment achieved with counseling and placebo sound generators (PSGs); (3) group 3—partial treatment achieved with binaural sound generators alone; and (4) group 4—a neutral control treatment implemented with the PSGs alone. Repeated measurements of categorical loudness judgments served as the primary outcome measure. 
The full-treatment categorical-loudness judgments for group 1, measured at treatment termination, were significantly greater than the corresponding pretreatment judgments measured at baseline at 500, 2,000, and 4,000 Hz. Moreover, increases in their “uncomfortably loud” judgments (∼12 dB over the range from 500 to 4,000 Hz) were superior to those measured for either of the partial-treatment groups 2 and 3 or for control group 4. Efficacy, assessed by treatment-related criterion increases ≥ 10 dB for judgments of uncomfortable loudness, was superior for full treatment (82% efficacy) compared with that for either of the partial treatments (25% and 40% for counseling combined with the placebo sound therapy and sound therapy alone, respectively) or for the control treatment (50%). The majority of the group 1 participants achieved their criterion improvements within 3 months of beginning treatment. The treatment effect from sound therapy was much greater than that for counseling, which was statistically indistinguishable in most of our analyses from the control treatment. The basic principles underlying the full-treatment protocol are valid and have general applicability for expanding the DR among individuals with sensorineural hearing losses, who may often report aided loudness problems. The positive full-treatment effects were superior to those achieved for either counseling or sound therapy in virtual or actual isolation, respectively; however, the delivery of both components in the full-treatment approach was essential for an optimum treatment outcome. PMID:27516711

  3. A Sound Therapy-Based Intervention to Expand the Auditory Dynamic Range for Loudness among Persons with Sensorineural Hearing Losses: A Randomized Placebo-Controlled Clinical Trial.

    PubMed

    Formby, Craig; Hawley, Monica L; Sherlock, LaGuinn P; Gold, Susan; Payne, JoAnne; Brooks, Rebecca; Parton, Jason M; Juneau, Roger; Desporte, Edward J; Siegle, Gregory R

    2015-05-01

    The primary aim of this research was to evaluate the validity, efficacy, and generalization of principles underlying a sound therapy-based treatment for promoting expansion of the auditory dynamic range (DR) for loudness. The basic sound therapy principles, originally devised for treatment of hyperacusis among patients with tinnitus, were evaluated in this study in a target sample of unsuccessfully fit and/or problematic prospective hearing aid users with diminished DRs (owing to their elevated audiometric thresholds and reduced sound tolerance). Secondary aims included: (1) delineation of the treatment contributions from the counseling and sound therapy components to the full-treatment protocol and, in turn, the isolated treatment effects from each of these individual components to intervention success; and (2) characterization of the respective dynamics for full, partial, and control treatments. Thirty-six participants with bilateral sensorineural hearing losses and reduced DRs, which affected their actual or perceived ability to use hearing aids, were enrolled in and completed a placebo-controlled (for sound therapy) randomized clinical trial. The 2 × 2 factorial trial design was implemented with or without various assignments of counseling and sound therapy. Specifically, participants were assigned randomly to one of four treatment groups (nine participants per group), including: (1) group 1-full treatment achieved with scripted counseling plus sound therapy implemented with binaural sound generators; (2) group 2-partial treatment achieved with counseling and placebo sound generators (PSGs); (3) group 3-partial treatment achieved with binaural sound generators alone; and (4) group 4-a neutral control treatment implemented with the PSGs alone. Repeated measurements of categorical loudness judgments served as the primary outcome measure. 
The full-treatment categorical-loudness judgments for group 1, measured at treatment termination, were significantly greater than the corresponding pretreatment judgments measured at baseline at 500, 2,000, and 4,000 Hz. Moreover, increases in their "uncomfortably loud" judgments (∼12 dB over the range from 500 to 4,000 Hz) were superior to those measured for either of the partial-treatment groups 2 and 3 or for control group 4. Efficacy, assessed by treatment-related criterion increases ≥ 10 dB for judgments of uncomfortable loudness, was superior for full treatment (82% efficacy) compared with that for either of the partial treatments (25% and 40% for counseling combined with the placebo sound therapy and sound therapy alone, respectively) or for the control treatment (50%). The majority of the group 1 participants achieved their criterion improvements within 3 months of beginning treatment. The treatment effect from sound therapy was much greater than that for counseling, which was statistically indistinguishable in most of our analyses from the control treatment. The basic principles underlying the full-treatment protocol are valid and have general applicability for expanding the DR among individuals with sensorineural hearing losses, who may often report aided loudness problems. The positive full-treatment effects were superior to those achieved for either counseling or sound therapy in virtual or actual isolation, respectively; however, the delivery of both components in the full-treatment approach was essential for an optimum treatment outcome.

  4. Processes controlling the remobilization of surficial sediment and formation of sedimentary furrows in north-central Long Island Sound

    USGS Publications Warehouse

    Poppe, L.J.; Knebel, H.J.; Lewis, R.S.; DiGiacomo-Cohen, M. L.

    2002-01-01

    Sidescan sonar, bathymetric, subbottom, and bottom-photographic surveys and sediment sampling have improved our understanding of the processes that control the complex distribution of bottom sediments and benthic habitats in Long Island Sound. Although the deeper (>20 m) waters of the central Sound are long-term depositional areas characterized by relatively weak bottom-current regimes, our data reveal the localized presence of sedimentary furrows. These erosional bedforms occur in fine-grained cohesive sediments (silts and clayey silts), trend east-northeast, are irregularly spaced, and have indistinct troughs with gently sloping walls. The average width and relief of the furrows is 9.2 m and 0.4 m, respectively. The furrows average about 206 m long, but range in length from 30 m to over 1,300 m. Longitudinal ripples, bioturbation, and nutclam shell debris are common within the furrows. Although many of the furrows appear to end by gradually narrowing, some furrows show a "tuning fork" joining pattern. Most of these junctions open toward the east, indicating net westward sediment transport. However, a few junctions open toward the west suggesting that oscillating tidal currents are the dominant mechanism controlling furrow formation. Sedimentary furrows and longitudinal ripples typically form in environments which have recurring, directionally stable, and occasionally strong currents. The elongate geometry and regional bathymetry of Long Island Sound combine to constrain the dominant tidal and storm currents to east-west flow directions and permit the development of these bedforms. Through resuspension due to biological activity and the subsequent development of erosional bedforms, fine-grained cohesive sediment can be remobilized and made available for transport farther westward into the estuary.

  5. Processes controlling the remobilization of surficial sediment and formation of sedimentary furrows in North-Central Long Island Sound

    USGS Publications Warehouse

    Poppe, L.J.; Knebel, H.J.; Lewis, R.S.; DiGiacomo-Cohen, M. L.

    2002-01-01

    Sidescan sonar, bathymetric, subbottom, and bottom-photographic surveys and sediment sampling have improved our understanding of the processes that control the complex distribution of bottom sediments and benthic habitats in Long Island Sound. Although the deeper (>20 m) waters of the central Sound are long-term depositional areas characterized by relatively weak bottom-current regimes, our data reveal the localized presence of sedimentary furrows. These erosional bedforms occur in fine-grained cohesive sediments (silts and clayey silts), trend east-northeast, are irregularly spaced, and have indistinct troughs with gently sloping walls. The average width and relief of the furrows is 9.2 m and 0.4 m, respectively. The furrows average about 206 m long, but range in length from 30 m to over 1,300 m. Longitudinal ripples, bioturbation, and nutclam shell debris are common within the furrows. Although many of the furrows appear to end by gradually narrowing, some furrows show a "tuning fork" joining pattern. Most of these junctions open toward the east, indicating net westward sediment transport. However, a few junctions open toward the west suggesting that oscillating tidal currents are the dominant mechanism controlling furrow formation. Sedimentary furrows and longitudinal ripples typically form in environments which have recurring, directionally stable, and occasionally strong currents. The elongate geometry and regional bathymetry of Long Island Sound combine to constrain the dominant tidal and storm currents to east-west flow directions and permit the development of these bedforms. Through resuspension due to biological activity and the subsequent development of erosional bedforms, fine-grained cohesive sediment can be remobilized and made available for transport farther westward into the estuary.

  6. Phytoremediation: An Environmentally Sound Technology for Pollution Prevention, Control and Remediation in Developing Countries

    ERIC Educational Resources Information Center

    Erakhrumen, Andrew Agbontalor

    2007-01-01

    The problem of environmental pollution has assumed an unprecedented proportion in many parts of the world especially in Nigeria and its Niger-Delta region in particular. This region is bedeviled with this problem perhaps owing to interplay of demographic and socio-economic forces coupled with the various activities that revolve round the…

  7. Human brain detects short-time nonlinear predictability in the temporal fine structure of deterministic chaotic sounds

    NASA Astrophysics Data System (ADS)

    Itoh, Kosuke; Nakada, Tsutomu

    2013-04-01

    Deterministic nonlinear dynamical processes are ubiquitous in nature. Chaotic sounds generated by such processes may appear irregular and random in waveform, but these sounds are mathematically distinguished from random stochastic sounds in that they contain deterministic short-time predictability in their temporal fine structures. We show that the human brain distinguishes deterministic chaotic sounds from spectrally matched stochastic sounds in neural processing and perception. Deterministic chaotic sounds, even without being attended to, elicited greater cerebral cortical responses than the surrogate control sounds at a latency of about 150 ms after sound onset. Listeners also clearly discriminated these sounds in perception. The results support the hypothesis that the human auditory system is sensitive to the subtle short-time predictability embedded in the temporal fine structure of sounds.
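    The chaotic-versus-surrogate contrast described above can be reproduced in a few lines: a logistic-map signal is exactly predictable one step ahead, while a phase-randomized surrogate with the identical magnitude spectrum is not. A minimal sketch (parameters illustrative; the study's actual stimuli are not specified in this record):

```python
import numpy as np

def logistic_map_signal(n, r=3.99, x0=0.5):
    """Deterministic chaotic sequence x[k+1] = r * x[k] * (1 - x[k])."""
    x = np.empty(n)
    x[0] = x0
    for k in range(n - 1):
        x[k + 1] = r * x[k] * (1 - x[k])
    return x

def phase_randomized_surrogate(x, seed=0):
    """Spectrally matched stochastic control: keep the magnitude
    spectrum of x, randomize the phases."""
    rng = np.random.default_rng(seed)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(spec))
    phases[0] = 0.0   # keep the DC bin real
    phases[-1] = 0.0  # keep the Nyquist bin real (even-length signals)
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=len(x))

x = logistic_map_signal(4096)
xc = x - x.mean()
s = phase_randomized_surrogate(xc)
```

    The two signals have identical power spectra, yet only `x` satisfies a deterministic one-step map, which is exactly the short-time predictability the study's listeners were sensitive to.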

  8. Numerical investigation of sound transmission through double wall cylinders with respect to active noise control

    NASA Astrophysics Data System (ADS)

    Coats, T. J.; Silcox, R. J.; Lester, H. C.

    Market pressure for more fuel-efficient air travel has led to increased use of turboprop and higher-bypass turbofan engines. The low-frequency components of propeller, jet, and boundary layer noise are difficult to attenuate with conventional passive techniques. Weight and geometric restrictions for sound-absorbing materials limit the amount and type of treatment that may be applied. An active noise control (ANC) method is proving to be an attractive alternative. The approach taken in this paper uses a numerical finite/boundary element method (FEM/BEM) that may be easily adapted to arbitrary geometries. A double-walled cylinder is modeled using commercially available software. The outer shell is modeled as an aluminum cylinder, similar to aircraft skins. The inner shell is modeled as a composite material representative of a lightweight, stiff trim panel. Two different inner shell materials are used: the first is representative of current trim structure, the second a much stiffer composite. The primary source is generated by an exterior acoustic monopole. Control fields are generated using normal force inputs to the inner cylindrical shell. A linear least mean square (LMS) algorithm is used to determine the amplitudes of the control forces that minimize the interior acoustic field. Coupling of acoustic and structural modes and noise reductions are discussed for each of the inner shell materials.

  9. Numerical investigation of sound transmission through double wall cylinders with respect to active noise control

    NASA Technical Reports Server (NTRS)

    Coats, T. J.; Silcox, R. J.; Lester, H. C.

    1993-01-01

    Market pressure for more fuel-efficient air travel has led to increased use of turboprop and higher-bypass turbofan engines. The low-frequency components of propeller, jet, and boundary layer noise are difficult to attenuate with conventional passive techniques. Weight and geometric restrictions for sound-absorbing materials limit the amount and type of treatment that may be applied. An active noise control (ANC) method is proving to be an attractive alternative. The approach taken in this paper uses a numerical finite/boundary element method (FEM/BEM) that may be easily adapted to arbitrary geometries. A double-walled cylinder is modeled using commercially available software. The outer shell is modeled as an aluminum cylinder, similar to aircraft skins. The inner shell is modeled as a composite material representative of a lightweight, stiff trim panel. Two different inner shell materials are used: the first is representative of current trim structure, the second a much stiffer composite. The primary source is generated by an exterior acoustic monopole. Control fields are generated using normal force inputs to the inner cylindrical shell. A linear least mean square (LMS) algorithm is used to determine the amplitudes of the control forces that minimize the interior acoustic field. Coupling of acoustic and structural modes and noise reductions are discussed for each of the inner shell materials.
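    The record does not reproduce the paper's multichannel formulation, but the core of any LMS scheme is the same gradient-descent weight update, sketched here for a single-channel adaptive canceller of a tonal disturbance (all signals and parameters are illustrative, not the paper's):

```python
import numpy as np

def lms_cancel(reference, primary, n_taps=16, mu=0.01):
    """Minimal LMS adaptive canceller: adapt FIR weights w so the
    filtered reference approximates the primary disturbance; the
    residual e = primary - y is the error signal being minimized."""
    w = np.zeros(n_taps)
    buf = np.zeros(n_taps)               # most recent reference samples
    e = np.zeros(len(primary))
    for k in range(len(primary)):
        buf = np.roll(buf, 1)
        buf[0] = reference[k]
        y = w @ buf                      # control output
        e[k] = primary[k] - y            # residual at the sensor
        w += 2 * mu * e[k] * buf         # gradient-descent weight update
    return e, w

fs = 8000
t = np.arange(4000) / fs
ref = np.sin(2 * np.pi * 120 * t)               # reference tone
prim = 0.8 * np.sin(2 * np.pi * 120 * t + 0.7)  # disturbance at sensor
e, w = lms_cancel(ref, prim)
```

    After convergence the residual power falls far below the disturbance power, which is the same criterion the paper applies (in multichannel form) to the interior acoustic field.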

  10. Ambient Noise in Emergency Rooms and Its Health Hazards

    PubMed Central

    Filus, Walderes; Lacerda, Adriana Bender Moreira de; Albizu, Evelyn

    2014-01-01

    Introduction The occupational risk due to high levels of noise in the hospital environment has been recognized, and the National Agency of Sanitary Surveillance of the Ministry of Health recommends evaluation and control of noise in hospital areas. Objectives To assess the sound environment in the emergency ward of a general trauma reference hospital in the city of Curitiba, Parana State, Brazil. Methods In this descriptive study, noise levels were assessed on mornings, afternoons, and evenings using a calibrated integrating Bruel & Kjaer (Denmark) sound level meter, type 2230. Ten indoor points in the emergency ward were assessed; the helicopter as well as several available pieces of equipment in the ward were assessed individually. Results Noise levels ranged from 56.6 to 68.8 dBA. The afternoon period was the noisiest. The helicopter at 119 dBA and the cast saw at 90 dBA were the noisiest equipment, and the lowest noise level found was that of the activated oximeter at 61.0 dBA. Conclusion In all assessed points, noise levels were above the comfort levels recommended by the Brazilian Association of Technical Standards (1987), which may harm users' and professionals' health as well as influence professional performance in the emergency ward. Sound pressure levels of the helicopter and cast saw reach hearing-hazard levels, requiring professionals to use individual protection equipment, and point to the need for creation and implementation of effective measures to control noise levels in emergency wards. PMID:26157493

  11. Geologic interpretation and multibeam bathymetry of the sea floor in the vicinity of the Race, eastern Long Island Sound

    USGS Publications Warehouse

    Poppe, L.J.; DiGiacomo-Cohen, M. L.; Doran, E.F.; Smith, S.M.; Stewart, H.F.; Forfinski, N.A.

    2007-01-01

    Digital terrain models (DTMs) produced from multibeam bathymetric data provide valuable base maps for marine geological interpretations (Todd and others, 1999; Mosher and Thomson, 2002; ten Brink and others, 2004; Poppe and others, 2006a, b, c, d). These maps help define the geological variability of the sea floor (one of the primary controls of benthic habitat diversity), improve our understanding of the processes that control the distribution and transport of bottom sediments and the distribution of benthic habitats and associated infaunal community structures, and provide a detailed framework for future research, monitoring, and management activities. The bathymetric survey interpreted herein (National Oceanic and Atmospheric Administration (NOAA) survey H11250) covers roughly 94 km² of sea floor in an area where a depression along the Orient Point-Fishers Island segment of the Harbor Hill-Roanoke Point-Charlestown Moraine forms the Race, the eastern opening to Long Island Sound. The Race also divides easternmost Long Island Sound from northwestern Block Island Sound (fig. 1). This bathymetry has been examined in relation to seismic reflection data collected concurrently, as well as archived seismic profiles acquired as part of a long-standing geologic mapping partnership between the State of Connecticut and the U.S. Geological Survey (USGS). The objective of this work was to use these acoustic data sets to interpret geomorphological attributes of the sea floor, and to use these interpretations to better understand the Quaternary geologic history and modern sedimentary processes.

  12. Experimental active control of sound in the ATR 42

    NASA Astrophysics Data System (ADS)

    Paonessa, A.; Sollo, A.; Paxton, M.; Purver, M.; Ross, C. F.

    Passenger comfort is becoming an increasingly important issue in the market for regional turboprop aircraft, and also for future high-speed propeller-driven aircraft. In these aircraft, the main contribution to passenger annoyance is propeller noise at the blade passing frequency (BPF) and its harmonics. In the recent past, detailed theoretical and experimental work has been done by Alenia Aeronautica to reduce the noise level in the ATR aircraft passenger cabin by means of conventional passive treatments: synchrophasing of propellers, dynamic vibration absorbers, structural reinforcements, and damping materials. The application of these treatments has been introduced on production aircraft with a remarkable improvement in noise comfort but with a significant weight increase. For these reasons, a major technology step is required to reach passenger comfort comparable to that of jet aircraft with minimum weight increase. The most suitable approach to this problem is active noise control, which consists of generating an anti-sound field in the passenger cabin to reduce the noise at the propeller BPF and its harmonics. The attenuation is achieved by means of a control system which acquires information about the cabin noise distribution and the propeller speed during flight and simultaneously generates the signals to drive the speakers.

  13. A Corticothalamic Circuit Model for Sound Identification in Complex Scenes

    PubMed Central

    Otazu, Gonzalo H.; Leibold, Christian

    2011-01-01

    The identification of the sound sources present in the environment is essential for the survival of many animals. However, these sounds are not presented in isolation, as natural scenes consist of a superposition of sounds originating from multiple sources. The identification of a source under these circumstances is a complex computational problem that is readily solved by most animals. We present a model of the thalamocortical circuit that performs level-invariant recognition of auditory objects in complex auditory scenes. The circuit identifies the objects present from a large dictionary of possible elements and operates reliably for real sound signals with multiple concurrently active sources. The key model assumption is that the activities of some cortical neurons encode the difference between the observed signal and an internal estimate. Reanalysis of awake auditory cortex recordings revealed neurons with patterns of activity corresponding to such an error signal. PMID:21931668

  14. Light-induced vibration in the hearing organ

    PubMed Central

    Ren, Tianying; He, Wenxuan; Li, Yizeng; Grosh, Karl; Fridberger, Anders

    2014-01-01

    The exceptional sensitivity of mammalian hearing organs is attributed to an active process, where force produced by sensory cells boosts sound-induced vibrations, making soft sounds audible. This process is thought to be local, with each section of the hearing organ capable of amplifying sound-evoked movement, and nearly instantaneous, since amplification can work for sounds at frequencies up to 100 kHz in some species. To test these fundamental precepts, we developed a method for focally stimulating the living hearing organ with light. Light pulses caused intense and highly damped mechanical responses followed by traveling waves that developed with considerable delay. The delayed response was identical to movements evoked by click-like sounds. This shows that the active process is neither local nor instantaneous, but requires mechanical waves traveling from the cochlear base toward its apex. A physiologically based mathematical model shows that such waves engage the active process, enhancing hearing sensitivity. PMID:25087606

  15. The emotional symbolism of two English e-sounds: /i/ as in "cheap" is pleasant and /I/ as in "chip" active.

    PubMed

    Whissell, Cynthia

    2003-02-01

    This article aligns the symbolism of the long (/i/) and short (/I/) e sounds in English with the two dimensions of emotional space: Pleasantness and Activation. On the basis of this alignment, the four quadrants of emotional space are labelled Cheerful (high /i/, high /I/), Cheerless (low /i/, low /I/), Tough (low /i/, high /I/), and Tender (high /i/, low /I/). In four phases, data from over 50 samples (mainly poetry, song lyrics, and names) were plotted and compared in terms of their use of the two e sounds. Significant and meaningful differences among samples were discovered in all phases. The placement of samples in quadrants was additionally informative. Samples including many long e sounds (/i/) tended to be more Pleasant, and those including many short e sounds (/I/) tended to be more Active.

  16. The contribution of sound intensity in vocal emotion perception: behavioral and electrophysiological evidence.

    PubMed

    Chen, Xuhai; Yang, Jianfeng; Gan, Shuzhen; Yang, Yufang

    2012-01-01

    Although its role is frequently stressed in the acoustic profile of vocal emotion, sound intensity is often treated as a mere control parameter in neurocognitive studies of vocal emotion, leaving its role and neural underpinnings unclear. To investigate these issues, we asked participants to rate the anger level of neutral and angry prosodies before and after sound intensity modification in Experiment 1, and recorded the electroencephalogram (EEG) for mismatching emotional prosodies with and without sound intensity modification and for matching emotional prosodies while participants performed emotional feature or sound intensity congruity judgments in Experiment 2. Sound intensity modification had a significant effect on the rated anger level of angry prosodies, but not of neutral ones. Moreover, mismatching emotional prosodies, relative to matching ones, induced an enhanced N2/P3 complex and theta band synchronization irrespective of sound intensity modification and task demands. However, mismatching emotional prosodies with reduced sound intensity showed prolonged peak latency and decreased amplitude in the N2/P3 complex and weaker theta band synchronization. These findings suggest that although sound intensity cannot categorically alter the emotionality conveyed in emotional prosodies, it contributes quantitatively to emotional significance, implying that sound intensity should not simply be taken as a control parameter and that its unique role needs to be specified in vocal emotion studies.

  17. The Contribution of Sound Intensity in Vocal Emotion Perception: Behavioral and Electrophysiological Evidence

    PubMed Central

    Chen, Xuhai; Yang, Jianfeng; Gan, Shuzhen; Yang, Yufang

    2012-01-01

    Although its role is frequently stressed in the acoustic profile of vocal emotion, sound intensity is often treated as a mere control parameter in neurocognitive studies of vocal emotion, leaving its role and neural underpinnings unclear. To investigate these issues, we asked participants to rate the anger level of neutral and angry prosodies before and after sound intensity modification in Experiment 1, and recorded the electroencephalogram (EEG) for mismatching emotional prosodies with and without sound intensity modification and for matching emotional prosodies while participants performed emotional feature or sound intensity congruity judgments in Experiment 2. Sound intensity modification had a significant effect on the rated anger level of angry prosodies, but not of neutral ones. Moreover, mismatching emotional prosodies, relative to matching ones, induced an enhanced N2/P3 complex and theta band synchronization irrespective of sound intensity modification and task demands. However, mismatching emotional prosodies with reduced sound intensity showed prolonged peak latency and decreased amplitude in the N2/P3 complex and weaker theta band synchronization. These findings suggest that although sound intensity cannot categorically alter the emotionality conveyed in emotional prosodies, it contributes quantitatively to emotional significance, implying that sound intensity should not simply be taken as a control parameter and that its unique role needs to be specified in vocal emotion studies. PMID:22291928

  18. Multichannel sound reinforcement systems at work in a learning environment

    NASA Astrophysics Data System (ADS)

    Malek, John; Campbell, Colin

    2003-04-01

    Many people have experienced the entertaining benefits of a surround sound system, either in their own home or in a movie theater, but another application exists for multichannel sound that has for the most part gone unused. This is the application of multichannel sound systems to the learning environment. By incorporating a 7.1 surround processor and a touch panel interface programmable control system, the main lecture hall at the University of Michigan Taubman College of Architecture and Urban Planning has been converted from an ordinary lecture hall to a working audiovisual laboratory. The multichannel sound system is used in a wide variety of experiments, including exposure to sounds to test listeners' aural perception of the tonal characteristics of varying pitch, reverberation, speech transmission index, and sound-pressure level. The touch panel's custom interface allows a variety of user groups to control different parts of the AV system and provides preset capability that allows for numerous system configurations.

  19. Multiple sound source localization using gammatone auditory filtering and direct sound componence detection

    NASA Astrophysics Data System (ADS)

    Chen, Huaiyu; Cao, Li

    2017-06-01

    In order to research multiple sound source localization under room reverberation and background noise, we analyze the shortcomings of traditional broadband MUSIC and of broadband MUSIC based on ordinary auditory filtering, and then propose a new broadband MUSIC algorithm that combines gammatone auditory filtering with controlled frequency-component selection and detection of the ascending segment of the direct sound component. The proposed algorithm restricts processing to frequency components within the band of interest at the multichannel bandpass-filtering stage. It also detects the direct sound component of each source to suppress room reverberation interference; this is fast to compute and avoids more complex de-reverberation processing. In addition, the pseudo-spectra of the different frequency channels are weighted by their maximum amplitudes for every speech frame. The proposed method performs well in both simulations and experiments in a real reverberant room. Results for dynamic multiple sound source localization indicate that the proposed algorithm yields a smaller average absolute azimuth error and a higher angular resolution in the histogram of estimates.
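    The core of such a method, narrowband MUSIC pseudo-spectra computed per frequency channel and combined with amplitude weighting, can be sketched as follows. This is a minimal illustration assuming a uniform linear array and plain sinusoidal channels rather than the paper's gammatone filterbank and direct-sound detection; all parameters are illustrative.

```python
import numpy as np

# Sketch of amplitude-weighted broadband MUSIC: compute a narrowband MUSIC
# pseudo-spectrum per frequency channel, weight each by its channel
# amplitude, and sum. Uniform linear array, one source; values illustrative.
rng = np.random.default_rng(1)
c, d, M = 343.0, 0.04, 6                      # sound speed (m/s), spacing (m), sensors
freqs = np.array([1000.0, 1500.0, 2000.0])    # channel centre frequencies (Hz)
true_deg = 40.0
angles = np.deg2rad(np.arange(0, 181))        # 1-degree search grid

def steering(f, theta):
    """ULA steering vectors, M x len(theta)."""
    k = 2 * np.pi * f / c
    return np.exp(-1j * k * d * np.arange(M)[:, None] * np.cos(theta))

combined = np.zeros(angles.size)
for f, amp in zip(freqs, [1.0, 0.6, 0.3]):    # per-channel amplitudes as weights
    a_true = steering(f, np.array([np.deg2rad(true_deg)]))
    s = amp * (rng.normal(size=(1, 200)) + 1j * rng.normal(size=(1, 200)))
    noise = 0.01 * (rng.normal(size=(M, 200)) + 1j * rng.normal(size=(M, 200)))
    x = a_true @ s + noise
    R = x @ x.conj().T / x.shape[1]           # sample covariance
    w, v = np.linalg.eigh(R)                  # ascending eigenvalues
    En = v[:, :-1]                            # noise subspace (one source assumed)
    A = steering(f, angles)
    P = 1.0 / np.linalg.norm(En.conj().T @ A, axis=0) ** 2
    combined += amp * P / P.max()             # amplitude-weighted pseudo-spectrum

est_deg = np.degrees(angles[np.argmax(combined)])
print(est_deg)
```

    The peak of the combined pseudo-spectrum should fall at the true azimuth; weighting by channel amplitude lets the strongest channels dominate the estimate, as in the frame-wise weighting described above.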

  20. Effects of Soundscapes on Perceived Crowding and Encounter Norms

    NASA Astrophysics Data System (ADS)

    Kim, Sang-Oh; Shelby, Bo

    2011-07-01

    Soundscapes in recreation settings are becoming an important issue, but there are few studies of the effects of sounds on recreation experiences, especially crowding perceptions and encounter norms. This study compared effects of six types of sounds (an airplane, a truck engine, children playing, birds, water, and a control) on perceived crowding (PC) and encounter norms for hikers. Data were collected from 47 college students through lab experiments using simulated images, with moving hikers inserted in the original photo taken in the Jungmeori area of Mudeungsan Provincial Park in Korea. Overall, the motor-made sounds of the airplane and truck engine increased PC and decreased acceptability ratings, and the natural sounds of birds and water decreased PC and increased acceptability ratings. Ratings of the sound of children playing were similar to those in the control (i.e., no sound). In addition, as numbers of hikers increased, the overall effects of sounds decreased, and there were few significant differences in PC or acceptability ratings at the highest encounter levels. Theoretical and methodological implications are discussed.

  1. Effects of soundscapes on perceived crowding and encounter norms.

    PubMed

    Kim, Sang-Oh; Shelby, Bo

    2011-07-01

    Soundscapes in recreation settings are becoming an important issue, but there are few studies of the effects of sounds on recreation experiences, especially crowding perceptions and encounter norms. This study compared effects of six types of sounds (an airplane, a truck engine, children playing, birds, water, and a control) on perceived crowding (PC) and encounter norms for hikers. Data were collected from 47 college students through lab experiments using simulated images, with moving hikers inserted in the original photo taken in the Jungmeori area of Mudeungsan Provincial Park in Korea. Overall, the motor-made sounds of the airplane and truck engine increased PC and decreased acceptability ratings, and the natural sounds of birds and water decreased PC and increased acceptability ratings. Ratings of the sound of children playing were similar to those in the control (i.e., no sound). In addition, as numbers of hikers increased, the overall effects of sounds decreased, and there were few significant differences in PC or acceptability ratings at the highest encounter levels. Theoretical and methodological implications are discussed.

  2. The effects of acoustical refurbishment of classrooms on teachers' perceived noise exposure and noise-related health symptoms.

    PubMed

    Kristiansen, Jesper; Lund, Søren Peter; Persson, Roger; Challi, Rasmus; Lindskov, Janni Moon; Nielsen, Per Møberg; Larsen, Per Knudgaard; Toftum, Jørn

    2016-02-01

    To investigate whether acoustical refurbishment of classrooms for elementary and lower secondary grade pupils affected teachers' perceived noise exposure during teaching and noise-related health symptoms. Two schools (A and B) with a total of 102 teachers were subjected to an acoustical intervention. Accordingly, 36 classrooms (20 and 16 in school A and school B, respectively) were acoustically refurbished and 31 classrooms (16 and 15, respectively) were not changed. Thirteen classrooms in school A were interim "sham" refurbished. Reverberation time (RT) and activity sound levels were measured before and after refurbishment. Data on perceived noise exposure, disturbance attributed to different noise sources, voice symptoms, and fatigue after work were collected over a year in a total of six consecutive questionnaires. Refurbished classrooms were associated with lower perceived noise exposure and lower ratings of disturbance attributed to noise from equipment in the class compared with unrefurbished classrooms. No associations between the classroom refurbishment and health symptoms were observed. Before acoustical refurbishment, the mean classroom reverberation time was 0.68 s (school A), 0.57 s (school B), and 0.55 s in sham-refurbished classrooms. After refurbishment, the RT was approximately 0.4 s in both schools. Activity sound level measurements confirmed that the intervention had reduced equivalent sound levels during lessons by approximately 2 dB(A) in both schools. The acoustical refurbishment was thus associated with reductions in classroom reverberation time and activity sound levels in both schools, as well as with a reduction in the teachers' perceived noise exposure and fewer reports of disturbance from equipment in the classroom. There was no significant effect of the refurbishment on the teachers' voice symptoms or fatigue after work.
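    For context, a reverberation time such as the 0.68 s and 0.4 s figures above is conventionally estimated from a measured room impulse response by Schroeder backward integration, with a line fit over part of the decay (e.g., the -5 to -25 dB range, T20, in the ISO 3382 family of methods) extrapolated to 60 dB. A minimal sketch with a synthetic impulse response, not data from this study:

```python
import numpy as np

# Schroeder-integration estimate of reverberation time from an impulse
# response. The impulse response here is synthetic exponentially decaying
# noise with a known RT, so the estimate can be checked.
rng = np.random.default_rng(2)
fs, rt_true = 8000, 0.6                       # sample rate (Hz), true RT (s)
t = np.arange(int(fs * 1.5)) / fs
ir = rng.normal(size=t.size) * np.exp(-6.91 * t / rt_true)  # -60 dB at t = RT

# Backward integration of squared IR -> smooth energy decay curve (dB)
edc = np.cumsum(ir[::-1] ** 2)[::-1]
edc_db = 10 * np.log10(edc / edc[0])

# Line fit between -5 dB and -25 dB (T20), extrapolated to -60 dB
i5 = np.argmax(edc_db <= -5)
i25 = np.argmax(edc_db <= -25)
slope, _ = np.polyfit(t[i5:i25], edc_db[i5:i25], 1)   # dB per second
rt_est = -60.0 / slope
print(round(rt_est, 2))
```

    The fitted decay slope recovers the RT used to generate the signal; on real classroom measurements the same procedure is applied per octave band.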

  3. The neural basis of involuntary episodic memories.

    PubMed

    Hall, Shana A; Rubin, David C; Miles, Amanda; Davis, Simon W; Wing, Erik A; Cabeza, Roberto; Berntsen, Dorthe

    2014-10-01

    Voluntary episodic memories require an intentional memory search, whereas involuntary episodic memories come to mind spontaneously without conscious effort. Cognitive neuroscience has largely focused on voluntary memory, leaving the neural mechanisms of involuntary memory largely unknown. We hypothesized that, because the main difference between voluntary and involuntary memory is the controlled retrieval processes required by the former, there would be greater frontal activity for voluntary than involuntary memories. Conversely, we predicted that other components of the episodic retrieval network would be similarly engaged in the two types of memory. During encoding, all participants heard sounds, half paired with pictures of complex scenes and half presented alone. During retrieval, paired and unpaired sounds were presented, panned to the left or to the right. Participants in the involuntary group were instructed to indicate the spatial location of the sound, whereas participants in the voluntary group were asked to additionally recall the pictures that had been paired with the sounds. All participants reported the incidence of their memories in a postscan session. Consistent with our predictions, voluntary memories elicited greater activity in dorsal frontal regions than involuntary memories, whereas other components of the retrieval network, including medial-temporal, ventral occipitotemporal, and ventral parietal regions were similarly engaged by both types of memories. These results clarify the distinct role of dorsal frontal and ventral occipitotemporal regions in predicting strategic retrieval and recalled information, respectively, and suggest that, although there are neural differences in retrieval, involuntary memories share neural components with established voluntary memory systems.

  4. The Neural Basis of Involuntary Episodic Memories

    PubMed Central

    Hall, Shana A.; Rubin, David C.; Miles, Amanda; Davis, Simon W.; Wing, Erik A.; Cabeza, Roberto; Berntsen, Dorthe

    2014-01-01

    Voluntary episodic memories require an intentional memory search, whereas involuntary episodic memories come to mind spontaneously without conscious effort. Cognitive neuroscience has largely focused on voluntary memory, leaving the neural mechanisms of involuntary memory largely unknown. We hypothesized that because the main difference between voluntary and involuntary memory is the controlled retrieval processes required by the former, there would be greater frontal activity for voluntary than involuntary memories. Conversely, we predicted that other components of the episodic retrieval network would be similarly engaged in the two types of memory. During encoding, all participants heard sounds, half paired with pictures of complex scenes and half presented alone. During retrieval, paired and unpaired sounds were presented panned to the left or to the right. Participants in the involuntary group were instructed to indicate the spatial location of the sound, whereas participants in the voluntary group were asked to additionally recall the pictures that had been paired with the sounds. All participants reported the incidence of their memories in a post-scan session. Consistent with our predictions, voluntary memories elicited greater activity in dorsal frontal regions than involuntary memories, whereas other components of the retrieval network, including medial temporal, ventral occipitotemporal, and ventral parietal regions were similarly engaged by both types of memories. These results clarify the distinct role of dorsal frontal and ventral occipitotemporal regions in predicting strategic retrieval and recalled information, respectively, and suggest that while there are neural differences in retrieval, involuntary memories share neural components with established voluntary memory systems. PMID:24702453

  5. Cortical activity patterns predict robust speech discrimination ability in noise

    PubMed Central

    Shetake, Jai A.; Wolf, Jordan T.; Cheung, Ryan J.; Engineer, Crystal T.; Ram, Satyananda K.; Kilgard, Michael P.

    2012-01-01

    The neural mechanisms that support speech discrimination in noisy conditions are poorly understood. In quiet conditions, spike timing information appears to be used in the discrimination of speech sounds. In this study, we evaluated the hypothesis that spike timing is also used to distinguish between speech sounds in noisy conditions that significantly degrade neural responses to speech sounds. We tested speech sound discrimination in rats and recorded primary auditory cortex (A1) responses to speech sounds in background noise of different intensities and spectral compositions. Our behavioral results indicate that rats, like humans, are able to accurately discriminate consonant sounds even in the presence of background noise that is as loud as the speech signal. Our neural recordings confirm that speech sounds evoke degraded but detectable responses in noise. Finally, we developed a novel neural classifier that mimics behavioral discrimination. The classifier discriminates between speech sounds by comparing the A1 spatiotemporal activity patterns evoked on single trials with the average spatiotemporal patterns evoked by known sounds. Unlike classifiers in most previous studies, this classifier is not provided with the stimulus onset time. Neural activity analyzed with the use of relative spike timing was well correlated with behavioral speech discrimination in quiet and in noise. Spike timing information integrated over longer intervals was required to accurately predict rat behavioral speech discrimination in noisy conditions. The similarity of neural and behavioral discrimination of speech in noise suggests that humans and rats may employ similar brain mechanisms to solve this problem. PMID:22098331
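    The template-matching idea behind such a classifier, assigning each single-trial spatiotemporal pattern to the known sound whose trial-averaged pattern it most resembles, can be sketched with synthetic data. The study itself used relative spike timing in rat A1; everything below is illustrative.

```python
import numpy as np

# Nearest-template classification of single-trial spatiotemporal activity
# patterns (neurons x time bins). Synthetic data, not the study's recordings.
rng = np.random.default_rng(3)
n_sounds, n_trials, n_neurons, n_bins = 4, 30, 20, 50

# Each sound evokes a characteristic mean pattern; trials are noisy copies
prototypes = rng.normal(size=(n_sounds, n_neurons, n_bins))
trials = prototypes[:, None] + 0.8 * rng.normal(
    size=(n_sounds, n_trials, n_neurons, n_bins))

# Templates: trial-averaged patterns (in practice, leave the test trial out)
templates = trials.mean(axis=1)

def classify(single_trial):
    """Return the index of the template closest in Euclidean distance."""
    dists = np.linalg.norm(templates - single_trial, axis=(1, 2))
    return int(np.argmin(dists))

correct = sum(classify(trials[s, i]) == s
              for s in range(n_sounds) for i in range(n_trials))
accuracy = correct / (n_sounds * n_trials)
print(accuracy)
```

    Because the classifier compares whole spatiotemporal patterns rather than spike counts, it exploits relative timing; restricting the time bins or adding noise to the trials degrades accuracy, mirroring the noise conditions in the study.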

  6. [Synchronous playing and acquiring of heart sounds and electrocardiogram based on LabVIEW].

    PubMed

    Dan, Chunmei; He, Wei; Zhou, Jing; Que, Xiaosheng

    2008-12-01

    This paper describes a comprehensive system that acquires heart sounds and the electrocardiogram (ECG) in parallel, synchronizes their display with the playback of the heart sounds, and thereby ties auscultation to inspection of the phonocardiogram. The hardware, built around a C8051F340 microcontroller, acquires the heart sound and ECG signals synchronously and sends each to its own indicator. Heart sounds are displayed and played simultaneously by controlling the moments at which data are written to the indicator and to the sound output device. In clinical testing, heart sounds were successfully located against the ECG and played in real time.

  7. Vulnerability to the Irrelevant Sound Effect in Adult ADHD.

    PubMed

    Pelletier, Marie-France; Hodgetts, Helen M; Lafleur, Martin F; Vincent, Annick; Tremblay, Sébastien

    2016-04-01

    An ecologically valid adaptation of the irrelevant sound effect paradigm was employed to examine the relative roles of short-term memory, selective attention, and sustained attention in ADHD. In all, 32 adults with ADHD and 32 control participants completed a serial recall task in silence or while ignoring irrelevant background sound. Serial recall performance in adults with ADHD was reduced relative to controls in both conditions. The degree of interference due to irrelevant sound was greater for adults with ADHD. Furthermore, a positive correlation was observed between task performance under conditions of irrelevant sound and the extent of attentional problems reported by patients on a clinical symptom scale. The results demonstrate that adults with ADHD exhibit impaired short-term memory and a low resistance to distraction; however, their capacity for sustained attention is preserved as the impact of irrelevant sound diminished over the course of the task.

  8. Sensory-Motor Networks Involved in Speech Production and Motor Control: An fMRI Study

    PubMed Central

    Behroozmand, Roozbeh; Shebek, Rachel; Hansen, Daniel R.; Oya, Hiroyuki; Robin, Donald A.; Howard, Matthew A.; Greenlee, Jeremy D.W.

    2015-01-01

    Speaking is one of the most complex motor behaviors developed to facilitate human communication. The underlying neural mechanisms of speech involve sensory-motor interactions that incorporate feedback information for online monitoring and control of produced speech sounds. In the present study, we adopted an auditory feedback pitch perturbation paradigm and combined it with functional magnetic resonance imaging (fMRI) recordings in order to identify brain areas involved in speech production and motor control. Subjects underwent fMRI scanning while they produced a steady vowel sound /a/ (speaking) or listened to the playback of their own vowel production (playback). During each condition, the auditory feedback from vowel production was either normal (no perturbation) or perturbed by an upward (+600 cents) pitch shift stimulus randomly. Analysis of BOLD responses during speaking (with and without shift) vs. rest revealed activation of a complex network including bilateral superior temporal gyrus (STG), Heschl's gyrus, precentral gyrus, supplementary motor area (SMA), Rolandic operculum, postcentral gyrus and right inferior frontal gyrus (IFG). Performance correlation analysis showed that the subjects produced compensatory vocal responses that significantly correlated with BOLD response increases in bilateral STG and left precentral gyrus. However, during playback, the activation network was limited to cortical auditory areas including bilateral STG and Heschl's gyrus. Moreover, the contrast between speaking vs. playback highlighted a distinct functional network that included bilateral precentral gyrus, SMA, IFG, postcentral gyrus and insula. These findings suggest that speech motor control involves feedback error detection in sensory (e.g. auditory) cortices that subsequently activate motor-related areas for the adjustment of speech parameters during speaking. PMID:25623499

  9. Cortical and Sensory Causes of Individual Differences in Selective Attention Ability Among Listeners With Normal Hearing Thresholds.

    PubMed

    Shinn-Cunningham, Barbara

    2017-10-17

    This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings. The results from neuroscience and psychoacoustics are reviewed. In noisy settings, listeners focus their attention by engaging cortical brain networks to suppress unimportant sounds; they then can analyze and understand an important sound, such as speech, amidst competing sounds. Differences in the efficacy of top-down control of attention can affect communication abilities. In addition, subclinical deficits in sensory fidelity can disrupt the ability to perceptually segregate sound sources, interfering with selective attention, even in listeners with NHTs. Studies of variability in control of attention and in sensory coding fidelity may help to isolate and identify some of the causes of communication disorders in individuals presenting at the clinic with "normal hearing." How well an individual with NHTs can understand speech amidst competing sounds depends not only on the sound being audible but also on the integrity of cortical control networks and the fidelity of the representation of suprathreshold sound. Understanding the root cause of difficulties experienced by listeners with NHTs ultimately can lead to new, targeted interventions that address specific deficits affecting communication in noise. http://cred.pubs.asha.org/article.aspx?articleid=2601617.

  10. Underwater Acoustic Source Localisation Among Blind and Sighted Scuba Divers: Comparative study.

    PubMed

    Cambi, Jacopo; Livi, Ludovica; Livi, Walter

    2017-05-01

    Many blind individuals demonstrate enhanced auditory spatial discrimination or localisation of sound sources in comparison to sighted subjects. However, this hypothesis has not yet been confirmed with regards to underwater spatial localisation. This study therefore aimed to investigate underwater acoustic source localisation among blind and sighted scuba divers. This study took place between February and June 2015 in Elba, Italy, and involved two experimental groups of divers with either acquired (n = 20) or congenital (n = 10) blindness and a control group of 30 sighted divers. Each subject took part in five attempts at an underwater acoustic source localisation task, in which the divers were requested to swim to the source of a sound originating from one of 24 potential locations. The control group had their sight obscured during the task. The congenitally blind divers demonstrated significantly better underwater sound localisation compared to the control group or those with acquired blindness (P = 0.0007). In addition, there was a significant correlation between years of blindness and underwater sound localisation (P < 0.0001). Congenital blindness was found to positively affect the ability of a diver to recognise the source of a sound in an underwater environment. As the correct localisation of sounds underwater may help individuals to avoid imminent danger, divers should perform sound localisation tests during training sessions.

  11. How Sound Symbolism Is Processed in the Brain: A Study on Japanese Mimetic Words

    PubMed Central

    Okuda, Jiro; Okada, Hiroyuki; Matsuda, Tetsuya

    2014-01-01

    Sound symbolism is the systematic and non-arbitrary link between word and meaning. Although a number of behavioral studies demonstrate that both children and adults are universally sensitive to sound symbolism in mimetic words, the neural mechanisms underlying this phenomenon have not yet been extensively investigated. The present study used functional magnetic resonance imaging to investigate how Japanese mimetic words are processed in the brain. In Experiment 1, we compared processing for motion mimetic words with that for non-sound symbolic motion verbs and adverbs. Mimetic words uniquely activated the right posterior superior temporal sulcus (STS). In Experiment 2, we further examined the generalizability of the findings from Experiment 1 by testing another domain: shape mimetics. Our results show that the right posterior STS was active when subjects processed both motion and shape mimetic words, thus suggesting that this area may be the primary structure for processing sound symbolism. Increased activity in the right posterior STS may also reflect how sound symbolic words function as both linguistic and non-linguistic iconic symbols. PMID:24840874

  12. Personality traits modulate subcortical and cortical vestibular and anxiety responses to sound-evoked otolithic receptor stimulation.

    PubMed

    Indovina, Iole; Riccelli, Roberta; Staab, Jeffrey P; Lacquaniti, Francesco; Passamonti, Luca

    2014-11-01

    Strong links between anxiety, space-motion perception, and vestibular symptoms have been recognized for decades. These connections may extend to anxiety-related personality traits. Psychophysical studies showed that high trait anxiety affected postural control and visual scanning strategies under stress. Neuroticism and introversion were identified as risk factors for chronic subjective dizziness (CSD), a common psychosomatic syndrome. This study examined possible relationships between personality traits and activity in brain vestibular networks for the first time using functional magnetic resonance imaging (fMRI). Twenty-six right-handed healthy individuals underwent fMRI during sound-evoked vestibular stimulation. Regional brain activity and functional connectivity measures were correlated with personality traits of the Five Factor Model (neuroticism, extraversion-introversion, openness, agreeableness, conscientiousness). Neuroticism correlated positively with activity in the pons, vestibulo-cerebellum, and para-striate cortex, and negatively with activity in the supra-marginal gyrus. Neuroticism also correlated positively with connectivity between pons and amygdala, vestibulo-cerebellum and amygdala, inferior frontal gyrus and supra-marginal gyrus, and inferior frontal gyrus and para-striate cortex. Introversion correlated positively with amygdala activity and negatively with connectivity between amygdala and inferior frontal gyrus. Neuroticism and introversion correlated with activity and connectivity in cortical and subcortical vestibular, visual, and anxiety systems during vestibular stimulation. These personality-related changes in brain activity may represent neural correlates of threat sensitivity in posture and gaze control mechanisms in normal individuals. They also may reflect risk factors for anxiety-related morbidity in patients with vestibular disorders, including previously observed associations of neuroticism and introversion with CSD.

  13. Aquatic Acoustic Metrics Interface Utility for Underwater Sound Monitoring and Analysis

    PubMed Central

    Ren, Huiying; Halvorsen, Michele B.; Deng, Zhiqun Daniel; Carlson, Thomas J.

    2012-01-01

    Fishes and marine mammals may suffer a range of potential effects from exposure to intense underwater sound generated by anthropogenic activities such as pile driving, shipping, sonars, and underwater blasting. Several underwater sound recording (USR) devices have been built to acquire samples of the underwater sound generated by anthropogenic activities. Software becomes indispensable for processing and analyzing the audio files recorded by these USRs. In this paper, we provide a detailed description of a new software package, the Aquatic Acoustic Metrics Interface (AAMI), specifically designed for analysis of underwater sound recordings to provide data in metrics that facilitate evaluation of the potential impacts of the sound on aquatic animals. In addition to the basic functions, such as loading and editing audio files recorded by USRs and batch processing of sound files, the software utilizes recording system calibration data to compute important parameters in physical units. The software also facilitates comparison of the recorded sound metrics with biological measures such as audiograms of the sensitivity of aquatic animals to sound, integrating the various components into a single analytical frame. The features of the AAMI software are discussed, and several case studies are presented to illustrate its functionality. PMID:22969353
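    The kind of calibrated metrics such software reports can be illustrated with a minimal computation: converting raw samples to pressure via a recording-chain sensitivity, then computing RMS and peak sound pressure levels and the sound exposure level (SEL) re 1 µPa, the usual underwater reference. This is a generic sketch with a hypothetical sensitivity and a synthetic signal, not the AAMI's actual code.

```python
import numpy as np

# Calibrated underwater sound metrics from a raw recording. The sensitivity
# (Pa per raw count) is a hypothetical calibration value; signal is synthetic.
fs = 48000                                   # sample rate (Hz)
sensitivity = 2.0e-5                         # Pa per raw count (hypothetical)
raw = 1e4 * np.sin(2 * np.pi * 200 * np.arange(fs) / fs)  # 1 s, 200 Hz tone

p_upa = raw * sensitivity * 1e6              # pressure in micropascals
p_ref = 1.0                                  # 1 uPa reference (underwater)

spl_rms = 20 * np.log10(np.sqrt(np.mean(p_upa ** 2)) / p_ref)    # dB re 1 uPa
spl_peak = 20 * np.log10(np.max(np.abs(p_upa)) / p_ref)          # dB re 1 uPa
sel = 10 * np.log10(np.sum(p_upa ** 2 / fs) / p_ref ** 2)        # dB re 1 uPa^2 s

print(round(spl_rms, 1), round(spl_peak, 1), round(sel, 1))
```

    For a one-second signal the SEL equals the RMS SPL; for longer exposures the SEL grows with duration, which is why it is the metric usually compared against injury criteria for aquatic animals.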

  14. Using therapeutic sound with progressive audiologic tinnitus management.

    PubMed

    Henry, James A; Zaugg, Tara L; Myers, Paula J; Schechter, Martin A

    2008-09-01

    Management of tinnitus generally involves educational counseling, stress reduction, and/or the use of therapeutic sound. This article focuses on therapeutic sound, which can involve three objectives: (a) producing a sense of relief from tinnitus-associated stress (using soothing sound); (b) passively diverting attention away from tinnitus by reducing contrast between tinnitus and the acoustic environment (using background sound); and (c) actively diverting attention away from tinnitus (using interesting sound). Each of these goals can be accomplished using three different types of sound (broadly categorized as environmental sound, music, and speech), resulting in nine combinations of uses of sound and types of sound to manage tinnitus. The authors explain the uses and types of sound, how they can be combined, and how the different combinations are used with Progressive Audiologic Tinnitus Management. They also describe how sound is used with other sound-based methods of tinnitus management (Tinnitus Masking, Tinnitus Retraining Therapy, and Neuromonics).

  15. NASA Rat Acoustic Tolerance Test 1994-1995: 8 kHz, 16 kHz, 32 kHz Experiments

    NASA Technical Reports Server (NTRS)

    Mele, Gary D.; Holley, Daniel C.; Naidu, Sujata

    1996-01-01

    Adult male Sprague-Dawley rats were exposed to chronic applied sound (74 to 79 dB SPL) with octave band center frequencies of either 8, 16, or 32 kHz for up to 60 days. Control cages had ambient sound levels of about 62 dB SPL. Groups of rats (test vs. control; N=9 per group) were euthanized after 0, 5, 14, 30, and 60 days. On each euthanasia day, objective evaluation of their physiology and behavior was performed using a Stress Assessment Battery (SAB) of measures. In addition, rat hearing was assessed using the brainstem auditory evoked response (BAER) method after 60 days of exposure. No statistically significant differences in mean daily food use could be attributed to the presence of the applied test sound. Test rats used 5% more water than control rats; in the 8 kHz and 32 kHz tests this amount was statistically significant (P less than .05). This is a minor difference of questionable physiological significance; however, it may indicate a small reaction to the constant applied sound. Across all test frequencies, day 5 test rats had 6% larger spleens than control rats. No other body or organ weight differences were found to be statistically significant with respect to the application of sound. This spleen effect may be a transient adaptive process related to adaptation to the constant applied noise. No significant test effect on differential white blood cell counts could be demonstrated. One group demonstrated a low eosinophil count (16 kHz experiment, day 14 test group); however, this finding was considered suspect. Across all test frequencies studied, day 5 test rats had 17% fewer total leukocytes than day 5 control rats. Sound-exposed test rats exhibited 44% lower plasma corticosterone concentrations than did control rats. Note that the plasma corticosterone concentration was lower in the sound-exposed test animals than in the control animals in every instance (frequency exposure and number of days exposed).

  16. Light-weight low-frequency loudspeaker

    NASA Astrophysics Data System (ADS)

    Corsaro, Robert; Tressler, James

    2002-05-01

    In an aerospace application, we require a very low-mass sound generator with good performance at low audio frequencies (i.e., 30-400 Hz). A number of device configurations have been explored using various actuation technologies. Two particularly interesting devices have been developed, both using ``Thunder'' transducers (Face Intl. Corp.) as the actuation component. One of these devices has the advantage of high sound output but a complex phase spectrum, while the other has somewhat lower output but a highly uniform phase. The former is particularly novel in that the actuator is coupled to a flat, compliant diaphragm supported on the edges by an inflatable tube. This results in a radiating surface with very high modal complexity. Sound pressure levels measured in the far field (25 cm) using only 200-V peak drive (one-third of its rating) were nominally 74 ± 6 dB over the band from 38 to 330 Hz. The second device essentially operates as a stiff low-mass piston, and is more suitable for our particular application, which is exploring the use of actively controlled surface covers for reducing sound levels in payload fairing regions. [Work supported by NRL/ONR Smart Blanket program.]

  17. The Technique of the Sound Studio: Radio, Record Production, Television, and Film. Revised Edition.

    ERIC Educational Resources Information Center

    Nisbett, Alec

    Detailed explanations of the studio techniques used in radio, record, television, and film sound production are presented in as non-technical language as possible. An introductory chapter discusses the physics and physiology of sound. Subsequent chapters detail standards for sound control in the studio; explain the planning and routine of a sound…

  18. Active Noise Control of Radiated Noise from Jets Originating NASA

    NASA Technical Reports Server (NTRS)

    Doty, Michael J.; Fuller, Christopher R.; Schiller, Noah H.; Turner, Travis L.

    2013-01-01

    The reduction of jet noise using a closed-loop active noise control system with high-bandwidth active chevrons was investigated. The high-frequency energy introduced by piezoelectrically driven chevrons was demonstrated to achieve a broadband reduction of jet noise, presumably due to the suppression of large-scale turbulence. For a nozzle with one active chevron, benefits of up to 0.8 dB overall sound pressure level (OASPL) were observed compared to a static chevron nozzle near the maximum noise emission angle, and benefits of up to 1.9 dB OASPL were observed compared to a baseline nozzle with no chevrons. The closed-loop actuation system was able to effectively reduce noise at select frequencies by 1-3 dB. However, integrated OASPL did not indicate further reduction beyond the open-loop benefits, most likely due to the preliminary controller design, which was focused on narrowband performance.
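
The overall sound pressure level (OASPL) figures quoted above combine acoustic energy across frequency; when band levels are available, OASPL follows from a decibel energy sum. A minimal sketch (the function name is illustrative, not from the paper):

```python
import math

def oaspl(band_levels_db):
    """Overall sound pressure level from a set of incoherent band levels (dB)."""
    # Decibel energy sum: convert each band level to relative intensity,
    # add the intensities, then convert back to decibels.
    return 10.0 * math.log10(sum(10 ** (lp / 10.0) for lp in band_levels_db))

# Two equal 90 dB bands combine to ~93 dB (the familiar +3 dB doubling rule).
print(round(oaspl([90.0, 90.0]), 1))  # → 93.0
```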

  19. How far away is plug 'n' play? Assessing the near-term potential of sonification and auditory display

    NASA Technical Reports Server (NTRS)

    Bargar, Robin

    1995-01-01

    The commercial music industry offers a broad range of plug 'n' play hardware and software scaled to music professionals and scaled to a broad consumer market. The principles of sound synthesis utilized in these products are relevant to application in virtual environments (VE). However, the closed architectures used in commercial music synthesizers are prohibitive to low-level control during real-time rendering, and the algorithms and sounds themselves are not standardized from product to product. To bring sound into VE requires a new generation of open architectures designed for human-controlled performance from interfaces embedded in immersive environments. This presentation addresses the state of the sonic arts in scientific computing and VE, analyzes research challenges facing sound computation, and offers suggestions regarding tools we might expect to become available during the next few years. A list of classes of audio functionality in VE includes sonification -- the use of sound to represent data from numerical models; 3D auditory display (spatialization and localization, also called externalization); navigation cues for positional orientation and for finding items or regions inside large spaces; voice recognition for controlling the computer; external communications between users in different spaces; and feedback to the user concerning his own actions or the state of the application interface. To effectively convey this considerable variety of signals, we apply principles of acoustic design to ensure the messages are neither confusing nor competing. We approach the design of auditory experience through a comprehensive structure for messages, and message interplay we refer to as an Automated Sound Environment. Our research addresses real-time sound synthesis, real-time signal processing and localization, interactive control of high-dimensional systems, and synchronization of sound and graphics.

  20. Lung Sounds in Children before and after Respiratory Physical Therapy for Right Middle Lobe Atelectasis

    PubMed Central

    Adachi, Satoshi; Nakano, Hiroshi; Odajima, Hiroshi; Motomura, Chikako; Yoshioka, Yukiko

    2016-01-01

    Background: Chest auscultation is commonly performed during respiratory physical therapy (RPT). However, the changes in breath sounds in children with atelectasis have not been previously reported. The aim of this study was to clarify the characteristics of breath sounds in children with atelectasis using acoustic measurements. Method: The subjects of this study were 13 children with right middle lobe atelectasis (3–7 years) and 14 healthy children (3–7 years). Lung sounds at the bilateral fifth intercostal spaces on the midclavicular line were recorded. The right-to-left ratio (R/L ratio) and the expiration-to-inspiration ratio (E/I ratio) of the breath-sound sound pressure were calculated separately for three octave bands (100–200 Hz, 200–400 Hz, and 400–800 Hz). These data were then compared between the atelectasis and control groups. In addition, the same measurements were repeated after treatment, including RPT, in the atelectasis group. Result: Before treatment, the inspiratory R/L ratios for all the frequency bands were significantly lower in the atelectasis group than in the control group, and the E/I ratios for all the frequency bands were significantly higher in the atelectasis group than in the control group. After treatment, the inspiratory R/L ratios of the atelectasis group did not increase significantly, but the E/I ratios decreased for all the frequency bands and became similar to those of the control group. Conclusion: Breath sound attenuation in the atelectatic area remained unchanged even after radiographic resolution, suggesting a continued decrease in local ventilation. On the other hand, the elevated E/I ratio for the atelectatic area was normalized after treatment. Therefore, the difference between inspiratory and expiratory sound intensities may be an important marker of atelectatic improvement in children. PMID:27611433
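
The octave-band level ratios used in studies like this one (R/L and E/I) can be illustrated with a short sketch. This is not the authors' analysis code; the FFT-based band RMS and the toy signals are assumptions made purely for illustration.

```python
import numpy as np

# The three octave bands analyzed in the study (Hz).
BANDS = [(100, 200), (200, 400), (400, 800)]

def band_rms(x, fs, lo, hi):
    """RMS amplitude of x restricted to [lo, hi) Hz, via the FFT spectrum."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    sel = (freqs >= lo) & (freqs < hi)
    # Parseval: in-band power is the sum of squared bin magnitudes.
    return np.sqrt(2.0 * np.sum(np.abs(spec[sel]) ** 2) / len(x) ** 2)

def level_ratio_db(num, den, fs, lo, hi):
    """Band level ratio in dB, e.g. right-to-left (R/L) or E/I."""
    return 20.0 * np.log10(band_rms(num, fs, lo, hi) / band_rms(den, fs, lo, hi))

# Toy check: a 300 Hz tone at half amplitude on one side gives a -6 dB ratio.
fs = 4000
t = np.arange(fs) / fs
left = np.sin(2 * np.pi * 300 * t)
right = 0.5 * np.sin(2 * np.pi * 300 * t)
print(round(level_ratio_db(right, left, fs, 200, 400), 1))  # → -6.0
```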

  1. Lung Sounds in Children before and after Respiratory Physical Therapy for Right Middle Lobe Atelectasis.

    PubMed

    Adachi, Satoshi; Nakano, Hiroshi; Odajima, Hiroshi; Motomura, Chikako; Yoshioka, Yukiko

    2016-01-01

    Chest auscultation is commonly performed during respiratory physical therapy (RPT). However, the changes in breath sounds in children with atelectasis have not been previously reported. The aim of this study was to clarify the characteristics of breath sounds in children with atelectasis using acoustic measurements. The subjects of this study were 13 children with right middle lobe atelectasis (3-7 years) and 14 healthy children (3-7 years). Lung sounds at the bilateral fifth intercostal spaces on the midclavicular line were recorded. The right-to-left ratio (R/L ratio) and the expiration-to-inspiration ratio (E/I ratio) of the breath-sound sound pressure were calculated separately for three octave bands (100-200 Hz, 200-400 Hz, and 400-800 Hz). These data were then compared between the atelectasis and control groups. In addition, the same measurements were repeated after treatment, including RPT, in the atelectasis group. Before treatment, the inspiratory R/L ratios for all the frequency bands were significantly lower in the atelectasis group than in the control group, and the E/I ratios for all the frequency bands were significantly higher in the atelectasis group than in the control group. After treatment, the inspiratory R/L ratios of the atelectasis group did not increase significantly, but the E/I ratios decreased for all the frequency bands and became similar to those of the control group. Breath sound attenuation in the atelectatic area remained unchanged even after radiographic resolution, suggesting a continued decrease in local ventilation. On the other hand, the elevated E/I ratio for the atelectatic area was normalized after treatment. Therefore, the difference between inspiratory and expiratory sound intensities may be an important marker of atelectatic improvement in children.

  2. Hybrid Active/Passive Jet Engine Noise Suppression System

    NASA Technical Reports Server (NTRS)

    Parente, C. A.; Arcas, N.; Walker, B. E.; Hersh, A. S.; Rice, E. J.

    1999-01-01

    A novel adaptive segmented liner concept has been developed that employs active control elements to modify the in-duct sound field to enhance the tone-suppressing performance of passive liner elements. This could potentially allow engine designs that inherently produce more tone noise but less broadband noise, or could allow passive liner designs to more optimally address high frequency broadband noise. A proof-of-concept validation program was undertaken, consisting of the development of an adaptive segmented liner that would maximize attenuation of two radial modes in a circular or annular duct. The liner consisted of a leading active segment with dual annuli of axially spaced active Helmholtz resonators, followed by an optimized passive liner and then an array of sensing microphones. Three successively complex versions of the adaptive liner were constructed and their performances tested relative to the performance of optimized uniform passive and segmented passive liners. The salient results of the tests were: The adaptive segmented liner performed well in a high flow speed model fan inlet environment, was successfully scaled to a high sound frequency and successfully attenuated three radial modes using sensor and active resonator arrays that were designed for a two mode, lower frequency environment.

  3. A noise assessment and prediction system

    NASA Technical Reports Server (NTRS)

    Olsen, Robert O.; Noble, John M.

    1990-01-01

    A system has been designed to provide an assessment of noise levels that result from testing activities at Aberdeen Proving Ground, Md. The system receives meteorological data from surface stations and an upper air sounding system. The data from these systems are sent to a meteorological model, which provides forecasting conditions for up to three hours from the test time. The meteorological data are then used as input into an acoustic ray trace model which projects sound level contours onto a two-dimensional display of the surrounding area. This information is sent to the meteorological office for verification, as well as the range control office, and the environmental office. To evaluate the noise level predictions, a series of microphones are located off the reservation to receive the sound and transmit this information back to the central display unit. The computer models are modular allowing for a variety of models to be utilized and tested to achieve the best agreement with data. This technique of prediction and model validation will be used to improve the noise assessment system.
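
The acoustic ray trace step described above exploits the fact that sound bends toward regions of lower sound speed, which in air is set mainly by temperature. Below is a minimal layered-atmosphere sketch under Snell's law; the c ≈ 20.05·√T(K) speed formula is the standard dry-air approximation, while the step integrator and temperature profile are illustrative assumptions, not the Aberdeen system's model.

```python
import math

def sound_speed(temp_c):
    """Approximate speed of sound in dry air (m/s): c = 20.05 * sqrt(T_kelvin)."""
    return 20.05 * math.sqrt(temp_c + 273.15)

def trace_ray(theta0_deg, temp_profile, dz=1.0, zmax=1000.0):
    """Trace a 2-D upward ray through horizontal layers using Snell's law.

    temp_profile(z) returns temperature (deg C) at height z (m). Returns the
    horizontal range and height where the ray turns over (or reaches zmax).
    """
    # cos(theta)/c is invariant along the ray in a horizontally layered medium.
    snell = math.cos(math.radians(theta0_deg)) / sound_speed(temp_profile(0.0))
    x, z = 0.0, 0.0
    while z < zmax:
        cos_t = snell * sound_speed(temp_profile(z))
        if cos_t >= 1.0:  # turning point: the ray refracts back downward
            break
        x += dz / math.tan(math.acos(cos_t))
        z += dz
    return x, z

# A temperature inversion (warmer aloft) bends a shallow ray back to the
# ground, which is why test noise can carry far under inversion conditions.
x_turn, z_turn = trace_ray(5.0, lambda z: 15.0 + 0.05 * z)
```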

  4. Echolocation versus echo suppression in humans

    PubMed Central

    Wallmeier, Ludwig; Geßele, Nikodemus; Wiegrebe, Lutz

    2013-01-01

    Several studies have shown that blind humans can gather spatial information through echolocation. However, when localizing sound sources, the precedence effect suppresses spatial information of echoes, and thereby conflicts with effective echolocation. This study investigates the interaction of echolocation and echo suppression in terms of discrimination suppression in virtual acoustic space. In the ‘Listening’ experiment, sighted subjects discriminated between positions of a single sound source, the leading or the lagging of two sources, respectively. In the ‘Echolocation’ experiment, the sources were replaced by reflectors. Here, the same subjects evaluated echoes generated in real time from self-produced vocalizations and thereby discriminated between positions of a single reflector, the leading or the lagging of two reflectors, respectively. Two key results were observed. First, sighted subjects can learn to discriminate positions of reflective surfaces echo-acoustically with accuracy comparable to sound source discrimination. Second, in the Listening experiment, the presence of the leading source affected discrimination of lagging sources much more than vice versa. In the Echolocation experiment, however, the presence of both the lead and the lag strongly affected discrimination. These data show that the classically described asymmetry in the perception of leading and lagging sounds is strongly diminished in an echolocation task. Additional control experiments showed that the effect is owing both to the direct sound of the vocalization that precedes the echoes and to the fact that the subjects actively vocalize in the echolocation task. PMID:23986105

  5. Knowledge About Sounds—Context-Specific Meaning Differently Activates Cortical Hemispheres, Auditory Cortical Fields, and Layers in House Mice

    PubMed Central

    Geissler, Diana B.; Schmidt, H. Sabine; Ehret, Günter

    2016-01-01

    Activation of the auditory cortex (AC) by a given sound pattern is plastic, depending, in largely unknown ways, on the physiological state and the behavioral context of the receiving animal and on the receiver's experience with the sounds. Such plasticity can be inferred when house mouse mothers respond maternally to pup ultrasounds right after parturition and naïve females have to learn to respond. Here we use c-FOS immunocytochemistry to quantify highly activated neurons in the AC fields and layers of seven groups of mothers and naïve females who have different knowledge about and are differently motivated to respond to acoustic models of pup ultrasounds of different behavioral significance. Profiles of FOS-positive cells in the AC primary fields (AI, AAF), the ultrasonic field (UF), the secondary field (AII), and the dorsoposterior field (DP) suggest that activation reflects in AI, AAF, and UF the integration of sound properties with animal state-dependent factors, in the higher-order field AII the news value of a given sound in the behavioral context, and in the higher-order field DP the level of maternal motivation and, by left-hemisphere activation advantage, the recognition of the meaning of sounds in the given context. Anesthesia reduced activation in all fields, especially in cortical layers 2/3. Thus, plasticity in the AC is field-specific preparing different output of AC fields in the process of perception, recognition and responding to communication sounds. Further, the activation profiles of the auditory cortical fields suggest the differentiation between brains hormonally primed to know (mothers) and brains which acquired knowledge via implicit learning (naïve females). In this way, auditory cortical activation discriminates between instinctive (mothers) and learned (naïve females) cognition. PMID:27013959

  6. Temporal processing deficit leads to impaired multisensory binding in schizophrenia.

    PubMed

    Zvyagintsev, Mikhail; Parisi, Carmen; Mathiak, Klaus

    2017-09-01

    Schizophrenia has been characterised by neurodevelopmental dysconnectivity resulting in cognitive and perceptual dysmetria. Hence, patients with schizophrenia may be impaired in detecting the temporal relationship between stimuli in different sensory modalities. However, only a few studies have described a deficit in the perception of temporally asynchronous multisensory stimuli in schizophrenia. We examined the perceptual bias and the processing time of synchronous and delayed sounds in the streaming-bouncing illusion in 16 patients with schizophrenia and a matched control group of 18 participants. Equally for patients and controls, the synchronous sound biased the percept of two moving squares towards bouncing, as opposed to the more frequent streaming percept in the condition without sound. In healthy controls, a delay of the sound presentation significantly reduced the bias and led to prolonged processing time, whereas patients with schizophrenia did not differentiate between this condition and the condition with synchronous sound. Schizophrenia leads to a prolonged window of simultaneity for audiovisual stimuli. Therefore, a temporal processing deficit in schizophrenia can lead to hyperintegration of temporally unmatched multisensory stimuli.

  7. The auditory P50 component to onset and offset of sound

    PubMed Central

    Pratt, Hillel; Starr, Arnold; Michalewski, Henry J.; Bleich, Naomi; Mittelman, Nomi

    2008-01-01

    Objective: The auditory Event-Related Potential (ERP) component P50 has been reported to be similar for sound onset and offset, but its magnetic homologue has been reported absent for sound offset. We compared the spatio-temporal distribution of cortical activity during P50 to sound onset and offset, without confounds of spectral change. Methods: ERPs were recorded in response to onsets and offsets of silent intervals of 0.5 s (gaps) appearing randomly in otherwise continuous white noise and were compared to ERPs to randomly distributed click pairs with half-second separation presented in silence. Subjects were awake and distracted from the stimuli by reading a complicated text. Measures of P50 included peak latency and amplitude, as well as source current density estimates to the clicks and sound onsets and offsets. Results: P50 occurred in response to noise onsets and to clicks, while to noise offsets it was absent. Latency of P50 was similar for noise onsets (56 msec) and clicks (53 msec). Sources of P50 to noise onsets and clicks included bilateral superior parietal areas. In contrast, noise offsets activated left inferior temporal and occipital areas at the time of P50. Source current density was significantly higher for noise onset than offset in the vicinity of the temporo-parietal junction. Conclusions: P50 to sound offset is absent, in contrast to the distinct P50 to sound onset and to clicks, which arise at different intracranial sources. P50 to stimulus onset and to clicks appears to reflect preattentive arousal by a new sound in the scene; sound offset does not introduce a new sound, hence the absent P50. Significance: Stimulus onset activates distinct early cortical processes that are absent at offset. PMID:18055255
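
ERP components such as P50 are small relative to background EEG and are recovered by averaging many stimulus-locked epochs, which suppresses uncorrelated noise by roughly the square root of the epoch count. A synthetic sketch of that averaging step; all amplitudes, counts, and the Gaussian-shaped component are illustrative, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 500                                   # sampling rate, Hz
t = np.arange(int(0.2 * fs)) / fs          # 200 ms post-stimulus window
# A 2 uV Gaussian-shaped "P50" component peaking 50 ms after the stimulus.
p50 = 2.0 * np.exp(-0.5 * ((t - 0.05) / 0.01) ** 2)

# 1000 stimulus-locked epochs, each buried in 10 uV-RMS background activity.
epochs = p50 + 10.0 * rng.standard_normal((1000, len(t)))
erp = epochs.mean(axis=0)                  # noise shrinks by ~sqrt(1000)

peak_ms = 1000.0 * t[np.argmax(erp)]       # latency of the recovered peak
print(round(peak_ms))
```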

  8. [Effects of infrasound on activities of 3beta-hydroxysteroid dehydrogenase and acid phosphatase of polygonal cells in adrenal cortex zona fasciculata in mice].

    PubMed

    Dang, Wei-min; Wang, Sheng; Tian, Shi-xiu; Chen, Bing; Sun, Fei; Li, Wei; Jiao, Yan; He, Li-hua

    2007-02-01

    To explore the biological effects of infrasound on the polygonal cells in the adrenal cortex zona fasciculata in mice, the effects of infrasound on the activities of 3beta-hydroxysteroid dehydrogenase (3beta-HSDH) and acid phosphatase (ACP) of the polygonal cells were observed after exposure to 8 and 16 Hz infrasound at 80, 90, 100, 110, 120, and 130 dB for 1, 7, and 14 days, and again 14 days after the exposure ended. Under exposure to 8 Hz infrasound, 3beta-HSDH activity increased as the sound pressure level increased; only when the sound pressure level reached 130 dB did the activity begin to decrease. Under exposure to 16 Hz, 80 dB infrasound, no significant difference in 3beta-HSDH activity could be observed between the treatment and control groups, but injury of the polygonal cells had already appeared. At 16 Hz and 100 dB, 3beta-HSDH activity started to increase while the cell injury persisted. At 16 Hz and 120 dB, local tissue damage appeared. Fourteen days after mice were exposed to 8 Hz infrasound at 90 dB and 130 dB for 14 days continuously, the local tissue injury of the adrenal cortex zona fasciculata had begun to recover to a certain extent, but the higher the exposure sound pressure level, the poorer the tissue recovery. The biological effects of infrasound on the polygonal cells depend on the frequency of the infrasound within a certain range of intensities, but this characteristic is usually masked by severe tissue injury. When exposure to infrasound is stopped for a period of time, the local tissue injury of the adrenal cortex zona fasciculata can recover to a certain extent, but the higher the exposure sound pressure level, the poorer the recovery.

  9. Echoes of the spoken past: how auditory cortex hears context during speech perception

    PubMed Central

    Skipper, Jeremy I.

    2014-01-01

    What do we hear when someone speaks and what does auditory cortex (AC) do with that sound? Given how meaningful speech is, it might be hypothesized that AC is most active when other people talk so that their productions get decoded. Here, neuroimaging meta-analyses show the opposite: AC is least active and sometimes deactivated when participants listened to meaningful speech compared to less meaningful sounds. Results are explained by an active hypothesis-and-test mechanism where speech production (SP) regions are neurally re-used to predict auditory objects associated with available context. By this model, more AC activity for less meaningful sounds occurs because predictions are less successful from context, requiring further hypotheses be tested. This also explains the large overlap of AC co-activity for less meaningful sounds with meta-analyses of SP. An experiment showed a similar pattern of results for non-verbal context. Specifically, words produced less activity in AC and SP regions when preceded by co-speech gestures that visually described those words compared to those words without gestures. Results collectively suggest that what we ‘hear’ during real-world speech perception may come more from the brain than our ears and that the function of AC is to confirm or deny internal predictions about the identity of sounds. PMID:25092665

  10. Sound Levels and Risk Perceptions of Music Students During Classes.

    PubMed

    Rodrigues, Matilde A; Amorim, Marta; Silva, Manuela V; Neves, Paula; Sousa, Aida; Inácio, Octávio

    2015-01-01

    It is well recognized that professional musicians are at risk of hearing damage due to the exposure to high sound pressure levels during music playing. However, it is important to recognize that the musicians' exposure may start early in the course of their training as students in the classroom and at home. Studies regarding sound exposure of music students and their hearing disorders are scarce and do not take into account important influencing variables. Therefore, this study aimed to describe sound level exposures of music students at different music styles, classes, and according to the instrument played. Further, this investigation attempted to analyze the perceptions of students in relation to exposure to loud music and consequent health risks, as well as to characterize preventive behaviors. The results showed that music students are exposed to high sound levels in the course of their academic activity. This exposure is potentiated by practice outside the school and other external activities. Differences were found between music style, instruments, and classes. Tinnitus, hyperacusis, diplacusis, and sound distortion were reported by the students. However, students were not entirely aware of the health risks related to exposure to high sound pressure levels. These findings reflect the importance of starting intervention in relation to noise risk reduction at an early stage, when musicians are commencing their activity as students.

  11. Hologlyphics: volumetric image synthesis performance system

    NASA Astrophysics Data System (ADS)

    Funk, Walter

    2008-02-01

    This paper describes a novel volumetric image synthesis system and artistic technique, which generate moving volumetric images in real-time, integrated with music. The system, called the Hologlyphic Funkalizer, is performance based, wherein the images and sound are controlled by a live performer, for the purposes of entertaining a live audience and creating a performance art form unique to volumetric and autostereoscopic images. While currently configured for a specific parallax barrier display, the Hologlyphic Funkalizer's architecture is completely adaptable to various volumetric and autostereoscopic display technologies. Sound is distributed through a multi-channel audio system; currently a quadraphonic speaker setup is implemented. The system controls volumetric image synthesis, production of music and spatial sound via acoustic analysis and human gestural control, using a dedicated control panel, motion sensors, and multiple musical keyboards. Music can be produced by external acoustic instruments, pre-recorded sounds or custom audio synthesis integrated with the volumetric image synthesis. Aspects of the sound can control the evolution of images and vice versa. Sounds can be associated and interact with images, for example voice synthesis can be combined with an animated volumetric mouth, where nuances of generated speech modulate the mouth's expressiveness. Different images can be sent to up to 4 separate displays. The system applies many novel volumetric special effects, and extends several film and video special effects into the volumetric realm. Extensive and various content has been developed and shown to live audiences by a live performer. Real world applications will be explored, with feedback on the human factors.

  12. The windowed sound therapy: a new empirical approach for an effective personalized treatment of tinnitus.

    PubMed

    Lugli, Marco; Romani, Romano; Ponzi, Stefano; Bacciu, Salvatore; Parmigiani, Stefano

    2009-01-01

    We auditorily stimulated patients affected by subjective tinnitus with broadband noise containing a notch around their tinnitus frequency. We assessed the long-term effects on tinnitus perception in patients listening to notched noise stimuli (referred to as windowed sound therapy [WST]) by measuring the variation of subjects' tinnitus loudness over a period of 2-12 months. We tested the effectiveness of WST using non-notched broadband noise and noise of water as control sound therapies. We found a significant long-term reduction of tinnitus loudness in subjects treated with notched noise but not in those treated with control stimulations. These results point to the importance of the personalized sound treatment of tinnitus sufferers for the development of an effective tinnitus sound therapy.
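
Generating the notched broadband noise used in WST can be sketched by shaping white noise in the frequency domain. This is an illustrative construction, not the authors' stimulus-generation method; the function name and parameters are hypothetical.

```python
import numpy as np

def notched_noise(duration_s, fs, notch_center_hz, notch_width_hz, seed=0):
    """Broadband noise with a spectral notch around a target (tinnitus) frequency.

    A minimal sketch: white noise is shaped in the frequency domain by zeroing
    the FFT bins inside the notch, then transformed back to the time domain.
    """
    rng = np.random.default_rng(seed)
    n = int(duration_s * fs)
    noise = rng.standard_normal(n)
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Zero every bin within half the notch width of the center frequency.
    spec[np.abs(freqs - notch_center_hz) < notch_width_hz / 2.0] = 0.0
    return np.fft.irfft(spec, n)

# One second of noise with a 2 kHz-wide notch around a 4 kHz tinnitus pitch.
x = notched_noise(1.0, 44100, 4000, 2000)
```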

  13. Modular and Adaptive Control of Sound Processing

    NASA Astrophysics Data System (ADS)

    van Nort, Douglas

    This dissertation presents research into the creation of systems for the control of sound synthesis and processing. The focus differs from much of the work related to digital musical instrument design, which has rightly concentrated on the physicality of the instrument and interface: sensor design, choice of controller, feedback to the performer, and so on. Often, a particular choice of sound processing is made, and the resultant parameters from the physical interface are conditioned and mapped to the available sound parameters in an exploratory fashion. The main goal of the work presented here is to demonstrate the importance of the space that lies between physical interface design and the choice of sound manipulation algorithm, and to present a new framework for instrument design that strongly considers this essential part of the design process. In particular, this research takes the viewpoint that instrument designs should be considered in a musical control context, and that both control and sound dynamics must be considered in tandem. In order to achieve this holistic approach, the work presented in this dissertation assumes complementary points of view. Instrument design is first seen as a function of musical context, focusing on electroacoustic music and leading to a view on gesture that relates perceived musical intent to the dynamics of an instrumental system. The important design concept of mapping is then discussed from a theoretical and conceptual point of view, relating perceptual, systems-based, and mathematically oriented ways of examining the subject. This theoretical framework gives rise to a mapping design space, functional analysis of pertinent existing literature, implementations of mapping tools, instrumental control designs and several perceptual studies that explore the influence of mapping structure. 
Each of these reflects a high-level approach in which control structures are imposed on top of a high-dimensional space of control and sound synthesis parameters. In this view, desired gestural dynamics and sonic response are achieved through modular construction of mapping layers that are themselves subject to parametric control. Complementing this view of the design process, the work concludes with an approach in which the creation of gestural control/sound dynamics is considered at the low level of the underlying sound model. The result is an adaptive system that is specialized to noise-based transformations that are particularly relevant in an electroacoustic music context. Taken together, these different approaches to design and evaluation result in a unified framework for the creation of an instrumental system. The key point is that this framework addresses the influence that mapping structure and control dynamics have on the perceived feel of the instrument. Each of the results illustrates this using either top-down or bottom-up approaches that consider musical control context, thereby pointing to the greater potential for refined sonic articulation that can be had by combining them in the design process.

  14. Hearing Living Symbols and Nonliving Icons: Category Specificities in the Cognitive Processing of Environmental Sounds

    ERIC Educational Resources Information Center

    Giordano, Bruno L.; McDonnell, John; McAdams, Stephen

    2010-01-01

    The neurocognitive processing of environmental sounds and linguistic stimuli shares common semantic resources and can lead to the activation of motor programs for the generation of the passively heard sound or speech. We investigated the extent to which the cognition of environmental sounds, like that of language, relies on symbolic mental…

  15. Rainsticks: Integrating Culture, Folklore, and the Physics of Sound

    ERIC Educational Resources Information Center

    Moseley, Christine; Fies, Carmen

    2007-01-01

    The purpose of this activity is for students to build a rainstick out of materials in their own environment and imitate the sound of rain while investigating the physical principles of sound. Students will be able to relate the sound produced by an instrument to the type and quantity of materials used in its construction.

  16. Sound radiation and wing mechanics in stridulating field crickets (Orthoptera: Gryllidae).

    PubMed

    Montealegre-Z, Fernando; Jonsson, Thorin; Robert, Daniel

    2011-06-15

    Male field crickets emit pure-tone mating calls by rubbing their wings together. Acoustic radiation is produced by rapid oscillations of the wings, as the right wing (RW), bearing a file, is swept across the plectrum borne on the left wing (LW). Earlier work found the natural resonant frequency (f(o)) of individual wings to be different, but there is no consensus on the origin of these differences. Previous studies suggested that the frequency along the song pulse is controlled independently by each wing. It has also been argued that the stridulatory file has a variable f(o) and that the frequency modulation observed in most species is associated with this variability. To test these two hypotheses, a method was developed for the non-contact measurement of wing vibrations during singing in actively stridulating Gryllus bimaculatus. Using focal microinjection of the neuroactivator eserine into the cricket's brain to elicit stridulation and micro-scanning laser Doppler vibrometry, we monitored wing vibration in actively singing insects. The results show significantly lower f(o) in LWs compared with RWs, with the LW f(o) being identical to the sound carrier frequency (N=44). But during stridulation, the two wings resonate at one identical frequency, the song carrier frequency, with the LW dominating in amplitude response. These measurements also demonstrate that the stridulatory file is a constant resonator, as no variation was observed in f(o) along the file during sound radiation. Our findings show that, as they engage in stridulation, cricket wings work as coupled oscillators that together control the mechanical oscillations generating the remarkably pure species-specific song.

  17. Comparing Feedback Types in Multimedia Learning of Speech by Young Children With Common Speech Sound Disorders: Research Protocol for a Pretest Posttest Independent Measures Control Trial.

    PubMed

    Doubé, Wendy; Carding, Paul; Flanagan, Kieran; Kaufman, Jordy; Armitage, Hannah

    2018-01-01

    Children with speech sound disorders benefit from feedback about the accuracy of sounds they make. Home practice can reinforce feedback received from speech pathologists. Games in mobile device applications could encourage home practice, but those currently available are of limited value because they are unlikely to elaborate "Correct"/"Incorrect" feedback with information that can assist in improving the accuracy of the sound. This protocol proposes a "Wizard of Oz" experiment that aims to provide evidence for the provision of effective multimedia feedback for speech sound development. Children with two common speech sound disorders will play a game on a mobile device and make speech sounds when prompted by the game. A human "Wizard" will provide feedback on the accuracy of the sound but the children will perceive the feedback as coming from the game. Groups of 30 young children will be randomly allocated to one of five conditions: four types of feedback and a control which does not play the game. The results of this experiment will inform not only speech sound therapy, but also other types of language learning, both in general, and in multimedia applications. This experiment is a cost-effective precursor to the development of a mobile application that employs pedagogically and clinically sound processes for speech development in young children.

  18. Comparing Feedback Types in Multimedia Learning of Speech by Young Children With Common Speech Sound Disorders: Research Protocol for a Pretest Posttest Independent Measures Control Trial

    PubMed Central

    Doubé, Wendy; Carding, Paul; Flanagan, Kieran; Kaufman, Jordy; Armitage, Hannah

    2018-01-01

    Children with speech sound disorders benefit from feedback about the accuracy of sounds they make. Home practice can reinforce feedback received from speech pathologists. Games in mobile device applications could encourage home practice, but those currently available are of limited value because they are unlikely to elaborate “Correct”/”Incorrect” feedback with information that can assist in improving the accuracy of the sound. This protocol proposes a “Wizard of Oz” experiment that aims to provide evidence for the provision of effective multimedia feedback for speech sound development. Children with two common speech sound disorders will play a game on a mobile device and make speech sounds when prompted by the game. A human “Wizard” will provide feedback on the accuracy of the sound but the children will perceive the feedback as coming from the game. Groups of 30 young children will be randomly allocated to one of five conditions: four types of feedback and a control which does not play the game. The results of this experiment will inform not only speech sound therapy, but also other types of language learning, both in general, and in multimedia applications. This experiment is a cost-effective precursor to the development of a mobile application that employs pedagogically and clinically sound processes for speech development in young children. PMID:29674986

  19. Moderate acoustic changes can disrupt the sleep of very preterm infants in their incubators.

    PubMed

    Kuhn, Pierre; Zores, Claire; Langlet, Claire; Escande, Benoît; Astruc, Dominique; Dufour, André

    2013-10-01

    The aim was to evaluate the impact of moderate noise on the sleep of very preterm infants (VPI) in an observational study of 26 VPI of 26-31 weeks' gestation, with prospective measurements of sound pressure level and concomitant video recordings. Sound peaks were identified and classified according to their signal-to-noise ratio (SNR) above background noise. Prechtl's arousal states during sound peaks were assessed by two observers blinded to the purpose of the study. Changes in sleep/arousal states following sound peaks were compared with spontaneous changes during randomly selected periods without sound peaks. We identified 598 isolated sound peaks (5 ≤ SNR < 10 decibel slow response A (dBA), n = 518; 10 ≤ SNR < 15 dBA, n = 80) during sleep. Awakenings were observed during 33.8% (95% CI, 24-43.7%) of exposures to sound peaks of 5-10 dBA SNR and 39.7% (95% CI, 26-53.3%) of exposures to sound peaks of SNR 10-15 dBA, but only 11.7% (95% CI, 6.2-17.1%) of control periods. The proportions of awakenings following sound peaks were higher than the proportions of arousals during control periods (p < 0.005). Moderate acoustic changes can disrupt the sleep of VPI, and efficient sound abatement measures are needed. ©2013 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
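
    The peak-classification step this study describes (local SPL maxima binned by SNR above background) can be sketched as follows. The peak detector and the example levels are illustrative assumptions, not the study's actual signal processing.

```python
import numpy as np

def classify_sound_peaks(spl, background, bins=((5, 10), (10, 15))):
    """Classify local SPL maxima by their SNR above background (dBA).
    'spl' is a 1-D array of sound pressure levels; returns counts per bin."""
    spl = np.asarray(spl, dtype=float)
    snr = spl - background
    # a local maximum is a sample strictly above both neighbours
    is_peak = np.zeros_like(spl, dtype=bool)
    is_peak[1:-1] = (spl[1:-1] > spl[:-2]) & (spl[1:-1] > spl[2:])
    counts = {}
    for lo, hi in bins:
        counts[(lo, hi)] = int(np.sum(is_peak & (snr >= lo) & (snr < hi)))
    return counts

background = 45.0  # hypothetical incubator background level, dBA
spl = np.array([45, 46, 52, 46, 45, 57, 45, 44, 48, 44])
counts = classify_sound_peaks(spl, background)
print(counts)
# 52 dBA (SNR 7) falls in the 5-10 bin, 57 dBA (SNR 12) in the 10-15 bin,
# and the 48 dBA peak (SNR 3) is below the lowest bin
```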

  20. Food words distract the hungry: Evidence of involuntary semantic processing of task-irrelevant but biologically-relevant unexpected auditory words.

    PubMed

    Parmentier, Fabrice B R; Pacheco-Unguetti, Antonia P; Valero, Sara

    2018-01-01

    Rare changes in a stream of otherwise repeated task-irrelevant sounds break through selective attention and disrupt performance in an unrelated visual task by triggering shifts of attention to and from the deviant sound (deviance distraction). Evidence indicates that the involuntary orientation of attention to unexpected sounds is followed by their semantic processing. However, past demonstrations relied on tasks in which the meaning of the deviant sounds overlapped with features of the primary task. Here we examine whether such processing is observed when no such overlap is present but sounds carry some relevance to the participants' biological need to eat when hungry. We report the results of an experiment in which hungry and satiated participants partook in a cross-modal oddball task in which they categorized visual digits (odd/even) while ignoring task-irrelevant sounds. On most trials the irrelevant sound was a sinewave tone (standard sound). On the remaining trials, deviant sounds consisted of spoken words related to food (food deviants) or control words (control deviants). Questionnaire data confirmed state (but not trait) differences between the two groups with respect to food craving, as well as a greater desire to eat the food corresponding to the food-related words in the hungry relative to the satiated participants. The results of the oddball task revealed that food deviants produced greater distraction (longer response times) than control deviants in hungry participants while the reverse effect was observed in satiated participants. This effect was observed in the first block of trials but disappeared thereafter, reflecting semantic saturation. Our results suggest that (1) the semantic content of deviant sounds is involuntarily processed even when sharing no feature with the primary task; and that (2) distraction by deviant sounds can be modulated by the participants' biological needs.

  1. Food words distract the hungry: Evidence of involuntary semantic processing of task-irrelevant but biologically-relevant unexpected auditory words

    PubMed Central

    Pacheco-Unguetti, Antonia P.; Valero, Sara

    2018-01-01

    Rare changes in a stream of otherwise repeated task-irrelevant sounds break through selective attention and disrupt performance in an unrelated visual task by triggering shifts of attention to and from the deviant sound (deviance distraction). Evidence indicates that the involuntary orientation of attention to unexpected sounds is followed by their semantic processing. However, past demonstrations relied on tasks in which the meaning of the deviant sounds overlapped with features of the primary task. Here we examine whether such processing is observed when no such overlap is present but sounds carry some relevance to the participants’ biological need to eat when hungry. We report the results of an experiment in which hungry and satiated participants partook in a cross-modal oddball task in which they categorized visual digits (odd/even) while ignoring task-irrelevant sounds. On most trials the irrelevant sound was a sinewave tone (standard sound). On the remaining trials, deviant sounds consisted of spoken words related to food (food deviants) or control words (control deviants). Questionnaire data confirmed state (but not trait) differences between the two groups with respect to food craving, as well as a greater desire to eat the food corresponding to the food-related words in the hungry relative to the satiated participants. The results of the oddball task revealed that food deviants produced greater distraction (longer response times) than control deviants in hungry participants while the reverse effect was observed in satiated participants. This effect was observed in the first block of trials but disappeared thereafter, reflecting semantic saturation. Our results suggest that (1) the semantic content of deviant sounds is involuntarily processed even when sharing no feature with the primary task; and that (2) distraction by deviant sounds can be modulated by the participants’ biological needs. PMID:29300763

  2. Structural Acoustic Characteristics of Aircraft and Active Control of Interior Noise

    NASA Technical Reports Server (NTRS)

    Fuller, C. R.

    1998-01-01

    The reduction of aircraft cabin sound levels to acceptable values still remains a topic of much research. The use of conventional passive approaches has been extensively studied and implemented. However, the performance limits of these techniques have been reached. In this project, new techniques for understanding the structural acoustic behavior of aircraft fuselages, and the use of this knowledge in developing advanced new control approaches, are investigated. A central feature of the project is the Aircraft Fuselage Test Facility at Virginia Tech, which is based around a full-scale Cessna Citation III fuselage. The work is divided into two main parts: the first part investigates the use of an inverse technique for identifying dominant fuselage vibrations, and the second part studies the development and implementation of active and active-passive techniques for controlling aircraft interior noise.

  3. Toward Active Control of Noise from Hot Supersonic Jets

    DTIC Science & Technology

    2012-11-15

    Instruments PXIe (PCI extensions for Instrumentation-express) system. The PXIe system has four PXIe-4331 cards (8 channels, 24 bits of resolution…). [Equation (7), apparently a correlation of mean-velocity gradients of the form (∂ūi/∂xj)(∂ūj/∂xi), is garbled in the extracted text.] Our intention is to use (7) as an indicator of sound production in high speed…

  4. A Similarity Analysis of Audio Signal to Develop a Human Activity Recognition Using Similarity Networks.

    PubMed

    García-Hernández, Alejandra; Galván-Tejada, Carlos E; Galván-Tejada, Jorge I; Celaya-Padilla, José M; Gamboa-Rosales, Hamurabi; Velasco-Elizondo, Perla; Cárdenas-Vargas, Rogelio

    2017-11-21

    Human Activity Recognition (HAR) is one of the main subjects of study in the areas of computer vision and machine learning due to the great benefits that can be achieved. Examples of the study areas are: health prevention, security and surveillance, automotive research, and many others. The proposed approaches are carried out using machine learning techniques and present good results. However, it is difficult to observe how the descriptors of human activities are grouped. A better understanding of how these descriptors group is therefore important for improving the recognition of human activities. This paper proposes a novel approach to HAR based on acoustic data and similarity networks. In this approach, we were able to characterize the sound of the activities and to identify activities by looking for similarity in the sound pattern. We evaluated the similarity of the sounds considering mainly two features: the sound location and the materials that were used. The results show that the materials are a better reference for classifying the human activities than the location.
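
    The similarity-network idea can be sketched with cosine similarity over acoustic descriptors. The feature vectors and the threshold below are invented for illustration; they are not the paper's actual descriptors or graph construction.

```python
import numpy as np

def similarity_network(features, threshold=0.9):
    """Build an undirected similarity network over activity sound samples.
    'features' is an (n_samples, n_features) array of acoustic descriptors;
    edges connect samples whose cosine similarity exceeds 'threshold'."""
    X = np.asarray(features, dtype=float)
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    unit = X / np.where(norms == 0, 1.0, norms)
    sim = unit @ unit.T                              # pairwise cosine similarity
    adjacency = (sim > threshold) & ~np.eye(len(X), dtype=bool)
    return adjacency

# hypothetical descriptors for four recordings: two of one activity,
# two of another, differing in their dominant feature components
feats = np.array([[1.0, 0.1, 0.0],
                  [0.9, 0.2, 0.0],
                  [0.0, 0.1, 1.0],
                  [0.1, 0.0, 0.9]])
adj = similarity_network(feats, threshold=0.9)
print(adj.astype(int))  # edges link recordings of the same activity
```

    Groups of activities then appear as connected components or communities in the resulting graph.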

  5. A Similarity Analysis of Audio Signal to Develop a Human Activity Recognition Using Similarity Networks

    PubMed Central

    García-Hernández, Alejandra; Galván-Tejada, Jorge I.; Celaya-Padilla, José M.; Velasco-Elizondo, Perla; Cárdenas-Vargas, Rogelio

    2017-01-01

    Human Activity Recognition (HAR) is one of the main subjects of study in the areas of computer vision and machine learning due to the great benefits that can be achieved. Examples of the study areas are: health prevention, security and surveillance, automotive research, and many others. The proposed approaches are carried out using machine learning techniques and present good results. However, it is difficult to observe how the descriptors of human activities are grouped. A better understanding of how these descriptors group is therefore important for improving the recognition of human activities. This paper proposes a novel approach to HAR based on acoustic data and similarity networks. In this approach, we were able to characterize the sound of the activities and to identify activities by looking for similarity in the sound pattern. We evaluated the similarity of the sounds considering mainly two features: the sound location and the materials that were used. The results show that the materials are a better reference for classifying the human activities than the location. PMID:29160799

  6. Sound synthesis and evaluation of interactive footsteps and environmental sounds rendering for virtual reality applications.

    PubMed

    Nordahl, Rolf; Turchet, Luca; Serafin, Stefania

    2011-09-01

    We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based on physical models. Two experiments were conducted. In the first experiment, the ability of subjects to recognize the surface they were exposed to was assessed. In the second experiment, the sound synthesis engine was enhanced with environmental sounds. Results show that, in some conditions, adding a soundscape significantly improves the recognition of the simulated environment.
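
    A crude version of the envelope-following step (microphone signal to a ground-reaction-force-like control value) might look like the sketch below. The moving-RMS estimator and the synthetic footstep signal are assumptions for illustration; the paper's actual GRF estimation method is not specified here.

```python
import numpy as np

def grf_envelope(mic_signal, sr, win_ms=10.0):
    """Crude GRF proxy: a short-window moving-RMS envelope of the
    footstep microphone signal."""
    win = max(1, int(sr * win_ms / 1000.0))
    squared = np.asarray(mic_signal, dtype=float) ** 2
    padded = np.pad(squared, (0, win - 1))           # keep output length = input length
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(padded, kernel, mode="valid"))

sr = 8000
t = np.arange(sr // 4) / sr
footstep = np.exp(-30 * t) * np.sin(2 * np.pi * 90 * t)   # decaying "thump"
env = grf_envelope(footstep, sr)
# 'env' could now drive e.g. the impact-force input of a physical model
print(env.shape)
```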

  7. Impaired Facilitatory Mechanisms of Auditory Attention After Damage of the Lateral Prefrontal Cortex.

    PubMed

    Bidet-Caulet, Aurélie; Buchanan, Kelly G; Viswanath, Humsini; Black, Jessica; Scabini, Donatella; Bonnet-Brilhault, Frédérique; Knight, Robert T

    2015-11-01

    There is growing evidence that auditory selective attention operates via distinct facilitatory and inhibitory mechanisms enabling selective enhancement and suppression of sound processing, respectively. The lateral prefrontal cortex (LPFC) plays a crucial role in the top-down control of selective attention. However, whether the LPFC controls facilitatory, inhibitory, or both attentional mechanisms is unclear. Facilitatory and inhibitory mechanisms were assessed, in patients with LPFC damage, by comparing event-related potentials (ERPs) to attended and ignored sounds with ERPs to these same sounds when attention was equally distributed to all sounds. In control subjects, we observed 2 late frontally distributed ERP components: a transient facilitatory component occurring from 150 to 250 ms after sound onset; and an inhibitory component onsetting at 250 ms. Only the facilitatory component was affected in patients with LPFC damage: this component was absent when attending to sounds delivered in the ear contralateral to the lesion, with the most prominent decreases observed over the damaged brain regions. These findings have 2 important implications: (i) they provide evidence for functionally distinct facilitatory and inhibitory mechanisms supporting late auditory selective attention; (ii) they show that the LPFC is involved in the control of the facilitatory mechanisms of auditory attention. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  8. Decentralized control of sound radiation using iterative loop recovery.

    PubMed

    Schiller, Noah H; Cabell, Randolph H; Fuller, Chris R

    2010-10-01

    A decentralized model-based control strategy is designed to reduce low-frequency sound radiation from periodically stiffened panels. While decentralized control systems tend to be scalable, performance can be limited due to modeling error introduced by the unmodeled interaction between neighboring control units. Since bounds on modeling error are not known in advance, it is difficult to ensure the decentralized control system will be robust without making the controller overly conservative. Therefore an iterative approach is suggested, which utilizes frequency-shaped loop recovery. The approach accounts for modeling error introduced by neighboring control loops, requires no communication between subsystems, and is relatively simple. The control strategy is evaluated numerically using a model of a stiffened aluminum panel that is representative of the sidewall of an aircraft. Simulations demonstrate that the iterative approach can achieve significant reductions in radiated sound power from the stiffened panel without destabilizing neighboring control units.

  9. Decentralized Control of Sound Radiation Using Iterative Loop Recovery

    NASA Technical Reports Server (NTRS)

    Schiller, Noah H.; Cabell, Randolph H.; Fuller, Chris R.

    2009-01-01

    A decentralized model-based control strategy is designed to reduce low-frequency sound radiation from periodically stiffened panels. While decentralized control systems tend to be scalable, performance can be limited due to modeling error introduced by the unmodeled interaction between neighboring control units. Since bounds on modeling error are not known in advance, it is difficult to ensure the decentralized control system will be robust without making the controller overly conservative. Therefore an iterative approach is suggested, which utilizes frequency-shaped loop recovery. The approach accounts for modeling error introduced by neighboring control loops, requires no communication between subsystems, and is relatively simple. The control strategy is evaluated numerically using a model of a stiffened aluminum panel that is representative of the sidewall of an aircraft. Simulations demonstrate that the iterative approach can achieve significant reductions in radiated sound power from the stiffened panel without destabilizing neighboring control units.

  10. Decentralized Control of Sound Radiation from an Aircraft-Style Panel Using Iterative Loop Recovery

    NASA Technical Reports Server (NTRS)

    Schiller, Noah H.; Cabell, Randolph H.; Fuller, Chris R.

    2008-01-01

    A decentralized LQG-based control strategy is designed to reduce low-frequency sound transmission through periodically stiffened panels. While modern control strategies have been used to reduce sound radiation from relatively simple structural acoustic systems, significant implementation issues have to be addressed before these control strategies can be extended to large systems such as the fuselage of an aircraft. For instance, centralized approaches typically require a high level of connectivity and are computationally intensive, while decentralized strategies face stability problems caused by the unmodeled interaction between neighboring control units. Since accurate uncertainty bounds are not known a priori, it is difficult to ensure the decentralized control system will be robust without making the controller overly conservative. Therefore an iterative approach is suggested, which utilizes frequency-shaped loop recovery. The approach accounts for modeling error introduced by neighboring control loops, requires no communication between subsystems, and is relatively simple. The control strategy is validated using real-time control experiments performed on a built-up aluminum test structure representative of the fuselage of an aircraft. Experiments demonstrate that the iterative approach is capable of achieving 12 dB peak reductions and a 3.6 dB integrated reduction in radiated sound power from the stiffened panel.
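
    The iteration structure described above can be illustrated with a toy static model in which each decentralized unit re-tunes its own gain against a model that includes the neighbour's current loop, with no online communication between units. This best-response sketch is an assumption-laden stand-in for the paper's frequency-shaped LQG/loop-recovery procedure; the coupling matrix, disturbance, and cost are invented, and only the "redesign each loop accounting for the others, then repeat" idea is shared.

```python
import numpy as np

# Static 2x2 coupling from actuator inputs to radiating outputs (hypothetical)
G = np.array([[1.0, 0.3],
              [0.4, 1.0]])
d = np.array([1.0, 0.8])          # primary disturbance at each output

def outputs(k):
    """Closed-loop outputs with decentralized gains u_i = -k_i * y_i:
    y = d + G u  and  u = -diag(k) y  give  y = (I + G diag(k))^-1 d."""
    return np.linalg.solve(np.eye(2) + G @ np.diag(k), d)

# Iterative redesign: each unit sweeps its own gain against a model that
# already contains the neighbour's current loop, then the process repeats.
k = np.zeros(2)
for _ in range(20):
    for i in range(2):
        candidates = np.linspace(0.0, 10.0, 201)
        costs = []
        for c in candidates:
            trial = k.copy()
            trial[i] = c
            y = outputs(trial)
            costs.append(y[i] ** 2 + 1e-3 * c ** 2)   # output power + effort
        k[i] = candidates[int(np.argmin(costs))]

print(np.round(k, 2), np.round(outputs(k), 3))
```

    In this toy setting the gains settle after a few sweeps and both outputs end up well below the uncontrolled disturbance, mirroring the stability-through-iteration idea rather than the actual controller synthesis.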

  11. Developmental changes in brain activation involved in the production of novel speech sounds in children.

    PubMed

    Hashizume, Hiroshi; Taki, Yasuyuki; Sassa, Yuko; Thyreau, Benjamin; Asano, Michiko; Asano, Kohei; Takeuchi, Hikaru; Nouchi, Rui; Kotozaki, Yuka; Jeong, Hyeonjeong; Sugiura, Motoaki; Kawashima, Ryuta

    2014-08-01

    Older children are more successful at producing unfamiliar, non-native speech sounds than younger children during the initial stages of learning. To reveal the neuronal underpinning of the age-related increase in the accuracy of non-native speech production, we examined the developmental changes in activation involved in the production of novel speech sounds using functional magnetic resonance imaging. Healthy right-handed children (aged 6-18 years) were scanned while performing an overt repetition task and a perceptual task involving aurally presented non-native and native syllables. Productions of non-native speech sounds were recorded and evaluated by native speakers. The mouth regions in the bilateral primary sensorimotor areas were activated more significantly during the repetition task relative to the perceptual task. The hemodynamic response in the left inferior frontal gyrus pars opercularis (IFG pOp) specific to non-native speech sound production (defined by prior hypothesis) increased with age. Additionally, the accuracy of non-native speech sound production increased with age. These results provide the first evidence of developmental changes in the neural processes underlying the production of novel speech sounds. Our data further suggest that the recruitment of the left IFG pOp during the production of novel speech sounds was possibly enhanced due to the maturation of the neuronal circuits needed for speech motor planning. This, in turn, would lead to improvement in the ability to immediately imitate non-native speech. Copyright © 2014 Wiley Periodicals, Inc.

  12. Effects of anthropogenic sound on digging behavior, metabolism, Ca2+/Mg2+ ATPase activity, and metabolism-related gene expression of the bivalve Sinonovacula constricta

    PubMed Central

    Peng, Chao; Zhao, Xinguo; Liu, Saixi; Shi, Wei; Han, Yu; Guo, Cheng; Jiang, Jingang; Wan, Haibo; Shen, Tiedong; Liu, Guangxu

    2016-01-01

    Anthropogenic sound has increased significantly in the past decade. However, only a few studies to date have investigated its effects on marine bivalves, with little known about the underlying physiological and molecular mechanisms. In the present study, the effects of different types, frequencies, and intensities of anthropogenic sounds on the digging behavior of razor clams (Sinonovacula constricta) were investigated. The results showed that variations in sound intensity induced deeper digging. Furthermore, anthropogenic sound exposure led to an alteration in the O:N ratios and the expression of ten metabolism-related genes from the glycolysis, fatty acid biosynthesis, tryptophan metabolism, and Tricarboxylic Acid Cycle (TCA cycle) pathways. Expression of all genes under investigation was induced upon exposure to anthropogenic sound at ~80 dB re 1 μPa and repressed at ~100 dB re 1 μPa sound. In addition, the activity of Ca2+/Mg2+-ATPase in the feet tissues, which is directly related to muscular contraction and subsequently to digging behavior, was also found to be affected by anthropogenic sound intensity. The findings suggest that sound may be perceived by bivalves as changes in the water particle motion and lead to the subsequent reactions detected in razor clams. PMID:27063002

  13. Effects of anthropogenic sound on digging behavior, metabolism, Ca(2+)/Mg(2+) ATPase activity, and metabolism-related gene expression of the bivalve Sinonovacula constricta.

    PubMed

    Peng, Chao; Zhao, Xinguo; Liu, Saixi; Shi, Wei; Han, Yu; Guo, Cheng; Jiang, Jingang; Wan, Haibo; Shen, Tiedong; Liu, Guangxu

    2016-04-11

    Anthropogenic sound has increased significantly in the past decade. However, only a few studies to date have investigated its effects on marine bivalves, with little known about the underlying physiological and molecular mechanisms. In the present study, the effects of different types, frequencies, and intensities of anthropogenic sounds on the digging behavior of razor clams (Sinonovacula constricta) were investigated. The results showed that variations in sound intensity induced deeper digging. Furthermore, anthropogenic sound exposure led to an alteration in the O:N ratios and the expression of ten metabolism-related genes from the glycolysis, fatty acid biosynthesis, tryptophan metabolism, and Tricarboxylic Acid Cycle (TCA cycle) pathways. Expression of all genes under investigation was induced upon exposure to anthropogenic sound at ~80 dB re 1 μPa and repressed at ~100 dB re 1 μPa sound. In addition, the activity of Ca(2+)/Mg(2+)-ATPase in the feet tissues, which is directly related to muscular contraction and subsequently to digging behavior, was also found to be affected by anthropogenic sound intensity. The findings suggest that sound may be perceived by bivalves as changes in the water particle motion and lead to the subsequent reactions detected in razor clams.

  14. Cortical and Sensory Causes of Individual Differences in Selective Attention Ability Among Listeners With Normal Hearing Thresholds

    PubMed Central

    2017-01-01

    Purpose This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings. Method The results from neuroscience and psychoacoustics are reviewed. Results In noisy settings, listeners focus their attention by engaging cortical brain networks to suppress unimportant sounds; they then can analyze and understand an important sound, such as speech, amidst competing sounds. Differences in the efficacy of top-down control of attention can affect communication abilities. In addition, subclinical deficits in sensory fidelity can disrupt the ability to perceptually segregate sound sources, interfering with selective attention, even in listeners with NHTs. Studies of variability in control of attention and in sensory coding fidelity may help to isolate and identify some of the causes of communication disorders in individuals presenting at the clinic with “normal hearing.” Conclusions How well an individual with NHTs can understand speech amidst competing sounds depends not only on the sound being audible but also on the integrity of cortical control networks and the fidelity of the representation of suprathreshold sound. Understanding the root cause of difficulties experienced by listeners with NHTs ultimately can lead to new, targeted interventions that address specific deficits affecting communication in noise. Presentation Video http://cred.pubs.asha.org/article.aspx?articleid=2601617 PMID:29049598

  15. Underwater Acoustic Source Localisation Among Blind and Sighted Scuba Divers

    PubMed Central

    Cambi, Jacopo; Livi, Ludovica; Livi, Walter

    2017-01-01

    Objectives Many blind individuals demonstrate enhanced auditory spatial discrimination or localisation of sound sources in comparison to sighted subjects. However, this hypothesis has not yet been confirmed with regard to underwater spatial localisation. This study therefore aimed to investigate underwater acoustic source localisation among blind and sighted scuba divers. Methods This study took place between February and June 2015 in Elba, Italy, and involved two experimental groups of divers with either acquired (n = 20) or congenital (n = 10) blindness and a control group of 30 sighted divers. Each subject took part in five attempts at an underwater acoustic source localisation task, in which the divers were requested to swim to the source of a sound originating from one of 24 potential locations. The control group had their sight obscured during the task. Results The congenitally blind divers demonstrated significantly better underwater sound localisation compared to the control group or those with acquired blindness (P = 0.0007). In addition, there was a significant correlation between years of blindness and underwater sound localisation (P < 0.0001). Conclusion Congenital blindness was found to positively affect the ability of a diver to recognise the source of a sound in an underwater environment. As the correct localisation of sounds underwater may help individuals to avoid imminent danger, divers should perform sound localisation tests during training sessions. PMID:28690888

  16. Potential uses of vacuum bubbles in noise and vibration control

    NASA Technical Reports Server (NTRS)

    Ver, Istvan L.

    1989-01-01

    Vacuum bubbles are new acoustic elements which are dynamically more compliant than the gas volume they replace, but which are statically robust. They are made of a thin metallic shell with vacuum in their cavity. Consequently, they pose no danger in terms of contamination or fire hazard. The potential of the vacuum bubble concept for noise and vibration control was assessed with special emphasis on spacecraft and aircraft applications. The following potential uses were identified: (1) as a cladding, to reduce sound radiation of vibrating surfaces and the sound excitation of structures; (2) as a screen, to reflect or absorb an incident sound wave; and (3) as a liner, to increase the low-frequency sound transmission loss of double walls and the low-frequency sound attenuation of muffler baffles. It was found that geometric and material parameters must be controlled to a very high accuracy to obtain optimal performance, and that performance is highly sensitive to variations in static pressure. Consequently, it was concluded that vacuum bubbles have more potential in spacecraft applications, where static pressure is closely controlled, than in aircraft applications, where large fluctuations in static pressure are common.

  17. Controlled sound field with a dual layer loudspeaker array

    NASA Astrophysics Data System (ADS)

    Shin, Mincheol; Fazi, Filippo M.; Nelson, Philip A.; Hirono, Fabio C.

    2014-08-01

    Controlled sound interference has been extensively investigated using a prototype dual-layer loudspeaker array comprising 16 loudspeakers. Results are presented for measures of array performance, such as input signal power, directivity of sound radiation, and accuracy of sound reproduction, resulting from the application of conventional control methods such as minimization of mean-squared pressure error, maximization of energy difference, and minimization of weighted pressure error and energy. Procedures for selecting the tuning parameters are also introduced. With these conventional approaches, which aim to produce acoustically bright and dark zones, all the control methods require a trade-off between radiation directivity and reproduction accuracy in the bright zone. An alternative solution is proposed that achieves better performance on the presented measures simultaneously by inserting a low-priority zone, termed the "gray" zone. This involves the weighted minimization of mean-squared errors in the bright and dark zones together with the gray zone, in which the minimization error is given less importance. The result is a directional bright zone in which the accuracy of sound reproduction is maintained with less input power. The results of simulations and experiments are shown to be in excellent agreement.
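
    The weighted multi-zone minimization described in this abstract can be sketched as a regularized least-squares problem. The formulation below is an illustrative reconstruction, not the authors' exact cost function: the matrix names, zone weights, and Tikhonov regularizer are assumptions.

    ```python
    import numpy as np

    # Sketch of weighted pressure matching over bright/dark/gray zones,
    # assuming per-zone transfer matrices G_z (mics x loudspeakers) and
    # target pressures p_z. Illustrative only; not the authors' method.

    def zone_weighted_filters(G_zones, p_zones, weights, reg=1e-3):
        """Loudspeaker weights q minimizing
        sum_z w_z * ||G_z q - p_z||^2 + reg * ||q||^2."""
        n_src = G_zones[0].shape[1]
        A = reg * np.eye(n_src, dtype=complex)
        b = np.zeros(n_src, dtype=complex)
        for G, p, w in zip(G_zones, p_zones, weights):
            A += w * G.conj().T @ G  # weighted normal equations
            b += w * G.conj().T @ p
        return np.linalg.solve(A, b)

    # Bright zone reproduces a target field, dark zone targets zero
    # pressure, and the "gray" zone enters with a deliberately small weight.
    G_bright = np.eye(2, dtype=complex)
    G_dark = np.array([[1.0, -1.0]], dtype=complex)
    G_gray = np.array([[0.5, 0.5]], dtype=complex)
    q = zone_weighted_filters(
        [G_bright, G_dark, G_gray],
        [np.array([1.0, 0.0]), np.zeros(1), np.zeros(1)],
        weights=[10.0, 10.0, 0.1],
        reg=1e-6,
    )
    ```

    Lowering a zone's weight relaxes the error requirement there, which is how the gray zone trades reproduction accuracy for reduced input power.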

  18. Acoustic metamaterials capable of both sound insulation and energy harvesting

    NASA Astrophysics Data System (ADS)

    Li, Junfei; Zhou, Xiaoming; Huang, Guoliang; Hu, Gengkai

    2016-04-01

    Membrane-type acoustic metamaterials are well known for low-frequency sound insulation. In this work, by introducing a flexible piezoelectric patch, we propose sound-insulation metamaterials with the ability of energy harvesting from sound waves. The dual functionality of the metamaterial device has been verified by experimental results, which show an over 20 dB sound transmission loss and a maximum energy conversion efficiency up to 15.3% simultaneously. This novel property makes the metamaterial device more suitable for noise control applications.

  19. 75 FR 33698 - Safety Zones; Annual Firework Displays Within the Captain of the Port, Puget Sound Area of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-15

    ...-AA00 Safety Zones; Annual Firework Displays Within the Captain of the Port, Puget Sound Area of... of the Port (COTP), Puget Sound Area of Responsibility (AOR). When these safety zones are activated... Captain of the Port, Puget Sound or Designated Representative. DATES: This rule is effective June 15, 2010...

  20. Low-frequency sound affects active micromechanics in the human inner ear

    PubMed Central

    Kugler, Kathrin; Wiegrebe, Lutz; Grothe, Benedikt; Kössl, Manfred; Gürkov, Robert; Krause, Eike; Drexl, Markus

    2014-01-01

    Noise-induced hearing loss is one of the most common auditory pathologies, resulting from overstimulation of the human cochlea, an exquisitely sensitive micromechanical device. At very low frequencies (less than 250 Hz), however, the sensitivity of human hearing, and therefore the perceived loudness, is poor. Perceived loudness is mediated by the inner hair cells of the cochlea, which are driven very inadequately at low frequencies. To assess the impact of low-frequency (LF) sound, we exploited a by-product of the active amplification of sound performed by outer hair cells (OHCs): spontaneous otoacoustic emissions. These are faint sounds produced by the inner ear that can be used to detect changes in cochlear physiology. We show that a short exposure to perceptually unobtrusive LF sounds significantly affects OHCs: a 90 s, 80 dB(A) LF sound induced slow, concordant and positively correlated frequency and level oscillations of spontaneous otoacoustic emissions that lasted for about 2 min after LF sound offset. LF sounds, contrary to their unobtrusive perception, strongly stimulate the human cochlea and affect amplification processes in the most sensitive and important frequency range of human hearing. PMID:26064536

  1. Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations

    PubMed Central

    Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia

    2016-01-01

    Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning (“opponent channel model”). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC. PMID:26545618
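
    The level invariance of opponent-channel coding can be illustrated with a toy model: two broadly tuned channels with opposite hemifield preferences share an overall gain (standing in for sound level), and their normalized difference cancels that gain. The sigmoid tuning curves and all names below are illustrative assumptions, not the authors' fMRI decoding pipeline.

    ```python
    import math

    # Toy opponent-channel model of azimuth coding. Illustrative only;
    # tuning shape, slope, and gain are assumptions for this sketch.

    def channel_pair(azimuth_deg, gain=1.0, slope=0.05):
        """Left- and right-hemifield channel responses to a source azimuth."""
        right = gain / (1.0 + math.exp(-slope * azimuth_deg))
        left = gain / (1.0 + math.exp(slope * azimuth_deg))
        return left, right

    def opponent_code(azimuth_deg, gain=1.0):
        """Normalized channel difference; the shared gain cancels out."""
        left, right = channel_pair(azimuth_deg, gain)
        return (right - left) / (right + left)
    ```

    Because the gain multiplies both channels, it divides out of the opponent code, so the decoded azimuth is unaffected by overall level, mirroring the level-robust decoding reported in the abstract.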

  2. Near-infrared-spectroscopic study on processing of sounds in the brain; a comparison between native and non-native speakers of Japanese.

    PubMed

    Tsunoda, Koichi; Sekimoto, Sotaro; Itoh, Kenji

    2016-06-01

    Conclusions The results suggested that mother-tongue Japanese (MJ) and non-mother-tongue Japanese (non-MJ) speakers differ in their pattern of brain dominance when listening to sounds from the natural world, in particular insect sounds. These results provide significant support for previous findings by Tsunoda (in 1970). Objectives This study concentrates on listeners who show clear evidence of a 'speech' brain vs a 'music' brain and determines which side is most active in the processing of insect sounds, using near-infrared spectroscopy. Methods The present study uses 2-channel near-infrared spectroscopy (NIRS) to provide a more direct measure of left- and right-brain activity while participants listen to each of three types of sounds: Japanese speech, Western violin music, or insect sounds. Data were obtained from 33 participants who showed laterality on opposite sides for Japanese speech and Western music. Results A majority (80%) of the MJ participants exhibited dominance for insect sounds on the side that was dominant for language, while a majority (62%) of the non-MJ participants exhibited dominance for insect sounds on the side that was dominant for music.

  3. Semi-Supervised Active Learning for Sound Classification in Hybrid Learning Environments.

    PubMed

    Han, Wenjing; Coutinho, Eduardo; Ruan, Huabin; Li, Haifeng; Schuller, Björn; Yu, Xiaojie; Zhu, Xuan

    2016-01-01

    Coping with the scarcity of labeled data is a common problem in sound classification tasks. Approaches for classifying sounds are commonly based on supervised learning algorithms, which require labeled data that is often scarce, leading to models that do not generalize well. In this paper, we make an efficient combination of confidence-based Active Learning and Self-Training with the aim of minimizing the need for human annotation in sound classification model training. The proposed method pre-processes the instances that are ready for labeling by calculating their classifier confidence scores; candidates with lower scores are delivered to human annotators, while those with high scores are automatically labeled by the machine. We demonstrate the feasibility and efficacy of this method in two practical scenarios: pool-based and stream-based processing. Extensive experimental results indicate that our approach requires significantly fewer labeled instances to reach the same performance in both scenarios compared to Passive Learning, Active Learning and Self-Training. A reduction of 52.2% in human-labeled instances is achieved in both the pool-based and stream-based scenarios on a sound classification task comprising 16,930 sound instances.

  4. Semi-Supervised Active Learning for Sound Classification in Hybrid Learning Environments

    PubMed Central

    Han, Wenjing; Coutinho, Eduardo; Li, Haifeng; Schuller, Björn; Yu, Xiaojie; Zhu, Xuan

    2016-01-01

    Coping with the scarcity of labeled data is a common problem in sound classification tasks. Approaches for classifying sounds are commonly based on supervised learning algorithms, which require labeled data that is often scarce, leading to models that do not generalize well. In this paper, we make an efficient combination of confidence-based Active Learning and Self-Training with the aim of minimizing the need for human annotation in sound classification model training. The proposed method pre-processes the instances that are ready for labeling by calculating their classifier confidence scores; candidates with lower scores are delivered to human annotators, while those with high scores are automatically labeled by the machine. We demonstrate the feasibility and efficacy of this method in two practical scenarios: pool-based and stream-based processing. Extensive experimental results indicate that our approach requires significantly fewer labeled instances to reach the same performance in both scenarios compared to Passive Learning, Active Learning and Self-Training. A reduction of 52.2% in human-labeled instances is achieved in both the pool-based and stream-based scenarios on a sound classification task comprising 16,930 sound instances. PMID:27627768
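
    The confidence-based routing step described above can be sketched as follows; the function name and threshold value are illustrative assumptions, not taken from the paper.

    ```python
    # Sketch of the hybrid routing step: high-confidence predictions are
    # self-labeled by the machine, low-confidence ones are queued for
    # human annotation. Names and the 0.9 threshold are illustrative.

    def route_batch(confidences, threshold=0.9):
        """Split instance indices into machine- and human-labeled pools."""
        auto_label = [i for i, c in enumerate(confidences) if c >= threshold]
        ask_human = [i for i, c in enumerate(confidences) if c < threshold]
        return auto_label, ask_human

    # Classifier confidences for five unlabeled sound clips:
    confs = [0.97, 0.42, 0.88, 0.95, 0.61]
    auto, human = route_batch(confs)
    # auto -> [0, 3] are self-trained; human -> [1, 2, 4] go to annotators
    ```

    In the pool-based scenario this routing runs over the whole unlabeled pool each round; in the stream-based scenario it is applied to instances as they arrive.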

  5. L-type calcium channels refine the neural population code of sound level

    PubMed Central

    Grimsley, Calum Alex; Green, David Brian

    2016-01-01

    The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1–1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. PMID:27605536

  6. Spider web-structured labyrinthine acoustic metamaterials for low-frequency sound control

    NASA Astrophysics Data System (ADS)

    Krushynska, A. O.; Bosia, F.; Miniaci, M.; Pugno, N. M.

    2017-10-01

    Attenuating low-frequency sound remains a challenge, despite many advances in this field. Recently developed acoustic metamaterials are characterized by unusual wave manipulation abilities that make them ideal candidates for efficient subwavelength sound control. In particular, labyrinthine acoustic metamaterials exhibit extremely high wave reflectivity, conical dispersion, and multiple artificial resonant modes originating from their specifically designed topological architectures. These features enable broadband sound attenuation, negative refraction, acoustic cloaking and other peculiar effects. However, hybrid and/or tunable metamaterial performance implying enhanced wave reflection and the simultaneous presence of conical dispersion at desired frequencies has not been reported so far. In this paper, we propose a new type of labyrinthine acoustic metamaterials (LAMMs) with hybrid dispersion characteristics by exploiting spider web-structured configurations. The developed design approach consists of adding a square surrounding frame to sectorial circular-shaped labyrinthine channels described in previous publications (e.g. (11)). Despite its simplicity, this approach provides tunability in the metamaterial functionality, such as the activation/elimination of subwavelength band gaps and negative group-velocity modes by increasing/decreasing the edge cavity dimensions. Since these cavities can be treated as extensions of variable-width internal channels, it becomes possible to exploit geometrical features, such as channel width, to shift the band gap position and size to desired frequencies. Time-transient simulations demonstrate the effectiveness of the proposed metastructures for wave manipulation in terms of transmission or reflection coefficients, amplitude attenuation and time delay at subwavelength frequencies. The obtained results can be important for practical applications of LAMMs such as lightweight acoustic barriers with enhanced broadband wave-reflecting performance.

  7. ERP Correlates of Pitch Error Detection in Complex Tone and Voice Auditory Feedback with Missing Fundamental

    PubMed Central

    Behroozmand, Roozbeh; Korzyukov, Oleg; Larson, Charles R.

    2012-01-01

    Previous studies have shown that the pitch of a sound is perceived in the absence of its fundamental frequency (F0), suggesting that a distinct mechanism may resolve pitch based on a pattern that exists between harmonic frequencies. The present study investigated whether such a mechanism is active during voice pitch control. ERPs were recorded in response to +200 cents pitch shifts in the auditory feedback of self-vocalizations and complex tones with and without the F0. The absence of the fundamental induced no difference in ERP latencies. However, a right-hemisphere difference was found in the N1 amplitudes with larger responses to complex tones that included the fundamental compared to when it was missing. The P1 and N1 latencies were shorter in the left hemisphere, and the N1 and P2 amplitudes were larger bilaterally for pitch shifts in voice and complex tones compared with pure tones. These findings suggest hemispheric differences in neural encoding of pitch in sounds with missing fundamental. Data from the present study suggest that the right cortical auditory areas, thought to be specialized for spectral processing, may utilize different mechanisms to resolve pitch in sounds with missing fundamental. The left hemisphere seems to perform faster processing to resolve pitch based on the rate of temporal variations in complex sounds compared with pure tones. These effects indicate that the differential neural processing of pitch in the left and right hemispheres may enable the audio-vocal system to detect temporal and spectral variations in the auditory feedback for vocal pitch control. PMID:22386045

  8. Ventilation noise and its effects on annoyance and performance

    NASA Astrophysics Data System (ADS)

    Landstrom, Ulf

    2004-05-01

    In almost every room environment, ventilation acts as a more or less prominent part of the noise exposure. Its contribution to the overall sound environment is a question not only of the way in which the ventilation system itself functions, but also of the prominence of other contemporary sound sources such as speech, equipment, machines, and external noises. Hazardous effects due to ventilation noise are most prominent in offices, hospitals, control rooms, classrooms, conference rooms, and other types of silent areas. The effects evoked by ventilation noise have also been found to be related to the type of activity being conducted. Annoyance and performance thus seemed to be linked not only to the physical character of the exposure, i.e., noise level, frequency characteristics, and length of exposure, but also to the mental and manual activity, complexity, and monotony of the work. The effects can be described in terms of annoyance, discomfort, and fatigue, with consequences for performance and increased mental load. The silent areas where ventilation noise may be most frequently experienced are often synonymous with the areas and activities most sensitive to the exposure.

  9. Development of an Experimental Rig for Investigation of Higher Order Modes in Ducts

    NASA Technical Reports Server (NTRS)

    Gerhold, Carl H.; Cabell, Randolph H.; Brown, Martha C.

    2006-01-01

    Continued progress to reduce fan noise emission from high bypass ratio engine ducts in aircraft increasingly relies on accurate description of the sound propagation in the duct. A project has been undertaken at NASA Langley Research Center to investigate the propagation of higher order modes in ducts with flow. This is a two-pronged approach, including development of analytic models (the subject of a separate paper) and installation of a laboratory-quality test rig. The purposes of the rig are to validate the analytical models and to evaluate novel duct acoustic liner concepts, both passive and active. The dimensions of the experimental rig test section scale to between 25% and 50% of the aft bypass ducts of most modern engines. The duct is of rectangular cross section so as to provide flexibility to design and fabricate test duct liner samples. The test section can accommodate flow paths that are straight through or offset from inlet to discharge, the latter design allowing investigation of the effect of curvature on sound propagation and duct liner performance. The maximum air flow rate through the duct is Mach 0.3. Sound in the duct is generated by an array of 16 high-intensity acoustic drivers. The signals to the loudspeaker array are generated by a multi-input/multi-output feedforward control system that has been developed for this project. The sound is sampled by arrays of flush-mounted microphones and a modal decomposition is performed at the frequency of sound generation. The data acquisition system consists of two arrays of flush-mounted microphones, one upstream of the test section and one downstream. The data are used to determine parameters such as the overall insertion loss of the test section treatment as well as the effect of the treatment on a modal basis, such as mode scattering. The methodology used for modal decomposition is described, as is the mode-generation control system. Data are presented which demonstrate the performance of the controller in generating the desired mode while suppressing all other cut-on modes in the duct.

  10. DETECTION AND IDENTIFICATION OF SPEECH SOUNDS USING CORTICAL ACTIVITY PATTERNS

    PubMed Central

    Centanni, T.M.; Sloan, A.M.; Reed, A.C.; Engineer, C.T.; Rennaker, R.; Kilgard, M.P.

    2014-01-01

    We have developed a classifier capable of locating and identifying speech sounds using activity from rat auditory cortex with an accuracy equivalent to behavioral performance without the need to specify the onset time of the speech sounds. This classifier can identify speech sounds from a large speech set within 40 ms of stimulus presentation. To compare the temporal limits of the classifier to behavior, we developed a novel task that requires rats to identify individual consonant sounds from a stream of distracter consonants. The classifier successfully predicted the ability of rats to accurately identify speech sounds for syllable presentation rates up to 10 syllables per second (up to 17.9 ± 1.5 bits/sec), which is comparable to human performance. Our results demonstrate that the spatiotemporal patterns generated in primary auditory cortex can be used to quickly and accurately identify consonant sounds from a continuous speech stream without prior knowledge of the stimulus onset times. Improved understanding of the neural mechanisms that support robust speech processing in difficult listening conditions could improve the identification and treatment of a variety of speech processing disorders. PMID:24286757

  11. Balloons and bavoons versus spikes and shikes: ERPs reveal shared neural processes for shape-sound-meaning congruence in words, and shape-sound congruence in pseudowords.

    PubMed

    Sučević, Jelena; Savić, Andrej M; Popović, Mirjana B; Styles, Suzy J; Ković, Vanja

    2015-01-01

    There is something about the sound of a pseudoword like takete that goes better with a spiky, than a curvy shape (Köhler, 1929:1947). Yet despite decades of research into sound symbolism, the role of this effect on real words in the lexicons of natural languages remains controversial. We report one behavioural and one ERP study investigating whether sound symbolism is active during normal language processing for real words in a speaker's native language, in the same way as for novel word forms. The results indicate that sound-symbolic congruence has a number of influences on natural language processing: Written forms presented in a congruent visual context generate more errors during lexical access, as well as a chain of differences in the ERP. These effects have a very early onset (40-80 ms, 100-160 ms, 280-320 ms) and are later overshadowed by familiar types of semantic processing, indicating that sound symbolism represents an early sensory-co-activation effect. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Data Assimilation Experiments using Quality Controlled AIRS Version 5 Temperature Soundings

    NASA Technical Reports Server (NTRS)

    Susskind, Joel

    2008-01-01

    The AIRS Science Team Version 5 retrieval algorithm has been finalized and is now operational at the Goddard DAAC in the processing (and reprocessing) of all AIRS data. Version 5 contains accurate case-by-case error estimates for most derived products, which are also used for quality control. We have conducted forecast impact experiments assimilating AIRS quality-controlled temperature profiles using the NASA GEOS-5 data assimilation system, consisting of the NCEP GSI analysis coupled with the NASA FVGCM. Assimilation of quality-controlled temperature profiles resulted in significantly improved forecast skill in both the Northern Hemisphere and Southern Hemisphere extra-tropics, compared to that obtained when all data used operationally by NCEP, except AIRS data, are assimilated. Experiments using different quality control thresholds for assimilation of AIRS temperature retrievals showed that a medium threshold performed better than a tighter threshold, which provided better overall sounding accuracy, or a looser threshold, which provided better spatial coverage of accepted soundings. We are conducting more experiments to further optimize this balance of spatial coverage and sounding accuracy from the data assimilation perspective. In all cases, temperature soundings were assimilated well below cloud level in partially cloudy cases. The positive impact of assimilating AIRS-derived atmospheric temperatures all but vanished when only AIRS stratospheric temperatures were assimilated. Forecast skill resulting from assimilation of AIRS radiances uncontaminated by clouds, instead of AIRS temperature soundings, was only slightly better than that resulting from assimilation of only stratospheric AIRS temperatures. This reduction in forecast skill is most likely the result of a significant loss of tropospheric information when only AIRS radiances unaffected by clouds are used in the data assimilation process.

  13. Amplitude and frequency modulation control of sound production in a mechanical model of the avian syrinx.

    PubMed

    Elemans, Coen P H; Muller, Mees; Larsen, Ole Naesbye; van Leeuwen, Johan L

    2009-04-01

    Birdsong has developed into one of the important models for motor control of learned behaviour and shows many parallels with speech acquisition in humans. However, there are several experimental limitations to studying the vocal organ - the syrinx - in vivo. The multidisciplinary approach of combining experimental data and mathematical modelling has greatly improved the understanding of neural control and peripheral motor dynamics of sound generation in birds. Here, we present a simple mechanical model of the syrinx that facilitates detailed study of vibrations and sound production. Our model resembles the 'starling resistor', a collapsible tube model, and consists of a tube with a single membrane in its casing, suspended in an external pressure chamber and driven by various pressure patterns. With this design, we can separately control 'bronchial' pressure and tension in the oscillating membrane and generate a wide variety of 'syllables' with simple sweeps of the control parameters. We show that the membrane exhibits high frequency, self-sustained oscillations in the audio range (>600 Hz fundamental frequency) using laser Doppler vibrometry, and systematically explore the conditions for sound production of the model in its control space. The fundamental frequency of the sound increases with tension in three membranes with different stiffness and mass. The lower-bound fundamental frequency increases with membrane mass. The membrane vibrations are strongly coupled to the resonance properties of the distal tube, most likely because of its reflective properties to sound waves. Our model is a gross simplification of the complex morphology found in birds, and more closely resembles mathematical models of the syrinx. Our results confirm several assumptions underlying existing mathematical models in a complex geometry.

  14. Seeing sounds and hearing colors: an event-related potential study of auditory-visual synesthesia.

    PubMed

    Goller, Aviva I; Otten, Leun J; Ward, Jamie

    2009-10-01

    In auditory-visual synesthesia, sounds automatically elicit conscious and reliable visual experiences. It is presently unknown whether this reflects early or late processes in the brain. It is also unknown whether adult audiovisual synesthesia resembles auditory-induced visual illusions that can sometimes occur in the general population or whether it resembles the electrophysiological deflection over occipital sites that has been noted in infancy and has been likened to synesthesia. Electrical brain activity was recorded from adult synesthetes and control participants who were played brief tones and required to monitor for an infrequent auditory target. The synesthetes were instructed to attend either to the auditory or to the visual (i.e., synesthetic) dimension of the tone, whereas the controls attended to the auditory dimension alone. There were clear differences between synesthetes and controls that emerged early (100 msec after tone onset). These differences tended to lie in deflections of the auditory-evoked potential (e.g., the auditory N1, P2, and N2) rather than the presence of an additional posterior deflection. The differences occurred irrespective of what the synesthetes attended to (although attention had a late effect). The results suggest that differences between synesthetes and others occur early in time, and that synesthesia is qualitatively different from similar effects found in infants and certain auditory-induced visual illusions in adults. In addition, we report two novel cases of synesthesia in which colors elicit sounds, and vice versa.

  15. Residual Inhibition Functions Overlap Tinnitus Spectra and the Region of Auditory Threshold Shift

    PubMed Central

    Moffat, Graeme; Baumann, Michael; Ward, Lawrence M.

    2008-01-01

    Animals exposed to noise trauma show augmented synchronous neural activity in tonotopically reorganized primary auditory cortex consequent on hearing loss. Diminished intracortical inhibition in the reorganized region appears to enable synchronous network activity that develops when deafferented neurons begin to respond to input via their lateral connections. In humans with tinnitus accompanied by hearing loss, this process may generate a phantom sound that is perceived in accordance with the location of the affected neurons in the cortical place map. The neural synchrony hypothesis predicts that tinnitus spectra, and heretofore unmeasured “residual inhibition functions” that relate residual tinnitus suppression to the center frequency of masking sounds, should cover the region of hearing loss in the audiogram. We confirmed these predictions in two independent cohorts totaling 90 tinnitus subjects, using computer-based tools designed to assess the psychoacoustic properties of tinnitus. Tinnitus spectra and residual inhibition functions for depth and duration increased with the amount of threshold shift over the region of hearing impairment. Residual inhibition depth was shallower when the masking sounds that were used to induce residual inhibition showed decreased correspondence with the frequency spectrum and bandwidth of the tinnitus. These findings suggest that tinnitus and its suppression in residual inhibition depend on processes that span the region of hearing impairment and not on mechanisms that enhance cortical representations for sound frequencies at the audiometric edge. Hearing thresholds measured in age-matched control subjects without tinnitus implicated hearing loss as a factor in tinnitus, although elevated thresholds alone were not sufficient to cause tinnitus. PMID:18712566

  16. 77 FR 53890 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-04

    ... currently approved collection; Title of Information Collection: The Fiscal Soundness Reporting Requirements... those programs maintain fiscally sound organizations. The revised fiscal soundness reporting form...); Frequency: Annually, Quarterly; Affected Public: Private Sector: Business or other for-profits and Not-for...

  17. Exploring Noise: Sound Pollution.

    ERIC Educational Resources Information Center

    Rillo, Thomas J.

    1979-01-01

    Part one of a three-part series about noise pollution and its effects on humans. This section presents the background information for teachers who are preparing a unit on sound. The next issues will offer learning activities for measuring the effects of sound and some references. (SA)

  18. Effects of Listening to Music versus Environmental Sounds in Passive and Active Situations on Levels of Pain and Fatigue in Fibromyalgia.

    PubMed

    Mercadíe, Lolita; Mick, Gérard; Guétin, Stéphane; Bigand, Emmanuel

    2015-10-01

    In fibromyalgia, pain symptoms such as hyperalgesia and allodynia are associated with fatigue. Mechanisms underlying such symptoms can be modulated by listening to pleasant music. We expected that listening to music, because of its emotional impact, would have a greater modulating effect on the perception of pain and fatigue in patients with fibromyalgia than listening to nonmusical sounds. To investigate this hypothesis, we carried out a 4-week study in which patients with fibromyalgia listened to either preselected musical pieces or environmental sounds when they experienced pain in active (while carrying out a physical activity) or passive (at rest) situations. Concomitant changes in pain and fatigue levels were evaluated. When patients listened to music or environmental sounds at rest, pain and fatigue levels were significantly reduced after 20 minutes of listening, with no difference in effect magnitude between the two stimuli. This improvement persisted 10 minutes after the end of the listening session. In active situations, pain did not increase in the presence of either stimulus. Contrary to our expectations, music and environmental sounds produced a similar relieving effect on pain and fatigue, with no benefit gained by listening to pleasant music over environmental sounds. Copyright © 2015 American Society for Pain Management Nursing. Published by Elsevier Inc. All rights reserved.

  19. Tomographic Imaging of the Sun's Interior

    NASA Technical Reports Server (NTRS)

    Kosovichev, A. G.

    1996-01-01

    A new method is presented for determining the three-dimensional sound-speed structure and flow velocities in the solar convection zone by inversion of the acoustic travel-time data recently obtained by Duvall and coworkers. The initial inversion results reveal large-scale subsurface structures and flows related to the active regions, and are important for understanding the physics of solar activity and large-scale convection. The results provide evidence of a zonal structure below the surface in the low-latitude area of magnetic activity. Strong converging downflows, up to 1.2 km/s, and a substantial excess of the sound speed are found beneath growing active regions. In a decaying active region, there is evidence for lower-than-average sound speed and for upwelling of plasma.

  20. Application of subharmonics for active sound design of electric vehicles.

    PubMed

    Gwak, Doo Young; Yoon, Kiseop; Seong, Yeolwan; Lee, Soogab

    2014-12-01

    The powertrain of electric vehicles generates an unfamiliar acoustical environment for customers. This paper seeks an optimal interior sound for electric vehicles based on psychoacoustic knowledge and musical harmonic theory. The concept of inserting a virtual sound, which consists of the subharmonics of an existing high-frequency component, is suggested to improve sound quality. Subjective evaluation results indicate that the impression of the interior sound can be enhanced in this manner. Increased appeal is achieved through two designed stimuli, demonstrating the effectiveness of the proposed method.
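
    The record describes inserting subharmonics of an existing high-frequency component but gives no algorithm; the following is only a minimal sketch under that reading (the function names, the choice of orders, and the equal-amplitude mixing are our assumptions, not details from the paper):

```python
import numpy as np

def subharmonics(f0, orders=(2, 3, 4)):
    # Subharmonic frequencies are integer subdivisions f0/n of the component f0.
    return [f0 / n for n in orders]

def synth_virtual_sound(f0, orders=(2, 3, 4), sr=44100, dur=0.5):
    # Mix equal-amplitude sinusoids at the subharmonic frequencies of f0.
    t = np.arange(int(sr * dur)) / sr
    tones = [np.sin(2 * np.pi * f * t) for f in subharmonics(f0, orders)]
    return np.sum(tones, axis=0) / len(orders)

# A hypothetical 2400 Hz powertrain component yields subharmonics
# at 1200, 800 and 600 Hz.
print(subharmonics(2400.0))  # [1200.0, 800.0, 600.0]
```

    In a real application the virtual sound would be mixed with the recorded interior noise before subjective evaluation; the sketch stops at synthesis.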

  1. Directionally Antagonistic Graphene Oxide-Polyurethane Hybrid Aerogel as a Sound Absorber.

    PubMed

    Oh, Jung-Hwan; Kim, Jieun; Lee, Hyeongrae; Kang, Yeonjune; Oh, Il-Kwon

    2018-06-21

    Innovative sound absorbers, the design of which is based on carbon nanotubes and graphene derivatives, could be used to make more efficient sound-absorbing materials because of their excellent intrinsic mechanical and chemical properties. However, controlling the directional alignment of low-dimensional carbon nanomaterials, including their restacking and dispersion, has been a challenging problem when developing sound-absorbing foams. Herein, we present the directionally antagonistic graphene oxide-polyurethane hybrid aerogel we developed as a sound absorber, the physical properties of which differ according to the alignment of the microscopic graphene oxide sheets. This porous graphene sound absorber has a microporous hierarchical cellular structure with adjustable stiffness and improved sound absorption performance, thereby overcoming the restrictions of both geometry-oriented and function-oriented designs. Furthermore, by controlling the inner cell size and the aligned structure of the graphene oxide layers, we achieved a remarkable improvement of the sound absorption performance at low frequency. This improvement is attributed to multiple scattering of incident and reflected waves on the aligned porous surfaces, and to air-viscous resistance damping inside the interconnected structures between the urethane foam and the graphene oxide network. Two anisotropic sound absorbers based on the directionally antagonistic graphene oxide-polyurethane hybrid aerogels were fabricated. They show remarkable differences owing to the opposite alignment of graphene oxide layers inside the polyurethane foam and are expected to be appropriate for the engineering design of sound absorbers in consideration of the wave direction.

  2. Neurophysiological Studies of Auditory Verbal Hallucinations

    PubMed Central

    Ford, Judith M.; Dierks, Thomas; Fisher, Derek J.; Herrmann, Christoph S.; Hubl, Daniela; Kindler, Jochen; Koenig, Thomas; Mathalon, Daniel H.; Spencer, Kevin M.; Strik, Werner; van Lutterveld, Remko

    2012-01-01

    We discuss 3 neurophysiological approaches to study auditory verbal hallucinations (AVH). First, we describe “state” (or symptom capture) studies where periods with and without hallucinations are compared “within” a patient. These studies take 2 forms: passive studies, where brain activity during these states is compared, and probe studies, where brain responses to sounds during these states are compared. EEG (electroencephalography) and MEG (magnetoencephalography) data point to frontal and temporal lobe activity, the latter resulting in competition with external sounds for auditory resources. Second, we discuss “trait” studies where EEG and MEG responses to sounds are recorded from patients who hallucinate and those who do not. They suggest a tendency to hallucinate is associated with competition for auditory processing resources. Third, we discuss studies addressing possible mechanisms of AVH, including spontaneous neural activity, abnormal self-monitoring, and dysfunctional interregional communication. While most studies show differences in EEG and MEG responses between patients and controls, far fewer show symptom relationships. We conclude that efforts to understand the pathophysiology of AVH using EEG and MEG have been hindered by poor anatomical resolution of the EEG and MEG measures, poor assessment of symptoms, poor understanding of the phenomenon, poor models of the phenomenon, decoupling of the symptoms from the neurophysiology due to medications and comorbidities, and the possibility that the schizophrenia diagnosis breeds truer than the symptoms it comprises. These problems are common to studies of other psychiatric symptoms and should be considered when attempting to understand the basic neural mechanisms responsible for them. PMID:22368236

  3. Active vibration and noise control of vibro-acoustic system by using PID controller

    NASA Astrophysics Data System (ADS)

    Li, Yunlong; Wang, Xiaojun; Huang, Ren; Qiu, Zhiping

    2015-07-01

    Active control simulation of the acoustic and vibration response of a vibro-acoustic cavity of an airplane based on a PID controller is presented. A full numerical vibro-acoustic model is developed using an Eulerian model, which is a coupled model based on the finite element formulation. The reduced-order model, which is used to design the closed-loop control system, is obtained by a combination of modal expansion and variable substitution. Physical experiments are performed to validate and update the full-order and reduced-order numerical models. Optimization of the actuator placement is employed to obtain an effective closed-loop control system. For the controller design, an iterative method is used to determine the optimal parameters of the PID controller. The process is illustrated by the design of an active noise and vibration control system for a cavity structure. The numerical and experimental results show that a PID-based active control system can effectively suppress the noise inside the cavity using a sound pressure signal as the controller input. It is also possible to control the noise by suppressing the vibration of the structure using the structural displacement signal as the controller input. For an airplane cavity structure, considering the issue of space-saving, the latter is more suitable.
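
    The record names a PID controller but states no equations; a minimal discrete-time sketch is below (the gains, time step, and toy first-order plant are illustrative choices of ours, not values from the paper):

```python
class PID:
    # Discrete PID law: u = Kp*e + Ki*sum(e)*dt + Kd*(e - e_prev)/dt
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Regulate a toy first-order plant toward silence (setpoint 0), with the
# plant output standing in for the measured sound-pressure signal.
pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.01)
y = 1.0  # normalized initial disturbance level
for _ in range(1000):
    u = pid.update(0.0 - y)   # error = setpoint - measurement
    y += (-y + u) * 0.01      # simple first-order plant response
```

    In the paper's setting the "plant" is the coupled vibro-acoustic reduced-order model and the gains come from an iterative optimization rather than hand tuning; the loop structure, however, is the same.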

  4. Degraded neural and behavioral processing of speech sounds in a rat model of Rett syndrome

    PubMed Central

    Engineer, Crystal T.; Rahebi, Kimiya C.; Borland, Michael S.; Buell, Elizabeth P.; Centanni, Tracy M.; Fink, Melyssa K.; Im, Kwok W.; Wilson, Linda G.; Kilgard, Michael P.

    2015-01-01

    Individuals with Rett syndrome have greatly impaired speech and language abilities. Auditory brainstem responses to sounds are normal, but cortical responses are highly abnormal. In this study, we used the novel rat Mecp2 knockout model of Rett syndrome to document the neural and behavioral processing of speech sounds. We hypothesized that both speech discrimination ability and the neural response to speech sounds would be impaired in Mecp2 rats. We expected that extensive speech training would improve speech discrimination ability and the cortical response to speech sounds. Our results reveal that speech-evoked responses across all four auditory cortex fields of Mecp2 rats were hyperexcitable, slower, and less able to follow rapidly presented sounds. While Mecp2 rats could accurately perform consonant and vowel discrimination tasks in quiet, they were significantly impaired at speech sound discrimination in background noise. Extensive speech training improved discrimination ability. Training shifted cortical responses in both Mecp2 and control rats to favor the onset of speech sounds. While training increased the response to low-frequency sounds in control rats, the opposite occurred in Mecp2 rats. Although neural coding and plasticity are abnormal in the rat model of Rett syndrome, extensive therapy appears to be effective. These findings may help to explain some aspects of communication deficits in Rett syndrome and suggest that extensive rehabilitation therapy might prove beneficial. PMID:26321676

  5. Turbine Sound May Influence the Metamorphosis Behaviour of Estuarine Crab Megalopae

    PubMed Central

    Pine, Matthew K.; Jeffs, Andrew G.; Radford, Craig A.

    2012-01-01

    It is now widely accepted that a shift towards renewable energy production is needed in order to avoid further anthropogenically induced climate change. The ocean provides a largely untapped source of renewable energy. As a result, harvesting electrical power from the wind and tides has sparked immense government and commercial interest but with relatively little detailed understanding of the potential environmental impacts. This study investigated how the sound emitted from an underwater tidal turbine and an offshore wind turbine would influence the settlement and metamorphosis of the pelagic larvae of estuarine brachyuran crabs which are ubiquitous in most coastal habitats. In a laboratory experiment the median time to metamorphosis (TTM) for the megalopae of the crabs Austrohelice crassa and Hemigrapsus crenulatus was significantly increased by at least 18 h when exposed to either tidal turbine or sea-based wind turbine sound, compared to silent control treatments. Contrastingly, when either species were subjected to natural habitat sound, observed median TTM decreased by approximately 21–31% compared to silent control treatments, 38–47% compared to tidal turbine sound treatments, and 46–60% compared to wind turbine sound treatments. A lack of difference in median TTM in A. crassa between two different source levels of tidal turbine sound suggests the frequency composition of turbine sound is more relevant in explaining such responses rather than sound intensity. These results show that estuarine mudflat sound mediates natural metamorphosis behaviour in two common species of estuarine crabs, and that exposure to continuous turbine sound interferes with this natural process. These results raise concerns about the potential ecological impacts of sound generated by renewable energy generation systems placed in the nearshore environment. PMID:23240063

  6. Turbine sound may influence the metamorphosis behaviour of estuarine crab megalopae.

    PubMed

    Pine, Matthew K; Jeffs, Andrew G; Radford, Craig A

    2012-01-01

    It is now widely accepted that a shift towards renewable energy production is needed in order to avoid further anthropogenically induced climate change. The ocean provides a largely untapped source of renewable energy. As a result, harvesting electrical power from the wind and tides has sparked immense government and commercial interest but with relatively little detailed understanding of the potential environmental impacts. This study investigated how the sound emitted from an underwater tidal turbine and an offshore wind turbine would influence the settlement and metamorphosis of the pelagic larvae of estuarine brachyuran crabs which are ubiquitous in most coastal habitats. In a laboratory experiment the median time to metamorphosis (TTM) for the megalopae of the crabs Austrohelice crassa and Hemigrapsus crenulatus was significantly increased by at least 18 h when exposed to either tidal turbine or sea-based wind turbine sound, compared to silent control treatments. Contrastingly, when either species were subjected to natural habitat sound, observed median TTM decreased by approximately 21-31% compared to silent control treatments, 38-47% compared to tidal turbine sound treatments, and 46-60% compared to wind turbine sound treatments. A lack of difference in median TTM in A. crassa between two different source levels of tidal turbine sound suggests the frequency composition of turbine sound is more relevant in explaining such responses rather than sound intensity. These results show that estuarine mudflat sound mediates natural metamorphosis behaviour in two common species of estuarine crabs, and that exposure to continuous turbine sound interferes with this natural process. These results raise concerns about the potential ecological impacts of sound generated by renewable energy generation systems placed in the nearshore environment.

  7. Automatic Activation of Sounds by Letters Occurs Early in Development but Is Not Impaired in Children with Dyslexia

    ERIC Educational Resources Information Center

    Clayton, Francina J.; Hulme, Charles

    2018-01-01

    The automatic letter-sound integration hypothesis proposes that the decoding difficulties seen in dyslexia arise from a specific deficit in establishing automatic letter-sound associations. We report the findings of 2 studies in which we used a priming task to assess automatic letter-sound integration. In Study 1, children between 5 and 7 years of…

  8. The effects of arm movement on reaction time in patients with latent and active upper trapezius myofascial trigger point.

    PubMed

    Yassin, Marzieh; Talebian, Saeed; Ebrahimi Takamjani, Ismail; Maroufi, Nader; Ahmadi, Amir; Sarrafzadeh, Javad; Emrani, Anita

    2015-01-01

    Myofascial pain syndrome is a significant source of mechanical pain. The aim of this study was to investigate the effects of arm movement on reaction time in females with latent and active upper trapezius myofascial trigger points (MTPs). In this interventional study, a convenience sample of fifteen women with one active MTP, fifteen women with one latent MTP in the upper trapezius, and fifteen healthy women participated. Participants were asked to stand for 10 seconds in an erect standing position. Muscle reaction times were recorded for the anterior deltoid (AD), cervical paraspinal (CP), lumbar paraspinal (LP), both upper trapezius (UT), sternocleidomastoid (SCM), and the medial head of gastrocnemius (GcM). Participants were asked to flex their arms in response to a sound stimulus preceded by a warning sound stimulus. Data were analyzed using one-way ANOVA. There were significant differences in motor time and reaction time between the active and control groups (p < 0.05), except for GcM. There was no significant difference in motor time between the active and latent groups, except for the UT without MTP and the SCM (p < 0.05). There were also no significant differences in motor times between the latent MTP and control groups, and no significant difference in premotor times between the three groups. The present study shows that patients with active MTPs need more time to react to a stimulus, whereas patients with latent MTPs are similar to healthy subjects in reaction time. Patients with active MTPs adapted less readily to environmental stimuli and responded to a specific stimulus with variability in surface electromyography (SEMG) activity.

  9. Sensory-motor networks involved in speech production and motor control: an fMRI study.

    PubMed

    Behroozmand, Roozbeh; Shebek, Rachel; Hansen, Daniel R; Oya, Hiroyuki; Robin, Donald A; Howard, Matthew A; Greenlee, Jeremy D W

    2015-04-01

    Speaking is one of the most complex motor behaviors developed to facilitate human communication. The underlying neural mechanisms of speech involve sensory-motor interactions that incorporate feedback information for online monitoring and control of produced speech sounds. In the present study, we adopted an auditory feedback pitch perturbation paradigm and combined it with functional magnetic resonance imaging (fMRI) recordings in order to identify brain areas involved in speech production and motor control. Subjects underwent fMRI scanning while they produced a steady vowel sound /a/ (speaking) or listened to the playback of their own vowel production (playback). During each condition, the auditory feedback from vowel production was either normal (no perturbation) or randomly perturbed by an upward (+600 cents) pitch-shift stimulus. Analysis of BOLD responses during speaking (with and without shift) vs. rest revealed activation of a complex network including bilateral superior temporal gyrus (STG), Heschl's gyrus, precentral gyrus, supplementary motor area (SMA), Rolandic operculum, postcentral gyrus and right inferior frontal gyrus (IFG). Performance correlation analysis showed that the subjects produced compensatory vocal responses that significantly correlated with BOLD response increases in bilateral STG and left precentral gyrus. However, during playback, the activation network was limited to cortical auditory areas including bilateral STG and Heschl's gyrus. Moreover, the contrast between speaking vs. playback highlighted a distinct functional network that included bilateral precentral gyrus, SMA, IFG, postcentral gyrus and insula. These findings suggest that speech motor control involves feedback error detection in sensory (e.g. auditory) cortices that subsequently activate motor-related areas for the adjustment of speech parameters during speaking. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Effects of musical training on sound pattern processing in high-school students.

    PubMed

    Wang, Wenjung; Staffaroni, Laura; Reid, Errold; Steinschneider, Mitchell; Sussman, Elyse

    2009-05-01

    Recognizing melody in music involves detection of both the pitch intervals and the silence between sequentially presented sounds. This study tested the hypothesis that active musical training in adolescents facilitates the ability to passively detect sequential sound patterns compared to musically non-trained age-matched peers. Twenty adolescents, aged 15-18 years, were divided into groups according to their musical training and current experience. A fixed-order tone pattern was presented at various stimulus rates while the electroencephalogram was recorded. The influence of musical training on passive auditory processing of the sound patterns was assessed using components of event-related brain potentials (ERPs). The mismatch negativity (MMN) ERP component was elicited under different stimulus onset asynchrony (SOA) conditions in musicians than in non-musicians, indicating that musically active adolescents were able to detect sound patterns across longer time intervals than their age-matched peers. Musical training thus facilitates detection of auditory patterns, allowing sequential sound patterns to be recognized automatically over longer time periods than in non-musical counterparts.

  11. Concerns of the Institute of Transport Study and Research for reducing the sound level inside completely repaired buses. [noise and vibration control

    NASA Technical Reports Server (NTRS)

    Groza, A.; Calciu, J.; Nicola, I.; Ionasek, A.

    1974-01-01

    Sound level measurements on noise sources on buses are used to observe the effects of attenuating acoustic pressure levels inside the bus by sound-proofing during complete repair. A spectral analysis of the sound level as a function of motor speed, bus speed along the road, and the category of the road is reported.

  12. The sound of a mobile phone ringing affects the complex reaction time of its owner

    PubMed Central

    Zajdel, Justyna; Zwolińska, Anna; Śmigielski, Janusz; Beling, Piotr; Cegliński, Tomasz; Nowak, Dariusz

    2012-01-01

    Introduction Mobile phone conversation decreases the ability to concentrate and impairs the attention necessary to perform complex activities, such as driving a car. Does the ringing sound of a mobile phone affect the driver's ability to perform complex sensory-motor activities? We compared a subject's reaction time while performing a test either with a mobile phone ringing or without. Material and methods The examination was performed on a PC-based, self-constructed reaction time system, Reactor. The study group consisted of 42 healthy students. The protocol included an instruction session, a control session without the phone, and a proper session with the subject's mobile phone ringing. The terms of the study were standardised. Results There were significant differences (p < 0.001) in reaction time between the control (597 ms), mobile (633 ms) and instruction sessions (673 ms). The differences in the female subpopulation were also significant (p < 0.01). Women showed the longest reaction time in the instruction session (707 ms) and were significantly quicker in the mobile (657 ms, p < 0.01) and control sessions (612 ms, p < 0.001). In men, a significant difference was recorded only between the instruction (622 ms) and control sessions (573 ms, p < 0.01). The other differences were not significant (p > 0.08). Men proved to be significantly quicker than women in the instruction (p < 0.01) and mobile sessions (p < 0.05). The difference between genders in the control session was not significant (p > 0.05). Conclusions The results obtained proved that the ringing of a phone exerts a significant influence on complex reaction time and the quality of the performed task. PMID:23185201

  13. Performances of Student Activism: Sound, Silence, Gender, and Dis/ability

    ERIC Educational Resources Information Center

    Pasque, Penny A.; Vargas, Juanita Gamez

    2014-01-01

    This chapter explores the various performances of activism by students through sound, silence, gender, and dis/ability and how these performances connect to social change efforts around issues such as human trafficking, homeless children, hunger, and children with varying abilities.

  14. Job burnout is associated with dysfunctions in brain mechanisms of voluntary and involuntary attention.

    PubMed

    Sokka, Laura; Leinikka, Marianne; Korpela, Jussi; Henelius, Andreas; Ahonen, Lauri; Alain, Claude; Alho, Kimmo; Huotilainen, Minna

    2016-05-01

    Individuals with job burnout symptoms often report having cognitive difficulties, but related electrophysiological studies are scarce. We assessed the impact of burnout on performing a visual task with varying memory loads, and on involuntary attention switch to distractor sounds using scalp recordings of event-related potentials (ERPs). Task performance was comparable between burnout and control groups. The distractor sounds elicited a P3a response, which was reduced in the burnout group. This suggests burnout-related deficits in processing novel and potentially important events during task performance. In the burnout group, we also observed a decrease in working-memory related P3b responses over posterior scalp and increase over frontal areas. These results suggest that burnout is associated with deficits in cognitive control needed to monitor and update information in working memory. Successful task performance in burnout might require additional recruitment of anterior regions to compensate the decrement in posterior activity. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Common sole larvae survive high levels of pile-driving sound in controlled exposure experiments.

    PubMed

    Bolle, Loes J; de Jong, Christ A F; Bierman, Stijn M; van Beek, Pieter J G; van Keeken, Olvin A; Wessels, Peter W; van Damme, Cindy J G; Winter, Hendrik V; de Haan, Dick; Dekeling, René P A

    2012-01-01

    In view of the rapid extension of offshore wind farms, there is an urgent need to improve our knowledge on possible adverse effects of underwater sound generated by pile-driving. Mortality and injuries have been observed in fish exposed to loud impulse sounds, but knowledge on the sound levels at which (sub-)lethal effects occur is limited for juvenile and adult fish, and virtually non-existent for fish eggs and larvae. A device was developed in which fish larvae can be exposed to underwater sound. It consists of a rigid-walled cylindrical chamber driven by an electro-dynamical sound projector. Samples of up to 100 larvae can be exposed simultaneously to a homogeneously distributed sound pressure and particle velocity field. Recorded pile-driving sounds could be reproduced accurately in the frequency range between 50 and 1000 Hz, at zero to peak pressure levels up to 210 dB re 1µPa(2) (zero to peak pressures up to 32 kPa) and single pulse sound exposure levels up to 186 dB re 1µPa(2)s. The device was used to examine lethal effects of sound exposure in common sole (Solea solea) larvae. Different developmental stages were exposed to various levels and durations of pile-driving sound. The highest cumulative sound exposure level applied was 206 dB re 1µPa(2)s, which corresponds to 100 strikes at a distance of 100 m from a typical North Sea pile-driving site. The results showed no statistically significant differences in mortality between exposure and control groups at sound exposure levels which were well above the US interim criteria for non-auditory tissue damage in fish. Although our findings cannot be extrapolated to fish larvae in general, as interspecific differences in vulnerability to sound exposure may occur, they do indicate that previous assumptions and criteria may need to be revised.
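
    The cumulative exposure figure in this record follows the standard equal-energy summation: for n identical strikes, the cumulative SEL equals the single-strike SEL plus 10·log10(n) dB. A quick arithmetic check (the helper function below is ours, not from the paper):

```python
import math

def cumulative_sel(single_strike_sel_db, n_strikes):
    # Equal-energy rule: n identical pulses add 10*log10(n) dB
    # to the single-pulse sound exposure level.
    return single_strike_sel_db + 10 * math.log10(n_strikes)

# 100 strikes at a single-strike SEL of 186 dB re 1 µPa²·s:
print(cumulative_sel(186.0, 100))  # 186 + 20 = 206.0 dB re 1 µPa²·s
```

    This reproduces the paper's stated 206 dB re 1 µPa²·s for 100 strikes at a single-pulse SEL of 186 dB re 1 µPa²·s.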

  16. Common Sole Larvae Survive High Levels of Pile-Driving Sound in Controlled Exposure Experiments

    PubMed Central

    Bolle, Loes J.; de Jong, Christ A. F.; Bierman, Stijn M.; van Beek, Pieter J. G.; van Keeken, Olvin A.; Wessels, Peter W.; van Damme, Cindy J. G.; Winter, Hendrik V.; de Haan, Dick; Dekeling, René P. A.

    2012-01-01

    In view of the rapid extension of offshore wind farms, there is an urgent need to improve our knowledge on possible adverse effects of underwater sound generated by pile-driving. Mortality and injuries have been observed in fish exposed to loud impulse sounds, but knowledge on the sound levels at which (sub-)lethal effects occur is limited for juvenile and adult fish, and virtually non-existent for fish eggs and larvae. A device was developed in which fish larvae can be exposed to underwater sound. It consists of a rigid-walled cylindrical chamber driven by an electro-dynamical sound projector. Samples of up to 100 larvae can be exposed simultaneously to a homogeneously distributed sound pressure and particle velocity field. Recorded pile-driving sounds could be reproduced accurately in the frequency range between 50 and 1000 Hz, at zero to peak pressure levels up to 210 dB re 1µPa2 (zero to peak pressures up to 32 kPa) and single pulse sound exposure levels up to 186 dB re 1µPa2s. The device was used to examine lethal effects of sound exposure in common sole (Solea solea) larvae. Different developmental stages were exposed to various levels and durations of pile-driving sound. The highest cumulative sound exposure level applied was 206 dB re 1µPa2s, which corresponds to 100 strikes at a distance of 100 m from a typical North Sea pile-driving site. The results showed no statistically significant differences in mortality between exposure and control groups at sound exposure levels which were well above the US interim criteria for non-auditory tissue damage in fish. Although our findings cannot be extrapolated to fish larvae in general, as interspecific differences in vulnerability to sound exposure may occur, they do indicate that previous assumptions and criteria may need to be revised. PMID:22431996

  17. Effect of Blast Injury on Auditory Localization in Military Service Members.

    PubMed

    Kubli, Lina R; Brungart, Douglas; Northern, Jerry

    Among the many advantages of binaural hearing are the abilities to localize sounds in space and to attend to one sound in the presence of many sounds. Binaural hearing provides benefits for all listeners, but it may be especially critical for military personnel who must maintain situational awareness in complex tactical environments with multiple speech and noise sources. There is concern that Military Service Members who have been exposed to one or more high-intensity blasts during their tour of duty may have difficulty with binaural and spatial ability due to degradation in auditory and cognitive processes. The primary objective of this study was to assess the ability of blast-exposed Military Service Members to localize speech sounds in quiet and in multisource environments with one or two competing talkers. Participants were presented with one, two, or three topic-related (e.g., sports, food, travel) sentences under headphones and required to attend to, and then locate the source of, the sentence pertaining to a prespecified target topic within a virtual space. The listener's head position was monitored by a head-mounted tracking device that continuously updated the apparent spatial location of the target and competing speech sounds as the subject turned within the virtual space. Measurements of auditory localization ability included mean absolute error in locating the source of the target sentence, the time it took to locate the target sentence within 30 degrees, target/competitor confusion errors, response time, and cumulative head motion. Twenty-one blast-exposed Active-Duty or Veteran Military Service Members (blast-exposed group) and 33 non-blast-exposed Service Members and beneficiaries (control group) were evaluated. In general, the blast-exposed group performed as well as the control group if the task involved localizing the source of a single speech target. However, if the task involved two or three simultaneous talkers, localization ability was compromised for some participants in the blast-exposed group. Blast-exposed participants were less accurate in their localization responses and required more exploratory head movements to find the location of the target talker. Results suggest that blast-exposed participants have more difficulty than non-blast-exposed participants in localizing sounds in complex acoustic environments. This apparent deficit in spatial hearing ability highlights the need to develop new diagnostic tests using complex listening tasks that involve multiple sound sources that require speech segregation and comprehension.

  18. Brain activity associated with selective attention, divided attention and distraction.

    PubMed

    Salo, Emma; Salmela, Viljami; Salmi, Juha; Numminen, Jussi; Alho, Kimmo

    2017-06-01

    Top-down controlled selective or divided attention to sounds and visual objects, as well as bottom-up triggered attention to auditory and visual distractors, has been widely investigated. However, no study has systematically compared brain activations related to all these types of attention. To this end, we used functional magnetic resonance imaging (fMRI) to measure brain activity in participants performing a tone pitch or a foveal grating orientation discrimination task, or both, distracted by novel sounds not sharing frequencies with the tones or by extrafoveal visual textures. To force attention to focus on the tones or gratings, or both, task difficulty was kept constantly high with an adaptive staircase method. A whole brain analysis of variance (ANOVA) revealed fronto-parietal attention networks for both selective auditory and visual attention. A subsequent conjunction analysis indicated partial overlaps of these networks. However, like some previous studies, the present results also suggest segregation of prefrontal areas involved in the control of auditory and visual attention. The ANOVA also suggested, and another conjunction analysis confirmed, an additional activity enhancement in the left middle frontal gyrus related to divided attention, supporting the role of this area in top-down integration of dual task performance. As expected, the distractors disrupted task performance. However, contrary to our expectations, activations specifically related to the distractors were found only in the auditory and visual cortices. This suggests gating of the distractors from further processing, perhaps due to strictly focused attention in the current demanding discrimination tasks. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. On the definition and interpretation of voice selective activation in the temporal cortex

    PubMed Central

    Bethmann, Anja; Brechmann, André

    2014-01-01

    Regions along the superior temporal sulci and in the anterior temporal lobes have been found to be involved in voice processing. It has even been argued that parts of the temporal cortices serve as voice-selective areas. Yet, evidence for voice-selective activation in the strict sense is still missing. The current fMRI study aimed at assessing the degree of voice-specific processing in different parts of the superior and middle temporal cortices. To this end, voices of famous persons were contrasted with widely different categories, which were sounds of animals and musical instruments. The argumentation was that only brain regions with statistically proven absence of activation by the control stimuli may be considered as candidates for voice-selective areas. Neural activity was found to be stronger in response to human voices in all analyzed parts of the temporal lobes except for the middle and posterior STG. More importantly, the activation differences between voices and the other environmental sounds increased continuously from the mid-posterior STG to the anterior MTG. Here, only voices but not the control stimuli excited an increase of the BOLD response above a resting baseline level. The findings are discussed with reference to the function of the anterior temporal lobes in person recognition and the general question on how to define selectivity of brain regions for a specific class of stimuli or tasks. In addition, our results corroborate recent assumptions about the hierarchical organization of auditory processing building on a processing stream from the primary auditory cortices to anterior portions of the temporal lobes. PMID:25071527

  20. Comparison of speech intelligibility in cockpit noise using SPH-4 flight helmet with and without active noise reduction

    NASA Technical Reports Server (NTRS)

    Chan, Jeffrey W.; Simpson, Carol A.

    1990-01-01

    Active Noise Reduction (ANR) is a new technology which can reduce the level of aircraft cockpit noise that reaches the pilot's ear while simultaneously improving the signal-to-noise ratio for voice communications and other information-bearing sound signals in the cockpit. A miniature, ear-cup mounted ANR system was tested to determine whether speech intelligibility is better for helicopter pilots using ANR compared to a control condition with ANR turned off. Two speech-to-cockpit-noise signal-to-noise ratios (S/N), representative of actual cockpit conditions, were used. Speech intelligibility was significantly better with ANR than without it for both S/N conditions. Variability of speech intelligibility among pilots was also significantly less with ANR. When the stock helmet was used with ANR turned off, the average PB Word speech intelligibility score was below the Normally Acceptable level; with ANR on, it was above that level at both S/N levels.

  1. An analytical and experimental investigation of active structural acoustic control of noise transmission through double panel systems

    NASA Astrophysics Data System (ADS)

    Carneal, James P.; Fuller, Chris R.

    2004-05-01

    An analytical and experimental investigation of active control of sound transmission through double panel systems has been performed. The technique used was active structural acoustic control (ASAC) where the control inputs, in the form of piezoelectric actuators, were applied to the structure while the radiating pressure field was minimized. Results verify earlier experimental investigations and indicate the application of control inputs to the radiating panel of the double panel system resulted in greater transmission loss (TL) due to its direct effect on the nature of the structural-acoustic (or radiation) coupling between the radiating panel and the receiving acoustic space. Increased control performance was seen in a double panel system consisting of a stiffer radiating panel due to its lower modal density and also as a result of better impedance matching between the piezoelectric actuator and the radiating plate. In general the results validate the ASAC approach for double panel systems, demonstrating that it is possible to take advantage of double panel system passive behavior to enhance control performance, and provide design guidelines.

  2. It's More Fun than It Sounds--Enhancing Science Concepts through Hands-on Activities for Young Children

    ERIC Educational Resources Information Center

    Guha, Smita

    2012-01-01

    To teach young children, teachers choose topics in science that children are curious about. Children's inquisitive nature is reflected through the activities as they make repetitive sounds to find the cause-and-effect relationship. Teachers can make the best use of those invaluable moments by incorporating those activities into science lessons on…

  3. Effects of external and gap mean flows on sound transmission through a double-wall sandwich panel

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Sebastian, Alexis

    2015-05-01

    This paper studies analytically the effects of an external mean flow and an internal gap mean flow on sound transmission through a double-wall sandwich panel lined with poroelastic materials. Biot's theory is employed to describe wave propagation in poroelastic materials, and the transfer matrix method with three types of boundary conditions is applied to solve the system simultaneously. The random incidence transmission loss in a diffuse field is calculated numerically, and the limiting angle of incidence due to total internal reflection is discussed in detail. The numerical predictions suggest that the sound insulation performance of such a double-wall panel is enhanced considerably by both external and gap mean flows particularly in the high-frequency range. Similar effects on transmission loss are observed for the two mean flows. It is shown that the effect of the gap mean flow depends on flow velocity, flow direction, gap depth and fluid properties and also that the fluid properties within the gap appear to influence the transmission loss more effectively than the gap flow. Despite the implementation difficulty in practice, an internal gap flow provides more design space for tuning the sound insulation performance of a double-wall sandwich panel and has great potential for active/passive noise control.
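    The paper's full model couples Biot poroelasticity with a transfer-matrix solution, which is too involved for a short sketch. As a hedged baseline, the textbook normal-incidence mass law below shows the roughly 6 dB-per-octave growth of transmission loss that such panel models reduce to when stiffness, flow, and poroelastic effects are neglected; the surface-mass value is an assumption for illustration, not a number from the paper.

    ```python
    import numpy as np

    rho0, c = 1.21, 343.0   # density of air (kg/m^3) and speed of sound (m/s)
    m = 10.0                # panel surface mass in kg/m^2 (assumed, illustrative)
    f = np.array([125.0, 250.0, 500.0, 1000.0, 2000.0])  # octave-band centres, Hz

    # Normal-incidence mass law for a single limp panel:
    #   TL = 10 log10(1 + (pi f m / (rho0 c))^2)  ->  ~6 dB per octave
    tl = 10 * np.log10(1 + (np.pi * f * m / (rho0 * c)) ** 2)
    ```

    Doubling either the frequency or the surface mass adds about 6 dB, which is why the flow-induced enhancement reported in the abstract is most interesting where it beats this passive baseline.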

  4. Using Sound to Modify Fish Behavior at Power-Production and Water-Control Facilities: A Workshop December 12-13, 1995. Phase II: Final Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, Thomas J.; Popper, Arthur N.

    1997-06-01

    A workshop on "Use of Sound for Fish Protection at Power-Production and Water-Control Facilities" was held in Portland, Oregon on December 12-13, 1995. This workshop convened a 22-member panel of international experts from universities, industry, and government to share knowledge, questions, and ideas about using sound for fish guidance. Discussions covered a broad range of indigenous migratory and resident fish species and fish-protection issues in river systems, with particular focus on the Columbia River Basin. Because the use of sound behavioral barriers for fish is very much in its infancy, the workshop was designed to address the many questions being asked by fishery managers and researchers about the feasibility and potential benefits of using sound to augment physical barriers for fish protection in the Columbia River system.

  5. L-type calcium channels refine the neural population code of sound level.

    PubMed

    Grimsley, Calum Alex; Green, David Brian; Sivaramakrishnan, Shobhana

    2016-12-01

    The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1-1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. Copyright © 2016 the American Physiological Society.

  6. NESSTI: Norms for Environmental Sound Stimuli

    PubMed Central

    Hocking, Julia; Dzafic, Ilvana; Kazovsky, Maria; Copland, David A.

    2013-01-01

    In this paper we provide normative data along multiple cognitive and affective variable dimensions for a set of 110 sounds, including living and manmade stimuli. Environmental sounds are being increasingly utilized as stimuli in the cognitive, neuropsychological and neuroimaging fields, yet there is no comprehensive set of normative information for these types of stimuli available for use across these experimental domains. Experiment 1 collected data from 162 participants in an on-line questionnaire, which included measures of identification and categorization as well as cognitive and affective variables. A subsequent experiment collected response times to these sounds. Sounds were normalized to the same length (1 second) in order to maximize usage across multiple paradigms and experimental fields. These sounds can be freely downloaded for use, and all response data have also been made available in order that researchers can choose one or many of the cognitive and affective dimensions along which they would like to control their stimuli. Our hope is that the availability of such information will assist researchers in the fields of cognitive and clinical psychology and the neuroimaging community in choosing well-controlled environmental sound stimuli, and allow comparison across multiple studies. PMID:24023866

  7. 78 FR 62632 - Agency Information Collection Activities; Proposed Collection Renewal; Comment Request Re...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-22

    ... Collection Renewal; Comment Request Re: Guidance on Sound Incentive Compensation Practices AGENCY: Federal... notice that it is seeking comment on renewal of its information collection, entitled Guidance on Sound... the following currently approved collections of information: Title: Guidance on Sound Incentive...

  8. Sound production during the spawning season in cavity-nesting darters of the subgenus Catonotus (Percidae: Etheostoma)

    Treesearch

    Carol E. Johnston; Dawn L. Johnson

    2000-01-01

    The cavity-nesting darters Etheostoma nigripinne, Etheostoma crossopterum, and their hybrid (E. nigripinne X E. crossopterum) were found to produce sounds associated with reproduction. Males produced sounds during aggressive encounters and courtship activities. All three taxa produced nonpulsed...

  9. Development and experimental verification of a robust active noise control system for a diesel engine in submarines

    NASA Astrophysics Data System (ADS)

    Sachau, D.; Jukkert, S.; Hövelmann, N.

    2016-08-01

    This paper presents the development and experimental validation of an ANC (active noise control) system designed for a particular application in the exhaust line of a submarine. Tonal components of the exhaust noise in the frequency band from 75 Hz to 120 Hz are reduced by more than 30 dB. The ANC system is based on the feedforward leaky FxLMS algorithm. The observability of the sound pressure in the standing wave field is ensured by using two error microphones. A noninvasive online plant identification method is used to increase the robustness of the controller, and it is extended by a time-varying convergence gain to improve performance in the presence of slight errors in the frequency of the reference signal.
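    The controller described above builds on the feedforward leaky FxLMS algorithm. The single-channel sketch below (with a made-up FIR secondary path and a tonal disturbance; the real system uses two error microphones and online plant identification) illustrates the core update: filter the reference through the secondary-path estimate, then adapt the control filter with a leaky LMS step.

    ```python
    import numpy as np

    def leaky_fxlms(x, d, sec_path, n_taps=32, mu=0.01, leak=1e-4):
        """Single-channel leaky FxLMS: adapt w so the residual d + S{w*x} -> 0."""
        w = np.zeros(n_taps)
        xf = np.convolve(x, sec_path)[:len(x)]  # reference filtered by plant model
        xbuf = np.zeros(n_taps)   # reference history for the control filter
        fbuf = np.zeros(n_taps)   # filtered-reference history for the update
        y_sec = np.zeros(len(x))  # anti-noise after the secondary path
        e_hist = np.zeros(len(x))
        for n in range(len(x)):
            xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
            fbuf = np.roll(fbuf, 1); fbuf[0] = xf[n]
            y = w @ xbuf                         # control-filter output sample
            for k, h in enumerate(sec_path):     # propagate y through secondary path
                if n + k < len(x):
                    y_sec[n + k] += h * y
            e = d[n] + y_sec[n]                  # residual at the error microphone
            w = (1 - mu * leak) * w - mu * e * fbuf   # leaky FxLMS weight update
            e_hist[n] = e
        return e_hist

    fs = 1000
    t = np.arange(8000) / fs
    d = np.sin(2 * np.pi * 100 * t)       # 100 Hz tonal exhaust component
    x = np.sin(2 * np.pi * 100 * t)       # correlated reference signal
    sec_path = np.array([0.0, 0.9, 0.1])  # toy secondary-path FIR (assumed)
    e = leaky_fxlms(x, d, sec_path)
    ```

    The leak term slowly shrinks the weights, trading a tiny steady-state residual for robustness against drift, which is the reason the paper's controller uses the leaky variant in a long-running installation.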

  10. Abnormal lung sounds in patients with asthma during episodes with normal lung function.

    PubMed

    Schreur, H J; Vanderschoot, J; Zwinderman, A H; Dijkman, J H; Sterk, P J

    1994-07-01

    Even in patients with clinically stable asthma with normal lung function, the airways are characterized by inflammatory changes, including mucosal swelling. In order to investigate whether lung sounds can distinguish these subjects from normal subjects, we compared lung sound characteristics between eight normal and nine symptom-free subjects with mild asthma. All subjects underwent simultaneous recordings of airflow, lung volume changes, and lung sounds during standardized quiet breathing and during forced maneuvers. Flow-dependent power spectra were computed using the fast Fourier transform. For each spectrum we determined lung sound intensity (LSI), quartile frequencies (Q25%, Q50%, Q75%), wheezing (W), and W%. The results were analyzed by ANOVA. During expiration, LSI was lower in patients with asthma than in healthy controls, in particular at relatively low airflow values. During quiet expiration, Q25% to Q75% were higher in asthmatics than in healthy controls, while the change of Q25% to Q75% with flow was greater in asthmatic than in normal subjects. The W and W% were not different between the subject groups. The results indicate that at given airflows, lung sounds are lower in intensity and higher in pitch in asthmatics as compared with controls. This suggests that the generation and/or transmission of lung sounds in symptom-free patients with stable asthma differs from that in normal subjects, even when lung function is within the normal range. Therefore, airflow-standardized phonopneumography might reflect morphologic changes in the airways of patients with asthma.
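    The quartile-frequency measures used in this study (Q25%, Q50%, Q75%) and a sound-intensity measure can be sketched from a power spectrum as below. This is a generic illustration of the quantities, not the authors' flow-standardized phonopneumography pipeline; the window choice and dB reference are assumptions.

    ```python
    import numpy as np

    def spectral_quartiles(signal, fs):
        """Return (Q25, Q50, Q75) quartile frequencies and an intensity in dB."""
        spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal)))) ** 2
        freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
        cum = np.cumsum(spec) / np.sum(spec)   # normalized cumulative power
        q25, q50, q75 = (freqs[np.searchsorted(cum, q)] for q in (0.25, 0.50, 0.75))
        lsi = 10 * np.log10(np.mean(spec))     # intensity, arbitrary dB reference
        return q25, q50, q75, lsi

    fs = 8000
    t = np.arange(fs) / fs                     # one second of signal
    q25, q50, q75, lsi = spectral_quartiles(np.sin(2 * np.pi * 200 * t), fs)
    ```

    For a pure 200 Hz tone, all three quartiles fall at the tone frequency; for breath sounds, a higher Q50 corresponds to the "higher pitch" reported for the asthmatic group.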

  11. Cerebellar contribution to the prediction of self-initiated sounds.

    PubMed

    Knolle, Franziska; Schröger, Erich; Kotz, Sonja A

    2013-10-01

    In everyday life we frequently make the fundamental distinction between sensory input resulting from our own actions and sensory input that is externally-produced. It has been speculated that making this distinction involves the use of an internal forward-model, which enables the brain to adjust its response to self-produced sensory input. In the auditory domain, this idea has been supported by event-related potential and evoked-magnetic field studies revealing that self-initiated sounds elicit a suppressed N100/M100 brain response compared to externally-produced sounds. Moreover, a recent study reveals that patients with cerebellar lesions do not show a significant N100-suppression effect. This result supports the theory that the cerebellum is essential for generating internal forward predictions. However, all except one study compared self-initiated and externally-produced auditory stimuli in separate conditions. Such a setup prevents an unambiguous interpretation of the N100-suppression effect when distinguishing self- and externally-produced sensory stimuli: the N100-suppression can also be explained by differences in the allocation of attention in different conditions. In the current electroencephalography (EEG)-study we investigated the N100-suppression effect in an altered design comparing (i) self-initiated sounds to externally-produced sounds that occurred intermixed with these self-initiated sounds (i.e., both sound types occurred in the same condition) or (ii) self-initiated sounds to externally-produced sounds that occurred in separate conditions. Results reveal that the cerebellum generates selective predictions in response to self-initiated sounds independent of condition type: cerebellar patients, in contrast to healthy controls, do not display an N100-suppression effect in response to self-initiated sounds when intermixed with externally-produced sounds. 
Furthermore, the effect is not influenced by the temporal proximity of externally-produced sounds to self-produced sounds. Controls and patients showed a P200-reduction in response to self-initiated sounds. This suggests the existence of an additional and probably more conscious mechanism for identifying self-generated sounds that does not functionally depend on the cerebellum. Copyright © 2012 Elsevier Srl. All rights reserved.

  12. Infra-sound cancellation and mitigation in wind turbines

    NASA Astrophysics Data System (ADS)

    Boretti, Albert; Ordys, Andrew; Al Zubaidy, Sarim

    2018-03-01

    The infra-sound spectra recorded inside homes located even several kilometres from wind turbine installations are characterized by large pressure fluctuations in the low frequency range. There is a significant body of literature suggesting that inaudible sounds at low frequency are sensed by humans and affect wellbeing through different mechanisms. These mechanisms include amplitude modulation of heard sounds, stimulation of subconscious pathways, causing endolymphatic hydrops, and possibly potentiating noise-induced hearing loss. We suggest the study of active infra-sound cancellation and mitigation to address these low frequency noise issues. Loudspeakers generate pressure-wave components of the same amplitude and frequency but opposite phase as the recorded infra-sound. They also produce pressure-wave components within the audible range, reducing the perception of the residual infra-sound.

  13. Does the Sound of a Barking Dog Activate its Corresponding Visual Form? An fMRI Investigation of Modality-Specific Semantic Access

    PubMed Central

    Reilly, Jamie; Garcia, Amanda; Binney, Richard J.

    2016-01-01

    Much remains to be learned about the neural architecture underlying word meaning. Fully distributed models of semantic memory predict that the sound of a barking dog will conjointly engage a network of distributed sensorimotor spokes. An alternative framework holds that modality-specific features additionally converge within transmodal hubs. Participants underwent functional MRI while covertly naming familiar objects versus newly learned novel objects from only one of their constituent semantic features (visual form, characteristic sound, or point-light motion representation). Relative to the novel object baseline, familiar concepts elicited greater activation within association regions specific to that presentation modality. Furthermore, visual form elicited activation within high-level auditory association cortex. Conversely, environmental sounds elicited activation in regions proximal to visual association cortex. Both conditions commonly engaged a putative hub region within lateral anterior temporal cortex. These results support hybrid semantic models in which local hubs and distributed spokes are dually engaged in service of semantic memory. PMID:27289210

  14. Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations.

    PubMed

    Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia

    2016-01-01

    Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning ("opponent channel model"). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC. © The Author 2015. Published by Oxford University Press.
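    The opponent channel idea tested here (location read out from the difference between two broadly tuned, opposite-hemifield populations) can be illustrated with a toy model. The sigmoid tuning width and the gain term standing in for sound level are assumptions, but they show why a difference code normalized by the summed activity is level invariant.

    ```python
    import numpy as np

    def channel(azimuth_deg, side, gain=1.0):
        """Broad sigmoid tuning over azimuth; each channel prefers one hemifield."""
        sign = 1.0 if side == "right" else -1.0
        return gain / (1.0 + np.exp(-sign * azimuth_deg / 20.0))

    azimuths = np.linspace(-90, 90, 181)       # degrees, negative = left
    codes = []
    for gain in (0.8, 1.2):                    # gain stands in for sound level
        right = channel(azimuths, "right", gain)
        left = channel(azimuths, "left", gain)
        codes.append((right - left) / (right + left))  # opponent, level-normalized
    ```

    Because the level gain multiplies both channels, it cancels in the normalized difference, so the decoded azimuth is unchanged by level, mirroring the level-robust decoding reported in the abstract.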

  15. Neural Correlates of Sound Localization in Complex Acoustic Environments

    PubMed Central

    Zündorf, Ida C.; Lewald, Jörg; Karnath, Hans-Otto

    2013-01-01

    Listening to and understanding people in a “cocktail-party situation” is a remarkable feature of the human auditory system. Here we investigated the neural correlates of the ability to localize a particular sound among others in an acoustically cluttered environment with healthy subjects. In a sound localization task, five different natural sounds were presented from five virtual spatial locations during functional magnetic resonance imaging (fMRI). Activity related to auditory stream segregation was revealed in posterior superior temporal gyrus bilaterally, anterior insula, supplementary motor area, and frontoparietal network. Moreover, the results indicated critical roles of left planum temporale in extracting the sound of interest among acoustical distracters and the precuneus in orienting spatial attention to the target sound. We hypothesized that the left-sided lateralization of the planum temporale activation is related to the higher specialization of the left hemisphere for analysis of spectrotemporal sound features. Furthermore, the precuneus − a brain area known to be involved in the computation of spatial coordinates across diverse frames of reference for reaching to objects − seems to be also a crucial area for accurately determining locations of auditory targets in an acoustically complex scene of multiple sound sources. The precuneus thus may not only be involved in visuo-motor processes, but may also subserve related functions in the auditory modality. PMID:23691185

  16. Quieting Weinberg 5C: a case study in hospital noise control.

    PubMed

    MacLeod, Mark; Dunn, Jeffrey; Busch-Vishniac, Ilene J; West, James E; Reedy, Anita

    2007-06-01

    Weinberg 5C of Johns Hopkins Hospital is a very noisy hematological cancer unit in a relatively new building of a large medical campus. Because of the requirements for dealing with immuno-suppressed patients, options for introducing sound absorbing materials are limited. In this article, a case study of noise control in a hospital, the sound environment in the unit before treatment is described, the chosen noise control approach of adding custom-made sound absorbing panels is presented, and the impact of the noise control installation is discussed. The treatment of Weinberg 5C involved creating sound absorbing panels of 2-in.-thick fiberglass wrapped in an anti-bacterial fabric. Wallpaper paste was used to hold the fabric to the backing of the fiberglass. Installation of these panels on the ceiling and high on corridor walls had a dramatic effect. The noise on the unit (as measured by the equivalent sound pressure level) was immediately reduced by 5 dB(A) and the reverberation time dropped by a factor of over 2. Further, this drop in background noise and reverberation time understates the dramatic impact of the change. Surveys of staff and patients before and after the treatment indicated a change from viewing the unit as very noisy to a view of the unit as relatively quiet.
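    The reported drop in reverberation time is consistent with a simple Sabine estimate: adding absorptive panel area lowers RT60 = 0.161 V / A. The numbers below are illustrative assumptions, not measurements from the unit.

    ```python
    # Sabine reverberation-time estimate: RT60 = 0.161 * V / A
    V = 400.0            # room/corridor volume in m^3 (assumed)
    A_before = 20.0      # total absorption before treatment, m^2 sabins (assumed)
    panel_area = 60.0    # area of installed fiberglass panels, m^2 (assumed)
    alpha = 0.85         # typical absorption coefficient for 2-in fiberglass

    rt_before = 0.161 * V / A_before
    rt_after = 0.161 * V / (A_before + panel_area * alpha)
    ratio = rt_before / rt_after
    ```

    With these assumed values the added absorption more than doubles the total, so the reverberation time falls by more than a factor of two, in line with the factor reported in the study.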

  17. Sound, Noise, and Vibration Control.

    ERIC Educational Resources Information Center

    Yerges, Lyle F.

    This working guide on the principles and techniques of controlling acoustical environment is discussed in the light of human, environmental and building needs. The nature of sound and its variables are defined. The acoustical environment and its many materials, spaces and functional requirements are described, with specific methods for planning,…

  18. Mode tuning of a simplified string instrument using time-dimensionless state-derivative control

    NASA Astrophysics Data System (ADS)

    Benacchio, Simon; Chomette, Baptiste; Mamou-Mani, Adrien; Finel, Victor

    2015-01-01

    In recent years, there has been a growing interest in smart structures, particularly in the field of musical acoustics. Control methods, initially developed to reduce vibration and damage, can be a good way to shift the modal parameters of a structure in order to modify its dynamic response. This study focuses on smart musical instruments and aims to modify their radiated sound. This is achieved by controlling the modal parameters of the soundboard of a simplified string instrument. A method combining a pole placement algorithm and a time-dimensionless state-derivative control is used and briefly compared to a usual state control method. Then the effect of the mode tuning on the coupling between the string and the soundboard is studied experimentally. By controlling two vibration modes of the soundboard, its acoustic response and the damping of the third partial of the sound are modified. Finally, these effects are assessed by listening to the radiated sound.

  19. Effects of HearFones on speaking and singing voice quality.

    PubMed

    Laukkanen, Anne-Maria; Mickelson, Nils Peter; Laitala, Marja; Syrjä, Tiina; Salo, Arla; Sihvo, Marketta

    2004-12-01

    HearFones (HF) have been designed to enhance auditory feedback during phonation. This study investigated the effects of HF (1) on the sound perceivable by the subject, (2) on voice quality in reading and singing, and (3) on voice production in speech and singing at the same pitch and sound level. Test 1: Text reading was recorded with two identical microphones in the ears of a subject; one ear was covered with HF, and the other was free. Four subjects attended this test. Tests 2 and 3: A reading sample was recorded from 13 subjects and a song from 12 subjects without and with HF on. Test 4: Six females repeated [pa:p:a] in speaking and singing modes without and with HF at the same pitch and sound level. Long-term average spectra were computed (Tests 1-3), and formant frequencies, fundamental frequency, and sound level were measured (Tests 2 and 3). Subglottic pressure was estimated from oral pressure in [p], and electroglottography (EGG) was registered simultaneously during voicing on [a:] (Test 4). Voice quality in speech and singing was evaluated by three professional voice trainers (Tests 2-4). HF seemed to enhance the sound perceivable across the whole range studied (0-8 kHz), with the greatest enhancement (up to ca 25 dB) at 1-3 kHz and at 4-7 kHz. The subjects tended to decrease loudness with HF (when sound level was not being monitored). In more than half of the cases, voice quality was evaluated as "less strained" and "better controlled" with HF. When pitch and loudness were constant, no clear differences were heard, but the closed quotient of the EGG signal was higher and the signal more skewed, suggesting better glottal closure and/or diminished activity of the thyroarytenoid muscle.

  20. Active noise attenuation in ventilation windows.

    PubMed

    Huang, Huahua; Qiu, Xiaojun; Kang, Jian

    2011-07-01

    The feasibility of applying active noise control techniques to attenuate low frequency noise transmission through a natural ventilation window into a room is investigated analytically and experimentally. The window system is constructed by staggering the opening sashes of a spaced double glazing window to allow ventilation and natural light. An analytical model based on the modal expansion method is developed to calculate the low frequency sound field inside the window and the room and to be used in the active noise control simulations. The effectiveness of the proposed analytical model is validated by using the finite element method. The performance of the active control system for a window with different source and receiver configurations is compared, and it is found that the numerical and experimental results are in good agreement and the best result is achieved when the secondary sources are placed in the center at the bottom of the staggered window. The extra attenuation at the observation points in the optimized window system is almost equivalent to the noise reduction at the error sensor, and the frequency range of effective control extends up to 390 Hz in the case of a single channel active noise control system. © 2011 Acoustical Society of America.

  1. Application of the remote microphone method to active noise control in a mobile phone.

    PubMed

    Cheer, Jordan; Elliott, Stephen J; Oh, Eunmi; Jeong, Jonghoon

    2018-04-01

    Mobile phones are used in a variety of situations where environmental noise may interfere with the ability of the near-end user to communicate with the far-end user. To overcome this problem, it might be possible to use active noise control technology to reduce the noise experienced by the near-end user. This paper initially demonstrates that when an active noise control system is used in a practical mobile phone configuration to minimise the noise measured by an error microphone mounted on the mobile phone, the attenuation achieved at the user's ear depends strongly on the position of the source generating the acoustic interference. To help overcome this problem, a remote microphone processing strategy is investigated that estimates the pressure at the user's ear from the pressure measured by the microphone on the mobile phone. Through an experimental implementation, it is demonstrated that this arrangement achieves a significant improvement in the attenuation measured at the ear of the user, compared to the standard active control strategy. The robustness of the active control system to changes in both the interfering sound field and the position of the mobile device relative to the ear of the user is also investigated experimentally.
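
    The remote microphone idea sketched above can be illustrated with a toy system-identification example: an FIR "observation filter" is adapted so that the signal at a remote point (here standing in for the user's ear) can be estimated from the device microphone. The acoustic path, signal lengths, and LMS parameters are all invented; this is a minimal sketch of the concept, not the paper's implementation:

```python
import random

def lms_identify(x, d, taps=8, mu=0.05, epochs=200):
    """Adapt FIR weights w so that sum_i w[i] * x[n - i] approximates d[n] (LMS)."""
    w = [0.0] * taps
    for _ in range(epochs):
        for n in range(taps - 1, len(x)):
            frame = x[n - taps + 1:n + 1][::-1]   # x[n], x[n-1], ..., x[n-taps+1]
            y = sum(wi * xi for wi, xi in zip(w, frame))
            e = d[n] - y                          # estimation error at the "ear"
            for i in range(taps):
                w[i] += mu * e * frame[i]         # LMS weight update
    return w

# Synthetic "true" ear path: the ear signal is a 2-sample delayed,
# 0.6-scaled copy of the device-microphone signal.
random.seed(0)
mic = [random.uniform(-1.0, 1.0) for _ in range(400)]
ear = [0.0, 0.0] + [0.6 * m for m in mic[:-2]]

w = lms_identify(mic, ear)
# After adaptation, tap 2 recovers the path: w[2] close to 0.6, other taps near 0.
```

    Once such an observation filter has been identified in a preliminary stage, the control system can minimise the estimated ear pressure rather than the pressure at the on-device microphone.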

  2. Agreements/subagreements Applicable to Wallops, 12 Nov. 1991

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The status of space science agreements is noted. A general overview of the Wallops Flight Facility (WFF) is given. The geography, history, and mission of the facility are briefly surveyed. Brief accounts are given of NASA earth science activities at the WFF, including atmospheric dynamics, atmospheric optics, ocean physics, microwave altimetry, ocean color research, wind-wave-current interaction, flight support activities, the Sounding Rocket Program, and the NASA Balloon Program. Also discussed are the WFF launch range, the research airport, aircraft airborne science, telemetry, data systems, communications, and command and control.

  3. Annoyance resulting from intrusion of aircraft sounds upon various activities

    NASA Technical Reports Server (NTRS)

    Gunn, W. J.; Shepherd, W. T.; Fletcher, J. L.

    1975-01-01

    An experiment was conducted in which subjects were engaged in TV viewing, telephone listening, or reverie (no activity) for a 1/2-hour session. During the session, they were exposed to a series of recorded aircraft sounds at the rate of one flight every 2 minutes. Within each session, four levels of flyover noise, separated by dB increments, were presented several times in a Latin Square balanced sequence. The peak level of the noisiest flyover in any session was fixed at 95, 90, 85, 75, or 70 dBA. At the end of the test session, subjects recorded their responses to the aircraft sounds, using a bipolar scale which covered the range from very pleasant to extremely annoying. Responses to aircraft noises were found to be significantly affected by the particular activity in which the subjects were engaged. Not all subjects found the aircraft sounds to be annoying.
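
    The balanced presentation order mentioned above can be illustrated with a simple cyclic Latin square, in which every level appears exactly once in each row and each column (the study's actual balanced sequence may have used a different construction, e.g. a Williams design; the level labels below are placeholders):

```python
def latin_square(items):
    """Cyclic Latin square: row r is the item list rotated by r positions."""
    n = len(items)
    return [[items[(r + c) % n] for c in range(n)] for r in range(n)]

# Four flyover noise levels, as in the experiment described above.
levels = ["level_1", "level_2", "level_3", "level_4"]
square = latin_square(levels)
# Every level occurs exactly once in each row and once in each column,
# so no level is systematically presented earlier or later than the others.
```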

  4. Brain processes in women and men in response to emotive sounds.

    PubMed

    Rigo, Paola; De Pisapia, Nicola; Bornstein, Marc H; Putnick, Diane L; Serra, Mauro; Esposito, Gianluca; Venuti, Paola

    2017-04-01

    Adult appropriate responding to salient infant signals is vital to child healthy psychological development. Here we investigated how infant crying, relative to other emotive sounds of infant laughing or adult crying, captures adults' brain resources. In a sample of nulliparous women and men, we investigated the effects of different sounds on cerebral activation of the default mode network (DMN) and reaction times (RTs) while listeners engaged in self-referential decision and syllabic counting tasks, which, respectively, require the activation or deactivation of the DMN. Sounds affect women and men differently. In women, infant crying deactivated the DMN during the self-referential decision task; in men, female adult crying interfered with the DMN during the syllabic counting task. These findings point to different brain processes underlying responsiveness to crying in women and men and show that cerebral activation is modulated by situational contexts in which crying occurs.

  5. Active Outer Hair Cells Affect the Sound-Evoked Vibration of the Reticular Lamina

    NASA Astrophysics Data System (ADS)

    Jacob, Stefan; Fridberger, Anders

    2011-11-01

    It is well established that the organ of Corti uses active mechanisms to enhance its sensitivity and frequency selectivity. Two possible mechanisms have been identified, both capable of producing mechanical forces that can alter the sound-evoked vibration of the hearing organ. However, little is known about the effect of these forces on the sound-evoked vibration pattern of the reticular lamina. Current injections into the scala media were used to alter the amplitude of the active mechanisms in the apex of the guinea pig temporal bone. We used time-resolved confocal imaging to assess the vibration pattern of individual outer hair cells. During positive current injection, the sound-evoked vibration of outer hair cell row three increased while row one showed a small decrease. Negative currents reversed the observed effect. We conclude that outer hair cell-mediated modification of reticular lamina vibration patterns could contribute to inner hair cell stimulation.

  6. Sound and heat revolutions in phononics

    NASA Astrophysics Data System (ADS)

    Maldovan, Martin

    2013-11-01

    The phonon is the physical particle representing mechanical vibration and is responsible for the transmission of everyday sound and heat. Understanding and controlling the phononic properties of materials provides opportunities to thermally insulate buildings, reduce environmental noise, transform waste heat into electricity and develop earthquake protection. Here I review recent progress and the development of new ideas and devices that make use of phononic properties to control both sound and heat. Advances in sonic and thermal diodes, optomechanical crystals, acoustic and thermal cloaking, hypersonic phononic crystals, thermoelectrics, and thermocrystals herald the next technological revolution in phononics.

  7. Activity in Human Auditory Cortex Represents Spatial Separation Between Concurrent Sounds.

    PubMed

    Shiell, Martha M; Hausfeld, Lars; Formisano, Elia

    2018-05-23

    The primary and posterior auditory cortex (AC) are known for their sensitivity to spatial information, but how this information is processed is not yet understood. AC that is sensitive to spatial manipulations is also modulated by the number of auditory streams present in a scene (Smith et al., 2010), suggesting that spatial and nonspatial cues are integrated for stream segregation. We reasoned that, if this is the case, then it is the distance between sounds rather than their absolute positions that is essential. To test this hypothesis, we measured human brain activity in response to spatially separated concurrent sounds with fMRI at 7 tesla in five men and five women. Stimuli were spatialized amplitude-modulated broadband noises recorded for each participant via in-ear microphones before scanning. Using a linear support vector machine classifier, we investigated whether sound location and/or location plus spatial separation between sounds could be decoded from the activity in Heschl's gyrus and the planum temporale. The classifier was successful only when comparing patterns associated with the conditions that had the largest difference in perceptual spatial separation. Our pattern of results suggests that the representation of spatial separation is not merely the combination of single locations, but rather is an independent feature of the auditory scene. SIGNIFICANCE STATEMENT Often, when we think of auditory spatial information, we think of where sounds are coming from-that is, the process of localization. However, this information can also be used in scene analysis, the process of grouping and segregating features of a soundwave into objects. Essentially, when sounds are further apart, they are more likely to be segregated into separate streams. 
Here, we provide evidence that activity in the human auditory cortex represents the spatial separation between sounds rather than their absolute locations, indicating that scene analysis and localization processes may be independent. Copyright © 2018 the authors 0270-6474/18/384977-08$15.00/0.

  8. Digital servo control of random sound fields

    NASA Technical Reports Server (NTRS)

    Nakich, R. B.

    1973-01-01

    It is necessary to place a number of sensors at different positions in the sound field to determine the actual sound intensities to which the test object is subjected, and hence whether the specification is being met adequately or exceeded. Since the excitation is random in nature, the sensor signals must be decorrelated before averaging; otherwise it is impossible to obtain a true average.

  9. Fostering Alphabet Knowledge Development: A Comparison of Two Instructional Approaches

    ERIC Educational Resources Information Center

    Piasta, Shayne B.; Purpura, David J.; Wagner, Richard K.

    2010-01-01

    Preschool-aged children (n = 58) were randomly assigned to receive small group instruction in letter names and/or sounds or numbers (treated control). Alphabet instruction followed one of two approaches currently utilized in early childhood classrooms: combined letter name and sound instruction or letter sound only instruction. Thirty-four 15…

  10. The rotary subwoofer: a controllable infrasound source.

    PubMed

    Park, Joseph; Garcés, Milton; Thigpen, Bruce

    2009-04-01

    The rotary subwoofer is a novel acoustic transducer capable of projecting infrasonic signals at high sound pressure levels. The projector produces higher acoustic particle velocities than conventional transducers which translate into higher radiated sound pressure levels. This paper characterizes measured performance of a rotary subwoofer and presents a model to predict sound pressure levels.

  11. Earth Observing System/Advanced Microwave Sounding Unit-A (EOS/AMSU-A): Acquisition activities plan

    NASA Technical Reports Server (NTRS)

    Schwantje, Robert

    1994-01-01

    This is the acquisition activities plan for the software to be used in the Earth Observing System (EOS) Advanced Microwave Sounding Unit-A (AMSU-A) system. This document is submitted in response to Contract NAS5-32314 as CDRL 508. The procurement activities required to acquire software for the EOS/AMSU-A program are defined.

  12. Stimulus Expectancy Modulates Inferior Frontal Gyrus and Premotor Cortex Activity in Auditory Perception

    ERIC Educational Resources Information Center

    Osnes, Berge; Hugdahl, Kenneth; Hjelmervik, Helene; Specht, Karsten

    2012-01-01

    In studies on auditory speech perception, participants are often asked to perform active tasks, e.g. decide whether the perceived sound is a speech sound or not. However, information about the stimulus, inherent in such tasks, may induce expectations that cause altered activations not only in the auditory cortex, but also in frontal areas such as…

  13. What is the link between synaesthesia and sound symbolism?

    PubMed Central

    Bankieris, Kaitlyn; Simner, Julia

    2015-01-01

    Sound symbolism is a property of certain words which have a direct link between their phonological form and their semantic meaning. In certain instances, sound symbolism can allow non-native speakers to understand the meanings of etymologically unfamiliar foreign words, although the mechanisms driving this are not well understood. We examined whether sound symbolism might be mediated by the same types of cross-modal processes that typify synaesthetic experiences. Synaesthesia is an inherited condition in which sensory or cognitive stimuli (e.g., sounds, words) cause additional, unusual cross-modal percepts (e.g., sounds trigger colours, words trigger tastes). Synaesthesia may be an exaggeration of normal cross-modal processing, and if so, there may be a link between synaesthesia and the type of cross-modality inherent in sound symbolism. To test this we predicted that synaesthetes would have superior understanding of unfamiliar (sound symbolic) foreign words. In our study, 19 grapheme-colour synaesthetes and 57 non-synaesthete controls were presented with 400 adjectives from 10 unfamiliar languages and were asked to guess the meaning of each word in a two-alternative forced-choice task. Both groups showed superior understanding compared to chance levels, but synaesthetes significantly outperformed controls. This heightened ability suggests that sound symbolism may rely on the types of cross-modal integration that drive synaesthetes’ unusual experiences. It also suggests that synaesthesia endows or co-occurs with heightened multi-modal skills, and that this can arise in domains unrelated to the specific form of synaesthesia. PMID:25498744

  14. Technique of Automated Control Over Cardiopulmonary Resuscitation Procedures

    NASA Astrophysics Data System (ADS)

    Bureev, A. Sh; Kiseleva, E. Yu; Kutsov, M. S.; Zhdanov, D. S.

    2016-01-01

    The article describes a technique of automated control over cardiopulmonary resuscitation procedures on the basis of acoustic data. The research findings have made it possible to determine the most important characteristics of the acoustic signals (sounds of blood circulation in the carotid artery and respiratory sounds) and to propose a method to control the performance of resuscitation procedures. This method can be implemented as part of specialized hardware systems.

  15. Assessment of sound quality perception in cochlear implant users during music listening.

    PubMed

    Roy, Alexis T; Jiradejvong, Patpong; Carver, Courtney; Limb, Charles J

    2012-04-01

    Although cochlear implant (CI) users frequently report deterioration of sound quality when listening to music, few methods exist to quantify these subjective claims. 1) To design a novel research method for quantifying sound quality perception in CI users during music listening; 2) To validate this method by assessing one attribute of music perception, bass frequency perception, which is hypothesized to be relevant to overall musical sound quality perception. Limitations in bass frequency perception contribute to CI-mediated sound quality deteriorations. The proposed method will quantify this deterioration by measuring CI users' impaired ability to make sound quality discriminations among musical stimuli with variable amounts of bass frequency removal. A method commonly used in the audio industry (multiple stimulus with hidden reference and anchor [MUSHRA]) was adapted for CI users, referred to as CI-MUSHRA. CI users and normal hearing controls were presented with 7 sound quality versions of a musical segment: 5 high pass filter cutoff versions (200-, 400-, 600-, 800-, 1000-Hz) with decreasing amounts of bass information, an unaltered version ("hidden reference"), and a highly altered version (1,000-1,200 Hz band pass filter; "anchor"). Participants provided sound quality ratings between 0 (very poor) and 100 (excellent) for each version; ratings reflected differences in perceived sound quality among stimuli. CI users had greater difficulty making overall sound quality discriminations as a function of bass frequency loss than normal hearing controls, as demonstrated by a significantly weaker correlation between bass frequency content and sound quality ratings. In particular, CI users could not perceive sound quality difference among stimuli missing up to 400 Hz of bass frequency information. Bass frequency impairments contribute to sound quality deteriorations during music listening for CI users. 
CI-MUSHRA provided a systematic and quantitative assessment of this reduced sound quality. Although the effects of bass frequency removal were studied here, we advocate CI-MUSHRA as a user-friendly and versatile research tool to measure the effects of a wide range of acoustic manipulations on sound quality perception in CI users.
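
    The graded bass-removal stimuli behind CI-MUSHRA can be mimicked in a toy example: the same signal is high-pass filtered at each of the cutoffs listed above, so that successively more low-frequency energy is removed. A first-order RC high-pass filter and a two-tone "music" signal are used purely for illustration; the study's actual filters are not specified in this record beyond their cutoff frequencies:

```python
import math

def highpass(signal, cutoff_hz, fs=8000.0):
    """First-order RC high-pass filter in difference-equation form."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / fs
    alpha = rc / (rc + dt)
    out = [signal[0]]
    for n in range(1, len(signal)):
        out.append(alpha * (out[-1] + signal[n] - signal[n - 1]))
    return out

fs = 8000.0
t = [n / fs for n in range(2000)]
# Toy "music": a 100 Hz bass tone plus a 1 kHz tone.
x = [math.sin(2 * math.pi * 100 * ti) + math.sin(2 * math.pi * 1000 * ti) for ti in t]

cutoffs = [200, 400, 600, 800, 1000]
versions = {fc: highpass(x, fc) for fc in cutoffs}
# Higher cutoffs strip more low-frequency energy, so the RMS level drops.
rms = {fc: math.sqrt(sum(v * v for v in y) / len(y)) for fc, y in versions.items()}
```

    Listeners' ratings of such versions can then be correlated with cutoff frequency, which is the quantity the CI-MUSHRA comparison between CI users and normal-hearing controls is built on.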

  16. Stronger efferent suppression of cochlear neural potentials by contralateral acoustic stimulation in awake than in anesthetized chinchilla.

    PubMed

    Aedo, Cristian; Tapia, Eduardo; Pavez, Elizabeth; Elgueda, Diego; Delano, Paul H; Robles, Luis

    2015-01-01

    There are two types of sensory cells in the mammalian cochlea, inner hair cells, which make synaptic contact with auditory-nerve afferent fibers, and outer hair cells that are innervated by crossed and uncrossed medial olivocochlear (MOC) efferent fibers. Contralateral acoustic stimulation activates the uncrossed efferent MOC fibers reducing cochlear neural responses, thus modifying the input to the central auditory system. The chinchilla, among all studied mammals, displays the lowest percentage of uncrossed MOC fibers raising questions about the strength and frequency distribution of the contralateral-sound effect in this species. On the other hand, MOC effects on cochlear sensitivity have been mainly studied in anesthetized animals and since the MOC-neuron activity depends on the level of anesthesia, it is important to assess the influence of anesthesia in the strength of efferent effects. Seven adult chinchillas (Chinchilla laniger) were chronically implanted with round-window electrodes in both cochleae. We compared the effect of contralateral sound in awake and anesthetized condition. Compound action potentials (CAP) and cochlear microphonics (CM) were measured in the ipsilateral cochlea in response to tones in absence and presence of contralateral sound. Control measurements performed after middle-ear muscles section in one animal discarded any possible middle-ear reflex activation. Contralateral sound produced CAP amplitude reductions in all chinchillas, with suppression effects greater by about 1-3 dB in awake than in anesthetized animals. In contrast, CM amplitude increases of up to 1.9 dB were found in only three awake chinchillas. In both conditions the strongest efferent effects were produced by contralateral tones at frequencies equal or close to those of ipsilateral tones. 
Contralateral CAP suppressions for 1-6 kHz ipsilateral tones corresponded to a span of uncrossed MOC fiber innervation reaching at least the central third of the chinchilla cochlea.

  17. Digital servo control of random sound test excitation. [in reverberant acoustic chamber

    NASA Technical Reports Server (NTRS)

    Nakich, R. B. (Inventor)

    1974-01-01

    A digital servocontrol system for random noise excitation of a test object in a reverberant acoustic chamber employs a plurality of sensors spaced in the sound field to produce signals in separate channels which are decorrelated and averaged. The average signal is divided into a plurality of adjacent frequency bands cyclically sampled by a time division multiplex system, converted into digital form, and compared to a predetermined spectrum value stored in digital form. The results of the comparisons are used to control a time-shared up-down counter to develop gain control signals for the respective frequency bands in the spectrum of random sound energy picked up by the microphones.
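
    The per-band up-down counter servo described above can be sketched as a simple control loop: each band's measured level is compared with its stored target, and an integer counter nudges the band gain up or down. The deadband, step size, and toy plant below are invented for illustration; the actual system time-multiplexes this comparison across all bands:

```python
def servo_step(measured, target, counter, step=1, deadband=0.5):
    """One servo update for a single frequency band; returns the new counter."""
    error = target - measured
    if error > deadband:
        counter += step        # band is too quiet: raise the gain
    elif error < -deadband:
        counter -= step        # band is too loud: lower the gain
    return counter             # within the deadband: hold

# Toy plant: the measured band level simply tracks the gain counter.
target = 10.0
counter = 0
for _ in range(20):
    measured = float(counter)
    counter = servo_step(measured, target, counter)
# The counter climbs to the target level and then holds within the deadband.
```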

  18. Active Noise Control of Low Speed Fan Rotor-Stator Modes

    NASA Technical Reports Server (NTRS)

    Sutliff, Daniel L.; Hu, Ziqiang; Pla, Frederic G.; Heidelberg, Laurence J.

    1996-01-01

    This report describes the Active Noise Cancellation System designed by General Electric and tested in the NASA Lewis Research Center's 48 inch Active Noise Control Fan. The goal of this study was to assess the feasibility of using wall mounted secondary acoustic sources and sensors within the duct of a high bypass turbofan aircraft engine for active noise cancellation of fan tones. The control system is based on a modal control approach. A known acoustic mode propagating in the fan duct is cancelled using an array of flush-mounted compact sound sources. Controller inputs are signals from a shaft encoder and a microphone array which senses the residual acoustic mode in the duct. The canceling modal signal is generated by a modal controller. The key results are that the (6,0) mode was completely eliminated at 920 Hz and substantially reduced elsewhere. The total tone power was reduced 9.4 dB. Farfield 2BPF SPL reductions of 13 dB were obtained. The (4,0) and (4,1) modes were reduced simultaneously yielding a 15 dB modal PWL decrease. Global attenuation of PWL was obtained using an actuator and sensor system totally contained within the duct.
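
    For a single mode at a single frequency, the modal cancellation at the heart of such a system reduces to choosing the secondary drive that destructively interferes with the known mode at the error sensor. A minimal complex-amplitude sketch (with invented amplitudes and transfer function, not GE's controller) is:

```python
import cmath

def cancelling_drive(primary_mode, transfer):
    """Secondary drive u such that primary_mode + transfer * u == 0."""
    return -primary_mode / transfer

primary = 1.0 * cmath.exp(1j * 0.7)   # complex amplitude of the targeted duct mode
g = 0.5 * cmath.exp(-1j * 1.2)        # secondary-source-to-mode transfer function
u = cancelling_drive(primary, g)
residual = primary + g * u            # ideally zero after cancellation
```

    In practice the mode amplitude and transfer function must be estimated continuously (here, from the shaft encoder and microphone array), so the residual is reduced rather than eliminated at most frequencies.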

  19. Parallel language activation and inhibitory control in bimodal bilinguals.

    PubMed

    Giezen, Marcel R; Blumenfeld, Henrike K; Shook, Anthony; Marian, Viorica; Emmorey, Karen

    2015-08-01

    Findings from recent studies suggest that spoken-language bilinguals engage nonlinguistic inhibitory control mechanisms to resolve cross-linguistic competition during auditory word recognition. Bilingual advantages in inhibitory control might stem from the need to resolve perceptual competition between similar-sounding words both within and between their two languages. If so, these advantages should be lessened or eliminated when there is no perceptual competition between two languages. The present study investigated the extent of inhibitory control recruitment during bilingual language comprehension by examining associations between language co-activation and nonlinguistic inhibitory control abilities in bimodal bilinguals, whose two languages do not perceptually compete. Cross-linguistic distractor activation was identified in the visual world paradigm, and correlated significantly with performance on a nonlinguistic spatial Stroop task within a group of 27 hearing ASL-English bilinguals. Smaller Stroop effects (indexing more efficient inhibition) were associated with reduced co-activation of ASL signs during the early stages of auditory word recognition. These results suggest that inhibitory control in auditory word recognition is not limited to resolving perceptual linguistic competition in phonological input, but is also used to moderate competition that originates at the lexico-semantic level. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. WOLF REXUS EXPERIMENT - European Planetary Science Congress

    NASA Astrophysics Data System (ADS)

    Buzdugan, A.

    2017-09-01

    The WOLF experiment is developing a reaction wheel-based control system that effectively functions as an active nutation damper. One reaction wheel is used to reduce the undesirable lateral rates of spinning, cylindrically symmetric free-falling units ejected from a sounding rocket. Once validated in the REXUS flight, the concept and design developed during the WOLF experiment can be used for other applications that require a flat spin of the free-falling units.

  1. Music therapy, emotions and the heart: a pilot study.

    PubMed

    Raglio, Alfredo; Oasi, Osmano; Gianotti, Marta; Bellandi, Daniele; Manzoni, Veronica; Goulene, Karine; Imbriani, Chiara; Badiale, Marco Stramba

    2012-01-01

    The autonomic nervous system plays an important role in the control of cardiac function. It has been suggested that sound and music may affect the autonomic control of the heart by inducing emotions, concomitantly with the activation of specific brain areas (i.e., the limbic area), and that they may exert beneficial effects. This study is a prerequisite and defines a methodology to assess the relation between changes in cardiac physiological parameters, such as heart rate, QT interval, and their variability, and the psychological responses to music therapy sessions. We assessed the cardiac physiological parameters and psychological responses to a music therapy session. ECG Holter recordings were performed before, during, and after a music therapy session in 8 healthy individuals. The different behaviors of the music therapist and of the subjects were analyzed with a specific music therapy assessment (Music Therapy Checklist). After the session, mean heart rate decreased (p = 0.05), the high-frequency component of heart rate variability tended to be higher, and QTc variability tended to be lower. During the music therapy session, "affect attunements" were found in all subjects but one. A significant emotional activation was associated with greater dynamicity and variation of the sound-music interactions. Our results may represent the rational basis for larger studies in different clinical conditions.

  2. Medio-lateral postural instability in subjects with tinnitus.

    PubMed

    Kapoula, Zoi; Yang, Qing; Lê, Thanh-Thuan; Vernet, Marine; Berbey, Nolwenn; Orssaud, Christophe; Londero, Alain; Bonfils, Pierre

    2011-01-01

    Many patients show modulation of tinnitus by gaze, jaw or neck movements, reflecting abnormal sensorimotor integration and interaction between various inputs. Postural control is based on multi-sensory integration (visual, vestibular, somatosensory, and oculomotor), and there is now evidence that posture can also be influenced by sound. Perhaps tinnitus influences posture similarly to external sound. This study examines the quality of postural performance in quiet stance in patients with modulated tinnitus. Twenty-three patients with highly modulated tinnitus were selected in the ENT service. Twelve reported exclusively or predominately left tinnitus, eight right, and three bilateral. Eighteen control subjects were also tested. Subjects were asked to fixate a target at 40 cm for 51 s; posturography was performed with the platform (Technoconcept, 40 Hz) for both the eyes open and eyes closed conditions. For both conditions, tinnitus subjects showed abnormally high lateral body sway (SDx). This was corroborated by fast Fourier transform (FFTx) and wavelet analysis. For patients with left tinnitus only, medio-lateral sway increased significantly when looking away from the center. Similarly to external sound stimulation, tinnitus could influence lateral sway by activating attention shifts and perhaps vestibular responses. Poor integration of sensorimotor signals is another possibility. Such abnormalities would be accentuated in left tinnitus because of the importance of the right cerebral cortex in processing auditory-tinnitus signals, eye position, and attention.

  3. Crocodile Physics. [CD-ROM].

    ERIC Educational Resources Information Center

    1999

    This high school physics resource is a simulator for optics, electronics, force, motion, and sound. Students can study oscillations, look at sound waves, and use probes to graph a wide variety of quantities. Over 100 activities are pre-written, and students can easily create their own additional activities using the multimedia editor. (WRM)

  4. Trans-tibial amputee gait: time-distance parameters and EMG activity.

    PubMed

    Isakov, E; Keren, O; Benjuya, N

    2000-12-01

    Gait analysis of trans-tibial (TT) amputees discloses asymmetries in gait parameters between the amputated and sound legs. The present study aimed at outlining differences between both legs with regard to kinematic parameters and activity of the muscles controlling the knees. The gait of 14 traumatic TT amputees, walking at a mean speed of 74.96 m/min, was analysed by means of an electronic walkway, video camera, and portable electromyography system. Results showed differences in kinematic parameters. Step length, step time and swing time were significantly longer, while stance time and single support time were significantly shorter on the amputated side. A significant difference was also found between knee angle in both legs at heel strike. The biceps femoris/vastus medialis ratio in the amputated leg, during the first half of stance phase, was significantly higher when compared to the same muscle ratio in the sound leg. This difference was due to the higher activity of the biceps femoris, almost four times higher than the vastus medialis in the amputated leg. The observed differences in time-distance parameters are due to stiffness of the prosthesis ankle (the SACH foot) that impedes the normal forward advance of the amputated leg during the first half of stance. The higher knee flexion at heel strike is due to the necessary socket alignment. Unlike in the sound leg, the biceps femoris in the amputated leg reaches maximal activity during the first half of stance, cocontracting with the vastus medialis, to support body weight on the amputated leg. The obtained data can serve as a future reference for evaluating the influence of new prosthetic components on the quality of TT amputee's gait.

  5. Sound Waves Induce Neural Differentiation of Human Bone Marrow-Derived Mesenchymal Stem Cells via Ryanodine Receptor-Induced Calcium Release and Pyk2 Activation.

    PubMed

    Choi, Yura; Park, Jeong-Eun; Jeong, Jong Seob; Park, Jung-Keug; Kim, Jongpil; Jeon, Songhee

    2016-10-01

    Mesenchymal stem cells (MSCs) have shown considerable promise as an adaptable cell source for use in tissue engineering and other therapeutic applications. The aims of this study were to develop methods to test the hypothesis that human MSCs could be differentiated using sound wave stimulation alone and to find the underlying mechanism. Human bone marrow (hBM)-MSCs were stimulated with sound waves (1 kHz, 81 dB) for 7 days and the expression of neural markers were analyzed. Sound waves induced neural differentiation of hBM-MSC at 1 kHz and 81 dB but not at 1 kHz and 100 dB. To determine the signaling pathways involved in the neural differentiation of hBM-MSCs by sound wave stimulation, we examined the Pyk2 and CREB phosphorylation. Sound wave induced an increase in the phosphorylation of Pyk2 and CREB at 45 min and 90 min, respectively, in hBM-MSCs. To find out the upstream activator of Pyk2, we examined the intracellular calcium source that was released by sound wave stimulation. When we used ryanodine as a ryanodine receptor antagonist, sound wave-induced calcium release was suppressed. Moreover, pre-treatment with a Pyk2 inhibitor, PF431396, prevented the phosphorylation of Pyk2 and suppressed sound wave-induced neural differentiation in hBM-MSCs. These results suggest that specific sound wave stimulation could be used as a neural differentiation inducer of hBM-MSCs.

  6. Prey-mediated behavioral responses of feeding blue whales in controlled sound exposure experiments.

    PubMed

    Friedlaender, A S; Hazen, E L; Goldbogen, J A; Stimpert, A K; Calambokidis, J; Southall, B L

    2016-06-01

Behavioral response studies provide significant insights into the nature, magnitude, and consequences of changes in animal behavior in response to external stimuli. Controlled exposure experiments (CEEs) to study behavioral response have faced challenges in quantifying the importance of, and interaction among, individual variability, exposure conditions, and environmental covariates. To investigate these complex parameters relative to blue whale behavior and how it may change as a function of certain sounds, we deployed multi-sensor acoustic tags and conducted CEEs using simulated mid-frequency active sonar (MFAS) and pseudo-random noise (PRN) stimuli, while collecting synoptic, quantitative prey measures. In contrast to previous approaches that lacked such prey data, our integrated approach explained substantially more variance in blue whale dive behavioral responses to mid-frequency sounds (r2 = 0.725 vs. 0.14 previously). Results demonstrate that deep-feeding whales respond more clearly and strongly to CEEs than those in other behavioral states, but this was evident only with the increased explanatory power provided by incorporating prey density and distribution as contextual covariates. Including contextual variables increases the ability to characterize behavioral variability and empirically strengthens previous findings that deep-feeding blue whales respond significantly to mid-frequency sound exposure. However, our results are based only on a single behavioral state with a limited sample size, and this analytical framework should be applied broadly across behavioral states. The increased capability to describe and account for individual response variability by including environmental variables, such as prey, that drive foraging behavior underscores the importance of integrating these and other relevant contextual parameters in experimental designs. Our results suggest the need to measure and account for the ecological dynamics of predator-prey interactions when studying the effects of anthropogenic disturbance on feeding animals.

  7. Group Behavioural Responses of Atlantic Salmon (Salmo salar L.) to Light, Infrasound and Sound Stimuli

    PubMed Central

    Bui, Samantha; Oppedal, Frode; Korsøen, Øyvind J.; Sonny, Damien; Dempster, Tim

    2013-01-01

Understanding species-specific flight behaviours is essential in developing methods of guiding fish spatially, and requires knowledge of how groups of fish respond to aversive stimuli. By harnessing their natural behaviours, the use of physical manipulation or other potentially harmful procedures can be minimised. We examined the reactions of sea-caged groups of 50 salmon (1331±364 g) to short-term exposure to visual or acoustic stimuli. In light experiments, fish were exposed to one of three intensities of blue LED light (high, medium and low) or no light (control). Sound experiments included exposure to infrasound (12 Hz), a surface disturbance event, the combination of infrasound and surface disturbance, or no stimuli. Groups that experienced the light, infrasound, and combined infrasound and surface disturbance treatments exhibited a marked change in vertical distribution, with fish diving to the bottom of the sea-cage for the duration of the stimulus. Light treatments, but not sound, also reduced the total echo-signal strength (indicative of swim bladder volume) after exposure, compared to pre-stimulus levels. Groups in infrasound and combination treatments showed increased swimming activity during stimulus application, with swimming speeds tripled compared to those of controls. In all light and sound treatments, fish returned to their pre-stimulus swimming depths and speeds once exposure had ceased. This work establishes consistent, short-term avoidance responses to these stimuli, and provides a basis for methods to guide fish for aquaculture applications, or to create avoidance barriers for conservation purposes. In doing so, we can achieve the manipulation of group position with minimal welfare impacts, to create more sustainable practices. PMID:23691087

  9. Sound control by temperature gradients

    NASA Astrophysics Data System (ADS)

    Sánchez-Dehesa, José; Angelov, Mitko I.; Cervera, Francisco; Cai, Liang-Wu

    2009-11-01

    This work reports experiments showing that airborne sound propagation can be controlled by temperature gradients. A system of two heated tubes is here used to demonstrate the collimation and focusing of an ultrasonic beam by the refractive index profile created by the temperature gradients existing around the tubes. Numerical simulations supporting the experimental findings are also reported.
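The refraction mechanism behind these experiments can be made concrete with a small numeric sketch. In the ideal-gas approximation the speed of sound in air scales with the square root of absolute temperature, so a temperature gradient acts as a graded acoustic refractive index. The sketch below is illustrative only (it is not from the record); the 20 °C reference temperature and the example tube temperatures are assumptions. Hot air near a heated tube is acoustically "faster" (n < 1), which is what bends and collimates a passing beam.

```python
import math

def sound_speed(temp_kelvin, c0=331.3, t0=273.15):
    """Speed of sound in air (m/s) from the ideal-gas relation c = c0*sqrt(T/T0)."""
    return c0 * math.sqrt(temp_kelvin / t0)

def refractive_index(temp_kelvin, ref_temp_kelvin=293.15):
    """Acoustic refractive index relative to air at the reference temperature.

    n > 1 in cooler (slower) air, so rays bend toward cooler regions;
    air heated above the reference has n < 1."""
    return sound_speed(ref_temp_kelvin) / sound_speed(temp_kelvin)

# Illustrative temperatures for air near a heated tube:
for t_c in (20, 60, 100):
    n = refractive_index(t_c + 273.15)
    print(f"{t_c:3d} C -> n = {n:.3f}")
```

Mapping temperature to index this way is what lets a heated-tube arrangement behave like a graded-index acoustic lens.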

  10. Non-linear solitary sound waves in lipid membranes and their possible role in biological signaling

    NASA Astrophysics Data System (ADS)

    Shrivastava, Shamit

Biological macromolecules self-assemble under entropic forces to form a dynamic 2D interfacial medium whose elastic properties arise from the curvature of the entropic potential of the interface. Elastic interfaces should be capable of propagating localized perturbations analogous to sound waves. However, (1) the existence and (2) the possible role of such waves in affecting biological functions remain unexplored. Both these aspects of "sound" as a signaling mechanism in biology are explored experimentally on mixed monolayers of lipids, fluorophores, and proteins at the air/water interface as a model biological interface. This study shows, for the first time, that the nonlinear susceptibility near a thermodynamic transition in a lipid monolayer results in nonlinear solitary sound waves of an 'all or none' nature. The state dependence of the nonlinear propagation is characterized by studying the velocity-amplitude relationship, and results on distance dependence, the effect of geometry, and collisions of solitary waves are presented. Given that lipid bilayers and real biological membranes have such nonlinearities in their susceptibility diagrams, similar solitary phenomena should be expected in biological membranes. In fact, the observed characteristics of solitary sound waves, such as their all-or-none nature, a biphasic pulse shape with a long tail, and opto-mechano-electro-thermal coupling, are strikingly similar to the phenomenon of nerve pulse propagation as observed in single nerve fibers. Finally, given the strong correlation between the activity of membrane-bound enzymes and the susceptibility, and the fact that the latter varies within a single solitary pulse, a new thermodynamic basis for biological signaling is proposed: the state of the interface controls both the nature of sound propagation and its impact on incorporated enzymes and proteins. The proof of concept is demonstrated for acetylcholine esterase embedded in a lipid monolayer, where the enzyme is spatiotemporally "knocked out" by a propagating sound wave.

  11. Recent sounding rocket highlights and a concept for melding sounding rocket and space shuttle activities

    NASA Technical Reports Server (NTRS)

    Lane, J. H.; Mayo, E. E.

    1980-01-01

Highlights include launching guided vehicles into the African solar eclipse, initiation of development of a three-stage Black Brant to explore the dayside polar cusp, large-payload Aries flights at White Sands Missile Range, and an active program with the Orion vehicle family using surplus motors. Sounding rocket philosophy and experience are being applied to the shuttle in the Get Away Special and Experiments of Opportunity Payloads programs. In addition, an orbit selection and targeting software system to support shuttle pallet-mounted experiments is under development.

  12. Investigation of the potential effects of underwater noise from petroleum-industry activities on feeding humpback whale behavior. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malme, C.I.; Miles, P.R.; Tyack, P.

    1985-06-01

An investigation was made of the potential effects of underwater noise from petroleum-industry activities on the behavior of feeding humpback whales in Frederick Sound and Stephens Passage, Alaska, in August 1984. Test sounds were a 100 cu. in. air gun and playbacks of recorded drillship, drilling platform, production platform, semi-submersible drill rig, and helicopter fly-over noise. Sound source levels and acoustic propagation losses were measured. The movement patterns of whales were determined by observations of whale-surfacing positions.

  13. NFL Films audio, video, and film production facilities

    NASA Astrophysics Data System (ADS)

    Berger, Russ; Schrag, Richard C.; Ridings, Jason J.

    2003-04-01

The new NFL Films 200,000 sq. ft. headquarters is home to the critically acclaimed film production that preserves the NFL's visual legacy week to week during the football season, and is also the technical plant that processes and archives football footage from the earliest recorded media to the current network broadcasts. No other company in the country shoots more film than NFL Films, and the inclusion of cutting-edge video and audio formats demands that their technical spaces continually integrate the latest in the ever-changing world of technology. This facility houses a staggering array of acoustically sensitive spaces where music and sound are equal partners with the visual medium. Over 90,000 sq. ft. of sound-critical technical space comprises an array of sound stages, music scoring stages, audio control rooms, music writing rooms, recording studios, mixing theaters, video production control rooms, editing suites, and a screening theater. Every production control space in the building is designed to monitor and produce multi-channel surround sound audio. An overview of the architectural and acoustical design challenges encountered for each sophisticated listening, recording, viewing, editing, and sound-critical environment will be discussed.

  14. The Sound Path: Adding Music to a Child Care Playground.

    ERIC Educational Resources Information Center

    Kern, Petra; Wolery, Mark

    2002-01-01

    This article discusses how musical activities were added to a childcare playground and the benefits for a young child with blindness. The six-station "Sound Path" is described, and suggestions are provided for using sound pipes to develop sensorimotor skills, social and communication skills, cognitive skills, and emotional skills. (Contains…

  15. Effects of Natural Sounds on Pain: A Randomized Controlled Trial with Patients Receiving Mechanical Ventilation Support.

    PubMed

    Saadatmand, Vahid; Rejeh, Nahid; Heravi-Karimooi, Majideh; Tadrisi, Sayed Davood; Vaismoradi, Mojtaba; Jordan, Sue

    2015-08-01

Nonpharmacologic pain management in patients receiving mechanical ventilation support in critical care units is under-investigated. Natural sounds may help reduce the potentially harmful effects of anxiety and pain in hospitalized patients. The aim of this study was to examine the effect of pleasant, natural sounds on self-reported pain in patients receiving mechanical ventilation support, using a pragmatic parallel-arm, randomized controlled trial. The study was conducted in a general adult intensive care unit of a high-turnover teaching hospital in Tehran, Iran. Between October 2011 and June 2012, we recruited 60 patients receiving mechanical ventilation support to the intervention (n = 30) and control arms (n = 30) of a pragmatic parallel-group, randomized controlled trial. Participants in both arms wore headphones for 90 minutes. Those in the intervention arm heard pleasant, natural sounds, whereas those in the control arm heard nothing. Outcome measures included the self-reported visual analog scale for pain at baseline; 30, 60, and 90 minutes into the intervention; and 30 minutes post-intervention. All patients approached agreed to participate. The trial arms were similar at baseline. Pain scores in the intervention arm fell and were significantly lower than in the control arm at each time point (p < .05). Administration of pleasant, natural sounds via headphones is a simple, safe, nonpharmacologic nursing intervention that may be used to allay pain for up to 120 minutes in patients receiving mechanical ventilation support. Copyright © 2015 American Society for Pain Management Nursing. Published by Elsevier Inc. All rights reserved.

  16. Effect of Casino-Related Sound, Red Light and Pairs on Decision-Making During the Iowa Gambling Task

    PubMed Central

    Noël, Xavier; Bechara, Antoine; Vanavermaete, Nora; Verbanck, Paul; Kornreich, Charles

    2014-01-01

Casino venues are often characterized by “warm” colors, reward-related sounds, and the presence of others. These factors have long been identified as key factors in energizing gambling. However, few empirical studies have examined their impact on gambling behaviors. Here, we aimed to explore the impact of combined red light and casino-related sounds, with or without the presence of another participant, on gambling-related behaviors. Gambling behavior was estimated with the Iowa Gambling Task (IGT). Eighty non-gambler participants took part in one of four experimental conditions (20 participants in each condition): (1) IGT without casino-related sound and under normal (white) light (control), (2) IGT with combined casino-related sound and red light (casino alone), (3) IGT with combined casino-related sound, red light and in front of another participant (casino competition—implicit), and (4) IGT with combined casino-related sound, red light and against another participant (casino competition—explicit). Results showed that, in contrast to the control condition, participants in the three “casino” conditions did not exhibit slower deck selection reaction times after losses than after rewards. Moreover, participants in the two “competition” conditions displayed lower deck selection reaction times after losses and rewards, as compared with the control and “casino alone” conditions. These findings suggest that the casino environment may diminish the time used for reflecting and thinking before acting after losses. These findings are discussed along with the methodological limitations, potential directions for future studies, and implications for enhancing prevention strategies for abnormal gambling. PMID:24414096

  17. Effect of casino-related sound, red light and pairs on decision-making during the Iowa gambling task.

    PubMed

    Brevers, Damien; Noël, Xavier; Bechara, Antoine; Vanavermaete, Nora; Verbanck, Paul; Kornreich, Charles

    2015-06-01

Casino venues are often characterized by "warm" colors, reward-related sounds, and the presence of others. These factors have long been identified as key factors in energizing gambling. However, few empirical studies have examined their impact on gambling behaviors. Here, we aimed to explore the impact of combined red light and casino-related sounds, with or without the presence of another participant, on gambling-related behaviors. Gambling behavior was estimated with the Iowa Gambling Task (IGT). Eighty non-gambler participants took part in one of four experimental conditions (20 participants in each condition): (1) IGT without casino-related sound and under normal (white) light (control), (2) IGT with combined casino-related sound and red light (casino alone), (3) IGT with combined casino-related sound, red light and in front of another participant (casino competition-implicit), and (4) IGT with combined casino-related sound, red light and against another participant (casino competition-explicit). Results showed that, in contrast to the control condition, participants in the three "casino" conditions did not exhibit slower deck selection reaction times after losses than after rewards. Moreover, participants in the two "competition" conditions displayed lower deck selection reaction times after losses and rewards, as compared with the control and "casino alone" conditions. These findings suggest that the casino environment may diminish the time used for reflecting and thinking before acting after losses. These findings are discussed along with the methodological limitations, potential directions for future studies, and implications for enhancing prevention strategies for abnormal gambling.

  18. The impact of artificial vehicle sounds for pedestrians on driver stress.

    PubMed

    Cottrell, Nicholas D; Barton, Benjamin K

    2012-01-01

Electrically powered vehicles have raised concern over their lack of sound, but the impact on the driver of the artificial sounds now being implemented has not been examined. The impact of two different implementations of vehicle sound on driver stress in electric vehicles was examined. A Nissan HEV running in electric-vehicle mode was driven by participants in an area of congestion using three sound implementations: (1) no artificial sounds, (2) manually engaged sounds, and (3) automatically engaged sounds. Physiological and self-report questionnaire measures were collected to determine stress and acceptance of the automated sound protocol. Driver stress was significantly higher in the manually activated warning condition than with either no artificial sounds or automatically engaged sounds. Implications for automation usage and measurement methods are discussed and future research directions suggested. The advent of hybrid and all-electric vehicles has created a need for artificial warning signals for pedestrian safety that place task demands on drivers. We investigated drivers' stress differences in response to varying conditions of warning signals for pedestrians. Driver stress was lower when warning sounds were automated.

  19. THE USE OF ARCHITECTURAL ACOUSTICAL MATERIALS, THEORY AND PRACTICE. SECOND EDITION.

    ERIC Educational Resources Information Center

    Acoustical Materials Association, New York, NY.

This discussion of the basic function of acoustical materials, the control of sound by sound absorption, is based on the wave and energy properties of sound. It is stated that, in general, a much larger volume of acoustical materials is needed to remove distracting noise from classrooms and offices, for example, than from auditoriums, where a…
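The room-to-room trade-off described above can be made quantitative with Sabine's classical reverberation formula, which is standard acoustics rather than part of this record; the room volumes and absorption totals below are illustrative assumptions, not figures from the source.

```python
def sabine_rt60(volume_m3: float, absorption_m2_sabine: float) -> float:
    """Sabine's formula: RT60 = 0.161 * V / A, with V the room volume in m^3
    and A the total absorption (sum of surface area times absorption
    coefficient, in metric sabins)."""
    return 0.161 * volume_m3 / absorption_m2_sabine

# Illustrative only: speech rooms (classrooms, offices) target a much shorter
# reverberation time than auditoriums, so relative to their size they need
# proportionally more absorptive material.
classroom = sabine_rt60(volume_m3=200.0, absorption_m2_sabine=40.0)
auditorium = sabine_rt60(volume_m3=10000.0, absorption_m2_sabine=1000.0)
print(f"classroom RT60  = {classroom:.2f} s")
print(f"auditorium RT60 = {auditorium:.2f} s")
```

Inverting the formula (A = 0.161 * V / RT60) gives the absorption budget a designer must install to hit a target reverberation time.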

  20. The influence of company identity on the perception of vehicle sounds.

    PubMed

    Humphreys, Louise; Giudice, Sebastiano; Jennings, Paul; Cain, Rebecca; Song, Wookeun; Dunne, Garry

    2011-04-01

    In order to determine how the interior of a car should sound, automotive manufacturers often rely on obtaining data from individual evaluations of vehicle sounds. Company identity could play a role in these appraisals, particularly when individuals are comparing cars from opposite ends of the performance spectrum. This research addressed the question: does company identity influence the evaluation of automotive sounds belonging to cars of a similar performance level and from the same market segment? Participants listened to car sounds from two competing manufacturers, together with control sounds. Before listening to each sound, participants were presented with the correct company identity for that sound, the incorrect identity or were given no information about the identity of the sound. The results showed that company identity did not influence appraisals of high performance cars belonging to different manufacturers. These results have positive implications for methodologies employed to capture the perceptions of individuals. STATEMENT OF RELEVANCE: A challenge in automotive design is to set appropriate targets for vehicle sounds, relying on understanding subjective reactions of individuals to such sounds. This paper assesses the role of company identity in influencing these subjective reactions and will guide sound evaluation studies, in which the manufacturer is often apparent.
