Sample records for acoustic signature recognition

  1. Event identification by acoustic signature recognition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dress, W.B.; Kercel, S.W.

    1995-07-01

    Many events of interest to the security community produce acoustic emissions that are, in principle, identifiable as to cause. Some obvious examples are gunshots, breaking glass, takeoffs and landings of small aircraft, vehicular engine noises, footsteps (high frequencies when on gravel, very low frequencies when on soil), and voices (whispers to shouts). We are investigating wavelet-based methods to extract unique features of such events for classification and identification. We also discuss methods of classification and pattern recognition specifically tailored for acoustic signatures obtained by wavelet analysis. The paper is divided into three parts: completed work, work in progress, and future applications. The completed phase has led to the successful recognition of aircraft types on landing and takeoff. Both small aircraft (twin-engine turboprop) and large (commercial airliners) were included in the study. The project considered the design of a small, field-deployable, inexpensive device. The techniques developed during the aircraft identification phase were then adapted to a multispectral electromagnetic interference monitoring device now deployed in a nuclear power plant. This is a general-purpose wavelet analysis engine, spanning 14 octaves, and can be adapted for other specific tasks. Work in progress is focused on applying the methods previously developed to speaker identification. Some of the problems to be overcome include recognition of sounds as voice patterns and as distinct from possible background noises (e.g., music), as well as identification of the speaker from a short-duration voice sample. A generalization of the completed work and the work in progress is a device capable of classifying any number of acoustic events, particularly quasi-stationary events such as engine noises and voices, and singular events such as gunshots and breaking glass. We will show examples of both kinds of events and discuss their recognition likelihood.
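
    As an illustration of the kind of wavelet-based feature extraction this abstract describes, here is a minimal sketch using the PyWavelets library; the wavelet choice ('db4'), decomposition depth, synthetic test signals, and nearest-template classifier are illustrative assumptions, not details taken from the paper.

    ```python
    # Minimal sketch: wavelet-energy features for an acoustic event frame.
    # Assumptions (not from the paper): 'db4' wavelet, 5 decomposition levels,
    # and a nearest-centroid comparison against stored event templates.
    import numpy as np
    import pywt

    def wavelet_energy_features(frame, wavelet="db4", levels=5):
        """Return normalized log-energy per decomposition level."""
        coeffs = pywt.wavedec(frame, wavelet, level=levels)
        energies = np.array([np.sum(c ** 2) for c in coeffs])
        energies /= energies.sum() + 1e-12          # scale-invariant
        return np.log(energies + 1e-12)

    def classify(frame, templates):
        """templates: dict mapping event label -> stored feature vector."""
        feat = wavelet_energy_features(frame)
        return min(templates, key=lambda lbl: np.linalg.norm(feat - templates[lbl]))

    # Toy example: a synthetic 'gunshot-like' transient vs. engine-like noise.
    rng = np.random.default_rng(0)
    fs = 8000
    t = np.arange(fs) / fs
    engine = np.sin(2 * np.pi * 60 * t) + 0.1 * rng.standard_normal(fs)
    gunshot = np.exp(-40 * t) * rng.standard_normal(fs)
    templates = {"engine": wavelet_energy_features(engine),
                 "gunshot": wavelet_energy_features(gunshot)}
    print(classify(gunshot + 0.05 * rng.standard_normal(fs), templates))
    ```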

  2. Acoustic signature recognition technique for Human-Object Interactions (HOI) in persistent surveillance systems

    NASA Astrophysics Data System (ADS)

    Alkilani, Amjad; Shirkhodaie, Amir

    2013-05-01

    Handling, manipulation, and placement of objects, hereon called Human-Object Interaction (HOI), in the environment generate sounds. Such sounds are readily identifiable by human hearing. However, in the presence of background environment noises, recognition of minute HOI sounds is challenging, though vital for improvement of multi-modality sensor data fusion in Persistent Surveillance Systems (PSS). Identification of HOI sound signatures can be used as a precursor to detection of pertinent threats that other sensor modalities may otherwise fail to detect. In this paper, we present a robust method for detection and classification of HOI events via clustering of features extracted from training HOI acoustic sound waves. In this approach, salient sound events are preliminarily identified and segmented from the background via a sound energy tracking method. Following this segmentation, the frequency spectral pattern of each sound event is modeled and its features are extracted to form a feature vector for training. To reduce the dimensionality of the training feature space and expedite classification of test feature vectors, a Principal Component Analysis (PCA) technique is employed; kd-tree and Random Forest classifiers are then trained for rapid classification of the sound waves. Each classifier employs a different similarity distance matching technique for classification. The performance of the classifiers is compared on a batch of training HOI acoustic signatures. Furthermore, to facilitate semantic annotation of acoustic sound events, a scheme based on Transducer Mockup Language (TML) is proposed. The results demonstrate that the proposed approach is both reliable and effective, and can be extended to future PSS applications.
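
    A minimal sketch of the pipeline outlined above (energy-based segmentation, spectral features, PCA, and kd-tree / Random Forest classification) using NumPy, SciPy, and scikit-learn; the thresholds, frame sizes, Welch-spectrum features, and placeholder training data are assumptions rather than the authors' settings.

    ```python
    # Sketch of an HOI sound pipeline: segment energetic events, extract a
    # log-spectral feature vector, reduce with PCA, classify with kd-tree and
    # Random Forest. All parameters and data below are illustrative.
    import numpy as np
    from scipy.signal import welch
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neighbors import KNeighborsClassifier  # kd-tree backed

    def segment_events(x, fs, frame=1024, thresh_db=10.0):
        """Return (start, end) indices of frames whose energy exceeds the
        median frame energy by thresh_db (a simple energy-tracking stand-in)."""
        n = len(x) // frame
        energy = np.array([np.sum(x[i*frame:(i+1)*frame] ** 2) for i in range(n)])
        mask = 10*np.log10(energy + 1e-12) > 10*np.log10(np.median(energy) + 1e-12) + thresh_db
        return [(i*frame, (i+1)*frame) for i in np.flatnonzero(mask)]

    def spectral_features(x, fs):
        f, p = welch(x, fs=fs, nperseg=256)
        return np.log(p + 1e-12)            # log power spectrum as feature vector

    # Placeholder training set: in practice each row would come from a segment
    # returned by segment_events() on a labeled HOI recording.
    rng = np.random.default_rng(0)
    fs = 16000
    X = np.stack([spectral_features(rng.standard_normal(4096), fs) for _ in range(200)])
    y = rng.integers(0, 4, 200)             # 4 hypothetical HOI classes

    pca = PCA(n_components=10).fit(X)
    Xr = pca.transform(X)
    rf = RandomForestClassifier(n_estimators=100).fit(Xr, y)
    kd = KNeighborsClassifier(algorithm="kd_tree").fit(Xr, y)
    ```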

  3. Methods and apparatus for multi-parameter acoustic signature inspection

    DOEpatents

    Diaz, Aaron A [Richland, WA]; Samuel, Todd J [Pasco, WA]; Valencia, Juan D [Kennewick, WA]; Gervais, Kevin L [Richland, WA]; Tucker, Brian J [Pasco, WA]; Kirihara, Leslie J [Richland, WA]; Skorpik, James R [Kennewick, WA]; Reid, Larry D [Benton City, WA]; Munley, John T [Benton City, WA]; Pappas, Richard A [Richland, WA]; Wright, Bob W [West Richland, WA]; Panetta, Paul D [Richland, WA]; Thompson, Jason S [Richland, WA]

    2007-07-24

    A multiparameter acoustic signature inspection device and method are described for non-invasive inspection of containers. Dual acoustic signatures discriminate between various fluids and materials for identification of the same.

  4. Door latching recognition apparatus and process

    DOEpatents

    Eakle, Jr., Robert F.

    2012-05-15

    An acoustic door latch detector is provided in which a sound recognition sensor is integrated into a door or door lock mechanism. The programmable sound recognition sensor can be trained to recognize the acoustic signature of the door and door lock mechanism being properly engaged and secured. The acoustic sensor will signal a first indicator indicating that proper closure was detected or sound an alarm condition if the proper acoustic signature is not detected within a predetermined time interval.
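
    A rough sketch of the idea in this patent abstract, assuming template matching by normalized cross-correlation and a fixed time-out; the threshold, chunking scheme, and function names are hypothetical and not taken from the patent.

    ```python
    # Illustrative sketch (not the patented implementation): correlate incoming
    # audio against a stored latch-closure template; report closure on a strong
    # match, or raise an alarm if no match occurs within the allotted window.
    import numpy as np

    def matches_template(chunk, template, threshold=0.8):
        """True if the normalized cross-correlation peak exceeds threshold."""
        c = np.correlate(chunk, template, mode="valid")
        denom = np.linalg.norm(template) * np.sqrt(
            np.convolve(chunk ** 2, np.ones(len(template)), mode="valid") + 1e-12)
        return np.max(c / denom) > threshold

    def monitor(chunks, template, max_chunks=50):
        """chunks: iterable of audio frames captured after the door starts closing."""
        for i, chunk in enumerate(chunks):
            if matches_template(chunk, template):
                return "closure detected"
            if i >= max_chunks:
                break
        return "ALARM: latch signature not detected in time"
    ```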

  5. Detection and Identification of Acoustic Signatures

    DTIC Science & Technology

    2011-08-01

    ...represent complex scenarios such as urban scenes with multiple sources in the soundscape and significant amounts of reverberation and diffraction effects ... soundscape. In either case it is necessary to understand that the probability of detection is a function of both the vehicle acoustic signature and ... the acoustic masking component, or soundscape, and that both signals must be defined in greater depth than overall level or average frequency

  6. Acoustic/infrasonic rocket engine signatures

    NASA Astrophysics Data System (ADS)

    Tenney, Stephen M.; Noble, John M.; Whitaker, Rodney W.; ReVelle, Douglas O.

    2003-09-01

    Infrasonics offers the potential of long-range acoustic detection of explosions, missiles, and even sounds created by manufacturing plants. The atmosphere attenuates acoustic energy above 20 Hz quite rapidly, but signals below 10 Hz can propagate to long ranges. Space shuttle launches have been detected infrasonically from over 1000 km away and the Concorde airliner from over 400 km. This technology is based on microphones designed to respond to frequencies from 0.1 to 300 Hz that can be operated outdoors for extended periods of time without degrading their performance. The US Army Research Laboratory and Los Alamos National Laboratory have collected acoustic and infrasonic signatures of static engine testing of two missiles. Signatures were collected of a SCUD missile engine at Huntsville, AL and a Minuteman engine at Edwards AFB. The engines were fixed vertically in a test stand during the burn. We will show the typical time waveform signals of these static tests and spectrograms for each type. High-resolution, 24-bit data were collected at 512 Hz and 16-bit acoustic data at 10 kHz. Edwards data were recorded at 250 Hz and 50 Hz using a Geotech Instruments 24-bit digitizer. Ranges from the test stand varied from 1 km to 5 km. Low-level and upper-level meteorological data were collected to provide full details of atmospheric propagation during the engine test. Infrasonic measurements were made with the Chaparral Physics Model 2 microphone with a porous garden hose attached for wind noise suppression. A B&K microphone was used for high-frequency acoustic measurements. Results show primarily a broadband signal with distinct initiation and completion points. There appear to be features present in the signals that would allow identification of missile type. At 5 km the acoustic/infrasonic signal was clearly present. Detection ranges for the types of missile signatures measured will be predicted based on atmospheric modeling. As part of an experiment conducted by ARL
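
    A small sketch of the spectrogram analysis such static-test records would typically undergo; the 512 Hz rate follows the abstract's infrasound channel, but the window parameters and the synthetic signal are assumptions.

    ```python
    # Sketch: spectrogram of an infrasonic channel and extraction of the
    # low-frequency band. The signal here is a random placeholder for a
    # recorded engine burn; window settings are illustrative.
    import numpy as np
    from scipy.signal import spectrogram

    fs = 512                                   # infrasonic channel rate (per abstract)
    t = np.arange(0, 60, 1 / fs)               # one minute of synthetic data
    x = np.random.randn(t.size)                # placeholder for the recorded burn

    f, tt, Sxx = spectrogram(x, fs=fs, nperseg=1024, noverlap=768)
    low = f <= 20                              # keep the infrasonic / low-frequency band
    print(Sxx[low].shape)                      # (n_low_freq_bins, n_time_frames)
    ```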

  7. Signature Verification Based on Handwritten Text Recognition

    NASA Astrophysics Data System (ADS)

    Viriri, Serestina; Tapamo, Jules-R.

    Signatures continue to be an important biometric trait because they remain widely used to authenticate the identity of human beings. This paper presents an efficient text-based directional signature recognition algorithm which verifies signatures, even when they are composed of special unconstrained cursive characters which are superimposed and embellished. This algorithm extends the character-based signature verification technique. The experiments carried out on the GPDS signature database and an additional database created from signatures captured using the ePadInk tablet show that the approach is effective and efficient, with a positive verification rate of 94.95%.

  8. Hybrid Speaker Recognition Using Universal Acoustic Model

    NASA Astrophysics Data System (ADS)

    Nishimura, Jun; Kuroda, Tadahiro

    We propose a novel speaker recognition approach using a speaker-independent universal acoustic model (UAM) for sensornet applications. In sensornet applications such as “Business Microscope”, interactions among knowledge workers in an organization can be visualized by sensing face-to-face communication using wearable sensor nodes. In conventional studies, speakers are detected by comparing the energy of input speech signals among the nodes. However, there are often synchronization errors among the nodes which degrade the speaker recognition performance. By focusing on properties of the speaker's acoustic channel, UAM can provide robustness against the synchronization error. The overall speaker recognition accuracy is improved by combining UAM with the energy-based approach. For 0.1 s speech inputs and 4 subjects, a speaker recognition accuracy of 94% is achieved for synchronization errors of less than 100 ms.

  9. Conic section function neural network circuitry for offline signature recognition.

    PubMed

    Erkmen, Burcu; Kahraman, Nihan; Vural, Revna A; Yildirim, Tulay

    2010-04-01

    In this brief, conic section function neural network (CSFNN) circuitry was designed for offline signature recognition. CSFNN is a unified framework for multilayer perceptron (MLP) and radial basis function (RBF) networks that makes simultaneous use of the advantages of both. The CSFNN circuitry architecture was developed using a mixed-mode circuit implementation. The designed circuit system is problem independent. Hence, the general-purpose neural network circuit system can be applied to various pattern recognition problems with different network sizes, up to a maximum network size of 16-16-8. In this brief, the CSFNN circuitry system has been applied to two different signature recognition problems. The CSFNN circuitry was trained with a chip-in-the-loop learning technique in order to compensate for typical analog process variations. The CSFNN hardware achieved computational performance highly comparable to that of the CSFNN software for nonlinear signature recognition problems.

  10. Speech recognition: Acoustic phonetic and lexical knowledge representation

    NASA Astrophysics Data System (ADS)

    Zue, V. W.

    1983-02-01

    The purpose of this program is to develop a speech data base facility under which the acoustic characteristics of speech sounds in various contexts can be studied conveniently; investigate the phonological properties of a large lexicon of, say, 10,000 words, and determine to what extent the phonotactic constraints can be utilized in speech recognition; study the acoustic cues that are used to mark word boundaries; develop a test bed in the form of a large-vocabulary IWR system to study the interactions of acoustic, phonetic, and lexical knowledge; and develop a limited continuous speech recognition system with the goal of recognizing any English word from its spelling in order to assess the interactions of higher-level knowledge sources.

  11. Online signature recognition using principal component analysis and artificial neural network

    NASA Astrophysics Data System (ADS)

    Hwang, Seung-Jun; Park, Seung-Je; Baek, Joong-Hwan

    2016-12-01

    In this paper, we propose an algorithm for on-line signature recognition using fingertip points traced in the air, obtained from the depth image acquired by Kinect. We extract 10 statistical features from the X, Y, and Z axes, which are invariant to shifting and scaling of the signature trajectories in three-dimensional space. An artificial neural network is adopted to solve the complex signature classification problem. The 30-dimensional features are converted into 10 principal components using principal component analysis, which account for 99.02% of the total variance. We implement the proposed algorithm and test it on actual on-line signatures. In experiments, we verify that the proposed method successfully classifies 15 different on-line signatures. Experimental results show a recognition rate of 98.47% when using only 10 feature vectors.
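
    A minimal sketch of the described pipeline (30-dimensional features reduced to 10 principal components, then classified by a neural network) using scikit-learn; the data here are synthetic placeholders, and the network size and training settings are assumptions rather than the paper's configuration.

    ```python
    # Sketch: PCA to 10 components followed by a small MLP classifier.
    # Real features would be the 30 trajectory statistics from Kinect fingertip
    # paths; here they are random placeholders for 15 hypothetical signers.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    X = np.random.randn(15 * 20, 30)           # 15 signers x 20 samples, 30 features each
    y = np.repeat(np.arange(15), 20)

    model = make_pipeline(PCA(n_components=10),
                          MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000))
    model.fit(X, y)
    print(model.score(X, y))                   # training accuracy on the toy data
    ```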

  12. Toward an automated signature recognition toolkit for mission operations

    NASA Technical Reports Server (NTRS)

    Cleghorn, T.; Laird, P.; Perrine, L.; Culbert, C.; Macha, M.; Saul, R.; Hammen, D.; Moebes, T.; Shelton, R.

    1994-01-01

    Signature recognition is the problem of identifying an event or events from its time series. The generic problem has numerous applications to science and engineering. At NASA's Johnson Space Center, for example, mission control personnel, using electronic displays and strip chart recorders, monitor telemetry data from three-phase electrical buses on the Space Shuttle and maintain records of device activation and deactivation. Since few electrical devices have sensors to indicate their actual status, changes of state are inferred from characteristic current and voltage fluctuations. Controllers recognize these events both by examining the waveform signatures and by listening to audio channels between ground and crew. Recently the authors have developed a prototype system that identifies major electrical events from the telemetry and displays them on a workstation. Eventually the system will be able to identify accurately the signatures of over fifty distinct events in real time, while contending with noise, intermittent loss of signal, overlapping events, and other complications. This system is just one of many possible signature recognition applications in Mission Control. While much of the technology underlying these applications is the same, each application has unique data characteristics, and every control position has its own interface and performance requirements. There is a need, therefore, for CASE tools that can reduce the time to implement a running signature recognition application from months to weeks or days. This paper describes our work to date and our future plans.

  13. Toward an automated signature recognition toolkit for mission operations

    NASA Astrophysics Data System (ADS)

    Cleghorn, T.; Laird, P.; Perrine, L.; Culbert, C.; Macha, M.; Saul, R.; Hammen, D.; Moebes, T.; Shelton, R.

    1994-10-01

    Signature recognition is the problem of identifying an event or events from its time series. The generic problem has numerous applications to science and engineering. At NASA's Johnson Space Center, for example, mission control personnel, using electronic displays and strip chart recorders, monitor telemetry data from three-phase electrical buses on the Space Shuttle and maintain records of device activation and deactivation. Since few electrical devices have sensors to indicate their actual status, changes of state are inferred from characteristic current and voltage fluctuations. Controllers recognize these events both by examining the waveform signatures and by listening to audio channels between ground and crew. Recently the authors have developed a prototype system that identifies major electrical events from the telemetry and displays them on a workstation. Eventually the system will be able to identify accurately the signatures of over fifty distinct events in real time, while contending with noise, intermittent loss of signal, overlapping events, and other complications. This system is just one of many possible signature recognition applications in Mission Control. While much of the technology underlying these applications is the same, each application has unique data characteristics, and every control position has its own interface and performance requirements. There is a need, therefore, for CASE tools that can reduce the time to implement a running signature recognition application from months to weeks or days. This paper describes our work to date and our future plans.

  14. Acoustic interference and recognition space within a complex assemblage of dendrobatid frogs

    PubMed Central

    Amézquita, Adolfo; Flechas, Sandra Victoria; Lima, Albertina Pimentel; Gasser, Herbert; Hödl, Walter

    2011-01-01

    In species-rich assemblages of acoustically communicating animals, heterospecific sounds may constrain not only the evolution of signal traits but also the much less-studied signal-processing mechanisms that define the recognition space of a signal. To test the hypothesis that the recognition space is optimally designed, i.e., that it is narrower toward the species that represent the higher potential for acoustic interference, we studied an acoustic assemblage of 10 diurnally active frog species. We characterized their calls, estimated pairwise correlations in calling activity, and, to model the recognition spaces of five species, conducted playback experiments with 577 synthetic signals on 531 males. Acoustic co-occurrence was not related to multivariate distance in call parameters, suggesting a minor role for spectral or temporal segregation among species uttering similar calls. In most cases, the recognition space overlapped but was greater than the signal space, indicating that signal-processing traits do not act as strictly matched filters against sounds other than homospecific calls. Indeed, the range of the recognition space was strongly predicted by the acoustic distance to neighboring species in the signal space. Thus, our data provide compelling evidence of a role of heterospecific calls in evolutionarily shaping the frogs' recognition space within a complex acoustic assemblage without obvious concomitant effects on the signal. PMID:21969562

  15. A static acoustic signature system for the analysis of dynamic flight information

    NASA Technical Reports Server (NTRS)

    Ramer, D. J.

    1978-01-01

    The Army family of helicopters was analyzed to measure the polar octave-band acoustic signature in various modes of flight. A static array of calibrated microphones was used to simultaneously acquire the signature and the differential times required to mathematically position the aircraft in space. The signature was then reconstructed and mathematically normalized to a fixed radius around the aircraft.
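
    A minimal sketch of estimating the differential arrival time between two microphones by cross-correlation, the basic quantity such a static array uses to position the aircraft; the sample rate, delay, and synthetic signal are assumptions.

    ```python
    # Sketch: time-difference-of-arrival (TDOA) between two microphones via the
    # cross-correlation peak. In practice, several such delays over the array
    # are combined to solve for the aircraft position.
    import numpy as np

    def tdoa(sig_a, sig_b, fs):
        """Delay (s) of sig_a relative to sig_b (positive if sig_a lags)."""
        corr = np.correlate(sig_a, sig_b, mode="full")
        lag = np.argmax(corr) - (len(sig_b) - 1)
        return lag / fs

    fs = 8000
    rng = np.random.default_rng(1)
    src = rng.standard_normal(fs)              # one second of broadband rotor noise
    delay = 23                                  # samples; unknown in practice
    mic_a = src
    mic_b = np.roll(src, delay)                 # same signal arriving 23 samples later
    print(tdoa(mic_b, mic_a, fs))               # ~ delay / fs seconds
    ```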

  16. In-situ acoustic signature monitoring in additive manufacturing processes

    NASA Astrophysics Data System (ADS)

    Koester, Lucas W.; Taheri, Hossein; Bigelow, Timothy A.; Bond, Leonard J.; Faierson, Eric J.

    2018-04-01

    Additive manufacturing is a rapidly maturing process for the production of complex metallic, ceramic, polymeric, and composite components. The processes used are numerous, and the complex geometries involved can make quality control, standardization of the process, and inspection difficult. Acoustic emission measurements have been used previously to monitor a number of processes, including machining and welding. The authors have identified acoustic signature measurement as a potential means of monitoring metal additive manufacturing processes using process noise characteristics and those discrete acoustic emission events characteristic of defect growth, including cracks and delamination. Results of acoustic monitoring for a metal additive manufacturing process (directed energy deposition) are reported. The work investigated correlations between acoustic emissions and process noise with variations in machine state and deposition parameters, and provided proof-of-concept data that such correlations do exist.

  17. Operational Parameters in Acoustic Signature Inspection of Railroad Wheels

    DOT National Transportation Integrated Search

    1980-04-01

    A brief summary is given of some prior studies which established the feasibility of using acoustic signatures for inspection of railroad wheels. The purpose of the present work was to elucidate operational parameters which would be of importance for ...

  18. Feasibility of Flaw Detection in Railroad Wheels Using Acoustic Signatures

    DOT National Transportation Integrated Search

    1976-10-01

    The feasibility study on the use of acoustic signatures for detection of flaws in railway wheels was conducted with the ultimate objective of development of an intrack device for moving cars. Determinations of the natural modes of vibrating wheels un...

  19. Development of a Transient Acoustic Boundary Element Method to Predict the Noise Signature of Swimming Fish

    NASA Astrophysics Data System (ADS)

    Wagenhoffer, Nathan; Moored, Keith; Jaworski, Justin

    2015-11-01

    Animals have evolved flexible wings and fins to efficiently and quietly propel themselves through the air and water. The design of quiet and efficient bio-inspired propulsive concepts requires a rapid, unified computational framework that integrates three essential features: the fluid mechanics, the elastic structural response, and the noise generation. This study focuses on the development, validation, and demonstration of a transient, two-dimensional acoustic boundary element solver accelerated by a fast multipole algorithm. The resulting acoustic solver is used to characterize the acoustic signature produced by a vortex street advecting over a NACA 0012 airfoil, which is representative of vortex-body interactions that occur in schools of swimming fish. Both 2S and 2P canonical vortex streets generated by fish are investigated over the range of Strouhal numbers 0.2 < St < 0.4, and the acoustic signature of the airfoil is quantified. This study provides the first estimate of the noise signature of a school of swimming fish. Lehigh University CORE Grant.

  20. Fuzzy Intervals for Designing Structural Signature: An Application to Graphic Symbol Recognition

    NASA Astrophysics Data System (ADS)

    Luqman, Muhammad Muzzamil; Delalandre, Mathieu; Brouard, Thierry; Ramel, Jean-Yves; Lladós, Josep

    The motivation behind our work is to present a new methodology for symbol recognition. The proposed method employs a structural approach for representing visual associations in symbols and a statistical classifier for recognition. We vectorize a graphic symbol, encode its topological and geometrical information by an attributed relational graph, and compute a signature from this structural graph. We have addressed the sensitivity of structural representations to noise by using data-adapted fuzzy intervals. The joint probability distribution of signatures is encoded by a Bayesian network, which serves as a mechanism for pruning irrelevant features and choosing a subset of interesting features from the structural signatures of the underlying symbol set. The Bayesian network is deployed in a supervised learning scenario for recognizing query symbols. The method has been evaluated for robustness against degradations & deformations on pre-segmented 2D linear architectural & electronic symbols from the GREC databases, and for its recognition abilities on symbols with context noise, i.e., cropped symbols.

  1. Artificial neural networks for acoustic target recognition

    NASA Astrophysics Data System (ADS)

    Robertson, James A.; Mossing, John C.; Weber, Bruce A.

    1995-04-01

    Acoustic sensors can be used to detect, track, and identify non-line-of-sight targets passively. Attempts to alter acoustic emissions often result in an undesirable performance degradation. This research project investigates the use of neural networks for differentiating between features extracted from the acoustic signatures of sources. Acoustic data were filtered and digitized using a commercially available analog-to-digital converter. The digital data were transformed to the frequency domain for additional processing using the FFT. Narrowband peak detection algorithms were incorporated to select peaks above a user-defined SNR. These peaks were then used to generate a set of robust features which relate specifically to target components in varying background conditions. The features were then used as input into a backpropagation neural network. A K-means unsupervised clustering algorithm was used to determine the natural clustering of the observations. Comparisons were made between a feature set consisting of the normalized amplitudes of the first 250 frequency bins of the power spectrum and a set of 11 harmonically related features. Initial results indicate that even though some different target types had a tendency to group in the same clusters, the neural network was able to differentiate the targets. Successful identification of acoustic sources under varying operational conditions with high confidence levels was achieved.
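
    A small sketch of the front end described above (FFT of a digitized frame, then narrowband peak picking above a user-defined SNR); the SNR definition relative to the median noise floor, the window, and the test tones are assumptions.

    ```python
    # Sketch: FFT magnitude spectrum and narrowband peak picking above a
    # user-defined SNR, the step that feeds the harmonic feature set.
    import numpy as np
    from scipy.signal import find_peaks

    def narrowband_peaks(frame, fs, snr_db=12.0):
        spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        freqs = np.fft.rfftfreq(len(frame), 1 / fs)
        noise_floor = np.median(spec)                      # crude noise estimate
        idx, _ = find_peaks(spec, height=noise_floor * 10 ** (snr_db / 20))
        return freqs[idx], spec[idx]

    fs = 4096
    t = np.arange(fs) / fs
    x = np.sin(2*np.pi*60*t) + 0.5*np.sin(2*np.pi*180*t) + 0.05*np.random.randn(fs)
    print(narrowband_peaks(x, fs)[0])          # ~ [60., 180.] Hz
    ```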

  2. Unvoiced Speech Recognition Using Tissue-Conductive Acoustic Sensor

    NASA Astrophysics Data System (ADS)

    Heracleous, Panikos; Kaino, Tomomi; Saruwatari, Hiroshi; Shikano, Kiyohiro

    2006-12-01

    We present the use of stethoscope and silicon NAM (nonaudible murmur) microphones in automatic speech recognition. NAM microphones are special acoustic sensors, which are attached behind the talker's ear and can capture not only normal (audible) speech, but also very quietly uttered speech (nonaudible murmur). As a result, NAM microphones can be applied in automatic speech recognition systems when privacy is desired in human-machine communication. Moreover, NAM microphones show robustness against noise and they might be used in special systems (speech recognition, speech transform, etc.) for sound-impaired people. Using adaptation techniques and a small amount of training data, we achieved, for a 20 k dictation task, a word accuracy [value omitted in this record] for nonaudible murmur recognition in a clean environment. In this paper, we also investigate nonaudible murmur recognition in noisy environments and the effect of the Lombard reflex on nonaudible murmur recognition. We also propose three methods to integrate audible speech and nonaudible murmur recognition using a stethoscope NAM microphone with very promising results.

  3. Methods and apparatus for non-acoustic speech characterization and recognition

    DOEpatents

    Holzrichter, John F.

    1999-01-01

    By simultaneously recording EM wave reflections and acoustic speech information, the positions and velocities of the speech organs as speech is articulated can be defined for each acoustic speech unit. Well defined time frames and feature vectors describing the speech, to the degree required, can be formed. Such feature vectors can uniquely characterize the speech unit being articulated each time frame. The onset of speech, rejection of external noise, vocalized pitch periods, articulator conditions, accurate timing, the identification of the speaker, acoustic speech unit recognition, and organ mechanical parameters can be determined.

  4. Methods and apparatus for non-acoustic speech characterization and recognition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holzrichter, J.F.

    By simultaneously recording EM wave reflections and acoustic speech information, the positions and velocities of the speech organs as speech is articulated can be defined for each acoustic speech unit. Well defined time frames and feature vectors describing the speech, to the degree required, can be formed. Such feature vectors can uniquely characterize the speech unit being articulated each time frame. The onset of speech, rejection of external noise, vocalized pitch periods, articulator conditions, accurate timing, the identification of the speaker, acoustic speech unit recognition, and organ mechanical parameters can be determined.

  5. Speech recognition: Acoustic-phonetic knowledge acquisition and representation

    NASA Astrophysics Data System (ADS)

    Zue, Victor W.

    1988-09-01

    The long-term research goal is to develop and implement speaker-independent continuous speech recognition systems. It is believed that the proper utilization of speech-specific knowledge is essential for such advanced systems. This research is thus directed toward the acquisition, quantification, and representation of acoustic-phonetic and lexical knowledge, and the application of this knowledge to speech recognition algorithms. In addition, we are exploring new speech recognition alternatives based on artificial intelligence and connectionist techniques. We developed a statistical model for predicting the acoustic realization of stop consonants in various positions in the syllable template. A unification-based grammatical formalism was developed for incorporating this model into the lexical access algorithm. We provided an information-theoretic justification for the hierarchical structure of the syllable template. We analyzed segment durations for vowels and fricatives in continuous speech. Based on contextual information, we developed durational models for vowels and fricatives that account for over 70 percent of the variance, using data from multiple, unknown speakers. We rigorously evaluated the ability of human spectrogram readers to identify stop consonants spoken by many talkers and in a variety of phonetic contexts. Incorporating the declarative knowledge used by the readers, we developed a knowledge-based system for stop identification. We achieved system performance comparable to that of the readers.

  6. Acoustic signature of thunder from seismic records

    NASA Astrophysics Data System (ADS)

    Kappus, Mary E.; Vernon, Frank L.

    1991-06-01

    Thunder, the sound wave through the air associated with lightning, transfers sufficient energy to the ground to trigger seismometers set to record regional earthquakes. The acoustic signature recorded on seismometers, in the form of ground velocity as a function of time, contains the same type features as pressure variations recorded with microphones in air. At a seismic station in Kislovodsk, USSR, a nearly direct lightning strike caused electronic failure of borehole instruments while leaving a brief impulsive acoustic signature on the surface instruments. The peak frequency of 25-55 Hz is consistent with previously published values for cloud-to-ground lightning strikes, but spectra from this station are contaminated by very strong wind noise in this band. A thunderstorm near a similar station in Karasu triggered more than a dozen records of individual lightning strikes during a 2-hour period. The spectra for these events are fairly broadband, with peaks at low frequencies, varying from 6 to 13 Hz. The spectra were all computed by multitaper analysis, which deals appropriately with the nonstationary thunder signal. These independent measurements of low-frequency peaks corroborate the occasional occurrences in traditional microphone records, but a theory concerning the physical mechanism to account for them is still in question. Examined separately, the individual claps in each record have similar frequency distributions, discounting a need for multiple mechanisms to explain different phases of the thunder sequence. Particle motion, determined from polarization analysis of the three-component records, is predominantly vertical downward, with smaller horizontal components indicative of the direction to the lightning bolt. In three of the records the azimuth to the lightning bolt changes with time, confirming a significant horizontal component to the lightning channel itself.
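
    A minimal sketch of a multitaper spectral estimate of the kind applied to these thunder records, averaging periodograms over DPSS (Slepian) tapers via SciPy; the time-bandwidth product, taper count, sample rate, and synthetic input are assumptions, not the authors' settings.

    ```python
    # Sketch: simple multitaper PSD estimate by averaging tapered periodograms,
    # suitable for short, nonstationary records such as individual thunder claps.
    import numpy as np
    from scipy.signal.windows import dpss

    def multitaper_psd(x, fs, NW=4, K=7):
        tapers = dpss(len(x), NW, Kmax=K)            # K orthogonal Slepian tapers
        specs = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
        psd = specs.mean(axis=0) / fs
        freqs = np.fft.rfftfreq(len(x), 1 / fs)
        return freqs, psd

    fs = 200                                          # assumed seismometer sample rate
    x = np.random.randn(fs * 10)                      # placeholder for a thunder record
    f, p = multitaper_psd(x, fs)
    print(f[np.argmax(p)])                            # frequency of the spectral peak
    ```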

  7. A Hybrid Acoustic and Pronunciation Model Adaptation Approach for Non-native Speech Recognition

    NASA Astrophysics Data System (ADS)

    Oh, Yoo Rhee; Kim, Hong Kook

    In this paper, we propose a hybrid model adaptation approach in which pronunciation and acoustic models are adapted by incorporating the pronunciation and acoustic variabilities of non-native speech in order to improve the performance of non-native automatic speech recognition (ASR). Specifically, the proposed hybrid model adaptation can be performed at either the state-tying or triphone-modeling level, depending on the level at which acoustic model adaptation is performed. In both methods, we first analyze the pronunciation variant rules of non-native speakers and then classify each rule as either a pronunciation variant or an acoustic variant. The state-tying level hybrid method then adapts pronunciation models and acoustic models by accommodating the pronunciation variants in the pronunciation dictionary and by clustering the states of triphone acoustic models using the acoustic variants, respectively. On the other hand, the triphone-modeling level hybrid method initially adapts pronunciation models in the same way as in the state-tying level hybrid method; however, for the acoustic model adaptation, the triphone acoustic models are then re-estimated based on the adapted pronunciation models and the states of the re-estimated triphone acoustic models are clustered using the acoustic variants. Korean-spoken English speech recognition experiments show that ASR systems employing the state-tying and triphone-modeling level adaptation methods can reduce the average word error rates (WERs) by a relative 17.1% and 22.1% for non-native speech, respectively, when compared to a baseline ASR system.

  8. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    DOEpatents

    Holzrichter, John F.; Ng, Lawrence C.

    1998-01-01

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as for combined voiced and unvoiced speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function each time frame. The formation of feature vectors defining all acoustic speech units over well defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching.

  9. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    DOEpatents

    Holzrichter, J.F.; Ng, L.C.

    1998-03-17

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as for combined voiced and unvoiced speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function each time frame. The formation of feature vectors defining all acoustic speech units over well defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching. 35 figs.

  10. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holzrichter, J.F.; Ng, L.C.

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as for combined voiced and unvoiced speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function each time frame. The formation of feature vectors defining all acoustic speech units over well defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching. 35 figs.

  11. Audio Tracking in Noisy Environments by Acoustic Map and Spectral Signature.

    PubMed

    Crocco, Marco; Martelli, Samuele; Trucco, Andrea; Zunino, Andrea; Murino, Vittorio

    2018-05-01

    A novel method is proposed for generic target tracking by audio measurements from a microphone array. To cope with noisy environments characterized by persistent and high energy interfering sources, a classification map (CM) based on spectral signatures is calculated by means of a machine learning algorithm. Next, the CM is combined with the acoustic map, describing the spatial distribution of sound energy, in order to obtain a cleaned joint map in which contributions from the disturbing sources are removed. A likelihood function is derived from this map and fed to a particle filter yielding the target location estimation on the acoustic image. The method is tested on two real environments, addressing both speaker and vehicle tracking. The comparison with a couple of trackers, relying on the acoustic map only, shows a sharp improvement in performance, paving the way to the application of audio tracking in real challenging environments.
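
    A toy sketch of the map-combination step described above: the acoustic (energy) map is multiplied by a classification map that down-weights cells dominated by interfering sources, and the cleaned map is normalized into a spatial likelihood that a particle filter can use to weight its particles; the grid size, maps, and particle count are synthetic assumptions.

    ```python
    # Sketch: combine an acoustic energy map with a spectral-signature
    # classification map, then weight particle hypotheses by the cleaned map.
    import numpy as np

    rng = np.random.default_rng(0)
    acoustic_map = rng.random((60, 80))          # sound energy per image cell
    class_map = rng.random((60, 80))             # P(target-like spectrum) per cell

    cleaned = acoustic_map * class_map           # suppress non-target energy
    likelihood = cleaned / cleaned.sum()         # normalize to a spatial likelihood

    # A particle filter update would weight each particle by the likelihood at its cell:
    particles = rng.integers(0, [60, 80], size=(500, 2))
    weights = likelihood[particles[:, 0], particles[:, 1]]
    weights /= weights.sum()
    estimate = (particles * weights[:, None]).sum(axis=0)   # weighted mean position
    print(estimate)
    ```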

  12. Specific acoustic models for spontaneous and dictated style in indonesian speech recognition

    NASA Astrophysics Data System (ADS)

    Vista, C. B.; Satriawan, C. H.; Lestari, D. P.; Widyantoro, D. H.

    2018-03-01

    The performance of an automatic speech recognition system is affected by differences in speech style between the data the model is originally trained upon and incoming speech to be recognized. In this paper, the usage of GMM-HMM acoustic models for specific speech styles is investigated. We develop two systems for the experiments; the first employs a speech style classifier to predict the speech style of incoming speech, either spontaneous or dictated, then decodes this speech using an acoustic model specifically trained for that speech style. The second system uses both acoustic models to recognise incoming speech and decides upon a final result by calculating a confidence score of decoding. Results show that training specific acoustic models for spontaneous and dictated speech styles confers a slight recognition advantage as compared to a baseline model trained on a mixture of spontaneous and dictated training data. In addition, the speech style classifier approach of the first system produced slightly more accurate results than the confidence scoring employed in the second system.

  13. Dynamic Gesture Recognition with a Terahertz Radar Based on Range Profile Sequences and Doppler Signatures

    PubMed Central

    Pi, Yiming

    2017-01-01

    The frequency of terahertz radar ranges from 0.1 THz to 10 THz, which is higher than that of microwaves. Multi-modal signals, including high-resolution range profile (HRRP) and Doppler signatures, can be acquired by the terahertz radar system. These two kinds of information are commonly used in automatic target recognition; however, dynamic gesture recognition is rarely discussed in the terahertz regime. In this paper, a dynamic gesture recognition system using a terahertz radar is proposed, based on multi-modal signals. The HRRP sequences and Doppler signatures were first obtained from the radar echoes. Considering the electromagnetic scattering characteristics, a feature extraction model is designed using location parameter estimation of scattering centers. Dynamic Time Warping (DTW) extended to multi-modal signals is used to accomplish the classifications. Ten types of gesture signals, collected from a terahertz radar, are applied to validate the analysis and the recognition system. The results of the experiment indicate that the recognition rate reaches more than 91%. This research verifies the potential applications of dynamic gesture recognition using a terahertz radar. PMID:29267249
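
    A minimal dynamic time warping sketch, the alignment step that the paper extends to multi-modal (HRRP plus Doppler) sequences; real inputs would be per-frame radar feature vectors, whereas the sequences here are synthetic scalars.

    ```python
    # Sketch: classic DTW distance between a query gesture and a stored template.
    import numpy as np

    def dtw_distance(a, b):
        """O(len(a)*len(b)) DTW with Euclidean local cost."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(np.atleast_1d(a[i - 1]) - np.atleast_1d(b[j - 1]))
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    template = np.sin(np.linspace(0, 3, 40))          # stored gesture signature
    query = np.sin(np.linspace(0, 3, 55))             # same gesture, different speed
    print(dtw_distance(query, template))              # small despite the length mismatch
    ```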

  14. Dynamic Gesture Recognition with a Terahertz Radar Based on Range Profile Sequences and Doppler Signatures.

    PubMed

    Zhou, Zhi; Cao, Zongjie; Pi, Yiming

    2017-12-21

    The frequency of terahertz radar ranges from 0.1 THz to 10 THz, which is higher than that of microwaves. Multi-modal signals, including high-resolution range profile (HRRP) and Doppler signatures, can be acquired by the terahertz radar system. These two kinds of information are commonly used in automatic target recognition; however, dynamic gesture recognition is rarely discussed in the terahertz regime. In this paper, a dynamic gesture recognition system using a terahertz radar is proposed, based on multi-modal signals. The HRRP sequences and Doppler signatures were first obtained from the radar echoes. Considering the electromagnetic scattering characteristics, a feature extraction model is designed using location parameter estimation of scattering centers. Dynamic Time Warping (DTW) extended to multi-modal signals is used to accomplish the classifications. Ten types of gesture signals, collected from a terahertz radar, are applied to validate the analysis and the recognition system. The results of the experiment indicate that the recognition rate reaches more than 91%. This research verifies the potential applications of dynamic gesture recognition using a terahertz radar.

  15. Predicting word-recognition performance in noise by young listeners with normal hearing using acoustic, phonetic, and lexical variables.

    PubMed

    McArdle, Rachel; Wilson, Richard H

    2008-06-01

    To analyze the 50% correct recognition data that were from the Wilson et al. (this issue) study and that were obtained from 24 listeners with normal hearing; also to examine whether acoustic, phonetic, or lexical variables can predict recognition performance for monosyllabic words presented in speech-spectrum noise. The specific variables are as follows: (a) acoustic variables (i.e., effective root-mean-square sound pressure level, duration), (b) phonetic variables (i.e., consonant features such as manner, place, and voicing for initial and final phonemes; vowel phonemes), and (c) lexical variables (i.e., word frequency, word familiarity, neighborhood density, neighborhood frequency). The descriptive, correlational study will examine the influence of acoustic, phonetic, and lexical variables on speech recognition in noise performance. Regression analysis demonstrated that 45% of the variance in the 50% point was accounted for by acoustic and phonetic variables whereas only 3% of the variance was accounted for by lexical variables. These findings suggest that monosyllabic word-recognition-in-noise is more dependent on bottom-up processing than on top-down processing. The results suggest that when speech-in-noise testing is used in a pre- and post-hearing-aid-fitting format, the use of monosyllabic words may be sensitive to changes in audibility resulting from amplification.

  16. Target recognition based on the moment functions of radar signatures

    NASA Astrophysics Data System (ADS)

    Kim, Kyung-Tae; Kim, Hyo-Tae

    2002-03-01

    In this paper, we present the results of target recognition research based on the moment functions of various radar signatures, such as time-frequency signatures, range profiles, and scattering centers. The proposed approach utilizes geometrical moments or central moments of the obtained radar signatures. In particular, we derived exact and closed form expressions of the geometrical moments of the adaptive Gaussian representation (AGR), which is one of the adaptive joint time-frequency techniques, and also computed the central moments of range profiles and one-dimensional (1-D) scattering centers on a target, which are obtained by various super-resolution techniques. The obtained moment functions are further processed to provide small dimensional and redundancy-free feature vectors, and classified via a neural network approach or a Bayes classifier. The performances of the proposed technique are demonstrated using a simulated radar cross section (RCS) data set, or a measured RCS data set of various scaled aircraft models, obtained at the Pohang University of Science and Technology (POSTECH) compact range facility. Results show that the techniques in this paper can not only provide reliable classification accuracy, but also save computational resources.
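
    A small sketch of moment features computed from a 1-D radar signature: the normalized range profile is treated as a distribution over range bins and its central moments are taken; this illustrates the general idea only, and the paper's exact moment expressions (e.g., for the adaptive Gaussian representation) are not reproduced.

    ```python
    # Sketch: low-dimensional central-moment features of a 1-D range profile.
    import numpy as np

    def central_moments(profile, orders=(2, 3, 4)):
        p = np.abs(profile)
        p = p / p.sum()                               # normalize to a distribution
        bins = np.arange(len(p))
        mean = np.sum(bins * p)
        return np.array([np.sum(((bins - mean) ** k) * p) for k in orders])

    profile = np.exp(-0.5 * ((np.arange(128) - 40) / 3.0) ** 2)   # synthetic scatterer
    print(central_moments(profile))                    # compact, redundancy-free features
    ```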

  17. Relationships between Structural and Acoustic Properties of Maternal Talk and Children's Early Word Recognition

    ERIC Educational Resources Information Center

    Suttora, Chiara; Salerni, Nicoletta; Zanchi, Paola; Zampini, Laura; Spinelli, Maria; Fasolo, Mirco

    2017-01-01

    This study aimed to investigate specific associations between structural and acoustic characteristics of infant-directed (ID) speech and word recognition. Thirty Italian-acquiring children and their mothers were tested when the children were 1;3. Children's word recognition was measured with the looking-while-listening task. Maternal ID speech was…

  18. A New Adaptive Structural Signature for Symbol Recognition by Using a Galois Lattice as a Classifier.

    PubMed

    Coustaty, M; Bertet, K; Visani, M; Ogier, J

    2011-08-01

    In this paper, we propose a new approach for symbol recognition using structural signatures and a Galois lattice as a classifier. The structural signatures are based on topological graphs computed from segments which are extracted from the symbol images by using an adapted Hough transform. These structural signatures, which can be seen as dynamic paths that carry high-level information, are robust to various transformations. They are classified by using a Galois lattice as a classifier. The performance of the proposed approach is evaluated based on the GREC'03 symbol database, and the experimental results we obtain are encouraging.

  19. The Acoustic Signature of Glaciated Margins

    NASA Astrophysics Data System (ADS)

    Newton, A. M. W.; Huuse, M.

    2016-12-01

    As climate warms it has become increasingly clear that, in order to fully understand how it might evolve in the future, we need to look for examples of how climate has changed in the past. The Late Cenozoic history of the Arctic Ocean and its surrounding seas has been dominated by glacial-interglacials cycles. This has resulted in major environmental changes in relative sea levels, ice volumes, sea ice conditions, and ocean circulation as marine and terrestrially-based ice sheets waxed and waned. In this work, the acoustic signatures of several glaciated margins in the Northern Hemisphere are investigated and compared. This includes: NW Greenland, West Greenland, East Greenland, mid-Norway, Northern Norway, and the North Sea. These shelf successions preserve a geomorphological record of multiple glaciations and are imaged using seismic reflection data. To date, the majority of work in these areas has tended to focus on the most recent glaciations, which are well known. Here, the focus of the work is to look at the overall stratigraphic setting and how it influences (and is influenced by) the evolution of ice sheets throughout the glacial succession. Landform records are imaged using seismic data to provide a long-term insight into the styles of glaciation on each margin and what relation this may have had on climate, whilst the stratigraphic architectures across each site demonstrate how the inherited geology and tectonic setting can provide a fundamental control on the ice sheet and depositional styles. For example, Scoresby Sund is characterised by significant aggradation that is likely related to subsidence induced by lithospheric cooling rather than rapid glacial deposition, whilst the subsidence of the mid-Norwegian margin can be related to rapid glacial deposition and trapping of sediments behind inversion structures such as the Helland-Hansen Arch. The insights from this multi-margin study allow for regional, basin-wide, glaciological records to be developed

  20. Mortar and artillery variants classification by exploiting characteristics of the acoustic signature

    NASA Astrophysics Data System (ADS)

    Hohil, Myron E.; Grasing, David; Desai, Sachi; Morcos, Amir

    2007-10-01

    Feature extraction methods based on the discrete wavelet transform and multiresolution analysis facilitate the development of a robust classification algorithm that reliably discriminates mortar and artillery variants via acoustic signals produced during launch/impact events. Acoustic sensors are utilized to exploit the sound waveform generated by the blast for the identification of mortar and artillery variants. Distinct characteristics arise among the different mortar variants because varying HE mortar payloads and related charges emphasize concussive and shrapnel effects upon impact, employing explosions of varying magnitude. The different mortar variants are characterized by variations in the resulting waveform of the event. The waveform holds various harmonic properties distinct to a given mortar/artillery variant that, through advanced signal processing techniques, can be employed to classify a given set. The DWT and other readily available signal processing techniques will be used to extract the predominant components of these characteristics from the acoustic signatures at ranges exceeding 2 km. Exploiting these techniques will help develop a feature set highly independent of range, providing discrimination based on acoustic elements of the blast wave. Highly reliable discrimination will be achieved with a feed-forward neural network classifier trained on a feature space derived from the distribution of wavelet coefficients, the frequency spectrum, and higher-frequency details found within different levels of the multiresolution decomposition. The process described herein extends current technologies, which emphasize multi-modal sensor fusion suites to provide such situational awareness. A twofold problem of energy consumption and line of sight arises with multi-modal sensor suites. The process described within will exploit the acoustic properties of the event to provide variant classification as added situational awareness to the soldier.

  1. Individual vocal signatures in barn owl nestlings: does individual recognition have an adaptive role in sibling vocal competition?

    PubMed

    Dreiss, A N; Ruppli, C A; Roulin, A

    2014-01-01

    To compete over limited parental resources, young animals communicate with their parents and siblings by producing honest vocal signals of need. Components of begging calls that are sensitive to food deprivation may honestly signal need, whereas other components may be associated with individual-specific attributes that do not change with time such as identity, sex, absolute age and hierarchy. In a sib-sib communication system where barn owl (Tyto alba) nestlings vocally negotiate priority access to food resources, we show that calls have individual signatures that are used by nestlings to recognize which siblings are motivated to compete, even if most vocalization features vary with hunger level. Nestlings were more identifiable when food-deprived than food-satiated, suggesting that vocal identity is emphasized when the benefit of winning a vocal contest is higher. In broods where siblings interact iteratively, we speculate that individual-specific signature permits siblings to verify that the most vocal individual in the absence of parents is the one that indeed perceived the food brought by parents. Individual recognition may also allow nestlings to associate identity with individual-specific characteristics such as position in the within-brood dominance hierarchy. Calls indeed revealed age hierarchy and to a lower extent sex and absolute age. Using a cross-fostering experimental design, we show that most acoustic features were related to the nest of origin (but not the nest of rearing), suggesting a genetic or an early developmental effect on the ontogeny of vocal signatures. To conclude, our study suggests that sibling competition has promoted the evolution of vocal behaviours that signal not only hunger level but also intrinsic individual characteristics such as identity, family, sex and age. © 2013 The Authors. Journal of Evolutionary Biology © 2013 European Society For Evolutionary Biology.

  2. Competitive Deep-Belief Networks for Underwater Acoustic Target Recognition

    PubMed Central

    Shen, Sheng; Yao, Xiaohui; Sheng, Meiping; Wang, Chen

    2018-01-01

    Underwater acoustic target recognition based on ship-radiated noise belongs to the small-sample-size recognition problems. A competitive deep-belief network is proposed to learn features with more discriminative information from labeled and unlabeled samples. The proposed model consists of four stages: (1) A standard restricted Boltzmann machine is pretrained using a large number of unlabeled data to initialize its parameters; (2) the hidden units are grouped according to categories, which provides an initial clustering model for competitive learning; (3) competitive training and back-propagation algorithms are used to update the parameters to accomplish the task of clustering; (4) by applying layer-wise training and supervised fine-tuning, a deep neural network is built to obtain features. Experimental results show that the proposed method can achieve classification accuracy of 90.89%, which is 8.95% higher than the accuracy obtained by the compared methods. In addition, the highest accuracy of our method is obtained with fewer features than other methods. PMID:29570642

  3. Acoustic landmarks contain more information about the phone string than other frames for automatic speech recognition with deep neural network acoustic model

    NASA Astrophysics Data System (ADS)

    He, Di; Lim, Boon Pang; Yang, Xuesong; Hasegawa-Johnson, Mark; Chen, Deming

    2018-06-01

    Most mainstream Automatic Speech Recognition (ASR) systems consider all feature frames equally important. However, acoustic landmark theory is based on a contradictory idea, that some frames are more important than others. Acoustic landmark theory exploits quantal non-linearities in the articulatory-acoustic and acoustic-perceptual relations to define landmark times at which the speech spectrum abruptly changes or reaches an extremum; frames overlapping landmarks have been demonstrated to be sufficient for speech perception. In this work, we conduct experiments on the TIMIT corpus, with both GMM- and DNN-based ASR systems, and find that frames containing landmarks are more informative for ASR than others. We find that altering the level of emphasis on landmarks by re-weighting the acoustic likelihood tends to reduce the phone error rate (PER). Furthermore, by leveraging the landmarks as a heuristic, one of our hybrid DNN frame-dropping strategies maintained a PER within 0.44% of optimal when scoring less than half (45.8% to be precise) of the frames. This hybrid strategy outperforms other non-heuristic-based methods and demonstrates the potential of landmarks for reducing computation.
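
    A toy sketch of the re-weighting idea mentioned above: boost the acoustic log-likelihoods of frames that overlap detected landmarks before decoding; the weight value, mask, and array shapes are assumptions, since the paper tunes the emphasis empirically.

    ```python
    # Sketch: scale per-frame acoustic log-likelihoods at landmark frames.
    import numpy as np

    def reweight_loglik(loglik, landmark_mask, weight=1.5):
        """loglik: (frames, states) acoustic log-likelihoods;
        landmark_mask: boolean per frame, True where a landmark overlaps."""
        scale = np.where(landmark_mask, weight, 1.0)
        return loglik * scale[:, None]

    loglik = np.random.randn(100, 40)                 # placeholder DNN outputs
    mask = np.zeros(100, dtype=bool)
    mask[::10] = True                                  # hypothetical landmark frames
    print(reweight_loglik(loglik, mask).shape)
    ```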

  4. Truck acoustic data analyzer system

    DOEpatents

    Haynes, Howard D.; Akerman, Alfred; Ayers, Curtis W.

    2006-07-04

    A passive vehicle acoustic data analyzer system having at least one microphone disposed in the acoustic field of a moving vehicle and a computer in electronic communication with the microphone(s). The computer detects and measures the frequency shift in the acoustic signature emitted by the vehicle as it approaches and passes the microphone(s). The acoustic signature of a truck driving by a microphone can provide enough information to estimate the truck speed in miles-per-hour (mph), engine speed in rotations-per-minute (RPM), turbocharger speed in RPM, and vehicle weight.
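
    A worked example of the Doppler relation behind this abstract: for a tonal component from a vehicle passing a stationary microphone, the approach and recede frequencies give the vehicle speed; the frequencies below are illustrative, not measured values.

    ```python
    # Worked example: vehicle speed from the Doppler shift of a single tone
    # observed before and after the vehicle passes a stationary microphone.
    c = 343.0                      # speed of sound in air, m/s
    f_approach = 105.0             # Hz, tone while the truck approaches (assumed)
    f_recede = 95.0                # Hz, same tone while it recedes (assumed)

    # f_approach = f0 * c / (c - v),  f_recede = f0 * c / (c + v)
    # =>  v = c * (f_approach - f_recede) / (f_approach + f_recede)
    v = c * (f_approach - f_recede) / (f_approach + f_recede)
    print(f"{v:.1f} m/s  ({v * 2.23694:.1f} mph)")     # ~17 m/s, ~38 mph
    ```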

  5. An acoustical bubble counter for superheated drop detectors.

    PubMed

    Taylor, Chris; Montvila, Darius; Flynn, David; Brennan, Christopher; d'Errico, Francesco

    2006-01-01

    A new bubble counter has been developed based on the well-established approach of detecting vaporization events acoustically in superheated drop detectors (SDDs). This counter is called the Framework Scientific ABC 1260, and it represents a major improvement over prior versions of this technology. By utilizing advanced acoustic pattern recognition software, the bubble formation event can be differentiated from ambient background noise, as well as from other acoustic signatures. Additional structural design enhancements include relocation of the electronic components to the bottom of the device, allowing for greater stability and easier access to vial SDDs without exposure to system electronics. Upgrades in the electronics permit an increase in the speed of bubble detection by almost 50%, compared with earlier versions of the counters. By positioning the vial on top of the device, temperature and sound insulation can be accommodated for extreme environments. Lead shells can also be utilized for an enhanced response to high-energy neutrons.

  6. Examination on the use of acoustic emission for monitoring metal forging process: A study using simulation technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mullins, W.M.; Irwin, R.D.; Malas, J.C. III

    The aim of this study is to determine the feasibility of using acoustic emission as a monitoring technique for metal forging operations. From the sensor development paradigm proposed by McClean et al., the most likely approach to determining feasibility for this application is through signal recognition. For this reason, signature prediction and analysis were chosen to determine the suitability of acoustic emission for forging applications.

  7. Acoustic emission and acousto-ultrasonic signature analysis of failure mechanisms in carbon fiber reinforced polymer materials

    NASA Astrophysics Data System (ADS)

    Carey, Shawn Allen

    Fiber reinforced polymer composite materials, particularly carbon (CFRPs), are being used for primary structural applications, especially in the aerospace and naval industries. Advantages of CFRP materials, compared to traditional materials such as steel and aluminum, include: light weight, high strength to weight ratio, corrosion resistance, and long life expectancy. A concern with CFRPs is that despite quality control during fabrication, the material can contain many hidden internal flaws. These flaws in combination with unseen damage due to fatigue and low velocity impact have led to catastrophic failure of structures and components. Therefore, a large amount of research has been conducted regarding nondestructive testing (NDT) and structural health monitoring (SHM) of CFRP materials. The principal objective of this research program was to develop methods to characterize failure mechanisms in CFRP materials used by the U.S. Army using acoustic emission (AE) and/or acousto-ultrasonic (AU) data. Failure mechanisms addressed include fiber breakage, matrix cracking, and delamination due to shear between layers. CFRP specimens were fabricated and tested in uniaxial tension to obtain AE and AU data. The specimens were designed with carbon fibers in different orientations to produce the different failure mechanisms. Some specimens were impacted with a blunt indenter prior to testing to simulate low-velocity impact. A signature analysis program was developed to characterize the AE data based on data examination using visual pattern recognition techniques. It was determined that it was important to characterize the AE event, using the location of the event as a parameter, rather than just the AE hit (signal recorded by an AE sensor). A back propagation neural network was also trained based on the results of the signature analysis program. Damage observed on the specimens visually with the aid of a scanning electron microscope agreed with the damage type assigned by the

  8. Individual odor recognition in birds: an endogenous olfactory signature on petrels' feathers?

    PubMed

    Bonadonna, Francesco; Miguel, Eve; Grosbois, Vladimir; Jouventin, Pierre; Bessiere, Jean-Marie

    2007-09-01

    A growing body of evidence indicates that odors are used in individual, sexual, and species recognition in vertebrates, and may be reliable signals of quality and compatibility. Petrels are seabirds that exhibit an acute sense of smell. During the breeding period, many species of petrels live in dense colonies on small oceanic islands and form pairs that use individual underground burrows. Mates alternate between parental duties and foraging trips at sea. Returning from the ocean at night (to avoid bird predators), petrels must find their nest burrow. Antarctic prions, Pachyptila desolata, are thought to identify their nest by recognizing their partner's odor, suggesting the existence of an individual odor signature. We used gas chromatography and mass spectrometry to analyze extracts obtained from the feathers of 13 birds. The chemical profile of a single bird was more similar to itself, from year to year, than to that of any other bird. The profile contained up to a hundred volatile lipids, but the odor signature may be based on the presence or absence of a few specific compounds. Our results show that the odor signature in Antarctic prions is probably endogenous, suggesting that in some species of petrels it may broadcast compatibility and quality of potential mates.

  9. On the acoustic signature of tandem airfoils: The sound of an elastic airfoil in the wake of a vortex generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manela, A.

    The acoustic signature of an acoustically compact tandem airfoil setup in uniform high-Reynolds number flow is investigated. The upstream airfoil is considered rigid and is actuated at its leading edge with small-amplitude harmonic pitching motion. The downstream airfoil is taken passive and elastic, with its motion forced by the vortex-street excitation of the upstream airfoil. The non-linear near-field description is obtained via potential thin-airfoil theory. It is then applied as a source term into the Powell-Howe acoustic analogy to yield the far-field dipole radiation of the system. To assess the effect of downstream-airfoil elasticity, results are compared with counterpart calculations for a non-elastic setup, where the downstream airfoil is rigid and stationary. Depending on the separation distance between airfoils, airfoil-motion and airfoil-wake dynamics shift between in-phase (synchronized) and counter-phase behaviors. Consequently, downstream airfoil elasticity may act to amplify or suppress sound through the direct contribution of elastic-airfoil motion to the total signal. Resonance-type motion of the elastic airfoil is found when the upstream airfoil is actuated at the least stable eigenfrequency of the downstream structure. This, again, results in system sound amplification or suppression, depending on the separation distance between airfoils. With increasing actuation frequency, the acoustic signal becomes dominated by the direct contribution of the upstream airfoil motion, whereas the relative contribution of the elastic airfoil to the total signature turns negligible.

  10. Individual acoustic variation in Belding's ground squirrel alarm chirps in the High Sierra Nevada

    NASA Astrophysics Data System (ADS)

    McCowan, Brenda; Hooper, Stacie L.

    2002-03-01

    The acoustic structure of calls within call types can vary as a function of individual identity, sex, and social group membership and is important in kin and social group recognition. Belding's ground squirrels (Spermophilus beldingi) produce alarm chirps that function in predator avoidance but little is known about the acoustic variability of these alarm chirps. The purpose of this preliminary study was to analyze the acoustic structure of alarm chirps with respect to individual differences (e.g., signature information) from eight Belding's ground squirrels from four different lakes in the High Sierra Nevada. Results demonstrate that alarm chirps are individually distinctive, and that acoustic similarity among individuals may correspond to genetic similarity and thus dispersal patterns in this species. These data suggest, on a preliminary basis, that the acoustic structure of calls might be used as a bioacoustic tool for tracking individuals, dispersal, and other population dynamics in Belding's ground squirrels, and perhaps other vocal species.

  11. Acoustic Signatures of a Model Fan in the NASA-Lewis Anechoic Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Dietrich, D. A.; Heidmann, M. F.; Abbott, J. M.

    1977-01-01

    One-third octave band and narrowband spectra and continuous directivity patterns radiated from an inlet are presented over ranges of fan operating conditions, tunnel velocity, and angle of attack. Tunnel flow markedly reduced the unsteadiness and level of the blade passage tone, revealed the cutoff design feature of the blade passage tone, and exposed a lobular directivity pattern for the second harmonic tone. The full effects of tunnel flow are shown to be complete above a tunnel velocity of 20 meters/second. The acoustic signatures are also shown to be strongly affected by fan rotational speed, fan blade loading, and inlet angle of attack.

  12. Acoustic signatures of sound source-tract coupling.

    PubMed

    Arneodo, Ezequiel M; Perl, Yonatan Sanz; Mindlin, Gabriel B

    2011-04-01

    Birdsong is a complex behavior, which results from the interaction between a nervous system and a biomechanical peripheral device. While much has been learned about how complex sounds are generated in the vocal organ, little has been learned about the signature on the vocalizations of the nonlinear effects introduced by the acoustic interactions between a sound source and the vocal tract. The variety of morphologies among bird species makes birdsong a most suitable model to study phenomena associated with the production of complex vocalizations. Inspired by the sound production mechanisms of songbirds, in this work we study a mathematical model of a vocal organ, in which a simple sound source interacts with a tract, leading to a delay differential equation. We explore the system numerically, and by taking it to the weakly nonlinear limit, we are able to examine its periodic solutions analytically. By these means we are able to explore the dynamics of oscillatory solutions of a sound source-tract coupled system, which are qualitatively different from those of a sound source-filter model of a vocal organ. Nonlinear features of the solutions are proposed as the underlying mechanisms of observed phenomena in birdsong, such as unilaterally produced "frequency jumps," enhancement of resonances, and the shift of the fundamental frequency observed in heliox experiments. ©2011 American Physical Society

  13. Acoustic signatures of sound source-tract coupling

    PubMed Central

    Arneodo, Ezequiel M.; Perl, Yonatan Sanz; Mindlin, Gabriel B.

    2014-01-01

    Birdsong is a complex behavior, which results from the interaction between a nervous system and a biomechanical peripheral device. While much has been learned about how complex sounds are generated in the vocal organ, little has been learned about the signature on the vocalizations of the nonlinear effects introduced by the acoustic interactions between a sound source and the vocal tract. The variety of morphologies among bird species makes birdsong a most suitable model to study phenomena associated with the production of complex vocalizations. Inspired by the sound production mechanisms of songbirds, in this work we study a mathematical model of a vocal organ, in which a simple sound source interacts with a tract, leading to a delay differential equation. We explore the system numerically, and by taking it to the weakly nonlinear limit, we are able to examine its periodic solutions analytically. By these means we are able to explore the dynamics of oscillatory solutions of a sound source-tract coupled system, which are qualitatively different from those of a sound source-filter model of a vocal organ. Nonlinear features of the solutions are proposed as the underlying mechanisms of observed phenomena in birdsong, such as unilaterally produced “frequency jumps,” enhancement of resonances, and the shift of the fundamental frequency observed in heliox experiments. PMID:21599213

  14. Time and timing in the acoustic recognition system of crickets

    PubMed Central

    Hennig, R. Matthias; Heller, Klaus-Gerhard; Clemens, Jan

    2014-01-01

    The songs of many insects exhibit precise timing as the result of repetitive and stereotyped subunits on several time scales. As these signals encode the identity of a species, time and timing are important for the recognition system that analyzes these signals. Crickets are a prominent example as their songs are built from sound pulses that are broadcast in a long trill or as a chirped song. This pattern appears to be analyzed on two timescales, short and long. Recent evidence suggests that song recognition in crickets relies on two computations with respect to time: a short linear-nonlinear (LN) model that operates as a filter for pulse rate and a longer integration time window for monitoring song energy over time. Therefore, there is a twofold role for timing. A filter for pulse rate shows differentiating properties for which the specific timing of excitation and inhibition is important. For an integrator, however, the duration of the time window is more important than the precise timing of events. Here, we first review evidence for the role of LN-models and integration time windows for song recognition in crickets. We then parameterize the filter part by Gabor functions and explore the effects of duration, frequency, phase, and offset as these will correspond to differently timed patterns of excitation and inhibition. These filter properties were compared with known preference functions of crickets and katydids. In a comparative approach, the power for song discrimination by LN-models was tested with the songs of over 100 cricket species. It is demonstrated how the acoustic signals of crickets occupy a simple 2-dimensional space for song recognition that arises from timing, described by a Gabor function, and time, the integration window. Finally, we discuss the evolution of recognition systems in insects based on simple sensory computations. PMID:25161622
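
    The two-timescale computation (a Gabor-shaped LN filter for pulse rate followed by a longer integration window) can be sketched as follows. This is a schematic reading of the model, with all parameter values (sampling rate, kernel duration, window length, pulse rate) chosen for illustration rather than taken from the paper.

        import numpy as np

        def gabor_kernel(duration_s, freq_hz, phase=0.0, sigma_s=None, fs=1000):
            """A Gabor function used as a pulse-rate filter (parameter choices are illustrative)."""
            t = np.arange(-duration_s / 2, duration_s / 2, 1.0 / fs)
            sigma = sigma_s if sigma_s is not None else duration_s / 6
            return np.exp(-t**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq_hz * t + phase)

        def ln_response(envelope, kernel):
            """Linear filtering of the song envelope followed by a rectifying nonlinearity."""
            filtered = np.convolve(envelope, kernel, mode="same")
            return np.maximum(filtered, 0.0)          # simple half-wave rectification

        def integrated_preference(envelope, kernel, window_s, fs=1000):
            """LN output averaged over a sliding integration window (the second time scale)."""
            r = ln_response(envelope, kernel)
            w = int(window_s * fs)
            return np.convolve(r, np.ones(w) / w, mode="same")

        # Example: a synthetic pulse train with a 30 Hz pulse rate, sampled at 1 kHz.
        fs = 1000
        t = np.arange(0, 2.0, 1.0 / fs)
        envelope = (np.sin(2 * np.pi * 30 * t) > 0.8).astype(float)
        kern = gabor_kernel(duration_s=0.1, freq_hz=30, fs=fs)
        pref = integrated_preference(envelope, kern, window_s=0.5, fs=fs)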

  15. Modeling magnetic field and TEC signatures of large-amplitude acoustic and gravity waves generated by natural hazard events

    NASA Astrophysics Data System (ADS)

    Zettergren, M. D.; Snively, J. B.; Inchin, P.; Komjathy, A.; Verkhoglyadova, O. P.

    2017-12-01

    Ocean and solid earth responses during earthquakes are a significant source of large amplitude acoustic and gravity waves (AGWs) that perturb the overlying ionosphere-thermosphere (IT) system. IT disturbances are routinely detected following large earthquakes (M > 7.0) via GPS total electron content (TEC) observations, which often show acoustic wave (~3-4 min periods) and gravity wave (~10-15 min) signatures with amplitudes of 0.05-2 TECU. In cases of very large earthquakes (M > 8.0) the persisting acoustic waves are estimated to have 100-200 m/s compressional velocities in the conducting ionospheric E and F-regions and should generate significant dynamo currents and magnetic field signatures. Indeed, some recent reports (e.g. Hao et al, 2013, JGR, 118, 6) show evidence for magnetic fluctuations, which appear to be related to AGWs, following recent large earthquakes. However, very little quantitative information is available on: (1) the detailed spatial and temporal dependence of these magnetic fluctuations, which are usually observed at a small number of irregularly arranged stations, and (2) the relation of these signatures to TEC perturbations in terms of relative amplitudes, frequency, and timing for different events. This work investigates space- and time-dependent behavior of both TEC and magnetic fluctuations following recent large earthquakes, with the aim to improve physical understanding of these perturbations via detailed, high-resolution, two- and three-dimensional modeling case studies with a coupled neutral atmospheric and ionospheric model, MAGIC-GEMINI (Zettergren and Snively, 2015, JGR, 120, 9). We focus on cases inspired by the large Chilean earthquakes from the past decade (viz., the M > 8.0 earthquakes from 2010 and 2015) to constrain the sources for the model, i.e. size, frequency, amplitude, and timing, based on available information from ocean buoy and seismometer data. TEC data are used to validate source amplitudes and to constrain

  16. Signatures support program

    NASA Astrophysics Data System (ADS)

    Hawley, Chadwick T.

    2009-05-01

    The Signatures Support Program (SSP) leverages the full spectrum of signature-related activities (collections, processing, development, storage, maintenance, and dissemination) within the Department of Defense (DOD), the intelligence community (IC), other Federal agencies, and civil institutions. The Enterprise encompasses acoustic, seismic, radio frequency, infrared, radar, nuclear radiation, and electro-optical signatures. The SSP serves the war fighter, the IC, and civil institutions by supporting military operations, intelligence operations, homeland defense, disaster relief, acquisitions, and research and development. Data centers host and maintain signature holdings, collectively forming the national signatures pool. The geographically distributed organizations are the authoritative sources and repositories for signature data; the centers are responsible for data content and quality. The SSP proactively engages DOD, IC, other Federal entities, academia, and industry to locate signatures for inclusion in the distributed national signatures pool and provides world-wide 24/7 access via the SSP application.

  17. Auditory emotion recognition impairments in Schizophrenia: Relationship to acoustic features and cognition

    PubMed Central

    Gold, Rinat; Butler, Pamela; Revheim, Nadine; Leitman, David; Hansen, John A.; Gur, Ruben; Kantrowitz, Joshua T.; Laukka, Petri; Juslin, Patrik N.; Silipo, Gail S.; Javitt, Daniel C.

    2013-01-01

    Objective Schizophrenia is associated with deficits in ability to perceive emotion based upon tone of voice. The basis for this deficit, however, remains unclear and assessment batteries remain limited. We evaluated performance in schizophrenia on a novel voice emotion recognition battery with well characterized physical features, relative to impairments in more general emotional and cognitive function. Methods We studied a primary sample of 92 patients relative to 73 controls. Stimuli were characterized according to both intended emotion and physical features (e.g., pitch, intensity) that contributed to the emotional percept. Parallel measures of visual emotion recognition, pitch perception, general cognition, and overall outcome were obtained. More limited measures were obtained in an independent replication sample of 36 patients, 31 age-matched controls, and 188 general comparison subjects. Results Patients showed significant, large effect size deficits in voice emotion recognition (F=25.4, p<.00001, d=1.1), and were preferentially impaired in recognition of emotion based upon pitch-, but not intensity-features (group X feature interaction: F=7.79, p=.006). Emotion recognition deficits were significantly correlated with pitch perception impairments both across (r=.56, p<.0001) and within (r=.47, p<.0001) group. Path analysis showed both sensory-specific and general cognitive contributions to auditory emotion recognition deficits in schizophrenia. Similar patterns of results were observed in the replication sample. Conclusions The present study demonstrates impairments in auditory emotion recognition in schizophrenia relative to acoustic features of underlying stimuli. Furthermore, it provides tools and highlights the need for greater attention to physical features of stimuli used for study of social cognition in neuropsychiatric disorders. PMID:22362394

  18. Pulse analysis of acoustic emission signals

    NASA Technical Reports Server (NTRS)

    Houghton, J. R.; Packman, P. F.

    1977-01-01

    A method for the signature analysis of pulses in the frequency domain and the time domain is presented. Fourier spectrum, Fourier transfer function, shock spectrum and shock spectrum ratio were examined in the frequency domain analysis and pulse shape deconvolution was developed for use in the time domain analysis. Comparisons of the relative performance of each analysis technique are made for the characterization of acoustic emission pulses recorded by a measuring system. To demonstrate the relative sensitivity of each of the methods to small changes in the pulse shape, signatures of computer modeled systems with analytical pulses are presented. Optimization techniques are developed and used to indicate the best design parameter values for deconvolution of the pulse shape. Several experiments are presented that test the pulse signature analysis methods on different acoustic emission sources. These include acoustic emission associated with (a) crack propagation, (b) ball dropping on a plate, (c) spark discharge, and (d) defective and good ball bearings. Deconvolution of the first few micro-seconds of the pulse train is shown to be the region in which the significant signatures of the acoustic emission event are to be found.

  19. Pulse analysis of acoustic emission signals

    NASA Technical Reports Server (NTRS)

    Houghton, J. R.; Packman, P. F.

    1977-01-01

    A method for the signature analysis of pulses in the frequency domain and the time domain is presented. Fourier spectrum, Fourier transfer function, shock spectrum and shock spectrum ratio were examined in the frequency domain analysis, and pulse shape deconvolution was developed for use in the time domain analysis. Comparisons of the relative performance of each analysis technique are made for the characterization of acoustic emission pulses recorded by a measuring system. To demonstrate the relative sensitivity of each of the methods to small changes in the pulse shape, signatures of computer modeled systems with analytical pulses are presented. Optimization techniques are developed and used to indicate the best design parameter values for deconvolution of the pulse shape. Several experiments are presented that test the pulse signature analysis methods on different acoustic emission sources. These include acoustic emissions associated with: (1) crack propagation, (2) ball dropping on a plate, (3) spark discharge and (4) defective and good ball bearings. Deconvolution of the first few micro-seconds of the pulse train is shown to be the region in which the significant signatures of the acoustic emission event are to be found.

  20. Surface Acoustic Wave (SAW) for Chemical Sensing Applications of Recognition Layers.

    PubMed

    Mujahid, Adnan; Dickert, Franz L

    2017-11-24

    Surface acoustic wave (SAW) resonators represent some of the most prominent acoustic devices for chemical sensing applications. As their frequencies range from several hundred MHz to GHz, they can record remarkably diminutive frequency shifts resulting from exceptionally small mass loadings. Their miniaturized design, high thermal stability and possibility of wireless integration make these devices highly competitive. Owing to these special characteristics, they are widely accepted as smart transducers that can be combined with a variety of recognition layers based on host-guest interactions, metal oxide coatings, carbon nanotubes, graphene sheets, functional polymers and biological receptors. As a result, there is a broad spectrum of SAW sensors, with sensing applications ranging from small gas molecules to large bio-analytes or even whole cell structures. This review covers the field from the fundamentals to modern design developments in SAW devices with respect to interfacial receptor coatings for exemplary sensor applications. The related problems and their possible solutions are also covered, with a focus on emerging trends and future opportunities for making SAW an established sensing technology.

  1. Social Communication and Vocal Recognition in Free-Ranging Rhesus Monkeys

    NASA Astrophysics Data System (ADS)

    Rendall, Christopher Andrew

    Kinship and individual identity are key determinants of primate sociality, and the capacity for vocal recognition of individuals and kin is hypothesized to be an important adaptation facilitating intra-group social communication. Research was conducted on adult female rhesus monkeys on Cayo Santiago, Puerto Rico to test this hypothesis for three acoustically distinct calls characterized by varying selective pressures on communicating identity: coos (contact calls), grunts (close range social calls), and noisy screams (agonistic recruitment calls). Vocalization playback experiments confirmed a capacity for both individual and kin recognition of coos, but not screams (grunts were not tested). Acoustic analyses, using traditional spectrographic methods as well as linear predictive coding techniques, indicated that coos (but not grunts or screams) were highly distinctive, and that the effects of vocal tract filtering--formants--contributed more to statistical discriminations of both individuals and kin groups than did temporal or laryngeal source features. Formants were identified from very short (23 ms) segments of coos and were stable within calls, indicating that formant cues to individual and kin identity were available throughout a call. This aspect of formant cues is predicted to be an especially important design feature for signaling identity efficiently in complex acoustic environments. Results of playback experiments involving manipulated coo stimuli provided preliminary perceptual support for the statistical inference that formant cues take precedence in facilitating vocal recognition. The similarity of formants among female kin suggested a mechanism for the development of matrilineal vocal signatures from the genetic and environmental determinants of vocal tract morphology shared among relatives. The fact that screams--calls strongly expected to communicate identity--were not individually distinctive nor recognized suggested the possibility that their

  2. Paternal kin recognition in the high frequency / ultrasonic range in a solitary foraging mammal

    PubMed Central

    2012-01-01

    Background Kin selection is a driving force in the evolution of mammalian social complexity. Recognition of paternal kin using vocalizations occurs in taxa with cohesive, complex social groups. This is the first investigation of paternal kin recognition via vocalizations in a small-brained, solitary foraging mammal, the grey mouse lemur (Microcebus murinus), a frequent model for ancestral primates. We analyzed the high frequency/ultrasonic male advertisement (courtship) call and alarm call. Results Multi-parametric analyses of the calls’ acoustic parameters and discriminant function analyses showed that advertisement calls, but not alarm calls, contain patrilineal signatures. Playback experiments controlling for familiarity showed that females paid more attention to advertisement calls from unrelated males than from their fathers. Reactions to alarm calls from unrelated males and fathers did not differ. Conclusions 1) Findings provide the first evidence of paternal kin recognition via vocalizations in a small-brained, solitarily foraging mammal. 2) High predation, small body size, and dispersed social systems may select for acoustic paternal kin recognition in the high frequency/ultrasonic ranges, thus limiting risks of inbreeding and eavesdropping by predators or conspecific competitors. 3) Paternal kin recognition via vocalizations in mammals is not dependent upon a large brain and high social complexity, but may already have been an integral part of the dispersed social networks from which more complex, kin-based sociality emerged. PMID:23198727

  3. Effects and modeling of phonetic and acoustic confusions in accented speech.

    PubMed

    Fung, Pascale; Liu, Yi

    2005-11-01

    Accented speech recognition is more challenging than standard speech recognition due to the effects of phonetic and acoustic confusions. Phonetic confusion in accented speech occurs when an expected phone is pronounced as a different one, which leads to erroneous recognition. Acoustic confusion occurs when the pronounced phone is found to lie acoustically between two baseform models and can be equally recognized as either one. We propose that it is necessary to analyze and model these confusions separately in order to improve accented speech recognition without degrading standard speech recognition. Since low phonetic confusion units in accented speech do not give rise to automatic speech recognition errors, we focus on analyzing and reducing phonetic and acoustic confusability under high phonetic confusion conditions. We propose using a likelihood ratio test to measure phonetic confusion, and an asymmetric acoustic distance to measure acoustic confusion. Only accent-specific phonetic units with low acoustic confusion are used in an augmented pronunciation dictionary, while phonetic units with high acoustic confusion are reconstructed using decision tree merging. Experimental results show that our approach is effective and superior to methods modeling phonetic confusion or acoustic confusion alone in accented speech, with a significant 5.7% absolute WER reduction, without degrading standard speech recognition.
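
    The two confusion measures can be illustrated schematically. The exact definitions used in the paper are not reproduced here; the sketch below uses a frame-level log-likelihood ratio between surface and baseform phone models, and a KL-style divergence between diagonal-Gaussian models as one possible asymmetric acoustic distance.

        import numpy as np

        def gauss_loglik(x, mean, var):
            """Frame log-likelihood under a diagonal-covariance Gaussian phone model."""
            return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var, axis=-1)

        def phonetic_confusion_llr(frames, baseform, surface):
            """Mean log-likelihood ratio of the surface vs. baseform model over accented frames.

            Large positive values indicate the accented realization is better explained
            by the surface phone, i.e. high phonetic confusion.
            """
            return np.mean(gauss_loglik(frames, *surface) - gauss_loglik(frames, *baseform))

        def asymmetric_distance(model_p, model_q):
            """KL(p || q) between diagonal Gaussians -- one possible asymmetric acoustic distance."""
            (mp, vp), (mq, vq) = model_p, model_q
            return 0.5 * np.sum(np.log(vq / vp) + (vp + (mp - mq) ** 2) / vq - 1.0)

        # Toy 13-dimensional phone models and accented frames (all numbers illustrative).
        rng = np.random.default_rng(1)
        base = (np.zeros(13), np.ones(13))
        surf = (0.5 * np.ones(13), 1.5 * np.ones(13))
        accented_frames = rng.normal(0.4, 1.2, size=(200, 13))
        print(phonetic_confusion_llr(accented_frames, base, surf))
        print(asymmetric_distance(base, surf), asymmetric_distance(surf, base))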

  4. Surface Acoustic Wave (SAW) for Chemical Sensing Applications of Recognition Layers †

    PubMed Central

    2017-01-01

    Surface acoustic wave (SAW) resonators represent some of the most prominent acoustic devices for chemical sensing applications. As their frequencies range from several hundred MHz to GHz, they can record remarkably diminutive frequency shifts resulting from exceptionally small mass loadings. Their miniaturized design, high thermal stability and possibility of wireless integration make these devices highly competitive. Owing to these special characteristics, they are widely accepted as smart transducers that can be combined with a variety of recognition layers based on host-guest interactions, metal oxide coatings, carbon nanotubes, graphene sheets, functional polymers and biological receptors. As a result, there is a broad spectrum of SAW sensors, with sensing applications ranging from small gas molecules to large bio-analytes or even whole cell structures. This review covers the field from the fundamentals to modern design developments in SAW devices with respect to interfacial receptor coatings for exemplary sensor applications. The related problems and their possible solutions are also covered, with a focus on emerging trends and future opportunities for making SAW an established sensing technology. PMID:29186771

  5. Automated real-time structure health monitoring via signature pattern recognition

    NASA Astrophysics Data System (ADS)

    Sun, Fanping P.; Chaudhry, Zaffir A.; Rogers, Craig A.; Majmundar, M.; Liang, Chen

    1995-05-01

    Described in this paper are the details of an automated real-time structure health monitoring system. The system is based on structural signature pattern recognition. It uses an array of piezoceramic patches bonded to the structure as integrated sensor-actuators, an electric impedance analyzer for structural frequency response function acquisition and a PC for control and graphic display. An assembled 3-bay truss structure is employed as a test bed. Two issues, the localization of the sensing area and sensor temperature drift, which are critical for the success of this technique, are addressed, and a novel approach to providing temperature compensation using a probability correlation function is presented. Due to the negligible weight and size of the solid-state sensor array and its ability to sense incipient-type damage, the system can eventually be implemented on many types of structures such as aircraft, spacecraft, large-span dome roofs and steel bridges requiring multilocation and real-time health monitoring.
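
    A signature-pattern comparison of this kind reduces, at its simplest, to correlating a measured impedance (frequency response) signature against a stored baseline. The sketch below is a generic illustration: the shift-search used to absorb temperature drift is a crude stand-in for the probability-correlation-function compensation described in the paper, not a reproduction of it.

        import numpy as np

        def correlation_metric(baseline, current):
            """Correlation coefficient between two impedance/FRF signatures (1 = unchanged)."""
            b = (baseline - baseline.mean()) / baseline.std()
            c = (current - current.mean()) / current.std()
            return float(np.mean(b * c))

        def damage_index(baseline, current, max_shift_bins=5):
            """1 minus the best correlation over small frequency shifts.

            Allowing a few bins of shift crudely compensates for temperature-induced
            frequency drift; larger index values suggest structural change.
            """
            best = -1.0
            n = len(baseline)
            for s in range(-max_shift_bins, max_shift_bins + 1):
                lo, hi = max(0, s), min(n, n + s)
                best = max(best, correlation_metric(baseline[lo - s:hi - s], current[lo:hi]))
            return 1.0 - best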

  6. Terrain type recognition using ERTS-1 MSS images

    NASA Technical Reports Server (NTRS)

    Gramenopoulos, N.

    1973-01-01

    For the automatic recognition of earth resources from ERTS-1 digital tapes, both multispectral and spatial pattern recognition techniques are important. Recognition of terrain types is based on spatial signatures that become evident by processing small portions of an image through selected algorithms. An investigation of spatial signatures that are applicable to ERTS-1 MSS images is described. Artifacts in the spatial signatures seem to be related to the multispectral scanner. A method for suppressing such artifacts is presented. Finally, results of terrain type recognition for one ERTS-1 image are presented.

  7. Recognition of Emotions in Mexican Spanish Speech: An Approach Based on Acoustic Modelling of Emotion-Specific Vowels

    PubMed Central

    Caballero-Morales, Santiago-Omar

    2013-01-01

    An approach for the recognition of emotions in speech is presented. The target language is Mexican Spanish, and for this purpose a speech database was created. The approach consists of phoneme-level acoustic modelling of emotion-specific vowels. For this, a standard phoneme-based Automatic Speech Recognition (ASR) system was built with Hidden Markov Models (HMMs), where different phoneme HMMs were built for the consonants and emotion-specific vowels associated with four emotional states (anger, happiness, neutral, sadness). Then, estimation of the emotional state from a spoken sentence is performed by counting the number of emotion-specific vowels found in the ASR's output for the sentence. With this approach, an accuracy of 87–100% was achieved for the recognition of the emotional state of Mexican Spanish speech. PMID:23935410
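
    The decision rule itself is simple enough to sketch: once the ASR output is available, the estimated emotion is the one whose emotion-specific vowels occur most often. The phone-labelling convention below (emotion suffixes on vowel symbols) is an assumption made for illustration, not the labelling used by the author.

        from collections import Counter

        # Hypothetical labelling convention: vowel symbols tagged with an emotion suffix.
        SUFFIX = {"_ang": "anger", "_hap": "happiness", "_neu": "neutral", "_sad": "sadness"}

        def estimate_emotion(asr_phone_output):
            """Pick the emotion whose emotion-specific vowels occur most often in the ASR output."""
            counts = Counter()
            for phone in asr_phone_output:
                for suffix, emotion in SUFFIX.items():
                    if phone.endswith(suffix):
                        counts[emotion] += 1
            return counts.most_common(1)[0][0] if counts else "neutral"

        print(estimate_emotion(["k", "a_ang", "s", "a_ang", "e_hap", "t", "o_ang"]))  # -> "anger"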

  8. Statistical analysis of infrasound signatures in airglow observations: Indications for acoustic resonance

    NASA Astrophysics Data System (ADS)

    Pilger, Christoph; Schmidt, Carsten; Bittner, Michael

    2013-02-01

    The detection of infrasonic signals in temperature time series of the mesopause altitude region (at about 80-100 km) is performed at the German Remote Sensing Data Center of the German Aerospace Center (DLR-DFD) using GRIPS instrumentation (GRound-based Infrared P-branch Spectrometers). Mesopause temperature values with a temporal resolution of up to 10 s are derived from the observation of nocturnal airglow emissions and permit the identification of signals within the long-period infrasound range. Spectral intensities of wave signatures with periods between 2.5 and 10 min are estimated by applying the wavelet analysis technique to one-minute mean temperature values. Selected events as well as the statistical distribution of 40 months of observation are presented and discussed with respect to resonant modes of the atmosphere. The mechanism of acoustic resonance generated by strong infrasonic sources is a potential explanation of distinct features with periods between 3 and 5 min observed in the dataset.
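
    A wavelet power estimate over the 2.5-10 min period band can be sketched with standard tools. The example below assumes the PyWavelets package and a Morlet wavelet applied to synthetic one-minute temperature means; the study's actual wavelet choice, scales, and data handling are not specified here.

        import numpy as np
        import pywt

        # One-minute mean temperature values (synthetic placeholder, 8 hours of data).
        dt = 60.0                                   # sampling period in seconds
        t = np.arange(0, 8 * 3600, dt)
        temps = 5.0 * np.sin(2 * np.pi * t / 240.0) \
                + np.random.default_rng(2).normal(0, 1, t.size)

        # Target the 2.5-10 min period band used in the study.
        periods = np.linspace(150.0, 600.0, 40)     # seconds
        freqs = 1.0 / periods                       # Hz
        scales = pywt.central_frequency("morl") / (freqs * dt)

        coeffs, out_freqs = pywt.cwt(temps, scales, "morl", sampling_period=dt)
        spectral_intensity = np.abs(coeffs) ** 2    # wavelet power vs. (period, time)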

  9. An Electrophysiological Signature of Unconscious Recognition Memory

    PubMed Central

    Voss, Joel L.; Paller, Ken A.

    2009-01-01

    Contradicting the common assumption that accurate recognition reflects explicit-memory processing, we describe evidence for recognition lacking two hallmark explicit-memory features: awareness of memory retrieval and facilitation by attentive encoding. Kaleidoscope images were encoded in conjunction with an attentional diversion and subsequently recognized more accurately than those encoded without diversion. Confidence in recognition was superior following attentive encoding, though recognition was remarkably accurate when people claimed to be unaware of memory retrieval. This “implicit recognition” was associated with frontal-occipital negative brain potentials at 200-400 ms post-stimulus-onset, which were spatially and temporally distinct from positive brain potentials corresponding to explicit recollection and familiarity. This dissociation between behavioral and electrophysiological characteristics of “implicit recognition” versus explicit recognition indicates that a neurocognitive mechanism with properties similar to those that produce implicit memory can be operative in standard recognition tests. People can accurately discriminate repeat stimuli from new stimuli without necessarily knowing it. PMID:19198606

  10. Modeling the origins of mammalian sociality: moderate evidence for matrilineal signatures in mouse lemur vocalizations.

    PubMed

    Kessler, Sharon E; Radespiel, Ute; Hasiniaina, Alida I F; Leliveld, Lisette M C; Nash, Leanne T; Zimmermann, Elke

    2014-02-20

    moderately distinctive by matriline. Because sleeping groups consisted of close maternal kin, both genetics and social learning may have generated these acoustic signatures. As mouse lemurs are models for solitary foragers, we recommend further studies testing whether the lemurs use these calls to recognize kin. This would enable further modeling of how kin recognition in ancestral species could have shaped the evolution of complex sociality.

  11. Call recognition and individual identification of fish vocalizations based on automatic speech recognition: An example with the Lusitanian toadfish.

    PubMed

    Vieira, Manuel; Fonseca, Paulo J; Amorim, M Clara P; Teixeira, Carlos J C

    2015-12-01

    The study of acoustic communication in animals often requires not only the recognition of species-specific acoustic signals but also the identification of individual subjects, all in a complex acoustic background. Moreover, when very long recordings are to be analyzed, automatic recognition and identification processes are invaluable tools to extract the relevant biological information. A pattern recognition methodology based on hidden Markov models is presented, inspired by successful results obtained in the most widely known and complex acoustical communication signal: human speech. This methodology was applied here for the first time to the detection and recognition of fish acoustic signals, specifically in a stream of round-the-clock recordings of Lusitanian toadfish (Halobatrachus didactylus) in their natural estuarine habitat. The results show that this methodology is able not only to detect the mating sounds (boatwhistles) but also to identify individual male toadfish, reaching an identification rate of ca. 95%. Moreover this method also proved to be a powerful tool to assess signal durations in large data sets. However, the system failed in recognizing other sound types.
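
    A minimal version of the HMM-based identification step might look like the sketch below, assuming the hmmlearn package, one Gaussian HMM per individual, and generic frame-level features in place of the acoustic front end actually used; state counts and feature dimensions are arbitrary.

        import numpy as np
        from hmmlearn import hmm

        def train_individual_models(feature_sets, n_states=5):
            """Fit one Gaussian HMM per individual from its labelled call features.

            feature_sets: dict mapping individual id -> (n_frames, n_features) array.
            """
            models = {}
            for ident, feats in feature_sets.items():
                m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=25)
                m.fit(feats)
                models[ident] = m
            return models

        def identify(models, call_features):
            """Return the individual whose HMM gives the highest log-likelihood for a new call."""
            return max(models, key=lambda ident: models[ident].score(call_features))

        # Toy example with random 12-dimensional frames standing in for real call features.
        rng = np.random.default_rng(3)
        train = {f"male_{i}": rng.normal(i, 1.0, size=(300, 12)) for i in range(3)}
        models = train_individual_models(train)
        test_call = rng.normal(1, 1.0, size=(80, 12))
        print(identify(models, test_call))        # likely "male_1"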

  12. Acoustic diagnosis of pulmonary hypertension: automated speech-recognition-inspired classification algorithm outperforms physicians

    NASA Astrophysics Data System (ADS)

    Kaddoura, Tarek; Vadlamudi, Karunakar; Kumar, Shine; Bobhate, Prashant; Guo, Long; Jain, Shreepal; Elgendi, Mohamed; Coe, James Y.; Kim, Daniel; Taylor, Dylan; Tymchak, Wayne; Schuurmans, Dale; Zemp, Roger J.; Adatia, Ian

    2016-09-01

    We hypothesized that an automated speech-recognition-inspired classification algorithm could differentiate between the heart sounds in subjects with and without pulmonary hypertension (PH) and outperform physicians. Heart sounds, electrocardiograms, and mean pulmonary artery pressures (mPAp) were recorded simultaneously. Heart sound recordings were digitized to train and test speech-recognition-inspired classification algorithms. We used mel-frequency cepstral coefficients to extract features from the heart sounds. Gaussian-mixture models classified the features as PH (mPAp ≥ 25 mmHg) or normal (mPAp < 25 mmHg). Physicians blinded to patient data listened to the same heart sound recordings and attempted a diagnosis. We studied 164 subjects: 86 with mPAp ≥ 25 mmHg (mPAp 41 ± 12 mmHg) and 78 with mPAp < 25 mmHg (mPAp 17 ± 5 mmHg) (p < 0.005). The correct diagnostic rate of the automated speech-recognition-inspired algorithm was 74% compared to 56% by physicians (p = 0.005). The false positive rate for the algorithm was 34% versus 50% (p = 0.04) for clinicians. The false negative rate for the algorithm was 23% and 68% (p = 0.0002) for physicians. We developed an automated speech-recognition-inspired classification algorithm for the acoustic diagnosis of PH that outperforms physicians and could be used to screen for PH and encourage earlier specialist referral.
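
    The MFCC-plus-GMM classification stage can be sketched with scikit-learn. This is not the authors' pipeline: the component count, feature dimension, and the decision rule (higher mean log-likelihood wins) are assumptions made for illustration.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def train_class_gmms(mfcc_ph, mfcc_normal, n_components=8):
            """Fit one GMM on pooled MFCC frames per class (PH vs. normal)."""
            gmm_ph = GaussianMixture(n_components=n_components, covariance_type="diag").fit(mfcc_ph)
            gmm_no = GaussianMixture(n_components=n_components, covariance_type="diag").fit(mfcc_normal)
            return gmm_ph, gmm_no

        def classify(gmm_ph, gmm_no, mfcc_frames):
            """Label a recording by the class whose GMM gives the higher mean log-likelihood."""
            return "PH" if gmm_ph.score(mfcc_frames) > gmm_no.score(mfcc_frames) else "normal"

        # Toy stand-ins for 13-dimensional MFCC frames extracted from heart sound recordings.
        rng = np.random.default_rng(4)
        gmm_ph, gmm_no = train_class_gmms(rng.normal(1.0, 1.0, (2000, 13)),
                                          rng.normal(-1.0, 1.0, (2000, 13)))
        print(classify(gmm_ph, gmm_no, rng.normal(1.0, 1.0, (150, 13))))   # expected "PH"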

  13. Extraction of small boat harmonic signatures from passive sonar.

    PubMed

    Ogden, George L; Zurk, Lisa M; Jones, Mark E; Peterson, Mary E

    2011-06-01

    This paper investigates the extraction of acoustic signatures from small boats using a passive sonar system. Noise radiated from small boats consists of broadband noise and harmonically related tones that correspond to engine and propeller specifications. A signal processing method to automatically extract the harmonic structure of noise radiated from small boats is developed. The Harmonic Extraction and Analysis Tool (HEAT) estimates the instantaneous fundamental frequency of the harmonic tones, refines the fundamental frequency estimate using a Kalman filter, and automatically extracts the amplitudes of the harmonic tonals to generate a harmonic signature for the boat. Results are presented that show the HEAT algorithm's ability to extract these signatures. © 2011 Acoustical Society of America
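
    The core of such a harmonic-extraction step (estimate a fundamental, then read off the harmonic amplitudes) can be sketched as below. This is not the HEAT implementation: the Kalman smoothing of the fundamental is omitted, and the search band, tolerance, and synthetic test signal are assumptions.

        import numpy as np

        def estimate_f0(frame, fs, f_min=10.0, f_max=200.0):
            """Rough instantaneous fundamental estimate: strongest FFT peak in a search band."""
            spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
            freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
            band = (freqs >= f_min) & (freqs <= f_max)
            return freqs[band][np.argmax(spec[band])]

        def harmonic_signature(frame, fs, f0, n_harmonics=10, tol_hz=1.5):
            """Amplitudes of the first n harmonics of f0 -- a simple harmonic signature vector."""
            spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
            freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
            amps = []
            for k in range(1, n_harmonics + 1):
                mask = np.abs(freqs - k * f0) <= tol_hz
                amps.append(spec[mask].max() if mask.any() else 0.0)
            return np.array(amps)

        # Synthetic "boat" tone: 30 Hz fundamental with decaying harmonics plus noise.
        fs = 4000
        t = np.arange(0, 2.0, 1.0 / fs)
        sig = sum((1.0 / k) * np.sin(2 * np.pi * 30 * k * t) for k in range(1, 6))
        sig += 0.1 * np.random.default_rng(5).normal(size=t.size)
        f0 = estimate_f0(sig, fs)
        print(f0, harmonic_signature(sig, fs, f0))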

  14. Different importance of the volatile and non-volatile fractions of an olfactory signature for individual social recognition in rats versus mice and short-term versus long-term memory.

    PubMed

    Noack, Julia; Richter, Karin; Laube, Gregor; Haghgoo, Hojjat Allah; Veh, Rüdiger W; Engelmann, Mario

    2010-11-01

    When tested in the olfactory cued social recognition/discrimination test, rats and mice differ in their retention of a recognition memory for a previously encountered conspecific juvenile: Rats are able to recognize a given juvenile for approximately 45 min only, whereas mice show not only short-term, but also long-term recognition memory (≥ 24 h). Here we modified the social recognition/social discrimination procedure to investigate the neurobiological mechanism(s) underlying the species differences. We presented a conspecific juvenile repeatedly to the experimental subjects and monitored the investigation duration as a measure for recognition. Presentation of only the volatile fraction of the juvenile olfactory signature was sufficient for both short- and long-term recognition in mice but not rats. Applying additional volatile, mono-molecular odours to the "to be recognized" juveniles failed to affect short-term memory in both species, but interfered with long-term recognition in mice. Finally, immunocytochemical analysis of c-Fos as a marker for cellular activation revealed that juvenile exposure stimulated areas involved in the processing of olfactory signals in both the main and the accessory olfactory bulb in mice. In rats, we measured an increased c-Fos synthesis almost exclusively in cells of the accessory olfactory bulb. Our data suggest that the species difference in the retention of social recognition memory is based on differences in the processing of the volatile versus non-volatile fraction of the individuals' olfactory signature. The non-volatile fraction is sufficient for retaining a short-term social memory only. Long-term social memory - as observed in mice - requires a processing of both the volatile and non-volatile fractions of the olfactory signature. Copyright © 2010 Elsevier Inc. All rights reserved.

  15. Infra-sound Signature of Lightning

    NASA Astrophysics Data System (ADS)

    Arechiga, R. O.; Badillo, E.; Johnson, J.; Edens, H. E.; Rison, W.; Thomas, R. J.

    2012-12-01

    We have analyzed thunder from over 200 lightning flashes to determine which part of thunder comes from the gas dynamic expansion of portions of the rapidly heated lightning channel and which from electrostatic field changes. Thunder signals were recorded by a ~1500 m network of 3 to 4 four-element microphone arrays deployed in the Magdalena mountains of New Mexico in the summers of 2011 and 2012. The higher frequency infra-sound and audio-range portion of thunder is thought to come from the gas dynamic expansion, and the electrostatic mechanism gives rise to a signature infra-sound pulse peaked at a few Hz. More than 50 signature infra-sound pulses were observed in different portions of the thunder signal, with no preference towards the beginning or the end of the signal. Detection of the signature pulse occurs sometimes only for one array and sometimes for several arrays, which agrees with the theory that the pulse is highly directional (i.e., the recordings have to be in a specific position with respect to the cloud generating the pulse to be able to detect it). The detection of these pulses under quiet wind conditions by different acoustic arrays corroborates the electrostatic mechanism originally proposed by Wilson [1920], further studied by Dessler [1973] and Few [1985], observed by Bohannon [1983] and Balachandran [1979, 1983], and recently analyzed by Pasko [2009]. Pasko employed a model to explain the electrostatic-to-acoustic energy conversion and the initial compression waves in observed infrasonic pulses, which agrees with the observations we have made. We present thunder samples that exhibit signature infra-sound pulses at different times and acoustic source reconstruction to demonstrate the beaming effect.

  16. Detection of Delamination in Composite Beams Using Broadband Acoustic Emission Signatures

    NASA Technical Reports Server (NTRS)

    Okafor, A. C.; Chandrashekhara, K.; Jiang, Y. P.

    1996-01-01

    Delamination in composite structures may be caused by imperfections introduced during the manufacturing process or by impact loads from foreign objects during the operational life. There are some nondestructive evaluation methods to detect delamination in composite structures such as x-radiography, ultrasonic testing, and thermal/infrared inspection. These methods are expensive and hard to use for on-line detection. Acoustic emission testing can monitor the material under test even in the presence of noise generated under load. It has been used extensively in proof-testing of fiberglass pressure vessels and beams. In the present work, experimental studies are conducted to investigate the use of broadband acoustic emission signatures to detect delaminations in composite beams. Glass/epoxy beam specimens with full-width prescribed delaminations of 2 inches and 4 inches are investigated. The prescribed delamination is produced by inserting Teflon film between laminae during the fabrication of the composite laminate. The objective of this research is to develop a method for predicting delamination size and location in laminated composite beams by combining the smart materials concept and broadband AE analysis techniques. More specifically, a piezoceramic (PZT) patch is bonded on the surface of composite beams and used as a pulser. The piezoceramic patch simulates the AE wave source as a 3-cycle, 50 kHz burst sine wave. One broadband AE sensor is fixed near the PZT patch to measure the AE wave near the AE source. A second broadband AE sensor, which is used as a receiver, is scanned along the composite beams in 0.25 inch steps to measure propagation of the AE wave along the composite beams. The acquired AE waveform is digitized and processed. Signal strength, signal energy, cross-correlation of AE waveforms, and tracking of specific cycles of AE waveforms are used to detect delamination size and location.
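
    The waveform metrics named at the end (signal energy and cross-correlation against a reference waveform) are easy to sketch; dips in either quantity as the receiver is scanned past the delaminated region are what such a method looks for. The functions below are generic illustrations, not the authors' processing chain.

        import numpy as np

        def signal_energy(waveform):
            """Energy of a received AE waveform."""
            return float(np.sum(np.asarray(waveform, dtype=float) ** 2))

        def normalized_xcorr_peak(reference, waveform):
            """Peak of the normalized cross-correlation with a reference (healthy-region) waveform."""
            r = (reference - reference.mean()) / (reference.std() * len(reference))
            w = (waveform - waveform.mean()) / waveform.std()
            return float(np.max(np.correlate(w, r, mode="full")))

        def scan_profile(waveforms, reference):
            """Energy and correlation versus receiver position along the scan."""
            return np.array([[signal_energy(w), normalized_xcorr_peak(reference, w)]
                             for w in waveforms])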

  17. Experimental investigation of the effects of the acoustical conditions in a simulated classroom on speech recognition and learning in children a

    PubMed Central

    Valente, Daniel L.; Plevinsky, Hallie M.; Franco, John M.; Heinrichs-Graham, Elizabeth C.; Lewis, Dawna E.

    2012-01-01

    The potential effects of acoustical environment on speech understanding are especially important as children enter school where students’ ability to hear and understand complex verbal information is critical to learning. However, this ability is compromised because of widely varied and unfavorable classroom acoustics. The extent to which unfavorable classroom acoustics affect children’s performance on longer learning tasks is largely unknown as most research has focused on testing children using words, syllables, or sentences as stimuli. In the current study, a simulated classroom environment was used to measure comprehension performance of two classroom learning activities: a discussion and lecture. Comprehension performance was measured for groups of elementary-aged students in one of four environments with varied reverberation times and background noise levels. The reverberation time was either 0.6 or 1.5 s, and the signal-to-noise level was either +10 or +7 dB. Performance is compared to adult subjects as well as to sentence-recognition in the same condition. Significant differences were seen in comprehension scores as a function of age and condition; both increasing background noise and reverberation degraded performance in comprehension tasks compared to minimal differences in measures of sentence-recognition. PMID:22280587

  18. Machine Learning Through Signature Trees. Applications to Human Speech.

    ERIC Educational Resources Information Center

    White, George M.

    A signature tree is a binary decision tree used to classify unknown patterns. An attempt was made to develop a computer program for manipulating signature trees as a general research tool for exploring machine learning and pattern recognition. The program was applied to the problem of speech recognition to test its effectiveness for a specific…

  19. Identification and Characteristics of Signature Whistles in Wild Bottlenose Dolphins (Tursiops truncatus) from Namibia

    PubMed Central

    Elwen, Simon Harvey; Nastasi, Aurora

    2014-01-01

    A signature whistle type is a learned, individually distinctive whistle type in a dolphin's acoustic repertoire that broadcasts the identity of the whistle owner. The acquisition and use of signature whistles indicates complex cognitive functioning that requires wider investigation in wild dolphin populations. Here we identify signature whistle types from a population of approximately 100 wild common bottlenose dolphins (Tursiops truncatus) inhabiting Walvis Bay, and describe signature whistle occurrence, acoustic parameters and temporal production. A catalogue of 43 repeatedly emitted whistle types (REWTs) was generated by analysing 79 hrs of acoustic recordings. From this, 28 signature whistle types were identified using a method based on the temporal patterns in whistle sequences. A visual classification task conducted by 5 naïve judges showed high levels of agreement in classification of whistles (Fleiss-Kappa statistic, κ = 0.848, Z = 55.3, P<0.001) and supported our categorisation. Signature whistle structure remained stable over time and location, with most types (82%) recorded in 2 or more years, and 4 identified at Walvis Bay and a second field site approximately 450 km away. Whistle acoustic parameters were consistent with those of signature whistles documented in Sarasota Bay (Florida, USA). We provide evidence of possible two-voice signature whistle production by a common bottlenose dolphin. Although signature whistle types have potential use as a marker for studying individual habitat use, we only identified approximately 28% of those from the Walvis Bay population, despite considerable recording effort. We found that signature whistle type diversity was higher in larger dolphin groups and groups with calves present. This is the first study describing signature whistles in a wild free-ranging T. truncatus population inhabiting African waters and it provides a baseline on which more in depth behavioural studies can be based. PMID:25203814

  20. First images of thunder: Acoustic imaging of triggered lightning

    NASA Astrophysics Data System (ADS)

    Dayeh, M. A.; Evans, N. D.; Fuselier, S. A.; Trevino, J.; Ramaekers, J.; Dwyer, J. R.; Lucia, R.; Rassoul, H. K.; Kotovsky, D. A.; Jordan, D. M.; Uman, M. A.

    2015-07-01

    An acoustic camera comprising a linear microphone array is used to image the thunder signature of triggered lightning. Measurements were taken at the International Center for Lightning Research and Testing in Camp Blanding, FL, during the summer of 2014. The array was positioned in an end-fire orientation thus enabling the peak acoustic reception pattern to be steered vertically with a frequency-dependent spatial resolution. On 14 July 2014, a lightning event with nine return strokes was successfully triggered. We present the first acoustic images of individual return strokes at high frequencies (>1 kHz) and compare the acoustically inferred profile with optical images. We find (i) a strong correlation between the return stroke peak current and the radiated acoustic pressure and (ii) an acoustic signature from an M component current pulse with an unusual fast rise time. These results show that acoustic imaging enables clear identification and quantification of thunder sources as a function of lightning channel altitude.
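
    The imaging principle is conventional delay-and-sum beamforming with the steering angle swept in elevation. The sketch below is a generic far-field delay-and-sum for a linear array; the array geometry, sign conventions, and sample-level alignment are simplified assumptions, not the instrument's actual processing.

        import numpy as np

        C_SOUND = 343.0   # m/s

        def delay_and_sum(signals, mic_positions_m, angle_deg, fs, c=C_SOUND):
            """Delay-and-sum beamformer for a linear microphone array (far-field assumption).

            signals: (n_mics, n_samples) array; mic_positions_m: positions along the array axis;
            angle_deg: angle between the array axis and the incoming wave's propagation direction.
            """
            delays_s = mic_positions_m * np.cos(np.radians(angle_deg)) / c
            shifts = np.round(delays_s * fs).astype(int)
            shifts -= shifts.min()                      # keep all sample shifts non-negative
            n = signals.shape[1] - shifts.max()
            out = np.zeros(n)
            for sig, s in zip(signals, shifts):
                out += sig[s:s + n]
            return out / signals.shape[0]

        # Sweeping angle_deg over elevation for, e.g., a 16-element, 0.5 m spaced end-fire array
        # would give an altitude-resolved acoustic profile (geometry values are assumptions).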

  1. Tracking and Characterization of Aircraft Wakes Using Acoustic and Lidar Measurements

    NASA Technical Reports Server (NTRS)

    Booth, Earl R., Jr.; Humphreys, William M., Jr.

    2005-01-01

    Data from the 2003 Denver International Airport Wake Acoustics Test are further examined to discern spectral content of aircraft wake signatures, and to compare three dimensional wake tracking from acoustic data to wake tracking data obtained through use of continuous wave and pulsed lidar. Wake tracking data derived from acoustic array data agree well with both continuous wave and pulsed lidar in the horizontal plane, but less well with pulsed lidar in the vertical direction. Results from this study show that the spectral distribution of acoustic energy in a wake signature varies greatly with aircraft type.

  2. Selective habituation shapes acoustic predator recognition in harbour seals.

    PubMed

    Deecke, Volker B; Slater, Peter J B; Ford, John K B

    2002-11-14

    Predation is a major force in shaping the behaviour of animals, so that precise identification of predators will confer substantial selective advantages on animals that serve as food to others. Because experience with a predator can be lethal, early researchers studying birds suggested that predator recognition does not require learning. However, a predator image that can be modified by learning and experience will be advantageous in situations where cues associated with the predator are highly variable or change over time. In this study, we investigated the response of harbour seals (Phoca vitulina) to the underwater calls of different populations of killer whales (Orcinus orca). We found that the seals responded strongly to the calls of mammal-eating killer whales and unfamiliar fish-eating killer whales but not to the familiar calls of the local fish-eating population. This demonstrates that wild harbour seals are capable of complex acoustic discrimination and that they modify their predator image by selectively habituating to the calls of harmless killer whales. Fear in these animals is therefore focused on local threats by learning and experience.

  3. A Search for Signatures of Ion Acoustic Shoulders in the SERSIO sounding rocket data set

    NASA Astrophysics Data System (ADS)

    Ellis, A. T.; Lessard, M. R.; Kintner, P. M.; Lynch, K. A.; Klatt, E.; Oksavik, K.

    2004-12-01

    Although first predicted in the early 1960s, enhanced Ion Acoustic Shoulders have only been observed by incoherent scatter radars since the late 1980s. The signature of an IAS is seen as a positive and negative frequency shift about the center radar frequency. These features occur at altitudes of 150 to over 600 km, peaking at 500 km, with spatial extent (perpendicular to the magnetic field) the order of 10 km. The occurrence distribution shows a maximum in the pre-midnight region, with a secondary peak on the dayside (Rietveld et al 1995). Observations of strong (1000 mA/m2), localized currents by EISCAT have led to theories based on current-driven instabilities as the source of these waves (Forme, 1993; St.-Maurice et al., 1996). The SERSIO (Svalbard EISCAT Rocket Study of Ion Outflows) sounding rocket mission was launched into CME-driven dayside aurora on the 22nd of January 2004 at 0857 UT (0436 MLT) from Ny-Alesund (78° 55' 11" N, 11° 56' 60" E) and reached an apogee of 782 km. During the flight, the EISCAT incoherent scatter radar network supported the mission by monitoring altitude profiles of electron and ion density, velocity and temperature. From Longyearbyen, located approximately 50 km south east of Ny-Alesund and near the trajectory of SERSIO, the 32 m ESR dish was tracking the ionospheric footprint of the payload while the 42 m dish was making local field-aligned measurements. The data from these radars clearly indicated the presence of enhanced ion acoustic shoulders, suggesting that SERSIO flew through a 'field' of Ion Acoustic Shoulders. In fact, the plasma wave environment observed by SERSIO was composed of traditional VLF hiss and Broad Band ELF hiss with wavelengths less than the order of 6 m. Here we present the result of our search for Ion Acoustic Shoulders in the SERSIO data set.

  4. Problems Associated with Statistical Pattern Recognition of Acoustic Emission Signals in a Compact Tension Fatigue Specimen

    NASA Technical Reports Server (NTRS)

    Hinton, Yolanda L.

    1999-01-01

    Acoustic emission (AE) data were acquired during fatigue testing of an aluminum 2024-T4 compact tension specimen using a commercially available AE system. AE signals from crack extension were identified and separated from noise spikes, signals that reflected from the specimen edges, and signals that saturated the instrumentation. A commercially available software package was used to train a statistical pattern recognition system to classify the signals. The software trained a network to recognize signals with a 91-percent accuracy when compared with the researcher's interpretation of the data. Reasons for the discrepancies are examined and it is postulated that additional preprocessing of the AE data to focus on the extensional wave mode and eliminate other effects before training the pattern recognition system will result in increased accuracy.

  5. Recognition of Frequency Modulated Whistle-Like Sounds by a Bottlenose Dolphin (Tursiops truncatus) and Humans with Transformations in Amplitude, Duration and Frequency.

    PubMed

    Branstetter, Brian K; DeLong, Caroline M; Dziedzic, Brandon; Black, Amy; Bakhtiari, Kimberly

    2016-01-01

    Bottlenose dolphins (Tursiops truncatus) use the frequency contour of whistles produced by conspecifics for individual recognition. Here we tested a bottlenose dolphin's (Tursiops truncatus) ability to recognize frequency modulated whistle-like sounds using a three alternative matching-to-sample paradigm. The dolphin was first trained to select a specific object (object A) in response to a specific sound (sound A) for a total of three object-sound associations. The sounds were then transformed by amplitude, duration, or frequency transposition while still preserving the frequency contour of each sound. For comparison purposes, 30 human participants completed an identical task with the same sounds, objects, and training procedure. The dolphin's ability to correctly match objects to sounds was robust to changes in amplitude with only a minor decrement in performance for short durations. The dolphin failed to recognize sounds that were frequency transposed by plus or minus ½ octaves. Human participants demonstrated robust recognition with all acoustic transformations. The results indicate that this dolphin's acoustic recognition of whistle-like sounds was constrained by absolute pitch. Unlike human speech, which varies considerably in average frequency, signature whistles are relatively stable in frequency, which may have selected for a whistle recognition system invariant to frequency transposition.
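
    As a rough illustration of the frequency transposition used in this kind of study, the sketch below (Python with NumPy, not taken from the paper) multiplies a toy whistle-like frequency contour by 2^(±0.5) so that its shape is preserved while the absolute pitch shifts by half an octave; the contour, sample rate, and duration are arbitrary assumptions.

        # Sketch: transpose a whistle-like frequency contour by +/- half an octave
        # while preserving its shape, then synthesize it as a frequency-modulated tone.
        # Illustrative only; contour values and sample rate are assumed, not from the study.
        import numpy as np

        fs = 48000                                              # sample rate (Hz), assumed
        t = np.linspace(0.0, 1.0, fs, endpoint=False)
        contour_hz = 8000 + 2000 * np.sin(2 * np.pi * 3 * t)    # toy frequency contour

        def synthesize(contour, fs, shift_octaves=0.0):
            """Multiply the whole contour by 2**shift so its shape is preserved."""
            shifted = contour * (2.0 ** shift_octaves)
            phase = 2 * np.pi * np.cumsum(shifted) / fs         # integrate frequency
            return np.sin(phase)

        original = synthesize(contour_hz, fs, 0.0)
        up_half_octave = synthesize(contour_hz, fs, +0.5)       # x 2**0.5 ~ 1.414
        down_half_octave = synthesize(contour_hz, fs, -0.5)     # x 2**-0.5 ~ 0.707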

  6. Recognition of Frequency Modulated Whistle-Like Sounds by a Bottlenose Dolphin (Tursiops truncatus) and Humans with Transformations in Amplitude, Duration and Frequency

    PubMed Central

    Branstetter, Brian K.; DeLong, Caroline M.; Dziedzic, Brandon; Black, Amy; Bakhtiari, Kimberly

    2016-01-01

    Bottlenose dolphins (Tursiops truncatus) use the frequency contour of whistles produced by conspecifics for individual recognition. Here we tested a bottlenose dolphin’s (Tursiops truncatus) ability to recognize frequency modulated whistle-like sounds using a three alternative matching-to-sample paradigm. The dolphin was first trained to select a specific object (object A) in response to a specific sound (sound A) for a total of three object-sound associations. The sounds were then transformed by amplitude, duration, or frequency transposition while still preserving the frequency contour of each sound. For comparison purposes, 30 human participants completed an identical task with the same sounds, objects, and training procedure. The dolphin’s ability to correctly match objects to sounds was robust to changes in amplitude with only a minor decrement in performance for short durations. The dolphin failed to recognize sounds that were frequency transposed by plus or minus ½ octaves. Human participants demonstrated robust recognition with all acoustic transformations. The results indicate that this dolphin’s acoustic recognition of whistle-like sounds was constrained by absolute pitch. Unlike human speech, which varies considerably in average frequency, signature whistles are relatively stable in frequency, which may have selected for a whistle recognition system invariant to frequency transposition. PMID:26863519

  7. Post-analysis report on Chesapeake Bay data processing. [spectral analysis and recognition computer signature extension

    NASA Technical Reports Server (NTRS)

    Thomson, F.

    1972-01-01

    The additional processing performed on data collected over the Rhode River Test Site and Forestry Site in November 1970 is reported. The techniques and procedures used to obtain the processed results are described. Thermal data collected over three approximately parallel lines of the site were contoured and the results color coded in order to delineate important scene constituents and to identify trees attacked by pine bark beetles. Contouring work and histogram preparation are reviewed, and the important conclusions from the spectral analysis and recognition computer (SPARC) signature extension work are summarized. The SPARC setup and processing records are presented and recommendations are made for future data collection over the site.

  8. Post interaural neural net-based vowel recognition

    NASA Astrophysics Data System (ADS)

    Jouny, Ismail I.

    2001-10-01

    Interaural head-related transfer functions are used to process speech signatures prior to neural-net-based recognition. Data representing the head-related transfer function of a dummy head were collected at MIT and made available on the Internet. These data are used to pre-process vowel signatures to mimic the effects of the human ear on speech perception. Signatures representing various vowels of the English language are then presented to a multi-layer perceptron trained using the back-propagation algorithm for recognition purposes. The focus in this paper is to assess the effects of the human interaural system on vowel recognition performance, particularly when using a classification system that mimics the human brain, such as a neural net.

  9. Listening in on Friction: Stick-Slip Acoustical Signatures in Velcro

    NASA Astrophysics Data System (ADS)

    Hurtado Parra, Sebastian; Morrow, Leslie; Radziwanowski, Miles; Angiolillo, Paul

    2013-03-01

    The onset of kinetic friction and the possible resulting stick-slip motion remain mysterious phenomena. Moreover, stick-slip dynamics are typically accompanied by acoustic bursts that occur temporally with the slip event. The dry sliding dynamics of the hook-and-loop system, as exemplified by Velcro, manifest stick-slip behavior along with audible bursts that are easily collected microphonically. Synchronized measurements of the friction force and acoustic emissions were collected as hooked Velcro was driven at constant velocity over a bed of looped Velcro in an anechoic chamber. Not surprisingly, the envelope of the acoustic bursts maps well onto the slip events of the friction force time series, and the intensity of the bursts trends with the magnitude of the friction-force drop during a stick-slip event. However, analysis of the acoustic emission can serve as a sensitive tool for revealing some of the hidden details of the evolution of the transition from static to kinetic friction. For instance, small acoustic bursts are seen prior to the Amontons-Coulomb threshold, signaling precursor events prior to the onset of macroscopically observed motion. Preliminary spectral analysis of the acoustic emissions, including intensity-frequency data, will be presented.
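
    A minimal sketch of the kind of processing described, aligning the envelope of the acoustic bursts with slip events, is shown below: the Hilbert-transform envelope is smoothed and burst peaks are picked for comparison against the friction-force record. The signals, sample rates, and the peak threshold are placeholders, not values from the experiment.

        # Sketch: align acoustic burst envelopes with slip events in a friction trace.
        # Signal arrays, sample rates, and the peak threshold are assumptions for illustration.
        import numpy as np
        from scipy.signal import hilbert, find_peaks

        fs_audio = 44100                           # microphone sample rate (assumed)
        audio = np.random.randn(fs_audio * 10)     # placeholder for the recorded bursts
        force = np.random.randn(1000)              # placeholder friction-force series, 100 Hz

        envelope = np.abs(hilbert(audio))          # amplitude envelope of the emissions
        # Smooth the envelope with a short moving average before peak picking.
        win = int(0.005 * fs_audio)
        envelope_smooth = np.convolve(envelope, np.ones(win) / win, mode="same")

        burst_idx, _ = find_peaks(envelope_smooth, height=3 * envelope_smooth.std())
        burst_times = burst_idx / fs_audio         # compare against slip times in the force data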

  10. Perceptual Plasticity for Auditory Object Recognition

    PubMed Central

    Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.

    2017-01-01

    In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples

  11. Effects of Hearing Protection Device Attenuation on Unmanned Aerial Vehicle (UAV) Audio Signatures

    DTIC Science & Technology

    2016-03-01

    acoustic signatures of Unmanned Aircraft Systems (UASs). The results could be used to select appropriate HPDs for environments where noise from UASs may be...formed earplugs passively reduce noise by using foam to efficiently absorb sound. Preformed earplugs attenuate by using either level-dependent or non...domain. In this study, a program using these techniques will be created to simulate these HPD ratings and its effects on acoustic signatures of unmanned

  12. Acoustic Design of Naval Structures

    DTIC Science & Technology

    2005-12-01

    Ship Signatures Department Research and Development Report NSWCCD-70-TR-2005/149, December 2005: Acoustic Design of Naval Structures, by S. Nikiforov. Sponsoring agency: Office of Naval Research. ...approach, gained through his research experience on the acoustic characteristics of vibration and radiation of ship structures, sources of the main

  13. Comments on "Intraspecific and geographic variation of West Indian manatee (Trichechus manatus spp.) vocalizations" [J. Acoust. Soc. Am. 114, 66-69 (2003)].

    PubMed

    Sousa-Lima, Renata S

    2006-06-01

    This letter concerns the paper "Intraspecific and geographic variation of West Indian manatee (Trichechus manatus spp.) vocalizations" [Nowacek et al., J. Acoust. Soc. Am. 114, 66-69 (2003)]. The purpose here is to correct the fundamental frequency range and information on intraindividual variation in the vocalizations of Amazonian manatees reported by Nowacek et al. (2003) in citing the paper "Signature information and individual recognition in the isolation calls of Amazonian manatees, Trichechus inunguis (Mammalia: Sirenia)" [Sousa-Lima et al., Anim. Behav. 63, 301-310 (2002)].

  14. The Role of Secondary-Stressed and Unstressed-Unreduced Syllables in Word Recognition: Acoustic and Perceptual Studies with Russian Learners of English

    ERIC Educational Resources Information Center

    Banzina, Elina; Dilley, Laura C.; Hewitt, Lynne E.

    2016-01-01

    The importance of secondary-stressed (SS) and unstressed-unreduced (UU) syllable accuracy for spoken word recognition in English is as yet unclear. An acoustic study first investigated Russian learners' of English production of SS and UU syllables. Significant vowel quality and duration reductions in Russian-spoken SS and UU vowels were found,…

  15. The signatures of acoustic emission waveforms from fatigue crack advancing in thin metallic plates

    NASA Astrophysics Data System (ADS)

    Yeasin Bhuiyan, Md; Giurgiutiu, Victor

    2018-01-01

    The acoustic emission (AE) waveforms from a fatigue crack advancing in a thin metallic plate possess diverse and complex spectral signatures. In this article, we analyze these waveform signatures in coordination with the load level during cyclic fatigue. The advancing fatigue crack may generate numerous AE hits while it grows under fatigue loading. We found that these AE hits can be sorted into various groups based on their AE waveform signatures. Each waveform group has a particular time-domain signal pattern and a specific frequency spectrum. This indicates that each group represents a certain AE event related to the fatigue crack growth behavior. In situ AE-fatigue experiments were conducted to monitor the fatigue crack growth with simultaneous measurement of AE signals, fatigue loading, and optical crack growth. An in situ microscope was installed in the load frame of the mechanical testing system (MTS) to optically monitor the fatigue crack growth and relate the AE signals to the crack growth measurement. We found that the AE signal groups at higher load levels (75%-85% of the maximum load) differed from those at lower load levels (below 60% of the maximum load). The higher-load groups are closely related to fatigue-crack AE events, and their signals mostly contain higher-frequency peaks (100 kHz, 230 kHz, 450 kHz, 550 kHz). Some AE signal groups occurred in clusters corresponding to sequences of small AE events within the fatigue crack; these occurred at relatively lower load levels (50%-60% of the maximum load) and may be related to crack friction and micro-fracture during the friction process. Their signals mostly contain lower-frequency peaks (60 kHz, 100 kHz, 200 kHz). Waveform-based AE analysis may thus give comprehensive information about metal fatigue.

  16. Acoustic data transmission through a drill string

    DOEpatents

    Drumheller, D.S.

    1988-04-21

    Acoustical signals are transmitted through a drill string by canceling upward moving acoustical noise and by preconditioning the data in recognition of the comb filter impedance characteristics of the drill string. 5 figs.
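
    To illustrate the comb-filter character referred to in the patent, the sketch below computes the magnitude response of an idealized feed-forward comb filter; the delay, gain, and sampling rate are assumptions chosen only to show the periodic passband/stopband structure, not parameters of an actual drill string.

        # Sketch: magnitude response of an idealized feed-forward comb filter,
        # a rough stand-in for the periodic passband/stopband structure of a drill string.
        # The delay and gain values are assumptions, not taken from the patent.
        import numpy as np

        fs = 2000.0                 # Hz, assumed sampling rate of the telemetry channel
        delay_s = 0.01              # seconds, assumed round-trip delay of one pipe section
        g = 0.8                     # assumed reflection gain

        f = np.linspace(0.0, fs / 2, 1000)
        # |H(f)| for y[n] = x[n] + g * x[n - D]:
        H = np.abs(1 + g * np.exp(-2j * np.pi * f * delay_s))
        passbands = f[H > 1.5]      # frequencies where transmission is most favourable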

  17. Acoustic Tomography in the Canary Basin: Meddies and Tides

    NASA Astrophysics Data System (ADS)

    Dushaw, Brian D.; Gaillard, Fabienne; Terre, Thierry

    2017-11-01

    An acoustic propagation experiment over a 308 km range conducted in the Canary Basin in 1997-1998 was used to assess the ability of ocean acoustic tomography to measure the flux of Mediterranean water and Meddies. Instruments on a mooring adjacent to the acoustic path measured the southwestward passage of a strong Meddy in temperature, salinity, and current. Over 9 months of transmissions, the acoustic arrival pattern was an initial broad stochastic pulse varying in duration by 250-500 ms, followed by eight stable, identified ray arrivals. Small-scale sound speed fluctuations from Mediterranean water parcels littered around the sound channel axis caused acoustic scattering. Internal waves contributed more modest acoustic scattering. Based on simulations, the main effect of a Meddy passing across the acoustic path is the formation of many early-arriving, near-axis rays, but these rays are thoroughly scattered by the small-scale Mediterranean-water fluctuations. A Meddy decreases the deep-turning ray travel times by 10-30 ms. The dominant acoustic signature of a Meddy is therefore the expansion of the width of the initial stochastic pulse. While this signature appears inseparable from the other effects of Mediterranean water in this region, the acoustic time series indicates the steady passage of Mediterranean water across the acoustic path. Tidal variations caused by the mode-1 internal tides were measured by the acoustic travel times. The observed internal tides were partly predicted using a recent global model for such tides derived from satellite altimetry.

  18. NW-MILO Acoustic Data Collection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzner, Shari; Myers, Joshua R.; Maxwell, Adam R.

    2010-02-17

    There is an enduring requirement to improve our ability to detect potential threats and discriminate these from the legitimate commercial and recreational activity ongoing in the nearshore/littoral portion of the maritime domain. The Northwest Maritime Information and Littoral Operations (NW-MILO) Program at PNNL's Coastal Security Institute in Sequim, Washington is establishing a methodology to detect and classify these threats - in part through developing a better understanding of acoustic signatures in a near-shore environment. The purpose of the acoustic data collection described here is to investigate the acoustic signatures of small vessels. The data is being recorded continuously, 24 hours a day, along with radar track data and imagery. The recording began in August 2008, and to date the data contains tens of thousands of signals from small vessels recorded in a variety of environmental conditions. The quantity and variety of this data collection, with the supporting imagery and radar track data, makes it particularly useful for the development of robust acoustic signature models and advanced algorithms for signal classification and information extraction. The underwater acoustic sensing system is part of a multi-modal sensing system that is operating near the mouth of Sequim Bay. Sequim Bay opens onto the Strait of Juan de Fuca, which contains part of the border between the U.S. and Canada. Table 1 lists the specific components used for the NW-MILO system. The acoustic sensor is a hydrophone permanently deployed at a mean depth of about 3 meters. In addition to a hydrophone, the other sensors in the system are a marine radar, an electro-optical (EO) camera and an infra-red (IR) camera. The radar is integrated with a vessel tracking system (VTS) that provides position, speed and heading information. The data from all the sensors is recorded and saved to a central server. The data has been validated in terms of its usability for characterizing the

  19. State Recognition of Bone Drilling Based on Acoustic Emission in Pedicle Screw Operation.

    PubMed

    Guan, Fengqing; Sun, Yu; Qi, Xiaozhi; Hu, Ying; Yu, Gang; Zhang, Jianwei

    2018-05-09

    Pedicle drilling is an important step in pedicle screw fixation, and the most significant challenge in this operation is how to determine a key point in the transition region between cancellous and inner cortical bone. The purpose of this paper is to find a method to achieve recognition of that key point. After acquiring acoustic emission (AE) signals during the drilling process, this paper proposes a novel frequency-distribution-based algorithm (FDB) to analyze the AE signals in the frequency domain after certain pre-processing. We then select a specific frequency band of the signal for standard operations and choose a fitting function to fit the obtained sequence. Characteristics of the fitting function are extracted as outputs for identification of the different bone layers. Results obtained by force-signal detection and by direct measurement are also given in the paper; compared with these, the results obtained from the AE signals are distinguishable for the different bone layers and are more accurate and precise. The outputs of the algorithm are trained and identified by a neural network, and the recognition rate reaches 84.2%. The proposed method is proved to be efficient and can be used for bone layer identification in pedicle screw fixation.

  20. Pen-chant: Acoustic emissions of handwriting and drawing

    NASA Astrophysics Data System (ADS)

    Seniuk, Andrew G.

    The sounds generated by a writing instrument ('pen-chant') provide a rich and underutilized source of information for pattern recognition. We examine the feasibility of recognition of handwritten cursive text, exclusively through an analysis of acoustic emissions. We design and implement a family of recognizers using a template matching approach, with templates and similarity measures derived variously from: smoothed amplitude signal with fixed resolution, discrete sequence of magnitudes obtained from peaks in the smoothed amplitude signal, and ordered tree obtained from a scale space signal representation. Test results are presented for recognition of isolated lowercase cursive characters and for whole words. We also present qualitative results for recognizing gestures such as circling, scratch-out, check-marks, and hatching. Our first set of results, using samples provided by the author, yield recognition rates of over 70% (alphabet) and 90% (26 words), with a confidence of +/-8%, based solely on acoustic emissions. Our second set of results uses data gathered from nine writers. These results demonstrate that acoustic emissions are a rich source of information, usable---on their own or in conjunction with image-based features---to solve pattern recognition problems. In future work, this approach can be applied to writer identification, handwriting and gesture-based computer input technology, emotion recognition, and temporal analysis of sketches.
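
    A minimal sketch of the first of the template representations mentioned (a smoothed amplitude signal compared by a similarity measure) is given below; the envelope window, the normalized-correlation score, and the placeholder templates are illustrative assumptions rather than the recognizers actually built in the thesis.

        # Sketch: template matching on smoothed amplitude envelopes of pen sounds.
        # The templates and test signal here are placeholders; only the general idea
        # (envelope extraction + similarity score) is illustrated.
        import numpy as np

        def smoothed_envelope(x, win=256):
            env = np.abs(x)
            return np.convolve(env, np.ones(win) / win, mode="same")

        def match_score(test_env, template_env):
            """Normalized correlation between equal-length envelope segments."""
            n = min(len(test_env), len(template_env))
            a = test_env[:n] - test_env[:n].mean()
            b = template_env[:n] - template_env[:n].mean()
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        templates = {"a": np.random.randn(44100), "b": np.random.randn(44100)}  # placeholders
        test = np.random.randn(44100)
        scores = {k: match_score(smoothed_envelope(test), smoothed_envelope(v))
                  for k, v in templates.items()}
        best = max(scores, key=scores.get)      # recognized character label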

  1. Signature Verification Using N-tuple Learning Machine.

    PubMed

    Maneechot, Thanin; Kitjaidure, Yuttana

    2005-01-01

    This research presents a new algorithm for signature verification using an N-tuple learning machine. The features are taken from handwritten signatures captured on a digital tablet (on-line). The recognition algorithm uses four extracted features, namely horizontal and vertical pen-tip position (x-y position), pen-tip pressure, and pen altitude angles. Verification uses the N-tuple technique with Gaussian thresholding.
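
    For readers unfamiliar with N-tuple learning, the sketch below shows a tiny WISARD-style N-tuple classifier over a binarized feature vector; the binarization, tuple size, and number of tuples are assumptions for illustration and do not reproduce the Gaussian thresholding step of the paper.

        # Sketch: a small WISARD-style N-tuple classifier over binarized signature features.
        # Feature binarization, tuple size and counts are assumptions for illustration only.
        import numpy as np

        rng = np.random.default_rng(0)

        class NTupleClassifier:
            def __init__(self, n_bits, n_tuples=64, tuple_size=4):
                self.tuples = [rng.choice(n_bits, tuple_size, replace=False)
                               for _ in range(n_tuples)]
                self.memory = {}                    # (class, tuple index, address) -> seen

            def _addresses(self, bits):
                for i, idx in enumerate(self.tuples):
                    yield i, tuple(bits[idx])       # the tuple's binary address

            def train(self, bits, label):
                for i, addr in self._addresses(bits):
                    self.memory[(label, i, addr)] = True

            def score(self, bits, label):
                return sum((label, i, addr) in self.memory
                           for i, addr in self._addresses(bits))

        # Usage with binarized pen-trajectory features (placeholder data):
        clf = NTupleClassifier(n_bits=256)
        genuine = (rng.random(256) > 0.5).astype(int)
        clf.train(genuine, "genuine")
        query_score = clf.score((rng.random(256) > 0.5).astype(int), "genuine")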

  2. Acoustic Characterization of a Multi-Rotor Unmanned Aircraft

    NASA Astrophysics Data System (ADS)

    Feight, Jordan; Gaeta, Richard; Jacob, Jamey

    2017-11-01

    In this study, the noise produced by a small multi-rotor rotary wing aircraft, or drone, is measured and characterized. The aircraft is tested in different configurations and environments to investigate specific parameters and how they affect the acoustic signature of the system. The parameters include rotor RPM, the number of rotors, distance and angle of microphone array from the noise source, and the ambient environment. The testing environments include an anechoic chamber for an idealized setting and both indoor and outdoor settings to represent real world conditions. PIV measurements are conducted to link the downwash and vortical flow structures from the rotors with the noise generation. The significant factors that arise from this study are the operational state of the aircraft and the microphone location (or the directivity of the noise source). The directivity in the rotor plane was shown to be omni-directional, regardless of the varying parameters. The tonal noise dominates the low to mid frequencies while the broadband noise dominates the higher frequencies. The fundamental characteristics of the acoustic signature appear to be invariant to the number of rotors. Flight maneuvers of the aircraft also significantly impact the tonal content in the acoustic signature.

  3. Acoustic emission signatures of damage modes in concrete

    NASA Astrophysics Data System (ADS)

    Aggelis, D. G.; Mpalaskas, A. C.; Matikas, T. E.; Van Hemelrijck, D.

    2014-03-01

    The characterization of the dominant fracture mode may assist in the prediction of the remaining life of a concrete structure due to the sequence between successive tensile and shear mechanisms. Acoustic emission sensors record the elastic responses after any fracture event, converting them into electric waveforms. The characteristics of the waveforms vary according to the movement of the crack tips, enabling characterization of the original mode. In this study fracture experiments on concrete beams are conducted. The aim is to examine the typical acoustic signals emitted by different fracture modes (namely tension due to bending and shear) in a concrete matrix. This is an advancement of a recent study focusing on smaller scale mortar and marble specimens. The dominant stress field and ultimate fracture mode is controlled by modification of the four-point bending setup, while acoustic emission is monitored by six sensors at fixed locations. Conclusions are drawn about how to distinguish the sources based on time-domain waveform parameters (duration, rise time) and frequency. Specifically, emissions during shear loading exhibit lower frequencies and longer duration than tensile ones. Results show that a combination of AE features may help to characterize the shift between dominant fracture modes and contribute to the structural health monitoring of concrete. This offers the basis for in-situ application provided that the distortion of the signal due to the heterogeneous wave path is accounted for.

  4. Non-Linear Acoustic Concealed Weapons Detector

    DTIC Science & Technology

    2006-05-01

    signature analysis ... the interactions of the beams with concealed objects. The Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation is the most widely used...Hamilton developed a finite difference method based on the KZK equation to model pulsed acoustic emissions from axial symmetric sources. Using a...College of William & Mary, we have developed a simulation code using the KZK equation to model non-linear acoustic beams and visualize beam patterns

  5. Automatic Target Recognition Based on Cross-Plot

    PubMed Central

    Wong, Kelvin Kian Loong; Abbott, Derek

    2011-01-01

    Automatic target recognition that relies on rapid feature extraction of real-time target from photo-realistic imaging will enable efficient identification of target patterns. To achieve this objective, Cross-plots of binary patterns are explored as potential signatures for the observed target by high-speed capture of the crucial spatial features using minimal computational resources. Target recognition was implemented based on the proposed pattern recognition concept and tested rigorously for its precision and recall performance. We conclude that Cross-plotting is able to produce a digital fingerprint of a target that correlates efficiently and effectively to signatures of patterns having its identity in a target repository. PMID:21980508

  6. Optimizing the Combination of Acoustic and Electric Hearing in the Implanted Ear

    PubMed Central

    Karsten, Sue A.; Turner, Christopher W.; Brown, Carolyn J.; Jeon, Eun Kyung; Abbas, Paul J.; Gantz, Bruce J.

    2016-01-01

    Objectives The aim of this study was to determine an optimal approach to program combined acoustic plus electric (A+E) hearing devices in the same ear to maximize speech-recognition performance. Design Ten participants with at least 1 year of experience using Nucleus Hybrid (short electrode) A+E devices were evaluated across three different fitting conditions that varied in the frequency ranges assigned to the acoustically and electrically presented portions of the spectrum. Real-ear measurements were used to optimize the acoustic component for each participant, and the acoustic stimulation was then held constant across conditions. The lower boundary of the electric frequency range was systematically varied to create three conditions with respect to the upper boundary of the acoustic spectrum: Meet, Overlap, and Gap programming. Consonant recognition in quiet and speech recognition in competing-talker babble were evaluated after participants were given the opportunity to adapt by using the experimental programs in their typical everyday listening situations. Participants provided subjective ratings and evaluations for each fitting condition. Results There were no significant differences in performance between conditions (Meet, Overlap, Gap) for consonant recognition in quiet. A significant decrement in performance was measured for the Overlap fitting condition for speech recognition in babble. Subjective ratings indicated a significant preference for the Meet fitting regimen. Conclusions Participants using the Hybrid ipsilateral A+E device generally performed better when the acoustic and electric spectra were programmed to meet at a single frequency region, as opposed to a gap or overlap. Although there is no particular advantage for the Meet fitting strategy for recognition of consonants in quiet, the advantage becomes evident for speech recognition in competing-talker babble and in patient preferences. PMID:23059851

  7. Department of Cybernetic Acoustics

    NASA Astrophysics Data System (ADS)

    The development of the theory, instrumentation and applications of methods and systems for the measurement, analysis, processing and synthesis of acoustic signals within the audio frequency range is discussed, particularly for the speech signal and for the vibro-acoustic signals emitted by technical and industrial equipment treated as noise and vibration sources. The research work, both theoretical and experimental, aims at applications in various branches of science and medicine, such as: acoustical diagnostics and phoniatric rehabilitation of pathological and postoperative states of the speech organ; bilateral "man-machine" speech communication based on the analysis, recognition and synthesis of the speech signal; and vibro-acoustical diagnostics and continuous monitoring of the state of machines, technical equipment and technological processes.

  8. A novel probabilistic framework for event-based speech recognition

    NASA Astrophysics Data System (ADS)

    Juneja, Amit; Espy-Wilson, Carol

    2003-10-01

    One of the reasons for unsatisfactory performance of the state-of-the-art automatic speech recognition (ASR) systems is the inferior acoustic modeling of low-level acoustic-phonetic information in the speech signal. An acoustic-phonetic approach to ASR, on the other hand, explicitly targets linguistic information in the speech signal, but such a system for continuous speech recognition (CSR) is not known to exist. A probabilistic and statistical framework for CSR based on the idea of the representation of speech sounds by bundles of binary valued articulatory phonetic features is proposed. Multiple probabilistic sequences of linguistically motivated landmarks are obtained using binary classifiers of manner phonetic features-syllabic, sonorant and continuant-and the knowledge-based acoustic parameters (APs) that are acoustic correlates of those features. The landmarks are then used for the extraction of knowledge-based APs for source and place phonetic features and their binary classification. Probabilistic landmark sequences are constrained using manner class language models for isolated or connected word recognition. The proposed method could overcome the disadvantages encountered by the early acoustic-phonetic knowledge-based systems that led the ASR community to switch to systems highly dependent on statistical pattern analysis methods and probabilistic language or grammar models.

  9. Introducing passive acoustic filter in acoustic based condition monitoring: Motor bike piston-bore fault identification

    NASA Astrophysics Data System (ADS)

    Jena, D. P.; Panigrahi, S. N.

    2016-03-01

    The requirement of designing a sophisticated digital band-pass filter in acoustic-based condition monitoring has been eliminated by introducing a passive acoustic filter in the present work. So far, no one has attempted to explore the possibility of implementing passive acoustic filters in acoustic-based condition monitoring as a pre-conditioner. In order to enhance acoustic-based condition monitoring, a passive acoustic band-pass filter has been designed and deployed. Towards achieving an efficient band-pass acoustic filter, a generalized design methodology has been proposed to design and optimize the desired acoustic filter using multiple filter components in series. An appropriate objective function has been identified for a genetic algorithm (GA) based optimization technique with multiple design constraints. In addition, the sturdiness of the proposed method has been demonstrated by designing a band-pass filter using an n-branch Quincke tube, a high-pass filter and multiple Helmholtz resonators. The performance of the designed acoustic band-pass filter has been demonstrated by investigating the piston-bore defect of a motorbike using the engine noise signature. Introducing a passive acoustic filter in acoustic-based condition monitoring significantly enhances machine-learning-based fault identification. This is also a first attempt of its kind.
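
    Since Helmholtz resonators are among the filter components used, a short sketch of the textbook resonance-frequency estimate for a single resonator is given below; the geometry values and the end-correction factor are assumptions, not the optimized dimensions from the paper.

        # Sketch: resonance frequency of a single Helmholtz resonator, the basic building
        # block mentioned for the passive band-pass filter. Geometry values are assumed.
        import math

        def helmholtz_f0(c=343.0, neck_radius=0.01, neck_length=0.03, cavity_volume=2e-4):
            """f0 = (c / (2*pi)) * sqrt(A / (V * L_eff)), with an end-corrected neck length."""
            area = math.pi * neck_radius ** 2
            l_eff = neck_length + 1.7 * neck_radius        # a common end correction
            return (c / (2 * math.pi)) * math.sqrt(area / (cavity_volume * l_eff))

        f0 = helmholtz_f0()     # attenuation of the side branch is strongest near this frequency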

  10. Italians Use Abstract Knowledge about Lexical Stress during Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Sulpizio, Simone; McQueen, James M.

    2012-01-01

    In two eye-tracking experiments in Italian, we investigated how acoustic information and stored knowledge about lexical stress are used during the recognition of tri-syllabic spoken words. Experiment 1 showed that Italians use acoustic cues to a word's stress pattern rapidly in word recognition, but only for words with antepenultimate stress.…

  11. Pulse analysis of acoustic emission signals. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Houghton, J. R.

    1976-01-01

    A method for the signature analysis of pulses in the frequency domain and the time domain is presented. Fourier spectrum, Fourier transfer function, shock spectrum and shock spectrum ratio are examined in the frequency domain analysis, and pulse shape deconvolution is developed for use in the time domain analysis. To demonstrate the relative sensitivity of each of the methods to small changes in the pulse shape, signatures of computer modeled systems with analytical pulses are presented. Optimization techniques are developed and used to indicate the best design parameters values for deconvolution of the pulse shape. Several experiments are presented that test the pulse signature analysis methods on different acoustic emission sources. These include acoustic emissions associated with: (1) crack propagation, (2) ball dropping on a plate, (3) spark discharge and (4) defective and good ball bearings.
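
    As a rough illustration of pulse-shape deconvolution in the frequency domain, the sketch below divides spectra with a simple water-level regularizer; the toy pulse, toy transfer path, and the water-level constant are assumptions and do not represent the optimization techniques developed in the thesis.

        # Sketch: frequency-domain pulse-shape deconvolution with a simple water-level
        # regularizer, in the spirit of recovering a source pulse from a measured response.
        # The signals and the water-level constant are placeholders.
        import numpy as np

        def deconvolve(measured, system_response, water_level=1e-2):
            n = len(measured) + len(system_response) - 1
            M = np.fft.rfft(measured, n)
            H = np.fft.rfft(system_response, n)
            H_mag = np.abs(H)
            floor = water_level * H_mag.max()
            # Replace small spectral magnitudes to avoid dividing by near-zero values.
            H_reg = np.where(H_mag < floor, floor * np.exp(1j * np.angle(H)), H)
            return np.fft.irfft(M / H_reg, n)

        pulse = np.hanning(64)                                  # toy source pulse
        system = np.exp(-np.arange(128) / 20.0)                 # toy transfer path
        measured = np.convolve(pulse, system)
        estimate = deconvolve(measured, system)                 # approximates the pulse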

  12. Battlefield decision aid for acoustical ground sensors with interface to meteorological data sources

    NASA Astrophysics Data System (ADS)

    Wilson, D. Keith; Noble, John M.; VanAartsen, Bruce H.; Szeto, Gregory L.

    2001-08-01

    The performance of acoustical ground sensors depends heavily on the local atmospheric and terrain conditions. This paper describes a prototype physics-based decision aid, called the Acoustic Battlefield Aid (ABFA), for predicting these environmental effects. ABFA integrates advanced models for acoustic propagation, atmospheric structure, and array signal processing into a convenient graphical user interface. The propagation calculations are performed in the frequency domain on user-definable target spectra. The solution method involves a parabolic approximation to the wave equation combined with a terrain diffraction model. Sensor performance is characterized with Cramer-Rao lower bounds (CRLBs). The CRLB calculations include randomization of signal energy and wavefront orientation resulting from atmospheric turbulence. Available performance characterizations include signal-to-noise ratio, probability of detection, direction-finding accuracy for isolated receiving arrays, and location-finding accuracy for networked receiving arrays. A suite of integrated tools allows users to create new target descriptions from standard digitized audio files and to design new sensor array layouts. These tools optionally interface with the ARL Database/Automatic Target Recognition (ATR) Laboratory, providing access to an extensive library of target signatures. ABFA also includes a Java-based capability for network access of near real-time data from surface weather stations or forecasts from the Army's Integrated Meteorological System. As an example, the detection footprint of an acoustical sensor, as it evolves over a 13-hour period, is calculated.

  13. Acoustic analysis of the propfan

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Succi, G. P.

    1979-01-01

    A review of propeller noise prediction technology is presented. Two methods for the prediction of the noise from conventional and advanced propellers in forward flight are described. These methods are based on different time domain formulations. Brief descriptions of the computer algorithms based on these formulations are given. The output of the programs (the acoustic pressure signature) was Fourier analyzed to get the acoustic pressure spectrum. The main difference between the two programs is that one can handle propellers with supersonic tip speed while the other is for subsonic tip speed propellers. Comparisons of the calculated and measured acoustic data for a conventional and an advanced propeller show good agreement in general.

  14. Correlation Time of Ocean Ambient Noise Intensity in San Diego Bay and Target Recognition in Acoustic Daylight Images

    NASA Astrophysics Data System (ADS)

    Wadsworth, Adam J.

    A method for passively detecting and imaging underwater targets using ambient noise as the sole source of illumination (named acoustic daylight) was successfully implemented in the form of the Acoustic Daylight Ocean Noise Imaging System (ADONIS). In a series of imaging experiments conducted in San Diego Bay, where the dominant source of high-frequency ambient noise is snapping shrimp, a large quantity of ambient noise intensity data was collected with the ADONIS (Epifanio, 1997). In a subset of the experimental data sets, fluctuations of time-averaged ambient noise intensity exhibited a diurnal pattern consistent with the increase in frequency of shrimp snapping near dawn and dusk. The same subset of experimental data is revisited here and the correlation time is estimated and analysed for sequences of ambient noise data several minutes in length, with the aim of detecting possible periodicities or other trends in the fluctuation of the shrimp-dominated ambient noise field. Using videos formed from sequences of acoustic daylight images along with other experimental information, candidate segments of static-configuration ADONIS raw ambient noise data were isolated. For each segment, the normalized intensity auto-correlation closely resembled the delta function, the auto-correlation of white noise. No intensity fluctuation patterns at timescales smaller than a few minutes were discernible, suggesting that the shrimp do not communicate, synchronise, or exhibit any periodicities in their snapping. Also presented here is an ADONIS-specific target recognition algorithm based on principal component analysis, along with basic experimental results using a database of acoustic daylight images.
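
    A minimal sketch of the normalized intensity autocorrelation estimate used to look for periodicities is given below; the intensity series and frame rate are placeholders, and a delta-like result (close to 1 at zero lag and near 0 elsewhere) corresponds to the white-noise-like behaviour reported.

        # Sketch: normalized autocorrelation of an ambient-noise intensity time series,
        # used to look for periodicities; the intensity series here is a placeholder.
        import numpy as np

        def normalized_autocorrelation(x, max_lag):
            x = np.asarray(x, dtype=float)
            x = x - x.mean()
            denom = np.dot(x, x)
            return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                             for k in range(max_lag + 1)])

        intensity = np.random.rand(6000)      # e.g. band-averaged intensity at 25 frames/s (assumed)
        rho = normalized_autocorrelation(intensity, max_lag=500)
        # A delta-like rho (1 at lag 0, ~0 elsewhere) indicates white, uncorrelated snapping.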

  15. Interpreting Underwater Acoustic Images of the Upper Ocean Boundary Layer

    ERIC Educational Resources Information Center

    Ulloa, Marco J.

    2007-01-01

    A challenging task in physical studies of the upper ocean using underwater sound is the interpretation of high-resolution acoustic images. This paper covers a number of basic concepts necessary for undergraduate and postgraduate students to identify the most distinctive features of the images, providing a link with the acoustic signatures of…

  16. Small Vocabulary Recognition Using Surface Electromyography in an Acoustically Harsh Environment

    NASA Technical Reports Server (NTRS)

    Betts, Bradley J.; Jorgensen, Charles

    2005-01-01

    This paper presents results of electromyographic-based (EMG-based) speech recognition on a small vocabulary of 15 English words. The work was motivated in part by a desire to mitigate the effects of high acoustic noise on speech intelligibility in communication systems used by first responders. Both an off-line and a real-time system were constructed. Data were collected from a single male subject wearing a firefighter's self-contained breathing apparatus. A single channel of EMG data was used, collected via surface sensors at a rate of 104 samples/s. The signal processing core consisted of an activity detector, a feature extractor, and a neural network classifier. In the off-line phase, 150 examples of each word were collected from the subject. Generalization testing, conducted using bootstrapping, produced an overall average correct classification rate on the 15 words of 74%, with a 95% confidence interval of [71%, 77%]. Once the classifier was trained, the subject used the real-time system to communicate and to control a robotic device. The real-time system was tested with the subject exposed to an ambient noise level of approximately 95 decibels.

  17. Discriminative Features Mining for Offline Handwritten Signature Verification

    NASA Astrophysics Data System (ADS)

    Neamah, Karrar; Mohamad, Dzulkifli; Saba, Tanzila; Rehman, Amjad

    2014-03-01

    Signature verification is an active research area in the field of pattern recognition. It is employed to identify a particular person with the help of his/her signature's characteristics, such as pen pressure, the shape of loops, writing speed, and the up-down motion of the pen. In the entire process, however, the feature extraction and selection stage is of prime importance, since several signatures have similar strokes, characteristics and sizes. Accordingly, this paper presents a combination of skeleton orientation and the gravity centre point to extract accurate pattern features of signature data in an offline signature verification system. Promising results have proved the success of the integration of the two methods.

  18. Deep Learning Methods for Underwater Target Feature Extraction and Recognition

    PubMed Central

    Peng, Yuan; Qiu, Mengran; Shi, Jianfei; Liu, Liangliang

    2018-01-01

    The classification and recognition of underwater acoustic signals have always been an important research topic in the field of underwater acoustic signal processing. Currently, the wavelet transform, the Hilbert-Huang transform, and Mel frequency cepstral coefficients are used as methods of underwater acoustic signal feature extraction. In this paper, a method for feature extraction and identification of underwater noise data based on a convolutional neural network (CNN) and an extreme learning machine (ELM) is proposed. An automatic feature extraction method for underwater acoustic signals is proposed using a deep convolutional network, and an underwater target recognition classifier is based on the extreme learning machine. Although convolutional neural networks can perform both feature extraction and classification, the classification function mainly relies on a fully connected layer trained by gradient descent, whose generalization ability is limited and suboptimal, so an ELM was used in the classification stage. Firstly, the CNN learns deep and robust features, followed by the removal of the fully connected layers. Then the ELM, fed with the CNN features, is used as the classifier to conduct the classification. Experiments on an actual data set of civil ships obtained a 93.04% recognition rate; compared to traditional Mel frequency cepstral coefficient and Hilbert-Huang features, the recognition rate is greatly improved. PMID:29780407
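
    The extreme learning machine stage can be summarized with the sketch below: a fixed random hidden layer followed by a ridge-regularized least-squares solve for the output weights. The feature matrix here is random, standing in for CNN output, and the class count, hidden size, and regularization constant are assumptions.

        # Sketch: an extreme learning machine (ELM) classifier of the kind described for
        # the deep features; the feature matrix is a placeholder for CNN output.
        import numpy as np

        rng = np.random.default_rng(1)

        def elm_train(X, y_onehot, n_hidden=256, reg=1e-3):
            W = rng.normal(size=(X.shape[1], n_hidden))          # random input weights (fixed)
            b = rng.normal(size=n_hidden)
            H = np.tanh(X @ W + b)                               # hidden activations
            # Ridge-regularized least squares for the output weights.
            beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y_onehot)
            return W, b, beta

        def elm_predict(X, W, b, beta):
            return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

        X = rng.normal(size=(500, 128))                          # stand-in for CNN features
        y = rng.integers(0, 4, size=500)                         # four ship classes, assumed
        W, b, beta = elm_train(X, np.eye(4)[y])
        pred = elm_predict(X, W, b, beta)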

  19. The effects of reverberant self- and overlap-masking on speech recognition in cochlear implant listeners.

    PubMed

    Desmond, Jill M; Collins, Leslie M; Throckmorton, Chandra S

    2014-06-01

    Many cochlear implant (CI) listeners experience decreased speech recognition in reverberant environments [Kokkinakis et al., J. Acoust. Soc. Am. 129(5), 3221-3232 (2011)], which may be caused by a combination of self- and overlap-masking [Bolt and MacDonald, J. Acoust. Soc. Am. 21(6), 577-580 (1949)]. Determining the extent to which these effects decrease speech recognition for CI listeners may influence reverberation mitigation algorithms. This study compared speech recognition with ideal self-masking mitigation, with ideal overlap-masking mitigation, and with no mitigation. Under these conditions, mitigating either self- or overlap-masking resulted in significant improvements in speech recognition for both normal hearing subjects utilizing an acoustic model and for CI listeners using their own devices.

  20. Evaluation of MPEG-7-Based Audio Descriptors for Animal Voice Recognition over Wireless Acoustic Sensor Networks.

    PubMed

    Luque, Joaquín; Larios, Diego F; Personal, Enrique; Barbancho, Julio; León, Carlos

    2016-05-18

    Environmental audio monitoring is a huge area of interest for biologists all over the world. This is why some audio monitoring systems have been proposed in the literature, which can be classified into two different approaches: acquisition and compression of all audio patterns in order to send them as raw data to a main server; or specific recognition systems based on audio patterns. The first approach presents the drawback of a high amount of information to be stored on a main server; moreover, this information requires a considerable amount of effort to be analyzed. The second approach has the drawback of its lack of scalability when new patterns need to be detected. To overcome these limitations, this paper proposes an environmental Wireless Acoustic Sensor Network architecture focused on the use of generic descriptors based on the MPEG-7 standard. These descriptors are shown to be suitable for the recognition of different patterns, allowing high scalability. The proposed parameters have been tested in recognizing different behaviors of two anuran species that live in Spanish natural parks, the Epidalea calamita and Alytes obstetricans toads, demonstrating high classification performance.

  1. Evaluation of MPEG-7-Based Audio Descriptors for Animal Voice Recognition over Wireless Acoustic Sensor Networks

    PubMed Central

    Luque, Joaquín; Larios, Diego F.; Personal, Enrique; Barbancho, Julio; León, Carlos

    2016-01-01

    Environmental audio monitoring is a huge area of interest for biologists all over the world. This is why some audio monitoring systems have been proposed in the literature, which can be classified into two different approaches: acquisition and compression of all audio patterns in order to send them as raw data to a main server; or specific recognition systems based on audio patterns. The first approach presents the drawback of a high amount of information to be stored on a main server; moreover, this information requires a considerable amount of effort to be analyzed. The second approach has the drawback of its lack of scalability when new patterns need to be detected. To overcome these limitations, this paper proposes an environmental Wireless Acoustic Sensor Network architecture focused on the use of generic descriptors based on the MPEG-7 standard. These descriptors are shown to be suitable for the recognition of different patterns, allowing high scalability. The proposed parameters have been tested in recognizing different behaviors of two anuran species that live in Spanish natural parks, the Epidalea calamita and Alytes obstetricans toads, demonstrating high classification performance. PMID:27213375

  2. Research on Signature Verification Method Based on Discrete Fréchet Distance

    NASA Astrophysics Data System (ADS)

    Fang, J. L.; Wu, W.

    2018-05-01

    This paper proposes a multi-feature signature template based on the discrete Fréchet distance, which breaks through the limitation of traditional signature authentication that uses a single signature feature. It addresses the heavy computational workload of global-feature template extraction in online handwritten signature authentication and the problem of unreasonable signature feature selection. In the experiments, the false recognition rate (FAR) and false rejection rate (FRR) of the signatures are calculated, together with the average equal error rate (AEER). The feasibility of the combined template scheme is verified by comparing the average equal error rates of the combined template and the original template.
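
    For reference, the discrete Fréchet distance between two sampled curves can be computed with the standard dynamic-programming recurrence sketched below; the toy pen-trajectory points and the idea of thresholding the distance are illustrative assumptions, not the multi-feature template construction of the paper.

        # Sketch: discrete Fréchet distance between two sampled signature trajectories,
        # computed with the standard dynamic-programming recurrence. The input curves
        # are placeholders for (x, y) pen-tip sequences.
        import numpy as np

        def discrete_frechet(P, Q):
            P, Q = np.asarray(P, float), np.asarray(Q, float)
            n, m = len(P), len(Q)
            d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)   # pairwise distances
            ca = np.full((n, m), np.inf)
            ca[0, 0] = d[0, 0]
            for i in range(1, n):
                ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
            for j in range(1, m):
                ca[0, j] = max(ca[0, j - 1], d[0, j])
            for i in range(1, n):
                for j in range(1, m):
                    ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]), d[i, j])
            return ca[-1, -1]

        ref = [(0, 0), (1, 1), (2, 1), (3, 0)]         # enrolled signature sample (toy)
        test = [(0, 0.1), (1, 1.2), (2, 0.9), (3, 0)]
        distance = discrete_frechet(ref, test)         # compare against a decision threshold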

  3. Reliable classification of high explosive and chemical/biological artillery using acoustic sensors

    NASA Astrophysics Data System (ADS)

    Desai, Sachi V.; Hohil, Myron E.; Bass, Henry E.; Chambers, Jim

    2005-05-01

    Feature extraction methods based on the discrete wavelet transform and multiresolution analysis are used to develop a robust classification algorithm that reliably discriminates between conventional and simulated chemical/biological artillery rounds via acoustic signals produced during detonation, utilizing a generic acoustic sensor. Based on the transient properties of the blast signature, distinct characteristics arise within the different acoustic signatures because high-explosive warheads emphasize concussive and shrapnel effects, while chemical/biological warheads are designed to disperse their contents over large areas, therefore employing a slower-burning, less intense explosive to mix and spread their contents. The ensuing blast waves are readily characterized by variations in the corresponding peak pressure and rise time of the blast, differences in the ratio of positive pressure amplitude to the negative amplitude, and variations in the overall duration of the resulting waveform. Unique attributes can also be identified that depend upon the properties of the gun tube, projectile speed at the muzzle, and the explosive burn rates of the warhead. The algorithm enables robust classification of various airburst signatures using acoustics. It is capable of being integrated within an existing chemical/biological sensor, a stand-alone generic sensor, or a part of a disparate sensor suite. When emplaced in high-threat areas, this added capability would further provide field personnel with advanced battlefield knowledge without the aid of so-called "sniffer" sensors that rely upon air particle information based on direct contact with possibly contaminated air. In this work, the discrete wavelet transform is used to extract the predominant components of these characteristics from airburst signatures at ranges exceeding 2 km while maintaining the temporal sequence of the data to keep relevance to the transient differences of the airburst signatures. Highly reliable
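
    A minimal sketch of wavelet-based feature extraction in this spirit is shown below, using PyWavelets to compute relative sub-band energies from a discrete wavelet decomposition; the waveform, wavelet family, and decomposition depth are assumptions rather than the features actually selected in the study.

        # Sketch: relative wavelet sub-band energies as blast-signature features, using
        # PyWavelets. The waveform, wavelet family, and decomposition depth are assumed.
        import numpy as np
        import pywt

        def wavelet_energy_features(x, wavelet="db4", level=5):
            coeffs = pywt.wavedec(x, wavelet, level=level)       # [cA_L, cD_L, ..., cD_1]
            energies = np.array([np.sum(c ** 2) for c in coeffs])
            return energies / energies.sum()                     # relative energy per band

        waveform = np.random.randn(8192)        # placeholder for a recorded airburst signal
        features = wavelet_energy_features(waveform)
        # Feed `features` to a classifier to separate HE from chem/bio airburst signatures.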

  4. The Role of Secondary-Stressed and Unstressed-Unreduced Syllables in Word Recognition: Acoustic and Perceptual Studies with Russian Learners of English.

    PubMed

    Banzina, Elina; Dilley, Laura C; Hewitt, Lynne E

    2016-08-01

    The importance of secondary-stressed (SS) and unstressed-unreduced (UU) syllable accuracy for spoken word recognition in English is as yet unclear. An acoustic study first investigated Russian learners' of English production of SS and UU syllables. Significant vowel quality and duration reductions in Russian-spoken SS and UU vowels were found, likely due to a transfer of native phonological features. Next, a cross-modal phonological priming technique combined with a lexical decision task assessed the effect of inaccurate SS and UU syllable productions on native American English listeners' speech processing. Inaccurate UU vowels led to significant inhibition of lexical access, while reduced SS vowels revealed less interference. The results have implications for understanding the role of SS and UU syllables for word recognition and English pronunciation instruction.

  5. The expression and recognition of emotions in the voice across five nations: A lens model analysis based on acoustic features.

    PubMed

    Laukka, Petri; Elfenbein, Hillary Anger; Thingujam, Nutankumar S; Rockstuhl, Thomas; Iraki, Frederick K; Chui, Wanda; Althoff, Jean

    2016-11-01

    This study extends previous work on emotion communication across cultures with a large-scale investigation of the physical expression cues in vocal tone. In doing so, it provides the first direct test of a key proposition of dialect theory, namely that greater accuracy of detecting emotions from one's own cultural group-known as in-group advantage-results from a match between culturally specific schemas in emotional expression style and culturally specific schemas in emotion recognition. Study 1 used stimuli from 100 professional actors from five English-speaking nations vocally conveying 11 emotional states (anger, contempt, fear, happiness, interest, lust, neutral, pride, relief, sadness, and shame) using standard-content sentences. Detailed acoustic analyses showed many similarities across groups, and yet also systematic group differences. This provides evidence for cultural accents in expressive style at the level of acoustic cues. In Study 2, listeners evaluated these expressions in a 5 × 5 design balanced across groups. Cross-cultural accuracy was greater than expected by chance. However, there was also in-group advantage, which varied across emotions. A lens model analysis of fundamental acoustic properties examined patterns in emotional expression and perception within and across groups. Acoustic cues were used relatively similarly across groups both to produce and judge emotions, and yet there were also subtle cultural differences. Speakers appear to have a culturally nuanced schema for enacting vocal tones via acoustic cues, and perceivers have a culturally nuanced schema in judging them. Consistent with dialect theory's prediction, in-group judgments showed a greater match between these schemas used for emotional expression and perception. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  6. DEMON-type algorithms for determination of hydro-acoustic signatures of surface ships and of divers

    NASA Astrophysics Data System (ADS)

    Slamnoiu, G.; Radu, O.; Rosca, V.; Pascu, C.; Damian, R.; Surdu, G.; Curca, E.; Radulescu, A.

    2016-08-01

    With the project “System for detection, localization, tracking and identification of risk factors for strategic importance in littoral areas”, developed in the National Programme II, the members of the research consortium intend to develop a functional model for a hydroacoustic passive subsystem for determination of acoustic signatures of targets such as fast boats and autonomous divers. This paper presents some of the results obtained in the area of hydroacoustic signal processing by using DEMON-type algorithms (Detection of Envelope Modulation On Noise). For evaluation of the performance of various algorithm variations we have used both audio recordings of the underwater noise generated by ships and divers in real situations and also simulated noises. We have analysed the results of processing these signals using four DEMON algorithm structures as presented in the reference literature and a fifth DEMON algorithm structure proposed by the authors of this paper. The algorithm proposed by the authors generates similar results to those obtained by applying the traditional algorithms but requires less computing resources than those and at the same time it has proven to be more resilient to random noise influence.
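
    A basic DEMON chain consistent with the general structure described (band-pass filtering, square-law envelope detection, decimation, and an envelope spectrum) is sketched below; the filter band, rates, and window are assumptions, and the sketch does not correspond to any of the five specific algorithm variants compared in the paper.

        # Sketch: a basic DEMON chain (band-pass -> square-law envelope -> decimate ->
        # envelope spectrum). Filter band limits and rates are assumptions.
        import numpy as np
        from scipy.signal import butter, sosfiltfilt, decimate

        fs = 50000                                   # hydrophone sample rate (assumed)
        x = np.random.randn(fs * 5)                  # placeholder hydrophone record

        sos = butter(4, [2000, 8000], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)                   # cavitation-noise band (assumed limits)

        envelope = band ** 2                         # square-law demodulation
        env_ds = decimate(decimate(envelope, 10), 5) # keep only the low-frequency modulation
        env_ds = env_ds - env_ds.mean()

        spectrum = np.abs(np.fft.rfft(env_ds * np.hanning(len(env_ds))))
        freqs = np.fft.rfftfreq(len(env_ds), d=50.0 / fs)
        # Peaks in `spectrum` at the shaft and blade rates form the DEMON signature.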

  7. Speaker recognition with temporal cues in acoustic and electric hearing

    NASA Astrophysics Data System (ADS)

    Vongphoe, Michael; Zeng, Fan-Gang

    2005-08-01

    Natural spoken language processing includes not only speech recognition but also identification of the speaker's gender, age, emotional, and social status. Our purpose in this study is to evaluate whether temporal cues are sufficient to support both speech and speaker recognition. Ten cochlear-implant and six normal-hearing subjects were presented with vowel tokens spoken by three men, three women, two boys, and two girls. In one condition, the subject was asked to recognize the vowel. In the other condition, the subject was asked to identify the speaker. Extensive training was provided for the speaker recognition task. Normal-hearing subjects achieved nearly perfect performance in both tasks. Cochlear-implant subjects achieved good performance in vowel recognition but poor performance in speaker recognition. The level of the cochlear implant performance was functionally equivalent to normal performance with eight spectral bands for vowel recognition but only to one band for speaker recognition. These results show a disassociation between speech and speaker recognition with primarily temporal cues, highlighting the limitation of current speech processing strategies in cochlear implants. Several methods, including explicit encoding of fundamental frequency and frequency modulation, are proposed to improve speaker recognition for current cochlear implant users.

  8. The Maximum Likelihood Estimation of Signature Transformation (MLEST) algorithm for affine transformation of crop inventory data

    NASA Technical Reports Server (NTRS)

    Thadani, S. G.

    1977-01-01

    The Maximum Likelihood Estimation of Signature Transformation (MLEST) algorithm is used to obtain maximum likelihood estimates (MLE) of affine transformation. The algorithm has been evaluated for three sets of data: simulated (training and recognition segment pairs), consecutive-day (data gathered from Landsat images), and geographical-extension (large-area crop inventory experiment) data sets. For each set, MLEST signature extension runs were made to determine MLE values and the affine-transformed training segment signatures were used to classify the recognition segments. The classification results were used to estimate wheat proportions at 0 and 1% threshold values.
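
    The MLEST algorithm itself is not reproduced in this record. As a loose illustration only: under an assumed isotropic Gaussian residual model, the maximum likelihood estimate of an affine map between paired signature vectors reduces to a least-squares fit, as in the hypothetical Python sketch below (the function name and toy data are invented for illustration).

    # Hypothetical sketch: ML (= least-squares under isotropic Gaussian noise) fit of Y ~ A X + b.
    import numpy as np

    def fit_affine(X, Y):
        """X, Y: (n_samples, n_dims) arrays of paired signature vectors. Returns A, b."""
        Xh = np.hstack([X, np.ones((X.shape[0], 1))])   # augment with a bias column
        W, *_ = np.linalg.lstsq(Xh, Y, rcond=None)      # minimizes ||Xh W - Y||^2
        return W[:-1].T, W[-1]

    # Toy check with a known transform plus noise (4-band signatures, invented numbers).
    rng = np.random.default_rng(0)
    A_true = np.eye(4) + 0.1 * rng.standard_normal((4, 4))
    b_true = rng.standard_normal(4)
    X = rng.standard_normal((200, 4))
    Y = X @ A_true.T + b_true + 0.01 * rng.standard_normal((200, 4))
    A_hat, b_hat = fit_affine(X, Y)
    print(np.allclose(A_hat, A_true, atol=0.05), np.allclose(b_hat, b_true, atol=0.05))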

  9. Advertisement recognition using mode voting acoustic fingerprint

    NASA Astrophysics Data System (ADS)

    Fahmi, Reza; Abedi Firouzjaee, Hosein; Janalizadeh Choobbasti, Ali; Mortazavi Najafabadi, S. H. E.; Safavi, Saeid

    2017-12-01

    The emergence of media outlets and public relations tools such as TV, radio, and the Internet since the 20th century has provided companies with a good platform for advertising their goods and services. Advertisement recognition is an important task that can help companies measure the efficiency of their advertising campaigns in the market and compare their performance with competitors in order to gain better business insights. Advertisement recognition is usually performed manually with the help of human labor or through automated methods that are mainly based on heuristic features; such methods usually lack scalability and the ability to generalize to different situations. In this paper, we present an automated method for advertisement recognition based on audio processing that makes this process fairly simple and takes the human factor out of the equation. This method has ultimately been used at Miras Information Technology to monitor 56 TV channels and detect the ad video clips broadcast over those networks.
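
    The record does not spell out the fingerprint construction. One common acoustic-fingerprinting scheme, assumed here purely for illustration, hashes pairs of spectrogram peaks and lets matching database entries vote on a consistent time offset, which also echoes the "mode voting" idea in the title. A minimal Python sketch:

    # Illustrative spectrogram-peak fingerprint with offset voting; not the authors' specific method.
    import numpy as np
    from collections import Counter, defaultdict
    from scipy.signal import spectrogram

    def peaks(x, fs, nperseg=1024, top=5):
        """Keep the `top` strongest frequency bins per frame as landmarks (frame, bin)."""
        f, t, S = spectrogram(x, fs=fs, nperseg=nperseg)
        idx = np.argsort(S, axis=0)[-top:]
        return [(ti, int(fi)) for ti in range(S.shape[1]) for fi in idx[:, ti]]

    def hashes(landmarks, fan_out=5):
        """Pair each landmark with a few later ones: hash = (f1, f2, frame gap)."""
        landmarks = sorted(landmarks)
        for i, (t1, f1) in enumerate(landmarks):
            for t2, f2 in landmarks[i + 1:i + 1 + fan_out]:
                yield (f1, f2, t2 - t1), t1

    def build_db(clips, fs):
        db = defaultdict(list)
        for name, x in clips.items():
            for h, t in hashes(peaks(x, fs)):
                db[h].append((name, t))
        return db

    def identify(query, fs, db):
        votes = Counter()
        for h, tq in hashes(peaks(query, fs)):
            for name, tdb in db.get(h, []):
                votes[(name, tdb - tq)] += 1     # a consistent offset indicates a true match
        if not votes:
            return None
        (name, _), _ = votes.most_common(1)[0]
        return name

    In practice the database would hold hashes for every known ad clip, and a short, possibly noisy query is assigned to the clip that collects the most votes at a single time offset.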

  10. Acoustic Transmitters for Underwater Neutrino Telescopes

    PubMed Central

    Ardid, Miguel; Martínez-Mora, Juan A.; Bou-Cabo, Manuel; Larosa, Giuseppina; Adrián-Martínez, Silvia; Llorens, Carlos D.

    2012-01-01

    In this paper, acoustic transmitters developed for use in underwater neutrino telescopes are presented. Firstly, an acoustic transceiver has been developed as part of the acoustic positioning system of neutrino telescopes. These infrastructures are not completely rigid and require a positioning system in order to monitor the position of the optical sensors, which move due to sea currents. To guarantee a reliable and versatile system, the transceiver must meet the requirements of reduced cost, low power consumption, the ability to withstand high pressure (up to 500 bar), high emission intensity, low intrinsic noise, arbitrary emission signals, and the capacity to acquire and process received signals. Secondly, a compact acoustic transmitter array has been developed for the calibration of acoustic neutrino detection systems. The array is able to mimic the signature of ultra-high-energy neutrino interactions in emission directivity and signal shape. The technique of parametric acoustic sources has been used to achieve the proposed aim. The developed compact array has practical features such as easy manageability and operation. The prototype designs and the results of different tests are described. The techniques applied for these two acoustic systems are so powerful and versatile that they may be of interest in other marine applications using acoustic transmitters. PMID:22666022

  11. Single-sensor multispeaker listening with acoustic metamaterials

    PubMed Central

    Xie, Yangbo; Tsai, Tsung-Han; Konneker, Adam; Popa, Bogdan-Ioan; Brady, David J.; Cummer, Steven A.

    2015-01-01

    Designing a “cocktail party listener” that functionally mimics the selective perception of a human auditory system has been pursued over the past decades. By exploiting acoustic metamaterials and compressive sensing, we present here a single-sensor listening device that separates simultaneous overlapping sounds from different sources. The device with a compact array of resonant metamaterials is demonstrated to distinguish three overlapping and independent sources with 96.67% correct audio recognition. Segregation of the audio signals is achieved using physical layer encoding without relying on source characteristics. This hardware approach to multichannel source separation can be applied to robust speech recognition and hearing aids and may be extended to other acoustic imaging and sensing applications. PMID:26261314

  12. High-fidelity simulation capability for virtual testing of seismic and acoustic sensors

    NASA Astrophysics Data System (ADS)

    Wilson, D. Keith; Moran, Mark L.; Ketcham, Stephen A.; Lacombe, James; Anderson, Thomas S.; Symons, Neill P.; Aldridge, David F.; Marlin, David H.; Collier, Sandra L.; Ostashev, Vladimir E.

    2005-05-01

    This paper describes development and application of a high-fidelity, seismic/acoustic simulation capability for battlefield sensors. The purpose is to provide simulated sensor data so realistic that they cannot be distinguished by experts from actual field data. This emerging capability provides rapid, low-cost trade studies of unattended ground sensor network configurations, data processing and fusion strategies, and signatures emitted by prototype vehicles. There are three essential components to the modeling: (1) detailed mechanical signature models for vehicles and walkers, (2) high-resolution characterization of the subsurface and atmospheric environments, and (3) state-of-the-art seismic/acoustic models for propagating moving-vehicle signatures through realistic, complex environments. With regard to the first of these components, dynamic models of wheeled and tracked vehicles have been developed to generate ground force inputs to seismic propagation models. Vehicle models range from simple, 2D representations to highly detailed, 3D representations of entire linked-track suspension systems. Similarly detailed models of acoustic emissions from vehicle engines are under development. The propagation calculations for both the seismics and acoustics are based on finite-difference, time-domain (FDTD) methodologies capable of handling complex environmental features such as heterogeneous geologies, urban structures, surface vegetation, and dynamic atmospheric turbulence. Any number of dynamic sources and virtual sensors may be incorporated into the FDTD model. The computational demands of 3D FDTD simulation over tactical distances require massively parallel computers. Several example calculations of seismic/acoustic wave propagation through complex atmospheric and terrain environments are shown.
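
    A full seismic/acoustic FDTD engine of the kind described is far beyond a few lines, but the core leapfrog pressure-velocity update is compact. The Python sketch below shows a minimal 2-D version with a uniform medium and one rigid obstacle; the grid size, time step, and source are illustrative assumptions.

    # Minimal 2-D acoustic FDTD sketch: staggered-grid leapfrog update of pressure and velocity.
    import numpy as np

    c, rho = 343.0, 1.2                      # sound speed (m/s), air density (kg/m^3)
    dx = 0.5                                 # grid spacing (m)
    dt = dx / (c * np.sqrt(2)) * 0.9         # CFL-stable time step
    nx = ny = 300
    nt = 600

    p = np.zeros((nx, ny))
    vx = np.zeros((nx + 1, ny))
    vy = np.zeros((nx, ny + 1))
    solid = np.zeros((nx, ny), dtype=bool)
    solid[140:160, 140:200] = True           # a rigid "building" footprint

    src = (50, 150)
    for n in range(nt):
        # drive with a short Gaussian pulse at the source cell
        p[src] += np.exp(-((n - 40) / 10.0) ** 2)

        # update velocities from the pressure gradient
        vx[1:-1, :] -= dt / (rho * dx) * (p[1:, :] - p[:-1, :])
        vy[:, 1:-1] -= dt / (rho * dx) * (p[:, 1:] - p[:, :-1])

        # rigid obstacle (outer walls are rigid by construction): zero normal velocity
        vx[1:, :][solid] = 0.0; vx[:-1, :][solid] = 0.0
        vy[:, 1:][solid] = 0.0; vy[:, :-1][solid] = 0.0

        # update pressure from the velocity divergence
        p -= dt * rho * c**2 / dx * (vx[1:, :] - vx[:-1, :] + vy[:, 1:] - vy[:, :-1])
        p[solid] = 0.0

    # p now holds the diffracted/scattered field; sample it behind the obstacle
    print("peak pressure behind the building:", float(np.abs(p[200:, 140:200]).max()))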

  13. Geo-Acoustic Doppler Spectroscopy: A Novel Acoustic Technique For Surveying The Seabed

    NASA Astrophysics Data System (ADS)

    Buckingham, Michael J.

    2010-09-01

    An acoustic inversion technique, known as Geo-Acoustic Doppler Spectroscopy, has recently been developed for estimating the geo-acoustic parameters of the seabed in shallow water. The technique is unusual in that it utilizes a low-flying, propeller-driven light aircraft as an acoustic source. Both the engine and propeller produce sound and, since they are rotating sources, the acoustic signature of each takes the form of a sequence of narrow-band harmonics. Although the coupling of the harmonics across the air-sea interface is inefficient, due to the large impedance mismatch between air and water, sufficient energy penetrates the sea surface to provide a useable underwater signal at sensors either in the water column or buried in the sediment. The received signals, which are significantly Doppler shifted due to the motion of the aircraft, will have experienced a number of reflections from the seabed and thus they contain information about the sediment. A geo-acoustic inversion of the Doppler-shifted modes associated with each harmonic yields an estimate of the sound speed in the sediment; and, once the sound speed has been determined, the known correlations between it and the remaining geo-acoustic parameters allow all of the latter to be computed. This inversion technique has been applied to aircraft data collected in the shallow water north of Scripps pier, returning values of the sound speed, shear speed, porosity, density and grain size that are consistent with the known properties of the sandy sediment in the channel.

  14. Acoustic emission from a growing crack

    NASA Technical Reports Server (NTRS)

    Jacobs, Laurence J.

    1989-01-01

    An analytical method is being developed to determine the signature of an acoustic emission waveform from a growing crack and the results of this analysis are compared to experimentally obtained values. Within the assumptions of linear elastic fracture mechanics, a two dimensional model is developed to examine a semi-infinite crack that, after propagating with a constant velocity, suddenly stops. The analytical model employs an integral equation method for the analysis of problems of dynamic fracture mechanics. The experimental procedure uses an interferometric apparatus that makes very localized absolute measurements with very high fidelity and without acoustically loading the specimen.

  15. Detection and recognition of targets by using signal polarization properties

    NASA Astrophysics Data System (ADS)

    Ponomaryov, Volodymyr I.; Peralta-Fabi, Ricardo; Popov, Anatoly V.; Babakov, Mikhail F.

    1999-08-01

    The quality of radar target recognition can be enhanced by exploiting a target's polarization signatures. A specialized X-band polarimetric radar was used for target recognition in experimental investigations. The following polarization characteristics, connected to the object's geometrical properties, were investigated: the amplitudes of the polarization matrix elements; an anisotropy coefficient; a depolarization coefficient; an asymmetry coefficient; the energy of the backscattered signal; and an object shape factor. A large quantity of polarimetric radar data was measured and processed to form a database covering different objects and different weather conditions. The histograms of polarization signatures were approximated by a Nakagami distribution, then used for real-time target recognition. The Neyman-Pearson criterion was used for target detection, and the criterion of maximum a posteriori probability was used for the recognition problem. Some results of experimental verification of pattern recognition and detection of objects with different electrophysical and geometrical characteristics in urban clutter are presented in this paper.
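
    As a concrete illustration of the detection stage only, the Python sketch below fits a Nakagami law to a synthetic clutter feature and sets a Neyman-Pearson threshold at an assumed 1% false-alarm probability; the feature values are invented and do not correspond to the paper's measurements.

    # Illustrative Neyman-Pearson detector on a Nakagami-distributed feature (synthetic data).
    import numpy as np
    from scipy.stats import nakagami

    clutter = nakagami.rvs(nu=1.0, scale=1.0, size=5000, random_state=1)   # background returns
    target = nakagami.rvs(nu=3.0, scale=3.0, size=5000, random_state=2)    # object returns

    # Approximate the clutter histogram by a Nakagami law (location fixed at zero).
    nu_c, loc_c, scale_c = nakagami.fit(clutter, floc=0)

    # Neyman-Pearson rule: pick the threshold so that P(feature > T | clutter) = P_fa.
    p_fa = 0.01
    T = nakagami.ppf(1.0 - p_fa, nu_c, loc=loc_c, scale=scale_c)

    p_d = float(np.mean(target > T))      # empirical probability of detection
    print(f"threshold = {T:.3f}, design P_fa = {p_fa}, empirical P_d = {p_d:.3f}")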

  16. Potential Benefits of an Integrated Electric-Acoustic Sound Processor with Children: A Preliminary Report.

    PubMed

    Wolfe, Jace; Neumann, Sara; Schafer, Erin; Marsh, Megan; Wood, Mark; Baker, R Stanley

    2017-02-01

    A number of published studies have demonstrated the benefits of electric-acoustic stimulation (EAS) over conventional electric stimulation for adults with functional low-frequency acoustic hearing and severe-to-profound high-frequency hearing loss. These benefits potentially include better speech recognition in quiet and in noise, better localization, improvements in sound quality, better music appreciation and aptitude, and better pitch recognition. There is, however, a paucity of published reports describing the potential benefits and limitations of EAS for children with functional low-frequency acoustic hearing and severe-to-profound high-frequency hearing loss. The objective of this study was to explore the potential benefits of EAS for children. A repeated measures design was used to evaluate performance differences obtained with EAS stimulation versus acoustic- and electric-only stimulation. Seven users of Cochlear Nucleus Hybrid, Nucleus 24 Freedom, CI512, and CI422 implants were included in the study. Sentence recognition (assayed using the pediatric version of the AzBio sentence recognition test) was evaluated in quiet and at three fixed signal-to-noise ratios (SNR) (0, +5, and +10 dB). Functional hearing performance was also evaluated with the use of questionnaires, including the comparative version of the Speech, Spatial, and Qualities, the Listening Inventory for Education Revised, and the Children's Home Inventory for Listening Difficulties. Speech recognition in noise was typically better with EAS compared to participants' performance with acoustic- and electric-only stimulation, particularly when evaluated at the less favorable SNR. Additionally, in real-world situations, children generally preferred to use EAS compared to electric-only stimulation. Also, the participants' classroom teachers observed better hearing performance in the classroom with the use of EAS. Use of EAS provided better speech recognition in quiet and in noise when compared to acoustic-only and electric-only stimulation.

  17. Rotation, scale, and translation invariant pattern recognition using feature extraction

    NASA Astrophysics Data System (ADS)

    Prevost, Donald; Doucet, Michel; Bergeron, Alain; Veilleux, Luc; Chevrette, Paul C.; Gingras, Denis J.

    1997-03-01

    A rotation, scale and translation invariant pattern recognition technique is proposed. It is based on Fourier-Mellin Descriptors (FMD). Each FMD is taken as an independent feature of the object, and a set of those features forms a signature. FMDs are naturally rotation invariant. Translation invariance is achieved through pre-processing. A proper normalization of the FMDs gives the scale invariance property. This approach offers the double advantage of providing invariant signatures of the objects, and a dramatic reduction of the amount of data to process. The compressed invariant feature signature is next presented to a multi-layered perceptron neural network. This final step provides some robustness to the classification of the signatures, enabling good recognition behavior under anamorphically scaled distortion. We also present an original feature extraction technique, adapted to optical calculation of the FMDs. A prototype optical set-up was built, and experimental results are presented.
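
    The rotation-scale-translation invariance idea can be sketched numerically: the Fourier magnitude discards translation, a log-polar resampling turns rotation and scale into shifts, and a second Fourier magnitude discards those shifts. The Python sketch below follows that generic recipe with assumed grid sizes; it does not reproduce the authors' optical FMD computation.

    # Rough RST-invariant signature in the Fourier-Mellin spirit; grid sizes are illustrative.
    import numpy as np
    from scipy.ndimage import map_coordinates

    def rst_signature(img, n_r=64, n_theta=64, n_keep=16):
        # 1. Translation invariance: centered magnitude spectrum.
        F = np.abs(np.fft.fftshift(np.fft.fft2(img)))

        # 2. Log-polar resampling of the magnitude spectrum.
        cy, cx = (np.asarray(F.shape) - 1) / 2.0
        r_max = min(cy, cx)
        log_r = np.linspace(0, np.log(r_max), n_r)
        theta = np.linspace(0, np.pi, n_theta, endpoint=False)   # spectrum is symmetric
        rr, tt = np.meshgrid(np.exp(log_r), theta, indexing="ij")
        coords = np.array([cy + rr * np.sin(tt), cx + rr * np.cos(tt)])
        LP = map_coordinates(F, coords, order=1, mode="nearest")

        # 3. Rotation/scale become shifts in LP; a second magnitude spectrum removes them.
        M = np.abs(np.fft.fft2(LP))
        return M[:n_keep, :n_keep].ravel()       # compact invariant feature vector

    # Sanity check: a (circularly) shifted copy of an image yields the same signature.
    rng = np.random.default_rng(0)
    img = rng.random((128, 128))
    sig1 = rst_signature(img)
    sig2 = rst_signature(np.roll(img, (7, -11), axis=(0, 1)))
    cos = sig1 @ sig2 / (np.linalg.norm(sig1) * np.linalg.norm(sig2))
    print(f"cosine similarity under translation: {cos:.4f}")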

  18. Remote Acoustic Imaging of Geosynchronous Satellites

    NASA Astrophysics Data System (ADS)

    Watson, Z.; Hart, M.

    Identification and characterization of orbiting objects that are not spatially resolved are challenging problems for traditional remote sensing methods. Hyper-temporal imaging, enabled by fast, low-noise electro-optical detectors, is a new sensing modality that may allow the direct detection of acoustic resonances on satellites, enabling a new regime of signature and state detection. Detectable signatures may be caused by the oscillations of solar panels, high-gain antennae, or other on-board subsystems driven by thermal gradients, fluctuations in solar radiation pressure, worn reaction wheels, or orbit maneuvers. Herein we present the first hyper-temporal observations of geosynchronous satellites. Data were collected at the Kuiper 1.54-meter telescope in Arizona using an experimental dual-channel imaging instrument that simultaneously measures light in two orthogonally polarized beams at sampling rates extending up to 1 kHz. In these observations, we see evidence of acoustic resonances in the polarization state of satellites. The technique is expected to support object identification and characterization of on-board components and to act as a discriminant between active satellites, debris, and passive bodies.

  19. Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, J.

    The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.

  20. The Impact of Age, Background Noise, Semantic Ambiguity, and Hearing Loss on Recognition Memory for Spoken Sentences.

    PubMed

    Koeritzer, Margaret A; Rogers, Chad S; Van Engen, Kristin J; Peelle, Jonathan E

    2018-03-15

    The goal of this study was to determine how background noise, linguistic properties of spoken sentences, and listener abilities (hearing sensitivity and verbal working memory) affect cognitive demand during auditory sentence comprehension. We tested 30 young adults and 30 older adults. Participants heard lists of sentences in quiet and in 8-talker babble at signal-to-noise ratios of +15 dB and +5 dB, which increased acoustic challenge but left the speech largely intelligible. Half of the sentences contained semantically ambiguous words to additionally manipulate cognitive challenge. Following each list, participants performed a visual recognition memory task in which they viewed written sentences and indicated whether they remembered hearing the sentence previously. Recognition memory (indexed by d') was poorer for acoustically challenging sentences, poorer for sentences containing ambiguous words, and differentially poorer for noisy high-ambiguity sentences. Similar patterns were observed for Z-transformed response time data. There were no main effects of age, but age interacted with both acoustic clarity and semantic ambiguity such that older adults' recognition memory was poorer for acoustically degraded high-ambiguity sentences than the young adults'. Within the older adult group, exploratory correlation analyses suggested that poorer hearing ability was associated with poorer recognition memory for sentences in noise, and better verbal working memory was associated with better recognition memory for sentences in noise. Our results demonstrate listeners' reliance on domain-general cognitive processes when listening to acoustically challenging speech, even when speech is highly intelligible. Acoustic challenge and semantic ambiguity both reduce the accuracy of listeners' recognition memory for spoken sentences. https://doi.org/10.23641/asha.5848059.
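
    For reference, the sensitivity index d' used here is the difference between the normal-deviate transforms of the hit and false-alarm rates. A minimal Python sketch with illustrative counts and the common log-linear correction:

    # Minimal d-prime computation; the counts below are illustrative, not the study's data.
    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        # log-linear (add 0.5) correction avoids infinite z-scores at rates of 0 or 1
        h = (hits + 0.5) / (hits + misses + 1.0)
        fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        return norm.ppf(h) - norm.ppf(fa)

    print(round(d_prime(hits=40, misses=10, false_alarms=8, correct_rejections=42), 3))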

  1. Fifty years of progress in acoustic phonetics

    NASA Astrophysics Data System (ADS)

    Stevens, Kenneth N.

    2004-10-01

    Three events that occurred 50 or 60 years ago shaped the study of acoustic phonetics, and in the following few decades these events influenced research and applications in speech disorders, speech development, speech synthesis, speech recognition, and other subareas in speech communication. These events were: (1) the source-filter theory of speech production (Chiba and Kajiyama; Fant); (2) the development of the sound spectrograph and its interpretation (Potter, Kopp, and Green; Joos); and (3) the birth of research that related distinctive features to acoustic patterns (Jakobson, Fant, and Halle). Following these events there has been systematic exploration of the articulatory, acoustic, and perceptual bases of phonological categories, and some quantification of the sources of variability in the transformation of this phonological representation of speech into its acoustic manifestations. This effort has been enhanced by studies of how children acquire language in spite of this variability and by research on speech disorders. Gaps in our knowledge of this inherent variability in speech have limited the directions of applications such as synthesis and recognition of speech, and have led to the implementation of data-driven techniques rather than theoretical principles. Some examples of advances in our knowledge, and limitations of this knowledge, are reviewed.

  2. Counter-narcotic acoustic buoy (CNAB)

    NASA Astrophysics Data System (ADS)

    Bailey, Mark E.

    2004-09-01

    As a means to detect drug trafficking in a maritime environment, the Counter Narcotic Acoustic Buoy is part of an inexpensive system designed to detect "Go Fast" boats and report via satellite to a designated location. A go fast boat for this evaluation is defined as any boat with twin 200 horsepower outboard engines. The buoy is designed for deployment in salt water at depths ranging from 50 to 600 feet and can be easily deployed by one or two persons. Detections are based on noise energy exceeding a preset level within a frequency band associated with the go fast boat's acoustic signature. Detection ranges have been demonstrated to greater than three nautical miles.
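
    The stated detection rule (band-limited noise energy exceeding a preset level) can be sketched as a running band-energy threshold test. In the Python sketch below, the band edges, window length, and threshold are assumptions for demonstration, not the buoy's actual settings.

    # Illustrative band-energy threshold detector in the spirit of the CNAB rule.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def band_energy_detect(x, fs, band=(300.0, 2000.0), win_s=0.5, threshold=2.0):
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        xb = filtfilt(b, a, x)
        win = int(win_s * fs)
        n_win = len(xb) // win
        energy = (xb[: n_win * win] ** 2).reshape(n_win, win).mean(axis=1)
        baseline = np.median(energy)                 # crude ambient-noise estimate
        return energy > threshold * baseline         # one boolean per window

    # Synthetic test: ambient noise with a louder in-band "boat" tone in the second half.
    fs = 8000
    t = np.arange(0, 20, 1 / fs)
    x = 0.1 * np.random.randn(t.size)
    x[t.size // 2:] += 0.5 * np.sin(2 * np.pi * 800 * t[t.size // 2:])
    print(band_energy_detect(x, fs).astype(int))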

  3. A Pattern Recognition Approach to Acoustic Emission Data Originating from Fatigue of Wind Turbine Blades

    PubMed Central

    Tang, Jialin; Soua, Slim; Mares, Cristinel; Gan, Tat-Hean

    2017-01-01

    The identification of particular types of damage in wind turbine blades using acoustic emission (AE) techniques is a significant emerging field. In this work, a 45.7-m turbine blade was subjected to flap-wise fatigue loading for 21 days, during which AE was measured by internally mounted piezoelectric sensors. This paper focuses on using unsupervised pattern recognition methods to characterize different AE activities corresponding to different fracture mechanisms. A sequential feature selection method based on a k-means clustering algorithm is used to achieve a fine classification accuracy. The visualization of clusters in peak frequency−frequency centroid features is used to correlate the clustering results with failure modes. The positions of these clusters in time domain features, average frequency−MARSE, and average frequency−peak amplitude are also presented in this paper (where MARSE represents the Measured Area under Rectified Signal Envelope). The results show that these parameters are representative for the classification of the failure modes. PMID:29104245
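
    As a pared-down illustration of the clustering step, the Python sketch below (using scikit-learn) runs k-means on two of the AE features named in the paper, peak frequency and frequency centroid, for synthetic hits; the feature values, cluster count, and failure-mode labels are assumptions.

    # Toy k-means clustering of synthetic AE hits in the (peak frequency, frequency centroid) plane.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # three invented AE populations (kHz), standing in for different fracture mechanisms
    groups = [((60, 80), 8), ((150, 170), 12), ((300, 320), 20)]
    X = np.vstack([rng.normal(mu, sd, size=(200, 2)) for mu, sd in groups])

    Xs = StandardScaler().fit_transform(X)            # put both features on comparable scales
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Xs)

    for k in range(3):
        c = X[labels == k].mean(axis=0)
        print(f"cluster {k}: peak freq ~ {c[0]:.0f} kHz, centroid ~ {c[1]:.0f} kHz")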

  4. A Pattern Recognition Approach to Acoustic Emission Data Originating from Fatigue of Wind Turbine Blades.

    PubMed

    Tang, Jialin; Soua, Slim; Mares, Cristinel; Gan, Tat-Hean

    2017-11-01

    The identification of particular types of damage in wind turbine blades using acoustic emission (AE) techniques is a significant emerging field. In this work, a 45.7-m turbine blade was subjected to flap-wise fatigue loading for 21 days, during which AE was measured by internally mounted piezoelectric sensors. This paper focuses on using unsupervised pattern recognition methods to characterize different AE activities corresponding to different fracture mechanisms. A sequential feature selection method based on a k-means clustering algorithm is used to achieve a fine classification accuracy. The visualization of clusters in peak frequency-frequency centroid features is used to correlate the clustering results with failure modes. The positions of these clusters in time domain features, average frequency-MARSE, and average frequency-peak amplitude are also presented in this paper (where MARSE represents the Measured Area under Rectified Signal Envelope). The results show that these parameters are representative for the classification of the failure modes.

  5. Effective Use of Molecular Recognition in Gas Sensing: Results from Acoustic Wave and In-Situ FTIR Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bodenhofer, K.; Gopel, W.; Hierlemann, A.

    To probe directly the analyte/film interactions that characterize molecular recognition in gas sensors, we recorded changes to the in-situ surface vibrational spectra of specifically functionalized surface acoustic wave (SAW) devices concurrently with analyte exposure and SAW measurement of the extent of sorption. Fourier-transform infrared external-reflectance spectra (FTIR-ERS) were collected from operating 97-MHz SAW delay lines during exposure to a range of analytes as they interacted with thin-film coatings previously shown to be selective: cyclodextrins for chiral recognition, Ni-camphorates for Lewis bases such as pyridine and organophosphonates, and phthalocyanines for aromatic compounds. In most cases where specific chemical interactions (metal coordination, "cage" compound inclusion, or π stacking) were expected, analyte dosing caused distinctive changes in the IR spectra together with anomalously large SAW sensor responses. In contrast, control experiments involving the physisorption of the same analytes by conventional organic polymers did not cause similar changes in the IR spectra, and the SAW responses were smaller. For a given conventional polymer, the partition coefficients (or SAW sensor signals) roughly followed the analyte fraction of saturation vapor pressure. These SAW/FTIR results support earlier conclusions derived from thickness-shear mode resonator data.

  6. Recognition of speaker-dependent continuous speech with KEAL

    NASA Astrophysics Data System (ADS)

    Mercier, G.; Bigorgne, D.; Miclet, L.; Le Guennec, L.; Querre, M.

    1989-04-01

    A description of the speaker-dependent continuous speech recognition system KEAL is given. An unknown utterance is recognized by means of the following procedures: acoustic analysis, phonetic segmentation and identification, and word and sentence analysis. The combination of feature-based, speaker-independent coarse phonetic segmentation with speaker-dependent statistical classification techniques is one of the main design features of the acoustic-phonetic decoder. The lexical access component is essentially based on a statistical dynamic programming technique which aims at matching a phonemic lexical entry, containing various phonological forms, against a phonetic lattice. Sentence recognition is achieved by use of a context-free grammar and a parsing algorithm derived from Earley's parser. A speaker adaptation module allows some of the system parameters to be adjusted by matching known utterances with their acoustical representation. The task to be performed, described by its vocabulary and its grammar, is given as a parameter of the system. Continuously spoken sentences extracted from a 'pseudo-Logo' language are analyzed and results are presented.
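
    The lexical access component described above rests on a dynamic-programming alignment. The Python sketch below shows only that core idea, a weighted edit distance between a lexical phoneme string (with variants) and a decoded phoneme sequence; the unit costs and the toy lexicon are invented for illustration and do not reproduce KEAL's statistical scores or lattice handling.

    # Sketch of the DP core of lexical access: weighted edit distance over phoneme sequences.
    def align_cost(lexical, decoded, sub=1.0, ins=1.0, dele=1.0):
        n, m = len(lexical), len(decoded)
        D = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            D[i][0] = i * dele
        for j in range(1, m + 1):
            D[0][j] = j * ins
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                match = 0.0 if lexical[i - 1] == decoded[j - 1] else sub
                D[i][j] = min(D[i - 1][j - 1] + match,   # substitution / match
                              D[i - 1][j] + dele,        # phoneme missed by the decoder
                              D[i][j - 1] + ins)         # spurious phoneme inserted
        return D[n][m]

    # Pick the lexical entry (with its phonological variants) that best explains the decoding.
    lexicon = {"avance": [["a", "v", "an", "s"], ["a", "v", "an", "s", "e"]],
               "recule": [["r", "e", "k", "y", "l"]]}
    decoded = ["a", "v", "an", "s", "e"]
    best = min(((word, min(align_cost(v, decoded) for v in variants))
                for word, variants in lexicon.items()), key=lambda t: t[1])
    print(best)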

  7. Songbirds use spectral shape, not pitch, for sound pattern recognition

    PubMed Central

    Bregman, Micah R.; Patel, Aniruddh D.; Gentner, Timothy Q.

    2016-01-01

    Humans easily recognize “transposed” musical melodies shifted up or down in log frequency. Surprisingly, songbirds seem to lack this capacity, although they can learn to recognize human melodies and use complex acoustic sequences for communication. Decades of research have led to the widespread belief that songbirds, unlike humans, are strongly biased to use absolute pitch (AP) in melody recognition. This work relies almost exclusively on acoustically simple stimuli that may belie sensitivities to more complex spectral features. Here, we investigate melody recognition in a species of songbird, the European Starling (Sturnus vulgaris), using tone sequences that vary in both pitch and timbre. We find that small manipulations altering either pitch or timbre independently can drive melody recognition to chance, suggesting that both percepts are poor descriptors of the perceptual cues used by birds for this task. Instead we show that melody recognition can generalize even in the absence of pitch, as long as the spectral shapes of the constituent tones are preserved. These results challenge conventional views regarding the use of pitch cues in nonhuman auditory sequence recognition. PMID:26811447

  8. The CoRoT target HD 49933: a possible seismic signature of heavy elements ionization in the deep convective zone

    NASA Astrophysics Data System (ADS)

    Brito, Ana; Lopes, Ilídio

    2017-04-01

    We use a seismic diagnostic, based on the derivative of the phase shift of the acoustic waves reflected by the surface, to probe the outer layers of the star HD 49933. This diagnostic is particularly sensitive to partial ionization processes occurring above the base of the convective zone. The regions of partial ionization of light elements, hydrogen and helium, have well-known seismological signatures. In this work, we detect a different seismic signature in the acoustic frequencies, which we show corresponds to the location where the partial ionization of heavy elements occurs. The location of the corresponding acoustic glitch lies between the region of the second ionization of helium and the base of the convective zone, approximately 5 per cent below the surface of the star.

  9. Toward noncooperative iris recognition: a classification approach using multiple signatures.

    PubMed

    Proença, Hugo; Alexandre, Luís A

    2007-04-01

    This paper focuses on noncooperative iris recognition, i.e., the capture of iris images at large distances, under less controlled lighting conditions, and without active participation of the subjects. This increases the probability of capturing very heterogeneous images (regarding focus, contrast, or brightness) and with several noise factors (iris obstructions and reflections). Current iris recognition systems are unable to deal with noisy data and substantially increase their error rates, especially the false rejections, in these conditions. We propose an iris classification method that divides the segmented and normalized iris image into six regions, makes an independent feature extraction and comparison for each region, and combines each of the dissimilarity values through a classification rule. Experiments show a substantial decrease, higher than 40 percent, of the false rejection rates in the recognition of noisy iris images.

  10. Place recognition using batlike sonar.

    PubMed

    Vanderelst, Dieter; Steckel, Jan; Boen, Andre; Peremans, Herbert; Holderied, Marc W

    2016-08-02

    Echolocating bats have excellent spatial memory and are able to navigate to salient locations using bio-sonar. Navigating and route-following require animals to recognize places. Currently, it is mostly unknown how bats recognize places using echolocation. In this paper, we propose that template-based place recognition might underlie sonar-based navigation in bats. Under this hypothesis, bats recognize places by remembering their echo signature rather than their 3D layout. Using a large body of ensonification data collected in three different habitats, we test the viability of this hypothesis by assessing two critical properties of the proposed echo signatures: (1) they can be uniquely classified and (2) they vary continuously across space. Based on the results presented, we conclude that the proposed echo signatures satisfy both criteria. We discuss how these two properties of the echo signatures can support navigation and building a cognitive map.

  11. A micro-Doppler sonar for acoustic surveillance in sensor networks

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaonian

    Wireless sensor networks have been employed in a wide variety of applications, despite the limited energy and communication resources at each sensor node. Low power custom VLSI chips implementing passive acoustic sensing algorithms have been successfully integrated into an acoustic surveillance unit and demonstrated for detection and location of sound sources. In this dissertation, I explore active and passive acoustic sensing techniques, signal processing and classification algorithms for detection and classification in a multinodal sensor network environment. I will present the design and characterization of a continuous-wave micro-Doppler sonar to image objects with articulated moving components. As an example application for this system, we use it to image gaits of humans and four-legged animals. I will present the micro-Doppler gait signatures of a walking person, a dog and a horse. I will discuss the resolution and range of this micro-Doppler sonar and use experimental results to support the theoretical analyses. In order to reduce the data rate and make the system amenable to wireless sensor networks, I will present a second micro-Doppler sonar that uses bandpass sampling for data acquisition. Speech recognition algorithms are explored for biometric identifications from one's gait, and I will present and compare the classification performance of the two systems. The acoustic micro-Doppler sonar design and biometric identification results are the first in the field as the previous work used either video camera or microwave technology. I will also review bearing estimation algorithms and present results of applying these algorithms for bearing estimation and tracking of moving vehicles. Another major source of the power consumption at each sensor node is the wireless interface. To address the need of low power communications in a wireless sensor network, I will also discuss the design and implementation of ultra wideband transmitters in a three dimensional
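
    The micro-Doppler gait signature described here is essentially a time-frequency image of the sonar return. The Python sketch below forms such an image with a short-time Fourier transform of a simulated return containing a steady torso line and a sinusoidally swinging limb; the carrier, velocities, and gait rate are illustrative assumptions.

    # Minimal micro-Doppler signature sketch from a simulated CW sonar return.
    import numpy as np
    from scipy.signal import stft

    fs = 8000.0                # baseband sample rate (Hz)
    fc = 40000.0               # assumed sonar carrier (Hz)
    c = 343.0                  # speed of sound (m/s)
    t = np.arange(0, 4, 1 / fs)

    v_torso = 1.2                                        # steady walking speed (m/s)
    v_limb = 1.2 + 1.5 * np.sin(2 * np.pi * 1.8 * t)     # limb swings at ~1.8 Hz
    fd_torso = 2 * v_torso * fc / c                      # two-way Doppler shift (Hz)
    fd_limb = 2 * v_limb * fc / c

    x = (np.exp(2j * np.pi * fd_torso * t) +
         0.5 * np.exp(2j * np.pi * np.cumsum(fd_limb) / fs) +   # integrate the time-varying shift
         0.05 * (np.random.randn(t.size) + 1j * np.random.randn(t.size)))

    f, tt, Z = stft(x, fs=fs, nperseg=512, return_onesided=False)
    signature = np.abs(Z)                                # the micro-Doppler "gait image"
    print("time-frequency signature shape:", signature.shape)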

  12. Pinniped bioacoustics: Atmospheric and hydrospheric signal production, reception, and function

    NASA Astrophysics Data System (ADS)

    Schusterman, Ronald J.; Kastak, David; Reichmuth Kastak, Colleen; Holt, Marla; Southall, Brandon L.

    2004-05-01

    There is no convincing evidence that any of the 33 pinniped species evolved acoustic specializations for echolocation. However, all species produce and localize signals amphibiously in different communicative contexts. In the setting of sexual selection, aquatic mating male phocids and walruses tend to emit underwater calls, while male otariids and phocids that breed terrestrially emit airborne calls. Signature vocalizations are widespread among pinnipeds. There is evidence that males use signature threat calls, and it is possible that vocal recognition may be used by territorial males to form categories consisting of neighbors and strangers. In terms of mother-offspring recognition, both otariid females and their pups use acoustical cues for mutual recognition. In contrast, reunions between phocid females and their dependent pups depend mostly on pup vocalizations. In terms of signal reception, audiometric studies show that otariids are highly sensitive to aerial sounds but slightly less sensitive to underwater sounds. Conversely, except for deep-diving elephant seals, phocids are quite sensitive to acoustic signals both in air and under water. Finally, despite differences in absolute hearing sensitivity, pinnipeds have similar masked hearing capabilities in both media, supporting the notion that cochlear mechanics determine the effects of noise on hearing.

  13. Tools to Compare Diving-Animal Kinematics With Acoustic Behavior and Exposure

    DTIC Science & Technology

    2009-09-30

    [The retrieved record text consists of figure captions rather than an abstract.] Figure 2 (left): acoustic spectrogram from a beaked whale foraging at 967 m depth, revealing clicks and buzzes (data courtesy of Brandon Southall); (right): the typical acoustic signature of a lunging humpback feeding on krill, with an overlaid plot of speed. Figure 3 (left): a trackPlot image of a Florida manatee.

  14. Compositional Signatures in Acoustic Backscatter Over Vegetated and Unvegetated Mixed Sand-Gravel Riverbeds

    NASA Astrophysics Data System (ADS)

    Buscombe, D.; Grams, P. E.; Kaplinski, M. A.

    2017-10-01

    Multibeam acoustic backscatter has considerable utility for remote characterization of spatially heterogeneous bed sediment composition over vegetated and unvegetated riverbeds of mixed sand and gravel. However, the use of high-frequency, decimeter-resolution acoustic backscatter for sediment classification in shallow water is hampered by significant topographic contamination of the signal. In mixed sand-gravel riverbeds, changes in the abiotic composition of sediment (such as homogeneous sand to homogeneous gravel) tend to occur over larger spatial scales than is characteristic of small-scale bedform topography (ripples, dunes, and bars) or biota (such as vascular plants and periphyton). A two-stage method is proposed to filter out the morphological contributions to acoustic backscatter. First, the residual supragrain-scale topographic effects in acoustic backscatter with small instantaneous insonified areas, caused by ambiguity in the local (beam-to-beam) bed-sonar geometry, are removed. Then, coherent scales between high-resolution topography and backscatter are identified using cospectra, which are used to design a frequency domain filter that decomposes backscatter into the (unwanted) high-pass component associated with bedform topography (ripples, dunes, and sand waves) and vegetation, and the (desired) low-frequency component associated with the composition of sediment patches superimposed on the topography. This process strengthens relationships between backscatter and sediment composition. A probabilistic framework is presented for classifying vegetated and unvegetated substrates based on acoustic backscatter at decimeter resolution. This capability is demonstrated using data collected from diverse settings within a 386 km reach of a canyon river whose bed varies among sand, gravel, cobbles, boulders, and submerged vegetation.
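
    The decomposition idea can be illustrated in one dimension: estimate where topography and backscatter are coherent along a transect, then low-pass the backscatter below that scale to retain the compositional component. The Python sketch below uses synthetic series and an assumed cutoff rule; it does not reproduce the paper's two-dimensional, decimeter-resolution processing.

    # 1-D sketch: separate topography-coherent backscatter from composition-related backscatter.
    import numpy as np
    from scipy.signal import butter, coherence, filtfilt

    dx = 0.25                                  # along-track sample spacing (m)
    n = 4096
    x = np.arange(n) * dx
    rng = np.random.default_rng(3)

    topo = 0.05 * np.sin(2 * np.pi * x / 1.5) + 0.005 * rng.standard_normal(n)   # ripples, ~1.5 m
    patches = np.where(np.sin(2 * np.pi * x / 80.0) > 0, 1.0, -1.0)              # sand/gravel, ~80 m
    backscatter = 3.0 * topo + 0.5 * patches + 0.1 * rng.standard_normal(n)

    fs = 1.0 / dx                              # spatial sampling rate (samples per metre)
    f, coh = coherence(topo, backscatter, fs=fs, nperseg=512)

    # the wavenumber where topography and backscatter are most coherent marks the bedform scale
    k_bedform = f[np.argmax(coh)]
    b, a = butter(4, 0.5 * k_bedform / (fs / 2), btype="low")   # keep only larger (compositional) scales
    composition = filtfilt(b, a, backscatter)
    print(f"bedform scale ~ {1.0 / k_bedform:.1f} m; "
          f"correlation with patches: {np.corrcoef(composition, patches)[0, 1]:.2f}")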

  15. Compositional signatures in acoustic backscatter over vegetated and unvegetated mixed sand-gravel riverbeds

    USGS Publications Warehouse

    Buscombe, Daniel; Grams, Paul E.; Kaplinski, Matt A.

    2017-01-01

    Multibeam acoustic backscatter has considerable utility for remote characterization of spatially heterogeneous bed sediment composition over vegetated and unvegetated riverbeds of mixed sand and gravel. However, the use of high-frequency, decimeter-resolution acoustic backscatter for sediment classification in shallow water is hampered by significant topographic contamination of the signal. In mixed sand-gravel riverbeds, changes in the abiotic composition of sediment (such as homogeneous sand to homogeneous gravel) tend to occur over larger spatial scales than is characteristic of small-scale bedform topography (ripples, dunes, and bars) or biota (such as vascular plants and periphyton). A two-stage method is proposed to filter out the morphological contributions to acoustic backscatter. First, the residual supragrain-scale topographic effects in acoustic backscatter with small instantaneous insonified areas, caused by ambiguity in the local (beam-to-beam) bed-sonar geometry, are removed. Then, coherent scales between high-resolution topography and backscatter are identified using cospectra, which are used to design a frequency domain filter that decomposes backscatter into the (unwanted) high-pass component associated with bedform topography (ripples, dunes, and sand waves) and vegetation, and the (desired) low-frequency component associated with the composition of sediment patches superimposed on the topography. This process strengthens relationships between backscatter and sediment composition. A probabilistic framework is presented for classifying vegetated and unvegetated substrates based on acoustic backscatter at decimeter resolution. This capability is demonstrated using data collected from diverse settings within a 386 km reach of a canyon river whose bed varies among sand, gravel, cobbles, boulders, and submerged vegetation.

  16. Character Recognition Method by Time-Frequency Analyses Using Writing Pressure

    NASA Astrophysics Data System (ADS)

    Watanabe, Tatsuhito; Katsura, Seiichiro

    With the development of information and communication technology, personal verification becomes more and more important. In the future ubiquitous society, the development of terminals handling personal information requires personal verification technology. The signature is one of the personal verification methods; however, the number of characters in a signature is limited, so a false signature is easily produced, and personal identification from handwriting alone is difficult. This paper proposes a “haptic pen” that extracts the writing pressure and presents a character recognition method based on time-frequency analyses. Although the shapes of characters written by different writers are similar, differences appear in the time-frequency domain. As a result, it is possible to use the proposed character recognition for personal identification more precisely. The experimental results showed the viability of the proposed method.
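
    A minimal version of the comparison the paper motivates: compute a time-frequency representation of each writer's pen-pressure trace for the same character and compare the resulting feature maps. In the Python sketch below, the sampling rate, STFT window, similarity measure, and synthetic traces are all assumptions.

    # Compare two writing-pressure traces of the same character in the time-frequency domain.
    import numpy as np
    from scipy.signal import stft

    def tf_features(pressure, fs=200.0, nperseg=64):
        _, _, Z = stft(pressure, fs=fs, nperseg=nperseg)
        S = np.abs(Z)
        return (S / (np.linalg.norm(S) + 1e-12)).ravel()   # normalised spectrogram vector

    def similarity(p1, p2):
        f1, f2 = tf_features(p1), tf_features(p2)
        n = min(f1.size, f2.size)
        return float(f1[:n] @ f2[:n])

    # Synthetic traces: same "character shape", different stroke dynamics per writer.
    fs = 200.0
    t = np.arange(0, 2, 1 / fs)
    writer_a = np.abs(np.sin(2 * np.pi * 1.5 * t)) * (1 + 0.2 * np.sin(2 * np.pi * 8 * t))
    writer_b = np.abs(np.sin(2 * np.pi * 1.5 * t)) * (1 + 0.2 * np.sin(2 * np.pi * 13 * t))
    print("A vs A:", round(similarity(writer_a, writer_a), 3))   # identical traces score 1.0
    print("A vs B:", round(similarity(writer_a, writer_b), 3))   # different dynamics score lower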

  17. Development of Microbubble Contrast Agents with Biochemical Recognition and Tunable Acoustic Response

    NASA Astrophysics Data System (ADS)

    Nakatsuka, Matthew Allan Masao

    Microbubbles, consisting of gas-filled cores encapsulated within phospholipid or polymer shells, are the most widely used ultrasound contrast agents in the world. Because of their acoustic impedance mismatch with surrounding tissues and compressible gaseous interiors, they have high echogenicities that allow for efficient backscatter of ultrasound. They can also generate unique harmonic frequencies when insonated near their resonance frequency, depending on physical microbubble properties such as the stiffness and thickness of the encapsulating shell. Microbubbles are used to detect a number of cardiovascular diseases, but current methodologies lack the ability to detect and distinguish small, rapidly growing abnormalities that do not produce visible blockage or slowing of blood flow. This work describes the development, formulation, and validation of microbubbles with various polymer shell architectures designed to modulate their acoustic ability. We demonstrate that the addition of a thick disulfide crosslinked, poly(acrylic acid) encapsulating shell increases a bubble's resistance to cavitation and changes its resonance frequency. Modification of this shell architecture to use hybridized DNA strands to form crosslinks between the polymer chains allows for tuning of the bubble acoustic response. When the DNA crosslinks are in place, shell stiffness is increased so the bubbles do not oscillate and acoustic signal is muted. Subsequently, when these DNA strands are displaced, partial acoustic activity is restored. By using aptamer sequences with a specific affinity towards the biomolecule thrombin as the DNA crosslinking strand, this acoustic "ON/OFF" behavior can be specifically tailored towards the presence of a specific biomarker, and produces a change in acoustic signal at concentrations of thrombin consistent with acute deep venous thrombosis. Incorporation of the emulsifying agent poly(ethylene glycol) into the encapsulating shell improves microbubble yield

  18. Segment-based acoustic models for continuous speech recognition

    NASA Astrophysics Data System (ADS)

    Ostendorf, Mari; Rohlicek, J. R.

    1993-07-01

    This research aims to develop new and more accurate stochastic models for speaker-independent continuous speech recognition, by extending previous work in segment-based modeling and by introducing a new hierarchical approach to representing intra-utterance statistical dependencies. These techniques, which are more costly than traditional approaches because of the large search space associated with higher order models, are made feasible through rescoring a set of HMM-generated N-best sentence hypotheses. We expect these different modeling techniques to result in improved recognition performance over that achieved by current systems, which handle only frame-based observations and assume that these observations are independent given an underlying state sequence. In the fourth quarter of the project, we have completed the following: (1) ported our recognition system to the Wall Street Journal task, a standard task in the ARPA community; (2) developed an initial dependency-tree model of intra-utterance observation correlation; and (3) implemented baseline language model estimation software. Our initial results on the Wall Street Journal task are quite good and represent significantly improved performance over most HMM systems reporting on the Nov. 1992 5k vocabulary test set.
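
    The rescoring step itself is simple to state: combine the HMM score, the segment-model score, and a language-model term for each of the N best hypotheses and re-rank. The Python sketch below uses invented hypotheses, scores, and weights purely to illustrate the mechanics.

    # Minimal N-best rescoring sketch; hypotheses, scores and weights are illustrative.
    def rescore(nbest, w_seg=1.0, w_lm=0.8, word_penalty=0.5):
        def total(h):
            return (h["hmm_logprob"]
                    + w_seg * h["segment_logprob"]
                    + w_lm * h["lm_logprob"]
                    - word_penalty * len(h["words"]))
        return sorted(nbest, key=total, reverse=True)

    nbest = [
        {"words": ["sell", "two", "shares"], "hmm_logprob": -120.4,
         "segment_logprob": -95.1, "lm_logprob": -18.2},
        {"words": ["sell", "too", "shares"], "hmm_logprob": -119.8,
         "segment_logprob": -101.7, "lm_logprob": -25.9},
    ]
    print(rescore(nbest)[0]["words"])   # segment and LM scores flip the HMM's top choice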

  19. Type I Rehearsal and Recognition.

    ERIC Educational Resources Information Center

    Glenberg, Arthur; Adams, Frederick

    1978-01-01

    Rote, repetitive Type I Rehearsal is defined as the continuous maintenance of information in memory using the minimum cognitive capacity necessary for maintenance. An analysis of errors made on a forced-choice recognition test supported the hypothesis that acoustic-phonemic components of the memory trace are added or strengthened by this…

  20. Place recognition using batlike sonar

    PubMed Central

    Vanderelst, Dieter; Steckel, Jan; Boen, Andre; Peremans, Herbert; Holderied, Marc W

    2016-01-01

    Echolocating bats have excellent spatial memory and are able to navigate to salient locations using bio-sonar. Navigating and route-following require animals to recognize places. Currently, it is mostly unknown how bats recognize places using echolocation. In this paper, we propose that template-based place recognition might underlie sonar-based navigation in bats. Under this hypothesis, bats recognize places by remembering their echo signature rather than their 3D layout. Using a large body of ensonification data collected in three different habitats, we test the viability of this hypothesis by assessing two critical properties of the proposed echo signatures: (1) they can be uniquely classified and (2) they vary continuously across space. Based on the results presented, we conclude that the proposed echo signatures satisfy both criteria. We discuss how these two properties of the echo signatures can support navigation and building a cognitive map. DOI: http://dx.doi.org/10.7554/eLife.14188.001 PMID:27481189

  1. XV-15 Tiltrotor Aircraft: 1999 Acoustic Testing - Test Report

    NASA Technical Reports Server (NTRS)

    Edwards, Bryan D.; Conner, David A.

    2003-01-01

    An XV-15 acoustic test is discussed, and measured results are presented. The test was conducted by NASA Langley and Bell Helicopter Textron, Inc., during October 1999, at the BHTI test site near Waxahachie, Texas. As part of the NASA-sponsored Short Haul Civil Tiltrotor noise reduction initiative, this was the third in a series of three major XV-15 acoustic tests. Their purpose was to document the acoustic signature of the XV-15 tiltrotor aircraft for a variety of flight conditions and to minimize the noise signature during approach. Tradeoffs between flight procedures and the measured noise are presented to illustrate the noise abatement flight procedures. The test objectives were to: (1) support operation of future tiltrotors by further developing and demonstrating low-noise flight profiles, while maintaining acceptable handling and ride qualities, and (2) refine approach profiles, selected from previous (1995 and 1997) tiltrotor testing, to incorporate Instrument Flight Rules (IFR), handling qualities constraints, operations, and tradeoffs with sound. Primary emphasis was given to the approach flight conditions where blade-vortex interaction (BVI) noise dominates, because this condition influences community noise impact more than any other. An understanding of this part of the noise generating process could guide the development of low noise flight operations and increase the tiltrotor's acceptance in the community.

  2. The acoustic communities: Definition, description and ecological role.

    PubMed

    Farina, Almo; James, Philip

    2016-09-01

    An acoustic community is defined as an aggregation of species that produces sound by using internal or extra-body sound-producing tools. Such communities occur in aquatic (freshwater and marine) and terrestrial environments. An acoustic community is the biophonic component of a soundtope and is characterized by its acoustic signature, which results from the distribution of sonic information associated with signal amplitude and frequency. Distinct acoustic communities can be described according to habitat, the frequency range of the acoustic signals, and the time of day or the season. Near and far fields can be identified empirically, thus the acoustic community can be used as a proxy for biodiversity richness. The importance of ecoacoustic research is rapidly growing due to the increasing awareness of the intrusion of anthropogenic sounds (technophonies) into natural and human-modified ecosystems and the urgent need to adopt more efficient predictive tools to compensate for the effects of climate change. The concept of an acoustic community provides an operational scale for a non-intrusive biodiversity survey and analysis that can be carried out using new passive audio recording technology, coupled with methods of vast data processing and storage. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  3. The effect of buildings on acoustic pulse propagation in an urban environment.

    PubMed

    Albert, Donald G; Liu, Lanbo

    2010-03-01

    Experimental measurements were conducted using acoustic pulse sources in a full-scale artificial village to investigate the reverberation, scattering, and diffraction produced as acoustic waves interact with buildings. These measurements show that a simple acoustic source pulse is transformed into a complex signature when propagating through this environment, and that diffraction acts as a low-pass filter on the acoustic pulse. Sensors located in non-line-of-sight (NLOS) positions usually recorded lower positive pressure maxima than sensors in line-of-sight positions. Often, the first arrival on a NLOS sensor located around a corner was not the largest arrival, as later reflection arrivals that traveled longer distances without diffraction had higher amplitudes. The waveforms are of such complexity that human listeners have difficulty identifying replays of the signatures generated by a single pulse, and the usual methods of source location based on the direction of arrivals may fail in many cases. Theoretical calculations were performed using a two-dimensional finite difference time domain (FDTD) method and compared to the measurements. The predicted peak positive pressure agreed well with the measured amplitudes for all but two sensor locations directly behind buildings, where the omission of rooftop ray paths caused the discrepancy. The FDTD method also produced good agreement with many of the measured waveform characteristics.

  4. Differential phase acoustic microscope for micro-NDE

    NASA Technical Reports Server (NTRS)

    Waters, David D.; Pusateri, T. L.; Huang, S. R.

    1992-01-01

    A differential phase scanning acoustic microscope (DP-SAM) was developed, fabricated, and tested in this project. This includes the acoustic lens and transducers, driving and receiving electronics, scanning stage, scanning software, and display software. This DP-SAM can produce mechanically raster-scanned acoustic microscopic images of differential phase, differential amplitude, or amplitude of the time-gated returned echoes of the samples. The differential phase and differential amplitude images provide better image contrast than conventional amplitude images. A specially designed miniature dual-beam lens was used to form two foci to obtain the differential phase and amplitude information of the echoes. High image resolution (1 micron) was achieved by applying high-frequency (around 1 GHz) acoustic signals to the samples and placing the two foci close to each other (1 micron). Tone bursts were used in this system to obtain a good estimation of the phase differences between echoes from the two adjacent foci. The system can also be used to extract the V(z) acoustic signature. Since two acoustic beams and four receiving modes are available, there are 12 possible combinations to produce an image or a V(z) scan. This is a unique feature of this system that none of the existing acoustic microscopic systems can provide for micro-nondestructive evaluation applications. The entire system, including the lens, electronics, and scanning control software, constitutes a competitive industrial product for nondestructive material inspection and evaluation and has attracted interest from existing acoustic microscope manufacturers.

  5. Privacy protection schemes for fingerprint recognition systems

    NASA Astrophysics Data System (ADS)

    Marasco, Emanuela; Cukic, Bojan

    2015-05-01

    The deployment of fingerprint recognition systems has always raised concerns related to personal privacy. A fingerprint is permanently associated with an individual and, generally, it cannot be reset if compromised in one application. Given that fingerprints are not a secret, potential misuses besides personal recognition represent privacy threats and may lead to public distrust. Privacy mechanisms control access to personal information and limit the likelihood of intrusions. In this paper, image- and feature-level schemes for privacy protection in fingerprint recognition systems are reviewed. Storing only key features of a biometric signature can reduce the likelihood of biometric data being used for unintended purposes. In biometric cryptosystems and biometric-based key release, the biometric component verifies the identity of the user, while the cryptographic key protects the communication channel. In transformation-based approaches, only a transformed version of the original biometric signature is stored. Different applications can use different transforms. Matching is performed in the transformed domain, which enables the preservation of low error rates. Since such templates do not reveal information about individuals, they are referred to as cancelable templates. A compromised template can be re-issued using a different transform. At the image level, de-identification schemes can remove identifiers disclosed for objectives unrelated to the original purpose, while permitting other authorized uses of personal information. Fingerprint images can be de-identified by, for example, mixing fingerprints or removing the gender signature. In both cases, degradation of matching performance is minimized.
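
    The transformation-based idea can be made concrete with one well-known (and here assumed) realization: project the fixed-length feature vector through a keyed random matrix, store only the projection, and revoke a compromised template by issuing a new key. The Python sketch below illustrates that generic scheme; it is not a method proposed in the paper.

    # Illustrative cancelable template via keyed random projection; not the paper's method.
    import numpy as np

    def projection_matrix(key, out_dim, in_dim):
        rng = np.random.default_rng(key)        # the key plays the role of the "transform"
        return rng.standard_normal((out_dim, in_dim)) / np.sqrt(out_dim)

    def enroll(features, key, out_dim=128):
        return projection_matrix(key, out_dim, features.size) @ features

    def verify(template, probe_features, key, threshold=0.8):
        probe = projection_matrix(key, template.size, probe_features.size) @ probe_features
        cos = template @ probe / (np.linalg.norm(template) * np.linalg.norm(probe))
        return cos >= threshold

    rng = np.random.default_rng(0)
    enrolled = rng.standard_normal(256)                     # stand-in for a fingerprint feature vector
    probe_same = enrolled + 0.1 * rng.standard_normal(256)  # noisy capture of the same finger
    probe_other = rng.standard_normal(256)                  # a different finger

    tpl_app1 = enroll(enrolled, key=1111)                   # application 1 uses its own key
    print(verify(tpl_app1, probe_same, key=1111))           # genuine match
    print(verify(tpl_app1, probe_other, key=1111))          # impostor rejected
    print(np.allclose(tpl_app1, enroll(enrolled, key=2222)))  # templates unlinkable across keys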

  6. XV-15 Tiltrotor Aircraft: 1997 Acoustic Testing

    NASA Technical Reports Server (NTRS)

    Edwards, Bryan D.; Conner, David A.

    2003-01-01

    XV-15 acoustic test is discussed, and measured results are presented. The test was conducted by NASA Langley and Bell Helicopter Textron, Inc., during June - July 1997, at the BHTI test site near Waxahachie, Texas. This was the second in a series of three XV-15 tests to document the acoustic signature of the XV-15 tiltrotor aircraft for a variety of flight conditions and minimize the noise signature during approach. Tradeoffs between flight procedures and the measured noise are presented to illustrate the noise abatement flight procedures. The test objectives were to: (1) support operation of future tiltrotors by further developing and demonstrating low-noise flight profiles, while maintaining acceptable handling and ride qualities, and (2) refine approach profiles, selected from previous (1995) tiltrotor testing, to incorporate Instrument Flight Rules (IFR), handling qualities constraints, operations and tradeoffs with sound. Primary emphasis was given to the approach flight conditions where blade-vortex interaction (BVI) noise dominates, because this condition influences community noise impact more than any other. An understanding of this part of the noise generating process could guide the development of low noise flight operations and increase the tiltrotor's acceptance in the community.

  7. Signal processing for passive detection and classification of underwater acoustic signals

    NASA Astrophysics Data System (ADS)

    Chung, Kil Woo

    2011-12-01

    This dissertation examines signal processing for passive detection, classification and tracking of underwater acoustic signals for improving port security and the security of coastal and offshore operations. First, we consider the problem of passive acoustic detection of a diver in a shallow water environment. A frequency-domain multi-band matched-filter approach to swimmer detection is presented. The idea is to break the frequency contents of the hydrophone signals into multiple narrow frequency bands, followed by a time-averaged (about half a second) energy calculation over each band. Then, spectra composed of such energy samples over the chosen frequency bands are correlated to form a decision variable. The frequency bands with the highest signal-to-noise ratio are used for detection. The performance of the proposed approach is demonstrated for experimental data collected for a diver in the Hudson River. We also propose a new referenceless frequency-domain multi-band detector which, unlike other reference-based detectors, does not require a diver-specific signature. Instead, our detector matches a general feature of the diver spectrum in the high frequency range: the spectrum is roughly periodic in time and approximately flat when the diver exhales. The performance of the proposed approach is demonstrated by using experimental data collected from the Hudson River. Moreover, we present detection, classification and tracking of small vessel signals. Hydroacoustic sensors can be applied for the detection of noise generated by vessels, and this noise can be used for vessel detection, classification and tracking. This dissertation presents recent improvements aimed at the measurement and separation of ship DEMON (Detection of Envelope Modulation on Noise) acoustic signatures in busy harbor conditions. Ship signature measurements were conducted in the Hudson River and NY Harbor. The DEMON spectra demonstrated much better temporal stability compared with the full ship
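
    A minimal sketch of the multi-band energy idea described above: band energies are averaged over roughly half-second frames, and the resulting vector of energy samples is correlated with a reference spectrum to form the decision variable. The band edges, frame length, FFT sizes, and correlation threshold are illustrative assumptions.

      # Sketch: multi-band energy features over ~0.5 s frames plus a
      # correlation-based decision variable against a reference spectrum.
      import numpy as np
      from scipy import signal

      def band_energies(x, fs, bands, frame_s=0.5):
          f, t, S = signal.spectrogram(x, fs, nperseg=1024, noverlap=512)
          frame_cols = max(1, int(frame_s * fs / 512))          # spectrogram hop is 512 samples
          feats = []
          for start in range(0, S.shape[1] - frame_cols + 1, frame_cols):
              frame = S[:, start:start + frame_cols].mean(axis=1)
              feats.append([frame[(f >= lo) & (f < hi)].sum() for lo, hi in bands])
          return np.array(feats)                                # shape (frames, n_bands)

      def detect(feats, reference, threshold=0.8):
          # Decision variable: per-frame correlation of the band-energy
          # vector with the reference spectrum.
          ref = (reference - reference.mean()) / (reference.std() + 1e-12)
          z = (feats - feats.mean(axis=1, keepdims=True)) / (feats.std(axis=1, keepdims=True) + 1e-12)
          return (z @ ref) / len(ref) > threshold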

  8. Investigating acoustic-induced deformations in a foam using multiple light scattering.

    PubMed

    Erpelding, M; Guillermic, R M; Dollet, B; Saint-Jalmes, A; Crassous, J

    2010-08-01

    We have studied the effect of an external acoustic wave on bubble displacements inside an aqueous foam. The signature of the acoustic-induced bubble displacements is found using a multiple light scattering technique, and occurs as a modulation on the photon correlation curve. Measurements for various sound frequencies and amplitudes are compared to analytical predictions and numerical simulations. These comparisons finally allow us to elucidate the nontrivial acoustic displacement profile inside the foam; in particular, we find that the acoustic wave creates a localized shear in the vicinity of the solid walls holding the foam, as a consequence of inertial contributions. This study of how bubbles "dance" inside a foam as a response to sound turns out to provide new insights on foam acoustics and sound transmission into a foam, foam deformation at high frequencies, and analysis of light scattering data in samples undergoing nonhomogeneous deformations.

  9. Signature analysis of acoustic emission from graphite/epoxy composites

    NASA Technical Reports Server (NTRS)

    Russell, S. S.; Henneke, E. G., II

    1977-01-01

    Acoustic emissions were monitored for crack extension across and parallel to the fibers in single-ply and multi-ply laminates of graphite-epoxy composites. Spectrum analysis was performed on the transient signals to ascertain whether the fracture mode can be characterized by a particular spectral pattern. The specimens were loaded to failure quasi-statically in a tensile machine. Visual observations were made via either an optical microscope or a television camera. The results indicate that several types of characteristics in the time and frequency domains correspond to different types of failure.

  10. Robust Real-Time and Rotation-Invariant American Sign Language Alphabet Recognition Using Range Camera

    NASA Astrophysics Data System (ADS)

    Lahamy, H.; Lichti, D.

    2012-07-01

    The automatic interpretation of human gestures can be used for natural interaction with computers without the use of mechanical devices such as keyboards and mice. The recognition of hand postures has been studied for many years. However, most of the literature in this area has considered 2D images, which cannot provide a full description of hand gestures. In addition, rotation-invariant identification remains an unsolved problem even with the use of 2D images. The objective of the current study is to design a rotation-invariant recognition process based on a 3D signature for classifying hand postures. A heuristic, voxel-based signature has been designed and implemented. The tracking of the hand motion is achieved with the Kalman filter. A single training image per posture is used in the supervised classification. The designed recognition process and the tracking procedure have been successfully evaluated. This study has demonstrated the efficiency of the proposed rotation-invariant 3D hand posture signature, which leads to a 98.24% recognition rate after testing 12,723 samples of 12 gestures taken from the alphabet of the American Sign Language.

  11. Characterizing phantom arteries with multi-channel laser ultrasonics and photo-acoustics.

    PubMed

    Johnson, Jami L; van Wijk, Kasper; Sabick, Michelle

    2014-03-01

    Multi-channel photo-acoustic and laser ultrasonic waves are used to sense the characteristics of proxies for healthy and diseased vessels. The acquisition system is non-contacting and non-invasive with a pulsed laser source and a laser vibrometer detector. As the wave signatures of our targets are typically low in amplitude, we exploit multi-channel acquisition and processing techniques. These are commonly used in seismology to improve the signal-to-noise ratio of data. We identify vessel proxies with a diameter on the order of 1 mm, at a depth of 18 mm. Variations in scattered and photo-acoustic signatures are related to differences in vessel wall properties and content. The methods described have the potential to improve imaging and better inform interventions for atherosclerotic vessels, such as the carotid artery.

  12. Dragon Ears airborne acoustic array: CSP analysis applied to cross array to compute real-time 2D acoustic sound field

    NASA Astrophysics Data System (ADS)

    Cerwin, Steve; Barnes, Julie; Kell, Scott; Walters, Mark

    2003-09-01

    This paper describes development and application of a novel method to accomplish real-time solid angle acoustic direction finding using two 8-element orthogonal microphone arrays. The developed prototype system was intended for localization and signature recognition of ground-based sounds from a small UAV. Recent advances in computer speeds have enabled the implementation of microphone arrays in many audio applications. Still, the real-time presentation of a two-dimensional sound field for the purpose of audio target localization is computationally challenging. In order to overcome this challenge, a crosspower spectrum phase (CSP) technique was applied to each 8-element arm of a 16-element cross array to provide audio target localization. In this paper, we describe the technique and compare it with two other commonly used techniques: the cross-spectral matrix method and MUSIC. The results show that the CSP technique applied to two 8-element orthogonal arrays provides a computationally efficient solution with reasonable accuracy and tolerable artifacts, sufficient for real-time applications. Additional topics include development of a synchronized 16-channel transmitter and receiver to relay the airborne data to the ground-based processor and presentation of test data demonstrating both ground-mounted operation and airborne localization of ground-based gunshots and loud engine sounds.
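
    Applied to a single microphone pair, the crosspower spectrum phase technique is essentially what is elsewhere called GCC-PHAT. A minimal sketch follows; the whitening constant, zero padding, and the bearing formula from delay, microphone spacing, and sound speed are illustrative assumptions rather than the system's exact implementation.

      # Sketch: crosspower spectrum phase (CSP / GCC-PHAT) delay estimate for
      # one microphone pair, then a bearing from delay and mic spacing.
      import numpy as np

      def csp_delay(x1, x2, fs):
          n = len(x1) + len(x2)
          X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
          G = X1 * np.conj(X2)
          G /= np.abs(G) + 1e-12                     # phase transform: keep phase only
          cc = np.fft.irfft(G, n)
          cc = np.concatenate((cc[-(n // 2):], cc[:n // 2 + 1]))
          lag = np.argmax(cc) - n // 2
          return lag / fs                            # inter-microphone delay in seconds

      def bearing(delay, spacing, c=343.0):
          # Angle of arrival (degrees) for a far-field source and two mics.
          return np.degrees(np.arcsin(np.clip(c * delay / spacing, -1.0, 1.0)))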

  13. Study on Impact Acoustic-Visual Sensor-Based Sorting of ELV Plastic Materials.

    PubMed

    Huang, Jiu; Tian, Chuyuan; Ren, Jingwei; Bian, Zhengfu

    2017-06-08

    This paper concentrates on a study of a novel multi-sensor aided method by using acoustic and visual sensors for detection, recognition and separation of End-of-Life vehicles' (ELVs) plastic materials, in order to optimize the recycling rate of automotive shredder residues (ASRs). Sensor-based sorting technologies have been utilized for material recycling for the last two decades. One of the remaining problems results from black and dark dyed plastics, which are very difficult to recognize using visual sensors. In this paper a new multi-sensor technology for black plastic recognition and sorting by using impact resonant acoustic emissions (AEs) and laser triangulation scanning was introduced. A pilot sorting system which consists of a 3-dimensional visual sensor and an acoustic sensor was also established; two kinds of commonly used vehicle plastics, polypropylene (PP) and acrylonitrile-butadiene-styrene (ABS), and two kinds of modified vehicle plastics, polypropylene/ethylene-propylene-diene-monomer (PP-EPDM) and acrylonitrile-butadiene-styrene/polycarbonate (ABS-PC), were tested. In this study the geometrical features of the tested plastic scraps were measured by the visual sensor, and their corresponding impact acoustic emission (AE) signals were acquired by the acoustic sensor. The signal processing and feature extraction of the visual data as well as the acoustic signals were realized by virtual instruments. Impact acoustic features were recognized by using FFT-based power spectral density analysis. The results show that the characteristics of the tested PP and ABS plastics were totally different, but similar to those of their respective modified materials. The scrap material recognition rate, i.e., the theoretical sorting efficiency, between PP and PP-EPDM could reach about 50%, and between ABS and ABS-PC it could reach about 75% with diameters ranging from 14 mm to 23 mm, and with exclusion of abnormal impacts, the actual separation rates were 39.2% for PP, 41
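
    A minimal sketch of the FFT-based power spectral density feature step described above, using Welch's method and band-integrated energies as the impact acoustic feature vector; the band count and normalization are illustrative assumptions.

      # Sketch: PSD-based features for an impact acoustic emission pulse.
      import numpy as np
      from scipy import signal

      def impact_psd_features(ae_pulse, fs, n_bands=16):
          f, pxx = signal.welch(ae_pulse, fs, nperseg=min(1024, len(ae_pulse)))
          pxx = pxx / (pxx.sum() + 1e-20)              # normalise out impact strength
          edges = np.linspace(0, len(pxx), n_bands + 1, dtype=int)
          # Band-integrated PSD values form the feature vector for a classifier.
          return np.array([pxx[a:b].sum() for a, b in zip(edges[:-1], edges[1:])])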

  14. Speech Recognition: Acoustic-Phonetic Knowledge Acquisition and Representation.

    DTIC Science & Technology

    1987-09-25

    We discuss a framework for acoustic-phonetic knowledge acquisition and representation in speech recognition. The release duration is taken as the voice onset time, or VOT; for the purpose of this investigation, alveolar flaps (as in "butter") and glottalized /t/'s are treated separately. The test sentences, one of which contained a number of semivowels, were each spoken by small groups of female and male talkers.

  15. Ipsilateral masking between acoustic and electric stimulations.

    PubMed

    Lin, Payton; Turner, Christopher W; Gantz, Bruce J; Djalilian, Hamid R; Zeng, Fan-Gang

    2011-08-01

    Residual acoustic hearing can be preserved in the same ear following cochlear implantation with minimally traumatic surgical techniques and short-electrode arrays. The combined electric-acoustic stimulation significantly improves cochlear implant performance, particularly speech recognition in noise. The present study measures simultaneous masking by electric pulses on acoustic pure tones, or vice versa, to investigate electric-acoustic interactions and their underlying psychophysical mechanisms. Six subjects, with acoustic hearing preserved at low frequencies in their implanted ear, participated in the study. One subject had a fully inserted 24 mm Nucleus Freedom array and five subjects had Iowa/Nucleus hybrid implants that were only 10 mm in length. Electric masking data of the long-electrode subject showed that stimulation from the most apical electrodes produced threshold elevations over 10 dB for 500, 625, and 750 Hz probe tones, but no elevation for 125 and 250 Hz tones. In contrast, electric stimulation did not produce any masking in the short-electrode subjects. In the acoustic masking experiment, 125-750 Hz pure tones were used to acoustically mask electric stimulation. The acoustic masking results showed that, independent of pure tone frequency, both long- and short-electrode subjects showed threshold elevations at apical and basal electrodes. The present results can be interpreted in terms of underlying physiological mechanisms related to either place-dependent peripheral masking or place-independent central masking.

  16. Studies of recognition with multitemporal remote sensor data

    NASA Technical Reports Server (NTRS)

    Malila, W. A.; Hieber, R. H.; Cicone, R. C.

    1975-01-01

    Characteristics of multitemporal data and their use in recognition processing were investigated. Principal emphasis was on satellite data collected by the LANDSAT multispectral scanner and on temporal changes throughout a growing season. The effects of spatial misregistration on recognition performance with multitemporal data were examined. A capability to compute probabilities of detection and false alarm was developed and used with simulated distributions for misregistered pixels. Wheat detection was found to be degraded and false alarms increased by misregistration effects. Multitemporal signature characteristics and multitemporal recognition processing were studied to gain insights into problems associated with this approach and possible improvements. Recognition performance with one multitemporal data set displayed marked improvements over results from single-time data.

  17. Improving Acoustic Models by Watching Television

    NASA Technical Reports Server (NTRS)

    Witbrock, Michael J.; Hauptmann, Alexander G.

    1998-01-01

    Obtaining sufficient labelled training data is a persistent difficulty for speech recognition research. Although well-transcribed data is expensive to produce, a constant stream of challenging speech data with rough transcriptions is broadcast as closed-captioned television. We describe a reliable unsupervised method for identifying accurately transcribed sections of these broadcasts, and show how these segments can be used to train a recognition system. Starting from acoustic models trained on the Wall Street Journal database, a single iteration of our training method reduced the word error rate on an independent broadcast television news test set from 62.2% to 59.5%.
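
    A minimal sketch of the segment-selection idea, assuming the system decodes each broadcast with its current acoustic models and keeps only segments whose hypothesis closely matches the closed caption; the word-error threshold and the plain Levenshtein alignment are illustrative assumptions, not the paper's exact procedure.

      # Sketch: keep segments whose recognizer hypothesis matches the closed
      # caption with low word error rate; those segments become training data.
      def word_errors(ref, hyp):
          r, h = ref.split(), hyp.split()
          d = [[i + j if i * j == 0 else 0 for j in range(len(h) + 1)] for i in range(len(r) + 1)]
          for i in range(1, len(r) + 1):
              for j in range(1, len(h) + 1):
                  d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                                d[i - 1][j - 1] + (r[i - 1] != h[j - 1]))
          return d[len(r)][len(h)] / max(len(r), 1)

      def select_segments(segments, max_wer=0.1):
          # segments: iterable of (caption_text, hypothesis_text, audio_span)
          return [span for cap, hyp, span in segments if word_errors(cap, hyp) <= max_wer]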

  18. Processing of Acoustic Cues in Lexical-Tone Identification by Pediatric Cochlear-Implant Recipients

    ERIC Educational Resources Information Center

    Peng, Shu-Chen; Lu, Hui-Ping; Lu, Nelson; Lin, Yung-Song; Deroche, Mickael L. D.; Chatterjee, Monita

    2017-01-01

    Purpose: The objective was to investigate acoustic cue processing in lexical-tone recognition by pediatric cochlear-implant (CI) recipients who are native Mandarin speakers. Method: Lexical-tone recognition was assessed in pediatric CI recipients and listeners with normal hearing (NH) in 2 tasks. In Task 1, participants identified naturally…

  19. Design of a broadband ultra-large area acoustic cloak based on a fluid medium

    NASA Astrophysics Data System (ADS)

    Zhu, Jian; Chen, Tianning; Liang, Qingxuan; Wang, Xiaopeng; Jiang, Ping

    2014-10-01

    A broadband ultra-large area acoustic cloak based on a fluid medium was designed and numerically implemented with homogeneous metamaterials according to transformation acoustics. In the present work, the fluid medium forming the body of the inclusion can be tuned by changing the fluid, so that different acoustic parameters are satisfied without redesigning the whole cloak. The effective density and bulk modulus of the composite materials were designed to agree with the parameters calculated from the coordinate transformation methodology by using effective medium theory. Numerical simulation results showed that the sound propagation and scattering signature could be controlled in the broadband ultra-large area acoustic invisibility cloak, and good cloaking performance was achieved and physically realized with homogeneous materials. The broadband ultra-large area acoustic cloaking properties demonstrate great potential for promoting practical applications of acoustic cloaks.

  20. Combined Electric and Contralateral Acoustic Hearing: Word and Sentence Recognition with Bimodal Hearing

    ERIC Educational Resources Information Center

    Gifford, Rene H.; Dorman, Michael F.; McKarns, Sharon A.; Spahr, Anthony J.

    2007-01-01

    Purpose: The authors assessed whether (a) a full-insertion cochlear implant would provide a higher level of speech understanding than bilateral low-frequency acoustic hearing, (b) contralateral acoustic hearing would add to the speech understanding provided by the implant, and (c) the level of performance achieved with electric stimulation plus…

  1. The role of temporal call structure in species recognition of male Allobates talamancae (Cope, 1875): (Anura: Dendrobatidae).

    PubMed

    Kollarits, Dennis; Wappl, Christian; Ringler, Max

    2017-01-30

    Acoustic species recognition in anurans depends on spectral and temporal characteristics of the advertisement call. The recognition space of a species is shaped by the likelihood of heterospecific acoustic interference. The dendrobatid frogs Allobates talamancae (Cope, 1875) and Silverstoneia flotator (Dunn, 1931) occur syntopically in south-west Costa Rica. A previous study showed that these two species avoid acoustic interference by spectral stratification. In this study, the role of the temporal call structure in the advertisement call of A. talamancae was analyzed, in particular the internote-interval duration in providing species specific temporal cues. In playback trials, artificial advertisement calls with internote-intervals deviating up to ± 90 % from the population mean internote-interval were broadcast to vocally active territorial males. The phonotactic reactions of the males indicated that, unlike in closely related species, internote-interval duration is not a call property essential for species recognition in A. talamancae . However, temporal call structure may be used for species recognition when the likelihood of heterospecific interference is high. Also, the close-encounter courtship call of male A. talamancae is described.

  2. Acoustic communication at the water's edge: evolutionary insights from a mudskipper.

    PubMed

    Polgar, Gianluca; Malavasi, Stefano; Cipolato, Giacomo; Georgalas, Vyron; Clack, Jennifer A; Torricelli, Patrizia

    2011-01-01

    Coupled behavioural observations and acoustical recordings of aggressive dyadic contests showed that the mudskipper Periophthalmodon septemradiatus communicates acoustically while out of water. An analysis of intraspecific variability showed that specific acoustic components may act as tags for individual recognition, further supporting the sounds' communicative value. A correlative analysis amongst acoustical properties and video-acoustical recordings in slow-motion supported first hypotheses on the emission mechanism. Acoustic transmission through the wet exposed substrate was also discussed. These observations were used to support an "exaptation hypothesis", i.e. the maintenance of key adaptations during the first stages of water-to-land vertebrate eco-evolutionary transitions (based on eco-evolutionary and palaeontological considerations), through a comparative bioacoustic analysis of aquatic and semiterrestrial gobiid taxa. In fact, a remarkable similarity was found between mudskipper vocalisations and those emitted by gobioids and other soniferous benthonic fishes.

  3. Aging in Biometrics: An Experimental Analysis on On-Line Signature

    PubMed Central

    Galbally, Javier; Martinez-Diaz, Marcos; Fierrez, Julian

    2013-01-01

    The first consistent and reproducible evaluation of the effect of aging on dynamic signature is reported. Experiments are carried out on a database generated from two previous datasets which were acquired, under very similar conditions, in 6 sessions distributed in a 15-month time span. Three different systems, representing the current most popular approaches in signature recognition, are used in the experiments, proving the degradation suffered by this trait with the passing of time. Several template update strategies are also studied as possible measures to reduce the impact of aging on the system’s performance. Different results regarding the way in which signatures tend to change with time, and their most and least stable features, are also given. PMID:23894557

  4. Novel underwater soundscape: acoustic repertoire of plainfin midshipman fish.

    PubMed

    McIver, Eileen L; Marchaterre, Margaret A; Rice, Aaron N; Bass, Andrew H

    2014-07-01

    Toadfishes are among the best-known groups of sound-producing (vocal) fishes and include species commonly known as toadfish and midshipman. Although midshipman have been the subject of extensive investigation of the neural mechanisms of vocalization, this is the first comprehensive, quantitative analysis of the spectro-temporal characters of their acoustic signals and one of the few for fishes in general. Field recordings of territorial, nest-guarding male midshipman during the breeding season identified a diverse vocal repertoire composed of three basic sound types that varied widely in duration, harmonic structure and degree of amplitude modulation (AM): 'hum', 'grunt' and 'growl'. Hum duration varied nearly 1000-fold, lasting for minutes at a time, with stable harmonic stacks and little envelope modulation throughout the sound. By contrast, grunts were brief, ~30-140 ms, broadband signals produced both in isolation and repetitively as a train of up to 200 at intervals of ~0.5-1.0 s. Growls were also produced alone or repetitively, but at variable intervals of the order of seconds with durations between those of grunts and hums, ranging 60-fold from ~200 ms to 12 s. Growls exhibited prominent harmonics with sudden shifts in pulse repetition rate and highly variable AM patterns, unlike the nearly constant AM of grunt trains and flat envelope of hums. Behavioral and neurophysiological studies support the hypothesis that each sound type's unique acoustic signature contributes to signal recognition mechanisms. Nocturnal production of these sounds against a background chorus dominated constantly for hours by a single sound type, the multi-harmonic hum, reveals a novel underwater soundscape for fish.

  5. Use of Acoustic Emission and Pattern Recognition for Crack Detection of a Large Carbide Anvil

    PubMed Central

    Chen, Bin; Wang, Yanan; Yan, Zhaoli

    2018-01-01

    Large-volume cubic high-pressure apparatus is commonly used to produce synthetic diamond. Due to the high pressure, high temperature and alternating stresses in practical production, cracks often occur in the carbide anvil, thereby resulting in significant economic losses or even casualties. Conventional methods are unsuitable for crack detection of the carbide anvil. This paper is concerned with acoustic emission-based crack detection of carbide anvils, regarded as a pattern recognition problem; this is achieved using a microphone, with methods including sound pulse detection, feature extraction, feature optimization and classifier design. Through analyzing the characteristics of background noise, the cracked sound pulses are separated accurately from the originally continuous signal. Subsequently, three different kinds of features including a zero-crossing rate, sound pressure levels, and linear prediction cepstrum coefficients are presented for characterizing the cracked sound pulses. The original high-dimensional features are adaptively optimized using principal component analysis. A hybrid framework of a support vector machine with k nearest neighbors is designed to recognize the cracked sound pulses. Finally, experiments are conducted in a practical diamond workshop to validate the feasibility and efficiency of the proposed method. PMID:29382144

  6. Use of Acoustic Emission and Pattern Recognition for Crack Detection of a Large Carbide Anvil.

    PubMed

    Chen, Bin; Wang, Yanan; Yan, Zhaoli

    2018-01-29

    Large-volume cubic high-pressure apparatus is commonly used to produce synthetic diamond. Due to the high pressure, high temperature and alternating stresses in practical production, cracks often occur in the carbide anvil, thereby resulting in significant economic losses or even casualties. Conventional methods are unsuitable for crack detection of the carbide anvil. This paper is concerned with acoustic emission-based crack detection of carbide anvils, regarded as a pattern recognition problem; this is achieved using a microphone, with methods including sound pulse detection, feature extraction, feature optimization and classifier design. Through analyzing the characteristics of background noise, the cracked sound pulses are separated accurately from the originally continuous signal. Subsequently, three different kinds of features including a zero-crossing rate, sound pressure levels, and linear prediction cepstrum coefficients are presented for characterizing the cracked sound pulses. The original high-dimensional features are adaptively optimized using principal component analysis. A hybrid framework of a support vector machine with k nearest neighbors is designed to recognize the cracked sound pulses. Finally, experiments are conducted in a practical diamond workshop to validate the feasibility and efficiency of the proposed method.
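
    A minimal sketch of part of the front end described in both records above: zero-crossing rate and sound pressure level features per cracked-sound pulse, reduced with PCA and fed to a classifier. The hybrid SVM with k-nearest-neighbors rule is simplified here to a single SVM, the linear prediction cepstrum coefficients are omitted, and all parameter values are illustrative assumptions.

      # Sketch: simple acoustic features per sound pulse plus a PCA + SVM pipeline.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      def pulse_features(pulse, p_ref=20e-6):
          # Zero-crossing rate: fraction of samples where the sign changes.
          zcr = np.mean(np.abs(np.diff(np.signbit(pulse).astype(int))))
          # Sound pressure level in dB, assuming the pulse is calibrated in pascals.
          spl = 20 * np.log10(np.sqrt(np.mean(pulse ** 2)) / p_ref)
          return [zcr, spl]

      def train_detector(pulses, labels):
          X = np.array([pulse_features(p) for p in pulses])
          clf = make_pipeline(PCA(n_components=2), SVC(kernel="rbf"))
          return clf.fit(X, labels)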

  7. Assessing the acoustical climate of underground stations.

    PubMed

    Nowicka, Elzbieta

    2007-01-01

    Designing a proper acoustical environment--indispensable to speech recognition--in long enclosures is difficult. Although there is some literature on the acoustical conditions in underground stations, there is still little information about methods that make estimation of correct reverberation conditions possible. This paper discusses the assessment of the reverberation conditions of underground stations. A comparison of the measurements of reverberation time in Warsaw's underground stations with calculated data proves there are divergences between measured and calculated early decay time values, especially for long source-receiver distances. Rapid speech transmission index values for measured stations are also presented.

  8. Acoustic Signal Processing in Photorefractive Optical Systems.

    NASA Astrophysics Data System (ADS)

    Zhou, Gan

    This thesis discusses applications of the photorefractive effect in the context of acoustic signal processing. The devices and systems presented here illustrate the ideas and optical principles involved in holographic processing of acoustic information. The interest in optical processing stems from the similarities between holographic optical systems and contemporary models for massively parallel computation, in particular, neural networks. An initial step in acoustic processing is the transformation of acoustic signals into relevant optical forms. A fiber-optic transducer with photorefractive readout transforms acoustic signals into optical images corresponding to their short-time spectrum. The device analyzes complex sound signals and interfaces them with conventional optical correlators. The transducer consists of 130 multimode optical fibers sampling the spectral range of 100 Hz to 5 kHz logarithmically. A physical model of the human cochlea can help us understand some characteristics of human acoustic transduction and signal representation. We construct a life-sized cochlear model using elastic membranes coupled with two fluid-filled chambers, and use a photorefractive novelty filter to investigate its response. The detection sensitivity is determined to be 0.3 angstroms per root Hz at 2 kHz. Qualitative agreement is found between the model response and physiological data. Delay lines map time-domain signals into space -domain and permit holographic processing of temporal information. A parallel optical delay line using dynamic beam coupling in a rotating photorefractive crystal is presented. We experimentally demonstrate a 64 channel device with 0.5 seconds of time-delay and 167 Hz bandwidth. Acoustic signal recognition is described in a photorefractive system implementing the time-delay neural network model. The system consists of a photorefractive optical delay-line and a holographic correlator programmed in a LiNbO_3 crystal. We demonstrate the recognition

  9. A signature correlation study of ground target VHF/UHF ISAR imagery

    NASA Astrophysics Data System (ADS)

    Gatesman, Andrew J.; Beaudoin, Christopher J.; Giles, Robert H.; Kersey, William T.; Waldman, Jerry; Carter, Steve; Nixon, William E.

    2003-09-01

    VV- and HH-polarized radar signatures of several ground targets were acquired in the VHF/UHF band (171-342 MHz) by using 1/35th scale models and an indoor radar range operating from 6 to 12 GHz. Data were processed into medianized radar cross sections as well as focused ISAR imagery. Measurement validation was confirmed by comparing the radar cross section of a test object with a method-of-moments radar cross section prediction code. The signatures of several vehicles from three vehicle classes (tanks, trucks, and TELs) were measured and a signature cross-correlation study was performed. The VHF/UHF band is currently being exploited for its foliage penetration ability; however, the coarse image resolution which results from the relatively long radar wavelengths suggests a more challenging target recognition problem. One of the study's goals was to determine the amount of unique signature content in VHF/UHF ISAR imagery of military ground vehicles. Open-field signatures are compared with each other as well as with simplified shapes of similar size. Signatures were also acquired on one vehicle in a variety of configurations to determine the impact of minor target variations on the signature content at these frequencies.

  10. 3D acoustic atmospheric tomography

    NASA Astrophysics Data System (ADS)

    Rogers, Kevin; Finn, Anthony

    2014-10-01

    This paper presents a method for tomographically reconstructing spatially varying 3D atmospheric temperature profiles and wind velocity fields. Measurements of the acoustic signature of a small Unmanned Aerial Vehicle (UAV), recorded onboard, are compared to ground-based observations of the same signals. The frequency-shifted signal variations are then used to estimate the acoustic propagation delays between the UAV and the ground microphones, which are affected by the atmospheric temperature and wind speed vectors along each sound ray path. The wind and temperature profiles are modelled as the weighted sum of Radial Basis Functions (RBFs), which also allow local meteorological measurements made at the UAV and ground receivers to supplement any acoustic observations. Tomography is used to provide a full 3D reconstruction/visualisation of the observed atmosphere. The technique offers observational mobility under direct user control and the capacity to monitor hazardous atmospheric environments, otherwise not justifiable on the basis of cost or risk. This paper summarises the tomographic technique and reports on the results of simulations and initial field trials. The technique has practical applications for atmospheric research, sound propagation studies, boundary layer meteorology, air pollution measurements, analysis of wind shear, and wind farm surveys.
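
    A minimal sketch of the radial-basis-function field model mentioned above: a scalar field (temperature or one wind component) is written as a weighted sum of Gaussian RBFs whose weights are fitted by least squares to point observations. The Gaussian basis, width, and fitting method are illustrative assumptions, and the ray-path travel-time integrals used in the full tomography are omitted.

      # Sketch: Gaussian RBF field representation fitted to point observations.
      import numpy as np

      def rbf_matrix(points, centres, width):
          # points: (N, d), centres: (M, d) -> design matrix (N, M)
          d2 = ((points[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2 * width ** 2))

      def fit_field(obs_points, obs_values, centres, width):
          A = rbf_matrix(obs_points, centres, width)
          w, *_ = np.linalg.lstsq(A, obs_values, rcond=None)   # RBF weights
          return w

      def evaluate_field(query_points, centres, width, w):
          return rbf_matrix(query_points, centres, width) @ w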

  11. Acoustic-Phonetic Versus Lexical Processing in Nonnative Listeners Differing in Their Dominant Language.

    PubMed

    Shi, Lu-Feng; Koenig, Laura L

    2016-09-01

    Nonnative listeners have difficulty recognizing English words due to underdeveloped acoustic-phonetic and/or lexical skills. The present study used Boothroyd and Nittrouer's (1988) j factor to tease apart these two components of word recognition. Participants included 15 native English and 29 native Russian listeners. Fourteen and 15 of the Russian listeners reported English (ED) and Russian (RD) to be their dominant language, respectively. Listeners were presented 119 consonant-vowel-consonant real and nonsense words in speech-spectrum noise at +6 dB SNR. Responses were scored for word and phoneme recognition, the logarithmic quotient of which yielded j. Word and phoneme recognition was comparable between native and ED listeners but poorer in RD listeners. Analysis of j indicated less effective use of lexical information in RD than in native and ED listeners. Lexical processing was strongly correlated with the length of residence in the United States. Language background is important for nonnative word recognition. Lexical skills can be regarded as nativelike in ED nonnative listeners. Compromised word recognition in ED listeners is unlikely to be a result of poor lexical processing. Performance should be interpreted with caution for listeners dominant in their first language, whose word recognition is affected by both lexical and acoustic-phonetic factors.
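
    For context, the j factor mentioned above has a simple closed form. Assuming the standard Boothroyd and Nittrouer (1988) formulation, whole-word and phoneme recognition probabilities are related as below, so j is roughly the number of phonemes when no lexical knowledge is used and falls toward 1 as lexical context contributes.

      % Boothroyd & Nittrouer (1988) j factor
      %   p_w : probability of recognizing the whole word
      %   p_p : probability of recognizing an individual phoneme
      p_w = p_p^{\,j}
      \qquad\Longrightarrow\qquad
      j = \frac{\log p_w}{\log p_p}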

  12. The Impact of Very High Frequency Surface Reverberation on Coherent Acoustic Propagation and Modeling

    DTIC Science & Technology

    2015-09-30

    Glowacki, O., G. B. Deane, M. Moskalik, Ph. Blondel, J. Tegowski, and M. Blaszczyk, "Underwater acoustic signatures of glacier calving," Geophys. Res. Lett., 2014. DOI: 10.1002/2014GL062859 [published, refereed].

  13. Bearing defect signature analysis using advanced nonlinear signal analysis in a controlled environment

    NASA Technical Reports Server (NTRS)

    Zoladz, T.; Earhart, E.; Fiorucci, T.

    1995-01-01

    Utilizing high-frequency data from a highly instrumented rotor assembly, seeded bearing defect signatures are characterized using both conventional linear approaches, such as power spectral density analysis, and recently developed nonlinear techniques such as bicoherence analysis. Traditional low-frequency (less than 20 kHz) analysis and high-frequency envelope analysis of both accelerometer and acoustic emission data are used to recover characteristic bearing distress information buried deeply in acquired data. The successful coupling of newly developed nonlinear signal analysis with recovered wideband envelope data from accelerometers and acoustic emission sensors is the innovative focus of this research.
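
    A minimal sketch of the high-frequency envelope analysis mentioned above: band-pass the wideband accelerometer or acoustic emission signal, take the Hilbert envelope, and examine the envelope spectrum for characteristic bearing-distress frequencies. The filter band and order are illustrative assumptions, and the nonlinear bicoherence step is not shown.

      # Sketch: envelope spectrum for bearing defect signature analysis.
      import numpy as np
      from scipy import signal

      def envelope_spectrum(x, fs, band=(20e3, 80e3)):
          # Band-pass around a structural resonance excited by the defect impacts.
          sos = signal.butter(4, band, btype="bandpass", fs=fs, output="sos")
          env = np.abs(signal.hilbert(signal.sosfiltfilt(sos, x)))   # Hilbert envelope
          env -= env.mean()
          f = np.fft.rfftfreq(len(env), 1 / fs)
          return f, np.abs(np.fft.rfft(env))   # peaks at defect repetition frequencies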

  14. Acoustic Communication at the Water's Edge: Evolutionary Insights from a Mudskipper

    PubMed Central

    Polgar, Gianluca; Malavasi, Stefano; Cipolato, Giacomo; Georgalas, Vyron; Clack, Jennifer A.; Torricelli, Patrizia

    2011-01-01

    Coupled behavioural observations and acoustical recordings of aggressive dyadic contests showed that the mudskipper Periophthalmodon septemradiatus communicates acoustically while out of water. An analysis of intraspecific variability showed that specific acoustic components may act as tags for individual recognition, further supporting the sounds' communicative value. A correlative analysis amongst acoustical properties and video-acoustical recordings in slow-motion supported first hypotheses on the emission mechanism. Acoustic transmission through the wet exposed substrate was also discussed. These observations were used to support an “exaptation hypothesis”, i.e. the maintenance of key adaptations during the first stages of water-to-land vertebrate eco-evolutionary transitions (based on eco-evolutionary and palaeontological considerations), through a comparative bioacoustic analysis of aquatic and semiterrestrial gobiid taxa. In fact, a remarkable similarity was found between mudskipper vocalisations and those emitted by gobioids and other soniferous benthonic fishes. PMID:21738663

  15. Acoustic cue integration in speech intonation recognition with cochlear implants.

    PubMed

    Peng, Shu-Chen; Chatterjee, Monita; Lu, Nelson

    2012-06-01

    The present article reports on the perceptual weighting of prosodic cues in question-statement identification by adult cochlear implant (CI) listeners. Acoustic analyses of normal-hearing (NH) listeners' production of sentences spoken as questions or statements confirmed that in English the last bisyllabic word in a sentence carries the dominant cues (F0, duration, and intensity patterns) for the contrast. Furthermore, these analyses showed that the F0 contour is the primary cue for the question-statement contrast, with intensity and duration changes conveying important but less reliable information. On the basis of these acoustic findings, the authors examined adult CI listeners' performance in two question-statement identification tasks. In Task 1, 13 CI listeners' question-statement identification accuracy was measured using naturally uttered sentences matched for their syntactic structures. In Task 2, the same listeners' perceptual cue weighting in question-statement identification was assessed using resynthesized single-word stimuli, within which fundamental frequency (F0), intensity, and duration properties were systematically manipulated. Both tasks were also conducted with four NH listeners with full-spectrum and noise-band-vocoded stimuli. Perceptual cue weighting was assessed by comparing the estimated coefficients in logistic models fitted to the data. Of the 13 CI listeners, 7 achieved high performance levels in Task 1. The results of Task 2 indicated that multiple sources of acoustic cues for question-statement identification were utilized to different extents depending on the listening conditions (e.g., full spectrum vs. spectrally degraded) or the listeners' hearing and amplification status (e.g., CI vs. NH).

  16. Distributed Recognition of Natural Songs by European Starlings

    ERIC Educational Resources Information Center

    Knudsen, Daniel; Thompson, Jason V.; Gentner, Timothy Q.

    2010-01-01

    Individual vocal recognition behaviors in songbirds provide an excellent framework for the investigation of comparative psychological and neurobiological mechanisms that support the perception and cognition of complex acoustic communication signals. To this end, the complex songs of European starlings have been studied extensively. Yet, several…

  17. Syllable acoustics, temporal patterns, and call composition vary with behavioral context in Mexican free-tailed bats

    PubMed Central

    Bohn, Kirsten M.; Schmidt-French, Barbara; Ma, Sean T.; Pollak, George D.

    2008-01-01

    Recent research has shown that some bat species have rich vocal repertoires with diverse syllable acoustics. Few studies, however, have compared vocalizations across different behavioral contexts or examined the temporal emission patterns of vocalizations. In this paper, a comprehensive examination of the vocal repertoire of Mexican free-tailed bats, T. brasiliensis, is presented. Syllable acoustics and temporal emission patterns for 16 types of vocalizations including courtship song revealed three main findings. First, although in some cases syllables are unique to specific calls, other syllables are shared among different calls. Second, entire calls associated with one behavior can be embedded into more complex vocalizations used in entirely different behavioral contexts. Third, when different calls are composed of similar syllables, distinctive temporal emission patterns may facilitate call recognition. These results indicate that syllable acoustics alone do not likely provide enough information for call recognition; rather, the acoustic context and temporal emission patterns of vocalizations may affect meaning. PMID:19045674

  18. A universal entropy-driven mechanism for thioredoxin–target recognition

    PubMed Central

    Palde, Prakash B.; Carroll, Kate S.

    2015-01-01

    Cysteine residues in cytosolic proteins are maintained in their reduced state, but can undergo oxidation owing to posttranslational modification during redox signaling or under conditions of oxidative stress. In large part, the reduction of oxidized protein cysteines is mediated by a small 12-kDa thiol oxidoreductase, thioredoxin (Trx). Trx provides reducing equivalents for central metabolic enzymes and is implicated in redox regulation of a wide number of target proteins, including transcription factors. Despite its importance in cellular redox homeostasis, the precise mechanism by which Trx recognizes target proteins, especially in the absence of any apparent signature binding sequence or motif, remains unknown. Knowledge of the forces associated with the molecular recognition that governs Trx–protein interactions is fundamental to our understanding of target specificity. To gain insight into Trx–target recognition, we have thermodynamically characterized the noncovalent interactions between Trx and target proteins before S-S reduction using isothermal titration calorimetry (ITC). Our findings indicate that Trx recognizes the oxidized form of its target proteins with exquisite selectivity, compared with their reduced counterparts. Furthermore, we show that recognition is dependent on the conformational restriction inherent to oxidized targets. Significantly, the thermodynamic signatures for multiple Trx targets reveal favorable entropic contributions as the major recognition force dictating these protein–protein interactions. Taken together, our data afford significant new insight into the molecular forces responsible for Trx–target recognition and should aid the design of new strategies for thiol oxidoreductase inhibition. PMID:26080424

  19. Speech Recognition Using Multiple Features and Multiple Recognizers

    DTIC Science & Technology

    1991-12-03

    The task is to recognize a word from an acoustic signal. The human ear and brain perform this type of recognition with incredible speed and precision. Even though

  20. Phase change events of volatile liquid perfluorocarbon contrast agents produce unique acoustic signatures

    PubMed Central

    Sheeran, Paul S.; Matsunaga, Terry O.; Dayton, Paul A.

    2015-01-01

    Phase-change contrast agents (PCCAs) provide a dynamic platform to approach problems in medical ultrasound (US). Upon US-mediated activation, the liquid core vaporizes and expands to produce a gas bubble ideal for US imaging and therapy. In this study, we demonstrate through high-speed video microscopy and US interrogation that PCCAs composed of highly volatile perfluorocarbons (PFCs) exhibit unique acoustic behavior that can be detected and differentiated from standard microbubble contrast agents. Experimental results show that when activated with short pulses PCCAs will over-expand and undergo unforced radial oscillation while settling to a final bubble diameter. The size-dependent oscillation phenomenon generates a unique acoustic signal that can be passively detected in both time and frequency domain using confocal piston transducers with an ‘activate high’ (8 MHz, 2 cycles), ‘listen low’ (1 MHz) scheme. Results show that the magnitude of the acoustic ‘signature’ increases as PFC boiling point decreases. By using a band-limited spectral processing technique, the droplet signals can be isolated from controls and used to build experimental relationships between concentration and vaporization pressure. The techniques shown here may be useful for physical studies as well as development of droplet-specific imaging techniques. PMID:24351961

  1. Dual function seal: visualized digital signature for electronic medical record systems.

    PubMed

    Yu, Yao-Chang; Hou, Ting-Wei; Chiang, Tzu-Chiang

    2012-10-01

    Digital signature is an important cryptographic technology used to provide integrity and non-repudiation in electronic medical record systems (EMRS), and it is required by law. However, digital signatures normally appear in forms unrecognizable to medical staff, which may reduce trust among staff who are used to handwritten signatures or seals. Therefore, in this paper we propose a dual function seal to extend user trust from a traditional seal to a digital signature. The proposed dual function seal is a prototype that combines the traditional seal and the digital signature. With this prototype, medical personnel not only can put a seal on paper but can also generate a visualized digital signature for electronic medical records. Medical personnel can then look at the visualized digital signature and directly know which medical personnel generated it, just as with a traditional seal. Discrete wavelet transform (DWT) is used as an image processing method to generate a visualized digital signature, and the peak signal to noise ratio (PSNR) is calculated to verify that distortions of all converted images are beyond human recognition; the PSNR values of our converted images range from 70 dB to 80 dB. Signature recoverability is also tested to ensure that the visualized digital signature is verifiable. A simulated EMRS is implemented to show how the visualized digital signature can be integrated into an EMRS.
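
    A minimal sketch of the PSNR check used to confirm that distortions of the converted images are beyond human recognition; the formula assumes 8-bit images, and the surrounding DWT embedding pipeline is not shown. Values around 70-80 dB, as reported above, correspond to differences far below visual perception.

      # Sketch: peak signal-to-noise ratio between original and converted images.
      import numpy as np

      def psnr(original, converted, max_value=255.0):
          mse = np.mean((original.astype(float) - converted.astype(float)) ** 2)
          if mse == 0:
              return float("inf")
          return 10 * np.log10(max_value ** 2 / mse)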

  2. Acoustics and dynamics of coaxial interacting vortex rings

    NASA Technical Reports Server (NTRS)

    Shariff, Karim; Leonard, Anthony; Zabusky, Norman J.; Ferziger, Joel H.

    1988-01-01

    Using a contour dynamics method for inviscid axisymmetric flow we examine the effects of core deformation on the dynamics and acoustic signatures of coaxial interacting vortex rings. Both 'passage' and 'collision' (head-on) interactions are studied for initially identical vortices. Good correspondence with experiments is obtained. A simple model which retains only the elliptic degree of freedom in the core shape is used to explain some of the calculated features.

  3. Helmet-mounted acoustic array for hostile fire detection and localization in an urban environment

    NASA Astrophysics Data System (ADS)

    Scanlon, Michael V.

    2008-04-01

    The detection and localization of hostile weapons firing has been demonstrated successfully with acoustic sensor arrays on unattended ground sensors (UGS), ground-vehicles, and unmanned aerial vehicles (UAVs). Some of the more mature systems have demonstrated significant capabilities and provide direct support to ongoing counter-sniper operations. The Army Research Laboratory (ARL) is conducting research and development for a helmet-mounted system to acoustically detect and localize small arms firing, or other events such as RPG, mortars, and explosions, as well as other non-transient signatures. Since today's soldier is quickly being asked to take on more and more reconnaissance, surveillance, & target acquisition (RSTA) functions, sensor augmentation enables him to become a mobile and networked sensor node on the complex and dynamic battlefield. Having a body-worn threat detection and localization capability for events that pose an immediate danger to the soldiers around him can significantly enhance their survivability and lethality, as well as enable him to provide and use situational awareness clues on the networked battlefield. This paper addresses some of the difficulties encountered by an acoustic system in an urban environment. Complex reverberation, multipath, diffraction, and signature masking by building structures makes this a very harsh environment for robust detection and classification of shockwaves and muzzle blasts. Multifunctional acoustic detection arrays can provide persistent surveillance and enhanced situational awareness for every soldier.

  4. Effects of Cognitive Load on Speech Recognition

    ERIC Educational Resources Information Center

    Mattys, Sven L.; Wiget, Lukas

    2011-01-01

    The effect of cognitive load (CL) on speech recognition has received little attention despite the prevalence of CL in everyday life, e.g., dual-tasking. To assess the effect of CL on the interaction between lexically-mediated and acoustically-mediated processes, we measured the magnitude of the "Ganong effect" (i.e., lexical bias on phoneme…

  5. Optical generation and detection of gigahertz-frequency longitudinal and shear acoustic waves in liquids: Theory and experiment

    NASA Astrophysics Data System (ADS)

    Klieber, Christoph; Pezeril, Thomas; Andrieu, Stéphane; Nelson, Keith A.

    2012-07-01

    We describe an adaptation of picosecond laser ultrasonics tailored for study of GHz-frequency longitudinal and shear acoustic waves in liquids. Time-domain coherent Brillouin scattering is used to detect multicycle acoustic waves after their propagation through variable thickness liquid layers into a solid substrate. A specialized optical pulse shaping method is used to generate sequences of pulses whose repetition rate determines the acoustic frequency. The measurements reveal the viscoelastic liquid properties and also include signatures of the optical and acoustic cavities formed by the multilayer sample assembly. Modeling of the signals allows their features to be distinguished so that liquid properties can be extracted reliably. Longitudinal and shear acoustic wave data from glycerol and from the silicon oil DC704 are presented.

  6. Speech Perception in Complex Acoustic Environments: Developmental Effects

    ERIC Educational Resources Information Center

    Leibold, Lori J.

    2017-01-01

    Purpose: The ability to hear and understand speech in complex acoustic environments follows a prolonged time course of development. The purpose of this article is to provide a general overview of the literature describing age effects in susceptibility to auditory masking in the context of speech recognition, including a summary of findings related…

  7. Reduced isothermal feature set for long wave infrared (LWIR) face recognition

    NASA Astrophysics Data System (ADS)

    Donoso, Ramiro; San Martín, Cesar; Hermosilla, Gabriel

    2017-06-01

    In this paper, we introduce a new concept in the thermal face recognition area: isothermal features. These consist of a feature vector built from a thermal signature that depends on the emission of the skin of the person and its temperature. A thermal signature is the appearance of the face to infrared sensors and is unique to each person. The infrared face is decomposed into isothermal regions that capture the thermal features of the face. Each isothermal region is modeled as a set of circles, each centered on a pixel of the image, and the feature vector is composed of the maximum radius of the circles in that isothermal region. This feature vector corresponds to the thermal signature of a person. The face recognition process is built using a modification of the Expectation Maximization (EM) algorithm in conjunction with a proposed probabilistic index for the classification process. Results obtained using an infrared database are compared with typical state-of-the-art techniques, showing better performance, especially in uncontrolled acquisition scenarios.

  8. Modeling the effect of channel number and interaction on consonant recognition in a cochlear implant peak-picking strategy.

    PubMed

    Verschuur, Carl

    2009-03-01

    Difficulties in speech recognition experienced by cochlear implant users may be attributed both to information loss caused by signal processing and to information loss associated with the interface between the electrode array and auditory nervous system, including cross-channel interaction. The objective of the work reported here was to attempt to partial out the relative contribution of these different factors to consonant recognition. This was achieved by comparing patterns of consonant feature recognition as a function of channel number and presence/absence of background noise in users of the Nucleus 24 device with normal hearing subjects listening to acoustic models that mimicked processing of that device. Additionally, in the acoustic model experiment, a simulation of cross-channel spread of excitation, or "channel interaction," was varied. Results showed that acoustic model experiments were highly correlated with patterns of performance in better-performing cochlear implant users. Deficits to consonant recognition in this subgroup could be attributed to cochlear implant processing, whereas channel interaction played a much smaller role in determining performance errors. The study also showed that large changes to channel number in the Advanced Combination Encoder signal processing strategy led to no substantial changes in performance.
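
    A minimal sketch of a noise-band vocoder of the kind used as an acoustic model of cochlear implant processing: speech is split into analysis bands, each band's envelope is extracted and used to modulate band-limited noise, and the modulated bands are summed. The channel count, band spacing, filter orders, and envelope cutoff are illustrative assumptions, not the exact parameters of the Nucleus 24 simulation used in the study.

      # Sketch: simple noise-band vocoder (acoustic model of CI processing).
      import numpy as np
      from scipy import signal

      def noise_vocoder(x, fs, n_channels=8, lo=200.0, hi=7000.0, env_cut=160.0):
          edges = np.geomspace(lo, hi, n_channels + 1)           # log-spaced band edges
          env_sos = signal.butter(2, env_cut, fs=fs, output="sos")  # envelope low-pass
          out = np.zeros(len(x), dtype=float)
          for f1, f2 in zip(edges[:-1], edges[1:]):
              band_sos = signal.butter(4, (f1, f2), btype="bandpass", fs=fs, output="sos")
              env = signal.sosfiltfilt(env_sos, np.abs(signal.sosfiltfilt(band_sos, x)))
              noise = signal.sosfiltfilt(band_sos, np.random.randn(len(x)))
              out += np.clip(env, 0, None) * noise               # envelope-modulated noise band
          return out / (np.max(np.abs(out)) + 1e-12)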

  9. Finding Acoustic Regularities in Speech: Applications to Phonetic Recognition

    DTIC Science & Technology

    1988-12-01

    Acoustic segments, described completely in acoustic terms, are related to the phonemes by a grammar which is determined using automated procedures operating on a set of training data.

  10. Infrared Signature Modeling and Analysis of Aircraft Plume

    NASA Astrophysics Data System (ADS)

    Rao, Arvind G.

    2011-09-01

    In recent years, the survivability of an aircraft has been put to task more than ever before. One of the main reasons is the increase in the usage of Infrared (IR) guided Anti-Aircraft Missiles, especially due to the availability of Man Portable Air Defence Systems (MANPADS) with some terrorist groups. Thus, aircraft IR signatures are gaining more importance as compared to their radar, visual, acoustic, or any other signatures. The exhaust plume ejected from the aircraft is one of the important sources of IR signature in military aircraft that use low bypass turbofan engines for propulsion. The focus of the present work is to model the spectral IR radiation emission from the exhaust jet of a typical military aircraft and to evaluate the aircraft susceptibility in terms of the aircraft lock-on range due to its plume emission, for a simple case against a typical Surface to Air Missile (SAM). The IR signature due to the aircraft plume is examined in a holistic manner. A comprehensive methodology of computing IR signatures and their effect on aircraft lock-on range is elaborated. Commercial CFD software has been used to predict the plume thermo-physical properties, and subsequently an in-house developed code was used for evaluating the IR radiation emitted by the plume. The LOWTRAN code has been used for modeling the atmospheric IR characteristics. The results obtained from these models are in reasonable agreement with some available experimental data. The analysis carried out in this paper succinctly brings out the intricacy of the radiation emitted by various gaseous species in the plume and the role of atmospheric IR transmissivity in dictating the plume IR signature as perceived by an IR-guided SAM.

  11. Bio-inspired UAV routing, source localization, and acoustic signature classification for persistent surveillance

    NASA Astrophysics Data System (ADS)

    Burman, Jerry; Hespanha, Joao; Madhow, Upamanyu; Pham, Tien

    2011-06-01

    A team consisting of Teledyne Scientific Company, the University of California at Santa Barbara and the Army Research Laboratory* is developing technologies in support of automated data exfiltration from heterogeneous battlefield sensor networks to enhance situational awareness for dismounts and command echelons. Unmanned aerial vehicles (UAV) provide an effective means to autonomously collect data from a sparse network of unattended ground sensors (UGSs) that cannot communicate with each other. UAVs are used to reduce the system reaction time by generating autonomous collection routes that are data-driven. Bio-inspired techniques for search provide a novel strategy to detect, capture and fuse data. A fast and accurate method has been developed to localize an event by fusing data from a sparse number of UGSs. This technique uses a bio-inspired algorithm based on chemotaxis or the motion of bacteria seeking nutrients in their environment. A unique acoustic event classification algorithm was also developed based on using swarm optimization. Additional studies addressed the problem of routing multiple UAVs, optimally placing sensors in the field and locating the source of gunfire at helicopters. A field test was conducted in November of 2009 at Camp Roberts, CA. The field test results showed that a system controlled by bio-inspired software algorithms can autonomously detect and locate the source of an acoustic event with very high accuracy and visually verify the event. In nine independent test runs of a UAV, the system autonomously located the position of an explosion nine times with an average accuracy of 3 meters. The time required to perform source localization using the UAV was on the order of a few minutes based on UAV flight times. In June 2011, additional field tests of the system will be performed and will include multiple acoustic events, optimal sensor placement based on acoustic phenomenology and the use of the International Technology Alliance (ITA

  12. Combining Passive Thermography and Acoustic Emission for Large Area Fatigue Damage Growth Assessment of a Composite Structure

    NASA Technical Reports Server (NTRS)

    Zalameda, Joseph N.; Horne, Michael R.; Madaras, Eric I.; Burke, Eric R.

    2016-01-01

    Passive thermography and acoustic emission data were obtained for improved real time damage detection during fatigue loading. A strong positive correlation was demonstrated between acoustic energy event location and thermal heating, especially if the structure under load was nearing ultimate failure. An image processing routine was developed to map the acoustic emission data onto the thermal imagery. This required removing optical barrel distortion and angular rotation from the thermal data. The acoustic emission data were then mapped onto thermal data, revealing the cluster of acoustic emission event locations around the thermal signatures of interest. By combining both techniques, progression of damage growth is confirmed and areas of failure are identified. This technology provides improved real time inspections of advanced composite structures during fatigue testing. Keywords: thermal nondestructive evaluation, fatigue damage detection, aerospace composite inspection, acoustic emission, passive thermography.
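
    The registration step described above (undistort, rotate, then overlay acoustic emission event locations) can be sketched with standard OpenCV calls, assuming OpenCV is available. The camera matrix, distortion coefficients, rotation angle, and event coordinates below are placeholders, not the calibration values used in the study.

```python
import cv2
import numpy as np

def register_thermal_frame(thermal, camera_matrix, dist_coeffs, rotation_deg):
    """Remove optical barrel distortion, then the residual angular rotation,
    so acoustic emission (AE) coordinates can be overlaid on the thermal image."""
    undistorted = cv2.undistort(thermal, camera_matrix, dist_coeffs)
    h, w = undistorted.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), rotation_deg, 1.0)
    return cv2.warpAffine(undistorted, rot, (w, h))

def overlay_ae_events(image, ae_xy, radius=4):
    """Mark AE event locations (already expressed in image pixel coordinates)."""
    canvas = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)
    for x, y in ae_xy:
        cv2.circle(canvas, (int(x), int(y)), radius, (0, 0, 255), -1)
    return canvas

# Placeholder calibration values; real ones would come from camera calibration.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.array([-0.25, 0.05, 0.0, 0.0, 0.0])                  # mild barrel distortion
frame = np.random.randint(0, 255, (480, 640), dtype=np.uint8)  # stand-in thermal frame
events = [(300, 220), (310, 230)]                              # stand-in AE locations
overlay = overlay_ae_events(register_thermal_frame(frame, K, dist, 1.5), events)
```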

  13. Seismic and acoustic signal identification algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LADD,MARK D.; ALAM,M. KATHLEEN; SLEEFE,GERARD E.

    2000-04-03

    This paper will describe an algorithm for detecting and classifying seismic and acoustic signals for unattended ground sensors. The algorithm must be computationally efficient and continuously process a data stream in order to establish whether or not a desired signal has changed state (turned-on or off). The paper will focus on describing a Fourier based technique that compares the running power spectral density estimate of the data to a predetermined signature in order to determine if the desired signal has changed state. How to establish the signature and the detection thresholds will be discussed as well as the theoretical statistics of the algorithm for the Gaussian noise case with results from simulated data. Actual seismic data results will also be discussed along with techniques used to reduce false alarms due to the inherent nonstationary noise environments found with actual data.
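
    A minimal sketch of the Fourier-based idea, assuming SciPy: compare a running Welch power spectral density estimate of each incoming block against a predetermined signature spectrum and threshold the normalized correlation. The threshold, block length, and toy 60 Hz signature are illustrative assumptions; the paper derives its thresholds from the Gaussian-noise statistics.

```python
import numpy as np
from scipy.signal import welch

def make_detector(signature_psd, signature_freqs, threshold):
    """Return a detector that flags whether a data block's running PSD matches a
    predetermined signature (normalized correlation against the signature)."""
    sig = signature_psd / np.linalg.norm(signature_psd)

    def detect(block, fs):
        f, pxx = welch(block, fs=fs, nperseg=min(1024, len(block)))
        pxx = np.interp(signature_freqs, f, pxx)      # align to the signature's grid
        score = float(np.dot(pxx / np.linalg.norm(pxx), sig))
        return score > threshold, score

    return detect

# Toy usage: the signature is a 60 Hz machine tone; detect when it turns on.
fs = 1000.0
t = np.arange(0, 4.0, 1 / fs)
f_sig, psd_sig = welch(np.sin(2 * np.pi * 60 * t), fs=fs, nperseg=1024)
detect = make_detector(psd_sig, f_sig, threshold=0.8)          # threshold is illustrative
on_block = np.sin(2 * np.pi * 60 * t[:1000]) + 0.1 * np.random.randn(1000)
off_block = 0.1 * np.random.randn(1000)
print(detect(on_block, fs), detect(off_block, fs))
```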

  14. RecceMan: an interactive recognition assistance for image-based reconnaissance: synergistic effects of human perception and computational methods for object recognition, identification, and infrastructure analysis

    NASA Astrophysics Data System (ADS)

    El Bekri, Nadia; Angele, Susanne; Ruckhäberle, Martin; Peinsipp-Byma, Elisabeth; Haelke, Bruno

    2015-10-01

    This paper introduces an interactive recognition assistance system for imaging reconnaissance. This system supports aerial image analysts on missions during two main tasks: object recognition and infrastructure analysis. Object recognition concentrates on the classification of a single object. Infrastructure analysis deals with the description of the components of an infrastructure and the recognition of the infrastructure type (e.g. military airfield). Based on satellite or aerial images, aerial image analysts are able to extract single object features and thereby recognize different object types. It is one of the most challenging tasks in imaging reconnaissance. Currently, there are no high-potential ATR (automatic target recognition) applications available; as a consequence, the human observer cannot be replaced entirely. State-of-the-art ATR applications cannot match human perception and interpretation in equal measure. Why is this still such a critical issue? First, cluttered and noisy images make it difficult to automatically extract, classify and identify object types. Second, due to changed warfare and the rise of asymmetric threats, it is nearly impossible to create an underlying data set containing all features, objects or infrastructure types. Many other factors, such as environmental parameters or aspect angles, further complicate the application of ATR. Due to the lack of suitable ATR procedures, the human factor is still important and so far irreplaceable. In order to use the potential benefits of human perception and computational methods in a synergistic way, both are unified in an interactive assistance system. RecceMan® (Reconnaissance Manual) offers two different modes for aerial image analysts on missions: the object recognition mode and the infrastructure analysis mode. The aim of the object recognition mode is to recognize a certain object type based on the object features that originated from the image signatures. The

  15. Identification of four class emotion from Indonesian spoken language using acoustic and lexical features

    NASA Astrophysics Data System (ADS)

    Kasyidi, Fatan; Puji Lestari, Dessi

    2018-03-01

    One of the important aspects of human-to-human communication is understanding the emotion of each party. Recently, interactions between humans and computers have continued to develop, especially affective interaction, in which emotion recognition is an important component. This paper presents our extended work on emotion recognition of Indonesian spoken language to identify four main classes of emotion: happy, sad, angry, and contentment, using a combination of acoustic/prosodic features and lexical features. We construct an emotional speech corpus from Indonesian television talk shows, where the situations are as close as possible to natural ones. After constructing the corpus, the acoustic/prosodic and lexical features are extracted to train the emotion model. We employ several machine learning algorithms, such as Support Vector Machine (SVM), Naive Bayes, and Random Forest, to obtain the best model. The experimental results on the test data show that the best model, an SVM with an RBF kernel, achieves an F-measure of 0.447 using only the acoustic/prosodic features and an F-measure of 0.488 using both acoustic/prosodic and lexical features to recognize the four emotion classes.
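
    A minimal sketch of the classification stage, assuming scikit-learn: an RBF-kernel SVM with standardized features evaluated by macro F-measure, matching the evaluation metric reported above. The synthetic feature matrix and labels stand in for the paper's corpus and feature set.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the corpus: rows are utterance-level acoustic/prosodic
# feature vectors (e.g., F0 and energy statistics); labels are the four classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))              # placeholder features
y = rng.integers(0, 4, size=200)            # 0=happy, 1=sad, 2=angry, 3=contentment

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print("macro F1:", cross_val_score(model, X, y, cv=5, scoring="f1_macro").mean())
```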

  16. Hydrodynamic influences on acoustical and optical backscatter in a fringing reef environment

    NASA Astrophysics Data System (ADS)

    Pawlak, Geno; Moline, Mark A.; Terrill, Eric J.; Colin, Patrick L.

    2017-01-01

    Observations of hydrodynamics along with optical and acoustical water characteristics in a tropical fringing reef environment reveal a distinct signature associated with flow characteristics and tidal conditions. Flow conditions are dominated by tidal forcing with an offshore component from the reef flat during ebb. Measurements span variable wave conditions, enabling identification of wave effects on optical and acoustical water properties. High-frequency acoustic backscatter (6 MHz) is strongly correlated with tidal forcing, increasing with offshore-directed flow and modulated by wave height, indicating a dominant hydrodynamic influence. Backscatter at 300 and 1200 kHz is predominantly diurnal, suggesting a biological component. Optical backscatter is closely correlated with high-frequency acoustic backscatter across the range of study conditions. Acoustic backscatter frequency dependence is used along with changes in optical properties to interpret particle-size variations. Changes across wave heights suggest shifts in particle-size distributions, with increases in the relative concentration of smaller particles for larger wave conditions. Establishing a connection between the physical processes of a fringing tropical reef and the resulting acoustical and optical signals allows for interpretation and forecasting of the remote sensing response of these phenomena over larger scales.

  17. Quantum Signature of Analog Hawking Radiation in Momentum Space.

    PubMed

    Boiron, D; Fabbri, A; Larré, P-É; Pavloff, N; Westbrook, C I; Ziń, P

    2015-07-10

    We consider a sonic analog of a black hole realized in the one-dimensional flow of a Bose-Einstein condensate. Our theoretical analysis demonstrates that one- and two-body momentum distributions accessible by present-day experimental techniques provide clear direct evidence (i) of the occurrence of a sonic horizon, (ii) of the associated acoustic Hawking radiation, and (iii) of the quantum nature of the Hawking process. The signature of the quantum behavior persists even at temperatures larger than the chemical potential.

  18. Feature-based RNN target recognition

    NASA Astrophysics Data System (ADS)

    Bakircioglu, Hakan; Gelenbe, Erol

    1998-09-01

    Detection and recognition of target signatures in sensory data obtained by synthetic aperture radar (SAR), forward-looking infrared, or laser radar have received considerable attention in the literature. In this paper, we propose a feature-based target classification methodology to detect and classify targets in cluttered SAR images that makes use of selective signature data from the sensory data, together with a neural network technique using a set of trained networks based on the Random Neural Network (RNN) model (Gelenbe 89, 90, 91, 93), each trained to act as a matched filter. We propose and investigate radial features of target shapes that are invariant to rotation, translation, and scale, to characterize target and clutter signatures. These features are then used to train a set of learning RNNs which can be used to detect targets within clutter with high accuracy, and to classify the targets or man-made objects from natural clutter. Experimental data from SAR imagery are used to illustrate and validate the proposed method, and to calculate Receiver Operating Characteristics which illustrate the performance of the proposed algorithm.
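
    The rotation-, translation-, and scale-invariant radial features can be illustrated by a simple centroid-distance histogram over the target boundary. This is a hedged sketch of the general idea, not the paper's exact feature definition; the bin count and test outline are arbitrary.

```python
import numpy as np

def radial_feature(boundary_xy, n_bins=32):
    """Histogram of centroid-to-boundary distances: centroid-relative (translation
    invariant), divided by the maximum radius (scale invariant), and order-free
    through the histogram (rotation invariant)."""
    pts = np.asarray(boundary_xy, float)
    r = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    r = r / (r.max() + 1e-12)
    hist, _ = np.histogram(r, bins=n_bins, range=(0.0, 1.0), density=True)
    return hist / n_bins          # normalize so the bins sum to one

# Toy check: a rotated, scaled, shifted copy of an outline gives the same feature.
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
outline = np.c_[np.cos(theta) * (1 + 0.3 * np.cos(5 * theta)),
                np.sin(theta) * (1 + 0.3 * np.cos(5 * theta))]
rot = np.array([[np.cos(0.7), -np.sin(0.7)], [np.sin(0.7), np.cos(0.7)]])
moved = 3.0 * outline @ rot.T + np.array([10.0, -4.0])
print(np.allclose(radial_feature(outline), radial_feature(moved)))
```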

  19. The disassociation of visual and acoustic conspecific cues decreases discrimination by female zebra finches (Taeniopygia guttata).

    PubMed

    Campbell, Dana L M; Hauber, Mark E

    2009-08-01

    Female zebra finches (Taeniopygia guttata) use visual and acoustic traits for accurate recognition of male conspecifics. Evidence from video playbacks confirms that both sensory modalities are important for conspecific and species discrimination, but experimental evidence of the individual roles of these cue types affecting live conspecific recognition is limited. In a spatial paradigm to test discrimination, the authors used live male zebra finch stimuli of 2 color morphs, wild-type (conspecific) and white with a painted black beak (foreign), producing 1 of 2 vocalization types: songs and calls learned from zebra finch parents (conspecific) or cross-fostered songs and calls learned from Bengalese finch (Lonchura striata vars. domestica) foster parents (foreign). The authors found that female zebra finches consistently preferred males with conspecific visual and acoustic cues over males with foreign cues, but did not discriminate when the conspecific and foreign visual and acoustic cues were mismatched. These results indicate the importance of both visual and acoustic features for female zebra finches when discriminating between live conspecific males. Copyright 2009 APA, all rights reserved.

  20. Autonomous target recognition using remotely sensed surface vibration measurements

    NASA Astrophysics Data System (ADS)

    Geurts, James; Ruck, Dennis W.; Rogers, Steven K.; Oxley, Mark E.; Barr, Dallas N.

    1993-09-01

    The remotely measured surface vibration signatures of tactical military ground vehicles are investigated for use in target classification and identification friend or foe (IFF) systems. The use of remote surface vibration sensing by a laser radar reduces the effects of partial occlusion, concealment, and camouflage experienced by automatic target recognition systems using traditional imagery in a tactical battlefield environment. Linear Predictive Coding (LPC) efficiently represents the vibration signatures and nearest neighbor classifiers exploit the LPC feature set using a variety of distortion metrics. Nearest neighbor classifiers achieve an 88 percent classification rate in an eight class problem, representing a classification performance increase of thirty percent from previous efforts. A novel confidence figure of merit is implemented to attain a 100 percent classification rate with less than 60 percent rejection. The high classification rates are achieved on a target set which would pose significant problems to traditional image-based recognition systems. The targets are presented to the sensor in a variety of aspects and engine speeds at a range of 1 kilometer. The classification rates achieved demonstrate the benefits of using remote vibration measurement in a ground IFF system. The signature modeling and classification system can also be used to identify rotary and fixed-wing targets.
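
    A compact sketch of the processing chain described above, assuming NumPy/SciPy: LPC coefficients computed by the autocorrelation (Yule-Walker) method serve as the vibration signature, and a nearest-neighbor rule with a Euclidean distortion metric performs classification. The toy vibration signals and model order are illustrative; the paper evaluates several distortion metrics on laser-radar measurements.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_coeffs(x, order=10):
    """LPC via the autocorrelation (Yule-Walker) method: solve the Toeplitz
    normal equations R a = r for the prediction coefficients."""
    x = np.asarray(x, float) - np.mean(x)
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    return solve_toeplitz(r[:order], r[1:order + 1])

def nearest_neighbor_label(test_feat, train_feats, train_labels):
    """1-NN with a Euclidean distortion between LPC feature vectors (the paper
    compares several distortion metrics; this is the simplest choice)."""
    d = np.linalg.norm(train_feats - test_feat, axis=1)
    return train_labels[int(np.argmin(d))]

# Toy usage: two 'vehicle' classes with different dominant vibration frequencies.
fs = 2000
t = np.arange(0, 1.0, 1 / fs)
def vib(freq):
    return np.sin(2 * np.pi * freq * t) + 0.3 * np.random.randn(len(t))

train = np.array([lpc_coeffs(vib(f)) for f in (55, 57, 120, 118)])
labels = np.array([0, 0, 1, 1])
print(nearest_neighbor_label(lpc_coeffs(vib(56)), train, labels))   # expect class 0
```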

  1. Directionality and Maneuvering Effects on a Surface Ship Underwater Acoustic Signature

    DTIC Science & Technology

    2008-08-01

    peaks are at 375 and 1175 Hz. Note that the determination of the effective source depth for a ship is not straightforward, as in reality a ship is a...contained buoys with GPS positioning, each recording two calibrated hydrophones with effective acoustic bandwidth from 150 Hz to 5 kHz. In straight, constant...footprints can extend significant distances, potentially of the order of tens to hundreds of kilometers for LF 100 Hz signals. In the civilian domain

  2. Fast Physically Accurate Rendering of Multimodal Signatures of Distributed Fracture in Heterogeneous Materials.

    PubMed

    Visell, Yon

    2015-04-01

    This paper proposes a fast, physically accurate method for synthesizing multimodal, acoustic and haptic, signatures of distributed fracture in quasi-brittle heterogeneous materials, such as wood, granular media, or other fiber composites. Fracture processes in these materials are challenging to simulate with existing methods, due to the prevalence of large numbers of disordered, quasi-random spatial degrees of freedom, representing the complex physical state of a sample over the geometric volume of interest. Here, I develop an algorithm for simulating such processes, building on a class of statistical lattice models of fracture that have been widely investigated in the physics literature. This algorithm is enabled through a recently published mathematical construction based on the inverse transform method of random number sampling. It yields a purely time domain stochastic jump process representing stress fluctuations in the medium. The latter can be readily extended by a mean field approximation that captures the averaged constitutive (stress-strain) behavior of the material. Numerical simulations and interactive examples demonstrate the ability of these algorithms to generate physically plausible acoustic and haptic signatures of fracture in complex, natural materials interactively at audio sampling rates.
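
    The core mechanism, a time-domain stochastic jump process generated by inverse transform sampling, can be sketched as below: event times arrive as a Poisson process and each event relaxes the stress by a jump drawn from a truncated power law via its inverse CDF. The distribution, rates, and exponents are placeholder assumptions, not the paper's lattice-model statistics.

```python
import numpy as np

def powerlaw_inverse_cdf(u, alpha=2.0, s_min=1.0, s_max=1e3):
    """Inverse transform sampling of a truncated power law p(s) ~ s^(-alpha),
    a common stand-in for avalanche-size statistics in lattice fracture models."""
    a = 1.0 - alpha
    lo, hi = s_min ** a, s_max ** a
    return (lo + u * (hi - lo)) ** (1.0 / a)

def stress_jump_process(duration, rate, fs, seed=0):
    """Time-domain stochastic jump process: Poisson event times, each event
    relaxing the stress by a power-law distributed jump, on an audio-rate grid."""
    rng = np.random.default_rng(seed)
    n_events = rng.poisson(rate * duration)
    times = np.sort(rng.uniform(0.0, duration, n_events))
    sizes = powerlaw_inverse_cdf(rng.uniform(size=n_events))
    t = np.arange(0.0, duration, 1.0 / fs)
    stress = np.zeros_like(t)
    for ti, si in zip(times, sizes):
        stress[t >= ti] -= si          # each micro-fracture event drops the stress
    return t, stress

t, stress = stress_jump_process(duration=0.5, rate=200.0, fs=44100)
```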

  3. Within-individual variation in bullfrog vocalizations: implications for a vocally mediated social recognition system.

    PubMed

    Bee, Mark A

    2004-12-01

    Acoustic signals provide a basis for social recognition in a wide range of animals. Few studies, however, have attempted to relate the patterns of individual variation in signals to behavioral discrimination thresholds used by receivers to discriminate among individuals. North American bullfrogs (Rana catesbeiana) discriminate among familiar and unfamiliar individuals based on individual variation in advertisement calls. The sources, patterns, and magnitudes of variation in eight acoustic properties of multiple-note advertisement calls were examined to understand how patterns of within-individual variation might either constrain, or provide additional cues for, vocal recognition. Six of eight acoustic properties exhibited significant note-to-note variation within multiple-note calls. Despite this source of within-individual variation, all call properties varied significantly among individuals, and multivariate analyses indicated that call notes were individually distinct. Fine-temporal and spectral call properties exhibited less within-individual variation compared to gross-temporal properties and contributed most toward statistically distinguishing among individuals. Among-individual differences in the patterns of within-individual variation in some properties suggest that within-individual variation could also function as a recognition cue. The distributions of among-individual and within-individual differences were used to generate hypotheses about the expected behavioral discrimination thresholds of receivers.

  4. A speech processing study using an acoustic model of a multiple-channel cochlear implant

    NASA Astrophysics Data System (ADS)

    Xu, Ying

    1998-10-01

    A cochlear implant is an electronic device designed to provide sound information for adults and children who have bilateral profound hearing loss. The task of representing speech signals as electrical stimuli is central to the design and performance of cochlear implants. Studies have shown that the current speech-processing strategies provide significant benefits to cochlear implant users. However, the evaluation and development of speech-processing strategies have been complicated by hardware limitations and large variability in user performance. To alleviate these problems, an acoustic model of a cochlear implant with the SPEAK strategy is implemented in this study, in which a set of acoustic stimuli whose psychophysical characteristics are as close as possible to those produced by a cochlear implant are presented to normal-hearing subjects. To test the effectiveness and feasibility of this acoustic model, a psychophysical experiment was conducted to match the performance of a normal-hearing listener using model-processed signals to that of a cochlear implant user. Good agreement was found between an implanted patient and an age-matched normal-hearing subject in a dynamic signal discrimination experiment, indicating that this acoustic model is a reasonably good approximation of a cochlear implant with the SPEAK strategy. The acoustic model was then used to examine the potential of the SPEAK strategy in terms of its temporal and frequency encoding of speech. It was hypothesized that better temporal and frequency encoding of speech can be accomplished by higher stimulation rates and a larger number of activated channels. Vowel and consonant recognition tests were conducted on normal-hearing subjects using speech tokens processed by the acoustic model, with different combinations of stimulation rate and number of activated channels. The results showed that vowel recognition was best at 600 pps and 8 activated channels, but further increases in stimulation rate and

  5. Acoustic classification of zooplankton

    NASA Astrophysics Data System (ADS)

    Martin Traykovski, Linda V.

    1998-11-01

    Work on the forward problem in zooplankton bioacoustics has resulted in the identification of three categories of acoustic scatterers: elastic-shelled (e.g. pteropods), fluid-like (e.g. euphausiids), and gas-bearing (e.g. siphonophores). The relationship between backscattered energy and animal biomass has been shown to vary by a factor of ~19,000 across these categories, so that to make accurate estimates of zooplankton biomass from acoustic backscatter measurements of the ocean, the acoustic characteristics of the species of interest must be well-understood. This thesis describes the development of both feature based and model based classification techniques to invert broadband acoustic echoes from individual zooplankton for scatterer type, as well as for particular parameters such as animal orientation. The feature based Empirical Orthogonal Function Classifier (EOFC) discriminates scatterer types by identifying characteristic modes of variability in the echo spectra, exploiting only the inherent characteristic structure of the acoustic signatures. The model based Model Parameterisation Classifier (MPC) classifies based on correlation of observed echo spectra with simplified parameterisations of theoretical scattering models for the three classes. The Covariance Mean Variance Classifiers (CMVC) are a set of advanced model based techniques which exploit the full complexity of the theoretical models by searching the entire physical model parameter space without employing simplifying parameterisations. Three different CMVC algorithms were developed: the Integrated Score Classifier (ISC), the Pairwise Score Classifier (PSC) and the Bayesian Probability Classifier (BPC); these classifiers assign observations to a class based on similarities in covariance, mean, and variance, while accounting for model ambiguity and validity. These feature based and model based inversion techniques were successfully applied to several thousand echoes acquired from broadband (~350 k
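
    The EOFC idea, characteristic modes of variability in echo spectra, can be approximated with per-class principal components: a test echo is assigned to the class whose empirical orthogonal functions reconstruct its spectrum best. The sketch below, assuming scikit-learn, is a simplified reading of that classifier with synthetic spectra.

```python
import numpy as np
from sklearn.decomposition import PCA

class EOFClassifier:
    """Per-class empirical orthogonal functions (via PCA) of echo spectra; a test
    spectrum is assigned to the class whose EOF subspace reconstructs it best."""

    def __init__(self, n_modes=3):
        self.n_modes = n_modes
        self.models = {}

    def fit(self, spectra, labels):
        for c in np.unique(labels):
            self.models[c] = PCA(n_components=self.n_modes).fit(spectra[labels == c])
        return self

    def predict(self, spectra):
        errs, classes = [], list(self.models.keys())
        for c in classes:
            recon = self.models[c].inverse_transform(self.models[c].transform(spectra))
            errs.append(np.linalg.norm(spectra - recon, axis=1))
        return np.array(classes)[np.argmin(np.vstack(errs), axis=0)]

# Toy usage with synthetic 'echo spectra' (rows = echoes, columns = frequency bins).
rng = np.random.default_rng(1)
f = np.linspace(0, 1, 64)
ripple = np.sin(8 * np.pi * f) + 0.1 * rng.normal(size=(40, 64))            # ripple-like
peak = np.exp(-((f - 0.2) ** 2) / 0.01) + 0.1 * rng.normal(size=(40, 64))   # resonance-like
X, y = np.vstack([ripple, peak]), np.array([0] * 40 + [1] * 40)
print((EOFClassifier(n_modes=2).fit(X, y).predict(X) == y).mean())
```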

  6. The path to COVIS: A review of acoustic imaging of hydrothermal flow regimes

    NASA Astrophysics Data System (ADS)

    Bemis, Karen G.; Silver, Deborah; Xu, Guangyu; Light, Russ; Jackson, Darrell; Jones, Christopher; Ozer, Sedat; Liu, Li

    2015-11-01

    Acoustic imaging of hydrothermal flow regimes started with the incidental recognition of a plume on a routine sonar scan for obstacles in the path of the human-occupied submersible ALVIN. Developments in sonar engineering, acoustic data processing and scientific visualization have been combined to develop technology which can effectively capture the behavior of focused and diffuse hydrothermal discharge. This paper traces the development of these acoustic imaging techniques for hydrothermal flow regimes from their conception through to the development of the Cabled Observatory Vent Imaging Sonar (COVIS). COVIS has monitored such flow eight times a day for several years. Successful acoustic techniques for estimating plume entrainment, bending, vertical rise, volume flux, and heat flux are presented as is the state-of-the-art in diffuse flow detection.

  7. Face recognition in the thermal infrared domain

    NASA Astrophysics Data System (ADS)

    Kowalski, M.; Grudzień, A.; Palka, N.; Szustakowski, M.

    2017-10-01

    Biometrics refers to unique human characteristics. Each unique characteristic may be used to label and describe individuals and for automatic recognition of a person based on physiological or behavioural properties. One of the most natural and most popular biometric traits is the face. The most common research methods on face recognition are based on visible light. State-of-the-art face recognition systems operating in the visible light spectrum achieve a very high level of recognition accuracy under controlled environmental conditions. Thermal infrared imagery seems to be a promising alternative or complement to visible range imaging due to its relatively high resistance to illumination changes. A thermal infrared image of the human face presents its unique heat signature and can be used for recognition. The characteristics of thermal images maintain advantages over visible light images, and can be used to improve algorithms of human face recognition in several aspects. Mid-wavelength and far-wavelength infrared, both referred to as thermal infrared, are therefore promising alternatives or complements to the visible range. We present a study on 1:1 recognition in the thermal infrared domain. The two approaches we consider are stand-off face verification of a non-moving person and stop-less face verification on the move. The paper presents the methodology of our studies and the challenges for face recognition systems in the thermal infrared domain.

  8. Acoustical standards in engineering acoustics

    NASA Astrophysics Data System (ADS)

    Burkhard, Mahlon D.

    2004-05-01

    The Engineering Acoustics Technical Committee is concerned with the evolution and improvement of acoustical techniques and apparatus, and with the promotion of new applications of acoustics. As cited in the Membership Directory and Handbook (2002), the interest areas include transducers and arrays; underwater acoustic systems; acoustical instrumentation and monitoring; applied sonics, promotion of useful effects, information gathering and transmission; audio engineering; acoustic holography and acoustic imaging; acoustic signal processing (equipment and techniques); and ultrasound and infrasound. Evident connections between engineering and standards are the needs for calibration, consistent terminology, uniform presentation of data, reference levels, and design targets for product development. Thus, for the acoustical engineer, standards are a tool for practice, for communication, and for comparison of his efforts with those of others. Development of many standards depends on knowledge of the way products are put together for the marketplace, and acoustical engineers provide important input to the development of standards. Acoustical engineers and members of the Engineering Acoustics arm of the Society both benefit from and contribute to the Acoustical Standards of the Acoustical Society.

  9. Improving Speaker Recognition by Biometric Voice Deconstruction

    PubMed Central

    Mazaira-Fernandez, Luis Miguel; Álvarez-Marquina, Agustín; Gómez-Vilda, Pedro

    2015-01-01

    Person identification, especially in critical environments, has always been a subject of great interest. However, it has gained a new dimension in a world threatened by a new kind of terrorism that uses social networks (e.g., YouTube) to broadcast its message. In this new scenario, classical identification methods (such as fingerprints or face recognition) have been forcibly replaced by alternative biometric characteristics such as voice, as sometimes this is the only feature available. The present study benefits from the advances achieved in recent years in understanding and modeling voice production. The paper hypothesizes that a gender-dependent characterization of speakers combined with the use of a set of features derived from the components, resulting from the deconstruction of the voice into its glottal source and vocal tract estimates, will enhance recognition rates when compared to classical approaches. A general description of the main hypothesis and the methodology followed to extract the gender-dependent extended biometric parameters is given. Experimental validation is carried out both on a highly controlled acoustic condition database, and on a mobile phone network recorded under non-controlled acoustic conditions. PMID:26442245

  10. Improving Speaker Recognition by Biometric Voice Deconstruction.

    PubMed

    Mazaira-Fernandez, Luis Miguel; Álvarez-Marquina, Agustín; Gómez-Vilda, Pedro

    2015-01-01

    Person identification, especially in critical environments, has always been a subject of great interest. However, it has gained a new dimension in a world threatened by a new kind of terrorism that uses social networks (e.g., YouTube) to broadcast its message. In this new scenario, classical identification methods (such as fingerprints or face recognition) have been forcibly replaced by alternative biometric characteristics such as voice, as sometimes this is the only feature available. The present study benefits from the advances achieved in recent years in understanding and modeling voice production. The paper hypothesizes that a gender-dependent characterization of speakers combined with the use of a set of features derived from the components, resulting from the deconstruction of the voice into its glottal source and vocal tract estimates, will enhance recognition rates when compared to classical approaches. A general description of the main hypothesis and the methodology followed to extract the gender-dependent extended biometric parameters is given. Experimental validation is carried out both on a highly controlled acoustic condition database, and on a mobile phone network recorded under non-controlled acoustic conditions.

  11. Robust Recognition of Loud and Lombard speech in the Fighter Cockpit Environment

    DTIC Science & Technology

    1988-08-01

    the latter as inter-speaker variability. According to Zue [Z85], inter-speaker variabilities can be attributed to sociolinguistic background, dialect... Journal of the Acoustical Society of America, Vol 50, 1971. [At74] B. S. Atal, "Linear prediction for speaker identification," Journal of the Acoustical...Society of America, Vol 55, 1974. [B77] B. Beek, E. P. Neuberg, and D. C. Hodge, "An Assessment of the Technology of Automatic Speech Recognition for

  12. Effect of Context and Hearing Loss on Time-Gated Word Recognition in Children.

    PubMed

    Lewis, Dawna; Kopun, Judy; McCreery, Ryan; Brennan, Marc; Nishi, Kanae; Cordrey, Evan; Stelmachowicz, Pat; Moeller, Mary Pat

    The purpose of this study was to examine word recognition in children who are hard of hearing (CHH) and children with normal hearing (CNH) in response to time-gated words presented in high- versus low-predictability sentences (HP, LP), where semantic cues were manipulated. Findings inform our understanding of how CHH combine cognitive-linguistic and acoustic-phonetic cues to support spoken word recognition. It was hypothesized that both groups of children would be able to make use of linguistic cues provided by HP sentences to support word recognition. CHH were expected to require greater acoustic information (more gates) than CNH to correctly identify words in the LP condition. In addition, it was hypothesized that error patterns would differ across groups. Sixteen CHH with mild to moderate hearing loss and 16 age-matched CNH participated (5 to 12 years). Test stimuli included 15 LP and 15 HP age-appropriate sentences. The final word of each sentence was divided into segments and recombined with the sentence frame to create series of sentences in which the final word was progressively longer by the gated increments. Stimuli were presented monaurally through headphones and children were asked to identify the target word at each successive gate. They also were asked to rate their confidence in their word choice using a five- or three-point scale. For CHH, the signals were processed through a hearing aid simulator. Standardized language measures were used to assess the contribution of linguistic skills. Analysis of language measures revealed that the CNH and CHH performed within the average range on language abilities. Both groups correctly recognized a significantly higher percentage of words in the HP condition than in the LP condition. Although CHH performed comparably with CNH in terms of successfully recognizing the majority of words, differences were observed in the amount of acoustic-phonetic information needed to achieve accurate word recognition. CHH needed

  13. Effect of Context and Hearing Loss on Time-Gated Word Recognition in Children

    PubMed Central

    Lewis, Dawna E.; Kopun, Judy; McCreery, Ryan; Brennan, Marc; Nishi, Kanae; Cordrey, Evan; Stelmachowicz, Pat; Moeller, Mary Pat

    2016-01-01

    Objectives The purpose of this study was to examine word recognition in children who are hard of hearing (CHH) and children with normal hearing (CNH) in response to time-gated words presented in high- vs. low-predictability sentences (HP, LP), where semantic cues were manipulated. Findings inform our understanding of how CHH combine cognitive-linguistic and acoustic-phonetic cues to support spoken word recognition. It was hypothesized that both groups of children would be able to make use of linguistic cues provided by HP sentences to support word recognition. CHH were expected to require greater acoustic information (more gates) than CNH to correctly identify words in the LP condition. In addition, it was hypothesized that error patterns would differ across groups. Design Sixteen CHH with mild-to-moderate hearing loss and 16 age-matched CNH participated (5–12 yrs). Test stimuli included 15 LP and 15 HP age-appropriate sentences. The final word of each sentence was divided into segments and recombined with the sentence frame to create series of sentences in which the final word was progressively longer by the gated increments. Stimuli were presented monaurally through headphones and children were asked to identify the target word at each successive gate. They also were asked to rate their confidence in their word choice using a 5- or 3-point scale. For CHH, the signals were processed through a hearing aid simulator. Standardized language measures were used to assess the contribution of linguistic skills. Results Analysis of language measures revealed that the CNH and CHH performed within the average range on language abilities. Both groups correctly recognized a significantly higher percentage of words in the HP condition than in the LP condition. Although CHH performed comparably to CNH in terms of successfully recognizing the majority of words, differences were observed in the amount of acoustic-phonetic information needed to achieve accurate word recognition

  14. Effects of cooperating and conflicting cues on speech intonation recognition by cochlear implant users and normal hearing listeners.

    PubMed

    Peng, Shu-Chen; Lu, Nelson; Chatterjee, Monita

    2009-01-01

    Cochlear implant (CI) recipients have only limited access to fundamental frequency (F0) information, and thus exhibit deficits in speech intonation recognition. For speech intonation, F0 serves as the primary cue, and other potential acoustic cues (e.g. intensity properties) may also contribute. This study examined the effects of cooperating or conflicting acoustic cues on speech intonation recognition by adult CI and normal hearing (NH) listeners with full-spectrum and spectrally degraded speech stimuli. Identification of speech intonation that signifies question and statement contrasts was measured in 13 CI recipients and 4 NH listeners, using resynthesized bi-syllabic words, where F0 and intensity properties were systematically manipulated. The stimulus set was comprised of tokens whose acoustic cues (i.e. F0 contour and intensity patterns) were either cooperating or conflicting. Subjects identified if each stimulus is a 'statement' or a 'question' in a single-interval, 2-alternative forced-choice (2AFC) paradigm. Logistic models were fitted to the data, and estimated coefficients were compared under cooperating and conflicting conditions, between the subject groups (CI vs. NH), and under full-spectrum and spectrally degraded conditions for NH listeners. The results indicated that CI listeners' intonation recognition was enhanced by cooperating F0 contour and intensity cues, but was adversely affected by these cues being conflicting. On the other hand, with full-spectrum stimuli, NH listeners' intonation recognition was not affected by cues being cooperating or conflicting. The effects of cues being cooperating or conflicting were comparable between the CI group and NH listeners with spectrally degraded stimuli. These findings suggest the importance of taking multiple acoustic sources for speech recognition into consideration in aural rehabilitation for CI recipients. Copyright (C) 2009 S. Karger AG, Basel.

  15. Effects of cooperating and conflicting cues on speech intonation recognition by cochlear implant users and normal hearing listeners

    PubMed Central

    Peng, Shu-Chen; Lu, Nelson; Chatterjee, Monita

    2009-01-01

    Cochlear implant (CI) recipients have only limited access to fundamental frequency (F0) information, and thus exhibit deficits in speech intonation recognition. For speech intonation, F0 serves as the primary cue, and other potential acoustic cues (e.g., intensity properties) may also contribute. This study examined the effects of acoustic cues being cooperating or conflicting on speech intonation recognition by adult cochlear implant (CI), and normal-hearing (NH) listeners with full-spectrum and spectrally degraded speech stimuli. Identification of speech intonation that signifies question and statement contrasts was measured in 13 CI recipients and 4 NH listeners, using resynthesized bi-syllabic words, where F0 and intensity properties were systematically manipulated. The stimulus set was comprised of tokens whose acoustic cues, i.e., F0 contour and intensity patterns, were either cooperating or conflicting. Subjects identified if each stimulus is a “statement” or a “question” in a single-interval, two-alternative forced-choice (2AFC) paradigm. Logistic models were fitted to the data, and estimated coefficients were compared under cooperating and conflicting conditions, between the subject groups (CI vs. NH), and under full-spectrum and spectrally degraded conditions for NH listeners. The results indicated that CI listeners’ intonation recognition was enhanced by F0 contour and intensity cues being cooperating, but was adversely affected by these cues being conflicting. On the other hand, with full-spectrum stimuli, NH listeners’ intonation recognition was not affected by cues being cooperating or conflicting. The effects of cues being cooperating or conflicting were comparable between the CI group and NH listeners with spectrally-degraded stimuli. These findings suggest the importance of taking multiple acoustic sources for speech recognition into consideration in aural rehabilitation for CI recipients. PMID:19372651

  16. Acoustic Emission Patterns and the Transition to Ductility in Sub-Micron Scale Laboratory Earthquakes

    NASA Astrophysics Data System (ADS)

    Ghaffari, H.; Xia, K.; Young, R.

    2013-12-01

    We report observation of a transition from the brittle to ductile regime in precursor events from different rock materials (Granite, Sandstone, Basalt, and Gypsum) and Polymers (PMMA, PTFE and CR-39). Acoustic emission patterns associated with sub-micron scale laboratory earthquakes are mapped into network parameter spaces (functional damage networks). The sub-classes hold nearly constant timescales, indicating dependency of the sub-phases on the mechanism governing the previous evolutionary phase, i.e., deformation and failure of asperities. Based on our findings, we propose that the signature of the non-linear elastic zone around a crack tip is mapped into the details of the evolutionary phases, supporting the formation of a strongly weak zone in the vicinity of crack tips. Moreover, we recognize sub-micron to micron ruptures with signatures of 'stiffening' in the deformation phase of acoustic-waveforms. We propose that the latter rupture fronts carry critical rupture extensions, including possible dislocations faster than the shear wave speed. Using 'template super-shear waveforms' and their network characteristics, we show that the acoustic emission signals are possible super-shear or intersonic events. Ref. [1] Ghaffari, H. O., and R. P. Young. "Acoustic-Friction Networks and the Evolution of Precursor Rupture Fronts in Laboratory Earthquakes." Nature Scientific reports 3 (2013). [2] Xia, Kaiwen, Ares J. Rosakis, and Hiroo Kanamori. "Laboratory earthquakes: The sub-Rayleigh-to-supershear rupture transition." Science 303.5665 (2004): 1859-1861. [3] Mello, M., et al. "Identifying the unique ground motion signatures of supershear earthquakes: Theory and experiments." Tectonophysics 493.3 (2010): 297-326. [4] Gumbsch, Peter, and Huajian Gao. "Dislocations faster than the speed of sound." Science 283.5404 (1999): 965-968. [5] Livne, Ariel, et al. "The near-tip fields of fast cracks." Science 327.5971 (2010): 1359-1363. [6] Rycroft, Chris H., and Eran Bouchbinder

  17. Artillery/mortar type classification based on detected acoustic transients

    NASA Astrophysics Data System (ADS)

    Morcos, Amir; Grasing, David; Desai, Sachi

    2008-04-01

    Feature extraction methods based on the statistical analysis of the change in event pressure levels over a period and the level of ambient pressure excitation facilitate the development of a robust classification algorithm. The features reliably discriminate mortar and artillery variants via acoustic signals produced during the launch events. Acoustic sensors exploit the sound waveform generated by the blast to identify mortar and artillery variants (as type A, etc.) through analysis of the waveform. Distinct characteristics arise among the different mortar/artillery variants because varying HE mortar payloads and related charges produce launch events of varying size. The waveform holds various harmonic properties distinct to a given mortar/artillery variant that can be employed, through advanced signal processing and data mining techniques, to classify a given type. Skewness and other statistical processing techniques are used to extract the predominant components from the acoustic signatures at ranges exceeding 3000 m. Exploiting these techniques helps develop a feature set highly independent of range, providing discrimination based on acoustic elements of the blast wave. Highly reliable discrimination is achieved with a feed-forward neural network classifier trained on a feature space derived from the distribution of statistical coefficients, the frequency spectrum, and higher-frequency details found within different energy bands. The processes described herein extend current technologies, which emphasize acoustic sensor systems to provide such situational awareness.
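
    A minimal sketch of the feature-plus-classifier chain described above, assuming SciPy and scikit-learn: waveform distribution statistics (including skewness) and level-normalized band energies feed a small feed-forward neural network. The synthetic transients, labels, and network size are illustrative stand-ins for the launch-event recordings.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def blast_features(x, n_bands=8):
    """Range-robust features from a launch transient: distribution statistics of
    the pressure waveform plus relative energy in coarse frequency bands (the
    normalization removes the absolute level, which varies strongly with range)."""
    x = np.asarray(x, float)
    spec = np.abs(np.fft.rfft(x)) ** 2
    band_energy = np.array([b.sum() for b in np.array_split(spec, n_bands)])
    return np.r_[skew(x), kurtosis(x), np.std(x), band_energy / band_energy.sum()]

# Placeholder training data: two synthetic 'variants' with different decay times.
rng = np.random.default_rng(0)
decays = rng.choice([300, 1200], size=60)
X = np.array([blast_features(rng.standard_normal(4096) * np.exp(-np.arange(4096) / d))
              for d in decays])
y = (decays == 1200).astype(int)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
clf.fit(X, y)
```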

  18. A survey of the 2001 to 2005 quartz crystal microbalance biosensor literature: applications of acoustic physics to the analysis of biomolecular interactions.

    PubMed

    Cooper, Matthew A; Singleton, Victoria T

    2007-01-01

    The widespread exploitation of biosensors in the analysis of molecular recognition has its origins in the mid-1990s following the release of commercial systems based on surface plasmon resonance (SPR). More recently, platforms based on piezoelectric acoustic sensors (principally 'bulk acoustic wave' (BAW), 'thickness shear mode' (TSM) sensors or 'quartz crystal microbalances' (QCM)), have been released that are driving the publication of a large number of papers analysing binding specificities, affinities, kinetics and conformational changes associated with a molecular recognition event. This article highlights salient theoretical and practical aspects of the technologies that underpin acoustic analysis, then reviews exemplary papers in key application areas involving small molecular weight ligands, carbohydrates, proteins, nucleic acids, viruses, bacteria, cells and lipidic and polymeric interfaces. Key differentiators between optical and acoustic sensing modalities are also reviewed. Copyright (c) 2007 John Wiley & Sons, Ltd.

  19. Study of acoustic correlates associate with emotional speech

    NASA Astrophysics Data System (ADS)

    Yildirim, Serdar; Lee, Sungbok; Lee, Chul Min; Bulut, Murtaza; Busso, Carlos; Kazemzadeh, Ebrahim; Narayanan, Shrikanth

    2004-10-01

    This study investigates the acoustic characteristics of four different emotions expressed in speech. The aim is to obtain detailed acoustic knowledge on how a speech signal is modulated by changes from neutral to a certain emotional state. Such knowledge is necessary for automatic emotion recognition and classification and for emotional speech synthesis. Speech data obtained from two semi-professional actresses are analyzed and compared. Each subject produces 211 sentences with four different emotions: neutral, sad, angry, happy. We analyze changes in temporal and acoustic parameters such as the magnitude and variability of segmental duration, fundamental frequency and the first three formant frequencies as a function of emotion. Acoustic differences among the emotions are also explored with mutual information computation, multidimensional scaling and acoustic likelihood comparison with normal speech. Results indicate that speech associated with anger and happiness is characterized by longer duration, shorter interword silence, and higher pitch and rms energy with wider ranges. Sadness is distinguished from other emotions by lower rms energy and longer interword silence. Interestingly, the difference in formant pattern between [happiness/anger] and [neutral/sadness] is better reflected in back vowels such as /a/ (as in 'father') than in front vowels. Detailed results on intra- and interspeaker variability will be reported.

  20. Variabilities detected by acoustic emission from filament-wound Aramid fiber/epoxy composite pressure vessels

    NASA Technical Reports Server (NTRS)

    Hamstad, M. A.

    1978-01-01

    Two hundred and fifty Aramid fiber/epoxy pressure vessels were filament-wound over spherical aluminum mandrels under controlled conditions typical for advanced filament-winding. A random set of 30 vessels was proof-tested to 74% of the expected burst pressure; acoustic emission data were obtained during the proof test. A specially designed fixture was used to permit in situ calibration of the acoustic emission system for each vessel by the fracture of a 4-mm length of pencil lead (0.3 mm in diameter) which was in contact with the vessel. Acoustic emission signatures obtained during testing showed larger than expected variabilities in the mechanical damage done during the proof tests. To date, the cause of these variabilities has not been identified.

  1. Simulation of Acoustic Noise Generated by an Airbreathing, Beam-Powered Launch Vehicle

    NASA Astrophysics Data System (ADS)

    Kennedy, W. C.; Van Laak, P.; Scarton, H. A.; Myrabo, L. N.

    2005-04-01

    A simple acoustic model is developed for predicting the noise signature vs. power level for advanced laser-propelled lightcraft capable of single-stage flights into low Earth orbit. This model predicts the noise levels generated by a pulsed detonation engine (PDE) during the initial lift-off and acceleration phase, for two representative 'tractor-beam' lightcraft designs: a 1-place 'Mercury' vehicle (2.5-m diameter, 900-kg) and a larger 5-place 'Apollo' vehicle (5-m diameter, 5555-kg), both the subject of an earlier study. The use of digital techniques to simulate the expected PDE noise signature is discussed, and three examples of fly-by noise signatures are presented. The reduction, or complete elimination, of perceptible noise from such engines can be accomplished by shifting the pulse frequency into the supra-audible or sub-audible range.

  2. Coherent Acoustic Vibration of Metal Nanoshells

    NASA Astrophysics Data System (ADS)

    Guillon, C.; Langot, P.; Del Fatti, N.; Vallée, F.; Kirakosyan, A. S.; Shahbazyan, T. V.; Cardinal, T.; Treguer, M.

    2007-01-01

    Using time-resolved pump-probe spectroscopy we have performed the first investigation of the vibrational modes of gold nanoshells. The fundamental isotropic mode launched by a femtosecond pump pulse manifests itself in a pronounced time-domain modulation of the differential transmission probed at the frequency of nanoshell surface plasmon resonance. The modulation amplitude is significantly stronger and the period is longer than in a gold nanoparticle of the same overall size, in agreement with theoretical calculations. This distinct acoustical signature of nanoshells provides a new and efficient method for identifying these versatile nanostructures and for studying their mechanical and structural properties.

  3. Non-contact multi-radar smart probing of body orientation based on micro-Doppler signatures.

    PubMed

    Li, Yiran; Pal, Ranadip; Li, Changzhi

    2014-01-01

    Micro-Doppler signatures carry useful information about body movements and have been widely applied to different applications such as human activity recognition and gait analysis. In this paper, micro-Doppler signatures are used to identify body orientation. Four AC-coupled continuous-wave (CW) smart radar sensors were used to form a multiple-radar network to carry out the experiments in this paper. 162 tests were performed in total. The experiment results showed a 100% accuracy in recognizing eight body orientations, i.e., facing north, northeast, east, southeast, south, southwest, west, and northwest.

  4. Photonic quantum digital signatures operating over kilometer ranges in installed optical fiber

    NASA Astrophysics Data System (ADS)

    Collins, Robert J.; Fujiwara, Mikio; Amiri, Ryan; Honjo, Toshimori; Shimizu, Kaoru; Tamaki, Kiyoshi; Takeoka, Masahiro; Andersson, Erika; Buller, Gerald S.; Sasaki, Masahide

    2016-10-01

    The security of electronic communications is a topic that has gained noteworthy public interest in recent years. As a result, there is an increasing public recognition of the existence and importance of mathematically based approaches to digital security. Many of these implement digital signatures to ensure that a malicious party has not tampered with the message in transit, that a legitimate receiver can validate the identity of the signer and that messages are transferable. The security of most digital signature schemes relies on the assumed computational difficulty of solving certain mathematical problems. However, reports in the media have shown that certain implementations of such signature schemes are vulnerable to algorithmic breakthroughs and emerging quantum processing technologies. Indeed, even without quantum processors, the possibility remains that classical algorithmic breakthroughs will render these schemes insecure. There is ongoing research into information-theoretically secure signature schemes, where the security is guaranteed against an attacker with arbitrary computational resources. One such approach is quantum digital signatures. Quantum signature schemes can be made information-theoretically secure based on the laws of quantum mechanics while comparable classical protocols require additional resources such as anonymous broadcast and/or a trusted authority. Previously, most early demonstrations of quantum digital signatures required dedicated single-purpose hardware and operated over restricted ranges in a laboratory environment. Here, for the first time, we present a demonstration of quantum digital signatures conducted over several kilometers of installed optical fiber. The system reported here operates at a higher signature generation rate than previous fiber systems.

  5. INNOVATIVE ACOUSTIC SENSOR TECHNOLOGIES FOR LEAK DETECTION IN CHALLENGING PIPE TYPES

    DTIC Science & Technology

    2016-12-30

    through focused acoustic surveys that are typically conducted at the correlated location prior to marking the leak location. All three technologies were...shift" survey with cross-correlation Echologics LeakFinderRT Field survey of leak signatures. Recommended every 3-5 years Contractor...cross-correlation features to detect and pinpoint leaks in challenging pipe types, as well as metallic pipes. SUBJECT TERMS: Leak detection

  6. Clinical Validation of a Sound Processor Upgrade in Direct Acoustic Cochlear Implant Subjects

    PubMed Central

    Kludt, Eugen; D’hondt, Christiane; Lenarz, Thomas; Maier, Hannes

    2017-01-01

    Objective: The objectives of the investigation were to evaluate the effect of a sound processor upgrade on the speech reception threshold in noise and to collect long-term safety and efficacy data after 2½ to 5 years of device use of direct acoustic cochlear implant (DACI) recipients. Study Design: The study was designed as a mono-centric, prospective clinical trial. Setting: Tertiary referral center. Patients: Fifteen patients implanted with a direct acoustic cochlear implant. Intervention: Upgrade with a newer generation of sound processor. Main Outcome Measures: Speech recognition test in quiet and in noise, pure tone thresholds, subject-reported outcome measures. Results: The speech recognition in quiet and in noise is superior after the sound processor upgrade and stable after long-term use of the direct acoustic cochlear implant. The bone conduction thresholds did not decrease significantly after long-term high level stimulation. Conclusions: The new sound processor for the DACI system provides significant benefits for DACI users for speech recognition in both quiet and noise. Especially the noise program with the use of directional microphones (Zoom) allows DACI patients to have much less difficulty when having conversations in noisy environments. Furthermore, the study confirms that the benefits of the sound processor upgrade are available to the DACI recipients even after several years of experience with a legacy sound processor. Finally, our study demonstrates that the DACI system is a safe and effective long-term therapy. PMID:28406848

  7. Multiscale moment-based technique for object matching and recognition

    NASA Astrophysics Data System (ADS)

    Thio, HweeLi; Chen, Liya; Teoh, Eam-Khwang

    2000-03-01

    A new method is proposed to extract features from an object for matching and recognition. The proposed features combine local and global characteristics -- local characteristics from the 1-D signature function defined at each pixel on the object boundary, and global characteristics from the moments generated from that signature function. The boundary of the object is first extracted; the signature function is then generated by computing the angle between two lines from every point on the boundary, as a function of position along the boundary. This signature function is position, scale, and rotation invariant (PSRI). The shape of the signature function is then described quantitatively using moments. The moments of the signature function are thus global characteristics of a local feature set. Using the moments as the eventual features, instead of the signature function itself, reduces the time and complexity of an object matching application. Multiscale moments are implemented to produce several sets of moments that yield more accurate matching; the multiscale technique is essentially a coarse-to-fine procedure and makes the proposed method more robust to noise. This method is proposed to match and recognize objects under simple transformations, such as translation, scale changes, rotation and skewing. A simple logo indexing system is implemented to illustrate the performance of the proposed method.
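
    One plausible reading of the signature function and its moments is sketched below: at each boundary point the angle between the lines to points k steps behind and ahead is computed, and central moments of that 1-D function serve as the global descriptors. The choice of k, the number of moments, and the test outline are assumptions for illustration.

```python
import numpy as np

def signature_function(boundary_xy, k=5):
    """At each boundary point, the angle between the lines to the points k steps
    behind and k steps ahead, as a function of position along the closed boundary."""
    pts = np.asarray(boundary_xy, float)
    back = np.roll(pts, k, axis=0) - pts
    ahead = np.roll(pts, -k, axis=0) - pts
    cosang = np.sum(back * ahead, axis=1) / (
        np.linalg.norm(back, axis=1) * np.linalg.norm(ahead, axis=1))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def signature_moments(sig, n_moments=4):
    """Global descriptors: the mean plus central moments of the signature function."""
    mu = sig.mean()
    return np.array([mu] + [np.mean((sig - mu) ** p) for p in range(2, n_moments + 1)])

# Toy check: the moments are unchanged under rotation, scaling, and translation.
theta = np.linspace(0, 2 * np.pi, 300, endpoint=False)
outline = np.c_[2 * np.cos(theta), np.sin(theta)]                 # an ellipse
rot = np.array([[np.cos(1.1), -np.sin(1.1)], [np.sin(1.1), np.cos(1.1)]])
moved = 5.0 * outline @ rot.T + np.array([3.0, -7.0])
print(np.allclose(signature_moments(signature_function(outline)),
                  signature_moments(signature_function(moved))))
```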

  8. The role of multimodal signals in species recognition between tree-killing bark beetles in a narrow sympatric zone.

    Treesearch

    Deepa S. Pureswaran; Richard W. Hofstetter; Brian Sullivan; Kristen A. Potter

    2016-01-01

    When related species coexist, selection pressure should favor evolution of species recognition mechanisms to prevent interspecific pairing and wasteful reproductive encounters. We investigated the potential role of pheromone and acoustic signals in species recognition between two species of tree-killing bark beetles, the southern pine beetle, Dendroctonus frontalis...

  9. Thermal-to-visible face recognition using partial least squares.

    PubMed

    Hu, Shuowen; Choi, Jonghyun; Chan, Alex L; Schwartz, William Robson

    2015-03-01

    Although visible face recognition has been an active area of research for several decades, cross-modal face recognition has only been explored by the biometrics community relatively recently. Thermal-to-visible face recognition is one of the most difficult cross-modal face recognition challenges, because of the difference in phenomenology between the thermal and visible imaging modalities. We address the cross-modal recognition problem using a partial least squares (PLS) regression-based approach consisting of preprocessing, feature extraction, and PLS model building. The preprocessing and feature extraction stages are designed to reduce the modality gap between the thermal and visible facial signatures, and facilitate the subsequent one-vs-all PLS-based model building. We incorporate multi-modal information into the PLS model building stage to enhance cross-modal recognition. The performance of the proposed recognition algorithm is evaluated on three challenging datasets containing visible and thermal imagery acquired under different experimental scenarios: time-lapse, physical tasks, mental tasks, and subject-to-camera range. These scenarios represent difficult challenges relevant to real-world applications. We demonstrate that the proposed method performs robustly for the examined scenarios.
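
    The one-vs-all PLS model-building stage can be sketched with scikit-learn's PLSRegression: one regression model per gallery subject, with the strongest response deciding the match. Preprocessing and feature extraction are omitted, and the component count and synthetic data are assumptions, so this is only an outline of the model-building step.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

class OneVsAllPLS:
    """One PLS regression model per gallery subject over preprocessed feature
    vectors; the subject whose model responds most strongly wins the match."""

    def __init__(self, n_components=5):
        self.n_components = n_components
        self.models = {}

    def fit(self, features, subject_ids):
        for s in np.unique(subject_ids):
            y = (subject_ids == s).astype(float)       # one-vs-all target
            self.models[s] = PLSRegression(n_components=self.n_components).fit(features, y)
        return self

    def predict(self, features):
        ids = list(self.models.keys())
        scores = np.column_stack([self.models[s].predict(features).ravel() for s in ids])
        return np.array(ids)[np.argmax(scores, axis=1)]

# Toy usage: 6 'subjects', 10 feature vectors each (placeholders for real features).
rng = np.random.default_rng(0)
ids = np.repeat(np.arange(6), 10)
X = 3.0 * rng.normal(size=(6, 40))[ids] + rng.normal(size=(60, 40))
print((OneVsAllPLS(n_components=5).fit(X, ids).predict(X) == ids).mean())
```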

  10. Effects of Talker Variability on Vowel Recognition in Cochlear Implants

    ERIC Educational Resources Information Center

    Chang, Yi-ping; Fu, Qian-Jie

    2006-01-01

    Purpose: To investigate the effects of talker variability on vowel recognition by cochlear implant (CI) users and by normal-hearing (NH) participants listening to 4-channel acoustic CI simulations. Method: CI users were tested with their clinically assigned speech processors. For NH participants, 3 CI processors were simulated, using different…

  11. Airborne DoA estimation of gunshot acoustic signals using drones with application to sniper localization systems

    NASA Astrophysics Data System (ADS)

    Fernandes, Rigel P.; Ramos, António L. L.; Apolinário, José A.

    2017-05-01

    Shooter localization systems have been the subject of growing attention lately owing to their wide span of possible applications, e.g., civil protection, law enforcement, and support to soldiers in missions where snipers might pose a serious threat. These devices are based on the processing of electromagnetic or acoustic signatures associated with the firing of a gun. This work is concerned with the latter, where the shooter's position can be obtained from the estimated direction-of-arrival (DoA) of the acoustic components of a gunshot signal (muzzle blast and shock wave). A major limitation of current commercially available acoustic sniper localization systems is the impossibility of finding the shooter's position when one of these acoustic signatures is not detected. This is very likely to occur in real-life situations, especially when the microphones are not in the field of view of the shock wave or when obstacles such as buildings prevent a direct path to the sensors. This work addresses the problem of DoA estimation of the muzzle blast using a planar array of sensors deployed on a drone. Results supported by actual gunshot data from a realistic setup are very promising and pave the way for the development of enhanced sniper localization systems featuring two main advantages over stationary ones: (1) a wider surveillance area; and (2) an increased likelihood of a direct-path detection of at least one of the gunshot signals, thereby adding robustness and reliability to the system.
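
    A minimal far-field sketch of muzzle-blast DoA estimation from time-differences-of-arrival on a small array; the microphone geometry, the per-microphone arrival times and the nominal sound speed are assumed inputs, and the paper's actual processing chain is not specified in the abstract:

      import numpy as np

      C = 343.0  # nominal speed of sound in air, m/s

      def doa_from_tdoa(mic_positions, arrival_times, ref=0):
          # Plane-wave model: c * (t_i - t_ref) = -(p_i - p_ref) . u, where u is
          # the unit vector pointing from the array toward the source.
          p = np.asarray(mic_positions, dtype=float)
          t = np.asarray(arrival_times, dtype=float)
          A = -(p - p[ref])
          b = C * (t - t[ref])
          u, *_ = np.linalg.lstsq(A, b, rcond=None)
          u /= np.linalg.norm(u)
          azimuth = np.degrees(np.arctan2(u[1], u[0]))
          elevation = np.degrees(np.arcsin(np.clip(u[2], -1.0, 1.0)))
          # A strictly planar array cannot resolve the sign of the elevation;
          # the drone's attitude or a second pass is needed to break the tie.
          return azimuth, elevation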

  12. Segmental concatenation of individual signatures and context cues in banded mongoose (Mungos mungo) close calls.

    PubMed

    Jansen, David A W A M; Cant, Michael A; Manser, Marta B

    2012-12-03

    All animals are anatomically constrained in the number of discrete call types they can produce. Recent studies suggest that by combining existing calls into meaningful sequences, animals can increase the information content of their vocal repertoire despite these constraints. Additionally, signalers can use vocal signatures or cues correlated to other individual traits or contexts to increase the information encoded in their vocalizations. However, encoding multiple vocal signatures or cues using the same components of vocalizations usually reduces the signals' reliability. Segregation of information could effectively circumvent this trade-off. In this study we investigate how banded mongooses (Mungos mungo) encode multiple vocal signatures or cues in their frequently emitted graded single-syllable close calls. The data for this study were collected on a wild, but habituated, population of banded mongooses. Using behavioral observations and acoustic analysis we found that close calls contain two acoustically different segments. The first is stable and individually distinct; the second is graded and correlates with the current behavior of the individual, whether it is digging, searching or moving. This provides evidence for Marler's hypothesis on temporal segregation of information within a single-syllable call type. Additionally, our work represents an example of an identity cue integrated as a discrete segment within a single call that is independent from context. This likely functions to avoid ambiguity between individuals or receivers having to keep track of several context-specific identity cues. Our study provides the first evidence of segmental concatenation of information within a single syllable in non-human vocalizations. By reviewing descriptions of call structures in the literature, we suggest a general application of this mechanism. Our study indicates that temporal segregation and segmental concatenation of vocal signatures or cues is

  13. R&D 100 Winner 2010: Acoustic Wave Biosensors

    ScienceCinema

    Larson, Richard; Branch, Darren; Edwards, Thayne

    2018-01-16

    The acoustic wave biosensor is an innovative, handheld, battery-powered, portable detection system capable of multiplex identification of a wide range of medically relevant pathogens and their biomolecular signatures — viruses, bacteria, proteins, and DNA — at clinically relevant levels. This detection occurs within minutes — not hours — at the point of care, whether that care is in a physician's office, a hospital bed, or at the scene of a biodefense or biomedical emergency.

  14. Individual recognition of human infants on the basis of cries alone.

    PubMed

    Green, J A; Gustafson, G E

    1983-11-01

    Human parents were asked to identify their infants on the basis of tape-recorded cries that they had not previously heard. The cries of twenty 30-day-old infants were recorded just prior to a feeding, then rerecorded onto a test tape containing cries from three other infants. Eighty percent of mothers were able to recognize their infants' cries, as were 45% of fathers. An additional 140 adults (non-parents) were tested in order to determine if the process of dubbing cries onto test tapes had left extraneous auditory cues to infants' identities and if the foil infants were equally discriminable. The results indicated that parents' recognition was not based on extraneous cues and that, overall, the foils were appropriate distractors in the parents' task. Thus, the majority of parents can recognize their 30-day-old infants on the sole basis of acoustic cues contained in the infants' cries. The acoustic features that underlie this recognition are now being investigated.

  15. Sex differences in razorbill (Family: Alcidae) parent-offspring vocal recognition

    NASA Astrophysics Data System (ADS)

    Insley, Stephen J.; Paredes Vela, Rosana; Jones, Ian L.

    2002-05-01

    In this study we examine how a pattern of parental care may result in a sex bias in vocal recognition. In Razorbills (Alca torda), both sexes provide parental care to their chicks while at the nest, after which the male is the sole caregiver for an additional period at sea. Selection pressure acting on recognition behavior is expected to be strongest during the time when males and chicks are together at sea, and as a result, parent-offspring recognition was predicted to be better developed in the male parent, that is, to show a paternal bias. To test this hypothesis, vocal playback experiments were conducted on breeding Razorbills at the Gannet Islands, Labrador, in 2001. The data provide clear evidence of mutual vocal recognition between the male parent and chick but not between the female parent and chick, supporting the hypothesis that parent-offspring recognition is male biased in this species. In addition to acoustic recognition, such a bias could have important social implications for a variety of behavioral and basic life-history traits such as cooperation and sex-biased dispersal.

  16. Acoustic Microfluidics for Bioanalytical Application

    NASA Astrophysics Data System (ADS)

    Lopez, Gabriel

    2013-03-01

    This talk will present new methods that use ultrasonic standing waves in microfluidic systems to manipulate microparticles for the purpose of bioassays and bioseparations. We have recently developed multi-node acoustic focusing flow cells that can position particles into many parallel flow streams and have demonstrated the potential of such flow cells in the development of high-throughput, parallel flow cytometers. These experiments show the potential for the creation of high-throughput flow cytometers in applications requiring high flow rates and rapid detection of rare cells. This talk will also present the development of elastomeric capture microparticles and their use in acoustophoretic separations. We have developed simple methods to form elastomeric particles that are surface functionalized with biomolecular recognition reagents. These compressible particles exhibit negative acoustic contrast in ultrasound when suspended in aqueous media, blood serum or diluted blood. These particles can be continuously separated from cells by flowing them through a microfluidic device that uses an ultrasonic standing wave to align the blood cells, which exhibit positive acoustic contrast, at a node in the acoustic pressure distribution while aligning the negative acoustic contrast elastomeric particles at the antinodes. Laminar flow of the separated particles to downstream collection ports allows for collection of the separated negative contrast particles and cells. Separated elastomeric particles were analyzed via flow cytometry to demonstrate nanomolar detection for prostate specific antigen in aqueous buffer and picomolar detection for IgG in plasma and diluted blood samples. This approach has potential applications in the development of rapid assays that detect the presence of low concentrations of biomarkers (including biomolecules and cells) in a number of biological sample types. We acknowledge support through the NSF Research Triangle MRSEC.

  17. Characterizing riverbed sediment using high-frequency acoustics 2: scattering signatures of Colorado River bed sediment in Marble and Grand Canyons

    USGS Publications Warehouse

    Buscombe, Daniel D.; Grams, Paul E.; Kaplinski, Matt A.

    2014-01-01

    In this, the second of a pair of papers on the statistical signatures of riverbed sediment in high-frequency acoustic backscatter, spatially explicit maps of the stochastic geometries (length- and amplitude-scales) of backscatter are related to patches of riverbed surfaces composed of known sediment types, as determined by geo-referenced underwater video observations. Statistics of backscatter magnitudes alone are found to be poor discriminators between sediment types. However, the variance of the power spectrum, and the intercept and slope from a power-law spectral form (termed the spectral strength and exponent, respectively) successfully discriminate between sediment types. A decision-tree approach was able to classify spatially heterogeneous patches of homogeneous sands, gravels (and sand-gravel mixtures), and cobbles/boulders with 95, 88, and 91% accuracy, respectively. Application to sites outside the calibration, and surveys made at calibration sites at different times, were plausible based on observations from underwater video. Analysis of decision trees built with different training data sets suggested that the spectral exponent was consistently the most important variable in the classification. In the absence of theory concerning how spatially variable sediment surfaces scatter high-frequency sound, the primary advantage of this data-driven approach to classify bed sediment over alternatives is that spectral methods have well understood properties and make no assumptions about the distributional form of the fluctuating component of backscatter over small spatial scales.
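
    A hedged sketch of the feature set named above (spectral variance plus a power-law intercept and slope, i.e. spectral strength and exponent) feeding a decision tree; the Welch estimator, segment length, tree depth and label names are illustrative choices, not details taken from the paper:

      import numpy as np
      from scipy.signal import welch
      from sklearn.tree import DecisionTreeClassifier

      def spectral_features(backscatter, fs=1.0):
          # Power spectrum of a backscatter series over one riverbed patch,
          # then a straight-line fit in log-log space: intercept ~ spectral
          # strength, slope ~ spectral exponent.
          freqs, psd = welch(backscatter, fs=fs, nperseg=min(256, len(backscatter)))
          keep = freqs > 0
          slope, intercept = np.polyfit(np.log10(freqs[keep]), np.log10(psd[keep]), 1)
          return np.array([psd.var(), intercept, slope])

      def train_bed_classifier(patch_series, patch_labels):
          # patch_labels: e.g. 'sand', 'gravel' or 'cobble/boulder', taken from
          # the geo-referenced underwater video (hypothetical variable names).
          X = np.vstack([spectral_features(s) for s in patch_series])
          return DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, patch_labels)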

  18. Acoustic-gravity waves, theory and application

    NASA Astrophysics Data System (ADS)

    Kadri, Usama; Farrell, William E.; Munk, Walter

    2015-04-01

    Acoustic-gravity waves (AGW) propagate in the ocean under the influence of both the compressibility of sea water and the restoring force of gravity. The gravity dependence vanishes if the wave vector is normal to the ocean surface, but becomes increasingly important as the wave vector acquires a horizontal tilt. They are excited by many sources, including non-linear surface wave interactions, disturbances of the ocean bottom (submarine earthquakes and landslides) and underwater explosions. In this introductory lecture on acoustic-gravity waves, we describe their properties, and their relation to organ pipe modes, to microseisms, and to deep ocean signatures by short surface waves. We discuss the generation of AGW by underwater earthquakes; knowledge of their behaviour with water depth can be applied for the early detection of tsunamis. We also discuss their generation by the non-linear interaction of surface gravity waves, which explains the major role they play in transforming energy from the ocean surface to the crust, as part of the microseisms phenomenon. Finally, they contribute to horizontal water transport at depth, which might affect benthic life.

  19. Syntactic Predictability in the Recognition of Carefully and Casually Produced Speech

    ERIC Educational Resources Information Center

    Viebahn, Malte C.; Ernestus, Mirjam; McQueen, James M.

    2015-01-01

    The present study investigated whether the recognition of spoken words is influenced by how predictable they are given their syntactic context and whether listeners assign more weight to syntactic predictability when acoustic-phonetic information is less reliable. Syntactic predictability was manipulated by varying the word order of past…

  20. Automatic speech recognition research at NASA-Ames Research Center

    NASA Technical Reports Server (NTRS)

    Coler, Clayton R.; Plummer, Robert P.; Huff, Edward M.; Hitchcock, Myron H.

    1977-01-01

    A trainable acoustic pattern recognizer manufactured by Scope Electronics is presented. The voice command system (VCS) encodes speech by sampling 16 bandpass filters with center frequencies in the range from 200 to 5000 Hz. Variations in speaking rate are compensated for by a compression algorithm that subdivides each utterance into eight subintervals in such a way that the amount of spectral change within each subinterval is the same. The recorded filter values within each subinterval are then reduced to a 15-bit representation, giving a 120-bit encoding for each utterance. The VCS incorporates a simple recognition algorithm that utilizes five training samples of each word in a vocabulary of up to 24 words. The recognition rate of approximately 85 percent correct for untrained speakers and 94 percent correct for trained speakers was not considered adequate for flight systems use. Therefore, the built-in recognition algorithm was disabled, and the VCS was modified to transmit the 120-bit encodings to an external computer for recognition.
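
    A minimal sketch of the encoding described above, assuming an utterance arrives as a T x 16 matrix of bandpass-filter samples (rows are time frames); the abstract does not say how the 15-bit subinterval representation is formed, so the quantization step below is only a labeled placeholder:

      import numpy as np

      def split_by_spectral_change(frames, n_segments=8):
          # Subdivide the utterance so each subinterval contains the same
          # amount of cumulative spectral change (the rate-compensation step).
          frames = np.asarray(frames, dtype=float)
          change = np.abs(np.diff(frames, axis=0)).sum(axis=1)
          cum = np.concatenate(([0.0], np.cumsum(change)))
          targets = np.linspace(0.0, cum[-1], n_segments + 1)
          bounds = np.searchsorted(cum, targets)
          bounds[0], bounds[-1] = 0, len(frames)
          return [frames[bounds[i]:max(bounds[i] + 1, bounds[i + 1])]
                  for i in range(n_segments)]

      def encode_utterance(frames):
          # 8 subintervals x 15 bits = 120 bits per utterance.  Placeholder
          # reduction: flag which of filters 1..15 exceed the subinterval's
          # mean filter energy (NOT the VCS's actual, unspecified scheme).
          bits = []
          for seg in split_by_spectral_change(frames):
              mean_per_filter = seg.mean(axis=0)
              bits.extend((mean_per_filter[:15] > mean_per_filter.mean()).astype(int))
          return np.array(bits)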

  1. Are you a good mimic? Neuro-acoustic signatures for speech imitation ability

    PubMed Central

    Reiterer, Susanne M.; Hu, Xiaochen; Sumathi, T. A.; Singh, Nandini C.

    2013-01-01

    We investigated individual differences in speech imitation ability in late bilinguals using a neuro-acoustic approach. One hundred and thirty-eight German-English bilinguals matched on various behavioral measures were tested for “speech imitation ability” in a foreign language, Hindi, and categorized into “high” and “low ability” groups. Brain activations and speech recordings were obtained from 26 participants from the two extreme groups as they performed a functional neuroimaging experiment which required them to “imitate” sentences in three conditions: (A) German, (B) English, and (C) German with fake English accent. We used a recently developed acoustic analysis, namely the “articulation space,” as a metric to compare speech imitation abilities of the two groups. Across all three conditions, direct comparisons between the two groups revealed brain activations (FWE corrected, p < 0.05) that were more widespread, with significantly higher peak activity in the left supramarginal gyrus and postcentral areas, for the low-ability group. The high-ability group, on the other hand, showed significantly larger articulation space in all three conditions. In addition, articulation space also correlated positively with imitation ability (Pearson's r = 0.7, p < 0.01). Our results suggest that an expanded articulation space for high-ability individuals allows access to a larger repertoire of sounds, thereby providing skilled imitators greater flexibility in pronunciation and language learning. PMID:24155739

  2. A comparison of the acoustic and aerodynamic measurements of a model rotor tested in two anechoic wind tunnels

    NASA Technical Reports Server (NTRS)

    Boxwell, D. A.; Schmitz, F. H.; Splettstoesser, W. R.; Schultz, K. J.; Lewy, S.; Caplot, M.

    1986-01-01

    Two aeroacoustic facilities--the CEPRA 19 in France and the DNW in the Netherlands--are compared. The two facilities have unique acoustic characteristics that make them appropriate for acoustic testing of model-scale helicopter rotors. An identical pressure-instrumented model-scale rotor was tested in each facility, and the acoustic test results are compared with full-scale-rotor test results. Blade surface pressures measured in both tunnels were used to correlate nominal rotor operating conditions in each tunnel, and also to assess the steadiness of the rotor in each tunnel's flow. In-the-flow rotor acoustic signatures at moderate forward speeds (35-50 m/sec) are presented for each facility and discussed in relation to the differences in tunnel geometries and aeroacoustic characteristics. Both reports are presented in appendices to this paper.

  3. Preserved Acoustic Hearing in Cochlear Implantation Improves Speech Perception

    PubMed Central

    Sheffield, Sterling W.; Jahn, Kelly; Gifford, René H.

    2015-01-01

    Background With improved surgical techniques and electrode design, an increasing number of cochlear implant (CI) recipients have preserved acoustic hearing in the implanted ear, thereby resulting in bilateral acoustic hearing. There are currently no guidelines, however, for clinicians with respect to audiometric criteria and the recommendation of amplification in the implanted ear. The acoustic bandwidth necessary to obtain speech perception benefit from acoustic hearing in the implanted ear is unknown. Additionally, it is important to determine if, and in which listening environments, acoustic hearing in both ears provides more benefit than hearing in just one ear, even with limited residual hearing. Purpose The purposes of this study were to (1) determine whether acoustic hearing in an ear with a CI provides as much speech perception benefit as an equivalent bandwidth of acoustic hearing in the non-implanted ear, and (2) determine whether acoustic hearing in both ears provides more benefit than hearing in just one ear. Research Design A repeated-measures, within-participant design was used to compare performance across listening conditions. Study Sample Seven adults with CIs and bilateral residual acoustic hearing (hearing preservation) were recruited for the study. Data Collection and Analysis Consonant-nucleus-consonant word recognition was tested in four conditions: CI alone, CI + acoustic hearing in the nonimplanted ear, CI + acoustic hearing in the implanted ear, and CI + bilateral acoustic hearing. A series of low-pass filters were used to examine the effects of acoustic bandwidth through an insert earphone with amplification. Benefit was defined as the difference among conditions. The benefit of bilateral acoustic hearing was tested in both diffuse and single-source background noise. Results were analyzed using repeated-measures analysis of variance. Results Similar benefit was obtained for equivalent acoustic frequency bandwidth in either ear. Acoustic

  4. Auditory perception vs. recognition: representation of complex communication sounds in the mouse auditory cortical fields.

    PubMed

    Geissler, Diana B; Ehret, Günter

    2004-02-01

    Details of brain areas for acoustical Gestalt perception and the recognition of species-specific vocalizations are not known. Here we show how spectral properties and the recognition of the acoustical Gestalt of wriggling calls of mouse pups based on a temporal property are represented in auditory cortical fields and an association area (dorsal field) of the pups' mothers. We stimulated either with a call model releasing maternal behaviour at a high rate (call recognition) or with two models of low behavioural significance (perception without recognition). Brain activation was quantified using c-Fos immunocytochemistry, counting Fos-positive cells in electrophysiologically mapped auditory cortical fields and the dorsal field. A frequency-specific labelling in two primary auditory fields is related to call perception but not to the discrimination of the biological significance of the call models used. Labelling related to call recognition is present in the second auditory field (AII). A left hemisphere advantage of labelling in the dorsoposterior field seems to reflect an integration of call recognition with maternal responsiveness. The dorsal field is activated only in the left hemisphere. The spatial extent of Fos-positive cells within the auditory cortex and its fields is larger in the left than in the right hemisphere. Our data show that a left hemisphere advantage in processing of a species-specific vocalization up to recognition is present in mice. The differential representation of vocalizations of high vs. low biological significance, as seen only in higher-order and not in primary fields of the auditory cortex, is discussed in the context of perceptual strategies.

  5. Static hand gesture recognition from a video

    NASA Astrophysics Data System (ADS)

    Rokade, Rajeshree S.; Doye, Dharmpal

    2011-10-01

    A sign language (also signed language) is a language which, instead of acoustically conveyed sound patterns, uses visually transmitted sign patterns to convey meaning -- "simultaneously combining hand shapes, orientation and movement of the hands". Sign languages commonly develop in deaf communities, which can include interpreters, friends and families of deaf people as well as people who are deaf or hard of hearing themselves. In this paper, we propose a novel system for recognition of static hand gestures from a video, based on a Kohonen neural network. We propose an algorithm to separate out key frames, which contain correct gestures, from a video sequence. We segment hand images from complex and non-uniform backgrounds. Features are extracted by applying a Kohonen network to the key frames, and recognition is then performed.
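
    The abstract does not detail the Kohonen stage; the sketch below is a generic self-organising map trained on flattened, segmented key-frame hand images, whose node weights can then serve as the feature set (grid size, learning rate and epoch count are illustrative assumptions):

      import numpy as np

      def train_som(key_frames, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0):
          # key_frames: (N, D) array, each row a flattened hand image.
          rng = np.random.default_rng(0)
          X = np.asarray(key_frames, dtype=float)
          coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
          W = rng.random((len(coords), X.shape[1]))
          for epoch in range(epochs):
              lr = lr0 * (1.0 - epoch / epochs)
              sigma = sigma0 * (1.0 - epoch / epochs) + 0.5
              for x in X:
                  bmu = np.argmin(((W - x) ** 2).sum(axis=1))      # best-matching unit
                  d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
                  h = np.exp(-d2 / (2.0 * sigma ** 2))[:, None]    # neighbourhood kernel
                  W += lr * h * (x - W)
          return W    # features per frame: e.g. the index or weights of its BMU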

  6. Thermal imaging as a biometrics approach to facial signature authentication.

    PubMed

    Guzman, A M; Goryawala, M; Wang, Jin; Barreto, A; Andrian, J; Rishe, N; Adjouadi, M

    2013-01-01

    A new thermal imaging framework with unique feature extraction and similarity measurements for face recognition is presented. The research premise is to design specialized algorithms that would extract vasculature information, create a thermal facial signature and identify the individual. The proposed algorithm is fully integrated and consolidates the critical steps of feature extraction through the use of morphological operators, registration using the Linear Image Registration Tool, and matching through unique similarity measures designed for this task. The novel approach of developing a thermal signature template using four images taken at various instants of time ensured that unforeseen changes in the vasculature over time did not affect the biometric matching process, as the authentication process relied only on consistent thermal features. Thirteen subjects were used for testing the developed technique on an in-house thermal imaging system. The matching using the similarity measures showed an average accuracy of 88.46% for skeletonized signatures and 90.39% for anisotropically diffused signatures. The highly accurate results obtained in the matching process clearly demonstrate the ability of the thermal infrared system to extend in application to other thermal-imaging-based systems. Empirical results from applying this approach to an existing database of thermal images prove this assertion.
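
    A rough, hedged sketch of morphological vasculature extraction of the kind described above, using scikit-image; the top-hat radius and the Otsu threshold are stand-ins for the paper's unspecified operator choices, and registration and anisotropic diffusion are omitted:

      import numpy as np
      from skimage.filters import threshold_otsu
      from skimage.morphology import disk, skeletonize, white_tophat

      def thermal_signature(thermal_face):
          # Thin warm ridges (superficial vessels) stand out under a white
          # top-hat filter; threshold them and skeletonise to get a signature.
          ridges = white_tophat(thermal_face, disk(3))
          mask = ridges > threshold_otsu(ridges)
          return skeletonize(mask)

      def overlap_similarity(sig_a, sig_b):
          # One simple similarity measure between two skeletonised signatures.
          a, b = sig_a.astype(bool), sig_b.astype(bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + 1e-12)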

  7. Aero-acoustics of Drag Generating Swirling Exhaust Flows

    NASA Technical Reports Server (NTRS)

    Shah, P. N.; Mobed, D.; Spakovszky, Z. S.; Brooks, T. F.; Humphreys, W. M. Jr.

    2007-01-01

    Aircraft on approach in high-drag and high-lift configuration create unsteady flow structures which inherently generate noise. For devices such as flaps, spoilers and the undercarriage there is a strong correlation between overall noise and drag such that, in the quest for quieter aircraft, one challenge is to generate drag at low noise levels. This paper presents a rigorous aero-acoustic assessment of a novel drag concept. The idea is that a swirling exhaust flow can yield a steady, and thus relatively quiet, streamwise vortex which is supported by a radial pressure gradient responsible for pressure drag. Flows with swirl are naturally limited by instabilities such as vortex breakdown. The paper presents a first aero-acoustic assessment of ram pressure driven swirling exhaust flows and their associated instabilities. The technical approach combines an in-depth aerodynamic analysis, plausibility arguments to qualitatively describe the nature of acoustic sources, and detailed, quantitative acoustic measurements using a medium aperture directional microphone array in combination with a previously established Deconvolution Approach for Mapping of Acoustic Sources (DAMAS). A model scale engine nacelle with stationary swirl vanes was designed and tested in the NASA Langley Quiet Flow Facility at a full-scale approach Mach number of 0.17. The analysis shows that the acoustic signature is comprised of quadrupole-type turbulent mixing noise of the swirling core flow and scattering noise from vane boundary layers and turbulent eddies of the burst vortex structure near sharp edges. The exposed edges are the nacelle and pylon trailing edge and the centerbody supporting the vanes. For the highest stable swirl angle setting a nacelle area based drag coefficient of 0.8 was achieved with a full-scale Overall Sound Pressure Level (OASPL) of about 40dBA at the ICAO approach certification point.

  8. Sub-Poissonian phonon statistics in an acoustical resonator coupled to a pumped two-level emitter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceban, V., E-mail: victor.ceban@phys.asm.md; Macovei, M. A., E-mail: macovei@phys.asm.md

    2015-11-15

    The concept of an acoustical analog of the optical laser has been developed recently in both theoretical and experimental works. We here discuss a model of a coherent phonon generator with a direct signature of the quantum properties of sound vibrations. The considered setup is made of a laser-driven quantum dot embedded in an acoustical nanocavity. The system dynamics is solved for a single phonon mode in the steady-state and in the strong quantum dot—phonon coupling regime beyond the secular approximation. We demonstrate that the phonon statistics exhibits quantum features, i.e., is sub-Poissonian.

  9. Comparison of Neural Networks and Tabular Nearest Neighbor Encoding for Hyperspectral Signature Classification in Unresolved Object Detection

    NASA Astrophysics Data System (ADS)

    Schmalz, M.; Ritter, G.; Key, R.

    Accurate and computationally efficient spectral signature classification is a crucial step in the nonimaging detection and recognition of spaceborne objects. In classical hyperspectral recognition applications using linear mixing models, signature classification accuracy depends on accurate spectral endmember discrimination [1]. If the endmembers cannot be classified correctly, then the signatures cannot be classified correctly, and object recognition from hyperspectral data will be inaccurate. In practice, the number of endmembers accurately classified often depends linearly on the number of inputs. This can lead to potentially severe classification errors in the presence of noise or densely interleaved signatures. In this paper, we present a comparison of emerging technologies for nonimaging spectral signature classification based on a highly accurate, efficient search engine called Tabular Nearest Neighbor Encoding (TNE) [3,4] and a neural network technology called Morphological Neural Networks (MNNs) [5]. Based on prior results, TNE can optimize its classifier performance to track input nonergodicities, as well as yield measures of confidence or caution for evaluation of classification results. Unlike neural networks, TNE does not have a hidden intermediate data structure (e.g., the neural net weight matrix). Instead, TNE generates and exploits a user-accessible data structure called the agreement map (AM), which can be manipulated by Boolean logic operations to effect accurate classifier refinement algorithms. The open architecture and programmability of TNE's agreement map processing allows a TNE programmer or user to determine classification accuracy, as well as characterize in detail the signatures for which TNE did not obtain classification matches, and why such mis-matches occurred. In this study, we will compare TNE- and MNN-based endmember classification, using performance metrics such as probability of correct classification (Pd) and rate of false

  10. Landscape cultivation alters δ30Si signature in terrestrial ecosystems

    PubMed Central

    Vandevenne, Floor I.; Delvaux, Claire; Hughes, Harold J.; André, Luc; Ronchi, Benedicta; Clymans, Wim; Barão, Lúcia; Govers, Gerard; Meire, Patrick; Struyf, Eric

    2015-01-01

    Despite increasing recognition of the relevance of biological cycling for Si cycling in ecosystems and for Si export from soils to fluvial systems, effects of human cultivation on the Si cycle are still relatively understudied. Here we examined stable Si isotope (δ30Si) signatures in soil water samples across a temperate land use gradient. We show that – independent of geological and climatological variation – there is a depletion in light isotopes in soil water of intensive croplands and managed grasslands relative to native forests. Furthermore, our data suggest a divergence in δ30Si signatures along the land use change gradient, highlighting the imprint of vegetation cover, human cultivation and intensity of disturbance on δ30Si patterns, on top of more conventionally acknowledged drivers (i.e. mineralogy and climate). PMID:25583031

  11. Pupil size changes during recognition memory.

    PubMed

    Otero, Samantha C; Weekes, Brendan S; Hutton, Samuel B

    2011-10-01

    Pupils dilate to a greater extent when participants view old compared to new items during recognition memory tests. We report three experiments investigating the cognitive processes associated with this pupil old/new effect. Using a remember/know procedure, we found that the effect occurred for old items that were both remembered and known at recognition, although it was attenuated for known compared to remembered items. In Experiment 2, the pupil old/new effect was observed when items were presented acoustically, suggesting the effect does not depend on low-level visual processes. The pupil old/new effect was also greater for items encoded under deep compared to shallow orienting instructions, suggesting it may reflect the strength of the underlying memory trace. Finally, the pupil old/new effect was also found when participants falsely recognized items as being old. We propose that pupils respond to a strength-of-memory signal and suggest that pupillometry provides a useful technique for exploring the underlying mechanisms of recognition memory. Copyright © 2011 Society for Psychophysiological Research.

  12. The application of standard definitions of sound to the fields of underwater acoustics and acoustical oceanography

    NASA Astrophysics Data System (ADS)

    Carey, William M.

    2004-05-01

    Recent societal concerns have focused attention on the use of sound as a probe to investigate the oceans and on its use in naval sonar applications. The concern is the impact that the use of sound may have on marine mammals and fishes. This focus has changed the fields of acoustical oceanography (AO) and underwater acoustics (UW) because of the requirement to communicate between disciplines. Multiple National Research Council publications, Dept. of Navy reports, and several monographs have been written on this subject, and each reveals the importance as well as the misapplication of ASA standards. The ANSI-ASA standards are comprehensive; however, they are not widely applied. Clear definition of standards and recommendations for their use are needed for both scientists and government agencies. Traditionally the U.S. Navy has been responsible for UW standards and calibration, and the ANSI-ASA standards have been essential. However, recent changes in the Navy and its laboratory structure may necessitate a more formal recognition of ANSI-ASA standards and perhaps incorporation of UW-AO in the Bureau of Standards. A separate standard for acoustical terminology, reference levels, and notation used in UW-AO is required. Since the problem is global, the standard should be compatible and cross-referenced with the International Standard (CEI/IEC 27-3).

  13. Application of a laser Doppler vibrometer for air-water to subsurface signature detection

    NASA Astrophysics Data System (ADS)

    Land, Phillip; Roeder, James; Robinson, Dennis; Majumdar, Arun

    2015-05-01

    There is much interest in target detection and in optical communications from an airborne platform to a platform submerged under water. Accurate detection and communications between underwater and aerial platforms would increase the capabilities of surface, subsurface, and air, manned and unmanned vehicles engaged in oversea and undersea activities. The technique introduced in this paper uses a Laser Doppler Vibrometer (LDV) for acousto-optic sensing, detecting acoustic information propagated towards the water surface from a submerged platform inside a 12-gallon water tank. The LDV probes and penetrates the water surface from an aerial platform to detect air-water interface vibrations caused by a speaker, driven by an amplifier, generating a signal from underneath the water surface (water depth varied from 1" to 8") in the range of 50 Hz to 5 kHz. As a comparison tool, a hydrophone was used simultaneously inside the water tank to record the acoustic signature of the generated signal between 50 Hz and 5 kHz. For a signal generated by a submerged platform, the LDV can detect the signal via surface perturbations caused by the impinging acoustic pressure field; this demonstrates a technique for transmitting information from a submerged platform acoustically to the water surface and receiving it optically with the LDV via the Doppler effect, making the LDV a high-sensitivity optical-acoustic device. The technique developed has much potential usage in commercial oceanography applications. The present work is focused on the reception of acoustic information from an object located underwater.

  14. What are the Mechanisms Behind a Parasite-Induced Decline in Nestmate Recognition in Ants?

    PubMed

    Beros, Sara; Foitzik, Susanne; Menzel, Florian

    2017-09-01

    Social insects have developed sophisticated recognition skills to defend their nests against intruders. They do this by aggressively discriminating against non-nestmates with deviant cuticular hydrocarbon (CHC) signatures. Studying nestmate recognition can be challenging as individual insects do not only vary in their discriminatory abilities, but also in their motivation to behave aggressively. To disentangle the influence of signaling and behavioral motivation on nestmate recognition, we investigated the ant Temnothorax nylanderi, where the presence of tapeworm-infected nestmates leads to reduced nestmate recognition among uninfected workers. The parasite-induced decline in nestmate recognition could be caused by higher intra-colonial cue diversity as tapeworm-infected workers are known to exhibit a modified hydrocarbon signature. This in turn may broaden the neuronal template of their nestmates, leading to a higher tolerance towards alien conspecifics. To test this hypothesis, we exchanged infected ants between colonies and analyzed their impact on CHC profiles of uninfected workers. We demonstrate that despite frequent grooming, which should promote the transfer of recognition cues, CHC profiles of uninfected workers neither changed in the presence of tapeworm-infected ants, nor did it increase cue diversity among uninfected nestmates within or between colonies. However, CHC profiles were systematically affected by the removal of nestmates and addition of non-nestmates, independently from the ants' infection status. For example, when non-nestmates were present workers expressed more dimethyl alkanes and higher overall CHC quantities, possibly to achieve a better distinction from non-nestmates. Workers showed clear task-specific profiles with tapeworm-infected workers resembling more closely young nurses than older foragers. Our results show that the parasite-induced decline in nestmate recognition is not due to increased recognition cue diversity or altered CHC

  15. The Use of Structural-Acoustic Techniques to Assess Potential Structural Damage From Sonic Booms

    NASA Technical Reports Server (NTRS)

    Garrelick, Joel; Martini, Kyle

    1996-01-01

    The potential impact of supersonic operations includes structural damage from the sonic boom overpressure. This paper describes a study of how structural-acoustic modeling and testing techniques may be used to assess the potential for such damage in the absence of actual flyovers. Procedures are described whereby transfer functions relating structural response to sonic boom signature may be obtained with a stationary acoustic source and appropriate data processing. Further, by invoking structural-acoustic reciprocity, these transfer functions may also be acquired by measuring the radiated sound from the structure under a mechanical drive. The approach is based on the fundamental assumption of linearity, both with regard to the (acoustic) propagation of the boom in the vicinity of the structure and to the structure's response. Practical issues revolve around acoustic far field and source directivity requirements. The technique was implemented on a specially fabricated test structure at Edwards AFB, CA with the support of Wyle Laboratories, Inc. Blank shots from a cannon served as our acoustic source and taps from an instrumented hammer generated the mechanical drive. Simulated response functions were constructed. Results of comparisons with corresponding measurements recorded during dedicated supersonic flyovers with F-15 aircraft are presented for a number of sensor placements.

  16. Validity and reliability of acoustic analysis of respiratory sounds in infants

    PubMed Central

    Elphick, H; Lancaster, G; Solis, A; Majumdar, A; Gupta, R; Smyth, R

    2004-01-01

    Objective: To investigate the validity and reliability of computerised acoustic analysis in the detection of abnormal respiratory noises in infants. Methods: Blinded, prospective comparison of acoustic analysis with stethoscope examination. Validity and reliability of acoustic analysis were assessed by calculating the degree of observer agreement using the κ statistic with 95% confidence intervals (CI). Results: 102 infants under 18 months were recruited. Convergent validity for agreement between stethoscope examination and acoustic analysis was poor for wheeze (κ = 0.07 (95% CI, –0.13 to 0.26)) and rattles (κ = 0.11 (–0.05 to 0.27)) and fair for crackles (κ = 0.36 (0.18 to 0.54)). Both the stethoscope and acoustic analysis distinguished well between sounds (discriminant validity). Agreement between observers for the presence of wheeze was poor for both stethoscope examination and acoustic analysis. Agreement for rattles was moderate for the stethoscope but poor for acoustic analysis. Agreement for crackles was moderate using both techniques. Within-observer reliability for all sounds using acoustic analysis was moderate to good. Conclusions: The stethoscope is unreliable for assessing respiratory sounds in infants. This has important implications for its use as a diagnostic tool for lung disorders in infants, and confirms that it cannot be used as a gold standard. Because of the unreliability of the stethoscope, the validity of acoustic analysis could not be demonstrated, although it could discriminate between sounds well and showed good within-observer reliability. For acoustic analysis, targeted training and the development of computerised pattern recognition systems may improve reliability so that it can be used in clinical practice. PMID:15499065
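
    For reference, the agreement statistic quoted above is presumably Cohen's κ for two raters; a minimal sketch for two binary ratings (for example, wheeze present/absent judged by stethoscope and by acoustic analysis; the variable names are hypothetical):

      import numpy as np

      def cohens_kappa(rater_a, rater_b):
          # Chance-corrected agreement between two binary raters.
          a = np.asarray(rater_a, dtype=int)
          b = np.asarray(rater_b, dtype=int)
          observed = np.mean(a == b)
          expected = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())
          return (observed - expected) / (1 - expected)

      # kappa near 0 means agreement no better than chance (as found for wheeze);
      # roughly 0.2-0.4 is 'fair' and 0.4-0.6 'moderate' on the usual scale.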

  17. Optical ID Tags for Secure Verification of Multispectral Visible and NIR Signatures

    NASA Astrophysics Data System (ADS)

    Pérez-Cabré, Elisabet; Millán, María S.; Javidi, Bahram

    2008-04-01

    We propose to combine information from visible (VIS) and near infrared (NIR) spectral bands to increase robustness on security systems and deter from unauthorized use of optical tags that permit the identification of a given person or object. The signature that identifies the element under surveillance will be only obtained by the appropriate combination of the visible content and the NIR data. The fully-phase encryption technique is applied to avoid an easy recognition of the resultant signature at the naked eye and an easy reproduction using conventional devices for imaging or scanning. The obtained complex-amplitude encrypted distribution is encoded on an identity (ID) tag. Spatial multiplexing of the encrypted signature allows us to build a distortion-invariant ID tag, so that remote authentication can be achieved even if the tag is captured under rotation or at different distances. We explore the possibility of using partial information of the encrypted distribution. Simulation results are provided and discussed.

  18. Use of intonation contours for speech recognition in noise by cochlear implant recipients.

    PubMed

    Meister, Hartmut; Landwehr, Markus; Pyschny, Verena; Grugel, Linda; Walger, Martin

    2011-05-01

    The corruption of intonation contours has detrimental effects on sentence-based speech recognition in normal-hearing listeners [Binns and Culling (2007). J. Acoust. Soc. Am. 122, 1765-1776]. This paper examines whether this finding also applies to cochlear implant (CI) recipients. The subjects' F0 discrimination and speech perception in the presence of noise were measured, using sentences with regular and inverted F0 contours. The results revealed that speech recognition for regular contours was significantly better than for inverted contours. This difference was related to the subjects' F0 discrimination, providing further evidence that the perception of intonation patterns is important for CI-mediated speech recognition in noise.

  19. Acoustic emission data assisted process monitoring.

    PubMed

    Yen, Gary G; Lu, Haiming

    2002-07-01

    Gas-liquid two-phase flows are widely used in the chemical industry. Accurate measurement of flow parameters, such as flow regimes, is the key to operating efficiency. Due to the interface complexity of a two-phase flow, it is very difficult to monitor and distinguish flow regimes on-line and in real time. In this paper we propose a cost-effective and computation-efficient acoustic emission (AE) detection system, combined with artificial neural network technology, to recognize four major flow patterns in an air-water vertical two-phase flow column. Several crucial AE parameters are explored and validated, and we found that the density of acoustic emission events and the ring-down counts are two excellent indicators for the flow pattern recognition problem. Instead of the traditional Fair map, a hit-count map is developed, and a multilayer perceptron neural network is designed as a decision maker to describe an approximate transmission stage of a given two-phase flow system.
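
    A hedged sketch of the decision stage, assuming the two indicators named above (event density and ring-down counts) have already been extracted per acquisition window; the regime labels, window representation and network size are illustrative assumptions:

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      def ae_features(hit_times, ringdown_counts, window_s):
          # Two indicators per window: AE event density and mean ring-down count.
          return np.array([len(hit_times) / window_s,
                           float(np.mean(ringdown_counts))])

      def train_regime_classifier(windows, labels):
          # windows: list of (hit_times, ringdown_counts, window_s) tuples;
          # labels: one of four flow regimes (e.g. 'bubbly', 'slug', 'churn',
          # 'annular' as an illustrative label set).
          X = np.vstack([ae_features(*w) for w in windows])
          clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
          return clf.fit(X, labels)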

  20. Automatic removal of cosmic ray signatures in Deep Impact images

    NASA Astrophysics Data System (ADS)

    Ipatov, S. I.; A'Hearn, M. F.; Klaasen, K. P.

    The results of recognition of cosmic ray (CR) signatures on single images made during the Deep Impact mission were analyzed for several codes written by several authors. For automatic removal of CR signatures on many images, we suggest using the code imgclean (http://pdssbn.astro.umd.edu/volume/didoc_0001/document/calibration_software/dical_v5/) written by E. Deutsch, as the other codes considered do not work properly in automatic mode with a large number of images and do not run to completion for some images; however, other codes can be better for analysis of certain specific images. Sometimes imgclean detects false CR signatures near the edge of a comet nucleus, and it often does not recognize all pixels of long CR signatures. Our code rmcr is the only code among those considered that allows one to work with raw images. For most visual images made during low solar activity at exposure time t > 4 s, the number of clusters of bright pixels on an image per second per sq. cm of CCD was about 2-4, both for dark and normal sky images. At high solar activity, it sometimes exceeded 10. The ratio of the number of CR signatures consisting of n pixels obtained at high solar activity to that at low solar activity was greater for greater n. The number of clusters detected as CR signatures on a single infrared image is at least several times greater than the actual number of CR signatures; the number of clusters based on analysis of two successive dark infrared frames is in agreement with the expected number of CR signatures. Some false CR signatures are glitches, i.e., bright pixels repeatedly present on different infrared images. Our interactive code imr allows a user to choose the regions on a considered image where glitches detected by imgclean as CR signatures are ignored. In other regions chosen by the user, the brightness of some pixels is replaced by the local median brightness if the brightness of these pixels is greater by some factor than the median brightness. The
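
    A minimal sketch of the replacement rule described at the end of the abstract (a pixel much brighter than its local median is replaced by that median); the neighbourhood size and threshold factor are illustrative, and the user-selected regions of the interactive code are omitted:

      import numpy as np
      from scipy.ndimage import median_filter

      def remove_cr_signatures(image, factor=3.0, size=5):
          # Flag pixels brighter than `factor` times the local median and
          # replace them with that median; return the cleaned image and mask.
          img = np.asarray(image, dtype=float)
          local_med = median_filter(img, size=size)
          hits = img > factor * np.maximum(local_med, 1e-6)
          cleaned = np.where(hits, local_med, img)
          return cleaned, hits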

  1. Nest signature changes throughout colony cycle and after social parasite invasion in social wasps

    PubMed Central

    Blancato, Giuliano; Picchi, Laura; Lucas, Christophe

    2017-01-01

    Social insects recognize their nestmates by means of a cuticular hydrocarbon signature shared by colony members, but how nest signature changes across time has been rarely tested in longitudinal studies and in the field. In social wasps, the chemical signature is also deposited on the nest surface, where it is used by newly emerged wasps as a reference to learn their colony odor. Here, we investigate the temporal variations of the chemical signature that wasps have deposited on their nests. We followed the fate of the colonies of the social paper wasp Polistes biglumis in their natural environment from colony foundation to decline. Because some colonies were invaded by the social parasite Polistes atrimandibularis, we also tested the effects of social parasites on the nest signature. We observed that, as the season progresses, the nest signature changed; the overall abundance of hydrocarbons as well as the proportion of longer-chain and branched hydrocarbons increased. Where present, social parasites altered the host-nest signature qualitatively (adding parasite-specific alkenes) and quantitatively (by interfering with the increase in overall hydrocarbon abundance). Our results show that 1) colony odor is highly dynamic both in colonies controlled by legitimate foundresses and in those controlled by social parasites; 2) emerged offspring contribute little to colony signature, if at all, in comparison to foundresses; and 3) social parasites, that later mimic host signature, initially mark host nests with species-specific hydrocarbons. This study implies that important updating of the neural template used in nestmate recognition should occur in social insects. PMID:29261775

  2. First Detection of the Acoustic Oscillation Phase Shift Expected from the Cosmic Neutrino Background.

    PubMed

    Follin, Brent; Knox, Lloyd; Millea, Marius; Pan, Zhen

    2015-08-28

    The unimpeded relativistic propagation of cosmological neutrinos prior to recombination of the baryon-photon plasma alters gravitational potentials and therefore the details of the time-dependent gravitational driving of acoustic oscillations. We report here a first detection of the resulting shifts in the temporal phase of the oscillations, which we infer from their signature in the cosmic microwave background temperature power spectrum.

  3. Acoustically regulated optical emission dynamics from quantum dot-like emission centers in GaN/InGaN nanowire heterostructures

    NASA Astrophysics Data System (ADS)

    Lazić, S.; Chernysheva, E.; Hernández-Mínguez, A.; Santos, P. V.; van der Meulen, H. P.

    2018-03-01

    We report on experimental studies of the effects induced by surface acoustic waves on the optical emission dynamics of GaN/InGaN nanowire quantum dots. We employ stroboscopic optical excitation with either time-integrated or time-resolved photoluminescence detection. In the absence of the acoustic wave, the emission spectra reveal signatures originating from the recombination of the neutral exciton and biexciton confined in the probed nanowire quantum dot. When the nanowire is perturbed by the propagating acoustic wave, the embedded quantum dot is periodically strained and its excitonic transitions are modulated by the acousto-mechanical coupling. Depending on the recombination lifetime of the involved optical transitions, we can resolve acoustically driven radiative processes over time scales defined by the acoustic cycle. At high acoustic amplitudes, we also observe distortions in the transmitted acoustic waveform, which are reflected in the time-dependent spectral response of our sensor quantum dot. In addition, the correlated intensity oscillations observed during temporal decay of the exciton and biexciton emission suggest an effect of the acoustic piezoelectric fields on the quantum dot charge population. The present results are relevant for the dynamic spectral and temporal control of photon emission in III-nitride semiconductor heterostructures.

  4. Emotionally conditioning the target-speech voice enhances recognition of the target speech under "cocktail-party" listening conditions.

    PubMed

    Lu, Lingxi; Bao, Xiaohan; Chen, Jing; Qu, Tianshu; Wu, Xihong; Li, Liang

    2018-05-01

    Under a noisy "cocktail-party" listening condition with multiple people talking, listeners can use various perceptual/cognitive unmasking cues to improve recognition of the target speech against informational speech-on-speech masking. One potential unmasking cue is the emotion expressed in a speech voice, by means of certain acoustical features. However, it was unclear whether emotionally conditioning a target-speech voice that has none of the typical acoustical features of emotions (i.e., an emotionally neutral voice) can be used by listeners for enhancing target-speech recognition under speech-on-speech masking conditions. In this study we examined the recognition of target speech against a two-talker speech masker both before and after the emotionally neutral target voice was paired with a loud female screaming sound that has a marked negative emotional valence. The results showed that recognition of the target speech (especially the first keyword in a target sentence) was significantly improved by emotionally conditioning the target speaker's voice. Moreover, the emotional unmasking effect was independent of the unmasking effect of the perceived spatial separation between the target speech and the masker. Also, (skin conductance) electrodermal responses became stronger after emotional learning when the target speech and masker were perceptually co-located, suggesting an increase of listening efforts when the target speech was informationally masked. These results indicate that emotionally conditioning the target speaker's voice does not change the acoustical parameters of the target-speech stimuli, but the emotionally conditioned vocal features can be used as cues for unmasking target speech.

  5. Location and analysis of acoustic infrasound pulses in lightning

    NASA Astrophysics Data System (ADS)

    Arechiga, R.; Stock, M.; Thomas, R.; Erives, H.; Rison, W.; Edens, H.; Lapierre, J.

    2014-07-01

    Acoustic, VHF, and electrostatic measurements shed new light on the origin and production mechanism of the thunder infrasound signature (<10 Hz) from lightning. This signature, composed of an initial compression followed by a rarefaction pulse, has been the subject of several unconfirmed theories and models. The observations of two intracloud flashes, each of which produced multiple infrasound pulses, were analyzed for this work. Once the variation of the speed of sound with temperature is taken into account, both the compression and rarefaction portions of the infrasound pulses are found to originate very near lightning channels mapped by the Lightning Mapping Array. We found that none of the currently proposed models can explain infrasound generation by lightning, and thus propose an alternate theory: The infrasound compression pulse is produced by electrostatic interaction of the charge deposited on the channel and in the streamer zone of the lightning channel.

  6. Tsunami and acoustic-gravity waves in water of constant depth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendin, Gali; Stiassnie, Michael

    2013-08-15

    A study of wave radiation by a rather general bottom displacement, in a compressible ocean of otherwise constant depth, is carried out within the framework of a three-dimensional linear theory. Simple analytic expressions for the flow field, at large distance from the disturbance, are derived. Realistic numerical examples indicate that the acoustic-gravity waves, which significantly precede the tsunami, are expected to leave a measurable signature on bottom-pressure records that should be considered for early detection of tsunamis.

  7. Transcranial ultrasonic therapy based on time reversal of acoustically induced cavitation bubble signature

    PubMed Central

    Gâteau, Jérôme; Marsac, Laurent; Pernot, Mathieu; Aubry, Jean-Francois; Tanter, Mickaël; Fink, Mathias

    2010-01-01

    Brain treatment through the skull with High Intensity Focused Ultrasound (HIFU) can be achieved with multichannel arrays and adaptive focusing techniques such as time-reversal. This method requires a reference signal to be either emitted by a real source embedded in brain tissues or computed from a virtual source, using the acoustic properties of the skull derived from CT images. This non-invasive computational method focuses with precision, but suffers from modeling and repositioning errors that reduce the accessible acoustic pressure at the focus in comparison with fully experimental time-reversal using an implanted hydrophone. In this paper, this simulation-based targeting has been used experimentally as a first step for focusing through an ex vivo human skull at a single location. It has enabled the creation of a cavitation bubble at focus that spontaneously emitted an ultrasonic wave received by the array. This active source signal has allowed 97%±1.1% of the reference pressure (hydrophone-based) to be restored at the geometrical focus. To target points around the focus with an optimal pressure level, conventional electronic steering from the initial focus has been combined with bubble generation. Thanks to step by step bubble generation, the electronic steering capabilities of the array through the skull were improved. PMID:19770084

  8. Learned recognition of maternal signature odors mediates the first suckling episode in mice

    PubMed Central

    Logan, Darren W.; Brunet, Lisa J.; Webb, William R.; Cutforth, Tyler; Ngai, John; Stowers, Lisa

    2012-01-01

    Summary Background Soon after birth all mammals must initiate milk suckling to survive. In rodents, this innate behavior is critically dependent on uncharacterized maternally-derived chemosensory ligands. Recently the first pheromone sufficient to initiate suckling was isolated from the rabbit. Identification of the olfactory cues that trigger first suckling in the mouse would provide the means to determine the neural mechanisms that generate innate behavior. Results Here we use behavioral analysis, metabolomics, and calcium imaging of primary sensory neurons and find no evidence of ligands with intrinsic bioactivity, such as pheromones, acting to promote first suckling in the mouse. Instead, we find that the initiation of suckling is dependent on variable blends of maternal ‘signature odors’ that are learned and recognized prior to first suckling. Conclusions As observed with pheromone-mediated behavior, the response to signature odors releases innate behavior. However, this mechanism tolerates variability in both the signaling ligands and sensory neurons which may maximize the probability that this first essential behavior is successfully initiated. These results suggest that mammalian species have evolved multiple strategies to ensure the onset of this critical behavior. PMID:23041191

  9. Emotional recognition from the speech signal for a virtual education agent

    NASA Astrophysics Data System (ADS)

    Tickle, A.; Raghu, S.; Elshaw, M.

    2013-06-01

    This paper explores the extraction of features from the speech wave to perform intelligent emotion recognition. A feature extraction tool (openSMILE) was used to obtain a baseline set of 998 acoustic features from a set of emotional speech recordings from a microphone. The initial features were reduced to the most important ones so that emotion recognition could be performed using a supervised neural network. Given that the future use of virtual education agents lies in making the agents more interactive, developing agents with the capability to recognise and adapt to the emotional state of humans is an important step.

  10. Differentiation of red wines using an electronic nose based on surface acoustic wave devices.

    PubMed

    García, M; Fernández, M J; Fontecha, J L; Lozano, J; Santos, J P; Aleixandre, M; Sayago, I; Gutiérrez, J; Horrillo, M C

    2006-02-15

    An electronic nose, based on the principle of surface acoustic waves (SAW), was used to differentiate among wines made from the same grape variety and coming from the same cellar. The electronic nose comprises eight surface acoustic wave sensors: one is a reference sensor and the others are coated with different polymers using a spray-coating technique. Data analysis was performed with two pattern recognition methods: principal component analysis (PCA) and a probabilistic neural network (PNN). The results showed that the electronic nose was able to identify the tested wines.
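    As a rough illustration of the analysis pipeline described in this record, the sketch below reduces multi-sensor SAW responses with PCA and classifies them with a minimal probabilistic neural network (a Parzen-window classifier). The array sizes, smoothing parameter, and simulated responses are illustrative assumptions, not values from the paper.

      import numpy as np
      from sklearn.decomposition import PCA

      def pnn_predict(X_train, y_train, X_test, sigma=0.5):
          """Assign each test vector to the class whose Gaussian-kernel density,
          averaged over that class's training samples, is largest (a minimal PNN)."""
          classes = np.unique(y_train)
          preds = []
          for x in X_test:
              scores = []
              for c in classes:
                  d = X_train[y_train == c] - x                         # offsets to class-c samples
                  k = np.exp(-np.sum(d ** 2, axis=1) / (2 * sigma ** 2))
                  scores.append(k.mean())
              preds.append(classes[np.argmax(scores)])
          return np.array(preds)

      # Illustrative data: 7 polymer-coated SAW sensors, one response vector per headspace sample.
      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(loc=c, scale=0.5, size=(20, 7)) for c in range(3)])  # 3 wines
      y = np.repeat(np.arange(3), 20)

      X_pca = PCA(n_components=2).fit_transform(X)          # 2-D scores, as used for visual grouping
      print(pnn_predict(X_pca[::2], y[::2], X_pca[1::2]))   # train on even rows, test on odd rows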

  11. Utility of bilateral acoustic hearing in combination with electrical stimulation provided by the cochlear implant.

    PubMed

    Plant, Kerrie; Babic, Leanne

    2016-01-01

    The aim of the study was to quantify the benefit provided by having access to amplified acoustic hearing in the implanted ear for use in combination with contralateral acoustic hearing and the electrical stimulation provided by the cochlear implant. Measures of spatial and non-spatial hearing abilities were obtained to compare performance obtained with different configurations of acoustic hearing in combination with electrical stimulation. In the combined listening condition participants had access to bilateral acoustic hearing whereas the bimodal condition used acoustic hearing contralateral to the implanted ear only. Experience was provided with each of the listening conditions using a repeated-measures A-B-B-A experimental design. Sixteen post-linguistically hearing-impaired adults participated in the study. Group mean benefit was obtained with use of the combined mode on measures of speech recognition in coincident speech in noise, localization ability, subjective ratings of real-world benefit, and musical sound quality ratings. Access to bilateral acoustic hearing after cochlear implantation provides significant benefit on a range of functional measures.

  12. Individual differences in cortical face selectivity predict behavioral performance in face recognition

    PubMed Central

    Huang, Lijie; Song, Yiying; Li, Jingguang; Zhen, Zonglei; Yang, Zetian; Liu, Jia

    2014-01-01

    In functional magnetic resonance imaging studies, object selectivity is defined as a higher neural response to an object category than other object categories. Importantly, object selectivity is widely considered as a neural signature of a functionally-specialized area in processing its preferred object category in the human brain. However, the behavioral significance of the object selectivity remains unclear. In the present study, we used the individual differences approach to correlate participants' face selectivity in the face-selective regions with their behavioral performance in face recognition measured outside the scanner in a large sample of healthy adults. Face selectivity was defined as the z score of activation with the contrast of faces vs. non-face objects, and the face recognition ability was indexed as the normalized residual of the accuracy in recognizing previously-learned faces after regressing out that for non-face objects in an old/new memory task. We found that the participants with higher face selectivity in the fusiform face area (FFA) and the occipital face area (OFA), but not in the posterior part of the superior temporal sulcus (pSTS), possessed higher face recognition ability. Importantly, the association of face selectivity in the FFA and face recognition ability cannot be accounted for by FFA response to objects or behavioral performance in object recognition, suggesting that the association is domain-specific. Finally, the association is reliable, confirmed by the replication from another independent participant group. In sum, our finding provides empirical evidence on the validity of using object selectivity as a neural signature in defining object-selective regions in the human brain. PMID:25071513
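    The individual-differences analysis summarized above (regressing object-recognition accuracy out of face-recognition accuracy and correlating the residual with FFA face selectivity) can be sketched as follows with simulated placeholder data; none of the numbers correspond to the study's measurements.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      n = 40                                              # hypothetical number of participants
      ffa_selectivity = rng.normal(size=n)                # z(faces > objects) in the FFA
      object_acc = rng.normal(0.80, 0.05, size=n)         # old/new accuracy for non-face objects
      face_acc = 0.3 + 0.5 * object_acc + 0.05 * ffa_selectivity + rng.normal(0, 0.03, n)

      # Residualize face accuracy on object accuracy (ordinary least squares), then z-score.
      slope, intercept, *_ = stats.linregress(object_acc, face_acc)
      residual = face_acc - (intercept + slope * object_acc)
      face_ability = (residual - residual.mean()) / residual.std()

      r, p = stats.pearsonr(ffa_selectivity, face_ability)
      print(f"FFA selectivity vs. residual face ability: r = {r:.2f}, p = {p:.3f}")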

  13. Artificial neural networks for document analysis and recognition.

    PubMed

    Marinai, Simone; Gori, Marco; Soda, Giovanni

    2005-01-01

    Artificial neural networks have been extensively applied to document analysis and recognition. Most efforts have been devoted to the recognition of isolated handwritten and printed characters, with widely recognized successful results. However, many other document processing tasks, like preprocessing, layout analysis, character segmentation, word recognition, and signature verification, have also been addressed effectively, with very promising results. This paper surveys the most significant problems in the area of offline document image processing where connectionist-based approaches have been applied. Similarities and differences between approaches belonging to different categories are discussed. Particular emphasis is given to the crucial role of prior knowledge in the conception of both appropriate architectures and learning algorithms. Finally, the paper provides a critical analysis of the reviewed approaches and identifies the most promising research directions in the field. In particular, a second generation of connectionist-based models is foreseen, based on appropriate graphical representations of the learning environment.

  14. Coherent acoustic vibrations of metal nanoshells

    NASA Astrophysics Data System (ADS)

    Kirakosyan, A. S.; Shahbazyan, T. V.; Guillon, C.; Langot, P.; Del Fatti, N.; Vallee, F.; Cardinal, T.; Treguer, M.

    2007-03-01

    We study vibrational modes of gold nanoshells grown on dielectric core by means of time-resolved pump-probe spectroscopy. The fundamental breathing mode launched by a femtosecond pump pulse manifests itself in a pronounced time-domain modulation of the differential transmission probed at the frequency of the nanoshell surface plasmon resonance. The modulation amplitude is significantly stronger while the period is longer than in a gold nanoparticle of the same overall size. A theoretical model describing breathing mode frequency and damping for a nanoshell in a medium is developed. A distinct acoustical signature of nanoshells provides a new and efficient method for identifying these versatile nanostructures and for studying their mechanical and structural properties.

  15. High levels of sound pressure: acoustic reflex thresholds and auditory complaints of workers with noise exposure.

    PubMed

    Duarte, Alexandre Scalli Mathias; Ng, Ronny Tah Yen; de Carvalho, Guilherme Machado; Guimarães, Alexandre Caixeta; Pinheiro, Laiza Araujo Mohana; Costa, Everardo Andrade da; Gusmão, Reinaldo Jordão

    2015-01-01

    The clinical evaluation of subjects with occupational noise exposure has been difficult due to the discrepancy between auditory complaints and auditory test results. This study aimed to evaluate the contralateral acoustic reflex thresholds of workers exposed to high levels of noise, and to compare these results to the subjects' auditory complaints. This clinical retrospective study evaluated 364 workers between 1998 and 2005; their contralateral acoustic reflexes were compared to auditory complaints, age, and noise exposure time by chi-squared, Fisher's, and Spearman's tests. The workers' age ranged from 18 to 50 years (mean=39.6), and noise exposure time from one to 38 years (mean=17.3). We found that 15.1% (55) of the workers had bilateral hearing loss, 38.5% (140) had bilateral tinnitus, 52.8% (192) had abnormal sensitivity to loud sounds, and 47.2% (172) had speech recognition impairment. The variables hearing loss, speech recognition impairment, tinnitus, age group, and noise exposure time did not show a relationship with acoustic reflex thresholds; however, all complaints demonstrated a statistically significant relationship with Metz recruitment at 3000 and 4000 Hz bilaterally. There was no significant relationship between auditory complaints and acoustic reflexes. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
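    For readers unfamiliar with the tests named above, a minimal sketch of chi-squared, Fisher's exact, and Spearman analyses on made-up worker data is given below; the counts and variables are purely illustrative and do not reproduce the study's data.

      import numpy as np
      from scipy import stats

      # 2x2 table: abnormal vs. normal acoustic reflex by presence of tinnitus (made-up counts).
      table = np.array([[60, 80],
                        [75, 149]])
      chi2, p_chi2, dof, _ = stats.chi2_contingency(table)
      odds, p_fisher = stats.fisher_exact(table)

      # Spearman correlation: noise-exposure time (years) vs. reflex threshold (dB HL), made-up values.
      exposure_years = np.array([2, 5, 8, 12, 17, 21, 25, 30, 34, 38])
      reflex_threshold = np.array([85, 88, 90, 92, 95, 94, 97, 99, 100, 102])
      rho, p_rho = stats.spearmanr(exposure_years, reflex_threshold)

      print(f"chi-squared p = {p_chi2:.3f}, Fisher p = {p_fisher:.3f}, "
            f"Spearman rho = {rho:.2f} (p = {p_rho:.4f})")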

  16. Automated detection and recognition of wildlife using thermal cameras.

    PubMed

    Christiansen, Peter; Steen, Kim Arild; Jørgensen, Rasmus Nyholm; Karstoft, Henrik

    2014-07-30

    In agricultural mowing operations, thousands of animals are injured or killed each year, due to the increased working widths and speeds of agricultural machinery. Detection and recognition of wildlife within the agricultural fields is important to reduce wildlife mortality and, thereby, promote wildlife-friendly farming. The work presented in this paper contributes to the automated detection and classification of animals in thermal imaging. The methods and results are based on top-view images taken manually from a lift to motivate work towards unmanned aerial vehicle-based detection and recognition. Hot objects are detected based on a threshold dynamically adjusted to each frame. For the classification of animals, we propose a novel thermal feature extraction algorithm. For each detected object, a thermal signature is calculated using morphological operations. The thermal signature describes heat characteristics of objects and is partly invariant to translation, rotation, scale and posture. The discrete cosine transform (DCT) is used to parameterize the thermal signature and, thereby, calculate a feature vector, which is used for subsequent classification. Using a k-nearest-neighbor (kNN) classifier, animals are discriminated from non-animals with a balanced classification accuracy of 84.7% in an altitude range of 3-10 m and an accuracy of 75.2% for an altitude range of 10-20 m. To incorporate temporal information in the classification, a tracking algorithm is proposed. Using temporal information improves the balanced classification accuracy to 93.3% in an altitude range of 3-10 m and 77.7% in an altitude range of 10-20 m.
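    A hedged sketch of the classification stage described above (DCT parameterization of a thermal signature followed by k-nearest-neighbor classification) is given below; the toy signatures, the number of retained coefficients, and the train/test split are assumptions for illustration only.

      import numpy as np
      from scipy.fft import dct
      from sklearn.neighbors import KNeighborsClassifier

      def dct_features(signature, n_coeffs=10):
          """DCT-II of a 1-D thermal signature, truncated to its leading coefficients."""
          return dct(signature, norm="ortho")[:n_coeffs]

      rng = np.random.default_rng(2)
      # Toy signatures: 64-point heat profiles, peaked for 'animal', flat for 'non-animal'.
      animals = [np.exp(-0.5 * ((np.arange(64) - 32) / 6) ** 2) + 0.05 * rng.normal(size=64)
                 for _ in range(20)]
      others = [0.3 + 0.05 * rng.normal(size=64) for _ in range(20)]

      X = np.array([dct_features(s) for s in animals + others])
      y = np.array([1] * 20 + [0] * 20)                     # 1 = animal, 0 = non-animal

      knn = KNeighborsClassifier(n_neighbors=3).fit(X[::2], y[::2])
      print("held-out accuracy:", knn.score(X[1::2], y[1::2]))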

  17. Trimodal speech perception: how residual acoustic hearing supplements cochlear-implant consonant recognition in the presence of visual cues.

    PubMed

    Sheffield, Benjamin M; Schuchman, Gerald; Bernstein, Joshua G W

    2015-01-01

    As cochlear implant (CI) acceptance increases and candidacy criteria are expanded, these devices are increasingly recommended for individuals with less than profound hearing loss. As a result, many individuals who receive a CI also retain acoustic hearing, often in the low frequencies, in the nonimplanted ear (i.e., bimodal hearing) and in some cases in the implanted ear (i.e., hybrid hearing) which can enhance the performance achieved by the CI alone. However, guidelines for clinical decisions pertaining to cochlear implantation are largely based on expectations for postsurgical speech-reception performance with the CI alone in auditory-only conditions. A more comprehensive prediction of postimplant performance would include the expected effects of residual acoustic hearing and visual cues on speech understanding. An evaluation of auditory-visual performance might be particularly important because of the complementary interaction between the speech information relayed by visual cues and that contained in the low-frequency auditory signal. The goal of this study was to characterize the benefit provided by residual acoustic hearing to consonant identification under auditory-alone and auditory-visual conditions for CI users. Additional information regarding the expected role of residual hearing in overall communication performance by a CI listener could potentially lead to more informed decisions regarding cochlear implantation, particularly with respect to recommendations for or against bilateral implantation for an individual who is functioning bimodally. Eleven adults 23 to 75 years old with a unilateral CI and air-conduction thresholds in the nonimplanted ear equal to or better than 80 dB HL for at least one octave frequency between 250 and 1000 Hz participated in this study. Consonant identification was measured for conditions involving combinations of electric hearing (via the CI), acoustic hearing (via the nonimplanted ear), and speechreading (visual cues

  18. Detection and recognition of analytes based on their crystallization patterns

    DOEpatents

    Morozov, Victor [Manassas, VA]; Bailey, Charles L [Cross Junction, VA]; Vsevolodov, Nikolai N [Kensington, MD]; Elliott, Adam [Manassas, VA]

    2008-05-06

    The invention contemplates a method for recognition of proteins and other biological molecules by imaging the morphology, size, and distribution of crystalline and amorphous dry residues in droplets (further referred to as "crystallization patterns") containing a predetermined amount of certain crystal-forming organic compounds (reporters) to which the protein to be analyzed is added. It has been shown that changes in the crystallization patterns of a number of amino acids can be used as a "signature" of the added protein. It was also found that both the character of changes in the crystallization pattern and the fact of such changes can be used as recognition elements in the analysis of protein molecules.

  19. Memory Distortion and Its Avoidance: An Event-Related Potentials Study on False Recognition and Correct Rejection

    PubMed Central

    Beato, Maria Soledad

    2016-01-01

    Memory researchers have long been captivated by the nature of memory distortions and have made efforts to identify the neural correlates of true and false memories. However, the underlying mechanisms of avoiding false memories by correctly rejecting related lures remain underexplored. In this study, we employed a variant of the Deese/Roediger-McDermott paradigm to explore neural signatures of committing and avoiding false memories. ERPs were obtained for True recognition, False recognition, Correct rejection of new items, and, more importantly, Correct rejection of related lures. With these ERP data, early-frontal, left-parietal, and late right-frontal old/new effects (associated with familiarity, recollection, and monitoring processes, respectively) were analysed. Results indicated that there were similar patterns for True and False recognition in all three old/new effects analysed in our study. Also, False recognition and Correct rejection of related lures activities seemed to share common underlying familiarity-based processes. The ERP similarities between False recognition and Correct rejection of related lures disappeared when recollection processes were examined because only False recognition presented a parietal old/new effect. This finding supported the view that actual false recollections underlie false memories, providing evidence consistent with previous behavioural research and with most ERP and neuroimaging studies. Later, with the onset of monitoring processes, False recognition and Correct rejection of related lures waveforms presented, again, clearly dissociated patterns. Specifically, False recognition and True recognition showed more positive-going patterns than Correct rejection of related lures signal and Correct rejection of new items signature. Since False recognition and Correct rejection of related lures triggered familiarity-recognition processes, our results suggest that deciding which items are studied is based more on recollection

  20. Memory Distortion and Its Avoidance: An Event-Related Potentials Study on False Recognition and Correct Rejection.

    PubMed

    Cadavid, Sara; Beato, Maria Soledad

    2016-01-01

    Memory researchers have long been captivated by the nature of memory distortions and have made efforts to identify the neural correlates of true and false memories. However, the underlying mechanisms of avoiding false memories by correctly rejecting related lures remain underexplored. In this study, we employed a variant of the Deese/Roediger-McDermott paradigm to explore neural signatures of committing and avoiding false memories. ERPs were obtained for True recognition, False recognition, Correct rejection of new items, and, more importantly, Correct rejection of related lures. With these ERP data, early-frontal, left-parietal, and late right-frontal old/new effects (associated with familiarity, recollection, and monitoring processes, respectively) were analysed. Results indicated that there were similar patterns for True and False recognition in all three old/new effects analysed in our study. Also, False recognition and Correct rejection of related lures activities seemed to share common underlying familiarity-based processes. The ERP similarities between False recognition and Correct rejection of related lures disappeared when recollection processes were examined because only False recognition presented a parietal old/new effect. This finding supported the view that actual false recollections underlie false memories, providing evidence consistent with previous behavioural research and with most ERP and neuroimaging studies. Later, with the onset of monitoring processes, False recognition and Correct rejection of related lures waveforms presented, again, clearly dissociated patterns. Specifically, False recognition and True recognition showed more positive-going patterns than Correct rejection of related lures signal and Correct rejection of new items signature. Since False recognition and Correct rejection of related lures triggered familiarity-recognition processes, our results suggest that deciding which items are studied is based more on recollection

  1. Magnetic Oscillations Mark Sites of Magnetic Transients in an Acoustically Active Flare

    NASA Astrophysics Data System (ADS)

    Lindsey, Charles A.; Donea, A.; Hudson, H. S.; Martinez Oliveros, J.; Hanson, C.

    2011-05-01

    The flare of 2011 February 15, in NOAA AR11158, was the first acoustically active flare of solar cycle 24, and the first observed by the Solar Dynamics Observatory (SDO). It was exceptional in a number of respects (Kosovichev 2011a,b). Sharp ribbon-like transient Doppler and magnetic signatures swept over parts of the active region during the impulsive phase of the flare. We apply seismic holography to a 2-hr time series of HMI observations encompassing the flare. The acoustic source distribution appears to have been strongly concentrated in a single highly compact penumbral region in which the continuum-intensity signature was unusually weak. The line-of-sight magnetic transient was strong in parts of the active region, but relatively weak in the seismic-source region. On the other hand, the neighbourhoods of the regions visited by the strongest magnetic transients maintained conspicuous 5-minute-period variations in the line-of-sight magnetic signature for the full 2-hr duration of the time series, before the flare as well as after. We apply standard helioseismic control diagnostics for clues as to the physics underlying 5-minute magnetic oscillations in regions conducive to magnetic transients during a flare and consider the prospective development of this property as an indicator of flare potential on some time scale. We make use of high-resolution data from AIA, using diffracted images where necessary to obtain good photometry where the image is otherwise saturated. This is relevant to seismic emission driven by thick-target heating in the absence of back-warming. We also use RHESSI imaging spectroscopy to compare the source distributions of HXR and seismic emission.

  2. Characteristics of acoustic emissions from shearing of granular media

    NASA Astrophysics Data System (ADS)

    Michlmayr, Gernot; Cohen, Denis; Or, Dani

    2010-05-01

    Deformation and abrupt formation of small failure cracks on hillslopes often precede sudden release of shallow landslides. The associated frictional sliding, breakage of cementing agents, and rupture of embedded biological fibers or liquid bonds between grain contacts are associated with measurable acoustic emissions (AE). The aim of this study was to characterize small-scale, shear-induced failure events (as models of precursors to a landslide) by capturing elastic body waves emitted from such events. We conducted a series of experiments with a specially designed shear frame to measure and characterize high-frequency (kHz range) acoustic emissions under different conditions using piezoelectric sensors. Tests were performed at different shear rates ranging from 0.01 mm/s to 2 mm/s with different dry and wet granular materials. In addition to acoustic emissions, the setup allows forces and deformations to be measured in both the horizontal and vertical directions. Results provide a means to define characteristic AE signatures for different failure events. We observed an increase in AE activity during dilation of granular samples. In wet material, AE signals were attributed to the snap-off of liquid bridges between single grains. Acoustic emissions clearly provide an experimental tool for exploring micro-mechanical processes in dry and wet material. Moreover, the high sampling rates found in most AE systems, coupled with waveguides to overcome signal attenuation, offer promise for field applications as an early warning method for observing the progressive development of slip planes prior to the onset of a landslide.

  3. Major depressive disorder discrimination using vocal acoustic features.

    PubMed

    Taguchi, Takaya; Tachikawa, Hirokazu; Nemoto, Kiyotaka; Suzuki, Masayuki; Nagano, Toru; Tachibana, Ryuki; Nishimura, Masafumi; Arai, Tetsuaki

    2018-01-01

    The voice carries various information produced by vibrations of the vocal cords and the vocal tract. Though many studies have reported a relationship between vocal acoustic features and depression, including mel-frequency cepstral coefficients (MFCCs), which are also applied in speech recognition, there have been few studies in which acoustic features allowed discrimination of patients with depressive disorder. Vocal acoustic features could serve as biomarkers of depression and support differential diagnosis of patients in a depressive state. In order to achieve differential diagnosis of depression, in this preliminary study, we examined whether vocal acoustic features could allow discrimination between depressive patients and healthy controls. Subjects were 36 patients who met the criteria for major depressive disorder and 36 healthy controls with no current or past psychiatric disorders. Voices reading out digits before and after a verbal fluency task were recorded. Voices were analyzed using openSMILE. The extracted acoustic features, including MFCCs, were used for group comparison and discriminant analysis between patients and controls. The second dimension of the MFCC (MFCC 2) was significantly different between groups and allowed discrimination between patients and controls with a sensitivity of 77.8% and a specificity of 86.1%. The difference in MFCC 2 between the two groups reflected an energy difference in the frequency band around 2000-3000 Hz. MFCC 2 was significantly different between depressive patients and controls. This feature could be a useful biomarker to detect major depressive disorder. Sample size was relatively small. Psychotropics could have a confounding effect on voice. Copyright © 2017 Elsevier B.V. All rights reserved.
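    A minimal sketch of the core measurement is shown below: extract MFCCs from a voice recording and use the mean of the second coefficient as a single discriminant feature. The study used openSMILE; librosa is substituted here purely for illustration, and the file name and decision threshold are hypothetical placeholders rather than study values.

      import librosa

      def mfcc2_mean(wav_path):
          """Mean of the second MFCC over a recording (librosa stands in for openSMILE here)."""
          y, sr = librosa.load(wav_path, sr=16000)
          mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # rows are coefficients
          return float(mfcc[1].mean())                          # MFCC 2 is row index 1

      # Hypothetical usage: the file name and threshold below are placeholders, not fitted values.
      THRESHOLD = -35.0
      score = mfcc2_mean("subject_reading_digits.wav")
      print("flag for clinical follow-up" if score < THRESHOLD else "within control range")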

  4. Augmenting the impact of technology adoption with financial incentive to improve radiology report signature times.

    PubMed

    Andriole, Katherine P; Prevedello, Luciano M; Dufault, Allen; Pezeshk, Parham; Bransfield, Robert; Hanson, Richard; Doubilet, Peter M; Seltzer, Steven E; Khorasani, Ramin

    2010-03-01

    Radiology report signature time (ST) can be a substantial component of total report turnaround time. Poor turnaround time resulting from lengthy ST can adversely affect patient care. The combination of technology adoption with financial incentive was evaluated to determine if ST improvement can be augmented and sustained. This prospective study was performed at a 751-bed, urban, tertiary care adult teaching hospital. Test-site imaging volume approximated 48,000 examinations per month. The radiology department has 100 trainees and 124 attending radiologists serving multiple institutions. Over a study period of 4 years and 4 months, three interventions focused on radiologist signature performance were implemented: 1) a notification paging application that alerted radiologists when reports were ready for signature, 2) a picture archiving and communication system (PACS)-integrated speech recognition report generation system, and 3) a departmental financial incentive to reward radiologists semiannually for ST performance. Signature time was compared before and after the interventions. Wilcoxon and linear regression statistical analyses were used to assess the significance of trends. Technology adoption (paging plus speech recognition) reduced median ST from >5 hours to <1 hour (P < .001) and 80th-percentile ST from >24 hours to 15-18 hours (P < .001). Subsequent addition of a financial incentive further improved 80th-percentile ST to 4-8 hours (P < .001). The gains in median and 80th-percentile ST were sustained over the final 31 months of the study period. Technology interventions coupled with financial incentive can result in synergistic and sustainable improvement in radiologist report-signing behavior. The addition of a financial incentive leads to better performance than that achievable through technology alone.
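    A small sketch of the kind of before/after comparison described above, using a Wilcoxon rank-sum (Mann-Whitney) test on report signature times, is shown below; the two samples are fabricated placeholders, not study data.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      st_before = rng.exponential(scale=6.0, size=200)    # hours to signature, pre-intervention
      st_after = rng.exponential(scale=0.8, size=200)     # hours to signature, post-intervention

      u, p = stats.mannwhitneyu(st_before, st_after, alternative="greater")
      print(f"median before = {np.median(st_before):.1f} h, "
            f"after = {np.median(st_after):.1f} h, p = {p:.1e}")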

  5. Detecting and visualizing weak signatures in hyperspectral data

    NASA Astrophysics Data System (ADS)

    MacPherson, Duncan James

    This thesis evaluates existing techniques for detecting weak spectral signatures from remotely sensed hyperspectral data. Algorithms are presented that successfully detect hard-to-find 'mystery' signatures in unknown cluttered backgrounds. The term 'mystery' is used to describe a scenario where the spectral target and background endmembers are unknown. Sub-Pixel analysis and background suppression are used to find deeply embedded signatures which can be less than 10% of the total signal strength. Existing 'mystery target' detection algorithms are derived and compared. Several techniques are shown to be superior both visually and quantitatively. Detection performance is evaluated using confidence metrics that are developed. A multiple algorithm approach is shown to improve detection confidence significantly. Although the research focuses on remote sensing applications, the algorithms presented can be applied to a wide variety of diverse fields such as medicine, law enforcement, manufacturing, earth science, food production, and astrophysics. The algorithms are shown to be general and can be applied to both the reflective and emissive parts of the electromagnetic spectrum. The application scope is a broad one and the final results open new opportunities for many specific applications including: land mine detection, pollution and hazardous waste detection, crop abundance calculations, volcanic activity monitoring, detecting diseases in food, automobile or airplane target recognition, cancer detection, mining operations, extracting galactic gas emissions, etc.
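    One standard detector from the family evaluated in such work is a spectral matched filter with background whitening; the sketch below is a generic illustration of that idea on simulated data, not a reconstruction of the thesis's specific algorithms, and all sizes and amplitudes are assumptions.

      import numpy as np

      def matched_filter_scores(pixels, target):
          """pixels: (n_pixels, n_bands) spectra; target: (n_bands,) library signature.
          Returns one score per pixel, scaled to roughly estimate target abundance."""
          mu = pixels.mean(axis=0)
          cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(pixels.shape[1])  # regularized background covariance
          w = np.linalg.solve(cov, target - mu)             # whitened target direction
          w /= (target - mu) @ w                            # so a full-abundance pixel scores ~1
          return (pixels - mu) @ w

      rng = np.random.default_rng(4)
      n_pixels, n_bands = 2000, 50
      cube = rng.normal(size=(n_pixels, n_bands))           # cluttered background spectra
      signature = np.sin(np.linspace(0, np.pi, n_bands))    # known target signature
      cube[:10] += 0.3 * signature                          # bury a weak target in ten pixels

      scores = matched_filter_scores(cube, signature)
      print("mean score, target pixels:", round(float(scores[:10].mean()), 2),
            "| background pixels:", round(float(scores[10:].mean()), 2))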

  6. Gesture recognition for smart home applications using portable radar sensors.

    PubMed

    Wan, Qian; Li, Yiran; Li, Changzhi; Pal, Ranadip

    2014-01-01

    In this article, we consider the design of a human gesture recognition system based on pattern recognition of signatures from a portable smart radar sensor. Powered by AAA batteries, the smart radar sensor operates in the 2.4 GHz industrial, scientific and medical (ISM) band. We analyzed the feature space using principal components and application-specific time- and frequency-domain features extracted from radar signals for two different sets of gestures. We illustrate that a nearest-neighbor-based classifier can achieve greater than 95% accuracy for multi-class classification using 10-fold cross-validation when features are extracted based on magnitude differences and Doppler shifts, as compared to features extracted through orthogonal transformations. The reported results illustrate the potential of intelligent radars integrated with a pattern recognition system for high-accuracy smart home and health monitoring purposes.
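    The evaluation protocol described above (a nearest-neighbor classifier scored with 10-fold cross-validation) can be sketched as follows; the feature matrix is a random placeholder standing in for the magnitude-difference and Doppler-shift features.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(5)
      n_gestures, per_class, n_features = 4, 50, 12
      X = np.vstack([rng.normal(loc=c, scale=1.0, size=(per_class, n_features))
                     for c in range(n_gestures)])           # separable toy feature clusters
      y = np.repeat(np.arange(n_gestures), per_class)

      clf = KNeighborsClassifier(n_neighbors=1)
      scores = cross_val_score(clf, X, y, cv=10)            # 10-fold cross-validation
      print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")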

  7. Obligatory and facultative brain regions for voice-identity recognition

    PubMed Central

    Roswandowitz, Claudia; Kappes, Claudia; Obrig, Hellmuth; von Kriegstein, Katharina

    2018-01-01

    Abstract Recognizing the identity of others by their voice is an important skill for social interactions. To date, it remains controversial which parts of the brain are critical structures for this skill. Based on neuroimaging findings, standard models of person-identity recognition suggest that the right temporal lobe is the hub for voice-identity recognition. Neuropsychological case studies, however, reported selective deficits of voice-identity recognition in patients predominantly with right inferior parietal lobe lesions. Here, our aim was to work towards resolving the discrepancy between neuroimaging studies and neuropsychological case studies to find out which brain structures are critical for voice-identity recognition in humans. We performed a voxel-based lesion-behaviour mapping study in a cohort of patients (n = 58) with unilateral focal brain lesions. The study included a comprehensive behavioural test battery on voice-identity recognition of newly learned (voice-name, voice-face association learning) and familiar voices (famous voice recognition) as well as visual (face-identity recognition) and acoustic control tests (vocal-pitch and vocal-timbre discrimination). The study also comprised clinically established tests (neuropsychological assessment, audiometry) and high-resolution structural brain images. The three key findings were: (i) a strong association between voice-identity recognition performance and right posterior/mid temporal and right inferior parietal lobe lesions; (ii) a selective association between right posterior/mid temporal lobe lesions and voice-identity recognition performance when face-identity recognition performance was factored out; and (iii) an association of right inferior parietal lobe lesions with tasks requiring the association between voices and faces but not voices and names. The results imply that the right posterior/mid temporal lobe is an obligatory structure for voice-identity recognition, while the inferior parietal

  8. Armored Combat Vehicles Science and Technology Plan

    DTIC Science & Technology

    1982-11-01

    APPLICATION OF SENSORS Investigate the seismic, acoustic, and electromagnetic signatures of military and intruder-type targets and the theoretical aspects...a prototype sampling system which has the capability to monitor ambient air both outside and inside vehicles and provide an early warning to the crew...and through various processing modules provide automated functions for simultaneous tracking of targets and automatic recognition

  9. The Effects of Musical and Linguistic Components in Recognition of Real-World Musical Excerpts by Cochlear Implant Recipients and Normal-Hearing Adults

    PubMed Central

    Gfeller, Kate; Jiang, Dingfeng; Oleson, Jacob; Driscoll, Virginia; Olszewski, Carol; Knutson, John F.; Turner, Christopher; Gantz, Bruce

    2011-01-01

    Background Cochlear implants (CI) are effective in transmitting salient features of speech, especially in quiet, but current CI technology is not well suited to the transmission of key musical structures (e.g., melody, timbre). It is possible, however, that sung lyrics, which are commonly heard in real-world music, may provide acoustical cues that support better music perception. Objective The purpose of this study was to examine how accurately adults who use CIs (n=87) and those with normal hearing (NH) (n=17) are able to recognize real-world music excerpts based upon musical and linguistic (lyrics) cues. Results CI recipients were significantly less accurate than NH listeners on recognition of real-world music with or, in particular, without lyrics; however, CI recipients whose devices transmitted acoustic plus electric stimulation were more accurate than CI recipients reliant upon electric stimulation alone (particularly items without linguistic cues). Recognition by CI recipients improved as a function of linguistic cues. Methods Participants were tested on melody recognition of complex melodies (pop, country, classical styles). Results were analyzed as a function of: hearing status and history, device type (electric only or acoustic plus electric stimulation), musical style, linguistic and musical cues, speech perception scores, cognitive processing, music background, age, and in relation to self-report on listening acuity and enjoyment. Age at time of testing was negatively correlated with recognition performance. Conclusions These results have practical implications regarding successful participation of CI users in music-based activities that include recognition and accurate perception of real-world songs (e.g., reminiscence, lyric analysis, listening for enjoyment). PMID:22803258

  10. The effects of musical and linguistic components in recognition of real-world musical excerpts by cochlear implant recipients and normal-hearing adults.

    PubMed

    Gfeller, Kate; Jiang, Dingfeng; Oleson, Jacob J; Driscoll, Virginia; Olszewski, Carol; Knutson, John F; Turner, Christopher; Gantz, Bruce

    2012-01-01

    Cochlear implants (CI) are effective in transmitting salient features of speech, especially in quiet, but current CI technology is not well suited to the transmission of key musical structures (e.g., melody, timbre). It is possible, however, that sung lyrics, which are commonly heard in real-world music, may provide acoustical cues that support better music perception. The purpose of this study was to examine how accurately adults who use CIs (n = 87) and those with normal hearing (NH) (n = 17) are able to recognize real-world music excerpts based upon musical and linguistic (lyrics) cues. CI recipients were significantly less accurate than NH listeners on recognition of real-world music with or, in particular, without lyrics; however, CI recipients whose devices transmitted acoustic plus electric stimulation were more accurate than CI recipients reliant upon electric stimulation alone (particularly items without linguistic cues). Recognition by CI recipients improved as a function of linguistic cues. Participants were tested on melody recognition of complex melodies (pop, country, & classical styles). Results were analyzed as a function of: hearing status and history, device type (electric only or acoustic plus electric stimulation), musical style, linguistic and musical cues, speech perception scores, cognitive processing, music background, age, and in relation to self-report on listening acuity and enjoyment. Age at time of testing was negatively correlated with recognition performance. These results have practical implications regarding successful participation of CI users in music-based activities that include recognition and accurate perception of real-world songs (e.g., reminiscence, lyric analysis, & listening for enjoyment).

  11. Acoustic measurements on aerofoils moving in a circle at high speed

    NASA Technical Reports Server (NTRS)

    Wright, S. E.; Crosby, W.; Lee, D. L.

    1982-01-01

    Features of the test apparatus, research objectives and sample test results at the Stanford University rotor aerodynamics and noise facility are described. A steel frame equipped to receive lead shot for damping vibrations supports the drive shaft for rotor blade elements. Sleeve bearings are employed to assure quietness, and a variable speed ac motor produces the rotations. The test stand can be configured for horizontal or vertical orientation of the drive shaft. The entire assembly is housed in an acoustically sealed room. Rotation conditions for hover and large angles of attack can be studied, together with rotational and blade element noises. Research is possible on broad band, discrete frequency, and high speed noise, with measurements taken 3 m from the center of the rotor. Acoustic signatures from Mach 0.3-0.93 trials with a NACA 0012 airfoil are provided.

  12. Selective Surface Acoustic Wave-Based Organophosphorus Sensor Employing a Host-Guest Self-Assembly Monolayer of β-Cyclodextrin Derivative

    PubMed Central

    Pan, Yong; Mu, Ning; Shao, Shengyu; Yang, Liu; Wang, Wen; Xie, Xiao; He, Shitang

    2015-01-01

    Self-assembly and molecular imprinting are very attractive technologies for the development of artificial recognition systems and provide chemical recognition based on need rather than happenstance. In this paper, we employed a β-cyclodextrin derivative surface acoustic wave (SAW) chemical sensor for detecting the chemical warfare agent (CWA) sarin (O-isopropyl methylphosphonofluoridate, GB). Using sarin acid (isopropyl hydrogen methylphosphonate) as an imprinting template, mono[6-deoxy-6-[(mercaptodecamethylene)thio

  13. LANDMARK-BASED SPEECH RECOGNITION: REPORT OF THE 2004 JOHNS HOPKINS SUMMER WORKSHOP.

    PubMed

    Hasegawa-Johnson, Mark; Baker, James; Borys, Sarah; Chen, Ken; Coogan, Emily; Greenberg, Steven; Juneja, Amit; Kirchhoff, Katrin; Livescu, Karen; Mohan, Srividya; Muller, Jennifer; Sonmez, Kemal; Wang, Tianyu

    2005-01-01

    Three research prototype speech recognition systems are described, all of which use recently developed methods from artificial intelligence (specifically support vector machines, dynamic Bayesian networks, and maximum entropy classification) in order to implement, in the form of an automatic speech recognizer, current theories of human speech perception and phonology (specifically landmark-based speech perception, nonlinear phonology, and articulatory phonology). All three systems begin with a high-dimensional multiframe acoustic-to-distinctive feature transformation, implemented using support vector machines trained to detect and classify acoustic phonetic landmarks. Distinctive feature probabilities estimated by the support vector machines are then integrated using one of three pronunciation models: a dynamic programming algorithm that assumes canonical pronunciation of each word, a dynamic Bayesian network implementation of articulatory phonology, or a discriminative pronunciation model trained using the methods of maximum entropy classification. Log probability scores computed by these models are then combined, using log-linear combination, with other word scores available in the lattice output of a first-pass recognizer, and the resulting combination score is used to compute a second-pass speech recognition output.
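    A hedged sketch of the first stage described above, an SVM producing a per-frame probability for one distinctive feature from stacked multiframe acoustic features, is given below; the random features and labels are placeholders rather than real spectral frames or landmark annotations, and the context-window size is an assumption.

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(6)
      n_frames, frame_dim, context = 400, 20, 5            # 5-frame context window (assumption)
      X = rng.normal(size=(n_frames, frame_dim * context)) # stacked multiframe acoustic features
      y = rng.integers(0, 2, size=n_frames)                # 1 = distinctive feature present (placeholder labels)

      svm = SVC(kernel="rbf", probability=True).fit(X[:300], y[:300])
      posterior = svm.predict_proba(X[300:])[:, 1]         # P(feature present | acoustics)
      print("first five frame posteriors:", np.round(posterior[:5], 2))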

  14. Gigahertz acoustic vibrations of elastically anisotropic Indium–tin-oxide nanorod arrays [Gigahertz modulation of the full visible spectrum via acoustic vibrations of elastically anisotropic Indium-tin-oxide nanorod arrays]

    DOE PAGES

    Guo, Peijun; Schaller, Richard D.; Ocola, Leonidas E.; ...

    2016-08-15

    Active control of light is important for photonic integrated circuits, optical switches, and telecommunications. Coupling light with acoustic vibrations in nanoscale optical resonators offers optical modulation capabilities with high bandwidth and small footprint. Instead of using noble metals, here we introduce indium-tin-oxide nanorod arrays (ITO-NRAs) as the operating media and demonstrate optical modulation covering the visible spectral range (from 360 to 700 nm), with ~20 GHz bandwidth, through the excitation of coherent acoustic vibrations in ITO-NRAs. This broadband modulation results from the collective optical diffraction by the dielectric ITO-NRAs, and a high differential transmission modulation of up to 10% is achieved through efficient near-infrared, on-plasmon-resonance pumping. By combining the frequency signatures of the vibrational modes with finite-element simulations, we further determine the anisotropic elastic constants for single-crystalline ITO, which are not known for the bulk phase. Furthermore, this technique to determine elastic constants using coherent acoustic vibrations of uniform nanostructures can be generalized to the study of other inorganic materials.

  15. Gigahertz acoustic vibrations of elastically anisotropic Indium–tin-oxide nanorod arrays [Gigahertz modulation of the full visible spectrum via acoustic vibrations of elastically anisotropic Indium-tin-oxide nanorod arrays]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Peijun; Schaller, Richard D.; Ocola, Leonidas E.

    Active control of light is important for photonic integrated circuits, optical switches, and telecommunications. Coupling light with acoustic vibrations in nanoscale optical resonators offers optical modulation capabilities with high bandwidth and small footprint. Instead of using noble metals, here we introduce indium-tin-oxide nanorod arrays (ITO-NRAs) as the operating media and demonstrate optical modulation covering the visible spectral range (from 360 to 700 nm), with ~20 GHz bandwidth, through the excitation of coherent acoustic vibrations in ITO-NRAs. This broadband modulation results from the collective optical diffraction by the dielectric ITO-NRAs, and a high differential transmission modulation of up to 10% is achieved through efficient near-infrared, on-plasmon-resonance pumping. By combining the frequency signatures of the vibrational modes with finite-element simulations, we further determine the anisotropic elastic constants for single-crystalline ITO, which are not known for the bulk phase. Furthermore, this technique to determine elastic constants using coherent acoustic vibrations of uniform nanostructures can be generalized to the study of other inorganic materials.

  16. Topographic Signatures of Meandering Rivers with Differences in Outer Bank Cohesion

    NASA Astrophysics Data System (ADS)

    Kelly, S. A.; Belmont, P.

    2014-12-01

    Within a given valley setting, interactions between river hydraulics, sediment, topography, and vegetation determine attributes of channel morphology, including planform, width and depth, slope, and bed and bank properties. These feedbacks also govern river behavior, including migration and avulsion. Bank cohesion, from the addition of fine sediment and/or vegetation has been recognized in flume experiments as a necessary component to create and maintain a meandering channel planform. Greater bank cohesion slows bank erosion, limiting the rate at which a river can adjust laterally and preventing so-called "runaway widening" to a braided state. Feedbacks of bank cohesion on channel hydraulics and sediment transport may thus produce distinct topographic signatures, or patterns in channel width, depth, and point bar transverse slope. We expect that in bends of greater outer bank cohesion the channel will be narrower, deeper, and bars will have greater transverse slopes. Only recently have we recognized that biotic processes may imprint distinct topographic signatures on the landscape. This study explores topographic signatures of three US rivers: the lower Minnesota River, near Mankato, MN, the Le Sueur River, south central MN, and the Fall River, Rocky Mountain National Park, CO. Each of these rivers has variability in outer bank cohesion, quantified based on geotechnical and vegetation properties, and in-channel topography, which was derived from rtkGPS and acoustic bathymetry surveys. We present methods for incorporating biophysical feedbacks into geomorphic transport laws so that models can better simulate the spatial patterns and variability of topographic signatures.

  17. Developing an acoustic method for reducing North Atlantic right whale (Eubalaena glacialis) ship strike mortality along the United States eastern seaboard

    NASA Astrophysics Data System (ADS)

    Mullen, Kaitlyn Allen

    North Atlantic right whales (Eubalaena glacialis) are among the world's most endangered cetaceans. Although protected from commercial whaling since 1949, North Atlantic right whales exhibit little to no population growth. Ship strike mortality is the leading known cause of North Atlantic right whale mortality. North Atlantic right whales exhibit developed auditory systems, and vocalize in the frequency range that dominates ship acoustic signatures. With no behavioral audiogram published, the current literature assumes these whales should be able to acoustically detect signals in the same frequencies in which they vocalize. Recorded ship acoustic signatures occur at intensities that are similar to or higher than those recorded from vocalizing North Atlantic right whales. If North Atlantic right whales are capable of acoustically detecting oncoming ships, why are they susceptible to ship strike mortality? This thesis models potential acoustic impediments to North Atlantic right whale detection of oncoming ships, and concludes that the presence of modeled and observed bow null effect acoustic shadow zones, located directly ahead of oncoming ships, is likely to impair the ability of North Atlantic right whales to detect and/or localize oncoming shipping traffic. This lack of detection and/or localization likely leads to a lack of ship strike avoidance, and thus contributes to the observed high rates of North Atlantic right whale ship strike mortality. I propose that North Atlantic right whale ship strike mortality reduction is possible via reducing and/or eliminating the presence of bow null effect acoustic shadow zones. This thesis develops and tests one method for bow null effect acoustic shadow zone reduction on five ships. Finally, I review current United States policy towards North Atlantic right whale ship strike mortality in an effort to determine if the bow null effect acoustic shadow zone reduction method developed is a viable method for reducing North Atlantic right whale ship

  18. Acoustic/seismic signal propagation and sensor performance modeling

    NASA Astrophysics Data System (ADS)

    Wilson, D. Keith; Marlin, David H.; Mackay, Sean

    2007-04-01

    Performance, optimal employment, and interpretation of data from acoustic and seismic sensors depend strongly and in complex ways on the environment in which they operate. Software tools for guiding non-expert users of acoustic and seismic sensors are therefore much needed. However, such tools require that many individual components be constructed and correctly connected together. These components include the source signature and directionality, representation of the atmospheric and terrain environment, calculation of the signal propagation, characterization of the sensor response, and mimicking of the data processing at the sensor. Selection of an appropriate signal propagation model is particularly important, as there are significant trade-offs between output fidelity and computation speed. Attenuation of signal energy, random fading, and (for array systems) variations in wavefront angle-of-arrival should all be considered. Characterization of the complex operational environment is often the weak link in sensor modeling: important issues for acoustic and seismic modeling activities include the temporal/spatial resolution of the atmospheric data, knowledge of the surface and subsurface terrain properties, and representation of ambient background noise and vibrations. Design of software tools that address these challenges is illustrated with two examples: a detailed target-to-sensor calculation application called the Sensor Performance Evaluator for Battlefield Environments (SPEBE) and a GIS-embedded approach called Battlefield Terrain Reasoning and Awareness (BTRA).

  19. Auditory Perception in an Open Space: Detection and Recognition

    DTIC Science & Technology

    2015-06-01

    recognition ranges of most sounds were approximately 100–200 m. Therefore, it may be hypothesized that this range makes up the soundscape or the range of the... soundscapes. Acta Acustica united with Acustica. 2003;89:287–295. Delaney ME. Range predictions for siren sources. Teddington (UK): National...management of park soundscapes: a review. Applied Acoustics. 2008;69:77–92. Mirabella A, Goldstein D. The effects of ambient noise upon signal detection

  20. Real Traceable Signatures

    NASA Astrophysics Data System (ADS)

    Chow, Sherman S. M.

    Traceable signature scheme extends a group signature scheme with an enhanced anonymity management mechanism. The group manager can compute a tracing trapdoor which enables anyone to test if a signature is signed by a given misbehaving user, while the only way to do so for group signatures requires revealing the signer of all signatures. Nevertheless, it is not tracing in a strict sense. For all existing schemes, T tracing agents need to recollect all N' signatures ever produced and perform RN' “checks” for R revoked users. This involves a high volume of transfer and computations. Increasing T increases the degree of parallelism for tracing but also the probability of “missing” some signatures in case some of the agents are dishonest.

  1. The semantics of prosody: acoustic and perceptual evidence of prosodic correlates to word meaning.

    PubMed

    Nygaard, Lynne C; Herold, Debora S; Namy, Laura L

    2009-01-01

    This investigation examined whether speakers produce reliable prosodic correlates to meaning across semantic domains and whether listeners use these cues to derive word meaning from novel words. Speakers were asked to produce phrases in infant-directed speech in which novel words were used to convey one of two meanings from a set of antonym pairs (e.g., big/small). Acoustic analyses revealed that some acoustic features were correlated with overall valence of the meaning. However, each word meaning also displayed a unique acoustic signature, and semantically related meanings elicited similar acoustic profiles. In two perceptual tests, listeners either attempted to identify the novel words with a matching meaning dimension (picture pair) or with mismatched meaning dimensions. Listeners inferred the meaning of the novel words significantly more often when prosody matched the word meaning choices than when prosody mismatched. These findings suggest that speech contains reliable prosodic markers to word meaning and that listeners use these prosodic cues to differentiate meanings. That prosody is semantic suggests a reconceptualization of traditional distinctions between linguistic and nonlinguistic properties of spoken language. Copyright © 2009 Cognitive Science Society, Inc.

  2. Long Term Memory for Noise: Evidence of Robust Encoding of Very Short Temporal Acoustic Patterns.

    PubMed

    Viswanathan, Jayalakshmi; Rémy, Florence; Bacon-Macé, Nadège; Thorpe, Simon J

    2016-01-01

    Recent research has demonstrated that humans are able to implicitly encode and retain repeating patterns in meaningless auditory noise. Our study aimed at testing the robustness of long-term implicit recognition memory for these learned patterns. Participants performed a cyclic/non-cyclic discrimination task, during which they were presented with either 1-s cyclic noises (CNs) (the two halves of the noise were identical) or 1-s plain random noises (Ns). Among CNs and Ns presented once, target CNs were implicitly presented multiple times within a block, and implicit recognition of these target CNs was tested 4 weeks later using a similar cyclic/non-cyclic discrimination task. Furthermore, robustness of implicit recognition memory was tested by presenting participants with looped (shifting the origin) and scrambled (chopping sounds into 10- and 20-ms bits before shuffling) versions of the target CNs. We found that participants had robust implicit recognition memory for learned noise patterns after 4 weeks, right from the first presentation. Additionally, this memory was remarkably resistant to acoustic transformations, such as looping and scrambling of the sounds. Finally, implicit recognition of sounds was dependent on participants' discrimination performance during learning. Our findings suggest that meaningless temporal features as short as 10 ms can be implicitly stored in long-term auditory memory. Moreover, successful encoding and storage of such fine features may vary between participants, possibly depending on individual attention and auditory discrimination abilities. Significance Statement Meaningless auditory patterns could be implicitly encoded and stored in long-term memory. Acoustic transformations of learned meaningless patterns could be implicitly recognized after 4 weeks. Implicit long-term memories can be formed for meaningless auditory features as short as 10 ms. Successful encoding and long-term implicit recognition of meaningless patterns may

  3. Long Term Memory for Noise: Evidence of Robust Encoding of Very Short Temporal Acoustic Patterns

    PubMed Central

    Viswanathan, Jayalakshmi; Rémy, Florence; Bacon-Macé, Nadège; Thorpe, Simon J.

    2016-01-01

    Recent research has demonstrated that humans are able to implicitly encode and retain repeating patterns in meaningless auditory noise. Our study aimed at testing the robustness of long-term implicit recognition memory for these learned patterns. Participants performed a cyclic/non-cyclic discrimination task, during which they were presented with either 1-s cyclic noises (CNs) (the two halves of the noise were identical) or 1-s plain random noises (Ns). Among CNs and Ns presented once, target CNs were implicitly presented multiple times within a block, and implicit recognition of these target CNs was tested 4 weeks later using a similar cyclic/non-cyclic discrimination task. Furthermore, robustness of implicit recognition memory was tested by presenting participants with looped (shifting the origin) and scrambled (chopping sounds into 10- and 20-ms bits before shuffling) versions of the target CNs. We found that participants had robust implicit recognition memory for learned noise patterns after 4 weeks, right from the first presentation. Additionally, this memory was remarkably resistant to acoustic transformations, such as looping and scrambling of the sounds. Finally, implicit recognition of sounds was dependent on participants' discrimination performance during learning. Our findings suggest that meaningless temporal features as short as 10 ms can be implicitly stored in long-term auditory memory. Moreover, successful encoding and storage of such fine features may vary between participants, possibly depending on individual attention and auditory discrimination abilities. Significance Statement Meaningless auditory patterns could be implicitly encoded and stored in long-term memory. Acoustic transformations of learned meaningless patterns could be implicitly recognized after 4 weeks. Implicit long-term memories can be formed for meaningless auditory features as short as 10 ms. Successful encoding and long-term implicit recognition of meaningless patterns may
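    The stimulus manipulations described in these two records (a 1-s cyclic noise built from two identical halves, a looped version with a shifted origin, and a scrambled version made of shuffled 10-ms bits) can be sketched as follows; the sample rate, shift amount, and random seed are arbitrary choices for illustration.

      import numpy as np

      fs = 44100                                           # arbitrary sample rate
      rng = np.random.default_rng(7)

      half = rng.normal(size=fs // 2)
      cyclic_noise = np.concatenate([half, half])          # 1-s cyclic noise: two identical halves

      looped = np.roll(cyclic_noise, fs // 4)              # shift the origin by 250 ms

      bit = int(0.010 * fs)                                # 10-ms bits
      n_bits = len(cyclic_noise) // bit
      bits = cyclic_noise[:n_bits * bit].reshape(n_bits, bit)
      scrambled = bits[rng.permutation(n_bits)].ravel()    # shuffle the bit order

      print(cyclic_noise.shape, looped.shape, scrambled.shape)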

  4. Computational Analyses in Support of Sub-scale Diffuser Testing for the A-3 Facility. Part 3; Aero-Acoustic Analyses and Experimental Validation

    NASA Technical Reports Server (NTRS)

    Allgood, Daniel C.; Graham, Jason S.; McVay, Greg P.; Langford, Lester L.

    2008-01-01

    A unique assessment of acoustic similarity scaling laws and acoustic analogy methodologies in predicting the far-field acoustic signature from a sub-scale altitude rocket test facility at the NASA Stennis Space Center was performed. A directional, point-source similarity analysis was implemented for predicting the acoustic far-field. In this approach, experimental acoustic data obtained from "similar" rocket engine tests were appropriately scaled using key geometric and dynamic parameters. The accuracy of this engineering-level method is discussed by comparing the predictions with acoustic far-field measurements obtained. In addition, a CFD solver was coupled with a Lilley's acoustic analogy formulation to determine the improvement of using a physics-based methodology over an experimental correlation approach. In the current work, steady-state Reynolds-averaged Navier-Stokes calculations were used to model the internal flow of the rocket engine and altitude diffuser. These internal flow simulations provided the necessary realistic input conditions for external plume simulations. The CFD plume simulations were then used to provide the spatial turbulent noise source distributions in the acoustic analogy calculations. Preliminary findings of these studies will be discussed.

  5. Methods of extending signatures and training without ground information. [data processing, pattern recognition

    NASA Technical Reports Server (NTRS)

    Henderson, R. G.; Thomas, G. S.; Nalepka, R. F.

    1975-01-01

    Methods of performing signature extension, using LANDSAT-1 data, are explored. The emphasis is on improving the performance and cost-effectiveness of large-area wheat surveys. Two methods were developed: ASC and MASC. Two other methods, Ratio and RADIFF, previously used with aircraft data, were adapted to and tested on LANDSAT-1 data. An investigation into the sources and nature of between-scene data variations was included. Initial investigations into the selection of training fields without in situ ground truth were undertaken.

  6. Cochlear Implant Microphone Location Affects Speech Recognition in Diffuse Noise

    PubMed Central

    Kolberg, Elizabeth R.; Sheffield, Sterling W.; Davis, Timothy J.; Sunderhaus, Linsey W.; Gifford, René H.

    2015-01-01

    Background Despite improvements in cochlear implants (CIs), CI recipients continue to experience significant communicative difficulty in background noise. Many potential solutions have been proposed to help increase signal-to-noise ratio in noisy environments, including signal processing and external accessories. To date, however, the effect of microphone location on speech recognition in noise has focused primarily on hearing aid users. Purpose The purpose of this study was to (1) measure physical output for the T-Mic as compared with the integrated behind-the-ear (BTE) processor mic for various source azimuths, and (2) to investigate the effect of CI processor mic location for speech recognition in semi-diffuse noise with speech originating from various source azimuths as encountered in everyday communicative environments. Research Design A repeated-measures, within-participant design was used to compare performance across listening conditions. Study Sample A total of 11 adults with Advanced Bionics CIs were recruited for this study. Data Collection and Analysis Physical acoustic output was measured on a Knowles Experimental Mannequin for Acoustic Research (KEMAR) for the T-Mic and BTE mic, with broadband noise presented at 0 and 90° (directed toward the implant processor). In addition to physical acoustic measurements, we also assessed recognition of sentences constructed by researchers at Texas Instruments, the Massachusetts Institute of Technology, and the Stanford Research Institute (TIMIT sentences) at 60 dBA for speech source azimuths of 0, 90, and 270°. Sentences were presented in a semi-diffuse restaurant noise originating from the R-SPACE 8-loudspeaker array. Signal-to-noise ratio was determined individually to achieve approximately 50% correct in the unilateral implanted listening condition with speech at 0°. Performance was compared across the T-Mic, 50/50, and the integrated BTE processor mic. Results The integrated BTE mic provided approximately 5

  7. Cochlear implant microphone location affects speech recognition in diffuse noise.

    PubMed

    Kolberg, Elizabeth R; Sheffield, Sterling W; Davis, Timothy J; Sunderhaus, Linsey W; Gifford, René H

    2015-01-01

    Despite improvements in cochlear implants (CIs), CI recipients continue to experience significant communicative difficulty in background noise. Many potential solutions have been proposed to help increase signal-to-noise ratio in noisy environments, including signal processing and external accessories. To date, however, the effect of microphone location on speech recognition in noise has focused primarily on hearing aid users. The purpose of this study was to (1) measure physical output for the T-Mic as compared with the integrated behind-the-ear (BTE) processor mic for various source azimuths, and (2) to investigate the effect of CI processor mic location for speech recognition in semi-diffuse noise with speech originating from various source azimuths as encountered in everyday communicative environments. A repeated-measures, within-participant design was used to compare performance across listening conditions. A total of 11 adults with Advanced Bionics CIs were recruited for this study. Physical acoustic output was measured on a Knowles Experimental Mannequin for Acoustic Research (KEMAR) for the T-Mic and BTE mic, with broadband noise presented at 0 and 90° (directed toward the implant processor). In addition to physical acoustic measurements, we also assessed recognition of sentences constructed by researchers at Texas Instruments, the Massachusetts Institute of Technology, and the Stanford Research Institute (TIMIT sentences) at 60 dBA for speech source azimuths of 0, 90, and 270°. Sentences were presented in a semi-diffuse restaurant noise originating from the R-SPACE 8-loudspeaker array. Signal-to-noise ratio was determined individually to achieve approximately 50% correct in the unilateral implanted listening condition with speech at 0°. Performance was compared across the T-Mic, 50/50, and the integrated BTE processor mic. The integrated BTE mic provided approximately 5 dB attenuation from 1500-4500 Hz for signals presented at 0° as compared with 90

  8. Acoustic investigation of the aperture dynamics of an elastic membrane closing an overpressurized cylindrical cavity

    NASA Astrophysics Data System (ADS)

    Sánchez, Claudia; Vidal, Valérie; Melo, Francisco

    2015-08-01

    We report an experimental study of the acoustic signal produced by the rupture of an elastic membrane that initially closes a cylindrical overpressurized cavity. This configuration has been recently used as an experimental model system for the investigation of the acoustic emission from the bursting of elongated gas bubbles rising in a conduit. Here, we investigate the effect of the membrane rupture dynamics on the acoustic signal produced by the pressure release by changing the initial tension of the membrane. The initial overpressure in the cavity is fixed at a value such that the system remains in the linear acoustic regime. For large initial membrane deformation, the rupture time τ_rup is small compared to the wave propagation time in the cavity and the pressure wave inside the conduit can be fully captured by the linear theory. For low membrane tension, a hole is pierced in the membrane but its rupture does not occur. For intermediate deformation, finally, the rupture progresses in two steps: first the membrane opens slowly; then, after reaching a critical size, the rupture accelerates. A transversal wave is excited along the membrane surface. The characteristic signature of each opening dynamics on the acoustic emission is described.

  9. The Effects of Acoustic Bandwidth on Simulated Bimodal Benefit in Children and Adults with Normal Hearing.

    PubMed

    Sheffield, Sterling W; Simha, Michelle; Jahn, Kelly N; Gifford, René H

    2016-01-01

    The primary purpose of this study was to examine the effect of acoustic bandwidth on bimodal benefit for speech recognition in normal-hearing children with a cochlear implant (CI) simulation in one ear and low-pass filtered stimuli in the contralateral ear. The effect of acoustic bandwidth on bimodal benefit in children was compared with the pattern of adults with normal hearing. Our hypothesis was that children would require a wider acoustic bandwidth than adults to (1) derive bimodal benefit, and (2) obtain asymptotic bimodal benefit. Nineteen children (6 to 12 years) and 10 adults with normal hearing participated in the study. Speech recognition was assessed via recorded sentences presented in a 20-talker babble. The AzBio female-talker sentences were used for the adults and the pediatric AzBio sentences (BabyBio) were used for the children. A CI simulation was presented to the right ear and low-pass filtered stimuli were presented to the left ear with the following cutoff frequencies: 250, 500, 750, 1000, and 1500 Hz. The primary findings were (1) adults achieved higher performance than children when presented with only low-pass filtered acoustic stimuli, (2) adults and children performed similarly in all the simulated CI and bimodal conditions, (3) children gained significant bimodal benefit with the addition of low-pass filtered speech at 250 Hz, and (4) unlike previous studies completed with adult bimodal patients, adults and children with normal hearing gained additional significant bimodal benefit with cutoff frequencies up to 1500 Hz with most of the additional benefit gained with energy below 750 Hz. Acoustic bandwidth effects on simulated bimodal benefit were similar in children and adults with normal hearing. Should the current results generalize to children with CIs, these results suggest pediatric CI recipients may derive significant benefit from minimal acoustic hearing (<250 Hz) in the nonimplanted ear and increasing benefit with broader bandwidth
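
    A minimal sketch (assumed 22.05 kHz sample rate and 4th-order zero-phase Butterworth filtering, not the study's exact processing chain) of the low-pass filtering applied to the acoustic-hearing ear at the cutoff frequencies listed above:

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        FS = 22050                                  # assumed sample rate (Hz)
        CUTOFFS_HZ = [250, 500, 750, 1000, 1500]    # cutoff frequencies used in the study

        def lowpass(signal, cutoff_hz, fs=FS, order=4):
            """Zero-phase Butterworth low-pass filter for the acoustic-ear stimulus."""
            sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
            return sosfiltfilt(sos, signal)

        speech = np.random.randn(FS)                # placeholder for a recorded sentence
        acoustic_ear_stimuli = {fc: lowpass(speech, fc) for fc in CUTOFFS_HZ}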

  10. Distributed acoustic cues for caller identity in macaque vocalization.

    PubMed

    Fukushima, Makoto; Doyle, Alex M; Mullarkey, Matthew P; Mishkin, Mortimer; Averbeck, Bruno B

    2015-12-01

    Individual primates can be identified by the sound of their voice. Macaques have demonstrated an ability to discern conspecific identity from a harmonically structured 'coo' call. Voice recognition presumably requires the integrated perception of multiple acoustic features. However, it is unclear how this is achieved, given considerable variability across utterances. Specifically, the extent to which information about caller identity is distributed across multiple features remains elusive. We examined these issues by recording and analysing a large sample of calls from eight macaques. Single acoustic features, including fundamental frequency, duration and Wiener entropy, were informative but unreliable for the statistical classification of caller identity. A combination of multiple features, however, allowed for highly accurate caller identification. A regularized classifier that learned to identify callers from the modulation power spectrum of calls found that specific regions of spectral-temporal modulation were informative for caller identification. These ranges are related to acoustic features such as the call's fundamental frequency and FM sweep direction. We further found that the low-frequency spectrotemporal modulation component contained an indexical cue of the caller body size. Thus, cues for caller identity are distributed across identifiable spectrotemporal components corresponding to laryngeal and supralaryngeal components of vocalizations, and the integration of those cues can enable highly reliable caller identification. Our results demonstrate a clear acoustic basis by which individual macaque vocalizations can be recognized.
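
    A minimal sketch (placeholder data, not the authors' analysis) of the comparison described above: a regularized linear classifier trained on a single acoustic feature versus on a combination of features; the feature names in the comments are assumptions.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        features = rng.standard_normal((400, 6))    # placeholder per-call measures (e.g., F0, duration, Wiener entropy)
        caller_id = rng.integers(0, 8, size=400)    # placeholder labels for eight macaques

        clf = make_pipeline(StandardScaler(), LogisticRegression(C=1.0, max_iter=1000))

        # Single acoustic features are informative but unreliable on their own...
        single = cross_val_score(clf, features[:, [0]], caller_id, cv=5).mean()
        # ...whereas combining multiple features supports more accurate caller identification.
        combined = cross_val_score(clf, features, caller_id, cv=5).mean()
        print(f"single feature: {single:.2f}, combined features: {combined:.2f}")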

  11. Distributed acoustic cues for caller identity in macaque vocalization

    PubMed Central

    Doyle, Alex M.; Mullarkey, Matthew P.; Mishkin, Mortimer; Averbeck, Bruno B.

    2015-01-01

    Individual primates can be identified by the sound of their voice. Macaques have demonstrated an ability to discern conspecific identity from a harmonically structured ‘coo’ call. Voice recognition presumably requires the integrated perception of multiple acoustic features. However, it is unclear how this is achieved, given considerable variability across utterances. Specifically, the extent to which information about caller identity is distributed across multiple features remains elusive. We examined these issues by recording and analysing a large sample of calls from eight macaques. Single acoustic features, including fundamental frequency, duration and Wiener entropy, were informative but unreliable for the statistical classification of caller identity. A combination of multiple features, however, allowed for highly accurate caller identification. A regularized classifier that learned to identify callers from the modulation power spectrum of calls found that specific regions of spectral–temporal modulation were informative for caller identification. These ranges are related to acoustic features such as the call’s fundamental frequency and FM sweep direction. We further found that the low-frequency spectrotemporal modulation component contained an indexical cue of the caller body size. Thus, cues for caller identity are distributed across identifiable spectrotemporal components corresponding to laryngeal and supralaryngeal components of vocalizations, and the integration of those cues can enable highly reliable caller identification. Our results demonstrate a clear acoustic basis by which individual macaque vocalizations can be recognized. PMID:27019727

  12. Ionospheric response to infrasonic-acoustic waves generated by natural hazard events

    NASA Astrophysics Data System (ADS)

    Zettergren, M. D.; Snively, J. B.

    2015-09-01

    Recent measurements of GPS-derived total electron content (TEC) reveal acoustic wave periods of ˜1-4 min in the F region ionosphere following natural hazard events, such as earthquakes, severe weather, and volcanoes. Here we simulate the ionospheric responses to infrasonic-acoustic waves, generated by vertical accelerations at the Earth's surface or within the lower atmosphere, using a compressible atmospheric dynamics model to perturb a multifluid ionospheric model. Response dependencies on wave source geometry and spectrum are investigated at middle, low, and equatorial latitudes. Results suggest constraints on wave amplitudes that are consistent with observations and that provide insight on the geographical variability of TEC signatures and their dependence on the geometry of wave velocity field perturbations relative to the ambient geomagnetic field. Asymmetries of responses poleward and equatorward from the wave sources indicate that electron perturbations are enhanced on the equatorward side while field aligned currents are driven principally on the poleward side, due to alignments of acoustic wave velocities parallel and perpendicular to field lines, respectively. Acoustic-wave-driven TEC perturbations are shown to have periods of ˜3-4 min, which are consistent with the fraction of the spectrum that remains following strong dissipation throughout the thermosphere. Furthermore, thermospheric acoustic waves couple with ion sound waves throughout the F region and topside ionosphere, driving plasma disturbances with similar periods and faster phase speeds. The associated magnetic perturbations of the simulated waves are calculated to be observable and may provide new observational insight in addition to that provided by GPS TEC measurements.

  13. Algorithms for Hyperspectral Endmember Extraction and Signature Classification with Morphological Dendritic Networks

    NASA Astrophysics Data System (ADS)

    Schmalz, M.; Ritter, G.

    Accurate multispectral or hyperspectral signature classification is key to the nonimaging detection and recognition of space objects. Additionally, signature classification accuracy depends on accurate spectral endmember determination [1]. Previous approaches to endmember computation and signature classification were based on linear operators or neural networks (NNs) expressed in terms of the algebra (R, +, x) [1,2]. Unfortunately, class separation in these methods tends to be suboptimal, and the number of signatures that can be accurately classified often depends linearly on the number of NN inputs. This can lead to poor endmember distinction, as well as potentially significant classification errors in the presence of noise or densely interleaved signatures. In contrast to traditional NNs, autoassociative morphological memories (AMMs) are a construct similar to Hopfield autoassociative memories defined on the (R, +, ∨, ∧) lattice algebra [3]. Unlimited storage and perfect recall of noiseless real-valued patterns has been proven for AMMs [4]. However, AMMs suffer from sensitivity to specific noise models, which can be characterized as erosive and dilative noise. On the other hand, the prior definition of a set of endmembers corresponds to material spectra lying on vertices of the minimum convex region covering the image data. These vertices can be characterized as morphologically independent patterns. It has further been shown that AMMs can be based on dendritic computation [3,6]. These techniques yield improved accuracy and class segmentation/separation ability in the presence of highly interleaved signature data. In this paper, we present a procedure for endmember determination based on AMM noise sensitivity, which employs morphological dendritic computation. We show that detected endmembers can be exploited by AMM based classification techniques, to achieve accurate signature classification in the presence of noise, closely spaced or interleaved signatures, and
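
    As background for the AMM construct cited above, a minimal NumPy sketch (not the authors' implementation) of the memory W_XX on the (R, +, max, min) algebra: patterns are stored through a minimum of pairwise differences and recalled with a max-plus product, reproducing the perfect noiseless recall property cited from [4].

        import numpy as np

        def build_W(X):
            """X: (k, n) array of k patterns; returns W with w_ij = min over patterns of (x_i - x_j)."""
            diffs = X[:, :, None] - X[:, None, :]   # (k, n, n) pairwise component differences
            return diffs.min(axis=0)

        def recall(W, x):
            """Max-plus product: y_i = max_j (w_ij + x_j)."""
            return (W + x[None, :]).max(axis=1)

        rng = np.random.default_rng(1)
        X = rng.standard_normal((5, 8))             # five 8-dimensional real-valued patterns
        W = build_W(X)
        assert all(np.allclose(recall(W, x), x) for x in X)  # perfect recall of noiseless patterns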

  14. Traversing Microphone Track Installed in NASA Lewis' Aero-Acoustic Propulsion Laboratory Dome

    NASA Technical Reports Server (NTRS)

    Bauman, Steven W.; Perusek, Gail P.

    1999-01-01

    The Aero-Acoustic Propulsion Laboratory is an acoustically treated, 65-ft-tall dome located at the NASA Lewis Research Center. Inside this laboratory is the Nozzle Acoustic Test Rig (NATR), which is used in support of Advanced Subsonics Technology (AST) and High Speed Research (HSR) to test engine exhaust nozzles for thrust and acoustic performance under simulated takeoff conditions. Acoustic measurements had been gathered by a far-field array of microphones located along the dome wall and 10 ft above the floor. Recently, it became desirable to collect acoustic data for engine certifications (as specified by the Federal Aviation Administration (FAA)) that would simulate the noise of an aircraft taking off as heard from an offset ground location. Since nozzles for the High-Speed Civil Transport have straight sides that cause their noise signature to vary radially, an additional plane of acoustic measurement was required. Desired was an arched array of 24 microphones, equally spaced from the nozzle and each other, in a 25° off-vertical plane. The various research requirements made this a challenging task. The microphones needed to be aimed at the nozzle accurately and held firmly in place during testing, but it was also essential that they be easily and routinely lowered to the floor for calibration and servicing. Once serviced, the microphones would have to be returned to their previous location near the ceiling. In addition, there could be no structure between the microphones and the nozzle, and any structure near the microphones would have to be designed to minimize noise reflections. After many concepts were considered, a single arched truss structure was selected that would be permanently affixed to the dome ceiling and to one end of the dome floor.

  15. Isobaric Reconstruction of the Baryonic Acoustic Oscillation

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Yu, Hao-Ran; Zhu, Hong-Ming; Yu, Yu; Pan, Qiaoyin; Pen, Ue-Li

    2017-06-01

    In this Letter, we report a significant recovery of the linear baryonic acoustic oscillation (BAO) signature by applying the isobaric reconstruction algorithm to the nonlinear matter density field. Assuming that only the longitudinal component of the displacement is cosmologically relevant, this algorithm iteratively solves the coordinate transform between the Lagrangian and Eulerian frames without requiring any specific knowledge of the dynamics. For the dark matter field, it produces the nonlinear displacement potential with very high fidelity. The reconstruction error at the pixel level is within a few percent and is caused only by the emergence of the transverse component after the shell-crossing. As it circumvents the strongest nonlinearity of the density evolution, the reconstructed field is well described by linear theory and immune from the bulk-flow smearing of the BAO signature. Therefore, this algorithm could significantly improve the measurement accuracy of the sound horizon scale s. For a perfect large-scale structure survey at redshift zero without Poisson or instrumental noise, the fractional error Δs/s is reduced by a factor of ˜2.7, very close to the ideal limit with the linear power spectrum and Gaussian covariance matrix.

  16. Passive acoustic monitoring to detect spawning in large-bodied catostomids

    USGS Publications Warehouse

    Straight, Carrie A.; Freeman, Byron J.; Freeman, Mary C.

    2014-01-01

    Documenting timing, locations, and intensity of spawning can provide valuable information for conservation and management of imperiled fishes. However, deep, turbid or turbulent water, or occurrence of spawning at night, can severely limit direct observations. We have developed and tested the use of passive acoustics to detect distinctive acoustic signatures associated with spawning events of two large-bodied catostomid species (River Redhorse Moxostoma carinatum and Robust Redhorse Moxostoma robustum) in river systems in north Georgia. We deployed a hydrophone with a recording unit at four different locations on four different dates when we could both record and observe spawning activity. Recordings captured 494 spawning events that we acoustically characterized using dominant frequency, 95% frequency, relative power, and duration. We similarly characterized 46 randomly selected ambient river noises. Dominant frequency did not differ between redhorse species and ranged from 172.3 to 14,987.1 Hz. Duration of spawning events ranged from 0.65 to 11.07 s, River Redhorse having longer durations than Robust Redhorse. Observed spawning events had significantly higher dominant and 95% frequencies than ambient river noises. We additionally tested software designed to automate acoustic detection. The automated detection configurations correctly identified 80–82% of known spawning events, and falsely identified spawns 6–7% of the time when none occurred. These rates were combined over all recordings; rates were more variable among individual recordings. Longer spawning events were more likely to be detected. Combined with sufficient visual observations to ascertain species identities and to estimate detection error rates, passive acoustic recording provides a useful tool to study spawning frequency of large-bodied fishes that displace gravel during egg deposition, including several species of imperiled catostomids.
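
    A minimal sketch (not the authors' analysis software) of the per-event acoustic measures named above, assuming the event waveform is a mono NumPy array sampled at fs Hz:

        import numpy as np

        def characterize(event, fs):
            """Dominant frequency, 95% frequency, relative power, and duration of one event."""
            spectrum = np.abs(np.fft.rfft(event)) ** 2
            freqs = np.fft.rfftfreq(len(event), d=1.0 / fs)
            cumulative = np.cumsum(spectrum) / spectrum.sum()
            return {
                "dominant_freq_hz": freqs[np.argmax(spectrum)],
                "freq_95_hz": freqs[np.searchsorted(cumulative, 0.95)],  # 95% of energy lies below this
                "relative_power_db": 10 * np.log10(np.mean(event ** 2)),
                "duration_s": len(event) / fs,
            }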

  17. UF6 Density and Mass Flow Measurements for Enrichment Plants using Acoustic Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Good, Morris S.; Smith, Leon E.; Warren, Glen A.

    A key enabling capability for enrichment plant safeguards being considered by the International Atomic Energy Agency (IAEA) is high-accuracy, noninvasive, unattended measurement of UF6 gas density and mass flow rate. Acoustic techniques are currently used to noninvasively monitor gas flow in industrial applications; however, the operating pressures at gaseous centrifuge enrichment plants (GCEPs) are roughly two orders of magnitude below the capabilities of commercial instrumentation. Pacific Northwest National Laboratory is refining acoustic techniques for estimating density and mass flow rate of UF6 gas in scenarios typical of GCEPs, with the goal of achieving 1% measurement accuracy. Proof-of-concept laboratory measurements using a surrogate gas for UF6 have demonstrated signatures sensitive to gas density at low operating pressures such as 10–50 Torr, which were observed over the background acoustic interference. Current efforts involve developing a test bed for conducting acoustic measurements on flowing SF6 gas at representative flow rates and pressures to ascertain the viability of conducting gas flow measurements under these conditions. Density and flow measurements will be conducted to support the evaluation. If successful, the approach could enable an unattended, noninvasive approach to measure mass flow in unit header pipes of GCEPs.

  18. Distant Speech Recognition Using a Microphone Array Network

    NASA Astrophysics Data System (ADS)

    Nakano, Alberto Yoshihiro; Nakagawa, Seiichi; Yamamoto, Kazumasa

    In this work, spatial information consisting of the position and orientation angle of an acoustic source is estimated by an artificial neural network (ANN). The estimated position of a speaker in an enclosed space is used to refine the estimated time delays for a delay-and-sum beamformer, thus enhancing the output signal. On the other hand, the orientation angle is used to restrict the lexicon used in the recognition phase, assuming that the speaker faces a particular direction while speaking. To compensate for the effect of the transmission channel inside a short frame analysis window, a new cepstral mean normalization (CMN) method based on a Gaussian mixture model (GMM) is investigated and shows better performance than the conventional CMN for short utterances. The performance of the proposed method is evaluated through Japanese digit/command recognition experiments.
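
    As an illustration of the beamforming step, a minimal delay-and-sum sketch (not the authors' system), assuming known microphone coordinates, the ANN-estimated source position, and a speed of sound of 343 m/s; the refined position only changes the per-channel delays applied before summation.

        import numpy as np

        C = 343.0  # assumed speed of sound (m/s)

        def delay_and_sum(channels, mic_positions, source_position, fs):
            """channels: (n_mics, n_samples) array; positions in metres."""
            dists = np.linalg.norm(mic_positions - source_position, axis=1)
            delays = (dists - dists.min()) / C            # relative arrival delays (s)
            shifts = np.round(delays * fs).astype(int)    # integer-sample alignment
            n = channels.shape[1] - shifts.max()
            aligned = np.stack([ch[s:s + n] for ch, s in zip(channels, shifts)])
            return aligned.mean(axis=0)                   # enhanced output signal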

  19. 77 FR 43370 - TUV Rheinland of North America, Inc.; Application for Expansion of Recognition

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-24

    ... scope of recognition has three elements: (1) The type of products the NRTL may test, with each type... Assistant Secretary will make the final decision on granting the application and, in making this decision... notice of this final decision in the Federal Register. Authority and Signature David Michaels, Ph.D., MPH...

  20. Classifiers utilized to enhance acoustic based sensors to identify round types of artillery/mortar

    NASA Astrophysics Data System (ADS)

    Grasing, David; Desai, Sachi; Morcos, Amir

    2008-04-01

    Feature extraction methods based on the statistical analysis of the change in event pressure levels over a period and the level of ambient pressure excitation facilitate the development of a robust classification algorithm. The features reliably discriminate mortar and artillery variants via acoustic signals produced during the launch events. Acoustic sensors exploit the sound waveform generated by the blast to identify mortar and artillery variants (type A, type B, and so on) through analysis of the waveform. Distinct characteristics arise within the different mortar/artillery variants because varying HE mortar payloads and related charges produce events of varying size at launch. The waveform holds various harmonic properties distinct to a given mortar/artillery variant that, through advanced signal processing and data mining techniques, can be employed to classify a given type. Skewness and other statistical processing techniques are used to extract the predominant components from the acoustic signatures at ranges exceeding 3000 m. Exploiting these techniques helps develop a feature set highly independent of range, providing discrimination based on acoustic elements of the blast wave. Highly reliable discrimination is achieved with a feedforward neural network classifier trained on a feature space derived from the distribution of statistical coefficients, the frequency spectrum, and higher-frequency details found within different energy bands. The processes described herein extend current technologies that use acoustic sensor systems to provide such situational awareness.
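
    A minimal sketch of this kind of pipeline (placeholder data, not the authors' feature set or network): skewness-type statistics and coarse band energies computed from each launch-event waveform feed a feedforward neural network classifier.

        import numpy as np
        from scipy.stats import skew, kurtosis
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import cross_val_score

        def blast_features(x, n_bands=8):
            """Statistical and band-energy features of one segmented launch event."""
            spectrum = np.abs(np.fft.rfft(x)) ** 2
            bands = np.array_split(spectrum, n_bands)           # coarse energy bands
            band_energy = np.log([b.sum() + 1e-12 for b in bands])
            return np.concatenate(([skew(x), kurtosis(x)], band_energy))

        rng = np.random.default_rng(2)
        waveforms = rng.standard_normal((120, 4096))            # placeholder launch events
        labels = rng.integers(0, 3, size=120)                   # placeholder round types

        X = np.array([blast_features(w) for w in waveforms])
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
        print(cross_val_score(clf, X, labels, cv=5).mean())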

  1. Research on the Energy Characteristics of Battlefield Blasting Noise Based on Wavelet Packet

    NASA Astrophysics Data System (ADS)

    Ding, Kai; Yan, Shoucheng; Zhu, Yichao; Zhao, Ming; Mei, Bi

    2017-12-01

    When the acoustic fuse of a smart landmine tries to detect and recognize a ground vehicle target, it is usually affected by gun shooting, explosive blasting, or other similar noises on the actual battlefield. To improve the target recognition of smart landmines, it would be necessary to study the characteristics of these acoustic signals. Using sample data of the shooting noise of a certain type of rifle, the blasting noise of TNT, and the acoustic signals of a certain type of WAV, the energy characteristics of these noise signals are compared and analyzed. The result shows that the wavelet-packet energy method is effective in describing the characteristics of these acoustic signals with distinct intertype variations, and the frequency at the peak energy value can serve as a signature parameter for recognizing battlefield blasting noise signals from vehicle target signals.
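
    A minimal sketch of the wavelet-packet energy feature, assuming PyWavelets, a db4 mother wavelet, and a 4-level decomposition (the abstract does not state the wavelet or depth used):

        import numpy as np
        import pywt

        def wavelet_packet_energy(signal, fs, wavelet="db4", level=4):
            """Normalized energy per terminal wavelet-packet band and the peak-energy frequency."""
            wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
            nodes = wp.get_level(level, order="freq")          # frequency-ordered terminal nodes
            energies = np.array([np.sum(np.square(n.data)) for n in nodes])
            band_width = (fs / 2) / len(nodes)
            peak_band_hz = (np.argmax(energies) + 0.5) * band_width
            return energies / energies.sum(), peak_band_hz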

  2. Method of Adjusting Acoustic Impedances for Impedance-Tunable Acoustic Segments

    NASA Technical Reports Server (NTRS)

    Jones, Kennie H (Inventor); Nark, Douglas M. (Inventor); Jones, Michael G. (Inventor); Parrott, Tony L. (Inventor); Lodding, Kenneth N. (Inventor)

    2012-01-01

    A method is provided for making localized decisions and taking localized actions to achieve a global solution. In an embodiment of the present invention, acoustic impedances for impedance-tunable acoustic segments are adjusted. A first acoustic segment through an N-th acoustic segment are defined. To start the process, the first acoustic segment is designated as a leader and a noise-reducing impedance is determined therefor. This is accomplished using (i) one or more metrics associated with the acoustic wave at the leader, and (ii) the metric(s) associated with the acoustic wave at the N-th acoustic segment. The leader, the N-th acoustic segment, and each of the acoustic segments exclusive of the leader and the N-th acoustic segment, are tuned to the noise-reducing impedance. The current leader is then excluded from subsequent processing steps. The designation of leader is then given to one of the remaining acoustic segments, and the process is repeated for each of the acoustic segments through an (N-1)-th one of the acoustic segments.
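
    A minimal procedural sketch of the leader scheme described above; the metric-measurement and impedance-selection functions, and the set_impedance method, are hypothetical placeholders since the abstract does not specify them.

        def tune_segments(segments, measure_metric, choose_impedance):
            """segments: impedance-tunable segment objects, ordered 1..N."""
            active = list(segments)
            last = segments[-1]                      # the N-th acoustic segment
            while len(active) > 1:                   # leaders run from segment 1 to segment N-1
                leader = active[0]
                z = choose_impedance(measure_metric(leader), measure_metric(last))
                for seg in active:                   # tune leader, N-th segment, and all others still active
                    seg.set_impedance(z)
                active.pop(0)                        # exclude the current leader from later steps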

  3. Self-recognition of avatar motion: how do I know it's me?

    PubMed

    Cook, Richard; Johnston, Alan; Heyes, Cecilia

    2012-02-22

    When motion is isolated from form cues and viewed from third-person perspectives, individuals are able to recognize their own whole body movements better than those of friends. Because we rarely see our own bodies in motion from third-person viewpoints, this self-recognition advantage may indicate a contribution to perception from the motor system. Our first experiment provides evidence that recognition of self-produced and friends' motion dissociate, with only the latter showing sensitivity to orientation. Through the use of selectively disrupted avatar motion, our second experiment shows that self-recognition of facial motion is mediated by knowledge of the local temporal characteristics of one's own actions. Specifically, inverted self-recognition was unaffected by disruption of feature configurations and trajectories, but eliminated by temporal distortion. While actors lack third-person visual experience of their actions, they have a lifetime of proprioceptive, somatosensory, vestibular and first-person-visual experience. These sources of contingent feedback may provide actors with knowledge about the temporal properties of their actions, potentially supporting recognition of characteristic rhythmic variation when viewing self-produced motion. In contrast, the ability to recognize the motion signatures of familiar others may be dependent on configural topographic cues.

  4. Nonlinear acoustic propagation of launch vehicle and military jet aircraft noise

    NASA Astrophysics Data System (ADS)

    Gee, Kent L.

    2010-10-01

    The noise from launch vehicles and high-performance military jet aircraft has been shown to travel nonlinearly as a result of an amplitude-dependent speed of sound. Because acoustic pressure compressions travel faster than rarefactions, the waveform steepens and shocks form. This process results in a very different (and readily audible) noise signature and spectrum than predicted by linear models. On-going efforts to characterize the nonlinearity using statistical and spectral measures are described with examples from recent static tests of solid rocket boosters and the F-22 Raptor.

  5. Signature analysis of ballistic missile warhead with micro-nutation in terahertz band

    NASA Astrophysics Data System (ADS)

    Li, Ming; Jiang, Yue-song

    2013-08-01

    In recent years, the micro-Doppler effect has been proposed as a new technique for signature analysis and extraction of radar targets. The ballistic missile is known as a typical radar target and has received much attention in current research because of the complexity of its motions. The trajectory of a ballistic missile can be generally divided into three stages: boost phase, midcourse phase and terminal phase. The midcourse phase is the most important phase for radar target recognition and interception. In this stage, the warhead forms a typical micro-motion called micro-nutation which consists of three basic micro-motions: spinning, coning and wiggle. This paper addresses the issue of signature analysis of a ballistic missile warhead in the terahertz band by discussing the micro-Doppler effect. We establish a simplified (cone-shaped) model for the missile warhead followed by the micro-motion models of spinning, coning and wiggle. Based on the basic formulas of these typical micro-motions, we first derive the theoretical formula of micro-nutation, which is the main micro-motion of the missile warhead. Then, we calculate the micro-Doppler frequency in both the X band and the terahertz band via these micro-Doppler formulas. The simulations are given to show the superiority of our proposed method for the recognition and detection of radar micro-targets in the terahertz band.
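
    The abstract does not reproduce its formulas; as background, the standard monostatic micro-Doppler relation (an assumption here, not necessarily the paper's exact derivation) for a scatterer with radial range r(t) observed at carrier frequency f_c is

        f_{\mathrm{mD}}(t) = \frac{2 f_c}{c}\,\frac{\mathrm{d}r(t)}{\mathrm{d}t},
        \qquad
        r(t) \approx R_0 + a\cos(\omega t + \theta_0)
        \;\Rightarrow\;
        f_{\mathrm{mD}}(t) = -\frac{2 a \omega f_c}{c}\,\sin(\omega t + \theta_0),

    where a and ω describe a circular micro-motion component. Because the shift scales linearly with f_c, terahertz carriers magnify micro-motion signatures by orders of magnitude relative to the X band.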

  6. Joint sparse representation for robust multimodal biometrics recognition.

    PubMed

    Shekhar, Sumit; Patel, Vishal M; Nasrabadi, Nasser M; Chellappa, Rama

    2014-01-01

    Traditional biometric recognition systems rely on a single biometric signature for authentication. While the advantage of using multiple sources of information for establishing the identity has been widely recognized, computational models for multimodal biometrics recognition have only recently received attention. We propose a multimodal sparse representation method, which represents the test data by a sparse linear combination of training data, while constraining the observations from different modalities of the test subject to share their sparse representations. Thus, we simultaneously take into account correlations as well as coupling information among biometric modalities. A multimodal quality measure is also proposed to weigh each modality as it gets fused. Furthermore, we also kernelize the algorithm to handle nonlinearity in data. The optimization problem is solved using an efficient alternating direction method. Various experiments show that the proposed method compares favorably with competing fusion-based methods.
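
    One common way to write such a joint-sparsity objective (a sketch of the idea, not necessarily the paper's exact formulation, which also incorporates quality weights and a kernelized variant):

        \min_{C=[\mathbf{c}^1,\dots,\mathbf{c}^M]}\;
          \frac{1}{2}\sum_{m=1}^{M}\bigl\|\mathbf{y}^m-\mathbf{D}^m\mathbf{c}^m\bigr\|_2^2
          \;+\;\lambda\,\|C\|_{1,2},
        \qquad
        \|C\|_{1,2}=\sum_{i}\bigl\|C_{i,:}\bigr\|_2,

    where D^m collects the training samples of modality m, y^m is the test observation for that modality, and the l_{1,2} penalty forces the per-modality coefficient vectors to select a common set of training samples, i.e., to share their sparsity pattern.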

  7. Translational illusion of acoustic sources by transformation acoustics.

    PubMed

    Sun, Fei; Li, Shichao; He, Sailing

    2017-09-01

    An acoustic illusion of creating a translated acoustic source is designed by utilizing transformation acoustics. An acoustic source shifter (ASS) composed of layered acoustic metamaterials is designed to achieve such an illusion. A practical example where the ASS is made with naturally available materials is also given. Numerical simulations verify the performance of the proposed device. The designed ASS may have some applications in, e.g., anti-sonar detection.

  8. Obligatory and facultative brain regions for voice-identity recognition.

    PubMed

    Roswandowitz, Claudia; Kappes, Claudia; Obrig, Hellmuth; von Kriegstein, Katharina

    2018-01-01

    Recognizing the identity of others by their voice is an important skill for social interactions. To date, it remains controversial which parts of the brain are critical structures for this skill. Based on neuroimaging findings, standard models of person-identity recognition suggest that the right temporal lobe is the hub for voice-identity recognition. Neuropsychological case studies, however, reported selective deficits of voice-identity recognition in patients predominantly with right inferior parietal lobe lesions. Here, our aim was to work towards resolving the discrepancy between neuroimaging studies and neuropsychological case studies to find out which brain structures are critical for voice-identity recognition in humans. We performed a voxel-based lesion-behaviour mapping study in a cohort of patients (n = 58) with unilateral focal brain lesions. The study included a comprehensive behavioural test battery on voice-identity recognition of newly learned (voice-name, voice-face association learning) and familiar voices (famous voice recognition) as well as visual (face-identity recognition) and acoustic control tests (vocal-pitch and vocal-timbre discrimination). The study also comprised clinically established tests (neuropsychological assessment, audiometry) and high-resolution structural brain images. The three key findings were: (i) a strong association between voice-identity recognition performance and right posterior/mid temporal and right inferior parietal lobe lesions; (ii) a selective association between right posterior/mid temporal lobe lesions and voice-identity recognition performance when face-identity recognition performance was factored out; and (iii) an association of right inferior parietal lobe lesions with tasks requiring the association between voices and faces but not voices and names. The results imply that the right posterior/mid temporal lobe is an obligatory structure for voice-identity recognition, while the inferior parietal lobe is

  9. [Perception of emotional intonation of noisy speech signal with different acoustic parameters by adults of different age and gender].

    PubMed

    Dmitrieva, E S; Gel'man, V Ia

    2011-01-01

    The listener-distinctive features of recognition of different emotional intonations (positive, negative and neutral) of male and female speakers in the presence or absence of background noise were studied in 49 adults aged 20-79 years. In all the listeners, noise produced the most pronounced decrease in recognition accuracy for positive emotional intonation ("joy") as compared to other intonations, whereas it did not influence the recognition accuracy of "anger" in 65-79-year-old listeners. Higher emotion recognition rates for a noisy signal were observed for emotional intonations expressed by female speakers. Acoustic characteristics of noisy and clear speech signals underlying perception of speech emotional prosody were found for adult listeners of different age and gender.

  10. Spectroscopic Signatures Related to a Sunquake

    NASA Astrophysics Data System (ADS)

    Matthews, S. A.; Harra, L. K.; Zharkov, S.; Green, L. M.

    2015-10-01

    The presence of flare-related acoustic emission (sunquakes (SQs)) in some flares, and only in specific locations within the flaring environment, represents a severe challenge to our current understanding of flare energy transport processes. In an attempt to contribute to understanding the origins of SQs we present a comparison of new spectral observations from Hinode’s EUV imaging Spectrometer (EIS) and the Interface Region Imaging Spectrograph (IRIS) of the chromosphere, transition region, and corona above an SQ, and compare them to the spectra observed in a part of the flaring region with no acoustic signature. Evidence for the SQ is determined using both time-distance and acoustic holography methods, and we find that unlike many previous SQ detections, the signal is rather dispersed, but that the time-distance and 6 and 7 mHz sources converge at the same spatial location. We also see some evidence for different evolution at different frequencies, with an earlier peak at 7 mHz than at 6 mHz. Using EIS and IRIS spectroscopic measurements we find that in this location, at the time of the 7 mHz peak the spectral emission is significantly more intense, shows larger velocity shifts and substantially broader profiles than in the location with no SQ, and there is a good correlation between blueshifted, hot coronal, hard X-ray (HXR), and redshifted chromospheric emission, consistent with the idea of a strong downward motion driven by rapid heating by nonthermal electrons and the formation of chromospheric shocks. Exploiting the diagnostic potential of the Mg ii triplet lines, we also find evidence for a single large temperature increase deep in the atmosphere, which is consistent with this scenario. The time of the 6 mHz and time-distance peak signal coincides with a secondary peak in the energy release process, but in this case we find no evidence of HXR emission in the quake location, instead finding very broad spectral lines, strongly shifted to the red, indicating

  11. Fine epitope signature of antibody neutralization breadth at the HIV-1 envelope CD4-binding site.

    PubMed

    Cheng, Hao D; Grimm, Sebastian K; Gilman, Morgan Sa; Gwom, Luc Christian; Sok, Devin; Sundling, Christopher; Donofrio, Gina; Hedestam, Gunilla B Karlsson; Bonsignori, Mattia; Haynes, Barton F; Lahey, Timothy P; Maro, Isaac; von Reyn, C Fordham; Gorny, Miroslaw K; Zolla-Pazner, Susan; Walker, Bruce D; Alter, Galit; Burton, Dennis R; Robb, Merlin L; Krebs, Shelly J; Seaman, Michael S; Bailey-Kellogg, Chris; Ackerman, Margaret E

    2018-03-08

    Major advances in donor identification, antigen probe design, and experimental methods to clone pathogen-specific antibodies have led to an exponential growth in the number of newly characterized broadly neutralizing antibodies (bnAbs) that recognize the HIV-1 envelope glycoprotein. Characterization of these bnAbs has defined new epitopes and novel modes of recognition that can result in potent neutralization of HIV-1. However, the translation of envelope recognition profiles in biophysical assays into an understanding of in vivo activity has lagged behind, and identification of subjects and mAbs with potent antiviral activity has remained reliant on empirical evaluation of neutralization potency and breadth. To begin to address this discrepancy between recombinant protein recognition and virus neutralization, we studied the fine epitope specificity of a panel of CD4-binding site (CD4bs) antibodies to define the molecular recognition features of functionally potent humoral responses targeting the HIV-1 envelope site bound by CD4. Whereas previous studies have used neutralization data and machine-learning methods to provide epitope maps, here, this approach was reversed, demonstrating that simple binding assays of fine epitope specificity can prospectively identify broadly neutralizing CD4bs-specific mAbs. Building on this result, we show that epitope mapping and prediction of neutralization breadth can also be accomplished in the assessment of polyclonal serum responses. Thus, this study identifies a set of CD4bs bnAb signature amino acid residues and demonstrates that sensitivity to mutations at signature positions is sufficient to predict neutralization breadth of polyclonal sera with a high degree of accuracy across cohorts and across clades.

  12. Design and Integration of a Rotor Alone Nacelle for Acoustic Fan Testing

    NASA Technical Reports Server (NTRS)

    Shook, Tony D.; Hughes, Christoper E.; Thompson, William K.; Tavernelli, Paul F.; Cunningham, Cameron C.; Shah, Ashwin

    2001-01-01

    A brief summary of the design, integration and testing of a rotor alone nacelle (RAN) in NASA Glenn's 9' x 15' Low Speed Wind Tunnel (LSWT) is presented. The purpose of the RAN system was to provide an "acoustically clean" flow path within the nacelle to isolate that portion of the total engine system acoustic signature attributed to fan noise. The RAN design accomplished this by removing the stators that provided internal support to the nacelle. In its place, two external struts mounted to a two-axis positioning table located behind the tunnel wall provided the support. Nacelle-mounted lasers and a closed-loop control system provided the input to the table to maintain nacelle to fan concentricity as thermal and thrust loads displaced the strut-mounted fan. This unique design required extensive analysis and verification testing to ensure the safety of the fan model, propulsion simulator drive rig, and facility, along with experimental consistency of acoustic data obtained while using the RAN system. Initial testing was used to optimize the positioning system and resulted in concentricity errors of +/- 0.0031 in. in the horizontal direction and +0.0035/-0.0013 in. in the vertical direction. As a result of successful testing, the RAN system will be transitioned into other acoustic research programs at NASA Glenn Research Center.

  13. 2D DOST based local phase pattern for face recognition

    NASA Astrophysics Data System (ADS)

    Moniruzzaman, Md.; Alam, Mohammad S.

    2017-05-01

    A new two dimensional (2-D) Discrete Orthogonal Stockwell Transform (DOST) based Local Phase Pattern (LPP) technique has been proposed for efficient face recognition. The proposed technique uses the 2-D DOST as preliminary preprocessing and the local phase pattern to form a robust feature signature which can effectively accommodate various 3D facial distortions and illumination variations. The S-transform, an extension of the ideas of the continuous wavelet transform (CWT), is also known for its local spectral phase properties in time-frequency representation (TFR). It provides a frequency-dependent resolution of the time-frequency space and absolutely referenced local phase information while maintaining a direct relationship with the Fourier spectrum, which is unique in TFR. Utilizing the 2-D S-transform as preprocessing and building the local phase pattern from the extracted phase information yields a fast and efficient technique for face recognition. The proposed technique shows better correlation discrimination compared to alternate pattern recognition techniques such as wavelet- or Gabor-based face recognition. The performance of the proposed method has been tested using the Yale and extended Yale facial databases under different environments such as illumination variation and 3D changes in facial expressions. Test results show that the proposed technique yields better performance compared to alternate time-frequency representation (TFR) based face recognition techniques.

  14. Numerical investigation and Uncertainty Quantification of the Impact of the geological and geomechanical properties on the seismo-acoustic responses of underground chemical explosions

    NASA Astrophysics Data System (ADS)

    Ezzedine, S. M.; Pitarka, A.; Vorobiev, O.; Glenn, L.; Antoun, T.

    2017-12-01

    We have performed three-dimensional high resolution simulations of underground chemical explosions conducted recently in jointed rock outcrop as part of the Source Physics Experiments (SPE) being conducted at the Nevada National Security Site (NNSS). The main goal of the current study is to investigate the effects of the structural and geomechanical properties on the spall phenomena due to underground chemical explosions and their subsequent effect on the seismo-acoustic signature at far distances. Two parametric studies have been undertaken to assess the impact of 1) different conceptual geological models, including single-layer and two-layer models, with and without joints and with and without varying geomechanical properties, and 2) the depth of burst and yield of the chemical explosions. Through these investigations we have explored not only the near-field response of the chemical explosions but also the far-field responses of the seismic and acoustic signatures. The near-field simulations were conducted using the Eulerian and Lagrangian codes, GEODYN and GEODYN-L, respectively, while the far-field seismic simulations were conducted using the elastic wave propagation code, WPP, and the acoustic response using the Kirchhoff-Helmholtz-Rayleigh time-dependent approximation code, KHR. Through a series of simulations we have recorded the velocity field histories 1) at the ground surface on an acoustic-source patch for the acoustic simulations, and 2) on a seismic-source box for the seismic simulations. We first analyzed the SPE3 experimental data and simulated results, then simulated SPE4-prime, SPE5, and SPE6 to anticipate their seismo-acoustic responses under conditions of uncertainty. SPE experiments were conducted in a granitic formation; we have extended the parametric study to include other geological settings such as dolomite and alluvial formations. These parametric studies enabled us to investigate the key geotechnical and geophysical parameters

  15. Breakdown of the Debye approximation for the acoustic modes with nanometric wavelengths in glasses

    PubMed Central

    Monaco, Giulio; Giordano, Valentina M.

    2009-01-01

    On the macroscopic scale, the wavelengths of sound waves in glasses are large enough that the details of the disordered microscopic structure are usually irrelevant, and the medium can be considered as a continuum. On decreasing the wavelength this approximation must of course fail at one point. We show here that this takes place unexpectedly on the mesoscopic scale characteristic of the medium range order of glasses, where it still works well for the corresponding crystalline phases. Specifically, we find that the acoustic excitations with nanometric wavelengths show the clear signature of being strongly scattered, indicating the existence of a cross-over between well-defined acoustic modes for larger wavelengths and ill-defined ones for smaller wavelengths. This cross-over region is accompanied by a softening of the sound velocity that quantitatively accounts for the excess observed in the vibrational density of states of glasses over the Debye level at energies of a few milli-electronvolts. These findings thus highlight the acoustic contribution to the well-known universal low-temperature anomalies found in the specific heat of glasses. PMID:19240211

  16. Analysis of the acoustic spectral signature of prosthetic heart valves in patients experiencing atrial fibrillation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, D.D.; Jones, H.E.

    1994-05-06

    Prosthetic heart valves have increased the life span of many patients with life threatening heart conditions. These valves have proven extremely reliable, adding years to what would have been weeks of a patient's life. Prosthetic valves, however, like the heart itself, can suffer from this constant workload. A small number of valves have experienced structural fractures of the outlet strut due to fatigue. To study this problem a non-intrusive method to classify valves has been developed. By extracting from an acoustic signal the opening sounds which directly contain information from the outlet strut and then developing features which are supplied to an adaptive classification scheme (neural network), the condition of the valve can be determined. The opening sound extraction process has proved to be a classification problem itself. Due to the uniqueness of each heart and the occasional irregularity of the acoustic pattern it is often questionable as to the integrity of a given signal (beat), especially one occurring during an irregular beat pattern. A common cause of these irregular patterns is a condition known as atrial fibrillation, a prevalent arrhythmia among patients with prosthetic heart valves. Atrial fibrillation is suspected when the ECG shows no obvious P-waves. The atria do not contract and relax correctly to help contribute to ventricular filling during a normal cardiac cycle. Sometimes this leads to irregular patterns in the acoustic data. This study compares normal beat patterns to irregular patterns of the same heart. By analyzing the spectral content of the beats it can be determined whether or not these irregular patterns can contribute to the classification of a heart valve or if they should be avoided. The results have shown that the opening sounds which occur during irregular beat patterns contain the same spectral information as the opening sounds which occur during a normal beat pattern of the same heart, and these beats can be used for classification.

  17. Acoustic Event Detection and Classification

    NASA Astrophysics Data System (ADS)

    Temko, Andrey; Nadeu, Climent; Macho, Dušan; Malkin, Robert; Zieger, Christian; Omologo, Maurizio

    The human activity that takes place in meeting rooms or classrooms is reflected in a rich variety of acoustic events (AE), produced either by the human body or by objects handled by humans, so the determination of both the identity of sounds and their position in time may help to detect and describe that human activity. Indeed, speech is usually the most informative sound, but other kinds of AEs may also carry useful information, for example, clapping or laughing inside a speech, a strong yawn in the middle of a lecture, a chair moving or a door slam when the meeting has just started. Additionally, detection and classification of sounds other than speech may be useful to enhance the robustness of speech technologies like automatic speech recognition.

  18. Elastomeric negative acoustic contrast particles for affinity capture assays.

    PubMed

    Cushing, Kevin W; Piyasena, Menake E; Carroll, Nick J; Maestas, Gian C; López, Beth Ann; Edwards, Bruce S; Graves, Steven W; López, Gabriel P

    2013-02-19

    This report describes the development of elastomeric capture microparticles (ECμPs) and their use with acoustophoretic separation to perform microparticle assays via flow cytometry. We have developed simple methods to form ECμPs by cross-linking droplets of common commercially available silicone precursors in suspension followed by surface functionalization with biomolecular recognition reagents. The ECμPs are compressible particles that exhibit negative acoustic contrast in ultrasound when suspended in aqueous media, blood serum, or diluted blood. In this study, these particles have been functionalized with antibodies to bind prostate specific antigen and immunoglobulin (IgG). Specific separation of the ECμPs from blood cells is achieved by flowing them through a microfluidic acoustophoretic device that uses an ultrasonic standing wave to align the blood cells, which exhibit positive acoustic contrast, at a node in the acoustic pressure distribution while aligning the negative acoustic contrast ECμPs at the antinodes. Laminar flow of the separated particles to downstream collection ports allows for collection of the separated negative contrast (ECμPs) and positive contrast particles (cells). Separated ECμPs were analyzed via flow cytometry to demonstrate nanomolar detection for prostate specific antigen in aqueous buffer and picomolar detection for IgG in plasma and diluted blood samples. This approach has potential applications in the development of rapid assays that detect the presence of low concentrations of biomarkers in a number of biological sample types.

  19. Elastomeric Negative Acoustic Contrast Particles for Affinity Capture Assays

    PubMed Central

    Cushing, Kevin W.; Piyasena, Menake E.; Carroll, Nick J.; Maestas, Gian C.; López, Beth Ann; Edwards, Bruce S.; Graves, Steven W.; López, Gabriel P.

    2013-01-01

    This report describes the development of elastomeric capture microparticles (ECμPs) and their use with acoustophoretic separation to perform microparticle assays via flow cytometry. We have developed simple methods to form ECμPs by crosslinking droplets of common commercially available silicone precursors in suspension followed by surface functionalization with biomolecular recognition reagents. The ECμPs are compressible particles that exhibit negative acoustic contrast in ultrasound when suspended in aqueous media, blood serum or diluted blood. In this study, these particles have been functionalized with antibodies to bind prostate specific antigen and immunoglobulin (IgG). Specific separation of the ECμPs from blood cells is achieved by flowing them through a microfluidic acoustophoretic device that uses an ultrasonic standing wave to align the blood cells, which exhibit positive acoustic contrast, at a node in the acoustic pressure distribution while aligning the negative acoustic contrast ECμPs at the antinodes. Laminar flow of the separated particles to downstream collection ports allows for collection of the separated negative contrast (ECμPs) and positive contrast particles (cells). Separated ECμPs were analyzed via flow cytometry to demonstrate nanomolar detection for prostate specific antigen in aqueous buffer and picomolar detection for IgG in plasma and diluted blood samples. This approach has potential applications in the development of rapid assays that detect the presence of low concentrations of biomarkers in a number of biological sample types. PMID:23331264

  20. Molecular recognition in gas sensing: Results from acoustic wave and in-situ FTIR measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hierlemann, A.; Ricco, A.J.; Bodenhoefer, K.

    Surface acoustic wave (SAW) measurements were combined with direct, in-situ molecular spectroscopy to understand the interactions of surface-confined sensing films with gas-phase analytes. This was accomplished by collecting Fourier-transform infrared external-reflectance spectra (FTIR-ERS) on operating SAW devices during dosing of their specifically coated surfaces with key analytes.

  1. Cosmological Signatures of a Mirror Twin Higgs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacko, Zackaria; Curtin, David; Geller, Michael

    We explore the cosmological signatures associated with the twin baryons, electrons, photons and neutrinos in the Mirror Twin Higgs framework. We consider a scenario in which the twin baryons constitute a subcomponent of dark matter, and the contribution of the twin photon and neutrinos to dark radiation is suppressed due to late asymmetric reheating, but remains large enough to be detected in future cosmic microwave background (CMB) experiments. We show that this framework can lead to distinctive signals in large scale structure and in the cosmic microwave background. Baryon acoustic oscillations in the mirror sector prior to recombination lead to a suppression of structure on large scales, and leave a residual oscillatory pattern in the matter power spectrum. This pattern depends sensitively on the relative abundances and ionization energies of both twin hydrogen and helium, and is therefore characteristic of this class of models. Although both mirror photons and neutrinos constitute dark radiation in the early universe, their effects on the CMB are distinct. This is because prior to recombination the twin neutrinos free stream, while the twin photons are prevented from free streaming by scattering off twin electrons. In the Mirror Twin Higgs framework the relative contributions of these two species to the energy density in dark radiation is predicted, leading to testable effects in the CMB. These highly distinctive cosmological signatures may allow this class of models to be discovered, and distinguished from more general dark sectors.

  2. Probability-Based Recognition Framework for Underwater Landmarks Using Sonar Images †.

    PubMed

    Lee, Yeongjun; Choi, Jinwoo; Ko, Nak Yong; Choi, Hyun-Taek

    2017-08-24

    This paper proposes a probability-based framework for recognizing underwater landmarks using sonar images. Current recognition methods use a single image, which does not provide reliable results because of weaknesses of sonar imagery such as an unstable acoustic source, heavy speckle noise, low resolution, and a single image channel. However, using consecutive sonar images, if the status-i.e., the existence and identity (or name)-of an object is continuously evaluated by a stochastic method, the recognition result can be accompanied by an uncertainty estimate, making it more suitable for various applications. Our proposed framework consists of three steps: (1) candidate selection, (2) continuity evaluation, and (3) Bayesian feature estimation. Two probability methods-particle filtering and Bayesian feature estimation-are used to repeatedly estimate the continuity and features of objects in consecutive images. Thus, the status of the object is repeatedly predicted and updated by a stochastic method. Furthermore, we develop an artificial landmark to increase its detectability by an imaging sonar, exploiting characteristics of acoustic waves such as instability and reflection that depend on the roughness of the reflector surface. The proposed method is verified by conducting basin experiments, and the results are presented.

  3. Probability-Based Recognition Framework for Underwater Landmarks Using Sonar Images †

    PubMed Central

    Choi, Jinwoo; Choi, Hyun-Taek

    2017-01-01

    This paper proposes a probability-based framework for recognizing underwater landmarks using sonar images. Current recognition methods use a single image, which does not provide reliable results because of weaknesses of sonar imagery such as an unstable acoustic source, heavy speckle noise, low resolution, and a single image channel. However, using consecutive sonar images, if the status-i.e., the existence and identity (or name)-of an object is continuously evaluated by a stochastic method, the recognition result can be accompanied by an uncertainty estimate, making it more suitable for various applications. Our proposed framework consists of three steps: (1) candidate selection, (2) continuity evaluation, and (3) Bayesian feature estimation. Two probability methods-particle filtering and Bayesian feature estimation-are used to repeatedly estimate the continuity and features of objects in consecutive images. Thus, the status of the object is repeatedly predicted and updated by a stochastic method. Furthermore, we develop an artificial landmark to increase its detectability by an imaging sonar, exploiting characteristics of acoustic waves such as instability and reflection that depend on the roughness of the reflector surface. The proposed method is verified by conducting basin experiments, and the results are presented. PMID:28837068
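
    The continuity-evaluation and Bayesian-update steps described above lend themselves to a compact illustration. The sketch below is not the authors' implementation: it assumes hypothetical per-frame candidate positions and per-class likelihoods from an upstream detector, and shows how a bootstrap particle filter plus a Bayes update can maintain the position and identity status of one landmark across consecutive sonar frames.

    ```python
    # Minimal sketch: particle-filter continuity tracking plus Bayesian identity update
    # for one landmark candidate observed in consecutive sonar frames (toy inputs).
    import numpy as np

    rng = np.random.default_rng(0)

    def particle_filter_step(particles, weights, detection, motion_std=0.5, meas_std=1.0):
        """One predict/update/resample cycle for a 2-D landmark position."""
        # Predict: diffuse particles to account for platform/sonar motion uncertainty.
        particles = particles + rng.normal(0.0, motion_std, particles.shape)
        # Update: weight particles by a Gaussian likelihood of the detected position.
        d2 = np.sum((particles - detection) ** 2, axis=1)
        weights = weights * np.exp(-0.5 * d2 / meas_std ** 2)
        weights /= weights.sum()
        # Resample to avoid weight degeneracy.
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        return particles[idx], np.full(len(particles), 1.0 / len(particles))

    def bayes_identity_update(prior, class_likelihoods):
        """Update P(identity) of the tracked object from one frame's classifier output."""
        posterior = prior * class_likelihoods
        return posterior / posterior.sum()

    # Toy run over two consecutive frames with two hypothetical landmark classes.
    particles = rng.normal([10.0, 5.0], 2.0, size=(500, 2))
    weights = np.full(500, 1.0 / 500)
    identity = np.array([0.5, 0.5])
    for detection, likelihood in [([10.2, 5.1], [0.7, 0.3]), ([10.4, 5.0], [0.8, 0.2])]:
        particles, weights = particle_filter_step(particles, weights, np.array(detection))
        identity = bayes_identity_update(identity, np.array(likelihood))
    print("position estimate:", particles.mean(axis=0), "identity belief:", identity)
    ```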

  4. Fundamentals of Acoustics. Psychoacoustics and Hearing. Acoustical Measurements

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Ahumada, Al (Technical Monitor)

    1997-01-01

    These are 3 chapters that will appear in a book titled "Building Acoustical Design", edited by Charles Salter. They are designed to introduce the reader to fundamental concepts of acoustics, particularly as they relate to the built environment. "Fundamentals of Acoustics" reviews basic concepts of sound waveform frequency, pressure, and phase. "Psychoacoustics and Hearing" discusses the human interpretation of sound pressure as loudness, particularly as a function of frequency. "Acoustical Measurements" gives a simple overview of the time and frequency weightings for sound pressure measurements that are used in acoustical work.

  5. Neuroscience-inspired computational systems for speech recognition under noisy conditions

    NASA Astrophysics Data System (ADS)

    Schafer, Phillip B.

    Humans routinely recognize speech in challenging acoustic environments with background music, engine sounds, competing talkers, and other acoustic noise. However, today's automatic speech recognition (ASR) systems perform poorly in such environments. In this dissertation, I present novel methods for ASR designed to approach human-level performance by emulating the brain's processing of sounds. I exploit recent advances in auditory neuroscience to compute neuron-based representations of speech, and design novel methods for decoding these representations to produce word transcriptions. I begin by considering speech representations modeled on the spectrotemporal receptive fields of auditory neurons. These representations can be tuned to optimize a variety of objective functions, which characterize the response properties of a neural population. I propose an objective function that explicitly optimizes the noise invariance of the neural responses, and find that it gives improved performance on an ASR task in noise compared to other objectives. The method as a whole, however, fails to significantly close the performance gap with humans. I next consider speech representations that make use of spiking model neurons. The neurons in this method are feature detectors that selectively respond to spectrotemporal patterns within short time windows in speech. I consider a number of methods for training the response properties of the neurons. In particular, I present a method using linear support vector machines (SVMs) and show that this method produces spikes that are robust to additive noise. I compute the spectrotemporal receptive fields of the neurons for comparison with previous physiological results. To decode the spike-based speech representations, I propose two methods designed to work on isolated word recordings. The first method uses a classical ASR technique based on the hidden Markov model. The second method is a novel template-based recognition scheme that takes

  6. Mandarin Chinese Tone Identification in Cochlear Implants: Predictions from Acoustic Models

    PubMed Central

    Morton, Kenneth D.; Torrione, Peter A.; Throckmorton, Chandra S.; Collins, Leslie M.

    2015-01-01

    It has been established that current cochlear implants do not supply adequate spectral information for perception of tonal languages. Comprehension of a tonal language, such as Mandarin Chinese, requires recognition of lexical tones. New strategies of cochlear stimulation such as variable stimulation rate and current steering may provide the means of delivering more spectral information and thus may provide the auditory fine structure required for tone recognition. Several cochlear implant signal processing strategies are examined in this study, the continuous interleaved sampling (CIS) algorithm, the frequency amplitude modulation encoding (FAME) algorithm, and the multiple carrier frequency algorithm (MCFA). These strategies provide different types and amounts of spectral information. Pattern recognition techniques can be applied to data from Mandarin Chinese tone recognition tasks using acoustic models as a means of testing the abilities of these algorithms to transmit the changes in fundamental frequency indicative of the four lexical tones. The ability of processed Mandarin Chinese tones to be correctly classified may predict trends in the effectiveness of different signal processing algorithms in cochlear implants. The proposed techniques can predict trends in performance of the signal processing techniques in quiet conditions but fail to do so in noise. PMID:18706497

  7. Raman spectroscopy and the search for life signatures in the ExoMars Mission*

    NASA Astrophysics Data System (ADS)

    Edwards, Howell G. M.; Hutchinson, Ian B.; Ingley, Richard

    2012-10-01

    The survival strategies of extremophilic organisms in terrestrially stressed locations and habitats are critically dependent on the production of protective chemicals in response to desiccation, low wavelength radiation insolation, temperature and the availability of nutrients. The adaptation of life to these harsh prevailing conditions involves the control of the substratal geology; the interaction between the rock and the organisms is critical and the biological modification of the geological matrix plays a very significant role in the overall survival strategy. Identification of these biological and biogeological chemical molecular signatures in the geological record is necessary for the recognition of the presence of extinct or extant life in terrestrial and extraterrestrial scenarios. Raman spectroscopic techniques have been identified as valuable instrumentation for the detection of life extra-terrestrially because of the use of non-invasive laser-based excitation of organic and inorganic molecules, and molecular ions with high discrimination characteristics; the interactions effected between biological organisms and their environments are detectable through the molecular entities produced at the interfaces, for which the vibrational spectroscopic band signatures are unique. A very important attribute of Raman spectroscopy is the acquisition of molecular experimental data non-destructively without the need for chemical or mechanical pre-treatment of the specimen; this has been a major factor in the proposal for the adoption of Raman instrumentation on robotic landers and rovers for planetary exploration, particularly for the forthcoming European Space Agency (ESA)/National Aeronautics and Space Administration (NASA) ExoMars mission. In this paper, the merits of using Raman spectroscopy for the recognition of key molecular biosignatures from several terrestrial extremophile specimens will be illustrated. The data and specimens used in this presentation have

  8. Rapid evolution of cuticular hydrocarbons in a species radiation of acoustically diverse Hawaiian crickets (Gryllidae: trigonidiinae: Laupala).

    PubMed

    Mullen, Sean P; Mendelson, Tamra C; Schal, Coby; Shaw, Kerry L

    2007-01-01

    Understanding the origin and maintenance of barriers to gene exchange is a central goal of speciation research. Hawaiian swordtail crickets (genus Laupala) represent one of the most rapidly speciating animal groups yet identified. Extensive acoustic diversity, strong premating isolation, and female preference for conspecific acoustic signals in laboratory phonotaxis trials have strongly supported divergence in mate recognition as the driving force behind the explosive speciation seen in this system. However, recent work has shown that female preference for conspecific male calling song does not extend to mate choice at close range among these crickets, leading to the hypothesis that additional sexual signals are involved in mate recognition and premating isolation. Here we examine patterns of variation in cuticular lipids among several species of Laupala from Maui and the Big Island of Hawaii. Results demonstrate (1) a rapid and dramatic evolution of cuticular lipid composition among species in this genus, (2) significant differences among males and females in cuticular lipid composition, and (3) a significant reduction in the complexity of cuticular lipid profiles in species from the Big Island of Hawaii as compared to two outgroup species from Maui. These results suggest that behavioral barriers to gene exchange in Laupala may be composed of multiple mate recognition signals, a pattern common in other cricket species.

  9. Acoustic source for generating an acoustic beam

    DOEpatents

    Vu, Cung Khac; Sinha, Dipen N.; Pantea, Cristian

    2016-05-31

    An acoustic source for generating an acoustic beam includes a housing; a plurality of spaced-apart piezoelectric layers disposed within the housing; and a non-linear medium filling the space between the layers. Each of the plurality of piezoelectric layers is configured to generate an acoustic wave. The non-linear medium and the plurality of piezoelectric layers have a matching impedance so as to enhance transmission of the acoustic wave generated by each of the plurality of layers through the remaining layers.

  10. Design and first tests of an acoustic positioning and detection system for KM3NeT

    NASA Astrophysics Data System (ADS)

    Simeone, F.; Ameli, F.; Ardid, M.; Bertin, V.; Bonori, M.; Bou-Cabo, M.; Calì, C.; D'Amico, A.; Giovanetti, G.; Imbesi, M.; Keller, P.; Larosa, G.; Llorens, C. D.; Masullo, R.; Randazzo, N.; Riccobene, G.; Speziale, F.; Viola, S.; KM3NeT Consortium

    2012-01-01

    In a deep-sea neutrino telescope it is mandatory to locate the position of the optical sensors with a precision of about 10 cm. To achieve this requirement, an innovative Acoustic Positioning System (APS) has been designed in the framework of the KM3NeT neutrino telescope. The system will also be able to provide an acoustic guide during the deployment of the telescope's components and seafloor infrastructures (junction boxes, cables, etc.). A prototype of the system based on the successful acoustic systems of ANTARES and NEMO is being developed. It will consist of an array of hydrophones and a network of acoustic transceivers forming the Long Baseline. All sensors are connected to the telescope data acquisition system and are in phase and synchronised with the telescope master clock. Data from the acoustic sensors, continuously sampled at 192 kHz, will be sent to shore where signal recognition and analysis will be carried out. The design and first tests of the system elements will be presented. This new APS is expected to have better precision compared to the systems used in ANTARES and NEMO, and can also be used as a real-time monitor of acoustic sources and environmental noise in deep sea.
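
    To make the 10 cm positioning requirement concrete, the sketch below shows how a single hydrophone position could be recovered from acoustic travel times to a set of long-baseline transceivers by nonlinear least squares. The beacon layout, sound speed, and noise level are illustrative assumptions, not KM3NeT parameters.

    ```python
    # Minimal time-of-flight multilateration sketch (assumed constant sound speed,
    # hypothetical beacon coordinates, and ~1.5 cm-equivalent timing noise).
    import numpy as np
    from scipy.optimize import least_squares

    C = 1500.0  # nominal deep-sea sound speed, m/s

    beacons = np.array([[0.0, 0.0, -3000.0],
                        [400.0, 0.0, -3000.0],
                        [0.0, 400.0, -3000.0],
                        [400.0, 400.0, -2990.0]])    # hypothetical long-baseline transceivers
    true_pos = np.array([180.0, 230.0, -2650.0])      # unknown hydrophone position
    tof = np.linalg.norm(beacons - true_pos, axis=1) / C
    tof += np.random.default_rng(1).normal(0, 1e-5, tof.size)  # timing noise

    def residuals(p):
        # Difference between modelled and measured one-way travel times.
        return np.linalg.norm(beacons - p, axis=1) / C - tof

    sol = least_squares(residuals, x0=np.array([200.0, 200.0, -2800.0]))
    print("estimated position:", sol.x)   # should land within ~10 cm of true_pos
    ```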

  11. Acoustic calibration apparatus for calibrating plethysmographic acoustic pressure sensors

    NASA Technical Reports Server (NTRS)

    Zuckerwar, Allan J. (Inventor); Davis, David C. (Inventor)

    1995-01-01

    An apparatus for calibrating an acoustic sensor is described. The apparatus includes a transmission material having an acoustic impedance approximately matching the acoustic impedance of the actual acoustic medium existing when the acoustic sensor is applied in actual in-service conditions. An elastic container holds the transmission material. A first sensor is coupled to the container at a first location on the container and a second sensor coupled to the container at a second location on the container, the second location being different from the first location. A sound producing device is coupled to the container and transmits acoustic signals inside the container.

  12. Acoustic calibration apparatus for calibrating plethysmographic acoustic pressure sensors

    NASA Technical Reports Server (NTRS)

    Zuckerwar, Allan J. (Inventor); Davis, David C. (Inventor)

    1994-01-01

    An apparatus for calibrating an acoustic sensor is described. The apparatus includes a transmission material having an acoustic impedance approximately matching the acoustic impedance of the actual acoustic medium existing when the acoustic sensor is applied in actual in-service conditions. An elastic container holds the transmission material. A first sensor is coupled to the container at a first location on the container and a second sensor coupled to the container at a second location on the container, the second location being different from the first location. A sound producing device is coupled to the container and transmits acoustic signals inside the container.

  13. North Pacific Acoustic Laboratory and Deep Water Acoustics

    DTIC Science & Technology

    2015-09-30

    Approved for public release; distribution is unlimited. PI: James A. Mercer, Applied Physics Laboratory, University of Washington, 1013 NE 40th Street, Seattle, WA 98105; phone (206) 543-1361. The project addresses long-range acoustic systems, whether for acoustic surveillance, communication, or remote sensing of the ocean interior, drawing on data from the NPAL network.

  14. Temporal and Spatial Comparisons of Underwater Sound Signatures of Different Reef Habitats in Moorea Island, French Polynesia.

    PubMed

    Bertucci, Frédéric; Parmentier, Eric; Berten, Laëtitia; Brooker, Rohan M; Lecchini, David

    2015-01-01

    As environmental sounds are used by larval fish and crustaceans to locate and orientate towards habitat during settlement, variations in the acoustic signature produced by habitats could provide valuable information about habitat quality, helping larvae to differentiate between potential settlement sites. However, very little is known about how acoustic signatures differ between proximate habitats. This study described within- and between-site differences in the sound spectra of five contiguous habitats at Moorea Island, French Polynesia: the inner reef crest, the barrier reef, the fringing reef, a pass and a coastal mangrove forest. Habitats with coral (inner, barrier and fringing reefs) were characterized by a similar sound spectrum with average intensities ranging from 70 to 78 dB re 1 μPa.Hz(-1). The mangrove forest had a lower sound intensity of 70 dB re 1 μPa.Hz(-1) while the pass was characterized by a higher sound level with an average intensity of 91 dB re 1 μPa.Hz(-1). Habitats showed significantly different intensities for most frequencies, and a decreasing intensity gradient was observed from the reef to the shore. While habitats close to the shore showed no significant diel variation in sound intensities, sound levels increased at the pass during the night and barrier reef during the day. These two habitats also appeared to be louder in the North than in the West. These findings suggest that daily variations in sound intensity and across-reef sound gradients could be a valuable source of information for settling larvae. They also provide further evidence that closely related habitats, separated by less than 1 km, can differ significantly in their spectral composition and that these signatures might be typical and conserved along the coast of Moorea.
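
    As an illustration of how such habitat spectra can be computed, the sketch below estimates a pressure power spectral density and averages it over a band. It assumes the recording has already been converted to pressure in μPa via the hydrophone sensitivity, and reports the density in dB re 1 μPa²/Hz (the record above quotes levels per hertz); the synthetic signal stands in for a real reef recording.

    ```python
    # Minimal sketch: Welch power spectral density of a hydrophone recording,
    # expressed in dB, plus a band-averaged level (synthetic data, illustrative only).
    import numpy as np
    from scipy.signal import welch

    def spectrum_db(pressure_uPa, fs):
        f, psd = welch(pressure_uPa, fs=fs, nperseg=fs)   # 1-s windows -> 1 Hz bins
        return f, 10.0 * np.log10(psd)                    # dB re 1 uPa^2/Hz

    fs = 48_000
    rng = np.random.default_rng(2)
    x = 50.0 * rng.standard_normal(30 * fs)               # 30 s of toy "reef" noise
    f, spec_db = spectrum_db(x, fs)
    band = (f >= 100) & (f <= 1000)
    print("mean 0.1-1 kHz level: %.1f dB re 1 uPa^2/Hz" % spec_db[band].mean())
    ```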

  15. Model simulations of line-of-sight effects in airglow imaging of acoustic and fast gravity waves from ground and space

    NASA Astrophysics Data System (ADS)

    Aguilar Guerrero, J.; Snively, J. B.

    2017-12-01

    Acoustic waves (AWs) have been predicted to be detectable by imaging systems for the OH airglow layer [Snively, GRL, 40, 2013], and have been identified in spectrometer data [Pilger et al., JASP, 104, 2013]. AWs are weak in the mesopause region, but can attain large amplitudes in the F region [Garcia et al., GRL, 40, 2013] and have local impacts on the thermosphere and ionosphere. Similarly, fast GWs, with phase speeds over 100 m/s, may propagate to the thermosphere and impart significant local body forcing [Vadas and Fritts, JASTP, 66, 2004]. Both have been clearly identified in ionospheric total electron content (TEC), such as following the 2013 Moore, OK, EF5 tornado [Nishioka et al., GRL, 40, 2013] and following the 2011 Tohoku-Oki tsunami [e.g., Galvan et al., RS, 47, 2012, and references therein], but AWs have yet to be unambiguously imaged in MLT data and fast GWs have low amplitudes near the threshold of detection; nevertheless, recent imaging systems have sufficient spatial and temporal resolution and sensitivity to detect both AWs and fast GWs with short periods [e.g., Pautet et al., AO, 53, 2014]. The associated detectability challenges are related to the transient nature of their signatures and to systematic challenges due to line-of-sight (LOS) effects such as enhancements and cancelations due to integration along aligned or oblique wavefronts and geometric intensity enhancements. We employ a simulated airglow imager framework that incorporates 2D and 3D emission rate data and performs the necessary LOS integrations for synthetic imaging from ground- and space-based platforms to assess relative intensity and temperature perturbations. We simulate acoustic and fast gravity wave perturbations to the hydroxyl layer from a nonlinear, compressible model [e.g., Snively, 2013] for different idealized and realistic test cases. The results show clear signal enhancements when acoustic waves are imaged off-zenith or off-nadir and the temporal evolution of these

  16. Signature neural networks: definition and application to multidimensional sorting problems.

    PubMed

    Latorre, Roberto; de Borja Rodriguez, Francisco; Varona, Pablo

    2011-01-01

    In this paper we present a self-organizing neural network paradigm that is able to discriminate information locally using a strategy for information coding and processing inspired by recent findings in living neural systems. The proposed neural network uses: 1) neural signatures to identify each unit in the network; 2) local discrimination of input information during processing; and 3) a multicoding mechanism for information propagation regarding the who and the what of the information. The local discrimination implies distinct processing as a function of neural signature recognition and a local transient memory. In the context of artificial neural networks, none of these mechanisms has been analyzed in detail, and our goal is to demonstrate that they can be used to efficiently solve some specific problems. To illustrate the proposed paradigm, we apply it to the problem of multidimensional sorting, which can take advantage of local information discrimination. In particular, we compare the results of this new approach with traditional methods for solving jigsaw puzzles and analyze the situations in which the new paradigm improves performance.

  17. Platforms for hyperspectral imaging, in-situ optical and acoustical imaging in urbanized regions

    NASA Astrophysics Data System (ADS)

    Bostater, Charles R.; Oney, Taylor

    2016-10-01

    Hyperspectral measurements of the water surface of urban coastal waters are presented. Oblique bidirectional reflectance factor imagery was acquired in a turbid coastal sub-estuary of the Indian River Lagoon, Florida, and along coastal surf zone waters of the nearby Atlantic Ocean. Imagery was also collected using a pushbroom hyperspectral imager mounted on a fixed platform with a calibrated circular mechatronic rotation stage. Oblique imagery of the shoreline and subsurface features clearly shows subsurface bottom features and rip current features within the surf zone water column. In-situ hyperspectral optical signatures were acquired from a vessel as a function of depth to determine the attenuation spectrum in Palm Bay. A unique stationary-platform methodology was used to acquire subsurface acoustic images showing the presence of moving bottom boundary nephelometric layers passing through the acoustic fan beam. The acoustic fan beam imagery indicated the presence of oscillatory subsurface waves in the urbanized coastal estuary. Hyperspectral imaging using the fixed-platform techniques is being used to collect hyperspectral bidirectional reflectance factor (BRF) measurements from locations at buildings and bridges in order to provide new opportunities to advance our scientific understanding of aquatic environments in urbanized regions.

  18. Information-based approach to performance estimation and requirements allocation in multisensor fusion for target recognition

    NASA Astrophysics Data System (ADS)

    Harney, Robert C.

    1997-03-01

    A novel methodology offering the potential for resolving two of the significant problems of implementing multisensor target recognition systems, i.e., the rational selection of a specific sensor suite and optimal allocation of requirements among sensors, is presented. Based on a sequence of conjectures (and their supporting arguments) concerning the relationship of extractable information content to recognition performance of a sensor system, a set of heuristics (essentially a reformulation of Johnson's criteria applicable to all sensor and data types) is developed. An approach to quantifying the information content of sensor data is described. Coupling this approach with the widely accepted Johnson's criteria for target recognition capabilities results in a quantitative method for comparing the target recognition ability of diverse sensors (imagers, nonimagers, active, passive, electromagnetic, acoustic, etc.). Extension to describing the performance of multiple sensors is straightforward. The application of the technique to sensor selection and requirements allocation is discussed.

  19. DARPA counter-sniper program: Phase 1 Acoustic Systems Demonstration results

    NASA Astrophysics Data System (ADS)

    Carapezza, Edward M.; Law, David B.; Csanadi, Christina J.

    1997-02-01

    During October 1995 through May 1996, the Defense Advanced Research Projects Agency sponsored the development of prototype systems that exploit acoustic muzzle blast and ballistic shock wave signatures to accurately predict the location of gunfire events and associated shooter locations using either single or multiple volumetric arrays. The output of these acoustic systems is an estimate of the shooter location and a classification estimate of the caliber of the shooter's weapon. A portable display and control unit provides both graphical and alphanumeric shooter location information integrated on a two-dimensional digital map of the defended area. The final Phase I Acoustic Systems Demonstration field tests were completed in May. These tests were held at the USMC Base Camp Pendleton Military Operations Urban Training (MOUT) facility and were structured to provide challenging gunfire scenarios with significant reverberation and multi-path conditions. Special shot geometries and false alarms were included in these tests to probe potential system vulnerabilities and to determine the performance and robustness of the systems. Five prototypes developed by U.S. companies and one Israeli-developed prototype were tested. This analysis quantifies the spatial resolution estimation capability (azimuth, elevation and range) of these prototypes and describes their ability to accurately classify the type of bullet fired in a challenging urban-like setting.
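
    The geometric core of such systems can be illustrated with a far-field plane-wave assumption: time differences of arrival (TDOA) of the muzzle blast across a small volumetric microphone array yield a shooter bearing by linear least squares. The array geometry and sound speed below are placeholders; the fielded prototypes are not described at this level of detail in the record above.

    ```python
    # Minimal far-field TDOA bearing sketch (not any of the tested prototypes).
    import numpy as np

    C = 343.0  # assumed speed of sound, m/s
    mics = np.array([[0, 0, 0], [0.5, 0, 0], [0, 0.5, 0], [0, 0, 0.5]], float)  # hypothetical array

    def bearing_from_tdoa(mics, tdoa):
        """tdoa[i] = arrival time at mic i minus arrival time at mic 0 (seconds)."""
        A = -(mics[1:] - mics[0]) / C          # plane wave: tau_i = -(m_i - m_0) . u / c
        u, *_ = np.linalg.lstsq(A, tdoa[1:], rcond=None)
        return u / np.linalg.norm(u)           # unit vector pointing toward the shooter

    # Simulate a shot arriving from a known direction and recover it.
    true_dir = np.array([0.6, 0.7, 0.2]); true_dir /= np.linalg.norm(true_dir)
    tdoa = -(mics - mics[0]) @ true_dir / C
    print(bearing_from_tdoa(mics, tdoa))       # approximately equals true_dir
    ```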

  20. Preliminary theoretical acoustic and rf sounding calculations for MILL RACE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warshaw, S.I.; Dubois, P.F.

    1981-11-02

    As participant in DOE/ISA's Ionospheric Monitoring Program, LLNL has the responsibility of providing theoretical understanding and calculational support for experimental activities carried out by Los Alamos National Laboratory in using ionospheric sounders to remotely detect violent atmospheric phenomena. We have developed a system of interconnected computer codes which simulate the entire range of atmospheric and ionospheric processes involved in this remote detection procedure. We are able to model the acoustic pulse shape from an atmospheric explosion, the subsequent nonlinear transport of this energy to all parts of the immediate atmosphere including the ionosphere, and the propagation of high-frequency radio waves through the acoustically perturbed ionosphere. Los Alamos' coverage of DNA's MILL RACE event provided an excellent opportunity to assess the credibility of the calculational system to correctly predict how ionospheric sounders would respond to a surface-based chemical explosion. In this experiment, 600 tons of high explosive were detonated at White Sands Missile Range at 12:35:40 local time on 16 September 1981. Vertical incidence rf phase sounders and bistatic oblique incidence rf sounders fielded by Los Alamos and SRI International throughout New Mexico and southern Colorado detected the ionospheric perturbation that ensued. A brief account of preliminary calculations of the acoustic disturbance and the predicted ionospheric sounder signatures for MILL RACE is presented. (WHK)

  1. Wavefront modulation and subwavelength diffractive acoustics with an acoustic metasurface.

    PubMed

    Xie, Yangbo; Wang, Wenqi; Chen, Huanyang; Konneker, Adam; Popa, Bogdan-Ioan; Cummer, Steven A

    2014-11-24

    Metasurfaces are a family of novel wavefront-shaping devices with planar profile and subwavelength thickness. Acoustic metasurfaces with ultralow profile yet extraordinary wave manipulating properties would be highly desirable for improving the performance of many acoustic wave-based applications. However, designing acoustic metasurfaces with similar functionality to their electromagnetic counterparts remains challenging with traditional metamaterial design approaches. Here we present a design and realization of an acoustic metasurface based on tapered labyrinthine metamaterials. The demonstrated metasurface can not only steer an acoustic beam as expected from the generalized Snell's law, but also exhibits various unique properties such as conversion from propagating wave to surface mode, extraordinary beam-steering and apparent negative refraction through higher-order diffraction. Such designer acoustic metasurfaces provide a new design methodology for acoustic signal modulation devices and may be useful for applications such as acoustic imaging, beam steering, ultrasound lens design and acoustic surface wave-based applications.
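
    The beam steering claimed for the metasurface follows the generalized Snell's law, sin(theta_t) = sin(theta_i) + (1/k0) * dphi/dx, where dphi/dx is the phase gradient imposed along the surface. The sketch below evaluates that relation for an illustrative frequency and phase gradient (not the paper's design values) and flags the regime in which the refracted beam becomes evanescent, i.e., converts to a surface mode.

    ```python
    # Minimal generalized-Snell's-law calculation for an acoustic metasurface (illustrative values).
    import numpy as np

    def refracted_angle_deg(theta_i_deg, dphi_dx, freq=3000.0, c=343.0):
        k0 = 2 * np.pi * freq / c                      # free-space wavenumber in air
        s = np.sin(np.radians(theta_i_deg)) + dphi_dx / k0
        if abs(s) > 1.0:
            return None                                # no propagating beam: surface / evanescent mode
        return np.degrees(np.arcsin(s))

    print(refracted_angle_deg(0.0, dphi_dx=30.0))      # normal incidence, steered to ~33 degrees
    print(refracted_angle_deg(40.0, dphi_dx=30.0))     # |sin| > 1 -> None (surface-mode regime)
    ```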

  2. Acoustically Induced Vibration of Structures: Reverberant Vs. Direct Acoustic Testing

    NASA Technical Reports Server (NTRS)

    Kolaini, Ali R.; O'Connell, Michael R.; Tsoi, Wan B.

    2009-01-01

    Large reverberant chambers have been used for several decades in the aerospace industry to test larger structures such as solar arrays and reflectors to qualify and to detect faults in the design and fabrication of spacecraft and satellites. In the past decade some companies have begun using direct near field acoustic testing, employing speakers, for qualifying larger structures. A limited test data set obtained from recent acoustic tests of the same hardware exposed to both direct and reverberant acoustic field testing has indicated some differences in the resulting structural responses. In reverberant acoustic testing, higher vibration responses were observed at lower frequencies when compared with the direct acoustic testing. In the case of direct near field acoustic testing higher vibration responses appeared to occur at higher frequencies as well. In reverberant chamber testing and direct acoustic testing, standing acoustic modes of the reverberant chamber or the speakers and spacecraft parallel surfaces can strongly couple with the fundamental structural modes of the test hardware. In this paper data from recent acoustic testing of flight hardware, that yielded evidence of acoustic standing wave coupling with structural responses, are discussed in some detail. Convincing evidence of the acoustic standing wave/structural coupling phenomenon will be discussed, citing observations from acoustic testing of a simple aluminum plate. The implications of such acoustic coupling to testing of sensitive flight hardware will be discussed. The results discussed in this paper reveal issues with over or under testing of flight hardware that could pose unanticipated structural and flight qualification issues. Therefore, it is of paramount importance to understand the structural modal coupling with standing acoustic waves that has been observed in both methods of acoustic testing. This study will assist the community to choose an appropriate testing method and test setup in

  3. [Creating a language model of the forensic medicine domain for developing an autopsy recording system by automatic speech recognition].

    PubMed

    Niijima, H; Ito, N; Ogino, S; Takatori, T; Iwase, H; Kobayashi, M

    2000-11-01

    To enable practical use of speech recognition technology for recording forensic autopsies, a language model specialized for forensic autopsy was developed for the speech recording system. The forensic-autopsy language model was created by applying a trigram (3-gram) model and was combined with an acoustic model for Japanese speech recognition based on Hidden Markov Models to customize the speech recognition engine for forensic autopsy. A forensic vocabulary set of over 10,000 words was compiled and some 300,000 sentence patterns were made to create the forensic language model, which was then mixed in appropriate proportion with a general language model to attain high accuracy. When tested by dictating autopsy findings, this speech recognition system achieved a recognition rate of about 95%, which appears to reach practical usability for the speech recognition software, though room remains for improving its hardware and application-layer software.
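
    A minimal sketch of an interpolated trigram language model of the kind described is given below, mixing trigram, bigram, and unigram estimates. The toy autopsy sentences and interpolation weights are illustrative only, not the reported forensic model or its mixing with a general-domain model.

    ```python
    # Minimal interpolated trigram language model sketch (toy training data).
    from collections import Counter

    class TrigramLM:
        def __init__(self, sentences, l3=0.6, l2=0.3, l1=0.1):
            self.l3, self.l2, self.l1 = l3, l2, l1
            self.uni, self.bi, self.tri = Counter(), Counter(), Counter()
            for s in sentences:
                toks = ["<s>", "<s>"] + s.split() + ["</s>"]
                self.uni.update(toks)
                self.bi.update(zip(toks, toks[1:]))
                self.tri.update(zip(toks, toks[1:], toks[2:]))
            self.total = sum(self.uni.values())

        def prob(self, w, u, v):
            """Interpolated P(w | u, v)."""
            p1 = self.uni[w] / self.total
            p2 = self.bi[(v, w)] / self.uni[v] if self.uni[v] else 0.0
            p3 = self.tri[(u, v, w)] / self.bi[(u, v)] if self.bi[(u, v)] else 0.0
            return self.l3 * p3 + self.l2 * p2 + self.l1 * p1

    lm = TrigramLM(["the liver shows congestion", "the lungs show congestion and edema"])
    print(lm.prob("congestion", "liver", "shows"))
    ```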

  4. Application of pattern recognition techniques to acousto-ultrasonic testing of Kevlar composite panels

    NASA Astrophysics Data System (ADS)

    Hinton, Yolanda L.

    An acousto-ultrasonic evaluation of panels fabricated from woven Kevlar and PVB/phenolic resin is being conducted. The panels were fabricated with various simulated defects. They were examined by pulsing with one acoustic emission sensor, and detecting the signal with another sensor, on the same side of the panel at a fixed distance. The acoustic emission signals were filtered through high (400-600 KHz), low (100-300 KHz) and wide (100-1200 KHz) bandpass filters. Acoustic emission signal parameters, including amplitude, counts, rise time, duration, 'energy', rms, and counts to peak, were recorded. These were statistically analyzed to determine which of the AE parameters best characterize the simulated defects. The wideband filtered acoustic emission signal was also digitized and recorded for further processing. Seventy-one features of the signals in both the time and frequency domains were calculated and compared to determine which subset of these features uniquely characterize the defects in the panels. The objective of the program is to develop a database of AE signal parameters and features to be used in pattern recognition as an inspection tool for material fabricated from these materials.

  5. The Interaction of Lexical Semantics and Cohort Competition in Spoken Word Recognition: An fMRI Study

    ERIC Educational Resources Information Center

    Zhuang, Jie; Randall, Billi; Stamatakis, Emmanuel A.; Marslen-Wilson, William D.; Tyler, Lorraine K.

    2011-01-01

    Spoken word recognition involves the activation of multiple word candidates on the basis of the initial speech input--the "cohort"--and selection among these competitors. Selection may be driven primarily by bottom-up acoustic-phonetic inputs or it may be modulated by other aspects of lexical representation, such as a word's meaning…

  6. Snapshot recordings provide a first description of the acoustic signatures of deeper habitats adjacent to coral reefs of Moorea.

    PubMed

    Bertucci, Frédéric; Parmentier, Eric; Berthe, Cécile; Besson, Marc; Hawkins, Anthony D; Aubin, Thierry; Lecchini, David

    2017-01-01

    Acoustic recording has been recognized as a valuable tool for non-intrusive monitoring of the marine environment, complementing traditional visual surveys. Acoustic surveys conducted on coral ecosystems have so far been restricted to barrier reefs and to shallow depths (10-30 m). Since outer reef slopes may provide refuge for coral reef organisms, monitoring them and describing the soundscapes of deeper environments could provide insights into the characteristics of different biotopes of coral ecosystems. In this study, the acoustic features of four different habitats, with different topographies and substrates, located at depths from 10 to 100 m, were recorded during daytime on the outer reef slope of the north coast of Moorea Island (French Polynesia). Barrier reefs appeared to be the noisiest habitats, whereas the average sound levels at the other habitats decreased with their distance from the reef and with increasing depth. However, sound levels were higher than expected from propagation models, supporting the idea that these habitats possess their own sound sources. While reef sounds are known to attract marine larvae, sounds from deeper habitats may therefore also have a non-negligible attractive potential, coming into play before the reef itself.

  7. Snapshot recordings provide a first description of the acoustic signatures of deeper habitats adjacent to coral reefs of Moorea

    PubMed Central

    Parmentier, Eric; Berthe, Cécile; Besson, Marc; Hawkins, Anthony D.; Aubin, Thierry; Lecchini, David

    2017-01-01

    Acoustic recording has been recognized as a valuable tool for non-intrusive monitoring of the marine environment, complementing traditional visual surveys. Acoustic surveys conducted on coral ecosystems have so far been restricted to barrier reefs and to shallow depths (10-30 m). Since outer reef slopes may provide refuge for coral reef organisms, monitoring them and describing the soundscapes of deeper environments could provide insights into the characteristics of different biotopes of coral ecosystems. In this study, the acoustic features of four different habitats, with different topographies and substrates, located at depths from 10 to 100 m, were recorded during daytime on the outer reef slope of the north coast of Moorea Island (French Polynesia). Barrier reefs appeared to be the noisiest habitats, whereas the average sound levels at the other habitats decreased with their distance from the reef and with increasing depth. However, sound levels were higher than expected from propagation models, supporting the idea that these habitats possess their own sound sources. While reef sounds are known to attract marine larvae, sounds from deeper habitats may therefore also have a non-negligible attractive potential, coming into play before the reef itself. PMID:29158970
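
    The comparison with propagation models can be illustrated with the simplest geometric-spreading prediction, shown below. The source level, ranges, and choice of spherical spreading are assumptions for illustration; absorption and boundary effects are ignored.

    ```python
    # Minimal geometric-spreading prediction sketch (hypothetical source level and ranges).
    import numpy as np

    def predicted_level_db(source_level_db, range_m, spreading=20.0):
        """Received level = SL - spreading * log10(r); 20 -> spherical, 10 -> cylindrical."""
        return source_level_db - spreading * np.log10(range_m)

    for r in (50, 200, 1000):   # distances from the reef crest, metres
        print(r, "m:", round(predicted_level_db(120.0, r), 1), "dB re 1 uPa")
    ```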

  8. Motorcyclists safety system to avoid rear end collisions based on acoustic signatures

    NASA Astrophysics Data System (ADS)

    Muzammel, M.; Yusoff, M. Zuki; Malik, A. Saeed; Mohamad Saad, M. Naufal; Meriaudeau, F.

    2017-03-01

    In many Asian countries, motorcyclists have a higher fatality rate than occupants of other vehicles. Among many other factors, rear-end collisions contribute to these fatalities. Collision detection systems can be useful to minimize such accidents. However, designing an efficient and cost-effective collision detection system for motorcycles is still a major challenge. In this paper, an acoustic-information-based, cost-effective and efficient collision detection system is proposed for motorcycle applications. The proposed technique uses the Short-Time Fourier Transform (STFT) to extract features from the audio signal, and Principal Component Analysis (PCA) is used to reduce the feature vector length. The reduction of the feature length further increases the performance of the technique. The proposed technique has been tested on a self-recorded dataset and gives an accuracy of 97.87%. We believe that this method can help to reduce a significant number of motorcycle accidents.
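
    A minimal sketch of the described pipeline, i.e., STFT-derived features reduced by PCA and fed to a classifier, is given below. The synthetic clips, frame sizes, and the SVM classifier are placeholders rather than the authors' configuration or dataset.

    ```python
    # Minimal STFT -> PCA -> classifier sketch on synthetic audio clips (illustrative only).
    import numpy as np
    from scipy.signal import stft
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline

    def stft_feature(x, fs=16_000):
        _, _, Z = stft(x, fs=fs, nperseg=512)
        return np.log1p(np.abs(Z)).mean(axis=1)   # average log-magnitude per frequency bin

    rng = np.random.default_rng(3)
    # Toy data: "background" (label 0) vs louder "approaching vehicle" (label 1) clips, 1 s each.
    X = np.array([stft_feature(rng.standard_normal(16_000) * (1.0 + 0.5 * label))
                  for label in (0, 1) for _ in range(20)])
    y = np.array([label for label in (0, 1) for _ in range(20)])

    clf = make_pipeline(PCA(n_components=10), SVC())
    clf.fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```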

  9. Modification of computational auditory scene analysis (CASA) for noise-robust acoustic feature

    NASA Astrophysics Data System (ADS)

    Kwon, Minseok

    While there have been many attempts to mitigate interference from background noise, the performance of automatic speech recognition (ASR) can still be degraded easily by various factors. However, normal-hearing listeners can accurately perceive the sounds of interest to them, which is believed to be a result of Auditory Scene Analysis (ASA). As a first attempt, human auditory processing was simulated through computational auditory scene analysis (CASA), built on physiological and psychological investigations of ASA. The CASA front end comprises the Zilany-Bruce auditory model, followed by fundamental-frequency tracking for voiced segmentation and detection of onset/offset pairs at each characteristic frequency (CF) for unvoiced segmentation. The resulting time-frequency (T-F) representation of the acoustic stimulation was converted into an acoustic feature, gammachirp-tone frequency cepstral coefficients (GFCC). Eleven keywords under various environmental conditions were used, and the robustness of GFCC was evaluated by spectral distance (SD) and dynamic time warping distance (DTW). In "clean" and "noisy" conditions, the application of CASA generally improved the noise robustness of the acoustic feature compared to a conventional method with or without noise suppression using an MMSE estimator. The initial study, however, not only showed a noise-type dependency at low SNR, but also called the evaluation methods into question. Some modifications were made to capture better spectral continuity from an acoustic feature matrix, to obtain faster processing speed, and to describe the human auditory system more precisely. The proposed framework includes: 1) multi-scale integration to capture more accurate continuity in feature extraction, 2) contrast enhancement (CE) of each CF by competition with neighboring frequency bands, and 3) auditory model modifications. The model modifications include the introduction of a higher Q factor and a middle-ear filter more analogous to the human auditory system

  10. Time-frequency feature representation using multi-resolution texture analysis and acoustic activity detector for real-life speech emotion recognition.

    PubMed

    Wang, Kun-Ching

    2015-01-14

    The classification of emotional speech is mostly considered in speech-related research on human-computer interaction (HCI). The purpose of this paper is to present a novel feature extraction based on multi-resolution texture image information (MRTII). The MRTII feature set is derived from multi-resolution texture analysis for the characterization and classification of different emotions in a speech signal. The motivation is that emotions have different intensity values in different frequency bands. In terms of human visual perception, the texture properties of the emotional speech spectrogram at multiple resolutions should form a good feature set for emotion classification in speech. Furthermore, multi-resolution texture analysis can give clearer discrimination between emotions than uniform-resolution texture analysis. In order to provide high accuracy of emotional discrimination, especially in real life, an acoustic activity detection (AAD) algorithm must be applied within the MRTII-based feature extraction. Considering the presence of many blended emotions in real life, this paper makes use of two corpora of naturally occurring dialogs recorded in real-life call centers. Compared with the traditional Mel-scale Frequency Cepstral Coefficients (MFCC) and state-of-the-art features, the MRTII features also improve the correct classification rates of the proposed systems across different language databases. Experimental results show that the proposed MRTII-based feature information, inspired by human visual perception of the spectrogram image, can provide significant classification performance for real-life emotion recognition in speech.

  11. Time-Frequency Feature Representation Using Multi-Resolution Texture Analysis and Acoustic Activity Detector for Real-Life Speech Emotion Recognition

    PubMed Central

    Wang, Kun-Ching

    2015-01-01

    The classification of emotional speech is mostly considered in speech-related research on human-computer interaction (HCI). The purpose of this paper is to present a novel feature extraction based on multi-resolution texture image information (MRTII). The MRTII feature set is derived from multi-resolution texture analysis for the characterization and classification of different emotions in a speech signal. The motivation is that emotions have different intensity values in different frequency bands. In terms of human visual perception, the texture properties of the emotional speech spectrogram at multiple resolutions should form a good feature set for emotion classification in speech. Furthermore, multi-resolution texture analysis can give clearer discrimination between emotions than uniform-resolution texture analysis. In order to provide high accuracy of emotional discrimination, especially in real life, an acoustic activity detection (AAD) algorithm must be applied within the MRTII-based feature extraction. Considering the presence of many blended emotions in real life, this paper makes use of two corpora of naturally occurring dialogs recorded in real-life call centers. Compared with the traditional Mel-scale Frequency Cepstral Coefficients (MFCC) and state-of-the-art features, the MRTII features also improve the correct classification rates of the proposed systems across different language databases. Experimental results show that the proposed MRTII-based feature information, inspired by human visual perception of the spectrogram image, can provide significant classification performance for real-life emotion recognition in speech. PMID:25594590
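
    As a rough stand-in for the multi-resolution texture idea (not the paper's MRTII feature set), the sketch below quantizes a log-spectrogram at two time resolutions and computes simple co-occurrence statistics (contrast and energy) over horizontally adjacent cells.

    ```python
    # Minimal spectrogram-texture sketch at two resolutions (illustrative only).
    import numpy as np
    from scipy.signal import spectrogram

    def texture_stats(x, fs, nperseg, levels=16):
        _, _, S = spectrogram(x, fs=fs, nperseg=nperseg)
        img = np.log1p(S)
        q = np.minimum((levels * (img - img.min()) / (np.ptp(img) + 1e-12)).astype(int), levels - 1)
        # Co-occurrence of horizontally adjacent cells (offset = one time frame).
        glcm = np.zeros((levels, levels))
        np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
        glcm /= glcm.sum()
        i, j = np.indices(glcm.shape)
        contrast = np.sum(glcm * (i - j) ** 2)
        energy = np.sum(glcm ** 2)
        return contrast, energy

    fs = 16_000
    x = np.random.default_rng(4).standard_normal(fs)       # toy 1-s signal
    print("fine  :", texture_stats(x, fs, nperseg=128))     # finer time resolution
    print("coarse:", texture_stats(x, fs, nperseg=1024))    # coarser time resolution
    ```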

  12. Audibility-based predictions of speech recognition for children and adults with normal hearing.

    PubMed

    McCreery, Ryan W; Stelmachowicz, Patricia G

    2011-12-01

    This study investigated the relationship between audibility and predictions of speech recognition for children and adults with normal hearing. The Speech Intelligibility Index (SII) is used to quantify the audibility of speech signals and can be applied to transfer functions to predict speech recognition scores. Although the SII is used clinically with children, relatively few studies have evaluated SII predictions of children's speech recognition directly. Children have required more audibility than adults to reach maximum levels of speech understanding in previous studies. Furthermore, children may require greater bandwidth than adults for optimal speech understanding, which could influence frequency-importance functions used to calculate the SII. Speech recognition was measured for 116 children and 19 adults with normal hearing. Stimulus bandwidth and background noise level were varied systematically in order to evaluate speech recognition as predicted by the SII and derive frequency-importance functions for children and adults. Results suggested that children required greater audibility to reach the same level of speech understanding as adults. However, differences in performance between adults and children did not vary across frequency bands. © 2011 Acoustical Society of America
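
    A simplified, SII-style calculation is sketched below: band audibility is taken as the band SNR clipped to a 30-dB range and weighted by a band-importance function. The octave-band importance weights and levels are illustrative placeholders, not the ANSI S3.5 values or the frequency-importance functions derived in the study.

    ```python
    # Minimal SII-style audibility index sketch (placeholder band weights and levels).
    import numpy as np

    def sii_like_index(speech_db, noise_db, importance):
        snr = np.asarray(speech_db) - np.asarray(noise_db)
        audibility = np.clip((snr + 15.0) / 30.0, 0.0, 1.0)   # -15 dB -> 0, +15 dB -> 1
        w = np.asarray(importance) / np.sum(importance)
        return float(np.sum(w * audibility))

    bands_hz   = [250, 500, 1000, 2000, 4000, 8000]
    importance = [0.10, 0.15, 0.25, 0.25, 0.15, 0.10]          # illustrative weights only
    speech     = [55, 58, 60, 57, 50, 42]                      # band levels, dB SPL
    noise      = [50, 48, 45, 47, 48, 45]
    print("SII-like index:", round(sii_like_index(speech, noise, importance), 2))
    ```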

  13. Innate recognition of water bodies in echolocating bats.

    PubMed

    Greif, Stefan; Siemers, Björn M

    2010-11-02

    In the course of their lives, most animals must find different specific habitat and microhabitat types for survival and reproduction. Yet, in vertebrates, little is known about the sensory cues that mediate habitat recognition. In free flying bats the echolocation of insect-sized point targets is well understood, whereas how they recognize and classify spatially extended echo targets is currently unknown. In this study, we show how echolocating bats recognize ponds or other water bodies that are crucial for foraging, drinking and orientation. With wild bats of 15 different species (seven genera from three phylogenetically distant, large bat families), we found that bats perceived any extended, echo-acoustically smooth surface to be water, even in the presence of conflicting information from other sensory modalities. In addition, naive juvenile bats that had never before encountered a water body showed spontaneous drinking responses from smooth plates. This provides the first evidence for innate recognition of a habitat cue in a mammal.

  14. Target recognition and phase acquisition by using incoherent digital holographic imaging

    NASA Astrophysics Data System (ADS)

    Lee, Munseob; Lee, Byung-Tak

    2017-05-01

    In this study, we propose Incoherent Digital Holographic Imaging (IDHI) for recognition and phase retrieval of a dedicated target. Despite the recent development of a number of target recognition techniques such as LIDAR, these have had limited success in target discrimination, in part due to low resolution, low scanning speed, and limited computation power. The proposed system consists of an incoherent light source, such as an LED, a Michelson interferometer, and a digital CCD for acquisition of four phase-shifted images. First, to compare relative coherence, we used a laser and an LED as sources, respectively. Through numerical reconstruction using the four-step phase-shifting method and the Fresnel diffraction method, we recovered the intensity and phase images of a USAF resolution target at a distance of about 1.0 m. In this experiment, we show a 1.2-fold improvement in resolution compared to conventional imaging. Finally, to confirm recognition of camouflaged targets having the same color as the background, we tested holographic imaging under incoherent light. These results show the possibility of target detection and recognition using three-dimensional shape and size signatures and numerical distance derived from the phase information of the obtained holographic image.

  15. Nonlinear Acoustic and Ultrasonic NDT of Aeronautical Components

    NASA Astrophysics Data System (ADS)

    Van Den Abeele, Koen; Katkowski, Tomasz; Mattei, Christophe

    2006-05-01

    In response to the demand for innovative microdamage inspection systems, with high sensitivity and undoubted accuracy, we are currently investigating the use and robustness of several acoustic and ultrasonic NDT techniques based on Nonlinear Elastic Wave Spectroscopy (NEWS) for the characterization of microdamage in aeronautical components. In this report, we illustrate the results of an amplitude dependent analysis of the resonance behaviour, both in time (signal reverberation) and in frequency (sweep) domain. The technique is applied to intact and damaged samples of Carbon Fiber Reinforced Plastics (CFRP) composites after thermal loading or mechanical fatigue. The method shows a considerable gain in sensitivity and an incontestable interpretation of the results for nonlinear signatures in comparison with the linear characteristics. For highly fatigued samples, slow dynamical effects are observed.

  16. Pose-oblivious shape signature.

    PubMed

    Gal, Ran; Shamir, Ariel; Cohen-Or, Daniel

    2007-01-01

    A 3D shape signature is a compact representation for some essence of a shape. Shape signatures are commonly utilized as a fast indexing mechanism for shape retrieval. Effective shape signatures capture some global geometric properties which are scale, translation, and rotation invariant. In this paper, we introduce an effective shape signature which is also pose-oblivious. This means that the signature is also insensitive to transformations which change the pose of a 3D shape such as skeletal articulations. Although some topology-based matching methods can be considered pose-oblivious as well, our new signature retains the simplicity and speed of signature indexing. Moreover, contrary to topology-based methods, the new signature is also insensitive to the topology change of the shape, allowing us to match similar shapes with different genus. Our shape signature is a 2D histogram which is a combination of the distribution of two scalar functions defined on the boundary surface of the 3D shape. The first is a definition of a novel function called the local-diameter function. This function measures the diameter of the 3D shape in the neighborhood of each vertex. The histogram of this function is an informative measure of the shape which is insensitive to pose changes. The second is the centricity function that measures the average geodesic distance from one vertex to all other vertices on the mesh. We evaluate and compare a number of methods for measuring the similarity between two signatures, and demonstrate the effectiveness of our pose-oblivious shape signature within a 3D search engine application for different databases containing hundreds of models.
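
    The core of the signature, a normalized 2-D histogram over two per-vertex scalar functions, can be sketched compactly. The per-vertex values below are synthetic stand-ins for the local-diameter and centricity functions, and the L1 comparison is only one of several similarity measures the paper evaluates.

    ```python
    # Minimal 2-D histogram shape-signature sketch with synthetic per-vertex functions.
    import numpy as np

    def shape_signature(f1, f2, bins=16):
        h, _, _ = np.histogram2d(f1, f2, bins=bins, range=[[0, 1], [0, 1]])
        return h / h.sum()

    def l1_distance(sig_a, sig_b):
        return np.abs(sig_a - sig_b).sum()

    rng = np.random.default_rng(5)
    # Two "shapes": per-vertex (local diameter, centricity) pairs, normalized to [0, 1].
    shape_a = shape_signature(rng.beta(2, 5, 5000), rng.beta(5, 2, 5000))
    shape_b = shape_signature(rng.beta(2, 5, 5000), rng.beta(5, 2, 5000))   # similar shape
    shape_c = shape_signature(rng.beta(5, 2, 5000), rng.beta(2, 5, 5000))   # different shape
    print("similar  :", round(l1_distance(shape_a, shape_b), 3))
    print("different:", round(l1_distance(shape_a, shape_c), 3))
    ```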

  17. Optical implementation of neocognitron and its applications to radar signature discrimination

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Stoner, William W.

    1991-01-01

    A feature-extraction-based optoelectronic neural network is introduced. The system implementation approach applies the principle of the neocognitron paradigm first introduced by Fukushima et al. (1983). A multichannel correlator is used as a building block of a generic single layer of the neocognitron for shift-invariant feature correlation. Multilayer processing is achieved by iteratively feeding back the output of the feature correlator to the input spatial light modulator. Successful pattern recognition with intraclass fault tolerance and interclass discrimination is achieved using this optoelectronic neocognitron. Detailed system analysis is described. Experimental demonstration of radar signature processing is also provided.

  18. Histogram equalization with Bayesian estimation for noise robust speech recognition.

    PubMed

    Suh, Youngjoo; Kim, Hoirin

    2018-02-01

    The histogram equalization approach is an efficient feature normalization technique for noise robust automatic speech recognition. However, it suffers from performance degradation when some fundamental conditions are not satisfied in the test environment. To remedy these limitations of the original histogram equalization methods, class-based histogram equalization approach has been proposed. Although this approach showed substantial performance improvement under noise environments, it still suffers from performance degradation due to the overfitting problem when test data are insufficient. To address this issue, the proposed histogram equalization technique employs the Bayesian estimation method in the test cumulative distribution function estimation. It was reported in a previous study conducted on the Aurora-4 task that the proposed approach provided substantial performance gains in speech recognition systems based on the acoustic modeling of the Gaussian mixture model-hidden Markov model. In this work, the proposed approach was examined in speech recognition systems with deep neural network-hidden Markov model (DNN-HMM), the current mainstream speech recognition approach where it also showed meaningful performance improvement over the conventional maximum likelihood estimation-based method. The fusion of the proposed features with the mel-frequency cepstral coefficients provided additional performance gains in DNN-HMM systems, which otherwise suffer from performance degradation in the clean test condition.
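
    A minimal sketch of plain histogram-equalization feature normalization is given below: each test feature value is mapped through its empirical CDF and then through the inverse CDF of a reference distribution (a standard normal here). The Bayesian-smoothed CDF estimate that the paper proposes is not reproduced; this is the basic order-statistics version.

    ```python
    # Minimal histogram-equalization (CDF matching) sketch for one feature track.
    import numpy as np
    from scipy.stats import norm

    def histogram_equalize(test_feature):
        """test_feature: 1-D array of one cepstral coefficient over an utterance."""
        n = len(test_feature)
        ranks = np.argsort(np.argsort(test_feature))        # 0 .. n-1
        empirical_cdf = (ranks + 0.5) / n                    # avoid 0 and 1 exactly
        return norm.ppf(empirical_cdf)                       # reference distribution = N(0, 1)

    noisy = np.random.default_rng(6).gamma(2.0, 1.0, 200)    # skewed "noisy" feature track
    eq = histogram_equalize(noisy)
    print(round(eq.mean(), 3), round(eq.std(), 3))           # roughly 0 and 1 after equalization
    ```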

  19. Studies in automatic speech recognition and its application in aerospace

    NASA Astrophysics Data System (ADS)

    Taylor, Michael Robinson

    Human communication is characterized in terms of the spectral and temporal dimensions of speech waveforms. Electronic speech recognition strategies based on Dynamic Time Warping and Markov Model algorithms are described and typical digit recognition error rates are tabulated. The application of Direct Voice Input (DVI) as an interface between man and machine is explored within the context of civil and military aerospace programmes. Sources of physical and emotional stress affecting speech production within military high performance aircraft are identified. Experimental results are reported which quantify fundamental frequency and coarse temporal dimensions of male speech as a function of the vibration, linear acceleration and noise levels typical of aerospace environments; preliminary indications of acoustic phonetic variability reported by other researchers are summarized. Connected whole-word pattern recognition error rates are presented for digits spoken under controlled Gz sinusoidal whole-body vibration. Correlations are made between significant increases in recognition error rate and resonance of the abdomen-thorax and head subsystems of the body. The phenomenon of vibrato style speech produced under low frequency whole-body Gz vibration is also examined. Interactive DVI system architectures and avionic data bus integration concepts are outlined together with design procedures for the efficient development of pilot-vehicle command and control protocols.

  20. Automated acoustic analysis in detection of spontaneous swallows in Parkinson's disease.

    PubMed

    Golabbakhsh, Marzieh; Rajaei, Ali; Derakhshan, Mahmoud; Sadri, Saeed; Taheri, Masoud; Adibi, Peyman

    2014-10-01

    Acoustic monitoring of swallow frequency has become important as the frequency of spontaneous swallowing can be an index for dysphagia and related complications. In addition, it can be employed as an objective quantification of ingestive behavior. Commonly, swallowing complications are manually detected using videofluoroscopy recordings, which require expensive equipment and exposure to radiation. In this study, a noninvasive automated technique is proposed that uses breath and swallowing recordings obtained via a microphone located over the laryngopharynx. Nonlinear diffusion filters were used, in which a scale-space decomposition of the recorded sound at different levels extracts swallows from breath sounds and artifacts. This technique was compared to manual detection of swallows using acoustic signals on a sample of 34 subjects with Parkinson's disease. A speech language pathologist identified five subjects who showed aspiration during the videofluoroscopic swallowing study. The proposed automated method identified swallows with a sensitivity of 86.67 %, a specificity of 77.50 %, and an accuracy of 82.35 %. These results indicate the validity of automated acoustic recognition of swallowing as a fast and efficient approach to objectively estimate spontaneous swallow frequency.
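
    For reference, the three figures of merit quoted above follow directly from the confusion counts of detected versus manually identified swallows. A minimal sketch; the counts below are hypothetical, chosen only so that they reproduce the reported percentages.

      def detection_metrics(tp, fn, tn, fp):
          """Sensitivity, specificity, and accuracy from confusion counts."""
          sensitivity = tp / (tp + fn)
          specificity = tn / (tn + fp)
          accuracy = (tp + tn) / (tp + fn + tn + fp)
          return sensitivity, specificity, accuracy

      # Hypothetical counts (not from the paper): 86.67 %, 77.50 %, 82.35 %
      print(detection_metrics(tp=39, fn=6, tn=31, fp=9))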

  1. Biological Significance of Acoustic Impacts on Marine Mammals: Examples Using an Acoustic Recording tag to Define Acoustic Exposure of Sperm Whales, Physeter catodon, Exposed to Airgun Sounds in Controlled Exposure Experiments

    NASA Astrophysics Data System (ADS)

    Tyack, P. L.; Johnson, M. P.; Madsen, P. T.; Miller, P. J.; Lynch, J.

    2006-05-01

    There has been considerable debate about how to regulate behavioral disruption in marine mammals. The U.S. Marine Mammal Protection Act prohibits "taking" marine mammals, including harassment, which is defined as injury or disruption of behavioral patterns. A 2005 report by the National Academy of Sciences focuses on the need to analyze acoustic impacts on marine mammal behavior in terms of biological significance. The report develops a model for predicting population consequences of acoustic impacts. One of the key data gaps involves methods to estimate the impact of disruption on an animal's ability to complete life functions critical for growth, survival, and reproduction. One of the few areas where theory and data are available involves foraging energetics. Patrick Miller in the next talk and I will discuss an example study designed to evaluate the impact of exposure to seismic survey on the foraging energetics of sperm whales. As petroleum exploration moves offshore to deep water, there is increasing overlap between seismic exploration and deep diving toothed whales such as the sperm whale which is listed by the US as an endangered species. With support from the US Minerals Management Service and the Industry Research Funding Coalition, we tagged sperm whales with tags that can record sound, orientation, acceleration, temperature and depth. Eight whales tagged in the Gulf of Mexico during 2002-2003 were subjects in 5 controlled experiments involving exposure to sounds of an airgun array. One critical component of evaluating effects involves quantifying exposure at the animal. While the on-axis signature of airgun arrays has been well quantified, there are few broadband calibrated measurements in the water column displaced horizontally away from the downward-directed beam. The acoustic recording tags provide direct data on sounds as received at the animals. Due to multipath propagation, multiple sound pulses were recorded on the tagged whales for each firing of

  2. Pattern recognition monitoring of PEM fuel cell

    DOEpatents

    Meltser, M.A.

    1999-08-31

    The CO-concentration in the H₂ feed stream to a PEM fuel cell stack is monitored by measuring current and voltage behavior patterns from an auxiliary cell attached to the end of the stack. The auxiliary cell is connected to the same oxygen and hydrogen feed manifolds that supply the stack, and discharges through a constant load. Pattern recognition software compares the current and voltage patterns from the auxiliary cell to current and voltage signatures determined from a reference cell similar to the auxiliary cell and operated under controlled conditions over a wide range of CO-concentrations in the H₂ fuel stream. 4 figs.

  3. Pattern recognition monitoring of PEM fuel cell

    DOEpatents

    Meltser, Mark Alexander

    1999-01-01

    The CO-concentration in the H₂ feed stream to a PEM fuel cell stack is monitored by measuring current and voltage behavior patterns from an auxiliary cell attached to the end of the stack. The auxiliary cell is connected to the same oxygen and hydrogen feed manifolds that supply the stack, and discharges through a constant load. Pattern recognition software compares the current and voltage patterns from the auxiliary cell to current and voltage signatures determined from a reference cell similar to the auxiliary cell and operated under controlled conditions over a wide range of CO-concentrations in the H₂ fuel stream.

  4. Unconditionally Secure Blind Signatures

    NASA Astrophysics Data System (ADS)

    Hara, Yuki; Seito, Takenobu; Shikata, Junji; Matsumoto, Tsutomu

    The blind signature scheme introduced by Chaum allows a user to obtain a valid signature for a message from a signer such that the message is kept secret from the signer. Blind signature schemes have mainly been studied from the viewpoint of computational security so far. In this paper, we study blind signatures in the unconditional setting. Specifically, we newly introduce a model of unconditionally secure blind signature schemes (USBS, for short). Also, we propose security notions and their formalization in our model. Finally, we propose a construction method for USBS that is provably secure under our security notions.
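
    For contrast with the unconditional setting studied in this paper, the classical computationally secure construction is Chaum's RSA blinding, sketched below with textbook-sized toy parameters (far too small for real use); this is an illustration of the blinding idea, not the scheme proposed here.

      import math, random

      # Toy RSA key (textbook primes; insecure sizes, for illustration only)
      p, q = 61, 53
      n, e = p * q, 17
      d = pow(e, -1, (p - 1) * (q - 1))   # signer's private exponent (Python >= 3.8)

      m = 65                              # message representative, 0 <= m < n

      # User picks a random blinding factor r coprime to n and blinds the message
      while True:
          r = random.randrange(2, n)
          if math.gcd(r, n) == 1:
              break
      m_blind = (m * pow(r, e, n)) % n

      # Signer signs the blinded message without learning m
      s_blind = pow(m_blind, d, n)

      # User unblinds to obtain an ordinary RSA signature on m
      s = (s_blind * pow(r, -1, n)) % n
      assert pow(s, e, n) == m            # standard RSA verification succeeds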

  5. Recognition of surface lithologic and topographic patterns in southwest Colorado with ADP techniques

    NASA Technical Reports Server (NTRS)

    Melhorn, W. N.; Sinnock, S.

    1973-01-01

    Analysis of ERTS-1 multispectral data by automatic pattern recognition procedures is applicable toward grappling with current and future resource stresses by providing a means for refining existing geologic maps. The procedures used in the current analysis already yield encouraging results toward the eventual machine recognition of extensive surface lithologic and topographic patterns. Automatic mapping of a series of hogbacks, strike valleys, and alluvial surfaces along the northwest flank of the San Juan Basin in Colorado can be obtained by minimal man-machine interaction. The determination of causes for separable spectral signatures is dependent upon extensive correlation of micro- and macro field based ground truth observations and aircraft underflight data with the satellite data.

  6. Pattern Recognition Analysis of Age-Related Retinal Ganglion Cell Signatures in the Human Eye

    PubMed Central

    Yoshioka, Nayuta; Zangerl, Barbara; Nivison-Smith, Lisa; Khuu, Sieu K.; Jones, Bryan W.; Pfeiffer, Rebecca L.; Marc, Robert E.; Kalloniatis, Michael

    2017-01-01

    Purpose To characterize macular ganglion cell layer (GCL) changes with age and provide a framework to assess changes in ocular disease. This study used data clustering to analyze macular GCL patterns from optical coherence tomography (OCT) in a large cohort of subjects without ocular disease. Methods Single eyes of 201 patients evaluated at the Centre for Eye Health (Sydney, Australia) were retrospectively enrolled (age range, 20–85); 8 × 8 grid locations obtained from Spectralis OCT macular scans were analyzed with unsupervised classification into statistically separable classes sharing common GCL thickness and change with age. The resulting classes and gridwise data were fitted with linear and segmented linear regression curves. Additionally, normalized data were analyzed to determine regression as a percentage. Accuracy of each model was examined through comparison of predicted 50-year-old equivalent macular GCL thickness for the entire cohort to a true 50-year-old reference cohort. Results Pattern recognition clustered GCL thickness across the macula into five to eight spatially concentric classes. F-test demonstrated segmented linear regression to be the most appropriate model for macular GCL change. The pattern recognition–derived and normalized model revealed less difference between the predicted macular GCL thickness and the reference cohort (average ± SD 0.19 ± 0.92 and −0.30 ± 0.61 μm) than a gridwise model (average ± SD 0.62 ± 1.43 μm). Conclusions Pattern recognition successfully identified statistically separable macular areas that undergo a segmented linear reduction with age. This regression model better predicted macular GCL thickness. The various unique spatial patterns revealed by pattern recognition combined with core GCL thickness data provide a framework to analyze GCL loss in ocular disease. PMID:28632847

  7. Proximate and ultimate aspects of acoustic and multimodal communication in butterflyfishes

    NASA Astrophysics Data System (ADS)

    Boyle, Kelly S.

    Communication in social animals is shaped by natural selection on both sender and receiver. Diurnal butterflyfishes use a combination of visual cues like bright color patterns and motor pattern driven displays, acoustic communication, and olfactory cues that may advertise territorial behavior, facilitate recognition of individuals, and provide cues for courtship. This dissertation examines proximate and multimodal communication in several butterflyfishes, with an emphasis on acoustic communication which has recently garnered attention within the Chaetodontidae. Sound production in the genus Forcipiger involves a novel mechanism with synchronous contractions of opposing head muscles at the onset of sound emission and rapid cranial rotation that lags behind sound emission. Acoustic signals in F. flavissimus provide an accurate indicator of body size, and to a lesser extent cranial rotation velocity and acceleration. The closely related Hemitaurichthys polylepis produces rapid pulse trains of similar duration and spectral content to F. flavissimus, but with a dramatically different mechanism which involves contractions of hypaxial musculature at the anterior end of the swim bladder that occur with synchronous muscle action potentials. Both H. polylepis sonic and hypaxial trunk muscle fibers have triads at the z-line, but sonic fibers have smaller cross-sectional areas, more developed sarcoplasmic reticula, longer sarcomere lengths, and wider t-tubules. Sonic motor neurons are located along a long motor column entirely within the spinal cord and are composed of large and small types. Forcipiger flavissimus and F. longirostris are site attached and territorial, with F. flavissimus engaged in harem polygyny and F. longirostris in social monogamy. Both produce similar pulse sounds to conspecifics during territoriality that vary little with respect to communicative context. Chaetodon multicinctus can discriminate between mates and non-mate intruders, but require combined

  8. Doppler-based motion compensation algorithm for focusing the signature of a rotorcraft.

    PubMed

    Goldman, Geoffrey H

    2013-02-01

    A computationally efficient algorithm was developed and tested to compensate for the effects of motion on the acoustic signature of a rotorcraft. For target signatures with large spectral peaks that vary slowly in amplitude and have near constant frequency, the time-varying Doppler shift can be tracked and then removed from the data. The algorithm can be used to preprocess data for classification, tracking, and nulling algorithms. The algorithm was tested on rotorcraft data. The average instantaneous frequency of the first harmonic of a rotorcraft was tracked with a fixed-lag smoother. Then, state space estimates of the frequency were used to calculate a time warping that removed the effect of a time-varying Doppler shift from the data. The algorithm was evaluated by analyzing the increase in the amplitude of the harmonics in the spectrum of a rotorcraft. The results depended upon the frequency of the harmonics and the processing interval duration. Under good conditions, the results for the fundamental frequency of the target (~11 Hz) almost achieved an estimated upper bound. The results for higher frequency harmonics had larger increases in the amplitude of the peaks, but significantly lower than the estimated upper bounds.
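
    A minimal sketch of the time-warping step, assuming the instantaneous frequency of the first harmonic has already been tracked (here it is simply given as an array): the signal is resampled on a warped time axis so that the tracked harmonic is mapped back to a constant reference frequency. This illustrates the general idea only, not the paper's fixed-lag smoother implementation.

      import numpy as np

      def doppler_compensate(signal, fs, f_inst, f_ref):
          """Resample `signal` so a harmonic tracked at f_inst(t) becomes constant at f_ref."""
          # Warped time advances faster when the observed frequency is higher
          tau = np.cumsum(f_inst / f_ref) / fs
          uniform_tau = np.arange(tau[0], tau[-1], 1.0 / fs)
          return np.interp(uniform_tau, tau, signal)

      # Toy example: a tone whose frequency drifts around 11 Hz (cf. the rotorcraft fundamental)
      fs = 1000.0
      t = np.arange(0, 10, 1 / fs)
      f_inst = 11.0 + 0.5 * np.sin(2 * np.pi * 0.1 * t)     # "tracked" instantaneous frequency
      phase = 2 * np.pi * np.cumsum(f_inst) / fs
      x = np.sin(phase)
      y = doppler_compensate(x, fs, f_inst, f_ref=11.0)      # y is close to a pure 11 Hz tone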

  9. Factor models for cancer signatures

    NASA Astrophysics Data System (ADS)

    Kakushadze, Zura; Yu, Willie

    2016-11-01

    We present a novel method for extracting cancer signatures by applying statistical risk models (http://ssrn.com/abstract=2732453) from quantitative finance to cancer genome data. Using 1389 whole genome sequenced samples from 14 cancers, we identify an "overall" mode of somatic mutational noise. We give a prescription for factoring out this noise and source code for fixing the number of signatures. We apply nonnegative matrix factorization (NMF) to genome data aggregated by cancer subtype and filtered using our method. The resultant signatures have substantially lower variability than those from unfiltered data. Also, the computational cost of signature extraction is cut by about a factor of 10. We find 3 novel cancer signatures, including a liver cancer dominant signature (96% contribution) and a renal cell carcinoma signature (70% contribution). Our method accelerates finding new cancer signatures and improves their overall stability. Reciprocally, the methods for extracting cancer signatures could have interesting applications in quantitative finance.
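
    A minimal sketch of the NMF signature-extraction step using scikit-learn on a hypothetical mutation-count matrix (96 trinucleotide mutation categories by samples); the risk-model noise filtering described in the abstract is not reproduced.

      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(0)
      # Hypothetical counts: 96 mutation categories x 200 samples
      counts = rng.poisson(5.0, size=(96, 200)).astype(float)

      model = NMF(n_components=5, init='nndsvda', max_iter=500, random_state=0)
      signatures = model.fit_transform(counts)     # 96 x 5 signature matrix
      exposures = model.components_                # 5 x 200 per-sample exposures

      # Normalize each signature so its 96 categories sum to one
      signatures /= signatures.sum(axis=0, keepdims=True)
      print(signatures.shape, exposures.shape)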

  10. Topological Acoustics

    NASA Astrophysics Data System (ADS)

    Yang, Zhaoju; Gao, Fei; Shi, Xihang; Lin, Xiao; Gao, Zhen; Chong, Yidong; Zhang, Baile

    2015-03-01

    The manipulation of acoustic wave propagation in fluids has numerous applications, including some in everyday life. Acoustic technologies frequently develop in tandem with optics, using shared concepts such as waveguiding and metamedia. It is thus noteworthy that an entirely novel class of electromagnetic waves, known as "topological edge states," has recently been demonstrated. These are inspired by the electronic edge states occurring in topological insulators, and possess a striking and technologically promising property: the ability to travel in a single direction along a surface without backscattering, regardless of the existence of defects or disorder. Here, we develop an analogous theory of topological fluid acoustics, and propose a scheme for realizing topological edge states in an acoustic structure containing circulating fluids. The phenomenon of disorder-free one-way sound propagation, which does not occur in ordinary acoustic devices, may have novel applications for acoustic isolators, modulators, and transducers.

  11. Acoustic sensors using microstructures tunable with energy other than acoustic energy

    DOEpatents

    Datskos, Panagiotis G.

    2003-11-25

    A sensor for detecting acoustic energy includes a microstructure tuned to a predetermined acoustic frequency and a device for detecting movement of the microstructure. A display device is operatively linked to the movement detecting device. When acoustic energy strikes the acoustic sensor, acoustic energy having a predetermined frequency moves the microstructure, where the movement is detected by the movement detecting device.

  12. Acoustic sensors using microstructures tunable with energy other than acoustic energy

    DOEpatents

    Datskos, Panagiotis G.

    2005-06-07

    A sensor for detecting acoustic energy includes a microstructure tuned to a predetermined acoustic frequency and a device for detecting movement of the microstructure. A display device is operatively linked to the movement detecting device. When acoustic energy strikes the acoustic sensor, acoustic energy having a predetermined frequency moves the microstructure, where the movement is detected by the movement detecting device.

  13. Acoustic dispersive prism.

    PubMed

    Esfahlani, Hussein; Karkar, Sami; Lissek, Herve; Mosig, Juan R

    2016-01-07

    The optical dispersive prism is a well-studied element, which separates white light into its constituent spectral colors and occurs in nature in the form of water droplets. In analogy to this definition, the acoustic dispersive prism should be an acoustic device capable of splitting a broadband acoustic wave into its constituent Fourier components. However, due to the acoustical nature of materials as well as design and fabrication difficulties, there is neither a natural acoustic counterpart of the optical prism nor any artificial design reported so far exhibiting an equivalent acoustic behaviour. Here, based on exotic properties of acoustic transmission-line metamaterials and exploiting the unique physical behaviour of acoustic leaky-wave radiation, we report the first acoustic dispersive prism, effective within the audible frequency range 800 Hz-1300 Hz. The dispersive nature, and consequently the frequency-dependent refractive index of the metamaterial, are exploited to split the sound waves towards different, frequency-dependent directions. Meanwhile, the leaky-wave nature of the structure facilitates the sound wave radiation into the ambient medium.

  14. A Glider-Assisted Link Disruption Restoration Mechanism in Underwater Acoustic Sensor Networks.

    PubMed

    Jin, Zhigang; Wang, Ning; Su, Yishan; Yang, Qiuling

    2018-02-07

    Underwater acoustic sensor networks (UASNs) have become a hot research topic. In UASNs, nodes can be affected by ocean currents and external forces, which could result in sudden link disruption. Therefore, designing a flexible and efficient link disruption restoration mechanism to ensure network connectivity is a challenge. In this paper, we propose a glider-assisted restoration mechanism which includes link disruption recognition and a related link restoring mechanism. In the link disruption recognition mechanism, the cluster heads collect the link disruption information and then schedule gliders acting as relay nodes to restore the disrupted link. Considering the glider's sawtooth motion, we design a relay location optimization algorithm that accounts for both the glider's trajectory and an acoustic channel attenuation model. The utility function is established by minimizing the channel attenuation, and the optimal location of the glider is solved by a multiplier method. The glider-assisted restoration mechanism can greatly improve the packet delivery rate, reduce the communication energy consumption, and generalize to different link-disruption scenarios. The simulation results show that the glider-assisted restoration mechanism can improve the delivery rate of data packets by 15-33% compared with cooperative opportunistic routing (OVAR), hop-by-hop vector-based forwarding (HH-VBF), and vector-based forwarding (VBF) methods, and reduce communication energy consumption by 20-58% for a typical network setting.
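
    A minimal sketch of the relay-placement idea, assuming the standard Thorp absorption formula plus practical spreading as a stand-in for the paper's channel attenuation model, a worst-hop loss as one plausible utility, and scipy's general-purpose minimizer instead of the multiplier method; node coordinates and frequency are illustrative.

      import numpy as np
      from scipy.optimize import minimize

      def thorp_alpha_db_per_km(f_khz):
          """Thorp absorption coefficient (dB/km) for a frequency in kHz."""
          f2 = f_khz ** 2
          return 0.11 * f2 / (1 + f2) + 44 * f2 / (4100 + f2) + 2.75e-4 * f2 + 0.003

      def transmission_loss_db(dist_m, f_khz, spreading=1.5):
          """Spreading plus absorption loss over a distance in metres."""
          return 10 * spreading * np.log10(dist_m) + thorp_alpha_db_per_km(f_khz) * dist_m / 1000.0

      def worst_hop_loss(pos, node_a, node_b, f_khz=20.0):
          """Attenuation of the weaker hop when a glider relays between two nodes."""
          pos = np.asarray(pos)
          d1 = np.linalg.norm(pos - node_a)
          d2 = np.linalg.norm(pos - node_b)
          return max(transmission_loss_db(d1, f_khz), transmission_loss_db(d2, f_khz))

      node_a = np.array([0.0, 0.0, -100.0])        # endpoints of the disrupted link (x, y, depth in m)
      node_b = np.array([1500.0, 400.0, -300.0])
      res = minimize(worst_hop_loss, x0=(node_a + node_b) / 2 + 50.0,
                     args=(node_a, node_b), method="Nelder-Mead")
      print(res.x, res.fun)                        # relay position near the link midpoint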

  15. Free-jet acoustic investigation of high-radius-ratio coannular plug nozzles. Comprehensive data report, volume 1

    NASA Technical Reports Server (NTRS)

    Knott, P. R.; Janardan, B. A.; Majjigi, R. K.; Shutiani, P. K.; Vogt, P. G.

    1981-01-01

    Six coannular plug nozzle configurations having inverted velocity and temperature profiles, and a baseline convergent conical nozzle, were tested for simulated flight acoustic evaluation in General Electric's Anechoic Free-Jet Acoustic Facility. The nozzles were tested over a range of test conditions that are typical of a Variable Cycle Engine for application to advanced high speed aircraft. The outer stream radius ratio for most of the configurations was 0.853, and the inner-stream-to-outer-stream area ratio was tested in the range of 0.54. Other variables investigated were the influence of bypass struts, a simple noncontoured convergent-divergent outer stream nozzle for forward quadrant shock noise control, and the effects of varying outer stream radius and inner-stream-to-outer-stream velocity ratios on the flight noise signatures of the nozzles. It was found that in simulated flight, the high-radius-ratio coannular plug nozzles maintain their jet noise and shock noise reduction features previously observed in static testing. The presence of nozzle bypass struts will not significantly affect the acoustic noise reduction features of a General Electric-type nozzle design. A unique coannular plug nozzle flight acoustic spectral prediction method was identified and found to predict the measured results quite well. Special laser velocimeter and acoustic measurements were performed which have given new insight into the jet and shock noise reduction mechanisms of coannular plug nozzles with regard to identifying further beneficial research efforts.

  16. Characterizing riverbed sediment using high-frequency acoustics 1: spectral properties of scattering

    USGS Publications Warehouse

    Buscombe, Daniel D.; Grams, Paul E.; Kaplinski, Matt A.

    2014-01-01

    Bed-sediment classification using high-frequency hydro-acoustic instruments is challenging when sediments are spatially heterogeneous, which is often the case in rivers. The use of acoustic backscatter to classify sediments is an attractive alternative to analysis of topography because it is potentially sensitive to grain-scale roughness. Here, a new method is presented which uses high-frequency acoustic backscatter from multibeam sonar to classify heterogeneous riverbed sediments by type (sand, gravel, rock) continuously in space and at small spatial resolution. In this, the first of a pair of papers that examine the scattering signatures from a heterogeneous riverbed, methods are presented to construct spatially explicit maps of spectral properties from geo-referenced point clouds of geometrically and radiometrically corrected echoes. Backscatter power spectra are computed to produce scale and amplitude metrics that collectively characterize the length scales of stochastic measures of riverbed scattering, termed ‘stochastic geometries’. Backscatter aggregated over small spatial scales has spectra that obey a power law. This apparently self-affine behavior could instead arise from morphological- and grain-scale roughnesses over multiple overlapping scales, or riverbed scattering being transitional between Rayleigh and geometric regimes. Relationships exist between stochastic geometries of backscatter and areas of rough and smooth sediments. However, no one parameter can uniquely characterize a particular substrate, nor definitively separate the relative contributions of roughness and acoustic impedance (hardness). Combinations of spectral quantities do, however, have the potential to delineate riverbed sediment patchiness, in a data-driven approach comparing backscatter with bed-sediment observations (which is the subject of part two of this manuscript).
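
    A minimal sketch of the spectral characterization step: estimate a power spectrum of a gridded backscatter profile and fit a power-law slope on log-log axes. Welch's method, the toy random-walk profile, and the grid spacing are illustrative stand-ins, not the paper's estimators.

      import numpy as np
      from scipy.signal import welch

      rng = np.random.default_rng(1)
      dx = 0.25                                        # assumed grid spacing in metres
      profile = np.cumsum(rng.standard_normal(4096))   # toy self-affine backscatter profile

      freqs, psd = welch(profile, fs=1.0 / dx, nperseg=1024)
      keep = freqs > 0
      slope, intercept = np.polyfit(np.log10(freqs[keep]), np.log10(psd[keep]), 1)
      print(f"power-law exponent ~ {slope:.2f}")       # near -2 for a random-walk profile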

  17. Age-Related Differences in Lexical Access Relate to Speech Recognition in Noise

    PubMed Central

    Carroll, Rebecca; Warzybok, Anna; Kollmeier, Birger; Ruigendijk, Esther

    2016-01-01

    Vocabulary size has been suggested as a useful measure of “verbal abilities” that correlates with speech recognition scores. Knowing more words is linked to better speech recognition. How vocabulary knowledge translates to general speech recognition mechanisms, how these mechanisms relate to offline speech recognition scores, and how they may be modulated by acoustical distortion or age, is less clear. Age-related differences in linguistic measures may predict age-related differences in speech recognition in noise performance. We hypothesized that speech recognition performance can be predicted by the efficiency of lexical access, which refers to the speed with which a given word can be searched and accessed relative to the size of the mental lexicon. We tested speech recognition in a clinical German sentence-in-noise test at two signal-to-noise ratios (SNRs), in 22 younger (18–35 years) and 22 older (60–78 years) listeners with normal hearing. We also assessed receptive vocabulary, lexical access time, verbal working memory, and hearing thresholds as measures of individual differences. Age group, SNR level, vocabulary size, and lexical access time were significant predictors of individual speech recognition scores, but working memory and hearing threshold were not. Interestingly, longer accessing times were correlated with better speech recognition scores. Hierarchical regression models for each subset of age group and SNR showed very similar patterns: the combination of vocabulary size and lexical access time contributed most to speech recognition performance; only for the younger group at the better SNR (yielding about 85% correct speech recognition) did vocabulary size alone predict performance. Our data suggest that successful speech recognition in noise is mainly modulated by the efficiency of lexical access. This suggests that older adults’ poorer performance in the speech recognition task may have arisen from reduced efficiency in lexical access

  18. Age-Related Differences in Lexical Access Relate to Speech Recognition in Noise.

    PubMed

    Carroll, Rebecca; Warzybok, Anna; Kollmeier, Birger; Ruigendijk, Esther

    2016-01-01

    Vocabulary size has been suggested as a useful measure of "verbal abilities" that correlates with speech recognition scores. Knowing more words is linked to better speech recognition. How vocabulary knowledge translates to general speech recognition mechanisms, how these mechanisms relate to offline speech recognition scores, and how they may be modulated by acoustical distortion or age, is less clear. Age-related differences in linguistic measures may predict age-related differences in speech recognition in noise performance. We hypothesized that speech recognition performance can be predicted by the efficiency of lexical access, which refers to the speed with which a given word can be searched and accessed relative to the size of the mental lexicon. We tested speech recognition in a clinical German sentence-in-noise test at two signal-to-noise ratios (SNRs), in 22 younger (18-35 years) and 22 older (60-78 years) listeners with normal hearing. We also assessed receptive vocabulary, lexical access time, verbal working memory, and hearing thresholds as measures of individual differences. Age group, SNR level, vocabulary size, and lexical access time were significant predictors of individual speech recognition scores, but working memory and hearing threshold were not. Interestingly, longer accessing times were correlated with better speech recognition scores. Hierarchical regression models for each subset of age group and SNR showed very similar patterns: the combination of vocabulary size and lexical access time contributed most to speech recognition performance; only for the younger group at the better SNR (yielding about 85% correct speech recognition) did vocabulary size alone predict performance. Our data suggest that successful speech recognition in noise is mainly modulated by the efficiency of lexical access. This suggests that older adults' poorer performance in the speech recognition task may have arisen from reduced efficiency in lexical access; with an

  19. Electronic Signature Policy

    EPA Pesticide Factsheets

    Establishes the United States Environmental Protection Agency's approach to adopting electronic signature technology and best practices to ensure electronic signatures applied to official Agency documents are legally valid and enforceable.

  20. Acoustic structure of the five perceptual dimensions of timbre in orchestral instrument tones

    PubMed Central

    Elliott, Taffeta M.; Hamilton, Liberty S.; Theunissen, Frédéric E.

    2013-01-01

    Attempts to relate the perceptual dimensions of timbre to quantitative acoustical dimensions have been tenuous, leading to claims that timbre is an emergent property, if measurable at all. Here, a three-pronged analysis shows that the timbre space of sustained instrument tones occupies 5 dimensions and that a specific combination of acoustic properties uniquely determines gestalt perception of timbre. Firstly, multidimensional scaling (MDS) of dissimilarity judgments generated a perceptual timbre space in which 5 dimensions were cross-validated and selected by traditional model comparisons. Secondly, subjects rated tones on semantic scales. A discriminant function analysis (DFA) accounting for variance of these semantic ratings across instruments and between subjects also yielded 5 significant dimensions with similar stimulus ordination. The dimensions of timbre space were then interpreted semantically by rotational and reflectional projection of the MDS solution into two DFA dimensions. Thirdly, to relate this final space to acoustical structure, the perceptual MDS coordinates of each sound were regressed with its joint spectrotemporal modulation power spectrum. Sound structures correlated significantly with distances in perceptual timbre space. Contrary to previous studies, most perceptual timbre dimensions are not the result of purely temporal or spectral features but instead depend on signature spectrotemporal patterns. PMID:23297911
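
    A minimal sketch of the first step (multidimensional scaling of pairwise dissimilarity judgments into a five-dimensional timbre space) using scikit-learn; the dissimilarity matrix below is random and purely illustrative.

      import numpy as np
      from sklearn.manifold import MDS

      rng = np.random.default_rng(0)
      n_tones = 20
      d = rng.random((n_tones, n_tones))
      dissim = (d + d.T) / 2                 # symmetric pairwise dissimilarities
      np.fill_diagonal(dissim, 0.0)

      mds = MDS(n_components=5, dissimilarity='precomputed', random_state=0)
      coords = mds.fit_transform(dissim)     # 20 tones embedded in a 5-D perceptual space
      print(coords.shape, mds.stress_)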

  1. Study of environmental sound source identification based on hidden Markov model for robust speech recognition

    NASA Astrophysics Data System (ADS)

    Nishiura, Takanobu; Nakamura, Satoshi

    2003-10-01

    Humans communicate with each other through speech by focusing on the target speech among environmental sounds in real acoustic environments. We can easily identify the target sound from other environmental sounds. For hands-free speech recognition, the identification of the target speech from environmental sounds is imperative. This mechanism may also be important for a self-moving robot to sense the acoustic environments and communicate with humans. Therefore, this paper first proposes hidden Markov model (HMM)-based environmental sound source identification. Environmental sounds are modeled by three-state HMMs and evaluated using 92 kinds of environmental sounds. The identification accuracy was 95.4%. This paper also proposes a new HMM composition method that composes speech HMMs and an HMM of categorized environmental sounds for robust environmental sound-added speech recognition. As a result of the evaluation experiments, we confirmed that the proposed HMM composition outperforms the conventional HMM composition with speech HMMs and a noise (environmental sound) HMM trained using noise periods prior to the target speech in a captured signal. [Work supported by the Ministry of Public Management, Home Affairs, Posts and Telecommunications of Japan.]
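
    A minimal sketch of a three-state HMM sound identifier in the spirit of the approach above, using the hmmlearn package on hypothetical MFCC-like feature sequences: one HMM is trained per sound class and a test clip is assigned to the class whose model scores it highest. Class names and features are invented for illustration.

      import numpy as np
      from hmmlearn.hmm import GaussianHMM

      rng = np.random.default_rng(0)

      def toy_features(mean, n_frames=200, dim=13):
          """Stand-in for MFCC frames of one environmental-sound recording."""
          return mean + rng.standard_normal((n_frames, dim))

      # Train one 3-state HMM per hypothetical sound class
      classes = {"door": 0.0, "phone": 2.0, "spray": -2.0}
      models = {}
      for name, mean in classes.items():
          X = np.vstack([toy_features(mean) for _ in range(5)])
          lengths = [200] * 5
          models[name] = GaussianHMM(n_components=3, covariance_type="diag",
                                     n_iter=20).fit(X, lengths)

      # Identify a test clip by the highest log-likelihood model
      test = toy_features(2.0)
      print(max(models, key=lambda name: models[name].score(test)))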

  2. Modelling Errors in Automatic Speech Recognition for Dysarthric Speakers

    NASA Astrophysics Data System (ADS)

    Caballero Morales, Santiago Omar; Cox, Stephen J.

    2009-12-01

    Dysarthria is a motor speech disorder characterized by weakness, paralysis, or poor coordination of the muscles responsible for speech. Although automatic speech recognition (ASR) systems have been developed for disordered speech, factors such as low intelligibility and limited phonemic repertoire decrease speech recognition accuracy, making conventional speaker adaptation algorithms perform poorly on dysarthric speakers. In this work, rather than adapting the acoustic models, we model the errors made by the speaker and attempt to correct them. For this task, two techniques have been developed: (1) a set of "metamodels" that incorporate a model of the speaker's phonetic confusion matrix into the ASR process; (2) a cascade of weighted finite-state transducers at the confusion matrix, word, and language levels. Both techniques attempt to correct the errors made at the phonetic level and make use of a language model to find the best estimate of the correct word sequence. Our experiments show that both techniques outperform standard adaptation techniques.

  3. An archaeal genomic signature

    NASA Technical Reports Server (NTRS)

    Graham, D. E.; Overbeek, R.; Olsen, G. J.; Woese, C. R.

    2000-01-01

    Comparisons of complete genome sequences allow the most objective and comprehensive descriptions possible of a lineage's evolution. This communication uses the completed genomes from four major euryarchaeal taxa to define a genomic signature for the Euryarchaeota and, by extension, the Archaea as a whole. The signature is defined in terms of the set of protein-encoding genes found in at least two diverse members of the euryarchaeal taxa that function uniquely within the Archaea; most signature proteins have no recognizable bacterial or eukaryal homologs. By this definition, 351 clusters of signature proteins have been identified. Functions of most proteins in this signature set are currently unknown. At least 70% of the clusters that contain proteins from all the euryarchaeal genomes also have crenarchaeal homologs. This conservative set, which appears refractory to horizontal gene transfer to the Bacteria or the Eukarya, would seem to reflect the significant innovations that were unique and fundamental to the archaeal "design fabric." Genomic protein signature analysis methods may be extended to characterize the evolution of any phylogenetically defined lineage. The complete set of protein clusters for the archaeal genomic signature is presented as supplementary material (see the PNAS web site, www.pnas.org).

  4. Waveform inversion of oscillatory signatures in long-period events beneath volcanoes

    USGS Publications Warehouse

    Kumagai, H.; Chouet, B.A.; Nakano, M.

    2002-01-01

    The source mechanism of long-period (LP) events is examined using synthetic waveforms generated by the acoustic resonance of a fluid-filled crack. We perform a series of numerical tests in which the oscillatory signatures of synthetic LP waveforms are used to determine the source time functions of the six moment tensor components from waveform inversions assuming a point source. The results indicate that the moment tensor representation is valid for the odd modes of crack resonance with wavelengths 2L/n, 2W/n, n = 3, 5, 7, ..., where L and W are the crack length and width, respectively. For the even modes with wavelengths 2L/n, 2W/n, n = 2, 4, 6,..., a generalized source representation using higher-order tensors is required, although the efficiency of seismic waves radiated by the even modes is expected to be small. We apply the moment tensor inversion to the oscillatory signatures of an LP event observed at Kusatsu-Shirane Volcano, central Japan. Our results point to the resonance of a subhorizontal crack located a few hundred meters beneath the summit crater lakes. The present approach may be useful to quantify the source location, geometry, and force system of LP events, and opens the way for moment tensor inversions of tremor.

  5. An automatic speech recognition system with speaker-independent identification support

    NASA Astrophysics Data System (ADS)

    Caranica, Alexandru; Burileanu, Corneliu

    2015-02-01

    The novelty of this work relies on the application of an open source research software toolkit (CMU Sphinx) to train, build and evaluate a speech recognition system, with speaker-independent support, for voice-controlled hardware applications. Moreover, we propose to use the trained acoustic model to successfully decode offline voice commands on embedded hardware, such as an ARMv6 low-cost SoC, Raspberry PI. This type of single-board computer, mainly used for educational and research activities, can serve as a proof-of-concept software and hardware stack for low cost voice automation systems.

  6. Retrospective Analysis of Clinical Performance of an Estonian Speech Recognition System for Radiology: Effects of Different Acoustic and Language Models.

    PubMed

    Paats, A; Alumäe, T; Meister, E; Fridolin, I

    2018-04-30

    The aim of this study was to analyze retrospectively the influence of different acoustic and language models in order to determine the most important effects on the clinical performance of an Estonian language-based non-commercial radiology-oriented automatic speech recognition (ASR) system. An ASR system was developed for the Estonian language in the radiology domain by utilizing open-source software components (Kaldi toolkit, Thrax). The ASR system was trained with real radiology text reports and dictations collected during the development phases. The final version of the ASR system was tested by 11 radiologists who dictated 219 reports in total, in a spontaneous manner in a real clinical environment. The audio files collected in the final phase were used to measure the performance of different versions of the ASR system retrospectively. ASR system versions were evaluated by word error rate (WER) for each speaker and modality, and by the WER difference between the first and the last version of the ASR system. The total average WER throughout all material improved from 18.4% for the first version (v1) to 5.8% for the last version (v8), which corresponds to a relative improvement of 68.5%. WER improvement was strongly related to modality and radiologist. In summary, the performance of the final ASR system version was close to optimal, delivering similar results across all modalities and being independent of the user, the complexity of the radiology reports, user experience, and speech characteristics.
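
    Word error rate, the metric used throughout the study, is the word-level edit distance between the recognized and reference transcripts divided by the reference length. A minimal sketch with an invented example sentence:

      def word_error_rate(reference, hypothesis):
          """WER = (substitutions + insertions + deletions) / reference length."""
          ref, hyp = reference.split(), hypothesis.split()
          # Dynamic-programming edit distance over words
          d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
          for i in range(len(ref) + 1):
              d[i][0] = i
          for j in range(len(hyp) + 1):
              d[0][j] = j
          for i in range(1, len(ref) + 1):
              for j in range(1, len(hyp) + 1):
                  cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                  d[i][j] = min(d[i - 1][j] + 1,        # deletion
                                d[i][j - 1] + 1,        # insertion
                                d[i - 1][j - 1] + cost) # substitution / match
          return d[len(ref)][len(hyp)] / len(ref)

      print(word_error_rate("no focal lesion in the liver",
                            "no focal lesions in liver"))   # 2 errors / 6 words = 0.33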

  7. Magneto-acoustic wave energy in sunspots: observations and numerical simulations

    NASA Astrophysics Data System (ADS)

    Felipe, T.; Khomenko, E.; Collados, M.; Beck, C.

    2011-11-01

    We have reproduced some sunspot wave signatures obtained from spectropolarimetric observations through 3D MHD numerical simulations. The results of the simulations are compared with the oscillations observed simultaneously at different heights from the Si I λ10827 Å line, the He I λ10830 Å line, the Ca II H core, and the Fe I blends at the wings of the Ca II H line. The simulations show a remarkable agreement with the observations, and we have used them to quantify the energy contribution of the magneto-acoustic waves to the chromospheric heating in sunspots. Our findings indicate that the energy supplied by these waves is 5-10 times lower than the amount needed to balance the chromospheric radiative losses.

  8. Epidermal mechano-acoustic sensing electronics for cardiovascular diagnostics and human-machine interfaces.

    PubMed

    Liu, Yuhao; Norton, James J S; Qazi, Raza; Zou, Zhanan; Ammann, Kaitlyn R; Liu, Hank; Yan, Lingqing; Tran, Phat L; Jang, Kyung-In; Lee, Jung Woo; Zhang, Douglas; Kilian, Kristopher A; Jung, Sung Hee; Bretl, Timothy; Xiao, Jianliang; Slepian, Marvin J; Huang, Yonggang; Jeong, Jae-Woong; Rogers, John A

    2016-11-01

    Physiological mechano-acoustic signals, often with frequencies and intensities that are beyond those associated with the audible range, provide information of great clinical utility. Stethoscopes and digital accelerometers in conventional packages can capture some relevant data, but neither is suitable for use in a continuous, wearable mode, and both have shortcomings associated with mechanical transduction of signals through the skin. We report a soft, conformal class of device configured specifically for mechano-acoustic recording from the skin, capable of being used on nearly any part of the body, in forms that maximize detectable signals and allow for multimodal operation, such as electrophysiological recording. Experimental and computational studies highlight the key roles of low effective modulus and low areal mass density for effective operation in this type of measurement mode on the skin. Demonstrations involving seismocardiography and heart murmur detection in a series of cardiac patients illustrate utility in advanced clinical diagnostics. Monitoring of pump thrombosis in ventricular assist devices provides an example in characterization of mechanical implants. Speech recognition and human-machine interfaces represent additional demonstrated applications. These and other possibilities suggest broad-ranging uses for soft, skin-integrated digital technologies that can capture human body acoustics.

  9. Epidermal mechano-acoustic sensing electronics for cardiovascular diagnostics and human-machine interfaces

    PubMed Central

    Liu, Yuhao; Norton, James J. S.; Qazi, Raza; Zou, Zhanan; Ammann, Kaitlyn R.; Liu, Hank; Yan, Lingqing; Tran, Phat L.; Jang, Kyung-In; Lee, Jung Woo; Zhang, Douglas; Kilian, Kristopher A.; Jung, Sung Hee; Bretl, Timothy; Xiao, Jianliang; Slepian, Marvin J.; Huang, Yonggang; Jeong, Jae-Woong; Rogers, John A.

    2016-01-01

    Physiological mechano-acoustic signals, often with frequencies and intensities that are beyond those associated with the audible range, provide information of great clinical utility. Stethoscopes and digital accelerometers in conventional packages can capture some relevant data, but neither is suitable for use in a continuous, wearable mode, and both have shortcomings associated with mechanical transduction of signals through the skin. We report a soft, conformal class of device configured specifically for mechano-acoustic recording from the skin, capable of being used on nearly any part of the body, in forms that maximize detectable signals and allow for multimodal operation, such as electrophysiological recording. Experimental and computational studies highlight the key roles of low effective modulus and low areal mass density for effective operation in this type of measurement mode on the skin. Demonstrations involving seismocardiography and heart murmur detection in a series of cardiac patients illustrate utility in advanced clinical diagnostics. Monitoring of pump thrombosis in ventricular assist devices provides an example in characterization of mechanical implants. Speech recognition and human-machine interfaces represent additional demonstrated applications. These and other possibilities suggest broad-ranging uses for soft, skin-integrated digital technologies that can capture human body acoustics. PMID:28138529

  10. ACOUSTICAL IMAGING AND MECHANICAL PROPERTIES OF SOFT ROCK AND MARINE SEDIMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thurman E. Scott, Jr., Ph.D.; Younane Abousleiman, Ph.D.; Musharraf Zaman, Ph.D., P.E.

    2002-11-18

    During the seventh quarter of the project the research team analyzed some of the acoustic velocity data and rock deformation data. The goal is to create a series of "deformation-velocity maps" which can outline the types of rock deformational mechanisms that occur at high pressures and then associate those with specific compressional or shear wave velocity signatures. During this quarter, we began to analyze both the acoustical and deformational properties of the various rock types. Some of the preliminary velocity data from the Danian chalk will be presented in this report. This rock type was selected for the initial efforts as it will be used in the tomographic imaging study outlined in Task 10. This is one of the more important rock types in the study, as the Danian chalk is thought to represent an excellent analog to the Ekofisk chalk that has caused so many problems in the North Sea. Some of the preliminary acoustic velocity data obtained during this phase of the project indicate that during pore collapse and compaction of this chalk, the acoustic velocities can change by as much as 200 m/s. Theoretically, this significant velocity change should be detectable during repeated successive 3-D seismic images. In addition, research continues with an analysis of the unconsolidated sand samples at high confining pressures obtained in Task 9. The analysis of the results indicates that sands with a 10% volume of fines can undergo liquefaction at lower stress conditions than sand samples which do not have fines added. This liquefaction and/or sand flow is similar to "shallow water" flows observed during drilling in the offshore Gulf of Mexico.

  11. The multipath propagation effect in gunshot acoustics and its impact on the design of sniper positioning systems

    NASA Astrophysics Data System (ADS)

    Ramos, António L. L.; Holm, Sverre; Gudvangen, Sigmund; Otterlei, Ragnvald

    2013-06-01

    Counter-sniper systems rely on the detection and parameter estimation of the shockwave and the muzzle blast in order to determine the sniper location. In real-world situations, these acoustical signals can be disturbed by natural phenomena like weather and climate conditions, the multipath propagation effect, and background noise. While some of these issues have received attention in recent publications on gunshot acoustics, the multipath propagation phenomenon, whose effect cannot be neglected, especially in urban environments, has not yet been discussed in detail in the technical literature in the same context. Propagating sound waves can be reflected at boundaries in the vicinity of sound sources or receivers whenever there is a difference in acoustical impedance between the reflective material and the air. Therefore, the received signal can be composed of a direct-path signal plus N scaled, delayed copies of that signal. This paper presents a discussion of the multipath propagation effect and its impact on the performance and reliability of sniper positioning systems. In our formulation, propagation models for both the shockwave and the muzzle blast are considered and analyzed. Conclusions following the theoretical analysis of the problem are fully supported by actual gunshot acoustic signatures.
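
    A minimal sketch of the received-signal model discussed above: the microphone output is the direct-path waveform plus N scaled, delayed copies from nearby reflectors. The pulse shape, delays, and gains below are hypothetical, chosen only for illustration.

      import numpy as np

      fs = 48000.0                                   # sample rate (Hz)
      t = np.arange(0, 0.05, 1 / fs)

      # Hypothetical muzzle-blast pulse: a decaying sinusoid
      direct = np.exp(-t / 0.003) * np.sin(2 * np.pi * 500 * t)

      def multipath(signal, delays_s, gains, fs):
          """Direct path plus N scaled, delayed reflections."""
          out = signal.copy()
          for delay, gain in zip(delays_s, gains):
              shift = int(round(delay * fs))
              out[shift:] += gain * signal[:len(signal) - shift]
          return out

      received = multipath(direct, delays_s=[0.004, 0.011], gains=[0.6, 0.3], fs=fs)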

  12. Improved Open-Microphone Speech Recognition

    NASA Astrophysics Data System (ADS)

    Abrash, Victor

    2002-12-01

    dialog manager extra flexibility to recognize the signal with no audio gaps between recognition requests, as well as to rerecognize portions of the signal, or to rerecognize speech with different grammars, acoustic models, recognizers, start times, and so on. SRI expects that this new open-mic functionality will enable NASA to develop better error-correction mechanisms for spoken dialog systems, and may also enable new interaction strategies.

  13. Improved Open-Microphone Speech Recognition

    NASA Technical Reports Server (NTRS)

    Abrash, Victor

    2002-01-01

    dialog manager extra flexibility to recognize the signal with no audio gaps between recognition requests, as well as to rerecognize portions of the signal, or to rerecognize speech with different grammars, acoustic models, recognizers, start times, and so on. SRI expects that this new open-mic functionality will enable NASA to develop better error-correction mechanisms for spoken dialog systems, and may also enable new interaction strategies.

  14. Spectral pattern recognition of controlled substances in street samples using artificial neural network system

    NASA Astrophysics Data System (ADS)

    Poryvkina, Larisa; Aleksejev, Valeri; Babichenko, Sergey M.; Ivkina, Tatjana

    2011-04-01

    The NarTest fluorescent technique is aimed at the detection of the analyte of interest in street samples by recognition of its specific spectral patterns in 3-dimensional Spectral Fluorescent Signatures (SFS) measured with the NTX2000 analyzer, without chromatographic or other separation of controlled substances from a mixture with cutting agents. Illicit drugs have their own characteristic SFS features which can be used for detection and identification of narcotics; however, a typical street sample consists of a mixture with cutting agents: adulterants and diluents. Many of these interfere with the spectral shape of the SFS. An expert system based on Artificial Neural Networks (ANNs) has been developed and applied for such pattern recognition in SFS of street samples of illicit drugs.

  15. Corona Phase Molecular Recognition (CoPhMoRe) to Enable New Nanosensor Interfaces

    NASA Astrophysics Data System (ADS)

    Strano, Michael

    2015-03-01

    Our lab at MIT has been interested in how the 1D and 2D electronic structures of carbon nanotubes and graphene respectively can be utilized to advance new concepts in molecular detection. We introduce CoPhMoRe or corona phase molecular recognition as a method of discovering synthetic antibodies, or nanotube-templated recognition sites from a heteropolymer library. We show that certain synthetic heteropolymers, once constrained onto a single-walled carbon nanotube by chemical adsorption, also form a new corona phase that exhibits highly selective recognition for specific molecules. To prove the generality of this phenomenon, we report three examples of heteropolymers-nanotube recognition complexes for riboflavin, L-thyroxine and estradiol. The platform opens new opportunities to create synthetic recognition sites for molecular detection. We have also extended this molecular recognition technique to neurotransmitters, producing the first fluorescent sensor for dopamine. Another area of advancement in biosensor development is the use of near infrared fluorescent carbon nanotube sensors for in-vivo detection. Here, we show that PEG-ligated d(AAAT)7 DNA wrapped SWNT are selective for nitric oxide, a vasodilator of blood vessels, and can be tail vein injected into mice and localized within the viable mouse liver. We use an SJL mouse model to study liver inflammation in vivo using the spatially and spectrally resolved nIR signature of the localized SWNT sensors.

  16. Acoustic energy harvesting based on a planar acoustic metamaterial

    NASA Astrophysics Data System (ADS)

    Qi, Shuibao; Oudich, Mourad; Li, Yong; Assouar, Badreddine

    2016-06-01

    We theoretically report on an innovative and practical acoustic energy harvester based on a defected acoustic metamaterial (AMM) with piezoelectric material. The idea is to create suitable resonant defects in an AMM to confine the strain energy originating from an acoustic incidence. This scavenged energy is converted into electrical energy by attaching a structured piezoelectric material into the defect area of the AMM. We show an acoustic energy harvester based on a meta-structure capable of producing electrical power from an acoustic pressure. Numerical simulations are provided to analyze and elucidate the principles and the performances of the proposed system. A maximum output voltage of 1.3 V and a power density of 0.54 μW/cm3 are obtained at a frequency of 2257.5 Hz. The proposed concept should have broad applications on energy harvesting as well as on low-frequency sound isolation, since this system acts as both acoustic insulator and energy harvester.

  17. 1 CFR 18.7 - Signature.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 1 General Provisions 1 2010-01-01 2010-01-01 false Signature. 18.7 Section 18.7 General Provisions... PREPARATION AND TRANSMITTAL OF DOCUMENTS GENERALLY § 18.7 Signature. The original and each duplicate original... stamped beneath the signature. Initialed or impressed signatures will not be accepted. Documents submitted...

  18. Vertical Coupling and Observable Effects of Evanescent Acoustic-Gravity Waves in the Mesosphere and Thermosphere

    NASA Astrophysics Data System (ADS)

    Snively, J. B.

    2017-12-01

    Our understanding of acoustic-gravity wave (AGW) dynamics at short periods (minutes to an hour) and small scales (10s to 100s of km) in the mesosphere, thermosphere, and ionosphere (MTI) has benefited considerably from horizontally- and vertically-resolved measurements of layered species. These include, for example, imagery of the mesopause (80-100 km) airglow layers and vertical profiles of the sodium layer via lidar [e.g., Taylor and Hapgood, PSS, 36(10), 1988; Miller et al., PNAS, 112(49), 2015; Cao et al., JGR, 121, 2016]. In the thermosphere-ionosphere, AGW perturbations are also revealed in electron density profiles [Livneh et al., JGR, 112, 2007] and maps of total electron content (TEC) from global positioning system (GPS) receivers [Nishioka et al., GRL, 40(21), 2013]. To the extent that AGW signatures in layered species can be quantified, and the ambient atmospheric state measured or estimated, numerical models enable investigations of dynamics at intermediate altitudes that cannot readily be measured (e.g., above and below the 80-100 km mesopause region). Here, new 2D and 3D versions of the Model for Acoustic-Gravity Wave Interactions and Coupling (MAGIC) [e.g., Snively and Pasko, JGR, 113(A6), 2008, and references therein] are introduced and applied to investigate spectra of short-period AGWs that can pass through the mesopause region to reach and impact the thermosphere. Simulation case studies are constructed to investigate both their signatures through the hydroxyl airglow layer [e.g., Snively et al., JGR 115(A11), 2010] and their effects above. These waves, with large vertical wavelengths and fast horizontal phase speeds, also include those that may be subject to evanescence at the mesopause or in the middle thermosphere, with potential for ducting or dissipation between regions where static stability is higher. Despite complicating interpretations of momentum fluxes, evanescence plays an under-appreciated role in vertical coupling by AGWs [Walterscheid and Hecht

  19. Structural and Thermodynamic Signatures of DNA Recognition by Mycobacterium tuberculosis DnaA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsodikov, Oleg V.; Biswas, Tapan

    An essential protein, DnaA, binds to 9-bp DNA sites within the origin of replication oriC. These binding events are prerequisite to forming an enigmatic nucleoprotein scaffold that initiates replication. The number, sequences, positions, and orientations of these short DNA sites, or DnaA boxes, within the oriCs of different bacteria vary considerably. To investigate features of DnaA boxes that are important for binding Mycobacterium tuberculosis DnaA (MtDnaA), we have determined the crystal structures of the DNA binding domain (DBD) of MtDnaA bound to a cognate MtDnaA-box (at 2.0 Å resolution) and to a consensus Escherichia coli DnaA-box (at 2.3 Å). These structures, complemented by calorimetric equilibrium binding studies of MtDnaA DBD in a series of DnaA-box variants, reveal the main determinants of DNA recognition and establish the [T/C][T/A][G/A]TCCACA sequence as a high-affinity MtDnaA-box. Bioinformatic and calorimetric analyses indicate that DnaA-box sequences in mycobacterial oriCs generally differ from the optimal binding sequence. This sequence variation occurs commonly at the first 2 bp, making an in vivo mycobacterial DnaA-box effectively a 7-mer and not a 9-mer. We demonstrate that the decrease in the affinity of these MtDnaA-box variants for MtDnaA DBD relative to that of the highest-affinity box TTGTCCACA is less than 10-fold. The understanding of DnaA-box recognition by MtDnaA and E. coli DnaA enables one to map DnaA-box sequences in the genomes of M. tuberculosis and other eubacteria.

  20. Panel acoustic contribution analysis.

    PubMed

    Wu, Sean F; Natarajan, Logesh Kumar

    2013-02-01

    Formulations are derived to analyze the relative panel acoustic contributions of a vibrating structure. The essence of this analysis is to correlate the acoustic power flow from each panel to the radiated acoustic pressure at any field point. The acoustic power is obtained by integrating the normal component of the surface acoustic intensity, which is the product of the surface acoustic pressure and normal surface velocity reconstructed by using the Helmholtz equation least squares based nearfield acoustical holography, over each panel. The significance of this methodology is that it enables one to analyze and rank relative acoustic contributions of individual panels of a complex vibrating structure to acoustic radiation anywhere in the field based on a single set of the acoustic pressures measured in the near field. Moreover, this approach is valid for both interior and exterior regions. Examples of using this method to analyze and rank the relative acoustic contributions of a scaled vehicle cabin are demonstrated.
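
    As a rough sketch of the ranking step described above (assuming the surface pressure and normal velocity have already been reconstructed by the holography step), the following Python snippet integrates the normal active intensity over each panel and ranks the panels; the array names and the simple patch-sum quadrature are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def panel_power(pressure, velocity_n, areas):
            """Time-averaged acoustic power radiated by one panel.

            pressure   : complex surface pressures at the panel's patch centers (Pa)
            velocity_n : complex normal surface velocities at the same points (m/s)
            areas      : patch areas (m^2)
            """
            # Normal active intensity at each patch: 0.5 * Re{p * conj(v_n)}
            intensity = 0.5 * np.real(pressure * np.conj(velocity_n))
            return np.sum(intensity * areas)

        def rank_panels(panels):
            """panels: dict mapping panel name -> (pressure, velocity_n, areas)."""
            powers = {name: panel_power(*data) for name, data in panels.items()}
            total = sum(powers.values())
            # Rank panels by their share of the total radiated power.
            return sorted(((name, p, p / total) for name, p in powers.items()),
                          key=lambda item: item[1], reverse=True)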

  1. Is dust acoustic wave a new plasma acoustic mode?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dwivedi, C.B.

    1997-09-01

    In this Brief Communication, the claim of the novelty of the dust acoustic wave in a dusty plasma within the constant dust charge model is questioned. Conceptual lacunas behind the claim have been highlighted and appropriate physical arguments have been forwarded against the claim. It is demonstrated that the so-called dust acoustic wave could better be termed as a general acoustic fluctuation response with a dominant characteristic feature of the acoustic-like mode (ALM) fluctuation response reported by Dwivedi et al. [J. Plasma Phys. 41, 219 (1989)]. It is suggested that a more correct and usable nomenclature for the ALM would be simply the acoustic mode. © 1997 American Institute of Physics.

  2. Real time recognition of explosophorous group and explosive material using laser induced photoacoustic spectroscopy associated with novel algorithm for time and frequency domain analysis.

    PubMed

    El-Sharkawy, Yasser H; Elbasuney, Sherif

    2018-06-07

    Energy-rich bonds such as nitrates (NO₃⁻) and perchlorates (ClO₄⁻) have an explosive nature; they are frequently encountered in high-energy materials. These bonds encompass two highly electronegative atoms competing for electrons. Common explosive materials, including urea nitrate, ammonium nitrate, and ammonium perchlorate, were subjected to photoacoustic spectroscopy. The captured signal was processed using a novel digital algorithm designed for time and frequency domain analysis. Frequency domain analysis offered not only characteristic frequencies for the NO₃⁻ and ClO₄⁻ groups but also characteristic fingerprint spectra (based on thermal, acoustical, and optical properties) for different materials. The main outcome of this study is that phase-shift domain analysis offered an outstanding signature for each explosive material, enabling novel discrimination between explosive and similar non-explosive materials. Photoacoustic spectroscopy offered distinct characteristic signatures that can be employed for real-time detection with stand-off capabilities, since no two materials can have the same optical, thermal, and acoustical properties. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Simulating realistic predator signatures in quantitative fatty acid signature analysis

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.

    2015-01-01

    Diet estimation is an important field within quantitative ecology, providing critical insights into many aspects of ecology and community dynamics. Quantitative fatty acid signature analysis (QFASA) is a prominent method of diet estimation, particularly for marine mammal and bird species. Investigators using QFASA commonly use computer simulation to evaluate statistical characteristics of diet estimators for the populations they study. Similar computer simulations have been used to explore and compare the performance of different variations of the original QFASA diet estimator. In both cases, computer simulations involve bootstrap sampling prey signature data to construct pseudo-predator signatures with known properties. However, bootstrap sample sizes have been selected arbitrarily and pseudo-predator signatures therefore may not have realistic properties. I develop an algorithm to objectively establish bootstrap sample sizes that generates pseudo-predator signatures with realistic properties, thereby enhancing the utility of computer simulation for assessing QFASA estimator performance. The algorithm also appears to be computationally efficient, resulting in bootstrap sample sizes that are smaller than those commonly used. I illustrate the algorithm with an example using data from Chukchi Sea polar bears (Ursus maritimus) and their marine mammal prey. The concepts underlying the approach may have value in other areas of quantitative ecology in which bootstrap samples are post-processed prior to their use.
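
    As a hedged illustration of the pseudo-predator construction described above (not the author's algorithm for calibrating bootstrap sample sizes), the Python sketch below bootstrap-samples prey signatures and mixes them according to a known diet; the data structures prey_sigs, diet and n_boot are hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)

        def pseudo_predator(prey_sigs, diet, n_boot):
            """Construct one pseudo-predator fatty acid signature.

            prey_sigs : dict of prey type -> (n_samples, n_fatty_acids) array,
                        each row a prey signature summing to 1
            diet      : dict of prey type -> diet proportion (sums to 1)
            n_boot    : dict of prey type -> bootstrap sample size
            """
            n_fa = next(iter(prey_sigs.values())).shape[1]
            mix = np.zeros(n_fa)
            for prey, prop in diet.items():
                sigs = prey_sigs[prey]
                idx = rng.integers(0, len(sigs), size=n_boot[prey])  # bootstrap rows
                mix += prop * sigs[idx].mean(axis=0)                 # mean bootstrap signature
            return mix / mix.sum()                                   # renormalize to a signature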

  4. Combination of minimum enclosing balls classifier with SVM in coal-rock recognition.

    PubMed

    Song, QingJun; Jiang, HaiYan; Song, Qinghui; Zhao, XieGuang; Wu, Xiaoxuan

    2017-01-01

    Top-coal caving technology is a productive and efficient method in modern mechanized coal mining, and the study of coal-rock recognition is key to realizing automation in comprehensive mechanized coal mining. In this paper we propose a new discriminant analysis framework for coal-rock recognition. In the framework, a data acquisition model based on vibration and acoustic signals is designed, and a caving dataset with 10 feature variables and three classes is obtained. The optimal combination of feature variables is then selected automatically using multi-class F-score (MF-Score) feature selection. To handle the nonlinear mapping encountered in this real-world problem, an effective minimum enclosing ball (MEB) algorithm combined with a support vector machine (SVM) is proposed for rapid detection of coal-rock in the caving process. In particular, we illustrate how to construct the MEB-SVM classifier for coal-rock recognition data with inherently complex distributions. The proposed method is evaluated on UCI data sets and on the caving dataset, and compared with several recent, high-performing SVM classifiers. We report accuracy and use the Friedman test to compare multiple classifiers over multiple UCI data sets. Experimental results demonstrate that the proposed algorithm has good robustness and generalization ability, and its performance on the caving dataset indicates a promising approach to feature selection and multi-class recognition for coal-rock recognition.
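
    A minimal Python sketch of the MF-Score ranking followed by a standard SVM is given below; it assumes scikit-learn, omits the minimum-enclosing-ball step entirely, and the names X_train and y_train are placeholders, so it should not be read as the authors' MEB-SVM implementation.

        import numpy as np
        from sklearn.svm import SVC

        def mf_score(X, y):
            """Multi-class F-score of each feature (larger = more discriminative)."""
            classes = np.unique(y)
            overall_mean = X.mean(axis=0)
            between = np.zeros(X.shape[1])
            within = np.zeros(X.shape[1])
            for c in classes:
                Xc = X[y == c]
                between += (Xc.mean(axis=0) - overall_mean) ** 2   # class-mean spread
                within += Xc.var(axis=0, ddof=1)                   # within-class variance
            return between / (within + 1e-12)

        # Hypothetical usage: keep the top-k features, then train an SVM on them.
        # scores = mf_score(X_train, y_train)
        # top = np.argsort(scores)[::-1][:5]
        # clf = SVC(kernel="rbf").fit(X_train[:, top], y_train)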

  5. Combination of minimum enclosing balls classifier with SVM in coal-rock recognition

    PubMed Central

    Song, QingJun; Jiang, HaiYan; Song, Qinghui; Zhao, XieGuang; Wu, Xiaoxuan

    2017-01-01

    Top-coal caving technology is a productive and efficient method in modern mechanized coal mining, and the study of coal-rock recognition is key to realizing automation in comprehensive mechanized coal mining. In this paper we propose a new discriminant analysis framework for coal-rock recognition. In the framework, a data acquisition model based on vibration and acoustic signals is designed, and a caving dataset with 10 feature variables and three classes is obtained. The optimal combination of feature variables is then selected automatically using multi-class F-score (MF-Score) feature selection. To handle the nonlinear mapping encountered in this real-world problem, an effective minimum enclosing ball (MEB) algorithm combined with a support vector machine (SVM) is proposed for rapid detection of coal-rock in the caving process. In particular, we illustrate how to construct the MEB-SVM classifier for coal-rock recognition data with inherently complex distributions. The proposed method is evaluated on UCI data sets and on the caving dataset, and compared with several recent, high-performing SVM classifiers. We report accuracy and use the Friedman test to compare multiple classifiers over multiple UCI data sets. Experimental results demonstrate that the proposed algorithm has good robustness and generalization ability, and its performance on the caving dataset indicates a promising approach to feature selection and multi-class recognition for coal-rock recognition. PMID:28937987

  6. Five Guidelines for Selecting Hydrological Signatures

    NASA Astrophysics Data System (ADS)

    McMillan, H. K.; Westerberg, I.; Branger, F.

    2017-12-01

    Hydrological signatures are index values derived from observed or modeled series of hydrological data such as rainfall, flow or soil moisture. They are designed to extract relevant information about hydrological behavior, such as to identify dominant processes, and to determine the strength, speed and spatiotemporal variability of the rainfall-runoff response. Hydrological signatures play an important role in model evaluation. They allow us to test whether particular model structures or parameter sets accurately reproduce the runoff generation processes within the watershed of interest. Most modeling studies use a selection of different signatures to capture different aspects of the catchment response, for example evaluating overall flow distribution as well as high and low flow extremes and flow timing. Such studies often choose their own set of signatures, or may borrow subsets of signatures used in multiple other works. The link between signature values and hydrological processes is not always straightforward, leading to uncertainty and variability in hydrologists' signature choices. In this presentation, we aim to encourage a more rigorous approach to hydrological signature selection, which considers the ability of signatures to represent hydrological behavior and underlying processes for the catchment and application in question. To this end, we propose a set of guidelines for selecting hydrological signatures. We describe five criteria that any hydrological signature should conform to: Identifiability, Robustness, Consistency, Representativeness, and Discriminatory Power. We describe an example of the design process for a signature, assessing possible signature designs against the guidelines above. Due to their ubiquity, we chose a signature related to the Flow Duration Curve, selecting the FDC mid-section slope as a proposed signature to quantify catchment overall behavior and flashiness. We demonstrate how assessment against each guideline could be used to
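
    As an illustration of the kind of signature discussed above, a minimal Python sketch of an FDC mid-section slope follows; the exceedance-probability bounds (0.33-0.66) and the Weibull plotting position are common choices, not necessarily the definition assessed by the authors.

        import numpy as np

        def fdc_midslope(flows, lower=0.33, upper=0.66):
            """Mid-section slope of the flow duration curve (a flashiness signature)."""
            flows = np.sort(np.asarray(flows, dtype=float))[::-1]       # descending flows
            exceed = np.arange(1, len(flows) + 1) / (len(flows) + 1.0)  # exceedance probabilities
            q33 = np.interp(lower, exceed, flows)                       # flow exceeded 33% of the time
            q66 = np.interp(upper, exceed, flows)                       # flow exceeded 66% of the time
            return (np.log(q33) - np.log(q66)) / (upper - lower)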

  7. On the Time Course of Vocal Emotion Recognition

    PubMed Central

    Pell, Marc D.; Kotz, Sonja A.

    2011-01-01

    How quickly do listeners recognize emotions from a speaker's voice, and does the time course for recognition vary by emotion type? To address these questions, we adapted the auditory gating paradigm to estimate how much vocal information is needed for listeners to categorize five basic emotions (anger, disgust, fear, sadness, happiness) and neutral utterances produced by male and female speakers of English. Semantically-anomalous pseudo-utterances (e.g., The rivix jolled the silling) conveying each emotion were divided into seven gate intervals according to the number of syllables that listeners heard from sentence onset. Participants (n = 48) judged the emotional meaning of stimuli presented at each gate duration interval, in a successive, blocked presentation format. Analyses looked at how recognition of each emotion evolves as an utterance unfolds and estimated the “identification point” for each emotion. Results showed that anger, sadness, fear, and neutral expressions are recognized more accurately at short gate intervals than happiness, and particularly disgust; however, as speech unfolds, recognition of happiness improves significantly towards the end of the utterance (and fear is recognized more accurately than other emotions). When the gate associated with the emotion identification point of each stimulus was calculated, data indicated that fear (M = 517 ms), sadness (M = 576 ms), and neutral (M = 510 ms) expressions were identified from shorter acoustic events than the other emotions. These data reveal differences in the underlying time course for conscious recognition of basic emotions from vocal expressions, which should be accounted for in studies of emotional speech processing. PMID:22087275
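
    One simple reading of the "identification point" described above is the earliest gate after which a listener's response stays on the target emotion; the short Python sketch below encodes that reading as an assumption, not the authors' exact scoring rule.

        def identification_gate(responses, target):
            """Earliest gate after which the response remains the target emotion
            through the final gate; returns None if never stably identified.

            responses : list of emotion labels given at successive gates
            target    : intended emotion of the stimulus
            """
            point = None
            for gate, label in enumerate(responses, start=1):
                if label == target:
                    if point is None:
                        point = gate
                else:
                    point = None     # response moved away from the target; reset
            return point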

  8. Acoustic Seal

    NASA Technical Reports Server (NTRS)

    Steinetz, Bruce M. (Inventor)

    2006-01-01

    The invention relates to a sealing device having an acoustic resonator. The acoustic resonator is adapted to create acoustic waveforms to generate a sealing pressure barrier blocking fluid flow from a high pressure area to a lower pressure area. The sealing device permits noncontacting sealing operation. The sealing device may include a resonant-macrosonic-synthesis (RMS) resonator.

  9. Acoustic seal

    NASA Technical Reports Server (NTRS)

    Steinetz, Bruce M. (Inventor)

    2006-01-01

    The invention relates to a sealing device having an acoustic resonator. The acoustic resonator is adapted to create acoustic waveforms to generate a sealing pressure barrier blocking fluid flow from a high pressure area to a lower pressure area. The sealing device permits noncontacting sealing operation. The sealing device may include a resonant-macrosonic-synthesis (RMS) resonator.

  10. Novel Methods for Sensing Acoustical Emissions From the Knee for Wearable Joint Health Assessment.

    PubMed

    Teague, Caitlin N; Hersek, Sinan; Toreyin, Hakan; Millard-Stafford, Mindy L; Jones, Michael L; Kogler, Geza F; Sawka, Michael N; Inan, Omer T

    2016-08-01

    We present the framework for wearable joint rehabilitation assessment following musculoskeletal injury. We propose a multimodal sensing (i.e., contact based and airborne measurement of joint acoustic emission) system for at-home monitoring. We used three types of microphones-electret, MEMS, and piezoelectric film microphones-to obtain joint sounds in healthy collegiate athletes during unloaded flexion/extension, and we evaluated the robustness of each microphone's measurements via: 1) signal quality and 2) within-day consistency. First, air microphones acquired higher quality signals than contact microphones (signal-to-noise-and-interference ratio of 11.7 and 12.4 dB for electret and MEMS, respectively, versus 8.4 dB for piezoelectric). Furthermore, air microphones measured similar acoustic signatures on the skin and 5 cm off the skin (∼4.5× smaller amplitude). Second, the main acoustic event during repetitive motions occurred at consistent joint angles (intra-class correlation coefficient ICC(1, 1) = 0.94 and ICC(1, k) = 0.99). Additionally, we found that this angular location was similar between right and left legs, with asymmetry observed in only a few individuals. We recommend using air microphones for wearable joint sound sensing; for practical implementation of contact microphones in a wearable device, interface noise must be reduced. Importantly, we show that airborne signals can be measured consistently and that healthy left and right knees often produce a similar pattern in acoustic emissions. These proposed methods have the potential for enabling knee joint acoustics measurement outside the clinic/lab and permitting long-term monitoring of knee health for patients rehabilitating an acute knee joint injury.

  11. The coupling technique: A two-wave acoustic method for the study of dislocation dynamics

    NASA Astrophysics Data System (ADS)

    Gremaud, G.; Bujard, M.; Benoit, W.

    1987-03-01

    Progress in the study of dislocation dynamics has been achieved using a two-wave acoustic method, which has been called the coupling technique. In this method, the attenuation α and the velocity v of ultrasonic waves are measured in a sample submitted simultaneously to a harmonic stress σ of low frequency. Closed curves Δα(σ) and Δv/v(σ) are drawn during each cycle of the applied stress. The shapes of these curves and their evolution are characteristic of each dislocation motion mechanism which is activated by the low-frequency applied stress. For this reason, the closed curves Δα(σ) and Δv/v(σ) can be considered as signatures of the interaction mechanism which controls the low-frequency dislocation motion. In this paper, the concept of signature is presented and explained with some experimental examples. It will also be shown that theoretical models can be developed which explain very well the experimental results.

  12. Rate and onset cues can improve cochlear implant synthetic vowel recognition in noise

    PubMed Central

    Mc Laughlin, Myles; Reilly, Richard B.; Zeng, Fan-Gang

    2013-01-01

    Understanding speech-in-noise is difficult for most cochlear implant (CI) users. Speech-in-noise segregation cues are well understood for acoustic hearing but not for electric hearing. This study investigated the effects of stimulation rate and onset delay on synthetic vowel-in-noise recognition in CI subjects. In experiment I, synthetic vowels were presented at 50, 145, or 795 pulse/s and noise at the same three rates, yielding nine combinations. Recognition improved significantly if the noise had a lower rate than the vowel, suggesting that listeners can use temporal gaps in the noise to detect a synthetic vowel. This hypothesis is supported by accurate prediction of synthetic vowel recognition using a temporal integration window model. Using lower rates a similar trend was observed in normal hearing subjects. Experiment II found that for CI subjects, a vowel onset delay improved performance if the noise had a lower or higher rate than the synthetic vowel. These results show that differing rates or onset times can improve synthetic vowel-in-noise recognition, indicating a need to develop speech processing strategies that encode or emphasize these cues. PMID:23464025

  13. Speaker verification system using acoustic data and non-acoustic data

    DOEpatents

    Gable, Todd J [Walnut Creek, CA; Ng, Lawrence C [Danville, CA; Holzrichter, John F [Berkeley, CA; Burnett, Greg C [Livermore, CA

    2006-03-21

    A method and system for speech characterization. One embodiment includes a method for speaker verification which includes collecting data from a speaker, wherein the data comprises acoustic data and non-acoustic data. The data is used to generate a template that includes a first set of "template" parameters. The method further includes receiving a real-time identity claim from a claimant, and using acoustic data and non-acoustic data from the identity claim to generate a second set of parameters. The method further includes comparing the first set of parameters to the second set of parameters to determine whether the claimant is the speaker. The first set of parameters and the second set of parameters include at least one purely non-acoustic parameter, including a non-acoustic glottal shape parameter derived from averaging multiple glottal cycle waveforms.
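
    A generic version of the template-comparison step might look like the Python sketch below; the per-dimension scaling and the threshold are illustrative assumptions and do not reflect the patented scoring method.

        import numpy as np

        def verify(template, claim, threshold=1.0):
            """Accept the identity claim if the claimant's parameters lie close to
            the enrolled template (a plain distance test, not the patent's rule).

            template, claim : 1-D arrays mixing acoustic and non-acoustic
                              parameters, e.g. spectral features plus a
                              glottal-shape parameter.
            """
            template = np.asarray(template, dtype=float)
            claim = np.asarray(claim, dtype=float)
            # Normalize each dimension so acoustic and non-acoustic parameters
            # contribute on comparable scales.
            scale = np.maximum(np.abs(template), 1e-9)
            distance = np.linalg.norm((template - claim) / scale)
            return distance <= threshold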

  14. High-acoustic-impedance tantalum oxide layers for insulating acoustic reflectors.

    PubMed

    Capilla, Jose; Olivares, Jimena; Clement, Marta; Sangrador, Jesús; Iborra, Enrique; Devos, Arnaud

    2012-03-01

    This work describes the assessment of the acoustic properties of sputtered tantalum oxide films intended for use as high-impedance films of acoustic reflectors for solidly mounted resonators operating in the gigahertz frequency range. The films are grown by sputtering a metallic tantalum target under different oxygen and argon gas mixtures, total pressures, pulsed dc powers, and substrate biases. The structural properties of the films are assessed through infrared absorption spectroscopy and X-ray diffraction measurements. Their acoustic impedance is assessed by deriving the mass density from X-ray reflectometry measurements and the acoustic velocity from picosecond acoustic spectroscopy and the analysis of the frequency response of the test resonators.

  15. Auditory orientation in crickets: Pattern recognition controls reactive steering

    NASA Astrophysics Data System (ADS)

    Poulet, James F. A.; Hedwig, Berthold

    2005-10-01

    Many groups of insects are specialists in exploiting sensory cues to locate food resources or conspecifics. To achieve orientation, bees and ants analyze the polarization pattern of the sky, male moths orient along the females' odor plume, and cicadas, grasshoppers, and crickets use acoustic signals to locate singing conspecifics. In comparison with olfactory and visual orientation, where learning is involved, auditory processing underlying orientation in insects appears to be more hardwired and genetically determined. In each of these examples, however, orientation requires a recognition process identifying the crucial sensory pattern to interact with a localization process directing the animal's locomotor activity. Here, we characterize this interaction. Using a sensitive trackball system, we show that, during cricket auditory behavior, the recognition process that is tuned toward the species-specific song pattern controls the amplitude of auditory evoked steering responses. Females perform small reactive steering movements toward any sound patterns. Hearing the male's calling song increases the gain of auditory steering within 2-5 s, and the animals even steer toward nonattractive sound patterns inserted into the species-specific pattern. This gain control mechanism in the auditory-to-motor pathway allows crickets to pursue species-specific sound patterns temporarily corrupted by environmental factors and may reflect the organization of recognition and localization networks in insects.

  16. Detection and Classification of Whale Acoustic Signals

    NASA Astrophysics Data System (ADS)

    Xian, Yin

    vocalization data set. The word error rate of the DCTNet feature is similar to the MFSC in speech recognition tasks, suggesting that the convolutional network is able to reveal acoustic content of speech signals.

  17. Methodical principles of recognition different source types in an acoustic-emission testing of metal objects

    NASA Astrophysics Data System (ADS)

    Bobrov, A. L.

    2017-08-01

    This paper addresses the identification of different types of AE sources in order to increase the information value of the acoustic emission (AE) method. This task is especially relevant for complex objects, where factors affecting the acoustic path significantly influence the parameters of the signals recorded by the sensor. Correlation criteria sensitive to the type of AE source in metal objects are determined in the article.

  18. Recognition intent and visual word recognition.

    PubMed

    Wang, Man-Ying; Ching, Chi-Le

    2009-03-01

    This study adopted a change detection task to investigate whether and how recognition intent affects the construction of orthographic representation in visual word recognition. Chinese readers (Experiment 1-1) and nonreaders (Experiment 1-2) detected color changes in radical components of Chinese characters. Explicit recognition demand was imposed in Experiment 2 by an additional recognition task. When the recognition was implicit, a bias favoring the radical location informative of character identity was found in Chinese readers (Experiment 1-1), but not nonreaders (Experiment 1-2). With explicit recognition demands, the effect of radical location interacted with radical function and word frequency (Experiment 2). An estimate of identification performance under implicit recognition was derived in Experiment 3. These findings reflect the joint influence of recognition intent and orthographic regularity in shaping readers' orthographic representation. The implication for the role of visual attention in word recognition was also discussed.

  19. Impact of Acoustic Standing Waves on Structural Responses: Reverberant Acoustic Testing (RAT) vs. Direct Field Acoustic Testing (DFAT)

    NASA Technical Reports Server (NTRS)

    Kolaini, Ali R.; Doty, Benjamin; Chang, Zensheu

    2012-01-01

    Loudspeakers have been used for acoustic qualification of spacecraft, reflectors, solar panels, and other acoustically responsive structures for more than a decade. Limited measurements from some of the recent speaker tests used to qualify flight hardware have indicated significant spatial variation of the acoustic field within the test volume. Also structural responses have been reported to differ when similar tests were performed using reverberant chambers. To address the impact of non-uniform acoustic field on structural responses, a series of acoustic tests were performed using a flat panel and a 3-ft cylinder exposed to the field controlled by speakers and repeated in a reverberant chamber. The speaker testing was performed using multi-input-single-output (MISO) and multi-input-multi-output (MIMO) control schemes with and without the test articles. In this paper the spatial variation of the acoustic field due to acoustic standing waves and their impacts on the structural responses in RAT and DFAT (both using MISO and MIMO controls for DFAT) are discussed in some detail.

  20. An acoustic switch.

    PubMed

    Vanhille, Christian; Campos-Pozuelo, Cleofé

    2014-01-01

    The benefits derived from the development of acoustic transistors which act as switches or amplifiers have been reported in the literature. Here we propose a model of an acoustic switch. We theoretically demonstrate that the device works: the input signal is fully restored at the output when the switch is on, whereas the output signal vanishes when the switch is off. The state of the switch, on or off, depends on a secondary acoustic field capable of manipulating the main acoustic field. The model relies on the attenuation effect of many oscillating bubbles on the main travelling wave in the liquid, as well as on the capacity of the secondary acoustic wave to move the bubbles. This model evidences the concept of an acoustic switch (transistor) with 100% efficiency. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Acoustic tweezers via sub-time-of-flight regime surface acoustic waves.

    PubMed

    Collins, David J; Devendran, Citsabehsan; Ma, Zhichao; Ng, Jia Wei; Neild, Adrian; Ai, Ye

    2016-07-01

    Micrometer-scale acoustic waves are highly useful for refined optomechanical and acoustofluidic manipulation, where these fields are spatially localized along the transducer aperture but not along the acoustic propagation direction. In the case of acoustic tweezers, such a conventional acoustic standing wave results in particle and cell patterning across the entire width of a microfluidic channel, preventing selective trapping. We demonstrate the use of nanosecond-scale pulsed surface acoustic waves (SAWs) with a pulse period that is less than the time of flight between opposing transducers to generate localized time-averaged patterning regions while using conventional electrode structures. These nodal positions can be readily and arbitrarily positioned in two dimensions and within the patterning region itself through the imposition of pulse delays, frequency modulation, and phase shifts. This straightforward concept adds new spatial dimensions to which acoustic fields can be localized in SAW applications in a manner analogous to optical tweezers, including spatially selective acoustic tweezers and optical waveguides.

  2. Acoustic levitation of an object larger than the acoustic wavelength.

    PubMed

    Andrade, Marco A B; Okina, Fábio T A; Bernassau, Anne L; Adamowski, Julio C

    2017-06-01

    Levitation and manipulation of objects by sound waves have a wide range of applications in chemistry, biology, material sciences, and engineering. However, current acoustic levitation techniques are mainly restricted to particles that are much smaller than the acoustic wavelength. In this work, it is shown that acoustic standing waves can be employed to stably levitate an object much larger than the acoustic wavelength in air. The levitation of a large, slightly curved object weighing 2.3 g is demonstrated by using a device formed by two 25 kHz ultrasonic Langevin transducers connected to an aluminum plate. The sound wave emitted by the device provides a vertical acoustic radiation force to counteract gravity and a lateral restoring force that ensures horizontal stability of the levitated object. In order to understand the levitation stability, a numerical model based on the finite element method is used to determine the acoustic radiation force that acts on the object.

  3. A Glider-Assisted Link Disruption Restoration Mechanism in Underwater Acoustic Sensor Networks

    PubMed Central

    Wang, Ning; Su, Yishan; Yang, Qiuling

    2018-01-01

    Underwater acoustic sensor networks (UASNs) have become a hot research topic. In UASNs, nodes can be affected by ocean currents and external forces, which could result in sudden link disruption. Therefore, designing a flexible and efficient link disruption restoration mechanism to ensure the network connectivity is a challenge. In the paper, we propose a glider-assisted restoration mechanism which includes link disruption recognition and related link restoring mechanism. In the link disruption recognition mechanism, the cluster heads collect the link disruption information and then schedule gliders acting as relay nodes to restore the disrupted link. Considering the glider’s sawtooth motion, we design a relay location optimization algorithm with a consideration of both the glider’s trajectory and acoustic channel attenuation model. The utility function is established by minimizing the channel attenuation and the optimal location of glider is solved by a multiplier method. The glider-assisted restoration mechanism can greatly improve the packet delivery rate and reduce the communication energy consumption and it is more general for the restoration of different link disruption scenarios. The simulation results show that glider-assisted restoration mechanism can improve the delivery rate of data packets by 15–33% compared with cooperative opportunistic routing (OVAR), the hop-by-hop vector-based forwarding (HH-VBF) and the vector based forward (VBF) methods, and reduce communication energy consumption by 20–58% for a typical network’s setting. PMID:29414898

  4. Microscopic prediction of speech recognition for listeners with normal hearing in noise using an auditory model.

    PubMed

    Jürgens, Tim; Brand, Thomas

    2009-11-01

    This study compares the phoneme recognition performance in speech-shaped noise of a microscopic model for speech recognition with the performance of normal-hearing listeners. "Microscopic" is defined in terms of this model twofold. First, the speech recognition rate is predicted on a phoneme-by-phoneme basis. Second, microscopic modeling means that the signal waveforms to be recognized are processed by mimicking elementary parts of human's auditory processing. The model is based on an approach by Holube and Kollmeier [J. Acoust. Soc. Am. 100, 1703-1716 (1996)] and consists of a psychoacoustically and physiologically motivated preprocessing and a simple dynamic-time-warp speech recognizer. The model is evaluated while presenting nonsense speech in a closed-set paradigm. Averaged phoneme recognition rates, specific phoneme recognition rates, and phoneme confusions are analyzed. The influence of different perceptual distance measures and of the model's a-priori knowledge is investigated. The results show that human performance can be predicted by this model using an optimal detector, i.e., identical speech waveforms for both training of the recognizer and testing. The best model performance is yielded by distance measures which focus mainly on small perceptual distances and neglect outliers.
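
    The dynamic-time-warp recognizer at the core of the model can be sketched in Python as below; the auditory-model preprocessing is omitted and the feature sequences are assumed given, so this is only a minimal stand-in for the published model.

        import numpy as np

        def dtw_distance(a, b):
            """Dynamic-time-warping distance between two feature sequences
            (frames x dims) with a Euclidean local cost."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        def recognize(test_seq, templates):
            """Return the label of the closest template (the 'training' items)."""
            return min(templates, key=lambda lab: dtw_distance(test_seq, templates[lab]))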

  5. An Efficient Audio Coding Scheme for Quantitative and Qualitative Large Scale Acoustic Monitoring Using the Sensor Grid Approach

    PubMed Central

    Gontier, Félix; Lagrange, Mathieu; Can, Arnaud; Lavandier, Catherine

    2017-01-01

    The spreading of urban areas and the growth of human population worldwide raise societal and environmental concerns. To better address these concerns, the monitoring of the acoustic environment in urban as well as rural or wilderness areas is an important matter. Building on the recent development of low cost hardware acoustic sensors, we propose in this paper to consider a sensor grid approach to tackle this issue. In this kind of approach, the crucial question is the nature of the data that are transmitted from the sensors to the processing and archival servers. To this end, we propose an efficient audio coding scheme based on third octave band spectral representation that allows: (1) the estimation of standard acoustic indicators; and (2) the recognition of acoustic events at state-of-the-art performance rate. The former is useful to provide quantitative information about the acoustic environment, while the latter is useful to gather qualitative information and build perceptually motivated indicators using for example the emergence of a given sound source. The coding scheme is also demonstrated to transmit spectrally encoded data that, reverted to the time domain using state-of-the-art techniques, are not intelligible, thus protecting the privacy of citizens. PMID:29186021
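
    A minimal Python sketch of the third-octave band representation underlying such a coding scheme is shown below; the FFT-bin summation, band limits, and uncalibrated dB reference are simplifying assumptions rather than the authors' encoder.

        import numpy as np

        def third_octave_levels(frame, fs, f_min=20.0, f_max=12500.0):
            """Per-frame third-octave band levels in (uncalibrated) dB."""
            spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
            freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
            # Band centers at 1000 * 2^(n/3) Hz; band edges a sixth octave away.
            n = np.arange(np.ceil(3 * np.log2(f_min / 1000.0)),
                          np.floor(3 * np.log2(f_max / 1000.0)) + 1)
            centers = 1000.0 * 2.0 ** (n / 3.0)
            levels = []
            for fc in centers:
                lo, hi = fc * 2.0 ** (-1.0 / 6.0), fc * 2.0 ** (1.0 / 6.0)
                band_power = spectrum[(freqs >= lo) & (freqs < hi)].sum()
                levels.append(10.0 * np.log10(band_power + 1e-12))
            return centers, np.array(levels)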

  6. ACOUSTICS IN ARCHITECTURAL DESIGN, AN ANNOTATED BIBLIOGRAPHY ON ARCHITECTURAL ACOUSTICS.

    ERIC Educational Resources Information Center

    DOELLE, LESLIE L.

    THE PURPOSE OF THIS ANNOTATED BIBLIOGRAPHY ON ARCHITECTURAL ACOUSTICS WAS--(1) TO COMPILE A CLASSIFIED BIBLIOGRAPHY, INCLUDING MOST OF THOSE PUBLICATIONS ON ARCHITECTURAL ACOUSTICS, PUBLISHED IN ENGLISH, FRENCH, AND GERMAN WHICH CAN SUPPLY A USEFUL AND UP-TO-DATE SOURCE OF INFORMATION FOR THOSE ENCOUNTERING ANY ARCHITECTURAL-ACOUSTIC DESIGN…

  7. Uncertainty in hydrological signatures

    NASA Astrophysics Data System (ADS)

    McMillan, Hilary; Westerberg, Ida

    2015-04-01

    Information that summarises the hydrological behaviour or flow regime of a catchment is essential for comparing responses of different catchments to understand catchment organisation and similarity, and for many other modelling and water-management applications. Such information types derived as an index value from observed data are known as hydrological signatures, and can include descriptors of high flows (e.g. mean annual flood), low flows (e.g. mean annual low flow, recession shape), the flow variability, flow duration curve, and runoff ratio. Because the hydrological signatures are calculated from observed data such as rainfall and flow records, they are affected by uncertainty in those data. Subjective choices in the method used to calculate the signatures create a further source of uncertainty. Uncertainties in the signatures may affect our ability to compare different locations, to detect changes, or to compare future water resource management scenarios. The aim of this study was to contribute to the hydrological community's awareness and knowledge of data uncertainty in hydrological signatures, including typical sources, magnitude and methods for its assessment. We proposed a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrated it for a variety of commonly used signatures. The study was made for two data rich catchments, the 50 km2 Mahurangi catchment in New Zealand and the 135 km2 Brue catchment in the UK. For rainfall data the uncertainty sources included point measurement uncertainty, the number of gauges used in calculation of the catchment spatial average, and uncertainties relating to lack of quality control. For flow data the uncertainty sources included uncertainties in stage/discharge measurement and in the approximation of the true stage-discharge relation by a rating curve. The resulting uncertainties were compared across the different signatures and catchments, to quantify uncertainty
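
    The Monte Carlo idea can be sketched in Python as below, using a deliberately crude multiplicative flow-error model; the paper's rating-curve and rainfall error models are more detailed, and signature_fn and daily_flow are placeholders.

        import numpy as np

        rng = np.random.default_rng(1)

        def signature_uncertainty(flow, signature_fn, n_draws=1000, rel_sd=0.1):
            """Monte Carlo distribution of a hydrological signature under a
            simple multiplicative flow-data error model."""
            values = []
            for _ in range(n_draws):
                # One realization of the 'true' flow under ~10% lognormal error.
                perturbed = flow * rng.lognormal(mean=0.0, sigma=rel_sd, size=len(flow))
                values.append(signature_fn(perturbed))
            return np.percentile(values, [2.5, 50.0, 97.5])   # uncertainty interval

        # Hypothetical usage with mean flow as the signature:
        # low, median, high = signature_uncertainty(daily_flow, np.mean)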

  8. Digital Signature Management.

    ERIC Educational Resources Information Center

    Hassler, Vesna; Biely, Helmut

    1999-01-01

    Describes the Digital Signature Project that was developed in Austria to establish an infrastructure for applying smart card-based digital signatures in banking and electronic-commerce applications. Discusses the need to conform to international standards, an international certification infrastructure, and security features for a public directory…

  9. AST Launch Vehicle Acoustics

    NASA Technical Reports Server (NTRS)

    Houston, Janice; Counter, D.; Giacomoni, D.

    2015-01-01

    The liftoff phase induces acoustic loading over a broad frequency range for a launch vehicle. These external acoustic environments are then used in the prediction of internal vibration responses of the vehicle and components which result in the qualification levels. Thus, predicting these liftoff acoustic (LOA) environments is critical to the design requirements of any launch vehicle. If there is a significant amount of uncertainty in the predictions or if acoustic mitigation options must be implemented, a subscale acoustic test is a feasible pre-launch test option to verify the LOA environments. The NASA Space Launch System (SLS) program initiated the Scale Model Acoustic Test (SMAT) to verify the predicted SLS LOA environments and to determine the acoustic reduction with an above deck water sound suppression system. The SMAT was conducted at Marshall Space Flight Center and the test article included a 5% scale SLS vehicle model, tower and Mobile Launcher. Acoustic and pressure data were measured by approximately 250 instruments. The SMAT liftoff acoustic results are presented, findings are discussed and a comparison is shown to the Ares I Scale Model Acoustic Test (ASMAT) results.

  10. Perception of pathogenic or beneficial bacteria and their evasion of host immunity: pattern recognition receptors in the frontline

    PubMed Central

    Trdá, Lucie; Boutrot, Freddy; Claverie, Justine; Brulé, Daphnée; Dorey, Stephan; Poinssot, Benoit

    2015-01-01

    Plants are continuously monitoring the presence of microorganisms to establish an adapted response. Plants commonly use pattern recognition receptors (PRRs) to perceive microbe- or pathogen-associated molecular patterns (MAMPs/PAMPs) which are microorganism molecular signatures. Located at the plant plasma membrane, the PRRs are generally receptor-like kinases (RLKs) or receptor-like proteins (RLPs). MAMP detection will lead to the establishment of a plant defense program called MAMP-triggered immunity (MTI). In this review, we overview the RLKs and RLPs that assure early recognition and control of pathogenic or beneficial bacteria. We also highlight the crucial function of PRRs during plant-microbe interactions, with a special emphasis on the receptors of the bacterial flagellin and peptidoglycan. In addition, we discuss the multiple strategies used by bacteria to evade PRR-mediated recognition. PMID:25904927

  11. Acoustic neuroma

    MedlinePlus

    ... Cerebellopontine angle tumor; Angle tumor; Hearing loss - acoustic; Tinnitus - acoustic ... that makes it hard to hear conversations; ringing (tinnitus) in the affected ear. Less common symptoms include: ...

  12. Processing of Acoustic Cues in Lexical-Tone Identification by Pediatric Cochlear-Implant Recipients

    PubMed Central

    Peng, Shu-Chen; Lu, Hui-Ping; Lu, Nelson; Lin, Yung-Song; Deroche, Mickael L. D.

    2017-01-01

    Purpose The objective was to investigate acoustic cue processing in lexical-tone recognition by pediatric cochlear-implant (CI) recipients who are native Mandarin speakers. Method Lexical-tone recognition was assessed in pediatric CI recipients and listeners with normal hearing (NH) in 2 tasks. In Task 1, participants identified naturally uttered words that were contrastive in lexical tones. For Task 2, a disyllabic word (yanjing) was manipulated orthogonally, varying in fundamental-frequency (F0) contours and duration patterns. Participants identified each token with the second syllable jing pronounced with Tone 1 (a high level tone) as eyes or with Tone 4 (a high falling tone) as eyeglasses. Results CI participants' recognition accuracy was significantly lower than NH listeners' in Task 1. In Task 2, CI participants' reliance on F0 contours was significantly less than that of NH listeners; their reliance on duration patterns, however, was significantly higher than that of NH listeners. Both CI and NH listeners' performance in Task 1 was significantly correlated with their reliance on F0 contours in Task 2. Conclusion For pediatric CI recipients, lexical-tone recognition using naturally uttered words is primarily related to their reliance on F0 contours, although duration patterns may be used as an additional cue. PMID:28388709

  13. Acoustic, elastic and poroelastic simulations of CO2 sequestration crosswell monitoring based on spectral-element and adjoint methods

    NASA Astrophysics Data System (ADS)

    Morency, Christina; Luo, Yang; Tromp, Jeroen

    2011-05-01

    The key issues in CO2 sequestration involve accurate monitoring, from the injection stage to the prediction and verification of CO2 movement over time, for environmental considerations. '4-D seismics' is a natural non-intrusive monitoring technique which involves 3-D time-lapse seismic surveys. Successful monitoring of CO2 movement requires a proper description of the physical properties of a porous reservoir. We investigate the importance of poroelasticity by contrasting poroelastic simulations with elastic and acoustic simulations. Discrepancies highlight a poroelastic signature that cannot be captured using an elastic or acoustic theory and that may play a role in accurately imaging and quantifying injected CO2. We focus on time-lapse crosswell imaging and model updating based on Fréchet derivatives, or finite-frequency sensitivity kernels, which define the sensitivity of an observable to the model parameters. We compare results of time-lapse migration imaging using acoustic, elastic (with and without the use of Gassmann's formulae) and poroelastic models. Our approach highlights the influence of using different physical theories for interpreting seismic data, and, more importantly, for extracting the CO2 signature from seismic waveforms. We further investigate the differences between imaging with the direct compressional wave, as is commonly done, versus using both direct compressional (P) and shear (S) waves. We conclude that, unlike direct P-wave traveltimes, a combination of direct P- and S-wave traveltimes constrains most parameters. Adding P- and S-wave amplitude information does not drastically improve parameter sensitivity, but it does improve spatial resolution of the injected CO2 zone. The main advantage of using a poroelastic theory lies in direct sensitivity to fluid properties. Simulations are performed using a spectral-element method, and finite-frequency sensitivity kernels are calculated using an adjoint method.

  14. Target detection and localization in shallow water: an experimental demonstration of the acoustic barrier problem at the laboratory scale.

    PubMed

    Marandet, Christian; Roux, Philippe; Nicolas, Barbara; Mars, Jérôme

    2011-01-01

    This study demonstrates experimentally at the laboratory scale the detection and localization of a wavelength-sized target in a shallow ultrasonic waveguide between two source-receiver arrays at 3 MHz. In the framework of the acoustic barrier problem, at the 1/1000 scale, the waveguide represents a 1.1-km-long, 52-m-deep ocean acoustic channel in the kilohertz frequency range. The two coplanar arrays record in the time-domain the transfer matrix of the waveguide between each pair of source-receiver transducers. Invoking the reciprocity principle, a time-domain double-beamforming algorithm is simultaneously performed on the source and receiver arrays. This array processing projects the multireverberated acoustic echoes into an equivalent set of eigenrays, which are defined by their launch and arrival angles. Comparison is made between the intensity of each eigenray without and with a target for detection in the waveguide. Localization is performed through tomography inversion of the acoustic impedance of the target, using all of the eigenrays extracted from double beamforming. The use of the diffraction-based sensitivity kernel for each eigenray provides both the localization and the signature of the target. Experimental results are shown in the presence of surface waves, and methodological issues are discussed for detection and localization.

  15. Negative Effect of Acoustic Panels on Listening Effort in a Classroom Environment.

    PubMed

    Amlani, Amyn M; Russo, Timothy A

    monosyllabic words. After each list in the primary task was completed, participants were asked to recall the string of five digits verbatim. Word-recognition and digit-recall performance decreased with the presence of acoustic panels and as the distance from the target signal to a given seat location increased. The results were validated using the STI, as indicated by a decrease in the transmission of the target signal in the presence of acoustic panel and as the distance to a given seat location increased. The inclusion of acoustic panels reduced the negative effects of noise and reverberation in a classroom environment, resulting in an acoustic climate that complied with the ANSI-recommended guidelines for classroom design. Results, however, revealed that participants required an increased amount of mental effort when the classroom was modified with acoustic treatment compared to no acoustic treatment. Independent of acoustic treatment, mental effort was greatest at seat locations beyond the critical distance (CD). With the addition of acoustic panels, mental effort was found to increase significantly at seat locations beyond the CD compared to the unmodified room condition. Overall, results indicate that increasing the distance between the teacher and child has a detrimental impact on mental effort and, ultimately, academic performance. American Academy of Audiology

  16. Passive Acoustic Leak Detection for Sodium Cooled Fast Reactors Using Hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Marklund, A. Riber; Kishore, S.; Prakash, V.; Rajan, K. K.; Michel, F.

    2016-06-01

    Acoustic leak detection for steam generators of sodium fast reactors has been an active research topic since the early 1970s, and several methods have been tested over the years. Inspired by its success in the field of automatic speech recognition, we here apply hidden Markov models (HMMs) in combination with Gaussian mixture models (GMMs) to the problem. To achieve this, we propose a new feature calculation scheme based on the temporal evolution of the power spectral density (PSD) of the signal. The proposed method is tested using acoustic signals recorded during steam/water injection experiments performed at the Indira Gandhi Centre for Atomic Research (IGCAR). We perform parametric studies on the HMM+GMM model size and demonstrate that the proposed method (a) performs well without a priori knowledge of injection noise, (b) can incorporate several noise models, and (c) has an output distribution that simplifies false alarm rate control.
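
    A minimal sketch of the feature/model pipeline, assuming SciPy for the PSD evolution and the third-party hmmlearn package for the GMM-HMMs, is given below; the number of states, mixtures, and the per-class training scheme are illustrative assumptions, not the authors' configuration.

        import numpy as np
        from scipy.signal import spectrogram
        from hmmlearn.hmm import GMMHMM   # assumed third-party package for GMM-HMMs

        def psd_features(signal, fs, nperseg=1024):
            """Sequence of log-PSD frames (time x frequency), capturing the
            temporal evolution of the power spectral density."""
            _, _, Sxx = spectrogram(signal, fs=fs, nperseg=nperseg)
            return np.log(Sxx.T + 1e-12)

        def train_model(feature_list, n_states=3, n_mix=4):
            """Fit one GMM-HMM per class (e.g. 'background', 'leak')."""
            X = np.vstack(feature_list)
            lengths = [len(f) for f in feature_list]
            model = GMMHMM(n_components=n_states, n_mix=n_mix, covariance_type="diag")
            model.fit(X, lengths)
            return model

        def classify(features, models):
            """Pick the class whose model gives the highest log-likelihood."""
            return max(models, key=lambda name: models[name].score(features))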

  17. Double negative acoustic metastructure for attenuation of acoustic emissions

    NASA Astrophysics Data System (ADS)

    Kumar, Sanjay; Bhushan, Pulak; Prakash, Om; Bhattacharya, Shantanu

    2018-03-01

    Acoustic metamaterials hold great potential for attenuation of low frequency acoustic emissions. However, a fundamental challenge is achieving high transmission loss over a broad frequency range. In this work, we report a double negative acoustic metastructure for absorption of low frequency acoustic emissions in an aircraft. This is achieved by utilizing a periodic array of hexagonal cells interconnected with a neck and mounted with an elastic membrane on both ends. An average transmission loss of 56 dB under 500 Hz and an overall absorption of over 48% have been realized experimentally. The negative mass density is derived from the dipolar resonances created as a result of the in-phase movement of the membranes. Further, the negative bulk modulus is ascribed to the combined effect of out-of-phase acceleration of the membranes and the Helmholtz resonator. The proposed metastructure enables absorption of low frequency acoustic emissions with improved functionality that is highly desirable for varied applications.

  18. Effects of blocking and presentation on the recognition of word and nonsense syllables in noise

    NASA Astrophysics Data System (ADS)

    Benkí, José R.

    2003-10-01

    Listener expectations may have significant effects on spoken word recognition, modulating word similarity effects from the lexicon. This study investigates the effect of blocking by lexical status on the recognition of word and nonsense syllables in noise. 240 phonemically matched word and nonsense CVC syllables [Boothroyd and Nittrouer, J. Acoust. Soc. Am. 84, 101-108 (1988)] were presented to listeners at different S/N ratios for identification. In the mixed condition, listeners were presented with blocks containing both words and nonwords, while listeners in the blocked condition were presented with the trials in blocks containing either words or nonwords. The targets were presented in isolation with 50 ms of preceding and following noise. Preliminary results indicate no effect of blocking on accuracy for either word or nonsense syllables; results from neighborhood density analyses will be presented. Consistent with previous studies, a j-factor analysis indicates that words are perceived as containing at least 0.5 fewer independent units than nonwords in both conditions. Relative to previous work on syllables presented in a frame sentence [Benkí, J. Acoust. Soc. Am. 113, 1689-1705 (2003)], initial consonants were perceived significantly less accurately, while vowels and final consonants were perceived at comparable rates.
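
    For reference, the j-factor mentioned above is the ratio of log recognition probabilities; the numbers in the usage comment of the Python sketch below are made up for illustration.

        import numpy as np

        def j_factor(p_whole, p_part):
            """Boothroyd-Nittrouer j-factor: the effective number of independent
            perceptual units in a whole item, from whole-item probability
            p_whole and average part (phoneme) probability p_part."""
            return np.log(p_whole) / np.log(p_part)

        # Example: phonemes recognized at 0.8 and whole CVC words at 0.58 give
        # j = ln(0.58) / ln(0.8) ≈ 2.4 < 3, i.e. lexical context helps.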

  19. Landscape cultivation alters δ30Si signature in terrestrial ecosystems.

    NASA Astrophysics Data System (ADS)

    Vandevenne, F. I.; Delvaux, C.; Huyghes, H.; Ronchi, B.; Govers, G.; Barão, A. L.; Clymans, W.; Meire, P.; André, L.; Struyf, E.

    2014-12-01

    Despite increasing recognition of the importance of biological Si cycling in controlling dissolved Si (DSi) in soil and stream water, effects of human cultivation on the Si cycle remain poorly understood. Sensitive tracer techniques to identify and quantify Si in the soil-plant-water system could be highly relevant in addressing these uncertainties. Stable Si isotopes are promising tools to define Si sources and sinks along the ecosystem flow path, as intense fractionation occurs during chemical weathering and uptake of dissolved Si in plants. Yet they remain underexploited in the end product of the soil-plant system: the soil water. Here, stable Si isotope ratios (δ30Si) of dissolved Si in soil water were measured along a land use gradient (continuous forest, continuous pasture, young cropland and continuous cropland) with similar parent material (loess) and homogenous bulk mineralogical and climatological properties (Belgium). Soil water δ30Si signatures are clearly separated along the gradient, with highest average signatures in continuous cropland (+1.61‰), intermediate in pasture (+1.05‰) and young cropland (+0.89 ‰) and lowest in forest soil water (+0.62‰). Our data do not allow distinguishing biological from pedogenic/lithogenic processes, but point to a strong interaction of both. We expect that increasing export of light isotopes in disturbed land uses (i.e. through agricultural harvest), and higher recycling of 28Si and elevated weathering intensity (including clay dissolution) in forest systems will largely determine soil water δ30Si signatures of our systems. Our results imply that soil water δ30Si signature is biased through land management before it reaches rivers and coastal zones, where other fractionation processes take over (e.g. diatom uptake and reverse weathering in floodplains). In particular, a direct role of agriculture systems in lowering export Si fluxes towards rivers and coastal systems has been shown. Stable Si isotopes have

  20. Landscape cultivation alters δ30Si signature in terrestrial ecosystems

    NASA Astrophysics Data System (ADS)

    Vandevenne, Floor; Delvaux, Claire; Hughes, Harold; Ronchi, Benedicta; Clymans, Wim; Barao, Ana Lucia; Govers, Gerard; Cornelis, Jean Thomas; André, Luc; Struyf, Eric

    2015-04-01

    Despite increasing recognition of the importance of biological Si cycling in controlling dissolved Si (DSi) in soil and stream water, effects of human cultivation on the Si cycle remain poorly understood. Sensitive tracer techniques to identify and quantify Si in the soil-plant-water system could be highly relevant in addressing these uncertainties. Stable Si isotopes are promising tools to define Si sources and sinks along the ecosystem flow path, as intense fractionation occurs during chemical weathering and uptake of dissolved Si in plants. Yet they remain underexploited in the end product of the soil-plant system: the soil water. Here, stable Si isotope ratios (δ30Si) of dissolved Si in soil water were measured along a land use gradient (continuous forest, continuous pasture, young cropland and continuous cropland) with similar parent material (loess) and homogenous bulk mineralogical and climatological properties (Belgium). Soil water δ30Si signatures are clearly separated along the gradient, with highest average signatures in continuous cropland (+1.61‰), intermediate in pasture (+1.05‰) and young cropland (+0.89‰) and lowest in forest soil water (+0.62‰). Our data do not allow distinguishing biological from pedogenic/lithogenic processes, but point to a strong interaction of both. We expect that increasing export of light isotopes in disturbed land uses (i.e. through agricultural harvest), and higher recycling of 28Si and elevated weathering intensity (including clay dissolution) in forest systems will largely determine soil water δ30Si signatures of our systems. Our results imply that soil water δ30Si signature is biased through land management before it reaches rivers and coastal zones, where other fractionation processes take over (e.g. diatom uptake and reverse weathering in floodplains). In particular, a direct role of agriculture systems in lowering export Si fluxes towards rivers and coastal systems has been shown. Stable Si isotopes have a large potential

  1. Reducing the dimensions of acoustic devices using anti-acoustic-null media

    NASA Astrophysics Data System (ADS)

    Li, Borui; Sun, Fei; He, Sailing

    2018-02-01

    An anti-acoustic-null medium (anti-ANM), a special homogeneous medium with anisotropic mass density, is designed by transformation acoustics (TA). Anti-ANM can greatly compress acoustic space along the direction of its main axis, where the size compression ratio is extremely large. This special feature can be utilized to reduce the geometric dimensions of classic acoustic devices. For example, the height of a parabolic acoustic reflector can be greatly reduced. We also design a brass-air structure on the basis of the effective medium theory to materialize the anti-ANM in a broadband frequency range. Numerical simulations verify the performance of the proposed anti-ANM.

  2. North Pacific Acoustic Laboratory: Deep Water Acoustic Propagation in the Philippine Sea

    DTIC Science & Technology

    2016-06-21

    the "Special Issue on Deep-water Ocean Acoustics" in the Journal of the Acoustical Society of America (Vol. 134, No . 4, Pt. 2 of 2 , October20 13...also listed. Fourteen (14) of these publications appeared in the " Special Issue on Deep-water Ocean Acoustics" in the Journal of the Acoustical

  3. Identification of temporal and spatial signatures of broadband shock-associated noise

    NASA Astrophysics Data System (ADS)

    Pérez Arroyo, C.; Daviller, G.; Puigt, G.; Airiau, C.; Moreau, S.

    2018-02-01

    Broadband shock-associated noise (BBSAN) is a particular high-frequency noise that is generated in imperfectly expanded jets. BBSAN results from the interaction of turbulent structures and the series of expansion and compression waves which appears downstream of the convergent nozzle exit of moderately under-expanded jets. This paper focuses on the near-field impact of the pressure waves generated by BBSAN in a large eddy simulation of a non-screeching supersonic round jet. The flow is under-expanded and is characterized by a high Reynolds number Re_j = 1.25 × 10^6 and a transonic Mach number M_j = 1.15. It is shown that BBSAN propagates upstream outside the jet and enters the supersonic region, leaving a characteristic pattern in the physical plane. This pattern, also called a signature, travels upstream through the shock-cell system with a group velocity, in the frequency-wavenumber domain, between U_c - a_∞ and the sound speed a_∞ (U_c is the convective jet velocity). To investigate these characteristic patterns, the pressure signals in the jet and the near-field are decomposed into waves traveling downstream (p^+) and waves traveling upstream (p^-). A novel study based on a wavelet technique is finally applied to such signals in order to extract the BBSAN signatures generated by the most energetic events of the supersonic jet.
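
    A generic wavelet-based event localization, loosely in the spirit of the signature extraction described above, could be sketched as follows; it assumes the PyWavelets package and a simple energy threshold, and it does not implement the paper's p^+/p^- decomposition.

        import numpy as np
        import pywt   # assumed third-party package (PyWavelets)

        def energetic_events(pressure, fs, f_lo=2000.0, f_hi=20000.0, n_scales=64):
            """Times of energetic high-frequency events in a pressure signal,
            located with a continuous wavelet transform (Morlet wavelet)."""
            freqs = np.linspace(f_lo, f_hi, n_scales)
            fc = pywt.central_frequency("morl")
            scales = fc * fs / freqs                      # scales matching the target band
            coeffs, _ = pywt.cwt(pressure, scales, "morl")
            energy = np.abs(coeffs).sum(axis=0)           # time-resolved band energy
            threshold = energy.mean() + 3.0 * energy.std()
            return np.where(energy > threshold)[0] / fs   # event times in seconds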

  4. Relationships between objective acoustic indices and acoustic comfort evaluation in nonacoustic spaces

    NASA Astrophysics Data System (ADS)

    Kang, Jian

    2004-05-01

    Much attention has been paid to acoustic spaces such as concert halls and recording studios, whereas research on nonacoustic buildings/spaces has been rather limited, especially from the viewpoint of acoustic comfort. In this research a series of case studies has been carried out on this topic, considering various spaces including shopping mall atrium spaces, library reading rooms, football stadia, swimming spaces, churches, dining spaces, as well as urban open public spaces. The studies focus on the relationships between objective acoustic indices, such as sound pressure level and reverberation time, and perceptions of acoustic comfort. The results show that the acoustic atmosphere is an important consideration in such spaces and that the evaluation of acoustic comfort may vary considerably even if the objective acoustic indices are the same. It is suggested that current guidelines and technical regulations are insufficient in terms of the acoustic design of these spaces, and that the relationships established from the case studies between objective and subjective aspects would be useful for developing further design guidelines. [Work supported partly by the British Academy.]

  5. Signature extension studies

    NASA Technical Reports Server (NTRS)

    Vincent, R. K.; Thomas, G. S.; Nalepka, R. F.

    1974-01-01

    The importance of specific spectral regions to signature extension is explored. In the recent past, the signature extension task was focused on the development of new techniques. Tested techniques are now used to investigate this spectral aspect of the large area survey. Sets of channels were sought which, for a given technique, were the least affected by several sources of variation over four data sets and yet provided good object class separation on each individual data set. Using sets of channels determined as part of this study, signature extension was accomplished between data sets collected over a six-day period and over a range of about 400 kilometers.

  6. Quantifying loss of acoustic communication space for right whales in and around a U.S. National Marine Sanctuary.

    PubMed

    Hatch, Leila T; Clark, Christopher W; Van Parijs, Sofie M; Frankel, Adam S; Ponirakis, Dimitri W

    2012-12-01

    The effects of chronic exposure to increasing levels of human-induced underwater noise on marine animal populations reliant on sound for communication are poorly understood. We sought to further develop methods of quantifying the effects of communication masking associated with human-induced sound on contact-calling North Atlantic right whales (Eubalaena glacialis) in an ecologically relevant area (~10,000 km²) and time period (peak feeding time). We used an array of temporary, bottom-mounted, autonomous acoustic recorders in the Stellwagen Bank National Marine Sanctuary to monitor ambient noise levels, measure levels of sound associated with vessels, and detect and locate calling whales. We related wind speed, as recorded by regional oceanographic buoys, to ambient noise levels. We used vessel-tracking data from the Automatic Identification System to quantify acoustic signatures of large commercial vessels. On the basis of these integrated sound fields, median signal excess (the difference between the signal-to-noise ratio and the assumed recognition differential) for contact-calling right whales was negative (-1 dB) under current ambient noise levels and was further reduced (-2 dB) by the addition of noise from ships. Compared with potential communication space available under historically lower noise conditions, calling right whales may have lost, on average, 63-67% of their communication space. One or more of the 89 calling whales in the study area was exposed to noise levels ≥120 dB re 1 μPa by ships for 20% of the month, and a maximum of 11 whales were exposed to noise at or above this level during a single 10-min period. These results highlight the limitations of exposure-threshold (i.e., dose-response) metrics for assessing chronic anthropogenic noise effects on communication opportunities. Our methods can be used to integrate chronic and wide-ranging noise effects in emerging ocean-planning forums that seek to improve management of cumulative effects
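
    The masking metric above is the signal excess, i.e. the received signal-to-noise ratio minus an assumed recognition differential, and loss of communication space can be expressed as the fractional shrinkage of the area over which signal excess stays non-negative. The sketch below illustrates that bookkeeping with simple spherical spreading and invented source, noise and recognition-differential values; none of the numbers, and none of the simplifications, are from the study.

    ```python
    import numpy as np

    # Illustrative sonar-equation style masking calculation (assumed values).
    # Received level RL = SL - TL with spherical spreading TL = 20*log10(r).
    # Signal excess SE = (RL - NL) - RD, where RD is the assumed recognition
    # differential. "Communication space" is taken here as the disc around a
    # caller within which SE >= 0.
    SL = 175.0          # call source level, dB re 1 uPa at 1 m (assumed)
    RD = 10.0           # recognition differential, dB (assumed)
    NL_historic = 85.0  # historical ambient noise level, dB (assumed)
    NL_current = 90.0   # current ambient + shipping noise level, dB (assumed)

    r = np.linspace(1.0, 50_000.0, 20_000)        # range from caller, m
    RL = SL - 20.0 * np.log10(r)                  # spherical spreading only

    def comm_area(NL):
        SE = RL - NL - RD
        r_max = r[SE >= 0].max() if np.any(SE >= 0) else 0.0
        return np.pi * r_max**2                   # area of the SE >= 0 disc

    loss = 1.0 - comm_area(NL_current) / comm_area(NL_historic)
    print(f"communication space lost: {100 * loss:.0f}%")   # illustrative result only
    ```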

  7. Virtual acoustics displays

    NASA Astrophysics Data System (ADS)

    Wenzel, Elizabeth M.; Fisher, Scott S.; Stone, Philip K.; Foster, Scott H.

    1991-03-01

    The real time acoustic display capabilities are described which were developed for the Virtual Environment Workstation (VIEW) Project at NASA-Ames. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory 'objects' or 'icons', can be designed using ACE (Auditory Cue Editor), which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with 3-D visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.

  8. Virtual acoustics displays

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Fisher, Scott S.; Stone, Philip K.; Foster, Scott H.

    1991-01-01

    The real time acoustic display capabilities are described which were developed for the Virtual Environment Workstation (VIEW) Project at NASA-Ames. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory 'objects' or 'icons', can be designed using ACE (Auditory Cue Editor), which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with 3-D visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.

  9. Simulation of Acoustics for Ares I Scale Model Acoustic Tests

    NASA Technical Reports Server (NTRS)

    Putnam, Gabriel; Strutzenberg, Louise L.

    2011-01-01

    The Ares I Scale Model Acoustics Test (ASMAT) is a series of live-fire tests of scaled rocket motors meant to simulate the conditions of the Ares I launch configuration. These tests have provided a well-documented set of high-fidelity acoustic measurements useful for validation, including data taken over a range of test conditions and containing phenomena like ignition over-pressure and water suppression of acoustics. To take advantage of these data, a digital representation of the ASMAT test setup has been constructed and test firings of the motor have been simulated using the Loci/CHEM computational fluid dynamics software. Results from ASMAT simulations with the rocket in both held-down and elevated configurations, as well as with and without water suppression, have been compared to acoustic data collected from similar live-fire tests. The acoustic comparisons show good correlation with the amplitude and temporal shape of pressure features and reasonable spectral accuracy up to approximately 1000 Hz. Major plume and acoustic features have been well captured, including the plume shock structure, the igniter pulse transient, and the ignition overpressure.

  10. Noise-robust speech recognition through auditory feature detection and spike sequence decoding.

    PubMed

    Schafer, Phillip B; Jin, Dezhe Z

    2014-03-01

    Speech recognition in noisy conditions is a major challenge for computer systems, but the human brain performs it routinely and accurately. Automatic speech recognition (ASR) systems that are inspired by neuroscience can potentially bridge the performance gap between humans and machines. We present a system for noise-robust isolated word recognition that works by decoding sequences of spikes from a population of simulated auditory feature-detecting neurons. Each neuron is trained to respond selectively to a brief spectrotemporal pattern, or feature, drawn from the simulated auditory nerve response to speech. The neural population conveys the time-dependent structure of a sound by its sequence of spikes. We compare two methods for decoding the spike sequences--one using a hidden Markov model-based recognizer, the other using a novel template-based recognition scheme. In the latter case, words are recognized by comparing their spike sequences to template sequences obtained from clean training data, using a similarity measure based on the length of the longest common sub-sequence. Using isolated spoken digits from the AURORA-2 database, we show that our combined system outperforms a state-of-the-art robust speech recognizer at low signal-to-noise ratios. Both the spike-based encoding scheme and the template-based decoding offer gains in noise robustness over traditional speech recognition methods. Our system highlights potential advantages of spike-based acoustic coding and provides a biologically motivated framework for robust ASR development.
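
    The template-matching step described above scores a test word against stored templates by the length of the longest common subsequence (LCS) of their spike sequences. A minimal sketch of that similarity measure, treating a spike sequence simply as an ordered list of neuron labels, is shown below; the sequences and the normalization choice are illustrative, not taken from the paper.

    ```python
    def lcs_length(a, b):
        """Length of the longest common subsequence of sequences a and b
        (classic O(len(a) * len(b)) dynamic programme)."""
        n, m = len(a), len(b)
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                if a[i - 1] == b[j - 1]:
                    dp[i][j] = dp[i - 1][j - 1] + 1
                else:
                    dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
        return dp[n][m]

    def similarity(test, template):
        """Normalised LCS similarity in [0, 1] (normalisation choice is ours)."""
        return lcs_length(test, template) / max(len(test), len(template))

    # Toy spike sequences: each entry is the label of the feature-detecting
    # neuron that fired, in order of firing (made-up data).
    templates = {"one": [3, 7, 2, 9, 4, 1], "two": [5, 8, 3, 6, 2, 7]}
    test = [3, 7, 9, 4, 1]                  # a noisy utterance of "one"
    best = max(templates, key=lambda w: similarity(test, templates[w]))
    print(best, {w: round(similarity(test, templates[w]), 2) for w in templates})
    ```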

  11. Watch what you say, your computer might be listening: A review of automated speech recognition

    NASA Technical Reports Server (NTRS)

    Degennaro, Stephen V.

    1991-01-01

    Spoken language is the most convenient and natural means by which people interact with each other and is, therefore, a promising candidate for human-machine interactions. Speech also offers an additional channel for hands-busy applications, complementing the use of motor output channels for control. Current speech recognition systems vary considerably across a number of important characteristics, including vocabulary size, speaking mode, training requirements for new speakers, robustness to acoustic environments, and accuracy. Algorithmically, these systems range from rule-based techniques through more probabilistic or self-learning approaches such as hidden Markov modeling and neural networks. This tutorial begins with a brief summary of the relevant features of current speech recognition systems and the strengths and weaknesses of the various algorithmic approaches.

  12. Speech Recognition in Nonnative versus Native English-Speaking College Students in a Virtual Classroom.

    PubMed

    Neave-DiToro, Dorothy; Rubinstein, Adrienne; Neuman, Arlene C

    2017-05-01

    Limited attention has been given to the effects of classroom acoustics at the college level. Many studies have reported that nonnative speakers of English are more likely to be affected by poor room acoustics than native speakers. An important question is how classroom acoustics affect speech perception of nonnative college students. The combined effect of noise and reverberation on the speech recognition performance of college students who differ in age of English acquisition was evaluated under conditions simulating classrooms with reverberation times (RTs) close to ANSI-recommended RTs. A mixed design was used in this study. Thirty-six native and nonnative English-speaking college students with normal hearing, ages 18-28 yr, participated. Two groups of nine native participants (native monolingual [NM] and native bilingual) and two groups of nine nonnative participants (nonnative early and nonnative late) were evaluated in noise under three reverberant conditions (0.3, 0.6, and 0.8 sec). A virtual test paradigm was used, which represented a signal reaching a student at the back of a classroom. Speech recognition in noise was measured using the Bamford-Kowal-Bench Speech-in-Noise (BKB-SIN) test, and the signal-to-noise ratio required for correct repetition of 50% of the key words in the stimulus sentences (SNR-50) was obtained for each group in each reverberant condition. A mixed-design analysis of variance was used to determine statistical significance as a function of listener group and RT. SNR-50 was significantly higher for nonnative listeners than for native listeners, and a more favorable SNR was needed as RT increased. The most dramatic effect on SNR-50 was found in the group with later acquisition of English, whereas the impact of early introduction of a second language was subtler. At the ANSI standard's maximum recommended RT (0.6 sec), all groups except the NM group exhibited a mild signal-to-noise ratio (SNR) loss. At the 0.8 sec RT, all groups
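
    The SNR-50 reported above is the signal-to-noise ratio at which a listener repeats 50% of the key words correctly; it is typically obtained by interpolating a measured percent-correct versus SNR function. The sketch below shows that interpolation on made-up data; it is not the BKB-SIN scoring procedure itself.

    ```python
    import numpy as np

    # Made-up psychometric data: percent of key words repeated correctly at
    # each presented SNR (dB). SNR-50 is where the function crosses 50%.
    snr = np.array([-6.0, -3.0, 0.0, 3.0, 6.0, 9.0])
    percent_correct = np.array([5.0, 18.0, 42.0, 68.0, 88.0, 97.0])

    # Linear interpolation of SNR as a function of percent correct.
    snr_50 = np.interp(50.0, percent_correct, snr)
    print(f"SNR-50 = {snr_50:.1f} dB")

    # A higher (less favourable) SNR-50 for one listener group or one
    # reverberation time than another indicates poorer speech recognition in
    # noise; between-group differences are sometimes reported as an SNR loss.
    ```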

  13. Determining suspended sediment particle size information from acoustical and optical backscatter measurements

    NASA Astrophysics Data System (ADS)

    Lynch, James F.; Irish, James D.; Sherwood, Christopher R.; Agrawal, Yogesh C.

    1994-08-01

    sectional area of an equivalent sphere is a very good first approximation whereas for acoustics, which is most sensitive in the region ka ~ 1, the particle volume itself is best sensed. In concluding, we briefly interpret the history of some STRESS transport events in light of the size distribution and other information available. For one of the events "anomalous" suspended particle size distributions are noted, i.e. larger particles are seen suspended before finer ones. Speculative hypotheses for why this signature is observed are presented.
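
    To make the ka ~ 1 remark concrete, the short calculation below works out the particle radius at which ka = 1 for a few typical acoustic backscatter frequencies in seawater; the frequencies are generic examples, not the instrument settings used in the STRESS experiment.

    ```python
    import math

    c = 1500.0                       # nominal sound speed in seawater, m/s
    for f_mhz in (1.0, 2.5, 5.0):    # generic backscatter frequencies, MHz (examples only)
        k = 2 * math.pi * f_mhz * 1e6 / c     # acoustic wavenumber, rad/m
        a = 1.0 / k                           # particle radius where ka = 1
        print(f"{f_mhz:.1f} MHz: ka = 1 at particle radius ~{a * 1e6:.0f} um")
    ```

    At megahertz frequencies the ka = 1 radius falls in the tens to hundreds of micrometres, i.e. the silt-to-sand range relevant to suspended sediment.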

  14. Investigating Patterns for Self-Induced Emotion Recognition from EEG Signals.

    PubMed

    Zhuang, Ning; Zeng, Ying; Yang, Kai; Zhang, Chi; Tong, Li; Yan, Bin

    2018-03-12

    Most current approaches to emotion recognition are based on neural signals elicited by affective materials such as images, sounds and videos. However, the application of neural patterns in the recognition of self-induced emotions remains uninvestigated. In this study we inferred the patterns and neural signatures of self-induced emotions from electroencephalogram (EEG) signals. The EEG signals of 30 participants were recorded while they watched 18 Chinese movie clips which were intended to elicit six discrete emotions, including joy, neutrality, sadness, disgust, anger and fear. After watching each movie clip the participants were asked to self-induce emotions by recalling a specific scene from each movie. We analyzed the important features, electrode distribution and average neural patterns of different self-induced emotions. Results demonstrated that features related to high-frequency rhythm of EEG signals from electrodes distributed in the bilateral temporal, prefrontal and occipital lobes have outstanding performance in the discrimination of emotions. Moreover, the six discrete categories of self-induced emotion exhibit specific neural patterns and brain topography distributions. We achieved an average accuracy of 87.36% in the discrimination of positive from negative self-induced emotions and 54.52% in the classification of emotions into six discrete categories. Our research will help promote the development of comprehensive endogenous emotion recognition methods.
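
    The high-frequency-rhythm features mentioned above are usually realized as per-electrode band-power estimates (e.g., beta and gamma power). A minimal sketch of such a feature extractor on synthetic data is given below; it is not the authors' pipeline, and the sampling rate, channel count and band edges are assumptions.

    ```python
    import numpy as np
    from scipy.signal import welch

    def band_power(x, fs, f_lo, f_hi):
        """Power of signal x in the band [f_lo, f_hi] Hz, from a Welch PSD."""
        freqs, psd = welch(x, fs=fs, nperseg=min(len(x), 2 * fs))
        band = (freqs >= f_lo) & (freqs <= f_hi)
        return np.sum(psd[band]) * (freqs[1] - freqs[0])

    # Synthetic single-trial EEG, shape (n_channels, n_samples), at an assumed 250 Hz.
    fs, n_channels = 250, 8
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((n_channels, 5 * fs))

    # One beta-band and one gamma-band power feature per channel; the band
    # edges are common choices, not necessarily those used in the study.
    features = np.array([[band_power(ch, fs, 13.0, 30.0), band_power(ch, fs, 30.0, 45.0)]
                         for ch in eeg]).ravel()
    print("feature vector length:", features.size)    # n_channels * 2
    ```

    Feature vectors of this kind are then fed to an ordinary classifier trained to separate the emotion categories.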

  15. Investigating Patterns for Self-Induced Emotion Recognition from EEG Signals

    PubMed Central

    Zeng, Ying; Yang, Kai; Tong, Li; Yan, Bin

    2018-01-01

    Most current approaches to emotion recognition are based on neural signals elicited by affective materials such as images, sounds and videos. However, the application of neural patterns in the recognition of self-induced emotions remains uninvestigated. In this study we inferred the patterns and neural signatures of self-induced emotions from electroencephalogram (EEG) signals. The EEG signals of 30 participants were recorded while they watched 18 Chinese movie clips which were intended to elicit six discrete emotions, including joy, neutrality, sadness, disgust, anger and fear. After watching each movie clip the participants were asked to self-induce emotions by recalling a specific scene from each movie. We analyzed the important features, electrode distribution and average neural patterns of different self-induced emotions. Results demonstrated that features related to high-frequency rhythm of EEG signals from electrodes distributed in the bilateral temporal, prefrontal and occipital lobes have outstanding performance in the discrimination of emotions. Moreover, the six discrete categories of self-induced emotion exhibit specific neural patterns and brain topography distributions. We achieved an average accuracy of 87.36% in the discrimination of positive from negative self-induced emotions and 54.52% in the classification of emotions into six discrete categories. Our research will help promote the development of comprehensive endogenous emotion recognition methods. PMID:29534515

  16. Subwavelength diffractive acoustics and wavefront manipulation with a reflective acoustic metasurface

    NASA Astrophysics Data System (ADS)

    Wang, Wenqi; Xie, Yangbo; Popa, Bogdan-Ioan; Cummer, Steven A.

    2016-11-01

    Acoustic metasurfaces provide useful wavefront shaping capabilities, such as beam steering, acoustic focusing, and asymmetric transmission, in a compact structure. Most acoustic metasurfaces described in the literature are transmissive devices whose performance focuses on steering the sound beam of the fundamental diffractive order. In addition, the range of incident angles studied is usually below the critical incidence predicted by the generalized Snell's law of reflection. In this work, we comprehensively analyze the wave interaction with a generic periodic phase-modulating structure in order to predict the behavior of all diffractive orders, especially for cases beyond critical incidence. Under the guidance of the presented analysis, a broadband reflective metasurface is designed based on an expanded library of labyrinthine acoustic metamaterials. Various local and nonlocal wavefront shaping properties are experimentally demonstrated, and enhanced absorption of higher-order diffractive waves is experimentally shown for the first time. The proposed methodology provides an accurate approach for predicting practical diffracted wave behaviors and opens a new perspective for the study of acoustic periodic structures. The designed metasurface extends the functionalities of acoustic metasurfaces and paves the way for the design of thin planar reflective structures for broadband acoustic wave manipulation and extraordinary absorption.
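
    The critical incidence mentioned above follows from the generalized law of reflection: the metasurface adds a constant phase gradient dΦ/dx to the reflected wavefront, shifting its tangential wavenumber, and once k0·sin(θi) + dΦ/dx exceeds k0 the first-order (specular-like) reflection becomes evanescent. A small numerical sketch is given below; the design frequency and phase gradient are assumed values, not those of the paper.

    ```python
    import math

    c0, f = 343.0, 3000.0              # sound speed in air (m/s) and design frequency (Hz), assumed
    k0 = 2 * math.pi * f / c0          # free-space wavenumber, rad/m
    dphi_dx = 0.6 * k0                 # metasurface phase gradient, rad/m (assumed)

    def reflected_angle_deg(theta_i_deg):
        """Generalised law of reflection: k0*sin(theta_r) = k0*sin(theta_i) + dphi/dx.
        Returns None when the first-order reflected wave is evanescent
        (incidence beyond the critical angle)."""
        s = math.sin(math.radians(theta_i_deg)) + dphi_dx / k0
        return math.degrees(math.asin(s)) if abs(s) <= 1.0 else None

    theta_crit = math.degrees(math.asin(1.0 - dphi_dx / k0))   # positive-gradient case
    print(f"critical incidence ~{theta_crit:.1f} deg")
    for theta_i in (0, 15, 30):
        print(theta_i, "->", reflected_angle_deg(theta_i))
    ```

    Beyond the critical angle the incident energy has to go into higher diffraction orders or be absorbed, which is the regime the paper analyzes.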

  17. A Numerical Investigation of Turbine Noise Source Hierarchy and Its Acoustic Transmission Characteristics

    NASA Technical Reports Server (NTRS)

    VanZante, Dale; Envia, Edmane

    2008-01-01

    Understanding the relative importance of the various turbine noise generation mechanisms and the characteristics of the turbine acoustic transmission loss are essential ingredients in developing robust reduced-order models for predicting the turbine noise signature. A computationally based investigation has been undertaken to help guide the development of a turbine noise prediction capability that does not rely on empiricism. The investigation relies on highly detailed numerical simulations of the unsteady flowfield inside a modern high-pressure turbine (HPT). The simulations are developed using TURBO, which is an unsteady Reynolds-averaged Navier-Stokes (URANS) code capable of multi-stage simulations. The purpose of this study is twofold. First, to determine an estimate of the relative importance of the contributions to the coherent part of the acoustic signature of a turbine from the three potential sources of turbine noise generation, namely, blade-row viscous interaction, potential field interaction, and entropic source associated with the interaction of the blade rows with the temperature nonuniformities caused by the incomplete mixing of the hot fluid and the cooling flow. Second, to develop an understanding of the turbine acoustic transmission characteristics and to assess the applicability of existing empirical and analytical transmission loss models to realistic geometries and flow conditions for modern turbine designs. The investigation so far has concentrated on two simulations: (1) a single-stage HPT and (2) a two-stage HPT and the associated inter-turbine duct/strut segment. The simulations are designed to resolve up to the second harmonic of the blade passing frequency tone in accordance with accepted rules for second order solvers like TURBO. The calculations include blade and vane cooling flows and a radial profile of pressure and temperature at the turbine inlet. The calculation can be modified later to include the combustor pattern factor at the

  18. Lesson 6: Signature Validation

    EPA Pesticide Factsheets

    Checklist items 13 through 17 are grouped under the Signature Validation Process, and represent CROMERR requirements that the system must satisfy as part of ensuring that electronic signatures it receives are valid.

  19. 21 CFR 11.50 - Signature manifestations.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    21 CFR 11.50 (Food and Drugs; Electronic Records; Electronic Signatures), Signature manifestations: (a) Signed electronic records shall contain information associated with the signing that clearly indicates: (1) the printed name of the signer; (2) the date and time when the signature was executed; and (3) ...

  20. Spoken Word Recognition in Toddlers Who Use Cochlear Implants

    PubMed Central

    Grieco-Calub, Tina M.; Saffran, Jenny R.; Litovsky, Ruth Y.

    2010-01-01

    Purpose The purpose of this study was to assess the time course of spoken word recognition in 2-year-old children who use cochlear implants (CIs) in quiet and in the presence of speech competitors. Method Children who use CIs and age-matched peers with normal acoustic hearing listened to familiar auditory labels, in quiet or in the presence of speech competitors, while their eye movements to target objects were digitally recorded. Word recognition performance was quantified by measuring each child’s reaction time (i.e., the latency between the spoken auditory label and the first look at the target object) and accuracy (i.e., the amount of time that children looked at target objects within 367 ms to 2,000 ms after the label onset). Results Children with CIs were less accurate and took longer to fixate target objects than did age-matched children without hearing loss. Both groups of children showed reduced performance in the presence of the speech competitors, although many children continued to recognize labels at above-chance levels. Conclusion The results suggest that the unique auditory experience of young CI users slows the time course of spoken word recognition abilities. In addition, real-world listening environments may slow language processing in young language learners, regardless of their hearing status. PMID:19951921